frontpage.

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•4m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•4m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•7m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•10m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•20m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•20m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•25m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•29m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•30m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•33m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•33m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•36m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•47m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•53m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
2•cwwc•57m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

WASM Agents: AI agents running in the browser

https://blog.mozilla.ai/wasm-agents-ai-agents-running-in-your-browser/
169•selvan•7mo ago

Comments

raybb•7mo ago
Can you bypass the CORS issue with a browser extension? I seem to recall CORS doesn't apply to extensions, or at least to the part that isn't injected into the web pages.
ape4•7mo ago
CORS is mentioned on that page: NOTE: If you want to run tools that get information from some other server into your HTML page (e.g. the visit_webpage tool or the Ollama server itself), you need to make sure that CORS is enabled for those servers. For more information, refer to the troubleshooting section in our GitHub repository.
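
In concrete terms, "CORS is enabled" just means the server answers cross-origin requests with a matching Access-Control-Allow-Origin header. Here's a minimal check, as a sketch in Python, assuming a local Ollama server on its default port (for Ollama the allowed origins are typically set via the OLLAMA_ORIGINS environment variable; see its docs):

```python
# Sketch: hit a local Ollama-style server with a cross-origin-looking request
# and see whether it sends back Access-Control-Allow-Origin. Port 11434 and
# the Origin value are assumptions about a typical local setup.
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/tags",
    headers={"Origin": "http://localhost:8000"},  # where the agent HTML page would be served
)
with urllib.request.urlopen(req) as resp:
    allow = resp.headers.get("Access-Control-Allow-Origin")
    print("Access-Control-Allow-Origin:", allow or "<missing: a browser would block this>")
```
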
N_Lens•7mo ago
I guess we're at the stage where every permutation of "AI Agents" and X (where X is some technology or space) must be tried and posted on HN.
handfuloflight•7mo ago
That's either the peak of inflated expectations or the slope of enlightenment.

Depending on which side you're on; "only time will tell."

Dilettante_•7mo ago
Sometimes you gotta shake the tree to see what falls out.
bravetraveler•7mo ago
Or pump the bubble
selvan•7mo ago
Ship AI Agents as a web page :-)
FrankyHollywood•7mo ago
We are in the early stage :)

https://www.youtube.com/watch?v=gN-ZktmjIfE

latexr•7mo ago
That’s a well-remembered video, but I don’t really think it fits with the original comment. What we’re seeing is more akin to already having the plane flying (badly, still crashing frequently and landing in the wrong country) and instead of making it more reliable, everyone is trying different wheels and paint colours.

We’re not really seeing any significant development with this. What LLMs need most desperately (and are far from getting) is reliability and not being convincing liars. Being able to query existing server models from your oven timer is a cool gimmick but not really transformative or advancing anything.

It’s like a reverse JFK Space Effort speech: “We choose to indiscriminately throw at the wall every single LLM-adjacent idea we can think of. Not because it is useful, but because it is easy and potentially profitable”.

spwa4•7mo ago
An excellent idea to have those be AI generated and posted! I'll start:

https://chatgpt.com/share/68679564-6a44-8012-b1bd-25819bfbf0...

TekMol•7mo ago
It seems the only code that runs in the browser here is the code that talks to LLMs on servers.

Why would you need WASM for this?

politelemon•7mo ago
They're using some Python libraries like openai-agents, so presumably it's to save on the development effort of calling/prompting/managing the HTTP endpoints. But yes, this could just be done in regular JS in the browser; they'd have to write a lot of boilerplate for an ecosystem which is mainly Python.
yjftsjthsd-h•7mo ago
> But yes, this could just be done in regular JS in the browser; they'd have to write a lot of boilerplate for an ecosystem which is mainly Python.

Surely that's a prime use for AI?
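
For a sense of what that boilerplate looks like, here's a rough sketch of the single HTTP call the openai-agents library is wrapping, written in plain Python against an OpenAI-compatible endpoint (the URL and model name are placeholders for a local Ollama-style server, not taken from the article):

```python
# Sketch of the raw "talk to an OpenAI-compatible endpoint" call that agent
# libraries wrap. URL and model name are placeholders.
import json
import urllib.request

payload = {
    "model": "qwen2.5:7b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```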

m13rar•7mo ago
From a quick gander, the WASM isn't there to talk to the servers. WASM can be used to run AI agents that talk to local LLMs from a sandboxed environment inside the browser.

For example, in the next few years OS vendors and PC makers may ship small local models as stock standard to improve operating system functions and other services. That local LLM engine layer could then be used by browser applications too, through WASM, without having to write JavaScript; the WASM sandbox would safely expose the system's LLM engine layer.

TekMol•7mo ago
No matter if the LLM is on the same machine or elsewhere, why would you need WASM to talk to it and not just JS?
lgas•7mo ago
You never need WASM (or any other language, bytecode format, etc.) to talk to LLMs. But WASM provides things people might like for agents, e.g. strict sandboxing by default.
benatkin•7mo ago
This is trying to use the word agent to make it sound cool, but it doesn't make a case for why it's particularly about agents and not just basic level AI stuff.

> The agent code is nothing more than a Python script that relies on the openai-agents-python library to run an AI agent backed by an LLM served via an OpenAI-compatible API.

The openai-agents-python code is useful for writing agents but it is possible to use it to write code that isn't very agentic. None of the examples are very agentic.

niyyou•7mo ago
I'd like to offer a less skeptical view on this, contrary to what I've read here so far. LLMs that act (a.k.a. agents) bring a whole lot of new security and privacy issues. If we were already heading toward a privacy dystopia (with trackers, centralized services, etc.), agents could take that to a whole new level.

That's why I can only cheer when I see a different path where agents are run locally (by the way, Hugging Face has already published a couple of spaces demonstrating that). As a plus, because they're small, their environmental footprint will also be smaller (although, admittedly, I can also see the Jevons Paradox possibly happening here too).

sandGorgon•7mo ago
I build an open-source mobile browser; we create AI agents (that run in the background) on the mobile browser, and build an extension framework on top so you can create these agents by publishing an extension.

We hook into the Android WorkManager framework and do some quirky things with tab handling to make this work. It's harder to do this on mobile than on desktop.

A bunch of people are trying to do interesting things, like an automatic Labubu purchase agent (on Popmart) :D lots of purchase-related agents.

Pull requests welcome! https://github.com/wootzapp/wootz-browser/pull/334

ipsum2•7mo ago
I recently wrote some JavaScript to automate clicking coupons. The website checks for non-human clicks using event.isTrusted. Firefox allowed me to bypass this by rewriting the JS to replace s/isTrusted/true, while Chrome Manifest V3 doesn't allow it. Anyway, Firefox might be the future of agents, due to its extensibility.
_pdp_•7mo ago
Mildly interesting article - I mean, you can already run a ton of libraries that talk to an inference backend. The only difference here is that the client-side code is in Python, which by itself doesn't make creating agents any simpler - I would argue that it complicates things a ton.

Also, connecting a model to a bunch of tools and dropping it into some kind of workflow is maybe 5% of the actual work. The rest is spent on observability, background tasks, queueing systems, multi-channel support for agents, user experience, etc., etc., etc.

Nobody talks about that part, because most of the content out there is just chasing trends - without much real-world experience running these systems or putting them in front of actual customers with real needs.

mentalgear•7mo ago
Agreed. Regarding the other parts of the "LLM" stack, have a look at what is, IMO, the best LLM coordination/observability platform TS library: https://mastra.ai/
_pdp_•7mo ago
Thanks, we are building chatbotkit.com / cbk.ai (not open source).
meander_water•7mo ago
When I saw the title, I thought this was running models in the browser. IMO that's way more interesting, and you can do it with transformers.js and ONNX Runtime. You don't even need a GPU.

https://huggingface.co/spaces/webml-community/llama-3.2-webg...

salviati•7mo ago
I think you _do_ need a GPU. But it can work with an integrated one, no need for a discrete one.

I can't run it on Linux since WebGPU is not working for me...

pjmlp•7mo ago
It is not yet available in any browser on Linux, other than Android/Linux and ChromeOS.
zoobab•7mo ago
No mention of WebGPU...
ultrathinkeng•7mo ago
hmm
asim•7mo ago
The frustrating thing about this is the limitation of using a browser. Agents should be long-running processes that exist externally to a browser. The idea of using WASM is clever, but it feels like the entire browser environment needs to evolve, because we're no longer dealing with just web pages. I think we are looking at a true evolution of the web if this is the way it's going to go.
diggan•7mo ago
> Agents should be long-running processes that exist external to a browser

Sure, but there are a ton of ways of doing that today. What this specific thing addresses is removing "the dependency on extra tools and frameworks that need to be installed before the agents can be run".

boomskats•7mo ago
That's what the Component Model[0] is all about.

WASIp3[1] is gonna be awesome. Hopefully releasing later this year.

[0]: https://component-model.bytecodealliance.org/

[1]: https://wasi.dev/roadmap

simonw•7mo ago
When you say agents should be long running, which definition of "agent" are you talking about?
evacchi•7mo ago
mcp.run is entirely based on WASM. Tools can run on our cloud or locally.
_joel•7mo ago
Having to disable CORS restrictions is a bit meh. I understand why, but still.
simonw•7mo ago
In this case the "agent" definition they are using is the one from the https://github.com/openai/openai-agents-python Python library, which they are running in the browser via Pyodide and WASM.

That library defines an agent as a system prompt and optional tools - notable because many other common agent definitions have the tools as required, not optional.

That explains why their "hello world" demo just runs a single prompt: https://github.com/mozilla-ai/wasm-agents-blueprint/blob/mai...
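
As a rough illustration of that shape, here is a minimal sketch using openai-agents-python as documented (model/key configuration is omitted, the tool is a made-up example, and exact imports may differ between library versions):

```python
# Sketch based on openai-agents-python's documented surface; assumes the
# library's default model/key configuration is already set up.
from datetime import datetime, timezone

from agents import Agent, Runner, function_tool

@function_tool
def current_utc_time() -> str:
    """Illustrative tool: return the current UTC time as an ISO string."""
    return datetime.now(timezone.utc).isoformat()

# Tools are optional here: dropping the `tools` argument still yields a valid
# Agent, which is exactly the single-prompt "hello world" case above.
agent = Agent(
    name="Hello agent",
    instructions="You are a terse assistant.",
    tools=[current_utc_time],
)

result = Runner.run_sync(agent, "What time is it in UTC?")
print(result.final_output)
```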

lgas•7mo ago
> notable because many other common agent definitions have the tools as required, not optional.

This feels weird to me. I would think of an agent with no tools as the trivial case of a "null agent" or "empty agent".

It would be like saying you can't have a list with no elements because that's not a list at all... but an empty list is actually quite useful in many contexts. Which is why almost all implementations in all languages allow empty lists and add something distinct like a NonEmptyList to handle that specific case.

simonw•7mo ago
A lot of the agent/agentic definitions I see floating around are variants on "an LLM running tools in a loop".

Can't do that without tools!
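
That definition fits in a few lines. A minimal sketch of the loop, assuming the `openai` client package pointed at an OpenAI-compatible server (the URL, model name, and stubbed tool are all placeholders):

```python
# Sketch of "an LLM running tools in a loop" against an OpenAI-compatible
# endpoint. Everything concrete here (URL, model, tool) is a placeholder.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def visit_webpage(url: str) -> str:
    """Stub tool: pretend to fetch a page."""
    return f"<contents of {url}>"

tools = [{
    "type": "function",
    "function": {
        "name": "visit_webpage",
        "description": "Fetch the text of a web page",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarise https://example.com"}]
while True:
    reply = client.chat.completions.create(
        model="qwen2.5:7b", messages=messages, tools=tools
    ).choices[0].message
    if not reply.tool_calls:       # model is done calling tools: final answer
        print(reply.content)
        break
    messages.append(reply)         # keep the assistant's tool-call turn in history
    for call in reply.tool_calls:  # run each requested tool, feed the result back
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": visit_webpage(**args),
        })
```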

thepoet•7mo ago
We looked at Pyodide and WASM, along with other options like Firecracker, for our need: multi-step tasks that require running LLM-generated code locally (via Ollama etc.) with some form of isolation rather than running it directly on our dev machines. We figured it would be too much work given the various external libraries we have to install. The idea was to have a powerful remote LLM generate code for general-purpose stuff (video editing via ffmpeg, graph generation via JS + Chromium, and so on) and execute it locally, with all dependencies installed before execution.

We built CodeRunner (https://github.com/BandarLabs/coderunner) on top of Apple Containers recently and have been using it for some time. It works fine but still needs some improvement to handle very arbitrary prompts.

indigodaddy•7mo ago
For the Gemini-cli integration, is the only difference between CodeRunner with Gemini-cli and Gemini-cli itself that you are just running Gemini-cli in a container?
thepoet•7mo ago
No, Gemini-cli still runs on your local machine. When it generates some code based on your prompt, CodeRunner runs that code inside a container (which sits inside a new lightweight VM, courtesy of Apple, and provides VM-level isolation), installs the requested libraries, executes the generated code, and returns the result back to Gemini-cli.

This is also not Gemini-cli specific and you could use the sandbox with any of the popular LLMs or even with your local ones.
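
The general pattern, as a purely hypothetical sketch (this is not CodeRunner's actual API; the endpoint and payload shape are invented for illustration): the CLI hands generated code to a local sandbox service and reads stdout back.

```python
# Hypothetical sketch of "send generated code to a local sandbox, get output
# back". The /run endpoint and JSON fields are made up; CodeRunner's real
# interface may look nothing like this.
import json
import urllib.request

generated_code = "print('hello from inside the sandbox')"

req = urllib.request.Request(
    "http://localhost:8222/run",  # invented endpoint for the local sandbox service
    data=json.dumps({"language": "python", "code": generated_code}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp).get("stdout"))
```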

indigodaddy•7mo ago
Thanks for explaining
indigodaddy•7mo ago
What would be the advantage of using your tool over, say, just using G CLI inside distrobox/toolbox/et al.?
om8•7mo ago
I have a demo that runs llama3-{1,3,8}B in the browser on CPU. It can be integrated with this thing in the future to be fully local.

https://galqiwi.github.io/aqlm-rs

dncornholio•7mo ago
Putting Python code in a string in an HTML file is a big no-no for me. We should be past this. It looks like going 20 years back in time.