The death knell for variety in AI languages was when Google rug-pulled TensorFlow for Swift.
Use whatever you like.
PouchDB. Hypercore (Pear). It's nice to be able to spin up JS versions of things and have them "just work" on the most widely deployed platform in the world.
TensorFlow.js was awesome for years, with things like BlazeFace, Ready Player Me avatars, hallway tile, and other models running in real time at the edge, before ChatGPT was even conceived. What's your solution, transpile Go into Wasm?
Agents can work in people’s browsers as well as node.js around the world. Being inside a browser gives a great sandbox, and it’s private on the person’s own machine too.
This was possible years ago: https://www.youtube.com/watch?v=CpSzT_c7_UI&t=10m30s
I do my best to run as little in the browser as possible. Everything is an order of magnitude simpler and faster to build if you do the bulk of things on a server in a language of your choice and render to the browser as necessary.
-Someone who has written a ton of JS over the past... almost 30 years now.
> Share memory by communicating
> Centralized cancellation mechanism with context.Context
> Expansive standard library
> Profiling
> Bonus: LLMs are good at writing Go code
I think profiling is probably the lowest-value good here, but I'd be willing to hear stories of AI middleware applications that found value in it.
Cancelling tasks is probably the highest-value good here, but I think the contending runtimes (TS/Python) all prefer using third-party libraries to handle this kind of stuff, so it's probably not the biggest deal.
Being able to write good Go code is pretty cool though; I don't write enough to make a judgement there.
Good at writing bad code. But most of the code in the wild is written by mid-level devs without guidance and on short timelines, i.e. bad code. But this is a problem with all languages, not just Go.
The language of agents doesn't matter much in the long run as it's just a thin shell of tool definitions and API calls to the backing LLM.
It must in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure, I'm an author and maintainer of the project.
It's Elixir specific, but this article emphasizes the importance of async task persistence: https://oban.pro/articles/oban-starts-where-tasks-end
This is true regardless of the language. I only ever do a reasonable amount of work (milliseconds up to a few seconds) in a goroutine. Anything more and your web service is not as stateless as it should be.
These platforms store an event history of the functions which have run as part of the same workflow, and automatically replay those when your function gets interrupted.
I imagine synchronizing memory contents at the language level would be much more overhead than synchronizing at the output level.
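The replay trick those platforms use can be sketched in a few lines. This is a toy model, not any platform's real API: completed step results are persisted as an event history, and when the workflow function re-runs after an interruption, recorded steps return their cached results instead of executing again:

```go
package main

import "fmt"

// History records the result of each completed step, in order.
type History struct {
	results []string
	cursor  int
}

// Step replays a recorded result if one exists; otherwise it executes
// fn, records the result, and returns it.
func (h *History) Step(fn func() string) string {
	if h.cursor < len(h.results) {
		r := h.results[h.cursor]
		h.cursor++
		return r // replayed; fn is not called again
	}
	r := fn()
	h.results = append(h.results, r)
	h.cursor++
	return r
}

func workflow(h *History) string {
	a := h.Step(func() string { return "fetched" }) // e.g. expensive LLM call
	b := h.Step(func() string { return a + "+parsed" })
	return b
}

func main() {
	h := &History{}
	fmt.Println(workflow(h)) // first run executes both steps

	h.cursor = 0             // simulate a restart with the same persisted history
	fmt.Println(workflow(h)) // second run replays without re-executing anything
}
```

The catch, as the parent notes, is that the workflow function must be deterministic between the recorded calls, which is why synchronizing at the output level is so much cheaper than trying to snapshot memory.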
Adding a durable boundary (via a task queue) in between steps is typically the first step, because you at least get persistence and retries, and for a lot of apps that's enough. It's usually where we recommend people start with Hatchet, since it's just a matter of adding a simple wrapper or declaration on top of the existing code.
Durable execution is often the third evolution of your system (after the first pass with no durability, then adding a durable boundary).
Through the use of both a map that holds a context tree and a database, we can purge old sessions and then reconstruct them from the database when needed (for instance, an async agent session that requires user input).
We also don't have to hold individual objects for the agents/workflows/tools: we keep them stateless in a map and reference them by ID as needed. A separate stateful object holds the previous actions/steps/"context".
To make sure the agents/workflows are consistent, we can hash the serialized agent/workflow (as these are serializable in my system).
I have only implemented basic Agent/tools though and the logging/reconstruction/cancellation logic has not actually been done yet.
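A condensed sketch of that split, with hypothetical names of my own: stateless agent definitions live in a registry keyed by ID, while per-session state is a separate, serializable object that can be purged and later rebuilt from the database:

```go
package main

import "fmt"

// Agent is a stateless definition; safe to share across sessions.
type Agent struct {
	ID     string
	System string
}

// Session holds the mutable context for one run. It references agents
// only by ID, so it serializes cleanly and can be rebuilt from storage.
type Session struct {
	ID      string
	AgentID string
	Steps   []string // previous actions/steps/"context"
}

// Registry is the map of stateless agents, looked up by ID.
type Registry struct {
	agents map[string]*Agent
}

func (r *Registry) Lookup(id string) (*Agent, bool) {
	a, ok := r.agents[id]
	return a, ok
}

func main() {
	reg := &Registry{agents: map[string]*Agent{
		"planner": {ID: "planner", System: "You plan tasks."},
	}}

	// A session reconstructed from the database after a purge.
	sess := &Session{ID: "s1", AgentID: "planner", Steps: []string{"asked user"}}

	if a, ok := reg.Lookup(sess.AgentID); ok {
		fmt.Println(a.System, len(sess.Steps))
	}
}
```

Because the session carries only IDs and history, purging it from memory loses nothing that the database can't restore.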
From the article:

"""
Agents typically have a number of shared characteristics when they start to scale (read: have actual users):
They are long-running — anywhere from seconds to minutes to hours.
Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
They often involve input from a user (or another agent!) at some point in their execution cycle.
They spend a lot of time awaiting i/o or a human.
"""No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency is not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple seconds longer because you've written it in Python, I doubt that anyone would care (in the majority of cases at least).
I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.
> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...
Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.
(I think you can effectively write an agent in any language and I think Javascript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLM have a particularly nice chemistry.)
Beneath all the jargon, it’s good to remember that an “agent” is ultimately just a bunch of http requests and streams that need to be coordinated—some serially and some concurrently. And while that sounds pretty simple at a high level, there are many subtle details to pay attention to if you want to make this kind of system robust and scalable. Timeouts, retries, cancellation, error handling, thread pools, thread safety, and so on.
This stuff is Go’s bread and butter. It’s exactly what it was designed for. It’s not going to get you an MVP quite as fast as node or python, but as the codebase grows and edge cases accumulate, the advantages of Go become more and more noticeable.
I think I'd condense this out to "this is not a really important deciding factor in what language you choose for your agent". If you know you need something you can only get in Python, you'll write the agent in Python.
But outside of that, ML in Go is basically impossible. Trying to integrate with the ecosystem outside Go is really difficult, and my experience has been that Claude Code is far less effective with Go than it is with Python, or even Swift.
I ditched a project I was writing in Go and replaced it with Swift (this was mostly prompt-based anyway). It was remarkable how much better the first pass of the code generation was.
You can safely swap out agents without redeploying the application, the concurrency is way below the scale BEAM was built for, and creating stateful or ephemeral agents is incredibly easy.
My plan is to set up a base agent in Python, Typescript, and Rust using MCP servers to allow users to write more complex agents in their preferred programming language too.
[0]: https://github.com/extism/extism
[1]: https://github.com/extism/elixir-sdk
1. If you make your agents/workflows serializable you can run/load them from a config file or add/remove them from a decoupled frontend. You can also hash them to make versioning easy to track/immutable.
2. If you decouple the stateful object from the agent/workflow object, you can persist it through sufficient logging; then you can rebuild any flow at any state, and get branching by letting traces build on one another. You can also restart/rerun a flow from any point.
3. You can allow for serializable tools by having a standard HttpRequestTool, then set up Cloudflare Workers (or any external endpoints) for the actual tool-call logic, removing load from the primary server and making it possible to add/remove tools without rebuilding/restarting.
Given this system in golang you can have a single server which supports tens of thousands of concurrent agent workflows.
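Point 1's hashing idea, sketched with a hypothetical `Workflow` type of my own: serialize deterministically, hash the bytes, and you get an immutable version identifier for free:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Workflow is a serializable definition; loading one from a config file
// or a decoupled frontend is just json.Unmarshal.
type Workflow struct {
	Name  string   `json:"name"`
	Steps []string `json:"steps"`
}

// Version hashes the canonical JSON form. Struct field order is fixed,
// so the same definition always yields the same hash, and any change
// to the definition yields a new one.
func (w Workflow) Version() string {
	b, _ := json.Marshal(w)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	w := Workflow{Name: "triage", Steps: []string{"classify", "route"}}
	fmt.Println(w.Version()[:12]) // short, stable version tag
}
```

Storing sessions against this hash rather than a mutable name is what makes the versioning "immutable": a replayed trace can never silently run against an edited workflow.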
The biggest problem is there aren't that many people working on it. Even if you can make agents 100x more efficient by running them in Go, it doesn't really matter if cost isn't the biggest factor for the final implementations.
The actual compute/server/running costs for big AI agent implementation contracts are <1% of the total, so making them 100x more efficient doesn't really matter.
You need a DSL, either supported in the language or through configuration. These are features you get for free in Python and, secondarily, JavaScript. You have to write most of this yourself in Go.
So every discussion about the "best" programming language is really you telling the world about your favorite language.
Use Go. Use Python. Use JavaScript. Use whatever the hell else you want. They are all good enough for the job. If you are held back it won't be because of the language itself.
But programming languages make tradeoffs on those very paths (particularly spawning child processes and communicating with them, how underlying memory is accessed and modified, garbage collection).
Agents often involve a specific architecture that's useful for a language with powerful concurrency features. These features differentiate the language as you hit scale.
Not every language is equally suited to every task.
It already does well coordinating IoT networks. It's probably one of the most underestimated systems.
The Elixir community has been working hard to be able to run models directly within BEAM, and recently added the capability to run Python directly.
The issue with Go is that as soon as you need to do actual machine learning, it falls down.
The issue with Python is that you often want concurrency in agents, although this may be solved by Python's new threading support.
Why is Rust great? It interops very well with Python, so you can write any concurrent pieces in Rust and simply import them into Python, without needing to sacrifice any ML work.
I'll be honest: Go is a bit of an odd fit in the world of AI, and if that's the future, I'm not sure Go has a big part to play outside of some infra stuff.
behnamoh•4h ago
by that logic Elixir is even better for agents.
also the link at the bottom of the page is pretty much why I ditched Go: https://go.dev/blog/error-syntax
The AI landscape moves so fast, and this conservative, backwards-looking mindset of the new Go dev team doesn't match the forward-looking LLM engineering mindset.
Funnily, it's also one of the reasons I stay with Go.
Error handling is the most controversial Go topic, with half the people saying it's terrible and needs new syntax, and half saying it's perfect and that adding any more syntax would ruin it.
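For anyone outside the Go world, the pattern the whole debate is about is the explicit check after every fallible call:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// parsePort shows the idiom the syntax debate is about: every fallible
// call is followed by an explicit `if err != nil` branch that wraps
// context onto the error before returning it.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d out of range", p)
	}
	return p, nil
}

func main() {
	p, err := parsePort("8080")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(p)
}
```

One camp reads the repetition as noise worth new syntax; the other reads it as every error path being visible and individually handled, which is exactly the property the blog post linked above decided to preserve.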
sorentwo•4h ago
Elixir's lightweight processes and distribution story make it ideal for orchestration, and that includes orchestrating LLMs.
Shameless plug, but that's what many people have been using Oban Pro's Workflows for recently, and something we demonstrated in our "Cascading Workflows" article: https://oban.pro/articles/weaving-stories-with-cascading-wor...
Unlike Hatchet, it actually runs locally, in your own application as well.
jerf•4h ago
Erlang possibly even more so. The argument that pure code is generally safer to vibe code is compelling to me. (Elixir's purity is rather complicated to describe, Erlang's much more obvious and clear.) It's easier to analyze that this bit of code doesn't reach out and break something else along the way.
Though it would be nice to have a pure language popular enough for the LLMs to work well on that was also fast. At the moment, writing in pure code means taking a fairly substantial performance hit, and I'm not talking about O(n log n) algorithm slowdowns; I mean just normal performance.