Ages and ages ago I was an EE turned programmer, and everyone was hyping JUnit at the time. I had a customer ask for it on a project, so fine, I'll learn it. I kept thinking it was stupid because in my mind it barely did anything. But then I got it: it did barely do anything, but it did things you'd commonly need to do for testing, and it did them in a somewhat standardized way that kept you from having to roll your own every time. Suddenly it didn't feel so stupid.
Mention John Holland’s work in adaptive systems, Hewitt’s actor model, or even Minsky’s “The Society of Mind” and you’ll be met with blank stares.
I do believe LLMs have the potential to make these older ideas relevant again and potentially create something amazing, but sadly the ignorant hype makes it virtually impossible to have intelligent conversations about these possibilities.
That said, the "Agent pattern du jour" is heavily based on using LLM's to provide the "brain" of the Agent and then Tool Calling to let it do things an LLM can't normally do. But still... depending on just what you do with those tool calls and any other code that sits in your Agent implementation then it certainly could be more than "just" an LLM wrapper.
Nothing stops you from, for example, using the BDI architecture [1], implementing multi-level memory that's analogous to the way human memory works, wiring in some inductive learning, and throwing in some case-based reasoning and an ontology-based reasoning engine.
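To make that concrete, here's a minimal sketch of what a BDI-style loop wrapped around an LLM might look like. Everything here is hypothetical: `llm()` is a stand-in for whatever completion API you use, and the belief/desire/intention stores are just plain data structures.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Hypothetical stand-in for any LLM completion call; wire up your model here.
    raise NotImplementedError

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)    # what the agent holds true about the world
    desires: list = field(default_factory=list)    # goals it would like to achieve
    intentions: list = field(default_factory=list) # goals it has committed to pursuing

    def perceive(self, observation: str) -> None:
        # Fold new observations into the belief base. A fancier version would
        # use the LLM to reconcile contradictions with existing beliefs.
        self.beliefs[observation] = True

    def deliberate(self) -> None:
        # Decide which desires to commit to, given current beliefs.
        for goal in self.desires:
            if goal not in self.intentions:
                verdict = llm(f"Beliefs: {self.beliefs}. Commit to goal '{goal}'? yes/no")
                if verdict.strip().lower().startswith("yes"):
                    self.intentions.append(goal)

    def act(self) -> str | None:
        # Turn the first committed intention into a concrete next step.
        if self.intentions:
            return llm(f"Beliefs: {self.beliefs}. Next concrete step toward '{self.intentions[0]}'?")
        return None
```

The point isn't that this is sophisticated; it's that the LLM slots in as the deliberation engine while the surrounding architecture is ordinary code you control.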
Most people today aren't doing this, because they're mostly johnny-come-latelies who don't know anything about AI besides what they see on Twitter, Reddit, and LinkedIn, and wouldn't know BDI from BDSM.
[1]: https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93...
simonw•3h ago
They went for LLM + short-term and long-term memory + planning + tool using + action execution.
Presumably "planning" here is covered by any LLM that can do "think step by step" reasonably well?
It wasn't clear to me what the difference between "tool using" and "action execution" was.
I haven't seen a definition that specifically encompasses both short- and long-term memory before. They say:
> This dual memory system allows agents to maintain conversation continuity while building knowledge over time.
So presumably, this is the standard LLM chat conversation log plus a tool that can decide to stash extra information in a permanent store - similar to how ChatGPT's memory feature worked up until about four weeks ago.
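One way to picture that split (a sketch of the general pattern, not ChatGPT's actual implementation): short-term memory is just the message list you resend each turn, and long-term memory is a tool the model can call to persist facts. The file name and function here are made up for illustration.

```python
import json

# Short-term memory: the rolling conversation log resent with every request.
conversation: list[dict] = []

MEMORY_FILE = "memories.json"  # hypothetical persistent store

def load_memories() -> list[str]:
    try:
        with open(MEMORY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def remember(fact: str) -> None:
    """Tool the model can invoke to stash a fact in long-term memory."""
    memories = load_memories()
    memories.append(fact)
    with open(MEMORY_FILE, "w") as f:
        json.dump(memories, f)

def build_prompt(user_message: str) -> list[dict]:
    # Long-term memories get prepended as context on every turn.
    system = {"role": "system", "content": "Known facts: " + "; ".join(load_memories())}
    conversation.append({"role": "user", "content": user_message})
    return [system, *conversation]
```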
swyx•2h ago
mild disagree.
1) externalizing the plan and letting the user audit/edit the plan while it's working is "tool use", yes, but a very special-case kind of tool use that, for example, operator and deep research use Temporal for. ofc we also saw this with Devin/Manus and i kinda think they're better.
2) there is a form of primitive tree search that people are doing where they spam out several different paths and run each a few steps ahead to gain information about optimal planning. you will see this with morph's launch at AIE.
3) plan meta-reflection and reuse: again a form of tool use, but the devin and allhands folks have worked on this a lot more than most.
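Point 2 reduces to something like the sketch below; `rollout` and `score` are hypothetical stand-ins for running the agent a few steps ahead and judging the intermediate state.

```python
def rollout(plan: list[str], steps: int = 3) -> str:
    # Hypothetical: execute (or simulate) the first few steps of a plan
    # and return a description of the state the agent ends up in.
    raise NotImplementedError

def score(outcome: str) -> float:
    # Hypothetical: have an LLM or a heuristic rate how promising
    # that intermediate state looks for the overall goal.
    raise NotImplementedError

def pick_plan(candidate_plans: list[list[str]]) -> list[str]:
    # Spam out several candidate plans, run each a few steps ahead,
    # and commit to the one whose lookahead scored best.
    return max(candidate_plans, key=lambda plan: score(rollout(plan)))
```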
my criticism of many agent definitions is that they generally do not take memory, planning, and auth seriously enough, and i think those 3 areas are my current bets for "alpha" in 2025.
> I haven't seen a definition that specifically encompasses both short- and long-term memory before.
here
- https://docs.mem0.ai/core-concepts/memory-types#short-term-m...
- https://x.com/swyx/status/1915128966203236571
Tokumei-no-hito•1h ago
tough•1h ago
doing a lot of inference here, but it could be a separation between "read"-style tool actions and "write/execute" actions (like running code, sending an email, etc.)
a bit weird from a coding perspective but idk
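That split is straightforward to express in code, though. A sketch where write/execute tools get an extra confirmation gate (the example tools are made up):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    fn: Callable[[str], str]
    has_side_effects: bool  # False for read-only tools, True for write/execute

def run_tool(tool: Tool, arg: str) -> str:
    # Read-only tools run freely; side-effecting ones need explicit approval.
    if tool.has_side_effects:
        if input(f"Allow '{tool.name}({arg})'? [y/N] ").strip().lower() != "y":
            return "declined by user"
    return tool.fn(arg)

# Hypothetical examples of each kind:
search = Tool("web_search", lambda q: f"results for {q}", has_side_effects=False)
send_email = Tool("send_email", lambda body: "sent", has_side_effects=True)
```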
sebastiennight•44m ago
What happened four weeks ago?
swyx•40m ago