The release of Anthropic's "Imagine with Claude" is fascinating. It shows a model that doesn't generate code to build a UI; it uses tools to construct the UI directly. This feels like a major shift from the "AI as a copilot" paradigm to "AI as a runtime."
This shift is the core question behind an open-source project I've been working on with Ismael Faro called LLMunix (https://github.com/EvolvingAgentsLabs/llmunix). Our approach is to build an entire OS for agents in which the "executables" are not binaries but human-readable Markdown files, which the LLM interprets to orchestrate complex workflows.
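To make that concrete, here is a hypothetical sketch of what such a Markdown "executable" might look like. The agent name, tools, and steps below are invented for illustration; the actual LLMunix format is in the repo and may differ:

    # Agent: daily-news-digest
    ## Tools
    - web_search(query): returns a list of result snippets
    - write_file(path, content): saves text to the agent's filesystem
    ## Procedure
    1. Call web_search("AI news today") and keep the top 5 results.
    2. Summarize each result in one sentence.
    3. Call write_file("digest.md", <the combined summaries>).

Nothing here gets compiled. The LLM reads the file as instructions and carries out each step by issuing the corresponding tool calls, which is what keeps the whole workflow human-readable and auditable.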
The linked article is my analysis of these two approaches. It argues that while direct interpretation is incredibly powerful, an open, transparent, and auditable framework (like our Markdown-based one) is crucial for the future of agentic systems.
Curious to hear what HN thinks. Are we moving towards a future where LLMs are the OS, and if so, what should the "assembly language" for that OS look like?