Peter Naur's 'Programming as Theory Building' is more relevant today than ever, especially as we move into the era of autonomous agents. One of the biggest friction points with current AI developer tools is that they lack the 'theory' of the repository -- the architectural intent and cross-module mental models that Naur describes as the core of programming.
When an LLM suggests a change based on a local file, it's essentially 'programming by accident' if it doesn't understand the underlying theory. We're seeing a shift where 'Industrial-Grade' agents are trying to solve this by rebuilding that theory through semantic RAG and AST parsing (e.g., using tree-sitter to map out the 'mental model' of function signatures and struct definitions).
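As a rough sketch of what that signature-mapping step looks like: a real agent would use tree-sitter with a compiled grammar for the target language, but the same idea can be shown with Python's stdlib `ast` module, which is self-contained. The function name `extract_signatures` and the sample source are hypothetical, purely for illustration.

```python
import ast

def extract_signatures(source: str) -> list[str]:
    """Walk a module's AST and collect function names with their
    argument lists -- a crude 'map' of the code an agent might build
    before proposing an edit (stand-in for a tree-sitter query)."""
    tree = ast.parse(source)
    sigs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"{node.name}({args})")
    return sigs

# Hypothetical module source the agent is about to modify.
code = """
def transfer(account_id, amount):
    pass

def audit_log(event):
    pass
"""
print(extract_signatures(code))
# -> ['transfer(account_id, amount)', 'audit_log(event)']
```

The point isn't the parsing itself but that the agent now has structural facts (what exists, what it's called, what it takes) to check its proposed change against, instead of pattern-matching on nearby text.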
The goal isn't just to generate code, but to verify that the code aligns with the existing theory of the system -- which is why loops that include 'cargo check' and test verification are so critical. It turns the agent from a stochastic parrot into something closer to a junior engineer who is at least trying to build a theory before they commit.
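That verification loop can be sketched in a few lines. This is an assumption about how such a loop might be wired up, not any particular agent's implementation; `verify_change` is a made-up name, though `cargo check` and `cargo test` are real commands. The `runner` parameter is injectable so the logic can be exercised without a Rust toolchain installed.

```python
import subprocess

def verify_change(repo_dir: str,
                  checks=("cargo check", "cargo test"),
                  runner=None) -> bool:
    """Run each check in order; reject the proposed change on the
    first nonzero exit code. An agent would only commit (or surface
    the diff) when this returns True."""
    def default_runner(cmd: str) -> int:
        # Real invocation: run the toolchain inside the repo.
        return subprocess.run(cmd.split(), cwd=repo_dir).returncode
    run = runner or default_runner
    for cmd in checks:
        if run(cmd) != 0:
            return False  # the change contradicts the system's 'theory'
    return True

# Simulated run: compilation passes but a test fails.
fake = lambda cmd: 0 if cmd == "cargo check" else 1
print(verify_change(".", runner=fake))
# -> False
```

The design choice that matters is ordering: cheap static checks first, tests second, so the agent gets fast feedback before paying for a full test run.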