Unfortunately or fortunately, there will always need to be a human to supervise an "unsupervised run." What should it implement? Did it implement it correctly? What cascading tasks, unwired from the AI, still need to be completed? The human in the loop will always be there, at higher levels of abstraction. Even if we one day build an AI that operates at the abstraction level of "build me a SaaS company," we will want to intimately know and command what it's doing. When we treat AI sophistication as its ability to remove humans from the loop, we build systems we don't understand, can't finely control or improve, and from which we can't easily learn.
Here's my step in the other direction: a human and AI co-maintained "knowledge core" covering every symbol of a codebase. We directly see and edit the AI's understanding of the codebase, leveraging human knowledge to teach the AI in a symbiotic relationship. The knowledge core serves as documentation that both humans and AI can consume when answering questions and constructing prompts. It also allows prompting at a more granular symbol level, specifying which nodes a code graph search for context should start from.
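A minimal sketch of what this could look like, assuming a hypothetical entry type and symbol names invented for illustration: each symbol gets an entry that both humans and AI can read and edit, and a prompt's context is gathered by walking the graph outward from a chosen symbol.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each codebase symbol gets an entry that humans
# and AI co-edit. Names and fields here are illustrative, not a spec.
@dataclass
class SymbolEntry:
    name: str                                   # e.g. "payments.retry_charge"
    summary: str                                # AI-drafted, human-refined
    edges: list = field(default_factory=list)   # symbols this one references

# The knowledge core is a graph of entries keyed by symbol name.
core = {
    "payments.retry_charge": SymbolEntry(
        "payments.retry_charge",
        "Retries a failed charge with exponential backoff.",
        ["payments.charge", "util.backoff"],
    ),
    "payments.charge": SymbolEntry(
        "payments.charge", "Submits a charge to the payment gateway.", []
    ),
    "util.backoff": SymbolEntry(
        "util.backoff", "Computes exponential backoff delays.", []
    ),
}

def context_for(start: str, depth: int = 2) -> list:
    """Breadth-first walk from a starting symbol, collecting the
    summaries that would seed an AI prompt's context window."""
    seen, frontier, out = {start}, [start], []
    for _ in range(depth):
        nxt = []
        for name in frontier:
            entry = core[name]
            out.append(f"{entry.name}: {entry.summary}")
            for edge in entry.edges:
                if edge not in seen:
                    seen.add(edge)
                    nxt.append(edge)
        frontier = nxt
    return out
```

Starting the search at `payments.retry_charge` pulls in its direct dependencies' summaries, so a prompt scoped to that symbol carries exactly the neighborhood a human reviewer would also consult.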
At pre-commit, the AI will generate new knowledge, but ultimately humans will review, clarify, refine, and add context. Unlike current prompting workflows, this context will persist and evolve. It will become the persistent base of an organization's collective knowledge, evolving with every change, never drifting, un-siloed from any one dev, and readable and co-writeable by AI. A virtuous cycle of humans improving AI, improving humans, improving AI.
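The pre-commit step above could be sketched as a merge policy, with one assumption made explicit: human-reviewed knowledge always wins, and AI drafts stay pending until a person approves them. The dictionary shapes and field names below are hypothetical.

```python
# Hypothetical pre-commit merge: the AI drafts notes for symbols touched
# by a commit; human-reviewed text persists, and new drafts wait for review.

def merge_notes(existing: dict, ai_drafts: dict) -> dict:
    """existing maps symbol -> {"text": ..., "reviewed": bool};
    ai_drafts maps symbol -> AI-drafted text for changed symbols."""
    merged = dict(existing)
    for symbol, draft in ai_drafts.items():
        entry = merged.get(symbol)
        if entry and entry["reviewed"]:
            # Never overwrite human-refined knowledge silently;
            # attach the draft so a reviewer can accept or discard it.
            merged[symbol] = {**entry, "pending_draft": draft}
        else:
            # New or never-reviewed symbols take the draft, flagged unreviewed.
            merged[symbol] = {"text": draft, "reviewed": False}
    return merged
```

The design choice is that the AI can only propose, never silently replace, which is what keeps the knowledge core from drifting away from what the humans actually believe.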
Revisit the exponential: an occurrence in nature with such high growth that it supersedes our intuition. Virality, financial compounding, and now, a feedback loop between humanity and AI. As it improves, it is a tool that improves our efficiency and our discovery of knowledge, and in turn we use our synthesis, labor, and reasoning to complete the loop. The interface with AI should be optimized for this workflow, designed with the intention that the loop will always close at the human.
I'm an ex-Google software engineer building this. Email me if you're interested in working with me. joshua@sage-ai.dev
raooll•4mo ago
I have been struggling with the same paradox: every AI moat seems to be about removing humans. Although that might be possible in the distant future, I don't see it happening anytime soon.
Instead I want AI to be my assistant, like an IDE, where it helps me get things done faster than I can by myself, but I am always in command. Also, what if there were an AI assistant for each code repo, one that understands that repo in its entirety and is downloaded when you clone the repo? It would have deep knowledge of the codebase, its scope and restrictions, how the code changed over time, etc.
We could provide small language models for existing codebases in large organizations like banks, which run COBOL or other older tech. This AI would then empower their limited developers to punch way above their numbers and help them develop better, cleaner, and faster.
This is one way I am thinking about AI/LLMs as they keep shrinking and keep becoming more powerful.
I think specialized, single-purpose embedded AI/LLMs will be the future.