(The above is a medium link, the text is below in case you'd prefer to read it here.)
Who this is for: AI researchers and enthusiasts
I recently deployed a small application (Go server, in-memory database, streaming video, WebRTC) that I developed with AI. It's not ready for users yet, so unfortunately I can't link it, but progress was solid. Amazingly, the AI was able to build a dockerized test framework for it, running end-to-end tests with headless Chrome and mocked-up video feeds. That's a huge task that would have taken me weeks, if I could have finished it at all, and I was blown away that the AI completed it. The tests don't pass yet, which is how I know the application definitely isn't ready for users.
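For anyone curious how headless Chrome can stand in for a real camera in this kind of test setup, Chromium ships flags that swap the hardware devices for synthetic ones. This is a minimal sketch, not the author's actual harness; the clip path is a placeholder:

```go
package main

import (
	"fmt"
	"strings"
)

// fakeMediaFlags returns Chromium flags that replace the real camera and
// microphone with synthetic inputs, so WebRTC end-to-end tests can run
// deterministically inside a container with no hardware attached.
func fakeMediaFlags(videoFile string) []string {
	return []string{
		"--headless=new",
		"--no-sandbox", // commonly needed when Chrome runs as root in Docker
		"--use-fake-ui-for-media-stream",     // auto-accept getUserMedia prompts
		"--use-fake-device-for-media-stream", // synthetic camera and microphone
		"--use-file-for-fake-video-capture=" + videoFile, // serve this clip as the camera feed
	}
}

func main() {
	// Hypothetical .y4m clip baked into the test image.
	flags := fakeMediaFlags("/testdata/feed.y4m")
	fmt.Println(strings.Join(flags, " "))
}
```

A test runner would pass these flags when launching Chrome against the app under test; the `.y4m` format is what Chromium expects for fake video capture.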
One thing that struck me is that as I iterated with the AI, there were sometimes regressions. It forgot how it had solved problems it had already struggled with and eventually cracked. This tracks with the common description of AI as an intern with broad knowledge of different technologies, little experience handling large codebases on its own, and no ability to learn anything over the course of its internship. What that means is that the only knowledge the AI has is what is included in its context. It doesn't learn from its "experience" of thinking through, writing, and developing a codebase, unless it is explicitly asked to write those experiences down and read them back right before its next answer.
It would be like being an amnesiac who remembers the contents of the entire Internet and every open-source codebase, but remembers nothing about the project at hand except the current codebase and whatever short notes it left itself, all of which it has to reread before its next step. Or like being a President who wakes up every morning as an amnesiac and must first reread the entire history of their own country, because they know everything about every other country in the world but never learned their own. (Here "their own country" stands for the codebase the AI itself wrote.) Except instead of doing that every morning, you do it after every single step you take.
It would be absurd to expect AIs to reread all of their original training data between every prompt, yet this is exactly what's done for the codebases they themselves write. They don't write them and learn them; they write them and forget them.
Some exciting developments that could be expected in the near future are:
* AI agents that remember or learn from their previous thinking (which they express in chains of thought), and actually learn the codebase and system they're working on without having to explicitly write it into their context; it could simply become part of the model. Maybe this is why humans sleep each night: to integrate their experiences, effectively retraining their brains overnight?
* AI agents that ask questions, experiment, and explore the systems they're building, just as humans do. Humans don't simply think and then type out a complete application without any experimentation; that would be an absurd way to code. Yet AIs are expected to do exactly that, with access only to what they've already written and none of their "experiences" or the conclusions of experiments run to understand what they're working on.
logicallee•1h ago
When presented a piece of code to iterate on, the main difference between a human coder and an AI right now is that the human coder says:
"I know this. I just coded it yesterday, and remember how I did it, too. Here's how to add to it or make this specific change I want to add next."
and the AI says:
"Great question. I just read this codebase for the first time, so give me a minute... (thought for 1 minute) ...here's the answer."
"Great question. I just read this codebase for the first time, so give me a minute... (thought for 1 minute) ...here's the answer."
"Great question. I just read this codebase for the first time, so give me a minute... (thought for 1 minute) ...here's the answer."
"Great question. I just read this codebase for the first time, so give me a minute... (thought for 1 minute) ...here's the answer."
I look forward to AIs learning on the job, and I think we're not far from that.
What exciting developments do you look forward to in the future?
email the author at: rviragh@gmail.com