Sometimes you need to keep the context and sometimes you need to reset it.
An example of needing to reset: you ask for X, later realize you meant Y, and the LLM oscillates between them; then on an unrelated request it adds X back in, removing Y. And so on.
Clearing the context solves this. In IntelliJ I currently do it by restarting the IDE, since there isn't a simple button for it. It's a 100% required feature, and understanding and managing LLM contexts is going to be a basic part of working with LLMs in the future. Yet the need for it hasn't quite sunk in. It's like the first cars not actually having brakes, with drivers and passengers getting out and putting their feet down. We're at that stage.
What we really need is a detailed context history for the AI and a way to manage it well. "Forget I ever asked this prompt" and "Keep this prompt in mind next time I restart the IDE" are both examples of extremely important and obvious functionality that just doesn't exist right now.
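To make that wish concrete, here is a minimal sketch of what such a context-management interface could look like. Everything in it is hypothetical: no IDE or assistant exposes these operations today, and the names (ContextManager, Turn, forget, pin) are invented purely for illustration.

    // Hypothetical sketch only; not a real IDE or plugin API.
    // A Turn is one prompt/response exchange in the conversation history.
    #[derive(Clone)]
    struct Turn {
        id: u64,
        prompt: String,
        response: String,
    }

    struct ContextManager {
        history: Vec<Turn>, // what the model sees on the next request
        pinned: Vec<Turn>,  // re-injected after an IDE restart
    }

    impl ContextManager {
        /// "Forget I ever asked this prompt": drop one exchange so it
        /// stops influencing future completions.
        fn forget(&mut self, turn_id: u64) {
            self.history.retain(|t| t.id != turn_id);
        }

        /// "Keep this prompt in mind next time I restart the IDE."
        fn pin(&mut self, turn_id: u64) {
            if let Some(t) = self.history.iter().find(|t| t.id == turn_id) {
                self.pinned.push(t.clone());
            }
        }

        /// The missing "reset" button: clear the context without
        /// restarting the IDE.
        fn reset(&mut self) {
            self.history.clear();
        }
    }

The point is that forget and pin are small, well-defined operations on conversation history; the missing piece is tooling that surfaces them, not the operations themselves.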
They still need a babysitter.
From my perspective I didn't have a development team before. I have one now. I guess I'm a member of that team too. But I hadn't thought of it like that -- another strange dimension to working with Copilot (and its ilk).
This is disconnected enough from how these words are normally used that the statement, and its downstream conclusions, don't have a clear interpretation.
But the real cost that would be interesting is the time value: does he really spend less time on the same feature?
You are right that when someone (a human) submits a PR it doesn't cost me anything (aside from my time to review it). But those folks are not a team, not people I can rely on or direct. Open-source projects -- successful ones -- often turn into a company, and then hire a dev team. We all know this.
I have no plans to commercialize rqlite, and I certainly couldn't afford a team of human developers. But I've got Copilot (and Gemini when I use it) now. So, in a sense, I now do have a team. And it's allowed me to fix bugs and add small features I wouldn't have bothered to in the past. It's definitely faster (20 mins to fire up my computer, write the code, push the PR vs. 5 mins to create the GitHub issue, assign to Copilot, review, and merge).
Case in point: I'm currently adding change-data-capture to rqlite. Development is going faster, but it's also more erratic because I'm reviewing more and coding less. It reminds me of my time as the TL of a software team.
In another, more accurate sense: no, you have a tool, not a team. A very useful tool, but a tool nonetheless.
If you believe you have a team, try taking a two week vacation and see how much work your team does while you're gone.
The post emphasizes the degree to which this is and isn't true.
Different people will emphasize the changed attributes of new situations using different pre-existing words/concepts. That's a sensible use of language.
Exactly.
A team is composed of people. Being able to prompt an LLM to create a pull request from a specification is very useful, but it's not a team member, the same way VSCode isn't a team member even though autocomplete is a massive productivity increase, and the same way pypi isn't a team member even though a central third-party dependency repository makes development significantly faster than not having one.
If this article were "I get a massive productivity boost from $41.73/month in developer tools" it'd be honest. As it is, it's dishonest clickbait.
As the saying goes, there is no "AI" in "Team".
Titles don't need to be pedantic.
But nonetheless, thanks for the explanation :).
This was an interesting article, and it raised some good points about the fact that the AI never has a continuing backward/forward-looking context for one's project. Perhaps these ideas are being considered as potential LLM features, in some way that keeps them feasible from a token/context perspective.
No shit it's easy. So is a CRUD PHP service.
_fzslm•2h ago
What % of human intervention was there? A module an AI wrote for me against a tight spec, with function signatures and behaviour cases, is going to be far more reliable (and is arguably basically human-developed) than something the AI just wrote, filling in all the blanks itself.
delfinom•1h ago
Granted, vibe-coded junk will quickly get avoided if it's written so poorly that it makes auditing insufferable.
bee_rider•1h ago
If you care about safety, you care about the whole process: coding, sure, but also code review, testing, what the design specs were, and what the failure path is for when a bug (inevitably) makes it through.
Big companies produce lots of safety-critical code, and it is inevitable that some incompetent people will sneak in through the gaps there. So it is necessary to design a process that accounts for commits written by incompetent people.
bobsomers•1h ago
However, part of designing and upholding a safety-critical software development process is looking for places to reduce or eliminate the introduction of bugs in the first place.
Strong type systems, for example, eliminate entire classes of errors, so mandating that code be written in language X is a proactive process decision to reduce the introduction of certain types of bugs (see the sketch below).
Restricting the use of AI tools could very much be viewed the same way.
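As a concrete illustration of the type-system point (Rust here is just one example of such a language, not anything the commenter named): the compiler makes the "forgot to handle the missing value" bug unrepresentable, which is exactly the kind of error-class elimination described above.

    // Rust has no null references: absence is modeled with Option, and
    // the compiler rejects code that uses the value without handling the
    // None case. The "forgot to check for null" class of bugs cannot be
    // written at all.
    fn find_user(id: u64) -> Option<String> {
        if id == 42 { Some(String::from("alice")) } else { None }
    }

    fn main() {
        // `find_user(7).len()` would not compile; both cases are forced:
        match find_user(7) {
            Some(name) => println!("found {name}"),
            None => println!("no such user"),
        }
    }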
uncircle•1h ago
The issue is that there is a non-zero likelihood that a vibe coder pushes code without even understanding how it actually works. At least a bad coder had to have written the thing themselves in the first place.