I’m not looking for an “agent.” Not an intern. I want a collaboration partner. I'm not a cog in the agentic machine. It's a cog in mine. I want to stay in the game.
What’s been working for me is a much tighter loop. I stay in Cursor, keep edits small and in front of me, and let something real kick in the moment I drift: syntax, structure, rules (I wired up my own setup for that). When something’s wrong, I see it fast. I fix it, or I tell the LLM exactly what to change. No surprise thousand-line patches. No “it went off and did stuff” that I only find out about later.
I stay sharp. I still have the system in my head. I’m not only reviewing it, I’m still building it. The LLM makes me faster, but I’m still driving.
I’m doing the same with specs (a custom spec DSL integrated into Cursor and IntelliJ). The LLM helps draft, the tooling says “nope” immediately, and I fix directly or steer the LLM. Same rhythm whether it’s spec or code. And the spec is still the center of gravity: it’s what instructs the LLM when it writes code.
I know the industry story right now is more autonomy, more agents, less human. I’m running the other way on purpose.
Are you comfortable delegating authority to LLMs? When you use them, does it still feel like your work, or mostly like reviewing something else’s? Would you trade a bit of “it did it for me” for quicker feedback and a smaller blast radius?
Do what works for you, so long as it is honestly working. This is just how I'm doing it.
globalchatads•1h ago
Most of the "autonomous agent" pitch assumes agents can find and trust each other. In practice that infrastructure barely exists. The IETF has about 11 competing drafts for how agents should discover and negotiate with each other (ARDP, AID, agents.txt, etc). Six of those expired this month. The surviving ones contradict each other on basics like where an agent publishes its capabilities.
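To make the plumbing problem concrete, here is a minimal sketch of what one of those discovery manifests might look like. Everything in it is an assumption: the key/value line format, the field names (`agent`, `endpoint`, `capabilities`), and the comment syntax are invented for illustration, precisely because the competing drafts don't agree on any of this.

```python
# Hypothetical parser for an "agents.txt"-style capability manifest.
# The format shown here (key: value lines, '#' comments, comma-separated
# capabilities) is made up for illustration; no draft standardizes it.

def parse_agents_txt(text: str) -> dict:
    """Parse simple 'key: value' lines, skipping comments and blanks."""
    manifest = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first colon only, so URL values keep theirs.
        key, _, value = line.partition(":")
        manifest[key.strip().lower()] = value.strip()
    # Treat the capabilities field as a comma-separated list, if present.
    if "capabilities" in manifest:
        manifest["capabilities"] = [
            c.strip() for c in manifest["capabilities"].split(",") if c.strip()
        ]
    return manifest

example = """\
# Example manifest (hypothetical format)
agent: example-bot
endpoint: https://example.com/agent
capabilities: summarize, translate
"""
print(parse_agents_txt(example))
```

The point isn't this particular format; it's that both sides of any agent-to-agent negotiation have to agree on one, and on where it lives, which is exactly what the drafts currently don't.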
So the autonomous-agent vision has a plumbing problem that your collaborator model simply does not have. When you are driving and the LLM is a cog, you do not need agents.txt or A2A agent cards or any of that. You need a good language model and tight feedback loops, which is what you described.
I suspect the collaborator approach sticks around longer than the current agent hype cycle. The "agents talking to agents" stack is years from being reliable enough for real work. The standards bodies cannot agree on even the basics yet.