The failure mode the author describes on unfamiliar tech is the critical observation. When you have deep domain knowledge, you catch bad design decisions during the 30-minute planning phase and correct them before any code is written. When you don't, bad decisions compound silently because you can't distinguish a reasonable proposal from a plausible-sounding wrong one. The multi-agent pattern doesn't help here -- you just get reviewed bad architecture instead of unreviewed bad architecture.
This is why "the human still needs expertise" isn't just a platitude. It's a specific mechanism: expertise lets you intervene at the design stage where the cost of correction is lowest. Without it, you're reviewing code that shouldn't have been written in the first place.
christofosho•1h ago
I'll admit to being a "one prompt to rule them all" developer: I won't let a chat go longer than my first input. If mistakes are made, I fix the system prompt or the input prompt and try again. And I make sure the work is broken down as much as possible, which means taking the time to do some discovery before I hit send.
Is anyone else using many smaller specific agents? What types of patterns are you employing? TIA
1. https://github.com/humanlayer/advanced-context-engineering-f...
marcus_holmes•45m ago
The key change I've found is really around orchestration - as TFA says, you don't run the prompt yourself. The orchestrator runs the whole thing. It gets you to talk to the architect/planner, then the output of that plan is sent to another agent, automatically. In his case he's using an architect, a developer, and some reviewers. I've been using a Superpowers-based [0] orchestration system, which runs a brainstorm, then a design plan, then an implementation plan, then some devs, then some reviewers, and loops back to the implementation plan to check progress and correctness.
It's actually fun. I've been coding for 40+ years now, and I'm enjoying this :)
[0] https://github.com/obra/superpowers
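For illustration, the staged pipeline described above (brainstorm → design → implementation plan → devs → reviewers, with the reviewers looping back to the plan) can be sketched roughly like this. `call_agent` is a hypothetical stand-in for whatever LLM invocation the orchestrator actually makes; it's stubbed out here so the control flow is runnable, and the stage names are assumptions based on the comment, not Superpowers' actual API:

```python
def call_agent(role: str, prompt: str) -> str:
    # Hypothetical stub: a real orchestrator would invoke an LLM with a
    # role-specific system prompt here. We just echo so the pipeline runs.
    return f"[{role}] output for: {prompt[:40]}"

def orchestrate(task: str, max_review_loops: int = 3) -> str:
    # Each stage consumes the previous stage's output, not the raw task,
    # so the human only talks to the first agent.
    brainstorm = call_agent("brainstormer", task)
    design = call_agent("designer", brainstorm)
    plan = call_agent("planner", design)

    code = ""
    for _ in range(max_review_loops):
        code = call_agent("developer", plan)
        review = call_agent("reviewer", code)
        if "REJECT" not in review:
            # Reviewer approved; pipeline is done.
            return code
        # Loop back: feed review findings into a revised implementation plan.
        plan = call_agent("planner", review)
    return code
```

The point of the structure is that the loop-back edge goes to the *plan*, not the code, matching the comment's observation that corrections are cheapest at the design stage.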
indigodaddy•6m ago