buzzbyjool•2mo ago
Follow-up to Part 1, where I explained how we rebuilt our dev process around LLM agents at Easylab AI and stopped writing most code by hand.
The original post sparked a lot of questions — the most common being:
“Okay, but how did your developers react?”
Here’s a breakdown of what actually happened inside the team — who stayed, who didn’t, and what new skills emerged.
⸻
Some embraced it. Some left. That’s okay.
When we committed to building with agents — not just using LLMs for autocompletion, but making them first-class executors of logic — not everyone was thrilled.
Some engineers were fascinated.
They saw the shift coming and wanted to be ahead of it. They became architects of multi-agent workflows, prompt designers, QA strategists, validators.
Others didn’t want to work that way.
They liked writing every line, owning every detail, and were (understandably) uncomfortable giving up control to a system that feels less deterministic.
They moved on. We didn’t push them.
Like every evolution in software tooling, this one came with a natural selection effect.
Not better or worse. Just different skillsets, different energy.
⸻
This isn’t no-code. It’s new-code.
Some assumed we were just automating CRUD. That’s not what happened.
The tools we use today — Claude 3.7, DeepSeek, bolt.new, role-based agents, memory stacks — aren’t trivial macros. They’re a new level of abstraction. They reason. They refactor. They test. They fail with style.
You don’t “ask the AI to do it.”
You engineer constraints, context, fallbacks, tooling, and create robust systems through language.
At Easylab AI, we use context protocols, Redis-based memory layers, and model routing logic based on latency and task weight.
It’s not less technical — it’s just built differently.
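To make that concrete, here’s a minimal sketch of the kind of router we mean: pick a model by task weight, within a latency budget. The model names, weights, and thresholds below are illustrative placeholders, not our production values.

    # Illustrative model router: choose by task weight, constrained by a latency budget.
    # Model names and numbers are placeholders, not production config.
    ROUTES = [
        {"model": "claude-3-7-sonnet", "max_weight": 10, "p95_latency_ms": 4000},
        {"model": "deepseek-chat",     "max_weight": 5,  "p95_latency_ms": 2500},
        {"model": "small-local-model", "max_weight": 2,  "p95_latency_ms": 600},
    ]

    def route(task_weight: int, latency_budget_ms: int) -> str:
        # Prefer the lightest model that can handle the task within the latency budget.
        candidates = [
            r for r in ROUTES
            if task_weight <= r["max_weight"] and r["p95_latency_ms"] <= latency_budget_ms
        ]
        if not candidates:
            # Nothing fits the budget: fall back to the most capable model and accept the latency.
            return ROUTES[0]["model"]
        return min(candidates, key=lambda r: r["max_weight"])["model"]

In a real setup the table is fed from live latency metrics rather than hard-coded numbers, but the decision logic stays about this simple.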
⸻
Did their skills atrophy?
Actually, the opposite.
Sure, they’re not practicing DSA interview puzzles every day.
But they’re building systems that can write tests, simulate failure, and self-correct.
They’re learning new skills you can’t yet Google:
• Prompt minimalism
• Agent composability
• Multi-agent state consistency
• Prompt-based debugging
They think more like staff engineers than syntax solvers.
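As a rough illustration (not our exact pipeline) of what “self-correct” and prompt-based debugging mean here, a generate-test-retry loop can be as small as this; call_model is a stand-in for whatever model client you use, and the prompt wording is illustrative.

    import subprocess
    import tempfile

    def call_model(prompt: str) -> str:
        """Stand-in for an LLM call; wire in your own provider here."""
        raise NotImplementedError

    def generate_until_tests_pass(spec: str, max_attempts: int = 3) -> str:
        feedback = ""
        for _ in range(max_attempts):
            code = call_model(
                "Implement this spec as one Python file with pytest tests:\n" + spec + feedback
            )
            with tempfile.NamedTemporaryFile("w", suffix="_test.py", delete=False) as f:
                f.write(code)
            result = subprocess.run(["pytest", f.name, "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return code  # candidate accepted: its own tests pass
            # Feed the failure back into the next prompt; this is the self-correction step.
            feedback = "\nYour previous attempt failed with:\n" + result.stdout[-2000:]
        raise RuntimeError("no passing candidate within the attempt budget")

The point is the feedback edge: failing test output goes back into the next prompt instead of into a human’s terminal.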
⸻
This is abstraction, not disappearance
The fear that “AI replaces engineering” misses the nuance.
This isn’t magic. It’s not cheating. It’s just abstraction — like every wave before:
• Assembly to C
• C to Python
• Python to Terraform
• Terraform to prompt-based execution
As Jensen Huang (NVIDIA CEO) said earlier this year:
“English is now the world’s most popular programming language.”
He’s not wrong.
We’re just learning to write instructions that build systems — without the middle step of syntax.
⸻
One more thing
Some developers left. Most who stayed leveled up.
And today, no one wants to go back.
That tells me something:
It’s not easier work. It’s better work.
Happy to answer more if folks are curious.
falcor84•2mo ago
I'm not clear - is this comment the actual post, while the link that you shared is irrelevant? If so, it would have probably been more appropriate to submit this as an "AMA:" without a url.
buzzbyjool•2mo ago
Hi, thanks for your comment. Honestly, I don't know how to do it. Thanks
falcor84•2mo ago
Oh, it's just that you can make a submission without anything in the 'url' input. Here are a couple of examples:
https://news.ycombinator.com/item?id=15853374
https://news.ycombinator.com/item?id=43363056
JohnFen•2mo ago
> Some developers left. Most who stayed leveled up.
"Leveled up" is a subjective, loaded term. I assume what you mean here is "adapted to your way of doing things."
> And today, no one wants to go back.
Well, of course, because those who would have wanted to go back already left. This appears to be selection bias more than evidence that your approach is a good one.
To be clear, I'm not trying to imply that your approach isn't a good one. I'm just saying that the devs who remained not wanting to go back isn't evidence that it is.
buzzbyjool•2mo ago
You’re absolutely right to call that out — and I appreciate the thoughtful framing.
“Leveled up” is subjective, yes. What I meant more precisely is this: the devs who stayed stopped spending time on tasks like writing boilerplate logic or tweaking form validation, and started focusing on higher-order thinking — designing agent workflows, debugging reasoning paths, writing specs that are machine-parsable, and thinking in systems rather than syntax. That shift, in terms of skill depth and adaptability, is something I genuinely view as a level-up. But I agree, it’s through the lens of our environment.
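To give one concrete illustration of what I mean by a machine-parsable spec (the field names here are invented for this comment, not our actual schema):

    from dataclasses import dataclass, field

    @dataclass
    class TaskSpec:
        """Illustrative spec an agent workflow can validate before any code gets written."""
        goal: str                    # one-sentence intent, e.g. "add CSV export to reports"
        inputs: dict                 # named inputs and their expected types
        acceptance_tests: list[str]  # plain-language checks a QA agent turns into real tests
        constraints: list[str] = field(default_factory=list)  # e.g. "no new dependencies"

A spec like this can be linted, diffed, and handed to an agent, which is a different discipline from writing the implementation by hand.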
And yes — absolutely fair on the selection bias. When I say “no one wants to go back,” I don’t mean it as proof the approach is universally better. It’s just true for our current team, within the culture and processes we’ve chosen to embrace. Those who didn’t align with this way of working left early — and I don’t hold that against them.
So your comment is a valuable nuance: internal satisfaction is a necessary condition for success, but not a sufficient one. Our team’s enthusiasm is a sign that the model can work — not that it will for everyone.
Thanks for calling it out clearly.