I feel like no one talks about how the people who are "supposed to be reviewing the LLM outputs, guiding the agents, etc." actually acquire the knowledge to do a decent job. When I see discussion of LLMs making SWEs more productive, I assume they mean more productive compared to someone who knows significantly less than they do.
Here is a real scenario encountered in corporate America:
A new CS grad in their first job after college is given a task in a domain they're unfamiliar with to solve a problem they've never seen. They ask an agent to implement it in code in a language they've never used, and have it give a breakdown of its process, the tradeoffs encountered, things to consider in the future, etc.
They have no time to actually learn anything, because they should be moving faster and being more productive; AI has already solved all of our problems.
So they submit a 700+ line PR, which whoever reviews it just pushes through, because the reviewer doesn't have time either: they need to be moving just as fast and don't have the cognitive capacity to sit down and comprehend what's actually happening.