Is this the same effect that causes managers and people in power to sometimes become... (for lack of a better phrase) stupid and crazy?
edit: Everyone is responding to the "junior" part of my comment without addressing the actual question I'm asking. I should have just said "employees" -- Sorry.
If anything, it's more like an over-enthusiastic intern who'll go way down a rabbit hole of self-doubt and overengineering when you're away at a conference for 3 days.
So what if they lie every once in a while?
I think LLMs are a big problem for the development of junior devs. (Pun intended.)
Or like the devs still on IRC.
Of course they will take the www and Google away from us by replacing everything with AI slop.
SO answers will be like, did you ask Macro Banana 42?
the AI at home:
Not only do students now graduate only thanks to ChatGPT, but 10-year-old kids also never build up an education while using AI to do their homework.
The moment that happens, insurance flips tables, OSHA starts asking if they need exposure controls, and employers back down.
And that’s the good scenario! The bad scenario is an employer mandated it, and someone mentally declined to the point they committed a public act of violence.
LLMs have yet to feature.
I love coding for problem solving, and can do problems in my spare time. However, lately code is just work for me. It pays the bills.
Worth reading the conclusion - it makes a good point or two about the cumulative effect of using AI: not only the loss of learning through struggle and time, but also the loss of a reference point for how long tasks should take without AI (e.g. we are no longer willing to afford the time to learn the hard way, which will notably impact the younger generation).
With AI I've also witnessed people go crazy going back and forth without even looking carefully at the code (or the compile messages) to figure out what was missing.
I'm pretty sure nobody will read the docs now.
And arguably, our society has made a lot of bad choices about many "convenience[s] in modern life." For instance, cities should probably be designed to make you walk more by default, so healthy physical activity isn't turned into a chore you then have to have the discipline to do consistently.
Basically, collectively, we're stupid and unwise, picking short term convenience and neglecting the medium and long term, and we need to get better at that.
Has anyone else noticed this, as they've scaled up their AI coding use? I've found it harder to stay on task, and it's affected a broad range of my personal activities. I'm able to make incredible things happen with AI tools, but do worry about the personal costs.
In that style of working, spinning up multiple parallel workstreams appears to be the highest-output strategy. So now I'm practicing rapid context switching, jumping from virtual desktop to virtual desktop, and even adding monitors to my desk to keep tabs on more workstreams.
In my home life, I've observed myself wandering off mid-task (reminder to self: the eggs on the stove DO NOT have the ability to wait idly for your next input), or pausing to make an unrelated voice note mid-conversation with a loved one (which does NOT feel good to anyone involved...)
I suspect I can get better as I learn more skills and practice. For example, there are people great at both the hours long tournament chess format, and the 2 minute bullet chess format.
But the fact that I so quickly went from being top tier at long-term focus to not very good at focusing on anything gives me real pause...
Point for discussion: We know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain at a more strategic level: What do I want to solve? What is my goal? Actually solving individual problems is very different - it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch, where the context changes, and resetting into the new context takes time and imposes costs. Unless they had a control arm that imposed a task-switching cost...
Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup.
(https://metr.org/blog/2026-02-24-uplift-update/)

This was a huge red flag! Within a year, a large majority of devs became so whiny and lazy that METR couldn't fill the "no AI" bucket for their study - and it's not like this was a full-time job, just a quick gig, yet it was still too much effort for their poor LLM-addled brains. At the time I thought it was a terrible psychological omen.
I am so glad I don't use this stuff.
As with anything, I believe the dose makes the poison. I still find myself thinking about the high-level design and decisions, but I spend less cognitive load on library and implementation specifics that I can offload elsewhere.
now imagine if most of them are using AI
This sounds like something I have seen before: jQuery, Angular, React.
What the article misses is the consequence of destroyed persistence. Once persistence is destroyed people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.