I've had conversations with people recently who are losing sleep because they find building yet another feature with "just one more prompt" irresistible.
Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.
It reminds me of why people wanted financial markets to be 24/7.
We as a society should probably take a look at that; otherwise it may lead to burnout in a not-so-small percentage of people.
My problem is - before, I'd get ideas, start something, and it would either become immediately obvious that it wasn't worth the time, or immediately obvious that it wouldn't turn out well / the way I thought.
Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.
Now I have an idea and jot it down in the Claude Code tab on my iPhone... and a couple of minutes later the idea is software, and now I have another half-baked project to feel guilty about for the rest of time.
(just joking, your posts are great, Simon!)
The larger, often half-baked projects will flail like they always have. People will get tired of bothering to attempt these. Oh look, you created a big bloated pile of garbage that nobody will ever use. And of course there will be rare exceptions: some group of N people will work together to vibe code a clone of a billion dollar business and it'll actually start taking off, and that'll garner a lot of attention. It'll remain forever extremely difficult to get users to a service. And if app & website creation scales up in volume due to simplicity of creation, the attention economy problem will only get more intense (neutralizing most of the benefits of the LLMs as an advantage).
The smaller, quasi-micro projects used to immediately solve narrow problems will thrive in a huge way, resulting in tangible productivity gains, and there will be a zillion of these, both at home and within businesses of all sizes.
I don’t think I agree. How can something be both “usually not a bottleneck” and something that usually “takes a significant amount of time”?
> Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
Sounds like you might really enjoy a PM role. Either way, LLM or not, whatever gets written up and presented will have a lot of focus on a bike shed or will make the end user realize allllll the other things they want added/changed, so the requirements change, the priorities change…
So now we just don’t get to do the interesting part… engineer things.
If I wanted to be a PM I’d do that.
My other comments probably aren't any better, but those escape my notice!
That's the way society is set up.
That's the sentiment you don't get.
Edit: haha, I'll repeat an earlier comment! Nothing can fly on the moon.
But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.
LLMs, by their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.
Do you hold the dice accountable when you lose at the craps table?
I would imagine instead companies will end up sleepwalking into this scenario until catastrophe hits.
The waits are of unpredictable length, so you never know whether you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.
You never get into a flow state and you feel worn down from this constant vigilance of waiting for background jobs to finish.
I don't feel more productive, I feel like a lazy babysitter that's just doing enough to keep the kids from hurting themselves.
The output is still small and I can review it. I can switch tasks, however if it's my primary effort for the day I don't like stepping away for an hour to do something else.
(Also, this only applies if what you're working on happens to be easily parallelizable _and_ you're part of the extremely privileged subset of SV software engineers. Try getting two Android Studios/XCodes/Clang builds in parallel without 128GB of RAM, see what happens).
But LLM prompting requires you to constantly engage with language processing to summarize and review the problem.
It helps that I don't outsource huge tasks to the LLM, because then I lose track of what's happening and what needs to be done. I just code the fun part, then ask the LLM to do the parts that I find boring (like updating all 2000 usages of a certain function I just changed).
That said I don't dispute the value of agents but I haven't really figured out what the right workflow is. I think the AI either needs to be really fast if it's going to help me with my main task, so that it doesn't mess up my state of flow/concentration, or it needs to be something I set and forget for long periods of time. For the latter maybe the "AIs submitting PRs" approach will ultimately be the right way to go but I have yet to come across an agent whose output doesn't require quite a lot of planning, back and forth, and code review. I'm still thinking in the long run the main enduring value may be that these LLMs are a "conversational UI" to something, not that they're going to be like little mini-employees.
Probably more stress if I’m on battery and don’t want the laptop to sleep or WiFi to get interrupted.
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
I don’t just give somebody a ticket and let them go. I give them a ticket but have to hover over their shoulder and nitpick their design choices.
Tell them “you should use a different name for that new class”, “that function should actually be a method on this other thing”, etc.
Edit: Looks like plenty of people have observed this: https://www.reddit.com/r/xkcd/comments/12dpnlk/compiling_upd...
For me personally, programming lost most of its fun many years ago, but with claude code I'm having fun again. It's not the same, but for me personally, at this stage in my life, it's more enjoyable.
e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.
The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.
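For concreteness, the kind of thing I hand off is a dumb, cron-able snapshot script like the sketch below. The directories and destination are made-up placeholders, and the whole thing is exactly the sort of boring glue I'd rather have an LLM draft than write myself:

    #!/usr/bin/env python3
    # Nightly snapshot of a few directories into dated tarballs.
    # All paths here are placeholders -- adjust for your own machine.
    import tarfile
    import time
    from pathlib import Path

    SOURCES = [Path.home() / "notes", Path.home() / "dotfiles"]  # hypothetical dirs
    DEST = Path.home() / "backups"

    def snapshot():
        DEST.mkdir(exist_ok=True)
        stamp = time.strftime("%Y-%m-%d")
        for src in SOURCES:
            if not src.exists():
                continue  # skip anything that isn't set up yet
            archive = DEST / f"{src.name}-{stamp}.tar.gz"
            with tarfile.open(archive, "w:gz") as tar:
                tar.add(src, arcname=src.name)

    if __name__ == "__main__":
        snapshot()

Stick it in cron and forget it. The point isn't that this is hard to write; it's that the activation energy for this kind of chore is now near zero.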
Do you find it works well?
With these agents I've found that making the workflows more complicated has severe diminishing returns. And is outright worse in a lot of cases.
The real productivity boost I've found is giving it useful tools.
2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.
3. Don't be too open-ended in the changes you make just because you can do them in little time now. Do what really matters.
4. When you are away, put an agent on the right rails to iterate and potentially provide some very good results in terms of code quality, security, speed, testing, ... This increases productivity without stressing you. When you return, inspect the results, discard everything that is trash, take the gems, if any.
5. Be minimalistic even if you no longer write the code. Prompt the agent (and your AGENT.md file) to stay focused, to not add useless dependencies or complexity, to keep the line count low, and to accept an improvement only if the complexity-cost/gain tradeoff is adequate (see the sketch after this list).
6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents. And it is a moment of calm, focused work for you.
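As an illustration of point 5, here's a minimal sketch of the kind of guardrails section I mean for an AGENT.md file. The exact rules are just examples -- tune them to your project:

    # Guidelines for the coding agent (illustrative sketch)
    - Prefer the standard library; ask before adding any new dependency.
    - Keep changes small and focused; no speculative abstractions.
    - Accept an improvement only if the gain clearly outweighs the added complexity.
    - Keep the line count low; delete code when you can.
    - Run the tests after every change and report failures verbatim.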
You're an engineer, not a manager, or a chef, or anything else. Nothing you do needs to be done Monday-Friday between the hours of 8 and 5 (except for meetings). Sometimes it's better if you don't do that, actually. If your work doesn't understand that, they suck and you should leave.
On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.
There's a negative to that too - cognitive effort is directly correlated with learning, so your own skills start to feel less sharp as you do it (as the article mentions).
I'm fatigued by this myth.
We are trained on the other thing: unpredictable user interaction, parallelism, circuit-breaking, etc. That's the bread and butter of engineering (of all kinds, really, not just IT).
The non-deterministic intuition is baked into engineering much more than determinism is.
That's perfectly fine. We are honed for this too.
We don't need to produce exact solutions or answers. We need to make things work despite the presence of chaos. That is our job and we're good at it.
Product managers freak out when someone says "I don't know how much time it will take, there are too many variables!". CFOs freak out when someone says "we don't know how much it will cost". Those folk want exact, predictable outcomes.
Engineers don't, we always dealt with unpredictable chaotic things. We're just fine.
> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.
These posts got fewer than 5 upvotes; they didn't make it to the home page. And while the overall quality of Show HN might have dropped, the HN homepage is still quite sane.
The topic is also not something "nobody talks about"; it was being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue
> Time-boxing AI sessions.
Unless you are a full-time vibe coder, you already wouldn't be using AI all the time. But time-boxing it feels artificial if it's able to make good and real progress (not unmaintainable slop).
> Separating AI time from thinking time.
My usage of AI involves doing a lot of thinking, either collaboratively within a chat, or by myself while it's doing some agentic loop.
> Accepting 70% from AI.
This is a confusing statement. 70% of what? What does 70% usable even mean? If it means around 70% of features work and the other 30% are broken, perhaps AI shouldn't be used for those 30% in the first place.
> Being strategic about the hype cycle.
Hype cycles have always been a thing. It's good for the mind in general to avoid them.
> Logging where AI helps and where it doesn't.
I do most of this logging in my agent md files instead of a separate log. Also, after a while my memory picks up really quickly what AI can and can't do. I assume this is a natural process for many fellow engineers.
> Not reviewing everything AI produces.
If you are shipping at an insane speed, this is just an expected outcome, not an advice you can follow.
Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)
Agree. The article could have been summarized in a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI-generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human who generated it: "suburbs-raised", "freelancer", etc.
The real AI fatigue is the constant background irritation I have when interacting with LLMs.
"You're not imagining it" "You're not crazy" "You're absolutely right!" "Your right to push back on this" "Here's the no fluff, correct, non-reddit answer"
Unironically this describes how I feel about books. Not average books, but the 'classics' like The Design of Everyday Things.
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.
Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.
I also don't understand why you assume what the AI generates is more readable by AI than human generated code.
AI generates a solution that's functional, but that's at a 70% quality level. But then it's really hard to make changes because it feels horrible to spend 1 hour+ to make minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
It's a million little quality-of-life things.
Moving from horses to cars did not give you more free time. Moving from telephone to smartphone did not give more fishing time. You just became more mobile, more productive and more reachable.
People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.
This problem has been going on for a long time; Helen Keller wrote about it almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
https://www.theatlantic.com/magazine/archive/1932/08/put-you...
Unfortunately, with these types of software simpletons making decisions, we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, which increases the overhead on testing, security, requirements, integrations, etc., making all those productivity gains evaporate. Worse (like the author mentioned), it makes your engineers less creative and more burnt out.
Let's be honest here. Engineers picked this career broadly for 2 reasons: creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained up as "AI Engineers" for way less money to feed inputs.
I am all for technological evolution and welcome it, but this isn't anything like that. It is purely based on profits and shareholders, and anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.
I don't have exhaustion as such, but an increasing sense of dread: the more incredible work I achieve, the less valuable I realise it will potentially be, due to the low cost of the effort behind it.
Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly, that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.
If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.
And even today when it’s useful, it’s really most useful for very specific domains like coding.
It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.
For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.
Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.
These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.
In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.
The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.
The car dealership bot should be able to set the customer up in the CRM by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.
But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.
AI is not good for human health - we have the evidence right here.
> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.
Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.
I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.
Code and features still need time and stability in order to reach maturity. We need to give our end users time to try stuff, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous 2 releases, and those are usually a few weeks apart.
Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.
I know more than most that there is some baseline productivity we are always trying to hit, one that can sometimes be more of a target than a current state. But the way people talk about their AI workflows is different. It's like everyone has become a tyrannical factory-floor manager, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can/should focus on being productive together, but leave your actual life for finer, more sustainable ventures.
Some people thrive in more stressful situations, because calm doesn't stimulate them as much, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.
Just a few days ago: https://news.ycombinator.com/item?id=46885530
Then you have to deal with slop, slopfluencer articles written under the influence of AI psychosis, AI addicts, lying managers, lying CEOs, etc.
And usually (the author of this article being an exception) you get dumber and are only able to verbalize AI boosterism.
AI only works if you become a slopfluencer, sell a course on YouTube and have people "like and subscribe".
Use your own words!
I'd rather read the prompt!
https://scienceintegritydigest.com/2024/02/15/the-rat-with-t...
I've started doing it now; still need to work on it. Thanks for the tip though, I hope it is working well for you!!