
AI fatigue Is real and nobody talks about it

https://siddhantkhare.com/writing/ai-fatigue-is-real
222•sidk24•1h ago

Comments

sidk24•1h ago
Author here. Not an anti-AI post. It's about the cognitive cost - faster tasks lead to more tasks, reviewing AI output all day causes decision fatigue, and the tool landscape churns weekly. Wrote about what actually helped. Curious if others are hitting similar walls.
PaulHoule•1h ago
Those images make me think of

https://scienceintegritydigest.com/2024/02/15/the-rat-with-t...

srameshc•1h ago
Great post, I certainly feel you. Not just the anxiety but the need to push myself more and accomplish more now that I have some help. Setting the right expectations, focusing on what's practical, and remembering that not every "AI magic" post is worth the attention has helped me with the anxiety and the FOMO.
sidk24•55m ago
Thanks <3

I've started doing it now, though I still need to work on it. Thanks for the tip, I hope it is working well for you!!

ai_sloppy_toppy•12m ago
isn't it a bit too ironic that you expect us to read your ai generated slop about ai fatigue?
simonw•1h ago
I really feel this. I can make meaningful progress on half a dozen projects in the course of a day now but I end the day exhausted.

I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.

bob_theslob646•1h ago
Throw in the fact that clawdbot can work 24/7.

It reminds me of why people wanted financial markets to be 24/7.

We as a society should probably take a look at that, otherwise it may lead to burnout in a not-so-small percentage of people.

billylo•54m ago
We should ask how the traders manage this. Markets are essentially 24/7 around the world now. For them, the FOMO effects are even stronger... an actual money-earning opportunity.
onlyrealcuzzo•1h ago
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

My problem is - before, I'd get ideas, start something, and it would either become immediately obvious it wouldn't be worth the time, or immediately obvious that it wouldn't turn out well / how I thought.

Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.

simonw•1h ago
I used to have ideas and jot them down in Apple Notes and then usually forget about them entirely.

Now I have an idea and jot it down in the Claude Code tab on my iPhone... and a couple of minutes later the idea is software, and now I have another half-baked project to feel guilty about for the rest of time.

falloutx•1h ago
In a couple of minutes? My Claude Code takes like 5 minutes just to wake up and write a simple plan.
barishnamazov•1h ago
You are talking to simonw; surely Claude has given him free, super-fast, unlimited token access to a nightly version of Claude 5.1 Opus.

(just joking, your posts are great, Simon!)

adventured•1h ago
There will be a split of two major outcomes from LLM coding near-term.

The larger often half-baked projects will flail like they always have. People will get tired of bothering to attempt these. Oh look you created a big bloated pile of garbage that nobody will ever use. And of course there will be rare exceptions, some group of N people will work together to vibe code a clone of a billion dollar business and it'll actually start taking off and that'll garner a lot of attention. It'll remain forever extremely difficult to get users to a service. And if app & website creation scales up in volume due to simplicity of creation, the attention economy problem will only get more intense (neutralizing most of the benefits of the LLMs as an advantage).

The smaller, quasi micro projects used to more immediately solve narrow problems will thrive in a huge way, resulting in tangible productivity gains, and there will be a zillion of these, both at home and within businesses of all sizes.

nonethewiser•1h ago
It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time. Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
irishcoffee•1h ago
> It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time.

I don’t think I agree. How can something be both “usually not a bottleneck” that usually “takes a significant amount of time” ?

> Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.

Sounds like you might really enjoy a PM role. Either way, LLM or not, whatever gets written up and presented will have a lot of focus on a bike shed or will make the end user realize allllll the other things they want added/changed, so the requirements change, the priorities change…

So now we just don’t get to do the interesting part… engineer things.

If I wanted to be a PM I’d do that.

switchbak•10m ago
Even if the magical fairy helps you write things, you still need to ensure it's engineered properly. Especially at the macro level.

Some day it'll handle that, but for now it's bound to make silly decisions that you need to stay on top of, especially as those compound in a large-scale system.

disiplus•56m ago
This is real. I'm a freelancer, and I used a small invoicing platform to create invoices for my customers. At "work" I build accounting systems and ERPs. So with AI, why would I pay monthly for invoicing when I can build it myself? After a day I had invoicing working, the simple thing where you get a PDF out. Then I started implementing double-entry bookkeeping. And support for different tax systems. And then: we need a sales part, then a CRM, then a warehouse. Then projects to track time, and so on. Now I have a full SaaS that I don't need, and I'm not going to waste time competing in that market. Now I'm thinking of putting it out as open source.
SoftTalker•10m ago
"Invoicing for freelancers" has just about as many solutions as "to do" lists or ticket systems. Just use what you built if it works, open sourcing it is likely to get zero interest among the thousands of other options.
dizhn•18m ago
> they're finding building yet another feature with "just one more prompt" irresistible.

Totally my experience too. One last little thing to make it perfect, or something I decide would be "nice to have", ends up taking so much time in total. Luckily now I can access the same agent session in my phone's mobile browser too, so I can keep an eye on things even in bed. (Joke but not joke :D)

layer8•1h ago
There would be less AI fatigue if people stopped talking about AI. ;)
Kiro•1h ago
That's not the type of fatigue the article is talking about.
layer8•1h ago
I know, hence the emoticon.
ted_bunny•52m ago
I'm somewhat new to HN, but most times I am inclined to add an emoji to a comment, it turns out that neither the tone nor the content is up to community standards.

My other comments probably aren't any better, but those escape my notice!

layer8•46m ago
HN isn’t a singular hive-mind. There are different opinions on what kinds of humor have their place on it. At present the root comment has a good number of net upvotes, so there’s that.
pavel_lishin•1h ago
The article is talking about something completely different.
dankobgd•1h ago
I just ignore it and don't care.
Kiro•1h ago
Sounds like a good way to kill yourself, considering "fatigue" here means actual physical fatigue and not "I'm tired of AI".
psychoslave•1h ago
AI, pro/against/somehow-related, is definitely talked about a lot in every topic. Even my imaginary dog can't stop talking about AI all the time.
onraglanroad•1h ago
That's what increasing productivity means. You are working harder to increase the unearned income of "investors".

That's the way society is set up.

falcor84•54m ago
I don't get this sentiment. If you don't want investors to give you any input, don't take money from investors. With a Claude Max subscription, it's cheaper than ever to develop a product entirely by yourself or with a couple of friends, if that's what you prefer to do.
onraglanroad•29m ago
That is what I prefer to do. I prefer "cottage industry" to "capitalism" but that's not the easiest option in this society.

That's the sentiment you don't get.

Edit: haha, I'll repeat an earlier comment! Nothing can fly on the moon.

tempodox•52m ago
Never, ever have productivity gains improved the lives of those who do the actual work. They only ever enriched the owners of the factories.

But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.

CurleighBraces•1h ago
I loved the section about trying to fight against a system that isn't deterministic.

LLMs, because of their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.

tempodox•44m ago
How could you hold a dumb machine “accountable”? Attempting that would be insane. How would you discipline it? Reduce the voltage in its power supply?

Do you hold the dice accountable when you lose at the craps table?

CurleighBraces•29m ago
Heh, agreed, it does sound absurd, doesn't it?

I would imagine instead that companies will end up sleepwalking into this scenario until catastrophe hits.

falcor84•14m ago
I'm not saying that it's a good idea, but the obvious way would be with evolution: give each agent its own wallet, rewarding it for a job well done and penalizing it for a poor job. If it runs out of money, it's "out of the game", but if it earns enough, it can spawn off another agent with similar characteristics and give it some of its money.
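A toy sketch of that selection loop might look like this (the `Agent` class and all the constants here — payouts, penalties, spawn threshold — are invented for illustration, not from any real system):

```python
import random

class Agent:
    """Toy agent with a wallet and a heritable 'skill' trait."""
    def __init__(self, skill, wallet=10.0):
        self.skill = skill      # probability of doing a job well
        self.wallet = wallet

    def work(self):
        # Reward a job well done, penalize a poor one.
        if random.random() < self.skill:
            self.wallet += 2.0
        else:
            self.wallet -= 3.0

    def maybe_spawn(self, threshold=20.0, cost=10.0):
        # A rich agent funds an offspring with similar characteristics.
        if self.wallet >= threshold:
            self.wallet -= cost
            child_skill = min(1.0, max(0.0, self.skill + random.gauss(0.0, 0.05)))
            return Agent(child_skill, wallet=cost)
        return None

def simulate(rounds=50, seed=42):
    random.seed(seed)
    population = [Agent(random.uniform(0.3, 0.9)) for _ in range(20)]
    for _ in range(rounds):
        offspring = []
        for agent in population:
            agent.work()
            child = agent.maybe_spawn()
            if child:
                offspring.append(child)
        # Agents that run out of money are "out of the game".
        population = [a for a in population + offspring if a.wallet > 0]
    return population
```

Run for a few dozen rounds and the surviving population skews toward higher-skill agents, which is the whole point (and also the worry) of the scheme.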
falcor84•32m ago
How would that make them any more deterministic? I haven't yet met a deterministic human dev.
CurleighBraces•26m ago
It doesn't.

The difference is that we as humans are held accountable for our non-determinism.

The consequences of our actions have real world implications on our lives.

geetee•1h ago
Who knew managing a team of ten occasionally brilliant but generally unreliable engineers would be so draining.
mrspacejam•1h ago
I think you mean _micro_managing.
fHr•1h ago
progress does not care
parpfish•1h ago
For me the fatigue is a little different— it’s the constant switching between doing a little bit of work/coding/reviewing and then stopping to wait for the llm to generate something.

The waits are of unpredictable length, so you never know whether you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.

You never get into a flow state and you feel worn down from this constant vigilance of waiting for background jobs to finish.

I don't feel more productive; I feel like a lazy babysitter doing just enough to keep the kids from hurting themselves.

pylua•1h ago
What are you generating that the llm takes so long ? I usually prompt and review in small pieces.
Forge36•1h ago
For me: will this task take 30 seconds or 3 minutes? With good planning I've been able to step away and come back. Sometimes it decides to prompt me within 5 seconds for permissions. Sometimes it runs for 15 minutes.

The output is still small and I can review it. I can switch tasks, however if it's my primary effort for the day I don't like stepping away for an hour to do something else.

Scene_Cast2•1h ago
Not the OP, but the new LLMs together with harnesses (OpenCode in my case) can handle larger scopes of work - so the workflow moves away from pair programming (single-file changes, small scope diffs) to full-feature PR reviewing.
wouldbecouldbe•1h ago
That’s why now it’s legitimate to work on multiple features or projects at the same time
well_ackshually•1h ago
This way you can do twice the terrible job twice as fast!

(Also, this only applies if what you're working on happens to be easily parallelizable _and_ you're part of the extremely privileged subset of SV software engineers. Try getting two Android Studios/XCodes/Clang builds in parallel without 128GB of RAM, see what happens).

Forge36•1h ago
Context switching like that is exhausting
jdonaldson•59m ago
It's a different kind of fatigue, but it's something I felt I got stronger at over time. Beats waiting IMHO, but be sure to give yourself a chance to rest.
AlienRobot•57m ago
The next step is running an LLM that tries to figure out the parts of the project you aren't working on, so it automatically starts coding those while letting you code the other stuff manually in peace.
Davidzheng•56m ago
really interested in what the brain does when it "loads" the context for something it's familiar with but is currently unloaded from the working memory. Does it mostly try to align some internal state? or more just load memories into fast access
ericmcer•1h ago
Seriously and beyond productivity, flow state was what I liked most about the job. A cup of coffee and noise cancelling headphones and a 2 hour locked in session were when I felt most in love with programming.
parpfish•52m ago
I love the flow state, and I’m pretty sure it’s fundamentally incompatible with prompting. For me, when the flow state kicks in, it’s completely nonverbal and my inner dialogue shuts up. I think that’s part of why it feels so cool and fun when it hits.

But LLM prompting requires you to constantly engage with language processing to summarize and review the problem.

jaapz•38m ago
That's pretty funny, because LLMs actually help me achieve flow state more easily, since they automate away the dumb shit that normally kind of blocks me. Flow state for me is not (just) churning out lines of code but having that flow of thought in my head that eventually flows to a solved problem without being interrupted. Interesting that for you flow state actually means your mind shutting up lol. For me it means shutting up about random shit that doesn't matter to the task at hand and being focused only on solving the current problem.

It helps that I don't outsource huge tasks to the LLM, because then I lose track of what's happening and what needs to be done. I just code the fun part, then ask the LLM to do the parts that I find boring (like updating all 2000 usages of a certain function I just changed).

stackedinserter•51m ago
The question is what comes out of those 2 hours in noise-cancelling headphones.
treespace8•48m ago
For me AI has given that back to me. I'm back to just getting stuff built, not getting stuck for long when working in a new area. And best of all using AI for cleanup! Generate some tests, refactor common code. The boring corporate stuff.
safety1st•37m ago
I'm not at all convinced that "break your concentration and go check on an agent once every several minutes" is a productivity increaser. We already know that compulsively checking your inbox while you try to code makes your output worse. Both kill your focus and that focus isn't optional when you're doing cognitively taxing work--you know, the stuff an AI can't do. So at the moment it's like we're lobotomizing ourselves in order to babysit a robot that's dumber than we are.

That said I don't dispute the value of agents but I haven't really figured out what the right workflow is. I think the AI either needs to be really fast if it's going to help me with my main task, so that it doesn't mess up my state of flow/concentration, or it needs to be something I set and forget for long periods of time. For the latter maybe the "AIs submitting PRs" approach will ultimately be the right way to go but I have yet to come across an agent whose output doesn't require quite a lot of planning, back and forth, and code review. I'm still thinking in the long run the main enduring value may be that these LLMs are a "conversational UI" to something, not that they're going to be like little mini-employees.

luplex•30m ago
I still hit the flow state in cursor, always reviewing the plan for some feature, asking questions, learning, reviewing code. I'm still thinking hard to keep up with the model.
WarmWash•59m ago
I hope Google has been improving their diffusion model in the background this whole time. Having an agentic system that can spin up diffusion agents for lite tasks would be awesome
ithkuil•8m ago
Because they would be faster?
iterateoften•57m ago
For me, plan mode is consistently pretty fast. Then to implement, I just walk away and wait for it to be done while working on a new plan in a new tab.

Probably more stress if I'm on battery and don't want the laptop to sleep or the WiFi to get interrupted.

Davidzheng•55m ago
makes you wonder how automatable this babysitter role is...
pfdietz•32m ago
That was my reaction.
alex_c•54m ago
I joke that I'm on the "Claude Code workout plan" now.

Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.

Moving around does help, but even so, the mental fatigue is real!

_aavaa_•50m ago
Coffee shops got filled with the laptop crew, are gyms the next frontier?
jwarden•54m ago
It’s like being a manager.
parpfish•49m ago
No, it’s like being a micro manager.

I don’t just give somebody a ticket and let them go. I give them a ticket but have to hover over their shoulder and nitpick their design choices.

Tell them “you should use a different name for that new class”, “that function should actually be a method on this other thing”, etc

xnx•30m ago
Nitpicking seems like a choice. It's also possible to be more relaxed/removed and only delve in when there is a problem.
zozbot234•53m ago
You're supposed to write a detailed spec first (ask the AI for help with that part of the job too!) so that it's less likely to go off track when writing the code. Then just ask it to write the code and switch to something else. Review the result when the work is done. The spec then becomes part of your documentation.
xnx•50m ago
Inferring is the new compiling: https://3d.xkcd.com/303/

Edit: Looks like plenty of people have observed this: https://www.reddit.com/r/xkcd/comments/12dpnlk/compiling_upd...

SomeHacker44•50m ago
"Compiling!" (C.f. xkcd)
SecretDreams•48m ago
I wonder if this is how managers feel -_-'
mikkupikku•44m ago
I know this is a terribly irresponsible and immature suggestion, but what I've been doing is every time I give Claude Code a request of indeterminate length, I just hit a blunt and chill out. That, and sometimes I'll tab into the kind of game that can be picked up and put down on very short notice; here's where I shamelessly plug the free and open source game Endless Sky.

For me personally, programming lost most of its fun many years ago, but with Claude Code I'm having fun again. It's not the same, but for me personally, at this stage in my life, it's more enjoyable.

Waterluvian•33m ago
Now that’s vibe coding.
tartoran•11m ago
That’s vibe coding while high. Probably terrible for assessing the results from Claude.
amelius•24m ago
Except there is a well-known phenomenon among programmers that commencing to work requires more energy than working itself (*).

Every time you chill out and come back to work, you will have to invest that extra bit of start-up energy. Which can be draining.

(* probably has to do with reloading your working memory)

tartoran•10m ago
Context switching tax.
keyle•23m ago
Imagine the captain high while auto pilot is on... who's flying this thing!
the-grump•30m ago
What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.

e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.

The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.

mavamaarten•27m ago
For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.

Going back from writing an email to working, versus going back from email to reviewing someone else's work feels harder.

amelius•27m ago
As a programmer I want to minimize my context switches, because they require a lot of energy.

LLMs force me to context switch all the time.

jeremyjacob•14m ago
I don’t think it’s unreasonable to assume that in 1-2 years inference speed will have increased enough to allow for “real time” prompting, where the agent finishes work in a few seconds instead of a couple of minutes. That will certainly change our workflows. Seems like we are in the dial-up era currently.
rcarmo•9m ago
This. It’s the context switching and synchronicity, just like when you are managing a project and go round the table - every touch point risks having to go back and remember a bazillion things, plus in the meantime you lose the flow state.
wesm•1h ago
I've been building https://roborev.io/ (continuous background code review for agents) essentially as a cope to supervise the poor quality of the agents' work, since my agents write much more code than I can possibly review directly or QA thoroughly. I think we'll see a bunch of interesting new tools to help alleviate the cognitive burden of supervising their work output.
nonethewiser•1h ago
You can see the exponential growth of tokens in real time! lol

Do you find it works well?

With these agents I've found that making the workflows more complicated has severe diminishing returns. And is outright worse in a lot of cases.

The real productivity boost I've found is giving it useful tools.

wesm•1h ago
Super well! I don't work without this tool running in the background supervising all the agents' work.
lvl155•1h ago
All these tools can be a big waste of time if you’re an end-user dev. It only makes sense if you are investing your time to eventually use that workflow knowledge to make a product.
antirez•1h ago
1. Make long pauses: 1h of work, then stop for 30 minutes or more. The productivity gain should leave you more time to rest. Alternatively, work just 50% of the time, 2h in the morning and 2h in the evening instead of 8 hours, while still trying to deliver more than before.

2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.

3. Don't be too open-ended in the changes you make just because you can do them in little time now. Do what really matters.

4. When you are away, put an agent on the right rails to iterate and potentially deliver some very good results in terms of code quality, security, speed, testing, ... This increases productivity without stressing you. When you return, inspect the results, discard whatever is trash, and keep the gems, if any.

5. Be minimalistic even if you no longer write the code. Prompt the agent (and in your AGENT.md file) to stay focused, not to add useless dependencies or complexity, to keep the line count low, and to accept an improvement only if the complexity-cost/gain ratio is adequate.

6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents. And it is a moment of calm, focused work for you.
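Point 5 could be encoded directly in the agent's instruction file. A hypothetical AGENT.md excerpt along these lines (illustrative wording only, not anyone's actual file) might read:

```
# AGENT.md (excerpt)

- Prefer the smallest diff that solves the problem; keep total line count low.
- Do not add new dependencies unless explicitly approved in the prompt.
- Accept a refactor only if the gain clearly outweighs the added complexity.
- No speculative abstractions: implement only what the spec asks for.
```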

fhd2•1h ago
(1) is not something the typical employee can do, in my experience. They're expected to work eight hours a day. Though I suppose the breaks could be replaced with low effort / brain power work to implement a version of that.
antirez•1h ago
Yep, slow QA, things that also make the real difference in quality.
ryukoposting•36m ago
Work for a smaller company with more reasonable expectations of a knowledge worker.

You're an engineer, not a manager, or a chef, or anything else. Nothing you do needs to be done Monday-Friday between the hours of 8 and 5 (except for meetings). Sometimes it's better if you don't do that, actually. If your work doesn't understand that, they suck and you should leave.

falloutx•1h ago
1) Is this for founders? Because employees surely can't do this. With new AI surveillance tech, companies are looking over our shoulders even more than before.
jezzamon•1h ago
Task switching sucks.

On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.

There's a negative for that too - cognitive effort is directly correlated with learning, so it means that your own skills start to feel less sharp too as you do that (as the article mentions)

gaigalas•1h ago
> Engineers are trained on determinism.

I'm fatigued by this myth.

taway1874•1h ago
Explain?
gaigalas•1h ago
True determinism is rare; we often don't get it. That's what purely functional languages are all about, and they're a minority.

We are trained on the other thing: unpredictable user interaction, parallelism, circuit-breaking, etc. That's the bread and butter of engineering (of all kinds, really, not just IT).

The non-deterministic intuition is baked into engineering much more than determinism is.

taway1874•56m ago
Fair point. But are we moving even further away from determinism with the current ways of working with AI?
gaigalas•31m ago
I see, you're using "determinism" colloquially, in the sense of "exact outcome".

That's perfectly fine. We are honed for this too.

We don't need to produce exact solutions or answers. We need to make things work despite the presence of chaos. That is our job and we're good at it.

Product managers freak out when someone says "I don't know how much time it will take, there are too many variables!". CFOs freak out when someone says "we don't know how much it will cost". Those folk want exact, predictable outcomes.

Engineers don't, we always dealt with unpredictable chaotic things. We're just fine.

barishnamazov•1h ago
This write-up has good ideas but gives me "AI-generated reading fatigue." Things that could cleanly be expressed in 1-2 sentences are whole paragraphs, often with examples that seem unnecessary or unrealistic. There are also some wrong claims, like the one below:

> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.

These posts got fewer than 5 upvotes; they didn't make it to the home page. And while the overall quality of Show HN might have dropped, the HN homepage is still quite sane.

The topic is also not something "nobody talks about," it's being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue

pcurve•59m ago
The headline is clickbait-y but I think the article is well articulated. I found the "What actually helped" helpful too.
barishnamazov•46m ago
I'd personally think twice before applying some of the advice in that section. Here's my take.

> Time-boxing AI sessions.

Unless you are a full-time vibe coder, you already wouldn't be using AI all the time. But time boxing it feels artificial, if it's able to make good and real progress (not unmaintainable slop).

> Separating AI time from thinking time.

My usage of AI involves doing a lot of thinking, either collaboratively within a chat, or by myself while it's doing some agentic loop.

> Accepting 70% from AI.

This is a confusing statement. 70% what? What does 70% usable even mean? If it means around 70% of features work and other 30% is broken, perhaps AI shouldn't be used for those 30% in the first place.

> Being strategic about the hype cycle.

Hype cycles have always been a thing. It's good for mind in general to avoid them.

> Logging where AI helps and where it doesn't.

I do most of this logging in my agent md files instead of a separate log. Also after a bit my memory picks it up really quickly what AI can do and what it can't. I assume this is a natural process for many fellow engineers.

> Not reviewing everything AI produces.

If you are shipping at an insane speed, this is just an expected outcome, not advice you can follow.

idopmstuff•51m ago
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs

Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)

ryukoposting•43m ago
I had word/page quotas, but I also don't write my blog in a way that resembles the papers I wrote for school 10 years ago.
bwfan123•45m ago
> but gives me the "AI-generated reading fatigue."

Agree. The article could have been summarized into a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating the content: "suburbs-raised" "free-lancer" etc.

StilesCrisis•42m ago
"You're not imagining it." I hit the back button immediately.
goostavos•29m ago
Sigh.. same.

The real AI fatigue is the constant background irritation I have when interacting with LLMs.

"You're not imagining it" "You're not crazy" "You're absolutely right!" "You're right to push back on this" "Here's the no-fluff, correct, non-reddit answer"

QuadmasterXLII•32m ago
The boring and likely answer is that it was just clauded out ("I'm tired, chat; look through my last ten days of sessions and write and publish a blog post about why"), but it would be fascinating to discover that the author has actually looked at so much AI output that they just write like this now.
raincole•21m ago
> HN homepage is still quite sane.

Those Show HN posts aren't the insane part. The insane part is stuff like:

> Thank you, OpenClaw. Thank you, AGI—for me, it’s already here.

> If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

> Code must not be reviewed by humans

> Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.

(All quoted from actual homepage posts today. Fun game: guess which quote is from which article)

jairuhme•10m ago
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs

Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.

stuartjohnson12•1h ago
Absolute middlebrow dismissal incoming, but the real thinking atrophy is writing blog posts about thinking atrophy caused by LLMs using an LLM.

It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.

It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.

The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.

Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.

falloutx•1h ago
Why do you think the author used ChatGPT to write this? It has human imperfections, and except for this 'The "just one more prompt" trap' I didn't think it was written by a prompt
sidk24•56m ago
Author here: Sir, it is almost fully written by a human, with English/grammar improved by AI
stuartjohnson12•20m ago
...and I usually come to doubt my own intuitions that this is the case when people say things like this, but my experience is usually that the LLM is doing more heavy lifting than you realise.

> Distill - deterministic context deduplication for LLMs. No LLM calls, no embeddings, no probabilistic heuristics. Pure algorithms that clean your context in ~12ms.

I simply do not believe that this is human-generated framing. Maybe you think it said something similar before. But I don't believe that is the case. I am left trying to work out what you meant through the words of something that is trying to interpret your meaning for you.

amichayg•1h ago
Taking breaks is really something to try and solve in 2026: to just write regular code, to read, even to exercise. The mind can eventually get overloaded, and there's no way around proper hygiene.
PLenz•1h ago
I only use the free tiers of any particular app. It forces you to really think about what you want the tool to do, as opposed to treating it as the 'easy' button.
orangepanda•1h ago
> What should this function be named? I didn't care. Where should this config live? I didn't care. My brain was full. Not from writing code - from judging code.

Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.

mrits•1h ago
I think we've spent exponentially more effort to ensure the code is readable by machines.

I also don't understand why you assume what the AI generates is more readable by AI than human generated code.

preommr•1h ago
I'd like to also add 'perceived cost aversion':

AI generates a solution that's functional, but that's at a 70% quality level. But then it's really hard to make changes because it feels horrible to spend 1 hour+ to make minor improvements to something that was generated in a minute.

It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.

And if it functionally works, then why bother?

Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.

mrcwinn•1h ago
I feel none of this. In the absence of data or studies, you might consider writing about your own experience rather than the audience’s.
luxuryballs•1h ago
I haven’t hit this yet and now I feel like someone just told me about thorns for the first time while I’m here jogging confidently through the woods with shorts on.
bonoboTP•1h ago
I personally am a lot less stressed. It helped my mood a lot over the last couple of months. Less worries about forgetting things, about missing problems, about getting started, about planning and prioritizing in solo work. Much less of the "swirling mess" feeling. Context switches are simpler, less drudgery, less friction and pulling my hair out for hours banging against some dumb plumbing and gluing issue or installing stuff from github or configuring stuff on the computer.

It's a million little quality-of-life things.

mrspacejam•1h ago
Absolutely nailed this one. My team has been talking about this for a few weeks, everyone including our manager is completely burned out.
zkmon•1h ago
I have said this a few times here. Tech is never about making life easier for the worker. It is about making the worker more productive and the product more competitive.

Moving from horses to cars did not give you more free time. Moving from telephone to smartphone did not give more fishing time. You just became more mobile, more productive and more reachable.

xnx•21m ago
How we use efficiency is a choice. It's possible to work a lot less if you accept quality of life from an older era (no phone, Netflix, etc.)
zkmon•14m ago
It's not a choice. For example, Windows XP is no longer a choice, because the context around it has made it unsafe, though XP itself didn't change. A lifestyle from an older era is no longer the norm, which means your relative quality of life degrades automatically and it actually becomes unsafe.
SoftTalker•5m ago
When I retire I plan to have no phone, no computer, and no TV. These are by far the biggest time sucks in my life and I want to see what I can do without their distractions.

I might keep a tablet or old phone with no service so that I can still do email.

Chance-Device•1h ago
Executive functioning fatigue. Usually you're doing this in between applying skills; here it's always making top-level decisions and reasoning about possibilities. You don't have nearly as much downtime, because you don't have to implement: you go from hard problem to hard problem with little time in between. You're probably running your prefrontal cortex a lot hotter than usual.

People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.

geetee•1h ago
Engineers that have the audacity to think they can context switch between a dozen different lines of work deserve every ounce of burnout they feel. You're the tech equivalent of wanting to be a Kardashian and you're complicit in the damage being caused to society. No, this isn't hyperbole.
tangotaylor•1h ago
> Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.

This problem has been going on a long time, Helen Keller wrote about this almost 100 years ago:

> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.

https://www.theatlantic.com/magazine/archive/1932/08/put-you...

taway1874•59m ago
But ... but ... your productivity as an engineer shoots up! You can take on more tasks and ship more! -- Dumbass Engineering Director who has never written a line of code in their life.

Unfortunately, with these types of software simpletons making decisions, we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, so it increases the overhead on testing, security, requirements, integrations, etc., making all those productivity gains evaporate. Worse (as the author mentioned), it makes your engineers less creative and more burnt out.

Let's be honest here. Engineers picked this career broadly for 2 reasons, creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained in to "AI Engineers" for way less money to feed inputs.

I am all for technological evolution and welcome it, but this isn't anything like that. It is purely about profits and shareholders, and anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.

rizs12•48m ago
testing is quite creative too btw
VikRubenfeld•58m ago
We've all seen that if you are interacting with an AI over a lengthy chat, eventually it loses the plot. It gets confused. It appears to me that it's necessary, when coding with an AI, to keep its task very limited in terms of the amount of information it needs to complete the task. Even then you still have to check the output very carefully. If it seems to be losing focus, I start a new task to reduce the context window, and focus on something that still needs to be fixed in the previous task.
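The reset-on-overflow habit described above can be sketched roughly like this. All names are illustrative (not any real agent API), and a plain character count stands in for a proper token budget:

```python
# Minimal sketch of "start a new task when the context gets too big".
# Hypothetical helper names; a character count stands in for a token budget.
MAX_CONTEXT_CHARS = 8000

def add_turn(history, message, summarize):
    """Append a message; once the session outgrows the budget, reset it,
    carrying over only a compact summary of what was done so far."""
    history = history + [message]
    if sum(len(m) for m in history) > MAX_CONTEXT_CHARS:
        # "Start a new task": keep a summary instead of the full transcript.
        return [summarize(history)]
    return history
```

The point of the sketch is the shape of the workflow, not the threshold: fresh sessions with a small carried-over summary keep the model from "losing the plot" in long chats.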
ionwake•58m ago
This was a good article.

I don't have exhaustion as such but an increasing sense of dread: the more incredible work I achieve, the less valuable I realise it will potentially be, given how little effort it cost.

PaulHoule•56m ago
I’m shocked that the obvious analysis hasn’t come up: this is more disingenuous talk Karpathy-style, designed to awaken feelings of FOMO from someone who’s not developing normal software with A.I. but is selling A.I. programming tools.
CuriouslyC•56m ago
I'm a big AI booster, but I'm so sick of how crazy hype has gotten. Claude Cowork? Game changer! Ralph? Nothing will ever be the same. LOLClaw? Singularity, I welcome our new AI overlords.
clejack•55m ago
When I was in my mid 20s, I interned at a machine shop building automotive parts. In general, the work was pretty easy. I was modifying things via cad, doing dry runs on the cnc machine, loading raw material, and then unloading finished products for processing.

Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.

If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.

dangus•54m ago
I think the fatigue is that the technology has been hyped since long before today when it’s actually started to become somewhat useful.

And even today when it’s useful, it’s really most useful for very specific domains like coding.

It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.

For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.

Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.

These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.

In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.

The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.

The car dealership bot should be able to set the customer up in the CMS by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.

But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.

Kiro•40m ago
How is your comment relevant to the article?
shevy-java•54m ago
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you.

AI is not good for human health; the evidence is right here.

idopmstuff•52m ago
> The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don't do fewer tasks. You do more tasks.

> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.

The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.

Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.

xnx•18m ago
The overhead of having even a second employee is huge. Being a one person shop is a huge efficiency gain.
stephc_int13•51m ago
The irony is that this article has likely been crafted by AI. The smell is not too obvious but still there.
janwillemb•49m ago
> You're experiencing something real that the industry is aggressively pretending doesn't exist.

I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.

babarock•48m ago
The way I experience this is through an unprecedented amount of feature creep. We don't use AI-generated code for all our projects, but in the ones we do, I see a weird anti-pattern settling in: simply because it's faster than ever to generate a patch and get it merged doesn't mean that merging 50+ commits this week makes sense.

Code and features still need to experience time and stability in order to achieve maturity. We need to give our end users time to try stuff, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous 2 releases, and those are usually a few weeks apart.

Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.

iryna_kondr•48m ago
I’ve definitely been feeling that shift too. What have you guys found that helps with this? Any habits you use to avoid the constant context switching and decision fatigue?
beepbooptheory•47m ago
The weird thing at the end of the day is that we live in this world where there is this default individual desire to be more "productive." I am always wondering, productive for who, for what?

I know more than most that there is some baseline productivity we are always trying to be at, which can sometimes be a target more than a current state. But the way people talk about their AI workflows is different. It's like everyone has become a tyrannical factory-floor manager, pushing ever further for productivity gains.

Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can and should focus on being productive together, but save your actual life for finer, more sustainable ventures.

paufernandez•46m ago
Apart from the exhaustion of context switching, I believe there is an internal signal that gauges how "fast" things are happening in your life. Stress responses are triggered whenever things are going too fast (as if you were driving on a narrow road at too much speed), and it feels like there is danger, since you intuit that a small mistake is gonna have big consequences.

Some people thrive in more stressful situations, because they don't get as aroused in calmness, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.

thesumofall•41m ago
I agree with the sentiment. I don't code a lot, but AI has sped up things in all fields for which I use AI (or at least the expectation of speed has grown). For me, it's the context switching but also just the mental load of holding so many projects and ideas together in my head. It somewhat helps that the usable context of LLMs has grown over time, so I tend to trust the "memory" of the AI a bit more to keep track of things and try to offload stuff from my brain
1970-01-01•39m ago
Welcome to management. Herding cats is the idiom. AI is behaving on the nose in this aspect. Perhaps this is the author's first taste of it?

Just a few days ago: https://news.ycombinator.com/item?id=46885530

zagfh•38m ago
Of course you are more tired: Code review is more difficult than writing code.

Then you have to deal with slop, slopfluencer articles written under the influence of AI psychosis, AI addicts, lying managers, lying CEOs, etc.

And usually (the author of this article being an exception) you get dumber and are only able to verbalize AI boosterism.

AI only works if you become a slopfluencer, sell a course on YouTube and have people "like and subscribe".

nubg•34m ago
Goddamn it pisses me off so much when people rant about AI but use LLMs to write their blog posts!

Use your own words!

I'd rather read the prompt!

mungoman2•32m ago
IMHO, this is not really about AI; it's about setting boundaries and not overworking yourself.
thrownaway561•25m ago
Personally I'm loving AI for TECHNICAL problems. Case in point... I just had a server crash last night, and obviously I needed to do a summary of what could have possibly caused the issue. This used to take hours, and painful hours at that. If you've ever had to scroll through a Windows event log you know what I'm talking about. But today I got the idea of just exporting the log, uploading it to Gemini, and asking it:

Looking at this Windows event log, the server rebooted unexpectedly this morning at 4:21am EST; please analyze the log and let me know what could have been the cause of the reboot.

It took Gemini 5 minutes to come back with an analysis, and not only that, it asked me for the memory dump that the machine took. I uploaded that as well and it told me that it looks like SentinelOne might have caused the problem, and to update the client if possible.

Checking the logs myself, that's exactly what it looks like.

That used to take me HOURS; now it takes Gemini 10 minutes but me 30 seconds of effort. That is a game changer if you ask me.

I love my job, but I love doing other things rather than combing over a log trying to figure out why a server rebooted. I just want to know what to do to fix it if it can be fixed.

I get that AI might be giving other people a sour taste, but to me it really has made my job, and the menial tasks that come with it, easier.

osigurdson•25m ago
Clearly written before Codex 5.3 and Opus 4.6 shipped :)
bicx•25m ago
This reflects my experiences exactly. Thanks for writing this up.
quirkot•24m ago
Sounds a lot like Marx's theory of alienation
cs702•23m ago
Instead of managing code, you're now managing AI entities.

Managing people has always been emotionally and psychologically exhausting.

Managing AI entities can be even more taxing. They're not human beings.

downboots•21m ago
Or we're being managed to refine models
cs702•16m ago
That too. Management is always a two-way street. The "manager" manages down. The "employee" manages up.
SoftTalker•4m ago
Managing people has always seemed easy to me. Don't be an asshole, don't get personally invested in their problems, and things generally work out.
AnotherGoodName•19m ago
We’re all still getting the hang of it.

I keep pushing the AI to do absolutely everything, to a fault: instead of spending 10 minutes manually correcting a mistake the AI made, I spend hours adjusting and rerunning the prompt to correct it.

I’m learning how to prompt well at least.

otabdeveloper4•16m ago
> as i’m learning how to prompt well

Prompting isn't a real skill and you're not learning anything.

"Claude 4.5 Sonnet operator" is not a job description.

scotty79•18m ago
Hello developer. Welcome to the tech lead role. Please enjoy your stay till AI makes this role obsolete too.
sgarland•14m ago
> So you read every line. And reading code you didn't write, that was generated by a system that doesn't understand your codebase's history or your team's conventions, is exhausting work.

I’ve noticed this strongly on the database side of things. Your average dev’s understanding of SQL is unfortunately shaky at best (which I find baffling; you can learn 95% of what you need in an afternoon, and probably get by from referencing documentation for the rest), and AI usage has made this 10x worse.

It honestly feels unreasonable and unfair to me. By requesting my validation of your planned schema or query that an AI generated, you’re tacitly admitting that a. You know it’s likely that it has problems b. You don’t understand what it’s written, but you’re requesting a review anyway. This is outsourcing the cognitive load that you should be bearing as a normal part of designing software.

What makes it even worse is MySQL, because LLMs seem to consistently think that it can do things that it can’t (or is at least highly unlikely to choose to), like using multiple indices for a single table access. Also, when pushed on issues like this, I’ve seen them make even more serious errors, like suggesting a large composite index which it claimed could be used for both the left-most prefix and right-most prefix. That’s not how a B{-,+}tree works, my dude, and of all things, I would think AI would have rock-solid understanding of DS&A.
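The leftmost-prefix rule at issue here is easy to demonstrate. The sketch below uses SQLite's query planner purely for illustration (the plan wording differs across engines and versions, but the composite B-tree behavior is the same idea as MySQL's):

```python
# Demonstrates the leftmost-prefix rule for composite B-tree indexes.
# SQLite (stdlib) is used for illustration; MySQL applies the same idea.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT, c INT, d INT)")  # d is NOT indexed
con.execute("CREATE INDEX idx ON t (a, b, c)")

# Filtering on the leftmost indexed column can use the composite index...
plan_a = con.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1").fetchall()
print(plan_a[0][-1])  # the plan detail mentions idx

# ...but filtering only on a trailing column cannot: it falls back to a scan.
plan_c = con.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE c = 1").fetchall()
print(plan_c[0][-1])  # a plain table scan, no idx
```

The non-indexed column `d` is there so `SELECT *` can't be satisfied by a covering-index scan, which keeps the contrast between the two plans clean.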

0xbadc0de5•11m ago
[delayed]
gherkinnn•3m ago
My main source of AI fatigue is how it is the main topic anywhere and everywhere I go. I can't visit an art gallery without something pestering me about LLMs.
