frontpage.

Mitchellh – I strongly believe there are entire companies now under AI psychosis

https://twitter.com/mitchellh/status/2055380239711457578
184•reasonableklout•1h ago

Comments

andreasgl•40m ago
https://xcancel.com/mitchellh/status/2055380239711457578
mhitza•38m ago
https://hachyderm.io/@mitchellh/116580433508108130
teddyh•8m ago
<https://twiiit.com/mitchellh/status/2055380239711457578> – will redirect to a currently-working Nitter instance.
weinzierl•39m ago
"its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"

Hmm, I agree with the point OP is making, but I'm not so sure this is the best supporting argument. The bottleneck is finding the bugs; if he'd criticized people claiming AI will be the panacea for that, I'd be with him. But people saying agents are fast and good at fixing human-found bugs is nothing I'd object to.

Agents are fixing bugs so quickly and at a scale humans can't do already.

woeirua•25m ago
You got downvoted for speaking the truth. HN has a strong anti-AI contingent. They won’t concede until you can just ask Codex or Opus “find and fix all the bugs in this codebase”. We’re not there yet, but soon we will be. Then what?
maxbond•15m ago
More likely people thought GP was missing the point; "MTTR-optimized YOLO deployment" only succeeds against errors that are recoverable, detected quickly, and whose downtime is acceptable. You could have a bug silently corrupting data for months, and that data may only be used by one critical process that runs once a quarter. So you could introduce a time bomb that can't be gracefully recovered from (depending on the nature of the data corruption).

So the point is not that agents cannot find bugs (they certainly can), it's whether you can shirk reviewing for bugs if MTTR is fast enough. There are circumstances where YOLO is appropriate, but they aren't the production environment of a mature application.
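
That trade-off can be put in rough arithmetic (a hypothetical sketch; the function name and numbers are illustrative, not from the thread): fast repair only bounds the damage window after detection, not before it.

```python
def damage_window_days(detection_lag_days: float, mttr_days: float) -> float:
    """Total exposure window: silent period before detection plus repair time."""
    return detection_lag_days + mttr_days

# A loud crash caught by monitoring within an hour: MTTR dominates, YOLO is cheap.
fast = damage_window_days(detection_lag_days=0.04, mttr_days=0.01)
# Silent corruption surfaced only by a quarterly job: MTTR is a rounding error.
slow = damage_window_days(detection_lag_days=90, mttr_days=0.01)
assert fast < 0.1 and slow > 90
```

However fast the agent fixes the bug once found, the first term is untouched by MTTR, which is the thread's point about silent corruption.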

hansmayer•13m ago
> won’t concede until you can just ask Codex or Opus “find and fix all the bugs in this

But this is just holding the Slop Companies to the standard they set for themselves! Just recently, the CEO of OpenAI babbled some nonsense on Twitter about how he hands tasks over to Codex, which, according to him, finishes them flawlessly while he plays with his kid outside.

> but soon we will be.

Ah yes, in 3-6 months, right? This time next year, Rodney, we'll be millionaires!

babarock•17m ago
The tweet is criticizing over-reliance on the "agents will fix it anyway" mindset.

The fact that we can fix things faster now doesn't mean that we should throw away caution and prevention. The specific point of his tweet is that we're seeing a lot of people starting to skip proper release engineering.

Agents are quick to fix bugs, yes, but it doesn't mean that users will tolerate software that gets completely broken after each new feature is introduced and takes a certain number of days to heal each time.

lolc•16m ago
> Agents are fixing bugs so quickly and at a scale humans can't do already.

The metric is how many defects are introduced per defect fixed. Being fast is bad if this ratio is above one.
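
A toy model of that metric (hypothetical numbers and function name, just to make the ratio concrete):

```python
def weekly_backlog_change(fixes_per_week: float, defects_per_fix: float) -> float:
    """Net change in open defects per week; positive means the backlog grows."""
    return fixes_per_week * defects_per_fix - fixes_per_week

# A slow fixer that introduces 0.5 defects per fix shrinks the backlog...
assert weekly_backlog_change(fixes_per_week=10, defects_per_fix=0.5) == -5.0
# ...while above a ratio of 1.0 the backlog grows, and more speed makes it worse.
assert weekly_backlog_change(fixes_per_week=100, defects_per_fix=1.2) > \
       weekly_backlog_change(fixes_per_week=10, defects_per_fix=1.2)
```

With a ratio above one, raw fixing speed multiplies the damage rather than the progress.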

tacostakohashi•33m ago
"no no, it has full test coverage"

at least at my BigCo, AI is being used for everything - writing slop, writing tests, code reviews, etc.

it would make sense to use AI for writing code, but human code review. or, human code, but AI test cases... or whatever combination of cross-checking, trust-but-verify, human in the loop, etc. people prefer.

i think once it gets used for everything, people have lost the plot, it's the inmates running the asylum.

ares623•11m ago
I was rewatching Rich Hickey's "Simple Made Easy" talk (as one does) and there was a great line about full test coverage.

"What's true about all bugs in production? (pause for dramatic effect) They all passed the tests!" (well, he said typechecker but I think the point stands)

robotswantdata•32m ago
Most labs are shilling “AI worker” dreams to these very companies
nialse•32m ago
I'm starting to long for the age after AI. When the generative euphoria has settled and all outputs are formally verified based on exquisite architectures and standards.
senordevnyc•31m ago
Will never happen, for the exact reason that we’ve almost never done that for human output either.
sitkack•29m ago
it is required now, or all civilization collapses.
platinumrad•16m ago
You're going to have to expand on this one.
saltyoldman•25m ago
There was not a renaissance to move back to Assembly when Java sucked. Instead more Java developers were created.
nialse•19m ago
Another argument for less human-like AI then, I guess.
stego-tech•16m ago
That’s literally just software though.
rvz•21m ago
Well, a 2008- or 2000-level financial crash is required for this. It is always during euphoric levels of delusion that such events occur.

...and it also needs more so-called AI companies present in the wreckage in this crash.

AI psychosis is undeniably real.

DiscourseFan•19m ago
They are being developed, but it takes over a decade for this to happen normally
999900000999•14m ago
This is the new normal. AI will continue to reduce the need for human workers until a Universal Basic Income is established.

At the end of the day robots can do the vast vast majority of jobs better and faster. If not now, very soon.

I only worry our economic systems won’t keep up

risyachka•5m ago
Humans could already have a 4-hour work week without productivity loss.

But I only see mass layoffs, and those who are still working are working longer and harder than before.

senordevnyc•31m ago
Assuming he's right, I don't see how that constitutes "psychosis", as opposed to this being yet another of a billion examples of companies jumping on a bandwagon / cargo cult, and then learning they took it too far.

And also, he might not be right. But the good news is, we’ll all get to find out together!

CodingJeebus•28m ago
Anyone who's taken VC funding has no choice. More money has been spent on AI commercialization than the atomic bomb, the US interstate build-out, the ISS, and the Apollo program combined. Failure is going to be catastrophic, and therefore anyone tied to this ship cannot accept a world in which it fails.
infamouscow•13m ago
On the bright side, my guillotine & rope startup is going to make a killing (no pun intended).
hungryhobbit•8m ago
Or anyone who even wants VC funding. 90+% of investors only want to invest in AI companies.

If you're not doing AI there's an incredibly limited pool of people who will give you $$$ ... and you're competing with EVERY OTHER NON-AI COMPANY for their attention.

LunicLynx•25m ago
Either this or we humans are out of the picture soon.
arm32•18m ago
Occam's razor would suggest the former.
leeoniya•24m ago
> "no no, it has full test coverage"

i don't have enough fingers (and toes) to count how many times i've demonstrated that "100% coverage" is almost universally bullshit.
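
A classic illustration of why "100% coverage" proves so little (hypothetical example; line-coverage tools count executed lines, not checked behaviors):

```python
def apply_discount(price: float, rate: float) -> float:
    # Bug: nothing guards against rates above 1.0, so prices can go negative.
    return price * (1 - rate)

def test_apply_discount():
    # This single test executes every line of apply_discount: 100% line coverage.
    assert apply_discount(100, 0.2) == 80.0

test_apply_discount()
# Yet the negative-price bug sails straight through the "fully covered" suite.
assert apply_discount(100, 1.5) == -50.0
```

Coverage measures which lines ran under some input, not whether the inputs that matter were ever tried.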

kevinsync•13m ago
Codex is freakin hot to trot to churn out test coverage for every single thing it implements, and some of it is very esoteric and highly prescriptive (regexes for days) BUT .. after a while, it dawned on me that LLM-driven test coverage is less about proving “code correctness” (you’re better off writing those tests yourself alongside them), and more about just trying to ensure that whatever gets bolted on stays bolted on. For better or worse, obviously, since if you bolt on trash, trash you shall have.
woeirua•24m ago
This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)

It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.

caconym_•17m ago
I am sure you will feel that this is missing the point of your analogy, but we would not have gotten very far with automobiles if we didn't know how they worked.
9dev•17m ago
If you want to draw that line of argument - it's more like horse riders being convinced to give up their horses in favour of trains: You're travelling faster, don't have to navigate yourself, or think about every boulder on the way; but there are destinations you can't go, overcrowded trains slowing down the journey, hefty ticket prices, and instead of enjoying the freedom, you're degraded to a passive passenger.
hansmayer•9m ago
Very funny, this. Did we need forward-deployed engineers to convince people that they absolutely had to use trains in order to "not be left behind"? Or otherwise hype? Or was it sort of obvious and did not need to be explained so much, unlike a certain bad joke called LLMs?
jgbuddy•16m ago
agreed completely
miek•23m ago
My very large employer has always been glacially slow on modernization and tech adoption. It may now, oddly enough, become a competitive advantage.
DCKP•21m ago
Literally the plot of Battlestar Galactica! Life imitates art indeed...
Barrin92•11m ago
yes, I was never so happy to work in Germany. People used to joke about the proverbial fax machine still being a thing but I've never been so glad to work in a culture where this mania doesn't exist. Reading HN is like entering Alice's Wonderland of token maxxers and AI psychotics. Genuinely don't know a single person here who is forced to work like this.
alexnewman•9m ago
Ah, so it's like 2000 again. Germany will fall even further behind, it seems.
elevation•21m ago
Mitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.
taffydavid•21m ago
This post calls out how you can't argue with these people, because they say "it's fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"

The top reply is from someone doing exactly that, arguing "but the agents are so fast!"

slopinthebag•20m ago
I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.

Many people on this forum are suffering under this same psychosis.

Groxx•20m ago
Bug reports also go down when people lose faith that they will be fixed, because reporting them is often a substantial time commitment. You see it happen pretty regularly as trust in a group/company collapses.
selectively•18m ago
I do not believe 'AI psychosis' is an actual thing.
bolangi•14m ago
When war psychosis is not enough....
ivanjermakov•13m ago
Deprecating immature workflows (LLM agents in this case) is much simpler and faster than building them from scratch. Many companies are getting this rushed assessment right; it's a case where being wrong is much more costly than being right.
dtnewman•13m ago
If you feel this way, you might like my new CLI tool, Burn, Baby, Burn (those tokens) (https://github.com/dtnewman/burn-baby-burn/tree/main).

Show HN here: https://news.ycombinator.com/item?id=48151287

mattgreenrocks•12m ago
The only way many people learn that the stove is hot is by burning their hands on it.

Let them.

impulser_•10m ago
I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself.

I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with finance people and VCs on Twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic, instead of doing even a little bit of thinking themselves.

These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they are just going to give you the pattern they see. Most people notice this if they just try to talk to one about an idea. It often spits out the most generic dog shit.

This is, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code. But again, you just can't let it do the thinking and decision making.

slopinthebag•7m ago
> companies and people outsourcing their decision making and thinking to AI

It's so interesting how easy it is to steer the LLMs, based on context, into arriving at whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".

So part of the psychosis is when these people unknowingly steer their LLM into their own conclusions and biases, and then they get magnified and solidified. It's gonna end in disaster.

throwawaypath•9m ago
Mitchellh is on to something. Some of the AI products I've seen seem like psychosis hallucinatory fever dreams, using terms and concepts that have no meaning. Funding? $50,000,000 pre-seed.
bsenftner•8m ago
This is a critical communications issue that is becoming what I believe is the defining characteristic of "This Age": nobody knows how to discuss disagreement, and because it cannot even be discussed, communication ends, followed by blind obedience, forced bullying, retreat, and abandonment. This is going to be a hell of a ride, because nobody can really discuss the situation in a rational tone.
spicyusername•6m ago
We're definitely in the mess around phase of AI adoption.

I don't think it's super clear what we'll find out.

We've all built the moat of our careers out of our expertise.

It is also very possible that expertise will be rendered significantly less valuable as the models improve.

Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.

Given the state of the industry, we're clearly going to find out one way or the other, hah!

gopalv•5m ago
The AI psychosis is not the anti-opinion to the use of AI.

I use AI coding tools every day, but AI tools have no concept of the future.

The selfish thinking an engineer has, the "if this breaks in prod, I won't be able to fix it, and they'll page me at 3AM" instinct, is what we've relied on to build stable systems.

The general laziness of looking for a perfect library on CPAN so that I don't have to do this work (often taking longer to not find a library than writing it by hand).

I've written thousands of lines of code with AI tools which ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all on my own, and setting up pitfalls to catch bad code in testing.

But one thing it doesn't do is "write less code"[1].

[1] - https://xcancel.com/t3rmin4t0r/status/2019277780517781522/

Scalable GPU Acceleration of Scalar Functions in Analytical Databases [pdf]

https://www.vldb.org/pvldb/vol19/p1441-rajan.pdf
1•matt_d•2m ago•0 comments

SpaceX accelerates IPO timeline, targets June 12 listing on Nasdaq, sources say

https://www.reuters.com/world/spacex-accelerates-ipo-timeline-targets-june-11-pricing-nasdaq-2026...
1•TechTechTech•3m ago•0 comments

Review of Surgery

https://operativereview.com/subjects/
1•rolph•7m ago•0 comments

The agent principal-agent problem

https://crawshaw.io/blog/agent-principal-agent
1•matt_d•8m ago•0 comments

AI Skeptic: This Business Makes No Sense [video]

https://www.youtube.com/watch?v=BI96xGqvWII
1•kklisura•10m ago•0 comments

MyAi – Decentralized inference with on-chain payouts (Base)

https://myaitoken.io
1•jlvardon•10m ago•0 comments

FrontierSmith: Synthesizing Open-Ended Coding Problems at Scale

https://frontier-cs.org/blog/frontiersmith/
1•matt_d•10m ago•0 comments

Prepare for an AI Jobs Apocalypse

https://www.economist.com/leaders/2026/05/14/prepare-for-an-ai-jobs-apocalypse
1•edward•12m ago•0 comments

Wine 11.9 – Run Windows Applications on Linux, BSD, Solaris and macOS

https://www.winehq.org/announce/11.9
2•neustradamus•12m ago•0 comments

College Credit for This?

https://hollisrobbinsanecdotal.substack.com/p/college-credit-for-this
1•HR01•15m ago•0 comments

Show HN: AI Design Taste Generator

https://aidesigntaste.com/chrome-extension
5•novateg•15m ago•1 comments

Microsoft backpedals: Edge to stop loading passwords into memory

https://www.bleepingcomputer.com/news/microsoft/microsoft-edge-to-stop-loading-cleartext-password...
2•Cider9986•16m ago•0 comments

Run Hermes on a VPS without Leaking your API keys [video]

https://www.youtube.com/watch?v=6dERVjLk0-Q
1•dangtony98•18m ago•0 comments

Signal adds security warnings for social engineering, phishing attacks

https://www.bleepingcomputer.com/news/security/signal-adds-security-warnings-for-social-engineeri...
3•Cider9986•21m ago•0 comments

"The Prompting Company, Inc., All Rights Reserved"

https://www.google.com/search?q=%22The+Prompting+Company,+Inc.,+All+rights+reserved%22
4•enjoyyourlife•22m ago•0 comments

China Sought Access to Anthropic's Newest A.I. The Answer Was No.

https://www.nytimes.com/2026/05/12/us/politics/china-ai-anthropic-openai-mythos-chatgpt.html
1•bookofjoe•23m ago•1 comments

Show HN: Claude64, a Commodore 64 client for Claude

https://github.com/theletterf/claude64
1•theletterf•23m ago•1 comments

Show HN: Draw anywhere on Earth and estimate the population inside it

https://populationcircle.com
1•PopGuessr•23m ago•0 comments

The Doomsday Organism

https://www.noemamag.com/the-doomsday-organism/
1•anarbadalov•25m ago•0 comments

Case study: AI regulatory monitoring with structured outputs for legal review

https://bndigital.co/en-gb/cases/ai-regulatory-monitoring-system
1•andriioliinyk•26m ago•0 comments

Block AI coding agents from shipping insecure/expensive Terraform

https://github.com/ops0-ai/ops0-cli
1•sureshpaulchamy•28m ago•0 comments

Why Casey Muratori avoids AI [video]

https://www.youtube.com/watch?v=rDQdJP_REIU
1•therepanic•29m ago•1 comments

Mustela – A static site generator born from forensic family analysis

https://mustela.vercel.app/
2•filipvrbaxi•32m ago•0 comments

The 4th Linux kernel flaw this month can lead to stolen SSH host keys

https://www.zdnet.com/article/qualys-flags-a-linux-kernel-security-issue-that-could-lead-to-stole...
4•CrankyBear•33m ago•1 comments

Systems are Everything, Software is Systems

https://wilsoniumite.com/2026/05/15/systems-are-everything-software-is-systems/
2•Wilsoniumite•33m ago•0 comments

Monthly Roundup #42: May 2026

https://thezvi.substack.com/p/monthly-roundup-42-may-2026
1•paulpauper•35m ago•0 comments

Graduation Speech Is One Nobody Remembers

https://www.theatlantic.com/ideas/2026/05/college-graduation-speeches-speaker/687182/
1•paulpauper•35m ago•0 comments

What's the AI Endgame?

https://www.theatlantic.com/podcasts/2026/05/whats-the-ai-endgame/687184/
1•paulpauper•36m ago•1 comments

Demystifying the Silence of Correctness Bugs in PyTorch Compiler

https://arxiv.org/abs/2604.08720
1•matt_d•37m ago•0 comments

Sam Sianis, Chicago's most famous saloonkeeper, has died

https://www.chicagotribune.com/2026/05/15/sam-sianis-billy-goat-chicago-dies/
1•NaOH•38m ago•0 comments