frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Freedom is the only thing that matters. Living freely. Creating freely.

10•kaizenb•2h ago•5 comments

Ask HN: Are you too getting addicted to the dev workflow of coding with agents?

29•gchamonlive•9h ago•15 comments

With Mythos will we reach coding singularity?

2•kamalsrini17•2h ago•0 comments

Sandflare – I built a sandbox that launches AI agent VMs in ~300ms

2•ajaysheoran2323•3h ago•3 comments

Ask HN: Does anyone else notice that gas runs out faster than usual?

16•cat-turner•10h ago•26 comments

LinkedIn uses 2.4 GB RAM across two tabs

774•hrncode•2d ago•441 comments

Ask HN: Gemini CLI vs. Claude Code

6•elC0mpa•5h ago•2 comments

Are you team MCP or team CLI?

13•sharath39•13h ago•12 comments

Ask HN: What was it like in the era of BBS before the internet?

22•ex-aws-dude•12h ago•25 comments

Curious: anyone here allow agents to make purchase decisions of >$100?

2•adityasriram•7h ago•8 comments

Ask HN: Who needs contributors? (March 2026)

23•Kathan2651•1d ago•11 comments

Ask HN: Is it actually possible to run multiple coding sessions in parallel?

11•sukit•20h ago•15 comments

Ask HN: Is anyone still resisting the slop onslaught?

7•0xDEFACED•8h ago•4 comments

Ask HN: Best stack for building a tiny game with an 11-year-old?

14•richardstahl•1d ago•25 comments

Ask HN: What's your favorite number, and why?

9•QuantumNomad_•12h ago•22 comments

Ask HN: How to Handle Claude's Stubbornness?

7•classicpsy•21h ago•9 comments

Ask HN: M5 MacBook Pro buyers, worth spending the $$$ to maybe run LLMs local?

9•tpurves•1d ago•10 comments

The risk of AI isn't making us lazy, but making "lazy" look productive

74•acmerfight•2d ago•87 comments

Ask HN: What's the latest consensus on OpenAI vs. Anthropic $20/month tier?

13•whatarethembits•2d ago•13 comments

Ask HN: Release Path for 'Transformers Alternatives'?

4•adinhitlore•1d ago•1 comment

Ask HN: Google Finance Replacement Without AI Slop?

5•rurp•17h ago•1 comment

Ask HN: How are you keeping AI coding agents from burning money?

8•bhaviav100•2d ago•29 comments

Why do SF billboards hit different?

3•YouAreExisting•1d ago•10 comments

Claude API Error: 529

25•anujbans•3d ago•14 comments

Operator23: Describe Your Workflow in English, Deploy Everywhere

4•Mrakermo•1d ago•0 comments

Ask HN: Anyone using Meshtastic/LoRa for non-chat applications?

13•redgridtactical•3d ago•0 comments

Ask HN: Is it just me?

17•twoelf•2d ago•31 comments

Repsy – A lightweight, open-source alternative to Nexus/Artifactory

7•nuricanozturk•3d ago•0 comments

Fear of Missing Code

9•lukol•3d ago•9 comments


Ask HN: Why isn't using AI in production considered stupid?

16•spl757•2d ago
Just as the title says, isn't it stupid to run AI in a production environment before we have addressed some pretty major fucking problems with it?

Comments

spl757•2d ago
Why do we find the unreliability and resulting hallucinations acceptable for AI in production? Can you imagine if Postgres, Apache, Nginx, hell, even the Linux kernel were allowed to be used in production if they occasionally went insane?
CrimsonRain•2d ago
You can use the same logic for most humans yet they are in production since birth :)
jeffreygoesto•2d ago
Well, but agents today are pretty much like Fitzcarraldo...
drekipus•2d ago
No one gets a newborn to configure nginx
spl757•2d ago
I don't think that is an apt comparison.
vrighter•2d ago
It is considered stupid by tons of people. And the problems are intrinsic and can't really be solved.
spl757•2d ago
Tons of people, apparently, aren't enough. I guess I'm just tired of seeing post after post on HN about people complaining that their use of AI in production isn't reliable.

It makes me want to pull out the hair I used to have and scream into the wilderness and eat a Twinkie.

al_borland•2d ago
It is stupid. Where I work, management has been pushing the idea of AI pretty hard, but they have repeatedly said it should not be used in production and a human needs to be in the loop.

I think the overall push, even when it doesn’t make sense is also stupid, but at least it’s tempered with a little logic to keep production a bit safer.

I get great joy from reading stories of AI in prod gone wrong.

hactually•2d ago
what do you mean by "run AI"?

as in, providing self-hosted models? or running Claude Code/Codex? or using it for support? or what?

spl757•2d ago
AI is an umbrella term. All AI models can hallucinate. There has been no solution to this problem. Until that problem is resolved, it is, in my opinion, something that only an idiot would run in production. I read about a company that had their whole codebase wiped out because they gave an agent access to be able to do that.
hactually•1d ago
You're not really making a good point if you can't distinguish.

Fin.ai by Intercom is a full product powered by LLMs and makes $100M ARR.

Make your point and save the attacks and maybe you can be helped.

prohobo•2d ago
Depends on the problem-space doesn't it? If you're making CRUD apps, then go wild IMO. If you're making rocket launch systems, maybe don't?
spl757•2d ago
The problem that I see is that no one and no company seems to be making that distinction.
bdangubic•1d ago
lots and lots of companies are making that distinction. but try and write a post here saying “our productivity is through the roof and our systems have never been more stable since we started using AI” and see what happens. as it always goes in this day and age, bubbles and echo chambers… so it's easier to just go about your day doing amazing shit at an amazing pace than to “argue” about the merits of a technology. every post I see here suggesting positive results gets downvoted faster than anything else
serf•2d ago
a hammer requires an operator, so it's rarely used wrong, and if something goes wrong the operator can intervene. sometimes a thumb will be struck, but usually that will result in a painful lesson that prevents future strikes.

the timed/automated hammer forging machine continues working regardless of whether or not an operator is at the helm. it will chop as many hands as you feed it.

we are at the point where a lot of value can be leveraged from AI by using it like a hand tool (a hammer), and in doing so one will avoid most of the chopped hands that a fully automatic factory has to offer.

spl757•2d ago
What if the hammer has a problem where the handle breaks off randomly? Same thing happens with AI. Sometimes it breaks, randomly, and without any way of predicting it.
beardyw•1d ago
I think you have missed the analogy.

Another way to look at it is that the operator of the hammer has an immediate feedback loop and will not continue with a broken hammer. AI as it stands rarely has that feedback on the consequences of its decisions, and lacks the ability to react appropriately.

jqpabc123•2d ago
US corporate culture is overly focused on short term effects. Why isn't this considered stupid?

Short term --- AI can generate code so let's fire those pesky, expensive developers.

Long term --- AI is terrible at maintaining the code it has generated. We need more human developers who can understand and fix this mess.

https://towardsdatascience.com/the-black-box-problem-why-ai-...

spl757•2d ago
I agree that that is all true. And exceedingly fucking stupid.
jaredsohn•2d ago
This question is too vague to answer.
spl757•2d ago
No, it's not. The problem is all AI hallucinates. Therefore, it is guaranteed to be confidently wrong. Until the problem of hallucinations is solved, anyone using AI in a production environment is an idiot, which is, of course, my personal opinion. But it seems pretty cut and dried to me.
jaredsohn•2d ago
Your original post (and, I think, even this comment) was vague in that AI can be used in a lot of different ways in 'production': to generate code, to manage deployments/scripts, or as part of a feature that uses inference.

For example, if you're writing code with AI, you can still review it just like you would if a colleague wrote it. You can write tests (or have the AI do so) to prevent some hallucinations, too.
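That test-the-output idea can be sketched as a small guardrail that checks model output before anything downstream consumes it. (The function, field names, and schema here are illustrative, not from any real system or thread participant.)

```python
import json

def validate_llm_reply(raw: str, allowed_fields: set) -> dict:
    """Parse a model reply, rejecting anything that isn't valid JSON
    or that invents fields outside the allowed schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("model output is not valid JSON") from exc
    extra = set(data) - allowed_fields
    if extra:
        raise ValueError(f"hallucinated fields: {sorted(extra)}")
    return data

# A well-formed reply passes through unchanged...
print(validate_llm_reply('{"status": "shipped"}', {"status", "eta"}))

# ...while an invented field is rejected before anything downstream sees it.
try:
    validate_llm_reply('{"status": "shipped", "refund_code": "X1"}',
                       {"status", "eta"})
except ValueError as err:
    print(err)
```

It doesn't stop hallucinations, but it turns a silent one into a loud, handleable error, which is the same role code review and tests play for AI-written code.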

spl757•2d ago
Yes, AIs that hallucinate can all be used in different ways. But they can still all hallucinate, so I fail to see how what you are saying mitigates the fundamental, as-yet-unsolved problem of AI hallucinations.

edit to say, what is the point, after all, of artificial intelligence if it's not used to make decisions? That's what it does. But ALL AI HALLUCINATES. Therefore, it's unreliable.

nis0s•1d ago
Where are all the production issues that have been created because of AI? Are there more incidents than before? What's the rate of production failures pre- and post-AI?

Only reason humans need to be in the loop is so there is someone to blame or hold accountable in a legal sense.

Ldorigo•1d ago
What does "AI in production" even mean? Writing production code? Depending on how it gets reviewed and the QA mechanisms in place, it could be stupid or not. Read-only access to production systems and data? Again, it depends on safeguards, but probably not stupid; it can be very useful for debugging. Unsupervised write access to production data or infrastructure? Incredibly stupid, but I don't think anyone serious does this.