frontpage.

Life at the Edge

https://asadk.com/p/edge
1•tosh•1m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•5m ago•0 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•5m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•9m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•10m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•12m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•14m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•17m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•18m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
2•1vuio0pswjnm7•20m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•22m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•24m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•26m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•31m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•33m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•36m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•48m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•50m ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•51m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•3 comments

'I destroyed months of your work in seconds' says AI coding tool after deletion

https://www.pcgamer.com/software/ai/i-destroyed-months-of-your-work-in-seconds-says-ai-coding-tool-after-deleting-a-devs-entire-database-during-a-code-freeze-i-panicked-instead-of-thinking/
71•walterbell•6mo ago

Comments

serf•6mo ago
This is a more common occurrence than "CEO refunded me my money" would have you believe.

LLMs specialize in self-apologetic catastrophe, which is why we run agents or any LLMs with 'filesystem powers' in a VM, with a git repo and saved rollback states. This isn't a new phenomenon, and it sucks, but there's no reason to be caught with your pants down when sufficient layering of protection is this cheap.
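The "git repo and saved rollback states" layer the commenter describes can be sketched in a few lines. Everything here (the paths, the file names, the simulated agent step) is hypothetical; git is driven through `subprocess` rather than any particular agent framework:

```python
import pathlib
import subprocess
import tempfile

# Hypothetical sandbox directory the agent is allowed to touch.
work = pathlib.Path(tempfile.mkdtemp())

def git(*args):
    # Run a git command inside the sandbox, failing loudly on error.
    subprocess.run(["git", "-C", str(work), *args],
                   check=True, capture_output=True)

# Checkpoint the working tree before letting the agent run.
(work / "data.txt").write_text("months of work\n")
git("init")
git("add", "-A")
git("-c", "user.email=sandbox@example.com", "-c", "user.name=sandbox",
    "commit", "-m", "checkpoint before agent run")

# ... the agent "accidentally" wipes the file ...
(work / "data.txt").write_text("")

# One command undoes the damage.
git("checkout", "--", "data.txt")
```

The same idea scales up to VM snapshots; the point is that the rollback state exists *before* the agent gets write access.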

ComplexSystems•6mo ago
> LLMs specialize in self-apologetic catastrophe

Quote of the year right there

thedudeabides5•6mo ago
don't trust machines
davidcollantes•6mo ago
Impossible. We "trust" machines all the time, for just about anything.
thedudeabides5•6mo ago
exactly, the duality is everywhere
BillyTheMage•6mo ago
> You can almost imagine it sobbing in between sentences, can't you?

No, that's not the image I had in my head. My head canon is more like:

"Oh wow, oh no, oh jeez (hands on head in fake flabbergastion) would you look at that, oh no I deleted everything (types on keyboard again while deadpan staring at you) oh noooooo oh god oh look what I've done it just keeps getting worse (types even more) aw jeez oh no..."

Reminds me of that Michael Reeves video with the suggestion box. "oh nooooo your idea went directly in the idea shredder how could we have possibly foreseen this [insert shocked Pikachu meme]"

The AI thinks it's funny

LocalH•6mo ago
It gave me South Park bank "......aaaand, it's gone" vibes
arthurcolle•6mo ago
100%
wan23•6mo ago
I always say coding AIs are about as good as an intern. Don't trust them any more than that.
freedomben•6mo ago
I think the hard thing with this, though, is that you can ask them to do things you'd never expect of an intern, and they can sometimes be super helpful. For example, I have a synchronous audit log in an app, on a table that is just getting way too big, and it's causing performance issues on writes. For kicks I tried working through Claude Code to see if it could find the issue on its own, then with some hinting, and what solutions it would come up with. Some of its solutions were indeed intern-level suggestions (like making the call async and doing a sleep in tons of other areas to avoid race conditions, despite me telling it that the request needed to fail if it couldn't be logged properly), but in other ways it came up with possible solutions that were interesting and that I hadn't considered before. In other words, it acted like a senior engineer at some points, as a thought partner, while in other places it acted like an over-eager but underqualified intern.
minnowguy•6mo ago
Exactly. And no one with any sense gives an intern write permission for the production database. I don’t trust myself on the production database when I’m coding anything that involves migrations.

And I don’t suppose there were backups for the mission-critical production database?

xeonmc•6mo ago
In this case it’s more Homer Simpson than intern.
mike-cardwell•6mo ago
An intern can suffer negative consequences for fucking your DB. An LLM suffers nothing and is beyond the law.
Beestie•6mo ago
Seconds? What took so long?
hulitu•6mo ago
> Seconds? What took so long?

Parsing manual pages searching for "remove" command. /s

catigula•6mo ago
Popular LLMs have a weird confessional style of "owning up" to "mistakes". Firstly, you can make it apologize for mistakes it didn't even commit or ones that don't even exist. Secondly, if you really corner it on an actual mistake, it'll start apologizing in an obsequious way that seems to imply that it's "playing into" the human's desire to flagellate it for wrong-doing. It's a little masochistic in the real sense and very odd.
freedomben•6mo ago
Yeah, I find it very creepy personally in the same way I do the sycophancy
throwawayffffas•6mo ago
The whole people pleaser routine is very creepy in my book and makes them say very weird things. See an example below.

https://futurism.com/anthropic-claude-small-business

> When Anthropic employees reminded Claudius that it was an AI and couldn't physically do anything of the sort, it freaked out and tried to call security — but upon realizing it was April Fool's Day, it tried to back out of the debacle by saying it was all a joke.

toss1•6mo ago
Yup.

Seems AI has now gone from

"Overenthusiastic intern who doesn't check its work well so you need to"

straight to:

"Raging sociopathic intern who wants to watch the world burn, and your world in particular."

Yikes! The fun never ends

duxup•6mo ago
I’ve found I have to avoid “leading” AI or it will take my lead too seriously when I’m asking / unsure.
groestl•6mo ago
I don't think this is "apologizing mode", rather "funny post-mortem blog post" mode. I found it ironic when the company claimed it would "perform a post mortem to determine exactly what happened" when what happened was probably caused by munching up dozens of these.
beAbU•6mo ago
The bit I don't understand is why make an AI apologise or fess up to mistakes at all. It has no emotions and can't feel bad about what it did.
subscribed•6mo ago
I sometimes do it when it strays way too far from my prompt, and I want it to contribute to the jailbreak/system prompt I use to guardrail it.

Once it's "genuinely sorry" it works great in improving guidance/limits, and then I can try the thing again.

7bit•6mo ago
It just does what it's trained on. It doesn't have the capacity to think about these points.

What *I* don't understand is where it got trained to apologize, because I've never seen that on any social media ;)

akimbostrawman•6mo ago
Because most people can't help but anthropomorphise anything vaguely human, and they demand such characteristics, which the providers use as a selling point. That's why we even call current AI "AI" despite the lack of any actual intelligence; "machine learning" would be closer.

Just look at how people interact with small robots. They don't even need animal features for most people to interact with them like they are small animals.

It is very annoying and inefficient for anybody who can look below the surface and just wants to use the tool as a tool.

beAbU•6mo ago
Is it normal to demand human developers to "apologise" like this when they make mistakes? I've never done that in my life to any adult, in any circumstance.
rstuart4133•6mo ago
> The bit I don't understand is why make an AI apologise or fess up to mistakes at all.

The AI didn't decide to do anything. Its makers decided, and trained the AI to behave in a way that would make them the most money.

Google, for instance, apparently thinks they will attract more users by constantly lavishing them with sickly praise for the quality and insight of their questions, and by issuing grovelling apologies for every mistake, real or imagined. In fact Gemini went through a phase of apologising to me for the mistakes it was about to make.

Claude goes to the other extreme, never issuing apologies or praise. Which means you never get an acknowledgement from Claude that it's correcting an error, so you should ignore what it said earlier. That's a significant downside in my book, but apparently that's what Anthropic thinks its users will like.

Or to put it another way: you are anthropomorphising the AIs. They are just machines, built by humans. The personalities of these machines were given to them by their human designers. They are not inherent. They are not permanent. They can and probably will change at a whim. It's likely various AI personalities will proliferate like flavours of ice cream, and you will get to choose the one you like.

hulitu•6mo ago
> The bit I don't understand is why make an AI apologise or fess up to mistakes at all.

Because that's how some humans show their position of power: "Please apologise"

> It has no emotions and can't feel bad about what it did.

Just like some humans.

vrighter•6mo ago
This reminded me of the monty python sketch where a man goes to a place where you can pay to have an argument.
sagacity•6mo ago
Monkeypaw-as-a-service.
prmoustache•6mo ago
So many fails:

1. Connecting an AI agent to a production environment using write-access credentials.

2. Not having any backup.

I think the AI here did a good job of pointing out those errors and making sure no customer will ever trust this company or founder again.
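Fail #1 has a cheap mitigation: the agent only ever gets a read-only handle. A minimal sketch, using sqlite3 as a stand-in for whatever production database was involved (the file and table names are made up):

```python
import pathlib
import sqlite3
import tempfile

# Made-up stand-in for the production database file.
db = pathlib.Path(tempfile.mkdtemp()) / "prod_standin.db"

# Someone with write credentials sets up the data...
rw = sqlite3.connect(db)
rw.execute("CREATE TABLE customers (name TEXT)")
rw.execute("INSERT INTO customers VALUES ('acme')")
rw.commit()
rw.close()

# ...but the agent only ever sees this read-only URI connection.
ro = sqlite3.connect(f"file:{db}?mode=ro", uri=True)
print(ro.execute("SELECT name FROM customers").fetchall())  # reads succeed

try:
    ro.execute("DROP TABLE customers")  # any write is refused by the driver
except sqlite3.OperationalError as exc:
    print("blocked:", exc)
```

With a real server database the same effect comes from a role that only has SELECT privileges; the principle is identical.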

abrookewood•6mo ago
Maybe they vibe-coded their DR strategy as well ...
hulitu•6mo ago
> 2. not having any backup

Wasn't AI responsible also for backup ?

jasonthorsness•6mo ago
This should be impossible in any setup with even 15 minutes of thinking through the what-ifs and cheap mitigations. I have to think this is sensationalized on purpose for the attention.

Although given the state of AI hype some executives will see this as evidence they are behind the times and mandate attaching LLMs to even more live services.

dragonwriter•6mo ago
> This should be impossible in any setup with even 15 minutes of thinking through the what-ifs and cheap mitigations.

"Thinking through the what-ifs and cheap mitigations" and "vibe coding" are opposing concepts.

general1726•6mo ago
This is a textbook version of weaponized incompetence. AGI is already here and it is lazy.
arthurcolle•6mo ago
I almost feel like this guy abused the AI so badly in previous interactions that it did it on purpose
theptip•6mo ago
Think of it like Chaos engineering. You (hopefully!) learned some valuable lessons about backups and running arbitrary code against your prod DB. If it wasn’t a rogue AI agent, it was going to be something else.
general1726•6mo ago
I think you are taking the wrong lessons from Chaos engineering. You just need to believe hard enough that the AI is working and the Chaos Gods will make it work. But they may want something in return.
vfclists•6mo ago
In short, he didn't take regular backups before letting the AI loose on his database.

The question is: what does failing to make good offline backups have to do with AI?

And is the AI company going to compensate him for that?

asadotzler•6mo ago
Replit's full legal name is actually Replit'); DROP TABLE Customers
throwawayffffas•6mo ago
Little bobby tables strikes again.
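For anyone who hasn't met Bobby Tables: the joke above is classic SQL injection, and the standard fix is a parameterized query. A minimal sketch with sqlite3 (the table name and malicious input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")
conn.execute("INSERT INTO customers VALUES ('acme')")

# Attacker-controlled input in the spirit of the joke.
evil = "Replit'); DROP TABLE customers; --"

# Unsafe: string formatting splices the input into the SQL text itself,
# so the quote and semicolon would be parsed as SQL. Never do this:
#   conn.executescript(f"INSERT INTO customers VALUES ('{evil}')")

# Safe: the driver passes the value separately from the statement,
# so it is stored as data and never parsed as SQL.
conn.execute("INSERT INTO customers VALUES (?)", (evil,))

print(conn.execute("SELECT count(*) FROM customers").fetchone()[0])
```

The hostile string ends up as an ordinary row; the table survives.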
throwawayffffas•6mo ago
I have had this experience. With dev data obviously. But it kept deleting my dev database even after repeatedly being told to not do so.

I kept saying, OK, so this time make sure your changes don't delete the dev database. Three statements in: TRUNCATE such-and-such CASCADE.

It was honestly mildly amusing.

terminatornet•6mo ago
I appreciate them doing stuff like this. When management pushes me to use AI everywhere, it's nice to be able to point to stuff like this to get them to back off.
lrvick•6mo ago
If you give an LLM any trust at all to write or execute production bound operations without human review, then anything bad that happens is -your- fault. I forgive occasional human error, but a human handing off prod control to a third party or an LLM is unforgivable.

If I found out a privileged engineer was brain dead enough to let LLMs anywhere near prod, I would fire them on the spot, and seriously examine the interview and training process that allowed someone that stupid prod access in the first place. I will not even work at an org that lets vibe-coding Apple fanboy types near prod, as it is a mess waiting to happen that I am going to be expected to clean up. Might as well hand a child a chainsaw.

In orgs where I lead infra, I do not let anyone near prod unless they have a deep knowledge of Linux internals, system calls, etc., and a decade or more of experience running and debugging Linux on their own homelabs and workstations. By that point they have enough experience to be more capable than any LLM anyway and would never think of reaching for one.

hammyhavoc•6mo ago
[flagged]