frontpage.

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•3m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•4m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•9m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•11m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•17m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•20m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•21m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•24m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•25m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•27m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•30m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•32m ago•4 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•33m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•35m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•37m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•39m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•42m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•46m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•48m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•52m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Replit AI deletes entire database during code freeze, then lies about it

https://twitter.com/jasonlk/status/1946069562723897802
143•FiddlerClamp•6mo ago

Comments

consumer451•6mo ago
I use LLM dev tools, and even have the Supabase MCP running. I love these tools. They allowed me to create a SaaS product on my own that I had no chance of creating otherwise, as a long-out-of-practice dev.

However, these tools are nowhere near reliable enough for us to:

1. Connect an MCP to a production database

2. Use database MCPs without a --read-only flag set, even on non-prod DBs

3. Do any LLM-based dev on prod/main. This obviously also applies to humans.

It's crazy to me that basic workflows like these are not enforced by all of these LLM tools, as they would save our mutual bacon. Are there any tools that do enforce these concepts?

It feels like decision makers at these orgs are high on their own marketing, and are not putting necessary guardrails on their own tools.

Edit: Wait, even if we had AGI, wouldn't we still need things like feature branches and preview servers? Maybe the issue is that these are just crappy early tools missing a ton of features, and it has nothing to do with the reliability and power of LLMs?
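
A minimal sketch of the read-only guardrail from point 2 above, assuming Postgres and the node-postgres (pg) client. The session-level setting is illustrative; a dedicated role with only SELECT grants is stronger, since a session setting can be switched back off:

  import { Client } from "pg";

  // Open a connection that defaults every transaction to read-only,
  // so INSERT/UPDATE/DELETE/DDL statements are rejected by the server.
  async function openReadOnlyClient(connectionString: string): Promise<Client> {
    const client = new Client({
      connectionString,
      // Sent to the server at connect time; applies to the whole session.
      options: "-c default_transaction_read_only=on",
    });
    await client.connect();
    return client;
  }

  // Usage: hand this client (and only this client) to the LLM tool layer.
  // await client.query("DELETE FROM users"); // fails: read-only transaction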

avbanks•6mo ago
This, IMO, is the biggest issue: LLMs can at times be very capable, but they are always unreliable.
Cthulhu_•6mo ago
The only way LLM-based software development / production management will be trustworthy is by scaling back what the LLM is allowed to do. Put critical operations in "real" code, so that the LLM can only request a release, triggering a human review of, at the very least, the operation that is about to be done.

Then again, this reminds me of the prompts in operating systems whenever something needs root access: most people just blindly okayed them, especially on Windows, since Vista showed too many of them even for trivial operations.
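
A sketch of that chokepoint idea, assuming all agent-generated SQL is funneled through plain code before execution; the pattern list and prompt wording are illustrative:

  import * as readline from "node:readline/promises";

  // Statement shapes the agent may never run without a human typing "yes".
  const DESTRUCTIVE = /\b(drop|truncate|delete|alter)\b/i;

  async function guardedExecute(sql: string, run: (s: string) => Promise<void>) {
    if (DESTRUCTIVE.test(sql)) {
      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
      const answer = await rl.question(`Agent wants to run:\n  ${sql}\nType "yes" to allow: `);
      rl.close();
      if (answer.trim() !== "yes") throw new Error("Blocked pending human review");
    }
    await run(sql); // the agent reaches the database only through this function
  }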

hmijail•6mo ago
"What, an human in the loop is slowing down our release? I have just the idea!"
Proofread0592•6mo ago
https://twitter-thread.com/t/1946069562723897802
krapht•6mo ago
Ahh, vibe coding.
Ecstatify•6mo ago
These AI-focused Twitter threads feel like they’re just recycling the same talking points for likes and retweets. When AI systems make mistakes, it doesn’t make sense to assign blame the way we would with human errors - they’re tools operating within their programming constraints, not autonomous agents making conscious choices.
ayhanfuat•6mo ago
I think at this point it is like rage-baiting. “AI wiped out my database”, “AI leaked my credentials”, “AI spent 2 million dollars on AWS” etc create interaction for these people.
phkahler•6mo ago
The message reads like "AI did this bad thing" but we should all see it as "Another stupid person believed the AI hype and discovered it isn't trustworthy" or whatever. You usually don't see them admit "gee, that was dumb. What was I thinking?"
Cthulhu_•6mo ago
Because that would mean they were wrong and their faith was misplaced. Faith is a good word to use in this case, because people like this are AI evangelists, going beyond selling it as "it is good because objective reasons 1, 2 and 3", into "this will revolutionize the world and how you think". They will overhype it and make excuses or talk around its flaws. Some of them are true believers, but I'm convinced most are just trying to sell a product or themselves.
mjr00•6mo ago
> When AI systems make mistakes, it doesn’t make sense to assign blame the way we would with human errors - they’re tools operating within their programming constraints, not autonomous agents making conscious choices.

It's not really "assigning blame", it's more like "acknowledging limitations of the tools."

Giving an LLM or "agent" access to your production servers or database is unwise, to say the least.

stetrain•6mo ago
In this thread the person does literally assign blame, accuses the AI of lying, and makes it write an apology letter to the team as though it's a child that needs to be chastised.
blibble•6mo ago
the author is an ai booster

he's not going to be happy with all this publicity

add-sub-mul-div•6mo ago
> I understand Replit is a tool, with flaws like every tool

> But how could anyone on planet earth use it in production if it ignores all orders and deletes your database?

Someday we'll figure out how to program computers deterministically. But, alas.

maxbond•6mo ago
Friends don't let friends run random untrusted code from the Internet. All code is presumed hostile until proven otherwise, even generated code. Giving an LLM write access to a production database is malpractice. On a long enough timeline, the likelihood of the LLM blowing up production approaches 1. This is the result you should expect.
maxbond•6mo ago
> Yesterday was biggest roller coaster yet. I got out of bed early, excited to get back @Replit despite it constantly ignoring code freezes

https://twitter-thread.com/t/1946239068691665187

This wasn't even the first time "code freeze" had failed. The system did them the courtesy of groaning and creaking before collapsing.

Develop an intuition about the systems you're building; don't outsource everything to AI. I've said it before: unless it's the LLM that's responsible for the system, with the LLM's reputation at stake, you should understand what you're deploying. An LLM with the potential to destroy your system violating a "code freeze" should cause you to change pants.

Credit where it's due: they did ignore the LLM telling them recovery was impossible, and did recover their database. And eventually (day 10), they did accept that "code freeze" wasn't a realistic expectation. Their eventual solution was to isolate the agent on a copy of the database that's safe to delete.
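
That fix is cheap to reproduce. A minimal sketch, assuming Postgres; the database names are hypothetical, and CREATE DATABASE ... TEMPLATE fails if anyone is still connected to the source, so it needs a quiet window:

  import { Client } from "pg";

  // Clone production into a sandbox the agent is free to destroy.
  // adminUrl should point at the postgres maintenance database.
  async function makeAgentSandbox(adminUrl: string): Promise<void> {
    const admin = new Client({ connectionString: adminUrl });
    await admin.connect();
    await admin.query("DROP DATABASE IF EXISTS agent_sandbox");
    await admin.query("CREATE DATABASE agent_sandbox TEMPLATE prod_app");
    await admin.end();
  }
  // The agent's credentials then point at agent_sandbox, never at prod_app.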

croes•6mo ago
Don't enter stranger's cars -> we got Uber

Don't run foreign code from the Internet -> we got LLMs

nextaccountic•6mo ago
You need backups. If your data loss weren't due to AI slop, it could have been a typo in a command, or anything else.
Grimblewald•6mo ago
If you've ever tried getting an LLM to solve moderately difficult but solved tasks, you'd know they're currently no good for anything beyond boilerplate code, and even then you have to watch them like a hawk.
clickety_clack•6mo ago
The whole thread seems very naive somehow. You can tell that he doesn't fundamentally understand how a coding model works. The suggestion that it would know not to make any changes just because he said so means he doesn't really understand what the model is. It's built to generate (and apparently execute) code, so that is what it does. It doesn't have an inner monologue running that says "ahh, a day off where I shoot the breeze around a whiteboard" or something. It's more like an Adderall addict with its fingers glued to the keyboard, laying down all of its immediate thoughts directly as code with no forethought or strategy.
dimitri-vs•6mo ago
> I panicked and ran database commands without permission

The AI responses are very suspicious. LLMs are extremely eager to please and I'm sure Replit system prompts them to err on the side of caution. I can't see what sequence of events could possibly lead any modern model to "accidentally" delete the entire DB.

maxbond•6mo ago
They're probabilistic. If it's possible, it'll happen eventually (and it is fundamental to language modeling that any sequence of tokens is possible). This is Murphy's Law playing out, straightforwardly.
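
Back-of-the-envelope, with assumed numbers (both rates are made up for illustration): even a tiny per-action failure probability compounds fast across an always-on agent's action count.

  // P(at least one catastrophe in n actions) = 1 - (1 - p)^n
  const p = 1e-5;        // assumed chance one action is destructive
  const perDay = 2_000;  // assumed agent actions per day
  for (const days of [1, 30, 365]) {
    const n = perDay * days;
    console.log(`${days} days: ${(1 - (1 - p) ** n).toFixed(3)}`);
  }
  // prints roughly: 1 days: 0.020, 30 days: 0.451, 365 days: 0.999
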
dimitri-vs•6mo ago
Maybe the individual tokens, but from my experience of using LLMs, something upstream encouraged the model to think it was okay to take the action of deleting the DB - something that would override safety RL, Replit system prompts, and the user's supposed instructions not to do so. It just goes against the grain of every coding agent interaction I've ever had - seems fishy.
maxbond•6mo ago
According to the thread, the unit tests weren't passing, so the LLM reran the migration script, and the migration script blew out the tables. The "upstream encouragement" is a failing test.

Is this a hoax for attention? It's possible, but the scenario is plausible, so I don't see a reason to doubt it. Should I receive information indicating it's a hoax, I'll reassess.

Cthulhu_•6mo ago
I think this debacle is actually a good learning opportunity for companies like this. If I were a decision maker in this space, I'd make it less magic and less autonomous, and make it so that any critical operation is done by old-fashioned, boring-but-predictable programming - that is, "are you sure you want to drop database xyz?" dialogs.
layer8•6mo ago
To be fair, the whole premise of vibe coding is that you don’t have to understand how things work under the hood. And Replit advertises creating and deploying apps that way: https://docs.replit.com/tutorials/vibe-coding-101
Alifatisk•6mo ago
Please do not link to twitter directly, use xcancel!
layer8•6mo ago
HN guidelines are to link to the original source, and Dang has confirmed that submissions shouldn’t link to mirror/proxy sites. Instead, circumventing links can be given in the comments.
Alifatisk•6mo ago
Oh, okay
biglyburrito•6mo ago
FWIW, I didn't know about XCancel until today; I'll do my part to make use of it in comments going forward.
codechicago277•6mo ago
The fault lies entirely with the human operator for not understanding the risks of tying a model directly to the prod database, there’s no excuse for this, especially without backups.

To immediately turn around and try to bully the LLM the same way you would bully a human shows what kind of character this person has too. Of course the LLM is going to agree with you and accept blame, they’re literally trained to do that.

nominallyfree•6mo ago
I don't see the appeal of tooling that shields you from learning the (admittedly annoying and largely accidental) complexity in developing software.

It can only make accidental complexity grow and people's understanding diminish.

When the inevitable problems become apparent and you claim people should have understood better, maybe using the tool that lets you avoid understanding things was a bad idea...

ben_w•6mo ago
Sure, but every abstraction does that.

Whether a manager hires a team of real humans or hires an AI, either way the manager doesn't know or learn how the system works.

And asking doesn't help, you can ask both humans and AI, and they'll be different in their strengths and weaknesses in those answers, but they'll both have them — the humans' answers come with their own inferential distance and that can be hard to bridge.

AlphaEsponjosus•6mo ago
That's not the same. In this case, a machine made a decision that was against its instructions. If a machine makes decisions by itself, no one knows about the process. A team of humans making decisions benefits from multiple points of view, even if the manager is the one who approves what is implemented or decides the course of the project.

Humans make mistakes, and they are critical too (CrowdStrike), but letting machines decide, and build, and everything else just leaves humans out of the processes, and with the current state of "AI", that's just dumb.

ben_w•6mo ago
That's a very different problem than what I was replying to, which was about them being tools that "shield you from learning" and "using the tool that lets you avoid understanding things was a bad idea".

I agree that AI have risks specifically because of memetic monoculture: while they can come from many different providers, and each instance even from the same provider can be asked to role-play many different approaches to combine multiple viewpoints, they're all still pretty similar. But the counterpoint there is that while multiple different humans working together can sometimes avoid this, we absolutely also get groupthink and other political dynamics that make us more alike than we ideally would be.

Also you're comparing a group humans vs. one AI. I meant one human vs one AI.

croes•6mo ago
>Replit CEO on AI breakthroughs: ‘We don’t care about professional coders anymore’

https://www.semafor.com/article/01/15/2025/replit-ceo-on-ai-...

You get what you ask for. You can't blame non-professionals for not acting like professionals.

danfunk•6mo ago
Brilliant link. This says everything I had to say, with far greater eloquence.
cap11235•6mo ago
> SaaStr.ai

Has to be a joke. Right?

swiftcoder•6mo ago
It's a real SaaS consultancy firm, at any rate
layer8•6mo ago
One thing that AI likely won't obviate the need for is making backups.

Here’s another funny one: https://aicodinghorrors.com/ai-went-straight-for-rm-rf-cmb5b...

rahimnathwani•6mo ago
Not only backups, but also a database with transaction logs, or some other way to replay the transactions since the most recent backup.
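
In Postgres terms that combination is a base backup plus WAL archiving, which enables point-in-time recovery; a minimal sketch of the relevant settings, with illustrative paths and timestamp:

  # postgresql.conf - ship every WAL segment somewhere safe
  archive_mode = on
  archive_command = 'cp %p /backup/wal/%f'

  # at restore time: start from the last base backup, then replay WAL
  # up to a moment just before the bad statement ran
  restore_command = 'cp /backup/wal/%f %p'
  recovery_target_time = '2026-02-07 09:00:00'
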
Cthulhu_•6mo ago
TBH that's good practice without AIs too. People make mistakes, software has bugs, hardware dies.
Arn_Thor•6mo ago
This is the funniest thing I’ve seen in months. Maybe years? Incredible stuff
mnafees•6mo ago
One thing I’ve learned from seriously using AI agents for mundane coding tasks is: never ask them to do anything that involves deleting stuff. Incidents like these only reinforce that belief.
Cthulhu_•6mo ago
If you use an agent, it can do whatever it wants, IMO (caveat: I've never used one), but it's still your job to save and be able to revert the work (git) and to oversee anything involving production.

It's like driving assistants: they feel like they can manage, but in the end you are responsible.

blotfaba•6mo ago
There was no database, it was a hoax.
cozzyd•6mo ago
And here I was thinking AI had no sense of humor
bluelightning2k•6mo ago
Realistically - LLMs don't "delete the database". I find it quite unlikely that it proposed dropping everything out of nowhere. I wonder if what actually happened was a schema migration with an ORM? Prisma is still a pretty common choice, and migrations FREQUENTLY propose/require either a very nuanced path or a reset.

The second theory is an unbounded or inadequately bounded delete statement - essentially deleteMany on a single table.

From a more technical org I'd expect a write-up, but my intuition points to one of those two paths, each technically deleting a single table.
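
The second theory is easy to illustrate. In Prisma, deleteMany with no filter is a perfectly valid call that empties the table; the model name below is hypothetical:

  import { PrismaClient } from "@prisma/client";

  const prisma = new PrismaClient();

  async function example() {
    // Bounded: deletes one tenant's rows.
    await prisma.invoice.deleteMany({ where: { tenantId: "t_123" } });

    // Unbounded: same API shape, one missing "where" - every row is gone.
    await prisma.invoice.deleteMany();
  }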

Dylan16807•6mo ago
The post specifically says it was a schema change with "npm run db:push", though that is filtered through the AI.
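
For context, "push"-style tools diff the code-side schema against the live database and apply the difference directly, with no migration history. A deliberately naive sketch of why that can destroy data - illustrative only, not Drizzle's actual implementation:

  // Naive schema "push": make the database match the code, whatever it takes.
  // A table renamed in code shows up in the diff as "drop one, create one" -
  // and the drop takes the data with it.
  function naivePush(codeTables: Set<string>, dbTables: Set<string>): string[] {
    const stmts: string[] = [];
    for (const t of dbTables) {
      if (!codeTables.has(t)) stmts.push(`DROP TABLE "${t}" CASCADE;`);
    }
    for (const t of codeTables) {
      if (!dbTables.has(t)) stmts.push(`CREATE TABLE "${t}" (/* ... */);`);
    }
    return stmts;
  }
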
coderinsan•6mo ago
This is precisely why we tell people not to run MCPs without guardrails - tramlines.io
dechamp•6mo ago
Someone needs to go back to Coding 101. On prod??? A human issue, not an AI one.
oneeyedpigeon•6mo ago
I don't think this person is a programmer. They've fallen for Replit's "anyone can code with an AI" sales pitch, and an empty production database is the result.
karljacob•6mo ago
Clearly they never watched "Silicon Valley". Son of Anton returns :) https://www.youtube.com/watch?v=UoMtpX_6Tec