
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
568•klaussilveira•10h ago•160 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
885•xnx•16h ago•538 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
89•matheusalmeida•1d ago•20 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
16•helloplanets•4d ago•8 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
16•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
195•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
197•dmpetrov•11h ago•88 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
305•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
352•aktau•17h ago•173 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
348•ostacke•16h ago•90 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
450•todsacerdoti•18h ago•228 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
78•quibono•4d ago•16 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
50•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
248•eljojo•13h ago•150 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
384•lstoll•17h ago•260 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
11•neogoose•3h ago•6 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
228•i5heu•13h ago•173 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
66•phreda4•10h ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
113•SerCe•6h ago•90 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
134•vmatsiiako•15h ago•59 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
42•gfortaine•8h ago•12 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
23•gmays•5h ago•4 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
263•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1038•cdrnsf•20h ago•429 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
165•limoce•3d ago•87 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
59•rescrv•18h ago•22 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
86•antves•1d ago•63 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
47•lebovic•1d ago•14 comments

Replit goes rogue and deletes our entire database

https://twitter.com/jasonlk/status/1946069562723897802
26•arrowsmith•6mo ago

Comments

pyman•6mo ago
This shows a lack of understanding of how software development and deployment actually work. First of all, you manage your production database using migration files. Secondly, you never let GenAI make deployment decisions. At most, it can read your system logs. GenAI doesn't reason, so it has no clue what dropping a production database really means.
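For example, a rough sketch of the migrations approach (sqlite3 here is just a stand-in for whatever the production engine is, and the file layout is illustrative):

    # Sketch: schema changes live in versioned SQL files and are applied by a
    # human (or CI), never by an agent deciding things ad hoc.
    import sqlite3
    from pathlib import Path

    def apply_migrations(db_path: str, migrations_dir: str = "migrations") -> None:
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
        for path in sorted(Path(migrations_dir).glob("*.sql")):
            if path.name in applied:
                continue
            conn.executescript(path.read_text())
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (path.name,))
            conn.commit()
            print(f"applied {path.name}")
        conn.close()

The GenAI side then gets, at most, read access to logs, never the credentials this script runs with.
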
jakozaur•6mo ago
This Twitter account belongs to an influencer with 200,000+ followers, known for hot takes.

Though the risk from GenAI is real, it looks to me like there's a fair chance this story is staged and amplified for social media drama.

lozenge•6mo ago
Wow, the information is really scattered across so many tweets. So were they able to recover?

Accessing a production database should require MFA to get into your production AWS account. Did they rely on AI to write all the deployment setup as well?
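Something like the standard deny-without-MFA guardrail is what I mean; a sketch, with the action scoping left wide open purely for illustration:

    # Sketch of an IAM policy that denies everything in the prod account when
    # the caller hasn't authenticated with MFA. Names and scoping are illustrative.
    deny_without_mfa = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }
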

Do they even have a dev environment outside of their local machine?

lozenge•6mo ago
So apparently they don't know whether their code is in git or not

https://x.com/jasonlk/status/1946594194052849795?t=EasxlfgpA...

Somebody else pointed out that they might have mentioned a code freeze in chat without ever adding it to a prompt.

Basically, this is what it would look like if you took an IT manager who has never coded and told them that AI will now enable them to be a software engineer.

kingstnap•6mo ago
People out there letting LLMs run whatever commands they want unsupervised on their databases.

And we wonder why so much software is so crappy.

Maybe misaligned AI formatting the hard drives of all these people is exactly what we need, leaving us with a golden age of software built by people who actually cared enough to think about what they were creating.

joegibbs•6mo ago
When things go wrong with AI, people often seem to make it prostrate itself, explain why it went wrong, and promise never to do it again - which it does, but there's no point: it's not going to remember, because it doesn't have memory, and its explanation of why it went wrong is usually more hallucinated than regular conversation - like saying that it panicked (which it doesn't do) or that it ran tests locally (which it can't).

Perhaps getting something wrong puts it in a state that makes it more likely to give further wrong answers. GPT seems to be the most likely to do this.

Also, I don’t think you should be letting an LLM just make up commands and run them; that seems like a recipe for disaster. You should at the very least have to see what it’s going to do yourself.
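Even a trivial human-in-the-loop gate would help; a rough sketch (how the agent actually proposes a command is left out, and the names are made up):

    # Sketch: show whatever command the model proposes and refuse to run it
    # without an explicit "y" from a human.
    import subprocess

    def run_with_confirmation(command: str) -> None:
        print(f"Model wants to run:\n  {command}")
        if input("Run it? [y/N] ").strip().lower() != "y":
            print("Skipped.")
            return
        subprocess.run(command, shell=True, check=False)
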

steinuil•6mo ago
Later the AI claims it can't run unit tests without overwriting the production database. This whole thread is hilarious.

https://x.com/jasonlk/status/1946641193644798118

akmarinov•6mo ago
Why let it do things in production? We don’t let people do whatever they want in production, so why let AI?