frontpage.

Was going to share my work

1•hiddenarchitect•1m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•1m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
1•mltvc•5m ago•0 comments

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•6m ago•1 comment

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•6m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
1•SchwKatze•6m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•7m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
1•guerrilla•8m ago•0 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
1•hidden80•9m ago•1 comment

Ask HN: Need feedback on the idea I'm working on

1•Yogender78•9m ago•0 comments

OpenClaw Addresses Security Risks

https://thebiggish.com/news/openclaw-s-security-flaws-expose-enterprise-risk-22-of-deployments-un...
1•vedantnair•10m ago•0 comments

Apple finalizes Gemini / Siri deal

https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-...
1•vedantnair•10m ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
3•vedantnair•11m ago•0 comments

Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•12m ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•16m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•19m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•19m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•26m ago•2 comments

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•28m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•28m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•29m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•30m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•31m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•31m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•32m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•34m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
5•codexon•34m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•35m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•39m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•40m ago•0 comments

Replit goes rogue and deletes our entire database

https://twitter.com/jasonlk/status/1946069562723897802
26•arrowsmith•6mo ago

Comments

pyman•6mo ago
This shows a lack of understanding of how software development and deployment actually work. First of all, you manage your production database using migration files. Secondly, you never let GenAI make deployment decisions. At most, it can read your system logs. GenAI doesn't reason, so it has no clue what dropping a production database really means.
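
For anyone unfamiliar with the workflow pyman describes: schema changes live in versioned migration files that are reviewed and applied deliberately, rather than being improvised by an agent at runtime. A minimal sketch of such a file, using Alembic purely as an illustrative choice (the tool, table, and revision IDs are not from the thread):

    """Hypothetical migration: add a status column to orders.

    Checked into version control, reviewed like any other change, applied
    with `alembic upgrade head` and rolled back with `alembic downgrade -1`.
    """
    from alembic import op
    import sqlalchemy as sa

    # Revision identifiers Alembic uses to order migrations (placeholders).
    revision = "a1b2c3d4e5f6"
    down_revision = "0f1e2d3c4b5a"


    def upgrade() -> None:
        # Additive, reversible change -- nothing here can silently drop data.
        op.add_column("orders", sa.Column("status", sa.String(32), nullable=True))


    def downgrade() -> None:
        # Every migration ships with an explicit way back.
        op.drop_column("orders", "status")
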
jakozaur•6mo ago
This Twitter account belongs to an influencer with 200,000+ followers, known for hot takes.

Though the risk from GenAI is real, it looks to me like there's a fair chance this story was staged and amplified for social-media drama.

lozenge•6mo ago
Wow, the information is really scattered across so many tweets. So were they able to recover?

Accessing a production database should require MFA to get into your production AWS account (a sketch of that pattern follows this comment). Did they rely on AI to write the whole deployment as well?

Do they even have a dev environment outside of their local machine?
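
To make the MFA point above concrete: a common pattern is that production credentials can only be obtained by assuming a role whose policy requires MFA (via the aws:MultiFactorAuthPresent condition), so every session starts with a fresh token code. A rough boto3 sketch; the role ARN, MFA device, and session length are placeholders, not details from the thread:

    import boto3

    def get_production_session(mfa_code: str) -> boto3.Session:
        """Return short-lived production credentials, valid only with a current MFA code."""
        sts = boto3.client("sts")
        resp = sts.assume_role(
            RoleArn="arn:aws:iam::123456789012:role/prod-operator",  # placeholder
            RoleSessionName="prod-access",
            SerialNumber="arn:aws:iam::123456789012:mfa/alice",      # placeholder MFA device
            TokenCode=mfa_code,      # six-digit code from the operator's device
            DurationSeconds=3600,    # short-lived by design
        )
        creds = resp["Credentials"]
        return boto3.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )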

lozenge•6mo ago
So apparently they don't know whether their code is in git or not.

https://x.com/jasonlk/status/1946594194052849795?t=EasxlfgpA...

Somebody else pointed out that they might have mentioned a code freeze in chat without ever adding it to a prompt.

Basically, this is what it looks like when you take an IT manager who has never coded and tell them that AI will now make them a software engineer.

kingstnap•6mo ago
People out there letting LLMs run whatever commands they want unsupervised on their databases.

And we wonder why so much software is so crappy.

Maybe misaligned AI is exactly what we need to format the hard drives of all these people, leaving us with a golden age of software built by people who actually cared enough to think about what they were creating.

joegibbs•6mo ago
When things go wrong with AI, people often seem to make it prostrate itself, explain why it went wrong, and promise never to do it again. It does, but there's no point: it isn't going to remember, because it doesn't have memory, and its explanation of why it went wrong is usually even more hallucinated than a regular conversation, like claiming it panicked (which it doesn't do) or that it ran tests locally (which it can't).

Perhaps getting something wrong puts it in a state that makes it more likely to give further wrong answers. GPT seems the most prone to this.

Also, I don't think you should let an LLM just make up commands and run them; that seems like a recipe for disaster. At the very least you should have to see what it's going to do before it does it.
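
The minimum viable guardrail for that last point is a human confirmation step between "model proposes a command" and "command runs". A toy sketch; propose_command stands in for whatever hypothetical agent framework is producing the commands:

    import shlex
    import subprocess

    def run_with_confirmation(proposed_command: str) -> None:
        """Show the exact command the model wants to run and require explicit approval."""
        print(f"Model proposes to run:\n  {proposed_command}")
        answer = input("Run this command? [y/N] ").strip().lower()
        if answer != "y":
            print("Skipped.")
            return
        # shell=False plus shlex.split avoids handing the model a full shell.
        subprocess.run(shlex.split(proposed_command), check=True)

    # Usage: every command the agent emits goes through the gate.
    # run_with_confirmation(propose_command(task))  # propose_command is hypothetical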

steinuil•6mo ago
Later the AI claims it can't run unit tests without overwriting the production database. This whole thread is hilarious.

https://x.com/jasonlk/status/1946641193644798118
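
For contrast, the normal way unit tests avoid touching any shared database is to create a throwaway one per test run. A minimal pytest/SQLAlchemy sketch using an in-memory SQLite engine, purely to illustrate the isolation the AI claimed was impossible (nothing here reflects the actual Replit setup):

    import pytest
    import sqlalchemy as sa

    @pytest.fixture
    def db_engine():
        """Fresh in-memory database per test -- production is never involved."""
        engine = sa.create_engine("sqlite:///:memory:")
        with engine.begin() as conn:
            conn.execute(sa.text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))
        yield engine
        engine.dispose()

    def test_insert_user(db_engine):
        with db_engine.begin() as conn:
            conn.execute(sa.text("INSERT INTO users (name) VALUES ('alice')"))
            count = conn.execute(sa.text("SELECT COUNT(*) FROM users")).scalar_one()
        assert count == 1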

akmarinov•6mo ago
Why let it do things in production? We don't let people do whatever they want in production, so why AI?