frontpage.

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
93•yi_wang•3h ago•25 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
39•RebelPotato•2h ago•8 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
241•valyala•11h ago•46 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
154•surprisetalk•10h ago•150 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
186•mellosouls•13h ago•335 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
68•gnufx•9h ago•56 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
177•AlexeyBrin•16h ago•32 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
10•duxup•54m ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
163•vinhnx•14h ago•16 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
55•swah•4d ago•97 comments

Total Surface Area Required to Fuel the World with Solar (2009)

https://landartgenerator.org/blagi/archives/127
8•robtherobber•4d ago•2 comments

First Proof

https://arxiv.org/abs/2602.05192
129•samasblack•13h ago•76 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
306•jesperordrup•21h ago•95 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
74•momciloo•11h ago•16 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•13h ago•22 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
104•randycupertino•6h ago•223 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
43•chwtutha•1h ago•7 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
37•mbitsnbites•3d ago•4 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
11•ujjwalreddyks•5d ago•2 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
571•theblazehen•3d ago•206 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
292•1vuio0pswjnm7•17h ago•471 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
133•josephcsible•9h ago•161 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
184•valyala•11h ago•166 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
229•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
900•klaussilveira•1d ago•276 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
30•languid-photic•4d ago•12 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
146•speckx•4d ago•228 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
113•zdw•3d ago•56 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
145•videotopia•4d ago•48 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
303•isitcontent•1d ago•39 comments

The launch of ChatGPT polluted the world forever

https://www.theregister.com/2025/06/15/ai_model_collapse_pollution/
29•rntn•7mo ago

Comments

Den_VR•7mo ago
Someday maybe we’ll have a term similar to “low-background steel” for information and web content.
etherlord•7mo ago
https://blog.jgc.org/2025/06/low-background-steel-content-wi...
ChrisArchitect•7mo ago
Large discussion earlier this week: https://news.ycombinator.com/item?id=44239481
willis936•7mo ago
The root of it is a deterioration in trust. Even before LLMs hit the scene, there was suspicion of narrative manipulation by social media sites. ChatGPT changed how popular this take is, not its substance.
cheschire•7mo ago
Why did you paraphrase the article’s subtitle?
Den_VR•7mo ago
I’m not sure if admitting I didn’t even open the article helps or harms my case.
Eddy_Viscosity2•7mo ago
This is a great analogy.
happa•7mo ago
LLMs don't really need more training data than they already have. They just need to start using it more efficiently.
_1tem•7mo ago
Exactly. Smart humans work with far less training data and do better.
ghusto•7mo ago
The article keeps making it sound as if it's a problem for humans. e.g.:

> Now here the date is more flexible, let's say 2022. But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI. Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

Though what it seems to actually mean is that it's a problem for (future) generative AI (the "genAI collapse"). To which I say:

joshstrange•7mo ago
This seems like a very badly written article that rambles on in random directions. It proposes ideas, like watermarking AI output, that anyone with half a brain can see are incredibly dumb.

The most damning part for me is mentioning the Apple paper and the rebuttal to it. To my knowledge that paper had nothing to do with training on generated data; it was about reasoning models. But because it used the words "model collapse", the author of this article apparently decided to include it, which just shows they don't know what they're talking about (unless I'm completely misunderstanding the Apple paper).

m4r1k•7mo ago
This! And I'd add: it's the Register, which has always had a very low bar.
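
For reference, the watermarking idea dismissed above usually means something like a keyed "green list" scheme: the sampler is nudged toward tokens that a keyed hash marks green, and a detector later counts the green fraction. A toy sketch of the detection side only, with every name and constant made up for illustration:

    import hashlib

    def is_green(prev: str, tok: str, key: str = "secret") -> bool:
        # Keyed hash of the (previous token, current token) pair; a
        # watermarking sampler would have steered generation toward
        # tokens for which this comes out True.
        digest = hashlib.sha256(f"{key}|{prev}|{tok}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(tokens: list[str], key: str = "secret") -> float:
        # Unwatermarked text should score near 0.5; text produced by a
        # sampler biased toward green tokens scores well above that.
        pairs = list(zip(tokens, tokens[1:]))
        return sum(is_green(p, t, key) for p, t in pairs) / max(len(pairs), 1)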
famahar•7mo ago
lowbackgroundsteel.ai sounds really promising. I don't really care for it as a clean AI training source, but I'm interested in a curated internet where I know it's not diluted with generative content. I'm not sure what that would look like when it comes to social media. This AI era has made me return to reading physical books as a hobby and engaging with offline/non-anonymous online communities more. Confidence in authenticity is one of the most important things for me these days.
iJohnDoe•7mo ago
I have no expertise in LLMs. I do think the article poses an interesting question: how do you feed models recent information without ingesting content that was itself generated by AI? I'm sure it's possible, but not without a certain level of uncertainty.

Humanity now lives in a world where any text has most likely been influenced by AI, even if it’s by multiple degrees of separation.
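
As a minimal sketch of the cutoff-date heuristic quoted upthread (keep only documents collected before the generative-AI era), assuming each record carries a collection timestamp; the `crawl_date` field is hypothetical:

    from datetime import date

    # Everything collected before the cutoff is treated as "safe, fine,
    # clean"; everything after is potentially contaminated.
    CUTOFF = date(2022, 1, 1)

    def is_clean(doc: dict) -> bool:
        return doc["crawl_date"] < CUTOFF

    corpus = [
        {"text": "pre-LLM page", "crawl_date": date(2021, 5, 1)},
        {"text": "post-LLM page", "crawl_date": date(2023, 9, 1)},
    ]
    clean = [d for d in corpus if is_clean(d)]  # keeps only the 2021 page

This sidesteps detection entirely, which is exactly why pre-cutoff data gets compared to low-background steel: you can't reliably test for contamination, only date around it.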