A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•1m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•3m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•4m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•6m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•7m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•9m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•14m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•14m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•14m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•17m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•20m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•21m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•21m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
3•vinhnx•22m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•27m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•31m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•36m ago•1 comments

How I grow my X presence

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•37m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•38m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•44m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•47m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•48m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•49m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•50m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•50m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•51m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•51m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•55m ago•0 comments

SpaceX Rocket Generates 100 GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•55m ago•1 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•56m ago•0 comments

The launch of ChatGPT polluted the world forever

https://www.theregister.com/2025/06/15/ai_model_collapse_pollution/
29•rntn•7mo ago

Comments

Den_VR•7mo ago
Someday maybe we’ll have a term similar to “low-background steel” for information and web content.
etherlord•7mo ago
https://blog.jgc.org/2025/06/low-background-steel-content-wi...
ChrisArchitect•7mo ago
Large discussion earlier this week: https://news.ycombinator.com/item?id=44239481
willis936•7mo ago
The root of it is a deterioration in trust. Even before LLMs hit the scene, there was suspicion of narrative manipulation by social media sites. ChatGPT only changed how popular this take is, not its substance.
cheschire•7mo ago
Why did you paraphrase the article’s subtitle?
Den_VR•7mo ago
I’m not sure if admitting I didn’t even open the article helps or harms my case.
Eddy_Viscosity2•7mo ago
This is a great analogy.
happa•7mo ago
LLMs don't really need more training data than they already have. They just need to start using it more efficiently.
_1tem•7mo ago
Exactly. Smart humans work with far less training data and do better.
ghusto•7mo ago
The article keeps making it sound as if it's a problem for humans, e.g.:

> Now here the date is more flexible, let's say 2022. But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI. Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

Though what it actually seems to mean is that it's a problem for (future) generative AI (the "genAI collapse"). To which I say:
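
For concreteness, the cutoff described in the quoted passage reduces to a date filter over crawl metadata. A minimal sketch in Python; the record schema, field names, and exact cutoff date are illustrative assumptions, not anything from the article:

    from datetime import date

    # Premise from the quote above: anything captured before the
    # generative-AI era is presumed clean; anything after is suspect.
    AI_CUTOFF = date(2022, 1, 1)  # "let's say 2022"

    def is_presumed_clean(record: dict) -> bool:
        """True if the record predates the cutoff (hypothetical schema)."""
        return record["captured"] < AI_CUTOFF

    corpus = [
        {"url": "https://example.com/a", "captured": date(2019, 5, 1)},
        {"url": "https://example.com/b", "captured": date(2024, 1, 15)},
    ]
    clean = [r for r in corpus if is_presumed_clean(r)]
    print([r["url"] for r in clean])  # only the 2019 record survives

The catch is that capture dates are only trustworthy for archives frozen before the cutoff, which is exactly why the low-background-steel analogy keeps resurfacing in this thread.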

joshstrange•7mo ago
This seems like a very badly written article that rambles off in random directions. It proposes ideas, like watermarking AI output, that anyone with half a brain can see are dumb.

The most damning part for me is the mention of the Apple paper and the rebuttal to it: to my knowledge, that paper had nothing to do with training on generated data. It was about reasoning models, but because it uses the term "model collapse", the author of this article apparently decided to fold it in, which just shows they don't know what they're talking about (unless I'm completely misunderstanding the Apple paper).
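
For reference, "watermarking AI output" usually means something like the green-list scheme of Kirchenbauer et al. (2023): generation is biased toward a pseudorandom half of the vocabulary seeded by the preceding token, and a detector that knows the seeding counts how often tokens land in that half. A toy sketch of the detection arithmetic only; real schemes bias model logits, and everything below is illustrative:

    import hashlib
    import random

    def green_set(prev_token: str, vocab: list[str]) -> set[str]:
        """Deterministically pick a 'green' half of the vocabulary,
        seeded by the previous token (toy stand-in for the logit-level
        scheme)."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = sorted(vocab)
        rng.shuffle(shuffled)
        return set(shuffled[: len(shuffled) // 2])

    def green_fraction(tokens: list[str], vocab: list[str]) -> float:
        """Fraction of tokens that land in their predecessor's green
        set. Unwatermarked text hovers near 0.5; watermarked text runs
        measurably higher."""
        hits = sum(
            tokens[i] in green_set(tokens[i - 1], vocab)
            for i in range(1, len(tokens))
        )
        return hits / max(1, len(tokens) - 1)

Whether such a mark survives paraphrasing, translation, or open-weight models that simply don't apply it is the substantive objection.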

m4r1k•7mo ago
This! And I'd add: it's The Register; it has always had a very low bar.
famahar•7mo ago
lowbackgroundsteel.ai sounds really promising. I don't care much about it as a clean AI training source, but I'm interested in a curated internet where I know things aren't diluted with generative content. I'm not sure what that would look like for social media. This AI era has made me return to reading physical books as a hobby and to engaging more with offline and non-anonymous online communities. Confidence in authenticity is one of the most important things for me these days.
iJohnDoe•7mo ago
I have no expertise in LLMs, but I do think the article poses an interesting question: how do you feed models recent information without ingesting text that was itself generated by AI? I'm sure it's possible, but not without a certain level of uncertainty.

Humanity now lives in a world where any text has most likely been influenced by AI, even if only by multiple degrees of separation.
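
The "multiple degrees of separation" point can be made quantitative with a toy experiment in the spirit of the model-collapse literature (e.g., Shumailov et al.): fit a distribution to samples drawn from the previous generation's fit, repeat, and watch the spread decay. The numbers below are illustrative only:

    import random
    import statistics

    random.seed(0)
    mu, sigma, n = 0.0, 1.0, 20  # small samples exaggerate the effect

    # Each generation "trains" on the previous generation's output:
    # draw n samples from the current fit, then refit. The fitted
    # spread follows a multiplicative random walk that drifts toward
    # zero, which is the statistical core of model collapse.
    for gen in range(1, 31):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)
        if gen % 5 == 0:
            print(f"generation {gen:2d}: sigma = {sigma:.3f}")

Tails vanish first, which is why "recent" and "clean" pull in opposite directions for training data.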