frontpage.

Meta Lays Off 700 Employees, While Rewarding Top Executives

https://www.nytimes.com/2026/03/25/technology/meta-layoffs-ai-executives.html
1•nickvec•1m ago•0 comments

Sculpting Code

https://olshansky.info/posts/2026-03-25-sculpting-code
1•Olshansky•4m ago•0 comments

Daniel Stenberg – Emails

https://daniel.haxx.se/email/
1•leephillips•5m ago•0 comments

Joycraft: Upgrade your Claude Code harness

https://github.com/maksutovic/joycraft
1•max_maksutovic•6m ago•0 comments

Supreme Court Wipes Out Record Labels' $1B Piracy Judgment Against Cox

https://torrentfreak.com/supreme-court-wipes-out-record-labels-1-billion-piracy-judgment-against-...
4•nobody9999•10m ago•0 comments

How to make good lecture slides with AI assistance

https://alexanderhoyle.com/posts/ai-slide-gen.html
1•ahoho•11m ago•0 comments

Marco Arment did something awesome

https://www.natemeyvis.com/marco-arment-did-something-awesome/
1•speckx•11m ago•0 comments

Strong Customer Authentication

https://stripe.com/en-nl/guides/strong-customer-authentication
1•mooreds•13m ago•0 comments

Cella dev journey, a 3D space game in Rust

https://cellagame.com/uptospeed/
1•stldev•13m ago•1 comment

Tired of AI. When will this era end?

1•s_u_d_o•14m ago•0 comments

On Claude Code

https://thoughtfractal.pages.dev/on-claude-code/
1•love2read•14m ago•0 comments

All non-government Claude services below two nines of uptime in March 2026

https://status.claude.com/uptime/yyzkbfz2thpt
2•nickvec•17m ago•0 comments

Bernie Sanders and AOC introduce bill to pause building of new datacenters

https://www.theguardian.com/us-news/2026/mar/25/datacenters-bernie-sanders-aoc
4•freediddy•18m ago•0 comments

Interactive web pages. Is this a real defense against AI mode predation?

https://www.fachords.com/guitar-scale/
1•giancaIta•18m ago•0 comments

Chat Control: How Governments and Tech Lobby Try to Overturn EU Parliament

https://www.patrick-breyer.de/en/the-battle-over-chat-control-how-eu-governments-and-the-tech-lob...
5•vrganj•18m ago•0 comments

I installed Fedora and accidentally created a haunted house

https://dnsauve.dev/blog/fedora-haunted-house/
1•dengsauve•20m ago•0 comments

We are developing software with a slot-machine

https://mortenvistisen.com/posts/slot-machine-based-development-is-the-new-black
2•mbvisti•21m ago•1 comment

Show HN: Upload your pitch deck, get investor feedback

https://www.x1pipeline.com/
1•chriscoomes•21m ago•0 comments

Show HN: GhostDesk – MCP server giving AI agents a full virtual Linux desktop

https://github.com/YV17labs/GhostDesk
1•maltyxxx•23m ago•0 comments

The EU still wants to scan your private messages and photos

https://fightchatcontrol.eu/?foo=bar
110•MrBruh•24m ago•35 comments

DeleteMe acquires Tracy Chou's Block Party browser extension

https://techcrunch.com/2026/03/25/deleteme-acquires-social-media-security-tool-block-party/
1•thoughtpeddler•25m ago•0 comments

Why I Got Out Of The Gambling Business

https://defector.com/why-i-got-out-of-the-gambling-business
1•zdw•26m ago•3 comments

GTC the Game: Web gaming to prep for big events

https://mattcool.tech/posts/gtc-pre-game-the-web-game/
1•mbcool•27m ago•0 comments

Toyota cuts EV prices in China, some now under $15,000

https://electrek.co/2026/03/25/toyota-cuts-ev-prices-china-under-15000/
3•breve•28m ago•0 comments

EU Commission stands with Big Tech in an utterly wild letter

https://eupolicy.social/@je5perl/116290677867253817
3•doener•28m ago•0 comments

Left-Leaning Red-Black Trees Considered Harmful

https://read.seas.harvard.edu/~kohler/notes/llrb.html
3•pcfwik•30m ago•0 comments

Sodium-ion EV battery breakthrough delivers 11-min charging and 450 km range

https://electrek.co/2026/03/25/sodium-ion-ev-battery-delivers-11-min-charging-450-km-range/
7•breve•31m ago•1 comments

Show HN: LiveDemo – open-source tool for creating interactive product demos

https://github.com/exploitx3/livedemo-deploy
2•gapostolov•32m ago•0 comments

Our Approach to the Model Spec

https://openai.com/index/our-approach-to-the-model-spec
2•surprisetalk•33m ago•0 comments

Know your tokens. Own your costs

https://www.datobra.com/know-your-tokens-own-your-costs/
2•olgazju•34m ago•0 comments

Model collapse is already happening

https://cacm.acm.org/blogcacm/model-collapse-is-already-happening-we-just-pretend-it-isnt/
15•zdw•1h ago

Comments

FeepingCreature•1h ago
Source: a bad study from 2023.
slowmovintarget•1h ago
Why is the study bad?

https://www.nature.com/articles/s41586-024-07566-y

levocardia•1h ago
Evidence: trust me bro. Really, where is the actual evidence that models are "collapsing" from too much AI-generated training material? Evals are up, subjective perception of model usefulness is up (for me, certainly), and if anything the slop levels are down, or at least stable. I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.
jrmg•1h ago
> I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.

I agree, but as the Internet descends into all-slop-all-the-time (seriously, just search for reviews or travel advice or technical questions, or most anything, to see it), where do you expect the high-quality training material on new topics to come from? I have a hard time imagining it.

ctoth•1h ago
Your Claude Code sessions. Every interaction. Every time the model is asked to do something and then gets feedback on that something ("this didn't work, I got this traceback").

Plus textbooks, company wikis, news corpora, and structured reports of all kinds, from far more sources than what is available on the web.
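A rough sketch of the kind of record ctoth is describing, where the environment supplies the label rather than scraped web text. Everything here (the AgentTurn schema, the to_training_example helper, the field names) is invented for illustration, not any lab's actual pipeline:

    from dataclasses import dataclass

    @dataclass
    class AgentTurn:
        """One tool-use step in a coding session (hypothetical schema)."""
        request: str      # what the model was asked to do
        action: str       # what it did, e.g. the diff it produced
        feedback: str     # what the environment said back (traceback, test output)
        succeeded: bool   # whether the step resolved the request

    def to_training_example(turn: AgentTurn) -> dict:
        # The environment's response is the label -- no human annotation
        # and no public-web scraping required, which is the point above.
        return {
            "input": f"{turn.request}\n\nPrevious attempt feedback:\n{turn.feedback}",
            "target": turn.action,
            "reward": 1.0 if turn.succeeded else 0.0,
        }

    example = to_training_example(AgentTurn(
        request="Fix the failing test in test_parser.py",
        action="(diff that handles the empty-input case)",
        feedback="AssertionError: parse('') raised IndexError",
        succeeded=True,
    ))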

chromacity•1h ago
There's some comedy in this article having all the hallmarks of LLM writing.
justonceokay•1h ago
Yeah, a typo in the subtitle does not especially inspire confidence.
niccl•1h ago
You've got me. What's the typo?
justonceokay•57m ago
It seems to me there is a word or two missing between “rich” and “slowly”. If I read the whole thing aloud I cannot parse it into a sentence. Or the word “rich” could be removed. That would be clunky but at least grammatically sensible.

“Make data get smoothed out” is a very strange way of saying “smooths out data”.

quantified•35m ago
It might be weird if you haven't read a lot of English. It's actually quite normal to say that process X is a way to make effect Y happen. "Makes your mouth water" is more effective than "waters your mouth". "Makes your breath fresh and tolerable" is better than "freshens and tolerablerizes your breath". Etc.

Actually, what you are describing is what happens when LLM-generated prose cycles back around and then trains humans into equally dull thinking.

atmavatar•4m ago
I read the subtitle as

> The weird, rare, surprising patterns [that make data rich] slowly get smoothed out when an AI model trains on outputs from a previous model.

i.e., the patterns are responsible for making data rich, and they are slowly lost as each new model generation trains on the prior generation's output.

Or, if you'd prefer an analogy, we're using a copy machine to output new documents by taking the last copy spit out by the machine, adding some marks to it, and running it through the copier again. Over time, details present in much older copies blur and fade away in Nth generation copies.
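The copy-machine loop is easy to see in a toy simulation. A minimal sketch (my own, not from the article or any paper it cites): start from a long-tailed token distribution, and at each generation re-estimate the distribution from a finite sample of the previous generation's output. Rare tokens that happen to draw zero samples vanish for good, which is the blur-and-fade effect above:

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: a long-tailed "real" vocabulary -- a few common
    # tokens and many rare ones (the weird, surprising patterns).
    n_tokens = 1000
    probs = 1.0 / np.arange(1, n_tokens + 1)  # Zipf-like tail
    probs /= probs.sum()

    for gen in range(11):
        alive = np.count_nonzero(probs)
        print(f"gen {gen:2d}: {alive} of {n_tokens} tokens still have mass")
        # Each generation "trains" on a finite sample of the previous
        # generation's output: count tokens, renormalize, repeat. A token
        # that draws zero samples is gone forever -- an absorbing state.
        sample = rng.choice(n_tokens, size=5000, p=probs)
        counts = np.bincount(sample, minlength=n_tokens)
        probs = counts / counts.sum()

The common tokens survive every pass; only the tail erodes, so each generation looks locally fine while the distribution quietly narrows.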

kimi•1h ago
I have a pet peeve with this. As a non-native English speaker, I find it very useful to dictate multiple notes, in different languages, and have the LLM produce clear English prose out of them. The prose may be LLM-generated, but I edit it when needed to make sure that the content is 100% mine.

It's like dictating to a typist, as they did in the '60s: the typist will make sure that your letter looks professional and will fix your grammar, but you will sign the letter. This is totally different from LLM spam, the kind that inflates a sentence into a three-page article full of nothing.

So: is it a problem if the language reverts to a mean? That is the point of a shared language, right?

SunshineTheCat•1h ago
I always find articles like this very odd and nebulous because they act as though AI models are just Google.

Type request, get info.

But that's such a narrow, one-dimensional view of how LLMs are used. They can gather data or write an article, but that's probably a minority of use cases.

People have casual conversations with them, get code written, hold brainstorming sessions, dictate voice-recorded notes, and the list goes on.

While the data a model is trained on is important, the supposition here is that this data consists only of what sits out there on the interwebs.

That's as opposed to user input and interaction, which, I'm guessing, plays a pretty large role in training models. Maybe even more so in some cases than AI-written blog spam.