
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text

https://arxiv.org/abs/2506.05209
68•djoldman•4mo ago

Comments

secret-noun•4mo ago
> we manually curated a set of over 2,000 YouTube channels that release original openly licensed content containing speech. From these channels, we retrieved and transcribed (using Whisper) over 1.1 million openly licensed videos comprising more than 470,000 hours of content.

This is why Gemini has such an advantage.

Also, link to explore data: https://huggingface.co/collections/common-pile/common-pile-v...
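
The pipeline described in that quote is essentially: fetch the audio, run Whisper, keep the text. A rough sketch of one such step, assuming yt-dlp and the openai-whisper Python package (the URL and file paths are placeholders, not the paper's actual tooling):

    # Sketch: pull audio for an openly licensed video and transcribe it with
    # Whisper. Assumes yt-dlp, ffmpeg, and openai-whisper are installed;
    # the URL and paths below are placeholders.
    import subprocess
    import whisper  # the openai-whisper package

    video_urls = ["https://www.youtube.com/watch?v=EXAMPLE_ID"]  # placeholder list

    model = whisper.load_model("large-v2")  # any Whisper checkpoint

    for i, url in enumerate(video_urls):
        # Extract audio as mp3 (ffmpeg does the conversion).
        subprocess.run(
            ["yt-dlp", "-x", "--audio-format", "mp3",
             "-o", f"audio_{i}.%(ext)s", url],
            check=True,
        )
        result = model.transcribe(f"audio_{i}.mp3")
        with open(f"transcript_{i}.txt", "w") as f:
            f.write(result["text"])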

otherme123•4mo ago
The abstract is open about this data being intended for training models. But a lot of this data comes from models, like Whisper.
ACCount37•4mo ago
What's your concern?
ggm•4mo ago
You don't believe in model collapse? Or don't think it applies to a phase shift from audio to written texts?
simonw•4mo ago
Personally I don't believe in model collapse. Has anyone demonstrated it occurring in the wild, outside of the tiny set of papers that deliberately caused it to happen?

I think model collapse gets talked about so much because it is irresistible schadenfreude. The idea of models eating their own tails in a way that leads to their inevitable demise is captivating to a lot of people, especially AI skeptics.

pama•4mo ago
I agree. A partial counterexample is the RL training loop on verifiable tasks, which uses the model in a loop to generate training data. Another one is the cleanup/prioritization of the pretraining data using earlier models.
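
Schematically, that verifiable-task loop looks something like the sketch below (hypothetical names; the external checker is what keeps this from being the model purely eating its own output):

    # Sketch of self-generated training data on verifiable tasks: the model
    # proposes candidate answers, an external checker (unit tests, exact
    # answer match, etc.) keeps only the correct ones, and only those go
    # back into training. All names here are hypothetical.
    def collect_verified_examples(problems, generate_candidates, n_samples=8):
        training_examples = []
        for problem in problems:
            for candidate in generate_candidates(problem.prompt, n=n_samples):
                if problem.check(candidate):  # outside signal, not the model's opinion
                    training_examples.append((problem.prompt, candidate))
        return training_examples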

More generally, a lot of ideas have been speculated about based on very tiny models in controlled settings, and they didn't pan out in real LLMs. There probably exists a minimal compute threshold for overcoming generalization traps.

marbro•4mo ago
Carbon-based model collapse is known as groupthink and happens constantly.
ACCount37•4mo ago
"Model collapse" isn't real. It's a laboratory failure mode that doesn't happen in real world environments.

It's popular because some people latched onto the idea - desperately wanting something to stop the AI tech from advancing. It, quite obviously, doesn't stop the AI tech from advancing.

Now, you can write an entire research paper on why model collapse happens or fails to happen. But a simple way to think of it is: looping AI onto itself multiple times amplifies that AI's own deficiencies, distortions and idiosyncrasies - until, after enough iterations, they come to completely dominate its outputs.
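
A toy version of that looping effect (my sketch, not from any paper): repeatedly fit a simple model to samples drawn from the previous fit and watch the estimate drift away from the original data.

    # Toy illustration of a model trained on its own outputs, generation
    # after generation: fit a Gaussian to finite samples, resample from the
    # fit, refit, repeat. The fitted parameters random-walk away from the
    # originals, and zero variance is the only absorbing state, so with
    # enough generations the distribution narrows and forgets its tails.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "real" data

    for generation in range(30):
        mu, sigma = data.mean(), data.std()
        print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
        # The next generation trains only on the previous model's outputs.
        data = rng.normal(loc=mu, scale=sigma, size=50)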

This doesn't apply at all to training an LLM on Whisper outputs that are, in turn, based on human-generated videos. The LLM will inherit some Whisper quirks, but most of the data in Whisper outputs comes from the videos themselves.

everforward•4mo ago
No, I don't think it applies here. The semantics and speech patterns were generated by a human; Whisper just transcribed them.

There is some risk that Whisper transcribed inaccurately, but that’s less model collapse and more “the dataset is bad”.

numpad0•4mo ago
I guess the transcripts are not guaranteed clean? *Silence* = "Like and Subscribe", etc.
benterix•4mo ago
So?
otherme123•4mo ago
I don't know much about LLM training, but earlier AI models needed clean data to train on. You shouldn't train on generated data.

For example, say you had a classifier that works at 95% precision, trained on carefully labeled data. Then, to train the next version, you download 1 TB of images, classify them with your previous model, and use that to retrain. Do you expect to get better than 95%, or are you poisoning your model?
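
Roughly, the loop in question (hypothetical names, scikit-learn-style sketch):

    # Sketch of the self-labeling loop described above: a model trained on
    # hand-labeled data is used to label a much larger unlabeled set, and
    # the next model is trained on those pseudo-labels. Names are made up.
    from sklearn.ensemble import RandomForestClassifier

    def self_training_round(X_labeled, y_labeled, X_unlabeled):
        # Model v1: trained on carefully labeled data (~95% precision, say).
        model_v1 = RandomForestClassifier().fit(X_labeled, y_labeled)

        # Pseudo-label the big unlabeled dump with model v1.
        pseudo_labels = model_v1.predict(X_unlabeled)

        # Model v2: trained on v1's own guesses, so its labels carry
        # v1's ~5% error rate as noise.
        model_v2 = RandomForestClassifier().fit(X_unlabeled, pseudo_labels)
        return model_v2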

I'm asking: can you do that with LLMs? Feed them data that's known to be 95% precise at best? I've used Whisper a bit, and I usually get runs of words, like "bye bye bye bye bye bye", despite the word only being said once. Should I use that kind of data to train an LLM?

I saw an experiment where an LLM was fed an image and asked to reproduce it, and the process was then repeated with the generated image. After ten or so cycles, the content (a photo of a human head) was barely recognizable.

electroglyph•4mo ago
Phi models are notorious for using mostly synthetic data
orbital-decay•4mo ago
The reality of working with humongous datasets is that they're always bootstrapped like this, in multiple steps. In LLMs in particular, the entire post-training step is always done on synthetic data. There are ways to avoid the failure modes typical for that (like model collapse); you need much less real data to keep the model in check than you probably think.
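
A toy version of that "a little real data keeps it in check" point (illustrative numbers, not a recipe):

    # Sketch: mix a small fixed slice of real data into every generation's
    # training set. Unlike the pure self-feeding loop, the fitted
    # distribution tends to stay anchored near the original.
    import numpy as np

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=1000)    # the "real" distribution
    data = real.copy()

    for generation in range(30):
        mu, sigma = data.mean(), data.std()
        synthetic = rng.normal(mu, sigma, size=900)   # 90% model outputs
        anchor = rng.choice(real, size=100)           # 10% real data, every time
        data = np.concatenate([synthetic, anchor])

    print(f"after 30 generations: mu={data.mean():+.3f} sigma={data.std():.3f}")
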
klft•4mo ago
Whisper is used for speech-to-text conversion, not to generate the text.
estimator7292•4mo ago
It's still AI-generated text that is not in any way guaranteed to be correct or accurate.
UltraSane•4mo ago
Its accuracy can be and is quantified.
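
For speech-to-text, that quantification is usually word error rate. A minimal version of the standard edit-distance computation:

    # Word error rate (WER): (substitutions + deletions + insertions)
    # divided by the number of reference words, computed here with a plain
    # Levenshtein distance over word tokens.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)

    print(wer("the cat sat", "the cat cat sat"))  # one insertion / 3 words ≈ 0.33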

Tiny C Compiler

https://bellard.org/tcc/
59•guerrilla•1h ago•22 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
151•valyala•5h ago•25 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
81•zdw•3d ago•32 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
86•surprisetalk•5h ago•91 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
26•swah•4d ago•19 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
19•martialg•58m ago•3 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
120•mellosouls•8h ago•236 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
159•AlexeyBrin•11h ago•28 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
866•klaussilveira•1d ago•266 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
115•vinhnx•8h ago•14 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
33•randycupertino•1h ago•33 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
73•thelok•7h ago•13 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
22•mbitsnbites•3d ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
76•samasblack•8h ago•57 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
157•valyala•5h ago•136 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
253•jesperordrup•15h ago•82 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
36•gnufx•4h ago•41 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
535•theblazehen•3d ago•197 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
100•onurkanbkrc•10h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
39•momciloo•5h ago•5 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
19•languid-photic•4d ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
213•1vuio0pswjnm7•12h ago•325 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
42•marklit•5d ago•6 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
276•alainrk•10h ago•454 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
129•videotopia•4d ago•41 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
52•rbanffy•4d ago•14 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
52•josephcsible•3h ago•67 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
650•nar001•9h ago•284 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
41•sandGorgon•2d ago•17 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
109•speckx•4d ago•149 comments