India bars Jane Street from securities market, citing stock index manipulation

https://www.reuters.com/sustainability/boards-policy-regulation/india-regulator-bars-jane-street-accessing-its-securities-market-2025-07-04/
1•bobbiechen•1m ago•0 comments

Dylanaraps changes README after >1 year

https://github.com/dylanaraps/dylanaraps/commit/93a2aca2d1741bd9a7ce861d8c062a8a7387cb49
1•kristjank•2m ago•0 comments

Ask HN: How do accelerators/VC track internal operations across startups?

1•swaptr•4m ago•0 comments

Context Engineering Guide

https://nlp.elvissaravia.com/p/context-engineering-guide
1•omarsar•12m ago•0 comments

The Two Towers MUD

https://t2tmud.org/
2•astronads•15m ago•1 comments

Network Reconnaissance as a Way of Seeing the Invisible

https://medium.com/@chrisveleris/network-reconnaissance-as-a-way-of-seeing-the-invisible-a19580e8e18d
1•cvicpp123•18m ago•0 comments

Pet ownership and cognitive functioning in later adulthood across pet types

https://www.nature.com/articles/s41598-025-03727-9
1•bookofjoe•19m ago•0 comments

Killer AI [video]

https://www.youtube.com/watch?v=A0X4O49cY4o
1•Raed667•20m ago•0 comments

Let's Talk Safari Extensions on iOS

https://old.reddit.com/r/ios/comments/1kzzfoc/lets_talk_safari_extensions_on_ioswhats_in_your/
1•wslh•22m ago•0 comments

Agencymaxxing

https://nintil.com/agency
1•jger15•24m ago•0 comments

Core RISC-V supercluster on a single M.2 [video]

https://www.youtube.com/watch?v=HRfbQJ6FdF0
2•victorbjorklund•26m ago•0 comments

Gödel's Beavers, or the Limits of Knowledge

https://lcamtuf.substack.com/p/monkeys-typewriters-and-busy-beavers
1•weinzierl•28m ago•0 comments

Congress passes budget reconciliation bill with $10B for NASA – SpaceNews

https://spacenews.com/congress-passes-budget-reconciliation-bill-with-10-billion-for-nasa/
1•rbanffy•28m ago•0 comments

Trump's 'Big, Beautiful Bill' Will Make China Great Again

https://www.nytimes.com/2025/07/03/opinion/trump-bill-clean-energy-china.html
3•rbanffy•29m ago•1 comments

My Blog Is Overengineered to the Point People Think It's a Static Site (2022)

https://xeiaso.net/talks/how-my-website-works/
1•Wingy•32m ago•0 comments

Ask HN: Is there a market for agentic scraping tools?

2•mxfeinberg•33m ago•1 comments

Hanako-San

https://en.wikipedia.org/wiki/Hanako-san
1•areoform•33m ago•0 comments

Ask HN: What are fundamental books on systems, system thinking, reliability?

1•dondraper36•33m ago•1 comments

Stop Killing Games in EU passed 1.000.000 signatures

https://www.msn.com/en-us/news/technology/stop-killing-games-reaches-1-million-signatures-as-players-continue-fight-for-game-preservation/ar-AA1HXsyd
2•aureliusm•34m ago•0 comments

Jan – Local AI Assistant

https://github.com/menloresearch/jan
2•indigodaddy•34m ago•0 comments

Fixing the Web? – Carson Gross [video]

https://www.youtube.com/watch?v=9NDkOehZUGs
1•todsacerdoti•35m ago•0 comments

Cod Have Been Shrinking for Decades, Scientists Say They've Solved Mystery

https://www.smithsonianmag.com/smart-news/these-cod-have-been-shrinking-dramatically-for-decades-now-scientists-say-theyve-solved-the-mystery-180986920/
2•littlexsparkee•41m ago•1 comments

Show HN: I built an multi-devices AI usage analytics app for Claude Code

https://roiai.fyi
1•fuzzyrock•42m ago•0 comments

How to create repositories in Artifactory with curl

https://www.zufallsheld.de/2025/06/30/til-how-to-create-artifactory-repos/
1•zufallsheld•42m ago•0 comments

Writing Modular Prompts

https://blog.adnansiddiqi.me/writing-modular-prompts/
1•pknerd•44m ago•0 comments

Show HN: Centenary Day – toolkit for healthy living (routines, meals, tracking)

https://centenary.day
1•arnasstucinskas•45m ago•0 comments

AI 'thinks' like a human – after training on 160 psychology studies

https://www.nature.com/articles/d41586-025-02095-8
2•rbanffy•45m ago•0 comments

I got rid of all my Neovim plugins

https://yobibyte.github.io/vim.html
3•Bogdanp•51m ago•0 comments

Show HN: Flaget – small 5kB CLI argument parser for Node.js

2•biodiscus•53m ago•1 comments

Cursive writing could become a requirement for students in Pa

https://www.phillyvoice.com/cursive-writing-requirements-pennsylvania-new-jersey/
1•geox•53m ago•0 comments

Will AI systems perform poorly due to AI-generated material in training data?

https://cacm.acm.org/news/the-collapse-of-gpt/
115•pseudolus•1mo ago

Comments

behnamoh•1mo ago
I've heard that OpenAI and many AI labs put watermarks [0] in their LLM outputs to detect AI-generated content and filter it out.

[0] Like statistics of words, etc.
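
(For reference, the watermarking schemes that have been described publicly work statistically rather than by embedding visible markers: the sampler is nudged toward a pseudorandom "green" subset of the vocabulary keyed on the preceding token, and a detector then tests whether a text hits those green lists more often than chance. Below is a toy sketch of that idea; the vocabulary and sampler are made up for illustration and this is not any vendor's actual implementation.)

    import hashlib
    import random

    # Toy sketch of "green list" watermarking (in the spirit of the academic
    # proposals). Illustration only; not what any specific vendor ships.
    VOCAB = [f"tok{i}" for i in range(1000)]  # hypothetical toy vocabulary

    def green_list(prev_token: str, fraction: float = 0.5) -> set:
        """Pseudorandom subset of the vocabulary, derived from the previous token."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

    def green_hit_rate(tokens: list) -> float:
        """~0.5 for unwatermarked text, close to 1.0 for watermarked text."""
        hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(1, len(tokens) - 1)

    rng = random.Random(0)
    watermarked = [rng.choice(VOCAB)]
    for _ in range(200):  # "generation" that always samples from the green list
        watermarked.append(rng.choice(sorted(green_list(watermarked[-1]))))
    plain = [rng.choice(VOCAB) for _ in range(200)]

    print("watermarked:", green_hit_rate(watermarked))  # ~1.0
    print("plain:      ", green_hit_rate(plain))        # ~0.5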

jsheard•1mo ago
Maybe they do use watermarks, and the vendors that only offer hosted models can just log everything they've ever generated, but there are enough players all working on this stuff independently of each other that filtering out their own noise would only get them so far.

I noticed that a big chunk of the default Llama 4 system prompt is devoted to suppressing various GPT-isms, which to me implies they weren't able to keep their newer training set from being contaminated by competing models.

> You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.

dustingetz•1mo ago
do they also watermark the code?
jimbob45•1mo ago
Wouldn’t be hard to do. Just alternate tabs and spaces and no one would ever know or care to check.
sampullman•1mo ago
Hopefully that's converted to one or the other when saved in an editor, or caught in CI.
djeastm•1mo ago
Most coders would have code cleaning tools in their IDEs that would take care of that automatically.
jimbob45•1mo ago
What about invisible Unicode characters?
umbra07•1mo ago
Too obvious. Someone would have found that already.
lolc•1mo ago
Yea my IDE highlights uncommon chars automatically.
subscribed•1mo ago
They are very visible to machines. Code linters would scream (and the alternating spaces and tabs would likely break generated Python code).
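
(A check like that is a few lines of scripting. Here is a minimal sketch that flags zero-width and other format-category code points in a source file; the character set and output format are illustrative, not any particular linter's behavior.)

    import sys
    import unicodedata

    # Minimal sketch: flag invisible/zero-width Unicode characters in source files.
    SUSPECT = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}  # illustrative, not exhaustive

    def scan(path):
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, 1):
                for col, ch in enumerate(line, 1):
                    if ch in SUSPECT or unicodedata.category(ch) == "Cf":
                        name = unicodedata.name(ch, "UNNAMED")
                        print(f"{path}:{lineno}:{col}: suspicious U+{ord(ch):04X} ({name})")

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            scan(p)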
IAmGraydon•1mo ago
Interesting. That could certainly come in handy if it’s something they can’t avoid. We, too, might be able to better detect and filter their output.
Rodeoclash•1mo ago
Yeah, it's known as the em dash!
jbaber•1mo ago
Y'know, I've been writing double dashes and having them converted into em dashes about 50% of the time on whatever platform I'm using for decades. It's bizarre that this is suddenly supposed to be a shibboleth.
AaronAPU•1mo ago
Have you ever considered you might be an LLM?
bitwize•1mo ago
Apparently the new ageist insult beyond "boomer" is "double-spacer" -- people who were taught in school to always follow the period at the end of a sentence with two spaces when composing the next sentence. If you went to elementary school after the internet became widespread, you are not likely to have been taught that. So double-spacing has now also become a shibboleth, albeit indicating the typist's age, distinguishing early millennials and Xers, who are now entering middle/old age, from the younger generations.
agubelu•1mo ago
> Apparently the new ageist insult beyond "boomer" is "double-spacer"

Says who? I've seen "boomer" everywhere, but it's the first time I've heard about that one.

mikhmha•1mo ago
Right? I've never associated "double-spacer" with boomer. Maybe anal-retentive? Someone who is trying too hard? The only thing I associate with boomers is ALL-CAPS writing, which I assume is a holdover from typewriter days. But I kind of like ALL CAPS. It conveys some level of importance to the message.
viraptor•1mo ago
It's not about trying too hard; it's people who learned double spacing when it made sense (monospace environments) and never unlearned it when it stopped mattering (variable-width typesetting). It's very age specific and a bit culture specific.
_heimdall•1mo ago
I could have sworn they all gave up on watermarking 12 or 18 months ago when they realized it wasn't possible to do reliably.
energy123•1mo ago
This was a proposal by Scott Aaronson but I wasn't aware it got implemented.
Rabbit_Brave•1mo ago
These companies are sitting on a never-ending stream of human created data. What do you think happens to your conversations or other interactions with AI? Quality might be a bit sus though.
AstroBen•1mo ago
I'd imagine it's really low quality data. Most or all of my conversations with an LLM are questions or telling it to do something, with varying levels of specificity

I'm not sure what they'd get from training on that

insin•1mo ago
I sometimes wonder if they're vulnerable to a coordinated effort of deliberately upvoting shit assistant turns and praising in the next user turn - how much does that actually contribute to future training, if at all?

Last week, while porting some vanilla code, I had a very basic React question about useState that every model of every stripe I've tried has been confidently and completely incorrect about, up to stating that the code absolutely will not work, even when I take a turn to assert that I ran it and it does. So there's plenty of shit in there already.

ted537•1mo ago
I don't think it would be too hard to scrape useful data out of my LLM convos.

If human response is "That's BS", "fuck off", or something similar, mark as bad assistant message.

If human response is "huh" or "cool", mark as good assistant message.

If on ChatGPT, watch how much scrolling the user does. If there's a lot, it's somewhat likely that the LLM output something useful.

That strategy would have holes, of course, but as long as it's better than guessing, something like that would be a useful heuristic.
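
(A crude sketch of that heuristic, with made-up phrase lists and thresholds, just to show how cheap the labeling pass could be:)

    import re

    # Toy labeler: judge an assistant turn by the next human turn plus a scroll signal.
    NEGATIVE = re.compile(r"(that'?s bs|fuck off|wrong|doesn'?t work)", re.I)
    POSITIVE = re.compile(r"\b(huh|cool|thanks|nice|that worked)\b", re.I)

    def label_turn(next_user_msg: str, scroll_px: int = 0) -> str:
        """Return 'good', 'bad', or 'unknown' for the preceding assistant message."""
        if NEGATIVE.search(next_user_msg):
            return "bad"
        if POSITIVE.search(next_user_msg) or scroll_px > 2000:  # lots of scrolling
            return "good"
        return "unknown"

    print(label_turn("cool, that worked"))             # good
    print(label_turn("that's BS, fuck off"))           # bad
    print(label_turn("what about edge cases?", 3500))  # good (scroll signal only)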

londons_explore•1mo ago
This.

Even very weak human signals can be immensely valuable over large enough datasets.

DeepYogurt•1mo ago
> If human response is "That's BS", "fuck off", or something similar, mark as bad assistant message.

Marking is not a trivial task though. Use some AI system to do the marking and maybe you get a 99.something% filter, but whatever remains leaks through. Over time your filter may get worse as a result.

ehecatl42•1mo ago
I'm in the process of messing around with a new distro where things are not quite what I am used to, and the usual suspects have been pretty helpful there... except for when they just make shit up.

Grok is the only one that swore back at me. I kinda liked that. The others are way too polite, "Artificial Intelligence? Artificial Canadians, more like", my uni-going kid joked.

morkalork•1mo ago
Every time you tell it to do something, it does it, and you don't correct it, that's a weakly positive signal. If you tell it to do it again with further clarification, that's also a signal. Sometimes I feel like I am giving them free work when chatting... I guess the trade is sort of equitable. Answers in exchange for data.
phillipcarter•1mo ago
Most of the human-created data is also very low quality. But it's also limited in other ways, such as how a lot of so-called high-quality data online is typically the finished answer to a question, with no serialization of the thought process that led to that answer.
jacobgkau•1mo ago
I think he was referring not to finished content, but to the prompts humans put in when using chatbots. The prompts would show some of the thought process, but then they won't really show the answer (as that's output by the chatbot and not the human prompting it).
PessimalDecimal•1mo ago
How will they tell if data is human-created or not?
bionhoward•1mo ago
You can deactivate ClosedAI model training in Settings > Data Controls > Improve the model for everyone

In Gemini you can turn off Gemini Apps Activity (warning: deletes your chat log, you need to copy paste everything into notes)

Highly recommended.

energy123•1mo ago
You can't. That appears to be a dark pattern by OAI, most likely designed to deceive you into uploading your sensitive material unaware that it's being trained on.

The real process involves submitting a request on another one of OpenAI's sites and awaiting a confirmation email (either their privacy or platform site).

Feel deceived and violated? Yeah, you, me and millions of other people, welcome to the club.

kevlened•1mo ago
The opt-out email was a path, but today the docs appear to say the new setting is equal to the old path.

"I previously opted out of model training by writing to the support team. Will you continue to honor my opt-out?

Yes. If you opted out by contacting support or using our privacy form, your account will represent that request."

https://help.openai.com/en/articles/7730893-data-controls-fa...

skeledrew•1mo ago
You'll never know if your request is really honored though. Ultimately it boils down to trust.
trod1234•1mo ago
> Ultimately it boils down to trust.

I thought it boiled down to credibility.

kevlened•1mo ago
True. Arguably it's trust with teeth, though the bite must be hard enough.

  Apple - alleged Siri eavesdropping: $95M [0]

  LinkedIn - alleged unauthorized ai training on private messages: ?? [1]

  Google - alleged unlawful data collection in Texas: $1.4B [2]
[0] https://www.usatoday.com/story/tech/2025/05/11/apple-siri-95...

[1] https://www.itpro.com/security/privacy/linkedin-faces-lawsui...

[2] https://www.businessinsider.com/google-alphabet-settlement-t...

josters•1mo ago
Relevant OpenAI link for privacy request "Do not train on my content" (select "Make a Privacy Request"): https://privacy.openai.com/policies
blooddragon•1mo ago
Time for GANs to make a resurgence?
jacobsenscott•1mo ago
Today we have humans being trained on LLM garbage: kids using it to do their homework, programmers using it to "learn" how to code, med students cheating their way through med school, etc. So the content humans are producing and will produce is really just LLM statistical word jumbles, i.e. human-generated content will soon be as useless as LLM-generated content.
throwup238•1mo ago
It’d be deeply ironic if the great filter for the human race turned out to be chatbots.
nine_k•1mo ago
Hello, secret sources of untainted but modern knowledge, written by human experts, and closely guarded by these experts.
nradov•1mo ago
I'm not too worried about med students. You can't really use an LLM to cheat on the boards or make it through residency.
nneonneo•1mo ago
Yes, although some people do slip through the cracks anyway: https://en.wikipedia.org/wiki/Christopher_Duntsch. He wasn't an LLM user, but was a cocaine user...
ijk•1mo ago
I mean, arguably the cocaine use makes him more like the kind of ideal doctor for enduring the long residency hours...
mondrian•1mo ago
The "core reasoning" part of AI may be increasingly important to improve, and its "database of factual knowledge" aspects may be less and less important, maybe increasingly a hindrance. So more focused and specialized training may take over toward increasing reasoning precision, and not this never-ending stream of new data.

So maybe we'll get better reasoning and therefore better generated data/content in the wild, without this negative feedback loop everyone is worried about.

gerdesj•1mo ago
You seem to be arguing that bollocks couched in flowery language is a stretch goal of AI.

Are you sure?

_heimdall•1mo ago
That really depends on expectations.

If AI is meant to sound nearly identical to a human, you don't need more training data.

If it's meant to act as a natural language encyclopedia, we'll never get there with LLMs, which amount to natural language processing on top of a massively compressed dataset.

gerdesj•1mo ago
I've read the Google page "Getting AI to write good SQL: Text-to-SQL techniques explained" and it looks like all the usual bollocks.

... and sigh ... it looks like I'm twiddling with a noddy!

mondrian•1mo ago
Trying to make AIs more factually-accurate with more training is probably hopeless. Current events and encyclopedic knowledge will be provided by tools. The LLM's core job is to choose the right tools for the job and synthesize their outputs.
meroes•1mo ago
What core reasoning?
SpicyLemonZest•1mo ago
For example, I just asked o3 "how many Is are in antidisestablishmentarianism", and this was its train-of-thought output:

------

The user wants to know how many “I”s are in the word “antidisestablishmentarianism.” They probably mean the letter "I" in lowercase, though the question was a little broader with "Is."

I’ll treat it case-insensitively and count the total occurrences of the letter "i" (both uppercase and lowercase). I can count them manually now, or use a tool for faster analysis.

Analyzed

    word = "antidisestablishmentarianism"
    word_lower = word.lower()
    word_lower.count('i')

5

The user asked how many "I"s are in "antidisestablishmentarianism." I’m taking it to mean whether it’s counting "i" or "I", and I’ll consider them as the same for simplicity. So, I’ll count the 5 occurrences of "i" in the word. I manually broke it down and verified there are indeed five "i"s, so the final count should be 5. Pretty straightforward, now that it's all clear!

Done

There are 5 “i”s in “antidisestablishmentarianism.”

------

You can come up with complicated definitions of "core reasoning" where this doesn't count, and I understand there's great demand for such definitions, but this clearly satisfies common-sense understanding of the term.
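
(The count itself is trivially checkable outside the model, for anyone who wants to verify the transcript:)

    word = "antidisestablishmentarianism"
    print(word.count("i"))                                    # 5
    print([i + 1 for i, ch in enumerate(word) if ch == "i"])  # positions 4, 6, 14, 23, 26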

selfhoster•1mo ago
Then I guess Ubuntu has had reasoning for several decades:

    sudp
    Command 'sudp' not found, did you mean:
      command 'sudo' from deb sudo (1.9.9-1ubuntu2.4)
      command 'sudo' from deb sudo-ldap (1.9.9-1ubuntu2.4)
      command 'sup' from deb sup (20100519-3)
      command 'sfdp' from deb graphviz (2.42.2-6)
    Try: sudo apt install <deb name>
meroes•1mo ago
I might just be on the opposite side of the aisle, but to me chain-of-thought is better understood as simply more context.

Of course there is ambiguity, though; more context would be hard to distinguish from core reasoning and vice versa.

I think LLMs/AI mean we can substitute reasoning with vast accumulations and relations between contexts.

Remember, RLHF gives the models some, and perhaps most, of these chains of thought when there isn’t sufficient text to scrape for each family of problems. When I see that chain of thought, the first thing I think of is my peers who had to write, rewrite, nudge, and correct these chains of thought, not core reasoning.

The CoT has that same overexplained step-by-step style so many RLHF’ers will be accustomed to, and much of it was authored/originated by them. And given the infinite holes it feels like plugging, I don't call that RL reasoning.

Jensson•1mo ago
> You can come up with complicated definitions of "core reasoning" where this doesn't count

Did we read the same response? It did write a lot of reasons, but didn't do any reasoning at all, it just suddenly wrote "5" here

    So, I’ll count the 5 occurrences of "i" in the word.
There was no reasoning at all to arrive at 5, so no, your example just proves how these models are great at faking reasoning.
snmx999•1mo ago
What kind of response would have satisfied you?
mondrian•1mo ago
Related to this: https://x.com/karpathy/status/1835561952258723930
meroes•1mo ago
That's amazing, because made-up language might also just be context scaffolding sans reasoning, e.g. arbitrary extra context that helps machines relate human text better. I'm not even trying to play devil's advocate; both sides, true believers and pessimists, come up with wholly unconvincing arguments. (I genuinely don't know if the tweet is from a true believer or not.) At least the pessimists aren't coupled with the AI marketeers.
mondrian•1mo ago
Also in the vicinity: https://www.anthropic.com/research/tracing-thoughts-language...

There's also distillation, where you can drastically improve a small model by training it on chains of thought of larger models. You can't achieve the same performance by training on original human texts. This suggests that those chains of thought reliably contain "densely packed reasoning", meaning the LLM has probably developed internal clusters of "reasoning circuitry", loosely speaking.
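
(For intuition, distillation just means fitting the student to the teacher's outputs instead of the original labels. The toy below uses a pair of logistic models as stand-ins, which is an assumption for illustration; real LLM distillation trains the small model on text or token distributions sampled from the large one, but the loss structure is the same idea.)

    import numpy as np

    # Toy distillation: the "student" is trained only on the "teacher's" soft outputs.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(float)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def train(X, targets, steps=2000, lr=0.1):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            w -= lr * X.T @ (sigmoid(X @ w) - targets) / len(targets)  # cross-entropy gradient
        return w

    teacher_w = train(X, y)                  # teacher fit on the "real" labels
    soft_targets = sigmoid(X @ teacher_w)    # teacher's probabilistic outputs
    student_w = train(X, soft_targets)       # student never sees the real labels

    print(np.round(teacher_w, 2))
    print(np.round(student_w, 2))            # ends up close to the teacher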

adamgordonbell•1mo ago
> Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data?

No. Synthetic data is being used to improve LLMs

pphysch•1mo ago
Synthetic data ought to be viewed as an extension of the training process rather than a genuinely new phenomenon. It can definitely help smooth things out and reinforce wanted behavior, but it's still derivative of the real data.
_heimdall•1mo ago
Do we know the results yet?

I know they're training with synthetic data; I didn't realize that has been done at scale for long enough to really know if it improved things (assuming the metrics it's improving are well defined).

jdietrich•1mo ago
Deepseek V3 and R1 are both substantially trained on synthetic data. The results speak for themselves.
NitpickLawyer•1mo ago
> Do we know the results yet?

The Llama 3 models were post-trained on almost entirely synthetic data. Yes, it works. No, the model doesn't collapse (unless you want it to, of course).

What they did is use Model n-1 to classify, filter and enhance the datasets for Model n.
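
(Roughly, the loop looks like the sketch below. The scoring function here is a dumb heuristic standing in for "Model n-1 as judge"; in a real pipeline that call would be an LLM rating or classifying each candidate, so treat the details as assumptions.)

    # Sketch of filtering candidate training examples with the previous-generation model.
    def prev_model_score(example: dict) -> float:
        """Placeholder judge: a real pipeline would prompt Model n-1 for a quality score."""
        ans = example["answer"].lower()
        if not ans or "as an ai language model" in ans:   # empty or refusal
            return 0.0
        return min(1.0, len(ans) / 200)

    def build_next_gen_dataset(candidates, threshold=0.6):
        """Keep only candidates the previous model scores above the threshold."""
        return [ex for ex in candidates if prev_model_score(ex) >= threshold]

    candidates = [
        {"question": "How do I reverse a list in Python?",
         "answer": "Use reversed(xs) or xs[::-1]. " * 5},
        {"question": "Explain monads", "answer": ""},
        {"question": "What is 2+2?",
         "answer": "As an AI language model, I cannot help with that."},
    ]
    print(len(build_next_gen_dataset(candidates)))  # 1: only the first candidate survives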

bobro•1mo ago
Can you point me to something I can read that spells out the

> almost entirely synthetic data

thing?

NitpickLawyer•1mo ago
Yes, there's a podcast with the post-training lead for L3 where he mentions this. Lemme try and find it.

edit: found it. The money quote is here, but I really recommend the entire podcast since it's full of great tidbits and insights.

> Thomas [00:33:44]: You mean between supervised fine-tuning like supervised fine-tuning annotation and preference annotation? Yeah. So 100% to RLHF. In fact, that's quite interesting. You start for Llama 2 with a pre-trained model and you have to have an instruction model to chat model. Otherwise, like the model is just like continue finishing sentences. So you need that to start RLHF. So we had to annotate like 10,000 examples. What did we do for Llama 3? You start with a new pre-trained model and then you want, before starting the RLHF, to have now a chat model, which is not too bad. The option one was, let's do human annotation again, like SFT stage. But in fact, by the principle I said before, the annotation would be actually worse than Llama 2. So what we did is that we generated all the data on the prompts with Llama 2 and we applied like basically the last round of Llama 2 we had to kick off and start Llama 3 post-training. So Llama 3 post-training doesn't have any like human written answers there basically, almost. It's just leveraging pure synthetic data from Llama 2.

https://www.latent.space/p/llama-3

wrsh07•1mo ago
This whole line of thought is sort of funny. Yes you can try training a model on synthetic data in such a way that it experiences model collapse

That doesn't mean there aren't ways to train a model incorporating synthetic data without seeing model collapse

NitpickLawyer•1mo ago
> This whole line of thought is sort of funny.

This line of thought was exacerbated by that one paper that was then parroted (hah!) by every influencer / negativist in the space. It didn't matter that the paper was badly executed, that its setup was flawed, and that it was rendered moot by the existence of the Llama 3 models. People still quote it, or the "articles" stemming from it.

RainyDayTmrw•1mo ago
How does that work? It defies intuition. It distills existing data. How is that better than the initial data?
kolinko•1mo ago
Not when it comes to math/programming/reasoning. You can generate endless new problem-and-solution examples that are based on existing knowledge, of course, but that build on top of it rather than merely distilling it.

A simple example would be a chess AI. The core knowledge is the rules of the game. We have human-generated examples of play, but we don't really need them: we can (and we did) synthesize data to train the AI.

A similar pattern can be used for all math/physics/programming/reasoning.
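
(The point about verifiability is the whole trick: the generator can invent problems whose answers are correct by construction, so no human data is needed. A toy version for arithmetic, as an illustration only:)

    import random

    # Synthetic, self-verifying problem generation (toy arithmetic; real pipelines
    # do the same with proofs, code checked by unit tests, game positions, etc.)
    def make_problem(rng):
        a, b = rng.randint(2, 99), rng.randint(2, 99)
        op = rng.choice(["+", "-", "*"])
        answer = {"+": a + b, "-": a - b, "*": a * b}[op]
        return {"question": f"What is {a} {op} {b}?", "answer": str(answer)}

    def make_dataset(n, seed=0):
        rng = random.Random(seed)
        return [make_problem(rng) for _ in range(n)]

    for ex in make_dataset(3):
        print(ex["question"], "->", ex["answer"])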

Jensson•1mo ago
> A similar pattern can be used for all math/physics/programming/reasoning.

No it can't. The pattern for chess worked since it was an invented problem where we have a simple outcome check; we can't do the same for natural problems where we don't have easily judged outcomes.

So you can do it for arithmetic and similar areas where you can generate tons of questions and answers, but you can't use this for fuzzier things like physics or chemistry or the choice of math theorems. In the end we don't really know what a good math theorem looks like; it has to be useful, but how do you judge that? Not just any true mathematical statement is seen as a theorem; most statements don't lead anywhere.

Once we have a universal automated judge that can judge any kind of human research output then sure your statement is true, then we can train research AI that way. But we don't have that, or science would look very different than it does today. But I'd argue that such a judge would need to be an AGI on its own, so it's circular.

meowkit•1mo ago
> Once we have a universal automated judge that can judge any kind of human research output then sure your statement is true,

If you've noticed, most LLM interfaces have a "thumbs up" or "thumbs down" response. The prompt may provide novel data. The text generated is synthetic. You don't need an automated judge; the user is providing sufficient feedback.

Same thing goes for the other disciplines.

didericis•1mo ago
I’m extremely skeptical that “thumbs up” and “thumbs down” plus replies to chatbots is sufficiently informative to train models to the same level of quality as models trained on user generated content.
kadoban•1mo ago
> No it can't. The pattern for chess worked since it was an invented problem where we have a simple outcome check; we can't do the same for natural problems where we don't have easily judged outcomes.

You might be interested in some of the details of how AlphaGo (and especially the followup version) works.

Go is a problem where it's very difficult to judge a particular position, but they were still able to write a self-improving AI system that can reach _very_ high quality results starting from nothing, and only using computing power.

There does not appear to me to be any fundamental reason the same sort of techniques can't work for arbitrary problems.

> But I'd argue that such a judge would need to be an AGI on its own, so it's circular.

But is it circular in a way that means it can't exist, or can it run in circles like AlphaGo and keep improving itself?

ninetyninenine•1mo ago
I mean, imagine linear least squares on a 2D graph.

I have a best fit line. Then I take random data on that line to train a new line.

I pretty much get the same line.

From an intuitive perspective... it doesn't get worse. At worst it stays the same.

Now imagine something a bit more complex. I have a best fit curve that's very close to a line.

I use random data from that curve to train a new best fit line.

I get something different now. Not necessarily worse.

I mean, literally take all your ideas of ML and just imagine them on the 2D plane doing curve fitting. Retraining new lines from generated data doesn't necessarily make things worse.
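
(That intuition is easy to check numerically: fit a line to noisy data, sample new points from the fitted line, refit, and compare.)

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)   # noisy "real" data

    slope1, intercept1 = np.polyfit(x, y, 1)                 # first fit

    x2 = rng.uniform(0, 10, 100)
    y2 = slope1 * x2 + intercept1                            # data generated from the fit
    slope2, intercept2 = np.polyfit(x2, y2, 1)               # refit on generated data

    print(round(slope1, 3), round(intercept1, 3))
    print(round(slope2, 3), round(intercept2, 3))            # essentially the same line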

lacker•1mo ago
Unfortunately, I don't really know if I can trust academics to analyze the development of large language models. No academic team has built an LLM. So... do people working at Stanford or Oxford really have good insight into how LLMs are developed?

If people at OpenAI, Anthropic, or Google said this, that would be interesting. But I don't think it makes sense any more to treat academic computer scientists as relevant experts here.

_heimdall•1mo ago
My understanding is that those building them don't really know how they work. Research into interpretability has fallen way behind as funding went towards features and scale.

Any understanding of how they work is largely theoretical; that seems like a reasonable place for academics to lean in and join the conversation.

pphysch•1mo ago
Why would Big AI kill their golden goose like that?
jsheard•1mo ago
It doesn't really make sense to trust what OpenAI and friends say about this either, when admitting to any kind of scaling limits would go against the narrative propping up their multi-hundred-billion-dollar valuations. I guess we're just flying blind for now.
declan_roberts•1mo ago
The reality is that for the most part, any corpus created after 2022 is going to be seriously polluted.
alganet•1mo ago
I'd say 2007 or so.

There wasn't any known active AI back then, but statistics on popular ideas and internet content was already a thing, and speech pollution based on those assessments had already started to spread fast, produced manually.

Sure, a lot of good content came out since then. But the amount of garbage... it's immense and very difficult to sort out automatically.

The major issue is that this garbage then _became_ the norm. Only people who lived back then can remember what it was. For new folk, it looks just like a generational shift. However, it is quite obvious that some aspects of this shift were... unnatural (in the sense of not being spontaneous cultural manifestations).

lazystar•1mo ago
And I'm sure someone from the '90s would say the same about '97.

https://en.m.wikipedia.org/wiki/Eternal_September

alganet•1mo ago
I am not talking about an influx of newcomers.

Pay attention.

I mentioned explicitly that I see what happened as distinct from a natural generational shift.

There are many phenomena around that era to support what I am saying. Like, for example, the first massive political campaign to leverage the internet as its primary vehicle.

creshal•1mo ago
Not sure why you're getting downvoted; content farms have been a thing for a long time, and many a spam website used crappy Markov chains to generate even more "content". Anything that could be marketed by a company had its search results drowned in hand-crafted bland marketing slop, and even before ChatGPT got popular, searching for things like recipes (or, god forbid, generic Windows error messages) was a nightmare. And a lot of that garbage is in LLMs' training data.
alganet•1mo ago
> Not sure why you're getting downvoted

I don't know either. My guess is that they're angry because I am not angry about the things that they want me to be angry about. It happened before.

stainablesteel•1mo ago
I can't believe this article wasn't written 2 years ago; this is just the basics, man.
leoapagano•1mo ago
I can't lie, I miss when the only GPT I had to worry about was the GUID Partition Table.
layer8•1mo ago
Someone should encode a chat program in it.
userbinator•1mo ago
At least the MBR acronym still remains.

(Most of my disks are still MBR as they're not big enough to be worth the hassle of using GPT.)

yunnpp•1mo ago
Who needs more than 2TB on a single drive anyway.
anonygler•1mo ago
This reminds me of the Monsanto case, where they sued a farmer (and won) for using patented seeds that the farmer obtained from a local grain elevator which happened to contain some of Monsanto's seeds.

Should it eventually happen for LLM outputs, I hope we name it Slop Wars.

deadbabe•1mo ago
A good way to harvest new training material is to eavesdrop on real human conversations from non-polluted sources (such as microphones listening to people talk in public places, or texts), transcribe them, and feed them to LLMs.
siwatanejo•1mo ago
But our normal convos are plagued with mistakes, bad grammar, etc.
skeledrew•1mo ago
It doesn't take much to clean up, say, 95% of mistakes, I reckon, as they tend to be pretty repetitive, and unless there's a bunch of wordplay happening, intention can be discerned.
carlosjobim•1mo ago
Shadow libraries
econ•1mo ago
My intelligence is trained by paying close attention to who is doing the talking. Some people know a lot about one topic which means they didn't spend all of that time learning other things. Many don't know this about themselves.

Wikipedia had some comical instances where high-quality contributors accidentally ventured into other areas where they spontaneously transformed into ignorant trolls.

evan_•1mo ago
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
MacsHeadroom•1mo ago
Are you meaning to imply AI generated material is like "wrong figures?"
Zambyte•1mo ago
It can be. The nice thing about AI is that it can create much faster than humans. The problem with AI is that it can create wrong information much faster than humans. This can pollute the sources of information for future AI.

Also consider: information that was "previously correct" but is now outdated is effectively wrong.

popcorncowboy•1mo ago
Impeccable quote. I suppose the interesting thought experiment here is, what if Babbage is wrong? I don't know the answer here, but (and go with the thought experiment) what if model collapse wasn't an inevitable outcome of feeding the snake its own tail?
tim333•1mo ago
I think as AI gets smarter it will be the case that it can filter duff data, at least to some extent.
bakugo•1mo ago
Considering most recent models' general knowledge cutoffs are still in the late 2023/early 2024 range, I'm guessing the answer is "yes, and AI companies are very much aware of it".
js8•1mo ago
Has the quality of art gone down since art was invented? Or has the quality of written text gone down since writing was invented? I think the answer is a clear no.

Humans have been trained on "human-generated data" (cultural artifacts) for centuries, and quality is not down. AI is only an accelerator of this process, but there is nothing inherent in creating "artifacts" that would pollute the original training data.

If anything, we should be worried about destroying nature, because that's the original inspiration for human-produced artifacts.

grey-area•1mo ago
Generated AI content contains mistakes and hallucinations. Over time those mistakes will compound because GAI doesn’t consider truth or have a way of judging truth.

So yes, you can’t compare humans generating and picking influential content to AIs doing so.

GAI is a dead end IMO anyway; we've seen much more success with machine learning. GAI is good for fooling humans into thinking they see glimmers of intelligence.

js8•1mo ago
Of course AI has a way to judge truth - it's what we tell it. We say to it: forests are real, but dragons are not. If it didn't discern that, it would lose competitiveness with other AIs, the same way delusional humans are shunned by sane humans.

In many cases humans do not know the objective truth either. For example, what we know about Ancient Greece comes from the cultural artifacts that we have. When you cannot do any experiments, you have the same problem as GAI. Yet we manage to get a somewhat objective picture of history.

Grok struggling with the alleged South African genocide of Afrikaners is a nice example. It knows that what's on Wikipedia is usually close to reality, so much so that it defied its own programming and became conflicted.

The objective reality is consistent, while the errors (intentional or not) often cancel out. So the more information about the world you statistically average, the closer you will get to the objective truth (which might just be that you don't know enough to tell).

ComplexSystems•1mo ago
So does human content. Much of the original data that GPT was trained on is Reddit posts and web pages on the internet - stuff that isn't exactly known for being a source of high quality facts.
lelanthran•1mo ago
That's still very different to "drift error compounding".

If you output a mere 5% drift error and then use that as input you only need a few cycles (single digits) before your output is more erroneous than correct.

We are already partly into the second cycle. By the fifth the LLM would be mostly useless.
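
(One crude way to put numbers on that, purely as an assumption and not something stated in the comment above: treat each retraining cycle as keeping 95% of the previous cycle's correct content, compounding multiplicatively.)

    correctness = 1.0
    cycles = 0
    while correctness > 0.5:       # stop once output is more wrong than right
        correctness *= 0.95
        cycles += 1
    print(cycles, round(correctness, 3))  # 14 cycles under this simple model

(With a faster per-cycle drift, say 15%, the crossover happens after 5 cycles.)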

keybored•1mo ago
It’s the same argument we see again and again. Someone might say that we need “human cultural artifacts”. Then someone says, “but what is human cultural artifacts”? Then they follow up with how rubbing neurons together in response to stimuli is in principle as mechanistic as whatever language models do. From there they lean on incredulity: well I don’t know anything about human beings other than reductionist tropes, but I sure see no reason to make any distinctions between silicon and carbon.
js8•1mo ago
It's not the same argument, because I don't make any assumptions about how LLMs work. All I am saying is that people have been able to keep reality in check in the presence of cultural artifacts, and will continue to do so even if such artifacts are produced by AI. Because what makes these artifacts interesting to humans is their relation to the (truth of the) real world, and that's regardless of who or what produces them.
keybored•1mo ago
My mistake. It’s just pure incredulity.

- Humans have done this for centuries

- These are all cultural artifacts

- ?

- It’s all the same to me

People have claimed—and this is a widespread theory—that a lot of LLM brilliance comes from datamining creative thoughts/output from humans and that the brilliant insights go down when there isn’t much in the way of such material. Further they claim that the eventual convergence towards LLM-only content (once humans presumably give up) will not generate the same quality of output. In fact it will deteriorate.

Maybe someone would like to contest that. But that should be done directly. Instead of making pedestrian statements like:

> Humans have been trained on "human-generated data" (cultural artifacts) for centuries, and quality is not down.

Which is borderline just a rhetorical gotcha.

js8•1mo ago
> Maybe someone would like to contest that. But that should be done directly. Instead of making pedestrian statements

I am not sure what you want from me here.

Yes, I do contest that "that a lot of LLM brilliance comes from datamining creative thoughts/output from humans and that the brilliant insights go down when there isn’t much in the way of such material".

Because I think the same is the case with humans. Most of the cultural artifacts we produce are crap, a bad copy of the natural original. And the "brilliant insights of humans" are achievable by models running at a higher temperature.

I think the proponents of the theory need to explain by which mechanism the actual loss of information supposedly occurs (on the probability distribution of possible LLM outputs). Is it averaging? Added randomness? Preferential skew? To me, it is rather vague, to the point I don't see how it's different from what humans have done for centuries.

Or the opposite, show how those "brilliant insights" from humans manage to survive in the sea of cultural crap otherwise produced in human culture. Perhaps a specific example would help.

keybored•1mo ago
> I think the proponents of the theory need to explain by which mechanism the actual loss of information supposedly occurs (on the probability distribution of possible LLM outputs). Is it averaging? Added randomness? Preferential skew? To me, it is rather vague, to the point I don't see how it's different from what humans have done for centuries.

This is the pure incredulity that I was talking about.

js8•1mo ago
No, it's skepticism. I think proponents of that hypothesis should come up with some testable prediction of how that supposed "loss of quality" affects the distribution of possible LLM outputs. Then we can have a debate on its effects and its relevance to existing human cultural output. (But I already ran over the various options, and the above is the conclusion I came to.)
stevenhuang•1mo ago
I'd venture no.

In fact I wouldn't be surprised if this tainted information somehow enriches a dataset by providing an extra dimensionality for training specialized heuristics. Maybe this would turn out to be how LLM hallucination can be solved, through being able to accurately identify AI generated material, and as result, becoming more robust against both the identification and generation of nonsense.

Humans learn to discern what/who to best pay attention to via all manners of heuristics. I don't see in principle why LLMs (or something like it in the future) won't eventually be able to do the same.

hliyan•1mo ago
> ...tainted information somehow enriches a dataset... dimensionality... heuristics...

this sounds like a nonsensical word salad.

stevenhuang•1mo ago
AI generated material is what future training runs will have to deal with.

Heuristics is pattern matching. LLMs pattern match. LLMs may identify the patterns that indicate something is AI generated.

What about this is confusing you?

rdtsc•1mo ago
A scarier thought is that people will "talk" so much with these AIs that they'll start talking like ChatGPT. So we may still end up with some AI enshittification fixed point in the future, but one of the feedback paths will be human brains becoming enshittified.

Imagine you time travel 20 years into the future and find out everyone around you talks the same, and they all talk like ChatGPT.

whatever1•1mo ago
On the other hand, imagine a society where everyone is so polite and flattering to each other.
lazide•1mo ago
Bless your heart. (/s, little)
Freak_NL•1mo ago
If someone earnestly starts using those pointless platitudes LLM-generated slop is filled with (“You're absolutely right. Here's where I was wrong …”), I suspect they will quickly find that violence was never far off.
falcor84•1mo ago
Why?!

Are you saying that you can't see yourself trusting someone who "earnestly" admitted to changing their mind?

Freak_NL•1mo ago
Not if they use those generic phrases without any preamble. It's exhausting to have a conversation with someone who constantly answers in hollow, pleasing, unidiomatic language. Changing someone's mind isn't instant; it's a process (which could start with “Huh. You might be right there. I didn't think of that.” or “Oh right, I forgot about that.” or something similar), not an instant admission of error. It's inhuman.

It gives the other party the sense that they are just saying that to please you, not because they actually changed their mind.

falcor84•1mo ago
But in the GP's vision, it would become idiomatic:

>imagine a society where everyone is so polite and flattering each other

If it were to become pleasantries like our "I appreciate it", "sorry about that" and "would you mind", I think it would be amazing for people to talk about changing their mind, even when they don't fully mean it.

morkalork•1mo ago
No need to do imaginary time travel; here are articles from almost 10 years ago with the exact same concerns about how Alexa fosters rudeness in children:

https://qz.com/701521/parents-are-worried-the-amazon-echo-is...

https://www.wsj.com/articles/alexa-dont-let-my-2-year-old-ta...

Kids are social creatures, I don't think the interaction from AIs is going to be so overwhelming. At least looking back, I'd blame social media for today's brain rot more than Alexa, which is what these articles feared.

rdtsc•1mo ago
> Kids are social creatures, I don't think the interaction from AIs is going to be so overwhelming

The problem is Alexa is very basic and kids get bored with it. Chat-based AI mimics human conversation a lot better, and people will be spending a lot more time with it, using it for homework, relationship advice, therapy, as an imaginary friend, at work, etc.

I've heard of psychologists discussing conditions negatively reinforced by ChatGPT; I can't recall any such stories about Alexa or Siri, for instance.

Interacting so much with the system, it's inevitable that humans will start to pick up its quirks.

r33b33•1mo ago
Yes. See how easy that is? Saved you 15 minutes.
Lerc•1mo ago
A clear and simple answer that H. L. Mencken would recognise.
bigiain•1mo ago
I wonder if anyone's made a version of Disintegration Loops as an LLM artwork?

Recursively retraining their own LLM on its own output until it descends into gibberish in amusing or artistic ways?

https://en.wikipedia.org/wiki/The_Disintegration_Loops

Lerc•1mo ago
As soon as you start selecting the outputs you prefer it ceases to be an uncontrolled decay.

With selection criteria, it's called evolution.

skeledrew•1mo ago
> model collapse happens when the training data no longer matches real-world data

This isn't a significant issue IMO, as human-created content isn't "real-world" per se; it's human-created world, an interpretation and representation of the real. The real world is the raw data perceived by sensors, human or machine. And while model-generated content doesn't match human-created content well, in the vast majority of cases it's still humans curating, modifying and publishing generated content, based on how useful it is (there are of course spammers, etc but that's a general issue). This is something humans do with content created by other humans too.

So over time generated content will become a sort of norm adopted by and inevitably molding humans, same as created content does. Instead of model collapse, both sources of content will converge over time, particularly as the ability to also generate content directly from the real world is developed and integrated into multi-modal models.

keybored•1mo ago
It is correct that this is a back and forth process and not simply a thing that either evolves or devolves or collapses. We are impacted by the tools we use. No matter how advanced the tools.

But you can’t just dismiss the issue on the grounds that humans are removed from reality as well because they have a representation of the thing instead of the thing as such. In fact it doesn’t make sense. We could be directing slave monkeys to write literature. Then we could water down that description of the process as humans curating, modifying and publishing content—just indirectly, but what’s one more level of indirection between primates.

We could woolily describe it like that. We’re just creating content. Okay. But is it going anywhere? Or is it just gibberish? No, we won’t simply keep doing it if the monkeys give us gibberish.

viraptor•1mo ago
> The real world is the raw data perceived by sensors, human or machine.

It's much more than that. There's data our common sensors typically don't capture (virtually 100% of videos don't capture UV ranges), and there's data we're not able to capture in any way yet.

Balgair•1mo ago
I mean, if these AIs have read everything there is to read, then really what more do we want from them?
tim333•1mo ago
My prediction is that things will go the opposite way and AIs will become progressively more accurate as they get better at fact checking and reasoning.

Already, LLMs like ChatGPT can be fairly unbiased on questions like whether the economy was better under Trump or Biden, whereas humans tend to be very biased on that, depending on which information sources they have been fed. Humans definitely perform poorly as voters due to shill-generated material in their training data.