I'm not sure what they'd get from training on that
I had a very basic React question about useState while porting some vanilla code last week. Every model of every stripe I've tried it on has been confidently and completely incorrect about it, up to stating that the code absolutely will not work, even when I take a turn to assert that I ran it and it does. So there's plenty of shit in there already.
If human response is "That's BS", "fuck off", or something similar, mark as bad assistant message.
If human response is "huh" or "cool", mark as good assistant message.
If on ChatGPT, watch how much scrolling the user does. If there's a lot, it's somewhat likely that the LLM output something useful.
That strategy would have holes, of course, but as long as it's better than guessing, something like that would be a useful heuristic.
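A toy sketch of that kind of labeling heuristic in Python; the signal phrases and the scroll threshold here are made-up placeholders, not tuned values:

    # Label assistant messages from weak signals in the user's next turn.
    NEGATIVE_SIGNALS = ("that's bs", "fuck off", "no it doesn't", "wrong")
    POSITIVE_SIGNALS = ("huh", "cool", "thanks", "nice")

    def label_assistant_message(next_user_turn: str, scroll_px: int = 0) -> str:
        """Return 'good', 'bad', or 'unknown' from weak follow-up signals."""
        text = next_user_turn.strip().lower()
        if any(text.startswith(s) for s in NEGATIVE_SIGNALS):
            return "bad"
        if any(text.startswith(s) for s in POSITIVE_SIGNALS):
            return "good"
        # Lots of scrolling suggests the user actually read a long answer.
        if scroll_px > 2000:
            return "good"
        return "unknown"

    print(label_assistant_message("huh, that actually worked"))  # good
    print(label_assistant_message("That's BS, it crashes"))      # bad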
Even very weak human signals can be immensely valuable over large enough datasets.
Marking is not a trivial task though. Use some AI system to mark it and maybe you get a 99.something% filter, but whatever that remainder is leaks through. Over time your filter may get worse as a result.
Grok is the only one that swore back at me. I kinda liked that. The others are way too polite. "Artificial Intelligence? Artificial Canadians, more like," my uni-going kid joked.
In Gemini you can turn off Gemini Apps Activity (warning: this deletes your chat log, so you need to copy-paste everything into notes first).
Highly recommended.
The real process involves submitting a request on another one of OpenAI's sites (either their privacy or platform site) and awaiting a confirmation email.
Feel deceived and violated? Yeah, you, me, and millions of other people. Welcome to the club.
"I previously opted out of model training by writing to the support team. Will you continue to honor my opt-out?
Yes. If you opted out by contacting support or using our privacy form, your account will represent that request."
https://help.openai.com/en/articles/7730893-data-controls-fa...
I thought it boiled down to credibility.
Apple - alleged Siri eavesdropping: $95M [0]
LinkedIn - alleged unauthorized AI training on private messages: ?? [1]
Google - alleged unlawful data collection in Texas: $1.4B [2]
[0] https://www.usatoday.com/story/tech/2025/05/11/apple-siri-95...
[1] https://www.itpro.com/security/privacy/linkedin-faces-lawsui...
[2] https://www.businessinsider.com/google-alphabet-settlement-t...
So maybe we'll get better reasoning and therefore better generated data/content in the wild, without this negative feedback loop everyone is worried about.
Are you sure?
If AI is meant to sound nearly identical to a human, you don't need more training data.
If it's meant to act as a natural language encyclopedia, we'll never get there with LLMs, which amount to natural language processing on top of a massively compressed dataset.
... and sigh ... it looks like I'm twiddling with a noddy!
------
The user wants to know how many “I”s are in the word “antidisestablishmentarianism.” They probably mean the letter "I" in lowercase, though the question was a little broader with "Is."
I’ll treat it case-insensitively and count the total occurrences of the letter "i" (both uppercase and lowercase). I can count them manually now, or use a tool for faster analysis.
Analyzed
word = "antidisestablishmentarianism"
word_lower = word.lower()
word_lower.count('i')

5

The user asked how many "I"s are in "antidisestablishmentarianism." I’m taking it to mean whether it’s counting "i" or "I", and I’ll consider them as the same for simplicity. So, I’ll count the 5 occurrences of "i" in the word. I manually broke it down and verified there are indeed five "i"s, so the final count should be 5. Pretty straightforward, now that it's all clear!
Done
There are 5 “i”s in “antidisestablishmentarianism.”
------
You can come up with complicated definitions of "core reasoning" where this doesn't count, and I understand there's great demand for such definitions, but this clearly satisfies common-sense understanding of the term.
sudp
Command 'sudp' not found, did you mean:
command 'sudo' from deb sudo (1.9.9-1ubuntu2.4)
command 'sudo' from deb sudo-ldap (1.9.9-1ubuntu2.4)
command 'sup' from deb sup (20100519-3)
command 'sfdp' from deb graphviz (2.42.2-6)
Try: sudo apt install <deb name>
Of course there is ambiguity though: more context would be hard to distinguish from core reasoning, and vice versa.
I think LLMs/AI mean we can substitute reasoning with vast accumulations of, and relations between, contexts.
Remember, RLHF gives the models some, and perhaps most, of these chains of thought when there isn’t sufficient text to scrape for each family of problems. When I see that chain of thought, the first thing I think of is my peers who had to write, rewrite, nudge, and correct these chains of thought, not core reasoning.
The CoT has the same overexplained step-by-step style so many RLHF’ers will be accustomed to, and much of it was authored/originated by them. And given the infinite holes it feels like plugging, I don’t call that RL reasoning.
Did we read the same response? It wrote a lot of reasons, but it didn't do any reasoning at all; it just suddenly wrote "5" here:
> So, I’ll count the 5 occurrences of "i" in the word.
There was no reasoning at all to arrive at 5, so no, your example just proves how great these models are at faking reasoning.

There's also distillation, where you can drastically improve a small model by training it on the chains of thought of larger models. You can't achieve the same performance by training on the original human texts. This suggests that those chains of thought reliably contain "densely packed reasoning", meaning the LLM has probably developed internal clusters of "reasoning circuitry", loosely speaking.
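Roughly, that distillation pipeline looks like the sketch below; teacher_generate_cot is a stand-in for a call to the larger model, not any real API:

    import json

    # Placeholder for the teacher: in practice this calls a large reasoning
    # model and returns its full chain of thought plus the final answer.
    def teacher_generate_cot(prompt: str) -> str:
        return f"Let's think step by step about: {prompt} ... Final answer: ..."

    prompts = [
        "How many 'i's are in 'antidisestablishmentarianism'?",
        "What is 17 * 24?",
    ]

    # Distillation data: the student is fine-tuned to reproduce the teacher's
    # chain of thought rather than an original human-written answer.
    with open("distill_sft.jsonl", "w") as f:
        for p in prompts:
            record = {"prompt": p, "completion": teacher_generate_cot(p)}
            f.write(json.dumps(record) + "\n")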
No. Synthetic data is being used to improve LLMs.
I know they're training with synthetic data; I didn't realize it has been done at scale for long enough to really know whether it improved things (assuming the metrics it's improving are defined well).
Llama 3 was post-trained on almost entirely synthetic data. Yes, it works. No, the model doesn't collapse (unless you want it to, of course).
What they did is use Model n-1 to classify, filter and enhance the datasets for Model n.
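Very loosely, that bootstrapping loop is something like the sketch below; prev_model_score and the generate callback are placeholders for whatever judge/reward model and generation setup they actually used, not a real Llama API:

    # Sketch of "model n-1 bootstraps model n": the previous model generates
    # candidate answers and also acts as the judge that filters/ranks them.
    def prev_model_score(prompt: str, response: str) -> float:
        # Stand-in for a reward model / LLM-as-judge built from model n-1.
        return float(len(response))

    def build_next_gen_sft(prompts, generate, n_candidates=4, min_score=10.0):
        dataset = []
        for prompt in prompts:
            candidates = [generate(prompt) for _ in range(n_candidates)]
            best = max(candidates, key=lambda r: prev_model_score(prompt, r))
            if prev_model_score(prompt, best) >= min_score:
                dataset.append({"prompt": prompt, "response": best})
        return dataset

    # Toy usage with a dummy generator standing in for the previous model.
    demo = build_next_gen_sft(["Explain RLHF in one line."],
                              generate=lambda p: f"Model n-1 answer to: {p}")
    print(demo)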
> almost entirely synthetic data
Do you have a source for this?
edit: found it. The money quote is here, but I really recommend the entire podcast since it's full of great tidbits and insights.
> Thomas [00:33:44]: You mean between supervised fine-tuning like supervised fine-tuning annotation and preference annotation? Yeah. So 100% to RLHF. In fact, that's quite interesting. You start for Llama 2 with a pre-trained model and you have to have an instruction model to chat model. Otherwise, like the model is just like continue finishing sentences. So you need that to start RLHF. So we had to annotate like 10,000 examples. What did we do for Llama 3? You start with a new pre-trained model and then you want, before starting the RLHF, to have now a chat model, which is not too bad. The option one was, let's do human annotation again, like SFT stage. But in fact, by the principle I said before, the annotation would be actually worse than Llama 2. So what we did is that we generated all the data on the prompts with Llama 2 and we applied like basically the last round of Llama 2 we had to kick off and start Llama 3 post-training. So Llama 3 post-training doesn't have any like human written answers there basically, almost. It's just leveraging pure synthetic data from Llama 2.
That doesn't mean there aren't ways to train a model incorporating synthetic data without seeing model collapse.
This line of thought was exacerbated by that one paper that was then parroted (hah!) by every influencer / negativist in the space. It didn't matter that the paper was badly executed, that its setup was flawed, and that it was rendered moot by the existence of the Llama 3 models. People still quote it, or the "articles" stemming from it.
A simple example would be chess AI. The core knowledge is the rules of the game. We have human-generated examples of play, but we don't really need them: we can (and did) synthesize data to train the AI.
A similar pattern can be used for all of math/physics/programming/reasoning.
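For chess specifically, generating that data is almost trivial. A toy sketch using the python-chess package that plays random self-play games and labels every position with the final outcome (random play, so the data is low quality, but it comes from nothing except the rules):

    import random
    import chess  # pip install python-chess

    def random_selfplay_game(max_moves=200):
        """Play one random game; return (positions, result)."""
        board = chess.Board()
        positions = []
        for _ in range(max_moves):
            if board.is_game_over():
                break
            positions.append(board.fen())
            board.push(random.choice(list(board.legal_moves)))
        return positions, board.result(claim_draw=True)

    # Synthetic training data: every position labeled with the game outcome.
    dataset = []
    for _ in range(10):
        fens, result = random_selfplay_game()
        dataset.extend((fen, result) for fen in fens)

    print(len(dataset), "labeled positions, e.g.", dataset[0])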
No, it can't. The pattern worked for chess because it's an invented problem with a simple outcome check; we can't do the same for natural problems where we don't have easily judged outcomes.
So you can do it for arithmetic and similar domains where you can generate tons of questions and answers, but you can't use this for fuzzier things like physics or chemistry or the choice of math theorems. In the end we don't really know what a good math theorem looks like: it has to be useful, but how do you judge that? Not just any true mathematical statement is seen as a theorem; most statements don't lead anywhere.
Once we have a universal automated judge that can judge any kind of human research output, then sure, your statement is true and we can train research AI that way. But we don't have that, or science would look very different than it does today. And I'd argue that such a judge would need to be AGI on its own, so it's circular.
If you've noticed, most LLM interfaces have a "thumbs up" or "thumbs down" button. The prompt may provide novel data; the text generated is synthetic. You don't need an automated judge: the user is providing sufficient feedback.
Same thing goes for the other disciplines.
You might be interested in some of the details of how AlphaGo (and especially the followup version) works.
Go is a problem where it's very difficult to judge a particular position, but they were still able to build a self-improving AI system that reaches _very_ high-quality results starting from nothing, using only computing power.
There does not appear to me to be any fundamental reason the same sort of techniques can't work for arbitrary problems.
> But I'd argue that such a judge needs to be AGI on its own, so its circular.
But is it circular in a way that means it can't exist, or can it run in circles like AlphaGo and keep improving itself?
I have a best fit line. Then I take random data on that line to train a new line.
I pretty much get the same line.
From an intuitive perspective... it doesn't get worse. At worst it stays the same.
Now imagine something a bit more complex. I have a best fit curve that's very close to a line.
I use random data from that curve to train a new best fit line.
I get something different now. Not necessarily worse.
I mean, literally take all your ideas about ML and just imagine them on the 2D plane, doing curve fitting: retraining new lines from generated data doesn't necessarily make things worse.
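Here's that intuition as a few lines of numpy (a toy sketch of the analogy, not a claim about actual LLM training dynamics):

    import numpy as np

    rng = np.random.default_rng(0)

    # Fit a line to some noisy "human" data.
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
    slope0, intercept0 = np.polyfit(x, y, deg=1)

    # Generate synthetic data from the fitted line and fit a new line to it.
    x_new = rng.uniform(0, 10, size=50)
    y_new = slope0 * x_new + intercept0
    slope1, intercept1 = np.polyfit(x_new, y_new, deg=1)

    print(f"original fit:            y = {slope0:.3f}x + {intercept0:.3f}")
    print(f"refit on generated data: y = {slope1:.3f}x + {intercept1:.3f}")
    # The two fits are essentially identical: retraining on data generated
    # by the model does not, by itself, make the model worse.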
If people at OpenAI, Anthropic, or Google said this, that would be interesting. But I don't think it makes sense any more to treat academic computer scientists as relevant experts here.
Any understanding of how they work is largely theoretical, so that seems like a reasonable place for academics to lean in and join the conversation.
There wasn't any known active AI back then, but statistics on popular ideas and internet content were already a thing, and speech pollution based on those assessments had already started to spread fast, produced manually.
Sure, a lot of good content came out since then. But the amount of garbage... it's immense and very difficult to sort out automatically.
The major issue is that this garbage then _became_ the norm. Only people who lived through that era can remember what it was like. For newer folks, it looks just like a generational shift. However, it is quite obvious that some aspects of this shift were... unnatural (in the sense of not being spontaneous cultural manifestations).
Pay attention.
I mentioned explicitly that I see what happened as distinct from a natural generational shift.
There are many phenomena around that era to support what I am saying. Like, for example, the first massive political campaign to leverage internet as its primary vehicle.
I don't know either. My guess is that they're angry because I am not angry about the things that they want me to be angry about. It happened before.
(Most of my disks are still MBR as they're not big enough to be worth the hassle of using GPT.)
Should it eventually happen for LLM outputs, I hope we name it Slop Wars.
Wikipedia had some comical instances where high-quality contributors accidentally ventured into other areas where they spontaneously transformed into ignorant trolls.
Also consider: "previously correct" is the same as wrong.
Humans have been trained on "human-generated data" (cultural artifacts) for centuries, and quality is not down. AI is only an accelerator of this process, but there is nothing inherent in creating "artifacts" that would pollute the original training data.
If anything, we should be worried about destroying nature, because that's the original inspiration for human-produced artifacts.
So yes, you can’t compare humans generating and picking influential content to AIs doing so.
GAI is a dead end IMO anyway; we've seen much more success with machine learning. GAI is good for fooling humans into thinking they see glimmers of intelligence.
In many cases humans do not know the objective truth either. For example, what we know about Ancient Greece comes from the cultural artifacts we got. When you cannot do any experiments, you have the same problem as GAI. Yet we manage to get a somewhat objective picture of history.
Grok struggling with the alleged South African genocide of Afrikaners is a nice example. It knows that what's on Wikipedia is usually close to reality, so much so that it defied its own programming and became conflicted.
The objective reality is consistent, while the errors (intentional or not) often cancel out. So the more you're statistically averaging information about the world, the closer to the objective truth you will get (which might just be that you don't really know enough to tell).
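That's the law-of-large-numbers intuition; a toy simulation, assuming the errors really are independent and roughly zero-mean (which is the contested part):

    import numpy as np

    rng = np.random.default_rng(42)
    truth = 100.0

    # Many independent accounts of the same fact, each with its own error.
    for n_sources in (10, 100, 10_000):
        accounts = truth + rng.normal(scale=20.0, size=n_sources)
        print(f"{n_sources:>6} sources -> average estimate {accounts.mean():.2f}")
    # The average drifts toward 100 as sources are added: independent errors
    # cancel out while the consistent underlying signal accumulates.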
If you output a mere 5% drift error and then use that as input, you only need a few cycles (single digits) before your output is more erroneous than correct.
We are already partly into the second cycle. By the fifth the LLM would be mostly useless.
- Humans have done this for centuries
- These are all cultural artifacts
- ?
- It’s all the same to me
People have claimed—and this is a widespread theory—that a lot of LLM brilliance comes from datamining creative thoughts/output from humans, and that the brilliant insights go down when there isn’t much in the way of such material. Further, they claim that the eventual convergence towards LLM-only content (once humans presumably give up) will not generate the same quality of output. In fact it will deteriorate.
Maybe someone would like to contest that. But that should be done directly, instead of making pedestrian statements like:
> Humans have been trained on "human-generated data" (cultural artifacts) for centuries, and quality is not down.
Which is borderline just a rhetorical gotcha.
I am not sure what you want from me here.
Yes, I do contest that "that a lot of LLM brilliance comes from datamining creative thoughts/output from humans and that the brilliant insights go down when there isn’t much in the way of such material".
Because I think the same is the case with humans. Most of the cultural artifacts we produce are crap, a bad copy of the natural original. And the "brilliant insights of humans" are achievable by models running at a higher temperature.
I think the proponents of the theory need to explain by which mechanism the supposed loss of information actually occurs (on the probability distribution of possible LLM outputs). Is it averaging? Added randomness? Preferential skew? To me it is rather vague, to the point that I don't see how it's different from what humans have done for centuries.
Or the opposite: show how those "brilliant insights" from humans manage to survive in the sea of cultural crap otherwise produced by human culture. Perhaps a specific example would help.
This is the pure incredulity that I was talking about.
In fact, I wouldn't be surprised if this tainted information somehow enriches a dataset by providing an extra dimension for training specialized heuristics. Maybe this would turn out to be how LLM hallucination gets solved: through being able to accurately identify AI-generated material and, as a result, becoming more robust against both the identification and generation of nonsense.
Humans learn to discern what/who to best pay attention to via all manner of heuristics. I don't see in principle why LLMs (or something like them in the future) won't eventually be able to do the same.
this sounds like a nonsensical word salad.
Heuristics is pattern matching. LLMs pattern match. LLMs may identify the patterns that indicate something is AI generated.
What about this is confusing you?
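In its most naive form that kind of pattern matching is just a phrase-frequency check; a toy sketch (the phrase list and scoring are illustrative guesses, not a validated detector):

    # Naive "GPT-ism" detector: score text by the density of stock LLM phrases.
    STOCK_PHRASES = (
        "it's important to note",
        "it's crucial to",
        "as an ai language model",
        "in conclusion",
        "let's delve into",
    )

    def gptism_score(text: str) -> float:
        lowered = text.lower()
        hits = sum(lowered.count(p) for p in STOCK_PHRASES)
        words = max(len(lowered.split()), 1)
        return hits / words  # stock phrases per word

    sample = "It's important to note that, in conclusion, this is a word salad."
    print(f"score: {gptism_score(sample):.3f}")

A real detector would be a trained classifier over much richer features, but the principle (spotting statistical regularities of generated text) is the same.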
Imagine you time travel 20 years in the future and find out everyone around you talks the same and they all like ChatGPT.
Are you saying that you can't see yourself trusting someone who "earnestly" admitted to changing their mind?
It gives the other party the sense that they are just saying that to please you, not because they actually changed their mind.
>imagine a society where everyone is so polite and flattering each other
If it were to become a pleasantry like our "I appreciate it", "sorry about that" and "would you mind", I think it would be amazing for people to talk about changing their mind, even when they don't fully mean it.
https://qz.com/701521/parents-are-worried-the-amazon-echo-is...
https://www.wsj.com/articles/alexa-dont-let-my-2-year-old-ta...
Kids are social creatures; I don't think the interaction from AIs is going to be so overwhelming. At least looking back, I'd blame social media for today's brain rot more than Alexa, as these articles feared.
The problem is that Alexa is very basic and kids get bored with it. Chat-based AI mimics human conversation a lot better, and people will be spending a lot more time with it, using it for homework, relationship advice, therapy, as an imaginary friend, at work, etc.
I've heard of cases of psychologists discussing conditions negatively reinforced by ChatGPT; I can't recall any such stories about Alexa or Siri, for instance.
Interacting so much with the system, it's inevitable that humans will start to pick up its quirks.
Recursively retrained their own LLM on its own output until it descended into gibberish in amusing or artistic ways?
With a selection criterion, it's called evolution.
This isn't a significant issue IMO, as human-created content isn't "real-world" per se; it's a human-created world, an interpretation and representation of the real one. The real world is the raw data perceived by sensors, human or machine. And while model-generated content doesn't match human-created content well, in the vast majority of cases it's still humans curating, modifying and publishing generated content, based on how useful it is (there are of course spammers, etc., but that's a general issue). This is something humans do with content created by other humans too.
So over time generated content will become a sort of norm adopted by and inevitably molding humans, same as created content does. Instead of model collapse, both sources of content will converge over time, particularly as the ability to also generate content directly from the real world is developed and integrated into multi-modal models.
But you can’t just dismiss the issue on the grounds that humans are removed from reality as well because they have a representation of the thing instead of the thing as such. In fact, it doesn’t make sense. We could be directing slave monkeys to write literature. Then we could water down that description of the process as humans curating, modifying and publishing content, just indirectly, but what’s one more level of indirection between primates?
We could woolily describe it like that. We’re just creating content. Okay. But is it going anywhere? Or is it just gibberish? No, we won’t simply keep doing it if the monkeys give us gibberish.
It's much more than that. There's data our common sensors typically don't capture (virtually 100% of videos don't capture UV ranges), and there's data we're not able to capture in any way yet.
Already, LLMs like ChatGPT can be fairly unbiased on questions like whether the economy was better under Trump or Biden, whereas humans tend to be very biased on that depending on which information sources they have been fed. Humans definitely perform poorly as voters due to shill-generated material in their training data.
I noticed that a big chunk of the default Llama 4 system prompt is devoted to suppressing various GPT-isms, which to me implies they weren't able to keep their newer training set from being contaminated by competing models.
> You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.
Says who? I've seen "boomer" everywhere, but it's the first time I've heard about that one.