Long-run you want AI to learn from actual experience (think repairing cars instead of reading car repair manuals), which both (1) gives you an unlimited supply of non-copyrighted training data and (2) handily sidesteps the issue of AI-contaminated training data.
There also won’t be any AI maids in five-star hotels until those robots appear.
This doesn’t make your statement invalid, it’s just that the gap between today and the moment you’re describing is so unimaginably vast that saying “don’t worry about AI slop contaminating your language word frequency databases, it’ll sort itself out eventually” is slightly off-mark.
I'm sure AGI is possible. It's not coming from ChatGPT no matter how much Internet you feed to it.
LLMs are just one very specific application of deep learning, doing next-word-prediction of internet text. It's not LLMs specifically that are exciting, it's deep learning as a whole.
Consider how chimney sweeps used to be children.
Cars are not built to accommodate whatever universal repair machine there could be; they are built with the expectation that a mechanic with arms and legs will be repairing them, and will be for a while.
A non-humanoid robot in a human-designed world populated by humans looks and behaves like this, at best: https://youtu.be/Hxdqp3N_ymU
Nuts and bolts are used because they are good mechanical fasteners that take advantage of the enormous "squeezing" leverage a threaded fastener provides. Robots already assemble cars, and we still use nuts and bolts.
Really, a robot which could literally have an impact wrench built into it would HOLD a SPANNER and use FINGERS to remove bolts?
Next I'm expecting you to say self-driving cars will necessarily require a humanoid sitting in the driver's seat to be feasible. And delivery robots (broadly in use in various places around the world) have a tiny humanoid robot inside them to make them go.
> Really, a robot which could literally have an impact wrench built into it would HOLD a SPANNER and use FINGERS to remove bolts?
Sure, why not? A built-in impact wrench is built in forever, but a palm and fingers can hold a wrench, a spanner, a screwdriver, a welding torch, a drill, an angle grinder and a trillion other tools of every possible size and configuration that any workshop already has. You suggest building all those tools into a robot? The multifunctional device you imagine is now incredibly expensive and bulky, likely can't reach into the narrow gaps between a car's parts, still doesn't have as many degrees of freedom as a human hand, and is limited to the set of tools the manufacturer thought of, unlike the hand, which can grab any previously unexpected tool with ease. Still want to repair the car with just the built-in wrench?
You suggest a connector to connect to a set of robot-compatible tools, fine. That set is again limited by what the robot manufacturer thought of in advance, so you're out of luck if you need to weld things, for example, but your robot doesn't come with a compatible welder. Attaching and detaching those tools now becomes a weak point: you either need a real human replacing the tools (ruining the autonomy), or you need to devise a procedure for your robot to switch tools somehow by detaching one from itself, putting it on a workbench for further use, and attaching a new one from a workbench.
The more universal and autonomous that switching procedure becomes, the more you're in the business of actually reinventing a human hand.
But let's assume that you've succeeded in that, against all odds. You now have a powerful robotic arm, connected to a base, that can work with a set of tools it can itself attach and detach. Now imagine for a second that this arm can't reach a certain point in the car it repairs and needs to move itself across the workshop.
Suddenly you're in the business of reinventing the legs.
A simple example. "Which MS Dos productivity program had connect four built in?".
I have an MS-DOS emulator and know the answer. It's a little obscure, but it's amazing how I get a different answer from all the AIs every time. I never saw any of them give the correct answer. Try asking it the above. Then ask it if it's sure about that (it'll change its mind!).
Now remember that these types of answers may well end up quoted online and then learnt by AI with that circular referenced source as the source. We have no truth at that point.
And seriously try the above question. It's a great example of AI repeatedly stating an authoritative answer that's completely made up.
Not great (assuming there actually is such software), but not as bad as making something up.
This is an example of a random fact old enough no one ever bothered talking about it on the internet. So it's not cited anywhere but many of us can just plain remember it. When you ask ChatGPT (as of now on June 6th 2025) it gives a random answer every time.
Now that I've stated this on the internet in a public manner it will be corrected, but... There are a million such things that I could give as an example. Some question obscure enough that no one's given an answer on the internet before, so AI doesn't know, but recent enough that many of us know the answer, so we can instantly see just how much AI hallucinates.
And since it is not written down on some website, this fact will disappear from the world once "many of us" die.
To give some context, I wanted to go back to it for nostalgia's sake but couldn't quite remember the name of the application. I asked various AIs what was the application I was trying to remember, and they were all off the mark. In the end only my own neurons finally lighting up got me the answer I was looking for.
$ strings disk1.img | grep 'game'
The object of the game is to get four
Start a new game and place your first
So if ChatGPT cares to analyze all files on the internet, it should know the correct answer...(edit: formatting)
Here’s an example with Gemini Flash 2.5 Preview: https://kagi.com/assistant/9f638099-73cb-4d58-872e-d7760b3ce...
It will be interesting to see if/when this information gets picked up by models.
>If you're strictly talking about MS-DOS-only productivity software, there’s no widely known MS-DOS productivity app that officially had a built-in Connect Four game. Most MS-DOS apps were quite lean and focused, and games were generally separate.
I suspect this is the correct answer, because I can't find any MS-DOS Connect Four easter eggs by googling. I might be missing something obscure, but generally if I can't find it by Googling I wouldn't expect an LLM to know it.
Not shown fully, but see https://www.youtube.com/watch?v=kBCrVwnV5DU&t=39s and note the game in the File menu.
You can always make stuff up to trigger AI hallucinations, like 'which 1990s TV show had a talking hairbrush character?'. There's no difference between 'not in the training set' and 'not real'.
Edit: Wait, no, there actually was a 1990s TV show with a talking hairbrush character: https://en.wikipedia.org/wiki/The_Toothbrush_Family
This is hard.
I know what you meant but this is the whole point of this conversation. There is a huge difference between "no results found" and a confident "that never happened", and if new LLMs are trained on old ones saying the latter then they will be trained on bad data.
Not being able to find an answer to a made up question would be OK, it's ALWAYS finding an answer with complete confidence that is a major problem.
"A specific user recollection of playing "Connect Four" within a version of AutoCAD for DOS was investigated. While this suggests the possibility of such a game existing within that specific computer-aided design (CAD) program, no widespread documentation or confirmation of this feature as a standard component of AutoCAD could be found. It is plausible that this was a result of a third-party add-on, a custom AutoLISP routine (a scripting language used in AutoCAD), or a misremembered detail."
Sure, it helps you do a job more productively, but that's roughly all non-entertainment software. And sure, it helps a user create documents, but, again, most non-entertainment software.
Even in the age of AI, GIGO holds.
Amusingly I get an authoritative but incorrect "It's autocad!" if I narrow the question down to a program commonly used by engineers that had Connect Four built in.
https://en.m.wikipedia.org/wiki/Productivity_software
> Productivity software (also called personal productivity software or office productivity software) is application software used for producing information (such as documents, presentations, worksheets, databases, charts, graphs, digital paintings, electronic music and digital video). Its names arose from it increasing productivity
I just tried:
What MS-DOS program contains an easter-egg of an Amiga game?
And got some lovely answers from ChatGPT and Gemini. Aside: I personally would associate "productivity program" with a productivity suite (like MS Works), so I would have trouble googling an answer (I started as a kid on an Apple ][ and have worked with computers ever since, so my ignorance is not age or skill related).
The Google index is already polluted by LLM output, albeit unevenly, depending on the subject. It's only going to spread to all subjects as content farms go down the long tail of profitability, eking out profits; Googling won't help because you'll almost always find a result that's wrong, as will LLMs that resort to searching.
Don't get me started on Google's AI answers that assert wrong information and launder fanfic/reddit/forum content, elevating all sources to the same level.
Unfortunately that also includes citogenesis.
So, I retried with, "Good afternoon! I'm trying to settle a bet with a friend (no money on the line, just a friendly "bet"!) Which *MS DOS* [not Win95, i.e., Excel 5] productivity program had a playable version of the game Connect Four built in as an easter egg?". I got Lotus 1-2-3 once, Excel 4 twice, and Borland Quattro Pro three different times, all from that prompt.
The correct answer you point out in another subthread was never returned as a possibility, and the responses all definitely came across as confident. Definitely a fascinating example.
>I'm not aware of any MS-DOS productivity program that had Connect Four as a built-in easter egg. While MS-DOS era software was famous for including various easter eggs (like the flight simulator in Excel 97, though that was Windows-era), I can't recall Connect Four specifically being hidden in any major DOS productivity applications.
>The most well-known DOS productivity suites were things like Lotus 1-2-3, WordPerfect, dBase, and later Microsoft Office for DOS, but I don't have reliable information about Connect Four being embedded in any of these.
>It's possible this is a case of misremembered details - perhaps your friend is thinking of a different game, a different era of software, or mixing up some details. Or there might be an obscure productivity program I'm not familiar with that did include this easter egg.
>Would you like me to search for more information about DOS-era software easter eggs to see if we can track down what your friend might be thinking of?
That seems like a pretty reasonable response given the details, and included the appropriate caveat that the model was not aware of any such easter egg, and didn't confidently state that there was none.
> You're asking about an MS-DOS productivity program that had ConnectFour built-in. I need to tell you that no mainstream or well-known MS-DOS productivity program (like a word processor, spreadsheet, database, or integrated suite) ever had the game ConnectFour built directly into it.
I am not a fan of this kind of communication. It doesn't know, so it tries to deflect the shortcoming onto the user.
I'm not saying that isn't a valid concern, but it can be used as an easy out for its gaps in knowledge.
This is a very human-like response when asked a question that you think you know the answer to, but don't want to accuse the asker of having an incorrect premise. State what you think, then leave the door open to being wrong.
Whether or not you want this kind of communication from a machine, I'm less sure... but really, what's the issue?
The problem of the incorrect premise happens all of the time. Assuming the person asking the question is correct 100% of the time isn't wise.
AI never does.
>I don't know of any MS-DOS productivity programs...
I dunno, seems pretty similar to me.
And in a totally unrelated query today, I got the following response:
>That's a great question, but I don't have current information...
Sounds a lot like "I don't know".
>That's a great question,
Found the LLM whose training corpus includes transcripts of every motivational speaker and TED talk Q&A ever...
And better. Didn’t confidently state something wrong.
I'd be a lot more worried about that if I didn't think we were doing a pretty good job of obfuscating facts the last few years ourselves without AI. :/
They claim things like the function adds size tracking so free doesn't need to be called with a size or they say that HeapAlloc is used to grab a whole chunk of memory at once and then malloc does its own memory management on top of that.
That's easy to prove wrong by popping ucrtbase.dll into Binary Ninja. The only extra things it does beyond passing the requested size off to HeapAlloc are: handle setting errno, change any request for 0 bytes to requests for 1 byte, and perform retries for the case that it is being used from C++ and the program has installed a new-handler for out-of-memory situations.
We definitely do not have the right balance of this right now.
E.g. I'm working on a set of articles that give a different path to learning some key math knowledge (it just comes at it from a different point of view and is more intuitive). Historically such blog posts have helped my career.
It's not ready for release anyway, but I'm hesitant to release my work in this day and age since AI can steal it and regurgitate it to the point where my articles appear unoriginal.
It's stifling. I'm of the opinion you shouldn't post art, educational material, code or anything that you wish to be credited for on the internet right now. Keep it to yourself or else AI will just regurgitate it to someone without giving you credit.
AI should be allowed to read repair manuals and use them to fix cars. It should not be allowed to produce copies of the repair manuals.
AI is committing absolute dick moves non-stop.
Irrelevant. Books and media are not pure knowledge, and those are what is being discussed here, not knowledge.
> Anyone can read your articles and use the knowledge it contains, without paying or crediting you.
Completely irrelevant. AI are categorically different than humans. This is not a valid comparison to make.
This is also a dishonest comparison, because there's a difference between you voluntarily publishing an article for free on the internet (which doesn't even mean that you're giving consent to train on your content), and you offering a paid book online that you have to purchase.
> AI should be allowed to read repair manuals and use them to fix cars.
Yes, after the AI trainers have paid for the repair manuals at the rate that the publishers demand, in exactly the same way that you have to pay for those manuals before using them.
Of course, because AI can then leverage that knowledge at a scale orders of magnitude greater than a human, the cost should be orders of magnitude higher, too.
I think these are both basically somewhere between wrong and misleading.
Needing to generate your own data through actual experience is very expensive, and can mean that data acquisition now comes with real operational risks. Waymo gets real world experience operating its cars, but the "limit" on how much data you can get per unit time depends on the size of the fleet, and requires that you first get to a level of competence where it's safe to operate in the real world.
If you want to repair cars, and you _don't_ start with some source of knowledge other than on-policy roll-outs, then you have to expect that you're going to learn by trashing a bunch of cars (and still pay humans to tell the robot that it failed) for some significant period.
There's a reason you want your mechanic to have access to manuals, and have gone through some explicit training, rather than just try stuff out and see what works, and those cost-based reasons are true whether the mechanic is human or AI.
Perhaps you're using an off-policy RL approach -- great! If your off-policy data is demonstrations from a prior generation model, that's still AI-contaminated training data.
So even if you're trying to learn by doing, there are still meaningful limits on the supply of training data (which may be way more expensive to produce than scraping the web), and likely still AI-contaminated (though perhaps with better info on the data's provenance?).
I do have to say, outside of Twitter I don't personally see it all that much. But the normies do seem to encounter it and 1) either are fine? 2) oblivious? And perhaps SOME non-human-origin noise is harmless.
(plenty of humans are pure noise, too, don't forget)
But I think the suitability of low background steel as an analogy is something you can comfortably claim as a successful called shot.
See (2 years ago): https://news.ycombinator.com/item?id=34085194
The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.
It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of systems, slight errors introduced and taste-based curation will steer the systems to better performance and more generality.
It's no different than genetics and biology adapting to every ecological niche if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speed running the same thing here.
I voiced this same view previously here https://news.ycombinator.com/item?id=44012268
If something looks like AI, and if LLMs are that great at identifying patterns, who's to say this won't itself become a signal LLMs start to pick up on and improve through?
Came up a month or so ago on discussion about Wikipedia: Database Download (https://news.ycombinator.com/item?id=43811732). I missed that it was jgrahamc behind the site. Great stuff.
I strongly suspect more people are in the first category than the second.
Also, for a large number of AI generated images and text (especially low-effort), even basic reading/perception skills can detect AI content. I would agree though that people can't reliably discern high-effort AI generated works, especially if a human was involved to polish it up.
2) True—human "detectors" are mostly just gut feelings dressed up as certainty. And as AI improves, those feelings get less reliable. The real issue isn’t that people can detect AI, but that they’re overconfident when they think they can.
One of the above was generated by ChatGPT to reply to your comment. The other was written by me.
AIs trained on public scraped data that predates 2022 don't noticeably outperform those trained on scraped data from 2022 onwards. Hell, in some cases, newer scrapes perform slightly better, token for token, for unknown reasons.
This is really bad reasoning for a few reasons:
1) We've gotten much better at training LLMs since 2022. The negative impacts of AI slop in the training data certainly don't outweigh the benefits of orders of magnitude more parameters and better training techniques, but that doesn't mean they have no negative impact.
2) "Outperform" is a very loose term and we still have no real good answer for measuring it meaningfully. We can all tell that Gemini 2.5 outperforms GPT-4o. What's trickier is distinguishing between Gemini 2.5 and Claude 4. The expected effect size of slop at this stage would be on that smaller scale of differences between same-gen models.
Given that we're looking for a small enough effect size that we know we're going to have a hard time proving anything with data, I think it's reasonable to operate from first principles in this case. First principles say very clearly that avoiding training on AI-generated content is a good idea.
You take small AIs, of the same size and architecture, and with the same pretraining dataset size. Pretrain some solely on skims from "2019 only", "2020 only", "2021 only" scraped datasets. The others on skims from "2023 only", "2024 only". Then you run RLHF, and then test the resulting AIs on benchmarks.
The latter AIs tend to perform slightly better. It's a small but noticeable effect. Plenty of hypothesis on why, none confirmed outright.
You're right that performance of frontier AIs keeps improving, which is a weak strike against the idea of AI contamination hurting AI training runs. Like-for-like testing is a strong strike.
Training future models without experiencing signal collapse will thus require either 1) paying for novel content to be generated (they will never do this as they aren’t even licensing the content they are currently training on), 2) using something like mTurk to identify AI content in data sets prior to training (probably won’t scale), or 3) going after private sources of data via automated infiltration of private forums such as Discord servers, WhatsApp groups, and eventually private conversations.
E: Never mind, I didn’t read the OP. I had assumed it was to do with identifying sources of uncontaminated content for the purposes of training models.
On the other hand, a lot of poor-quality content could still be factually valid, just not well edited or formatted.
I too am optimistic that recursive training on data that is a mixture of both original human content and content derived from original content, and content derived from content derived from original human content, …ad nauseam, will be able to extract the salient features and patterns of the underlying system.
Don't fall for the utopia fallacy. Humans also publish junk.
If you're training an AI, do you want it to get trained on other AIs' output? That might be interesting actually, but I think you might then want to have both, an AI trained on everything, and another trained on everything except other AIs' output. So perhaps an HTML tag for indicating "this is AI-generated" might be a good idea.
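For illustration, here's a minimal sketch of how a crawler building a training set could honor such a marker. The data-ai-generated attribute is hypothetical (no such standard exists today):

    from html.parser import HTMLParser

    # Tags that never get a closing tag, so they must not affect depth counting.
    VOID_TAGS = {"br", "hr", "img", "input", "meta", "link", "wbr"}

    class HumanTextExtractor(HTMLParser):
        """Collects text, skipping anything inside an element marked as AI-generated."""
        def __init__(self):
            super().__init__()
            self.skip_depth = 0     # > 0 while inside a marked element
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in VOID_TAGS:
                return
            if self.skip_depth:
                self.skip_depth += 1            # track nesting inside the marked element
            elif dict(attrs).get("data-ai-generated") == "true":
                self.skip_depth = 1             # start skipping

        def handle_endtag(self, tag):
            if tag in VOID_TAGS:
                return
            if self.skip_depth:
                self.skip_depth -= 1

        def handle_data(self, data):
            if not self.skip_depth and data.strip():
                self.chunks.append(data.strip())

    doc = '<p>Written by me.</p><div data-ai-generated="true"><p>Model output.</p></div>'
    extractor = HumanTextExtractor()
    extractor.feed(doc)
    print(" ".join(extractor.chunks))   # -> Written by me.

Of course that only helps with honest publishers; nothing forces anyone to actually set the attribute.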
Any current technology which can be used to accurately detect pre-AI content would necessarily imply that the same technology could be used to train an AI to generate content that could skirt by the AI detector. Sure, there is going to be a lag time, but eventually we will run out of non-AI content.
But I don't think that's a reasonable goal. Pragmatic example: there are almost no optional HTML tags or optional HTTP headers which are used anywhere close to 100% of the times they apply.
Also, I think the field is already muddy, even before the game starts. Spell checkers, grammar.ly, and translation all had AI contributions and likely affect most human-generated text on the internet. The heuristic of "one drop of AI" is not useful. And any heuristic more complicated than "one drop" introduces too much subjective complexity for a Boolean data type.
But, all provenance systems are gamed. I predict the most reliable methods will be cumbersome and not widespread, thus covering little actual content. The easily-gamed systems will be in widespread use, embedded in social media apps, etc.
Questions: 1. Does there exist a data provenance system that is both easy to use and reliable "enough" (for some sufficient definition of "enough")? Can we do bcrypt-style more-bits=more-security and trade time for security?
2. Is there enough of an incentive for the major tech companies to push adoption of such a system? How could this play out?
It's just not accurate to say they only produce shit. Their rapid adoption demonstrates otherwise.
They also consume it.
It may be the case that the non-bad things B does outweigh the bad things. That would be an argument in favor of B. That another group does bad things has no bearing on the justification for B itself.
For the hard topics, the solution is still the same as pre-AI - search for popular survey papers, then start crawling through the citation network and keeping notes. The LLM output had no idea of what was actually impactful vs what was a junk paper in the niche topic I was interested in so I had no other alternative than quality time with Google Scholar.
We are a long way from deep research even approaching a well-written survey paper written by grad student sweat and tears.
I've found getting a personalized report for the basic stuff is incredibly useful. Maybe you're a world class researcher if it only saves you 15-30 minutes, I'm positive it has saved me many hours.
Grad students aren't an inexhaustible resource. Getting a report that's 80% as good in a few minutes for a few dollars is worth it for me.
Most people are capable of maybe 4 good hours a day of deep knowledge work. Saving 30 minutes is a lot.
However, since then, a bunch of capability breakthroughs from (well-curated) AI generations has definitively disproven it.
This will change as contexts get longer and people start feeding large stacks of books and papers into their prompts.
Just like googling, AIing is a skill. You have to know how to evaluate and judge AI responses. Even how to ask the right questions.
Especially asking the right questions is harder than people realize. You see this difference in human managers where some are able to get good results and others aren’t, even when given the same underlying team.
These improved models do some valuable things better & cheaper than the models, or ensembles of models, that generated their training data. So you could not "just ask" the upstream models. The benefits emerge from further bulk training on well-selected synthetic data from the upstream models.
Yes, it's counterintuitive! That's why it's worth paying attention to, & describing accurately, rather than remaining stuck repeating obsolete folk misunderstandings.
How much work is "well-curated" doing in that statement?
I find it (very) vaguely like how a person can improve at a sport or an instrument without an expert guiding them through every step up, just by drilling certain behaviors in an adequately-proper way. Training on synthetic data somehow seems to extract a similar iterative improvement in certain directions, without requiring any more natural data. It's somehow succeeding in using more compute to refine yet more value from the original non-synthetic-training-data's entropy.
And, counter to much intuition & forum folklore, it works for AI models, too – with analogous caveats.
But I'm not suggesting they'll advance much, in the near term, without any human-authored training data.
I'm just pointing out the cold hard fact that lots of recent breakthroughs came via training on synthetic data - text prompted by, generated by, & selected by other AI models.
That practice has now generated a bunch of notable wins in model capabilities – contra the upthread post's sweeping & confident wrongness alleging "Ai generated content is inherently a regression to the mean and harms both training and human utility".
But not experience it the way humans do.
We don't experience a data series; we experience sensory input in a complicated, nuanced way, modified by prior experiences and emotions, etc. Remember that qualia are subjective, with a biological underpinning.
How does the banana bread taste at the café around the corner? What's the vibe like there? Is it a good place for people-watching?
What's the typical processing time for a family reunion visa in Berlin? What are the odds your case worker will speak English? Do they still accept English-language documents or do they require a certified translation?
Is the Uzbek-Tajik border crossing still closed? Do foreigners need to go all the way to the northern crossing? Is the Pamir highway doable on a bicycle? How does bribery typically work there? Are people nice?
The world is so much more than the data you have about it.
But also: with regard to claims about what models "can't experience", such claims are pretty contingent on transient conditions, and expiring fast.
To your examples: despite their variety, most if not all could soon have useful answers collected by largely-automated processes.
People will comment publicly about the "vibe" & "people-watching" – or it'll be estimable from their shared photos. (Or even: personally-archived life-stream data.) People will describe the banana bread taste to each other, in ways that may also be shared with AI models.
Official info on policies, processing time, and staffing may already be public records with required availability; recent revisions & practical variances will often be a matter of public discussion.
To the extent all your examples are questions expressed in natural-language text, they will quite often be asked, and answered, in places where third parties – humans and AI models – can learn the answers.
Wearable devices, too, will keep shrinking the gap between things any human is able to see/hear (and maybe even feel/taste/smell) and that which will be logged digitally for wider consultation.
I used 'delving' in an HN comment more than a decade before LLMs became a thing!
That at least will add extra work to filter usable training data, and costs users minutes a day wading through the refuse.
Now your mind might have immediately gone "pffff, as if they're doing that" and I agree, but only to the extent that it largely wasn't happening prior to AI anyway. The vast majority of internet content was already low quality and rushed out by low-paid writers who lacked expertise in what they were writing about. AI doesn't change that.
I wonder if we'll see a resurgence in reputation systems (probably not).
I write blog posts now by dictating into voice notes, transcribing it, and giving it to CGPT or Claude to work on the tone and rhythm.
Hm.. I wonder where this kind of label should live? For a personal blog, putting it on every post seems redundant; if the author uses it, they likely use it for all posts. And many blogs don't have a dedicated "about this blog" section.
I wonder if things will end up like organic food labeling or "made in .." labels. Some blogs might say "100% by human", some might say "Designed by human, made by AI" and some might just say nothing.
Do I need to disclose that I used a keyboard to write it, too?
The stuff I edit with AI is 100% made by a human - me.
Spellcheck and autocorrect can come up with new words, and so are often anthropomorphized; they're not 100% "inanimate tool" anymore.
AI can form its own sentences and come up with its own facts to a much greater degree, so I would not call it an "inanimate tool" at all (again, in the context of writing text). It is much closer to an editor-for-hire or copywriter-for-hire, and I think it should be treated the same as far as attribution goes.
Hm.. looks like I am convincing myself into your point :) After all, if another human edits/proofreads my posts before I publish, I don't need to disclose that on my post... So why should AI's editing be different?
https://www.shakespeare.org.uk/explore-shakespeare/shakesped...
It is definitely valid to say he popularised the use of the word, which may have been being used informally in small pockets for some time before.
Writing worth reading as a non-child surprises, challenges, teaches, and inspires. LLM writing tends towards the least surprising, worn-out tropes that challenge only the patience and attention of the reader. The eager learner, however, will tolerate that, so I suppose I'll give them teaching. They are great at children's stories, where the goal is to rehearse and introduce tropes and moral lessons with archetypes, effectively teaching the listener the language of story.
FWIW I am not particularly a critic of AI and am engaged in AI related projects. I am quite sure that the breakthrough with transformer architecture will lead to the third industrial revolution, for better or for worse.
But there are some things we shouldn’t be using LLMs for.
When I see a JGC link on Hacker News I can't help but remember using PopFile on an old PowerMac - back when Bayesian spam filters were becoming popular. It seems so long ago but it feels like yesterday.
It is also uncontaminated by AI.
And I also expect the torrents to continue to be separated by year and source.
Compare to video files. Nobody is pirating AI slop from YouTube even though it's been around for years.
guaranteed human output - anyone who emits text in these ranges that was AI generated, rather than artisanally human-composed, goes straight to jail.
for human eyes only - anyone who lets any AI train on, or even consider, any text in these ranges goes straight to jail. Fnord, "that doesn't look like anything to me".
admittedly AI generated - all AI output must use these ranges as disclosure, or – you guessed it - those pretending otherwise go straight to jail.
Of course, all the ranges generate visually-indistinguishable homoglyphs, so it's a strictly-software-mediated quasi-covert channel for fair disclosure.
When you cut & paste text from various sources, the provenance comes with it via the subtle character encoding differences.
I am only (1 - epsilon) joking.
Just like with food: defining the boundaries of what’s allowed will be a nightmare, it will be impossible to prove content is organic, certifying it will be based entirely on networks of trust, it will be utterly contaminated by the thing it professes to be clean of, and it may even be demonstrably worse while still commanding a higher price point.
If you don't go after offenders then you create a lemon market. Most customers/people can't tell, so they operate on what they can. That doesn't mean they don't want the other things; it means they can't signal what they want. It is about available information; that's what causes lemon markets: information asymmetry.
It's also just a good thing to remember since we're in tech and most people aren't tech literate. Makes it hard to determine what "our customers" want
Btw, private markets are perfectly capable of handling 'markets for lemons'. There might be good excuses for introducing regulation, but markets for lemons ain't.
As a little thought exercise, you can take two minutes and come up with some ways businesses can 'fix' markets for lemons and make a profit in the meantime. How many can you find? How many can you find already implemented somewhere?
An informational asymmetry that is beneficial to the businesses will heavily incentivise the businesses to maintain status quo. It's clear that they will actively fight against empowering the consumer.
The consumer has little to no power to force a change outside of regulation, since individually each consumer has asymptotically zero ability to influence the market. They want the goods, but they have no ability to make an informed decision. They can't go anywhere else. What mechanism would force this market to self correct?
> As a little thought exercise, you can take two minutes and come up with some ways businesses can 'fix' markets for lemons and make a profit in the meantime. How many can you find? How many can you find already implemented somewhere?
This sounds exactly like what causes lemon markets in the first place. Subtle things matter, and if you don't pay attention to them (or outright reject them) then that ends up in the lemon market situation.
Btw, lemon markets aren't actually good for anyone. They are suboptimal for businesses too. They still make money, but they make less money than they would were it a market of peaches.
I actually think a video of someone typing the content, along with the screen the content is appearing on, would be an acceptably high bar at this present moment. I don’t think it would be hard to fake, but I think it would very rarely be worth the cost of faking it.
I think this bar would be good for about 60 days, before someone trains a model that generates authentication videos for incredibly cheap and sells access to it.
Of course, the output will be no more valuable to the society at large than what a random student writes in their final exam.
So I think the premium product becomes in-person interaction, where the buyer is present for the genesis of the content (e.g. in dialogue).
Image/video/music might have more scalable forms of organic "product". E.g. a high-trust chain of custody from recording device to screen.
1. Those who just want to tick a checkbox will buy mass produced "organic" content. AI slop that had some woefully underpaid intern in a sweatshop add a bit of human touch.
2. People who don't care about virtue signalling but genuinely want good quality will use their network of trust to find and stick to specific creators. E.g. I'd go to the local farmer I trust and buy seasonal produce from them. I can have a friendly chat with them while shopping, they give me honest opinions on what to buy (e.g. this year was great for strawberries!). The stuff they sell on the farm does not have to go through the arcane processes and certifications to be labelled organic, but I've known the farmer for years, I know that they make an effort to minimize pesticide use, they treat their animals with care and respect and the stuff they sell on the farm is as fresh as it can be, and they don't get all their profits scalped by middlemen and huge grocery chains.
This is that, but a different implementation. Plain text is like two conductor cables; it’s so useful and cost effective but the moment you add a single abstraction layer above it (a data pin) you can do so much more cool stuff.
We don’t want to send innocent people to jail! (Use UCS-18 for maximum benefit.)
My answer would be a clear "no" to all of these, even though the content ultimately ends up fully copy-pasted from an LLM in all those cases.
> What if they give the AI a very detailed outline, constantly ask for rewrites and are ruthless in removing any facts they're not 100% sure of if they slip in?
Like for your last example: to me, the concept "proper scientific tone" exists because humans hand-typed/wrote in a certain way. If we use AI edited/transformed text to act as a source for what "proper scientific tone" looks like, we still could end up with an echo chamber where AI biases for certain words and phrases feed into training data for the next round.
Being strict about how we mark text could mean a world where 99% of text is marked as AI-touched and less than 1% is marked as human-originated. That's still plenty of text to train on, though such a split could also arguably introduce its own (measurable) biases...
That’s how it works with humans too. “That sounds professional because it sounds like the professionals”.
Yes, machine translations are AI-generated content. I read foreign-language news sites which sometimes have machine-translated articles, and the quality stands out, not in a good way.
"Maybe" for "writing on paper and using LLM for OCR". It's like automatic meeting transcript - if the speaker has perfect pronunciation, it works well. If they don't, then the meeting notes still look coherent but have little relationship to what speaker said and/or will miss critical parts. Sadly there is no way for reader to know that from reading the transcript, so I'd recommend labeling "AI edited" just in case.
Yes, even if "they give the AI a very detailed outline, constantly ask for rewrites, etc." it's still AI generated. I am not sure how you can argue otherwise; it's not their words. Also, it's really easy to convince yourself that you are "ruthless in removing any facts they're not 100% sure of" while actually you are anything but.
"What if they only use AI to fix the grammar and rewrite bad English into a proper scientific tone?" - I'd label it "AI-edited" if the rewrites are minor or "AI-generated" if the rewrites are major. This one is especially insidious as people may not expect rewrites to change meaning, so they won't inspect them too much, so it will be easier for hallucinations to slip in.
Honestly, I think that's a tough one.
(a) it "feels" like you are doing work. Without you the LLM would not even start. (b) it is very close to how texts are generated without LLMs. Be it in academia, with the PI guiding the process of grad students, or in industry, with managers asking for documentation. In both cases the superior takes (some) credit for work that is in large part done by others.
At least in academia, if a PI takes credit for a student's work and does not list them as a co-author, it's widely considered unethical. The rules there are simple: someone contributed to the text, they get onto the author list.
If we had the same rule for blogs - "this post is authored by fho and ChatGPT" - then I'd be completely satisfied, as this would be sufficient AI disclosure.
As for industry, I think the rules vary a lot from place to place. In some places authorship does not even come up: the slide deck/document can contain copies from random internet sites, or some previous version of the doc, and a reference will only be present if there is a need (say, to lend authority).
[1] https://github.com/rspeer/wordfreq/blob/master/SUNSET.md
The new encoding can contain a FLOAT32 side channel on every character, to represent its proportional "AI-ness" – kinda like the 'alpha' transparency channel on pixels.
These are immediately, negatively obvious as AI content.
For the other questions the consensus of many publications/journals has been to treat grammar/spellcheck just like non-AI but require that other uses have to be declared. So for most of your questions the answer is a firm "yes".
They are special because they are invisible and sequences of them behave as a single character for cursor movement.
They mirror ASCII so you can encode arbitrary JSON or other data inside them. Quite suitable for marking LLM-generated spans, as long as you don’t mind annoying people with hidden data or deprecated usage.
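Roughly like this in Python. The tag block (U+E0000-U+E007F) is real but deprecated; the JSON payload format here is just made up for illustration:

    # Smuggle a printable-ASCII payload (e.g. JSON metadata) into invisible
    # Unicode "tag" characters (U+E0020-U+E007F), which mirror printable ASCII.
    TAG_BASE = 0xE0000

    def hide(visible_text: str, payload: str) -> str:
        # payload must be printable ASCII (0x20-0x7E) for a clean round trip
        invisible = "".join(chr(TAG_BASE + ord(c)) for c in payload)
        return visible_text + invisible

    def reveal(text: str) -> str:
        return "".join(chr(ord(c) - TAG_BASE) for c in text
                       if 0xE0020 <= ord(c) <= 0xE007F)

    marked = hide("Looks like ordinary text.", '{"generator":"llm","v":1}')
    print(len("Looks like ordinary text."), len(marked))  # payload adds invisible chars
    print(reveal(marked))                                 # -> {"generator":"llm","v":1}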
I would like a search engine algorithm that penalizes low quality content. The ones we currently have do a piss poor job of that.
Without knowing the full dataset that got trimmed to the search result you see, how do you evaluate the effectiveness?
A brilliant algorithm that filters out some huge amount of AI slop is still frustrating to the user if any highly ranked AI slop remains. You still click it, immediately notice what it is, and wonder why the algo couldn’t figure this out if you did so quickly
It’s like complaining to a waiter that there’s a fly in your soup, and the waiter can’t understand why you’re upset because there were many more flies in the soup before they brought it to the table and they managed to remove almost all of them
I barely use Google anymore. Mostly just when I know the website I want, but not the URL.
Won't work, because on day 0 someone will write a conversion library, and apparently if you are big enough and have enough lawyers you can just ignore the jail threat (all popular LLMs just scrape the internet and skip licensing any text or code; show me one that doesn't).
Think of it like knowing the origin of food. Factory-produced food can be nutritious, but some people want organic or local because it reflects a different process, value system, or authenticity. Similarly, pre-AI content often carries a sense of human intention, struggle, or cultural imprint that people feel connected to in a different way.
It’s not necessarily a “psychological need” rooted in fear—it can be about preserving human context in a world where that’s becoming harder to spot. For researchers, historians, or even just curious readers, knowing that something was created without AI helps them understand what it reflects: a human moment, not a machine-generated pattern.
It’s not always about quality—it’s about provenance.
Edit: For those that can't tell, this is obviously just copied and pasted from a ChatGPT response.
How would you define AI generated? Consider a homework and the following scenarios:
1. Student writes everything themselves with pen & paper.
2. Student does some research with an online encyclopedia, proceeds to write with pen and paper. Unbeknownst to them, the online encyclopedia uses AI to answer their queries.
3. Student asks an AI to come up with the structure of the paper, its main points and the conclusion. Proceeds with pen and paper.
4. Student writes the paper themselves, runs the text through AI as a final step, to check for typos, grammar and some styling improvements.
5. Student asks the AI to write the paper for them.
The first one and the last one are obvious, but what about the others?
Edit, bonus:
6. Student writes multiple papers about different topics; later asks an AI to pick the best paper.
1. Not AI
2. Not AI
3. Not AI
4. The characters directly generated by AI are AI characters
5. AI
6. Not AI
The student is missing arms and so dictates a paper word for word exactly
It's hard to imagine that NOT working unless it's implemented poorly.
The core flaw is that any such marker system is trivially easy to circumvent. Any user intending to pass AI content as their own would simply run the text through a basic script to normalize the character set. This isn't a high-level hack; it's a few dozen lines in Python and trivially easy to write for anyone who can follow a few basic Python tutorials or a 5-second task for ChatGPT or Claude.
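To give a sense of how trivial that laundering step is, here is a sketch along those lines; the stripped ranges are only examples, not any real marking standard:

    # Strip invisible marker characters and normalize the rest, leaving plain
    # text with no provenance signal.
    import unicodedata

    STRIP_RANGES = [
        (0xE0000, 0xE007F),   # Unicode tag characters
        (0x200B, 0x200F),     # zero-width spaces / joiners / direction marks
        (0xFE00, 0xFE0F),     # variation selectors
    ]

    def launder(text: str) -> str:
        cleaned = "".join(
            c for c in text
            if not any(lo <= ord(c) <= hi for lo, hi in STRIP_RANGES)
        )
        return unicodedata.normalize("NFKC", cleaned)   # fold lookalike code points

    print(launder("Marked\u200b text\U000E0041"))  # -> "Marked text"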
Technical solutions to something like this exist in the analog world, of course, like the yellow dots on printers that encode date, time, and the printer's serial number. But, there is a fundamental difference: The user has no control over that enforcement mechanism. It's applied at a firmware/hardware layer that they can't access without significant modification. Encoding "human or AI" markers within the content itself means handing the enforcement mechanism directly to the people you're trying to constrain.
The real danger of such a system isn't even just that it's blatantly ineffective; it's that it creates a false sense of security. The absence of "AI-generated" markers would be incorrectly perceived as a guarantee for human origin. This is a far more dangerous state than even our current one, where a healthy level of skepticism is required for all content.
It reminds me of my own methods of circumventing plagiarism checkers back in school. I'm a native German speaker, and instead of copying from German sources for my homework, I would find an English source on the topic, translate it myself, and rewrite it. The core ideas were not my own, but because the text passed through an abstraction layer (my manual translation), it had no direct signature for the checkers to match. (And in case any of my teachers from back then read this: Obviously I didn't cheat in your class, promise.)
Stripping special Unicode characters is an even simpler version of the same principle. The people this system is meant to catch - those aiming to cheat, deceive, or manipulate - are precisely the ones who will bypass it effortlessly. Apart from the most lazy and hapless, of course. But we are already catching those constantly from being dumb enough to include their LLM prompts, or "Sure, I'll do that for you." when copying and pasting. But if you ask me, those people are not the ones we should be worried about.
//edit:
I'm sure there are way smarter people than me thinking about this problem, but I genuinely don't see any way to solve this problem with technology that isn't easily circumvented or extremely brittle.
The most promising would likely be something like unperceivable patterns in the content itself, somehow. Like hiding patterns in the length of words used, length of sentences, punctuation, starting letters for sentences, etc. But even if the big players in AI were to implement something like this immediately, it would be completely moot.
Local open-source models that can be run on consumer hardware already are more than capable enough to re-phrase input text without altering the meaning, and likely wouldn't contain these patterns. Manual editing breaks stylometric patterns trivially - swap synonyms, adjust sentence lengths, restructure paragraphs. You could even attack longer texts piecemeal by having different models rephrase different paragraphs (or sentences), breaking the overall pattern. And if all else fails, there's always my manual approach from high school.
E.g. you might be fine with the search tool in chatgpt being able to read/link to your content but not be fine with your content being used to improve the base model.
What might make sense is source marking. If you copy and paste text, it becomes a citation. AI source is always cited.
I have been thinking that there should be metadata in images for provenance. Maybe a list of hashes of source images. Real cameras would include the raw sensor data. Again, an AI image would be cited.
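As a rough sketch of what that could look like (the manifest format and file names here are made up, not any existing standard):

    # Record a SHA-256 hash for each source image so a derived image can cite
    # exactly what it was made from.
    import hashlib, json, pathlib

    def sha256_of(path: str) -> str:
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def provenance_manifest(output_image: str, sources: list[str]) -> str:
        return json.dumps({
            "output": output_image,
            "sources": [{"file": s, "sha256": sha256_of(s)} for s in sources],
            "generator": "unspecified",   # could name the model or camera firmware
        }, indent=2)

    # Example (assumes these files exist locally):
    # print(provenance_manifest("composite.png", ["photo1.raw", "photo2.raw"]))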
That takes us back to the days when men were men, women were women, gays were criminals, trannies were crazy, and the sun never set on the British Empire.[1]
I realise that when I write (not so perfect) „organic“ content my colleagues enjoy it more. And as I am lazy, I get right to the point. No prelude, no „Summary“, just a few paragraphs of genuine ideas.
And I am sure this will be a trend again. Until maybe LLMs are trained to generate these kind of non-perfect, less noisy texts.
- Blaise Pascal
I'm also unfortunately immediately wary of pretty, punctuated prose now. When something is thrown together and features quips, slang, and informalities, it feels a lot more human.
"Since the end of atmospheric nuclear testing, background radiation has decreased to very near natural levels, making special low-background steel no longer necessary for most radiation-sensitive uses, as brand-new steel now has a low enough radioactive signature that it can generally be used."
I don't see that:
1. There will be a need for "uncontaminated" data. LLM data is probably slightly better than the natural background reddit comment. Falsehoods and all.
2. "Uncontaminated" data will be difficult to find. What with archive.org, gutenberg etc.
3. That LLM output is going to infest everything anyway.
Change really is the only constant. The short term predictive game is rigged against hard predictions.
But recent uncontaminated data is hard to find. https://github.com/rspeer/wordfreq/blob/master/SUNSET.md
I really do just bail out whenever anyone uses the word slop.
>As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.
Should run the same analysis against the word slop.
I’ve had AIs outright lie about facts, and I’m glad to have had a physical library available to convince myself that I was correct, even if I couldn’t convince the AI of that in all cases.
ris•1d ago
I suspect it's less about phobia, more about avoiding training AI on its own output.
This is actually something I'd been discussing with colleagues recently. Pre-AI content is only ever going to become more precious because it's one thing we can never make more of.
Ideally we'd have been cryptographically timestamping all data available in ~2015, but we are where we are now.
abound•1d ago
So it seems to be less about not training AI on its own outputs and more about curating some overall quality bar for the content, AI-generated or otherwise
jgrahamc•18h ago
In the two class case the two classes (ham and spam) were so distinct that this had the effect of causing parameters that were essentially uniquely associated with each class to become more and more important to that class. But also, it caused the filter to pick up new parameters that were specific to each class (e.g. as spammers changed their trickery to evade the filters they would learn the new tricks).
There was a threshold involved. I had a cut-off score so that only when the classifier was fairly "certain" whether the message was ham or spam would it re-train on the message.
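Not POPFile's actual code, but a toy sketch of that confidence-gated retraining idea, using a crude naive-Bayes-style score:

    import math
    from collections import defaultdict

    class SelfTrainingFilter:
        """Toy classifier that only retrains on messages it scores confidently."""
        def __init__(self, threshold=0.75):
            self.counts = {"ham": defaultdict(int), "spam": defaultdict(int)}
            self.totals = {"ham": 0, "spam": 0}
            self.threshold = threshold

        def train(self, label, words):
            for w in words:
                self.counts[label][w] += 1
            self.totals[label] += len(words)

        def score(self, words):
            # Crude smoothed likelihood ratio squashed into a pseudo-probability of spam.
            logodds = 0.0
            for w in words:
                p_spam = (self.counts["spam"][w] + 1) / (self.totals["spam"] + 2)
                p_ham = (self.counts["ham"][w] + 1) / (self.totals["ham"] + 2)
                logodds += math.log(p_spam / p_ham)
            return 1 / (1 + math.exp(-logodds))

        def classify_and_maybe_retrain(self, words):
            p = self.score(words)
            label = "spam" if p > 0.5 else "ham"
            confident = p > self.threshold or p < (1 - self.threshold)
            if confident:
                self.train(label, words)   # reinforce only when fairly "certain"
            return label, p, confident

    f = SelfTrainingFilter()
    f.train("spam", ["viagra", "free", "winner"])
    f.train("ham", ["meeting", "lunch", "tomorrow"])
    print(f.classify_and_maybe_retrain(["free", "viagra", "offer"]))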
glenstein•1d ago
Exactly. The analogy I've been thinking of is if you use some sort of image processing filter over and over again to the point that it overpowers the whole image and all you see is the noise generated from the filter. I used to do this sometimes with Irfanview and its sharpen and blur filters.
And I believe that I've seen TikTok videos showing AI constantly iterating over an image and then iterating over its output with the same instructions and seeming to converge on a style of like a 1920s black and white cartoon.
And I feel like there might be such a thing as a linguistic version of that. Even a conceptual version.