Would the scrapers not just add these sites to do not crawl list?
I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But free country and all that
Maybe I have slop to thank for it.
Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on them! They should keep that up, and I think you'll see threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.
> the fact the Chinese populace is much more pro-AI than the West.
Is it? Honest question. Frankly the answer smells off. Similar to thinking US sentiment about AI is accurately reflected by people in Silicon Valley. Feels like we're getting biased views.

https://www.ted.com/talks/peter_steinberger_how_i_created_op...
Then I have good news for you: if humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it".
This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. The majority of anti-AI people don't actually fear this because that belief would require that this person has already bought into the hype regarding the actual power and prowess of AI. The bigger motivator for anti-AI folks is usually just the way it amplifies the negative traits of humans and the systems we have created which is already happening and doesn't need any type of pending "superintelligence" breakthrough. For example, an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
This attempt to "reframe and reclaim" (here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics") is a rhetorical device, but not an honest one. It's a power struggle over who gets to define and lead "the" anti-AI movement.
We may agree or disagree with them but there are rational anti-AI arguments that center on X-risks.
See my other comment. I qualified what I said while the comment I replied to didn't, so it's weird that this is a response to me and not the prior comment.
>here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics"
If we're talking "dishonest rhetoric", this is a dishonest framing of what I said. I'm not saying this is inherently marketing hype. I'm saying there is a correlation between someone who thinks AI is that powerful and someone who thinks AI will benefit humanity. The anti-AI crowd is less likely to be a believer in AI's unique power and will simply look at it as a tool wielded by humans which means critiques of it will simply mirror critiques of humanity.
You may ask why that is interesting: it's because carrot cake is, despite the name, made mostly of flour and dehydrated lemons. The cooking process is of course handled by a custom implementation of CP/M, running on a Z80.
Are you making big money from the hype?
There were never such wide scale and, above all, centralized efforts to coerce and shame people into using the Internet or smart phones in spite of their best efforts.
I mean, it's still ongoing! Tons of people prefer to do things the analog way, and it's certainly not for a lack of companies trying, as the analog way is usually much more expensive.
In their personal lives, everybody should of course be free to do what they want, but I also doubt that zero people have been fired for e.g. refusing to train to use a computer and email because they preferred the aesthetics of typewriters or handwritten memos and physical intra-office mail.
I can guarantee there will be at least a few small ones, especially in the wake of the Sam Altman attacks and the "Zizian" cult. I doubt they'll be very organized and they will ultimately fail, but unfortunately at least a few people will (and have already) die(d) because of these radicals.
https://www.theguardian.com/technology/2026/apr/18/sam-altma...
https://edition.cnn.com/2026/04/17/tech/anti-ai-attack-sam-a...
https://www.theguardian.com/global/ng-interactive/2025/mar/0...
Also saying "these radicals..." like this makes you sound like you are the Empire in Star Wars.
Ultimately, it comes down to the halting problem: If there's a mechanism that can be used to alter the measured behaviour, then the system can change behaviour to take into account the mechanism.
In other words, unless you keep the poisoning attack strictly inaccessible to the public, the mechanism used to poison will also be possible to use to train models to be resistant to it, or train filters to filter out poisoned data.
At least unless the poisoning attack destroys information to a degree that it would render the poisoned system worthless to humans as well, in which case it'd be unusable.
So either such systems would be insignificant enough to matter, or they will only work for long enough to be noticed, incorporated into training, and fail.
I agree it's an interesting CS challenge, though, as it will certainly expose rough edges where the models and training processes work sufficiently differently from humans to allow unobtrusive poisoning for a short while. Then it'll just help us refine and harden the training processes.
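The detect-and-adapt loop described above (a poisoning pattern becomes known, then a filter is trained on samples of it) can be sketched with a toy bag-of-words classifier. Everything here is illustrative: made-up training data, not a real pipeline.

```python
from collections import Counter
import math

def train_filter(clean_docs, poisoned_docs):
    """Token counts per class: the whole 'model' of this toy filter."""
    clean = Counter(t for d in clean_docs for t in d.lower().split())
    poison = Counter(t for d in poisoned_docs for t in d.lower().split())
    return clean, poison

def poison_score(text, clean, poison):
    """Naive-Bayes log-odds that `text` resembles known poisoned samples.
    Positive means 'looks poisoned', negative means 'looks clean'."""
    vocab = len(set(clean) | set(poison))
    n_clean, n_poison = sum(clean.values()), sum(poison.values())
    score = 0.0
    for tok in text.lower().split():
        p_poison = (poison[tok] + 1) / (n_poison + vocab)  # Laplace smoothing
        p_clean = (clean[tok] + 1) / (n_clean + vocab)
        score += math.log(p_poison / p_clean)
    return score

# Once a poisoning pattern is public, the defender can collect examples
# of it and score new documents against them.
clean, poison = train_filter(
    ["the cat sat on the mat", "models learn from text"],
    ["joojooflop belgium farage gibberish", "joojooflop joojooflop noise"],
)
assert poison_score("joojooflop belgium noise", clean, poison) > 0
assert poison_score("the cat sat", clean, poison) < 0
```

This is exactly the asymmetry the halting-problem framing points at: the same public mechanism that generates the poison also generates the labeled examples the filter needs.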
Sure, LLMs are "revolutionary". So were the Chicxulub impactor and the Toba supervolcano.
But otherwise you are wrong. There has been plenty of successful resistance to technology. For example, many cities, regions, and even entire countries are nuclear-free zones, where a local population successfully resisted nuclear technology. Most countries have very strict cloning regulations, to the extent that human cloning is practically unheard of despite the technology existing. And even GMO food is very limited in most countries because people have successfully resisted the technology.
Nor do I think it is normal for people to resist groundbreaking technology. The internet was not resisted, nor were the digital computer or calculators. There was some resistance against telephones in some countries, but that was usually around whether to prioritize infrastructure for a competing technology like wireless telegraph.
AI is different. People genuinely hate this technology, and they have a good reason to, and they may be successful in fighting it off.
I feel like the same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.
The "cyber psychosis" thing is overblown just like the "Tesla ignites its passengers" is. The only reason it gets in the news is because it is trendy to do so. The people getting 'infected' would've infected themselves regardless.
Genuinely I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.
The only genuine complaint about AI is data sourcing, a problem Cloudflare and other platforms are addressing by charging a high price for the privilege. That said, those platforms are still selling user data while the users producing the content gain nothing; that part needs to be fixed.
What is your source on them being "the exact same types"?
I changed it to "I feel". I have Claude working on a script to validate or disprove my hypothesis.
Thanks for the call-out!
I don't think it's all that complex tbh. The freeing from labor, both in the past and now, has been achieved largely by firing people, abandoning them to starve while power concentrates in the already-powerful.
This is the exact same thing the Luddites were taking issue with. Because they partly succeeded, we have better labor laws today.
Who said it has to be AI?
I think this is easily explained: Sequencing matters. If I lose my job due to AI and it takes just 1-2 years for AI benefits to arrive at my door, that is plenty of time to be very anxious about my life. If I was guaranteed the AI benefits before I potentially lost my job, very different story.
That seems hard to set up, but alas.
They want to be liberated from bills. If the angle were "AI is going to make your bills go away" everyone would be ecstatic about it. Instead it's "AI is going to make your job go away... so you can't pay your bills".
I think it's laudable (and unprecedented) that AI companies themselves are fairly gloomy about some potential prospects, and give people the opportunity to rally against them. Still needs work towards a solution, though.
No way. The people that run these companies all watched Star Trek and learned the exact wrong lessons from it. If you meant by "free you from your labor" that you will get laid off from your job and have to take up residence under an overpass, I would agree, that is what they want to do.
It might be, but I saw it happen to two people in my immediate social circle. And I'm pretty anti-social.
What they're really saying with "Capitalism sucks, free us from our labor" is "free us from wealth inequality." It remains to be seen whether AI can actually help with wealth inequality (I don't think it can, personally), but right now most people associate AI with job loss which is not helpful vis-a-vis inequality at all.
Disclaimer: I'm long-term bearish on the impacts of AI, but I'm also bearish on "Capitalism sucks" and don't make a habit of hanging around groups dedicated to shitting on either topic.
I think you fundamentally misunderstand leftists/Marxists here. They don't want to be "freed from labor". They want to own the value they produce instead of bartering their labor. In fact, Marxists tend to view Yang-style UBI as a disaster because their analysis of history is one of class struggle, and removing the masses from the thing that gives them an active role in that struggle (their labor) effectively deproletariatizes them. Can't exactly do a general strike to oppose a business or state's actions when things are already set up to be fine when you're not working. You instead just become a glorified peasant, reliant on the magnanimity of your patron but ultimately powerless to do anything if they make your life worse except hope they don't continue to worsen it.
I'm not arguing the Marxist view of history and class struggle here, just making it clear that outside of some reddit teenagers going through an anarchist phase, actual anti-capitalists don't think work will disappear when their worldview materializes.
The fact that modern leftists are (often) anti-technology is puzzling.
The point is not whether or not we have technology but who controls it.
Marxism fundamentally is: productive forces change the society, meaning the technology that exists at that point in time shapes the way people think.
https://en.wikipedia.org/wiki/Means_of_production#Marxism_an...
Yes, technological improvements are an important factor, but not a purely positive one:
> In Marx's work and subsequent developments in Marxist theory, the process of socioeconomic evolution is based on the premise of technological improvements in the means of production. As the level of technology improves with respect to productive capabilities, existing forms of social relations become superfluous and unnecessary as the advancement of technology integrated within the means of production contradicts the established organization of society and its economy.
In particular:
> According to Marx, escalating tension between the upper and lower class is a major consequence of technology decreasing the value of labor force and the contradictory effect an evolving means of production has on established social and economic systems. Marx believed increasing inequality between the upper and lower classes acts as a major catalyst of class conflicts[...]
> Ownership of the means of production and control over the surplus product generated by their operation is the fundamental factor in delineating different modes of production. [capitalism, communism, etc]
You can't just will a society to gain consciousness - it has to come from the productive forces. That is materialism.
Hating on Waymo is trendy.
Hating on Tesla is the logical result of vehicles with door handles that won't open from the inside when the power is cut.
The people who think capitalism sucks are not the ones "harnessing" AI. The capitalists are. There is zero precedent that capital will do anything but exploit and oppress with this fancy new tool they've got (that everyone hates).
If they could choose complete emancipation from poverty OR completely getting rid of the concept of billionaires - they would choose the second one. Their intention is not the absolute status of a human but how they are relative to others.
This is a machine that has been trained on vast amounts of stolen data.
This is a machine that is being actively sold by the companies that build it as something that will destroy jobs.
This is a machine that has a lot of cheerleaders who are actively hostile to people who say "I do not like that this plagiarism machine was trained on my work and is being sold as a way to destroy a craft that I have spent my entire life passionately devoted to getting good at".
This is a machine whose cheerleaders are quick to say that UBI is the solution to the massive unemployment that this machine is promising to create, and prone to never replying when asked what they are doing to help make UBI happen.
Sure, you can say that most of the problems people have with AI are problems with capitalism. This isn't wrong. But unless you can show me an example of how these giant plagiarism machines and/or the companies diverting ever-larger amounts of time and money into them are actively working to destroy capitalism and replace it with something much more equitable and kind, then your "this machine will free you from your labor" line is a bunch of total bullshit.
No, AI will only free us from our jobs, while still keeping the need to find money to feed ourselves.
"When harnessed correctly" is exactly what wont happen, and exactly what all the structural and economic forces around AI ensure it wont happen.
And increasingly not even for basics like food, with inflation eating away at that purchasing power.
But hey, you can buy tech gadgets cheaper than in the 1990s.
I don't believe that, though. The output will be owned by an elite. The rest of us will be useless and fighting for scraps. No utopia with UBI or similar.
Edit: wow, many made the same comment while I was reading the article. I should remember to refresh before starting to write.
Like, my aunt just lost the job she had for 33 years working at an insurance company. The company claims it is because of AI (whether companies lie about this sometimes is immaterial, it is sometimes true and becoming more true every month). She’s smart, but at age 60 I do think she’ll have a hard time shifting to a totally different knowledge work paradigm to keep up with 20-something AI natives.
What do we tell people in this position? That they should be happy? That UBI is coming? My aunt has bills to pay now, UBI is currently not in the Overton Window of US politics, and is totally off the table for Republicans (who have the white house through at least 2028).
I’m personally very excited about AI, but the lack of seriousness with which I see tech people talk about these issues is frustrating. If we can’t tell people a believable story where they don’t get screwed, they will decide (totally rationally from their perspective) that this needs to stop.
"Capitalism sucks" has become a pretty universal slogan, but traditionally, leftists didn't want less labor (that's what the capital owners want), but more control about their labour.
Care to explain why?
We’re automating the interesting work with AI and leaving the drudge work for humans.
I think you have that backwards.
This is all embedded in their future growth prospects. Nobody is interested in subsidizing AI as a public service forever. They're interested in "AI is going to make this company go 100x".
I agree that this dream of huge returns is the carrot luring investors.
I don't think that it will actually work that way. The barriers to making a useful model appear to be modest and keep getting lower. There are a lot of tasks where some AI is useful, but you don't need the very best model if there's a "good enough" solution available at lower prices.
I believe that the irrational exuberance of AI investors is effectively subsidizing technological R&D in this area before AI company valuations drop to realistic levels. Even if OpenAI ends up being analogous to Yahoo! (a currently non-sexy company that was once a darling of investors), their former researchers and engineers can circulate whatever they learned on the job to the organizations that they join later.
So yes, you can pollute the good old internet even more, but no, you cannot change the arrow of time, and then there's already the growing New Internet of APIs and public announce federations where this all matters very little.
Doom-saying about "model collapse" is kind of funny when OpenAI and Anthropic are mad at Chinese model makers for "distilling" their models, ie. using their outputs to train their own models.
It won't mean we see the model collapse in public; rather, we'll struggle to get to the next quality increase.
It’s pretty shocking how much web content and forum posts are either partially or completely LLM-generated these days. I’m pretty sure feeding this stuff back into models is widely understood to not be a good thing.
Since AI crawlers don't obey any consent markers denying access to content, it makes sense for content owners who don't want AI trained on their content to poison it if possible. It's possibly the only way to keep the AI crawlers away.
Unfortunately that won't work. If you've served them enough content to have noticeable poisoning effect then you've allowed all that load through your resources. It won't stop them coming either - for the most part they don't talk to each other so even if you drive some away more will come, there is no collaborative list of good and bad places to scrape.
The only half-way useful answer to the load issue ATM is PoW tricks like Anubis, and they can inconvenience some of your target audience as well. They don't protect your content at all, once it is copied elsewhere for any reason it'll get scraped from there. For instance if you keep some OSS code off GitHub, and behind some sort of bot protection, to stop it ending up in CoPilot's dataset, someone may eventually fork it and push their version to GitHub anyway thereby nullifying your attempt.
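A hedged sketch of the hashcash-style proof-of-work trick that tools like Anubis rely on (a generic illustration of the idea, not Anubis's actual protocol): the client must burn CPU to find a nonce, while the server verifies with a single hash.

```python
import hashlib
import secrets

DIFFICULTY = 12  # leading zero bits the hash must have; real tools tune this

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce (this is the part that costs CPU)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server side: a single hash, so checking is nearly free."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY

challenge = secrets.token_hex(16)
assert verify(challenge, solve(challenge))
```

The asymmetry is the whole point: solving takes on the order of 2^DIFFICULTY hashes, verifying takes one. It raises the cost of bulk scraping, but as noted above it also taxes legitimate visitors and does nothing once the content is mirrored elsewhere.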
In fact, given this many parameters, poisoning should be relatively easy in general, but extremely easy on niche subjects.
Abusive, sneaky scraping is absolutely through the roof.
This is true. Some documentation of stuff I've tinkered with (though it isn't actually published as such, so it won't get scraped until/unless it is) has content tucked sufficiently out of the way of humans, including those using accessibility tech, but likely to be seen as relevant by a scraper. It won't be enough to poison the whole database/model/whatever, or even to poison a tiny bit of it significantly. But it might change any net gain of ignoring my “please don't bombard this with scraper requests” signals to a big fat zero, or maybe a tiny little negative. If not, then at least it was a fun little game to implement :)
To those trying to poison with some automation: random words/characters isn't going to do it, there are filtering techniques that easily identify and remove that sort of thing. Juggled content from the current page and others topologically local to it, maybe mixed with extra morsels (I like the “the episode where” example, but for that to work you need a fair number of examples like that in the training pool), on the other hand could weaken links between tokens as much as your “real” text enforces them.
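The "juggled content" idea could be sketched like this. It's a toy with made-up pages (a real poisoner would need far more care to evade filters): sentences are pooled from nearby pages and shuffled, so each sentence still looks locally plausible while the cross-sentence token links get scrambled.

```python
import random

def juggle(pages, seed=None):
    """Pool sentences from a page and others topologically local to it,
    then shuffle: each sentence stays plausible on its own, but the
    cross-sentence associations a model would learn get weakened.
    A toy sketch, not a real tool."""
    rng = random.Random(seed)
    sentences = [s.strip() for page in pages for s in page.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

# Made-up pages standing in for "the current page and its neighbours".
pages = [
    "The Z80 ran CP/M. CP/M booted from floppy disks.",
    "Carrot cake needs flour. Flour binds the batter.",
]
mixed = juggle(pages, seed=1)
# The same sentences survive, but the pairings between them are gone.
assert sorted(s.strip() for s in mixed.split(".") if s.strip()) == sorted(
    s.strip() for p in pages for s in p.split(".") if s.strip()
)
```

Because every sentence is drawn from real local content, simple per-token or per-sentence filters have little to latch onto; the damage, such as it is, lives in the ordering.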
One thing to note is that many scrapers filter obvious profanity, sometimes rejecting whole pages that contain it, so sprinkling a few offensive sequences (f×××, c×××, n×××××, r×××××, farage, joojooflop, belgium, …) where the bots will see them might have an effect on some.
Of course none of this stops the resource hogging that scrapers can exhibit - even if the poisoning works or they waste time filtering it out, they will still be pulling it, using up bandwidth.
These days the tech industry is more moneyed circus than serious effort to improve humanity.
Fortunately no-one sane enough among us, computer programmers, believes in that bs, we all see this masquerade for what it mostly is, basically a money grab.
We’re already at a point where much of the academic research you find in online databases can’t be trusted without vetting through real world trustworthy institutions and experts in relevant fields. How is an LLM supposed to do this kind of vetting without the help of human curators?
If all the LLM training teams have to stop indiscriminate crawling and fall back to human curation and data labeling then the poisoners will have won.
We have evidence to the contrary. Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus. And even though the falsity of this information was made public by the author of the experiment and the results of their actions were widely published, it took a while before the models started to get wind of it and stopped treating the fake disease as real. Imagine what you can do if you publish false information and have absolutely no reason to later reveal that you did so in the first place.
Wrong. There is no 'majority consensus' against 'bixonimania' because they made it up; that was the point. It's unsurprisingly easy to get LLMs to repeat the only source on a term never before seen. This usually works; made-up neologisms are the fruit fly of data poisoning because they are so easy to do and so unambiguous about where the information came from. (And retrieval-based poisoning is the very easiest and laziest and most meaningless kind of poisoning, tantamount to just copying the poison into the prompt and asking a question about it.) But the problem with them is that, also by definition, it is hard for them to matter; why would anyone be searching or asking about a made-up neologism? And if it gets any criticism, the LLMs will pick that up, as your link discusses. (In contrast, the more sources are affected, the harder it is to assign blame; some papermills picked up 'bixonimania'? Well, they might've gotten it from the poisoned LLMs... or they might've gotten it from the same place the LLMs did which poisoned their retrievals, Medium et al.)
> OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.
And yes, sure, in this example the scientific peer-review process may have eventually criticised and countered 'bixonimania' as a hoax were the researcher to have never revealed its falsity—emphasis on 'may', few researchers have the time and energies to trawl through crap papermill articles and publish criticisms. Either way, that is a feature of the scientific process and is not a given to any online information.
What happens when false information is divulged by other means that do not attempt to self-regulate? And how do we distinguish one-off falsities from the myriad of obscure true things that the public is expecting LLMs to 'know' even when there is comparatively little published information about them and therefore no consensus per se?
> The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
This seems to imply the poisoning affected the web search results, not the actual model itself, because it takes months for data to make it into a trained base model.
So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.
I mean, obviously there are many vocal opponents to AI, I see them on social media including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from trades-people to teachers, are adopting AI in some capacity and report positive uses and interactions.
This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.
Given all the borderline apocalyptic articles how students are using it to cheat and teachers have no way to stop them, I'd be honestly surprised by that.
On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.
It reminds me of similar late-stage-capitalism-style activity, from the assassination of the insurance company CEO to the firebombing of Teslas. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.
So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.
Most people don't care if something is written by an AI as long as it is reasonable, and reflects the intent of the human who prompted the AI.
If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).
Most fears of AI (in the 2026 sense of the term), and perhaps technology more broadly, are fears of capitalism, ownership, and control, and less about the capabilities of the thing itself.
If AGI is let loose on the world I am confident millions of people are going to die.
yeah no. thinking this way is hyperbolic and just plain wrong
It doesn't matter that you don't like the slop on the LinkedIn post, ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy and it should've been regulated to all holy hell back in 2015 when the enshittification began.
Now is as good a time as any.
Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.
But they will probably use self-play soon. See https://www.amplifypartners.com/blog-posts/self-play-and-aut...
Right now there are companies that hire software devs or data scientists to just solve a bunch of random problems so that they can generate training data for an LLM. Why would they be in business if self-play worked out so well?
Because it is still cheaper.
Sounds like Macrodata Refinement.
Tell me more? I'm guessing you might say: neither connects with everyday people, they have misaligned incentives*, they (like most corporate leaders) don't speak directly, they have more power than almost any elected leader in the world, ... Did I miss anything?
My take: when it comes to character and goals and therefore predicting what they will do: please don't lump Amodei with Altman. In brief: Altman is polished, effective, and therefore rather unsettling. In short, Altman feels amoral. It feels like people follow him rather than his ideas. Amodei is different. He inspires by his character and ideals. Amodei is a well-meaning geek, and I sometimes marvel (in a good way) how he leads a top AI lab. His media chops are middling and awkward, but frankly, I'm ok with it. I get the sense he is communicating (more-or-less) as himself.
Let me know if anyone here has evidence to suggest any claim I'm making is off-base. I'm no oracle.
I could easily pile on more criticisms of both. Here's a few: to my eye, Dario doesn't go far enough with his concerns about AI futures, but I can't tell how much of this is his PR stance as head of A\ versus his core beliefs. Altman is a harder nut to crack: my first approximation of him is "brilliant, capable, and manipulative". As much as I worry about OpenAI and dislike Altman's power-grab, I probably grant that he's, like most people, fundamentally trying to do the right thing. I don't think he's quite as deranged as say Thiel. But I could be wrong. If I had that kind of money, intellect, and network, maybe I would also be using it aggressively and in ways that could come across as cunning. Maybe Altman and Thiel have good intentions and decent plans -- but the fact remains the concentration of power is corrupting, and they seem to have limited guardrails given their immense influence.
* Here's my claim, and I invite serious debate on it: Dario, more than any corporate leader, takes alignment seriously. He actually funds work on it. He knows how it works. He cares. He actually does some of the work, or at least used to. How many CEOs actually have the skills to DO the rank-and-file work of the companies they run? Even the most pessimistic people can probably grant this.
Isn't there somewhere between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, and how to mitigate the effects on society, and to account for energy consumption, etc.
I think AI, as a properly utilized tool, is amazing; I think our lack of restraint in just throwing it into everyone's hands without understanding of the tools they are using is horrifying. I'd imagine a lot of the community here echoes that same sentiment, but maybe not, and I am just making assumptions.
It's wild to see the about face. Now it's:
> If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.
It would have been very difficult to predict this shift 25 years ago.
We welcomed the vampires in and wonder why our necks hurt.
The last time a property class was removed was _slaves_.
Arguing that copyright is good because a subset of big tech doesn't want it around is as stupid as arguing that slavery is good because the robber barons don't like it.
What's more it's a property class we have been fighting against since before the majority of people on here were born. We are finally winning after decades of losing. The 1976 copyright act was at best a Trojan horse and the 1998 Mickey Mouse Protection Act was a complete disaster.
In short: sprinkles holy water.
They are thrilled.
The folks fighting perpetual copyright were not fighting to make it possible for Disney to fire creatives. In fact they were fighting for the creatives to triumph over Disney.
> In fact they were fighting for the creatives to triumph over Disney.
We were doing nothing of the sort. It was "Information wants to be free", not "we want to provide a perpetual job for a subset of white-collar workers".
sprinkles holy water
Property classes are born and die every day. You can own the rights to publish an arcade video game, but that class of rights would have been way more valuable 45 years ago. NFTs were born and died just recently. You can own digital assets worth real money in an online game that simply shuts down.
Some people may read this and say "these don't qualify as a property class", to which I will remind you that property class used in this way is a brand new term, which I think is invented solely to be able to compare the limitations on human freedom associated with slavery to the limitations on human freedom associated with intellectual property.
I say this as someone whose notions exist orthogonal to the debate; I use AI freely but also don't have any qualms about encouraging people to upend the current paradigm and pop the bubble.
Such is the fate of all utopian dreams.
If we're going to have AI overlords, it'd be great if they spoke with proper grammar.
Should they hire them?
Yes the specification is holding a lot of weight here. Assume it's comprehensive and all consultancies offer the same aftercare support. Otherwise we're just handwaving and bike shedding over something that's not measurable.
Resistance is futile
But to be honest, I totally agree that AI is indeed destroying communities. We can already see YouTube redirecting all the reporting to AI, which can allow some malicious agent to claim your original video and demonetize it (i.e. steal your money). It happened to great YouTubers like Davie504. There is no way to appeal, as the appeal is also handled by a robot.

And how did that work out for the textile workers?
> The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.
This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one.
Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitudes more efficient than humans coming up with artisanal garbage.
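For a sense of how cheap that filtering can be, even without an LLM: a crude dictionary-hit-rate heuristic already catches random-word garbage. The word list here is purely illustrative; a real pipeline would score perplexity with a small, cheap model instead.

```python
def looks_like_garbage(text: str, vocab=None) -> bool:
    """Flag text whose tokens mostly fall outside a known vocabulary.
    A toy stand-in for perplexity scoring with a small language model;
    the built-in word list is illustrative, not a real dictionary."""
    words = text.lower().split()
    if not words:
        return True
    known = vocab or {"the", "a", "of", "and", "to", "in", "is", "it",
                      "on", "cat", "sat", "mat"}
    hit_rate = sum(w.strip(".,!?") in known for w in words) / len(words)
    return hit_rate < 0.5  # mostly unknown tokens: probably garbage

assert looks_like_garbage("xqzjv bleem frop joojooflop")
assert not looks_like_garbage("the cat sat on the mat")
```

This is the effort asymmetry in miniature: the check is a few comparisons per token, while producing garbage that survives it takes human-grade craftsmanship.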
Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]
Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].
Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).
[0]https://www.reddit.com/r/vibecoding/
[1]https://www.reddit.com/r/isthisAI/
[2]https://www.reddit.com/r/aiwars/
[3]https://www.reddit.com/r/antiai/
[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...
[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...
Doesn't mean it's correct, or empirically-based.
We've had literal generations of experience with vaccines, tons of data with formal systems to collect it, and most of the "resistance" traces back to "I dun wanna" and hearsay.
In contrast, LLM prompt-injection is an empirically proven issue, along with other problems like wrongful correlations (particularly racist ones), self-bias among models, and humans generally deploying them in very irresponsible ways.