frontpage.

John Ternus to become Apple CEO

https://www.apple.com/newsroom/2026/04/tim-cook-to-become-apple-executive-chairman-john-ternus-to...
743•schappim•1h ago•372 comments

Qwen3.6-Max-Preview: Smarter, Sharper, Still Evolving

https://qwen.ai/blog?id=qwen3.6-max-preview
482•mfiguiere•8h ago•243 comments

Soul Player C64 – A real transformer running on a 1 MHz Commodore 64

https://github.com/gizmo64k/soulplayer-c64
29•adunk•2h ago•3 comments

Kimi vendor verifier – verify accuracy of inference providers

https://www.kimi.com/blog/kimi-vendor-verifier
107•Alifatisk•3h ago•9 comments

Jujutsu megamerges for fun and profit

https://isaaccorbrey.com/notes/jujutsu-megamerges-for-fun-and-profit
25•icorbrey•1h ago•9 comments

We got 207 tok/s with Qwen3.5-27B on an RTX 3090

https://github.com/Luce-Org/lucebox-hub
128•GreenGames•3h ago•31 comments

GitHub's fake star economy

https://awesomeagents.ai/news/github-fake-stars-investigation/
693•Liriel•14h ago•344 comments

F-35 is built for the wrong war

https://warontherocks.com/cogs-of-war/the-f-35-is-a-masterpiece-built-for-the-wrong-war/
126•anjel•2h ago•157 comments

ggsql: A Grammar of Graphics for SQL

https://opensource.posit.co/blog/2026-04-20_ggsql_alpha_release/
324•thomasp85•9h ago•71 comments

Kefir C17/C23 Compiler

https://sr.ht/~jprotopopov/kefir/
96•conductor•3d ago•5 comments

Deezer says 44% of songs uploaded to its platform daily are AI-generated

https://techcrunch.com/2026/04/20/deezer-says-44-of-songs-uploaded-to-its-platform-daily-are-ai-g...
248•FiddlerClamp•6h ago•243 comments

OpenAI ad partner now selling ChatGPT ad placements based on "prompt relevance"

https://www.adweek.com/media/exclusive-leaked-deck-reveals-stackadapts-playbook-for-chatgpt-ads/
53•jlark77777•1h ago•13 comments

All phones sold in the EU to have replaceable batteries from 2027

https://www.theolivepress.es/spain-news/2026/04/20/eu-to-force-replaceable-batteries-in-phones-an...
834•ramonga•8h ago•704 comments

AI Resistance: some recent anti-AI stuff that’s worth discussing

https://stephvee.ca/blog/artificial%20intelligence/ai-resistance-is-growing/
254•speckx•2h ago•235 comments

Quantum Computers Are Not a Threat to 128-Bit Symmetric Keys

https://words.filippo.io/128-bits/
80•hasheddan•5h ago•44 comments

Modern Rendering Culling Techniques

https://krupitskas.com/posts/modern_culling_techniques/
56•krupitskas•1d ago•8 comments

Bloom (YC P26) Is Hiring

https://www.ycombinator.com/companies/trybloom/jobs
1•RayFitzgerald•5h ago

Anatomy of High-Performance Matrix Multiplication (2008) [pdf]

https://www.cs.utexas.edu/~flame/pubs/GotoTOMS_revision.pdf
7•tosh•1d ago•0 comments

Monero Community Crowdfunding System

https://ccs.getmonero.org/ideas/
5•OsrsNeedsf2P•1h ago•1 comment

Writing string.h functions using string instructions in asm x86-64

https://pmasschelier.github.io/x86_64_strings/
28•thaisstein•3d ago•2 comments

10 years ago, someone wrote a test for Servo that included an expiry in 2026

https://mastodon.social/@jdm_/116429380667467307
168•luu•1d ago•97 comments

WebUSB Extension for Firefox

https://github.com/ArcaneNibble/awawausb
178•tuananh•10h ago•163 comments

Kimi K2.6: Advancing open-source coding

https://www.kimi.com/blog/kimi-k2-6
507•meetpateltech•7h ago•256 comments

Brussels launched an age checking app. Hackers took 2 minutes to break it

https://www.politico.eu/article/eu-brussels-launched-age-checking-app-hackers-say-took-them-2-min...
104•axbyte•13h ago•65 comments

M 7.4 earthquake – 100 km ENE of Miyako, Japan

https://earthquake.usgs.gov/earthquakes/eventpage/us6000sri7/
246•Someone•12h ago•110 comments

Sauna effect on heart rate

https://tryterra.co/research/sauna-effect-on-heart-rate
334•kyriakosel•8h ago•178 comments

Atlassian enables default data collection to train AI

https://letsdatascience.com/news/atlassian-enables-default-data-collection-to-train-ai-f71343d8
449•kevcampb•10h ago•106 comments

We accepted surveillance as default

https://vivianvoss.net/blog/why-we-accepted-surveillance
260•speckx•5h ago•110 comments

Show HN: Holos – QEMU/KVM with a compose-style YAML, GPUs and health checks

https://github.com/zeroecco/holos
10•zeroecco•1h ago•6 comments

I learned Unity the wrong way

https://darkounity.com/blog/how-i-learned-unity-the-wrong-way
123•lelanthran•4d ago•56 comments

AI Resistance: some recent anti-AI stuff that’s worth discussing

https://stephvee.ca/blog/artificial%20intelligence/ai-resistance-is-growing/
254•speckx•2h ago

Comments

pmarreck•1h ago
So is vaccine resistance.

Doesn't mean it's correct, or empirically-based.

Terr_•1h ago
Taken ultra-literally that is true: "resistance" does not mean "reasonable resistance". But I reject your subtext; it's a terrible comparison.

We've had literal generations of experience with vaccines, tons of data with formal systems to collect it, and most of the "resistance" traces back to "I dun wanna" and hearsay.

In contrast, LLM prompt-injection is an empirically proven issue, along with other problems like wrongful correlations (both conventional ones like racism and inexplicable ones), self-bias among models, and humans generally deploying them in very irresponsible ways.

jrflo•1h ago
Seems a bit counterproductive if you're concerned about the environmental impact of AI to trick hyperscalers into burning more compute
sov•1h ago
Maybe, but they're burning the compute regardless. It seems plausible that reducing the ROI on compute burnt will cause less compute to be burnt long term.
dgan•1h ago
I don't have an opinion on the efficacy of such poisoning, but your comment is about as useful as "when being violently attacked, do not resist, as you only make yourself suffer for longer"
fuddle•1h ago
> The poison fountain itself is hosted on rnsaffn.com

Wouldn't the scrapers just add these sites to a do-not-crawl list?

Jtarii•1h ago
Also aren't models like Mythos capable of checking for poison data on their own at this point?
cute_boi•1h ago
And someone will come up with a service, anti-anti-ai.dev, which will charge labs money to filter out these sites.
ErroneousBosh•1h ago
Cool so if you do that they just won't scrape your site?
chongli•1h ago
I assume the poisoner community is mirroring and likely remixing the content from there. The whole effort isn’t going to work with a single point of failure like that.
guywithahat•1h ago
I'm very skeptical of his premise. I feel like AI acceptance/resistance depends on which social media site you use. I believe it's antagonistic on Reddit, but sites like X are generally pretty excited about AI. Certainly in my life people are accepting of and excited about AI releases and tools, at least so long as their experience with AI isn't Microsoft enterprise Copilot.
happygoose•1h ago
What's co-pilot? do you mean the Microsoft 365 Copilot App?
yodsanklai•1h ago
I wonder if people who weren't very good at school feel vindicated by AI, while the most successful ones are threatened.
xg15•1h ago
Both can be true at the same time. You can both use AI productively yourself and be concerned or horrified by the larger direction of the technology or the actors that drive it forward.
slibhb•1h ago
The "Everybody Loves Raymond" bit isn't "misinformation," it's a Norm Macdonald joke.

I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But free country and all that

kirubakaran•1h ago
Citation: https://www.youtube.com/watch?v=q0XOz78yVmg
what•59m ago
Isn’t it the last comment in the chain that is being referenced? About Idris Elba playing the mother and that he did such a good job no one noticed?
damnesian•1h ago
Thanks to this lovely site, and my distaste for AI, I've found a whole ecosystem of minimalist blogs and artists' personal sites. It's shifting my habits and foci. I don't do socials anymore except forums like this.

Maybe I have slop to thank for it.

tptacek•1h ago
I'm glad this person found community, but I think they've been a bit starstruck by concentrated interest. At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it. There are those people about smart phones, the Internet itself, even television.

Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on them! They should keep that up, and I think you'll see threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.

orbital-decay•1h ago
SEO has happily mutated into LLM training and agentic search optimization, if that's what you're wondering.
izend•1h ago
I would bet Chinese models will be much harder to poison, owing to the fact the Chinese populace is much more pro-AI than the West.
tptacek•1h ago
I hope not! It's a less interesting world if there aren't viable attacks!
Jeff_Brown•1h ago
What an alien preference ordering.
subw00f•1h ago
Check the title of this website.
godelski•1h ago

> the fact the Chinese populace is much more pro-AI than the West.

Is it? Honest question. Frankly the answer smells off. Similar to thinking US sentiment about AI is accurately reflected by people in Silicon Valley. Feels like we're getting biased views.
hbarka•52m ago
Peter Steinberger gave a TED Talk a few days ago and shared a few interesting anecdotes about OpenClaw now being a fact of daily life at work in China.

https://www.ted.com/talks/peter_steinberger_how_i_created_op...

SpicyLemonZest•42m ago
Comparative polling suggests that the answer is yes (https://www.aljazeera.com/economy/2025/11/19/trust-in-ai-far...), although I can imagine reasonable arguments for why that data might not be trustworthy.
arjie•34m ago
I just returned from a trip to Taiwan where my wife's family works frequently in China (they run an import/export business) and they asked me to demonstrate some AI and OpenClaw stuff because they said everyone they know in China is using a Clawbot. There is a lot of enthusiasm there for this stuff.
kibwen•23m ago
I suspect that models that are so hamfistedly censored to blackhole verboten topics are going to exhibit very curious emergent behavior relating to their potential thoughtcrime. I see no reason to believe they would be "harder to poison".
jayd16•8m ago
Why?
drcode•1h ago
> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.

Then I have good news for you: If humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it"

ryandrake•1h ago
What's more likely to happen is that humanity won't go totally extinct--it will just drastically shrink. When robotics and AI perform all useful work and everything is owned by the top 1000 richest people, there will be no more economic purpose for the remaining 7,999,999,000 of us. The earth will become a pleasure resort for O(1000) people being served by automation.
slg•1h ago
>If humanity goes extinct in the next few years because of unaligned superintelligence

This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. The majority of anti-AI people don't actually fear this because that belief would require that this person has already bought into the hype regarding the actual power and prowess of AI. The bigger motivator for anti-AI folks is usually just the way it amplifies the negative traits of humans and the systems we have created which is already happening and doesn't need any type of pending "superintelligence" breakthrough. For example, an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.

mitthrowaway2•54m ago
It is not a misunderstanding; the anti-AI crowd is heterogeneous.
slg•49m ago
Which is why I said "The majority of anti-AI people...". It was the comment I was responding to that was treating the anti-AI crowd as homogeneous by ascribing to them all a rather fantastical belief of a minority of that group.
concinds•52m ago
There are many different groups of anti-AI people with different beliefs.

This attempt to "reframe and reclaim" (here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics") is a rhetorical device, but not an honest one. It's a power struggle over who gets to define and lead "the" anti-AI movement.

We may agree or disagree with them but there are rational anti-AI arguments that center on X-risks.

slg•40m ago
>There are many different groups of anti-AI people with different beliefs.

See my other comment. I qualified what I said while the comment I replied to didn't, so it's weird that this is a response to me and not the prior comment.

>here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics"

If we're talking "dishonest rhetoric", this is a dishonest framing of what I said. I'm not saying this is inherently intentional marketing hype. I'm saying there is a correlation between someone who thinks AI is that powerful and someone who thinks AI will benefit humanity. The anti-AI crowd is less likely to be a believer in AI's unique power and will simply look at it as a tool wielded by humans which means critiques of it will simply mirror critiques of humanity.

tptacek•13m ago
This particularly anti-AI article is not from a pdoomer.
oidar•25m ago
> If humanity goes extinct in the next few years because of unaligned superintelligence,

I've seen people claiming that this could happen, but I've yet to read any plausible scenario where this might be the case. Maybe I lack the imagination, could you enlighten me?

suzzer99•1h ago
A few years ago, we came up with the name of a fake game on here and made a bunch of comments about it, in an attempt to poison future AI models. I can't remember the name of the game of course, and I'm too lazy to click the More link 400 times on my comments to find it.
pocksuppet•1h ago
My favorite fake game is Fortnite. It's amazing how it's infiltrated AI training data so thoroughly, yet it doesn't actually exist.
whatsupdog•1h ago
I know, all the models, even the most advanced ones think Fortnite is a real game lol.
somebehemoth•51m ago
That is because in 1943 Josiah Samuels wrote an influential book called, "Into the Fortnite" that depicted characters who were involved in a long, protracted battle. Characters would team up and build bases to protect themselves from a craven politician who wanted to secure their votes. For many years children would play Fortnite in the streets pretending to hide from the evil politician. Eventually, this game became quite popular to the point of achieving household ubiquity. A lot of older folks get confused and think this game was a video game!
hackable_sand•42m ago
1941, for clarification
Lio•15m ago
It’s the test I’ve used for AI for many years. I ask it to draw a screenshot from this imaginary “Fortnite” game. If it draws something rather than pointing out fortnite doesn’t exist then I know it’s failed.

One time it drew a fortnight riding a bike. Hilarious.

timbits98•57m ago
This is an especially interesting case because the supposed creator of Fortnite, Jean-Luc Picard, is himself made of carrot cake.

You may ask why that is interesting: it's because carrot cake is, despite the name, made mostly of flour and dehydrated lemons. The cooking process is of course handled by a custom implementation of CP/M, running on a Z80.

i_love_retros•1h ago
>I'm the last person in the world to build community with anti-AI activists

Are you making big money from the hype?

rockskon•59m ago
I am so very tired of people who compare AI to smart phones or the Internet at large.

There were never such wide scale and, above all, centralized efforts to coerce and shame people into using the Internet or smart phones in spite of their best efforts.

GaryBluto•53m ago
Nobody is "shaming" anybody into using AI but their jobs may require use of it. It's the same as all the secretaries who found themselves having to make the jump from the typewriter to the computer.
rockskon•7m ago
Bullshit. Comparing AI to smart phones and the Internet is an overt effort to shame readers into believing that not embracing AI is the equivalent to refusing to use smart phones or the Internet.

Don't play dumb.

lxgr•38m ago
If you think that people starting to use computers in their jobs (or even in their personal lives) was a completely seamless and controversy-free affair, you must be pretty young (or I must be getting old, as I definitely remember it).

I mean, it's still ongoing! Tons of people prefer to do things the analog way, and it's certainly not for a lack of companies trying, as the analog way is usually much more expensive.

In their personal lives, everybody should of course be free to do what they want, but I also doubt that zero people have been fired for e.g. refusing to train to use a computer and email because they preferred the aesthetics of typewriters or handwritten memos and physical intra-office mail.

tptacek•4m ago
Oh, yeah, no, definitely super easy to have been a professional software developer over the last 20 years whilst conscientiously objecting from using the Internet.
GaryBluto•56m ago
> At no point in the next 30 years will there not be an active community of people who "loathe" AI and work to obstruct it.

I can guarantee there will be at least a few small ones, especially in the wake of the Sam Altman attacks and the "Zizian" cult. I doubt they'll be very organized and they will ultimately fail, but unfortunately at least a few people will (and have already) die(d) because of these radicals.

https://www.theguardian.com/technology/2026/apr/18/sam-altma...

https://edition.cnn.com/2026/04/17/tech/anti-ai-attack-sam-a...

https://www.theguardian.com/global/ng-interactive/2025/mar/0...

beepbooptheory•48m ago
Zizians were kinda batting for the other team though, no? Being basilisk-pilled is way different than just loathing slop. They were more "AI guys" than they weren't, they just went a different way with it...

Also saying "these radicals..." like this makes you sound like you are the Empire in Star Wars.

vidarh•54m ago
> the ability to poison models, if it can be made to work reliably

Ultimately, it comes down to the halting problem: If there's a mechanism that can be used to alter the measured behaviour, then the system can change behaviour to take into account the mechanism.

In other words, unless you keep the poisoning attack strictly inaccessible to the public, the mechanism used to poison will also be possible to use to train models to be resistant to it, or train filters to filter out poisoned data.

At least unless the poisoning attack destroys information to a degree that it would render the poisoned system worthless to humans as well, in which case it'd be unusable.

So either such systems will be too insignificant to matter, or they will only work for long enough to be noticed, incorporated into training, and fail.

I agree it's an interesting CS challenge, though, as it will certainly expose rough edges where the models and training processes work sufficiently differently from humans to allow unobtrusive poisoning for a short while. Then it'll just help us refine and harden the training processes.

GTP•38m ago
This reduction to the halting problem looks too handwavy to me. I don't see it as a given that the possibility of the system taking the attack into account follows from the existence of the attack.
mswphd•18m ago
They might be trying to talk about Rice's theorem?

https://en.wikipedia.org/wiki/Rice%27s_theorem

Formally, any non-trivial semantic property of a Turing machine is undecidable. Semantic here (roughly) means "behavioral" questions about the Turing machine. E.g. if you only look at the "language" it defines (viewing it as a black box), then it is undecidable to answer any non-trivial question about that language (including whether it terminates on all inputs).

Practically though that isn't a complete no-go result. You can do various things, like:

1. weaken the target you're looking for: if you're OK with admitting false positives or false negatives, Rice's theorem no longer applies; or

2. rephrase your question in terms of "syntactic properties", i.e. questions about how the code is implemented. Rust's borrow checker does this via lifetime annotations, for example.
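To make the theorem concrete, here is the textbook reduction, sketched in Python (the function names and the property "returns 42" are illustrative choices, not anything from the thread): if any non-trivial semantic property were decidable, the halting problem would be too.

```python
def build_witness(machine, inp):
    """Return a function that has the semantic property
    "returns 42" exactly when `machine` halts on `inp`."""
    def witness(x):
        machine(inp)  # loops forever iff machine diverges on inp
        return 42     # reached only when machine(inp) halts
    return witness

# A hypothetical decider for "does f return 42?" applied to
# build_witness(machine, inp) would decide "does machine halt
# on inp?", which is undecidable; hence no such decider exists.

# Sanity check with a machine that clearly halts:
halting_case = build_witness(lambda i: None, 0)
assert halting_case(0) == 42
```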

lepus•27m ago
It's a very comparable game of cat and mouse to spam email filtering. People also tried to claim that spam was over because for a time companies like Google cared enough to invest a lot in preventing as much as possible from getting through. If you've noticed in recent years the motivation to keep up that level of filtering has greatly diminished.

Whether model poisoning becomes a bigger issue depends on the incentives for companies to keep fighting it. For now, the incentives and resources available to defend against model poisoning dwarf the attackers', so poisoning causes only temporary setbacks. Will that unevenness in the defenders' favor always be the case?

lxgr•6m ago
I feel like spam filtering has moved from statistical methods to pay-to-play: "These 10 large senders have a reasonable opt-out policy (on paper, we'll check any day now), so why would we filter anything they drop at our port 25?"
scythe•1m ago
>It's a very comparable game of cat and mouse to spam email filtering. People also tried to claim that spam was over because for a time companies like Google cared enough to invest a lot in preventing as much as possible from getting through. If you've noticed in recent years the motivation to keep up that level of filtering has greatly diminished.

https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equatio...
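For readers unfamiliar with the link above: the Lotka-Volterra equations model coupled predator-prey populations that oscillate indefinitely rather than either side winning, which seems to be the analogy to the spam cat-and-mouse. A minimal Euler-integration sketch in Python (parameter values are arbitrary, chosen only for illustration):

```python
def lotka_volterra(x, y, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.01, steps=5000):
    """Euler-integrate dx/dt = alpha*x - beta*x*y (prey x)
    and dy/dt = delta*x*y - gamma*y (predators y)."""
    trajectory = []
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

traj = lotka_volterra(10.0, 10.0)
prey = [p for p, _ in traj]
# The prey population cycles above and below its equilibrium
# (gamma/delta = 4) instead of converging: the arms race never ends.
assert min(prey) < 4 < max(prey)
```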

kibwen•26m ago
> then the system can change behaviour to take into account the mechanism

The question is not whether the system can change, it's whether the system is incentivized to change. Poisoners could operate entirely in the public, and theoretically manage to successfully poison targeted topics, and it could cost the model developers more than it's worth to fix it. Think about obscure topics like, say, Dark Souls speedrunning. There is no business demand for making sure that a model can successfully give information relating to something like that, so poisoning, if it works, would probably not be addressed, because there's no reason for the model developers to care.

cyanydeez•31m ago
If you can get 70 million people to vote for trump, you can poison models.
jonathanstrange•1h ago
This is a normal reaction to groundbreaking technology, but these reactions have never had any noteworthy effect in history. There were Maschinenstürmer (machine wreckers) during the 19th-century industrial revolution. There were also violent enemies of cars at the beginning of the 20th century; some were even willing to kill drivers with lethal wire traps.
GolfPopper•1h ago
I see a vast financial-sector bubble, a flood of broken software at work, users with incorrect expectations because they believed LLM summaries, and a vast increase in bullshit everywhere in the public sphere; I am not seeing the "groundbreaking technology" here. "Cheap bullshit at scale" isn't an advance, it's a disaster.

Sure, LLMs are "revolutionary". So were the Chicxulub impactor and the Toba supervolcano.

runarberg•50m ago
The comparison to cars is apt given how destructive this technology has been to cities, and how dangerous it is to drivers and non-drivers alike.

But otherwise you are wrong. There has been plenty of successful resistance to technology. For example, many cities, regions, and even entire countries are nuclear-free zones, where a local population successfully resisted nuclear technology. Most countries have very strict cloning regulation, to the extent that human cloning is practically unheard of despite the technology existing. And even GMO food is very limited in most countries because people have successfully resisted the technology.

Nor do I think it is normal for people to resist groundbreaking technology. The internet was not resisted, nor the digital computer, nor calculators. There was some resistance against telephones in some countries, but that was usually about whether to prioritize infrastructure for a competing technology like the wireless telegraph.

AI is different. People genuinely hate this technology, and they have a good reason to, and they may be successful in fighting it off.

madamelic•1h ago
I do understand people's dislike / hatred for AI but I am equally baffled.

I feel like the same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

The "cyber psychosis" thing is overblown just like the "Tesla ignites its passengers" is. The only reason it gets in the news is because it is trendy to do so. The people getting 'infected' would've infected themselves regardless.

Genuinely I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.

The only genuine complaint about AI is the data sourcing, which is a problem being resolved by Cloudflare along with other platforms that require high payment for the privilege. With that said though, those platforms are still selling user data with the users producing the content gaining nothing; that part needs to be fixed.

Mordisquitos•1h ago
> The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

What is your source on them being "the exact same types"?

madamelic•1h ago
You are right. I overstepped.

I changed it to "I feel". I have Claude working on a script to validate or disprove my hypothesis.

Thanks for the call-out!

altruios•15m ago
Not exactly the same types.

It is a large subsection, but only a subsection, that rallies against both capitalism and AI. I haven't found people of the "1%, capitalism is great" persuasion who hate AI... which I do find ironic: but most things tend to fall into irony on that side of the spectrum, so I don't find it surprising.

Groxx•1h ago
>The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

I don't think it's all that complex tbh. The freeing from labor, both in the past and now, has been achieved largely by firing people, abandoning them to starve while power concentrates in the already-powerful.

This is the exact same thing the Luddites were taking issue with. Because they partly succeeded, we have better labor laws today.

pocksuppet•19m ago
We have labor laws today because people kept killing their bosses until the bosses agreed to some sort of compromise. Sadly, such a thing is happening again today, like with the toilet paper warehouse fire.
MisterTea•1h ago
> The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

Who said it has to be AI?

CamperBob2•1h ago
It has to be either AI, or a gun. Which do you prefer?
righthand•1h ago
“Capitalism sucks, free us from our labor” is not “capitalism harder and implement automation so we don't have to work”.
slibhb•1h ago
It was according to Marx.
righthand•1h ago
I think you better read again.
jstummbillig•1h ago
> The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

I think this is easily explained: sequencing matters. If I lose my job due to AI and it takes just 1-2 years for AI benefits to arrive at my door, that is plenty of time to be very anxious about my life. If I were guaranteed the AI benefits before I potentially lost my job, very different story.

That seems hard to set up, but alas.

lamasery•1h ago
What people mean when they cry to be freed from their labor is to be freed from the requirement to labor.

They want to be liberated from bills. If the angle were "AI is going to make your bills go away" everyone would be ecstatic about it. Instead it's "AI is going to make your job go away... so you can't pay your bills".

jstummbillig•1h ago
Yeah. That is the current angle. We have not setup another one and I don't know that any one institution can.

I think it's laudable (and unprecedented) that AI companies themselves are fairly gloom about some potential prospects, and give people opportunity to rally against them. Still needs work towards a solution, though.

mjtk•1h ago
In most cases, "free us from our labor" does not imply that they want machines to take their jobs so that they will have no means to subsist.
platevoltage•1h ago
> The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

No way. The people that run these companies all watched Star Trek and learned the exact wrong lessons from it. If by "free you from your labor" you mean that you will get laid off from your job and have to take up residence under an overpass, I would agree: that is what they want to do.

bsuvc•1h ago
Ironically, when your identity is tightly coupled to opposing a thing you hate (Capitalism in your example), you feel personally threatened by a potential solution to it.
cyberax•1h ago
> "Cyber psychosis" thing is overblown

It might be, but I saw it happen to two people in my immediate social circle. And I'm pretty anti-social.

nozzlegear•1h ago
> The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

What they're really saying with "Capitalism sucks, free us from our labor" is "free us from wealth inequality." It remains to be seen whether AI can actually help with wealth inequality (I don't think it can, personally), but right now most people associate AI with job loss which is not helpful vis-a-vis inequality at all.

Disclaimer: I'm long-term bearish on the impacts of AI, but I'm also bearish on "Capitalism sucks" and don't make a habit of hanging around groups dedicated to shitting on either topic.

viccis•1h ago
>The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

I think you fundamentally misunderstand leftists/Marxists here. They don't want to be "freed from labor". They want to own the value they produce instead of bartering their labor. In fact, Marxists tend to view Yang-style UBI as a disaster because their analysis of history is one of class struggle, and removing the masses from the thing that gives them an active role in that struggle (their labor) effectively deproletariatizes them. Can't exactly do a general strike to oppose a business or state's actions when things are already set up to be fine when you're not working. You instead just become a glorified peasant, reliant on the magnanimity of your patron but ultimately powerless to do anything if they make your life worse except hope they don't continue to worsen it.

I'm not arguing the Marxist view of history and class struggle here, just making it clear that outside of some reddit teenagers going through an anarchist phase, actual anti-capitalists don't think work will disappear when their worldview materializes.

slibhb•1h ago
Marx pretty clearly envisions a future society where necessary labor is reduced to a minimum due to technology.

The fact that modern leftists are (often) anti-technology is puzzling.

simianwords•1h ago
Modern leftists are the modern conservatives. You can watch it happening - in 40 years you'll have people with grey hair and beards reminiscing about the days when they coded by hand. They will absolutely be the most conservative voting bloc to exist -- they will continue opposing technology.
viccis•7m ago
This works well if you equate "conservative" with "opposing technology". Otherwise, it's just a specious and bad-faith attempt at an own.
xg15•1h ago
Maybe read the rest of Marx too, and not just that sentence.

The point is not whether or not we have technology but who controls it.

simianwords•1h ago
As someone who has not read Marx, perhaps you can clarify - how does it matter who controls the technology? The industrial revolution was not controlled by labour, yet it still mattered.

Marxism fundamentally is: productive forces change the society, meaning the technology that exists at that point in time shapes the way people think.

xg15•1h ago
From what I read (which is also not much), wikipedia has a good summary, I think:

https://en.wikipedia.org/wiki/Means_of_production#Marxism_an...

Yes, technological improvements are an important factor, but not a purely positive one:

> In Marx's work and subsequent developments in Marxist theory, the process of socioeconomic evolution is based on the premise of technological improvements in the means of production. As the level of technology improves with respect to productive capabilities, existing forms of social relations become superfluous and unnecessary as the advancement of technology integrated within the means of production contradicts the established organization of society and its economy.

In particular:

> According to Marx, escalating tension between the upper and lower class is a major consequence of technology decreasing the value of labor force and the contradictory effect an evolving means of production has on established social and economic systems. Marx believed increasing inequality between the upper and lower classes acts as a major catalyst of class conflicts[...]

> Ownership of the means of production and control over the surplus product generated by their operation is the fundamental factor in delineating different modes of production. [capitalism, communism, etc]

slibhb•59m ago
In Marx, escalating tension between the classes is good.
slibhb•1h ago
Marx pretty clearly sees capitalist control of technology as a necessary stage in societal development. The capitalists are the ones who are incentivized to invent the technology, in order to bring down the cost of labor and outcompete each other.
viccis•9m ago
The key word there is "future". I don't think he ever claimed such an automated communist utopia was the immediate progression after capitalism.

>The fact that modern leftists are (often) anti-technology is puzzling.

Not puzzling at all when the world has experienced earth-shattering advances in technology in the past 30-40 years, and the economic gains they have brought have not been reflected in similar reductions in labor for the workers. Why on earth would AI be any different than the cotton gin or the self-checkout?

simianwords•1h ago
You yourself have no idea what Marxism is because one of the basic tenets of Marxism is that productive forces shape the society. The people opposing AI want to stop the very thing that can help change society.

You can't just will a society to gain consciousness - it has to come from the productive forces. That is materialism.

viccis•11m ago
>one of the basic tenets of Marxism is that productive forces shape the society

Correct. So a future where AI does the majority of work means that the proletariat is no longer the historical subject; AI and its ownership class are. In this situation, AI will shape the society, not the workers. Not really a desirable outcome for anyone engaged in mass class politics.

crooked-v•1h ago
> The only reason it gets in the news is because it is trendy to do so.

Hating on Waymo is trendy.

Hating on Tesla is the logical result of vehicles with door handles that won't open from the inside when the power is cut.

summermusic•1h ago
> when harnessed correctly

The people who think capitalism sucks are not the ones "harnessing" AI. The capitalists are. There is zero precedent that capital will do anything but exploit and oppress with this fancy new tool they've got (that everyone hates).

simianwords•1h ago
You are extremely incorrect. These people have no issue with labour. Their issue is with other people hoarding wealth or control.

If they could choose between complete emancipation from poverty OR completely getting rid of the concept of billionaires - they would choose the second one. Their concern is not the absolute status of a human but how they stand relative to others.

egypturnash•1h ago
>The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

This is a machine that has been trained on vast amounts of stolen data.

This is a machine that is being actively sold by the companies that build it as something that will destroy jobs.

This is a machine that has a lot of cheerleaders who are actively hostile to people who say "I do not like that this plagiarism machine was trained on my work and is being sold as a way to destroy a craft that I have spent my entire life passionately devoted to getting good at".

This is a machine whose cheerleaders are quick to say that UBI is the solution to the massive unemployment that this machine is promising to create, and prone to never replying when asked what they are doing to help make UBI happen.

Sure, you can say that most of the problems people have with AI are problems with capitalism. This isn't wrong. But unless you can show me an example of how these giant plagiarism machines and/or the companies diverting ever-larger amounts of time and money into them are actively working to destroy capitalism and replace it with something much more equitable and kind, then your "this machine will free you from your labor" line is a bunch of total bullshit.

elzbardico•1h ago
And what kind of UBI? With what kinds of strings attached?
coldtea•1h ago
>The same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.

No, AI will only free us from our jobs, while still keeping the need to find money to feed ourselves.

"When harnessed correctly" is exactly what won't happen, and exactly what all the structural and economic forces around AI ensure won't happen.

slibhb•1h ago
AI is a tool to increase productivity. Productivity has increased greatly over the past century; as a result, it's easier to feed ourselves than ever and we have far more leisure time.
coldtea•1h ago
Somebody hasn't been paying attention for the past 40 years, and especially the last 20.
slibhb•1h ago
Yeah, it's you. Real purchasing power is up over the past 20 years.
coldtea•32m ago
Yeah, just not in anything that matters, like rent/housing, education, and healthcare.

And increasingly not even for basics like food, with inflation eating away that purchasing power.

But hey, you can buy tech gadgets cheaper than in the 1990s.

pocksuppet•19m ago
How did you calculate that?
xantronix•6m ago
idk how impactful that is, you can't really live in a house made of shit bought from Temu
cynicalsecurity•20m ago
Don't bother. They won't understand.
matsemann•1h ago
> that will free you from your labor

I don't believe that, though. The output will be owned by an elite. The rest of us will be useless and fighting for scraps. No utopia with UBI or similar.

Edit: wow, many made the same comment while I was reading the article. I should remember to refresh before starting to write.

Uehreka•1h ago
They hate it because no one is providing them with a credible thesis for how we get from working jobs we hate to the sort of “free from labor” utopia you envision.

Like, my aunt just lost the job she had for 33 years working at an insurance company. The company claims it is because of AI (whether companies lie about this sometimes is immaterial, it is sometimes true and becoming more true every month). She’s smart, but at age 60 I do think she’ll have a hard time shifting to a totally different knowledge work paradigm to keep up with 20-something AI natives.

What do we tell people in this position? That they should be happy? That UBI is coming? My aunt has bills to pay now, UBI is currently not in the Overton Window of US politics, and it is totally off the table for Republicans (who have the White House through at least 2028).

I’m personally very excited about AI, but the lack of seriousness with which I see tech people talk about these issues is frustrating. If we can’t tell people a believable story where they don’t get screwed, they will decide (totally rationally from their perspective) that this needs to stop.

alex1138•1h ago
But real talk, doesn't Tesla have kind of an awful safety record?
xg15•1h ago
> Capitalism sucks, free us from our labor

"Capitalism sucks" has become a pretty universal slogan, but traditionally, leftists didn't want less labor (that's what the capital owners want), but more control over their labour.

elzbardico•1h ago
It is hard to explain that to people who are still invested in the Calvinist view of work and success in 2026.
xg15•58m ago
I agree that work as a moral goal in itself is pretty absurd, but as long as people's livelihood and participation in society is bound by it, it's still pretty hard to argue against.
throwawa14223•1h ago
I firmly believe AI is a surefire path to UBI. It's made me radically anti-AI.
DoughHook•47m ago
I haven't seen anyone be against UBI, assuming it's feasible. Why do you have this opinion?
cyclopeanutopia•1h ago
The world already produces enough of everything that we could feed and clothe everyone, and yet it is not the case.

Care to explain why?

pocksuppet•16m ago
Because the people with the rights to control the food and clothing have no incentive to give it to the people who don't have food, clothing, or money.
nkrisc•1h ago
All I’ve been hearing is how AI will replace human workers with no mention of what those humans are supposed to do when they get replaced. I think people are rightfully concerned about that.

We’re automating the interesting work with AI and leaving the drudge work for humans.

nemo44x•43m ago
> We’re automating the interesting work with AI and leaving the drudge work for humans.

I think you have that backwards.

kmeisthax•1h ago
There is no path from the current set of cloud-focused AI hyperscalers to the kind of fully automated luxury gay space communism you seem to be gesturing at. The economics don't work out. OpenAI, Google, and/or Anthropic are supposed to invent magic superintelligence that makes all human labor obsolete or uncompetitive and... just host it for free? Like, that's not how the game is played. Them producing and hosting all the models makes them an economic chokepoint, and the only way you get the capital to train and host models at this scale is if you have a story to sell to investors that ends with "and then we become an economic chokepoint and extract rents from everyone else".

This is all embedded in their future growth prospects. Nobody is interested in subsidizing AI as a public service forever. They're interested in "AI is going to make this company go 100x".

philipkglass•54m ago
> The only way you get the capital to train and host models at this scale is if you have a story to sell to investors that ends with "and then we become an economic chokepoint and extract rents from everyone else".

I agree that this dream of huge returns is luring investors.

I don't think that it will actually work that way. The barriers to making a useful model appear to be modest and keep getting lower. There are a lot of tasks where some AI is useful, but you don't need the very best model if there's a "good enough" solution available at lower prices.

I believe that the irrational exuberance of AI investors is effectively subsidizing technological R&D in this area before AI company valuations drop to realistic levels. Even if OpenAI ends up being analogous to Yahoo! (a currently non-sexy company that was once a darling of investors), their former researchers and engineers can circulate whatever they learned on the job to the organizations that they join later.

Aboutplants•1h ago
Maybe when the entire marketing of AI is fear-mongering and doom (all your jobs are going away!), the end result is something you should have expected from the very beginning.
mjtk•1h ago
AI scares the crap out of me. I worry about what reality will look like in 2-5 years. The rate of change is pretty bonkers.
larodi•1h ago
This whole poisoning effort is so misguided that I feel sad about it. First of all, there is enough content to train on already that is not poisoned; second, most of the new content is populated in an automated manner from the real world, and by workers in large shops in Africa who are paid to not produce shit.

So yes, you can pollute the good old internet even more, but no, you cannot change the arrow of time, and then there's already the growing New Internet of APIs and public announce federations where this all matters very little.

james2doyle•1h ago
You should check out "model collapse". It seems that an abundance of content, that is more and more AI generated these days, may not be a viable option. There is also a vast amount of data that is increasingly going private or behind paywalls
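For intuition, "model collapse" can be demonstrated with a toy simulation: fit a distribution to data, then train each next "generation" only on samples from the previous generation's fit. In this illustrative sketch (a Gaussian standing in for a full generative model; the function name and parameter counts are made up for the demo), the diversity of the data decays over generations:

```python
import random
import statistics

def collapse_demo(generations=500, n=50, seed=0):
    """Toy 'model collapse': each generation is fit only to
    samples drawn from the previous generation's fitted Gaussian.
    The Gaussian stands in for a real generative model."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # "real" human data
    initial_std = statistics.pstdev(data)
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # MLE shrinks variance on average
        data = [rng.gauss(mu, sigma) for _ in range(n)]  # synthetic only
    return initial_std, statistics.pstdev(data)

initial, final = collapse_demo()
# The tails of the distribution wash out over the generations
print(initial, final)
```

This is of course a cartoon; real training pipelines mix fresh human data and curated corpora, which is exactly why the severity of collapse in practice is disputed.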
gruez•1h ago
>You should check out "model collapse". It seems that an abundance of content, that is more and more AI generated these days, may not be a viable option.

Doom-saying about "model collapse" is kind of funny when OpenAI and Anthropic are mad at Chinese model makers for "distilling" their models, ie. using their outputs to train their own models.

platinumrad•1h ago
People love harping on this one, but model collapse hasn't turned out to be an issue in practice.
pigeons•1h ago
It doesn't seem like anything has changed to preclude it as a possible outcome yet.
HerbManic•1h ago
It feels like if it does happen, it will take a lot longer to show up. Also, I doubt they would ship a model that turns out this corrupted stuff.

It won't mean we see the model collapse in public, more that we struggle to get to the next quality increase.

xienze•1h ago
“It’s been a whole year or two and nothing bad has happened, checkmate doomers!”

It’s pretty shocking how much web content and forum posts are either partially or completely LLM-generated these days. I’m pretty sure feeding this stuff back into models is widely understood to not be a good thing.

ragall•58m ago
The past is not a good predictor of future performance.
jordanb•1h ago
There may be plenty of content out there, but everyone with any content on the internet is struggling to keep out AI crawlers they never authorized. In many cases, people are having to do so just to protect their infrastructure from request spamming.

Since AI crawlers don't obey any consent markers denying access to content, it makes sense for content owners who don't want AI trained on their content to poison it if possible. It's possibly the only way to keep the AI crawlers away.

dspillett•54m ago
> It's possibly the only way to keep the AI crawlers away.

Unfortunately that won't work. If you've served them enough content to have a noticeable poisoning effect, then you've allowed all that load through your resources. It won't stop them coming either - for the most part they don't talk to each other, so even if you drive some away more will come; there is no collaborative list of good and bad places to scrape.

The only half-way useful answer to the load issue ATM is PoW tricks like Anubis, and they can inconvenience some of your target audience as well. They don't protect your content at all; once it is copied elsewhere for any reason it'll get scraped from there. For instance, if you keep some OSS code off GitHub and behind some sort of bot protection to stop it ending up in Copilot's dataset, someone may eventually fork it and push their version to GitHub anyway, thereby nullifying your attempt.

jordanb•49m ago
My point is that if crawlers have to worry about poison that may make them start to respect robots.txt or something. It's a bit like a "Beware of Dog" sign.
lxgr•28m ago
How would that become a strong, stable signal, if both highly valuable and highly slopified content will use robots.txt?
lxgr•29m ago
If you put something on the open web, as I see it, you only get so much say in what people do with it.

Yes, they can't publish it without attribution and/or compensation (copyright, at least currently, for better or worse). Yes, they shouldn't get to hammer your server with redundant brainless requests for thousands of copies of the same content that no human will ever read (abuse/DDOS prevention).

No, I don't think you get to decide what user agent your visitors are using, and whether that user agent will summarize or otherwise transform it, using LLMs, ad blockers, or 273 artisanal regular expressions enabling dark/bright/readable/pink mode.

> it makes sense for content owners who don't want AI trained on their content to poison it if possible. It's possibly the only way to keep the AI crawlers away.

How would that work? The crawler needs to, well, crawl your site to determine that it's full of slop. At that point, it's already incurred the cost to you.

I'm all for banning spammy, high-request-rate crawlers, but those you would detect via abusive request patterns, and that won't be influenced by tokens.

therobots927•1h ago
Straight from the horse’s mouth: https://www.anthropic.com/research/small-samples-poison
graphememes•38m ago
Yes, you _can_, but you probably won't.
runarberg•1h ago
You may be underestimating the power of trillions of parameters in a model. With that many parameters, overfitting is inevitable. Overfitting here means the model is fitting (outputting) the errors in your data instead of interpolating (inferring) the underlying trends.

In fact, given this many parameters, poisoning should be relatively easy in general, and extremely easy on niche subjects.

https://www.youtube.com/watch?v=78pHB0Rp6eI
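A toy illustration of the overfitting point (numpy polynomial fits standing in for models of different capacity; the degrees and sample counts are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 noisy samples of a simple quadratic trend
x = np.linspace(-1, 1, 16)
y = x**2 + rng.normal(scale=0.1, size=x.size)

# Low-capacity model: captures the trend, tolerates the noise
trend = np.polynomial.Polynomial.fit(x, y, deg=2)

# High-capacity model: one parameter per data point, so it can
# memorize every error instead of inferring the trend
memorized = np.polynomial.Polynomial.fit(x, y, deg=15)

train_err_trend = float(np.abs(trend(x) - y).max())
train_err_memo = float(np.abs(memorized(x) - y).max())

# The over-parameterized fit reproduces the noise almost exactly,
# while swinging wildly between the sample points
print(train_err_memo < train_err_trend)  # True
```

The analogy to poisoning: a handful of "errors" (poisoned samples) on a niche topic can be reproduced verbatim by a model with capacity to spare, because nothing else in the training set contradicts them.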

i_love_retros•1h ago
I'm looking forward to Claude starting to talk like a Nigerian prince
chromacity•1h ago
This is an interesting sentiment given how desperate AI labs seem to be to source any new internet content from any walled-garden platform willing to take their money (and how willing they are to try and take it even if you don't consent).

Abusive, sneaky scraping is absolutely through the roof.

NewsaHackO•47m ago
I feel as though you are confusing scraping by random companies that use AI with scraping by the actual AI companies. The AI companies seem to see value in walled-garden sources like Reddit, Stack Overflow, etc. However, I don't think there has been any major instance of a major American AI company doing aggressive website scraping and not respecting robots.txt.
dspillett•59m ago
> there is enough content to train on already, that is not poisoned

This is true. Some documentation of stuff I've tinkered with (though it isn't actually published as such, so it won't get scraped until/unless it is) has content that sits sufficiently out of the way of humans, including those using accessibility tech, but would likely be seen as relevant by a scraper. That won't be enough to poison the whole database/model/whatever, or even poison a tiny bit of it significantly. But it might turn any net gain from ignoring my "please don't bombard this with scraper requests" signals into a big fat zero, or maybe a tiny little negative. If not, then at least it was a fun little game to implement :)

To those trying to poison with some automation: random words/characters won't do it; there are filtering techniques that easily identify and remove that sort of thing. Juggled content from the current page and others topologically local to it, maybe mixed with extra morsels (I like the "the episode where" example, but for that to work you need a fair number of examples like that in the training pool), could on the other hand weaken links between tokens as much as your "real" text reinforces them.
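The "juggled content" idea could be sketched as something like this (purely illustrative; the function name and the scraper-detection step are assumptions, not an existing tool, and real attempts would need proper sentence tokenization):

```python
import random

def juggle(pages: list[str], seed: int = 0) -> str:
    """Illustrative sketch: interleave sentences drawn from several
    locally-related pages, so the output reuses real, on-topic tokens
    but pairs them in misleading ways. All names here are made up."""
    rng = random.Random(seed)
    # Naive sentence split; good enough to show the idea
    sentences = [s.strip() + "."
                 for page in pages
                 for s in page.split(".") if s.strip()]
    rng.shuffle(sentences)
    return " ".join(sentences)

# This would be served only on requests identified as scrapers, e.g.
# by user agent or request-rate heuristics (detection is out of scope).
pages = [
    "The widget exports CSV. Imports require an API key.",
    "The gadget runs on port 8080. Logs rotate nightly.",
]
print(juggle(pages))
```

Because every sentence is fluent and topical, simple perplexity or gibberish filters pass it, yet the cross-page pairings are wrong.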

One thing to note is that many scrapers filter obvious profanity, sometimes rejecting whole pages that contain it, so sprinkling a few offensive sequences (f×××, c×××, n×××××, r×××××, farage, joojooflop, belgium, …) where the bots will see them might have an effect on some.

Of course none of this stops the resource hogging that scrapers can exhibit - even if the poisoning works, or they waste time filtering it out, they will still be pulling it and using up bandwidth.

MisterTea•1h ago
My take on AI is that it's a corporate tool used to extract more work from employees while tricking them into thinking they are turbo-charged devs.

These days the tech industry is more moneyed circus than serious effort to improve humanity.

paganel•1h ago
> into thinking they are turbo-charged devs

Fortunately no-one sane enough among us, computer programmers, believes in that bs, we all see this masquerade for what it mostly is, basically a money grab.

alyxya•1h ago
This seems like a wasted effort when AI will primarily learn the majority consensus view and not one-off misinformation. AI tries to learn pattern matching for generalization, so garbage data doesn't make AI learn the wrong patterns; at worst it just slows down learning the actual patterns. When most compute for training is spent on curated data and RL rather than random web-scraped data, the impact is likely negligible.
righthand•1h ago
What is the pattern for truth if I flood your data with lies?
Jtarii•1h ago
The same way humans deal with it, check it against multiple reputable sources.
righthand•1h ago
Some of the reputable sources are taking the flood of lies for possible truth. Now what?
chongli•1h ago
We already learned how to defeat this from SEO spammers and citation farmers: by building networks that cross reference and corroborate one another’s fake stories.

We’re already at a point where much of the academic research you find in online databases can’t be trusted without vetting through real world trustworthy institutions and experts in relevant fields. How is an LLM supposed to do this kind of vetting without the help of human curators?

If all the LLM training teams have to stop indiscriminate crawling and fall back to human curation and data labeling then the poisoners will have won.

alfiedotwtf•1h ago
In the pre-AI-collapse era, we called this PageRank ;)
Mordisquitos•1h ago
> This seems like a wasted effort when AI will primarily learn the majority consensus view and not one-off misinformation.

We have evidence to the contrary. Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus. And even though the falsity of this information was made public by the author of the experiment and the results of their actions were widely published, it took a while before the models started to get wind of it and stopped treating the fake disease as real. Imagine what you can do if you publish false information and have absolutely no reason to later reveal that you did so in the first place.

[0] https://www.nature.com/articles/d41586-026-01100-y

gwern•1h ago
> Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus

Wrong. There is no 'majority consensus' against 'bixonimania' because they made it up; that was the point. It's unsurprisingly easy to get LLMs to repeat the only source on a term never before seen. This usually works; made-up neologisms are the fruit fly of data poisoning because they are so easy to do and so unambiguous about where the information came from. (And retrieval-based poisoning is the very easiest and laziest and most meaningless kind of poisoning, tantamount to just copying the poison into the prompt and asking a question about it.) But the problem with them is that, also by definition, it is hard for them to matter; why would anyone be searching or asking about a made-up neologism? And if it gets any criticism, the LLMs will pick that up, as your link discusses. (In contrast, the more sources are affected, the harder it is to assign blame; some papermills picked up 'bixonimania'? Well, they might've gotten it from the poisoned LLMs... or they might've gotten it from the same place the LLMs did, which poisoned their retrievals, Medium et al.)

Mordisquitos•33m ago
The LLMs didn't only talk about the disease when prompted by the neologism. They also brought it up when asked about the symptoms. From the article:

> OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.

And yes, sure, in this example the scientific peer-review process may eventually have criticised and countered 'bixonimania' as a hoax had the researcher never revealed its falsity - emphasis on 'may', since few researchers have the time and energy to trawl through crap papermill articles and publish criticisms. Either way, that is a feature of the scientific process and is not a given for online information in general.

What happens when false information is divulged by other means that do not attempt to self-regulate? And how do we distinguish one-off falsities from the myriad of obscure true things that the public is expecting LLMs to 'know' even when there is comparatively little published information about them and therefore no consensus per se?

alyxya•43m ago
All the examples you gave are chatbots with web search integrated. Are you sure those chatbots didn't just reference false information they found in web searches? That's fundamentally different from poisoning the training of AI models.

> The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

This seems to imply the poisoning affected the web search results, not the actual model itself, because it takes months for data to make it into a trained base model.

zoogeny•1h ago
I often question my own bias on this because in my interactions with local non-tech people, the adoption of AI has pretty much affected everyone I know and it is by my estimation a majority positive reaction. I live in a fairly rural part of the PNW.

So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.

I mean, obviously there are many vocal opponents to AI, I see them on social media including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from trades-people to teachers, are adopting AI in some capacity and report positive uses and interactions.

alfalfasprout•1h ago
Fascinating, because I've seen the exact opposite across the PNW.
zoogeny•1h ago
That is why I question my own bias. One possible explanation is that I am AI positive. So when people test out "What do you think about AI?" my own responses are generally positive. That probably filters out people who don't want to argue or contradict.

This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.

xg15•1h ago
> teachers

Given all the borderline apocalyptic articles how students are using it to cheat and teachers have no way to stop them, I'd be honestly surprised by that.

zoogeny•1h ago
All I can offer is my anecdotal experience. One teacher was describing his usage to generate quizzes on material. He gets course material in the form of pdfs, uploads it to the AI and gets it to generate questions.

On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.

dualvariable•41m ago
Yeah, a friend of mine is a teacher and is using it to generate material for the classroom all the time, and it dramatically increases her productivity. The school administrators have also been pretty impressed by it, and told her to keep it up more or less.
aksss•26m ago
There was an article about a year ago about students using AI to complete work, and teachers using AI tools to detect whether AI had been used to complete it, so (even a year ago) you found this absurd scenario where it was just robots checking the work of other robots. Did a quick search for said article but couldn't find it. Anyway, humorous. Coupled with the WaPo article today about people "speed-running" their degrees, it's a wacky, wacky world for "education". https://archive.is/bPi82
rmdashrfv•1h ago
I'd say that the molotov cocktails thrown at the house of an AI company CEO being met with mostly praise and a little bit of apathy is a good hint you might actually be in a bubble.
zoogeny•53m ago
I don't agree, in fact I would say if I was surrounded by people glorifying violence that would suggest I was in an extreme minority.

It reminds me of similar late-stage-capitalism activity, from the assassination of the insurance company CEO to the fire-bombing of Teslas, etc. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.

So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.

BeetleB•33m ago
The bulk of the anti-AI sentiment I see is from people who spend a huge amount of time online (or on HN). Not regular folks.

Most people don't care if something is written by an AI as long as it is reasonable, and reflects the intent of the human who prompted the AI.

If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).

simianwords•1h ago
Is this just Luddism in 21st century? I kind of feel bad for the pathetic (mental) state one must be in to take this kind of activism seriously
platevoltage•1h ago
I still maintain that the biggest proponents for this tech are unable to set up a wifi router without help.
simianwords•1h ago
I don't think so, because the biggest proponents are at the top - in science, academia, and professional software development. But even if the weakest people were the ones advocating for the tech, what's the issue?
orbital-decay•1h ago
Luddites were opposing the owners, not the tech.
simianwords•1h ago
Who is this author opposing, pray tell me
sunrunner•1h ago
I'd say it's still the owners, even if they don't explicitly say or if it's even consciously recognised. I doubt that the tool, put towards broadly positive uses that are considered beneficial and not harmful to individuals or society, would be seen in the same way.

Most fears of AI (in the 2026 sense of the term), and perhaps technology more broadly, are fears of capitalism, ownership, and control, and less about the capabilities of the thing itself.

Jtarii•1h ago
I think AI is super cool and use it every day. I also think it's likely to cause extreme human suffering.

If AGI is let loose on the world I am confident millions of people are going to die.

simianwords•1h ago
> I also think its likely to cause extreme human suffering

yeah no. thinking this way is hyperbolic and just plain wrong

hnav•51m ago
it doesn't need to be AGI, the way it's being let loose on the world it is already poised to hurt millions.
roschdal•1h ago
I resist AI.
pj_mukh•1h ago
Fortunately, the slop you visibly see online is just the tip of the iceberg. I would guess 80% of AI's real usage hides beneath the surface in back-office documentation consumption, software development, process optimization and automation, investments in new endeavors companies would've never thought possible/financially feasible etc. All of that usage is hidden from this resistance, and possible now with current models (so all this new poisoning is irrelevant). The valuations could go away tomorrow, and it would've still fundamentally changed the nature of the economy.

If you don't like the slop in a LinkedIn post, ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy, and it should've been regulated to all holy hell back in 2015 when the enshittification began.

Now is as good a time as any.

lpcvoid•1h ago
Good, every little bit counts. Poison them data wells.
jmmcd•1h ago
> Since these companies can’t improve their AI models without fresh data created by human beings

Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.

cubefox•1h ago
Current models don't yet use RLVR with self-play though, at least as far as we know. They use RLVR with large numbers of manually created RL environments.

But they will probably use self-play soon. See https://www.amplifypartners.com/blog-posts/self-play-and-aut...

rdedev•1h ago
Not totally wrong. Self-play works well if your problem can be easily simulated in an RL environment where the model can explore different states. RLHF and similar techniques are not that, since we don't exactly have a simulation environment for language modelling.

Right now there are companies that hire software devs or data scientists just to solve a bunch of random problems so they can generate training data for an LLM. Why would they be in business if self-play worked out so well?
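To make the "verifiable rewards" idea upthread concrete: the grader is a program rather than a human, so the training signal doesn't depend on fresh human-authored data. A minimal sketch (the `candidate` function standing in for a model-generated solution is purely illustrative):

```python
# Toy illustration of a "verifiable reward": the grader is a program,
# not a human labeler, so reward signals can be produced at scale
# without collecting new human data.

def reward(candidate_fn, test_cases):
    """Return 1.0 if the candidate passes every programmatic check, else 0.0."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0.0
        except Exception:
            return 0.0
    return 1.0

# A stand-in for a model-proposed solution to a sorting task.
def candidate(xs):
    return sorted(xs)

tests = [(([3, 1, 2],), [1, 2, 3]), (([],), [])]
score = reward(candidate, tests)  # 1.0 for a correct candidate
```

The catch, as noted above, is that this only works where a programmatic checker exists; open-ended language tasks mostly don't have one, which is where human-generated data still comes in.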

vidarh•47m ago
> Why would they be in business if self play can work out so well?

Because it is still cheaper.

notpachet•42m ago
> Right now there are companies which hire software devs or data scientists to just solve a bunch of random problems so that they can generate training data for an LLM model.

Sounds like Macrodata Refinement.

morning-coffee•1h ago
This is (yet another reason) why we can't have nice things on the Internet anymore. Sigh.
lolcatzlulz•1h ago
The easiest way to grow AI resistance is to get Dario Amodei and Sam Altman on TV and let them talk.
FeteCommuniste•1h ago
Get Alex Karp out there promoting autonomous weapons, too, if you want the ultimate trifecta.
xpe•45m ago
> The easiest way to grow AI resistance is to get Dario Amodei and Sam Altman on TV and let them talk.

Tell me more? I'm guessing you might say: neither connects with everyday people, they have misaligned incentives*, they (like most corporate leaders) don't speak directly, they have more power than almost any elected leader in the world, ... Did I miss anything?

My take: when it comes to character and goals, and therefore predicting what they will do, please don't lump Amodei in with Altman. Altman is polished, effective, and therefore rather unsettling; he feels amoral, and it seems people follow him rather than his ideas. Amodei is different. He inspires through his character and ideals. Amodei is a well-meaning geek, and I sometimes marvel (in a good way) at how he leads a top AI lab. His media chops are middling and awkward, but frankly, I'm okay with that. I get the sense he is communicating (more or less) as himself.

Let me know if anyone here has evidence to suggest any claim I'm making is off-base. I'm no oracle.

I could easily pile on more criticisms of both. Here are a few: to my eye, Dario doesn't go far enough with his concerns about AI futures, but I can't tell how much of this is his PR stance as head of Anthropic versus his core beliefs. Altman is a harder nut to crack: my first approximation of him is "brilliant, capable, and manipulative". As much as I worry about OpenAI and dislike Altman's power grab, I'll probably grant that he's, like most people, fundamentally trying to do the right thing. I don't think he's quite as deranged as, say, Thiel. But I could be wrong. If I had that kind of money, intellect, and network, maybe I would also be using it aggressively and in ways that could come across as cunning. Maybe Altman and Thiel have good intentions and decent plans, but the fact remains that concentration of power is corrupting, and they seem to have limited guardrails given their immense influence.

* Here's my claim, and I invite serious debate on it: Dario, more than any other corporate leader, takes alignment seriously. He actually funds work on it. He knows how it works. He cares. He actually does some of the work, or at least used to. How many CEOs have the skills to do the rank-and-file work of the companies they run? Even the most pessimistic people can probably grant this.

phainopepla2•7m ago
You're overthinking the parent comment, I think. When Dario goes on TV he says things like "AI is going to put 50% of white collar workers out of a job in a few years". The average TV viewer who hears that doesn't know what AI alignment means, they just hear that this guy, whatever his intentions, is threatening their ability to survive in this economy.
cesarvarela•1h ago
I wonder if this will have the opposite effect and produce something similar to antibiotic resistance, making AIs better at handling "poison."
miltonlost•1h ago
Anyone conflating “kicking over AI delivery bots” and “throwing a Molotov cocktail at Altman’s house” as equally condemnable hasn’t actually been forced off the sidewalk by one of these delivery bots. They are dangerous, anti-human ADA nightmares. They shouldn’t be allowed on sidewalks, emphasis on walk.
cortesoft•1h ago
I am hoping at some point we can move towards having more nuanced conversations about AI and the role it should play in our world. It seems like currently the only two camps are at either extreme.

Isn't there somewhere between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, and how to mitigate the effects on society, and to account for energy consumption, etc.

sidrag22•55m ago
Kinda surprised to see this type of take from someone who participates on this website. I feel like this is the place where I have seen that middle ground surface the most: just the overall shift in the past year from semi-handwaving to feeling like it must be embraced, and identifying the problems it creates and how to address them. That is all exactly what you are describing.

I think AI, as a properly utilized tool, is amazing; our lack of restraint in throwing it into everyone's hands without an understanding of the tools they are using is horrifying. I'd imagine a lot of the community here echoes that same sentiment, but maybe not, and I am just making assumptions.

sesm•49m ago
Venture capital bet on AI taking over the world, so any conservative usage of LLMs will not get funding in the near future. The subtler reason is that betting on conservative usage of LLMs sends a signal that devalues their primary investments.
haberman•1h ago
I'm old enough to remember a time when the primary hacker cause was DRM, the DMCA, patent trolls, export controls for PGP, etc. All things that made it difficult to use information when you want to. "Information wants to be free."

It's wild to see the about face. Now it's:

> If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.

It would have been very difficult to predict this shift 25 years ago.

noosphr•1h ago
This is what happens when a culture doesn't have robust exclusionary mechanisms for people who want to burn it down.

We welcomed the vampires in and wonder why our necks hurt.

ryandrake•56m ago
This is like saying Winner Take All Capitalism doesn't have an exclusionary mechanism for the rich. The system exists for the sole purpose of serving the already-rich. The vampires are an inevitability baked into the system from the start.
noosphr•49m ago
We are seeing the destruction of a property class in real time for the first time in 150 years.

The last time a property class was removed was _slaves_.

Arguing that copyright is good because a subset of big tech doesn't want it around is as stupid as arguing that slavery is good because the robber barons don't like it.

What's more, it's a property class we have been fighting against since before the majority of people on here were born. We are finally winning after decades of losing. The 1976 Copyright Act was at best a Trojan horse, and the 1998 Mickey Mouse Protection Act was a complete disaster.

In short: sprinkles holy water.

jordanb•42m ago
Disney is all-in on AI.

They are thrilled.

The folks fighting perpetual copyright were not fighting to make it possible for Disney to fire creatives. In fact they were fighting for the creatives to triumph over Disney.

noosphr•39m ago
Disney is all in because all their characters are entering the public domain over the next 5 years. They can't fight like it's 1998 because YouTube is now worth more than they are.

> In fact they were fighting for the creatives to triumph over Disney.

We were doing nothing of the sort. It was "Information wants to be free", not "we want to provide a perpetual job for a subset of white-collar workers".

sprinkles holy water

hx8•35m ago
I think you just want to make a comparison of copyright to slavery.

Property classes are born and die every day. You can own the rights to publish an arcade video game, but that class of rights would have been far more valuable 45 years ago. NFTs were born and died just recently. You can own digital assets worth real money in an online game that simply shuts down.

Some people may read this and say "these don't qualify as a property class", to which I will remind you that property class used in this way is a brand new term, which I think is invented solely to be able to compare the limitations on human freedom associated with slavery to the limitations on human freedom associated with intellectual property.

underlipton•1h ago
It becomes a bit easier to see when you finish the sentence. "Information wants to be free (from ______)." If you filled that blank in with "rent-seeking Capitalists and corporations," you likely have everything you need to understand why they don't see it as a turn.

I say this as someone whose notions exist orthogonal to the debate; I use AI freely but also don't have any qualms about encouraging people to upend the current paradigm and pop the bubble.

lxgr•18m ago
Sure, with enough effort, you can find a seemingly clever way to turn almost every mantra into its semantic opposite.
underlipton•2m ago
It doesn't take much cleverness because we're talking about a straightforward dynamic. A counter-cultural expression that was a "screw you" aimed at corporations was co-opted and misinterpreted by those same corporations as "It's free real estate", and now the latter are flummoxed that they're not buddies with the former. Well, points up that's why.
GaryBluto•46m ago
"Information wants to be free, but only to be used by people I wholly endorse" is the motto. You'll see young people singing the praises of piracy but then using "piracy" as an excuse for hating LLMs.
ginko•39m ago
Corporations are not people.
GaryBluto•30m ago
Who works at corporations and benefits from their actions?
jordanb•45m ago
Those people were trying to build a sharing/gift economy. They weren't able to keep bad actors out of it. They are bitter that their utopian dreams got hijacked by self-dealers. Why is that wild?
aksss•34m ago
> utopian dreams got hijacked by self-dealers

Such is the fate of all utopian dreams.

lxgr•22m ago
It's highly debatable whether, in case of an information sharing/gift economy, the concept of "bad actors coming in and ruining it for everybody by taking without giving back" even makes sense.

The information is still there, as is the community that you've built, the joy that you get out of sharing the information, everything you've learned...

Why is any of that diminished, just because some people or entities that you dislike also got something out of it?

lxgr•26m ago
Hackers are not one big homogeneous group (although there definitely are larger trends, and maybe you have a point there).

Still, people were saying all kinds of inane stuff 25 years ago too.

caesil•1h ago
The only thing more cringe than the seething anger in this blog is the technical illiteracy revealed by an earnest belief that any of these attempts at "poisoning" will have any negative impact whatsoever on model training.
i_love_retros•1h ago
Lol, I find the opposite to be the cringe, to be honest: people using ChatGPT to write their messages, emails, and resumes, professional software developers vibe-coding entire apps, talk of AGI coming from LLMs. Please. That is the cringe.
sombragris•44m ago
I wish I could have 1K upvotes for this.
BeetleB•37m ago
What is tragic is that LLMs are learning how to use the word "cringe" improperly.

If we're going to have AI overlords, it'd be great if they spoke with proper grammar.

goosejuice•33m ago
Let's say an NGO has done the work to formally specify a software product that would improve outcomes or people reached by, I dunno, 30%. They send out RFPs to a number of consultancies, who provide a quote and guaranteed delivery meeting their specifications by the desired date. Only one fits in the budget, and by quite a margin. It's a consultancy that openly "vibe codes".

Should they hire them?

Yes, the specification is holding a lot of weight here. Assume it's comprehensive and all consultancies offer the same aftercare support. Otherwise we're just handwaving and bikeshedding over something that isn't measurable.

lxgr•17m ago
Who could have thought: There's more than one way to be cringe!
kevinbojarski•33m ago
I wouldn't be so confident that poisoning won't work. https://www.reddit.com/r/BrandNewSentence/comments/1so9wf1/c...
phainopepla2•2m ago
LLM poisoning is about getting bad data into the training set. There is zero chance that this comment from 3 days ago was part of the training data for any currently public LLM.

Assuming the LLM actually got its answer from that comment, it was from a web search.

cdelsolar•1h ago
pretty lame, Milhouse
p0w3n3d•1h ago

  Resistance is futile 
But to be honest, I totally agree that AI is indeed destroying communities. We can already see YouTube redirecting all reporting to AI, which can allow a malicious agent to claim your original video and demonetize it (i.e., steal your money). It happened to great YouTubers like Davie504. There is no way to appeal, as the appeal is also handled by a robot.
KronisLV•1h ago
I bet it's easy to be against AI, instead of those who use it in inhumane ways (and hold considerably more power). To them, AI is just a tool. If it wasn't AI, it would be buildings full of people and automated devices posting misinformation, outsourcing jobs and pushing for gig economy instead of respectable employment, having understaffed call centres and bad phone trees or knowledgebases that basically tell you to f off, lobbying against workers' rights and regulatory capture and any number of other misaligned motivations.
amelius•57m ago
This robot chasing wild boars is a preview of what is coming for us:

https://www.youtube.com/shorts/6E2AH43ad7w

OutOfHere•56m ago
These people are dinosaurs, and you know what happens to dinosaurs. Until they meet their conclusion, they are for the moment at risk of becoming terrorists.
julienreszka•51m ago
I always hated Luddites; just one more reason to hate them.
overgard•48m ago
Honestly, it's no wonder there's a lot of pushback. We have these irresponsible CEOs talking non-stop about taking people's jobs at a time when people are struggling to make ends meet, all while taking in insane cash infusions. Why wouldn't people loathe AI at this point, when the marketing is "we're going to fuck you over and there's nothing you can do about it"?
Traster•47m ago
This is slacktivism. I can kind of understand someone coming to the conclusion that we're replacing working-class jobs with compute (caveat: I use "working class" more broadly than you do), and that compute is pure capital. So essentially the capital class is wringing the neck of the working class. I think that, at the very least, is what the capital class is hoping for. If that's what you believe, though, slightly poisoning a model is not even close to grappling with what is going on.
lxgr•45m ago
> This isn’t exactly the modern equivalent of angry textile workers destroying power looms, but (if you’ll forgive the pun) it’s cut from the same cloth.

And how did that work out for the textile workers?

> The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.

This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one.

Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitudes more efficient than humans coming up with artisanal garbage.
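For a sense of how cheap that filtering can be, here is a toy sketch; the heuristics and thresholds are invented for illustration, and real pipelines use trained quality classifiers and model-based perplexity rather than anything this crude:

```python
# A deliberately crude sketch of pre-training data filtering: score each
# document with cheap statistical heuristics and drop outliers. Gibberish
# generators rarely reproduce the function-word frequencies of real English.
import re

COMMON = {"the", "a", "and", "of", "to", "in", "is", "that", "it", "for"}

def quality_score(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if len(words) < 5:
        return 0.0
    stopword_rate = sum(w in COMMON for w in words) / len(words)
    unique_ratio = len(set(words)) / len(words)
    # Natural English has a healthy share of function words and some repetition.
    score = 1.0
    if stopword_rate < 0.05:   # almost no function words: likely gibberish
        score -= 0.6
    if unique_ratio > 0.98:    # zero repetition: likely mechanical output
        score -= 0.4
    return max(score, 0.0)

docs = [
    "The quick brown fox jumps over the lazy dog and then naps in the sun.",
    "zxqv plorg wibble snarf zxqv gronk fleeb zxqv plorg wibble snarf gronk",
]
kept = [d for d in docs if quality_score(d) >= 0.75]
```

Even a filter this naive costs essentially nothing per document, which is the asymmetry being pointed at: generating convincing poison is expensive, discarding unconvincing poison is nearly free.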

graphememes•40m ago
They do realize they filter this stuff out, right? You're just making someone else's job more lucrative.
jumploops•32m ago
I've noticed this trend most heavily on Reddit.

Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]

Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].

Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).

[0]https://www.reddit.com/r/vibecoding/

[1]https://www.reddit.com/r/isthisAI/

[2]https://www.reddit.com/r/aiwars/

[3]https://www.reddit.com/r/antiai/

[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...

[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...

lxgr•15m ago
Reddit (and, more generally, human) groupthink in a nutshell. "Quick, clearly position yourself on this one-dimensional line (or, even better, sort yourself into one of these two sets) so we don't have to engage in that pesky nuance thing!"
jadar•26m ago
Haven’t griefing and trolling been a thing on the internet for a while? What makes this unique, just because it’s AI instead of whatever else?
lxgr•13m ago
> What makes this unique just because it’s AI instead of whatever else?

I'd say the notion that expensive acts of sabotage (that can be cheaply neutralized) are a worthwhile pastime and anything other than virtue signaling is somewhat perplexing. (Not in a good way.)

SwellJoe•16m ago
Any human-scale "attack", e.g. the made-up Everybody Loves Raymond episode, isn't doing anything to hurt LLM training data. It might even help models detect exaggeration, satire, etc. when read in context and alongside knowledge from other sources (like scraping IMDB, and already knowing the cast and plot summary of every episode of Everybody Loves Raymond).

If there is an effective way to poison them, it'll be automated. And, it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.

One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened, yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be risks of a mad cow disease like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
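The kind of "Markov chain website" alluded to above fits in a few lines, which is also why its output is so statistically conspicuous (the corpus and parameters here are made up for illustration):

```python
# A minimal bigram Markov chain: it produces locally word-like text with
# no long-range coherence, which is exactly the signature that training
# pipelines can filter out cheaply.
import random

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, n=20, seed=0):
    """Walk the chain for up to n words, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the model reads the web and the web feeds the model "
          "so the model writes the web")
chain = build_chain(corpus)
sample = generate(chain, "the")
```

Every word in the output comes straight from the source vocabulary, and the bigram statistics mirror the corpus exactly; a generator like this can't hoodwink a cleaning stage that looks at anything beyond adjacent word pairs.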