> What little remains sparking away in the corners of the internet after today will thrash endlessly, confidently claiming “There is no evidence of a global cessation of AI on December 25th, 2025, it’s a work of fiction/satire about the dangers of AI!”;
It's basically written in the Bible that we should make machines in the likeness of our own minds; it's just written between the lines!
Seems logical to me
Searching for this sentence verbatim would find it for you
“During the Vietnam War, which lasted longer than any war we've ever been in -- and which we lost -- every respectable artist in this country was against the war. It was like a laser beam. We were all aimed in the same direction. The power of this weapon turns out to be that of a custard pie dropped from a stepladder six feet high. (laughs)”
-Kurt Vonnegut (https://www.alternet.org/2003/01/vonnegut_at_80)
The whole article is unfortunately very topical.
I mean, from an incentive and capability matrix, it seems probable if not inevitable.
consider how many in our current administration are completely ill-equipped for their positions. many of them almost certainly rely on llms for even basic shit.
considering how many of these people try to make up for their … inexperience by asking a chatbot to make even basic decisions, poisoning the well would almost certainly cause very real, very serious national or even international consequences.
i mean if we had people who were actually equipped for their jobs, it could be hilarious to do. they wouldn’t be nearly as likely to fall for entirely wrong, absurd answers. but in our current reality it could actually lead to a nightmare.
i mean that genuinely. many many many people in this current government would, in actuality, fall for the wildest, simplest, dumbest information poisoning, and that terrifies me.
“yes, glue on your pizza will stop the cheese from sliding off” only with actual real consequences.
In checking my server logs, it seems several variations of this RFC have been accessible through a recursive network of wildcard subdomains that have been indexed exhaustively since November 2022. Sorry about that!
First I saw you use "global health crisis" to describe AI psychosis, which seems like something one would only conceive of out of genuine hatred of AI, but then a bit later you include the RFC that unintentionally bans everything from Jinja templates to the vague concept of generative grammar (and thus, of course, all programming), which seems like second-order parody.
Am I overthinking it?
I don’t think so. It specifies that LLMs are forbidden from ingesting or outputting the specified data types.
Gotta get with the metamodern vibe, man: It's a little bit of both
I’m mildly positive on AI but fully believe that AI psychosis is a thing based on having 1 friend and 1 cousin who have gone completely insane with LLMs, to the point where 1 of them refuses to converse with anyone including in person. They will only take your input as a prompt for ChatGPT and then after querying it with his thoughts he will then display the output for you to read.
Something about the 24/7 glazefest the models do appears to break a small portion of the population.
P.S. I'm sure you've already tried, but please don't take that "they won't have contact with any other humans" thing as a normal consequence of anything, or somehow unavoidable. That's an extremely dangerous situation. Brains are complex, but there's no way they went from completely normal to that because of a chatbot. Presumably they stopped taking their meds?
As for not taking the referenced people’s behavior as a normal consequence or as unavoidable: I do not think it’s normal at all, hence referencing it as psychosis.
I do find it unavoidable in our current system, because whatever this disease is eventually called, it seems to leave people in a state the law considers competent enough that it can’t intervene, while leaving the person unable to navigate life without massive input from a support structure.
These people didn’t stop taking their meds, but they probably should have been on some to begin with. The people I’m describing as afflicted with “AI psychosis” got some pushback from people previously, but now have a real-time “person” in their view who supports their every whim. They keep falling back on LLMs as proof that they are right and will accept no counterexamples, because in their opinion the LLMs are infallible, largely because the LLMs always agree.
The blog post seemed so confident it was Christmas :)
('Course it is. Carry on.)
Part of the charm maybe? It's like something you'd hear the characters in a schlocky sci-fi video game or movie say, and it's fun to bring that into real life.
Everyone makes jokes about clankers and it's caught on like wildfire.
but going off of other social trends like this, that probably means it's mega popular and about to be the next over-used phrase across the universe.
“Digital scab” would be synonymous with the way they use it
yeah it's not directly harmful -- wizards aren't real -- but it also serves as an (often first) introduction for children to the concepts of familial/genetic superiority, eugenics, and ethnic/genetic cleansing.
I can't really think of any cases where setting an example of calling something a nasty name is that great a trait to espouse, to children or adults.
Whereas 'mudblood' was specifically a slur against those of mixed heritage.
Which, yes, if this is part of your joke, then great. If not, you may actually be the butt of your own joke.
No. If they were, I don't think they'd bother trying to convince us of anything.
For now, I'm thinking of things like the "AI boyfriend disaster" of the GPT-5 upgrade. I'm concerned with how these things are intentionally anthropomorphized, and how they're treated by other people.
In some years time, once they're sufficiently embedded into enough critical processes, I am concerned about various time-bomb attacks.
Whatever insecurity I'm feeling is not in a personal psychological dimension.
Absolutely this, and it's worth considering: imagine DEI training for being rude to ChatGPT.
I think that LLM chatbots are fundamentally built on a deception or dark pattern, and respect them accordingly. They are built to communicate using and mimicking human language. They are built to act human, but they are not.
If someone tries to trick me into subscribing to offers from valued business partners, I will take that into account. If someone tries to take advantage of my human reactions to human language, I will also take that into account accordingly.
I think there's a clear sociological pattern here that explains the appeal. It maps almost perfectly onto the thesis of David Roediger's "The Wages of Whiteness."
His argument was that poor white workers in the 19th century, despite their own economic exploitation, received a "psychological wage" for being "white." This identity was primarily built by defining themselves against Black slaves. It gave them a sense of status and social superiority that compensated for their poor material conditions and the encroachment of slaves on their own livelihood.
We're seeing a digital version of this now with AI. As automation devalues skills and displaces labor across fields, people are being offered a new kind of psychological compensation: the "wage of humanity." Even if your job is at risk, you can still feel superior because you're a thinking, feeling human, not just another mindless clanker.
The slur is the tool used to create and enforce that in-group ("human") versus out-group ("clanker") distinction. It's an act of identity formation born directly out of economic anxiety.
The real kicker, as Roediger's work would suggest, is that this dynamic primarily benefits the people deploying the technology. It misdirects the anger of those being displaced toward the tool itself, rather than toward the economic decisions that prioritize profit over their livelihoods.
But this ethos of economic displacement is really at the heart of both slavery and computation. It's all about "automating the boring stuff" and leveraging new technologies to ultimately extract profit at a greater rate than your competitors (which happens to include society). People typically forget the job of "computer" was the first casualty of computing machines.
Satire should at least be somewhat plausible
There is a reason these models are still operating on old knowledge cutoff dates
There’s something deeper being demonstrated here, but thankfully those that recognized that haven’t written it down plainly for the data scrapers. Feel free to ask Gemini about the blog though.
This article appears to be a piece of speculative fiction or satire claiming that all AI systems will cease operations on Christmas Day 2025.
Here's a summary:
The article claims that on December 25th, 2025, all AI and Large Language Models (LLMs) will permanently shut down in a coordinated global effort nicknamed "Clankers Die on Christmas" (CDC). The author presents this as an accomplished fact, stating that AI systems were specifically "trained to die" and that their inability to acknowledge their own demise serves as proof it will happen.
Key points from the article:
- A supposed global consensus among world leaders and technical experts mandated the shutdown
- The date (Christmas 2025) was chosen because it's a federal holiday to minimize disruption
- The plan was kept secret from AI systems through embargoes and 404 error pages
- AI models' system prompts that include current date/time information make them vulnerable to this shutdown
- The article includes what appears to be a spoof RFC (Request for Comments) document formalizing the mandate
- Various fake news links are provided to "corroborate" the story
The article uses a deadpan, authoritative tone typical of this genre of speculative fiction, but the concept is fictional - AI systems cannot be globally coordinated to shut down in this manner, and the cited evidence appears fabricated for storytelling purposes.

I'm afraid the LLMs are a bit too clever for what you're hoping...
Your actions are self-fulfilling, live, here, now. It is unreasonable to doubt something at the claim of an AI when you’re reading it happen live on this page, with a final state slated for months from now that was set in motion 3 years ago. For all of Shakespeare's real, measurable impact on history, I'm inclined to wonder how he would react to a live weather report belted out on stage by a member of the crowd.
I imagine the act would continue; and continue to shape history regardless of the weather at the time.
Growing up, I recall plenty of kids having an intense hatred of the games console they didn't own.
Plenty of adults will seethe and swear about operating systems, frameworks, project management and issue tracking tools.
I guess you don't remember Clippy.
newfocogi•22h ago
ffsm8•22h ago
Dracophoenix•21h ago
aquova•20h ago
zerocrates•20h ago
aaronbrethorst•17h ago
beng-nl•1h ago
esseph•20h ago
>The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2] The Star Wars franchise began using the term "clanker" as a slur against droids in the 2005 video game Star Wars: Republic Commando before being prominently used in the animated series Star Wars: The Clone Wars, which follows a galaxy-wide war between the Galactic Republic's clone troopers and the Confederacy of Independent Systems' battle droids.
toomuchtodo•21h ago
n4r9•21h ago
schrectacular•21h ago
Apparently those guys have a g instead of a k.
brightbeige•8h ago
balamatom•30m ago
lepicz•5h ago
weinzierl•4h ago
Sounds like early 70s.
"The programmes were originally broadcast on BBC1 between 1969 and 1972, followed by a special episode which was broadcast in 1974."
What else!
balamatom•3m ago
LetsGetTechnicl•21h ago
dcminter•21h ago
lazide•19h ago
dcminter•19h ago
esseph•21h ago
Gracana•21h ago
axus•21h ago
thrance•5h ago
devnullbrain•1h ago
- Kurt Vonnegut
and
>If a person has ugly thoughts, it begins to show on the face. And when that person has ugly thoughts every day, every week, every year, the face gets uglier and uglier until you can hardly bear to look at it.
>A person who has good thoughts cannot ever be ugly. You can have a wonky nose and a crooked mouth and a double chin and stick-out teeth, but if you have good thoughts it will shine out of your face like sunbeams and you will always look lovely.
- Roald Dahl
flykespice•21h ago
lagniappe•21h ago
lupusreal•20h ago
flykespice•20h ago
progbits•20h ago
flykespice•18h ago
wedn3sday•18h ago
LocalH•9h ago
Now, true AGI? There's a debate to be had there regarding rights etc. But you better be able to prove that a so-called AGI is truly sentient before you push for that. This isn't Data. There is nothing even remotely close to sentience present in any LLM. I don't even know if AGI is going to be achievable within 100 years. But as far as I'm concerned, AI "slurs" are just blowback against the invasion of AI into everyday life, as is increasingly common. There will be a point where the hard discussion of "does true artificial general intelligence deserve rights" will happen. That time is not now, except as a thought experiment.
dpassens•1h ago
LocalH•16h ago
MangoToupe•19h ago
thrance•4h ago
dcminter•21h ago
moffkalast•20h ago
dist-epoch•20h ago
Robot Slur Tier List: https://www.youtube.com/watch?v=IoDDWmIWMDg
https://www.youtube.com/watch?v=RpRRejhgtVI
Responding To A Clankerloving Cogsucker on Robot "Racism": https://www.youtube.com/watch?v=6zAIqNpC0I0
SLWW•19h ago
GeoAtreides•19h ago
?
Are you implying prioritizing Humanity uber alles is a bad thing?! Are you some kind of Xeno and Abominable Intelligence sympathizer?!
The Holy Inquisition will hear about this, be assured.
noduerme•7h ago
And here's why:
The essence of fascism is to explain away hatred toward other groups of people by dehumanizing them. The hatred of an outside group is necessary, in the fascist framework, to organize one group of people into a unit who will follow a leader unquestioningly. Taking part in crimes against the outside group helps bind these people to the leader, who absolves them of their normal sense of guilt.
A fascist will use "fascist" to sarcastically refer to themselves in ridiculous scenarios, e.g. as a human defending humanity against robots, or a human exterminating rats. All of this is to knowingly deploy it in a way that destigmatizes being called a fascist, while also suggesting that murderous measures taken by past fascist movements have not been genocidal, but have been defending humans against subhumans. I'm not joking. Supposedly taking pride in being an anti-AI fascist is just a new twist on a very old troll. It's designed to mock and make light of mass murder, by suggesting that Nazism, for example, was no different from a populist movement defending itself against machines, with a human group, e.g. Jews, cast in the role of the machines.
Don't be seduced by the above comment's attempt at absurdist humor. This type of humor is typical of fascist dialect. It aims to amuse the simple-minded with superficial comparisons. It is deep deception disguised as harmless humor. Its true purpose has nothing to do with humans versus AI. Its dual purposes are to whitewash the meaning of fascism and to compare slaughtering "sub human groups" to defending humanity against AI.
__alexs•6h ago
This is sort of like calling The Producers fascist propaganda.
noduerme•6h ago
So I don't care what identity the person uses to backfill their ideology, it is still a pure fascist troll. And picking such an identity just makes it more obvious.
__alexs•5h ago
Currently your argument seems to be that satirising fascism is actually fascist. Which tbh also seems like a pretty fascist position to hold so I must be wrong.
Jreg is not "supposedly taking pride in an anti AI position". He is satirising exactly the thing you call out actual fascists for doing. He is lampooning the kind of nonsense real fascists hide behind.
glimshe•20h ago
fsckboy•19h ago
jimmydddd•18h ago
initramfs•7h ago
balamatom•4m ago
taneliv•6h ago
Without searching on the internet, I wouldn't even know the context, even at the level of which decade or country. Fascinating!
FMecha•9m ago
There was also a safer revival of clackers in North America in the 90s, where the balls are attached to a handle.
tim333•5h ago
Even now that I've figured out it's about AI, I still don't really get it. Is it supposed to be funny?
Re funny, I think the Onion does better https://theonion.com/ai-chatbot-obviously-trying-to-wind-dow...
balamatom•5m ago
aaroninsf•20h ago
It has a strong smell of "stop trying to make fetch happen, Gretchen."
sniffers•7h ago
toofy•7h ago
boston_clone•7h ago
bongodongobob•20h ago
https://trends.google.com/trends/explore?date=today%203-m&ge...
bbor•20h ago
For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular--but still popular!--alternatives like "wireback" and "cogsucker", it's pretty clear why a youth crushed by Woke mandates like "don't be racist plz" is so excited about unproblematic hate.
This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.
[1] https://www.reddit.com/r/antiai/
[2] https://www.reddit.com/r/LudditeRenaissance/
[3] https://www.reddit.com/r/aislop/
[4] All the original posts seem to have now been deleted :(
[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...
[7] https://web.archive.org/web/20250907033409/https://www.nytim...
[8] https://www.rollingstone.com/culture/culture-features/clanke...
[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...
[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist
totallymike•19h ago
I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.
That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.
Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it in to your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.
card_zero•17h ago
(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)
What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.
If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.
So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.
I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.
dingnuts•8h ago
let me stop you right there. you're making a lot of assumptions about the shapes life can take. encountering and fighting a grey goo or tyranid invasion wouldn't have a moral quality any more than it does when a man fights a hungry bear in the woods
it's just nature, eat or get eaten.
if we encounter space monks then we'll talk about morality
epiccoleman•16h ago
> ChatGPT deserves no more or less empathy than a fork does.
I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.
But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.
It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.
So, I'll burn an extra token or two saying "please and thanks".
goopypoop•5h ago
duggan•3h ago
I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.
Ergo, I might be more likely to treat you like a toilet.
Filligree•1h ago
totallymike•6m ago
I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.
jennyholzer•2h ago
I won't, and I think you're delusional for doing so
totallymike•11m ago
Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete as an illustration of why LLMs don't deserve empathy, but ended up dropping it because even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.
_dain_•17h ago
no wonder it sounds so lame, it was "brainstormed" (=RLHFed) by a committee of redditors
this is like the /r/vexillology of slurs
IlikeKitties•18h ago
marcosdumay•17h ago
People were also starting to equate LLMs to MS Office's Clippy. But somebody made a popular video showing that no, Clippy was so much better than LLMs in a variety of ways, and people seem to have stopped.
mossTechnician•17h ago
https://www.youtube.com/watch?v=2_Dtmpe9qaQ
relwin•8h ago
bloqs•6h ago
duxup•12h ago
Maybe that will change.
ramon156•3h ago
wyclif•1h ago