
Build Your Own Database

https://www.nan.fyi/database
134•nansdotio•3h ago•28 comments

Neural audio codecs: how to get audio into LLMs

https://kyutai.org/next/codec-explainer
272•karimf•6h ago•85 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
180•tamnd•5h ago•97 comments

Foreign hackers breached a US nuclear weapons plant via SharePoint flaws

https://www.csoonline.com/article/4074962/foreign-hackers-breached-a-us-nuclear-weapons-plant-via...
200•zdw•3h ago•100 comments

Do not accept terms and conditions

https://www.termsandconditions.game/
39•halflife•4d ago•26 comments

Show HN: Katakate – Dozens of VMs per node for safe code exec

https://github.com/Katakate/k7
55•gbxk•4h ago•24 comments

NASA chief suggests SpaceX may be booted from moon mission

https://www.cnn.com/2025/10/20/science/nasa-spacex-moon-landing-contract-sean-duffy
55•voxleone•6h ago•281 comments

Our modular, high-performance Merkle Tree library for Rust

https://github.com/bilinearlabs/rs-merkle-tree
97•bibiver•6h ago•25 comments

Mathematicians have found a hidden 'reset button' for undoing rotation

https://www.newscientist.com/article/2499647-mathematicians-have-found-a-hidden-reset-button-for-...
28•mikhael•5d ago•14 comments

Time to build a GPU OS? Here is the first step

https://www.notion.so/yifanqiao/Solve-the-GPU-Cost-Crisis-with-kvcached-289da9d1f4d68034b17bf2774...
21•Jrxing•2h ago•0 comments

ChatGPT Atlas

https://chatgpt.com/atlas
339•easton•2h ago•360 comments

Flexport Is Hiring SDRs in Chicago

https://job-boards.greenhouse.io/flexport/jobs/5690976?gh_jid=5690976
1•thedogeye•2h ago

Ilo – a Forth system running on UEFI

https://asciinema.org/a/Lbxa2w9R5IbaJqW3INqVrbX8E
86•rickcarlino•6h ago•29 comments

Wikipedia says traffic is falling due to AI search summaries and social video

https://techcrunch.com/2025/10/18/wikipedia-says-traffic-is-falling-due-to-ai-search-summaries-an...
99•gmays•18h ago•117 comments

The Programmer Identity Crisis

https://hojberg.xyz/the-programmer-identity-crisis/
99•imasl42•3h ago•93 comments

Diamond Thermal Conductivity: A New Era in Chip Cooling

https://spectrum.ieee.org/diamond-thermal-conductivity
124•rbanffy•8h ago•37 comments

StarGrid: A new Palm OS strategy game

https://quarters.captaintouch.com/blog/posts/2025-10-21-stargrid-has-arrived,-a-brand-new-palm-os...
170•capitain•8h ago•35 comments

Apple alerts exploit developer that his iPhone was targeted with gov spyware

https://techcrunch.com/2025/10/21/apple-alerts-exploit-developer-that-his-iphone-was-targeted-wit...
175•speckx•3h ago•81 comments

Binary Retrieval-Augmented Reward Mitigates Hallucinations

https://arxiv.org/abs/2510.17733
18•MarlonPro•3h ago•3 comments

Magit Is Amazing

https://heiwiper.com/posts/magit-is-awesome/
51•Bogdanp•1h ago•31 comments

Getting DeepSeek-OCR working on an Nvidia Spark via brute force with Claude Code

https://simonwillison.net/2025/Oct/20/deepseek-ocr-claude-code/
52•simonw•1d ago•2 comments

AWS multiple services outage in us-east-1

https://health.aws.amazon.com/health/status?ts=20251020
2187•kondro•1d ago•1986 comments

Minds, brains, and programs (1980) [pdf]

https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
4•measurablefunc•1w ago•0 comments

Show HN: ASCII Automata

https://hlnet.neocities.org/ascii-automata/
64•california-og•3d ago•7 comments

The death of thread per core

https://buttondown.com/jaffray/archive/the-death-of-thread-per-core/
30•ibobev•22h ago•5 comments

What do we do if SETI is successful?

https://www.universetoday.com/articles/what-do-we-do-if-seti-is-successful
66•leephillips•1d ago•54 comments

Show HN: bbcli – A TUI and CLI to browse BBC News like a hacker

https://github.com/hako/bbcli
27•wesleyhill•2d ago•2 comments

The Greatness of Text Adventures

https://entropicthoughts.com/the-greatness-of-text-adventures
76•ibobev•3h ago•60 comments

Amazon doesn't use Route 53 for amazon.com

https://www.dnscheck.co/blog/dns-monitoring/2025/10/21/aws-dog-food.html
19•mrideout•1h ago•7 comments

60k kids have avoided peanut allergies due to 2015 advice, study finds

https://www.cbsnews.com/news/peanut-allergies-60000-kids-avoided-2015-advice/
190•zdw•15h ago•204 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
178•tamnd•5h ago

Comments

AznHisoka•5h ago
Can someone explain this in layman's terms?
PaulHoule•5h ago
They benchmark two different feeds of junk tweets:

  (1) a feed of the most popular tweets, based on likes, retweets, and such
  (2) an algorithmic feed that looks for clickbait in the text

They blend these in different proportions into a feed of random tweets that are neither popular nor clickbait, and find that feed (1) has the more damaging effect on chatbot performance. That is, they feed the blend of tweets into the model and then ask the model to do things, and get worse outcomes.
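
If it helps to picture the setup, here is a minimal sketch of that blending step as I read it; make_mix and the commented-out continual_pretrain call are illustrative placeholders, not the authors' actual code:

  import random

  def make_mix(junk, control, junk_ratio, size, seed=0):
      """Blend junk and control tweets at a fixed junk ratio, then shuffle."""
      rng = random.Random(seed)
      n_junk = round(size * junk_ratio)
      mix = rng.sample(junk, n_junk) + rng.sample(control, size - n_junk)
      rng.shuffle(mix)
      return mix

  # Toy stand-ins; the paper draws these from real tweet corpora.
  junk = [f"junk tweet {i}" for i in range(1000)]
  control = [f"control tweet {i}" for i in range(1000)]

  # Train one model per mix, then run the same benchmarks on each.
  for ratio in (0.0, 0.2, 0.5, 0.8, 1.0):
      corpus = make_mix(junk, control, junk_ratio=ratio, size=500)
      # continual_pretrain(base_model, corpus)  # hypothetical training step
      print(ratio, len(corpus))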
ForHackernews•4h ago
Blended in how? To the training set?
PaulHoule•15m ago
Very early training.
sailingparrot•4h ago
train on bad data, get a bad model
xpe•46m ago
> train on bad data, get a bad model

Right: in the context of supervised learning, this statement is a good starting point. After all, how can one build a good supervised model without good examples to train on?

But even in that context, it isn't an incisive framing of the problem. Lots of supervised models are resilient to some kinds of error. A better question, I think, is: what kinds of errors at what prevalence tend to degrade performance and why?

Speaking of LLMs and how they ingest and process data, there is a lot more going on than purely supervised learning, so it seems reasonable to me that researchers would want to try to tease the problem apart.

rriley•3h ago
The study introduces the "LLM Brain Rot Hypothesis," asserting that large language models (LLMs) experience cognitive decline when continuously exposed to low-quality, engaging content, such as sensationalized social media posts. This decline, evident in diminished reasoning, long-context understanding, and ethical norms, highlights the critical need for careful data curation and quality control in LLM training. The findings suggest that standard mitigation strategies are insufficient, urging stakeholders to implement routine cognitive health assessments to maintain LLM effectiveness over time.

TL;DR from https://unrav.io/#view/8f20da5f8205c54b5802c2b623702569

pixelmelt•4h ago
Isn't this just garbage in garbage out with an attention grabbing title?
philipallstar•4h ago
Attention is all you need.
echelon•3h ago
In today's hyper saturated world, attention is everything:

- consumer marketing

- politics

- venture fundraising

When any system has a few power law winners, it makes sense to grab attention.

Look at Trump and Musk and now Altman. They figured it out.

MrBeast...

Attention, even if negative, wedges you into the system and everyone's awareness. Your mousey quiet competitors aren't even seen or acknowledged. The attention grabbers suck all the oxygen out of the room and win.

If you go back and look at any victory, was it really better solutions, or was it the fact that better solutions led to more attention?

"Look here" -> build consensus and ignore naysayers -> keep building -> feedback loop -> win

It might not just be a societal algorithm. It might be one of the universe's fundamental greedy optimization algorithms. It might underpin lots of systems, including how we ourselves as individuals think and learn.

Our pain receptors. Our own intellectual interests and hobbies. Children learning on the playground. Ant colonies. Bee swarms. The world is full of signals, and there are mechanisms which focus us on the right stimuli.

peterlk•2h ago
You’re absolutely right!
ghurtado•2h ago
Something just flew approximately 10 miles above your head, and it would be a good idea for you to learn it.
scubbo•1h ago
There were plenty of kinder ways to let someone know that they had missed a reference - https://xkcd.com/1053/
echelon•45m ago
What makes you think I didn't know the reference? That paper is seminal and essential reading in this space.

Maybe read my comment at face value. I do have a point tangential to the discussion at hand.

lawlessone•2h ago
Is this copypasted from LinkedIn?
echelon•41m ago
If you traverse back the fourteen years of my comment history (on this account - my other account is older), you'll find that I've always written prose in this form.

LLMs trained on me (and the Hacker News corpus), not the other way around.

alganet•17m ago
You're not accounting for substrate saturation.

If you could just spam and annoy until you win, we'd all be dancing to remixed versions of the Macarena.

dormento•2h ago
In case anyone missed the reference: https://arxiv.org/abs/1706.03762

> (...) We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.

wat10000•4h ago
Considering that the current state of the art for LLM training is to feed it massive amounts of garbage (with some good stuff alongside), it seems important to point this out even if it might seem obvious.
CaptainOfCoit•4h ago
I don't think anyone is throwing raw datasets into LLMs and hoping for high-quality weights anymore. Nowadays most of the datasets are filtered one way or another, and some of them are even highly curated.
BoredPositron•3h ago
I doubt they are highly curated; you would need experts in every field to do so. Which gives me more performance anxiety about LLMs, because one of the most curated fields should be code...
nradov•3h ago
OpenAI has been literally hiring human experts in certain targeted subject areas to write custom proprietary training content.
BoredPositron•3h ago
I bet the dataset is mostly comprised of certain areas™.
groby_b•3h ago
The major labs are hiring experts. They carefully build & curate synthetic data. The market for labelled non-synthetic data is currently ~$3B/year.

The idea that LLMs are just trained on a pile of raw Internet is severely outdated. (Not sure it was ever fully true, but it's far away from that by now).

Coding's one of the easier datasets to curate, because we have a number of ways to actually (somewhat) assess code quality. (Does it work? Does it come with a set of tests and pass them? Does it have stylistic integrity? How many issues get flagged by various analysis tools? Etc.)
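
As a rough illustration of that kind of gate, here's a minimal sketch of an automated code-quality filter; the signals, thresholds, and use of the ruff CLI are my own assumptions, not any lab's actual pipeline:

  import subprocess
  import sys
  import tempfile
  from pathlib import Path

  def quality_signals(code: str) -> dict:
      """Score a Python sample with cheap automated checks."""
      with tempfile.TemporaryDirectory() as d:
          path = Path(d) / "sample.py"
          path.write_text(code)
          # Does it even parse/compile?
          compiles = subprocess.run(
              [sys.executable, "-m", "py_compile", str(path)],
              capture_output=True,
          ).returncode == 0
          # Style / static-analysis flags (assumes ruff is installed).
          lint = subprocess.run(
              ["ruff", "check", str(path)], capture_output=True, text=True
          )
      return {
          "compiles": compiles,
          "lint_issues": lint.stdout.count("\n"),
          "has_tests": "def test_" in code,  # crude test-presence signal
      }

  def keep(sample: str) -> bool:
      s = quality_signals(sample)
      return s["compiles"] and s["lint_issues"] < 5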

satellite2•1h ago
Is that right? Isn't the current way of doing things to throw "everything" at it and then fine-tune?
Barrin92•3h ago
Yes, I am concerned about the Computer Science profession

>"“Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI"

A metaphor is exactly what it is, because not only do LLMs not possess human cognition, there's certainly no established science under which they're literally valid subjects for clinical psychological assessment.

How does this stuff get published? This is basically a blog post. One of the worst aspects of the whole AI craze is that it has turned a non-trivial amount of academia into a complete cargo-cult joke.

bpt3•3h ago
It is a blog post, it was published as a Github page and on arXiv.

I think it's intended as a catchy warning, to people who are dumping every piece of the internet (and synthetic data based on it!) into training, that there are repercussions.

pluc•3h ago
I think it's an interesting line of thought. So we all adopt LLMs and use them everywhere we can. What happens to the next generation of humans, born with AI and with diminished cognitive capacity to even wonder about anything? What about the generation after that? What happens to the next generation of AI models, which can't train on original human-created datasets free of AI?
iwontberude•3h ago
They will accept that their orders come from a terminal and they will follow them.
fragmede•2h ago
Manna. https://marshallbrain.com/manna1
gowld•3h ago
arXiv is intended to host research papers, not a blog for researchers.

Letting researchers pollute it with blog-gunk is an abuse of the referral/vetting system for submitters.

otterley•3h ago
And with extra steps!
Insanity•3h ago
Garbage in -> Magic -> Hallucinated Garbage out
icyfox•3h ago
Yes - garbage in / garbage out still holds true for most things when it comes to LLM training.

The two bits about this paper that I think are worth calling out specifically:

- A reasonable amount of post-training can't save you when your pretraining comes from a bad pipeline; i.e., even if the syntax of the pretraining data is legitimate, the model has learned some bad implicit behavior (thought skipping)

- Trying to classify "bad data" is itself a nontrivial problem. Here the heuristic approach of using engagement metrics actually proved more reliable than an LLM classification of the content
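
To make that heuristic concrete, here's a minimal sketch of an engagement-based junk selector in the spirit of the paper's popularity metric; the field names and thresholds are my own illustrative assumptions, not the authors' values:

  def engagement_score(tweet: dict) -> int:
      return tweet.get("likes", 0) + tweet.get("retweets", 0) + tweet.get("replies", 0)

  def is_junk(tweet: dict, min_engagement: int = 500, max_words: int = 30) -> bool:
      """Short + highly popular = junk; everything else is a control candidate."""
      short = len(tweet["text"].split()) < max_words
      popular = engagement_score(tweet) > min_engagement
      return short and popular

  tweets = [
      {"text": "you won't BELIEVE what happened next", "likes": 12_000, "retweets": 3_000},
      {"text": "a long, careful thread walking through compiler internals...", "likes": 40},
  ]
  junk = [t for t in tweets if is_junk(t)]
  control = [t for t in tweets if not is_junk(t)]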

satellite2•1h ago
Yes, but the other interesting bit, which is not clearly addressed, is that increasing the garbage in to 100% does not result in absolute garbage out. So visibly there is still something to learn there.
ashleyn•2h ago
Yes, but the idea of chatgpt slowly devolving into Skibidi Toilet and "6 7" references conjures a rather amusing image.
1121redblackgo•1h ago
6-7 ٩(●•)_
stavros•32m ago
Can someone explain this? I watched a South Park episode that was all about this, but I'm not in the US so I have no idea what the reference is.
CaptainOfCoit•4h ago
> continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs).

TLDR: If your data set is junk, your trained model/weights will probably be junk too.

b0gb•4h ago
AIs need supervision, just like regular people... /s
thelastgallon•3h ago
If most of the content produced by younger generations is about skibidi toilet[1] and 67[2], isn't that what LLMs are going to be trained on?

[1] https://en.wikipedia.org/wiki/Skibidi_Toilet

[2] https://en.wikipedia.org/wiki/6-7_(meme)

micromacrofoot•2h ago
only if the trends last long enough (which they rarely do!), skibidi is already old news according to some kids I know
Isamu•3h ago
Another analogy to help us understand that LLMs are a useful part of what people do but are wildly misconstrued as the whole story
bbstats•3h ago
making a model worse is very easy.
moffkalast•3h ago
Ah yes, something the local LLM fine tuning community figured out how to do in creative ways as soon as llama 1 released. I'm glad it has a name.
killshotroxs•3h ago
If only I got money every time my LLM kept looping answers and telling me stuff I didn't even need. Just recently I was stuck with LLM answers, all while it wouldn't even detect simple syntax errors...
rriley•3h ago
This paper makes me wonder about the long-lasting effects of current media consumption patterns on alpha-gen kids.
AznHisoka•3h ago
why just kids?
rriley•3h ago
I am mostly concerned with the irreversibility part. More developed brains probably would not be affected as much.
jama211•2h ago
Have you opened facebook recently? Seems the older folk are plenty affected to me.
FactolSarin•2h ago
But don't worry, us middle aged people are definitely immune.
rriley•21m ago
Good point :-)
vanderZwan•2h ago
I recently saw an article about the history of Sesame Street claiming that in the late 1960s, American preschool kids watched around twenty-seven hours of television per week on average[0]. And most of that was not age-appropriate (educational TV had yet to be invented). So maybe we should check in on the boomers too if we're sincere about these worries.

[0] https://books.google.se/books?id=KOUCAAAAMBAJ&pg=PA48&vq=ses...

ordu•1h ago
It is an interesting hypothesis. Seriously. There is a trend in Homo sapiens' cultural evolution to treat children in more and more special ways from generation to generation. The (often implicit) idea is that it helps children develop faster and leverage their sensitive and critical periods of development, blah-blah-blah... But while I can point to some research on the importance of sensitive and critical periods of development, I can't remember any research on whether a deficit of age-inappropriate stimuli can be detrimental to development.

There were psychologists who talked about the zone of proximal development[0], about the importance of exposing a learner to tasks they cannot do without support. But I can't remember anything about going further and exposing a learner to tasks far above their head, where they cannot understand a word.

There is a legend about Sofya Kovalevskaya[1], who became a noteworthy mathematician after she was exposed to lecture notes by Ostrogradsky at age 11. The walls of her room were papered with those notes, and she was curious what all those symbols were. It doesn't mean there is a causal link between these two events, but what if there is one?

What about watching a deep analytical TV show at age 9? How does it affect brain development? I think no one has tried to research that. My gut feeling is that it can be motivational: I didn't understand computers when I first met them, but I was really intrigued by them. I learned BASIC and it was like magic incantations. It built a strong motivation to study CS more deeply. But the question is, are there any other effects beyond motivation? I remember looking at a C program in some book and wondering what it all meant. I could understand nothing, but I still spent some time trying to decipher the program. Probably I had other experiences like that which I don't remember now. Can we say with certainty that they had no influence on my development and didn't make things easier for me later?

> So maybe we should check in on the boomers too if we're sincere about these worries.

Probably we should be sincere.

[0] https://en.wikipedia.org/wiki/Zone_of_proximal_development

[1] https://en.wikipedia.org/wiki/Sofya_Kovalevskaya

avazhi•2h ago
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

askafriend•2h ago
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
binary132•2h ago
The brainrot apologists have arrived
askafriend•2h ago
Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

grey-area•37m ago
Because they produce text like this.
avazhi•2h ago
If you can’t understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog of human brainrot arising from habitual non-use of the human brain, then I’m not sure what to tell you.

Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.

nemonemo•1h ago
What you're obsessing over is the writer's style, not the substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their output is of higher quality, as in the game of Go? Wouldn't you rather study their writing?
avazhi•1h ago
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.
moritzwarhier•2h ago
What information is conveyed by this sentence?

Seems like none to me.

uludag•2h ago
Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.

The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.

drusepth•1h ago
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
solarkraft•1h ago
You are absolutely right!
glenstein•27m ago
One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to have been nerfed, but it seems as strong as ever in GPT-5, and it ignores my custom instructions to pare it back.
stavros•43m ago
The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".
grey-area•39m ago
It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.

It doesn’t help writing; it stultifies it and gives everything the same boring, cheery, yet slightly confused tone of voice.

zer00eyz•10m ago
> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.

Are you describing LLM's or social media users?

Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...

standardly•20m ago
That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.
kcatskcolbdi•15m ago
thanks, I hate it.
Jackson__•13m ago
LLM slop is not just bad—it's degrading our natural language.
itsnowandnever•11m ago
why do they always say "not only" or "it isn't just x, but also y and z"? I hated that disingenuous verbosity BEFORE these LLMs came out, and now it's all over the place. I saw a post on LinkedIn that was literally just 10+ statements of "X isn't just Y, it's etc..." and thought I was having a stroke
turtletontine•4m ago
I think this article has already made the rounds here, but I still think about it. I love using em dashes! It really makes me sad that I need to avoid them now to sound human

https://bassi.li/articles/i-miss-using-em-dashes

gaogao•2h ago
Brain rot text seems plausibly harmful, but brain rot videos are often surreal and semantically dense in a way that probably improves performance (as discussed in this German brain rot analysis: https://www.youtube.com/watch?v=-mJENuEN_rs&t=37s). For example, Švankmajer is basically proto-brainrot, but is also the sort of thing you'd watch in a museum and think about.

Basically, I think the brain rot aspect might be a bit of a terminology distraction here, when what they're actually measuring seems to be whether a text is a puff piece or dense.

f_devd•1h ago
I do not think this is the case; there has been some research into brainrot videos for children[0], and it doesn't seem to trend positively. I would argue that anything 'constructed' enough will not sit far out on the brainrot spectrum.

[0]: https://www.forbes.com/sites/traversmark/2024/05/17/why-kids...

gaogao•1h ago
Yeah, I don't think surrealism or constructed is good in the early data mix, but as part of mid or post-training seems generally reasonable. But also, this is one of those cases where anthropomorphizing the model probably doesn't work, since a major negative effect of Cocomelon is kids only wanting to watch Cocomelon, while for large model training, it doesn't have much choice in the training data distribution.
moritzwarhier•1h ago
For this reason, I believe that the current surge in AI use for manipulating people (art is also a form of manipulation, even if unintended) is much more important than the hyped usage of LLMs as technical information processors.

Brain rot created by LLMs is important to worry about, given their design as "people pleasers".

Their anthropomorphization can be scary too, no doubt.

conception•1h ago
This is a potential moat for the big early players, in a pre-atomic-steel sort of way: any future players won't have a non-AI-slop, pre-dead-internet corpus to train new models on.
andai•1h ago
I encourage everyone with even a slight interest in the subject to download a random sample of Common Crawl (the chunks are ~100MB) and see for yourself what is being used for training data.

https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-38/segm...

I spotted a large number of things in there that it would be unwise to repeat here. But I assume the data cleaning process removes such content before pretraining? ;)

Although I have to wonder. I played with some of the base/text Llama models and got very disturbing output from them. So there's not that much cleaning going on.
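
For anyone who wants to skim a chunk programmatically, here's a minimal sketch using the warcio library (assumed installed via pip install warcio; the local filename is a placeholder for a downloaded chunk):

  from warcio.archiveiterator import ArchiveIterator

  # Open one downloaded Common Crawl chunk; the filename is a placeholder.
  with open("CC-MAIN-sample.warc.gz", "rb") as stream:
      for record in ArchiveIterator(stream):
          if record.rec_type != "response":
              continue  # skip request/metadata records
          url = record.rec_headers.get_header("WARC-Target-URI")
          body = record.content_stream().read()
          print(url, body[:200])  # raw page bytes, before any cleaning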

throwaway314155•1h ago
> But I assume the data cleaning process removes such content before pretraining? ;)

I didn't check what you're referring to but yes, the major providers likely have state of the art classifiers for censoring and filtering such content.

And when that doesn't work, they can RLHF the behavior from occurring.

You're trying to make some claim about garbage in/garbage out, but if there's even a tiny moat, it's in the filtering of these datasets and the purchasing of licenses to use other, larger sources of data that (unlike Common Crawl) _aren't_ freely available to competitors and open-source efforts.

commandlinefan•1h ago
My son just sent me an Instagram reel that explained how cats work internally, but it was a joke, showing the "purr center" and the "knocking things off tables" organ. It was presented completely seriously, in a way that any human would realize was just supposed to be funny. My first thought was that some LLM is training on this video right now.
Night_Thastus•1h ago
I'm reminded of this 'repair' video: https://www.youtube.com/watch?v=3e6motL4QMc
chuckreynolds•1h ago
is that why ChatGPT always tells me "6 7 lol"? ;)
jdkee•1h ago
" Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time."

Is this slop?

Profan•1h ago
... it sure reads like slop

and you know what they say: if it walks like slop, quacks like slop, and talks like slop, it's probably slop

earth2mars•1h ago
duh! Isn't that obvious? Is this some students wanting a project with pretty graphs for the writing experience?! I am not trying to be cynical or anything, just questioning the obvious thing here.
antegamisou•1h ago
My Goodness, looks like Computer 'Science' is a complete euphemism now.
nakamoto_damacy•39m ago
Our metaphorical / analogical muscle is too well developed. Maybe there is a drug we can take to reduce how much we lean into it.

If you look at two random patterns of characters and both contain 6s, you could say they are similar (because you're ignoring that the similarity is less than 0.01%). That's what comparing LLMs to brains feels like. Like roller skates to a cruise ship: they both let you get around.

buyucu•35m ago
I don't understand why people have a hard time understanding 'garbage in, garbage out'. If you train your model on junk, then you will have a junk model.
nomel•23m ago
"Brain rot" is just the new term for "slang that old people don't understand".

"Cool" and "for real" are no different than "rizz" and "no cap". You spoke "brain rot" once, and "cringed" when your parents didn't understand. The cycle repeats.

kcatskcolbdi•12m ago
This both has nothing to do with the linked article (beyond the use of brain rot in the title, but I'm certain you must have read the thing you're commenting on, surely) and is simply incorrect.

Brain rot in this context is not a reference to slang.