
Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant

https://fermatslibrary.com/s/your-brain-on-chatgpt-accumulation-of-cognitive-debt-when-using-an-ai-assistant-for-essay-writing-task
55•BerislavLopac•3h ago

Comments

out-of-ideas•2h ago
Is it supposed to be a 500 "oops something went wrong" page, as a comparison for your brain on ChatGPT?
aniketsaurav18•2h ago
I wonder what LLMs will do to us in the long term.
sandspar•2h ago
And future, weirder versions of them.
theodric•2h ago
My guess, based on what's been found about somewhat better cognitive outcomes in aging in people who make an effort to remain fit and stimulated[1], is that we could see slightly worse cognitive outcomes in people that spent their lives steering an LLM to do the "cognitive cardio" rather than putting in the miles themselves.

On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all. Who's to know? The future is hard to see.

[1] I mean there's a hell of a lot of research on the topic, but here's a meta-study of 46 reviews https://www.frontiersin.org/journals/human-neuroscience/arti...

bootsmann•2h ago
> On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all

The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases, though. We rely on calculators because they eclipse us in _any_ calculation, and we rely on books because they store stories permanently. But if I use ChatGPT to write all my easy SQL, I will still have to write the hard SQL by hand, because it cannot do that properly (and if I rely on ChatGPT too much, I will not be able to do that either, because of atrophy in my brain).

theodric•1h ago
We'll definitely need people who can do the hard stuff still!

If we're lucky, the tendency toward random hallucinations will force an upswing in functional skepticism and lots of mental effort spent verifying outputs! If not, then we're probably cooked.

Maybe a ray of light, even coming from a serious skeptic of generative AI: I've been impressed at what someone with little ability to write code or inclination to learn can accomplish with something like Cursor to crank out little tools and widgets to improve their daily life, similar to how we still need skilled machinists even while 3D printing has enabled greater democratization of object production. LLMs: a 3D printer for software. It may not be great, but if it works, whatever.

Terr_•1h ago
> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though.

Yeah, you'd think that a profession that talks about stuff like "NP-Hard" and "unit tests" would be more sensitive to the distinction between (A) the work of providing a result versus (B) the amount of work necessary to verify it.
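A toy sketch of that asymmetry (my own illustration, not from the thread): producing a result is often strictly more expensive than checking it. Sorting costs O(n log n), but verifying that a list is sorted is a single O(n) pass.

```python
def is_sorted(xs):
    """Cheap verification: one linear scan over adjacent pairs."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

data = [5, 3, 8, 1, 9, 2]
result = sorted(data)     # the (relatively) expensive "produce" step
assert is_sorted(result)  # the cheap "verify" step
assert not is_sorted(data)
```

The same shape shows up with LLM output: checking a generated SQL query against a test case is usually far cheaper than writing the query yourself, which is what makes a tool with a nonzero error rate still worth using.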

TeMPOraL•56m ago
Yeah, they realize (B) is almost always much, much lower than (A), which is why ChatGPT is stupidly useful even if it gets 15% of the stuff wrong.
ben_w•20m ago
> The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though

Not sure about books. Between self-help, religion, and New Age, I'd guess quite a lot of books not marked as fiction are making false claims.

Daviey•1h ago
Similar to the effects of the internet. Before the internet, people used to have to research subject matter in the library, or (shock) ask someone knowledgeable, and likely trust their view.

I remember, around 2000, reading a paper that said the internet made people impatient and unwilling to accept delays in getting answers to their questions, and led to poorer retention of knowledge (since they could just re-research it quickly).

Before daily use of computers, my spelling and maths were likely better; now I'm over-dependent on tools.

With LLMs, I'll likely become similarly over-dependent on them for sentence syntax and for completing my thoughts.

The cycle continues...

llamasushi•2h ago
Socrates: "And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so."
uludag•1h ago
Or you could compare LLMs to a technology like social media. At the beginning, concerns about social media were widely disregarded as moral panic, but with time it's become widely acknowledged that this technology does indeed have harms: political disinformation, loneliness, distraction and inability to focus, etc.

Things like ChatGPT have much more in common with social media technologies like Facebook than they do with like writing.

plastic-enjoyer•1h ago
No reason to fear LLM-induced brain atrophy when your chain of thought already gets no further than "Socrates thought writing was bad" whenever LLM usage is criticised.
noio•1h ago
Hah, this is super interesting actually.

Is this comment ridiculing critique of AI by comparing it to critique of writing?

Or.. is it invoking Socrates as an eloquent description of a "brain on ChatGPT".

I guess the former? But I can easily read it as the latter, too.

dumpsterdiver•1h ago
I just thought it was a good example of something written long ago that’s only grown in relevance over time, and with LLMs we can see clearly what he envisioned. The people who don’t want to dig deeper and really wrap their head around a subject can just recite the words without ever having done that.
groestl•1h ago
> You have not discovered a potion for remembering, but for reminding;

Tell me you don't have ADHD without telling me you don't have ADHD (or even knowing what ADHD is, yet) ;)

florg•2h ago
Here's the direct link to the paper: https://arxiv.org/abs/2506.08872
bayindirh•2h ago
This is a duplicate. All duplicates are merged to https://news.ycombinator.com/item?id=44286277
Davidzheng•1h ago
I understand it is an important topic, but we shouldn't have so many threads on the same article. https://news.ycombinator.com/item?id=44286277 https://news.ycombinator.com/item?id=44307257
bayindirh•25m ago
I also submitted it once, but I failed to find the original one. Since it's marked as dupe already, I'm not linking it.
kolinko•56m ago
They gave three groups the task of writing an essay; of course the group that used a tool to write the essay for them will not work out their brains as much.

It’s like saying “someone on a bike will not develop their muscles as well as someone on foot when doing 5km at 5min/km”.

But people on bikes tend to go for higher speeds and longer distances in the same period of time.

Women in Semiconductors: A Critical Workforce Need

https://spectrum.ieee.org/women-in-semiconductors-workforce
1•rbanffy•7s ago•0 comments

Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining

https://github.com/onestardao/WFGY
1•WFGY•11s ago•0 comments

Show HN: Compass Online

https://compassonline.app/
1•artiomyak•32s ago•0 comments

Chatterbox AI: Real-Time Voice Cloning and TTS Generator

https://chatterboxai.net/
1•gregzeng95•2m ago•0 comments

How to tackle OWASP API security risks with minimal resources

https://www.soeren.codes/articles/tackle-owasp-api-with-limited-resources
1•CER10TY•7m ago•0 comments

Van Gogh, AMD's Steam Deck APU

https://chipsandcheese.com/p/van-gogh-amds-steam-deck-apu
2•thomasjb•16m ago•0 comments

All Roads Lead to DSLRs

https://vpetersson.com/2025/06/18/all-roads-lead-to-dslrs.html
4•mvip•17m ago•0 comments

A New Obesity Pill May Burn Fat Without Suppressing Appetite

https://www.wired.com/story/new-obesity-pill-may-burn-fat-without-suppressing-appetite/
1•pseudolus•18m ago•1 comments

When AIs bargain, a less advanced agent could cost you

https://www.technologyreview.com/2025/06/17/1118910/ai-price-negotiation/
1•pseudolus•19m ago•0 comments

TROPIC01 Secure Element – Transparent, auditable secure element

https://tropicsquare.com/tropic01
1•karel-3d•20m ago•0 comments

Reinforcement Learning Algorithms Summarized

https://lossfunk.substack.com/p/reinforcement-learning-algorithms
1•paraschopra•21m ago•0 comments

Google May Charge a Fee to Provide Source Code for Android Binaries

https://source.android.com/opensourcerequest
5•jamesy0ung•25m ago•0 comments

Resources to Self-Study Communication Systems

https://www.study-from-here.com/2025/06/resources-to-self-study-communication.html
2•BhattMayurJ•26m ago•0 comments

The Reality Check Nobody Talks About: What OSS Costs

https://www.seuros.com/blog/the-reality-check-nobody-talks-about-what-oss-actually-costs/
2•seuros•26m ago•0 comments

Welcoming Payload to the Figma Team

https://www.figma.com/blog/payload-joins-figma/
1•pentagrama•27m ago•0 comments

Wheelgames

https://www.wheelgames.net/
1•tiantiankaixin•27m ago•0 comments

On the methods of Theoretical Physics – Albert Einstein 1933 [pdf]

https://www.informationphilosopher.com/solutions/scientists/einstein/Method_of_Theoretical_Physics.pdf
1•nill0•29m ago•0 comments

Picking the Perfect Pet – Mathematical modelling [pdf]

https://www.immchallenge.org/Contests/2024/papers/2024020.pdf
1•nill0•37m ago•0 comments

Iran is going offline to prevent purported Israeli cyberattacks

https://www.theverge.com/politics/688875/iran-cutting-off-internet-israel-war
6•benkan•41m ago•0 comments

Field Notes went from side project to cult notebook

https://www.fastcompany.com/91352848/field-notes-cult-notebook-started-out-as-a-side-project
3•benkan•41m ago•0 comments

Social media now main source of news in US, research suggests

https://www.bbc.com/news/articles/c93lzyxkklpo
1•benkan•42m ago•0 comments

Show HN: Toolflow – Fixing AI Tool Calling Context Bloat

https://github.com/dksingh1997/Toolflow
3•Dheerajiitr•46m ago•0 comments

Fifty Years Ago Today, President Nixon Declared the War on Drugs

https://www.vera.org/news/fifty-years-ago-today-president-nixon-declared-the-war-on-drugs
3•helsinkiandrew•46m ago•0 comments

Before LibreOffice there was OpenOffice, and before there was StarOffice

https://blog.documentfoundation.org/blog/2025/06/18/before-libreoffice-there-was-openoffice-and-before-openoffice-there-was-staroffice/
2•mariuz•48m ago•0 comments

Is Google about to destroy the web?

https://www.bbc.com/future/article/20250611-ai-mode-is-google-about-to-change-the-internet-forever
1•oneeyedpigeon•48m ago•0 comments

Sam Altman Says Meta Offered OpenAI Staffers $100M Bonuses

https://www.bloomberg.com/news/articles/2025-06-17/altman-says-meta-offered-openai-staffers-100-million-bonuses
20•EvgeniyZh•49m ago•22 comments

Is there any way to search all files and folders in tofi

https://github.com/philj56/tofi
1•snaque_•52m ago•0 comments

Show HN: Luna Rail – treating night trains as a spatial optimization problem

https://luna-rail.com/en/home-2
2•ant6n•52m ago•1 comments

Chromium Switching from Ninja to Siso

https://groups.google.com/a/chromium.org/g/chromium-dev/c/v-WOvWUtOpg
1•hortense•53m ago•0 comments

Show HN: The Box – Anti-deepfake wearable [Satire]

https://wearthebox.com
6•louisbarclay•56m ago•0 comments