
Towards Agentic OS: An LLM Agent Framework for Linux Schedulers

https://arxiv.org/abs/2509.01245
1•Hard_Space•2m ago•0 comments

Booting NetBSD from a wedge, the hard way

https://bentsukun.ch/posts/netbsd-wedge-boot/
1•speckx•3m ago•0 comments

New Gabi/ELF Spec Available for Public Review

https://groups.google.com/g/generic-abi/c/doY6WIIPqhU/?pli=1
1•rascul•6m ago•0 comments

Pypistats.org is now operated by the Python Software Foundation

https://pyfound.blogspot.com/2025/08/pypistats-org-is-now-operated-by-the-psf.html
2•rbanffy•6m ago•0 comments

Python: Fix SyntaxWarning: 'return' in a 'finally' block – Adam Johnson

https://adamj.eu/tech/2025/08/29/python-fix-syntaxwarning-finally/
2•rbanffy•7m ago•0 comments

Delta Lake: Transform Pandas Prototypes into Production – CodeCut

https://codecut.ai/from-pandas-to-production-delta-rs/
1•rbanffy•7m ago•0 comments

Show HN: Writing Arabic in English

https://sherifelmetwally.com/writing/writing-arabic-in-english
1•selmetwa•12m ago•0 comments

Large interview with E.T. designer Howard Scott Warshaw

https://spillhistorie.no/2025/09/03/interview-with-howard-scott-warshaw/
2•Kolorabi•13m ago•1 comment

EU AI Act Compliance Checker

https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
1•caminanteblanco•14m ago•1 comment

Airbus B612 Cockpit Font

https://github.com/polarsys/b612
3•Bogdanp•15m ago•0 comments

Today, I learned that eels are fish

https://eocampaign1.com/web-version?p=495827fa-8295-11f0-8687-8f5da38390bd&pt=campaign&t=17562270...
2•speckx•15m ago•0 comments

Insta360's Antigravity A1 drone promises immersive 8K 360º video

https://www.dpreview.com/news/8434677038/insta360-antigravity-a1-drone-announcement
1•PaulHoule•15m ago•0 comments

Agentgateway – Next Generation Agentic Proxy for AI Agents and MCP Servers

https://github.com/agentgateway/agentgateway
1•microflash•16m ago•0 comments

Show HN: Customize your keyboard shortcuts in Chrome with a Chrome extension

https://taupiqueur.github.io/chrome-shortcuts/
1•taupiqueur•17m ago•0 comments

Tech talent biz Andela trains up devs in GitHub Copilot

https://www.theregister.com/2025/09/03/andela_github_copilot_training/
1•rntn•18m ago•0 comments

Ghost – AI agent for beautiful presentations

https://useghost.io/
1•eustoria•18m ago•0 comments

Exxon and California Spar in Dueling Lawsuits over Plastics

https://www.nytimes.com/2025/09/01/climate/exxon-california-plastics-defamation-lawsuit.html
1•mitchbob•19m ago•1 comment

Walikancrypt

https://github.com/altilunium/walikancrypt
1•altilunium•22m ago•1 comment

Why does the Chart Increasing emoji show in red?

https://blog.emojipedia.org/why-does-the-chart-increasing-emoji-show-in-red/
1•isagues•22m ago•0 comments

The Honesty Tax

https://www.theargumentmag.com/p/the-honesty-tax
1•amadeuspagel•22m ago•0 comments

How Jet Lag Cost the Global Face of Japan Inc. His Job

https://www.wsj.com/world/asia/how-jet-lag-cost-the-global-face-of-japan-inc-his-job-5672d7a9
1•impish9208•22m ago•1 comment

Hidden Gems in Iceland

https://charlieswanderings.com/iceland/hidden-gems-in-iceland/
1•novateg•23m ago•0 comments

Vibe Coding Failures Prove AI Can't Replace Developers Yet

https://www.finalroundai.com/blog/vibe-coding-failures-that-prove-ai-cant-replace-developers-yet
2•sarathyweb•23m ago•0 comments

Developers lose focus 1,200 times a day – how MCP could change that

https://venturebeat.com/ai/developers-lose-focus-1200-times-a-day-how-mcp-could-change-that
1•rootlyhq•23m ago•0 comments

My review of Amazon's Shareholder letters

https://nandinfinitum.com/posts/amazon-shareholder-letters/
1•nanfinitum•25m ago•0 comments

Raymarching Explained Interactively

https://imadr.me/raymarching-explained-interactively/
1•ibobev•29m ago•0 comments

Building the most accurate DIY CNC lathe in the world [video]

https://www.youtube.com/watch?v=vEr2CJruwEM
3•pillars•30m ago•0 comments

TorkilsTaskSwitcher, a replacement to Windows' Alt-Tab invoked task switcher

https://oelgaard.dk/torkils/?TorkilsTaskSwitcher
1•speckx•30m ago•0 comments

Cross-Platform Window in C

https://imadr.me/cross-platform-window-in-c/
4•ibobev•31m ago•0 comments

Rotations with Quaternions

https://imadr.me/rotations-with-quaternions/
2•ibobev•31m ago•0 comments

MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline

https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/
193•cainxinth•2h ago

Comments

puilp0502•1h ago
Isn't this a duplicate of https://news.ycombinator.com/item?id=44286277 ?
jennyholzer•1h ago
There are dozens of duplicates for pro-AI dreck, so this post should stand.
fortyseven•1h ago
Being anti-AI drivel is completely fine though.
ayhanfuat•1h ago
We can at least change the link to the actual paper instead of a vaccine denier's AI-generated summary.
chychiu•1h ago
Was going to comment the same but you beat me to it!

On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far

abirch•1h ago
I think this is based on this paper: https://www.media.mit.edu/publications/your-brain-on-chatgpt...
badbart14•1h ago
I remember this paper when it came out a couple months ago. Makes a lot of sense: the use of tools like ChatGPT essentially offshores the thinking processes in your brain. I really like the analogy to time under tension they talk about in https://www.theringer.com/podcasts/plain-english-with-derek-... (they also discuss this study and some of its flaws/results)
ath3nd•1h ago
That explains a lot of Hacker News lately. /s

Like everything else in our life, cognition is "use it or lose it". Outsourcing your decision making and critical thinking to a fancy autocomplete with sycophantic tendencies and no capacity for reasoning sure is fun, but as the study found, it has its downsides.

kibwen•1h ago
To be fair, a lot of commenters on HN were demonstrably suffering the effects of cognitive decline for years before LLMs.
AnimalMuppet•40m ago
Not totally sure that's /s.

Over the last three years or so, I have seen more and more posts where the position just doesn't make sense. I mean, ten years ago, there were posts on HN that I disagreed with that I upvoted anyway, because they made me think. That has become much more rare. An increasing number of posts now are just... weird (I don't know a better word for it). Not thoughtful, not interesting (even if wrong), just weird.

I can't prove that any of them are AI-generated. But I suspect that at least some of them are.

teekert•1h ago
Anybody who has tried to shortcut themselves into a report on something using an LLM, and was then asked to defend the plans contained within it, knows that writing is thinking. And if you outsource the writing, you do less thinking; with less thinking there is less understanding. Your mental model is less complete, less comprehensive.

I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".

Try solving bugs in your vibe-coded projects... It's painful: you haven't learned anything while building the thing. And as a result you don't fully grasp how your creation works.

LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯

This is pretty obvious to me after using LLMs for various tasks over the past years.

jennyholzer•1h ago
This dynamic is frustrating on the individual level, but it is poisonous on the organizational level.

I am offended by coworkers who submit incompletely considered, visibly LLM generated code.

These coworkers are dragging my team down.

teekert•1h ago
I'm sure they are, but maybe they just need some guidance. I was fortunate to learn this by myself, but when you just start out, it feels like magic. Only later do you realize you have also sacrificed something.
warmedcookie•1h ago
On the bright side, if you are forced to write AI code, at least reviewing PRs of AI-generated slop gives your brain some exercise, albeit a frustrating one.
gkilmain•1h ago
I find this acceptable if your coworkers are checked out and looking for that next big thing
feverzsj•1h ago
"@gork Is this true?"
jennyholzer•1h ago
> In post-task interviews:

> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.

> In contrast, 88.9% of Search and Brain-only users could quote accurately.

> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.

Reminds me of my coworkers who have literally no idea what ChatGPT put into their PR from last week.

aurareturn•1h ago
Maybe we should question the value of essays in the ChatGPT world?

Could a person, armed with ChatGPT, come up with a better solution to a real-world problem than without ChatGPT? Maybe that's what actually matters.

kibwen•1h ago
The point of writing essays is not to produce an essay, it's to demonstrate that you understand something well enough to engage with it critically, in addition to being an exercise for critical thinking itself.
Ekaros•1h ago
Can they evaluate whether the idea they came up with is better if they do not remember how it was stated? Isn't the point of writing actually to set thoughts down in a communicable manner, and then possibly have them verified by others?

But how can they discuss any content if even the "writer" does not remember what they wrote?

abirch•1h ago
College was transformed from the apprentice-style institution of the 1500s to the mass-produced thing of the early 2000s (where a professor can "teach" 500 students in a class).

I think we'll see a return to the apprentice style of institution, where people try to create the best real-world solutions possible with LLMs, 3D printers, etc., and use recorded college courses like our grandparents used books.

quotemstr•1h ago
"Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

Given that AI is literally just words on a monitor, just like the rest of the internet, I have a strong prior that it's not "reprogram[ming]" anyone's mind, at least not in any manner that, e.g., heavy Reddit use might not.

flanked-evergl•1h ago
VS Code Copilot has reprogrammed my mind to the point where not using it is just not worth it. It actually seldom helps me do difficult things; it often helps me do incredibly mundane things, and if I had to go back to doing those incredibly mundane things by hand I would rather become a gardener.
stego-tech•1h ago
That’s a pretty spicy take for first thing in the morning. The confidence with which you assert an argument that has repeatedly been proven facile is… unenviable. “Fractal wrongness,” I’ve seen it called.

We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - proving that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It’s why line workers at McDonalds don’t actually learn how to properly cook food (it’s all been procedured-out and automated where possible to eliminate the need for critical thinking skills, thus lowering the quality of labor needed to function), and it’s why - at present - we’re effectively training a cohort of humans who lack critical thinking and reasoning skills because “that’s what the AI is for”.

This is something I’ve known about since long before the current LLM craze, and it’s why I’ve always been wary of, or hostile to, “aggressively helpful” tools like some implementations of autocorrect, or some driving aids: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.

Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.

quotemstr•48m ago
Learning and habit formation are not "reprogramming". If you define "reprogramming" as anything that updates neuron weights, the term encompasses all of life and becomes useless.

My point is that I don't see LLMs' effect on the brain as being anything more than the normal experience of living, and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.

Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.

The process has been ongoing since homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?

AnimalMuppet•35m ago
The "so what" is that the skill in question is thinking critically. Letting that atrophy is a rather bigger deal than our paper-map-reading skills atrophying.

Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.

ath3nd•1h ago
If it wasn't for studies like this, you'd still think arsenic is a great way to produce a vibrant green color to paint your house the color of nature!

Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.

And the study actually proved that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; those two things are not exclusive. If anything, their effects compound on an already pretty dumb and gullible population. So if your argument is "but what about reddit", that's a non-argument called "whataboutism". Look it up, and hopefully it might give you a hint as to why you are getting downvoted.

There have been three recent studies showing that:

- 1. 95% of LLM projects fail in the enterprise https://fortune.com/2025/08/18/mit-report-95-percent-generat...

- 2. Experienced developers get 19% less productive when using an LLM https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

- 3. LLM usage makes you dumber https://publichealthpolicyjournal.com/mit-study-finds-artifi...

We have reached a stage where people on the internet mistake their opinion on a subject for something as relevant as a study on that subject.

If you don't have another study, or haven't done the science to disprove this study, how come you so easily dismiss a study that actually took time, data, and the scientific method to reach a conclusion? I feel we gotta actively and firmly call out that kind of behavior and ridicule it.

AnimalMuppet•32m ago
> "Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.

And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.

quotemstr•16m ago
> And, why the scare quotes?

N=54, students and academics only (mostly undergrad), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.

You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.

mensetmanusman•1h ago
Depends how you use it:

https://youtu.be/omYP8IUXQTs?si=SgehtLWjnNho5MR6

DrNosferatu•1h ago
If you blindly trust it instead of using it as an iterative tool, I guess…

But didn’t pocket calculators present the same risk / panic?

jennyholzer•1h ago
When I enter 5 x 5 on a pocket calculator, I always get 25
bell-cot•1h ago
The cognitive decline described here sounds far broader than just getting rusty at arithmetic.
diddid•1h ago
Graphing calculators did, which is why they got banned in a lot of math classes. If your calculator can solve for x, you won’t spend time learning how to. The best math classes usually do without calculators, focusing on concepts, and skip numbers you’d need a calculator for.
boesboes•1h ago
This. I was allowed to use the graphing mode to do integrals and derivatives. It made high school easy, but in uni it turned out I had zero math skills. Had to switch studies.
wiredfool•1h ago
There’s a narrow band of math that’s amenable to pocket calculators. When used in that band, they can repeatably return the correct answer.
pjio•1h ago
First step out of this mess: use AI only to proofread or get a second opinion, not to write the whole thing.
bookofjoe•1h ago
That ship has sailed.

>Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.

https://archive.ph/ZKZiY

bgwalter•1h ago
Not in China:

https://nypost.com/2025/08/19/world-news/china-restricts-ai-...

"That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.

It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."

IAmBroom•55m ago
Then there's this new law in China, which sounds amazing - informing, not censoring.

https://www.scmp.com/tech/policy/article/3323959/chinas-soci...

jajko•58m ago
It's as if somebody finds it shocking that people are generally lazy. Then you have the other extreme group, the deniers: "I work more than ever!", "I ask even more questions!" and so on, here and elsewhere.

Sure you do, and maybe it's really an actual benefit for ya. Not for most, though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care (less quality competition at work), but I do (they're too young to be affected by it now, and by the time they're allowed to use these, frameworks and restrictions for use will already be in place).

But since maybe 30% of folks here are directly or indirectly dependent on LLMs being pushed down every possible throat, and then some, I expect much more denial and resistance to critique of their little pets or investments.

charlie-83•47m ago
It feels like all this is because the point of school/college/university is just to get a piece of paper rather than to learn skills. Why wouldn't you get ChatGPT to write your essay when your only goal is to get a passing grade?

My optimistic take is that the rise of AI in education could cause more workplaces to move away from "must have xyz degree" and actually determine if the candidate has the skills needed.

jbstack•20m ago
I agree with this in principle, but the problem is what happens to the in-between generation that cheats its way to the piece of paper before the world moves on to a better way. At least previous generations got the piece of paper and acquired some skills/knowledge.

For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).

sudosteph•41m ago
I'm one of the people who find LLMs extremely helpful from a learning perspective, but to be perfectly honest, I've met the children of complete "luddites" (no tablets, internet at home on a timer for school work, no phones allowed until 16, home schooled, house filled with a million books) and they honestly were some of the more intelligent, well-read, and thoughtful young people I've met.

LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.

AnimalMuppet•52m ago
Depends on who you are and what you want.

Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.

Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.

If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...

Bluecobra•22m ago
This is exactly how I use AI at work: to quickly generate funny meme images/inside jokes for a quick chuckle. I’m no artist and probably will never be one. My digital art skills amount to drawing stick figures in MS Paint.
mansilladev•1h ago
“…our cognitive abilities and creative capacities appear poised to take a nosedive into oblivion.”

Don’t sugarcoat it. Tell us how you really feel.

jennyholzer•1h ago
I think developers who use "AI" coding assistants are putting their careers at risk.
dguest•1h ago
And here I'm wondering if I'm putting my career at risk by not trying them out.

Probably both are true: you should try them out and then use them where they are useful, not for everything.

Taek•53m ago
HN is full of people who say LLMs aren't good at coding and don't "really" produce productivity gains.

None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.

If you think LLM coding is a fad, your head is in the sand.

bgwalter•43m ago
The instigators say they were correct and fired the political opponents. Unheard of!

I have no doubt that volumes of code are being generated and LGTM'd.

dguest•40m ago
Right now I'm mostly an "admin" coder: I look at merge requests and tell people how to fix stuff. I point them to LLMs a lot too. People I know who are actually writing a lot of code are usually saying LLMs are nice.
mooxie•23m ago
Agreed. I work for a tiny startup where I wear multiple hats, and one of them is DevOps. I manage our cloud infra with Terraform, and anyone who's scaled cloud infrastructure from a <10 head-count company to a successful 500+ company knows how critical it can be to get a handle on the infrastructure early. It's basically now or never.

It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.

Now I use Claude Code daily. I use an .md to describe what I want in as much detail as possible and with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
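(For a sense of what those spec files look like, here is a hypothetical sketch; every name and version in it is invented for illustration:)

    # staging-vpc.md (hypothetical example)
    ## Goal
    Stand up a staging VPC mirroring prod, with smaller instance sizes.
    ## Constraints
    - Terraform >= 1.6, AWS provider ~> 5.x
    - Reuse the existing remote state backend; do not touch prod workspaces
    - Tag everything with the same team/env/cost-center scheme as prod
    ## Known gotchas
    - Our NAT setup is non-standard; read the network module README first

The more of those caveats you front-load, the less cleanup there is afterwards.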

I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.

I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.

flanked-evergl•1h ago
The future is increased productivity. If someone can outproduce you by using AI, then they will take your job.
boesboes•1h ago
After working with claude code for a few months, I am not worried.
falcor84•53m ago
What does that mean? If you're still paying for Claude Code, you are supposedly getting increased productivity, right? Or otherwise, why are you still using it?
lexandstuff•29m ago
I find it useful. A nice little tool in the toolkit: it saves a bunch of typing, helps to overcome inertia, and helps me find things in unfamiliar parts of the codebase, amongst other things.

But for it to be useful, you have to already know what you're doing. You need to tell it where to look, and review what it does carefully. Also, sometimes I find particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised the hard parts of the codebase can I effectively guide CC. Plus there are so many other things in my day-to-day where next-token predictors are just not useful.

In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate-and-above level of complexity is still laughable.

tmcb•50m ago
This is industrial-grade FOMO. They will take the jobs of the first handful of people. The moment it is obvious that LLMs are a productivity booster, people will learn how to use them, just as happened with every other technology before.
baq•1h ago
fighter jet pilots who use the ejection seat are putting their careers at risk, but so are the ones who don't use it when they should.
bookofjoe•1h ago
>F-35 pilot held 50-minute airborne conference call with engineers before fighter jet crashed in Alaska

https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...

010101010101•1h ago
Developers who don’t understand how the most basic aspects of systems they work on function are a dime a dozen already, I’m not sure LLMs change the scale of that problem.
unethical_ban•57m ago
Were accountants that adopted Excel foolish?

As with any new tool that automates a human process, humans must still learn the manual process to understand the skill.

Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.

falcor84•54m ago
I would say that the careers of everyone who views themselves as writing code for a living are already at great risk. So if you're in that situation, you have to see how to go up (or down) the ladder of abstraction, and getting comfortable with using GenAI is possibly a good way to do that.
micromacrofoot•49m ago
everyone's also telling us that if we don't use AI we're putting our careers at risk, and that AI will eventually take our jobs

personally I think everyone should shut up

tomrod•1h ago
A few things to note.

1. This is arxiv - before publication or peer review. Grain of salt.[0]

2. 18 participants per cohort

3. 54 participants total

Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.

Further, they are brain scanning during the experiment, which is an uncomfortable, out-of-the-norm experience, and the object of the study is easy to infer, if not directly known, by the participants (each person being studied is using an LLM, search tools, or no tools).

> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.

[0] https://arxiv.org/pdf/2506.08872
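As a rough illustration of the sample-size concern, here is a back-of-the-envelope power check. It is a sketch only: it assumes a plain two-sample t-test and a hypothetical "medium" effect (Cohen's d = 0.5), neither of which comes from the paper, whose EEG analyses are far more involved.

    # Rough power check for n = 18 per cohort, assuming a simple
    # two-sample t-test and a hypothetical medium effect (d = 0.5).
    from statsmodels.stats.power import TTestIndPower

    tt = TTestIndPower()
    # Probability of detecting d = 0.5 with 18 subjects per group: ~0.31
    print(tt.power(effect_size=0.5, nobs1=18, alpha=0.05, ratio=1.0))
    # Subjects per group needed for 80% power: ~64
    print(tt.solve_power(effect_size=0.5, power=0.8, alpha=0.05, ratio=1.0))

Under those assumptions, even a genuine medium-sized effect would be missed roughly two times out of three.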

i_am_proteus•1h ago
>These 54 participants were between the ages of 18 to 39 years old (age M = 22.9, SD = 1.69) and all recruited from the following 5 universities in greater Boston area: MIT (14F, 5M), Wellesley (18F), Harvard (1N/A, 7M, 2 Non-Binary), Tufts (5M), and Northeastern (2M) (Figure 3). 35 participants reported pursuing undergraduate studies and 14 postgraduate studies. 6 participants either finished their studies with MSc or PhD degrees, and were currently working at the universities as post-docs (2), research scientists (2), software engineers (2)

I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.

tomrod•1h ago
> I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation, rather than a reason to expect an "uphill battle" for replication and so forth.

Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of limited sample size/composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.

My prior puts this on an uphill battle.

IshKebab•1h ago
Yeah my bullshit detector is going off even more than when I use ChatGPT...

4. This is clickbait research, so it's automatically less likely to be true.

5. They are touting obvious things as if they are surprising, like the fact that you're less likely to remember an essay that you got something else to write, or that the ChatGPT essays were verbose and superficial.

mnky9800n•1h ago
I feel like we should stop saying that papers should be taken with a grain of salt just because they haven't been peer reviewed. Peer review is not some idealistic scientific endeavour: it often leads to bullshit comments, slows down release, is free work for companies that have massive profit margins, etc. From my experience publishing 30+ papers, I have received as many bad or useless comments as I have good ones. We should at least default to open peer review and editorial communication.

Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.

tomrod•59m ago
> I feel like we should stop saying that papers should be taken with a grain of salt just because they haven't been peer reviewed.

Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value in ensuring good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything, and it's helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers and editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.

> Science should become a marketplace of ideas.

This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.

That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.

[0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript

chaps•45m ago
Please no. Remember that room-temperature superconductor nonsense that went on for way too long? Let's please collectively try to avoid that.
physarum_salad•30m ago
That paper was debunked as a result of the open peer review enabled by preprints! It's astonishing how many people miss that and assume that closed peer review even performs that function well in the first place. For the absolute top journals, or those with really motivated editors, closed peer review is good. However, often it's worse... way worse (i.e. reams of correct-seeming, surface-level research without proper methods or review of protocols).

The only advantage to closed peer review is it saves slight scientific embarrassment. However, this is a natural part of taking risks ofc and risky science is great.

P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.

chaps•23m ago
To be clear, I'm not saying that peer review is bad!! Quite the opposite.
physarum_salad•21m ago
Yes ofc! I guess the major distinction is closed versus open peer review. Having observed some abuses of the former I am inclined to the latter. Although if editors are good maybe it's not such a big difference. The superconducting stuff was more of a saga rather than a reasonable process of peer review too haha.
ajmurmann•13m ago
To your point, the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it. Peer reviews don't reproduce. I think we'd be better off with fewer peer reviews and more time spent actually reproducing results. That's why we had a whole crisis named after that.
mwigdahl•29m ago
And cold fusion. A friend's father (a chemistry professor) back in the early 90s wasted a bunch of time trying variants on Pons and Fleischmann looking to unlock tabletop fusion.
stonemetal12•39m ago
Rather, given the reproducibility crisis, how much salt does peer review knock off that grain? How often does peer review catch fraud or just bad science?
Bender•21m ago
I would also add: how often are peer reviews done by the same group of buddy-bro back-scratchers who know that if they help someone with a positive peer review, that person will return the favor? How many peer reviewers actually reproduce the results? How many peer reviewers would approve a paper if their credentials were on the line?

Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil-whipping, back-scratching, buddy-bro behavior. Some believe falsified papers and pencil-whipped reviews are in the 1% range. I expect it to be significantly higher, based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans, and sometimes papers are taken down, but there are so many bad incentives in this process that I predict it will only get worse.

perrygeo•23m ago
There's two questions at play. First, does the research pass the most rigorous criteria to become widely-accepted scientific fact? Second, does the research present enough evidence to tip your priors and change your personal decisions?

So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.

giancarlostoro•44m ago
The other thing to note is that "AI" is being used in place of LLMs. AI is a lot of things; I would be surprised to find out that generating images, video, and audio leads to cognitive decline. What I think LLMs might lead to is intellectual laziness: why memorize or remember something if the LLM can remember it for you?
mym1990•20m ago
I would argue that intellectual laziness can and will lead to cognitive decline, much as physical laziness can and will lead to muscle atrophy. It’s akin to using a maps app to get from point A to point B but never remembering the route, even after someone has done it 100 times.

I don’t know the percentage of people who are still thinking critically while using AI tools, but I can see first-hand many students just copy-pasting content into their school work.

somenameforme•43m ago
In general I agree with you regarding the weakness of the paper, but not the skepticism towards its outcome.

Our bodies naturally adjust to what we do. Do things, and your body reinforces them, enabling you to do even more advanced versions of those things. Don't do things, and your skill or muscle tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so.

I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.

tomrod•27m ago
Ever read the books in the Bobiverse? They provide a pretty functional cognitive model for how humans will probably interface with tooling like AI (even though it is fiction): lower-level actions are pushed into autonomous regions until a certain deviancy threshold is reached. Much like breathing -- you don't typically think about breathing until it becomes a problem (choking, underwater, etc.), and then it very much hits the high level of the brain.

What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.

I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality versus quantity balance definitely needs consideration (which I think they are actually capturing vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the latter.

dahart•30m ago
This comment reminds me of the so-called Dunning-Kruger effect. That paper had pretty close to the same sample size, and participants were pulled from a single school (Cornell). It also has major methodology problems, and has had an uphill battle for replication and generalizability, actually losing the battle in some cases. And yet we have a famous term for it that people love to use, often and incorrectly, even when you take the paper at face value!

The problem is that a headline people want to believe is a very powerful force, one that can override replication and sample size and methodology problems. “AI rots your brain” follows behind “social media rots your brain”, which came after “video games rot your brain”, which was preceded by “TV rots your brain”. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.

boringg•25m ago
I mean, there are clear problems with heavy exposure to TV and video games, and I have no doubt that there are similar problems with heavy AI use. Any adult with children can clearly see the addictive qualities and the behavioral fallout.
hamburga•25m ago
Socrates famously complained, in the Phaedrus, about literacy making us stupider.

Which I believe still does have a large grain of truth.

These things can make us simultaneously dumber and smarter, depending on usage.

tomrod•22m ago
> The problem is that a headline people want to believe is a very powerful force, one that can override replication and sample size and methodology problems. “AI rots your brain” follows behind “social media rots your brain”, which came after “video games rot your brain”, which was preceded by “TV rots your brain”. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.

You reminded me of this (possibly spurious) quote:

>> An Assyrian clay tablet dating to around 2800 B.C. bears the inscription: “Our Earth is degenerate in these later days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching.”[0]

Same as it ever was. [1]

[0] https://quoteinvestigator.com/2012/10/22/world-end/

[1] https://www.youtube.com/watch?v=5IsSpAOD6K8

hopelite•24m ago
Pretty ironic in many ways, since this would have been an undergrad-class-level project that would never even have been in the same room as a publisher maybe even 30 years ago, before science became a proto-religious conformity social club.
kelsey98765431•1h ago
Misleading title: the article explicitly says this is about AI when it's used to cheat on essays.
bgwalter•1h ago
I tried to see what the hype is about and translated one build system to another using "AI". The result was wrong, bloated and did not work. I then used smaller steps like the prompt geniuses recommend. It was exhausting, still riddled with errors, like a poor version of copy & paste.

Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.

eviks•1h ago
No, vibe science is not so powerful as to be able to determine "long-term cognitive harm", especially when such "technical wonders" as "measurable through EEG brain scans" are used.

> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written

Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it, so there is no surprise that they don't remember what never properly passed through their own thinking apparatus.

matwood•1h ago
I write all the time and couldn't quote anything offhand. What I can talk about are the ideas in the writing. I find LLMs useful as an editor: here's what I want to say; is it clear, are there better words, etc.? And then I never take the output blindly, and depending on how important the writing is I may go back and forth line by line.

The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.

Mistletoe•1h ago
The future for humans worries me a lot. What evolutionary pressures will exist to keep us intelligent? We are already seeing IQ drop alarmingly across the world. Now AI comes in from the top rope with the steel chair?

https://www.ncbi.nlm.nih.gov/search/research-news/3283/

nerpderp82•1h ago
Why does it matter? Some will become Eloi and some Trogs.
Mistletoe•1h ago
If you are referencing The Time Machine, I remember reading a neat comic book version of it when I was a kid. Sometimes I feel we are quite close to having Eloi and Morlocks evolving already.

>the gentle, childlike Eloi and the subterranean, predatory Morlocks.

Seems like a nice metaphor for the current two political parties we are provided with.

latexr•38m ago
> a neat comic book version

Wikipedia lists several. Do you recall which you read?

https://en.wikipedia.org/wiki/The_Time_Machine#Comics

latexr•39m ago
> Why does it matter?

Because the people around you affect your life. Presumably you don’t want to live in a world of stupid people who are incapable of critical thought, or of doing anything that isn't a direct instruction from a machine. Think about it every time you are frustrated by your interaction with a system you have no choice but to use, such as a bank or a government branch.

John Green has a quote which I think fits, even if it’s about paying taxes for public education rather than LLM use: https://www.goodreads.com/quotes/1390885-public-education-do...

footy•1h ago
there's going to be an avalanche of dementia for the generations that outsource all their thinking to LLMs
johnisgood•1h ago
IMO that is a misuse of LLMs. You are not supposed to outsource your thinking. You need to be part of the whole process, including the architectural design. I am, and I have gotten far with LLMs (Claude mostly, not much with GPT). I use GPT for personal stuff or ramblings, not for coding.

There will always be people who misuse something, but we should not hurt those who do not. Same with drugs: there are functional junkies who know when to stop, go on a tolerance break, take just enough of a dose and so forth, vs. the irresponsible ones. The situation is quite similar, and I do not want AI to be "banned" (assuming it could be) because of people who misuse LLMs.

People, let us have nice things.

As for the article... did they not say the same thing about search engines and Wikipedia? Do you remember how making a cheat sheet actually helps us learn (by writing down the things you want to cheat with)? Problem is, people do not even bother reading the output of the LLM, and that is on them.

footy•1h ago
sure, we may call that misuse. But there are already people using them this way, and they're marketed this way, and I was not making a point about the correctness of using them this way -- just observing that this is going to have far-reaching consequences.
johnisgood•48m ago
I know, and it is a huge problem that people use it this way, and that it is marketed this way.
jajko•49m ago
Misuse or not, who cares about labeling.

The Internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a fairytale for children. Atomic energy was supposed to free mankind from the everlasting struggle of energy dependency, end wars, and whatnot. LLMs were supposed to be X and not Y, and used as Z and not BBCCD.

For what the population loses overall compared to what's gained (really, what? a mildly increased efficiency sometimes experienced at the individual level, sometimes made up for PR), I consider these LLMs a net loss for all mankind.

The above should tell you something about human nature, and how naive some of the brightest of us are.

johnisgood•47m ago
It works for me, so I would rather not have it taken away from me. Take it away from people who misuse it.

If it is a human-nature issue (with which I agree), then we are in deep shit, and this is why we cannot have nice things.

Educate, and if that fails, then punish those who "misuse" it. I do not have a better idea. It works quite well for me for coding, and it will continue to work as long as it does not get nerfed.

jajko•37m ago
Nobody is taking it away from you, but as we seem to agree, that ship has sailed for some deep waters; nobody is backpedaling now.

Well, cheers to an even bigger gap between the elite, who can afford a good education and upbringing, and the cheap, crappy rest. A number of sci-fi novels come to mind in which poor, semi-mindless masses are governed by 'educated' elites. I always wondered how such a society must have screwed up badly in the past to end up like that. Nope: the road to hell is indeed paved with good intentions and small little steps that seem innocent, or even beneficial, on their own, in their time.

johnisgood•23m ago
It is just crazy that people still believe the "think of the children" narratives, or "it is for your own safety". I think these seemingly good intentions (which are not actually good intentions, they just seem so) are a huge problem, as is the lack of resistance, because if you resist, the rebuttal is "you don't want our kids to be safe?!" and so forth, appealing to emotion and shame.
amelius•1h ago
Isn't intelligence -> asking the right questions?

Rather than coming up with the right answers?

tiborsaas•1h ago
It's both and they form a feedback loop. You come up with a problem (question) and you solve the problem which might lead to more questions. So problem solving and reflecting back on it are both building blocks of intelligence.
infecto•1h ago
Everyone is different. I don’t have a good grasp on the distribution of HN readers these days, but I know that for myself, as a heavy user of LLMs, I am not sold on this. I am asking more questions than ever. I use it for proofreading and editing. But I can see the risk as a software engineer. I really appreciate tools like Cursor: I give it bite-size chunks and review. Using tools like Claude Code, though, it becomes a black box and I no longer feel at the helm of the ship. I could see that if you outsourced all thinking to an LLM there could be consequences. That said, I am not sold on the paper and suspect it’s mostly hyperbole.
ceejayoz•1h ago
> I am asking more questions than ever.

Wouldn't that be the expected result here? Less knowledge, more questions?

rwnspace•1h ago
In my personal experience new knowledge tends to beget questions.
infecto•1h ago
That’s one interpretation, but I think there’s a distinction between “asking more questions because I’ve forgotten things” and “asking more questions because I’m exploring further.”

When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.

Now, that said, I also have a healthy dose of skepticism about all output, but I find that in the general case I can at least explore my thoughts further than I might have in the past.

xnorswap•1h ago
> I am asking more questions than ever.

I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!

It's easy to twist evidence to fit biases, which is why I'd hold judgement until better evidence comes through.

infecto•59m ago
Fair point, though I think there’s a difference between “questions out of confusion” and “questions out of curiosity.”

If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.

That’s why I’m skeptical of blanket claims from one study, the lived experience doesn’t map so cleanly.

IAmBroom•58m ago
Well, that's certainly a take.

But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".

charlie-83•58m ago
Not OP, but there's a difference between needing to ask more questions and asking more questions because it's easier now.

Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.

For example, I'd always seen these funny little grates on the outside of houses near me and wondered what they were. Googling "little grates outside houses" doesn't help at all. Give AI a vague-ish description and it instantly tells you they are old boot scrapers.

infecto•46m ago
Haha you nailed it. Walking around and experiencing the world I can now ask a vague question and usually find an answer.

Maybe there is a movie in the back of my head, or a song. Typical search-engine queries would never find it. I can give super vague references to an LLM and, with search enabled, get an answer that's correct often enough.

Taek•59m ago
Cognitive decline is a broad term, and a research paper could claim "decline" if even a single cognitive metric loses strength.

When writing was invented, societies started depending on long form memorization less, which is a cognitive "decline". When calculators were invented, societies started depending on mental math less, which is a cognitive "decline".

I'm sure LLMs are doing the same thing. People aren't getting dumber, they are just outsourcing tasks more, so that their brains spend more time on the tasks that can't be outsourced.

infecto•58m ago
This is super interesting and I had not thought about it like that!
IAmBroom•54m ago
Absolutely true.

Also, domesticated dogs show indications of lower intelligence and memory than wolves. They don't have to plan complex strategies to find and kill food anymore.

Taek•49m ago
The difference between us and dogs is that we DO still need to make a salary. Dogs live in the lap of luxury, where their needs are guaranteed to be handled.

But humans need jobs, and jobs need to capture value from society. So we do actually still have to stay sharp, whatever form "sharp" takes.

yuehhangalt•20m ago
My concern is more about the tasks that can't or won't be outsourced.

People who maintain a high level of curiosity or have a drive to create things will most assuredly benefit from using AI to outsource work that doesn't support those drives. It has the potential to free up more time for creative endeavors or those that require deeper thinking. Few would argue against the benefit there.

Unfortunately, anti-intellectualism is rampant, media literacy is in decline, and a lot of people are content to consume content and not think unless they absolutely have to. Dopamine is a helluva drug.

If LLMs reduce the cognitive effort at work, and the people go home to doom scroll on social media or veg out in front of their streaming media of choice, it seems that we're heading down the path of creating a society of mindless automatons. Idiocracy is cited so often today that I hate to do so myself, but it seems increasingly prescient.

Edit: I also don't think that AI will enable a greater work-life harmony. The pandemic showed that a large number of jobs could effectively be done remotely. However, after the pandemic, there was a significant "Return to Office" movement that almost seemed like retribution for believing we could achieve a better balance. Corporations won't pass the time savings on to their employees and enable things like 4-day work weeks. They'll simply expect more productivity from the employees they have.

CuriouslyC•1h ago
This does not mesh with my personal experience. I find that AI reduces task noise that prevents me from getting in the flow of high level creative/strategic thinking. I can just plan algorithms/models/architectures and very quickly validate, test, iterate and always work at a high level while the AI handles syntax and arcane build processes.

Maybe it's my natural ADHD tendencies, but having that implementation/process noise removed from my workflow has been transformational. I joke about having gone super saiyan, but it's for real. In the last month, I've gotten 3 papers in pre-print ready state, I'm working on a new model architecture that I'm about to test on ARC-AGI, and I've gotten ~20 projects to initial release or very close (several of which concretely advance SOTA).

j45•1h ago
The gap I see is that the definition of "AI use" is not clearly delineated between passive use (similar to consumption) and active use.

Passive AI use, where you let something else think for you, will obviously cause cognitive decline.

Active use of AI as a thought partner, learning as you go yourself, feels different.

The issue with studying 18-22 year olds is that their prefrontal cortex (a center of logic, willpower, focus, reasoning, discipline) is not fully developed until 26. But that probably doesn't matter if the study is trying to make a point about technology.

The art of telling fake information from real could also increase cognitive capacity.

TheAceOfHearts•1h ago
Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.

This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.

Manik_agg•47m ago
I agree. Asking an LLM to write for you is lazy, and it also produces sub-par results (don't know about brain-rot).

I also like preparing a draft and using an LLM for critique; it helps me figure out blind spots or ways to articulate better.

lazide•44m ago
I’d consider it similar to always using a GPS/Google Maps/Apple Maps to get somewhere without thinking about it first.

It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.

Usually it's good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double-check actual addresses and just put in a business name or whatever). In many edge cases it leads to being stuck, because the maps data is wrong, or doesn't have updated locations, or can't account for weather conditions, etc., especially in the mountains or outside of major cities.

Doing it blindly has led to numerous people dying by stupidly getting themselves into more and more dumb situations.

People still got stuck using paper maps. Sometimes they even died. But it was much rarer, and people were more aware they were lost, instead of persisting in thinking they weren't. So, different failure modes.

Paper maps were very inconvenient, so people dealt with it using more human interaction and adding more buffer time. Which had its own costs.

In areas where there are active bad actors (Eastern Europe nowadays, many other areas in that region sometimes), it leads to actively pathological outcomes.

It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.

giancarlostoro•37m ago
When Firefox added autocorrect, and I started using it, I made it a point to learn what it was telling me was correct, so I could write more accurately. I have since become drastically better at spelling. I still goof, and I'm even worse when pronouncing words I've read but never heard. English is my second language, mind you.

I think any developer worth their salt would use LLMs to learn quicker and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before, but I cannot recall what my last solution was, and it is frustrating; I could see how an LLM could help that solution come back quicker. Sometimes it's "first time setup" stuff that you have not had to do for like 5 years, so you forget, and maybe you wrote it down on a wiki two jobs ago, but an LLM could help you remember.

I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.

jbstack•31m ago
> I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more.

I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.
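
For example, here's the kind of breakdown I mean (an illustrative command, not from any actual session):

  tar -xzf logs.tar.gz
  # -x: extract files from the archive
  # -z: filter the archive through gzip
  # -f logs.tar.gz: read from the named file instead of stdin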

The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.

mzajc•12m ago
> I ask follow up questions to make sure I understand why the AI's answer works.

I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?

[0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits variable expansion, but it kept giving me convincing yet incorrect answers.
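
(For reference, a minimal sketch of what that prefix does; `mytool` and its flag are made up. With the leading ':', systemd passes `$ARGS` to the program literally; without it, `$ARGS` would be expanded to `--verbose`:)

  [Service]
  Environment=ARGS=--verbose
  ExecStart=:/usr/bin/mytool $ARGS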

grim_io•1h ago
I have never used LLMs to write essays, so I can't comment on that.

What I can comment on is how valuable and energizing it is for me to code cooperatively with LLMs using agents.

I find it sad to hear when someone finds this experience disappointing, and I wonder what could go wrong to make it so.

grugagag•38m ago
I don't think someone finds this experience disappointing so much as harmful for cognition, probably in the long run, as the cognition 'muscle' atrophies in some regions, as I see it. Remains to be seen how it pans out. However, how much would you be willing to pay for LLMs before you decide it's not worth it? It is inexpensive at this stage, but this won't last.

babycheetahbite•1h ago
Does anyone have any suggestions for approaches they are taking to avoid the potential for this? Something I did recently in ChatGPT's 'Instructions' box (so far I have only used ChatGPT) is requesting it to "Make me think through the problem before just giving me the answer." and a few other similar notes.
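
Roughly the kind of wording I mean (just a sketch; adjust to taste):

  Before giving me a final answer, ask me two or three guiding
  questions and wait for my attempt. Point out where my reasoning
  goes wrong instead of just handing me the fix.
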
deadbabe•59m ago
At the very least, don't use LLMs tightly integrated into your IDE. Keep them at arm's length; use them the way you use a search engine.

asimovfan•59m ago
Writing long texts for school is stupid, and it is a skill that is in practice purely developed in order to do homework. I am not surprised it immediately declines as soon as the necessity is removed.

patrickmay•52m ago
On the contrary, writing is key to organizing and clarifying one's thoughts. It is an essential part of learning.

"Writing is nature’s way of letting you know how sloppy your thinking is." -- Guindon

asimovfan•41m ago
People write a lot of stuff that is not for homework. Maybe they should measure something else people write. I would even say that writing for homework is a special skill in bullshitting that does not (cannot) exist in other forms of writing.

LMKIIW•43m ago
> ...is a skill that is in practice purely developed in order to do homework.

I would argue that it helps kids learn how to organize and formulate coherent thoughts and communicate with others. I'm sure it helps them do homework, too.

miltonlost•33m ago
An Asimov fan saying writing long texts is stupid? I bet he would have had some strong feelings about that.

krapp•30m ago
Well yeah, he was probably getting paid by the word :)

nperez•57m ago
I feel like this sort of thing will be referenced for comic relief in future talks about hysteria at the dawn of the AI era.

The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.

SkyBelow•56m ago
The main issue I see is that the methodology section of the paper limited the total time to 20 minutes. Is this a study of using LLMs to write an essay for you, or of using LLMs to help you write an essay? To be fair, LLMs can't be swapped between the two modes, so the distinction is left up to the user and how they engage.

Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would be able to navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM, I would likely stick to brain only except in a few specific cases that might occur (forgetting how to spell a word or forgetting the name of a concept).

So this study is likely applicable to similar timed settings, like letting students use LLMs on a test, but that's one I would have already seen as extremely problematic for learning to begin with (granted, it's still worthwhile to find evidence to back even the 'obvious' conclusions).

sudosteph•56m ago
Meanwhile my main use cases for AI outside of work:

- Learning how to solder

- Learning how to use a multimeter

- Learning to build basic circuits on breadboards

- Learning about solar panels, MPPT, battery management systems, and different variations of li-ion batteries

- Learning about the LoRa band / Meshtastic / how to build my own antenna

And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.

You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.

And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.

stripe_away•48m ago
and to be blunt, I learned similar things building analog synths, before the dawn of LLMs.

Like you, I don't like watching videos. However, the web also has text, the same text used to train the LLMs that you used.

> When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.

Likewise, but I would have to ask either the real world or written docs.

I'm glad you've found a way to learn with LLMs. Just remember that people have been learning without LLMs for a long time, and it is not at all clear that LLMs are a better way to learn than other methods.

chaps•38m ago

> However, the web also has text, the same text used to train the LLMs that you used.

The person you're responding to isn't denying that other people learn from those. But they're explicit that having the text alone wasn't helpful for them:

> I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.

sudosteph•30m ago
The asking people part was the hard thing for me, always has been. That honestly was the missing piece for me. I absolutely agree that written docs and online content are sufficient for some people, that's how I learned Linux and sysadmin stuff, but I tried on and off to get into electronics for years that way and never got anywhere.

I think the problem was that all of the getting-started guides didn't really solve problems I cared about; they're just like "see, a light! isn't that neat?" and then I'd get bored and impatient and not internalize anything. The textbooks had theory, but I would forget most of it before I could use it and actually learn. Then, when I tried to build something actually interesting to me without understanding the fundamentals, it would always fail; Google wouldn't help me find out why, because it could be a million things, and no human in my life understands this stuff either, so I would just go back to software.

It could be that LLMs are better for certain people to learn certain things in certain situations.

fxwin•19m ago
Same here. I've been working through some textbooks that don't include solutions to the exercises, and ChatGPT has been invaluable for getting feedback on my solutions and hints when I'm stuck.

amelius•5m ago
Yeah, if you're using LLMs like an apprentice who asks their master, then there's nothing wrong with that, imho.

tqwhite•49m ago
What a load of crap. I don't believe it for one second. Also, AI has only been an important influence for about twenty minutes.

Here's what I think: AI causes you to forget how to program but causes you to learn how to plan.

Also, AI enhances who you are. Dummies get dumber. Smarties get smarter.

But that's not proven. It's anecdote. And I don't believe anyone knows what is really happening and those that claim to are counterproductive.

variadix•40m ago
Seems obvious. If you don’t use it you lose it. Same thing happened with mental arithmetic, remembering phone numbers, etc. Letting an LLM do your thinking will make you worse at thinking.
nzach•36m ago
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could

I think a better interpretation would be that LLMs give people the ability to "filter out" certain tasks in their brains. Maybe a good parallel would be to point out that some drivers are able to drive long distances on what is essentially "auto-pilot". When this happens, they are able to drive correctly but don't really register every single action they've taken during the process.

In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.

[edit] Forgot to link the related xkcd: https://xkcd.com/1414/

davidclark•16m ago
I think the “crushing nihilism” pro-AI argument is what makes me most depressed. We are going to have so much fun when we do not communicate with other humans because it is a task that we can easily “filter out.”
rogerkirkness•35m ago
This article is written by AI. The em dashes and the "Don't just X, but Y" construction are classic ChatGPT writing patterns.

Kuinox•30m ago
The em dashes exist in ChatGPT output because existing human text contains them, like journal articles.

lo_zamoyski•35m ago
Why is this surprising? "Use it or lose it" may be a cliche, but it's true; if you don't keep some faculty conditioned, it gets "rusty". That's the general principle, so it would be surprising if this were an exception.

The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle and measured and piecemeal rather than some sharp spike. We become conditioned in Pavlovian fashion, more habituated to such behavior, the more we engage in such behavior.

The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.

This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.

hopelite•34m ago
What a rather ironic headline, generalizing across all "AI use" when the story is about a study specifically about "essay writing tasks". But that kind of slop is just par for the course for journalists, and always has been.

But it does highlight that this mind-slop decline is not new in any way, even if it may have accelerated with the decline and erosion of standards.

Think of it what you will, but if the standards that led to a state everyone really enjoys and benefits from are done away with, that enjoyable state will inevitably start crumbling all around you.

AI is not really unusual in this manner, other than that it is squarely hitting groups, like public health policy journalists and programmers, that previously thought they were immune because they were engaged in writing. Yes, programmers are essentially just writers.

rusbus•34m ago
Does anyone else find it incredibly ironic that this article summarizing the paper was obviously written with AI?

All the headings and bullets and phrases like "The findings are clear:" stick out like a sore thumb.

Kuinox•33m ago
Remember, they only measured that the less time you spend on a task, the less you remember it.
digitcatphd•32m ago
So users are more detached from their work? How does this correspond to cognitive decline? Wouldn't it need to be cross-referenced with other areas besides the task at hand? Seems a bit of a headline-grabbing study to me. Personally, I find thinking with an LLM helps me take a more structured and unbiased approach to my thought process.
misswaterfairy•29m ago
I can't say I'm surprised by this. The brain is, figuratively speaking, a muscle. Learning through successes and (especially) failures is hard work, though not without benefit, in that the trials and exercises your brain works through exercise the 'muscle', making it stronger.

Using LLMs to replace the effort we would've otherwise expended to complete a task short-circuits that exercising function, and I would suggest it is potentially addictive because it's a near-instant reward for little work.

It would be interesting to see a longitudinal study on the effect of LLMs, collective attention spans, and academic scores where testing is conducted on pen and paper.

onlyrealcuzzo•6m ago
Sounds bullish for AI.

It's like a drug. You start using it, and think you have super powers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.

Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.

sigbottle•26m ago
Obviously, the obvious caveats apply: intentional use is good, lazy use is bad, etc.

I've found it both helpful and dangerous. It's great for expanding scope, obviously: a greater search engine.

But I've also noticed some of the "harmful patterns", I guess, that I would not have noticed about... myself? For example, AI is way too eager to "solve things" when given a prompt, even if you give it an abstract one. It's unable to take a step back and just... think?

And hey, I notice that I do that too! Lol.

It's helped me realize more refined "stages" of thinking I guess, even beyond just "plan" and "solve".

But for sure a lot of the time I'm just lazy and ask AI to just "go do it" and turn off critical thinking, hoping that it can just 1 shot the problem instead of me breaking it down. Sometimes it genuinely works. Often it doesn't.

I think if I stay way more intentional with my thinking, I can put it to good use. Which will probably reduce AI usage - but the point is the first principles of real critical thinking, not the usage of AI.

---

These kinds of studies remind me of when my parents told me "stop getting addicted to games" as a kid. Sure, anyone can observe effects, it takes real brains to really try and understand the first principles effects. Addiction went away in a flash once I understood the principles, lol.

whatamidoingyo•25m ago
I've been seeing people use LLMs to reply to people on Facebook. Like, they'll just be having a general discussion, and then reply as ChatGPT. I don't know if they think it makes them look smart; I think it has the complete opposite effect.

Not many people can perform mental arithmetic beyond single-digit numbers. Just plug it into a calculator...

We're at the point of people plugging their thoughts into an LLM and having it do the work for them... what's going to happen to thinking?

TYPE_FASTER•22m ago
I used to know a bunch of phone numbers by heart. I haven't done that since I got a cellphone. Has that had an impact on my ability to memorize things? I have no idea.

I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.

isodev•8m ago
An AI is telling me these could be symptoms of the onset of a degenerative neurological condition. Is it true? I have no idea.

goalieca•20m ago
Anecdote here, but when I was in grad school, I was talking to a PhD student I respected a lot. Whenever he read a paper, he would try to write the code out and get it working. It would take me a couple of months, but he could whip it up in a few days. He explained to me that it was just practice, and the more you practice, the better you become. He not only coded things quickly, he started analyzing papers quicker too, and became really good at synthesizing ideas, knowing what worked and didn't, and built up a phenomenal intuition.

These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.

vonneumannstan•15m ago
>I think the "just tweak the prompts bro" people are missing out on learning.

Alternatively, they're just learning and building intuition for something else. The level of abstraction is moving upwards. I don't know why people don't seem to grok that the level of the current models is the floor, not the ceiling. Despite naysayers like Gary Marcus, there is in fact no sign of scaling or progress on AI capabilities slowing down at all. So it might be that if there is any value left in human labor in the future, it will be in being able to get AI models to do what you want correctly.

Brian_K_White•10m ago
Wishful, self-serving, and beside the point. The primary argument here is not about the capability of the ai.

I think the same effect has been around forever: every boss/manager/CEO/rando-divorcee-or-child-with-money using employees to do their thinking, the way an information-handling worker or student today uses an AI to do theirs.

benterix•9m ago
That would be true if several conditions were fulfilled, starting with LLMs actually being able to do their tasks properly, which they still very much struggle with. Having to constantly check and correct the lower layer basically defeats the premise of moving up an abstraction layer.

jimkri•5m ago
I don't think Gary Marcus is necessarily a naysayer; I take it that he is trying to get people to be mindful of current AI tooling and its capabilities, and that there is more to do before we say it is what it is being marketed as. GPT-5, for example, seems to be an additional feature layer of game-theory examples. Check LinkedIn for how people think it behaves, and you can see patterns. But they market it as much more.

KoolKat23•4m ago
Agree with this. I mean, the guy assembling the thingymajig in the factory can, after a few years, put it together with his hands 10x faster than the actual thingymajig designer. He probably couldn't tell you what the fault tolerance of the item is, though; the designer could.

We just have to get better at identifying risks with using the LLMs doing the grunt work and in mitigating them.

codyb•4m ago
Really? No signs of slowing down?

A year or two ago when LLMs popped on the scene my coworkers would say "Look at how great this is, I can generate test cases".

Now my coworkers are saying "I can still generate test cases! And if I'm _really specific_, I can get it to generate small functions too!".

It seems to have slowed down considerably, but maybe that's just me.

benterix•12m ago
We are literally witnessing the skills split right in front of our eyes: (1) people who are able to understand the concepts deeply, build a mental model of them, and implement them in code at any level, and (2) people who outsource it to a machine and slowly, slowly lose that capability.

For now the difference between these two populations is not that pronounced yet but give it a couple of years.

mrits•10m ago
I suppose the question is whether we need to understand the concepts deeply. I'm not sure many of us did to begin with, and we have shipped a lot of code.

CuriouslyC•3m ago
We're just moving up the abstraction ladder, like we did with compilers. I don't care about the individual lines of code; I care about architecture, code structure, rigorous automated e2e tests, contracts with comprehensive validation, etc. Rather than waste a bunch of time poring over agent PRs, I just make them jump over extremely high static/analytic hurdles that guarantee functionality. Then my only job is to identify places where the current spec and the intended functionality differ, and create a new spec to mitigate.

geye1234•3m ago
Interesting, thanks. Do you mean he would write the code out by hand with pen and paper? That has often struck me as a very good way of understanding things (granted, I don't code for my job).

Similar thing in the historian's profession (which I also don't do for my job but have some knowledge of). Historians who spend all day immersed in physical archives tend, over time, to be great at synthesizing ideas and building up an intuition about their subject. But those who just Google for quotes and documents on whatever they want to write about tend to have a more static and crude view of their topic; they are less likely to consider things from different angles, or see how one thing affects another, or see the same phenomenon arising in different ways; they are more likely to become monomaniacal (an exaggerated word, but it gets the point across) about their own thesis.

NiloCK•16m ago
Every augmentation is also an amputation.

Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.

It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.

vonneumannstan•12m ago
No different from Socrates complaining that writing would ruin his students' memory.

ChrisArchitect•12m ago
Paper from June.

Discussion then: https://news.ycombinator.com/item?id=44286277

lif•7m ago
What are the costs of convenience? Surely most LLM use by consumers leans into that heavily.

iphone_elegance•5m ago
well now that explains HN