frontpage.

AI Coding Agents Hallucinate – Real-Time ResearchAgent

https://hallucinationtracker.com
1•amadosalsta•1m ago•0 comments

Autopsy reveals Daniel Naroditsky's probable cause of death

https://www.charlotteobserver.com/news/local/article314402626.html
1•amrrs•3m ago•0 comments

Attitude-based networking

https://vece.ai/compare-yourself
1•iliakoliev•7m ago•1 comment

Tiny Mars Has a Big Impact on Our Climate

https://nautil.us/tiny-mars-has-a-big-impact-on-our-climate-1262470/
1•Bender•11m ago•0 comments

The Heat Pump relay race

https://www.heatpumped.org/p/the-heat-pump-relay-race
1•ssuds•12m ago•0 comments

Probing quantum mechanics with nanoparticle matter-wave interferometry

https://www.nature.com/articles/s41586-025-09917-9
1•cpncrunch•12m ago•0 comments

Threat Actors Expand Abuse of Microsoft Visual Studio Code

https://www.jamf.com/blog/threat-actors-expand-abuse-of-visual-studio-code/
3•vinnyglennon•12m ago•0 comments

AMD launches 34GB AI bundle in latest driver update

https://www.pcguide.com/news/amd-launches-massive-34gb-ai-bundle-in-latest-driver-update-heres-wh...
1•kristianp•16m ago•0 comments

Making activities load 500x faster than the most popular feed

https://getfast.ai/blogs/activity-feed
3•steadyelk•17m ago•0 comments

Personalized travel itineraries, mapped and shareable

https://TryTourify.app
1•Arnoldsaurus•19m ago•0 comments

Show HN: Dotenv Mask Editor: No more embarrassing screen leaks of your .env

https://marketplace.visualstudio.com/items?itemName=xinbenlv.dotenv-mask-editor
1•xinbenlv•20m ago•0 comments

Doctors raise alarm over declining vaccine rates in America's most vulnerable

https://www.dailymail.co.uk/health/article-15484717/doctors-warn-declining-vaccine-rate-older-adu...
4•Bender•21m ago•1 comment

Ask HN: Have your views about AI / LLMs changed? What triggered it?

3•ATechGuy•22m ago•0 comments

From Stealth Blackout to Whitelisting: Inside the Iranian Shutdown

https://www.kentik.com/blog/from-stealth-blackout-to-whitelisting-inside-the-iranian-shutdown/
1•oavioklein•24m ago•0 comments

Clawdbot Showed Me What the Future of Personal AI Assistants Looks Like

https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-l...
1•janpio•25m ago•0 comments

1 in 35,385 US immigrants are in MN+criminal+undocumented

3•QuantumGood•25m ago•1 comment

Taboo against harming strangler fig spirits protects forests in Borneo

https://news.mongabay.com/2025/12/taboo-against-harming-strangler-fig-spirits-protects-forests-in...
2•PaulHoule•26m ago•0 comments

7 Fixes That Made My Website Faster and More Accessible

https://dingyu.me/blog/7-fixes-that-made-my-website-faster-and-more-accessible
1•felixding•28m ago•0 comments

Google Cloud to shut down Memorystore for Memcached by Jan 2029

https://docs.cloud.google.com/memorystore/docs/memcached/deprecation/memcached
1•tokkyokky•28m ago•1 comment

Starlight, a Bitcoin-native platform for turning ideas into funded work

https://starlight-ai.freemyip.com/
1•macroadster•28m ago•1 comment

From Emails and Excel to Decision Clarity: Fixing How Spend Decisions Get Approved

https://www.letsriff.ai/blog/from-emails-and-excel-to-decision-clarity-fixing-how-spend-decisions...
1•wheresclark•28m ago•0 comments

Lix – universal version control system for binary files

https://lix.dev/blog/introducing-lix/
4•onecommit•29m ago•0 comments

Wasma – Windows Assignment System Monitoring Architecture

https://github.com/Azencorporation/Wasma
1•goychay23•33m ago•1 comment

Show HN: PolyMCP – open-source toolkit to expose MCP tools and run agents

1•justvugg•34m ago•0 comments

Ark and GENESIS: A protocol for sovereign know nodes and consent-based federation

1•PiSounds•34m ago•0 comments

On Mark Carney's use of "The Power of the Powerless" at the WEF

https://twitter.com/SilviaPencak/status/2013705975207797113
2•nailer•34m ago•0 comments

New Linux Patch Improves NVMe Performance by ~15% with CPU Cluster-Aware Handling

https://www.phoronix.com/news/Faster-Linux-NVMe-Cluster-Aware
2•Bender•35m ago•0 comments

Don't click on the LastPass 'create backup' link – it's a scam

https://www.theregister.com/2026/01/21/lastpass_backup_phishing_campaign/
4•Bender•37m ago•0 comments

Claude's New Constitution

https://simonwillison.net/2026/Jan/21/claudes-new-constitution/
2•coloneltcb•43m ago•1 comment

Toronto man fakes pilot badge to score hundreds of free flights

https://www.bbc.com/news/articles/c5y223170vdo
1•belter•43m ago•0 comments

Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant

https://www.media.mit.edu/publications/your-brain-on-chatgpt/
56•misswaterfairy•1h ago

Comments

misswaterfairy•1h ago
It seems this study has been discussed on HN before, though it was recently revised in late December 2025.

https://arxiv.org/abs/2506.08872

mettlerse•1h ago
Article seems long, need to run it through an LLM.
SecretDreams•1h ago
When you're done, let us know so we can aggregate your summarized comment with the rest of the thread comments to back out key, human-informed findings.
observationist•50m ago
Grug no need think big, Grug brain happy. Magic Rock good!
lapetitejort•1h ago
Doesn't look like anything to me
fhd2•46m ago
Perfection.
captain_coffee•1h ago
Curious what the long-term effects of the current LLM-based "AI" systems, embedded in virtually everything and pushed aggressively, will be in, say, 10 years. Any strong opinions or predictions on this topic?
SecretDreams•56m ago
It'll be a lot like giving children all the answers without teaching them how to get the answers for themselves.
netsharc•42m ago
Hopefully the brainrot will mean older developers, who know how to code the old-fashioned way, don't get replaced so quickly...
yesco•36m ago
If we focus only on the impact on linguistics, I predict things will go something like this:

As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.

Interestingly, people might start using said editor prompts to express themselves, widening the range of distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.

In parallel, people start using LLMs to summarize content in a style they prefer.

Both sides of this gradually converge. Content gets written in a way that is explicitly optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write in a way that encourages a summarizing LLM to render certain key passages as the author intends.

Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.
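
Sketched very loosely as code, the two-sided pipeline might look like the following. This is a speculative sketch, not any real API: complete() below is a hypothetical stand-in for whatever model call gets used, and all the prompts and names are made up.

    # Hypothetical two-sided LLM pipeline from the prediction above:
    # the author's "editor" model polishes a rough draft, and the
    # reader's model re-summarizes the published text to taste.
    def complete(prompt: str) -> str:
        # Stand-in for an LLM API call; not a real library function.
        raise NotImplementedError

    def author_side(rough_draft: str, editor_style: str) -> str:
        # The author expresses themselves through the editor prompt.
        return complete(f"Rewrite this draft coherently, in a {editor_style} voice:\n\n{rough_draft}")

    def reader_side(published_text: str, reader_pref: str) -> str:
        # The reader consumes a restyled summary, not the original.
        return complete(f"Summarize as {reader_pref}:\n\n{published_text}")

    # Convergence: authors start writing drafts that survive reader_side intact.
    published = author_side("bullet points, typos, half-thoughts", "dry, terse")
    summary = reader_side(published, "three short bullet points")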

cluckindan•21m ago
>Human languages start to evolve in a direction that could be considered more coherent than before

Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?

m4rtink•16m ago
Like with asbestos and lead paint, we are building surprises today for the people of tomorrow!

And asbestos and lead paint were actually useful.

binary132•15m ago
Most people will continue to become dumber. Some people will try to embrace and adapt. They will become the power-stupids. Others will develop a sort of immune reaction to AI and develop into a separate evolutionary family.
lacoolj•1h ago
Don't even need to read the article if you've been using 'em. You already know just as well as I do how bad it gets.

A door has been opened that can't be closed and will trap those who stay too long. Good luck!

risyachka•54m ago
Yup. This.
ragle•2m ago
I hate it, but I'm actually counting on this and how it affects my future earning potential as part of my early(ish) retirement plan!

I do use them, and I also still do some personal projects and such by hand to stay sharp.

Just: they can't mint any more "pre-AI" computer scientists.

A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:

* Not being able to mint any more "pre-AI" junior hires

And, even if we could:

* Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs

* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs

* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"

The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.

We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!

Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).

Order of "dumbing down" effect in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield feels completely different, though?

Just my $0.02, I could be wrong.

bethekidyouwant•58m ago
“LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?

The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?

nothrowaways•55m ago
> Cognitive activity scaled down in relation to external tool use
somewhatrandom9•55m ago
"Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."
DocTomoe•54m ago
TL;DR: We had one group not do some things, and later found out that they did not learn anything by not doing the things.

This is a non-study.

keithnz•38m ago
no, that isn't accurate. One of the key points is that those previously relying on the LLM still showed reduced cognitive engagement after switching back to unaided writing.
DocTomoe•30m ago
And how exactly is that surprising?

If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.

In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.

greggoB•24m ago
> And how exactly is that surprising?

Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.

DocTomoe•15m ago
If the title of your study both makes a neurotoxin reference ("This is your brain on drugs", egg, pan, plus pearl-clutching) AND introduces a concept stolen and abused from IT and economics (cognitive debt? That implies repayment and "refactoring", which is not what they mean) ... I expect a bit more than "we tested this very obvious common-sense thing, and lo and behold, it is just as a five-year-old would have predicted."
Miraste•5m ago
You are right about the content, but it's still worth publishing the study. Right now, there's an immense amount of money behind selling AI services to schools, which is founded on the exact opposite narrative.
Miraste•9m ago
No, it isn't.

The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.

"While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."

The study also found that the LLM group was largely copy-pasting LLM output wholesale.

The original poster is right: the LLM group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.

orliesaurus•46m ago
I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I'm on some potent drug that's eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.
tuckwat•27m ago
It's just a different way of writing code. Today you at least need to understand best practices to help steer towards a good architecture. In the near future there will be no developers needed at all for the majority of apps.
alt187•24m ago
Becoming a moron is a different way of writing code?
cluckindan•23m ago
That just means the majority of apps don’t actually serve much of a purpose
joseangel_sc•21m ago
this comment will age badly
akomtu•6m ago
You may be right, but for a different reason: the majority of apps on the Apple and Google app stores will be 100% AI-generated crapware.
foota•42m ago
IMO programming differs quite a bit between vibes-based "not looking at it at all" and using AI to complete tasks. I still feel engaged when I'm more actively "working with" the AI, as opposed to a more hands-off "do X for me".

I don't know that the same distinction makes as much sense in an essay context, because it's not really the same thing. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it, like "instead of arguing X, argue Y then X" or something.

Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day I used it to update some code that I cared about, wanted to understand better, and was more engaged in, while simultaneously having it make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.

I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.

falloutx•37m ago
AI is not a great partner to code with. For me, I just use it for boilerplate and to fill in the tedious gaps. Even for translations it's bad if you know both languages. The biggest issue is that AI constantly tries to steer you wrong; it's subtle enough in programming that you only realize it a week later, when you're stuck in a vibe-coding quagmire.
foota•17m ago
Shrug, YMMV. I was definitely a bit of a Luddite for a while, and I still definitely don't consider myself an "AI person", but I've found them useful. I can have them do legitimately useful things, with varying degrees of supervision.

I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.

The project I mentioned above was adding otel tracing to something, and it wrote a trace-viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.
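
(For concreteness, a minimal sketch of what wiring up otel tracing involves, using the OpenTelemetry Python SDK; the comment doesn't say which language or exporter was actually used, so the console exporter and the span/attribute names here are placeholders.)

    # Minimal OpenTelemetry setup: register a provider, create spans.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        BatchSpanProcessor,
        ConsoleSpanExporter,
    )

    # Print finished spans to stdout; a real setup would export to a
    # collector that a trace-viewing UI reads from.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("example.service")  # placeholder name

    # Wrap the work you want to observe in a span and tag it.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("request.items", 3)  # placeholder attribute
        pass  # the actual work being traced goes here

Each finished span is what a trace viewer renders as a bar on a timeline.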

uriegas•19m ago
I find it very useful for code comprehension. For writing code it still struggles (at least Codex does), and sometimes I feel I could have written the code myself faster rather than correcting it every time it does something wrong.

Jeremy Howard argues that we should use LLMs to help us learn; once you let one reason for you, things go bad and you start accruing cognitive debt. I agree with this.

falloutx•42m ago
I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they use ChatGPT for quick health advice; some ask it for gym advice, and others say they just dump entire research reports into it and read the summary.
mannanj•37m ago
Yes. Similar to the mass psychosis we were hearing about during COVID in relation to asking particular questions and demonstrating curiosity about controversial topics.

Seems to have somehow been replaced with this AI psychosis?

greggoB•27m ago
> Similar to the mass psychosis we were hearing about during COVID

Can you be more specific and/or provide some references? The "demonstrating curiosity about controversial topics" part is sounding like vaccine skepticism, though I don't recall ever hearing that being referred to as any kind of "psychosis".

tuckwat•22m ago
What does using a chat agent have to do with psychosis? I assume this was also the case when people googled their health results, googled their gym advice and googled for research paper summaries?

As long as you're vetting your results, just like you would any other piece of information on the internet, it's an evolution of data retrieval.

DocTomoe•21m ago
Pathologising those who disagree with a current viewpoint follows a long and proud tradition. It was "possessed by demons" in yesteryear; today it's "AI psychosis".
netsharc•40m ago
An obvious comparison is probably the habitual usage of GPS navigation. Some people blindly follow them and some seemingly don't even remember routes they routinely take.
nerdsniper•34m ago
I found a great fix for this was to lock my screen maps to North-Up. That teaches me the shape of the city and greatly enhances location/route/direction awareness.

It’s cheap, easy, and quite effective to passively learn the maps over the course of time.

My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.

netsharc•28m ago
Yeah, I'm a North-Up cult member too, after seeing a behind-the-scenes video of Jeremy Clarkson from Top Gear suggesting it, claiming you'd "never get lost again".
Liftyee•25m ago
I haven't tried this technique yet, sounds interesting.

Living in a city where phone-snatching thieves are widely reported on built my habit of memorising the next couple steps quickly (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyways because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC.)

layman51•23m ago
That's a great tip, but I know some people hate that because there is some cognitive load if they rely more on visuals and have to think more about which way to turn or face when they first start the route, or have to make turns on unfamiliar routes.

I also wanted to mention that just spending some time looking at the maps and comparing differences in each services' suggested routes can be helpful for developing direction awareness of a place. I think this is analogous to not locking yourself into a particular LLM.

Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.

hombre_fatal•15m ago
I try using north-up for that reason, but it loses the smart-zooming feature you get with the POV camera, like zooming in when you need to perform an action, and zooming back out when you're on the highway.

I was shocked into using it when I realized that when using the POV GPS cam, I couldn't even tell you which quadrant of the city I just navigated to.

I wish the north-up UX were more polished.

jchw•30m ago
I recall reading that over-reliance on GPS navigation is legitimately bad for your brain health.

https://www.nature.com/articles/s41598-020-62877-0

This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.

codazoda•29m ago
I have ALWAYS had this problem. It's like my brain thinks places I frequent are unimportant details and ejects them to make room for other things.

I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.

For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.

kazinator•14m ago
I always use GPS, and always take better routes based on local knowledge. Once in a rare while, it surprises with a nice idea.

GPS navigators provide a rough estimate of arrival time which becomes more accurate as you get closer. The estimate is occasionally useful for letting someone know when you expect to arrive somewhere.

It's the non-habitual GPS uses that are detrimental to the brain: the trips to a new address in some unfamiliar part of town, when in the distant past you would have consulted a map and then planned a route, memorized it, and followed it without looking at the map.

stephen_g•8m ago
This is one I've never found really affects me - I think because I just always plan that the third or fourth time I go somewhere I won't use the navigation, so you are in a mindset of needing to remember the turns and which lane you should be in etc.

Not sure how that maps onto LLM use. I've avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try and get them to generate code that fixes an issue we could have worked through together in about two hours). I can't see an equivalent way to not just start outsourcing your thinking...

xenophonf•39m ago
I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?
DocTomoe•27m ago
I love the parts where they point out that human evaluators gave wildly different evaluations as compared to an AI evaluator, and openly admitted they dislike a more introverted way of writing (fewer flourishes, less speculation, fewer random typos, more to the point, more facts) and prefer texts with a little spunk in it (= content doesn't ultimately matter, just don't bore us.)
jchw•32m ago
I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.

The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.

We probably need more studies like this, across more topics and with larger samples, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.

Der_Einzige•30m ago
Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.

I want a life of leisure. I don’t want to do hard things anymore.

Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”

Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754

latexr•17m ago
> Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market

I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what others do and how they behave affects you too.

A John Green quote on public education feels appropriate:

> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

softwaredoug•29m ago
Druids used to decry that literacy caused people to lose their ability to memorize sacred teachings. And they were right! But literacy still happened, and we’re all either dumber or smarter for it.
EGreg•14m ago
Druids? Socrates was famously against books far earlier.

Funny enough, the reason he gave against books has now finally been addressed by LLMs.

potatoman22•26m ago
I've definitely noticed an association between how much I vibe code something and how good my internal model of the system is. That bit about LLM users not being able to quote their essay resonates too: "oh we have that unit test?"
pfannkuchen•24m ago
Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify based on a misunderstanding of theirs, they act like your clarification is a fresh claim, which lets them avoid ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting, though, as the incentives that produce the behavior in both cases may be similar.
bethekidyouwant•18m ago
I’m gonna make a new study: one arm where I give the participants really shitty tools, and one where I give them good tools, to build something, and see which one takes more brain power.
trees101•10m ago
Skill issue. I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.

There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.