
What I'm Hearing About Cognitive Debt (So Far)

https://margaretstorey.com/blog/2026/02/18/cognitive-debt-revisited/
76•raphaelcosta•2h ago

Comments

gdulli•1h ago
> High-performing teams have always managed technical debt intentionally.

The ability to generate code has seemingly transposed what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.

afro88•1h ago
I find it disconcerting that an article about cognitive debt contains many "tells" of being written by AI.
chromacity•1h ago
Independent of that, the article is just a summary of a HN thread...
unignorant•46m ago
I had the same reaction, but the article is not AI-generated according to pangram, which I've generally found reliable. I wonder if LLM turns of phrase and even thought patterns are creeping into normal human thought.
erikerikson•34m ago
Or, stay with me here, the LLMs were trained on how we, statistically, write.
unignorant•14m ago
There are typical LLM voices and styles, just like human writers have differentiated voices and styles. And some common elements of the typical LLM style are distinct from humans I've previously read.
pizzly•31m ago
I think it's bidirectional. We change our writing based on what we see (AI-generated content on the internet), and AI will learn based on what we write.
Zetaphor•17m ago
It's worth mentioning that pangram is more confident in its positive detections than its negative ones, as stated by the founder in an interview on the most recent ThursdAI episode.
protocolture•1h ago
It's more and more clear to me that AI is a force multiplier for small teams and hobby workflows, but it seems to have diminishing returns for larger teams.
melvinroest•1h ago
How so? Could you give some specific examples?
hogehoge51•35m ago
My experience moving between startup/SME/corp:

Smaller teams have more agency to move and usually team members with broader responsibility and understanding of the systems. Also possibly closer to stakeholders, so are already involved in specification creation and know where automation can add value. Add an AI agent and they can pick and choose where they can be most effective at a system level.

Bigger teams have clear boundaries that stop agency - blockers due to cross team dependencies, potentially no idea what stakeholders want, just piecemeal incremental change of a bigger system specified by someone else. If all they can do is automate that limited scope it's really just like faster typing.

melvinroest•1h ago
Edit: yep, I really do type this much. I'm a bit of a "thinking out loud" person.

> Cognitive Debt, Like Technical Debt, Must Be Repaid

In quite a few circumstances, cognitive debt doesn't entirely need to be repaid. I've personally found, across multiple projects, that certain directions aren't the ones I want to go in. But I only found that out after fully fleshing them out with Claude Code and then, by using my own app, realizing that certain things I thought would work don't.

For example, I created library.aliceindataland.com (a narrative-driven SQL course). After a while, I noticed that the grading scheme was off and needed to be rewritten. The same goes for how I wanted to implement the cheatsheet, or for lessons not following the standard format. Of course, I need to understand the new code, but I don't need to understand the old code.

With other small pieces of code, I just don't really need to know how things work because they're that simple. For example, every 5 minutes I track which wifi network I'm connected to. It's mostly useful for knowing whether I went to the office that day or not. A Python script retrieves the data, and when I look at it, I can recognize that it's correct. Doing it this way is certainly a lot faster than active recall.
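The commenter doesn't say how their tracker works, but the idea can be sketched in a few lines. This is a hypothetical version assuming Linux's `nmcli`; on another OS you'd swap in the equivalent command (e.g. `airport -I` on macOS):

```python
# Hypothetical sketch of a "which wifi am I on" logger: each run appends a
# timestamp and the active SSID to a CSV file. The nmcli command and its
# terse "active:ssid" output format are a Linux-specific assumption.
import csv
import subprocess
from datetime import datetime

def current_ssid(nmcli_output: str) -> str:
    """Parse `nmcli -t -f active,ssid dev wifi` output; return the active SSID or ''."""
    for line in nmcli_output.splitlines():
        active, _, ssid = line.partition(":")
        if active == "yes":
            return ssid
    return ""

def log_ssid(path: str) -> None:
    """Append one (timestamp, ssid) row; schedule via cron every 5 minutes."""
    out = subprocess.run(
        ["nmcli", "-t", "-f", "active,ssid", "dev", "wifi"],
        capture_output=True, text=True,
    ).stdout
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), current_ssid(out)])
```

A glance at the resulting CSV is enough to see which days have office-network entries, which matches the "I can recognize that it's correct" level of verification described.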

At work, I've had similar experiences. At my previous job I created SEO and SEA tools for marketing experts. I remember creating a whole app that gave experts insights into SEO matters that Ahrefs and similar sites don't, as it was tailored to the data of the company I worked at. The feedback I got was basically: the data is great, the insights are necessary, but the way the app works is unusable for us. I was a bit perplexed, as I personally didn't find it that complicated. But then, I'm not the one using it. I created a second version, and that was way more usable. The second version assumed a completely different front-end app and front-end architecture, though. All the cognitive debt of V1? No payback needed.

The reasons this is the case, as it seems to me, fall under a few categories:

1. Experimenting with technologies. If you have certain assumptions about how a technology works but it turns out you're wrong, or you learn through the process that an adjacent technology works way better, then you need to redo it. Back when coding by hand was still the norm, I had this with a collaborative drawing project called Doodledocs (2019). I didn't know if browsers supported pressure sensitivity, or to what extent it was easy to implement. It required a few programming experiments.

2. It's a small and simple script, not much more to it.

3. Experimenting with usability. A lot of the time, we don't know how usable our app is. In my experience, this seems to be either because (1) it's a hobby project or (2) the UX people were fired years ago. In these cases, more often than not, UX becomes an afterthought. But with LLMs, delivering a 95% fully working version is usually done within a week for a greenfield project. This 95% fully working version is an amazing high-fidelity interaction prototype (95% no less). Once you do that for a few iterations, you understand what you really need. And once you understand what you really need, you can start repaying the cognitive debt.

I've found it's usually category 3, sometimes 2 and rarely 1.

BLKNSLVR•1h ago
Just reading the first paragraph and I've already started to experience it when attempting to apply AI to Acceptance Criteria that testers have to test against.

The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.

I can churn out ACs quicker, but if I just move on to the next thing as if they're 'done', then quality is going to decline sharply. I'm currently entirely rewriting the first set of ACs it generated because the base premise was off.

This is both a prompt-engineering problem and an availability-of-enough-context documentation problem, but both of those involve fairly long learning curves. Not many places do knowledge management very well, so the requisite base information just may not be complete enough, and one missing 'patch' can very much change a lot of contexts.

nevdka•50m ago
I work with Australian tax - lots of regulatory complexity, and the documentation often assumes the reader is a CPA. I've got decent results by telling the chatbot to ask questions instead of making assumptions, and then grilling it to find edge cases.

I did a live demo in front of the CPAs, using their documentation, and Claude asked clarification questions they hadn't thought of and exposed gaps in the old manual processes.
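The ask-questions-first pattern described here can be sketched as a system prompt, assuming the common role/content chat-message format; the wording below is illustrative, not the commenter's actual prompt:

```python
# Hypothetical sketch of the "ask questions instead of assuming" prompting
# pattern. The prompt text and domain framing are illustrative assumptions.
SYSTEM_PROMPT = (
    "You are assisting with Australian tax rules using the provided "
    "documentation. Before answering, list every assumption you would "
    "otherwise make as a clarifying question, and wait for answers. "
    "After answering, enumerate edge cases your answer might not cover."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat payload in the common role/content message format."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

The "grilling it to find edge cases" step is then just follow-up user turns appended to the same message list.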

bugbuddy•1h ago
The most important lesson from Gen AI is that it does not matter how much money you have, make, lose, or spend because in the long run everyone is…

So the logical next step is to focus on Biological Immortality and, short of that, Digital Immortality. Godspeed everyone.

skybrian•1h ago
Sometimes people make an assumption that every codebase has a team (or at least a single person) devoted to maintaining it. Companies with large codebases may not be able to afford that, or don't think it's worthwhile. You could have dozens or hundreds of libraries and only a few maintainers. The libraries are effectively "done" until something comes up. Work on them is interrupt-driven.

In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."

If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.

Also, being able to ask an AI questions about an unfamiliar library might actually help?

01100011•55m ago
These sorts of articles just seem silly to me. Use AI where it helps you and avoid it where it doesn't. That dividing line may change week to week.

I think it's great for writing tests and sanity-checking changes, but I wouldn't let it write core driver code (I'm a systems programmer, so YMMV). Maybe in a month I'll think differently.

Eufrat•28m ago
Using a tool as a tool is hard when the market is telling you to use it in everything as if it’s the new sliced bread.
darth_avocado•51m ago
> the question becomes how teams will manage cognitive debt

That’s the neat trick kiddo, they won’t. Across the industry, the messaging is clear: use AI and be more productive. Management is salivating at the idea of getting rid of people and keeping a higher share of profits for themselves. Most ICs I talk to are increasingly expressing the feeling of burnout, fear of losing jobs and resentment that AI is being pushed the way it is being pushed. I have more than a few conversations where people have clearly expressed that they are mostly focused on keeping their jobs. They don’t care about cognitive debt and some are looking forward to the time when the debt comes due.

It is depressing, but it is the reality.

hogehoge51•47m ago
Unfortunately I think Cognitive Debt is the cry of the software craftsperson who thought they were an Engineer. Upon working with the agent subcontractor, the agent factory, the agent part vendor, they approached it as a craft; they found themselves wanting to walk through the offices of the subcontractor reviewing screens, inspect pieces at the factory, and get the internal design for the parts they ordered. It's natural to get overwhelmed: this is why Engineers have contracts, specifications, design drawings, datasheets, and characterization data, handed over at clearly defined boundaries of abstraction, accepting the other side may be a black box.

Of course, we have had compilers and tooling, but those are the pencil and drafting board of the draftsperson. An ecosystem of packages, dependencies and APIs has evolved, but those are often just spells the software magician invokes after reading the spellbook^H^H^H^H^H^H^H^H^H stackoverflow^H^H^H^H^H^H^H^H^H^H^H^H^H API documentation.

We are going to need to build a new set of boundaries and abstractions with new handover protocols to manage this mess.

jdw64•39m ago
Personally, my observation is that “cognitive debt” feels closer to a tool for selling essays than a precise engineering concept.

Lack of documentation, failed onboarding, poor architectural understanding, missing tests, review fatigue — if all of these are simply grouped together as “cognitive debt,” isn’t that just a failure to build a proper workflow?

The scope is too broad. It reminds me of Stepanov, the creator of the STL, saying that if everything is an object, then nothing is.

When an abstraction tries to cover too many things, that abstraction inevitably fails.

The way AI specifically amplifies this problem is through the difference between direct work and indirect work. The core issue is that “it works” can easily create the illusion that “I understand it.”

Another thing I felt while reading this essay is that it almost seems to go against the direction of modern software engineering. Once software grows beyond a certain size, it is already impossible for anyone except perhaps the original designer to understand the entire system. The goal is not for everyone to understand everything.

The real goal is to make local changes safely, and to ensure that the system keeps running without major disruption when one replaceable part — including a person — leaves.

At this point, many things being described in the industry as “cognitive debt” look to me like rhetorical tools for selling essays.

Reading this, I even wondered: if I write about trendy terms like cognitive debt or spec-driven development on my own blog, will people pay more attention?

To be honest, spec driven development has a similar issue. When you go from a specification down into implementation, information loss is inevitable. LLMs cannot fully solve that. In the end, a human supervisor still has to iterate several times and tune the result precisely. The real question should be: how far down should the specification go? In other words, at what local scope does it become faster for a human programmer to modify the code directly than to keep steering the AI-generated code?

But that discussion is often missing.

As people sometimes say, “when you start talking about Agile, it stops being agile.” In the same way, I think the “cognitive debt” frame may be a flawed abstraction of the current phenomenon.

The moment a living practice is nominalized, packaged, and turned into a consulting product, it loses its original dynamism and context-dependence, becoming a dead template.

It puts various discomforts that emerged after AI adoption — review burden, lack of understanding, fatigue — into a single box.

Then it attaches the economic metaphor of “debt” to emphasize the seriousness of the problem, and subtly injects the normative idea that “this must eventually be repaid.”

pizzly•36m ago
Cognitive debt existed well before LLMs became mainstream. Technical people got good at their jobs and were then promoted to management. Over time they lost their technical abilities, but if they were good managers they kept up to date with the technological landscape and used their engineering thinking to ensure that the people below them worked at their optimum efficiency to achieve the company's goals.

Now, we all know horrible managers who didn't keep up to date or use their thinking. This will happen with AI usage too. What's more, we are expecting people who are engineers to have a manager's mindset (by managing AI agents, product requirements, etc.). Many engineers are horrible at this and have no desire or ability to become a manager. This is why they went into engineering in the first place.

casualscience•27m ago
While this isn't a unique perspective, I think it's wild that more people don't understand this. What's happened is that everyone is being "promoted" to a staff+ level engineer, and they're realizing the realities of that situation.

The funny part is that these are the same people who are upset that these folks up the food chain "do nothing".

yuedongze•9m ago
This, and I would even say we are promoting people to be kings and queens. I'm afraid AI will amplify our worst parts because they are ultimately sycophants. I've heard so many things about AI enabling a single person to run a billion dollar business. But I believe without the right mindset/discipline, a person cannot go too far with any technology.
nik282000•28m ago
> the accumulated gap between a system’s evolving structure and a team’s shared understanding of how and why that system works and can be changed over time

That just sounds like everyone is going to be management. Blindly setting goals and demanding features of a black box, formerly the development team, soon to be 'AI' agents.

anilgulecha•19m ago
I mean, that's already happened. Everyone is expected to be a manager of agents. Anyone not doing this is programming as a hobby.
saltyoldman•19m ago
It is a bit surprising sometimes when you vibe-code an AI tool and it ends up doing a bunch of regular expressions to "detect the user's intention". Instead of the code using an LLM to see which tool to run, or whether the user wants to see the SQL or the code, you end up seeing .*SQL or \i^build (or some crazy regex). It really likes to use a lot of regex when it's building AI tools.
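A toy illustration of the regex-style "intent detection" the commenter describes; the patterns and intent names here are hypothetical examples, not taken from any real tool:

```python
import re

# Toy sketch of regex-based intent routing, the pattern a vibe-coded tool
# tends to produce: brittle keyword matches, with an LLM only as fallback
# (the commenter suggests the LLM should be doing the routing instead).
INTENT_PATTERNS = [
    (re.compile(r"\bsql\b", re.IGNORECASE), "show_sql"),
    (re.compile(r"^build\b", re.IGNORECASE), "run_build"),
]

def detect_intent(message: str) -> str:
    """Return the first matching intent, else defer to an LLM router."""
    for pattern, intent in INTENT_PATTERNS:
        if pattern.search(message):
            return intent
    return "fallback_llm"
```

The brittleness is easy to see: "build" only triggers at the start of the message, and any phrasing the patterns don't anticipate silently falls through.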
0xbadcafebee•10m ago
[delayed]
