
What I'm Hearing About Cognitive Debt (So Far)

https://margaretstorey.com/blog/2026/02/18/cognitive-debt-revisited/
42•raphaelcosta•1h ago

Comments

gdulli•50m ago
> High-performing teams have always managed technical debt intentionally.

The ability to generate code has seemingly shifted what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.

afro88•36m ago
I find it disconcerting that an article about cognitive debt contains many "tells" of being written by AI.
chromacity•34m ago
Best of all, the article is just a summary of a HN thread...
protocolture•24m ago
It's more and more clear to me that AI is a force multiplier for small teams and hobby workflows, but it seems to have diminishing returns for larger teams.
melvinroest•13m ago
How so? Could you give some specific examples?
melvinroest•18m ago
Edit: yep, I really do type this much. I'm a bit of a "thinking out loud" person.

> Cognitive Debt, Like Technical Debt, Must Be Repaid

In quite a few circumstances, cognitive debt doesn't need to be repaid in full. On multiple projects I've found that a certain direction isn't the one I want to go in. But I only found that out after fully fleshing it out with Claude Code and then, by using my own app, realizing that certain things I thought would work don't.

For example, I created library.aliceindataland.com (a narrative-driven SQL course). After a while, I noticed that the grading scheme was off and needed to be rewritten. The same goes for how I wanted to implement the cheatsheet, or for lessons not following the standard format. Of course, I need to understand the new code, but I don't need to understand the old code.

With other small pieces of code, I just don't really need to know how things work because they're that simple. For example, every 5 minutes I log which wifi network I'm connected to. It's mostly useful for knowing whether I went to the office that day. A Python script records the data, and when I look at it, I can recognize that it's correct. Doing it this way is sure a lot faster than active recall.
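The commenter doesn't share the script, but a minimal sketch of that kind of logger might look like the following. The SSID commands (`nmcli` on Linux, `networksetup` on macOS), the interface name `en0`, and the log path are all assumptions, not details from the comment:

```python
import csv
import subprocess
import sys
from datetime import datetime
from pathlib import Path

LOG_PATH = Path.home() / "wifi_log.csv"  # assumed log location

def parse_ssid(output: str, platform: str) -> str:
    """Pull the SSID out of the raw command output."""
    if platform == "darwin":
        # networksetup prints e.g. "Current Wi-Fi Network: OfficeNet"
        _, _, name = output.partition(": ")
        return name.strip()
    # nmcli -t prints lines like "yes:OfficeNet" / "no:Neighbor"
    for line in output.splitlines():
        active, _, ssid = line.partition(":")
        if active == "yes":
            return ssid
    return ""

def current_ssid() -> str:
    """Return the SSID of the current wifi network, or "" if offline."""
    if sys.platform == "darwin":
        cmd = ["networksetup", "-getairportnetwork", "en0"]
    else:
        cmd = ["nmcli", "-t", "-f", "active,ssid", "dev", "wifi"]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True,
                             timeout=10).stdout
    except (OSError, subprocess.TimeoutExpired):
        return ""
    return parse_ssid(out, sys.platform)

def log_row(ssid: str, now: datetime) -> list[str]:
    """One CSV row: timestamp plus network name (or "offline")."""
    return [now.isoformat(timespec="seconds"), ssid or "offline"]

if __name__ == "__main__":
    # Schedule this from cron/launchd every 5 minutes.
    with LOG_PATH.open("a", newline="") as f:
        csv.writer(f).writerow(log_row(current_ssid(), datetime.now()))
```

The point of the anecdote survives in the sketch: the CSV output is trivially checkable by eye, so there is little cognitive debt to repay even if you never read the plumbing again.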

At work, I've had similar experiences. At my previous job I built SEO and SEA tools for marketing experts. I remember creating a whole app that gave experts insights into SEO that Ahrefs and similar sites don't, since it was tailored to the data of the company I worked at. The feedback I got was basically: the data is great, the insights are necessary, but the way the app works is unusable for us. I was a bit perplexed, as I personally didn't find it that complicated. But I also know that I'm not the one using it. Then I created a second version, and that was far more usable. The second version assumed a completely different front-end app and front-end architecture, though. All the cognitive debt of V1? No payback needed.

The reasons this is the case, as it seems to me, fall under a few categories:

1. Experimenting with technologies. If you have certain assumptions about how a technology works but it turns out you're wrong, or you learn through the process that an adjacent technology works much better, then you need to redo it. Back when coding by hand was the norm, I had this with a collaborative drawing project called Doodledocs (2019). I didn't know whether browsers supported pressure sensitivity or how easy it would be to implement. It took a few programming experiments to find out.

2. It's a small and simple script, not much more to it.

3. Experimenting with usability. A lot of the time, we don't know how usable our app is. In my experience, that's either because (1) it's a hobby project or (2) the UX people were fired years ago. In those cases, more often than not, UX becomes an afterthought. But with LLMs, a 95% working version of a greenfield project is usually delivered within a week, and that version is an amazing high-fidelity interaction prototype. Once you do that for a few iterations, you understand what you really need. And once you understand what you really need, you can start repaying the cognitive debt.

I've found it's usually category 3, sometimes 2 and rarely 1.

BLKNSLVR•18m ago
Just reading the first paragraph, I've already started to experience this when attempting to apply AI to the Acceptance Criteria that testers have to test against.

The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.

I can churn out ACs quicker, but if I just move on to the next thing as if they're 'done', quality is going to decline sharply. I'm currently rewriting the first set of ACs it generated from scratch because the base premise was off.

This is both a prompt-engineering problem and an availability-of-enough-context documentation problem, but both involve a fairly long learning curve. Not many places do knowledge management well, so the requisite base information may simply not be complete enough, and one missing 'patch' can change a lot of contexts.

bugbuddy•15m ago
The most important lesson from Gen AI is that it does not matter how much money you have, make, lose, or spend because in the long run everyone is…

So the logical next step is to focus on Biological Immortality and, short of that, Digital Immortality. Godspeed, everyone.

skybrian•12m ago
Sometimes people make an assumption that every codebase has a team (or at least a single person) devoted to maintaining it. Companies with large codebases may not be able to afford that, or don't think it's worthwhile. You could have dozens or hundreds of libraries and only a few maintainers. The libraries are effectively "done" until something comes up. Work on them is interrupt-driven.

In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."

If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.

Also, being able to ask an AI questions about an unfamiliar library might actually help?

Making the Marlboro Man

https://quartr.com/insights/edge/making-the-marlboro-man
1•_vaporwave_•5m ago•0 comments

Safari 26.4 Supports WebTransport

https://webkit.org/blog/17862/webkit-features-for-safari-26-4/
2•nazcan•7m ago•1 comments

Fiddler sues Google after AI Overview wrongly claimed he was a sex offender

https://www.theguardian.com/music/2026/may/05/canadian-ashley-macisaac-fiddler-musician-singer-so...
4•prawn•9m ago•2 comments

Pennsylvania Health Insurance independent research report for Q2 May 2026

https://archive.org/details/pa-health-insurance-market-2026-final.docx-1
2•Steaglsz•10m ago•0 comments

Commodity Markets Outlook [pdf]

https://thedocs.worldbank.org/en/doc/f3138644a1e8e2bb631399ae11d6c408-0050012026/original/CMO-Apr...
1•gmays•11m ago•0 comments

Adobe's 'Modern' User Interface Is Just Webpages – Pixel Envy

https://pxlnv.com/linklog/adobe-modern-user-interface/
2•tambourine_man•12m ago•0 comments

Apple Explores Using Intel and Samsung to Build Main Device Chips in the US

https://www.bloomberg.com/news/articles/2026-05-05/apple-explores-using-intel-and-samsung-to-buil...
6•tambourine_man•14m ago•0 comments

A constrained approach to coding agents

https://github.com/brainless/nocodo
2•brainless•21m ago•1 comments

Ask HN: Best Embedding Models?

3•devstein•22m ago•0 comments

Biscuit

https://github.com/yattsu/biscuit
3•unixfg•27m ago•0 comments

An Analysis of the PocketOS Debacle

https://thedailywtf.com/articles/empty-pockets
2•pseudohadamard•29m ago•1 comments

Musk Settles SEC Suit for Underpaying Twitter Investors by $150M for Just $1.5M

https://www.law.com/corpcounsel/2026/05/04/musk-settles-sec-suit-accusing-him-of-underpaying-twit...
4•1vuio0pswjnm7•30m ago•2 comments

The 90-year-old idea behind JEPA models: Canonical Correlation Analysis (CCA)

https://shonczinner.github.io/posts/embedding-prediction/
2•kjshsh123•31m ago•0 comments

Meta, TikTok Recv Personal Data from Health Exchanges Alarming Privacy Experts

https://www.bloomberg.com/features/2026-healthcare-advertising-trackers-privacy/
4•1vuio0pswjnm7•32m ago•0 comments

An LLM agent that runs on any Linux box

https://getclaw.site/#demo
3•kilian-ai•36m ago•0 comments

Continually improving our agent harness

https://cursor.com/blog/continually-improving-agent-harness
2•gmays•37m ago•0 comments

Show HN: A minimalist personal homepage I designed from scratch

https://olzhasshaikenov.com/
2•olzhas23•41m ago•0 comments

Tokens and Dreams

https://charlesleifer.com/blog/tokens-and-dreams/
4•xngbuilds•41m ago•0 comments

AI and the Danger of Cognitive Surrender

https://www.economist.com/business/2026/04/30/ai-and-the-danger-of-cognitive-surrender
5•1vuio0pswjnm7•45m ago•1 comments

Linux, Windows or macOS: Which Operating System to Use in 2026?

https://www.lucasaguiar.xyz/posts/linux-windows-macos-qual-usar-2026/
2•isfttr•51m ago•3 comments

File Approved – File approvals without the back-and-forth

https://fileapproved.com
2•vannventures•52m ago•0 comments

Echon – A Discord alternative built in Tauri/Rust

https://echon-voice.com
2•highest678•57m ago•0 comments

The Art of Operating Systems (2019)

https://denninginstitute.com/pjd/ArtOS2/
4•aragonite•1h ago•0 comments

Amp's GPT 5.5 Model Analysis

https://ampcode.com/models/gpt-5.5
3•goranmoomin•1h ago•0 comments

Pulitzer Prize Winner in International Reporting

https://www.pulitzer.org/winners/dake-kang-garance-burke-byron-tau-aniruddha-ghosal-and-yael-grau...
19•jay_kyburz•1h ago•2 comments

The artful way of the stack-machine

https://www.pepnom.org/post/post.5.may.2026.html
2•mjbq•1h ago•1 comments

Why AI Agents Need Proof Chains, Not Just Logs

https://github.com/rodriguezaa22ar-boop/atlas-trust-infrastructure
3•astra_omnia•1h ago•0 comments

Process-Level Reward Modeling for Agentic Data Analysis

https://arxiv.org/abs/2604.24198
3•gmays•1h ago•0 comments

You can get dragged into a police investigation by proximity alone – for now

https://www.theverge.com/report/919664/chatrie-v-united-states-supreme-court-arguments-fourth-ame...
3•Cider9986•1h ago•0 comments
