
Against Query Based Compilers

https://matklad.github.io/2026/02/25/against-query-based-compilers.html
1•birdculture•45s ago•0 comments

Exostosis

https://en.wikipedia.org/wiki/Exostosis
1•treetalker•2m ago•0 comments

Flip Distance of Convex Triangulations and Tree Rotation Is NP-Complete

https://arxiv.org/abs/2602.22874
1•nill0•3m ago•0 comments

Show HN: GitShow Repo Showroom – a landing page for any GitHub repo

1•ofershap•4m ago•0 comments

Rare Roman lead ingots found by metal detectorists in Ceredigion

https://www.bbc.com/news/articles/c8xy52wndx1o
1•speckx•4m ago•0 comments

Premier League to launch direct streaming in Singapore

https://www.espn.com/soccer/story/_/id/48044798/premier-league-launch-direct-streaming-premier-le...
1•woldemariam•5m ago•0 comments

Dave's Book Review for the Art of Doing Science and Engineering

https://ratfactor.com/b/the-art-of-doing-science-and-engineering
1•zdw•5m ago•0 comments

Show HN: Lightweight, S3-compatible object storage server with built-in web dash

https://github.com/eniz1806/VaultS3
1•open_source_new•6m ago•0 comments

CSP for Pentesters: Understanding the Fundamentals

https://www.kayssel.com/newsletter/issue-20/
1•zdw•9m ago•0 comments

Project Silica's advances in glass storage technology

https://www.microsoft.com/en-us/research/blog/project-silicas-advances-in-glass-storage-technology/
1•rmast•11m ago•0 comments

LLMs killed the privacy star, we can't rewind, we've gone too far

https://www.theregister.com/2026/02/26/llms_killed_privacy_star/
1•speckx•12m ago•0 comments

Video Conferencing with Postgres

https://planetscale.com/blog/video-conferencing-with-postgres
2•thunderbong•12m ago•0 comments

Small Press Tycoon: Independent publishing simulator

https://www.mymik.net/spt.html
1•idempotent_•14m ago•0 comments

Show HN: High-Fidelity Matching via Automated Cognitive Pattern Coding (v138)

https://match-1067501793122.us-central1.run.app/
1•elite-club•14m ago•0 comments

We measured 62% token reduction

https://github.com/base76-research-lab/cognos-proof-engine
1•base76•14m ago•1 comments

On How Mailchimp Suspended Our Account

https://radekmie.dev/blog/on-how-mailchimp-suspended-our-account/
3•radekmie•18m ago•0 comments

From Wi‑Fi Access to Root: Reverse Engineering a $50 CarPlay Dongle

https://medium.com/@louis-e/from-wi-fi-access-to-root-reverse-engineering-a-50-carplay-dongle-a3f...
1•rmast•19m ago•0 comments

S2S – Physics-certified motion data for Physical AI (7 biomechanical laws)

https://github.com/timbo4u1/S2S
1•s2sphysical•21m ago•0 comments

AI is rewiring how the best Go players think

https://www.technologyreview.com/2026/02/27/1133624/ai-is-rewiring-how-the-worlds-best-go-players...
2•Brajeshwar•23m ago•0 comments

Organism-wide cellular dynamics and epigenomic remodeling in mammalian aging

https://www.science.org/doi/10.1126/science.adw6273
3•Brajeshwar•23m ago•0 comments

Suspected insiders make over $1.2M by betting on U.S.'s Iran strike

https://www.coindesk.com/markets/2026/02/28/suspected-insiders-make-over-usd1-2-million-on-polyma...
5•cdrnsf•24m ago•0 comments

Simplifying OpenClaw: I built a library for community workflows

https://workflaw.ai
2•l-fy•24m ago•1 comments

RGGrid – A Workflow-Ready React Data Grid

https://www.npmjs.com/package/@rg-grid/rg-grid
1•rajugundu25•25m ago•0 comments

Show HN: XFolder – A powerful multi-pane file manager for Mac

https://github.com/zebrapixel/XFolder
1•dreampixel•26m ago•0 comments

Neanderthal males and human females had babies together, ancient DNA reveals

https://www.washingtonpost.com/science/2026/02/26/neanderthal-mating-humans/
3•bookofjoe•27m ago•1 comments

Show HN: Nugx.org – A Fresh Nuget Experience

https://nugx.org
3•plsft•28m ago•1 comments

The Pentagon Wanted a Master Key. Anthropic Said No. That Is Not the Story

https://github.com/AionSystem/AION-BRAIN/blob/main/articles%2FMEDIUM%2FSALMON%27S-FRIDAY-REPORTS%...
1•sheldonksalmon•28m ago•0 comments

Moldova broke our data pipeline

https://www.avraam.dev/blog/moldova-broke-our-pipeline
3•almonerthis•28m ago•1 comments

Paramount Beat Out Netflix, Won Warner Bros. and Will Change Hollywood Forever

https://variety.com/2026/film/news/paramount-warner-bros-deal-explained-netflix-ellison-1236674841/
1•verganileonardo•28m ago•0 comments

Show HN: Using a mobile LLM app to safely operate a desktop computer

https://github.com/ruikhu007/action-printer
1•Ruikhu•30m ago•2 comments

Cognitive Debt: When Velocity Exceeds Comprehension

https://www.rockoder.com/beyondthecode/cognitive-debt-when-velocity-exceeds-comprehension/
188•pagade•2h ago

Comments

bwestergard•1h ago
This thread is closely related: https://news.ycombinator.com/item?id=47194847

"The right amount of AI is not zero. And it’s not maximum."

tomwojcik•1h ago
Author from the other thread here. I'm surprised to see so many similarities, but in good faith I'll assume that it's just a coincidence because many devs start to notice the upcoming problems.
penr0se•1h ago
I appreciate your good faith, but I tried pasting the first ~7k characters of this article (not yours!) into an AI detector (GPTZero), and it's "highly confident that this text was AI generated" with a probability of 100%
josefrichter•1h ago
It feels like it's Saturday and HN is full of scared blog posts.
soared•1h ago
The organizational memory and on-call debugging sections allude to this, but there are significant effects on other parts of the organization. For example, if I work in product support and a customer asks about a product's behavior, it becomes much more challenging to find answers if documentation is sparse (or AI-written), engineers don't immediately know the basics of the code they wrote, etc. Even if documentation is great and engineers can discuss their code, the pace of shipping updates can be a huge challenge for other teams to keep up with.
gusmally•1h ago
With the free time gained from not manually writing code, documentation should be part of the workflow. I should start doing this.
samrus•1h ago
Management will demand that free time go to more features. That's the problem. Time spent understanding the feature (either while writing it or documenting it) is not valued, only time spent making it. So when making and understanding are decoupled, management will demand you spend all your time making rather than understanding. They'll just tell you to have the LLM write the docs
supriyo-biswas•1h ago
If that is what management wants I’m more than happy to give it to them.
add-sub-mul-div•1h ago
With the free time gained from the advent of fast food people "should" have started exercising more, but they didn't. As disciplined as you yourself may be, the typical person is going to use AI to expend minimal effort and go home at 4:55 pm.
forgetfreeman•1h ago
"free time gained" lol in what world is that ever a thing?
lolive•1h ago
I have been at a big company for 4 years, and following the zillions of projects going on here and there, and how they interact [nicely or not], has become a job in itself.

Very disturbing, as I thought my technical skills would help me clarify the global picture. And exactly the contrary is happening.

soared•1h ago
I was at a company with one (complex) product and joined a company 10x larger with 50x as many products - there is zero chance anyone could understand the global picture, though some of us are expected to somewhat grasp it. Quite the challenge; it would be truly impossible with LLMs
monkeydust•1h ago
The way that people interact inside knowledge companies to get things done is itself the fabric of how they operate. A recent SaaS CEO piece here calls it the 'language games'.

https://ionanalytics.com/wp-content/uploads/2026/02/The_Wron...

andsoitis•1h ago
Sometimes you have to go slow to go fast.
chrisweekly•1h ago
"Slow is smooth, and smooth is fast."
andai•1h ago
Skill is stored in the fingers!
pajtai•1h ago
The whole premise of the post, that coders remember what and why they wrote things from 6 months ago, is flawed.

We've always had the problem that understanding while writing code is easier than understanding code you've written. This is why, in the pre-AI era, Joel Spolsky wrote: "It's harder to read code than to write it."

Retric•1h ago
Harder here doesn’t mean slower. Reading and understanding your own code is way faster than writing and testing it, but it’s not easy.

AI tools don't prevent people from understanding the code they are producing, as it wouldn't actually take that much time, but there's a natural tendency to avoid hard work. Of course, AI code is generally terrible, making the process even more painful, but you were just looking at the context that created it, so you have a leg up.

forgetfreeman•1h ago
Certainly AI tools don't prevent anything per se, that's management's job. Deadlines and other forms of time pressure being what they are it's trivial to construct a narrative where developers are producing (and shipping) code significantly faster than the resulting codebase can be fully comprehended.
empath75•1h ago
I have been laboriously going through the process of adding documentation and comments to code, explaining the purpose and all the interfaces we expect, and adding tests, for the purpose of making it easier for Claude to work with. But it also makes it easier for me to work with.

Claude often makes a hash of our legacy code, and then I go look at what we had there before it started and think "I don't even know what I was thinking, why is this even here?"

softwaredoug•1h ago
If I’m learning for the first time, I think it matters to hand code something. The struggle internalizes critical thinking. How else am I supposed to have “taste”? :)

I don't know if this becomes prod code, but I often feel the need to build a solution step by step in something like a Jupyter notebook to ensure I understand.

Of course I don’t need to understand most silly things in my codebase. But some things I need to reason about carefully.

Vexs•46m ago
Almost anything I write in Python I start in Jupyter, just so I can roll it around and see how it feels - which determines how I build it out and, to some degree, how easy it is to fix issues later on.

With llm-first coding, this experience is lost

senko•1h ago
I recently did some work on a codebase I last touched 4 years ago.

I didn't remember every line but I still had a very good grasp of how and why it's put together.

(edit: and no, I don't have some extra good memory)

copperx•42m ago
Lucky you. I always go "huh, so I wrote this?". And this was in the pre-AI era.
seba_dos1•35m ago
These feelings aren't mutually exclusive. I'm often like "I have no memory of this place" while my name stares at me from git blame, but that doesn't mean my intuition of how it's structured isn't highly likely to be right in such cases.
SoftTalker•38m ago
I find this to be the case if it was something I was deeply involved with.

Other times, I can make a small change to something that doesn't require much time, and once it's tested and committed, I quickly lose any memory of even having done it.

senko•26m ago
Yeah I did pour a lot of sweat and thinking into that codebase all those years ago.

When I do a drive-by edit, I probably don't remember it in a week.

Which is why the "cognitive debt" from the article is relevant, IMHO. If I just thoroughly review the plan and quickly scan the resulting code, will that have a strong enough imprint on my mind over time?

I would like to think "yes"; my gut is telling me "no". IMHO the LLMs are now "good enough" for coding. These are hard questions we'll have to grapple with in the next year or two (in the context of AI-assisted software development).

iainctduncan•1h ago
Oh come on, that is complete nonsense. I can reunderstand complicated code I wrote a year ago far, far faster than complicated code someone else wrote. Especially if I also wrote tests, accompanying notes, and docs. If you can't understand your old code when you come back to it... including looking through your comments and docs and tests... I'm going to say you're doing it wrong. Maybe it takes a while, but it shouldn't be that hard.

Anyone pretending gen-ai code is understood as well as pre-gen-ai, handwritten code is totally kidding themselves.

Now, whether the trade off is still worth it is debatable, but that's a different question.

Vexs•45m ago
I don't remember exactly what I wrote and how the logic works, but I generally remember the broad flow of how things tie together, which makes it easier to drop in on some aspect and understand where it is code-wise.
seba_dos1•44m ago
I juggle between various codebases regularly, some written by me and some not, often come back to things after not even months but years, and in my experience there's very little difference in coming back to a codebase after 6 months or after a week.

The hard part is to gain familiarity with the project's coding style and high level structure (the "intuition" of where to expect what you're looking for) and this is something that comes back to you with relative ease if you had already put that effort in the past - like a song you used to have memorized in the past, but couldn't recall it now after all these years until you heard the first verse somewhere. And of course, memorizing songs you wrote yourself is much easier, it just kinda happens on its own.

TallGuyShort•43m ago
This is also an area where AI can help. Don't just tell it to write your code. Before you get going, have it give you an architectural overview of certain parts you're rusty on, have it summarize changes that have happened since you were familiar, have it look at the bigger picture of what you're about to do and have it critique your design. If you're going to have it help you write code, don't have it ONLY help you write code. Have it help you with all the cognitive load.
SpicyLemonZest•42m ago
I’m very confused by this statement. I routinely answer questions about why we wrote the code we wrote 6 months ago and expect other people to do the same. In my mind that skill is one of the key differences between good and bad developers. Is it really so rare?
maqp•38m ago
A lot of bug fixing relies on some mental model of the code. It manifests as rapid "Oh, 100% I know what's causing this" eureka moments. With generated code, that part's gone for good. The "black box written by a black box" is spot on; you're completely dependent on an LLM to maintain the codebase. Right now it's not a vendor-lock thing, but I worry it's going to be a monopoly thing. There are going to be 2-3 big companies at most, and with the bubble eventually bursting and investor money drying up, running agents might get a lot more expensive. Who's going to propose the rewrite of thousands of LLM-generated features, especially after the art of programming dies along with the current seniors, who burn out or retire?
red_admiral•34m ago
Even in the past, it was an optimistic assumption that your engineers would still be working for you in a year's time. You need some kind of documentation / instructive testing anyway. And maybe more than one person who understands each bit of the system (bus factor).
bikelang•22m ago
It’s hard to keep the minutiae in your memory over a long period of time - but I certainly remember the high level details. Patterns, types, interfaces, APIs, architectural decisions. This is why I write comments and have thorough tests - the documentation of the minutiae is critical and gives guardrails when refactoring.

I absolutely feel the cognitive debt with our codebase at work now. It’s not so much that we are churning out features faster with ai (although that is certainly happening) - but we are tackling much more complex work that previously we would have said No to.

yakattak•17m ago
The individual details, probably not. But the high level/broad strokes I definitely remember 6+ months later.
Thanemate•14m ago
OP talks about the increased frequency of such events happening, and not that this is a new problem.

For example, handwritten code also tended to be reviewed manually by other members of the team, so the probability of someone recalling it was higher than with, say, LLM-generated code that was also LLM-reviewed.

sghiassy•1h ago
Very much feel this.

I wrote a SaaS project over the weekend. I was amazed at how fast Claude implemented features: one sentence turned into a TDD that looked right to me, and the features worked.

But now, 3 weeks later, I only have the outlines of how it works, and regaining context on the system sounds painful.

In projects I hand-wrote, I could probably still locate major files and recall system architectures after years of being away.

baumy•1h ago
Management where I work is currently touting a youtube video from some influencer about the levels of AI development, one of the later ones being "you'll care that it works, not how".

We are all supposed to be advancing through these levels. Moving at a pace where you actually understand the system you're responsible for is now considered a performance issue. But also, we're "still held responsible for quality".

Needless to say I'm dusting off my resume, but I'm sure plenty of other companies are following the same playbook.

gusmally•1h ago
> When circumstances eventually require that understanding, when something breaks in an unexpected way or requirements change in a way that demands architectural reasoning, the organization discovers the deficit.

Maybe it's because I work on such a small team on a still-starting project, but even with the chaos of LLM-generated code, I can't imagine a case like the above that the LLMs couldn't also address.

Great read though and I appreciated the article.

youknownothing•1h ago
Have you worked in a 10-15-year-old codebase? Because I honestly doubt that LLMs can cope with that.
aaronrobinson•1h ago
Why wouldn't you ask AI to explain the architecture and code? It's much better and more efficient than any human.
esafak•1h ago
Just read every line of the generated code and make sure it is as clear and good as possible. If you can't understand it when it's fresh out of the oven you and your coworkers won't tomorrow, either. This verification places a natural limit on the rate of code you can safely generate. I suppose you could reduce that to spot checks and achieve probabilistic correctness but I would not venture there for things that matter.
somebehemoth•1h ago
Because lines of code interact with each other. Understanding what one line does in isolation does not always show the rough edges that are found when code interacts. The challenge is seeing the forest instead of individual trees.
ford•1h ago
Good engineering has always been about minimizing the amount of effort it takes for someone to understand and modify your code. This is the motivation for good abstractions & interfaces, consistent design principles, single-responsibility methods without side-effects, and all of the things we consider "clean code".

These are more important than ever, because we don't have the crutch of "Teammate x wrote this and they are intimately familiar with it" which previously let us paper over bad abstractions and messy code.

This is felt more viscerally today because some people (especially at smaller/newer companies) have never had to work this way, and because AI gives us more opportunity to ignore it

Like it or not, the most important part of our jobs is now reviewing code, not writing it. And "shelved" ideas will now look like unmerged PRs instead of unwritten code

avaer•1h ago
> The engineer who pauses to deeply understand what they built falls behind in velocity metrics.

This is the most insidious part. It's not even that bad code gets deployed. That can be fixed and hopefully (by definition) the market weeds that out.

The problem is that the market doesn't seem to operate like that, and instead the engineer who cares loses their job because they're not hitting the metrics.

xeromal•55m ago
Of course, there are counterexamples, but there's a disconnect between the production of something and the selling of it, with almost opposing goals. Given unlimited money and time, many engineers, artists, etc. will write and rewrite something to perfection. Constraints are needed because the world doesn't operate in a vacuum, and unless we all live in a utopia, we have to compete for customers and resources.

Constraints often produce better results. Think of Duke Nukem Forever and how long it took them to release a nothingburger.

I just watched a show called A Knight of the Seven Kingdoms; the showrunners were given a limited budget compared to their cousin shows, and it resulted in a better product.

Sometimes those metrics keep things on the rails

bob1029•1h ago
I think stronger determinism could dramatically improve the situation here. Right now, I don't know if the same model within the same hour will produce consistent output given identical prompts and low temperature.

I have no clue what my compiler is emitting every time I hit F5. I don't need to comprehend IL or ASM because I have a ~deterministic way to produce this output from a stable representation.

Writing a codebase as natural language is definitely feasible, but the way we're going about it right now is not going to support this. The vast majority of LLM coding is coming out of ad-hoc human-in-the-loop sessions or stochastic agent swarms. If we want to avoid the comprehension gap, we need something closer to a compiler & linker that operates over a bucket of version-controlled natural-language documents.
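A rough sketch of what such a "compiler over version-controlled natural-language documents" could look like: key generated artifacts on a digest of the spec files plus the model version, so identical inputs reproduce identical output. Everything here is hypothetical (`SpecCompiler`, the `generate` callback); a real setup would additionally need a seeded, temperature-0 model for the generation step itself to be repeatable.

```python
import hashlib
from pathlib import Path


def spec_digest(spec_files: list[Path]) -> str:
    """Hash the version-controlled natural-language documents, in a stable order."""
    h = hashlib.sha256()
    for f in sorted(spec_files):
        h.update(f.read_bytes())
    return h.hexdigest()


class SpecCompiler:
    """Cache generated code keyed by (spec digest, model id), so the same
    specs compiled with the same model always yield the same artifact."""

    def __init__(self, generate, model_id: str):
        self.generate = generate  # hypothetical LLM call; assumed deterministic
        self.model_id = model_id
        self.cache: dict[tuple[str, str], str] = {}

    def compile(self, spec_files: list[Path]) -> str:
        key = (spec_digest(spec_files), self.model_id)
        if key not in self.cache:
            # Only regenerate when the specs or the model actually change.
            self.cache[key] = self.generate(spec_files)
        return self.cache[key]
```

The point of the sketch is the F5-like property: touching nothing reproduces the same build, and a one-line spec edit is a visible, diffable trigger for regeneration.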

samrus•1h ago
Great article. I agree with the argument.

But to offer a counterargument: would the same thing not have happened with the rise of high-level languages? The machine code was abstracted away from engineers and they lost understanding of it, only knowing what the high-level code was supposed to do. But that turned out fine. Would LLMs abstracting the code away, so engineers only understand the functionality (specs, tests), also be fine for the same reason? Why didn't cognitive debt rise with high-level languages?

A counter-counterargument is that compilers are deterministic, so understanding the procedure of the high-level language meant you understood the procedure that mattered in the machine code, and the stuff abstracted away wasn't necessary to the code's operation. But LLMs are probabilistic, so understanding the functionality does not mean understanding the procedure of the code in the ways that matter. But I'd love to hear other people's thoughts on that

avaer•1h ago
I think it won't be too different once we see a few upgrades that are going to be required for reliability (and scaling up the AI assisted engineering process):

  - deterministic agents, where the model guarantees the same output with a seed
  - much faster coding agents, which will allow us to "compile" or "execute" natural language without noticing the llm
  - maybe just running the whole thing locally so privacy and reliability are not an issue
We're not there yet, but once we have that then I agree there won't be too much of a difference between using a high level language and plain text.

There's going to be a massive shift in programming education though, because knowing an actual programming language won't matter any more than knowing assembly does today.

gitanovic•1h ago
I was having a similar thought, and I think you wrote the answer I couldn't put my finger on. Compilers are deterministic; AI is a stochastic process that doesn't always converge to exactly the same answer. That's the main difference
kibwen•50m ago
> would the same thing not have happened with the rise of high level languages?

Any argument that attempts to frame LLMs as analogous to compilers is too flawed to bother pursuing. It's not that compilers are deterministic (an LLM can also be deterministic if you have control over the seed), it's that the compiler as a translator from a high level language to machine code is a deductive logical process, whereas an LLM is inherently inductive rather than deductive. That's not to say that LLMs can't be useful as a way of generating high level code that is then fed into a compiler (an inductive process as a pipeline into a deductive process), but these are fundamentally different sorts of things, in the same way that math is fundamentally different from music (despite the fact that you can apply math to music in plenty of ways).

wrs•21m ago
“Programs must be written for people to read, and only incidentally for machines to execute." — Harold Abelson

The purpose of high level languages is to make the structure of the code and data structures more explicit so it better captures the “actual” program model, which is in the mind of the programmer. Structured programming, type systems, modules, etc. are there to provide solid abstractions in which to express that model.

None of that applies to giving an LLM a feature idea in English and letting it run. (Though all of it is helpful for keeping an LLM from going completely off the rails.)

nottorp•9m ago
> But that turned out fine.

It did not turn out fine. Fortunately no one took it seriously, and at least seniors still have an intuitive model of how the hardware works in their head. You don't have to "see" the whole assembly language when writing high level code, just know enough about how it goes at lower levels that you don't shoot yourself in the foot.

When that's missing, due to lack of knowledge or perhaps time constraints, you end up on accidentally quadratic or they name a CVE after you.

youknownothing•1h ago
I love the concept of Cognitive Debt. I think it ties nicely with the idea that AI is creating Tactical Sharknados: https://news.ycombinator.com/item?id=47048857
itmitica•1h ago
And now programmers experience what it's like to be a user, trying to comprehend the system on their computer screen.

I propose a new paradigm: programmer experience, PX.

So, code generated by AI ideally would follow the rules of PX. Whatever those may turn out to be.

knollimar•13m ago
Is this different from DX?
jurgenaut23•1h ago
I wonder when we will realize that we don't need more software, just better software.
uvdn7•1h ago
It reminds me of Clay Christensen's book How Will You Measure Your Life? In one of his talks, he discussed how companies get killed because they optimized for the wrong, short-term metrics. What we are seeing with AI could be a supercharged flavor of the Innovator's Dilemma, where organizations optimize a pre-existing set of success metrics while missing the bigger picture because some previous assumptions no longer hold.

I really like the article. It's not trying to sell fear (which does sell); it doesn't paint the leadership as clueless. Nobody knows what is going to happen in the future. The article might be wrong on a few things. But it doesn't matter. It points out a few assumptions that people might be missing, and that is great.

jinwoo68•58m ago
This reminds me again of _Programming as Theory Building_[1] by Peter Naur. With agents fast generating the code, we lose the time for building the theory in our heads.

[1] https://pages.cs.wisc.edu/~remzi/Naur.pdf

jasode•56m ago
Not to disagree with anything the article talks about but to add some perspective...

The complaint about "code nobody understands" because of accumulating cognitive debt also happened with hand-written code. E.g. some stories:

- from https://devblogs.microsoft.com/oldnewthing/20121218-00/?p=58... : >Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn’t figure out why the collision detector was not working. Heck, we couldn’t even find the collision detector! We had several million lines of code still to port, so we couldn’t afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.

- and another about the Oracle RDBMS codebase from https://news.ycombinator.com/item?id=18442941

(That hn thread is big and there are more top-level comments that talk about other ball-of-spaghetti projects besides Oracle.)

bootsmann•27m ago
This underlines the OP's argument, no? The argument presented is that the situation where nobody knows how and why a piece of code was written will happen more often and appear faster with AI.
the_arun•11m ago
Probably, we need to start saving prompts in Version Control. Prompts could be the context for both humans & machines.
apical_dendrite•52m ago
This happened to me yesterday. I give a junior engineer a project. He turns it around really quickly with Cursor. I review the code, get him to fix some things (again turned around really quickly with Cursor) and he merges it. I then try a couple test cases and the system does the wrong thing on the second one I try. I ask him to fix it. He puts into cursor a prompt like "fix this for xyz case" and submits a PR. But when I look at the PR, it's clearly wrong. The model completely misunderstood the code. So I leave a detailed comment explaining exactly what the code does.

He's moving so fast that he's not bothering to learn how the system actually works. He just implicitly trusts what the model tells him. I'm trying to get him to do end-to-end manual testing using the system itself (log into the web app in a local or staging environment and go through the actions that a user would go through), but he just has the AI generate tests and trusts the output. So he completely misses things that would be clear if you learned the system at a deep level and could see how the individual project you're working on fits in with the larger system.

I see this with all the junior engineers on my team. They've never learned how to use a debugger and don't care to learn. They just ask the model. Sometimes they think critically about the system and the best way to do something, but not always. They often aren't looking that critically at the model's output.

1123581321•7m ago
Senior engineers must become more comfortable giving quick, broad feedback that matches the minimal time put into the PR. "This doesn't fit how the system works; please research and write a more detailed prompt and redo this" is the advice they need. It feels taboo to do it to a significant diff, but diff size no longer has much correlation to thought or effort in these situations.
techxploitation•38m ago
Forgive me if I'm stating the obvious, but it is completely plausible and possible to run a separate review of what the AI just created, explaining what decisions were made and why, and how they affect the existing system going forward. This review can have a critique section covering core failure modes you have found in the AI, or discrepancies unique to your setup. It can even be further condensed from a verbose two-page document into the core relevant explanation, for future reference. I think sometimes SWEs have an ego about needing to understand things entirely self-sufficiently, and so hold back on just asking relentless questions. Asking "But why?" "But why?" "But why?" like a child until it is revealed is a valid method in today's environment.
AndrewKemendo•34m ago
Code has become cheaper to produce than to perceive.

Which means fixes can go in faster than the time it would take to first grok them.

What's missing in literally every single one of these conversations is testing.

Literally all you have to do is implement test-driven development and you solve like 99.9% of these issues.

Even if you don't go fully TDD, which I'm not necessarily a fan of, having an extensive testing suite that covers edge cases is necessary no matter what you do, and it's a need-to-have when your code velocity is high.

This is true even for a company full of juniors pumping out code, like early-days Facebook, say: that let their monorepo grow insanely, and it took major refactors every few years, but it didn't really matter because they had the resources to do it.
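As a minimal sketch of the test-first loop the comment advocates: the assertions are written as the spec before (or regardless of) who writes the implementation, human or LLM. The function and its behavior here are hypothetical examples, not from the thread.

```python
# Test-first sketch: the assertions below pin down the spec, including
# edge cases (empty input, whitespace-only input) that generated code
# often misses. Whether a human or an LLM wrote normalize_tags, these
# must pass before merge.

def normalize_tags(tags):
    """Lowercase, strip, and de-duplicate a list of tag strings."""
    seen = []
    for tag in tags:
        cleaned = tag.strip().lower()
        if cleaned and cleaned not in seen:
            seen.append(cleaned)
    return seen

assert normalize_tags([]) == []                                # edge case: empty input
assert normalize_tags(["  Foo ", "foo", "Bar"]) == ["foo", "bar"]
assert normalize_tags(["", "  "]) == []                        # edge case: whitespace-only
```

The point is less the implementation than the ratchet: when code velocity is high, the suite is what lets you accept diffs faster than you can fully grok them.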

immortalcodes•31m ago
I have a view that we are shifting from the traditional form of engineering into a more AI-guided form, where maybe we are learning less about the code and more about how to produce that code with correct instructions and high-level design.

It's like how we might not know how sewing is done, but we know how to feed instructions into a loom to produce it. I also agree it is still important to read that code and understand how it works, maybe take a moment to see what is happening, but we are learning something entirely different here.

osigurdson•23m ago
I think we might as well just go all in at this point: "LGTM, LLM". The industry always overshoots and then self-corrects later. Therefore, maybe the right thing to do, to help it reach a saner equilibrium, is to forget about the code altogether and focus on other ways to constrain it and ensure correctness, and/or determine better ways to know when comprehension is needed vs. optional.

What I don't like is the impossible middle ground where people are asked to 20X their output while taking full responsibility for 100% of the code at the same time. That is the kind of magical thinking that I am certain the market will eventually delete. You have to either give up on comprehension or accept a modest, 20% productivity boost at best.

CoffeeOnWrite•8m ago
While I too am only seeing a boost on the order of 20% so far, I think there are more creative applications of LLM beyond writing code, that can unlock multiples of net productivity in delivering product end to end. People are discovering these today and blogging about them, but the noise about dark factories and agents supervising agents supervising agents, etc, is drowning out their voices.

Every one of us is a pioneer if we choose to be. We have only scratched the surface as an industry.

ffsm8•1m ago
The productivity boost entirely depends on the way the software was written.

Brownfield legacy projects with god classes and millions of lines of code that need to behave coherently across multiple channels, without anything in the written code actually linking them? That shit is not even gonna get a 20% boost; you'll almost always be quicker on your own. What you do get is a fatigue bonus, by which I mean you'll invest less of yourself for the same amount of output while getting slightly slower, because nobody I've ever interacted with can keep such codebases in their mind well enough to branch out to multiple agents.

On projects that have been architected to be owned by an LLM? A modular monolith with hints linking all the channels together, etc.? Yeah, you're gonna get a massive productivity boost, and you'll also be using your brain a shit-ton, actually reasoning out how to get the LLM to work on a project beyond silly weekend-toy scope (100k to millions of LOC).

But let's be real here, most employees are working with codebases like the former.

And I'm still learning how to do the second. While I've improved significantly since I started a year ago, I wouldn't consider myself a master at it yet. I keep trying things out, and I frequently try things that I ultimately decide to revert or (best case) discard before merging to main, simply because I ... notice significant impediments to modifying or adding features with a given architecture.

Seriously, this is currently bleeding edge. Things have not even begun to settle yet.

We're way too early for the industry to normalize around LLMs.

mikewittie•21m ago
More code written probably does mean less understanding per line (or per some more germane metric), statistically speaking. More dilute understanding probably does lead to more failures and longer recovery times. This feels like something better addressed as an end-to-end actuarial problem, though, rather than by trying to design metrics for something as elusive as understanding.
skybrian•13m ago
This seems very similar to the situation of a new employee dropped into a large codebase of varying quality. It seems like similar techniques will get you out of the mess?

Also, you can ask the coding agent for help understanding it, unlike the old days when whoever wrote it was long gone.

erelong•9m ago
This is like our whole technological society: many people comprehend only a small part of it at a time, with only sketches of how the other parts work.
danny_codes•6m ago
The difference is, perhaps with AI you need to understand none of it at all. A thought with some interesting consequences.
erikqu•3m ago
This seems like one of those nonsense posts people will look back at in a couple of years and laugh at.