Ask HN: Do you also "hoard" notes/links but struggle to turn them into actions?

139•item007•12h ago•58 comments

Ask HN: Why don't form-fitting Faraday iPhone cases exist?

35•par_12•2d ago•90 comments

Ask HN: Is understanding code becoming "optional"?

7•mikaelaast•5h ago•12 comments

Ask HN: Notification Overload

39•fractal618•3d ago•74 comments

Ask HN: Junior getting lost

39•TheRegularOne•1d ago•32 comments

Ask HN: Should a software engineer have research exposure?

3•c_daeda•7h ago•0 comments

AI has failed to replace a single software application or feature

8•cadabrabra•11h ago•10 comments

Waypoint 1.1, a local-first world model for interactive simulation

8•lcastricato•10h ago•0 comments

Ask HN: How do you reset an AppleID?

9•OhMeadhbh•1d ago•22 comments

AI creates over-efficiency. Organizations must absorb it

6•eriam•11h ago•3 comments

Ask HN: Ergo wireless keyboard with mouse for coding?

3•MarcelOlsz•11h ago•2 comments

Ask HN: How do you market a side project?

7•ruairidhwm•16h ago•7 comments

Ask HN: How are you managing secrets with AI agents?

2•m-hodges•13h ago•3 comments

Ask HN: Is free identity theft protection after a data breach worth the bother?

2•daoboy•13h ago•1 comments

Ask HN: Who do you follow via RSS feed?

69•znpy•4d ago•52 comments

The preposterous notion of AI automating "repetitive" work

7•cadabrabra•22h ago•9 comments

Ask HN: Why the OpenClaw hype? What's so special?

4•anon_anon12•9h ago•1 comments

Ask HN: Books to learn 6502 ASM and the Apple II

101•abkt•3d ago•69 comments

Ask HN: How do you force yourself to take breaks while coding?

3•glidea•1d ago•10 comments

Ask HN: DDD was a great debugger – what would a modern equivalent look like?

56•manux81•5d ago•60 comments

Ask HN: Is archive.is currently broken for WSJ links?

7•bigwheels•1d ago•3 comments

Ask HN: How far has "vibe coding" come?

11•pigon1002•1d ago•26 comments

Ask HN: How are devtool founders getting their paying users in 2026?

7•yasu_c•1d ago•1 comments

Ask HN: What's the Point Anymore?

63•fnoef•3d ago•79 comments

Ask HN: What recent UX changes make no sense to you?

32•superasn•3d ago•35 comments

Designing programming languages beyond AI comprehension

6•mr_bob_sacamano•2d ago•10 comments

Tell HN: Beeper deletes inactive accounts without notice

4•kldx•1d ago•0 comments

How much recurring income do you generate in 2026 and from what?

12•djshah•2d ago•5 comments

Where can I find startups looking for fractional product leads?

8•stulogy•3d ago•3 comments

Ask HN: Where to find cool companies to work for?

7•truetaurus•2d ago•10 comments

Ask HN: Is understanding code becoming "optional"?

7•mikaelaast•5h ago
On Twitter, Boris Cherny (creator of Claude Code) recently said that nearly 100% of the code in Claude Code is written by Claude Code, and that he personally hasn’t written code in months. Another tweet, from an OpenAI employee, went: "programming always sucked [...] and I’m glad it’s over."

This "good riddance" attitude really annoys me. It frames programming as a necessary evil we can finally be rid of.

The ironic thing is that I’m aiming for something similar, just for different reasons. I also want to write less code.

Less code because code equals responsibility. Less code because "more code, more problems." Because bad code is technical debt. Because bugs are inevitable. Less code because fewer moving parts means fewer things can go wrong.

I honestly think I enjoy deleting code more than writing it. So maybe it’s not surprising that I’m skeptical of unleashing an AI agent to generate piles of code I don’t have a realistic chance of fully understanding.

For me, programming is fundamentally about building knowledge. Software development is knowledge work: discovering what we don’t know we don’t know, identifying what we do know we don’t know, figuring out what the real problem is, and solving it.

And that knowledge has to live somewhere.

When someone says "I don’t write code anymore," what I hear is: "I’ve shoved the knowledge work into a black box."

To me there’s a real difference between:

- knowledge expressed in language (which AI can produce ad nauseam), and

- knowledge that solidifies as connections in a human mind.

The latter isn’t a text file. It isn’t your "skills" or "beads." It isn’t hundreds of lines of Markdown slop. No. It’s a mental model: what the system is, why it’s that way, what’s safe to change, what leverage the abstractions provide, and where the fragile assumptions lie.

I’ve always carried a mental model of the codebase I’m working in. In my head it’s not "code" in the sense of language and syntax. It’s more like a "mind palace" I can step into, open doors, close doors, renovate, knock down a wall, add a new wing. It happens at a level where intuition and intellect blend together.

I'm not opposed to progress. Lately, with everything going on, I’ve started dividing code into two categories:

- code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and

- code I can't help modelling in my head (business-critical, novel, experimental, or introduces new patterns).

I’m fine delegating the former to an AI agent. The latter is where domain knowledge and system understanding actually form. That’s where it gets interesting. That’s the fun part. And my "mind palace" craves to stay in sync with it.

Is this emerging notion that understanding code is somehow optional something that worries you?

Comments

bediger4000•5h ago
That seems like exactly the wrong lesson to learn from LLM "AI". Under no circumstances does such an "AI" understand anything, much less important semantics, so human understanding becomes that much more important.

I realize that director-level managers may not get this, because they've always lived and worked in the domain of "vibes", but that doesn't mean it's not true.

cyrusradfar•5h ago
The metaphor I'd use is: can you understand a story if you don't read it in the original language? Code is a language that describes the function.

I want to say, I've lived (briefly) through the time when folks felt that if you didn't understand the memory management or even the assembly-level ops of code, you weren't going to be able to make it great.

High-level languages, obviously, are a counter-argument that demonstrates you don't necessarily need to understand all the details to deliver a differentiated experience.

Personally, I can get pretty far with a high-level mental model and a deeper model of the key high-throughput areas in the system. Most individuals aren't optimizing a system; they're building on top of a core innovation.

At the core you need to understand the system.

Code is A language that describes it, but there are others, and arguably, in a lot of cases, a nice visual language goes much further for our minds to operate on.

mikaelaast•5h ago
Yes, and I like the points you are making. I feel like the mental models we make are exercises in a purer form of knowledge building than the code artifacts we produce. A kind of understanding that is liberated from the confines of languages.

sinenomine•5h ago
If the AI provides 0-1 nines of reliability and you refuse to provide the rest of the nines required by the customer, then who will provide them, and what is your role and claim to margin here?

mikaelaast•5h ago
Creating work for the clean-up crew and leaving good money on the table for them (because it ain't gonna be cheap).
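To make sinenomine's "rest of the nines" concrete, here is a rough back-of-the-envelope sketch; the reliability figures are illustrative assumptions, not numbers from the thread:

```python
# Hypothetical nines gap: what fraction of the agent's failures a human
# still has to engineer away before the customer's target is met.
ai_reliability = 0.90          # assume ~1 nine from unreviewed agent output
required_reliability = 0.9999  # assume the customer needs 4 nines

ai_failure_rate = 1 - ai_reliability              # 0.1
allowed_failure_rate = 1 - required_reliability   # 0.0001

human_share = 1 - allowed_failure_rate / ai_failure_rate
print(f"Humans must still remove {human_share:.1%} of the residual failures")
```

Under those assumed numbers the answer is 99.9%, which is roughly the point of the question: someone still has to own that gap.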
chrisjj•5h ago
Great question, but not specific to LLMs. Same applies to importing a C library.

Answer: no. Just harder.

tjr•4h ago
The "good riddance" attitude surprises me also. On one hand, it can be unpleasant to sort through obscure syntactical gobbledegook, like tracing around multiple levels of pointer indirection, but then again, I have found a certain enjoyable satisfaction in such things. It can be tough, but a good tough.

It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code any more, but still communicating with the LLM in terms that come from software development experience.

I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.

But I think there will also still be value in being able to code even in real life. If you have to fix a bug in a software product, you might be able to fix it with more precise focus than an LLM would, if you know where to look and what to do, resulting in potentially less re-testing.

Personally, I balk at the idea of taking responsibility for shipping a real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I would be so much more confident shipping something whose workings I understood.

I don't know. Maybe in time that notion will fade. As some are quick to point out, well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially when well-guided by good prompts and well-tested, may well be high quality, but I still lack trust in it.

dapperdrake•3h ago
Many irrelevant differences between programming languages are now exposed for what they are.

Thinking clearly is just as relevant or encumbering as it always was.

nacozarina•3h ago
Have CC users been raving about rock-solid stability improvements, more insightful spending analytics, and overall quantum improvements in customer experience?

No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.

If he’s been vibe-coding all this and feeling impressed with himself, he’s smelling his own farts. The performance thus far has been ascientific, tone-deaf and piss-poor.

Maybe vibe-coding is not for him.

dapangzi•2h ago
If you don't understand code, you're asking for a whole heap of trouble.

Why? You can't validate the LLM outputs properly, and you end up committing bugs and maybe even blatantly non-functional code.

My company is pressuring juniors to use LLMs when coding, and I'm finding none of them fully understand the LLM outputs because they don't have enough engineering experience to find code smells, bugs, regressions, and antipatterns.

In particular, none of them have developed strong unit-testing skills, and they let the LLM mock everything because they don't know any better, when they should generally only mock external API dependencies. Sometimes the LLM will even mock integration tests, which to me generally isn't a good idea.

So the tests that are supposed to validate the code are completely worthless.

It has led to multiple customer-impacting issues, and we tenured engineers spend more time mopping up the slop than we do engineering.
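As a concrete illustration of the mocking point above, here is a minimal sketch; the names and the exchange-rate scenario are hypothetical, not taken from the thread. The idea is to stub only the external API boundary and let the real business logic run in the test:

```python
# Hypothetical example: stub only the external boundary (the HTTP call),
# so the test still exercises the real validation and rounding logic.
from unittest.mock import patch

import requests


def fetch_exchange_rate(currency: str) -> float:
    """External API dependency: the one thing worth mocking in a unit test."""
    resp = requests.get(f"https://api.example.com/rates/{currency}", timeout=5)
    resp.raise_for_status()
    return resp.json()["rate"]


def convert(amount_usd: float, currency: str) -> float:
    """Business logic under test: should not be mocked away."""
    if amount_usd < 0:
        raise ValueError("amount must be non-negative")
    return round(amount_usd * fetch_exchange_rate(currency), 2)


def test_convert_uses_real_logic_with_stubbed_rate():
    # Patch only the API boundary; if someone breaks the validation or the
    # rounding inside convert(), this test still fails.
    with patch(f"{__name__}.fetch_exchange_rate", return_value=0.92):
        assert convert(100.0, "EUR") == 92.0
```

The over-mocked version the comment describes would patch convert() itself, or stub every collaborator, leaving a test that can only fail if the mock setup is wrong.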

raw_anon_1111•50m ago
When I first started coding, I knew how my code worked down to the assembly language, because that was the only way I could get anything to run at a sufficient speed on a 1 MHz computer. I then graduated to C and C++ with some VB, and then to C#, JavaScript and Python.

Back in 2000 I knew every server and network switch in our office, and eventually our self-hosted server room with a SAN and a whopping 3 TB of RAM before I left. Now I just submit a YAML file to AWS.

Code is becoming no different. I treat Claude/Codex as junior developers: I specify my architecture carefully, verify it after it’s written, and I test the code that the AI writes for functionality and scalability against the requirements. But I haven’t looked at the actual code for the project I’m working on.

I’ve had code that I did write a year ago, where I forgot what I did, and I just asked Codex questions about it.

pigon1002•39m ago
``` - code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and

- code I can’t help modelling in my head (business-critical, novel, experimental, or introduces new patterns). ```

I feel like there’s actually one or two more shades in between.

Sometimes I think something belongs in the second category, but then it turns out it’s really more like the first. And sometimes something is second-category, but for the sake of getting things done, it makes more sense to treat it like the first.

If vibe coding keeps evolving, this is probably the path it needs to explore. I just wonder what we’ll end up discovering along the way.