Ask HN: Is understanding code becoming "optional"?

16•mikaelaast•1w ago
On Twitter, Boris Cherny (creator of Claude Code) recently said that nearly 100% of the code in Claude Code is written by Claude Code, and that he personally hasn’t written code in months. Another tweet, from an OpenAI employee, went: "programming always sucked [...] and I’m glad it’s over."

This "good riddance" attitude really annoys me. It frames programming as a necessary evil we can finally be rid of.

The ironic thing is that I’m aiming for something similar, just for different reasons. I also want to write less code.

Less code because code equals responsibility. Less code because "more code, more problems." Because bad code is technical debt. Because bugs are inevitable. Less code because fewer moving parts means fewer things can go wrong.

I honestly think I enjoy deleting code more than writing it. So maybe it’s not surprising that I’m skeptical of unleashing an AI agent to generate piles of code I don’t have a realistic chance of fully understanding.

For me, programming is fundamentally about building knowledge. Software development is knowledge work: discovering what we don’t know we don’t know, identifying what we do know we don’t know, figuring out what the real problem is, and solving it.

And that knowledge has to live somewhere.

When someone says "I don’t write code anymore," what I hear is: "I’ve shoved the knowledge work into a black box."

To me there’s a real difference between:

- knowledge expressed in language (which AI can produce ad nauseam), and

- knowledge that solidifies as connections in a human mind.

The latter isn’t a text file. It isn’t your "skills" or "beads." It isn’t hundreds of lines of Markdown slop. No. It’s a mental model: what the system is, why it’s that way, what’s safe to change, what leverage the abstractions provide, and where the fragile assumptions lie.

I’ve always carried a mental model of the codebase I’m working in. In my head it’s not "code" in the sense of language and syntax. It’s more like a "mind palace" I can step into, open doors, close doors, renovate, knock down a wall, add a new wing. It happens at a level where intuition and intellect blend together.

I'm not opposed to progress. Lately, with everything going on, I’ve started dividing code into two categories:

- code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and

- code I can't help modelling in my head (business-critical, novel, experimental, or introduces new patterns).

I’m fine delegating the former to an AI agent. The latter is where domain knowledge and system understanding actually form. That’s where it gets interesting. That’s the fun part. And my "mind palace" craves to stay in sync with it.

Is this emerging notion, that understanding code is somehow optional, something you are worried about?

Comments

bediger4000•1w ago
That seems like exactly the wrong lesson to learn from LLM "AI". Under no circumstances does such an "AI" understand anything, much less important semantics, so human understanding becomes that much more important.

I realize that director-level managers may not get this, because they've always lived and worked in the domain of "vibes", but that doesn't mean it's not true.

cyrusradfar•1w ago
The metaphor I'd use is: can you understand a story if you don't read it in the original language? Code is a language that describes the function.

I want to say, I've lived (briefly) through the time when folks felt that if you didn't understand the memory-management, or even assembly, level of code, you weren't going to be able to make it great.

High-level languages, obviously, are a counter-argument: they demonstrate that you don't necessarily need to understand all the details to deliver a differentiated experience.

Personally, I can get pretty far with a high-level mental model and a deeper model of the key high-throughput areas in the system. Most individuals aren't optimizing a system; they're building on top of a core innovation.

At the core you need to understand the system.

Code is A language that describes it, but there are others, and arguably, in a lot of cases, a nice visual language goes much further for our minds to operate on.

mikaelaast•1w ago
Yes, and I like the points you are making. I feel like the mental models we make are exercises in a purer form of knowledge building than the code artifacts we produce. A kind of understanding that is liberated from the confines of languages.

sinenomine•1w ago
If the AI provides 0-1 nines of reliability and you refuse to provide the rest of the nines required by the customer, then who will provide them, and what is your role and claim to margin here?

mikaelaast•1w ago
Creating work for the clean-up crew and leaving good money on the table for them (because it ain't gonna be cheap).

chrisjj•1w ago
Great question, but not specific to LLMs. Same applies to importing a C library.

Answer: no. Just harder.

tjr•1w ago
The "good riddance" attitude surprises me also. On one hand, it can be unpleasant to sort through obscure syntactical gobbledegook, like tracing around multiple levels of pointer indirection, but then again, I have found a certain enjoyable satisfaction in such things. It can be tough, but a good tough.

It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code any more, but still communicating with the LLM in terms that come from software development experience.

I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.

But I think there will also still be value in being able to code even in real life. If you have to fix a bug in a software product, you might be able to fix it with more precise focus than an LLM would, if you know where to look and what to do, resulting in potentially less re-testing.

Personally, I balk at the idea of taking responsibility for shipping real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I would be so much more confident shipping something that I understood how it worked.

I don't know. Maybe in time that notion will fade. As some are quick to point out, well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially well-guided by good prompts and well-tested, may well be high quality, but I still lack trust in it.

dapperdrake•1w ago
Many irrelevant differences between programming languages are now exposed for what they are.

Thinking clearly is just as relevant or encumbering as it always was.

nacozarina•1w ago
Have CC users been raving about rock-solid stability improvements, more insightful spending analytics, and overall quantum improvements in customer experience?

No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.

If he’s been vibe-coding all this and feeling impressed with himself, he’s smelling his own farts. The performance thus far has been ascientific, tone-deaf and piss-poor.

Maybe vibe-coding is not for him.

dapangzi•1w ago
If you don't understand code, you're asking for a whole heap of trouble.

Why? You can't properly validate the LLM outputs, so you commit bugs and maybe even blatantly non-functional code.

My company is pressuring juniors to use LLMs when coding, and I'm finding that none of them fully understand the LLM outputs, because they don't have enough engineering experience to spot code smells, bugs, regressions, and antipatterns.

In particular, none of them have developed strong unit-testing skills, and they let the LLM mock everything because they don't know any better, when they should generally mock only API dependencies. Sometimes the LLM will even mock integration tests, which is rarely a good idea.
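
To make the mocking point concrete, here is a minimal sketch of the difference (all names are invented for illustration, not from any real codebase):

```python
from unittest.mock import patch

def fetch_exchange_rate(currency: str) -> float:
    """External API dependency -- the one thing worth mocking."""
    raise NotImplementedError("the real version calls a remote service")

def convert(amount: float, currency: str) -> float:
    """Business logic under test -- should not be mocked."""
    return round(amount * fetch_exchange_rate(currency), 2)

# Reasonable: stub only the API boundary, exercise the real logic.
def test_convert_applies_rate():
    with patch(f"{__name__}.fetch_exchange_rate", return_value=0.5):
        assert convert(10.0, "EUR") == 5.0

# Worthless: the unit under test is itself mocked, so this can never fail.
def test_convert_overmocked():
    with patch(f"{__name__}.convert", return_value=5.0):
        assert convert(10.0, "EUR") == 5.0  # proves nothing about convert()
```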

So the tests that are supposed to validate the code are completely worthless.

It has led to multiple customer-impacting issues, and as tenured engineers we spend more time mopping the slop than we do engineering.

raw_anon_1111•1w ago
When I first started coding, I knew how my code worked down to the assembly language, because that was the only way I could get anything to run at a sufficient speed on a 1 MHz computer. I then graduated to C and C++, with some VB, and then C#, JavaScript and Python.

Back in 2000 I knew every server and network switch in our office, and eventually our self-hosted server room with a SAN and a whopping 3TB of RAM before I left. Now I just submit a YAML file to AWS.

Code is becoming no different. I treat Claude/Codex as junior developers: I specify my architecture carefully, verify it after it’s written, and I test the code that the AI writes for functionality and scalability against the requirements. But I haven’t looked at the actual code for the project I’m working on.

I’ve had code that I did write a year ago where I forgot what I did, and I just asked Codex questions about it.

mikaelaast•6d ago
How do you verify the code without actually looking at it?

raw_anon_1111•6d ago
How do you verify the compiler without looking at the assembled code? How do you verify code that links against binary libraries?

You run it and check for your desired behavior.

mikaelaast•6d ago
(Those are hardly analogous comparisons to LLM-generated code, are they?)

So you do a vibe check?

raw_anon_1111•6d ago
What’s “vibe checking”?

I input x and I expect y behavior and check for corner cases - just like I have checked for correctness for 40 years. Why do I care how the code was generated as long as it has the correct behavior?

Of course, multithreaded code is the exception, unless the LLM is putting a bunch of rnd() calls in the code to make it behave differently.
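
Concretely, that black-box check might look something like this (a sketch; slugify() is an invented stand-in for whatever code was generated):

```python
import re

def slugify(title: str) -> str:
    """Stand-in for generated code; only its observable behavior matters."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

# "I input x and I expect y behavior and check for corner cases":
cases = {
    "Hello, World!": "hello-world",  # happy path
    "   ": "untitled",               # corner case: whitespace only
    "Crème brûlée": "cr-me-br-l-e",  # corner case: non-ASCII folds to dashes
    "aaa!!!": "aaa",                 # corner case: trailing punctuation stripped
}
for x, y in cases.items():
    assert slugify(x) == y, (x, slugify(x))
print("all corner cases pass")
```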

giantg2•6d ago
Compilers have a finite set of inputs and outputs that should generate reproducible results. With AI, there's a much larger space of possible outputs for the same question and very little reproducibility.

raw_anon_1111•6d ago
Yes but once the code is written it’s not going to magically change. I am going to test the code just like I would test something I wrote - again like I’ve been doing for 40 years when writing my code by hand.

giantg2•5d ago
But your thought process during coding influences your testing. At least for most of us, we find edge cases or points of concern during coding that we place extra focus on in testing.

This is different from what you've done for the past 40 years, because you're not testing your own code. This would be analogous to you testing someone else's code. The vast majority of people and places had not followed that paradigm until AI showed up.

raw_anon_1111•5d ago
My thought process during my architecture influences my testing.

Since AI has been a thing, I’ve been in a customer-facing cloud consulting role - working full time at consulting departments (AWS ProServe) and now a third-party company - specializing in app dev.

Before my hands actually write a line of code or infrastructure as code, I’ve already spoken to sales to get a high level idea of what the customer wants, read over the contract (SoW) to see what questions I have, done discovery sessions/requirements analysis, created architecture diagrams, done a design review, created detailed stories/workstreams (epics), thought about all the way things can go wrong etc.

I very much keep my hands on the wheel and treat AI as a junior coder that might not follow my instructions. I can answer any question about architectural decisions, repo structure, what any Lambda does, the naming conventions, etc.

I’ve also intuited “these are the things that I need to think about and test for from my 30 years of professional experience as a developer and 8 years of experience across literally dozens of AWS implementations”.

In the before times, if I were doing this without AI, I would have to have two or three more junior people doing the work just because I couldn’t physically do it in 40 hours a week. Even then I would be focused on how it works and look for corner cases.

I don’t have to think about what I need to test for. I did specifically call out concurrency because there are subtle bugs.

Ironically, what I am working on now had a subtle concurrent locking bug that Codex wrote. I threw the code into ChatGPT thinking mode and it found it immediately and suggested better alternatives. I also have Claude and Codex cross check each other.

giantg2•5d ago
"I don’t have to think about what I need to test for."

Good luck then. The business process flow including edge cases should arguably be top of mind for what to test. Testing shouldn't be an afterthought but rather an integral thought when writing the code that needs to be tested.

"I would have to have two or three more junior people doing the work"

Yeah, and they're the ones thinking about testing the code they write. Architects (and it sounds like you are an architect, not a dev) don't get into that much detail.

raw_anon_1111•5d ago
If I’m starting off from sales -> reading the contract -> discovery -> design -> project plan -> implementation -> implementation review -> handover, how am I not involved with the business case?

I would never trust a junior developer who is just an experienced ticket-taker (and most don’t get their first job after 10 years of being a hobbyist) to look at that level of detail. Honestly, the code is the least important thing; what it does is what matters. If I’m 50 years old and still just a “human LLM ticket-taker”, I’ve done something horribly wrong in life.

By definition, this is the worst AI coding will ever be; anyone hoping to stay in this game long term by being able to “codez real gud” is going to be in for a rude awakening.

Enterprise development, where most developers work, was already becoming a commodity in 2016, when it was easy to become “good enough”, and comp still looks like it did on the high end a decade ago. Now it’s even harder to stand out from the crowd.

Now, with all of the layoffs, we are seeing that even in BigTech jobs the “I can reverse a b-tree on the whiteboard” developers are becoming a disposable commodity. There is a reason I’ve been moving up the stack and closer to “the business” over the last decade.

adamzwasserman•6d ago
Although I write very little code myself anymore, I don't trust AI code at all. My default assumption: every line is the most mid implementation possible, every important architectural constraint violated wantonly. Your typical junior programmer.

So I run specialized compliance agents regularly. I watch the AI code and interrupt frequently to put it back on track. I occasionally write snippets as few-shot examples. Verification without reading every line, but not "vibe checking" either.

mikaelaast•6d ago
I like this. The few-shot example snippet method is something I’d like to incorporate in my workflow, to better align generated code with my preferences.

adamzwasserman•6d ago
I have written a research paper on another interesting prompting technique that I call axiomatic prompting. On objectively measurable tasks, when an AI scores below 70%, including clear axioms in the prompt systematically increases success.

In coding this translates to: when trying to impose a pattern or architecture that is different enough from the "mid" programming approach the AI is compelled to use, including axioms about the approach (in an IF-this-THEN-that style, as opposed to few-shot examples) will improve success.

The key is the 70% threshold: if the model already has enough training data, axioms hurt. If the model is underperforming because the training set did -not- have enough examples (for example, hyperscript), axioms help.
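
As a purely hypothetical illustration of the IF-this-THEN-that style (these axioms are invented here, not taken from the paper):

```python
# Hypothetical axiom-style prompt fragment, prepended to a coding task.
AXIOMS = """Follow these axioms when writing code:
1. IF a function performs I/O THEN it receives the client/handle as a parameter.
2. IF state is shared across requests THEN it lives in the store layer, never in a global.
3. IF any axiom conflicts with your usual style THEN the axiom wins.
"""

prompt = AXIOMS + "\nTask: add retry logic to the sync job."
```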

moomoo11•6d ago
"Let's check that we can do X, Y, Z"

"Create documentation and then write tests"

a few moments later...

"There's a bug where we cannot do Y. Investigate the code and then let's discuss the best fix"

"Update the documentation and tests"

pigon1002•6d ago
> - code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and
>
> - code I can’t help modelling in my head (business-critical, novel, experimental, or introduces new patterns).

I feel like there’s actually one or two more shades in between.

Sometimes I think something belongs in the second category, but then it turns out it’s really more like the first. And sometimes something is second-category, but for the sake of getting things done, it makes more sense to treat it like the first.

If vibe coding keeps evolving, this is probably the path it needs to explore. I just wonder what we’ll end up discovering along the way.

mikaelaast•6d ago
If it’s in the second category, I struggle not to mentally model it. How do you stop yourself? And should you?

austin-cheney•6d ago
Don’t buy into self-promotion bullshit. AI can be helpful. It’s another form of automation. It is not creative and will not make you a better programmer. The only thing that will make you a better programmer is time spent programming, just like with anything else.

adamzwasserman•6d ago
Job security for those of us who think like this.

Two layers vibe coding can't touch: architecture decisions (where the constraints live) and cleanup when the junior-dev-quality code accumulates enough debt. Someone has to hold the mental model.

giantg2•6d ago
If anything, you have to understand code more now.

Before, you (or your devs) could write code a couple of different ways and understand it. Now you have to look at code generated by an agent that is not necessarily writing code the way your company's culture does. There might be a thousand different ways a feature gets written. You have to spend more time reviewing and thinking about it, in my opinion.

dapangzi•6d ago
Made a similar comment.

It's great for tenured engineers, when we use it.

When juniors use LLMs, because they don't have the experience, it becomes a nightmare for tenured engineers, and we just end up "mopping the slop", as I tend to say.

I also have issues with how LLMs do testing.

taurath•6d ago
Just as with an LLM, a detailed style and format guide helps an incredible amount, both for LLMs and for juniors. If you have standards and they’re not written down, you either require everyone to go teach them to anyone new, or you don’t have standards.

dapangzi•5d ago
> you don’t have standards.

The problem is that LLMs mess up things as basic as math and dates, and that's before the context gets too large and they start making other mistakes.

Edit: Also, LLMs over-mock tests, and juniors trust that...

taurath•5d ago
Not very often, and most of the time it shouldn't be generating those but rather formatting code to test that. If you accept the non-determinism and use some of the more recent models, you'll find it can do 99% of it very fast, and with some guardrails and testing it can fairly reliably produce working solutions.

dapangzi•5d ago
> Not very often

> testing

This does not match my experience; I have been working with LLMs since 2023. We presently use the latest models, I assure you. We can definitely afford it.

I am not saying LLMs are worthless, but being able to check their outputs is still necessary at this stage because, as you said, they are non-deterministic.

We have had multiple customer-impacting events from code juniors committed without understanding it. Please read my top-level comment in this post for context.

I genuinely hope you do not encounter issues due to your confidence in LLMs, but again, my experience does not match yours.

Edit: I would also add that LLMs are not good at determining line numbers in a code file, another flaw that causes a lot of confusion.

taurath•5d ago
I haven’t run into that problem, but I do also hold agents on a tight leash!

giantg2•5d ago
I had a mid-level dev submit a PR implementing caching, and I had to reject it multiple times. They were using Copilot; it couldn't implement the caching right, and the developer couldn't understand it. Stuff like always retrieving from the API instead of the cache, or never storing the object in the cache after retrieving it.

They promoted that guy over me because, after he started using Copilot, he started closing more stories than me, and faster. No wonder that team has 40% of its capacity used for rework and tech debt...
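
For reference, what the PR kept getting wrong is the standard cache-aside pattern; a minimal sketch (names invented) with both bugs marked:

```python
from typing import Any, Callable

def get_user(user_id: str,
             cache: dict[str, Any],
             fetch_from_api: Callable[[str], Any]) -> Any:
    cached = cache.get(user_id)
    if cached is not None:           # bug #1 was skipping this check entirely
        return cached
    user = fetch_from_api(user_id)   # cache miss: hit the source of truth
    cache[user_id] = user            # bug #2 was omitting this write-back
    return user

# Second lookup must be served from the cache, not the API.
calls: list[str] = []
fake_api = lambda uid: (calls.append(uid), {"id": uid})[1]
cache: dict[str, Any] = {}
get_user("42", cache, fake_api)
get_user("42", cache, fake_api)
assert calls == ["42"]               # exactly one API hit
```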

dapangzi•5d ago
This matches my experience so closely that I wrote a novel below. I have seen this pattern a lot and wanted to expand on it so people can understand the cycle.

Let's propose a generic scenario that shows why being able to engineer and read code is still important. It's a story we've all heard or seen a thousand times since the great LLMing of 2025.

"Just deliver the feature/product, we expect `ridiculousMetric` increase in productivity due to LLM" screeching from management and product/business.

A junior engineer will find someone willing to rubber-stamp their LLM PRs, so seniors or designated product experts don't even get a chance to check.

The LLM modifies existing tests to game everything to pass, the junior doesn't know any better, and so it quietly makes it to prod.

Because management is thinking in sprints, the way they see it, the ticket is closed, it's a win.

Then the broken production code, which the junior will eventually be promoted for because on paper the ticket is closed, breaks prod and causes a huge outage costing `hugeNumber` dollars to the organization, and senior engineers have to clean it up. To boot, the spend metric is trash because the LLM doesn't know how to scale infra.

Since juniors can't meaningfully debug due to the toxic cycle, seniors spend too much time cleaning things up and it blocks their deliverables, and seniors look bad to leadership. Then they get managed out for not delivering, while the juniors lacking engineering experience due to the toxic cycle continue to rise through the ranks for delivering, even though their deliverables are trash.

I don't blame the juniors, they are under immense pressure and genuinely don't know better. I blame short-sighted leadership.

I've heard this story from contacts at any of the big names you can think of.

It seems the US tech industry is flying head-first into having, within the next five years, giant teams of mid- and senior-level engineers who don't know how to debug or deliver efficiently.

We're failing our juniors, and punishing seniors for having standards.

giantg2•5d ago
I've never had an LLM create a robust, meaningful test file. I end up rewriting at least half of it.

al_borland•5d ago
It’s important to remember that these people you mention who work for Anthropic have a vested interest in selling Claude Code to the world. They are not an impartial third party, so I would take anything they say with a grain of salt.

> Less code because code equals responsibility.

This is true. The problem with AI is that while someone may personally write less code, they are still responsible for it and have to answer questions about the minutiae of what it does. One of my least favorite things is being responsible for, or having to answer for, work that isn’t mine. I’m not sure why I’d willingly make that my whole job.

tstrimple•5d ago
I've used Claude Code a lot over the last year and I've generally been very happy with it. I have a lot of experience writing code both professionally and for personal projects. I've found that for things like basic APIs and websites and database operations, I don't have to pay attention to the code being produced much at all anymore. It Just Works for the most part as long as you adequately describe what you're trying to build. There are only so many ways you can write a CRUD app after all, and generally the implementation isn't "special" just necessary.

But my experience with 3D game dev in particular has been quite different. I've been able to get good results for basic 2D games and basic features in 3D worlds, but I have been struggling to build more complicated scenarios with Claude Code without laying out every specific detail. I have to tell it to use quaternions for a particular rotation because I know about the issues with gimbal lock. I have to suggest a ray-traced solution in another area because relative mouse position isn't good enough when accounting for resolution and aspect ratio. If I didn't know about ray tracing or quaternions and how they are used and fit into game development, I wouldn't have been able to interrupt Claude Code and guide it down a better path. I think Claude Code is particularly weak in spatial reasoning, and I suspect the context required for some GPU operations is pushing other parts of the instructions out of context. It's forgetting "the basics" far more than I've experienced in any other project. Building a 3D world simulation featuring a bastardization of plate tectonics and weather systems is the first thing I've tried to do with Claude Code that I could probably have written myself faster. If it wasn't for the crippling ADHD.
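
For anyone who hasn't run into it: gimbal lock means that at pitch = 90° two of the three Euler axes line up and a degree of freedom vanishes, which quaternions avoid. A minimal sketch using SciPy's Rotation API (not the project's actual code):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Intrinsic ZYX = yaw-pitch-roll. At pitch = 90° only (roll - yaw) matters,
# so two different angle triples collapse into the same orientation.
a = R.from_euler("ZYX", [30.0, 90.0, 0.0], degrees=True)   # yaw=30, roll=0
b = R.from_euler("ZYX", [0.0, 90.0, -30.0], degrees=True)  # yaw=0, roll=-30
assert np.allclose(a.as_matrix(), b.as_matrix())  # gimbal lock: same rotation

# Quaternions have no such singularity: compose rotations freely.
step = R.from_euler("X", 10.0, degrees=True)
q = (a * step).as_quat()  # [x, y, z, w], safe to keep composing
```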