
The Grand Canyon I Never Got to See

https://www.nytimes.com/interactive/2026/03/16/magazine/grand-canyons-north-rim-wildfire.html
1•mooreds•1m ago•0 comments

Reddit New Post 2

https://old.reddit.com/r/PisequaltoNP/comments/1rwgj9l/breaking_rsa_via_transcendent_reduction/
1•KaoruAK•1m ago•0 comments

Edge.js – Run Node.js safely, anywhere, with any JavaScript engine

https://edgejs.org/
1•fmoronzirfas•3m ago•0 comments

Show HN: I made PDF tools that work with no internet – full offline PWA

https://filegod.app
1•honzabroski•3m ago•0 comments

A Synthesis of LLM Evaluation

https://www.aroy.sh/posts/llm-agent-evals/
1•dpe82•4m ago•0 comments

Ask HN: Has anyone gotten AI agents to make money autonomously?

1•builtbyzac•4m ago•0 comments

Cape Town After Coetzee

https://www.theatlantic.com/magazine/2026/04/coetzee-cape-town-apartheid/686067/
1•speckx•5m ago•0 comments

Testing 6 Claude models on consciousness questions via raw API

https://hayalguienaqui.com/en/test-en-frio
1•camilodesan•7m ago•0 comments

Private credit hit $1.7T. Its verification infrastructure never kept pace

https://www.zkvalue.com/
2•smmaan•7m ago•1 comment

High Court: Witness coached via smart glasses while giving evidence

https://www.legalfutures.co.uk/latest-news/high-court-witness-coached-via-smart-glasses-while-giv...
1•croes•8m ago•0 comments

How the Turner Twins Are Mythbusting Modern Gear

https://www.carryology.com/insights/how-the-turner-twins-are-mythbusting-modern-gear/
1•greedo•8m ago•0 comments

Norway's all-conquering Winter Olympians have a message for us all

https://www.theguardian.com/sport/2026/feb/26/norway-winter-olympics-message-for-us-all
1•PaulHoule•9m ago•0 comments

ZK-STARK proofs made easy so you can prove claims without exposing data

https://zkesg.com/
1•mcdoolz•10m ago•1 comment

Meta Horizon Worlds on Meta Quest is being discontinued

https://communityforums.atmeta.com/blog/AnnouncementsBlog/updates-to-your-meta-quest-experience-i...
6•par•10m ago•0 comments

You're all staff engineers now

https://jdauriemma.com/programming/youre-all-staff-engineers-now
1•jdauriemma•11m ago•0 comments

CEO

https://www.AgenthiveInc.com
1•AgentHive•11m ago•0 comments

Len Deighton, spy novelist and author of The Ipcress File, dies aged 97

https://www.theguardian.com/books/2026/mar/17/len-deighton-spy-novelist-author-dies-aged-97
3•bookofjoe•11m ago•0 comments

Stop throwing AI at developers and hoping for magic

https://leaddev.com/ai/stop-throwing-ai-at-developers-and-hoping-for-magic
2•tonkkatonka•12m ago•0 comments

Krafton deletes ChatGPT chats asking to help terminate contracts with founders

https://courts.delaware.gov/Opinions/Download.aspx?id=392880
1•simonreiff•12m ago•1 comment

Scientists discover heavier version of proton with upgraded detector

https://www.theguardian.com/science/2026/mar/17/scientists-discover-heavier-proton-upgraded-detector
1•bookofjoe•14m ago•0 comments

Sulcus – Reactive triggers for AI agent memory, governing itself

https://sulcus.dforge.com/
1•mcdoolz•16m ago•1 comment

Some small US airports may have to shut due to TSA absences, official says

https://www.reuters.com/world/us/us-says-it-may-be-forced-shut-down-some-airports-over-funding-st...
3•cdrnsf•16m ago•0 comments

In search of Banksy, Reuters found the artist took on a new identity

https://www.reuters.com/investigates/special-report/global-art-banksy/
3•gnabgib•18m ago•0 comments

Amazon Owes New York City Almost $10M in Fines over Idling Vehicles

https://www.roadandtrack.com/news/a70757976/amazon-owes-nyc-millions-idling-vehicle-fines-report/
2•randycupertino•19m ago•0 comments

China Has Five-Minute EV Charging. America Is Trying to Catch Up

https://www.wsj.com/business/autos/china-has-five-minute-ev-charging-america-is-trying-to-catch-u...
2•JumpCrisscross•22m ago•0 comments

Ask HN: What Are You Reading? (Mar 2026)

1•juanpabloaj•24m ago•2 comments

Claude Chief of Staff

https://github.com/mimurchison/claude-chief-of-staff
1•AnhTho_FR•25m ago•0 comments

Show HN: Turn your OpenAPI document to an MCP server in ~1000 tokens and 3 tools

https://scalar.com/blog/agent-scalar
1•marclave•25m ago•0 comments

Asteroids and meteorites may have delivered the building blocks for life

https://courthousenews.com/asteroids-and-meteorites-may-have-delivered-the-building-blocks-for-li...
1•everybodyknows•26m ago•0 comments

MinRLM: A Token-Efficient Recursive Language Model Implementation and Benchmark

https://avilum.github.io/minrlm/recursive-language-model.html
1•curmudgeon22•27m ago•0 comments

If you thought the code writing speed was your problem, you have bigger problems

https://andrewmurphy.io/blog/if-you-thought-the-speed-of-writing-code-was-your-problem-you-have-bigger-problems
122•mooreds•1h ago

Comments

gammalost•36m ago
Seems easy to address with a simple rule. Push one PR; review one PR
hathawsh•32m ago
Also add a PR reviewer bot. Give it authority to reject the PR, but no authority to merge it. Let the AIs fight until the implementation AI and the reviewer AI come to an agreement. Also limit the number of rounds they're permitted to engage in, to avoid wasting resources. I haven't done this myself, but my naive brain thinks it's probably a good idea.
dmitrygr•27m ago
> I haven't done this myself, but my naive brain thinks it's probably a good idea.

Many a disaster started this way

hathawsh•25m ago
Yep, we're on the same wavelength.
zer00eyz•15m ago
The problem is most of the people we have spent the last 20 years hiring are bad at code review.

Do you think the leetcode, brain-teaser, show-me-how-smart-you-are-and-how-much-you-can-memorize interview is optimized to hire people who can read code at speed and hold architecture (not code, but systems) in their heads? How many of your co-workers set up and use a debugger to step through a change when reviewing it?

Most code review was bike shedding before we upped the volume. And from what I have seen it hasn't gotten better.

luxuryballs•34m ago
It’s definitely going to create a lot of problems in orgs that already have an incomplete or understaffed dev pipeline, which often happen to be the ones where executive leadership is already disconnected and unaware of the true bottlenecks, which also often happen to be the ones that get hooked by vendor slide decks…
furyofantares•34m ago
> The bottleneck is understanding the problem. No amount of faster typing fixes that.

Why not? Why can't faster typing help us understand the problem faster?

> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.

Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?

I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.

> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.

I guess because we're just cynical.

zabzonk•29m ago
> Why can't faster typing help us understand the problem faster?

Why can't standing on your head?

bob1029•29m ago
> Why can't we figure out the right thing faster by building the wrong thing faster?

Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.

furyofantares•27m ago
That's fair. I'm usually my own customer.
furyofantares•28m ago
The post also smells heavily LLM-processed. I feel like I've been had by someone pumping out low effort blog posts.
cdrnsf•27m ago
Because you're working on the implementation before you understand the problem?
mooreds•7m ago
Ding ding ding!

The article talks about process flows and finding the bottleneck. That might be coding, but it probably is not.

nyeah•27m ago
"Why can't faster typing help us understand the problem faster?"

Because typing is not the same as understanding.

john_strinlai•20m ago
>Why not? Why can't faster typing help us understand the problem faster?

Do you have an example (even a toy one) where typing faster would help you understand a problem faster?

lgessler•11m ago
Has everyone always nailed their implementation of every program on the first try? Of course not. Probably what happens most times is you first complete something that sorta works and then iterate from there by modifying code, executing, observing, and looping back to the beginning. You can wonder about ultimately how much of your time/energy is consumed by the "typing code" part, and there's surely a wide range of variation there by individual and situation, but it's undeniable that it is a part of the core iteration loop for building software.

I don't understand why GP's comment is so controversial. GP is not denying that you should maybe think a little before your fingers hit the keyboard, as many commenters seem to suppose. Both can be true.

intrasight•10m ago
Here's a literal toy one.

Build a toy car with square wheels and one with triangular wheels and one with round wheels and see which one rolls better.

The issue isn't "typing faster"; it's "building faster".

jmulho•5m ago
I often understand problems by discussing them with AI (by typing prompts and reading the response). Typing or reading faster would make this faster.
doix•19m ago
Pretty much. The article assumes people didn't build the wrong thing before AI. Except that happened all the time; it just happened slower, it took longer to realize it was the wrong thing, and then building the right thing took longer.

It's funny, because you could actually take that story and use it to market AI.

> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.

Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.

Terr_•16m ago
Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.

In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.

p-o•12m ago
> Why not? Why can't faster typing help us understand the problem faster?

Why can't you understand the problem faster by talking faster?

mooreds•11m ago
> Why not? Why can't faster typing help us understand the problem faster?

I think we can, in some cases.

For instance, I prototyped a feature recently and tested an integration it enabled. It took me a few hours. There's no way I would have tried this, let alone succeeded, without opencode. Because I was testing functionality, I didn't care about other aspects: performance, maintainability, simplicity.

I was able to write a better description of the problem and assure the person working on it that the integration would work. This was valuable.

I immediately threw away that prototype code, though. See above aspects I just didn't need to think about.

ErroneousBosh•9m ago
> Why not? Why can't faster typing help us understand the problem faster?

Why do you need to type at all to understand the problem?

I write my best code when I'm driving my car. When I stop and park up, it's just a case of typing it all in at my leisure.

onlyrealcuzzo•9m ago
AI is really good when:

1. you want something that's literally been done tons of times before, and it can literally just find it inside its compressed dataset

2. you want something and don't care that much how it actually functions

It turns out, this is not the majority of software people are paying engineers to write.

And it turns out that actually writing the code is only part of what you're paying for - much smaller than most people think.

You are not paying your surgeon only to cut things.

You are not paying your engineer only to write code.

gedy•31m ago
I'm cynical, but still kinda surprised that so many mgmt types are rah-rah about AI, since "we're waiting for engineering... sigh" has been a very convenient excuse for many projects and companies I've seen over the past 25 years.
shermantanktop•18m ago
Absolutely. Everyone loves a roadblock that someone else needs to clear, giving back some time to breathe and think about the problem a bit.

This only works in large companies. In startups this is how you run out of money.

everdrive•31m ago
Companies genuinely don't want good code. Individual teams just get measured by how many things they push around. An employee who warns that something might not work very well gets reprimanded as "down in the weeds" or "too detail-oriented," etc. I didn't understand this for a while, but internal actors inside companies really just want to claim success.
myylogic•29m ago
I think both sides are partially right, but they’re optimizing for different failure modes.

Speed doesn’t fix misunderstanding, but it does change how quickly you can iterate toward understanding.

In practice, building something (even if it’s wrong) creates feedback loops you can’t get from thinking alone. Especially in systems like ML/LLMs, where behavior emerges from the pipeline rather than just the idea.

The real bottleneck isn’t typing speed — it’s how fast you can validate assumptions.

Faster iteration without reflection leads to chaos. Pure thinking without building leads to stagnation.

The balance is tight feedback loops with deliberate evaluation.

devnotes77•20m ago
There's a useful framing from OODA loops here: the bottleneck isn't usually Observe or Orient (understanding the problem) in isolation, but the feedback cycle between Act and Observe. Faster code generation compresses the Act->Observe gap, which can speed up the Orient phase indirectly - but only if you're actually measuring something real, not just shipping. The failure mode you're describing (speed without understanding) is really just skipping the Observe step. That happens with or without AI; it's a process problem, not a tooling problem. I've seen teams write code slowly and still never learn anything because they never measured outcomes.
6stringmerc•25m ago
Because the way the world is currently and this is trending, let me jump in and save you a lot of time:

Expedience is the enemy of quality.

Want proof? Everything built under "move fast and break things" 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.

This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.

Bye for now.

petcat•25m ago
As human developers, I think we're struggling with "letting go" of the code. The code we write (or agents write) is really just an intermediate representation (IR) of the solution.

For instance, GCC will inline functions, unroll loops, and perform myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't), we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, is correct for what it is supposed to do, and is a faithful representation of the solution we are trying to achieve.

Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.

A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.

felipellrocha•23m ago
If you truly believe that, why don’t you just transform code directly to assembly? Skip the middleman, and get a ton of performance!
n4r9•20m ago
Can agents write good assembly code?
svachalek•16m ago
With the complexity of modern pipelines, there are very few humans who can beat a good optimizing compiler. Considering that with an LLM you're also bloating limited context with unsemantic instructions, I can't see how this is anything but an exercise in failure.
ICantFeelMyToes•20m ago
you know if I could I would (Android dev)
operatingthetan•17m ago
I know you're being cheeky but we are definitely heading in that direction. We will see frameworks exclusively designed for LLM use get popular.
bdcravens•14m ago
I assume you're being cynical, but there's a lot of truth in what you say: LLMs allow me to build software to fit my architecture and business needs, even if it's not a language I'm familiar with.
krackers•20m ago
>Source code in a higher-level language is not really different anymore

Source code is a formal language, in a way that natural language isn't.

eecc•19m ago
This is the answer
inamberclad•19m ago
When you get to the really tightly controlled industries, your "formal" language becomes carefully structured English.
petcat•16m ago
Legalese exists precisely because it is an attempt to remove doubt when it comes to matters of law.

Maybe a dialect of legalese will emerge for software engineering?

ruszki•7m ago
Legalese is nowhere near precise, and we have a whole very expensive system because it’s not precise.
jrop•14m ago
Right? That's the only reason "coding with LLMs" works at all (believe me, I am simultaneously wowed by LLMs and carry a healthy level of skepticism about their abilities). You can prompt all you want, let an agent spin in a Ralph loop, or whatever, but at the end of the day, what you're checking into Git is not the prompts but the formalized, codified artifact that is the by-product of all that process.
exceptione•16m ago
None of the comparisons make any sense. In short, these concepts are essential to understand:

- determinism vs non-determinism

- conceptual integrity vs "it works somewhat, don't touch it"

petcat•10m ago
> determinism vs non-determinism

Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.

tovej•15m ago
Is this a copypasted response? I've seen the exact same bs in other AI threads on this site.
yummypaint•9m ago
Just because an LLM can turn high level instructions into low level instructions does not make it a compiler
tcmart14•5m ago
I really hate this attempt to make LLM coding sound like it's just moving up the stack, no different from a compiler. A compiler is deterministic and has a set of rules that can be understood. I can look at the output, see patterns, and know exactly what the compiler is doing, and why and where it does it. And it will be deterministic in doing it.
renewiltord•24m ago
Understanding the problem is easier for me when I have engaged with solutions to it and seen the forms in which they fail. LLMs allow me to concretize solutions, so that pre-work simply becomes work. This lets me search the space of solutions more effectively.
lukaslalinsky•20m ago
If I can offload the typing and building, I can spend more energy understanding the bigger picture
danilocesar•19m ago
I'm here just for the comments...
teaearlgraycold•13m ago
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks. The prospect didn't even end up buying. The feature got used by eleven people, and nine of them were internal QA. That's not a delivery problem. That's an "oh fuck, what are we even doing" problem.

I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.

Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.

phillipclapham•9m ago
The bottleneck I keep hitting isn't how fast the code gets written, it's whether the decisions upstream of the code were sound. You can have an agent generating at 10x speed and still burn two days on "WAIT, that's not what I meant."

The real problem is that most AI coding workflows have the agent figure out the decision logic as it goes. That works until it doesn't, and when it doesn't you're debugging an opaque reasoning chain instead of a clear decision path. I've been building tooling around making those decisions explicit before generation. The productivity gains aren't in generation speed; they're in eliminating the back-and-forth rounds that otherwise eat them all.

nathias•8m ago
people can have more than one problem
podgorniy•6m ago
Yeah, we again have a solution (LLMs) in search of problems.

The proper approach to speeding things up would be to ask, "What are the limiting factors that stop us from doing X, Y, Z?"

--

This situation of management expecting things to become fast because of AI is "vibe management". Why think, why understand, why talk to your people, when you've seen an excited presentation of the magic tool and the only thing you need to do is adopt it?