
The unbearable slowness of AI coding

https://joshuavaldez.com/the-unbearable-slowness-of-ai-coding/
49•aymandfire•1h ago

Comments

falcor84•1h ago
> ... I’ll keep pulling PRs locally, adding more git hooks to enforce code quality, and zooming through coding tasks—only to realize ChatGPT and Claude hallucinated library features and I now have to rip out Clerk and implement GitHub OAuth from scratch.

I don't get this. How many git hooks do you need to identify that Claude hallucinated a library feature? Wouldn't a single hook running your tests catch that?
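A minimal sketch of such a hook, from the repo root (`npm test` is an assumption; substitute your project's test command):

```shell
# Sketch: one pre-push hook that runs the test suite, so code calling
# a hallucinated library API fails before it ever leaves the machine.
mkdir -p .git/hooks
cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
set -e        # abort the push if any command below fails
npm test      # assumption: substitute your test runner
EOF
chmod +x .git/hooks/pre-push
```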

sc68cal•1h ago
They probably don't have any tests, or the tests that the LLM creates are flawed and not detecting these problems
deegles•48m ago
I tried using agents in Cursor and when it runs into issues it will just rip out the offending code :)
AstroBen•35m ago
Just tell the AI "and make sure you don't add bugs or break anything"

Works every time

loandbehold•1h ago
"Hallucinated" library features are identified even earlier, when Claude builds your project. I also don't get what the author is talking about.
pluto_modadic•40m ago
AI agents have been known to rip out mocks so that the tests pass.
thrown-0825•9m ago
I have had human devs do that too
avr5500•1h ago
I split a large task into 4-5 small subtasks, each in a new conversation to save tokens, and it does a pretty good job.
jama211•1h ago
Well yeah, as the app scales it will bump up against context limits. Giving it sandboxed areas to do specific tasks will speed it up again, but that’s not possible with everything.
dwringer•59m ago
Slow is smooth, smooth is fast.
DrewADesign•37m ago
Maybe I’ve misunderstood this, so correct me if I’m wrong… do actual professional developers let enough code be generated to include entire libraries that handle things as important as authentication, and then build on top of it without making sure the previously generated code actually does what it’s supposed to? Just accept local PRs written by AI, with a very sternly worded “now you better not make any bullshit” system prompt? All this just in time to ramp up AI penetration tools. Jesus.

It’s kind of crazy to me how the cool kid take on software development, as recent as 3 years ago, was: strictly-typed everything, ‘real men’ don’t use garbage collection, everything must be optimized to death even when it isn’t really necessary, etc. and now it seems to be ‘you don’t seriously expect me to look at ‘every single line of code’ I submit, do you?’

poszlem•5m ago
The mistake you’re making is assuming it’s the same group of people saying both things. The “strictly typed, no GC, optimize everything” crowd hasn’t suddenly turned into the “lol I don’t read my AI-generated PRs” crowd. Those are two different tribes of devs with completely different value systems.

What’s changed isn’t that the same engineers did a 180 on principles, it’s that the discourse got hijacked by a new set of people who think shipping fast with AI is cooler than sweating over type systems. The obsession with performance purity was always more of a niche cultural flex than a universal law, and now the flex du jour is “look how much I can outsource to the machine.”

hodgehog11•34m ago
AI tools seem excellent at getting through boilerplate stuff at the start of a project. But as time goes on and you have to think about what you are doing, it'll be faster to write it yourself than to convey it in natural language to an LLM. I don't see this as an issue with the tool, but just getting a better idea of what it is really good for.
Nextgrid•28m ago
The role of a software engineer is to condense the (often unclear) requirements, business domain knowledge, existing code (if any) and their skills/experience into a representation of the solution in a very concise language: a programming language.

Having to instead express all that (including the business-related part, since the agent has no context of that) in a verbose language (English) feels counter-productive, and is counter-productive in my experience.

I've successfully one-shotted easy self-contained, throwaway tasks ("make me a program that fills Redis with random keys and values" - Claude will one-shot that), but when it comes to working with complex existing codebases I've never seen the benefits: having to explain all the context to the agent and correcting its mistakes takes longer than just doing it myself. Worse, it's unpredictable - I know roughly how long something will take manually, but it's impossible to tell in advance whether an agent will one-shot it successfully or require longer babysitting than just doing it myself from the beginning.
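The Redis-filler example really is one-shot-sized; a hand-written sketch (assuming the `redis` client package and a local server; the pair generator is factored out so it can be checked without one):

```python
import random
import string


def random_pairs(n, key_len=12, val_len=32):
    """Generate n random (key, value) pairs."""
    alphabet = string.ascii_lowercase + string.digits
    rand = lambda k: "".join(random.choices(alphabet, k=k))
    return [(f"key:{rand(key_len)}", rand(val_len)) for _ in range(n)]


def fill_redis(n=10_000, host="localhost", port=6379):
    """Write n random keys and values in one pipelined round trip."""
    import redis  # assumption: pip install redis, server on localhost

    r = redis.Redis(host=host, port=port)
    pipe = r.pipeline()
    for key, value in random_pairs(n):
        pipe.set(key, value)
    pipe.execute()
```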

pbalau•18m ago
We are going to end up having boilerplate natural language text, that's been tested and proven to get the same output every time. Then we'll have a sort of transpiler and maybe a sub language of English, to make prompting easier. Then we will source control those prompts. What we actually do today, with extra steps.
skeedle•33m ago
Even if it's slow, you can run multiple agents. You can have one making changes, while another writes documentation, while another does security checks, while another looks for optimizations. Persist findings to markdown files to track progress and for cross-agent knowledge sharing if needed. And do whatever else while it's all running. This has been my experience.
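The fan-out-and-persist pattern can be sketched like this (the `run_agent` stub is hypothetical, standing in for whatever agent CLI or API you drive; only the parallel fan-out and the markdown hand-off are the point):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def run_agent(role, task):
    # Hypothetical stand-in for invoking a real coding agent;
    # a real version would shell out to a CLI or call an API.
    return f"# {role} findings\n\n- task: {task}\n- status: done\n"


def fan_out(tasks, out_dir="agent-notes"):
    """Run one agent per task in parallel, then persist each agent's
    findings to a markdown file the other agents can read as context."""
    notes = Path(out_dir)
    notes.mkdir(exist_ok=True)
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {role: pool.submit(run_agent, role, task)
                   for role, task in tasks.items()}
        for role, fut in futures.items():
            (notes / f"{role}.md").write_text(fut.result())


fan_out({
    "changes": "implement feature X",
    "docs": "document feature X",
    "security": "audit feature X",
})
```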
foobarbecue•29m ago
OP says in 2nd paragraph that they are using multiple agents in parallel. In fact, that's what their app does.
ants_everywhere•27m ago
If they are modifying the same code, then you have to merge all of the different changes, so it's not really parallel.

IME it's faster to not try to edit the same code in parallel because of the cost of merging.

sarchertech•26m ago
But then you have to keep all those tasks in your head and be ready to jump into any of them.

The check-ins are much more frequent and the instructions much lower level than what you’d give to a team if you were running it.

Do you have an example of a large application you’ve released with this methodology that has real paying users that isn’t in the AI space?

block_dagger•27m ago
My employer hosts one of the largest Ruby on Rails apps in the world. I've noticed that Claude Code takes a long time to grep for what it needs. Cursor is much better at this (probably because of local project indexing). Due to this, I favor Cursor over CC in my day to day workflows. In smaller code bases, both are pretty fast.
doctoboggan•26m ago
When building a project from scratch using AI, it can be tempting to give in to the vibe and ignore the structure/architecture and let it evolve naturally. This is a bad idea when humans do it, and it's also a bad idea when LLM agents do it. You have to be considering architecture, dataflow, etc from the beginning, and always stay on top of it without letting it drift.

I have tried READMEs scattered through the codebase but I still have trouble keeping the agent aware of the overall architecture we built.

doubleorseven•24m ago
I've never done QA. Just thinking about doing QA makes my head swirl. But yes, because of LLMs I am now a part-time QA engineer, and I think it's kinda helping me be a better developer. I'm working on a massive feature at work, something I can't just hand to an agent, and I already feel like something has changed in how I think about every little piece of code I'm adding. I didn't see that coming.
ricardo81•21m ago
Somewhat related: I found Cursor/VS Code was slowing to the point of being unusable. Turning on privacy mode helped, but the main culprit was extremely verbose logging. Running `fatrace -c --command=cursor` uncovered the issue.

The disk in question was an HDD and the problem disappeared (or is better hidden) after symlinking the log dir to an SSD.

As for code itself, I've never had an issue with slowness. If anything it's the verbosity of wanting to explain itself and excess logging in the code it creates.
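The diagnosis-and-fix above amounts to something like the following (the directory names here are local placeholders so the commands are safe to run anywhere; in practice the log directory is whatever `sudo fatrace -c --command=cursor` reports, and the destination is a mount on the SSD):

```shell
# LOGDIR: the chatty log directory fatrace pointed at (placeholder).
# SSD: a directory on the faster disk (placeholder).
LOGDIR=./cursor-logs
SSD=./ssd
mkdir -p "$LOGDIR" "$SSD"

# Relocate the logs to the SSD and leave a symlink behind so the
# application keeps writing to the same path it always did.
mv "$LOGDIR" "$SSD/cursor-logs"
ln -s "$SSD/cursor-logs" "$LOGDIR"
```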

ants_everywhere•19m ago
I've found LLMs to be very good at writing design docs and finding problems in code.

Currently they're better at locating problems than fixing them without direction. Gemini seems smarter and better at architecture and best practices. Claude seems dumber but is more focused on getting things done.

The right solution is going to be a variety of tools and LLMs interacting with each other. But it's going to take real humans having real experience with LLMs to get there. It's not something that you can just dream up on paper and have it work out well since it depends so much on the details of the current models.

nchmy•5m ago
This should be called the eternal, unbearable slowness of code review, because the author writes that the AI actually churns out code extremely rapidly. The (hopefully capable, attentive, careful) human is the bottleneck here, as it should be.
stillpointlab•3m ago
I'm still calibrating myself on the size of task that I can get Claude Code to do before I have to intervene.

I call this problem the "goldilocks" problem. The task has to be large enough that it outweighs the time necessary to write out a sufficiently detailed specification AND to review and fix the output. It has to be small enough that Claude doesn't get overwhelmed.

The issue with this is, writing a "sufficiently detailed specification" is task dependent. Sometimes a single sentence is enough, other times a paragraph or two, sometimes a couple of pages is necessary. And the "review and fix" phase again is totally dependent and completely unknown. I can usually estimate the spec time but the review and fix phase is a dice roll dependent on the output of the agent.

And the "overwhelming" metric is again not clear. Sometimes Claude Code can crush significant tasks in one shot. Other times it can get stuck or lost. I haven't fully developed an intuition for this yet, how to differentiate these.

What I can say, this is an entirely new skill. It isn't like architecting large systems for human development. It isn't like programming. It is its own thing.

AI tooling must be disclosed for contributions

https://github.com/ghostty-org/ghostty/pull/8289
211•freetonik•1h ago•85 comments

How does the US use water?

https://www.construction-physics.com/p/how-does-the-us-use-water
25•juliangamble•7h ago•3 comments

Building AI products in the probabilistic era

https://giansegato.com/essays/probabilistic-era
37•sdan•1h ago•8 comments

Beyond sensor data: Foundation models of behavioral data from wearables

https://arxiv.org/abs/2507.00191
168•brandonb•5h ago•36 comments

An interactive guide to SVG paths

https://www.joshwcomeau.com/svg/interactive-guide-to-paths/
93•joshwcomeau•3d ago•10 comments

Miles from the ocean, there's diving beneath the streets of Budapest

https://www.cnn.com/2025/08/18/travel/budapest-diving-molnar-janos-cave
44•thm•3d ago•4 comments

DeepSeek-v3.1 Release

https://api-docs.deepseek.com/news/news250821
64•wertyk•1h ago•4 comments

Weaponizing image scaling against production AI systems

https://blog.trailofbits.com/2025/08/21/weaponizing-image-scaling-against-production-ai-systems/
265•tatersolid•7h ago•66 comments

My other email client is a daemon

https://feyor.sh/blog/my-other-email-client-is-a-mail-daemon/
40•aebtebeten•11h ago•11 comments

D4D4

https://www.nmichaels.org/musings/d4d4/d4d4/
405•csense•4d ago•46 comments

Using Podman, Compose and BuildKit

https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/
211•LaSombra•9h ago•58 comments

Cua (YC X25) is hiring design engineers in SF

https://www.ycombinator.com/companies/cua/jobs/a6UbTvG-founding-engineer-ux-design
1•frabonacci•3h ago

The power of two random choices

https://brooker.co.za/blog/2012/01/17/two-random.html
19•signa11•3d ago•2 comments

Crimes with Python's Pattern Matching (2022)

https://www.hillelwayne.com/post/python-abc/
3•agluszak•27m ago•0 comments

The contrarian physics podcast subculture

https://timothynguyen.org/2025/08/21/physics-grifters-eric-weinstein-sabine-hossenfelder-and-a-crisis-of-credibility/
103•Emerson1•3h ago•102 comments

Show HN: OS X Mavericks Forever

https://mavericksforever.com/
243•Wowfunhappy•3d ago•98 comments

Launch HN: Skope (YC S25) – Outcome-based pricing for software products

29•benjsm•5h ago•26 comments

The Core of Rust

https://jyn.dev/the-core-of-rust/
94•zdw•3h ago•56 comments

Adding my home electricity uptime to status.href.cat

https://aggressivelyparaphrasing.me/2025/08/21/adding-my-home-electricity-uptime-to-status-href-cat/
27•todsacerdoti•4h ago•22 comments

Unity reintroduces the Runtime Fee through its Industry license

https://unity.com/products/unity-industry
174•finnsquared•5h ago•85 comments

Mark Zuckerberg freezes AI hiring amid bubble fears

https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-freezes-ai-hiring-amid-bubble-fears/
564•pera•9h ago•528 comments

Show HN: ChartDB Cloud – Visualize and Share Database Diagrams

https://app.chartdb.io
70•Jonathanfishner•7h ago•9 comments

Why is D3 so Verbose?

https://theheasman.com/short_stories/why-is-d3-code-so-long-and-complicated-or-why-is-it-so-verbose/
79•TheHeasman•10h ago•48 comments

Show HN: Using Common Lisp from Inside the Browser

https://turtleware.eu/posts/Using-Common-Lisp-from-inside-the-Browser.html
85•jackdaniel•8h ago•21 comments

You Should Add Debug Views to Your DB

https://chrispenner.ca/posts/views-for-debugging
60•ezekg•4d ago•18 comments

A summary of recent AI research (2016)

https://blog.plan99.net/the-science-of-westworld-ec624585e47
19•mike_hearn•4h ago•0 comments

Unmasking the Privacy Risks of Apple Intelligence

https://www.lumia.security/blog/applestorm
78•mroi•4h ago•17 comments

Margin debt surges to record high

https://www.advisorperspectives.com/dshort/updates/2025/07/23/margin-debt-surges-record-high-june-2025
182•pera•8h ago•229 comments

Bank forced to rehire workers after lying about chatbot productivity, union says

https://arstechnica.com/tech-policy/2025/08/bank-forced-to-rehire-workers-after-lying-about-chatbot-productivity-union-says/
233•ndsipa_pomu•4h ago•89 comments