
Programming with AI: You're Probably Doing It Wrong

https://www.devroom.io/2025/08/08/programming-with-ai-youre-probably-doing-it-wrong/
21•ariejan•6h ago

Comments

ActionHank•3h ago
"If you are only using your hammer to hammer nails, you're doing it wrong" then goes on to explain how you should use agents.

I would've thought that, following the initial argument and the progression to the latest trend, we would've ended up at "use agents, write specs, and use these several currently popular MCPs."

I guess my rant is just that no one knows what the "correct" way to use them is yet. A hammer has many uses.

mattkrick•3h ago
I want to believe, and I promise I'm not trying to be a luddite here. Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

Agents are great at familiarizing me with a new codebase. They're great at debugging because even when they're wrong, they get me thinking about the problem differently so I ultimately get the right solution quicker. I love using it like a super-powered search tool and writing single functions or SQL queries about the size of a unit test. However, reviewing a junior's code ALWAYS takes more time than writing it myself, and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?

9rx•3h ago
> Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

I would say yes. I have been blown away a couple of times. But I find it is like playing a slot machine: occasionally you win, most of the time you lose. As long as my employer is willing to continue to cover the bet, I may as well pull the handle. I think it would be pretty hard to convince myself to pay for it out of my own pocket, though.

ath3nd•3h ago
> and I feel like AI quality is typically at the junior level. When it comes to authorship, either I'm prompting it wrong, or the emperor just isn't wearing clothes. How can I become a believer?

The emperor is stark naked, but the hype is making people see clothes where there is only a hairy, shriveled old man.

Sure, I can produce "working" code with Claude, but I have never been able to produce good working code. Yes, it can write an okay-ish unit test (almost 100% identical to how I'd have written it), and on a well-structured codebase (not built with Claude), with some preparation, it can kind of produce a feature. However, on more interesting problems it's just slop, and you have to keep trying and prodding until it produces something remotely reasonable.

It's addictive to watch it conjure up trash while you constantly try to steer it in the right direction, but I have never ever been able to achieve the code quality level that I am comfortable with. Fast prototype? Sure. Code that can pass my code review? Nah.

What is also funny is how non-deterministic the quality of the output is. Sometimes it really does feel like you almost fly off with it, and then, bam, garbage. It feels like roulette, and you have to keep spinning the wheel to get your dopamine hit/reward.

All while wasting money and time, and still it ends up far, far worse than doing it yourself in the first place. Hard pass.

rco8786•3h ago
“Kinda”. I run Claude Code on a parallel copy of our monorepo, while I use my primary copy.

I typically only give Claude the boring stuff: refactors, tech debt cleanup, etc. But occasionally I will give it a real feature if the urgency is low and the feature is extremely well defined.

That said, I still spend a considerable amount of time reviewing and massaging Claude’s code before it gets to PR. I haven’t timed myself or anything, but I suspect that when the task is suitable for an LLM, it’s maybe 20-40% faster. But when it’s not, it’s considerably slower and sometimes just fails completely.

ramesh31•3h ago
>Has anyone with decent (5+ years) experience built a non-trivial new feature in a production codebase quicker by letting AI write it?

Yes. Claude Code has turned quarter-long initiatives into a few afternoons of prompting for me, in the context of multiple different massive legacy enterprise codebases. It all comes down to reaching that "Jesus take the wheel" level of trust in it. You have to be OK with letting it go off and potentially waste hundreds of dollars in tokens giving you nonsense, which it sometimes will. But when it doesn't, it's like magic, and that makes the times it does worth the cost. Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place. It has fundamentally changed what my team and I are able to accomplish.

glhaynes•3h ago
>Obviously you'll still review every line before merging, but that takes an order of magnitude less time than wrestling with it in the first place.

Just speculating here, but I wouldn't be surprised if the truth of both parts of this sentence varies quite a bit amongst users of AI coding tools and their various applications; and, if so, whether that explains a lot of the discrepancy amongst reported success/enthusiasm levels.

jtfrench•3h ago
This article was a bit confusing for me. It starts off by describing what "doing it wrong" looks like (okay). It then goes on to talk about Agents. Perhaps it's just that my human brain needs a firmware update, but I was expecting the "what doing it wrong looks like" section to be followed by a "what doing it right looks like" section. Instead, the next paragraph just begins with "Agents".

Sure, one could surmise that perhaps "doing it right" means "using Agents", but that's not even how the article reads:

> "To make AI development work for you, you’ll need to provide your AI assistant with two things: the proper context and specific instructions (prompts) on how to behave under certain circumstances."

This, to me, doesn't necessitate the use of agents, so jumping straight into a section on agents seems to skip over a potentially implied logical connection between the problem in the "doing it wrong" section and how it is solved in the "Agents" section.

Copying code snippets into web UIs and testing manually is slow and clunky, but Agents are essentially just automations around these same core actions. I feel this article could've made a stronger point by getting at the core of what it means to do it wrong.

• Is "doing it wrong" indicated by the time wasted by not using an agentic mechanism vs manual manipulation?

• Is "doing it wrong" indicated by manually switching between tools instead of using MCP to automate tool delegation?

Having written several non-trivial agents myself using Gemini's and OpenAI's APIs, the main difference between handing off a task to an agent and manually copy/pasting into chat UIs is efficiency. I usually do a task manually using chat UIs first, but once I have a pattern established, or have identified a set of tools to validate responses, I can then "agentify" it if it's something I need to do repeatedly.

The quality of both approaches still depends on the same core principles: adequate context (no more and no less than what keeps the LLM's attention on the task at hand) and adequate instructions for the task (often with a handful of examples). In this regard, I agree with the author: correct context plus instructions are the key ingredients of a useful response. The agentic element is an efficiency layer on top of those key ingredients, which frees the dev from manual orchestration and potentially avoids human error (while potentially introducing LLM error).
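
To make that concrete, here's a rough Python sketch of what I mean by "agentifying" an established pattern. It's only an illustration: call_llm and validate are placeholders for whatever model API and validation tooling you actually use, not anything from the article.

    # Sketch of "agentifying" a manual chat-UI workflow:
    # fixed context + instructions, a validation tool, and a retry loop.

    def call_llm(system: str, user: str) -> str:
        """Placeholder for whichever chat API you use (OpenAI, Gemini, ...)."""
        raise NotImplementedError

    def validate(output: str) -> tuple[bool, str]:
        """Placeholder check, e.g. run the tests or a linter on the output."""
        return True, ""

    CONTEXT = "Only the files, schemas, and conventions relevant to the task."
    INSTRUCTIONS = "What to do and how to behave, plus a handful of examples."

    def run_task(task: str, max_attempts: int = 3) -> str:
        feedback = ""
        for _ in range(max_attempts):
            output = call_llm(
                system=CONTEXT + "\n\n" + INSTRUCTIONS,
                user=task + ("\n\n" + feedback if feedback else ""),
            )
            ok, errors = validate(output)
            if ok:
                return output
            # Feed validation errors back instead of a human copy/pasting them.
            feedback = "Previous attempt failed validation:\n" + errors
        raise RuntimeError("No valid result; hand the task back to a human.")

The agent part is just that loop; the context and instructions are still doing all the real work.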

Am I missing something here?

kingkawn•2h ago
Will there be any more junior engineers?

By the end of the current generation's careers, a few decades from now, surely it will be able to do everything.

jmull•2h ago
Well, F-.

I can't believe how far I got on that article before it finally dawned on me that it's just some AI slop. (Well, that's the charitable explanation.)

Chinjut•2h ago
It claims at the bottom to be hand-written. But one can manually write slop too.

I want everything local – Building my offline AI workspace

https://instavm.io/blog/building-my-offline-ai-workspace
282•mkagenius•3h ago•88 comments

Ultrathin business card runs a fluid simulation

https://github.com/Nicholas-L-Johnson/flip-card
758•wompapumpum•10h ago•165 comments

Tor: How a military project became a lifeline for privacy

https://thereader.mitpress.mit.edu/the-secret-history-of-tor-how-a-military-project-became-a-lifeline-for-privacy/
164•anarbadalov•6h ago•99 comments

Jim Lovell, Apollo 13 commander, has died

https://www.nasa.gov/news-release/acting-nasa-administrator-reflects-on-legacy-of-astronaut-jim-lovell/
197•LorenDB•2h ago•30 comments

Efrit: A native elisp coding agent running in Emacs

https://github.com/steveyegge/efrit
39•simonpure•2h ago•3 comments

Build durable workflows with Postgres

https://www.dbos.dev/blog/why-postgres-durable-execution
52•KraftyOne•2h ago•23 comments

Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?

142•superasn•2h ago•91 comments

Disney 1985 film The Black Cauldron was an experiment that failed

https://www.bbc.com/culture/article/20250807-the-radical-film-that-became-a-disaster-for-disney
21•tigerlily•2h ago•22 comments

Astronomy Photographer of the Year 2025 shortlist

https://www.rmg.co.uk/whats-on/astronomy-photographer-year/galleries/2025-shortlist
134•speckx•7h ago•20 comments

How we replaced Elasticsearch and MongoDB with Rust and RocksDB

https://radar.com/blog/high-performance-geocoding-in-rust
162•j_kao•8h ago•38 comments

Json2dir: a JSON-to-directory converter, a fast alternative to home-manager

https://github.com/alurm/json2dir
32•alurm•3h ago•9 comments

Apple's history is hiding in a Mac font

https://www.spacebar.news/apple-history-hiding-in-mac-font/
104•rbanffy•4d ago•13 comments

Fire hazard of WHY2025 badge due to 18650 Li-Ion cells

https://wiki.why2025.org/Badge/Fire_hazard
54•fjfaase•2d ago•53 comments

Poltergeist: File watcher with auto-rebuild for any language or build system

https://github.com/steipete/poltergeist
7•jshchnz•3d ago•2 comments

HRT's Python fork: Leveraging PEP 690 for faster imports

https://www.hudsonrivertrading.com/hrtbeat/inside-hrts-python-fork/
52•davidteather•5h ago•65 comments

Linear sent me down a local-first rabbit hole

https://bytemash.net/posts/i-went-down-the-linear-rabbit-hole/
395•jcusch•16h ago•186 comments

GPU-rich labs have won: What's left for the rest of us is distillation

https://inference.net/blog/what-s-left-is-distillation
41•npmipg•2h ago•23 comments

Getting good results from Claude code

https://www.dzombak.com/blog/2025/08/getting-good-results-from-claude-code/
179•ingve•8h ago•89 comments

Window Activation

https://blog.broulik.de/2025/08/on-window-activation/
158•LorenDB•4d ago•86 comments

Open SWE: An open-source asynchronous coding agent

https://blog.langchain.com/introducing-open-swe-an-open-source-asynchronous-coding-agent/
49•palashshah•5h ago•17 comments

Overengineering my homelab so I don't pay cloud providers

https://ergaster.org/posts/2025/08/04-overegineering-homelab/
170•JNRowe•3d ago•148 comments

Imaging reveals 2k-year-old ice mummy's 'incredibly impressive' tattoos

https://www.cbc.ca/radio/asithappens/ice-mummy-tattooos-1.7601132
5•empressplay•3d ago•0 comments

Texas politicians warn Smithsonian it must not lobby to retain its space shuttle

https://arstechnica.com/space/2025/08/texas-politicians-warn-smithsonian-it-must-not-lobby-to-retain-its-space-shuttle/
10•LorenDB•27m ago•0 comments

Someone keeps stealing, flying, fixing and returning this man's 1958 Cessna

https://www.latimes.com/california/story/2025-08-08/mystery-plane-thief
60•MBCook•4h ago•75 comments

A robust, open-source framework for Spiking Neural Networks on low-end FPGAs

https://arxiv.org/abs/2507.07284
22•PaulHoule•4d ago•1 comments

Telefon Hírmondó

https://en.wikipedia.org/wiki/Telefon_H%C3%ADrmond%C3%B3
67•csense•4d ago•9 comments

A message from Intel CEO Lip-Bu Tan to all company employees

https://newsroom.intel.com/corporate/my-commitment-to-you-and-our-company
70•rntn•4h ago•78 comments

Voice Controlled Swarms

https://jasonfantl.com/posts/Voice-Controlled-Swarms/
24•jfantl•4d ago•3 comments

Study finds flavor bans cut youth vaping but slow decline in cigarette smoking

https://medicalxpress.com/news/2025-07-flavor-youth-vaping-decline-cigarette.html
13•PaulHoule•1h ago•10 comments

The surprise deprecation of GPT-4o for ChatGPT consumers

https://simonwillison.net/2025/Aug/8/surprise-deprecation-of-gpt-4o/
224•tosh•3h ago•198 comments