Getting Good Results from Claude Code

https://www.dzombak.com/blog/2025/08/getting-good-results-from-claude-code/
66•ingve•2h ago

Comments

aosaigh•1h ago
Just today I had my first real success with Claude (and with coding agents generally). I've played with Cursor in the past but am now trying Claude and others.

As mentioned in the article, the big trick is having clear specs. In my case I sat down for 2 hours and wrote a 12 step document on how I would implement this (along with background information). Claude went through step by step and wrote the code. I imagine this saved me probably 6-10 hours. I’m now reviewing and am going to test etc. and start adjusting and adding future functionality.

Its success was rooted in the fact that I knew exactly what it needed to do and how to do it. I wrote out all the steps and it just followed my lead.

It makes it clear to me that mid and senior developers aren’t going anywhere.

That said, it was amazing to just see it go through the requirements and implement modules full of organised documented code that I didn’t have to write.

philipwhiuk•1h ago
Frankly, even if you ignore Claude entirely, being able to write a good spec for yourself is a worthwhile endeavour.
aosaigh•1h ago
Completely agree. It's a core skill of a good developer. What's interesting is that in the past I'd have started this process but then jumped into coding prematurely. Now, when you know you're using an agent, the more you write, the better the results.
danielbln•1h ago
Yes, but let's not forget the lessons of waterfall planning. You can't anticipate everything, so the implementation plan should sit in a Goldilocks zone: detailed, but not too detailed. And after each implementation and test phase, one should feel comfortable adjusting the spec/plan to the current state of things.
aosaigh•1h ago
Another good point. I noticed this happening while writing my document.

A few times while writing the doc I had to go back and update the previous steps to add missing features.

Also, I knew when to stop. It's not fully finished yet. There are additional stages I need to implement. But as an experienced developer, I knew when I had enough for "core functionality" that was well defined.

What worries me is how do you become a good developer if AI is writing it all?

One of my strengths as a developer is understanding the problem and breaking it down into steps, creating requirements documents like I’ve discussed.

But that’s a hard-earned skill from years of client work where I wrote the code. I have a huge step up in getting the most from these agents now.

danielbln•18m ago
Agents raise the floor for all, but they raise the ceiling for those of us with sufficient priors.
closewith•1h ago
The downside of waterfall was not overly detailed specs. In fact, the best software development is universally waterfall following a good, ideally formal spec.

The downside that Agile sought to remedy was inflexibility, which is an issue greatly ameliorated by coding agents.

danielbln•19m ago
Maybe, if you know the entire possibility space beforehand, in which case that's a great position to be in. In other cases, if the spec doesn't align with reality after implementation has begun or unforeseen issues pop up, the spec needs revision, does it not?
esafak•1h ago
You can use Claude to write the spec next time.
mft_•1h ago
Can you (or anyone) share an example of such a specification document? As an amateur programmer experimenting with CC, it would be very helpful to understand the nature and depth of the information that is helpful.
bongodongobob•1h ago
I do a multi-step process:

Step 1: back and forth chat about the functionality we want. What do we want it to do? What are the inputs and outputs? Then generate a spec/requirements sheet.

Step 2: identify what language, technologies, frameworks to use to accomplish the goal. Generate a technical spec.

Step 3: architecture. Get a layout of the different files that need to be created and a general outline of what each will do.

Step 4: combine your docs and tell it to write the code.
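
(For anyone wanting a concrete example, a minimal skeleton of the combined document from these steps might look like this; the feature, stack, and file names are all made up:)

  # Feature: CSV order import

  ## Requirements (step 1)
  - Input: user-uploaded CSV, max 10 MB
  - Output: validated rows inserted into the orders table; rejects reported per line

  ## Technical spec (step 2)
  - TypeScript + Node, streaming CSV parser (csv-parse), SQLite storage

  ## Architecture (step 3)
  - src/import/parser.ts - streams and validates rows
  - src/import/report.ts - collects per-line errors
  - src/import/index.ts  - wires the parser to the database layer

  ## Instructions (step 4)
  Implement the files above in order. Stop after each file, run the tests,
  and ask before deviating from this plan.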

jamesponddotco•1h ago
I have multiple system prompts that I use before getting to the actual specification.

1. I use the Socratic Coder[1] system prompt to have a back and forth conversation about the idea, which helps me hone the idea and improve it. This conversation forces me to think about several aspects of the idea and how to implement it.

2. I use the Brainstorm Specification[2] user prompt to turn that conversation into a specification.

3. I use the Brainstorm Critique[3] user prompt to critique that specification and find flaws in it which I might have missed.

4. I use a modified version of the Brainstorm Specification user prompt to refine the specification based on the critique and have a final version of the document, which I can either use on my own or feed to something like Claude Code for context.

Doing those things improved the quality of the code and work spit out by the LLMs I use by a significant amount. But more importantly, it helped me write much better code on my own, because I now have something to guide me, whereas before I used to go in blind.

As a bonus, it also helped me decide whether an idea was worth pursuing: there are times I'm talking with the LLM and it asks me questions I don't feel like answering, which tells me I'm probably not as into the idea as I initially thought; it was just my ADHD hyper-focusing on something.

[1]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...

[2]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...

[3]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...

time0ut•1h ago
Thank you for sharing these prompts. These are excellent.
miroljub•1h ago
> That said, it was amazing to just see it go through the requirements and implement modules full of organised documented code that I didn’t have to write

Small side remark, but what is the added value of AI-generated documentation for AI-generated code? It's just a burden that increases context size whenever the AI needs to re-analyse or change the existing code. It's not like any human is ever going to read the code docs when they can just ask the AI what it's about.

nisegami•1h ago
It's entirely possible that the parameters that get activated by comments in code are highly correlated with the parameters involved in producing good code.
infecto•1h ago
Docstrings within the code can be helpful for both humans and AI. Sometimes intent in plain words is easier to digest than code, and it helps identify side effects for both human and AI.
felixgallo•1h ago
Frequently your session/context may drop (e.g. Claude crashes, your internet dies, your computer restarts). Claude does best when it can recover the context and understand the current situation from clear documentation, rather than trying to reverse-engineer intent and structure from an existing code base. Also, the human frequently does read the code docs: there may be places where Claude gets stuck or doesn't do what you want, but a human can reason their way to success and unstick the obstacle.
manwe150•47m ago
With claude -r you can resume any conversation at any previous point, so you can't lose context that way. As opposed to compacting, which I find makes it act brain-dead for a while afterwards.
Der_Einzige•33m ago
I promise you that token context rot is worse than the gains from added natural language explanations
felixgallo•11m ago
This hasn't been my experience.
aosaigh•1h ago
I’m not sure I agree that I’ll never look at the code. I think it’s still important to know how the code is working for your own mental model of the app. So in this case I’ll be testing and reviewing everything to see how it’s implemented. With that in mind it’s useful for me as well as serving as context for the AI. That said, you may be right.
weego•38m ago
written once, looked at 100 times.

I try to prompt-enforce no line-by-line documentation, but encourage function/class/module-level documentation that will help future developers/AI coding agents. Humans are generally better, but AI sometimes needs help; otherwise it can fail to understand a piece of code's context and just write its own new function that does the same thing.

spyckie2•1h ago
Can't you send the same spec through Cursor? Am I missing something there?
aosaigh•1h ago
Yes certainly. I’m sure Cursor would do a good job.

That said, I think that the differing UIs of Cursor (in the IDE) and Claude (in the CLI) fundamentally change how you approach problems with them.

Cursor is “too available”. It’s right there and you can be lazy and just ask it anything.

Claude nudges you to think more deeply and construct longer prompts before engaging with it.

That's my experience, anyway.

danielbln•22m ago
Fun fact: there is a Cursor CLI now
camel_gopher•1h ago
Many mid and senior developers cannot write specs. I agree with the intent of your statement.
dewey•38m ago
After someone mentioned this recently, I started writing really detailed specs with the help of ChatGPT Deep Research, editing them myself. Exporting the result as a Markdown document and passing it to Cursor worked really well.

It puts you in a different mind space to sit down and think about it instead of iterating too much and in the end feeling productive while actually not achieving much and going mostly in circles.

sillyfluke•25m ago
The test and review cycle is what determines time saved in my view. Since you were satisfied overall I take it that cycle was not too cumbersome?

The parent wrote:

>I imagine this saved me probably 6-10 hours. I’m now reviewing and am going to test etc.

Guessing the time saved prior to reviewing and testing seems premature from my end.

delichon•1h ago

  Asking the agent to perform a code review on its own work is surprisingly fruitful.
I do this routinely with its suggestions, usually before I apply them. It is surprising how often Claude immediately dumps on its own last output, talking both of us out of it, and usually with good reasons. I'd like to automate this double-take.
doctorhandshake•1h ago
I found that for a period of time Claude was almost excessively negative when reviewing its own work. It was only after some contemplation that I realized that it was the phrasing of my code review slash command that framed the review with a negative bent, essentially prompting Claude to dump on its own stuff. The phrasing of that prompt has been a focus of a lot of effort on my side since.
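
(As a sketch: custom slash commands in Claude Code are plain Markdown files under .claude/commands/, so a more neutrally framed review command of the kind described might read like this; the filename and wording are hypothetical:)

  Review the changes on the current branch. List what works well and what
  does not, citing file and line for each point. For every problem raised,
  state the concrete risk and the minimal fix. Do not invent problems in
  order to appear thorough.

Saved as .claude/commands/review.md, it should run as /review.
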
bgirard•1h ago
I'm playing with Claude Code to build an ASCII Factorio-like. I first had it write code without much supervision. It quickly added most of the core features you'd expect (save/load, options, debug, map generation, building, belts, crafting, smart belt placement, QoL). Then I started fixing minor bugs, and each time it would break something, e.g. tweaking movement broke belts. So I prompted it to add Playwright automation. But it wasn't able to write good-quality tests and have them all pass; the tests were full of sleep calls, etc...
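
(Aside: a minimal sketch of how such tests can avoid sleep calls by using Playwright's auto-waiting assertions; the URL, test id, and key binding are invented:)

  // Auto-waiting assertions poll until the condition holds or the
  // timeout expires, so no hard-coded sleeps are needed.
  import { test, expect } from '@playwright/test';

  test('item travels along the belt', async ({ page }) => {
    await page.goto('http://localhost:3000'); // assumed dev server
    await page.keyboard.press('b');           // hypothetical "place belt" key
    await expect(page.getByTestId('belt-0'))  // waits up to 5s, no sleep()
      .toHaveAttribute('data-item', 'iron-ore', { timeout: 5000 });
  });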

So I looked at the code more closely and found it was using the React frontend and useEffect instead of a proper game engine. It's also not great at following hook rules and understanding their timing in advanced scenarios. So now I'm prompting it to use a proper tick-based game engine and rebuilding the game from there, doing code reviews. It's going 'slower' now, but it's going much better.

My goal is to make a Show HN post when I have a good demo.
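
(A minimal sketch of the tick-based pattern described, with the simulation decoupled from React rendering; all names and numbers here are illustrative:)

  // Fixed-timestep loop: game state advances in deterministic ticks;
  // the UI only reads state, so no game logic lives in useEffect.
  const TICK_MS = 50; // 20 simulation ticks per second

  interface GameState { tick: number /* plus belts, inventory, ... */ }

  declare function render(state: GameState): void; // supplied by the UI layer

  function update(state: GameState): GameState {
    // advance belts, crafting, movement one deterministic step
    return { ...state, tick: state.tick + 1 };
  }

  let state: GameState = { tick: 0 };
  let acc = 0;
  let last = performance.now();

  function frame(now: number) {
    acc += now - last;
    last = now;
    while (acc >= TICK_MS) { // catch up if a frame ran long
      state = update(state);
      acc -= TICK_MS;
    }
    render(state); // e.g. push the latest state into React once per frame
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);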

berlinismycity•1h ago
Including Claude Code in the normal subscription was a genius move by Anthropic. It's so much better than copying and pasting code from chat windows, but that would be a harder sell if I had to pay for the service via an API.
iambateman•1h ago
If you use Laravel, I wrote github.com/iambateman/speedrun to help get good results. You type /feature [[description of feature]] and it takes it from there.

The system helps you build out a spec first, then uses a few subagents which are tuned for placing files, reviewing for best practice, etc.

I've been using it for about a week and about 70% of my Claude Code usage runs through /feature right now.

The nice thing is you can give it a _lot_ of requests and let it run for 10-15 minutes without interruption. Plus, it makes a set of planning documents before it implements, so you can see exactly what it thought it was changing.

tobyhinloopen•1h ago
That's a great, short prompt. I'm going to steal it.
OJFord•1h ago
Do you mean the CLAUDE.md?
softwaredoug•1h ago
I get a lot of success when I’ve laid out the patterns and first implementation of an idea in my code. Then tell Claude to repeat the pattern to implement X feature.

And do it very step by step, in what would equate to a tiny PR that gradually rolls out the functionality. Too big and I find lots of ugly surprises, bugs, and reorganizations that don't make sense.

time0ut•1h ago
I've been working with Claude Code daily for a month or so. It is quite excellent and better than the other agents I have used (Cursor, Q). This article has some good tips that echo some of the things I have learned.

Some additional thoughts:

- I like to start with an ideation session with Claude in the web console. I explain the goals of the project, work through high level domain modeling, and break the project down into milestones with a target releasable goal in mind. For a small project, this might be a couple hours of back and forth. The output of this is the first version of CLAUDE.md.

- Then I start the project with Claude Code, have it read my global CLAUDE.md and the project CLAUDE.md and start going. Each session begins this way.

- I have Claude Code update the project CLAUDE.md as it goes, marking its progress through the plan. Usually, at the end of the session, I will have it rewrite a special section that contains its summary of the project, how it works, and how to navigate the code. I treat this as Claude's long-term memory, basically (a sketch of such a section follows below). I have found it helps a lot.

- Even with good guidelines, Claude seems to have a tendency to get ahead of itself. I like to keep it focused and build little increments, as I would myself, if it is something I care about. If it's just some one-off or prototype, I let it go crazy and churn out whatever works.
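
(A sketch of the kind of "long-term memory" section described above; every project detail is invented:)

  ## Project summary (maintained by Claude, rewritten each session)
  - What this is: a CLI tool that syncs RSS feeds into a local SQLite database
  - How it works: fetcher -> normalizer -> store; entry point in src/pipeline/
  - Progress: milestone 3 of 5 (dedup + retry) complete through step 3.2
  - Gotchas: feed timestamps are unreliable; always normalize to UTC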

kace91•45m ago
Does the $20 subscription hold a similar bang for your buck as cursor?

I’m curious about the tool but I wonder if it requires more significant investment to be a daily driver.

nlh•1h ago
One fantastic tip I discovered (sorry I've forgotten who wrote it but probably a fellow HNer):

If you're using an AI for the "architecture" / spec phase, play a few of the models off each other.

I will start with a conversation in Cursor (with appropriate context) and ask Gemini 2.5 Pro to ask clarifying questions and then propose a solution. Once I've got something, I switch the model to O3 (or your other preferred thinking model of choice; GPT-5 now?) and add the line "please review the previous conversation and critique the design, ask clarifying questions, and propose alternatives if you think this is the wrong direction."

Do that a few times back and forth and with your own brain input, you should have a pretty robust conversation log and outline of a good solution.

Export that whole conversation into an .md doc, and use THAT in context with Claude Code to actually dive in and start writing code.

You'll still need to review everything and there will still be errors and bad decisions, but overall this has worked surprisingly well and efficiently for me so far.

enobrev•59m ago
I do something very similar for the planning phase, as well as for the code-review after a task is complete. I like to switch between opus in claude code and gemini cli, so I can work from the same files rather than copying and pasting things.

One tip I picked up from a video recently to avoid sycophancy was to take the resulting spec and instead of telling the reviewing LLM "I wrote this spec", tell it "an engineer on my team wrote this spec". When it doesn't think it's insulting you, it tends to be a bit more critical.

abroun_beholder•1h ago
Nice post, I'll try a few of those in my own file. From my side, one thing in the troubleshooting section that I think is missing is telling the agent that it should collect some proof of what it thinks is wrong before trying to proceed with a fix. I have burnt through a large number of tokens in the past in situations where Claude took a look at the dodgy code (that it had written) and went 'aha! I know what the problem is here' before proceeding to make things worse. Telling Claude to add in debug print statements can be remarkably effective but I'm sure it can also come up with other approaches.
enobrev•54m ago
Nothing quite like "I see what the problem is", and then seeing Claude start reading a bunch of files and strategizing the re-architecture of a feature just to resolve its own 3-line blunder.

If you happen to catch it and you're quick to hit "esc" and just tell it to find a simpler solution, it's surprisingly great at reconsidering, resolving the issue simply, and picking up where it left off before the blunder.

libraryofbabel•1h ago
I use Claude Code regularly and have been responsible for introducing colleagues to it. The consensus here seems to be that it’s the best coding agent out there. But since it’s the only coding agent I’ve used, when colleagues ask why it’s better than Cursor, Cline, GitHub Copilot, Gemini CLI, etc., I sometimes struggle to articulate reasons.

Claude Code power users, what would you say makes it superior to other agents?

aosaigh•1h ago
I mentioned this in another comment, but for me one of the big positives is nothing to do with the model; it's the UI of how it presents itself.

I hated at first that it wasn't like Cursor, sitting in the IDE. Then I realised I was using Cursor completely differently, often for small tasks where it's only moderately helpful (refactoring, adding small functions, autocompleting).

With Claude I have to stop, think and plan before engaging with it, meaning it delivers much more impactful changes.

Put another way, it demands more from me, meaning I treat it with more respect and get more out of it.

libraryofbabel•55m ago
This is a good point, the CLI kind of forces you to engage with the coding process through the eyes of the agent, rather than just treating it as “advanced autocomplete” in the IDE.

However, there are a lot of Claude Code clones out there now that are basically the same (Gemini CLI, Codex, now Cursor CLI etc.). Claude still seems to lead the pack, I think? Perhaps it’s some combination of better coding performance due to the underlying LLM (usually Sonnet 4) being fine-tuned on the agent tool calls, plus Claude is just a little more mature in terms of configuration options etc.?

enobrev•48m ago
I haven't tried codex or cursor-cli yet, but I have tried to give gemini a few tasks and in my experience, compared to claude code, it's not great.

Gemini's been very quick to dive in and start changing things, even when I don't want it to. But those changes almost always fall short of what I'm after. They don't run or they leave failing tests, and when I ask it to fix the tests or the underlying issue, it churns without success. Claude is significantly slower and definitely not right all the time, but it seems to do a better job of stepping through a problem and resolving it well enough, while also improving results when I interject.

CamouflagedKiwi•55m ago
Not a power user, but most recently I tried it out against Gemini, and Claude produced something that compiled and almost worked; it was off in some specifics that I could easily tweak. The next thing I asked it for (with slightly more detailed prompting), it more or less just nailed.

Meanwhile Gemini got itself stuck in a loop of compile/fail/try to fix/compile/fail again. Eventually it just gave up and said "I'm not able to figure this out". It does seem to have a kind of self-esteem problem in these scenarios, whereas Claude is more bullish on itself (maybe not always a good thing).

Claude seems to be the best at getting something that actually works. I do think Gemini will end up being tough competition, if nothing else because of the price, but Google really need a bit of a quality push on it. A free AI agent is worthless if it can't solve anything for me.

paulhodge•44m ago
Lots of signs point to the conclusion that the Opus and Sonnet models are fundamentally better at coding, tool usage, and general problem solving across long contexts. There is some kind of secret sauce in the way they train the models. Dario has mentioned in interviews that this strength is one of the company's closely guarded secrets.

And I don't think we have a great eval benchmark that exactly measures this capability yet. SWE-bench seems to be pretty good, but there are already a lot of anecdotal comments that Claude is still better at coding than GPT-5, despite the two having similar scores on SWE-bench.

monkeydust•56m ago
Been playing around with Claude Code for a home project over the last week.

I started with an idea but no spec. Yesterday I got it to a happy place where I can deploy. I spent around $75 on tokens; it was starting to feel expensive towards the end.

I did wonder whether, if I had started with a clearer specification, I could have got there quicker and for less money.

The thing is, though, looking back at the conversations I had with it, the back and forth (vibe coding, I guess) helped me refine what I was actually after. So I'm in two minds about whether a proper tight specification upfront would have been the best thing.

electroly•51m ago
Switch from Opus to Sonnet. When people report high spending in Claude Code it's always because they're using Opus. Opus is for people on unlimited plans who aren't paying API rates.
JulesRosser•46m ago
You could also define a subagent that uses Opus, for special cases such as planning
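
(A sketch of what that might look like, assuming Claude Code's Markdown-with-frontmatter subagent format, e.g. a file at .claude/agents/planner.md; the name and wording are hypothetical:)

  ---
  name: planner
  description: Use for high-level planning and for reviewing specs before coding.
  model: opus
  ---
  You are a planning specialist. Produce or critique implementation
  plans; do not write application code yourself.
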
maherbeg•48m ago
I highly recommend having a fairly succinct project-level CLAUDE.md and deferring more things to sub-folders. Use the top level as a map. Then, during planning of a feature, it can reach into each folder as it sees fit to find useful context to build out your phased implementation plan. I have it use thinking mode to figure out the right set of context.

At the end of each phase, I ask Claude to update my implementation plan with new context so a new instance of Claude can pick it up. This way it propagates context forward, and then I can clear the context window to start fresh on the next phase.
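
(For illustration, a map-style top-level CLAUDE.md of the kind described, with detail deferred to per-folder files; the paths are invented:)

  # CLAUDE.md (top level: a map, not a manual)
  - src/api/    -> HTTP handlers; conventions in src/api/CLAUDE.md
  - src/core/   -> domain logic; invariants in src/core/CLAUDE.md
  - docs/plans/ -> phased implementation plans, one file per feature

  Read the folder-level CLAUDE.md before editing files in that folder.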

andrew_lastmile•47m ago
Creating temporary artifacts of implementation plans seems to be very useful for breaking down complex tasks and, even more so, for me to double-check the logic and plans.
naiv•11m ago
The update to Opus 4.1 really improved the quality.

I personally really like to use Claude Code together with Zen MCP https://github.com/BeehiveInnovations/zen-mcp-server to analyse existing code and review fresh code with additional eyes from GPT-5 and Gemini.
