6 Weeks of Claude Code

https://blog.puzzmo.com/posts/2025/07/30/six-weeks-of-claude-code/
122•mike1o1•2d ago

Comments

iwontberude•6h ago
I stopped writing as much code because of RSI and carpal tunnel but Claude has given me a way to program without pain (perhaps an order of magnitude less pain). As much as I was wanting to reject it, I literally am going to need it to continue my career.
iaw•6h ago
Now that you point this out, since I started using Claude my RSI pain is virtually non-existent. There is so much boilerplate and repetitive work taken out when Claude can hit 90% of the mark.

Especially with very precise language. I've heard of people using speech to text to use it which opens up all sorts of accessibility windows.

flappyeagle•6h ago
Are you using dictation for text entry
iwontberude•5h ago
Great suggestion! I will be now :)
cooperaustinj•5h ago
Superwhisper is great. It's closed source, however; there may be comparable open source options available now. I'd suggest trying Superwhisper so you know what's possible, and maybe comparing to open source options after. Superwhisper runs locally and has a one-time purchase option, which makes it acceptable to me.
robbomacrae•4h ago
Talkito (I posted the link further up) is open source, and unlike Superwhisper it makes Claude Code talk back to you as well, which was the original aim: to be able to multitask.
robbomacrae•4h ago
Sorry to hear that, and while it wasn't my original goal to serve such a use case, I wonder if being able to interact with Claude Code via voice would help you? On macOS it uses free defaults for TTS and ASR, but you can BYOK to other providers. https://github.com/robdmac/talkito
libraryofbabel•1h ago
You aren't the first person I have heard say this. It's an under-appreciated way in which these tools are a game-changer. They are a wonderful gift to those of us prone to RSI, because they're best at precisely the boilerplate, repetitive stuff that tends to cause the most discomfort. I used to feel slightly sick every time I was faced with some big piece of boilerplate I had to hammer out, because of my RSI and also because it just made me bored. No longer. People worry that these tools will end careers, but (for now at least) I think they can save the careers of more than a few people. A side-effect is that I now enjoy programming much more, because I can operate at a level of abstraction where I am actually dealing with novel problems rather than sending my brain to sleep and my wrists to pain hell hammering out curly braces or yaml boilerplate.
Fraterkes•6h ago
Irrespective of how good Claude code actually is (I haven’t used it, but I think this article makes a really cogent case), here’s something that bothers me: I’m very junior, I have a big slow ugly codebase of gdscript (basically python) that I’m going to convert to C# to both clean it up and speed it up.

This is for a personal project, I haven’t written a ton of C# or done this amount of refactoring before, so this could be educational in multiple ways.

If I were to use Claude for this I'd feel like I was robbing myself of something that could teach me a lot (and maybe motivate me to structure my code better in the future). If I don't use Claude, I feel like I'm wasting my (very sparse) free time on a pretty uninspiring task that may very well be automated away in most future jobs, mostly out of some (misplaced? masochistic?) belief about programming craft.

This sort of back and forth happens a lot in my head now with projects.

jansan•6h ago
It depends on how you use it. You can ask Claude Code for instructions to migrate the code yourself, and it will be a teacher. Or you can ask it to create a migration plan and then execute it, in which case learning will of course be very limited. I recommend doing the conversion in smaller steps if possible. We tried to migrate a project just for fun in one single step and Claude Code failed miserably (though it thought it had done a terrific job), but doing it in smaller chunks worked out quite well.
mentos•6h ago
Cursor has made writing C++ feel like a scripting language for me. I no longer wrestle with arcane error messages; they go straight into Cursor, I ask it to resolve them, and from its solution I learn what my error was.
baq•6h ago
As someone who has been programming computers for almost 30 years, and professionally for about 20: by all means do some of it manually, but leverage LLMs in tutor/coach mode, with "explain this but don't solve it for me" prompts when stuck. Let the tool convert the boring parts once you're confident they're truly boring.

Programming takes experience to acquire taste for what's right, what's not, and what smells bad and will bite you but you can temporarily (yeah) not care about. If you let the tool do everything for you, you won't ever acquire that skill, and it's critical for judging and reviewing your own work and the work of others, including LLM slop.

I agree it’s hard and I feel lucky for never having to make the LLM vs manual labor choice. Nowadays it’s yet another step in learning the craft, but the timing is wrong for juniors - you are now expected to do senior level work (code reviews) from day 1. Tough!

jvanderbot•6h ago
Well I think you've identified a task that should be yours. If the writing of the code itself is going to help you, then don't let AI take that help from you because of a vague need for "productivity". We all need to take time to make ourselves better at our craft, and at some point AI can't do that for you.

But I do think it could help, for example by showing you a better pattern or language or library feature after you get stuck or finish a first draft. That's not cheating that's asking a friend.

adamcharnock•6h ago
I think this is a really interesting point. I have a few thoughts as I read it (as a bit of a grey-beard).

Things are moving fast at the moment, but I think it feels even faster because of how slowly things have been moving for the last decade. I was getting into web development in the mid-to-late '90s, and I think the landscape felt similar then. Plugged-in people kinda knew the web was going to be huge, but on some level we also knew that things were going to change fast. Whatever we learnt would soon fall by the wayside and become compost for the next new thing we had to learn.

It certainly feels to me like things have really been much more stable for the last 10-15 years (YMMV).

So I guess what I'm saying is: yeah, this is actually kinda getting back to normal. At least that is how I see it, if I'm in an excitable optimistic mood.

I'd say pick something and do it. It may become brain-compost, but I think a good deep layer of compost is what will turn you into a senior developer. Hopefully that metaphor isn't too stretched!

MrDarcy•5h ago
I've also felt what GP expresses earlier this year. I am a grey-beard now. When I was starting my career in the early 2000s, a grey-beard told me, "The tech is entirely replaced every 10 years." This was accompanied by an admonition to evolve or die in each cycle.

This has largely been true outside of some outlier fundamentals, like TCP.

I have tried Claude code extensively and I feel it’s largely the same. To GP’s point, my suggestion would be to dive into the project using Claude Code and also work to learn how to structure the code better. Do both. Don’t do nothing.

Fraterkes•3h ago
Thx to both of you, I think these replies helped me a bit.
infecto•5h ago
What's wrong with using Claude Code to write a possible initial iteration and then going back and reviewing the code for understanding? Various languages and frameworks have their own footguns, but those usually aren't unfixable later on.
georgeburdell•4h ago
In my experience, determining what to write is harder than how to write it, so you deprive yourself of that learning if you start from generated code
jmalicki•41m ago
I actually think this helps in that learning - it's sitting alongside a more experienced expert doing it and seeing what they came up with.

In the same sense that the best way to learn to write is often to read a lot, whether English or code. Of course, you also have to do it, but having lots of examples to go on helps.

CuriouslyC•5h ago
How much do you care about experience with C# and porting software? If that's an area you're interested in pursuing maybe do it by hand I guess. Otherwise I'd just use claude.
jghn•5h ago
Disagree entirely, and would suggest the parent intentionally dive in on things like this.

The best way to skill up over the course of one's career is to expose yourself to as broad an array of languages, techniques, paradigms, concepts, etc. as possible. So sure, you may never touch C# again. But by spending time to dig in a bit you'll pick up some new ideas that you can bring forward with you to other things you *do* care about later.

bredren•3h ago
I agree here. GP should take time to learn the thing and use AI to assist in learning not direct implementation.

If there is going to be room for junior folks in SWE, it will probably be afforded to those who understand some language’s behaviors at a fairly decent level.

I’d presume they will also be far better at system design, TDD and architecture than yesterday’s juniors, (because they will have to be to drive AI better than other hopeful candidates).

But there will be plenty of grey-beards around who expect syntactical competence, and fwiw, if you can't read your own code, even slowly, you fail at the most crucial aspect of AI-accelerated development: validation.

thatfrenchguy•5h ago
Doing the easy stuff is what gives you the skills to do the harder stuff that a LLM can’t do, which arguably makes this hard indeed
yoyohello13•5h ago
A few years ago there was a blog post trend going around about "write your own x" instead of using a library. You learn a lot about how software works by writing your own version of a thing. Want to learn how client-side routing works? Write a client-side router. I think LLMs have basically made it so anything can be "library" code. So really it comes down to what you want to get out of the project. Do you want to get better at C#? Then you should probably do the port yourself. If you just want to have the ported code and focus on some other aspect, then have Claude do it for you.

Really if your goal is to learn something, then no matter what you do there has to be some kind of struggle. I’ve noticed whenever something feels easy, I’m usually not really learning much.
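The "write a client-side router" exercise is smaller than it sounds; the core of it is just path pattern matching and dispatch. A toy sketch in Python (illustrative only; the names are invented, and a real router would also hook browser history/hash events):

```python
# Toy route matcher: the heart of a client-side router is matching a
# URL path against registered patterns and extracting parameters.
def match_route(pattern: str, path: str):
    """Return extracted params if `path` matches `pattern`, else None."""
    p_parts = pattern.strip("/").split("/")
    u_parts = path.strip("/").split("/")
    if len(p_parts) != len(u_parts):
        return None
    params = {}
    for p, u in zip(p_parts, u_parts):
        if p.startswith(":"):      # dynamic segment, e.g. ":id"
            params[p[1:]] = u
        elif p != u:               # literal segment must match exactly
            return None
    return params

routes = {
    "/users/:id": lambda params: f"user page for {params['id']}",
    "/about": lambda params: "about page",
}

def navigate(path: str):
    """Dispatch a path to the first matching handler, or a 404 fallback."""
    for pattern, handler in routes.items():
        params = match_route(pattern, path)
        if params is not None:
            return handler(params)
    return "404"

print(navigate("/users/42"))   # -> user page for 42
print(navigate("/missing"))    # -> 404
```

Writing even this much by hand teaches you why routers order their routes and how parameter extraction works, which is the kind of understanding the "write your own x" exercise is after.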

michaelcampbell•5h ago
I'm on the tail end of my 35+ year developer career, but one thing I always do with any LLM stuff is this: I'll ask it to solve something I generally know I COULD solve; I just don't feel like it.

Example: Yesterday I was working with an Open API 3.0 schema. I know I could "fix" the schema to conform to a sample input, I just didn't feel like it because it's dull, I've done it before, and I'd learn nothing. So I asked Claude to do it, and it was fine. Then the "Example" section no longer matched the schema, so Claude wrote me a fitting example.

But the key here is I would have learned nothing by doing this.

There are, however, times where I WOULD have learned something. So whenever I find the LLM has shown me something new, I put that knowledge in my "knowledge bank". I use the Anki SRS flashcard app for that, but there are other ways, like adding to your "TIL blog" (which I also do), or taking that new thing and writing it out from scratch, without looking at the solution, a few times and compiling/running it. Then trying to come up with ways this knowledge can be used in different ways; changing the requirements and writing that.

Basically, I get my brain to interact with this new thing in at least two ways so it can synthesize with other things in my brain. This is important.

Learning a new (spoken) language uses this a lot. Learn a new word? Put it in 3 different sentences. Learn a new phrase? Create at least 2-3 new phrases based on that.

I'm hoping this will keep my grey matter exercised enough to keep going.

jona777than•5h ago
After 16 years of coding professionally, I can say Claude Code has made me considerably better at the things that I had to bang my head against the wall to learn. For things I need to learn that are novel to me, for productivity sake, it’s been “easy come; easy go” like any other learning experience.

My two cents are:

If your goal is learning fully, I would prioritize the slow & patient route (no matter how fast “things” are moving.)

If your goal is to learn quickly, Claude Code and other AI tooling can be helpful in that regard. I have found using “ask” modes more than “agent” modes (where available) can go a long way with that. I like to generate analogies, scenarios, and mnemonic devices to help grasp new concepts.

If you’re just interested in getting stuff done, get good at writing specs and letting the agents run with it, ensuring to add many tests along the way, of course.

I perceive there’s at least some value in all approaches, as long as we are building stuff.

fsloth•4h ago
Yes! Valuable, fundamental, etc. - do it yourself, the slow path.

Boring, uninspiring, commodity - and most of all - easily reversible and not critical - to the LLM it goes!

When learning things intrinsic motivation makes one unreasonably effective. So if there is a field you like - just focus on it. This will let you proceed much faster at _valuable_ things which all in all is the best use of ones time in any case.

Software crafting when you are not at a job should be fun. If it’s not fun, just do the least effort that suits your purpose. And be super diligent only about the parts _you_ care about.

IMHO people who think everyone should do everything from first principles with the diligence of a swiss clocksmith are just being difficult. It’s _one_ way of doing it but it’s not the _only right way_.

Care about important things. If a thing is not important and not interesting just deal with it the least painfull way and focus on something value adding.

stavros•4h ago
In my experience, if you don't review the generated code, and thus don't become proficient enough in C# to do that, the codebase will become trash very quickly.

Errors compound with LLM coding, and, unless you correct them, you end up with a codebase too brittle to actually be worth anything.

Friends of mine apparently don't have that problem, and they say they have the LLM write enough tests that they catch the brittleness early on, but I haven't tried that approach. Unfortunately, my code tends to not be very algorithmic, so it's hard to test.

bulginess2173•4h ago
Before AI, there was copy-paste. People who copied code from Stack Overflow without understanding it learned nothing; I saw it up close many times. I don't see a problem with you asking for advice or concepts. But if you have it write everything for you, you definitely won't learn.

That being said, you have to protect your time as a developer. There are a million things to learn, and if making games is your goal as a junior, porting GDscript code doesn't sound like an amazing use of your time. Even though you will definitely learn from it.

tiltowait•4h ago
The difference now is that LLMs propose to provide copy+paste for everything, and for your exact scenario. At least with Stack Overflow, you usually had to adapt the answers to your specific scenario, and there often weren’t answers for more esoteric things.
queenkjuul•3h ago
Based on my usage of Claude Code, i would not trust it with anything so major.

My problem with it is that it produces _good looking_ code that, at a glance, looks 'correct', and occasionally even works. But then i look at it closely, and it's actually bad code, or has written unnecessary additional code that isn't doing anything, or has broken some other section of the app, etc.

So if you don't know enough C# to tell whether the C# it's spitting out is good or not, you're going to have a bad time

aledalgrande•35m ago
Have it generate the code. Then have another instance criticize the code and say how it could be improved and why. Then ask questions to this instance about things you don't know or understand. Ask for links. Read the links. Take notes. Internalize.

One day I was fighting Claude on some core Ruby method and it was not agreeing with me about it, so I went to check the actual docs. It was right. I have been using Ruby since 2009.

jansan•6h ago
A lot of what the author achieved with Claude Code is migrating or refactoring code. To me, having started using Claude Code just two weeks ago, this seems to be one of its real strengths at the moment. We have a large business app that uses an abandoned component library and contains a lot of cruft. Migrating to another component library seemed next to impossible, but with Claude Code the whole process took me just about one week. It makes mistakes (non-matching tags, for example), but with some human oversight we reached the first goal. The next goal is removing as much cruft as possible, so working on the app becomes possible or even fun again.

I remember when JetBrains made programming so much easier with their refactoring tools in IntelliJ IDEA. To me (with very limited AI experience) this seems to be a similar step, but bigger.

zkry•6h ago
On the other hand, automated refactorings like those in IntelliJ scale practically infinitely, are extremely low cost, and are guaranteed to never make any mistakes.

Not saying this is more useful per se, just saying that different approaches have their pros and cons.

bongodongobob•1h ago
I tried out Claude for the first time today. I have a giant powershell script that has evolved over the years, doing a bunch of different stuff. I've been meaning to refactor it for a long time, but it's such a tangled mess that every time I try, I give up fairly quickly. GPT has not been able to split it into separate modules successfully. Today I tried Claude and it refactored it into a beautifully separated collections of modules in about 30 minutes. I am extremely impressed.
slackpad•6h ago
Really agree with the author's thoughts on maintenance here. I've run into a ton of cases where I would have written a TODO or made a ticket to capture some refactoring and instead just knocked it out right then with Claude. I've also used Claude to quickly try out a refactoring idea and then abandoned it because I didn't like how it came out. It really lowers the activation energy for these kinds of maintenance things.

Letting Claude rest was a great point in the article, too. I easily get manifold value compared to what I pay, so I haven't got it grinding on its own on a bunch of things in parallel and offline. I think it could quickly be an accelerator for burnout and cruft if you aren't careful, so I keep to a supervised-by-human mode.

Wrote up some more thoughts a few weeks ago at https://www.modulecollective.com/posts/agent-assisted-coding....

delduca•5h ago
My opinion on Claude as ChatGPT user.

It feels like ChatGPT on cocaine. I mean, I asked for a small change and it came back with 5 solutions, changing my whole codebase.

iamsaitam•5h ago
I'm NOT saying it is, but without regulatory agencies having a look, or it being open source, this might well be working as intended, since Anthropic makes more money out of it.
crop_rotation•4h ago
Is this opinion on claude code or claude the model?
stavros•4h ago
Was it Sonnet or Opus? I've found that Sonnet will just change a few small things, Opus will go and do big bang changes.

YMMV, though, maybe it's the way I was prompting it. Try using Plan Mode and having it only make small changes.

Applejinx•3h ago
Is this the one that goes 'Oh no, I accidentally your whole codebase, I suck, I accept my punishment and you should never trust me again' or is that a different one?

I seem to remember the 'oh no I suck' one comes out of Microsoft's programmer world? It seems like that must be a tough environment for coders if such feelings run so close to the surface that the LLMs default to it.

MuffinFlavored•5h ago
I think Claude Code is great, but I really grew accustomed to the "Cursor-tab tab tab" autocomplete style. A little perplexed why the Claude Code integration into VS Code doesn't add something like this? It would make it the perfect product to me. Surprised more people do not talk about this/it isn't a more commonly requested feature.
infecto•5h ago
Agree. I used Claude Code a bit and enjoyed it, but I also felt like I was too disconnected from the changes. I guess too much vibe coding?

Cursor is a nice balance for me still. I am automating a lot of the writing but it’s still bite size pieces that feel easier to review.

robbomacrae•4h ago
With these agentic coders you can have better conversations about the code. My favorite use case with CC is, after a day of coding, asking it for a thorough review of the changes, a file, or even the whole project: setting it to work when I go off to bed, having it rank issues and even propose a fix for the most important ones. If you get the prompt right and enable permissions, it can work for quite a long time independently.
k9294•21m ago
I just use Claude Code in Cursor's terminal (both a hotkey away, very convenient). I haven't used Cursor chat in two months, but tab autocomplete is too good; definitely worth the $20.
jeswin•5h ago
Claude Code is ahead of anything else, in a very noticeable way. (I've been writing my own cli tooling for AI codegen from 2023 - and in that journey I've tried most of the options out there. It has been a big part of my work - so that's how I know.)

I agree with many things that the author is doing:

1. Monorepos can save time

2. Start with a good spec. Spend enough time on the spec. You can get AI to write most of the spec for you, if you provide a good outline.

3. Make sure you have tests from the beginning. This is the most important part. Tests (along with good specs) are how an AI agent can recurse into a good solution. TDD is back.

4. Types help (a lot!). Linters help as well. These are guard rails.

5. Put external documentation inside project docs, for example in docs/external-deps.

6. And finally, like every tool it takes time to figure out a technique that works best for you. It's arguably easier than it was (especially with Claude Code), but there's still stuff to learn. Everyone I know has a slightly different workflow - so it's a bit like coding.
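As a miniature illustration of points 2 and 3 ("stubbed signatures plus tests from the beginning"), here's a hypothetical sketch in Python. The function and its spec are invented, not from the author's projects; the point is that the contract and tests exist before the agent touches the body:

```python
# Stubbed API signature derived from the spec: the contract is pinned
# down first, then the agent (or you) fills in the body and iterates
# until the tests pass.
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounding down."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# Tests written against the spec, not against any implementation.
# These act as the guard rail the agent recurses against.
def test_apply_discount():
    assert apply_discount(1000, 25) == 750
    assert apply_discount(999, 10) == 899   # floor of 899.1
    assert apply_discount(500, 0) == 500
    try:
        apply_discount(100, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
print("spec tests pass")
```

With types on the signature and a linter in the loop, this is the shape of the guard rails described in points 3 and 4.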

I vibe coded quite a lot this week. Among the results is Permiso [1], a super simple GraphQL RBAC server. It's nowhere close to being well tested and reviewed, but it can already be quite useful if you want something simple (and can wait until it's reviewed).

[1]: https://github.com/codespin-ai/permiso

unshavedyak•5h ago
> 2. Start with a good spec. Spend enough time on the spec. You can get AI to write most of the spec for you, if you provide a good outline.

Curious how you outline the spec, concretely. A sister markdown document? How detailed is it? etc.

> 3. Make sure you have tests from the beginning. This is the most important part. Tests (along with good specs) are how an AI agent can recurse into a good solution. TDD is back.

Ironically, I've been struggling with this. I've found Claude to do best with a test hook, but then it loses the ability to write tests before the code works to validate bugs/assumptions; it just starts auto-fixing things and can get a bit wonky.

The hook helps immensely to ensure it doesn't forget or abandon anything, but it's equally harmful at certain design/prototype stages. I've taken to having a flag where I can enable/disable the test behavior, lol.

jeswin•5h ago
> Curious how you outline the spec, concretely. A sister markdown document? How detailed is it? etc.

Yes. I write the outline in markdown and then get AI to flesh it out. Then I generate a project structure with stubbed API signatures. Then I keep refining until I've achieved a good level of detail, including full API signatures and database schemas.

> Ironically, I've been struggling with this. I've found Claude to do best with a test hook, but then it loses the ability to write tests before the code works to validate bugs/assumptions; it just starts auto-fixing things and can get a bit wonky.

I generate a somewhat basic prototype first. At which point I have a good spec, and a good project structure, API and db schemas. Then continuously refine the tests and code. Like I was saying, types and linting are also very helpful.

rane•2h ago
I don't even write the outline myself. I tell CC to come up with a plan, and then we iterate on that together with CC and I might also give it to Gemini for review and tell CC to apply Gemini's suggestions.
skydhash•1h ago
What kind of projects are more suitable for this approach? Because my workflow, sans LLM agents, has been to rely on frameworks to provide a base abstraction for me to build upon. The hardest part is to nail down the business domain, done over rounds of discussions with stakeholders. Coding is pretty breezy in comparison.
swader999•3h ago
Playwright is such a chore with Claude, but I'm afraid to live without it. Every feature seems to involve about 70% of the time spent fixing its Playwright mess. It struggles with running the tests, basic data setup and cleanup, auth, and just basic best practices. I have a testing guide that outlines all this, but it half-asses everything.
nico•5h ago
Agreed, for CC to work well, it needs quite a bit of structure

I’ve been working on a Django project with good tests, types and documentation. CC mostly does great, even if it needs guidance from time to time

Recently also started a side project to try to run CC offline with local models. Got a decent first version running with the help of ChatGPT, then decided to switch to CC. CC has been constantly trying to avoid solving the most important issues, sidestepping errors and for almost everything just creating a new file/script with a different approach (instead of fixing or refactoring the current code)

wenc•1h ago
I've also found that structure is key instead of trusting its streams of consciousness.

For unit testing, I actually pre-write some tests so it can learn what structure I'm looking for. I go as far as to write mocks and test classes that *constrain* what it can do.

With constraints, it does a much better job than if it were just starting from scratch and improvising.

There's a numerical optimization analogy to this: if you just ask a solver to optimize a complicated nonlinear (nonconvex) function, you will likely get stuck or hit a local optimum. But if you carefully constrain its search space, and guide it, you increase your chances of getting to the optimum.

LLMs are essentially large function evaluators with a huge search space. The more you can herd it (like herding a flock into the right pen), the better it will converge.
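As a rough Python illustration of that constraining idea (all names here are hypothetical, not from the poster's codebase), a pre-written test and a spec'd mock can pin down exactly which calls generated code is allowed to make:

```python
from unittest import mock

# Pre-written mock: the fake gateway pins down which calls the
# implementation may make. spec=["charge"] means only .charge() exists;
# any stray call like gateway.refund() raises AttributeError, so the
# model can't invent APIs.
def make_fake_gateway():
    gateway = mock.Mock(spec=["charge"])
    gateway.charge.return_value = {"status": "ok"}
    return gateway

# Stand-in for whatever the agent writes against this constraint.
def pay_invoice(gateway, invoice_id: str, amount_cents: int):
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(invoice_id, amount_cents)

# Pre-written test: generated code must pass through this narrow gate.
gateway = make_fake_gateway()
result = pay_invoice(gateway, "inv-1", 1250)
assert result == {"status": "ok"}
gateway.charge.assert_called_once_with("inv-1", 1250)
print("constrained test passes")
```

In the optimization analogy, the mock's `spec` and the call assertions are the constraints that shrink the search space the model is improvising in.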

arevno•3h ago
> 1. Monorepos can save time

Yes, they can save you some time, but at the cost of Claude's time and lots of tokens spent on tool calls attempting to find what it needs. Aider is much nicer, from the standpoint that you can add the files you need it to know about, and send it off to do its thing.

I still don't understand why Claude is more popular than Aider, which is by nearly every measure a better tool, and can use whatever LLM is more appropriate for the task at hand.

KronisLV•3h ago
> Aider is much nicer, from the standpoint that you can add the files you need it to know about, and send it off to do its thing.

As a user, I don't want to sit there specifying about 15-30 files, then realize that I've missed some and that it ruins everything. I want to just point the tool at the codebase and tell it: "Go do X. Look at the current implementation and patterns, as well as the tests, alongside the docs. Update everything as needed along the way, here's how you run the tests..."

Indexing the whole codebase into Qdrant might also help a little.

macNchz•2h ago
I think it makes sense to want that, but at least for me personally I’ve had dramatically better overall results when manually managing the context in Aider than letting Claude Code try to figure out for itself what it needs.

It can be annoying, but I think it both helps me be more aware of what’s being changed (vs just seeing a big diff after a while), and lends itself to working on smaller subtasks that are more likely to work on the first try.

rane•2h ago
You get much better results in CC as well if you're able to give the relevant files as a starting point. In that regard these two tools are not all that different.
aledalgrande•51m ago
> Aider is much nicer, from the standpoint that you can add the files you need it to know about, and send it off to do its thing.

Use /add-dir in Claude

qaq•5h ago
Another really nice use case is building very sophisticated test tooling. Normally a company might not allocate enough resources to a task like that, but with Claude Code it's a no-brainer. It can also create very sophisticated mocks, like, say, a db mock that can parse all the queries in the codebase and apply them to in-memory fake tables. That would be a total pain to build and maintain by hand, but with Claude Code it takes literally minutes.
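This isn't the poster's actual tooling, but one low-effort way to approximate "in-memory fake tables" in Python is to back the mock with sqlite's `:memory:` mode, so real SQL parses and runs with no server and no disk (schema and data below are illustrative):

```python
import sqlite3

def make_fake_db():
    """Build an in-memory database that answers real SQL instantly."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                     [(1, "ada"), (2, "grace")])
    return conn

# Code under test issues ordinary parameterized queries against the fake.
db = make_fake_db()
row = db.execute("SELECT name FROM users WHERE id = ?", (2,)).fetchone()
print(row)  # -> ('grace',)
count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # -> 2
```

A hand-rolled query parser over dict-backed tables (closer to what the comment describes) buys more control but is exactly the kind of maintenance burden being delegated to the agent here.
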
airstrike•5h ago
In my experience LLMs are notoriously bad at tests, so this is, to me, one of the worst use cases possible.
qaq•5h ago
In my experience they are great for test tooling. For actual tests, after I have covered a number of cases, it's very workable to tell it to identify gaps and edge cases and propose tests; then I'd say I accept about 70% of its suggestions.
lopatin•2h ago
While people's experience with LLMs is pretty varied and subjective, saying they're bad at writing tests just isn't true. Claude Code is incredible at writing tests and testing infrastructure.
danielbln•46m ago
It's worth mentioning that one should tell CC not to overmock, and to produce only truly relevant tests. I use an agent that I invoke to spot this stuff, because I've run into some truly awful overmocked non-tests before.
skydhash•1h ago
Hill I’m willing to die on (metaphorically):

If your test structure is a pain to interact with, that usually means some bad decisions somewhere in the architecture of your project.

qaq•5h ago
For me the real limit is the amount of code I can read and lucidly understand to spot issues in a given day.
acedTrex•4h ago
I try to use Claude Code a lot, but I keep getting very frustrated with how slow it is and how often it does things wrong. It does not feel like it's saving me any mental energy on most tasks. I do gravitate towards it for some things, but then I am sometimes burned on doing that, and it's not pleasant.

For example, last week I decided to play with nushell. I have a somewhat simple .zshrc, so I just gave it to Claude and asked it to convert it to nushell. The nu it generated was for the most part not even valid; I spent 30 minutes with it and it never worked. It took me about 10 minutes in the docs to convert it myself.

So it's miserable experiences like that that make me want to never touch it, because I might get burned again. There are certainly things that I have found value in, but it's so hit or miss that I just find myself not wanting to bother.

azuanrb•3h ago
Have you tried the context7 MCP? For things that are not mainstream (unlike JavaScript or TypeScript in popularity), LLMs might struggle. I usually have better results using something like context7, where it can pull up more relevant, up-to-date examples.
queenkjuul•1h ago
This is basically my experience with it. I thought it'd be great for writing tests, but every single time, no matter how much coaxing, I end up rewriting the whole thing myself anyway. Asking it for help debugging has not yet yielded good results for me.

For extremely simple stuff, it can be useful. I'll have it parse a command's output into JSON or CSV when I'm too lazy to do it myself, or scaffold an empty new project (but how often am I doing that?). I've also found it good at porting simple code from, say, Python to JavaScript, or TypeScript to Go.

But the negative experiences really far outweigh the good, for me.

searls•4h ago
I appreciate that Orta linked to my "Full-breadth Developers" post here, for two reasons:

1. I am vain and having people link to my stuff fills the void in my broken soul

2. He REALLY put in the legwork to document in a concrete way what it looks like for these tools to enable someone to move up a level of abstraction. The iron triangle has always been Quality, Scope, Time. This innovation is such an accelerant that ambitious programmers can now imagine game-changing increases in scope without sacrificing quality, and in the same amount of time.

For this particular moment we're in, I think this post will serve as a great artifact of what it felt like.

lherron•3h ago
A few years ago the SRE crowd went through a toil automation phase. SWEs are now gaining the tools to do the same.
esafak•3h ago
Coding agents are empowering, but it is not well appreciated that they are setting a new baseline. It will soon not be considered impressive to do all the things that the author did, but expected. And you will not work less but the same hours -- or more, if you don't use agents.

Despite this, I think agents are a very welcome new weapon.

lukaslalinsky•3h ago
I have about two weeks of using Claude Code and to be honest, as a vibe coding skeptic, I was amazed. It has a learning curve. You need to learn how to give it proper context, how to chunk up the work, etc. And you need to know how to program, obviously. Asking it to do something you don't know how to do, that's just asking for a disaster. I have more than 25 years of experience, so I'm confident with anything Claude Code will try to do and can review it, or stop and redirect it. About 10-15 years ago, I was dreaming about some kind of neural interface, where I could program without writing any code. And I realized that with Claude Code, it's kind of here.

A couple of times I hit the daily limits and decided to try Gemini CLI with the 2.5 pro model as a replacement. That's not even comparable to Claude Code. The frustration with Gemini is just not worth it.

I couldn't imagine paying more than $100/month for a dev tool in the past, but I'm seriously considering upgrading to the Max plans.

MarkMarine•3h ago
Claude code is great until it isn’t. You’re going to get to a point where you need to modify something or add something… a small feature that would have been easy if you wrote everything, and now it’s impossible because the architecture is just a mishmash of vibe coded stuff you don’t understand.
lukaslalinsky•3h ago
How can you end up with code you don't understand, if you review anything it writes? I wouldn't let it deviate from the architecture I want to have for the project. I had problems with junior devs in the past, too eager to change a project, and I couldn't really tell them to stop (need to work on my communication skills). No such problem with Claude Code.
tronikel•3h ago
So you're telling me that reading is the same as writing? In terms of the brain actually consuming and processing the info you gave it, and storing it?
muspimerol•2h ago
> a mishmash of vibe coded stuff you don’t understand.

No, there is a difference between "I wrote this code" and "I understand this code". You don't need to write all the code in a project to understand it. Otherwise writing software in a team would not be a viable undertaking.

ruszki•3h ago
I don’t remember what architecture was used by PRs I reviewed a month ago. I remember what architecture I designed 15 years ago for projects I was part of.
everforward•18m ago
I've only used the agentic tools a bit, but I've found that they're able to generate code at a velocity that I struggle to keep in my head. The development loop also doesn't require me to interact with the code as much, so I have worse retention of things like which functions are in which file, what helper functions already exist, etc.

It's less that I can't understand, and more that my context on the code is very weak.

jshen•3h ago
The trick is to ask it to do more narrow tasks and design the structure of the code base yourself.
hkt•2h ago
This. It helps to tell it to plan and to then interrogate it about that plan, change it to specification etc. Think of it as a refinement session before a pairing session. The results are considerably better if you do it this way. I've written kubernetes operators, flask applications, Kivy applications, and a transparent ssh proxy with Claude in the last two months, all outside of work.

It also helps to tell it to write tests first: I lean towards integration tests for most things but it is decent at writing good unit tests etc too. Obviously, review is paramount if TDD is going to work.

tiahura•1h ago
As a hobbyist coder, the more time I spend brainstorming with all the platforms about specs and tests and architecture, the better the ultimate results.
skippyboxedhero•2h ago
Yes, its default when it does anything is to try and create. It will read my CLAUDE.md file, it will read the code that is already there, and then it will try to write it again anyway. I have had this happen many times (today, I had to prompt 5 or 6 times to get it to read the file, as the feature had already been implemented).

...and if something is genuinely complex, it will (imo) generally do a bad job. It will produce something that looks like it works superficially, but as you examine it will either not work in a non-obvious way or be poorly designed.

Still very useful but to really improve your productivity you have to understand when not to use it.

bboygravity•2h ago
How do you write complex code as a human? You create abstraction layers, right?

Why wouldn't that work with an llm? It takes effort, sure, but it certainly also takes effort if you have to do it "by hand"?

sidjxjxbnxkxkkx•1h ago
English is much less expressive compared to code. Typing the keys was never the slow part for senior developers.

It does work with an LLM, but you’re reinventing the wheel with these crazy markup files. We created a family of languages to express how to move bits around, and replacing that with English is silly.

Vibe coding is fast because you’re OK with not thinking about the code. Anytime you do have to think about it, an LLM is not going to be much faster.

skippyboxedhero•1h ago
Because it creates the wrong layers.

In theory, there is no reason why this is the case. For the same reason, there is no reason why juniors can't create perfect code first time...it is just the tickets are never detailed enough?

But in reality, it doesn't work like that. The code is just bad.

lukaslalinsky•42m ago
You are responsible for the layers. You should either do the design on your own, or let the tool ask you questions and guide you. But you should have it write down the plan, and only then you let it code. If it messes up the code, you /clear, load the plan again and tell it to do the code differently.

It's really the same with junior devs. I wouldn't tell a junior dev to implement a CRM app, but I can tell the junior dev to add a second email field to the customer management page.

cmrdporcupine•2h ago
You're not setting good enough boundaries or reviewing what it's doing closely enough.

Police it, and give it explicit instructions.

Then after it's done its work, prompt it with something like: "You're the staff engineer or team lead on this project, and I want you to go over your own git diff like it's a contribution from a junior team member. Think critically and apply judgement based on the architecture of the project as described in @HERE.md and @THERE.md."

GiorgioG•2h ago
Ah yes…the old “you’re holding it wrong”. The problem is these goddamn things don’t learn, so you put in the effort to police it…and you have to keep doing that until the end of time. Better off training someone off the street to be a software engineer.
hkt•2h ago
Not so. Adding to context files helps enormously. Having touchstone files (ARCHITECTURE.md) you can reference helps enormously. The trick is to steer, and create the guardrails.

Honestly, it feels like DevOps had a kid with Product.

cmrdporcupine•1h ago
It's just a tool, not an intelligence or a person.

You use it to make your job easier. If it doesn't make your job easier, you don't use it.

Anybody trying to sell you on a bill of goods that this is somehow "automating away engineers" and "replacing expensive software developers" is either stupid or lying (or both).

I find it incredibly useful, but it's garbage-in, garbage-out just like anything else with computers. If your code base is well commented and documented and laid out in a consistent pattern, it will tend to follow that pattern, especially if it follows standards. And it does better in languages (like Rust) that have strict type systems and coding standards.

Even better if you have rigorous tests for it to check its own work against.

MattGaiser•1h ago
They don't learn by themselves, but you can add instructions as they make mistakes, which is effectively them learning. You have to write code review feedback for juniors, so that's not an appreciable difference.

> Better off training someone off the street to be a software engineer.

And that person is going to quit and you have to start all over again. They also cost at least 100x the price.

quentindemetz•1h ago
The world is not a zero-sum game.
rane•2h ago
Having used Claude Code extensively for the last few months, I still haven't reached this "until it isn't" point. Review the code that comes out. It goes a long way.
risyachka•1h ago
>> Review the code that comes out. It goes a long way.

Sure, but if I do 5 reviews for a task, in 99% of cases it's a net negative, as it would have been faster to DIY it at that point. Harder for sure, but faster.

rane•1h ago
Maybe our brains are wired different but reading and reviewing code is way faster for me than writing it.
wredcoll•13m ago
There's very few objective ways to measure review 'performance'.

Coding is easy to measure: it works or it doesn't.

Aurornis•2h ago
The people successfully using Claude Code for big projects aren’t letting it get to the point where they don’t understand what it wrote.

The best results come from working iteratively with it. I reject about 1/3 of edits to request some changes or a change of direction.

If you just try to have it jam on code until the end result appears to work then you will be disappointed. But that’s operator error.

danielbln•1h ago
So far I'm bullish on subagents to help with that: validating completion status, bullshit detection, catching over-engineering, etc. I can load them with extra context like conventions and specific prompts to clamp down on the Claude-isms during development.
baq•3h ago
Fascinating, since I found the recent Claude models untrustworthy for writing and editing SQL. E.g. they'd write conditions correctly but not add parens around ANDs and ORs (which Gemini Pro then correctly highlighted as a bug).
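A minimal illustration of that precedence bug, sketched in Python since its `and`/`or` follow the same rule as SQL (AND binds tighter than OR):

```python
# In both SQL and Python, AND binds tighter than OR, so dropping
# parentheses silently changes the condition's meaning.
a, b, c = True, True, False

unparenthesized = a or b and c   # parsed as: a or (b and c)
intended = (a or b) and c        # what the author actually meant

print(unparenthesized, intended)  # -> True False
```

The two expressions disagree on the same inputs, which is exactly the kind of bug a second model can catch in review.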
risyachka•1h ago
I bet it highly depends on the work you do.

It is very useful for simpler tasks like writing tests, converting code bases etc where the hard part is already done.

When it comes to actually doing something hard - it is not very useful at least in my experience.

And if you do something even a bit niche, it is mostly useless, and it's faster to dig into the topic on your own than to try to have Claude implement it.

danielbln•1h ago
Even when I hand-roll certain things, it's still nice to have Claude Code take over any other grunt work that might come my way. And there are always yaks to shave, always.
CharlesW•1h ago
If you aren't already (1) telling Claude Code which flavor of SQL you want (there are several major dialects and many more minor ones) and (2) giving it access to up-to-date documentation via MCP (e.g. https://github.com/arabold/docs-mcp-server) so it has direct access to canonical docs for authoritative grounding and syntax references, you'll find that you get much better results by doing one or both of those things.
impure-aqua•24m ago
Documentation on features your SQL dialect supports and key requirements for your query are very important for incentivizing it to generate the output you want.

As a recent example, I am working on a Rust app with integrated DuckDB, and asked it to implement a scoring algorithm query (after chatting with it to generate a Markdown file "RFC" describing how the algorithm works.) It started the implementation with an absolute minimal SQL query that pulled all metrics for a given time window.

I questioned this rather than accepting the change, and it said its plan was to implement the more complex aggregation logic in Rust because 1) it's easier to interpret Rust branching logic than SQL statements (true) and 2) because not all SQL dialects include EXP(), STDDEV(), VAR() support which would be necessary to compute the metrics.

The former point actually seems like quite a reasonable bias to me, personally I find it harder to review complex aggregations in SQL than mentally traversing the path of data through a bunch of branches. But if you are familiar with DuckDB you know that 1) it does support these features and 2) the OLAP efficiency of DuckDB makes it a better choice for doing these aggregations in a performant way than iterating through the results in Rust, so the initial generated output is suboptimal.

I informed it of DuckDB's support for these operations and pointed out the performance consideration and it gladly generated the (long and certainly harder to interpret) SQL query, so it is clearly quite capable, just needs some prodding to go in the right direction.

UltraSane•5m ago
Claude Sonnet 4 is very good at generating Cypher queries for Neo4j
asdev•2h ago
I feel like Cursor gives the same experience without having to be in the terminal. I don't see how Claude Code is so much better
maouida•2h ago
I use neovim so claude code makes more sense to me. I think having the code agent independent from the code editor is a plus.
mattmanser•2h ago
I went on a bit of a YouTube frenzy last weekend, getting an overview of agentic tools.

A lot of people who have used both are saying that Cursor is much worse than Claude Code.

CharlesW•1h ago
Having spent a couple of weeks putting both AIDE-centric (Cursor, Windsurf) and CLI-centric (Claude Code, OpenAI Codex, Gemini CLI) options through real-world tasks, Cursor was one of the least effective tools for me. I ultimately settled on Claude Code and am very happy with it.
danielbln•1h ago
I realized Claude Code is the abstraction level I want to work at. Cursor et al. still stick me way down into the code muck, when really I only want to see the code during review. It's an implementation detail that I still have to review because it makes mistakes, even when guided perfectly, but otherwise I want to think in interfaces, architecture, components. The low-level code? Don't care. Is it up to spec and conventions, does it work? Good enough for me.
Aurornis•2h ago
I haven’t found massive performance differences between tools that use the same underlying LLM.

The benefit of Claude Code is that you can pay a fixed monthly fee and get a lot more than you would with API requests alone.

fuzzzerd•14m ago
That has not been my experience; Copilot using Claude is way different than Claude Code for me. Anecdotal and "vibes"-based, but it's what I've been experiencing.
MattGaiser•1h ago
I put Cursor as 4th of the tools I have tried. Claude Code, Junie, and Copilot all do work that I find much more acceptable.
lukaslalinsky•1h ago
I really don't know what it is, but Claude Code just seems like an extremely well-tuned package. You can have the same core models, but the internal prompts matter, how they look up extra context matters, how easy it is to add external context matters, how it applies changes matters, how eager it is to actually use an external tool to help you matters. With Claude Code, it just feels right. When I say I want a review, I get a review; when I want code, I get code; when I want just some git housekeeping, I get that.
mapme•40m ago
The native tool use is a game changer. When I ask it to debug something it can independently add debug logging to a method, run the tests, collect the output, and code based off that until the tests are fixed.
Aurornis•2h ago
> as a vibe coding skeptic, I was amazed.

The interesting thing about all of this vibe coding skepticism, cynicism, and backlash is that many people have their expectations set extremely low. They’re convinced everything the tools produce will be junk or that the worst case examples people provide are representative of the average.

Then they finally go out and use the tools and realize that they exceed their (extremely low) expectations, and are amazed.

Yeah we all know Claude Code isn’t going to generate a $10 billion SaaS with a team of 10 people or whatever the social media engagement bait VCs are pushing this week. However, the tools are more powerful than a lot of people give them credit for.

Freedom2•2h ago
It doesn't help that a lot of skeptics are also dishonest. A few days ago someone here tried to claim that inserting verbose debug logging, something Claude Code would be very good at, is "actually programming" and it's important work for humans to do.

No, Claude can create logs all across my codebase with much better formatting far faster than I can, so I can focus on actual problem solving. It's frustrating, but par for the course for this forum.

Edit: Dishonest isn't correct, I should have said I just disagree with their statements. I do apologize.

shortrounddev2•37m ago
That's not what dishonesty means. That's just someone who disagrees with you
Freedom2•11m ago
Thank you for calling out my inaccuracy. One thing I'm always appreciative for in HN is the level of pedantry day after day.
qsort•1h ago
People are using different definitions of "vibe coding". If you expect to just prompt without even looking at the code and being involved in the process the result will be crap. This doesn't preclude the usefulness of models as tools, and maybe in the future vibe coding will actually work. Essentially every coder I respect has an opinion that is some shade of this.

There are the social media types you mention and their polar opposites, the "LLMs have no possible use" crowd. These people are mostly delusional. At the grown-ups table, there is a spectrum of opinions about the relative usefulness.

It's not contradictory to believe that the average programmer right now has his head buried in the sand and should at least take time to explore what value LLMs can provide, while at the same time taking a more conservative approach when using them to do actual work.

apples_oranges•52m ago
In case some people haven't realized it by now: it’s not just the code, it’s also/mostly the marketing. Unless you make something useful that’s hard to replicate..

I have recently found something that’s needed but very niche and the sort of problem that Claude can only give tips on how to go about it.

shortrounddev2•39m ago
I'm a vibe code skeptic because I don't consider it coding. I assume it can write some decent code, but that's not coding.
exe34•28m ago
Coding can only be done by replicators born out of carbon, I imagine?
indigodaddy•2h ago
Maybe instead try opencode or crush with Gemini/Google auth when your Claude Code hits the limit.
cpursley•1h ago
Gemini is shockingly, embarrassingly, shamefully bad (for something out of a company like Google). Even the open models like Qwen and Kimi are better on opencode.
indigodaddy•58m ago
Ah, I was thinking the Gemini CLI agent itself might be the cause of the problems, thus maybe try the opencode/Gemini combo instead..

I'd like to mess around with "opencode+copilot free-tier auth" or "{opencode|crush}+some model via groq(still free?)" to see what kind of mileage I can get and if it's halfway decent..

skerit•12m ago
In my experience, Gemini is pretty good in multishotting. So just give it a system prompt, some example user/assistant pairs, and it can produce great results!

And this is its biggest weakness for coding. As soon as it makes a single mistake, it's over. It somehow has learned that during this "conversation" it's having, it should make that mistake over and over again. And then it starts saying things like "Wow, I'm really messing up the diff format!"

j45•1h ago
I was the same and switched to the first Max plan. It’s very efficient with token usage for what I’ve been trying so far.
einpoklum•55m ago
What exactly have you written with Claude Code?

I have not tried it, for a variety of reasons, but my (quite limited, anecdotal, and gratis) experience with other such tools is that I can get them to write something I could perhaps get as an answer on StackOverflow: limited scope, limited length, addressing at most one significant issue; perhaps that has to do with what they are trained on. But once things get complicated, it's hopeless.

You said Claude Code was significantly better than some alternatives, so better than what I describe, but - we need to know _on what_.

M4v3R•21m ago
Not with Claude Code but with Cursor using Claude Sonnet 4 I coded an entire tower defense game, title, tutorial, gameplay with several waves of enemies, and a “rewind time” mechanic. The whole thing was basically vibe coded, I touched maybe a couple dozen lines of code. Apparently it wasn’t terrible [0]

[0] https://news.ycombinator.com/item?id=44463967

msikora•35m ago
Just a few months ago I couldn't imagine paying more than $20/mo for any kind of subscription, but here I am paying $200/mo for the Max 20 plan!

Similarly amazed as an experienced dev with 20 YoE (and a fellow Slovak, although US based). The other tools, while helpful, were just not "there" and they were often simply more trouble than they were worth producing a lot of useless garbage. Claude Code is clearly on another level, yes it needs A LOT of handholding; my MO is do Plan Mode until I'm 100% sure it understands the reqs and the planned code changes are reasonable, then let it work, and finally code review what it did (after it auto-fixes things like compiler errors, unit test failures and linting issues). It's kind of like a junior engineer that is a little bit daft but very knowledgeable but works super, super fast and doesn't talk back :)

It is definitely the future, what can I say? This is a clear direction where software development is heading.

giancarlostoro•21m ago
If you are a senior developer who is comfortable giving a junior tips, then guiding them to fix things (or just stepping in for a brief moment and writing where they missed something), this is for you. I'm hearing from senior devs all over, though, that junior developers are just garbage at it. They produce slow, insecure, or just outright awful code with it, and then they PR code they don't even understand.

For me the sweet spot is boilerplate (give me a blueprint of a class based on a description), or translating a JSON for me into a class, or into some other format. Also, "What's wrong with this code? How would a staff-level engineer write it?" questions are useful. I've found bugs before hitting debug by asking what's wrong with the code I just pounded out on my keyboard by hand.

lysecret•2h ago
I have been coding with Claude Code for about 3 weeks and I love it. I have about 10 YoE and mostly do Python ML / data engineering. Here are a few reasons:

1. It takes away the pain of starting. I have no barrier to writing text, but there is a barrier to writing the first line of code, to a large extent coming from just remembering the context, where to import what from, setting up boilerplate, etc.

2. While it works I can use my brain capacity to think about what I'm doing.

3. I can now do multiple things in parallel.

4. It makes it so much easier to "go the extra mile" (I don't add "TODOs" anymore in the code I just spin up a new Claude for it)

5. I can do much more analysis (like spinning up detailed plotting / analysis scripts).

6. It fixes most simple linting/typing/simple test bugs for me automatically.

Overall I feel like this kind of coding allows me to focus on the essence: What should I be doing? Is the output correct? What can we do to make it better?

MattGaiser•2h ago
> 4. It makes it so much easier to "go the extra mile" (I don't add "TODOs" anymore in the code I just spin up a new Claude for it)

This especially. I've never worked at a place that didn't skimp on tests or tech debt due to limited resources. Now you can get a decent test suite just from saying you want it.

Will it satisfy purists? No, but lots of mid hanging fruit long left unpicked can now be automatically picked.

scrollaway•1h ago
Taking away the pain of starting is a big one. It lets me do things I would never have done just because they'd go on the "if only I had time" wish list.

Now literally between prompts, I had a silly idea to write a NYT Connections game in the terminal and three prompts later it was done: https://github.com/jleclanche/connections-tui

alfiedotwtf•2h ago
“Trust, but verify”

AI, but refactor

buffer1337•2h ago
I've been using Claude code 12-16 hours a day since I first got it running two weeks ago. Here's the tips I've discovered:

1. Immediately change to Sonnet (the CLI defaults to Opus for Max users). I tested coding with Opus extensively and it never matched the quality of Sonnet.

2. Compacting often ends progress - it's difficult to get back to the same quality of code after compacting.

3. First prompt is very important and sets the vibe. If your instance of Claude seems hesitant, doubtful, sometimes even rude, it's always better to end the session and start again.

4. There are phrases that make it more effective. Try, "I'm so sorry if this is a bad suggestion, but I want to implement x and y." For whatever reason it makes Claude more eager to help.

5. Monolithic with docker orchestration: I essentially 10x'd when I started letting Claude itself manage docker containers, check their logs for errors, rm them, rebuild them, etc. Now I can get an entirely new service online in a docker container, from zero to operational, in one Claude prompt.

j45•1h ago
Letting Claude manage docker has been really good.

I’m working my way through building a guide for my future self on packaging up existing products, in case I forget in 6 months.

At the same time, frontier models may improve it, make it worse, or leave it the same, and what I’m after is consistency.

achierius•1h ago
> 5. Monolithic with docker orchestration: I essentially 10x'd when I started letting Claude itself manage docker containers, check their logs for errors, rm them, rebuild them, etc. Now I can get an entirely new service online in a docker container, from zero to operational, in one Claude prompt.

This is very interesting. What's your setup, and what kind of prompt might you use to get Claude to work well with Docker? Do you do anything to try and isolate the Claude instance from the rest of your machine (i.e. run these Docker instances inside of a VM) or just YOLO?

slackpad•1h ago
Not the parent but I've totally been doing this, too. I've been using docker compose and Claude seems to understand that fine in terms of scoping everything - it'll run "docker compose logs foo" "docker compose restart bar" etc. I've never tried to isolate it, though I tend to rarely yolo and keep an eye on what it's doing and approve (I also look at the code diffs as it goes). It's allowed to read-only access stuff without asking but everything else I look at.
danielbln•1h ago
I YOLO, and there isn't much guidance needed: it knows how Docker, compose, etc. work, how to get logs, exec in, and so on.

"bring this online in my local docker" will get you a running service, specify further as much as you like.

kachapopopow•1h ago
I had success with creating a VERY detailed plan.md file - down to how all systems connect together, letting claude-loop[1] run while I sleep and coming back in the morning manually patching it up.

[1]: https://github.com/DeprecatedLuke/claude-loop

turnsout•1h ago
I'm fascinated by #5. As someone who goes out of my way to avoid Docker while realizing its importance, I would love to know the general format of your prompt.
buffer1337•1h ago
It's the difference between Claude making code that "looks good" and code that actually runs. You don't have to be stuck anymore saying, "Hey, help me fix this code." Say, "Use tmux to create a persistent session, then run this Python program there and debug it until it's working perfectly."
aledalgrande•57m ago
5. It's not just Docker: give it the Playwright MCP server so it can see what it is implementing in the UI, plus the requests.

6. start in plan mode and iterate on the plan until you're happy

7. use slash commands, they are mini prompts you can keep refining over time, including providing starting context and reminding it that it can use tools like gh to interact with Github

not sure I agree on 1.

2. compact when you are at a good stop, not when you are forced to because you are at 0%
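For reference, custom slash commands in Claude Code are just Markdown prompt files under `.claude/commands/` that you invoke by filename. A hypothetical example (file name and contents are illustrative, not from the comment above):

```markdown
<!-- .claude/commands/review.md -->
Review the current git diff against this project's conventions:

1. Flag any deviation from the architecture described in ARCHITECTURE.md.
2. Call out missing or weak tests for changed behavior.
3. Use the `gh` CLI to fetch the open PR's description for extra context, if one exists.
```

Saved this way, it becomes available as `/review` inside a session and can be refined over time, exactly as described.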

danielbln•9m ago
Use agents to validate the code: is it over-engineered, does it conform to conventions and spec, is it actually implemented or half bullshit? I run three of these at the end of a feature or task and they almost always send Opus back to the workbench to fix a bunch of stuff. And since they have their own context, you don't blow up the main context and can go for longer.
arrowsmith•1h ago
The real power of Claude Code comes when you realise it can do far more than just write code.

It can, in fact, control your entire computer. If there's a CLI tool, Claude can run it. If there's not a CLI tool... ask Claude anyway, you might be surprised.

E.g. I've used Claude to crop and resize images, rip MP3s from YouTube videos, trim silence from audio files, the list goes on. It saves me incredible amounts of time.

I don't remember life before it. Never going back.

danielbln•1h ago
It's the automators dream come true. Anything can be automated, anything scripted, anything documented. Even if we're gonna use other (possibly local) models in the future, this will be my interface of choice. It's so powerful.
arrowsmith•1h ago
Yes, Claude has killed XKCD 1319:

https://xkcd.com/1319/

Automation is now trivially easy. I think of another new way to speed up my workflow — e.g. a shell script for some annoying repetitive task — and Claude oneshots it. Productivity gains built from productivity gains.

ta555555•1h ago
It hasn't killed anything. It might have reduced the time for some tasks. Try something non-trivial and you'll still spend more than you save.
arrowsmith•1h ago
Skill issue
tiahura•1h ago
Combine with pywin32 to open up windows.
einpoklum•52m ago
It's not a dream come true to have a bunch of GPUs crunching at full power to achieve your minor automation, with the company making them available losing massive amounts of money on it:

https://www.wheresyoured.at/the-haters-gui/

... while also exposing the contents of your computer to surveillance.

danielbln•12m ago
Well, yes there is that.

I'd like to say I'm praising the paradigm shift more than anything else (and this is to some degree achievable with smaller, open, and sometimes local agentic models), but yes, there are definitely nasty externalities (though burning VC cash is not high up that list for me). I hope some externalities can be optimized away.

But a fair comment.

risyachka•1h ago
Its all great until

>> I thought I would see a pretty drastic change in terms of Pull Requests, Commits and Line of Code merged in the last 6 weeks. I don’t think that holds water though

The chart basically shows the same output with Claude as before. Which kinda matches what I felt when using LLMs.

You "feel" more productive and you definitely feel "better" because you don't do the work now, you babysit the model and feel productive.

But at the end of the day the output is the same, because all the advantages of LLMs are nerfed by the time you have to spend reviewing it all, fixing it, re-prompting it, etc.

And because you offload the "hard" part - and don't flex that thinking muscle - your skills decline pretty fast.

Try using Claude or another LLM for a month and then try building a tiny app without it. It's not only the code that will seem hard, but the general architecture/structuring too.

And in the end the whole codebase slowly (but not that slowly) degrades, and in the longer term the result is a net negative. At least with current LLMs.

holoduke•1h ago
Not for me. I just reverse engineered a Bluetooth protocol for a device, which would have taken me at least a few days of capturing data streams in Wireshark. Instead I dumped entire captures into an LLM, and it gave me much more control in finding the right offsets, etc. It took me only a day.
justlikereddit•1h ago
I've been exploring vibe coding lately and by far the biggest benefit is the lack of mental strain.

You don't have to try to remember your code as a conceptual whole, or hold your technical plan for the next hour of coding in your head, all while a stubborn bug is taunting you.

You just ask Mr. Smartybots and it delivers anything from proofreading to documentation and whatnot, with some minor fuckups occasionally.

risyachka•1h ago
"Mental strain" in the sense of remembering/thinking hard is like muscle strain: you need it to stay in shape, otherwise it starts atrophying.
blackqueeriroh•14m ago
My friend, there’s no solid evidence that this is the case. So far there are a bunch of studies, mostly preprints, that make vague implications, but none that show a clear causal link between the lack of mental strain when using LLMs and atrophying brain function.
fantasizr•1h ago
I learned the hard, old-fashioned way how to build an imagemagick/mogrify command. Having AI tools assist saves a crazy amount of time.
skydhash•1h ago
> If there's a CLI tool, Claude can run it. If there's not a CLI tool... ask Claude anyway, you might be surprised.

No Claude Code needed for that! Just hang around r/unixporn and you'll collect enough scripts and tips to realize that mainstream OSes have turned computers from a useful tool into a consumerist toy.

knowsuchagency•18m ago
That's like saying "you don't need a car, just hang around this bicycle shop long enough and you'll realize you can exercise your way around the town!"
jstummbillig•16m ago
The point is not that a tool maybe exists. The point is: You don't have to care if the tool exists and you don't have to collect anything. Just ask Claude code and it does what you want.

At least that's how I read the comment.

wjnc•1h ago
Over the holidays I built a plan for an app that would be worthwhile to my children, oldest son first. That plan developed to several thousand words of planning documents (MVP, technical stack, layout). That was just me lying in the sun with Claude on mobile.

Today I (not a programmer, although I've been programming for 20+ years, mostly statistics) started building with Claude Code via Pro. Burned through my credits in about 3 hours. Got to MVP (happy tear in my eye). And got one of the best looks I've ever gotten from my son. A look like: wow, Dad, that's more than I'd ever have thought you could manage.

Tips:

- Plan ahead! I've had Claude tell me that a request would fit better way back on the roadmap. My roadmap manages me.

- Force Claude to build a test suite and give debugging info everywhere (backend, frontend).

- Claude and I work together on a clear TODO. It needs guidance just as I do. It forgot a very central feature of my MVP; I don't yet know why. I asked kindly and it was built.

Questions (not specifically to you kind HN-folks, although tips are welcome):

- Why did I burn through my credits in 3 hours?

- How can I force Claude to keep committed to my plans, my CLAUDE.md, etc.

- Is there a way to ask Claude to check the entire project for consistency? And/or should I accept that vibing will leave cruft spread around?
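On the CLAUDE.md question: one pattern people report working is keeping the file short and imperative, since long instruction files tend to get skimmed by the model. A purely hypothetical sketch:

```markdown
# CLAUDE.md

## Before any change
- Read ROADMAP.md and stay within the current milestone.
- Do not add features that are not in TODO.md.

## Conventions
- All new code gets tests; run the test suite before declaring a task done.
- Backend and frontend must log errors with enough context to debug.

## When unsure
- Ask before restructuring files or adding dependencies.
```

Re-stating the most important rules at the top of a session prompt also tends to help when the model drifts.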

isoprophlex•1h ago
Burning through your credits is normal. We're in the "lmao free money"/"corner the market" phase, where Anthropic offers Claude Code at a loss.

Recently they had to lower token allowances because they're haemorrhaging money.

You can run "ccusage" in the background to keep tabs, so you're less surprised, is all I can say.

Enjoy the cheap inference while you can; unless someone cracks the efficiency puzzle, the frontier models might get a lot more expensive at some point.

cflewis•20m ago
Yeah, not going to lie, working at Google and having unlimited access to Gemini sure is nice (even if it may have performance issues vs Claude Code… I can’t say, as I can’t use Claude at work)
floppyd•1h ago
Anybody had similarly good experience with Gemini CLI? I'm only a hobbyist coder, so paying for Claude feels silly when Gemini is free (at least for now), but so far I've only used it inside Cline-like extensions
chaosprint•1h ago
can anyone compare it with cursor?
softwaredoug•1h ago
I think it's possible Claude Code might be the most transformative piece of software since ChatGPT. It's a step towards an AI agent that can actually _act_ at a fundamental level - with any command that can be found on a computer - in a way that's beyond the sandboxed ChatGPT or even just driving a browser.
fdsf111•59m ago
I recently tried a 7-day trial version of Claude Code. I had 3 distinct experiences with it: one obviously positive, one bad, and one neutral-but-trending-positive.

The bad experience was asking it to produce a relatively non-trivial feature in an existing Python module.

I have a bunch of classes for writing PDF files. Each class corresponds to a page template in a document (TitlePage, StatisticsPage, etc). Under the hood these classes use functions like `draw_title(x, y, title)` or `draw_table(x, y, data)`. One of these tables needed to be split across multiple pages if the number of rows exceeded the page space. So I needed Claude Code to do some sort of recursive top-level driver that would add new pages to a document until it exhausted the input data.

I spent about an hour coaching Claude through the feature, and in the end it produced something that looked superficially correct but didn't run. After spending some time debugging, I moved on and wrote the thing by hand. This feature was not trivial even for me to implement, and it took about 2 days. It broke the existing pattern in the module: the module was designed around the idea that `one data container = one page`, so splitting data across multiple pages was a new pattern the rest of the module needed to be adapted to. I think that's why Claude did not do well.
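For what it's worth, the chunking half of the driver described above is only a few lines; the hard part, as noted, is retrofitting the one-container-one-page design around it. A sketch under assumed names (`add_page`, the row count, and the document API are all hypothetical):

```python
def paginate_rows(rows, rows_per_page):
    """Split a table's rows into page-sized chunks; each chunk becomes one page."""
    return [rows[i:i + rows_per_page] for i in range(0, len(rows), rows_per_page)]

def build_statistics_pages(doc, rows, rows_per_page=30):
    """Hypothetical top-level driver: keep adding pages until the data is exhausted."""
    for chunk in paginate_rows(rows, rows_per_page):
        page = doc.add_page()          # assumed document API
        page.draw_table(0, 0, chunk)   # draw_table(x, y, data) per the comment
```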

+++

The obviously good experience with Claude was getting it to add new tests to a well-structured suite of integration tests. Adding tests to this module is a boring chore, because most of the effort goes into setting up the input data. The pattern in the test suite is something like this: an IntegrationTestParent class that contains all the test logic, and a bunch of IntegrationTestA/B/C/D classes that do data setup and then call the parent's test method.

Claude knocked this one out of the park. There was a clear pattern to follow, and it produced code that was perfect. It saved me 1 or 2 hours, but the cool part was that it was doing this in its own terminal window, while I worked on something else. This is a type of simple task I'd give to new engineers to expose them to existing patterns.
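The parent/child pattern described here is roughly the standard unittest inheritance trick; a minimal sketch (the pipeline and data are made up):

```python
import unittest

class IntegrationTestParent(unittest.TestCase):
    """All the test logic lives here; subclasses only set up data."""
    input_data = None
    expected = None

    def run_pipeline(self, data):
        return sorted(data)  # stand-in for the real system under test

    def check_pipeline(self):
        self.assertEqual(self.run_pipeline(self.input_data), self.expected)

class IntegrationTestA(IntegrationTestParent):
    def test_pipeline(self):
        self.input_data = [3, 1, 2]   # the boring data-setup chore
        self.expected = [1, 2, 3]
        self.check_pipeline()
```

This is what makes the task easy to delegate: each new IntegrationTestB is pure pattern-following.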

+++

The last experience was asking it to write a small CLI tool from scratch in a language I don't know. The tool worked like this: you point it at a directory, and it checks that the directory contains 5 or 6 files, that the files are named a certain way, and that they are formatted a certain way. If files are missing or badly formatted, it throws an error.

The tool was for another team to use, so they could check these files before forwarding them to me. So I needed an executable binary that I could throw up on Dropbox or something, which the other team could just download and use. I primarily code in Python/JavaScript, and making a shareable tool like that with an interpreted language is a pain.

So I had Claude whip something up in Golang. It took about 2 hours, and the tool worked as advertised. Claude was very helpful.
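The commenter's tool was in Go; the validation core translates to a few lines in any language. A Python sketch of the same idea (the file names and naming convention are hypothetical):

```python
from pathlib import Path

# Hypothetical list of files the other team must provide.
REQUIRED = ["report_a.csv", "report_b.csv"]

def check_dir(dir_path, required=REQUIRED):
    """Return a list of problems: missing files, or names breaking the convention."""
    problems = []
    for name in required:
        path = Path(dir_path) / name
        if not path.is_file():
            problems.append(f"missing: {name}")
        elif path.suffix != ".csv":
            problems.append(f"bad extension: {name}")
    return problems
```

A real CLI would wrap this in argument parsing and exit non-zero when `problems` is non-empty; Go's advantage is purely the single static binary for distribution.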

On the one hand, this was a clear win for Claude. On the other hand, I didn't learn anything. I want to learn Go, and I can't say that I learned any Go from the experience. Next time I have to code a tool like that, I think I'll just write it from scratch myself, so I learn something.

+++

Eh. I've been using "AI" tools since they came out. I was the first at my company to get the pre-LLM Copilot autocomplete, and when ChatGPT became available I became a heavy user overnight. I have tried out Cursor (hate the VSCode nature of it), and I tried out the re-branded Copilot. Now I have tried Claude Code.

I am not an "AI" skeptic, but I still don't get the foaming hype. I feel like these tools at best make me 1.5X -- which is a lot, so I will always stay on top of new tooling -- but I don't feel like I am about to be replaced.

mr_tox•54m ago
I don't know if it's something only I "perceive," but as a 50-year-old who started learning to use computers from the command line, using Claude Code's CLI mode gives me a unique sense of satisfaction.
mirkodrummer•41m ago
> Painting by hand just doesn’t have the same appeal anymore when a single concept can just appear and you shape it into the thing you want with your code review and editing skills.

Meanwhile, one of the most anticipated games in the industry, the second chapter of an already acclaimed product, has its art entirely hand-painted.

softwaredoug•34m ago
So far what I've noticed with Claude Code is not _productivity gains_ but _gains in my thoughtfulness_

As in, the former is hyped, but the latter (stopping to ask questions, reflecting on what we should do) is really powerful. I find I'm more thoughtful, doing deeper research and asking deeper questions than if I were just hacking something together on a weekend and regretting it later.

cflewis•22m ago
Agreed. The most unique thing I find with vibecoding is not that it presses all the keyboard buttons. That’s a big timesaver, but it’s not going to make your code “better” as it has no taste. But what it can do is think of far more possibilities than you can far quicker. I love saying “this is what I need to do, show me three to five ways of doing it as snippets, weigh the pros and cons”. Then you pick one and let it go. No more trying the first thing you think of, realizing it sucks after you wrote it, then back to square one.

I use this with legacy code too. “Lines n to n+10 smell wrong to me, but I don’t know why and I don’t know what to do to fix it.” Gemini has done well for me at guessing what my gut was upset about and coming up with the solution. And then it just presses all the buttons. Job done.

lbrito•28m ago
It's less that I'm a skeptic, but more that I'm finding I intensely abhor the world we're building for ourselves with these tools (which I admittedly use a lot).
M4v3R•26m ago
Why abhor specifically?
