Have you tried doing it on your own as comparison?
Any lessons why that might be true?
Are you skipping a design/brainstorming stage and doing it now by iterating through junk code, when that iteration might not be necessary?
You might be right for now, but by doing it yourself you gain experience that you otherwise lose by delegating to the LLM. With experience, the reverse might be true.
Is it all just throw away code to test out ideas?
Would it be better to use libraries that abstract things away, with you writing a few lines at a higher level?
Since you throw away most of the code, could you design a bit before jumping into code and develop something closer to the desired solution?
I’d fire someone who was putting out that “code”.
Seriously, there is no way someone can review all that code so you must be solo (or at a place that doesn’t code review). If you aren’t reviewing your code then you are “vibe coding” (said with maximum derision) and what you are “building” is a house of cards.
"You are a dumb vibe coder producing bad slop"
You don't know anything about the code, the project, the person, but feel justified to insult and demean.
Do you seriously think that someone can write/review/maintain that?
Either they are lying about the output or they _are_ creating slop, period.
Ridiculous comments like that should _not_ go unchallenged, I wouldn't want younger-me seeing that and thinking it was normal or a good idea.
You didn't challenge, you insulted.
You can ask how they ensure quality, what the review process is like, what QA is like, how humans keep up with that pace of change, etc.
>> Do you seriously think that someone can write/review/maintain that?
> Yes, I do.
Then we have nothing more to discuss. If someone says "I can fly by flapping my arms", I can absolutely say "That's absurd"; I'm not going to respond with "What technique do you use? How can you do that?" because it's patently absurd. I'm not going to engage, I'm going to call out what is a lie or someone who has deluded themselves into thinking they are actually creating something of value.
Claims regarding productivity gains would require long-term studies involving multiple metrics to be credible.
Until those exist, the benefits of AI code gen are basically as credible as MAHA pseudoscience.
People are sharing their experiences. Not every comment on the internet has to have scientific rigor behind it.
I just want to point out you've insulted, made absurd comparisons, and now raised the goalposts to an impossible level. The cope about AI is real.
I'm a mediocre dev but I'm at least 10x more productive with AI tooling.
I don't need AI to generate code.
Pipelines are nonexistent (who wants useless code? just make better abstractions).
Copy-and-paste code is nonexistent; it's taboo to duplicate, so we find libraries to reuse within the project or write them.
I don't need it to write tests; I want to write tests myself to force myself to think about the problem.
The code base is large enough that it's generally useless for search (and the old tools work much better).
And I don't see what else it would be useful for, though I'm trying (writing a class skeleton, maybe? but then I can do that fast as well).
If I find an API I want to use, I let Claude build an API wrapper for me, or handle any other specific issue that typically ends up in a lib.
So: an isolated, single-purpose file that doesn't depend on any other files, in a code base that already includes similar files it can steal the style from.
This is a situation where it usually does really well without breaking anything.
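To give a concrete flavor, here's a minimal sketch of such a wrapper file in Python (the endpoint, names, and auth scheme are all made up for illustration; the real ones come from whatever API doc I hand it):

    import requests

    class WeatherClient:
        """Thin wrapper around a hypothetical weather REST API."""

        BASE_URL = "https://api.example.com/v1"

        def __init__(self, api_key: str, timeout: float = 10.0):
            # One session per client; auth lives here, not at call sites.
            self.session = requests.Session()
            self.session.headers["Authorization"] = f"Bearer {api_key}"
            self.timeout = timeout

        def current(self, city: str) -> dict:
            # One endpoint, one method; HTTP errors surface as exceptions.
            resp = self.session.get(
                f"{self.BASE_URL}/current",
                params={"city": city},
                timeout=self.timeout,
            )
            resp.raise_for_status()
            return resp.json()

Nothing clever, no dependencies on the rest of the code base, easy to review in one pass.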
Alternatively, if it's web calls, then OpenAPI and a dedicated generator will work faster long term (set the generator up once, then import Swagger files for whatever you need, or tell Claude to generate the Swagger file if one isn't available; see the sketch below).
That would help with API changes longer term too (just update the Swagger file, regenerate the code, and update call sites where needed).
If it's actual library calls (C/C++), then why would a wrapper be needed? Doesn't it add yet another layer that makes things more difficult to grasp?
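To make the regen loop concrete, here's a minimal sketch that shells out to the openapi-generator CLI from Python (the spec path and output directory are hypothetical; -i, -g, and -o are the generator's input/language/output flags, but check your version's docs):

    import subprocess

    def regenerate_client(
        spec: str = "specs/payments-v2.yaml",  # updated OpenAPI/Swagger file (hypothetical)
        out_dir: str = "clients/payments",     # where the generated client package lands
    ) -> None:
        # Rerun after every spec change; only the call sites need manual review.
        subprocess.run(
            ["openapi-generator-cli", "generate",
             "-i", spec, "-g", "python", "-o", out_dir],
            check=True,
        )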
Usually I just send a link to the API doc with a sentence or two asking it to wrap one or two functions, which leads to small, specialized, and mostly efficient wrappers without overhead. Easy to fix should something change.
It's just a random aspect where I find AI to work really well for coding (for me).
I’d rather shared infrastructure fit a stability and correctness spec than fit the whims of some random programmers throwing a fit and refusing to adapt.
I suppose you’re fine with bridges built haphazardly and airplanes with doors that fall off.
There’s real infrastructure under the software that powers critical logistics. Such systems really should just be simple and straightforward.
You want ornate meta software structures? Have at it on your personal laptop like a proper h4xx0r
If you’re a real practitioner of the craft you’ll do it for free in your basement. If you’re just trying to unicorn you’re just a traditional capitalist and that’s boring.
So specs are more like a historical archive of decisions taken. I prefer to take a change request (ticket), transform it into a list of requirements, then apply that against the code, rather than iterating through old specs in order to produce a new version of the code.
So it becomes f(z, k), where z is the spec for the change and k is the old code, instead of f(a, b, c, …, z) (with no k), where a through y are all the accumulated specs and z is the new one.
ADDENDUM
In other words, specs are like memos while the code is the company policy handbook. No one edits past memos; you just issue a new one and then edit the handbook.
So I'd rather have code versions (git commits, …) that link to change requests, which link to discussion and analysis, than deal with a pile of documents.
(Especially for HTML layouts and repetitive changes.)
Agentic workflows are cool and have their place, but they essentially turn me into a full-time code reviewer. Sometimes that's acceptable, sometimes I don't like it. If the agent gets even slightly off the path I want it on, I need to just do it myself; I've given up on trying to prompt my way out of a mistake it made. You get stuck thinking "one more prompt/round and it will fix the problem and everything will be perfect", then you find yourself 4+ rounds in with the agent flip-flopping between two things it's already tried.
If I truly need to generate a starter project I just talk to ChatGPT.
But what about the opposite: all the other surface-level stuff? Today I had it fill out a calendar, generate test data meeting specs, do some simple translation, proofread some text, and search.
If there's a task that can be directed, and it takes less time to direct than to do, it's a productivity gain.
However, my biggest gripe with any of the LLMs or tools is that they do not suggest a package/library to solve my issue.
For example, I wanted to do something with GeoJSON and it wrote a heap of code. In most cases like that I would rather use a well-maintained and documented package.
I keep thinking that I need to write something that suggests a package/library before writing code.
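As a sketch of the difference (shapely is a real, maintained package; the feature dict is made-up example data), the hand-rolled heap of geometry code collapses into a few lines:

    # A maintained package (shapely) parses GeoJSON-style geometry dicts
    # directly, instead of hand-rolled parsing and math.
    from shapely.geometry import shape

    feature = {
        "type": "Polygon",
        "coordinates": [[[0, 0], [4, 0], [4, 3], [0, 0]]],
    }
    polygon = shape(feature)  # build a geometry from the GeoJSON dict
    print(polygon.area)       # 6.0
    point = shape({"type": "Point", "coordinates": [2, 1]})
    print(polygon.contains(point))  # True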
Track one metric (e.g., hours saved weekly) to cut through the hype; real gains hide in boring tasks, not flashy demos. What specific area feels unproductive for you? Maybe the community can spot the gap.