Would open source, local models keep pressure on AI companies to prioritize usable code, as code quality and engineering time saved are critical to build-vs-buy discussions?
Both can generate code, though. I've generated code using the web interface and it works; it's just a bit tedious to copy back and forth.
Like you, I’ve accumulated tons of LLM usage via apps and web apps. I can actually see how the models are much more succinct there compared to the API interface.
My uneducated guess is that LLMs try to fit their responses into the “output tokens” limit, which is surely much lower in UIs than what can be set in pay-as-you-go interfaces.
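For what it's worth, with pay-as-you-go access that ceiling is something you set yourself. A minimal sketch using the OpenAI Python client (the model name and limit here are just illustrative):

```
# Hypothetical sketch: with pay-as-you-go access you choose the output cap yourself,
# whereas chat UIs pick a (usually smaller) cap for you.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    max_tokens=4096,  # explicit output-token ceiling, chosen by the caller
    messages=[{"role": "user", "content": "Refactor this function to be more concise."}],
)
print(response.choices[0].message.content)
```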
You remember those days right? All those Flash sites.
I've found with LLMs I can usually convince them to get me at least something that mostly works, but each step compounds with excessive amounts of extra code, extraneous comments ("This loop goes through each..."), and redundant functions.
In the short term it feels good to achieve something 'quickly', but there's a lot of debt associated with running a random number generator on your codebase.
Good programs are written by people who anticipate what might go wrong. If the document says 'don't do X', they know a tester is likely to try X because a user will eventually do it.
I can see an LLM producing a good program with terrible code that's hard to grok and adjust.
On the other hand, it shows how much coding is just repetition. You don't need to be a good coder to perform serviceable work, but you won't create anything new and amazing either, if you don't learn to think and reason - but that might for some purposes be fine. (Worrying for the ability of the general population however)
You could ask whether these students would have gotten anything done without generated code? Probably, it's just a momentarily easier alternative to actual understanding. They did however realise the problem and decided by themselves to write their own code in a simpler, more repetitive and "stupid" style, but one that they could reason about. So hopefully a good lesson and all well in the end!
Anthropomorphizing LLMs is not helpful. It doesn't "get" anything; you just gave it new tokens, ones which are more closely correlated with the correct answer. It also generates responses similar to what a human would say in the same situation.
Note I first wrote "it also mimics what a human would say", then I realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms which better describe how it really works is important.
https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...
This is why I tend to lead with the "quality of response" argument rather than the "user's own mind" argument.
It's a feature of language to describe things in those terms even if they aren't accurate.
>using terms which better describe how it really works is important
Sometimes, especially if you doing something where that matters, but abstracting those details away is also useful when trying to communicate clearly in other contexts.
This is an obvious mistake: the price is per megatoken, not per token.
Though I'm not a "vibe coder" myself I very much recognize this as part of the "appeal" of GenAI tools more generally. Trying to get Image Generators to do what I want has a very "gambling-like" quality to it.
if it doesn't work the first time you pull the lever, it might the second time, and it might not. Either way, the house wins.
It should be regulated as gambling, because it is. There's no metaphor, the only difference from a slot machine is that AI will never output cash directly, only the possibility of an output that could make money. So if you're lucky with your first gamble, it'll give you a second one to try.
Gambling all the way down.
That's wild. Anything with non-deterministic output will have this.
Anything with non-deterministic output that charges money ...
Edit Added words to clarify what I meant.
You only lose those rights in the contracts you sign (which, in terms of GPT, you've likely clicked through a T&C which waives all rights to dispute or reclaim payment).
If you ask an artist to draw a picture and decide it's crap, you can refuse to take it and to pay for it. They won't be too happy about it, but they'll own the picture and can sell it on the market.
Maybe art is special, but there are other professions where someone can invest heaps of time and effort without delivering the expected result. A trial attorney, treasure hunter, oil prospector, app developer. All require payment for hours of service, regardless of outcome.
When it comes to work that requires craftsmanship, it's pretty common to be able to not pay them if they do a poor job. It may cost you more than you paid them to fix their mistake, but you can generally reclaim the money you paid them if the work they did was egregiously poor.
That's still not gambling and it's silly to pretend it is. It feels like gambling but that's it.
Brain scans have revealed that waiting for a potential win stimulates the same areas as the win itself. That's the "appeal" of gambling. Your brain literally feels like it's winning while waiting because it _might_ win.
Though you could say the same thing about pretty much any VC funded sector in the "Growth" phase. And I probably will.
It also kind of breaks the whole argument that they're designed to be addictive in order to make you spend more on tokens.
Every prompt and answer is contributing value toward your progress toward the final solution, even if that value is just narrowing the latent space of potential outputs by keeping track of failed paths in the context window, so that it can avoid that path in a future answer after you provide followup feedback.
The vast majority of slot machine pulls produce no value to the player. Every single prompt into an LLM tool produces some form of value. I have never once had an entirely wasted prompt unless you count the AI service literally crashing and returning a "Service Unavailable" type error.
One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.
This has not been my experience, maybe sometimes, but certainly not always.
As an example: asking chatgpt/gemini about how to accomplish some sql data transformation set me back in finding the right answer because the answer it did give me was so plausible but also super duper not correct in the end. Would've been better off not using it in that case.
Brings to mind "You can't build a ladder to the moon"
But with an LLM, I was able to eliminate this bad path faster and earlier. I also learned more about my own lack of knowledge and improved myself.
I truly mean it when I say that I have never had an unproductive experience with modern AI. Even when it hallucinates or gives me a bad answer, that is honing my own ability to think, detect inconsistencies, examine solutions for potential blindspots, etc.
That assumes that the value of a solution is linear with the amount completed. If the Pareto Principle holds (80% of effects come from 20% of causes), then not getting that critical 10+% likely has an outsized effect on the value of the solution. If I have to do the 20% of the work that's hard and important after taking what the LLM did for the remainder, I haven't gained as much because I still have to build the state machine in my head to understand the problem-space well enough to do that coding.
I personally think of AI tools as an incremental aid that enables me to focus more of my efforts on the really hard 10-20% of the problem, and get paid more to excel at doing what I do best already.
AI is not an excuse to turn off your brain. I find it ironic that many people complain that they have a hard time identifying the hallucinations in LLM generated content, and then also complain that LLM's are making LLM users dumber.
The problem here is also the solution. LLM's make smarter people even smarter, because they get even better at thinking about the hard parts, while not wasting time thinking about the easy parts.
But people who don't want to think at all about what they are doing... well they do get dumber.
When you get deep into engineering with AI you will find yourself spending a dramatically larger percentage of your time thinking about the hardest things you have ever thought about, and dramatically less time thinking about basic things that you've already done hundreds of times before.
You will find the limits of your abilities, then push past those limits like a marathon runner gaining extra endurance from training.
I think the biggest lie in the AI industry is that AI makes things easier. No, if anything you will find yourself working on harder and harder things because the easy parts are done so quickly that all that is left is the hard stuff.
- I buy stock that doesn't perform how I expected.
- I hire someone to produce art.
- I pay a lawyer to represent me in court.
- I pay a registration fee to play a sport expecting to win.
- I buy a gift for someone expecting friendship.
Are all gambas.
You aren't paying for the result (the win), you are paying for the service that may produce the desired result, and in some cases one of many possible desirable results.
Hence the adage "sir, this is a casino"
Neither is GenAI, the grandparent comment is dumb.
I almost can't believe this idea is being seriously considered by anybody. By that logic buying any CPU is gambling because it's not deterministic how far you can overclock it.
Just so you know, not every LLM use case requires paying for tokens. You can even run a local LLM and use Cline with it for all your coding needs. Pull that slot machine lever as many times as you like without spending a dollar.
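As a sketch of that zero-marginal-cost setup (assuming Ollama is installed and serving on its default port, and that a coding model has been pulled; the model name is just an example):

```
# Sketch: point the standard OpenAI client at a locally running Ollama server.
# Assumes `ollama serve` is running and a model such as qwen2.5-coder has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; nothing is billed
)

reply = client.chat.completions.create(
    model="qwen2.5-coder",  # illustrative local model name
    messages=[{"role": "user", "content": "Write a function that parses a CSV row."}],
)
print(reply.choices[0].message.content)
```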
If you don't get something good the first time you buy a book, you might with the next book, or you might not. Either way, the house wins.
It should be regulated as gambling, because it is. There's no metaphor — the only difference from a slot machine is that books will never output cash directly, only the possibility of an insight or idea that could make money. So if you're lucky with your first gamble, you'll want to try another.
Gambling all the way down.
All those laid off coders gambled on a career that didn’t pan out.
Want more certainty in life, gonna have to get political.
And even then there is no guarantee the future give a crap. Society may well collapse in 30 years, or 100…
This is all just role play to satisfy the prior generations' story-driven illusions.
Especially when you try to get them to generate something they explicitly tell you they won't, like nudity. It feels akin to hacking.
Is it really vibe coding if you are building a detailed coding plan, conducting "git-based experimentation with ruthless pruning", and essentially reviewing the code incrementally for correctness and conciseness? Sure, it's a process dependent on AI, but it's very far from nearly "forget[ing] that the code even exists".
That all said, I do think the article captures some of the current cost/quality dilemmas. I wouldn't jump to conclusions that these incentives are actually driving most current training decisions, but it's an interesting area to highlight.
[1] https://trends.google.com/trends/explore?geo=US&q=%22vibe%20...
In my own usage, I tend to alternate between tiny, well-defined tasks and larger-scale, planned architectural changes or new features. Things in between those levels are hit and miss.
It also depends on what I'm building and why. If it's a quick-and-dirty script for my own use, I'll often write up - or speak - a prompt and let it do its thing in the background while I work on other things. I care much less about code quality in those instances.
This has led to their abilities stalling while their output seemingly goes up. But when you look at the quality of their output, and their ability to get projects over the last 10% or make adjustments to an already completed project without breaking things, it's pretty horrendous.
At work I've inherited a Kotlin project and I've never touched Kotlin or android before, though I'm an experienced programmer in other domains. ChatGPT has been guiding me through what needs to be done. The problem I'm having is that it's just too damn easy to follow its advice without checking. I might save a few minutes over reading the docs myself, but I don't get the context the docs would have given me.
I'm a 'Real Programmer' and I can tell that the code is logically sound and self-consistent. The code works and it's usually rewritten so much as to be distinctly my code and style. But still it's largely magical. If I'm doing things the less-correct way, I wouldn't really know because this whole process has led me to some pretty lazy thinking.
On the other hand, I very much do not care about this project. I'm very sure that it will be used just a few times and never see the light of day again. I don't expect to ever do android development again after this, either. I think lazy thinking and farming the involved thinking out to ChatGPT is acceptable here, but it's clear how easily this could become a very bad habit.
I am making a modest effort to understand what I'm doing. I'm also completely rewriting or ignoring the code the AI gives me, it's more of an API reference and example. I can definitely see how a less-seasoned programmer might get suckered into blindly accepting AI code and iterating prompts until the code works. It's pretty scary to think about how the coming generations of programmers are going to experience and conceptualize programming.
It certainly is hard when I'm say writing unit tests to avoid the temptation to throw it into Cursor and prompt until it works.
I have managed a Python app for a long time, due to it being part of a much larger set of services I manage. I've never been particularly comfortable with it.
I'm learning easily, and understanding the Python much, much better.
I think I'm atrophying on a lot of syntax and the things I used to type automatically.
It doesn't really feel straightforward that it's one or the other.
2. I've had good fortunes keeping the agents to constrained areas, working on functions, or objects, with clearly defined (by me) boundaries. If the measure of a junior engineer is that you correct them once a day, an engineer once a week, a senior once a month, a principal once a quarter... Treat these agents like hyper-energetic interns. Nudge frequently.
3. Standard org management coding practices apply. Force the agents to show work, plan, unit test, investigate.
And, basically, I've described that we're becoming Software Development Managers with teams of on-demand low-quality interns. That's an incredibly powerful tool, but don't expect hyper-elegant and compact code from them. Keep that for the senior engineering staff (humans) for now.
(Note: The AlphaEvolve announcement makes me wonder if I'm going to have hyper-energetic applied science interns next...)
"write minimum code required"
It's not even that sensitive to the wording - "be terse" or "make minimal changes" amount to the same thing - but the resulting code will often be at least 50% shorter than the un-guided version.
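If you're driving the API directly rather than a chat UI, the same instruction can live in the system prompt so you don't have to repeat it every turn. A rough sketch with the Anthropic Python SDK (the model string and the exact wording are placeholders, not a recommendation):

```
# Sketch: bake the "minimum code" instruction into the system prompt
# instead of repeating it in every message. Model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

message = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder
    max_tokens=2048,
    system=(
        "Write the minimum code required. Be terse. "
        "Do not add explanatory comments or restate the task."
    ),
    messages=[{"role": "user", "content": "Add a retry wrapper around fetch_page()."}],
)
print(message.content[0].text)
```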
It'll check _EVERY_ edge case separately, even in situations that can never happen, and where, even if they did, it would be a NOP anyway.
It really captures how little control one has over the process, while simultaneously having the illusion of control.
I don't really believe that code is being made verbose to make more profits. There's probably some element of model providers not prioritizing concise code, but if conciseness while maintaining "quality" were possible, it would give one model a sufficient edge over others that I suspect providers would do it.
I think there's another perverse incentive here - organisations want to produce features/products fast, which LLMs help with, but it comes at the cost of reduced cognitive capabilities/skills in the developers over the longer term as they've given that up through lack of use/practice.
This is actually a big insight about life, one that in some Eastern philosophies you are supposed to arrive at.
We love the illusion of control, even though we don’t really have it. Life mostly just unfolds as we experience it.
Shortly after he turned 50, he was diagnosed with pancreatic cancer, and he died several months later, following a very painful and difficult attempt to treat it.
In my mind, this kind of thing is the height of tragedy—he did everything right. He exhibited an incredible amount of self-control and deferred his happiness, ensuring that his family and finances were well-cared for and secured, and then having fulfilled his obligations, he was almost immediately robbed of a life that he’d worked so hard to earn.
I experienced a few more object lessons in the same vein myself, namely having been diagnosed with multiple sclerosis at the age of 18, and readjusting my life’s goals to accommodate the prospect of disability. I’m thankfully still churning along under my own capacities, now at 41yo, but MS can be unpredictable, and I find it is necessary to remind myself of this from time to time. I am grateful for every day that I have, and to the extent it’s possible, I try to find nearer-term sources for happiness and fulfillment.
Don’t waste any time planning for more than the next five years (with the obvious exceptions for things like financial planning), as you can’t possibly know what’s coming. Even if the unexpected event is a happy one, like an unexpected child or a sudden financial windfall, your perspective will almost certainly be dramatically altered 1-2x each decade.
It created a sense of urgency in my own life. You have this idea that you will be the same person until you die of old age, and suddenly you realise that the current year is worth much more than another year two decades from now. A bird in the hand is worth two in the bush.
Yes, there are the grandmas in a trance vibe-gambling by shoving a bucket of quarters in a slot machine.
But you also have people playing Blackjack and beating the averages by knowing how it's played, maybe having a "feel" for the deck (or counting cards...), and most importantly knowing when to fold and walk away.
Same with LLMs, you need to understand context sizes and prompts and you need to have a feel for when the model is just chasing its own tail or trying to force a "solution" just to please the user.
Someone needs to make a plugin to count lines of discarded code and prompts.
Half of my job is fighting the "copy/paste/change one thing" garbage that developers generate. Keeping code DRY. The autocompletes do an amazing job of automating the repeated boilerplate. "Oh you're doing this little snippet for the first and second property? Obviously you want to do that for every property! Let me just expand that out for you!"
And I'm like "oooh, that's nice and convenient".
...
But I also should be looking at that with the stink-eye... part of that code is now duplicated a dozen times. Is there any way to reduce that duplication to the bare minimum? At least so it's only one duplicated declaration or call and all of the rest is per-thingy?
Or any way to directly/automatically wrap the thing without going property-by-property?
Normally I'd be asking myself these questions by the 3rd line. But this just made a dozen of those in an instant. And it's so tempting and addictive to just say "this is fine" and move on.
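To make that concrete, here's roughly the shape of it (illustrative Python, made-up property names): the autocomplete happily expands the first form a dozen times, while the second keeps the duplication down to a single list.

```
# What the autocomplete cheerfully expands: one near-identical line per property.
def sanitize(value):
    # stand-in for whatever per-field cleanup the real code does
    return str(value).strip()

def to_dto_duplicated(user):
    dto = {}
    dto["name"] = sanitize(user.name)
    dto["email"] = sanitize(user.email)
    dto["phone"] = sanitize(user.phone)
    dto["address"] = sanitize(user.address)
    # ...a dozen more copies, each one a chance for a typo or a missed update
    return dto

# The DRY version: one duplicated call site, and adding a property means editing one list.
FIELDS = ["name", "email", "phone", "address"]

def to_dto(user):
    return {field: sanitize(getattr(user, field)) for field in FIELDS}
```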
That kind of code is not fine.
I agree, but I'm also challenging that position within myself.
Why isn't it OK? If your primary concern is readability, then perhaps LLMs can better understand generated code relative to clean, human-readable code. Also, if you're not directly interacting with it, who cares?
As for duplication introducing inconsistencies, that's another issue entirely :)
Depends on your definition of fine. Is it less readable because it's doing the straightforward thing several times instead of wrapping it into a loop or a method, or is it more readable because of that?
Is it not fine because it's slower, or does it all just compile down to the same thing anyway?
Or is it not fine because you actually should be doing different things for the different properties but assumed you don't because you let the AI do the thinking for you?
The author should try Gemini; it’s much better.
Just to illustrate, I asked both about a browser automation script this morning. Claude used Selenium. Gemini used Playwright.
I think the main reasons Gemini is much better are:
1. It gets my whole code base as context. Claude can't take that many tokens. I also include documentation for newer versions of libraries (e.g. Svelte 5) that the LLM is not so familiar with.
2. Gemini has a more recent knowledge cutoff.
3. Gemini 2.5 Pro is a thinking model.
4. It's free to use through the web UI.
1. Poor solutions.
2. Solutions not understood by the person who prompted them.
3. Development team being made dumber.
4. Legal and ethical concerns about laundering open source copyrights.
5. I'm suspicious of the name "vibe coding", like someone is intentionally marketing it to people who don't care to be good at their jobs.
6. I only want to hire people who can do holistically better work than current "AI". (Not churn code for a growth startup's Potemkin Village, nor to only nominally satisfy a client's requirements while shipping them piles of counterproductive garbage.)
7. Publicizing that you are a no-AI-slop company might scare away the majority of the bad prospective employees, while disproportionately attracting the especially good ones. (Not that everyone who uses "AI" is bad, but they've put themselves in the bucket with all the people who are bad, and that's a vastly better filter for the art of hiring than whether someone has spent months memorizing LeetCode answers solely for interviews.)
The second one is more intra/interpersonal: under pressure to produce, it's very easy to rely on LLMs to get one 80% of the way there and polish the remaining 20%. I'm in a new domain that requires learning a new language. So something I've started doing is asking ChatGPT to come up with exercises / coding etudes / homework for me based on past interactions.
Even in this article though, I feel like there is a lot of anthropomorphization of LLMs.
> LLMs and their limitations when reasoning about abstract logic problems
As I understand them, LLMs don't "reason" about anything. It's purely a statistical sequencing of words (or other tokens) as determined by the training set and the prompt. Please correct me if I'm wrong.
Also, regarding this theory that the models may be biased to produce bloated code: I've reposted this once already, and no one has replied yet, and I still wonder:
----------
To me, this represents one of the most serious issues with LLM tools: the opacity of the model itself. The code (if provided) can be audited for issues, but the model, even if examined, is an opaque statistical amalgamation of everything it was trained on.
There is no way (that I've read of) for identifying biases, or intentional manipulations of the model that would cause the tool to yield certain intended results.
There are examples of DeepSeek generating results that refuse to acknowledge Tiananmen Square, etc. These serve as examples of how the generated output can intentionally be biased, without the ability to readily predict this general class of bias by analyzing the model data.
----------
I'm still looking for confirmation or denial on both of these questions...
Consider Database-as-a-service companies: They're not incentivized to optimize on CPU usage, they charge per cpu. They're not incentivized to improve disk compression, they charge for disk-usage. There are several DB vendors who explicitly disable disk compression and happily charge for storage capacity.
When you run the software yourself, or the model yourself, the incentives aligned: use less power, use less memory, use less disk, etc.
Wait, no, sorry... that doesn't quite "paint the right picture".
The "single use" SSDs are 75 times cheaper than storing the data in the cloud.
Because then I might accept the cost!
Realistically all of these systems use some type of data compression such as Parquet files, so the data on disk is likely smaller than the ingested data.
I worked out that the markup on CPU, network bandwidth, and storage for the default logging products from the major clouds is on the order of 25x to 500x.
Okay, sure, there's some people that need to be paid, the back-end software may have some licensed components, etc, etc...
But still, comparing this to any other cloud service, the gross profit margin is just ridiculous!
It's the typical IT marketing trick of selling the commodity (VMs) at competitive prices, and then clawing back the profits via the "enterprise add-ons".
That said: we both agree, log ingestion services are extremely expensive.
But my team's time is soooo valuable. It's sooo sooo sooo valuable. Oh and we can't afford to hire anyone else either. But our time its sooo valuable. We need these tools!
- Premature optimization is the root of all evil, can't waste expensive dev hours on that..
However... "vibe architecting" is likely going to be the way forward. I have had success with generating/tuning an architecture plan with AI, having it create stub files/functions then filling them out individually. I can get pretty much the whole way without typing code, but it does require a fair bit more architectural thinking than usual and a good bit of reading code (then telling the AI to "do better").
I think of it like the analogy of blind men describing an elephant when they can only feel a single part. AI is decent at high level architecture and decent at low level production but you need a human to understand the big picture and how the pieces fit (and which ones are missing).
What’s the difference?
Yes, Claude Code can be token-heavy, but that's often a trade-off for their current level of capability compared to other options. Additionally, Claude Code has built-in levers for cost (I prefer they continue to focus on advanced capability, let pricing accessibility catch up).
"early days" means:
- Prompt engineering is still very much a required skill for better code and lower pricing
- Same with still needing to be an engineer for the same reasons, and:
- Devs need to actively guide these agents. This includes detailed planning, progress tracking, and careful context management – which, as the author notes, is more involved than many realize. I've personally found success using Gemini to create structured plans for Claude Code to execute, which helps manage its verbosity and focus to "thoughtful" execution (as guided by gemini). I drop entire codebases into Gemini (for free).
Agree with you on all the rest, and I think writing a post like this was very much intended as a gut-check on things since the early days are hopefully the times when things can get fixed up.
The leaked Claude Code codebase was riddled with "concise", "do not add comments", "mimic codestyle", even an explicit "You should minimize output tokens as much as possible" etc. Btw, Claude Code uses a custom system prompt, not the leaked 24k claude.ai one.
Like sure, I can ask claude to give me the barebones of a web service that does some simple task. Or a webpage with some information on it.
But any time I've tried to get AI services to help with bugfixing/feature development on a large, complex, potentially multi-language codebase, it's useless.
And those tasks are the ones that actually take up the majority of my time. On the occasion that I'm spinning a new thing up quickly, I don't really need an AI to do it for me -- I mean, that's the easy part!
Is there something I'm missing? Am I just not using it right? I keep seeing people talk about how addictive it is, how the productivity boost is insane, how all their code is now written by AI and then audited, and I just don't see how that's possible outside of really simple rote programming.
The talk about it makes more sense when you remember most developers are primarily writing CRUD webapps or adware, which is essentially a solved problem already.
Clearly something like “server telemetry” is the datacenter’s “CRUD app” analogue.
It’s a solved problem that largely requires rtfm and rote execution of well worn patterns in code structure.
Please stick to the comment guidelines:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Perhaps it’s time for a career change then. Follow your joy and it will come more naturally for you to want to spread it.
Again,
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
From my reading, the “strongest plausible interpretation” of the original “CRUD app” line was “it’s a solved problem that largely requires rtfm and rote execution of well worn patterns in code structure”, making it similarly situated as “server telemetry” to make LLMs appear superintelligent to people new to programming within those paradigms.
I’m unfamiliar with “device mapping”, so perhaps someone else can confirm if it is “the crud app of Linux kernel dev” in that vein.
Just listing topics in software development is hardly evidence of either your own ability to work on them, or of their inherent complexity.
Since this seems to have hurt your feelings, perhaps a more effective way to communicate your needs would be to explain why you find “server telemetry” to be more difficult/complex/w/e to warrant needing an llm for you to be able to do it.
Based on what I've seen, Python and TypeScript are where it fares best. Other languages are much more hit and miss.
It's a moderately useful tool for me. I suspect the people that get the most use out of it are those who would take more than an hour to read code I would take 10 minutes to read. Which is to say, the least experienced people get the most value.
I guess I could work on the magic incantations to tweak here and there a bit until it works and I guess that's the way it is done. But I wasn't hooked.
I do get value out of LLM's for isolated broken down subtasks, where asking a LLM is quicker than googling.
For me, AI will probably become really useful once I can scan and integrate my own complex codebase, so it gives me solutions that work there and doesn't hallucinate API endpoints or jump between incompatible library versions (my main issue).
Also you have to learn to talk to it and how to ask it things.
"The programming language of the future will be English"
---
"Well are you using it right? You have to know how to use it"
I typically use it to whip up a CLI tool or script to do something that would have been too fiddly otherwise.
While sitting in a Teams meeting I got it to use the Roslyn compiler SDK in a CLI tool that stripped a very repetitive pattern from a code base. Some OCD person had repeated the same nonsense many thousands of times. The tool cleaned up the mess in seconds.
What were they doing?
That does nothing except add visual noise.
It's like a magic incantation to make the errors go away (it doesn't actually), probably by someone used to Visual Basic's "ON ERROR RESUME NEXT" or some such.
Almost everybody doing serious work with LLMs is using an agent, which means that the LLM is authoring files, linting them, compiling them, and iterating when it spots problems.
There's more to using LLMs well than this, but this is the high-order bit.
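For anyone who hasn't tried one, the core of an agent is a loop simple enough to sketch in a few lines (the `call_llm` helper below is a hypothetical stand-in for whatever model API you use, and the lint/test commands are just examples):

```
# Minimal agent loop: write the model's output to a file, run checks, and feed
# failures back as the next prompt. `call_llm` is a hypothetical stand-in for a
# real model call; ruff/pytest are example check commands.
import subprocess

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API of choice here")

def run_checks(path: str) -> str:
    failures = []
    for cmd in (["ruff", "check", path], ["pytest", "-q"]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(result.stdout + result.stderr)
    return "\n".join(failures)

def agent(task: str, path: str, max_iterations: int = 5) -> None:
    prompt = task
    for _ in range(max_iterations):
        code = call_llm(prompt)
        with open(path, "w") as f:
            f.write(code)
        failures = run_checks(path)
        if not failures:
            return  # checks pass; a human still reviews the diff
        prompt = f"{task}\n\nThe previous attempt failed these checks:\n{failures}"
```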
later
Oh, I like Zed a lot too. People complain that Zed's agent (the back-and-forth with the model) is noticeably slower than the other agents, but to me, it doesn't matter: all the agents are slow enough that I can't sit there and wait for them to finish, and Zed has nice desktop notifications for when the agent finishes.
Plus you get a pretty nice editor --- I still write exclusively in Emacs, but I think of Zed as being a particularly nice code UI for an LLM agent.
I have this workflow where I trigger a bunch of prompts in the morning, lunch and at the end of the day. At those same times I give it feedback. The async nature really means I can have it work on things I can’t be bothered with myself.
It keeps _me_ from context switching into agent manager mode. I do the same thing for doing code reviews for human teammates as well.
Like most of the code agents it works best with tight, testable loops. But it has a concept of short vs long tests and will give you plans and confidence values to help you refine your prompt if you want.
I tend to just let it go. If it gets to a 75% done spot that isn’t worth more back and forth I grab the pr and finish it off.
I haven't seen any mentions of Augment code yet in comment threads on HN. Does anyone else use Augment Code?
It has a very good system prompt so the code is pretty good without a lot of fluff.
Cursor is fine, Claude Code and Aider are a bit too janky for me - and tend to go overboard (making full-ass git commits without prompting) and I can't be arsed to rein them in.
If you aren't building up mental models of the problem as you go, you end up in a situation where the LLM gets stuck at the edges of its capability, and you have no idea how even to help it overcome the hurdle. Then you spend hours backtracking through what it's done building up the mental model you need, before you can move on. The process is slower and more frustrating than not using AI in the first place.
I guess the reality is, your luck with AI-assisted coding really comes down to the problem you're working on, and how much of it is prior art the LLM has seen in training.
If it helps, for context: I'll go round and round with an agent until I've got roughly what I want, and then I go through and beat everything into my own idiom. I don't push code I don't understand and most of the code gets moved or reworked a bit. I don't expect good structure from LLMs (but I also don't invest the time to improve structure until I've done a bunch of edit/compile/test cycles).
I think of LLMs mostly as a way of unsticking and overcoming inertia (and writing tests). "Writing code", once I'm in flow, has always been pleasant and fast; the LLMs just get me to that state much faster.
I'm sure training data matters, but I think static typing and language tooling matters much more. By way of example: I routinely use LLMs to extend intensely domain-specific code internal to our project.
Inconsistency and crap code quality aren't solved yet, and these make the agent workflow worse because the human only gets to nudge the AI in the right direction very late in the game. The alternative, interactive, non-agentic workflows allow for more AI-hand-holding early, and better code quality, IMO.
Agents are fine if no human is going to work on the (sub)system going forward, and you only care about the shiny exterior without opening the hood to witness the horrors within.
I have definitely not seen this in my experience (with Aider, Claude and Gemini). While helping me debug an issue, Gemini added a #!/bin/sh line to the middle of the file (which appeared to break things), and despite having that code in the context didn't realise it was the issue.
OTOH, when asking for debugging advice in a chat window, I tend to get more useful answers, as opposed to a half-baked implementation that breaks other things. YMMV, as always.
With a web-based system you need repomix or something similar to give the whole project (or parts of it if you can be bothered to filter) as context, which isn't exactly nifty
Regardless, Gemini 2.5 Pro is far far better and I use that with open-source free Roo Code. You can use the Gemini 2.5 Pro experimental model for free (rate limited) to get a completely free experience and taste for it.
Cursor was great and started it off, but others took notice and now they're all more or less the same. It comes down to UX and preference, but I think Windsurf and Roo Code just did a better job here than Cursor, personally.
For code changes I prefer to paste a single function in, or a small file, or error output from a compile failure. It’s pretty good at helping you narrow things down.
So, for me, it’s a pile of small gains where the value is—because ultimately I know what I generally want to get done and it helps me get there.
I've successfully been able to test out new libraries and do explorations quickly with AI coding tools and I can then take those working examples and fix them up manually to bring them up to my coding standards. I can also extend the lifespan of coding tools by doing cleanup cycles where I manually clean up the code since they work better with cleaner encapsulation, and you can use them to work on one scoped component at a time.
I've found that they're great to test out ideas and learn more quickly, but my goal is to better understand the technologies I'm prototyping myself, I'm not trying to get it to output production quality code.
I do think there's a future where LLMs can operate in a well architected production codebase with proper type safe compilation, linting, testing, encapsulation, code review, etc, but with a very tight leash because without oversight and quality control and correction it'll quickly degrade your codebase.
I find that token memory size limits are the main barrier here.
Once the LLM starts forgetting other parts of the application, all bets are off and it will hallucinate the dumbest shit, or even just remove features wholesale.
Generating function documentation hasn't been that useful either as the doc comments generated offer no insight and often the amount I'd have to write to get it to produce anything of value is more effort than just writing the doc comments myself.
For my personal project in Zig they either get lost completely or give me terrible code (my code isn't _that_ bad!). There seems to be no middle ground here. I've even tried the tools as pair programmers, but they often get lost or stuck in loops of repeating the same thing that's already been mentioned (likely falling out of the context window).
When it comes to others using such tools, I've had to ask them to stop using them to think, as it becomes next to impossible to teach or mentor if they're passing what I say to the LLM or trying to have it perform the work. I'm confident in debugging people when it comes to math / programming, but with an LLM in between it's just not possible to guess where they went wrong or how to bring them back to the right path, as the thought process is lost (or there wasn't one to begin with).
This is not even "vibe coding", I've just never found it generally useful enough to use day-to-day for any task and my primary use of say phind has been to use it as an alternative to qwant when I cannot game the search query well enough to get the search results I'm looking for (i.e I ignore the LLM output and just look at the references).
That's because whatever training the model had, it didn't cover anything remotely similar to the codebase you worked on.
We get this issue even with obscure FLOSS libraries.
When we fail to provide context to LLMs, they generate examples by following superficial cues like coding conventions. In extreme cases, such as code that employs source code generators or templates, LLMs even fill in function bodies that the code generators are designed to generate for you. That's because, if LLMs are oblivious to the context, they resort to hallucinating their way into something seemingly coherent. Unless you provide them with context or instruct them not to make up stuff, they will resort to bullshitting their way into an example.
What's truly impressive about this is that often times the hallucinated code actually works.
> Generating function documentation hasn't been that useful either as the doc comments generated offer no insight and often the amount I'd have to write to get it to produce anything of value is more effort than just writing the doc comments myself.
Again, this suggests a failure on your side to provide any context.
If you give them enough context, LLMs synthesize and present it almost instantly. If you're prompting a LLM to generate documentation, which boils down to synthesizing what an implementation does and what its purpose is, and the LLM comes up empty, that means you failed to give it anything to work on.
The bulk of your comment screams failure to provide any context. If your code steers far away from what it expects, fails to follow any discernible structure, and doesn't even convey purpose and meaning in little things like naming conventions, you're not giving the LLM anything to work on.
I guess my point is, I have no use for LLMs in their current state.
> That's because whatever training the model had, it didn't cover anything remotely similar to the codebase you worked on.
> We get this issue even with obscure FLOSS libraries.
This is the issue however as unfamiliar codebases is exactly where I'd want to use such tooling. Not working in those cases makes it less than useful.
> Unless you provide them with context or instruct them not to make up stuff, they will resort to bullshit their way into an example.
In all cases context was provided extensively but at some point it's easier to just write the code directly. The context is in surrounding code which if the tool cannot pick up on that when combined with direction is again less than useful.
> What's truly impressive about this is that often times the hallucinated code actually works.
I haven't experienced the same. It fails more often than not and the result is much worse than the hand-written solution regardless of the level of direction. This may be due to unfamiliar code but again, if code is common then I'm likely familiar with it already thus lowering the value of the tool.
> Again,this suggest a failure on your side for not providing any context.
This feels like a case of blaming the user without full context of the situation. There are comments, the names are descriptive and within reason, and there's annotation of why certain things are done the way they are. The purpose of a doc comment is not "this does X" but rather _why_ you want to use this function and what its purpose is, which is something LLMs struggle to derive, from my testing of them. Adding enough direction to describe such is effectively writing the documentation with a crude English->English compiler in between. This is the same problem with unit test generation, where unit tests are not there to game code coverage but to provide meaningful tests of the domain and known edge cases of a function, which is again something the LLM struggles with.
For any non-junior task LLM tools are practically useless (from what I've tested) and for junior level tasks it would be better to train someone to do better.
I challenge you to explore different perspectives.
You are faced with a service that handles any codebase that's thrown at it with incredible ease, without requiring any tweaking or special prompting.
For some reason, the same system fails to handle your personal codebase.
What's the root cause? Does it lie in the system that works everywhere with anything you throw at it? Or is it in your codebase?
Note that language servers, static analysis tooling, and so on still work without issue.
The cause (which is my assumption) is that there aren't enough good examples in the training set for anything useful to be the most likely continuation thus leading to a suboptimal result given the domain. Thus the tool doesn't work "everywhere" for cases where there's less use of a language or less code in general dealing with a particular problem.
The one thing I really appreciated though was the AI’s ability to do a “fuzzy” search in occasional moments of need. Or, for example, sometimes the colloquial term for a feature didn’t match naming conventions in source code. The AI could find associations in commit messages and review information to save me time rummaging through git-blame. Like I said though, that sort of problem wasn’t necessarily a bottleneck and could often be solved much more cheaply by asking a coworker on Slack.
I could spend 5-10 minutes digging on through the docs for the correct config option, or I can just tap a hotkey, open up GitHub Copilot in Rider and tell it what I want to achieve.
And within seconds it had a correct-looking setting ready to insert to my renovate.json file. I added it, tested it and it works.
I kinda think people who diss AIs are prompting something like "build me Facebook" and then being disappointed when it doesn't :D
For instance, dealing with files that don't quite work correctly between two 3D applications because of slightly different implementations. Ask for a python script to patch the files so that they work correctly – done almost instantly just by describing the problem.
Also for prototyping. Before you spend a month crafting a beautiful codebase, just get something standing up so you can evaluate whether it's worth spending time on – like, does the idea have legs?
90% of programming problems get solved with a rubber ducky – and this is another valuable area. Even if the AI isn't correct, often times just talking it through with an LLM will get you to see what the solution is.
They are very handy tools that can help you learn a foreign code/base faster. They can help you when you run into those annoying blockers that usually take hours or days or a second set of eyes to figure out. They give you a sounding board and help you ask questions and think about the code more.
Big IF here. IF you bother to read. The danger is some people just keep clicking and re-prompting until something works, but they have zero clue what it is and how it works. This is going to be the biggest problem with AI code editors. People just letting Jesus take the wheel and during this process, inefficient usage of the tools will lead to slower throughput and a higher bill. AI costs a good chunk of change per token and that's only going up.
I do think it's addictive for sure. I also think the "productivity boost" is a feeling people get, but no one measures. I mean, it's hard to measure. Then again, if you do spend an hour on a problem you get stuck on vs 3 days then sure it helped productivity. In that particular scenario. Averaged out? Who knows.
They are useful tools, they are just also very misunderstood and many people are too lazy to take the time to understand them. They read headlines and unsubstantiated claims and get overwhelmed by hype and FOMO. So here we are. Another tech bubble. A super bubble really. It's not that the tools won't be with us for a long time or that they aren't useful. It's that they are way way overvalued right now.
dude, you can use Gemini Pro 2.5 with Cline - it's free and is rated at least as good as Claude Sonnet 3.7 right now.
1. Develop a Minimum Viable Product (MVP) or prototype that functions.
2. Write tests, either before or after the initial development.
3. Implement coding guidelines, style guides, linter etc. Do code reviews.
4. Continuously adjust, add features, refactor, review, and expand your test suite. Iterate and let the AI run tests and linters on each change (a minimal check script is sketched below)
While this process may seem lengthy, it ensures reliability and efficiency. Experienced engineers might find it as quick as working solo, but the structured approach guarantees success. It feels like pairing with an inexperienced developer.
Also, this process may run you into rate limits with Copilot and might not work with your current codebase due to a lack of tests and the absence of applied coding style guides.
Additionally, it takes time. For example, for a simple to mid-level tool/feature in Go, it might take about 1 hour to develop the MVP or prototype, but another 6 to 10 hours to refine it to a quality that you might want to show to other engineers.
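As a concrete version of step 4's "run tests and linters on each change", a tiny check script the AI (or CI) can be pointed at might look like the sketch below; it assumes a Go project with the standard toolchain on the PATH, so adjust the commands to your stack:

```
# Run the linter and tests, exiting non-zero on any failure, so the AI (or CI)
# gets an unambiguous pass/fail signal after each change. Assumes a Go project;
# swap in your own project's commands.
import subprocess
import sys

CHECKS = [
    ["gofmt", "-l", "."],   # prints the files that are not gofmt-clean
    ["go", "vet", "./..."],
    ["go", "test", "./..."],
]

def main() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        output = (result.stdout + result.stderr).strip()
        # gofmt reports problems by printing file names, not via its exit code
        failed = result.returncode != 0 or (cmd[0] == "gofmt" and output)
        if failed:
            print(f"FAIL: {' '.join(cmd)}\n{output}")
            return 1
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```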
// Create a copy of the state
const stateCopy = copyState(state);
For now I'll do it with some examples in my context priming prompt, like:
Do not emit comments. Instead of this:
    # frobnicate a fyzzit
    def frobnicate_fyzzit(self):
        """
        This function frobnicates a fyzzit.
        """
        # Get the fyzzit to frobnicate.
        fyzzit = ...
        ...

Do this:

    def frobnicate_fyzzit(self):
        fyzzit = ...
        ...
Weren't they recently complaining that people thanking LLMs were costing them too much money?
1. start a project with vague README (or take an existing one).
2. create makefile with the "prompt" action that looks something like (I might put it in a script to work around tabs etc):
```
prompt:
	# extend the grep -E pattern with any other file types you care about
	for f in `find . | grep -E '\.(go|ts)$$' | grep -v 'files to ignore'`; do \
		echo "// FILE: $$f"; \
		cat $$f; \
	done | pbcopy
```
3. Run `make prompt` to get a fresh new starting prompt, go to Gemini (AI Studio) and use the prompt:
```
You have the following files. Understand them and we will start building some features.
<Ctrl-v to paste the files copied above>
```
4. It thinks, understands and gives me the "I am ready" line.
5. To build feature X I simply prompt it with:
```
I want to build feature X. Understand it, plan it, and do not regenerate entire files. Just give me unix-style diffs.
```
6. Iterate on what i like and dont (including refactors, etc)
7. Copy patches and apply locally
8. Repeat steps 5 - 7.
10. After about 300-400k tokens generated (say over 20-40 features) I snapshot with the prompt:
```
Great, now is a great time to checkpoint. Generate a SUMMARY.md on a per-folder basis of your understanding of the current state of the project, along with a roadmap of next steps.
```
11. I save/update the SUMMARY.md and go to bed. When I come back I repeat from step 2 - and voila the SUMMARY.md generated before are included too.
I have generated about 20M tokens so far at a cost of 0. For me "copy/pasting" diffs is not a big deal. Getting clean code and having a nice custom workflow is more important. I am still not ready to relinquish control fully to an agent. I just want a really good code search/auto-complete out of the LLM that adheres to *my* interfaces and constraints.
> it’s much more worthwhile to work with a plan composed of discrete tasks that could be explained to a junior level developer
I'm sure this is a more effective way to get more usable results. But I really think anyone in this situation should be taking it as a kind of wake-up call. Mentoring/guiding a junior is work -- there's a significant cost. But it's a cost easily justified -- a lot of people find it intrinsically rewarding, you're training a colleague, etc.. What you're describing here, though, is all of the cost with none of the benefits and you're being the junior developer as well (you must be -- you're doing their work). You're alone, mentoring a chatbot that cannot learn or grow.
> I’m beginning to think the problem runs deeper, and it has to do with the economics of AI assistance.
> When charging by token count, there’s naturally less incentive to optimize for elegant, minimal solutions.
... Or maybe the tool just isn't that good / what you want. There doesn't have to be a conspiracy behind it.
That is to say, I think the main point presented here is very unconvincing. If they could build a tool that could just do what you want in an acceptable manner, they would. People would obviously throw money at that.
It produces verbose, comment-heavy, procedural code because that form reasonably effectively supports the nature of the generator. Procedural code is obviously well-suited to "what comes next?" style append-oriented editing operations. Verbosity eliminates nuance.
OK, one more thing:
> While we wait for AI companies to better align their incentives with our need for elegant code I’ve developed several strategies to counteract verbose code generation
"While we wait" is the most depressing thing I've read in a while. It's just completely at odds with the field itself.
This has been my experience as well. I have to continuously explicitly instruct Claude to be more concise (though that often leads to broken code ...). Gemini is even more verbose.
I'm not sure in the end how much time is saved over simple good auto-completes (for method syntax lookups), other than for rote tasks like "replicate this pattern across X" (and even then it doesn't get it 100% right), and for quick answers to specific questions, usually in frameworks I'm not that well versed in, that I would have searched SO for ("how do I do X in Qt?", "how do I do the equivalent of Y in Linux on Windows"). But even then I have to verify the answer, whereas if it's a highly voted answer on SO I'll know it works (or there will be helpful comments to the contrary under the reply).
Most of the "it can build X app for you automatically" comments I read remind me of "build a Rails app in 5 lines" (back in the day).
> Blaine read that, shook his head, and called Sally. Presently she joined him in his cabin.
> “Yes, I wrote that," she said. "It seems to be true. Every nut and bolt in that probe was designed separately. It's less surprising if you think of the probe as having a religious purpose. But that's not all. You know how redundancy works?"
> “In machines? Two gilkickies to do one job. In case one fails."
> “Well, it seems that the Moties work it both ways."
> “Moties?"
> She shrugged. "We had to call them something. The Mote engineers made two widgets do one job, all right, but the second widget does two other jobs, and some of the supports are also bimetallic thermostats and thermoelectric generators all in one. Rod, I barely understand the words. Modules: human engineers work in modules, don't they?"
> “For a complicated job, of course they do."
> “The Moties don't. It's all one piece, everything working on everything else. Rod, there's a fair chance the Moties are brighter than we are."
- The Mote in God's Eye, Larry Niven and Jerry Pournelle (1974)
[…too bad that today's LLMs are not brighter than we are, at least when it comes to writing correct code…]
Given how prevalent furries seem to be, especially in nerd adjacent culture, I'd say he was ahead of his time.
Ringworld is pretty good; the multiple sequels get kind of out there.
I think a lot about Motie engineering versus human engineering. Could Motie engineering be practical? Is human engineering a fundamentally good idea, or is it just a reflection of our working memory of 7 +/- 2? Biology is Motie-esque, but it's pretty obvious we are nowhere near a technology level that could ever bring a biological system up from scratch.
If Motie engineering is a good idea, it's not a smooth gradient. The Motie-est code I've seen is also the worst. It is definitely not the case that getting a bit more Motie-esque, all else being equal, produces better results. Is there some crossover point where it gets better and maybe passes our modular designs? If AIs do get better than us at coding, and it turns out they do settle on Motie-esque coding, no human will ever be able to penetrate it ever again. We'd have to instruct our AI coders to deliberately cripple themselves to stay comprehensible, and that is... economically a tricky proposition.
After all, anyone can write anything into a novel they want to and make anything work. It's why I've generally stopped reading fiction that is explicitly meant to make ideological or political points to the exclusion of all else; anything can work on a page. Does Motie engineering correspond to anything that could be manifested practically in reality?
Will the AIs be better at modularization than any human? Will they actually manifest the Great OO Promise of vast piles of amazingly well-crafted, re-usable code once they mature? Or will the optimal solution turn out to be bespoke, locally-optimized versions of everything everywhere, and the solution to combining two systems is to do whatever locally-sensible customizations are called for?
(I speak of the final, mature version, however long that may be. Today LLMs are kind of the worst of both worlds. That turns out to be a big step up from "couldn't play in this space at all", so I'm not trying to fashionably slag on AIs here. I'm more saying that the one point we have is not yet enough to draw so much as a line through, let alone an entire multi-dimensional design methodology utility landscape.)
I didn't expect to live to see the answers, but maybe I will.
It's the kind of thing you commonly get if you let an unconstrained optimization process run for long enough. It will generally be better, according to whatever function you're optimizing for. The main disadvantage, apart from being hard to understand or modify the design, is manufacturing and repair (needing to make many different parts), but if you have sufficiently good manufacturing technology (e.g. atomic level printers), then that may be a non-issue. And in software it's already feasible: you can see very small scale versions of this in extremely resource-constrained environments where it's worthwhile really trying to optimize things (see some demoscene entries), but it's pretty rare (some tricks that optimizing compilers pull off are similar, but they are generally very local).