"Hey I need a quick UI for a storefront", can be done with voice. I got pretty far with just doing this, but given my experience I don't feel fully comfortable in building the mech-suit yet because I still want to do things by hand. Think about how wonky you would feel inside of a Mech, trying to acclimate your mind to the reality that your hand movements are in unity with the mech's arm movements. Going to need a leap of faith here to trust the Mech. We've already started attacking the future by mocking it as "vibe coding". Calling it a "Mech" is so much more inspiring, and probably the truth. If I say it, I should see it. Complete instant feedback, like pen to paper.
The term ‘vibe coding’ was coined by Andrej Karpathy, an OpenAI co-founder.
I feel like LLMs are just the next step on the Jobs analogy of "computers are bicycles for the mind" [0]. And if these tools are powerful bicycles available to everyone, what happens competitively? It reminds me of a Substack post I read recently:
> If everyone has AI, then competitively no one has AI, because that means you are what drives the differences. What happens if you and LeBron start juicing? Do you both get as strong? Can you inject your way to Steph’s jumpshot? What’s the differentiator? This answer is inescapable in any contested domain. The unconventionally gifted will always be ascendant, and any device that’s available to everyone manifests in pronounced power laws in their favor. The strong get stronger. The fast get faster. Disproportionately so. [1]
[0] https://youtu.be/ob_GX50Za6c?t=25
[1] https://thedosagemakesitso.substack.com/p/trashbags-of-facts...
> I've been thinking about generative AI tools as "bicycles for the mind" (to borrow an old Steve Jobs line), but I think "electric bicycles for the mind" might be more appropriate.
> They can accelerate your natural abilities, you have to learn how to use them, they can give you a significant boost that some people might feel is a bit of a cheat, and they're also quite dangerous if you're not careful with them!
Technology is part of humanity. Just as a hammer extends the hand, so too does the LLM extend the mind.
As an old geezer, I appreciate very much how LLMs enable me to skip the steep part of the learning curve you have to scale to get into any unfamiliar language or framework. For instance, LLMs enabled me to get up to speed on using Pandas for data analysis. Pandas is very tough to get used to unless you emerged from the primordial swamp of data science along with it.
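To give a hypothetical flavor of what I mean (a toy sketch, not my actual analysis): Pandas idioms like boolean masking and groupby/aggregate read nothing like plain imperative Python, and having an LLM explain them against your own data is exactly where it shines.

    import pandas as pd

    # Toy sales data, just to show the idioms that trip up newcomers.
    df = pd.DataFrame({
        "region": ["north", "south", "north", "south"],
        "units":  [10, 3, 7, 12],
        "price":  [2.5, 4.0, 2.5, 4.0],
    })

    # Boolean masking instead of an explicit loop:
    big_orders = df[df["units"] > 5]

    # Vectorized column math plus groupby/aggregate, the bread and butter of Pandas:
    df["revenue"] = df["units"] * df["price"]
    revenue_by_region = df.groupby("region")["revenue"].sum()
    print(big_orders)
    print(revenue_by_region)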
So much of programming is just learning a new API or framework. LLMs absolutely excel at helping you understand how to apply concept X to framework Y. And this is what makes them useful.
Each new LLM release makes things substantially better, which makes me substantially more productive, unearthing software engineering talent that was long ago buried in the accumulating dust pile of language and framework changes. To new devs, I highly encourage focusing on the big picture software engineering skills. Learn how to think about problems and what a good solution looks like. And use the LLM to help you achieve that focus.
Once you're good at it in general, that is. I recently witnessed what happens when a junior developer just uses AI for everything, and I found it worse than if a non-developer used AI: at least the non-developer wouldn't confuse the model with half-understood ideas, wouldn't think they could "just write some glue code", break things in the process, and then confidently state they solved the problem by sprinkling in some jargon they've picked up.
It feels more like an excavator: useful in the right hands, dangerous in the wrong hands. (I'd say excavators are super useful and extremely dangerous, I think AI is not as extreme in either direction)
Most people were not good at Google back then. Most tech people were not, even.
Using LLMs feels a ton like working with Google back then, to me. I would therefore expect most people to be pretty bad at it.
(being "good at Google" didn't stop being possible because Google Search improved and made everyone good at it, incidentally; it's because they tuned it to make being "bad at Google" somewhat better, and in the process eliminated much of the behavior that made it possible to be "good at Google")
This deeply resonates with me every time I stare at pandas code seeking to understand it.
It is truly amazing what a superpower these LLM tools are for me. This particular moment in time feels like a perfect fit for my knowledge level. I am building as many MVP ideas as quickly as I can. Hopefully, one of them sticks with users.
No, it's not perfect and I imagine there are some large warts as a result, but it was much, much better than following a bog-standard tutorial on YouTube to get something running, and I'm always able to go refactor my scripts later now that I'm past the initial scaffolding and setup.
Ever since, this has been my favorite use case, to cut through the accidental complexity when learning a new implementation of a familiar thing. This not only speeds up my learning process and projects using the new tool, it also gives me a lot more confidence in taking on projects with unfamiliar tools. This is extremely valuable.
This has me wondering if AI could increase the adoption of provably correct code. Dependent types have a reputation for being hard to work with, but with AI help they seem like they could be a lot more tractable. Moreover, it'd be beneficial in the other direction too: the more constraints you can build into the type system of your domain model, the harder it will be for an AI to hallucinate something that breaks it. Anything that doesn't satisfy the constraints will fail to compile.
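As a rough sketch of the weaker, everyday version of this idea (invented names, Python with a mypy-style checker rather than real dependent types): even thin NewType wrappers mean that generated code which bypasses the validated constructors simply won't type-check.

    from typing import NewType

    # Invented domain types, purely for illustration.
    UserId = NewType("UserId", int)
    Email = NewType("Email", str)

    def make_email(raw: str) -> Email:
        # The only sanctioned way to obtain an Email; validation lives here.
        if "@" not in raw:
            raise ValueError(f"not an email address: {raw!r}")
        return Email(raw)

    def send_welcome(user: UserId, address: Email) -> None:
        print(f"sending welcome mail to user {user} at {address}")

    send_welcome(UserId(42), make_email("a@example.com"))   # fine
    # send_welcome(42, "oops")  # a checker like mypy rejects this call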
I doubt it, but wishful thinking.
If I'm not 100% sure something will work, then I'll still just code it. If it doesn't work, I can throw it away and update my mental model and set out on a new typing adventure.
If you're trying to LLM your way to a new social site you're going to need to know what entities make up that site and the relationships they have ahead of time. If you have no concept of an idea then of course the LLM will be "correct" because there were no requirements!
Software design is important today and will be even more important in the future. Many companies do not require design docs for changes and I think it is a misstep. Software design is a skill that needs to be maintained.
Another way to think about it is SWE agents. About a year ago Devin was billed as a dev replacement, with the now-common reaction that it's over for SWEs and it's no longer useful to learn software engineering.
A year later there have been large numbers of layoffs that impacted software devs. There have also been a lot of fluff statements attributing the layoffs to increased efficiency as a result of AI adoption. But is there a link? I have my doubts and think it's more related to interest rates and the business cycle.
I've also yet to see any AI solutions that negate the need for developers. Only promises from CEOs and investors. However, I have seen how powerful it can be in the hands of people that know how to leverage it.
I guess time will tell. In my experience the current trajectory is LLMs making tasks easier and more efficient for people.
And hypefluencers, investors, CEOs, and others will continue promising that just around the corner is a future in which human software developers are obsolete.
they were just playing to this market reaction
layoffs = bad
layoffs because of AI = good
So I would say there are three categories of programmers:
1. Programmers that just want to prompt, using AI agents to write the code.
2. Programmers, like me, that use LLMs as tools: writing code by hand, letting the LLM write some code too, inspecting it, incorporating what makes sense, and using the LLM to explore the frontier of programming and math topics relevant to the task at hand, all in order to write better code.
3. Programmers that refuse to use AI.
I believe that today category "2" is what has a real advantage over the other two.
If you are interested in this perspective, a longer form of this comment is contained in a video on my YouTube channel. Enable the English subtitles if you can't understand Italian.
The future is coming, but you still need fundamentals to make sure the generated code has been properly set up for growth. That means you need to know what you expect your codebase to look like before or during your prompting, so you can promote the right design patterns and steer the generation towards the proper architecture.
So software design is not going away. Or it shouldn't for software that expects to grow.
I feel reassured to see that I'm not the only one who feels this way. With all the talk about in-IDE direct code editing, I was thinking that I was being somewhat of a luddite who feels like the chat form is the best balance between getting help from the AI and understanding/deciding how things are actually structured/working.
Usually when I am in the flow of writing code, I can think, write, tab away and review without breaking that flow. If I need a smallish (up to 100-ish lines) piece of code that I know the shape of, I use the chat to generate it and merge it back after review.
Letting the agent rip always has led to more pain and suffering down the line :(
I can see the usefulness of agents however for (a) some tedious refactorings where the IDE features might not reach and (b) occasionally writing a first pass of a low-value module when I am low on energy.
For the rest of stuff I feel very happy with copy-paste.
Will it just be these functional cores that are the product, with users using an LLM to mediate all interaction with them? The most complex stuff, the actual product, will be written by those skilled in mech suits; but what will it look like when it's written for a world where everyone else has a mech suit (albeit a less capable one) on too?
Think of your mother running a headless Linux install with an LLM layer on top, and it being the least frustrating and most enjoyable computing experience she has ever had. I'm sure some are already thinking like this, and really it represents a massive paradigm shift in how software is written on the whole (and will, ironically, resemble the early days of programming).
Like a toy policeman costume so you can pretend you have authority and you know what you're doing.
My personal opinion is that now experience matters a lot more.
A lot of the time, the subtle mistakes an LLM makes or the wrong directions it takes can only be corrected by experience. LLMs also don't tend to question their own past decisions, and will stick with them unless explicitly told otherwise.
This means LLM-based projects accumulate subtle bugs unless there is a human in the loop who can rip them out, and once a project has accumulated enough subtle bugs it generally becomes unrecoverable spaghetti.
Dangerous as well is that LLMs won't question your own decisions either (unless aggressively prompted to), in contrast to something like a mentor, who would help you discover a better way if there is one.
The end game is outsourcing: instead of teammates doing the actual programming from the other side of the planet, it will be done from inside the computer.
Sure, the LLMs and agents are rather limited today, just as optimizing compilers were still a distant dream in the 1960s.
That's not to say the output is correct, there are usually bugs and unnecessary stuff if the logic generated isn't trivial, but reading it isn't the biggest hurdle.
I think you are referring to the situation where people just don't read the code generated at all.. in that case it's not really LLM's fault.
Even if this were true, which I strongly disagree with, it actually doesn't matter if the code is easier to understand
> I think you are referring to the situation where people just don't read the code generated at all.. in that case it's not really LLM's fault
It may not be the LLM's "fault", but the LLM has enabled this behavior and therefore the LLM is the root cause of the problem
Reality: a saddle on the developer's back.
They really want a faster horse.
This is my experience as well. You have to know what you want, how to interfere if things go in the wrong direction, and what to do with the result as well.
What I did years ago with a team of 3-5 developers I can now do alone using Claude Code or Cursor. But I need to write a PRD, break it down into features, epics and user stories, let the LLM write code, and review the results. Vibe coding tools feel like half a dozen junior to mid-level developers for a fraction of the cost.
I'm curious about the fundamental reason why LLMs and their agents struggle with executive function over time.
On Limitations of the Transformer Architecture https://arxiv.org/abs/2402.08164
Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory https://arxiv.org/abs/2405.16674
TL;DR transformers are inherently limited with tasks requiring composition of sequential steps
Can we stop saying this? It hasn't been true for more than 15 years.
skydhash•5h ago
> Why am I doing this? Understanding the business problem and value
> What do I need to do? Designing the solution conceptually
> How am I going to do it? Actually writing the code
> For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero.
That last part is actually the easiest, and if you're spending an inordinate amount of time there, that usually means the first two were not done well or you're not familiar with the tooling (language, library, IDE, test runner, ...).
There's some drudgery involved in manual code editing (renaming variables, extracting functions, ...), but those tasks are already solved in many languages by IDEs and indexers that automate them. And so many editors have programmable snippet support. I can genuinely say that in all of my programming projects, I spent more time understanding the problem than writing code. I even spent more time reading library code than writing my own.
The few roadblocks I hit when writing code were solved by configuring my editor.
jstanley•4h ago
It's possible that people's experiences are different to yours because you work on a specific type of software and other people work on other specific types of software.
vlovich123•4h ago
At many big tech companies I've worked at, an abstract design proposal precedes any actual coding for many tasks. These design proposals are not about how you lay out code or name variables, but a high-level description of the problem and the general approach to solving it.
Expressing that abstract thinking requires writing code, but that's the "how": you can write that same code many ways.
codr7•4h ago
Which points at a pretty substantial limitation of LLM coding...
skydhash•4h ago
Correctness is not embedded in software. It's embedded in the real world.
vlovich123•3h ago
I don’t think there’s anything where the first step is writing code. It’s like saying the first step of solving a math problem is writing down equations.
skydhash•4h ago
The whole argument behind TDD is that it's easier to write code that verifies something than to actually implement it, because the test only needs the answer, not the algorithm that produces it.
So for any code you will be writing, find the answers first (expected behavior). Then add tests. Then you write the code for the algorithm that comes up with the answer.
Static typing is just another form of the same thing. You tell the checker: this is the shape of this data, and it warns you about any code that does not respect that.
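A minimal sketch of that ordering, with an invented `median` function: the tests pin down the known answers first, the type hints declare the shape of the data, and only then is the algorithm written.

    # 1. The answers first, as tests (runnable with pytest).
    def test_median_odd_length():
        assert median([3.0, 1.0, 2.0]) == 2.0

    def test_median_even_length():
        assert median([1.0, 2.0, 3.0, 4.0]) == 2.5

    # 2. The shape of the data, as types; 3. only then the algorithm.
    def median(xs: list[float]) -> float:
        s = sorted(xs)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2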
carlmr•4h ago
Not OP, but I find this a very good question. I've always found that playing with the problem in code is how I refine my understanding of the problem. Kind of like how Richard Feynman described his problem solving: only by tinkering with a hard problem do you really learn about it.
I always found it strange when people said they would plan out the whole thing in great detail and code later. That never worked for me, and I've also rarely seen it work for those proposing it.
It may be because I studied control systems, but I've always found you need the feedback from actually working with the problem to course correct, and it's faster, too. Don't be scared to touch some code. Play with it, find out where your mental model is deficient, find better abstractions than what you originally envisioned before wrestling with the actual problems.
Espressosaurus•4h ago
Not everything has been solved for ten thousand years.
corytheboyd•4h ago
As is usually the case in practice, it's a mess, which is why I've seen combinations of upfront planning and code spiking work best.
An upfront plan ensures you can at least talk about it with words, and maybe you’ll find obvious flaws or great insights when you share your plan with others. Please, for the love of god, don’t ruin it with word vomit. Don’t clutter it with long descriptions of what a load balancer is. Get to the point. Be honest about weaknesses, defend strengths.
Because enterprise corporate code is a minefield of trash, you just have to suck it up and go figure out where the mines are. I’ve heard so many complaints “but this isn’t right! It’s bad code! How am I supposed to design around BAD code!” I’ll tell you how, you find the bad parts, and deal with them like a professional. It’s annoying and slow and awful, but it needs doing, and you know it.
By not doing the planning, you run the risk of building a whole thing, only to be told “well, this is nice, but you could have just done X in half the time.” By not doing the coding, you risk blowing up your timeline on some obvious unknown that could have been found in five minutes.
skydhash•4h ago
Sometimes you don't have a way to get the exact answers, so you run experiments to get data. But just like experiments in a scientific lab, they should be rigorous, with all assumptions noted down.
And sometimes, there are easy answers, so you can get these modules out of the way first.
And in other cases, maybe a rough solution is better than not having anything at all. So you implement something that solves a part of the problem while you're working on the tougher parts.
Writing code without answers is brute-forcing the solution. But novel problems are rare, so with a bit of research, it's quite easy to find answers.
hnthrow90348765•3h ago
POCs are better for customer-facing, product-management-driven work, because those stakeholders can be bad at describing what they want. There's more risk of building the wrong thing.
POCs can be okay for system design or back-end work (or really anything not involving vague asks), but chances are planning and deeper thinking will help you more there because the problems you solve tend to be less subjective. Less risk of building the wrong thing.
skydhash•3h ago
Kinda like scaling. Instead of going for Kubernetes, use a few VPS and a managed database to get your first customers.
vlovich123•4h ago
It's a useful tool that can accelerate certain tasks, but it has a lot of sharp edges.
bluefirebrand•4h ago
I do quite a bit of this and even here LLMs seem extremely hit and miss, leaning towards the miss side more often than not
ijk•57m ago
LLMs _can_ do consistency; they're pretty good at continuing a pattern... if they can see it. Which can be hard if it's scattered around the codebase.
bluefirebrand•47m ago
This describes any codebase in any programming language
This is why "programming patterns" exist as a concept
The fact that LLMs are bad at this is a pretty big mark against them
namaria•46m ago
They won't even consistently provide the same answer to the same input. Occasional consistency is inconsistency.
snovv_crash•3h ago
Yeah, now get the LLM to write C++ for a public facing service, what could possibly go wrong?
hiAndrewQuinn•4h ago
If it were, the median e.g. business analyst should be getting paid significantly more than the median software engineer. That's not what the data shows, however.
>I can genuinely say in all of my programming projects, I spent more time understanding the problem than writing code.
This is almost trivially true for anyone who understands the problem via writing code, though.
jimbokun•4h ago
What if you replace "business analyst" with "software architect"?
Snuggly73•2h ago
Writing the code is the trivial part.
Aperocky•3h ago
The business analyst mostly just scratches the top half of the first part.
But I do encourage them to go vibe coding! It's providing a lot of entertainment. And on the off chance they become one of us, they would be most welcome.
hiAndrewQuinn•1h ago
My real point is that claiming #3 is the easiest is just silly. It's obviously much easier to come up with good business ideas in the abstract than to bring them into being. The mixture works because software as a business is an O-ring problem. These three tasks are not cleanly separable; they're all part of a feedback loop together.
Snuggly73•3h ago
By the end of the day (10-ish hours) all I had to show was about 3 screens with a few buttons each… something a normal React developer probably would've spat out in about an hour. On top of that, I can't remember shit about the application itself, and I can practically recite most of the codebases that I've spent time on.
And here I read about people casually generating and erasing 20k lines of code. I dunno, I guess I am either holding it wrong, or most of the time developing software isn’t spent vomiting code.
Graphon1•1h ago
I've also been writing code for a long time, did the 6502 assembly thing way back when, and lots since then. For this current project I wanted to build a web app with a frontend in Angular and a backend in Java 21 relying on javalin.io for the services layer. It had a few other integrations as well - into a remote service requiring OAuth and also into subtlecrypto. After less than 10 hours I had a fully functioning MVP that was far superior to anything I could have created without an assistant. It gave me build files, even a test skeleton. Restyling the UI or reflowing the UX to include confirmations, additional steps, modals, ... was really easy. I just had to type it, and those changes would get made. It felt like I was "director of development" for a day.
I used Aider, plugged into Gemini 2.5.
Snuggly73•1h ago
Was it a productivity boost for me? Yeah, because I know mostly shit about React. But as an end result it just felt very underwhelming, and discussing it today with my brother (who lives and breathes FE), apparently it was.
I guess I was just expecting... I dunno... more - people are claiming nX productivity boosts, and considering how the UI is mostly boilerplate...
Snuggly73•20m ago
I think I was expecting that it would turn me into an FE developer, and that it would feel as natural and smooth as when I am in my element.
It didn’t. And the results weren’t what you would get from a real FE dev. And it felt unsatisfactory, stressful and ultimately hollow.
I guess _for me_ it would be fine for a throw away MVP - something that I don’t want to put my heart into.
jandrese•4h ago
For me the most important part of a project is working out the data structures and how they are accessed. That's where the rubber meets the road, and is something that AI struggles with. It requires a bit too high a level of abstract thinking and whole problem conceptualization for existing LLMs. Once the data structures are set the coding is easy.
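Purely as an illustration (a made-up comment-thread model, not from any real project): once the shapes and access paths are pinned down, the remaining code is mostly mechanical.

    from dataclasses import dataclass, field

    # Decide the shapes and access patterns first.
    @dataclass
    class Comment:
        author: str
        body: str
        replies: list["Comment"] = field(default_factory=list)

    @dataclass
    class Thread:
        title: str
        roots: list[Comment] = field(default_factory=list)

        def walk(self) -> list[Comment]:
            # Depth-first traversal; trivial once the structures above are settled.
            out: list[Comment] = []
            stack = list(reversed(self.roots))
            while stack:
                c = stack.pop()
                out.append(c)
                stack.extend(reversed(c.replies))
            return out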
exe34•4h ago
the balance only shifts with a language/framework I'm not familiar with.
lherron•3h ago
For me, I give Gemini the full context of my repo, tell it the sweeping changes I want to make, and let it do the zero to one planning step. Then I modify (mostly prune) the output and let Cursor get to work.
bluefirebrand•1h ago
And if you don't speak the language, please spare us your LLM-generated vibe-coding nonsense.
palmotea•1h ago
The result ain't going to be what you'd get from a focused group of 10x geniuses working on everything, but I think a lot of the aspects of "enterprise development" that people complain about are simply the result of making the best of a bad situation.
I like Java, because I've worked with people who will fuck up repeatedly without static type checking.
Tade0•32m ago
Meanwhile no two React projects are the same because they typically have several dependencies, each solving a small part of the problem at hand.
nyrikki•3h ago
I am using a modified form of TDD's red/green/refactor cycle, specifically with an LLM interface independent of my IDE.
While I err on the side of good code over prompt engineering, I use the need to submit something as a push to refine both the ADT and the domain tests; after creating a draft of those, I submit them to the local LLM and continue on with my own code.
If I finish first, I quickly review the output to see if it produced simpler code or if my domain tests or ADT are problematic. For me this avoids rat holes and head-of-line blocking.
If the LLM finishes first, I approach its output as a codebase needing a full refactor, which keeps me engaged with the code.
The produced code is rarely 'production ready', and it often struggles most when I haven't done my own job well.
You get some of the benefits of pair programming without the risk of demoralizing some poor Jr.
But yes, tradeoff analysis and choosing the least worst option is the part that LLM/LRMs will never be able to do IMHO.
Horses for courses, and nuance; treat "best practices" as nothing more than reasonable defaults to adjust for real-world needs.
xyzzy123•1h ago
I don't always find this, because there's a lot of "inside baseball" and accidental complexity in modern frameworks and languages. AI assist has been very helpful for me.
I'm fairly polyglot and do maintenance on a lot of codebases. I'm comfortable with several languages and have been programming for 20 years but drop me in say, a Java Spring codebase and I can get the job done but I'm slow. Similarly, I'm fast with TypeScript/CDK or Terraform but slow with cfndsl because I skipped learning Ruby because I already knew Python. I know Javascript and the DOM and the principles of React but mostly I'm backend. So it hurts to dive into a React project X versions behind current and try to freshen it up because in practice you need reasonably deep knowledge of not just version X of these projects but also an understanding of how they have evolved over time.
So I'm often in a situation where I know exactly what I want to do, but I don't know the idiomatic way to do it in a particular language or framework. I find for Java in particular there is enormous surface area and lots of baggage that has accumulated over the years which experienced Java devs know but I don't, e.g. all the gotchas when you upgrade from Spring 2.x to 3.x, or what versions of ByteBuddy work with blah maven plugin, etc.
I used to often experience something like a 2x or 3x hit vs a specialised dev but with AI I am delivering close to parity for routine work. For complex stuff I would still try to pair with an expert.
palmotea•1h ago
Or they're the kind of people who rushed to step 3 too fast, substantially skipping steps 1 and/or 2 (more often step 2). I've worked with a lot of people like that.
alooPotato•4h ago
So anything that lets you iterate the loop faster is good. The analogy is kind of like making your compile and tests faster: it's way easier to code, because you don't just code and test at the very end, you do it as part of a thinking loop.
skydhash•3h ago
Writing code to find the specs is brute-forcing the solution, which is only useful when there's no answer or data available (kinda rare in most domains). Taking some time to plan and do research can resolve a lot of inconsistency in your starting design. And if you've already written the code, you'll have to refactor it even if the program is correct, because it will be a pain to maintain.
In painting, even sketching is a lot of work. Which is why artists will collect references instead, mentally select the part they will extract. Once you start sketching, the end goal is always a final painting, even if you stop and redo midway. Actual prototyping is called a study and it's a separate activity.
bufferoverflow•3h ago
For my current job coding is 90% of my time. The rest is meetings, deployments, ticket management. Most of the time coding isn't particularly hard, but it sure consumes lots of time. I've had many days with 1000+ line diffs.
hdjjhhvvhga•2h ago
I'm not sure if you're familiar with modern JS frameworks.
apwell23•1h ago
Not sure if anyone knows: how good would the generated code for a BigQuery-SQL-to-Scala parser be? Can I use it without having to dig into the generated code?