- Driftless sounds like it might be better as a Claude Code skill or hook
- Deploycast is an LLM summarization service
- Triage also seems like it might be more effective inside CC as a skill or hook
In other words all these projects are tooling around LLM API calls.
> What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.
That isn't going to go away. Here's another idea: a discussion tool for audio workflows. Pre-LLMs, the difficult part of something like this was never code generation.
Treat it rhetorically.
There can be no question that the cost coefficients of Ideas vs. Execution have changed with LLMs.
Yes.
> LLMs don't change the equation.
No. They make more things easily replicable.
"We made this and all it took was 500 juniors working for a year" used to be a reasonable business moat of effort. Now it's not.
> easily
reproducible now?
I mean, sometimes the hard work is creating object number 1. There are a crapload of inventions that we look back on and go "why did it take so long for us to make the first one", then after that whatever object/idea it was explodes over the planet because of the ease of implementation and the useful practical application.
I think this statement is marred by our modern sensibilities that say everything must be profitable or it's a bad idea.
There is also the matter of having ideas that are good and knowing how to make them into good software, not something that simply "technically works". LLMs are not enough to overcome this barrier, and the author's examples seem to prove the point. The "working products with test suites, documentation, and polish" that are just another batch of LLM front-ends are frankly unimpressive. Is this the best that AI can offer?
I am sure that whatever work was put into actually trying to implement that was crucial in order to instruct Claude what to do. System design doesn't come by itself.
I am also building some agents. It is almost hands off at this point.
I was just coding a personal website the other day while waiting for our number to be called at the DMV. I couldn’t really review the code but it did give me a chance to test on mobile.
This is without doing anything special, just using one instance of Claude Opus 4.5 and exe.dev.
Ironically a lot of monotonous work that you were forced to do helped you immerse yourself in the problem domain and equipped you for the hard parts. Not just talking about AI btw, in general when people automate away the easy parts, the hard parts will suddenly seem more difficult, because there's no ramp-up.
While I know AI coding is helpful in some ways, the mode of work where you keep getting distracted while the agent works is much less productive than just grinding the problem.
I mean AI also helps you stay in the zone, but this 'casual' approach to work ultimately results in things not getting done, in my personal experience.
Also, sometimes you could be a busy beaver implementing a lot of stuff you don't need. (And I'm a hobbyist programmer now (retired) so it's all stuff I don't really need.)
Also, sometimes being in the zone results in tunnel vision, and taking breaks results in new perspective.
So I think this is an area where "it all depends" is the best summary.
It's way past the point of "just" doing MVPs or simple proof of concepts. I'm talking about user auth, dynamic input parsing, calendar views, tags, projects, history of events and more, given a few prompts.
Good being a difficult term to define, but most if not all of us here know what I mean.
Nothing replaces making simple UX instead of complicated kitchen sink products.
It’s easy to make stuff. It’s harder to make stuff people want.
I am thankful for the increase in product velocity and I also recognize that a lot of stuff people make isn’t what people want.
Product sense and intuition still matter.
> This isn’t about one person copying one idea. It’s about the fundamental economics of software changing.
That "this isn't x, it's y" really is a strong tell.
Which means people either can't tell, or don't mind.
How about “Not-Just Abuse”?
Not-Just Abuse (informal, pejorative)
Definition: The practice of knowingly deploying the “not just X, but Y” construction—typically via a mode-collapsed LLM—to simulate insight, inflate banality into profundity, and efficiently convert reader attention into nothing.
AFAIK that's the style of ChatGPT specifically. I haven't noticed that particular turn of phrase turn up in Gemini output, for example. Even if using GPT, via the OpenAI playground you can easily control the system prompt and adjust the style and tone to your taste.
So if you see the default ChatGPT style, that's not "just" AI slop, it's low effort AI slop.
What annoys me personally is that both ChatGPT and Gemini like to output bullet point lists with the first key phrase highlighted in bold for each item. I do that! I've been doing that for years! Now many of my customers will likely start assuming my writing is mere AI slop.
I've become tempted to leave typos in my writing on purpose as a shibboleth indicating its human origins.
They f*cked it up. I am convinced ChatGPT will be a classic case of an early prodigy which gets surpassed by the better, second generation products. History is full of those. I think Tesla is another, recent one.
It'll take a bit of time to show up in the numbers overall, but within my reach I see the numbers changing.
Even with GPT models, it's only with 5 that instruction following has become strong enough that your instructions can override this tendency. During the whole 4 (and o) series, it wasn't something you could just override through a system prompt.
Yep, at times I dictate my thoughts with VoiceInk and have an LLM act as an editor on P2 tasks so I can publish instead of having another unfinished idea that never sees the light of day.
If you want pre-LLM samples, go ahead and scroll back or check my history—but I've got two kids to take care of and appreciate the publish assist :)
People still argue that distribution is the real bottleneck now. But when the product itself is trivial to build and change, the old dynamics break down. Historically, sales was hard because you had to design and refine a sales motion around a product that evolved slowly and carried real technical risk. You couldn’t afford to pour resources into distribution before the product stabilized, because getting it wrong was expensive.
That constraint is gone. The assumptions and equations we relied on to understand SaaS no longer apply—and the industry hasn’t fully internalized what that means yet.
My decades of experience suggest that the opposite will happen. People will realize that the software industry is 100% moat and 0% castle.
People will build great software that nobody will use while a few companies will continue to dominate with vaporware.
Except for the token cost maybe.
Company: 6 month change:
============================
Palantir: +20%
Salesforce: -9%
Shopify: +40%
Intuit: -26%
ServiceNow: -30%
Adobe: -17%
CrowdStrike: -1.5%
Snowflake: -3%
Cloudflare: 0%
Autodesk: -9%
The above companies are the ten largest from this list:

If a superior product launches without marketing, it will not get a single customer; it's incredibly hard to spread through word of mouth these days... With paid advertising, you end up paying Facebook inflated CPC to get bot traffic!
That makes no sense. "Dominate" implies people use or buy your software. If you produce nothing ("vaporware"), how can you dominate?
They're vaporware in relation to what else is available.
My bet: front end devs who need mocks to build something that looks nice get crowded out by UX designers with taste as code generation moves further into "good enough" territory.
Then those designers get crowded out as taste generation moves into "good enough" territory.
But nonfunctional requirements such as reliability, performance, and security are still extremely hard to get right, because they require not just code but many correct organizational decisions.
As customers connect these nonfunctional requirements with a brand, I don't see how big SaaS players will have a problem.
For new brands, it's as hard as ever to establish trust. Maybe coding is a bit faster thanks to AI, but I'm not yet convinced that vibe coders are the people on top of whom you can build a resilient organization that achieves excellence in nonfunctional requirements.
Brand means almost nothing when a competitor can price the software at 90% cheaper. Which is what we are going to see
Even on a technical level the interfaces with country-specific legacy software used all over the place are so badly documented the AI won't help you to shortcut these kind of integrations. There are not 10k stackoverflow posts about each piece of niche software to train from.
while your statement is true - this is actually a very minor reason why sales is hard.
I think developers who have an inclination towards UI/UX and a good grip on the technical side are particularly well positioned right now.
> provides none
I'm pro LLM/AI, but most of the hype is just pure vibes. There's no evidence, only anecdotes.
All the hype-men I follow either have a stake in it (they work for an LLM provider or have an AI startup) or post billions of examples and zero revenue.
While I wouldn't say execution is necessarily "cheap" for everything, ChatGPT and Gemini recently helped me build a little Spotify playlist generator [1] that scans my top 100 artists from the last 12 months, then generates a playlist from the bottom 50% of their songs by popularity, with an option for 1 or 2 songs per artist.
Sadly the Spotify API limits will never allow me to offer it to more than 25 people at a time, but I get so bored of their algorithm playing me the same top songs from artists that it's a fun way for me to explore "lesser lights", and something I'd absolutely never have been able to build before, let alone spin up in a couple of evenings.
It's quite liberating as a non-dev suddenly having these new tools available that's for sure.
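The selection logic described above could be sketched roughly like this (a hedged illustration: `pick_deep_cuts` and the track-dict shape with `name`/`popularity` keys are my own invention, and the actual Spotify API calls for fetching top artists and their tracks are omitted):

```python
# Sketch of the "deep cuts" selection described above.
# Assumes tracks_by_artist was already fetched from the Spotify API,
# mapping each artist name to a list of {"name": ..., "popularity": ...} dicts.

def pick_deep_cuts(tracks_by_artist, per_artist=2):
    """For each artist, keep the bottom 50% of tracks by popularity,
    then pick up to `per_artist` of the least popular ones."""
    playlist = []
    for artist, tracks in tracks_by_artist.items():
        ranked = sorted(tracks, key=lambda t: t["popularity"])
        bottom_half = ranked[: max(1, len(ranked) // 2)]
        playlist.extend(t["name"] for t in bottom_half[:per_artist])
    return playlist
```

The real version would still have to page through the top-artists endpoint (it caps out per request) and look up popularity per track, but the core filter is only a few lines.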
> Stack Overflow, the site that defined a generation of software development, received 3,710 questions last month. That’s barely above the 3,749 it got in its first month of existence. The entire knowledge-sharing infrastructure we built our careers on is collapsing because people don’t need to ask anymore.
"Because people don't need to ask anymore."?!
Yeah, I wouldn't call it exaggerating, I think I would call it a fundamental misunderstanding.
I wanted to comment on the code examples he shared. But they're all closed source. Which is quite a choice, given the premise of the whole article, err I mean ad, that implementations are free these days.
It's just that they're asking wherever they expect they'll reach a better answer faster than on SO.
There's a hilarious thread on Twitter where someone "built a browser" using an LLM feedback loop, and it just pasted together a bunch of Servo components, some random other libraries, and tens of thousands of lines of spaghetti glue to make something that can render a webpage in a few seconds to a minute.
This will eventually get better once they learn how to _actually_ think and reason like us - and I don't believe by any means that they do - but I still think that's a few years out. We're still at what is clearly a strongly-directed random search stage.
The industry is going through a mass psychosis event right now thinking that things are ready for AI loops to just write everything, when the only real way for them to accomplish anything is by just burning tokens over and over until they finally stumble across something that works.
I'm not arguing that it won't ever happen. I think the true endgame of this work is that we'll have personal agents that just do stuff for us, and the vast majority of the value of the entire software industry will collapse as we all return to writing code as a fun little hobby, like those folks who spend hours making bespoke furniture. I, for one, look forward to this.
I just hope that we retain some version of autonomy and privacy, because no one wants the tech giants listening in on every single word you utter just because your agent heard it. No one wants it, but only some, not many, care.
Agents deployed locally should be the goal.
But for code where the hard part isn't making separately designed things work together but getting the actual algorithm right, that's where I find LLMs still really fail. Finding the trick to take your approach from quadratic to N log N, or even just understanding what you mean after you've found the trick yourself. I've had little luck there with LLMs.
I think this is mostly great, because it's the hard stuff that I have always found fun. Properly architecting these CRUD apps, and learning which of the infinite ways to do so are better, was fun as a matter of craftsmanship. But that hits at a different level from implementing a cool new algorithm.
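To make the quadratic-to-N-log-N trick concrete with a toy example (my own illustration, not from the thread): counting inversions in a list. The obvious double loop is quadratic; piggybacking the count on merge sort gets it to O(n log n).

```python
def inversions_naive(a):
    # O(n^2): check every pair i < j for a[i] > a[j].
    return sum(
        1
        for i in range(len(a))
        for j in range(i + 1, len(a))
        if a[i] > a[j]
    )

def inversions_fast(a):
    # O(n log n): count inversions while merge-sorting.
    def sort(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, l_inv = sort(xs[:mid])
        right, r_inv = sort(xs[mid:])
        merged, i, j, inv = [], 0, 0, l_inv + r_inv
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                # right[j] jumps ahead of everything remaining in `left`,
                # so each of those remaining elements forms an inversion.
                inv += len(left) - i
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, inv
    return sort(list(a))[1]
```

The "trick" is the single `inv += len(left) - i` line: the sorted halves let one comparison account for a whole batch of pairs at once. In my experience this is exactly the kind of insight LLMs struggle to find unprompted.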
Great ideas are rare.
I guarantee that making code cheaper and faster to produce will not change the world. The ideas are what change the world. Ironically, speeding up code production will make people worse at thinking, and therefore the great ideas will be even harder to come by.
"AI startups say the promise of turning dazzling models into useful products is harder than anyone expected":
https://www.wired.com/story/artificial-intelligence-startups...
This is not new. There is tech that enables new possibilities, but it's not a f---ing magic wand.
We already know the hard part of software engineering is designing and implementing code that is maintainable.
Can LLMs reliably create software and maintain it transparently without introducing regressions? How do people with no knowledge of software guide LLMs to build a quality test suite to prevent regressions?
Or is the expectation that every new major release is effectively a rewrite from scratch? Don't they have to maintain consistency with the UI, database, and other existing artifacts?
LLMs make it a lot easier to build MVPs, but the hard work of VALIDATING problems and their solutions, which IMO was always >80% of the work for a successful founder, is harder than ever. With AI we now get 100 almost-useful solutions for every real problem.
Writing a formbuilder and saying you've replicated Typeform is like finishing a todo app and saying you've replicated Jira. Yes, in a way I guess...but there is way more to the product and that's usually where the hard parts are.
Especially since some folks keep claiming that one just needs to get better at prompting and describe a detailed spec.
Wanna know what a detailed spec is called? An unambiguous one? It's called code.
LLMs still feel like a very round-about way of re-inventing code. But instead of just a new language, it's a language that nondeterministically creates "code" or a resemblance thereof.
And I am aware that this is currently not a popular opinion on HN, so keep the downvotes coming.
If you use LLMs outside the popular GitHub languages, they will fail hard on you. It's glorified text completion, that's what it is.
Have they iterated on user feedback? Have they fixed obscure issues? Made any major changes after the initial version?
More importantly, can the author claim with a straight face that they no longer need to read or understand the code that has been produced?
Just another one of those “look, I built a greenfield pet project over the weekend, software engineering is dead.”
That's the execution part of creating a successful business and it's still entirely missing.
A lot of the cost of mature SaaS products comes from security, scaling, expensive sales teams, etc. For me, if I have something sandboxed, not available to the public, and only powerful enough to serve _me_ as a customer, then I don't need to pay those extra costs and I can build something a lot simpler, while still keeping the core feature that I need.
This has never been true.
To believe this, you would have had to miss the number of functioning apps and games on all the app stores that no one cares about, to just give one example. Or all the excellent but abandoned open source projects.
Rewards follow an exponential distribution ("power law ..."). Ideas and execution are important ingredients. Furthermore, they are not easily separable.
Frankly, I'm convinced very few can actually succeed at this game; the mix of characteristics that defines your personality is a greater determinant. This acceleration in writing code is noise.
I'd split execution mainly in two:
- Building
- Selling
Building, barring some possibly extremely hard technical challenges, is not as hard as people make it out to be, and LLMs are definitely a huge help, but I would never call vibecoding an entire application "building a production-ready SaaS".
Selling or, to put it better Product/Market fit, is what's actually very very hard and where most ideas fail.
It’s very easy to vibe code a happy path. Getting it to do the same robustly is a step change in difficulty.
I'm not a programmer by profession and definitely notice that. Some concurrency gotcha or whatever that I didn't know to look out for and that the LLM doesn't organically cover, etc.