As one who clearly sees the huge potential of this tech, this is an interesting outlook. Make sure to make your products resilient to changing vendors and price hikes, and it will probably be fine.
Side note: Google seems to be playing the long game.
AI foundries, Nvidia, the hyperscalers, enterprise buyers of AI, consumers, the US, China, the rest of the world, startups, investors, FOSS, students, teachers, coders, lawyers, publishers, artists... each stand to win or lose in profoundly different ways.
Otherwise we all end up talking past each other.
I feel like we've pretty much already hit a fundamental barrier in compute that is unlikely to be overcome in the near future barring a profound, novel algorithmic approach or an entirely novel computing model.
Nothing interesting without some fundamental breakthrough IMO. Model/agent providers add another level of "thinking" that uses 10x the energy for 10% gain on benchmarks.
Also, the fact that 30B models, while less capable than 300B+ models, are not quite one whole order of magnitude less capable suggests that, all things being equal, capability scales sub-linearly with parameter count. It's even more striking with 4B models, honestly. The fact that those are serviceable at all is kind of amazing.
Both factors add up to the hunch that a point of diminishing returns must soon be met, if it hasn't already. But as long as no one asks where all the money went I suppose we can keep going for a while still. Just a few more trillions bro, we're so close.
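The sub-linear scaling hunch can be illustrated with a toy power-law loss curve. The constants and exponent below are made up purely to show the shape of diminishing returns, not fitted to any real model family:

```python
# Toy Chinchilla-style power law: loss(N) = E + A / N**alpha.
# E, A, alpha are illustrative assumptions, not measured values.
E, A, ALPHA = 1.7, 400.0, 0.34

def loss(n_params_billions):
    """Hypothetical loss as a function of parameter count."""
    n = n_params_billions * 1e9
    return E + A / n ** ALPHA

# Each 10x jump in parameters buys a smaller loss improvement
# than the previous one -- diminishing returns per dollar.
for n in [4, 30, 300]:
    print(f"{n:>4}B params -> toy loss {loss(n):.3f}")
```

Under any curve of this shape, each 10x in compute buys less than the last, which is the "10x energy for 10% gain" pattern described above.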
For example, language might not be baked in, but the software for quickly learning human languages certainly is.
To take a simple example, spiders aren’t taught by their mothers how to spin webs. They do it instinctively. So that means the software for spinning a web is in the spider’s DNA.
And if they aren't, then they will be soon enough.
I don’t think over the last 5-8 years there has been a shortage of code-slingers, as evidenced by all the tech layoffs. Using LLMs to generate more code does not equal productivity. There’s that famous story about the -2000 LOC commit, etc.
From observation and induction, it seems very easy to contribute net-negative value in a software development position, and delivering long-term net positive value really requires a lot of discipline from the developers, or a lot of wrangling from non-technical members.
PS: Discipline here does not mean "being picky about what things to accept" it means being picky about what things you make up. From observation, a lot of the most valuable work is super fucking boring, and it takes discipline not to make up a more intellectually stimulating task.
I could not have attempted to create the application I have created without it.
As far as money-making - I wager it's helped me a great deal more in regards to preparation for marketing and sales, since I barely knew where to start with that. But I don't have the tangible proof yet, since I'm just starting that process up.
If you can create an application easily with AI, then someone else can also create that application easily with AI. This means the value is no longer in that application merely existing.
To me, this means that the value, i.e. the things people will pay for, in software will increasingly come from branding on the low end and from the applications that AI can't yet create on the high end.
There should be some amazing new end-user-facing software, or features in existing software, or reduced amounts of bugs in software, any day now...?
https://youtube.com/@groove-tronic
Now you can't say nobody has anything to show.
This does real-time DSP (time stretching, effects, analog emulation), in a language I didn't know when I started (C++), that I started a couple of months ago.
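For the curious, the core idea behind time stretching is overlap-add: read frames at one hop size, write them out at another. Here is a deliberately naive pure-Python sketch of that concept (the actual project is in C++ and far more sophisticated; this is just the textbook idea, with no windowing, so expect artifacts):

```python
# Naive overlap-add (OLA) time stretch: read frames every
# `analysis_hop` samples, write them every `synthesis_hop` samples.
# stretch > 1.0 makes the audio longer without changing frame content.
def time_stretch(samples, stretch, frame=256, analysis_hop=128):
    synthesis_hop = int(analysis_hop * stretch)
    n_frames = (len(samples) - frame) // analysis_hop + 1
    out = [0.0] * ((n_frames - 1) * synthesis_hop + frame)
    norm = [0.0] * len(out)  # overlap count, used to renormalize
    pos_in, pos_out = 0, 0
    while pos_in + frame <= len(samples):
        for i in range(frame):
            out[pos_out + i] += samples[pos_in + i]
            norm[pos_out + i] += 1.0
        pos_in += analysis_hop
        pos_out += synthesis_hop
    return [o / n if n else 0.0 for o, n in zip(out, norm)]
```

Real implementations add windowing and phase handling (e.g. a phase vocoder or WSOLA) to avoid the clicks this naive version produces at frame boundaries.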
I could not have created this without AI, cannot make progress without AI. When I use up my Pro account limits, I'm done for the day - I'm too slow without it for continuing to make sense.
And this is why untold billions are sunk into AI. Creates a dependency on their services and removes agency from end users.
Edit: yes, I'm aware it adds agency in the sense that some people can do things they couldn't do before… as long as that's permitted/surveilled, which is where the agency is lost.
I don't see how I've lost agency. It's not like it's made me a worse coder: getting so much faster with AI hasn't made me slower without it.
> > cannot make progress without AI. When I use up my Pro account limits, I'm done for the day
Aside from the contradictory statements, I did address your points in my edit. I get the tradeoff. As long as whatever you do is in line with the providers of the services, and as long as you're ok with the surveillance, then sure... It's a tradeoff that MUST be considered and kept in mind.
You can get stuff done without these.
Local models exist.
Why that might not translate to an increase in software quality and feature count is left as an exercise to the weary senior reader.
With AI, refactoring is cheap and safe.
Ah, I misunderstood. I guess I don't care much if AI was used to create the code that enabled a revolutionary feature vs. AI being the revolutionary feature.
But I do in terms of "is it an industry-changing advance in software development, the profession," which is what this thread seemed to be about.
https://youtube.com/@groove-tronic
I'm disabled (which is why I've been forced into starting my own venture) and can only work a few hours a day, but I'm still more productive than I have been in 25 years, even when I had a team of engineers and a designer working for me.
The code is also cleaner - because refactoring is cheap now.
And I'm working in a language (C++) that I had barely touched when I started.
The 0-to-1 learning curve reduction is definitely very real, but the open question for me is what's next.
It's a cross-platform PXE server. You just run it (root-less, no config) and the other computers on your LAN can boot up via PXE and (via netboot.xyz and iPXE) automatically download a Linux installer.
The tool itself and the website both started life as extremely functional one-shots from GPT-5. pxehost was in a ChatGPT chat, and pxehost.com began as a one-shot in a Codex CLI.
To me it's really cool that something like pxehost exists, but the fact that it began life as a fully working prototype from a single ChatGPT response is insane.
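For context on what a tool like this has to speak: after DHCP hands a PXE client a boot file name, the client fetches it over TFTP. A minimal sketch of the TFTP read-request packet as defined in RFC 1350 (not taken from pxehost's actual code) looks like this:

```python
# TFTP RRQ (read request) packet per RFC 1350:
# 2-byte opcode (1 = RRQ), filename, NUL, transfer mode, NUL.
def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    return (
        b"\x00\x01"
        + filename.encode("ascii") + b"\x00"
        + mode.encode("ascii") + b"\x00"
    )

# A PXE client might request the netboot.xyz EFI binary like so
# (the filename here is illustrative):
pkt = tftp_rrq("netboot.xyz.efi")
```

The server side of a PXE host is essentially answering these requests (plus ProxyDHCP), which is why it can run alongside your existing router without any configuration.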
Not suggesting it wasn't useful, or that it's not remarkably convenient. It's just easy to forget how much went into providing it, in terms of the human labor involved in its training data, the financial investments, and the raw resources. In the broader context, it's an unbelievably inefficient way to get to where we got. But as long as we are here, I guess we should enjoy it.
That claim would have much more force if you could point to a repo. Otherwise, it just seems like blind bias.
Given their access to models, tooling, and insane funding/talent, they still struggle with standard software engineering like the rest of us, if not more so because of the pace.
All the AI integrations so far have been a joke, PoC level quality software. Talk to me when AI helps them rebuild core products into something more impressive.
The only questions left, the only ones that matter, are:
- When is it going to pop? Tomorrow? Next year? 2030?
- How hard is the crash going to be? Will only a bunch of AI startups and one or two of the big "AI" companies (OpenAI, Anthropic) go down, or will it be a global financial crash that wipes out hundreds of companies and hundreds of thousands of jobs?
- What's left of "AI" tech in the wreckage? Once the hype is over, what real use-cases exist?
[0] https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-m...
So here's the line of thinking we'll see more of:
"Yes it's a bubble, but so were the railroads, and yet plenty of people made out big time! The railroads themselves were left over and hugely valuable!"
Then the slightly more astute observer would say, well hold on, that's not quite analogous because the depreciation on AI buildout is way faster than in railroads.
Then the even more astute observer would say, even that downplays the stupidity: the actual value creation from railroads was in the land itself.
There is no analogous dynamic in AI-land, and so will probably be far more broadly catastrophic to the bubble blowers.
There definitely was a speculative bubble, and it did leave behind a lot of real value, but that real value was not in the railroad business nor in the railroads themselves; it was in the gigantic amount of land grants and development that supported the bubble.
I imagine the AI bubble bust will be like the .com bubble bust. Of the 20 (or whatever number) of companies that have a shot, something like 3 will survive and do well. The problem is we don't know which 3.
Many, many, dot-com era companies died during the .com bubble and tons of money was lost, but not everything. For example: Amazon.com, eBay.com, Intuit, etc.
The AI boom is way way way bigger than dotcom was.
I don't think it's a bubble. The numbers seem large but that's b/c the underlying infrastructure costs a lot of money & unlike other forms of infrastructure computers can be used for all sorts of different things by simply redeploying different software to it that consumers will find compelling (even if it's no longer 6 second clips of cats doing backflips from diving boards).
Markets cannot be forecasted; therefore, if everyone is predicting a downturn, the downturn will not come. There needs to be some ambiguity: a kind of FergusArgyll uncertainty principle.
That is, if the work produced is actually useful. More and more we see that unless we're hyper-specific, we don't get what we want. We have to painfully iterate with non-deterministic output, hoping it gets it right.
If there's an AI pop, most people and most huge funds will lose money, draining liquidity out of the global financial system.
I've started investing into a mix of Europe etf and global etf, instead of just the global etf, because the global ETF is so exposed to US computer companies.
(1) https://investor.vanguard.com/investment-products/etfs/profi...
(2) https://investor.vanguard.com/investment-products/etfs/profi...
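The dilution math behind that switch is straightforward. Assuming hypothetical US weights (a typical global cap-weighted ETF is very roughly two-thirds US, a Europe-only ETF near zero; check the actual fund fact sheets, these numbers are illustrative), mixing the two reduces US exposure proportionally:

```python
# Illustrative US weight of each fund; NOT the actual Vanguard
# fund compositions, just assumed numbers for the arithmetic.
US_SHARE = {"global_etf": 0.65, "europe_etf": 0.0}

def blended_us_exposure(allocation):
    """allocation: {fund_name: portfolio fraction}, fractions sum to 1."""
    return sum(frac * US_SHARE[name] for name, frac in allocation.items())

only_global = blended_us_exposure({"global_etf": 1.0})
fifty_fifty = blended_us_exposure({"global_etf": 0.5, "europe_etf": 0.5})
```

Under these assumed weights, a 50/50 split roughly halves US (and hence US-tech) exposure compared with holding only the global ETF.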
datadrivenangel•4mo ago
This is the most plausible looking path forward: LLMs + conventional ML + conventional software inverts how our economy operates over the next few decades, but over the next few years a lot of people are going to lose a lot of money when the singularity is actually a sigmoid curve.
arthurcolle•4mo ago
We've been there for a long fucking time
dontlaugh•4mo ago
Now that I have a mortgage, it’s much less than the rent I was paying for a similar property. This is the case in most developed cities on Earth.
Spivak•4mo ago
This is what my parents kept telling me would happen, but no one in my late-20s social circle who's bought a house (including myself) has been able to make it happen. Even my friends who got locked in at the 2% interest rates still pay about $300 more than the equivalent rent. I live in what is, by population, a top 30 city in the US, and I actually live in the city, not its "surrounding metro area."
dontlaugh•4mo ago
In much of Europe, small landlords buy with a mortgage, cover it with the rent and still extract a profit on top. Why would they buy-to-let otherwise?
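The rent-vs-mortgage comparison in this subthread comes down to the standard fixed-rate amortization formula, M = P·r / (1 − (1+r)^(−n)). With illustrative numbers (not anyone's actual loan), the rate sensitivity is easy to see:

```python
# Monthly payment on a fixed-rate mortgage:
#   M = P * r / (1 - (1 + r)**-n)
# P = principal, r = monthly rate, n = number of monthly payments.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative: same $300k principal, 30-year term, two rate regimes.
payment_6pct = monthly_payment(300_000, 0.06, 30)
payment_2pct = monthly_payment(300_000, 0.02, 30)
```

The same principal costs hundreds of dollars more per month at 6% than at 2%, which is why whether a mortgage undercuts rent depends so heavily on when (and where) the loan was locked in.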