What happens to all of these AI-native companies if the AI bubble is not able to survive in these conditions? If your current development process is built on the metabolic equivalent of 400kg of leaves per day[0], then when the allegorical asteroid hits, you're going to be outperformed by smaller, nimbler companies with much lower resource requirements. Those companies may be better suited for survival in hostile macro conditions.
In other words, I think a lot of companies believe that they're trimming their metabolic fat by replacing engineers with AI. Lower salary costs! But at the same time, they're also increasing their reliance on brittle energy infrastructure that may not survive this century. (Not to mention the brittleness of the semiconductor fabrication pipeline, RAM availability, etc.)
Folks using AI aren't interested in the future; they're interested in buying today and maximizing profits today. If something goes wrong tomorrow, then that's when the problems get dealt with: tomorrow.
AI is an incredibly fragile technology. As you say, it depends on so many things going right that it's amazing it works at all. That fragility includes price: once AI prices go up and developer prices come down, the winds of change might blow again.
AI also forces folks to be online to code; without being online, companies cannot extend their products. Git was the first (open-source) version control system that worked offline. We're literally turning back the hands of time with AI.
AI is another vendor lock-in, with the big providers being the sole key-holders to the gates of coding heaven. Folks are blindly running into the hands of vendors who will raise prices as soon as their investors demand their money back.
AI is "improving" code bases in ways that make subtle errors and edge cases harder to detect; debugging without AI will become impossible. Will a human developer actually be able to understand a code base that has been written by an AI? That's a problem for tomorrow. Today we're making the profits and pumping up the shareholder value.
AI prompts depend on the version of the LLM: change the model and the same prompt may generate different code. Upgrade LLMs or change prompts and suddenly the generated code degrades without warning. But prompts are single-use, one-way technology: once the generated code is in the code base, there is no need for the prompt, so that's a non-issue, except for auditors.
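If the prompt does matter to anyone downstream (auditors, or whoever has to regenerate the code later), one mitigation is to pin the exact model version and store the prompt alongside the code it produced. A minimal sketch of such an audit record, using only the standard library; the field names and the model-id string are my own invention, not any vendor's schema:

```python
import hashlib
import json

def record_generation(prompt: str, model_id: str, generated_code: str) -> dict:
    """Build an audit record tying a piece of generated code to the exact
    prompt and pinned model version that produced it."""
    return {
        "model_id": model_id,  # a pinned, dated version string, not "latest"
        "prompt": prompt,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "code_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
    }

record = record_generation(
    prompt="Write a function that sums a list of numbers.",
    model_id="example-model-2025-01-01",  # hypothetical pinned version
    generated_code="def total(xs):\n    return sum(xs)\n",
)
print(json.dumps(record, indent=2))
```

Checking such records into the repo next to the generated files would at least let an auditor ask "which model, which prompt?" after the fact.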
Having come from levers, to punch cards, to transistors, to keyboards, to mice, and finally AI, programming has fundamentally forgotten there is a second dimension. Most fields have moved to visual representation of data: graphs, photos, images, plans, etc. Programming is fundamentally a one-dimensional activity, lines and lines of algorithmic code, hard to understand and harder to visualize (see UML). Now AI comes along and entrenches this dependency on text-based programming, as if the keyboard is the single most (and only) important tool for programming.
It's a lack of imagination in exploring alternatives for programming that has led us here. Having non-understandable AI tools generating subtly failing code that we blindly deploy to our servers is not an approach that promises long-term stability.
Huh? It’s just code that you can read. Why do you think the code will be impenetrable to a team of human minds?
So it will be with AI code that has just been generated and blindly added to the code base. It makes everything work but sometimes, perhaps not always, the devil lies in the details.
Take any book, open it up to a random middle section, read it. I can read the words but I don't understand the story. And so it is with code.
This isn't true in the broad sense you've used. It's true that most people don't have the hardware to run the bleeding-edge foundation models, but with a modest MacBook you can run very capable local models now (at least capable for coding, where my experience is).
AI can be run locally, but with the growth of agent factories, this is going to be less and less possible if you want to keep up with the Joneses.
If two money-losing companies decide that they would like to make money, the math gets ugly fast.
Good luck to anyone cleaning up the mess.
I feel like the internet is programming me.
At this point it is impossible to tell if AI writes like people or people write like AI.
but the ship has sailed :)
there is no hiding from it
of course the content we consume modifies us, but now everybody "reads" the same book, whatever they read.
funny trick. similarly when I use LLMs I try to make them emulate people's writing patterns from previous eras.
AI-generated, or at least heavily edited, would be my guess. Although I'm with you: at this point it's hard to tell. I'm seeing those AI filler phrases, or overused constructions like "here's what's actually happening", more and more, and not only on blog posts but on social media, video content, podcasts.
Becoming aware of this pattern felt like science fiction, but as the OP says, it is indeed happening. Going to change the shape of tech careers for sure. My 2c.
Those tests don't sound very useful to have then.
tra3•2d ago
I know gas town made a splash here a while back, and some colleagues promote software factories, but I haven't seen much real output... have any of you?
I prefer the guided development approach where it’s a pretty detailed dialog with the LLM. The results are good but it’s hardly hands off.
If I squint I can almost see this fully automated development life cycle, so why aren’t there real life examples out there?
jcims•2d ago
https://code.claude.com/docs/en/changelog
Flux159•2d ago
throwanem•1d ago
There was a time in my life when I too would give such a thing away free, on the idea that those who might do some good with it may make up for the ones who will certainly turn it to great evil. After 30 years' exposure, some consensual, to Bay Area/Silicon Valley "culture," I am no longer so sweetly naïve.