And this is why Matt Levine calls Sam Altman the greatest business negger of all time
>Look inside.
>Written by someone having a stake in LLM business.
Every time.
A rhetorical technique as old as dirt, but apparently still effective.
But seriously, it isn't on me to justify my skepticism of the extreme claim, "We are in a race to build machine super-intelligence" because that skepticism is the rational default. Instead it's the burden of people who claim that we are in fact in that race, just like "self driving next year" was a claim for others to prove, just like "Crypto is the future of money" is a statement requiring a high degree of support.
We've seen this all before, and in the end the argument in favor seems to boil down to, "Look at how much money we're moving around with this hype" and "Trust us, the best is yet to come."
Maybe this time it will.
For the record, I would be more sympathetic to the author if any receipts (i.e., repos) had been produced at all, but as you so correctly stated, extraordinary claims require extraordinary evidence.
I agree you do not have the burden of defending the author's claims; apologies if that was not clear.
When's the last time you saw management tell you which compiler or toolchain you need to use to build your code? But now we have CEOs and management dictating how coding should be done.
In the article the author admits: "I started coding again last year. But I hadn't written production code since 2012" and then goes on to say: "While established developers debate whether AI will replace them, these kids are shipping."
Then I ask myself: what are they selling? And lo and behold, it is AI/ML consulting.
In The Sirens of Titan, Vonnegut tells a story in which governments decided to boost the space industry to drive aggregate demand.
This is exactly what is happening. When you realize that the whole thing is predicated on building and selling more $100,000 GPUs (and the solution to every problem therein is to use even more GPUs), everything really comes into focus.
Asking for a friend.
The $560B for those who believe in AGI isn't about ROI using today's money-in/money-out formula; it's about power positioning for a post-capitalist transition.
Every major player knows that whoever controls the infrastructure once the threshold is crossed might control what comes after.
The "bubble" narrative assumes these actors are optimizing for quarterly returns rather than civilizational leverage.
I could also say, if you truly believe nuclear fusion is imminent we will have infinite free energy and all current economic metrics are meaningless. But there is no nuclear fusion bubble. Why not? Because people don't believe nuclear fusion is imminent. But for some reason they do believe AGI is imminent - despite there being no actual evidence of that. There is probably less understanding of what is needed to close the gap to true AGI than there is to close the gap to make nuclear fusion possible.
The only distinction here is what people are willing to "believe" based on pure conjecture - which is why I class it as a true bubble.
It’s a religion. Repent now, the AGI is coming.
That's more or less true of predicting any new financial trend.
If AI is making devs 20-30% more efficient, then you could invest in tech stocks if you think they can ship as much with lower overhead. The financial metrics look better if that's true.
I suspect this hype cycle won't end until a new one forms, whether new technology or some catastrophic event (disease, war) changes focus and allows the same delaying tactics.
Look at the trajectory of stock prices before and after COVID.
When this bubble bursts, the ensuing chaos will be used in a similar manner.
I never cease to be shocked at how little tech people think of what creative people do and why they do it.
I don’t mean to say there aren’t any uses, but I think the main misunderstanding here is that what generally holds indie filmmakers back isn’t access to technology.
I think this really hits on the difference in our understanding because constraints are what cause actual creativity and art to happen.
A lack of constraints is why big-budget movies are so tedious. Lower budget movies are better because of their constraints.
A more obvious example is The Blair Witch Project, which cost less than a million dollars even after all the marketing was done (and cost essentially nothing to make).
The original Halloween was a very low-budget movie considering how long it took to shoot.
Vin Diesel's career was established by his own movie, Strays, which cost less than $50K. That's essentially zero budget for a film that opened at Sundance.
Away from films there are many, many examples of massively popular albums and songs that were made essentially for nothing off the back of simple constraints and creativity.
In the long run, the only way artists will use AI effectively is by deciding on constraints that limit its use.
Because as soon as you don't limit its use, anyone can do what you can do.
So I tend towards thinking that AI won't really move the needle in terms of human creativity. It may reframe it. But nobody is going to be liberated creatively by it.
Tech people, I suspect, tend to assume that AI brings "full creative freedom" to artists the same way a patron does when they say "you can have full creative freedom".
It's not the same kind of freedom.
It will hopefully lead to a democratization of previously expensive settings (e.g., historical, fantastical, large-scale events). Many indie movies still have huge budgets and need some kind of sponsor. Now we will hopefully see a wonderful mix of hobbyist, semi-professional, and professional fully independent setups that tell stories without worrying about the financial risks tied to certain forms of artistic expression.
I don't think it is helpful to gatekeep movie making with arbitrary requirements regarding AI usage, nor do I believe that the reliance on patrons or state sponsorship that is prevalent in indie movie making is a good thing, given the current neo-feudal and authoritarian currents.
I am not gatekeeping at all; I don't understand how this could ever be perceived as gatekeeping. I'm just saying that in my own experience, indie creators tend to perceive generative AI as bullshit, not as liberation.
Artists who tell you that AI is not helping art are not gatekeeping either.
The world is full of creative people and some of them will make movies with AI. Those are indie film makers.
Shipping where? What production? What kids? I've yet to see this. I see the tools everywhere, but not anything built with them. You'd think it would be getting yelled about from the mountaintops, but I'm still waiting.
Heck, they did it with languages for the longest time. Here's Twitter, we built it on Rails, everyone use Rails! Facebook, built on PHP, everyone use PHP! Feels weird that, if these AI tools are doing all this work, no one is showing it off.
A whole bunch of folks got into management thinking coding is beneath them; they are now wielding the power - let the code-monkeys do the typing. Then, it turns out, coders are continuing to call the shots, and the management folks have coder-envy.
Now, with LLMs, coding is again not only within management's reach, but they think it is trivial, and it can be outsourced to the LLM code-monkeys, and management has regained power from the pesky coder-class.
So, you have management of all stripes "shipping" things, and dictating what coders should do - not realizing that they should stay in their lanes, and let coders decide for themselves what works best in their craft.
It's struck me as odd that managers of software engineers would seek to negate the field of software development almost completely. But maybe you're onto something.
I call bullshit. Let's see some repos.
I often find people contest this with the non-sequitur of "No, it's not a bubble, there is real value there. We are building things with it." The fact there is real value in the technology does not contradict in any way that we are in a bubble. It may even be supporting evidence for it. Compare with the dot com bubble: nobody would tell you there was no value in the internet. But it was still a bubble. A massive, hyper-inflated bubble. And when it popped, it left large swathes of the industry devastated, even while a residual set of companies were left to carry on and build the "real" eventual internet-based reworking of the entire economy, which took 10-15 years.
People would be well advised to have a look at this point in time at who survived the dot com bubble and why.
The crowd is always wrong on these things. Just like everyone "knew" we were going into a deep recession sometime in late 2022, early 2023. The crowd has an incredibly short memory too.
What it means is that people are really cautious about AI. That is not a self-reinforcing, fear-of-missing-out, explosive-process bubble. That is a classic bull market climbing a wall of worry.
Current models excel because of the corpus of the open internet they built off of (or, rather, stole). New languages aren't likely to see as consistent results as old ones, simply because these pattern matchers are trained on past history and not new information (see Rust vs C). I think the fact nobody's minting billions turning LLMs into trading bots should be pretty telling in that regard, since finance is a blend of relying on old data for models and intuiting new patterns from fresh data - in other words, directly targeting the weak points of LLMs specifically (inability to adapt to real-time data streams over the long haul).
AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that. Rather, we're at a crossroads like you say: stakeholders want more money and higher returns (which AI promises), while the people doing the actual work are trying to highlight that internal strife and politics are the holdups, not a lack of brute-force AI. Meanwhile both sides are trying to rattle the proverbial prison bars over the threats to employment real AI will pose (and the threats current LLMs pose to society writ large), but the booster side's actions (e.g., donating to far-right candidates that oppose the very social reforms AI CEOs claim are needed) betray their real motives: more money, fewer workers, more power.
Is this the consensus on nomenclature? I thought "AI doomers" meant people who think some dystopia will come out of it. In that case I've read so much text wrong.
I’m worried that the US knowledge industries jumped the shark in the teens and have been living off hopeful investors assuming the next equivalent of the SaaS revolution is right around the corner, and AI for whatever reason just won’t change things that much, or if it does, the US tech industry will fumble it, assuming their resources and reputations will insulate them from the competition, just like the tech giants of the 90s vs Internet startups. If that’s true, some industries like biotech will still do fine, but the trajectory of the tech sector, generally, will start looking like that of the manufacturing sector in the 90s.
E.g., crypto displayed many, many characteristics of a bubble for a number of years, but the crypto bubble seems like it has just slowly stopped growing, rather than popping in a fantastical way. (Not to say it still can’t, of course.)
Then again, this bubble is different in that it has engulfed the entire US economy (including public companies, which is the scary part since the damage potential isn’t limited to private investors). If there’s even a 10% chance of it popping, that’s incredibly frightening.
I personally think a crash is more likely than not, but I think we should not assume that history will follow a particular pattern like the dot com bust. There are a variety of ways this can go and anyone who tells you they know how it’s all going to shake out is either guessing or trying to sell you something.
It is for sure an interesting time to be in the industry. We’ll be able to tell the next generation a lot of stories.
For me the big concern is really the level of detachment from reality that I'm seeing around time scales. People in the startup world seem to utterly fail to appreciate the complexity of changing business processes - for any type of change, let alone for an immature tech where there are still fundamental unsolved problems. The only way for the value of AI to be realised is for large-scale business adoption to happen, and that is simply not achievable in the 2 years of runway most of these companies seem to be on.
Bitcoin is now worth 2.3 trillion dollars. The price graph looks like a hockey stick. For tokens in a self contained ledger system.
You may be conflating hype and bubble.
I'm extremely tired of bespoke solutions when off-the-shelf or already-known ones would work just fine.
That said, the job market is not as crazy as it was during the .com era; in fact, right now most technologists are finding it more difficult to find work. Most of this AI hype started when the employment market started to slow down. Usually these bubbles pop after the employment market goes crazy. Employment starts to go nuts when crazy money enters the picture. So if, for example, the Fed really starts to cut rates and/or investment really starts to pick up and we have another boom period, the tail end of that seems historically to be when the bubbles pop.
Put another way, there is a good chance that the bubble will continue to inflate for a few years before it pops.
Meta has been offering 7-figure salaries for AI talent. This is a very different bubble from the .com bubble. The hiring frenzy is limited to a very small group of people with unique skills/experience that few possess, while at the same time thousands of other people are being let go in order to pay those big salaries to a few people (and in order to buy more GPUs). The C-suite has become obsessed with the idea that they're going to need far fewer engineers, and they're hiring/firing like it.
It'll never be AGI or superintelligence, it won't create or cause the singularity, and it'll never be a substitute for learning, practicing, and honing skills into mastery. For the fields LLMs do displace in part or in whole, I still expect it'll largely displace the mediocre or the barely-passable, not the competent or experts. Those experts will, once the bubble pops and the hype train derails, find the novel and transformative uses for LLMs outside of building moats for big enterprises or vamping for investor capital.
I especially enjoy the on-prem/locally-run angle, as I think that is where much of the transformation will occur - in places like homes, small offices, or private datacenters where a GPU or two can accelerate novel tasks for the entity using it, without divulging data to corporate entities or outright competitors. Inference is cheap, and a modest gaming GPU or AI accelerator can easily support 99.9% of individual use cases offline, with the right supporting infrastructure (which is improving daily!).
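To make that concrete, here is a minimal sketch of fully offline inference on a single consumer GPU, assuming the llama-cpp-python bindings and a quantized GGUF model already downloaded to disk (the model path and prompt below are purely illustrative):

    # Runs entirely locally: no data leaves the machine.
    # Assumes `pip install llama-cpp-python` built with GPU support,
    # and an illustrative local model file (swap in whatever you use).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_ctx=4096,        # context window
        n_gpu_layers=-1,   # offload every layer to the GPU if it fits
    )

    result = llm("Summarize this contract clause in plain English:", max_tokens=128)
    print(result["choices"][0]["text"])

Nothing in that flow touches a third-party API, which is exactly the privacy property being described.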
All in all, an excellent post.
I'm reminded of the motto of the Royal Society: Nullius in verba (take nobody's word for it).
I also don't understand the value of using AI to write stuff in loads of unfamiliar languages. I get why one might choose Rust vs. Golang vs. JavaScript depending on the mission, but I would think that those differences go away entirely when you're depending on an LLM to author something in those languages AND you aren't skilled enough in those languages to understand when something's suboptimal or not. This just feels like an express train to bankruptcy via technical debt.
I'm also having trouble with the notion of AI accelerating the creation of side projects. For me, actually writing the code (or figuring out how the language works) is part of the fun that I get from doing side projects. If I wanted to create something as quickly as possible, I'd just buy a SaaS subscription or physical version for what I want.
It's also insane to me that we're just not AT ALL considering how LLMs stunt the growth of our juniors. Spending hours banging my head against the wall on tiny bugs is how I got to where I am today. I'm going to guess that's the case for many of the people on HN as well. That learning process goes away entirely once an LLM goes into the mix. You can just ask it to fix whatever's broken, no understanding of the bug required. This is fine for seniors who know why things happen how they happen, but I can't imagine juniors making up this skill gap.
It's like learning a new language vs. having your phone generate whatever in the target language. The end result is the same, but there's no way you can really learn that language with your phone doing the work, unless you assign no value to learning that language in the first place.
Finally, I have trouble accepting the idea of giving up the keyboard once you become an "architect." I very much understand that us "architects" have less free time in the day to fire up the IDE (death by meetings, basically), but giving that up entirely feels somewhat career-limiting to me. Then again, this is a moot point if the market moves towards making software development an AI-only activity.
What's crazy to me is that most developers and architects sneered at low/no-code solutions because they created unmaintainable codebases that were too proprietary to make sense of, yet here we are lapping up code generated by "coder" LLMs and accepting that they "might" produce insecure code here and there. Insane.
As an engineer, development still comes down to requirements gathering, solid engineering principles, and the tools we already have at our disposal - network calls, rendering the UI, orchestrating containers and jobs, etc.
All that is to say that I thought AI was going to be sexy, like Westworld, and not so boring...
Westworld robots are still a long way off, but think about how far we’ve come so quickly.
It’s pretty incredible that natural language computing is now seen as boring when it barely even existed 5 years ago.