The amount of interest in exploring this opportunity is worth it. The bubble is worth it. I don't think these are lost years, and even if they are, the technology is compelling enough to make the gamble worth it.
The fatigue of reading the same shit over and over again makes people forget that it's only been a couple of years. It also makes people forget how groundbreaking and paradigm-shifting this technology was. People are complaining about how stupid LLMs are when, just 5 years back, almost no one could even have predicted that such levels of intelligence in machines were possible.
Asking Gemini _is_ just much better for finding the answers you need, _and_ it provides links so you can verify that information.
It will be a sad day when they start injecting ads, I really hope the foss alternatives catch up.
We are in the infancy of LLM technology.
So cheap gaming hardware in the future (similar to when telecoms overinvested in transcontinental undersea fiber-optic cables)? What's the hangover gonna look like after this? What's the next grift?
The amount of money that's been spent on AI related investments over the past 2-5 years really has been astonishing - like single digit percentage points of GDP astonishing.
I think it's clear by now that there are productivity boosts to be had from applying this technology to fields like programming. If you completely disagree with that statement, I have a hunch that nothing could convince you otherwise at this point.
But at what point could those productivity boosts offset the overall spend? (If we assume we don't get to some weird AGI that upturns all forms of economics.)
Two points of comparison. Open source has been credibly estimated to have provided over 8 trillion dollars of value to the global economy over the past few decades: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148 - could AI-assisted programming provide a similar boost over the next decade or so?
The buildout of railways in the 1800s took resources comparable to the AI buildout of the past few years and lost a lot of investors a lot of money, but are regarded as a huge economic boost despite those losses.
The countries adopting these the most are declining economies. It's places that are looking for something to do after there's no more oil left to drill up and export.
You know where fossil fuel use is booming? Emerging (i.e., growing) economies. Future scarcity of such resources will only make them more valuable and more profitable.
Yes, this is a dim view on the world, but until those alternatives are fundamentally more attractive than petrochemicals, these efforts will always be charity/subsidy.
If you're expecting that to be the area of strong and safe returns on investment, I've got some dodgy property to sell you.
My understanding is that "green" investment portfolios, which were intended as "ethics over return on investment," have actually outperformed petrochemical stocks for years now, and it's more ideology than economics that's preventing further investment (hence why you see so much renewable energy in Texas, which is famously money-driven).
Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM; in reality early traction is all about solving that annoying and stupid problem your customers hate doing but that you can do for them. The disconnect between the extraordinary pitch and the mundane shipped solution is the core of so much business.
That same disconnect also means that a lot of real and good problems will be solved with money that was meant for AGI but ends up developing other, good technology.
My biggest fear is that we are not investing in the basic, atoms-based tech the US needs to avoid being left behind in the cheap-energy future: batteries, solar, and wind are being gutted right now due to chaotic government behavior, the actions of madmen incapable of understanding the economy today, much less where tech will take it in 5-10 years. We are also underinvesting in basics like housing and construction tech. Hopefully some of the AI money goes to fixing those gaping holes in the country's capital allocation.
The elephant in the room is that capital would likely be better directed if it was less concentrated.
A surface-to-air missile?
As funny as that would be, maybe you should define your terms before you try to use them.
TAM or Total Available Market is the total market demand for a product or service. SAM or Serviceable Available Market is the segment of the TAM targeted by your products and services which is within your geographical reach. SOM or Serviceable Obtainable Market is the portion of SAM that you can capture.
And if you're working at a startup or interested in a startup, at any level of employment, and you don't understand what those terms mean...then what the hell are you doing in this space? Go work at some "safe" company.
Bold claim, should we do a poll? How long should we let it run for, a week, two weeks?
I'm sure dang will come and ding me for this one, but I'm sitting here having my points undermined by literal sockpuppets.
Here's the actual guideline (not rule):
"Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to."
People have commented both appreciating your clear definitions and calling you out for the condescension, with a perfect xkcd suggesting an attitude change. It's up to you how you react to such feedback.
Perhaps not ironically, the careless distribution of incorrect information, combined with a dismissal of human endeavor, is such a perfect encapsulation of why so many people absolutely despise everything surrounding LLM hype.
They've never had to generate a real return, create a product of real value, etc. This wave-of/gamble-on AI slop just shows that they don't even know what value looks like. We've operated for ~40 years on a promise of...something.
Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.
Instead we're trying to train systems to move a mouse in a browser and praying it doesn't accidentally send 60 pairs of shoes to a random address in Topeka.
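Something like this sketch (the action names and the cap are made up) is closer to what I'd trust: a fixed whitelist of things the agent can do, and a hard limit on how often it can do them.

    # Hypothetical sketch of a constrained "agent": it can only invoke a fixed
    # whitelist of actions, with a hard cap, instead of freely driving a mouse.
    ALLOWED_ACTIONS = {
        "check_order_status": lambda order_id: f"status for {order_id}",
        "list_recent_orders": lambda: ["A123", "B456"],
    }
    MAX_CALLS = 5  # so a confused model can't loop forever

    def run_agent(requested_calls):
        results = []
        for name, args in requested_calls[:MAX_CALLS]:
            action = ALLOWED_ACTIONS.get(name)
            if action is None:
                results.append((name, "rejected: not in the whitelist"))
                continue
            results.append((name, action(*args)))
        return results

    # run_agent([("check_order_status", ("A123",)), ("buy_shoes", ("60 pairs",))])
    # -> the second call is simply refused; no surprise deliveries to Topeka.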
We don't need to figure out the one true perfect design for standardized APIs for a given domain any more.
Instead, we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
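To make that concrete (the endpoints and field names here are invented), the glue code in question is usually nothing fancier than:

    # Hypothetical glue: pull items out of one service and push them into another.
    # An LLM with each API's docs and an example or two can write this kind of thing.
    import json
    import urllib.request

    def fetch_open_tickets():
        # GET from a made-up ticketing API
        with urllib.request.urlopen("https://tickets.example.com/api/open") as resp:
            return json.load(resp)

    def post_to_board(ticket):
        # POST into a made-up kanban API, remapping a couple of fields
        body = json.dumps({"title": ticket["subject"], "lane": "inbox"}).encode()
        req = urllib.request.Request(
            "https://board.example.com/api/cards",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    for ticket in fetch_open_tickets():
        post_to_board(ticket)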
Time to execute bytecode << calling a REST API << launching a full JVM for each file you want to compile << launching an LLM to call an API (each << is more than 10x).
I think about code generation in this space a lot because I’ve been writing Gleam. The LSP code actions are incredible. There’s no “oh sorry, I meant to do it the other way” like you get with LLMs, because everything is strongly typed. What if we spent $100 billion on a programming language?
We’ve now spent many hundreds of billions on tools which are powerful but we’ve also chosen to ignore many other ways to spend that money.
You can spend an enormous amount of money building out a standard like SOAP which might then turn out not to have nearly as long a run as the specification authors expected.
But even if the W3C had spent $10m a year for the 10 years SOAP was being actively developed (per Wikipedia), that would still be 1/1000 of the $100 billion we’re talking about. So we really have no idea what this sort of money could do if mobilized in other ways.
I get that in the webdev space, that is true to a much larger degree than has been true in the past. But it's still not really the central problem there, and is almost peripheral when it comes to desktop/native/embedded.
LLMs are 10x better than the existing state of the art (scraping with hardcoded selectors). LLMs making voice calls are at least that much better than the existing state of the art (humans sitting on hold).
The beauty of LLMs is that they can (can! not perfectly!) turn something without an API into one.
I’m 100% with you that an API would be better. But they’re not going to make one.
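A rough sketch of what I mean, with the model call stubbed out (call_llm is a stand-in for whatever client you actually use, not a real library function):

    # Hypothetical sketch: wrap a page that has no API in something API-shaped
    # by asking an LLM to extract structured fields.
    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in whatever LLM client you use")

    def get_store_hours(page_html: str) -> dict:
        prompt = (
            "Extract the opening hours from this HTML as JSON, "
            'e.g. {"mon": "9-17", "tue": "9-17"}. HTML follows:\n' + page_html
        )
        return json.loads(call_llm(prompt))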
Mix of open platforms facing immense abuse from bad actors, and companies realising their platform has more value closed. Reddit for example doesn't want you scraping their site to train AIs when they could sell you that data. And they certainly don't want bots spamming up the platform when they could sell you ad space.
Like, we already had a perfectly reasonable decentralized protocol with the internet itself. But ultimately businesses with a profit motive made it such that the internet became a handful of giant silos, none of which play nice with each other.
Well maybe for you and not the millions of people that use this technology daily.
Investors want otherwise.
I think we'd still be talking about Web 3.0 DeFi.
There is a lot of stinky garbage in AI, but at least you can rescue some value from it; in fact, the valuable part could be most of the activity out there, but you only notice what stinks.
This is a bit like the question "what if we spent our time developing technology to help people rather than developing weapons for war?"
The answer is that the only reason you were able to get so many people working on the same thing at once was the pressing need at hand (a "need" that could be real or merely perceived). Without that, everyone would have their own various ideas about which projects are the best use of their time, and would be progressing in much smaller steps in a bunch of different directions.
To put it another way - instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But those homes wouldn't still be around and remembered millennia later.
Yes, but they'd have homes. Who's to say if a massive monument is better than ten thousand happy families?
It's not. The pyramids have never been of any use to anyone (except as a tourist attraction).
I'm referring merely to the magnitude of the project, not to whether it was good for mankind.
They would have been better off. Those pyramids are epitomes of white elephants.
I understand organizationally how this happens, and the incentives that build such a monstrosity but it’s still objectively a shame.
Since when does this have anything to do with AI? Commercial/enterprise software has always been this way. If it's not going to cost the company in some measurable way, issues can get ignored for years. This kind of stuff was occurring before the internet existed. It boomed with the massive growth of personal computers. It continues today.
GenAI has almost nothing to do with it.
All software is this way. The only way something gets fixed is if someone decides it's a priority to fix it over all the other things they could be doing. Plenty of open source projects have tons of issues. In both commercial and open source software they don't get fixed because the stack of things to do is larger than the amount of time there is to do them.
Things that are easy, fun, or "cool" are done before other things no matter what kind of software it is.
I don’t feel like this article is trying to start a conversation, it wants to end the conversation so we can have dessert (aka, catastrophizing about the outcome of the thing “we know” is bad).
Yes! That’s correct!
The implied answer to this question really just misunderstands the tradeoffs of the world. We had plenty of money and effort going into our technology before AI, and we got... B2B SaaS, mostly.
I don't disagree that the world would be better off if all of the money going into so many things (SaaS, crypto, social media, AI, etc.) was better allocated to things that made the world better, but in order for that to happen, we would have to be in a very different system of resource allocation than capitalism. The issue there is that capitalism has been absolutely core to the many, many advances in technology that have been hugely beneficial to society, and if you want to allocate resources differently than the way capitalism does, you lose all of those benefits and probably end up worse off as a result (see the many failures of communism).
> So I ask: Why is adding AI the priority here? What could have been if the investment went into making these apps better?
> I’m not naive. What motivates people to include AI everywhere is the promise of profit. What motivates most AI startups or initiatives is just that. A promise.
I would honestly call this more arrogant than naive. Doesn't sound like OP has worked at any of the companies that make these apps, but he feels comfortable coming in here and presuming to know why they haven't spent their resources working on the things he thinks are most important.
He's saying that they're not fixing issues with core functionality but instead implementing AI because they want to make profit, but generally the sorts of very severe issues with core functionality that he's describing are pretty damaging to the revenue prospects of a company. I don't know if those issues are much less severe than he's describing or if there's something else going on with prioritization. I don't know if the whole AI implementation was competitive with fixing those - maybe it was just an intern given a project, and that's why it sucks.
I have no idea why they've prioritized the things they have, and neither does the author. But just deciding that they're not fixing the right things because they implemented an AI feature that he doesn't like is not a particularly valid leap of logic.
> Tech executives are robbing every investor blind.
They are not. Again, guy with a blog here is deciding that he knows more than the investors about the things they're investing in. Come on. The investors want AI! Whether that's right or wrong, it's ridiculous to suggest they're being robbed blind.
> Unfortunately, people making decisions (if there are any) only chase ghosts and short term profits. They don’t think that they are crippling their companies and dooming their long term profitability.
If there are any? Again, come on. And chasing short term profits? That is obviously and demonstrably incorrect - in the short term, Meta, Anthropic, OpenAI and everybody else is losing money on AI. In the long term, I'm going to trust that Mark Zuckerberg and Sam Altman, whether you like them or hate them, have a whole lot better idea of whether or not they're going to be profitable in the long term than the author.
This reads like somebody who's mad that the things he wants to be funded aren't being funded and is blaming it on the big technology of the day then trying to back into a justification for that blame.
This would be very efficient in avoiding duplication; the entire industry would probably only need a few thousand developers. It would also save material resources and energy. But I think that even if the software these companies produced was entirely reliable and bug-free, it would still be massively outcompeted by the flashy trend-chasing free-market companies which produce a ton of duplicated outputs (Monday.com, Trello, Notion, Asana, Basecamp - all these do basically the same thing).
It's the same with AI, or any other trend like tablets, the internet, smartphones - people wanted these and companies put their money into jumping aboard. If ChatGPT really was entirely useless and had <10,000 users then it would be business as usual - but execs can see how massive the demand is. Of course plenty are going to mess it up and probably go broke, but sometimes jumping on trends is the right move if you want a sustainable business in the future. Sears and Blockbuster could've perfected their traditional business models and customer experience without getting on the internet, and they would have still gone broke as customers moved there.
So you want an open source project to really succeed? It's not money, but real passion for the work.
Write better documentation (with realistic examples!) and fix the critical bugs users have been screaming about for over a decade.
Sure fine pay a few people real wages to work on it full time, but that level of funding has to deliver something more than barely documented functionality.
Yeah, nah... passion only sustains a person for 3 days max before they expire.
My theory is that open source boomed in the last few decades because developers had enough income and free time from their day jobs to moonlight as contributors. With the gravy train ending, I suspect open source will suffer greatly. Maybe LLMs can cover what was lost, or maybe corporations will pay their engineers to contribute directly (even more so than what they do now), but there will definitely be some losses here.