I like to categorize AI outputs by the information size of the prompt + context input versus the information size of the output.
Summaries: output < input. It’s pretty good at this for most low-to-medium stakes tasks.
Translation: output ≈ input, but in a different format or language. It’s decent at this, but requires more checking.
Generative expansion: output > input. This is where the danger is. It’s like asking for a cheeseburger and the model inferring a sesame-seed bun because that matches its model of a cheeseburger. Generally that’s fine. Unless you’re deathly allergic to sesame seeds; then it’s a big problem. So you have to be careful in these cases. And, at best, anything inferred beyond the input is average by definition. Hence AI slop.
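A toy way to express that taxonomy, purely illustrative; the ratio thresholds below are made-up numbers, not anything from the comment:

```python
def classify_ai_task(input_tokens: int, output_tokens: int) -> str:
    """Crude classifier: compare output information size to input size.

    The 0.5 and 1.5 thresholds are arbitrary illustrative choices.
    """
    ratio = output_tokens / input_tokens
    if ratio < 0.5:
        return "summary"              # output < input: compression, lowest risk
    if ratio < 1.5:
        return "translation"          # output ~ input: re-encoding, needs checking
    return "generative expansion"     # output > input: the model fills gaps itself

# The risky case: short prompt, long answer. Everything beyond the
# input had to be inferred from the model's "average" view of the world.
print(classify_ai_task(input_tokens=2000, output_tokens=200))   # summary
print(classify_ai_task(input_tokens=800, output_tokens=3000))   # generative expansion
```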
You can bet that even if the specific forms attempted in this interval don't take hold, they will eventually.
You and I are too expensive, and have had too much power.
What about improved quality of life? What about an explosion of new types of jobs?
> You and I are too expensive, and have had too much power.
Do you think the average citizen (or the collective) has MORE power or LESS power than 100 years ago? Than 200 years ago?
What I said vs what you imagined I said are two different things.
But agreed on the overall meaning of the comment: the promises of LLMs are still exaggerated.
What we got from the Internet was some version of the original promises, on a significantly longer timescale, mostly enabled by technology that didn't exist at the time those promises were made. "Directionally correct" is a euphemism for "wrong".
"They called me bubble boy..." - some dude at Deutsche.
Reasoning models didn't even exist at the time, and LLMs were struggling a lot with math back then. It's completely different with SOTA models now; there have been massive improvements since GPT-4.
Probably very expensive to run, of course, probably ridiculously so, but they were able to solve really difficult maths problems.
It's not real, they are cheating on benchmarks. (Just like the previous many times this was announced.)
My point is that even if things are plateauing, a lot of these advancements happen in step-change fashion. All it takes is one or two good insights to make massive leaps, and the fact that things are plateauing now is a bad predictor of how things will be in the future.
We could compare it to the railroad boom and the telecom boom: in both cases vast capital expenditures were made, and reasonable people might have concluded that eventually these expenses would have to be recouped through higher prices. However, in both cases many firms simply went bankrupt, and all that excess infrastructure went on to serve humanity for decades at lower cost.
Creative destruction is a woefully underappreciated force in capitalism. Shareholders can lose everything. Debt can be restructured or sold for pennies on the dollar. Debt can go unsold and unpaid, and the creditors can lose everything.
I think it has to be mentioned here that bankruptcy in the United States actually works very differently from bankruptcy in the European Union, where creditors have a lot more legal means at their disposal to hound you if you try risky plays like taking on more debt to moonshot your way out of your current debt. In a funny way, a country's bankruptcy laws are its most important ones when it comes to wealth transfer.
"B-but the developer always has to make the money back so rents & prices can never go down!" "That's the fun part, they don't!"
The builder/buyer/lender/landlord/etc. can go bankrupt, but as long as the building actually got built, it will carry on and benefit the rest of us, regardless of what happened to the people who paid for it to be built.
Also fun when landlords claim to be "housing providers": "No, actually, the housing will still be there, even if you sell, even if you lose your shirt and get foreclosed on."
If the buildings, I assume single family houses, were built in places that few people want to live, or places people are simply forced to live because they can’t live where they want to live, then it wouldn’t shock me if someone chose the bulldozer as the best option. I’d argue that the root cause is bad land use law that doesn’t allow construction in the places people really want to live, but that’s a whole other topic of course :)
My argument, for AI, railroads, telecom, and construction alike, is that if you built something tangible and useful, it can outlive your financial arrangements and go on to serve humanity, even if you go belly up.
But for residential construction, location is everything: you could build a theoretically useful building, but in a location that few people want to voluntarily live in, far from jobs, services, shops, friends, and family. Of course, you could also build a railroad between two unpopular destinations, or run a fibre-optic line between two places with little demand, and those would probably not outlive your finances either and might be torn up or abandoned too.
It seems as if these are developments which didn’t get completed.
The impact of firms and people going bankrupt is that other people making investment and lending decisions will see risk more clearly and may (for a time) be less greedy and stupid when they make capital-allocation decisions.
Debts can & do magically disappear. To be clear, someone pays for the lost money, but at that stage it's far too late for them to do anything about it, let alone raise prices.
Here's an example: Founder A founds a startup with equity funding from B & C. Later they take loans from D & E. They spend all the money but never become profitable. None of the original investors or lenders is interested in pumping in good money after bad. They voluntarily declare bankruptcy, or they default on a loan and D or E forces them into bankruptcy. Either way, whatever is left of the company's assets is sold to reimburse, in part, the loans D & E made. A, B & C got nothing.
A, B, C, D, and E, all lost real money.
But by the time this loss is crystallized, there is no way any of them can go back in time and raise prices to pay for it. The money is gone, and so is the company. The only thing they can do is act differently in the future.
So no, it doesn’t magically disappear. A bankruptcy somewhere is a loss for others somewhere else. Even cutting debt to pennies on the dollar means lenders are losing money. Bankruptcy is not a magic trick.
"Easy". "Just" get more users and "just" increase prices to somehow cover hundreds of billions of invested dollars and hundreds of millions of running costs.
It's that easy. I'm surprised none of the companies mentioned in the article thought of that.
I was just stating the obvious. That is what they are doing.
The only reason they are raising prices is to try to recoup some of the ongoing operational costs. In the end only Google will be left standing, because Google has unlimited money: they can dump prices indefinitely and offer most things for free.
(With a caveat that LLMs actually do have their uses)
Billions of dollars were spent on crypto as well.
Today we're at 40, and Nvidia alone is at 49.
As much as everyone wants this to be a bubble: it isn't. ChatGPT was the fastest "thing" in history to reach 100M MAUs, and is believed to be a top 5 most visited website today, across the entire internet. Cursor was the fastest company in human history to reach $500M in revenue. Midjourney, the company no one talks about anymore, is profitable and makes over $200M in revenue.
Being brutal here: Hacker News is in a bubble. Yeah, there's some froth, there's some overvaluation, and some of these companies will die. But I seriously do not understand how people can see these real, hard statistics, not fake money like VC dollars or the price of bitcoin but fucking deep and real shit, and still say "nah, it's like crypto all over again".
48% of respondents to a recent survey said they've used ChatGPT for therapy [1]. FORTY-EIGHT PERCENT. There is no technology humanity has ever invented that has seen genpop uptake this quickly, and it's not dropping. This is not "oh, well, the internet will be popular soon, throw money at it, people will eventually come". This is: "we physically cannot buy enough GPUs to satisfy demand; our services keep going down every week because so many people want to pay for this".
You're commenting this under an article showing how deeply unprofitable most "AI" companies are. Revenue isn't profit. Midjourney is probably the only one that is profitable.
> 48% of respondents to a recent survey said they've used ChatGPT for therapy [1]. FORTY-EIGHT PERCENT.
No idea why you provide this frankly scary statistic. It's not proof that this isn't a bubble.
> This is: "we physically cannot buy enough GPUs to satisfy demand, our services keep going down every week because so many people want to pay for this".
It's called a mania. Manias always end.
That demand is caused by mispricing: VC money is used to pay for the GPUs, but the product is mostly given away for free. If companies just charged the public what the service costs to provide, usage would go down dramatically.
In other words: How literally every tech business that has ever worked has worked. This isn't news. This isn't novel. This is just how it works.
If you want me to fear that the world is coming to an end, wake me up when Google (one of the world's largest frontier AI labs by any measure) isn't posting $35B in profit on a 39% gross margin every quarter, or when Meta (which is reported to be paying AI researchers nine-figure comp packages) isn't making $16B on 40%. The amount of money these companies make is disgusting; it's so disgusting that they can blow a hundred billion on GPUs and key people, write it all down to zero two years later, and it's all a teeny tiny blip on their graphs; it becomes the third line item in their quarterly board meetings, behind far more important stuff. The reason some of you get so freaked out is that you literally cannot comprehend the scale these companies operate at, and how financialized their operations are.
Military contracts.
I hope people understand the irony, but to spell it out: they need to live on government money to sustain growth.
Corporate welfare, while 60% of the US population doesn't have the money to cover a $1,000 emergency.
Meta makes 99% of its revenue from advertising (according to the article). Google, similarly, makes most of its money from advertising.
Tesla makes money by selling cars (there's no indication the government is going to transform their fleets to Tesla vehicles; in fact, they're openly hostile to EVs).
Apple needs to rely on US government military contracts for continued growth? What?
Amazon, the company that sells toothpaste and cloud services needs to rely on US government military contracts?
Consider me not convinced by the story you tell.
https://breakingdefense.com/2025/01/army-kickstarts-possible...
Of course it won't work. These tech companies have no clue about the real world and humans.
How large is the US military contract market for the kinds of products and services these companies produce?
For reference, their combined 2024 revenue was around $2 Trillion.
So valuable that it will be next main source of growth (what was claimed) for Amazon, Apple, Alphabet, Microsoft, Nvidia, Meta, and Tesla?
The US military budget is less than $1 trillion per annum. These companies had a combined revenue of $2 trillion. For military contracts to be THE new source of growth, how much larger would the military budget have to be?
To be fair, it wasn't suggested that the growth would be equivalent to or surpassing of past growth, just growth of some kind. The budget doesn't necessarily have to become any larger, they just need a piece of the pie.
AI/LLMs are an infant technology; we're at the beginning.
It took many many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.
I put it to you that the truly valuable applications of AI/LLMs are yet to be invented and will be truly surprising when they come (which they must be, of course; otherwise we'd have invented them already).
Amara's law says we tend to overestimate the value of a new technology in the short term and underestimate it in the long term. We're in the overestimate phase right now.
So I’d say ignore the noise about AI/LLMs now - the deep innovations are coming.
what
The Internet is not the World Wide Web.
It was immediately clear to many people how it could be used to express themselves. It took a lot of years to figure out how to kill most of those parts and turn the remainder into a corporate hellscape that's barely more than corporate brochures.
Has this effect been demonstrated by any company yet? AFAIK it has not, but I could be wrong. This seems like a rather large "what if"
That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.
How can massively buying hardware that will have to be thrown away in a few years be a "good" bubble in the sense of being a lasting infrastructure investment?
Up to a point it is better than having additional compute sitting idle at the edge, economies of scale and all that, but after some point it becomes excess and wasteful, even if people figure out ways to entertain themselves with it.
And if people don't want to pay what it costs to improve and maintain these city-sized electronic brains? Then it all becomes waste, or the majority of it gets transformed into office or warehouse space or something else.
Proceeding with combined 1% (US GDP)-sized budgets despite this risk being an elephant in the room is what makes it a bubble.
Nvidia sold ~3M Blackwell GPUs in 2025: https://wccftech.com/nvidia-has-sold-over-three-million-blac...
Compare that to laptops which sell in tens of millions per manufacturer: https://en.wikipedia.org/wiki/List_of_laptop_brands_and_manu...
Plus, it's way easier to collect boards for recycling from a centralized data center.
https://www.tomshardware.com/pc-components/gpus/datacenter-g...
I wonder if ubiquitous, user-friendly finite element analysis tools could become a boon for 3D printers.
How well used do you think those AI data centers are going to be?
America's internet infrastructure, like the railroads, was also left in the hands of private monopolies and it is also a piece of shit compared to other countries. It's slow and everyone pays far too much for it and many are still excluded from it because it's not profitable enough to run fiber to their area.
The AI bubble won't leave behind any new infrastructure when it bursts. Just millions of burned out GPUs that get sent to an e-waste processing plant where they are ground up into sand, trillions of dollars wasted, many terawatt hours of energy wasted, many billions of liters of freshwater wasted, and the internet being buried under an avalanche of pseudorandomly-generated garbage.
AI-optimist or not, that's just shocking to me.
What's the problem with that? Why shouldn't people feel comfortable sharing their vision of the future, even if it's just a "gut feeling" vision? We're not going to run out of ink.
But hey, it's more fun to pretend the chatbot will turn into the Terminator.
But then I think about the real, actual planning decisions that were made based on claims about self-driving cars and Hyperloop being available "soon", decisions that made people materially worse off due to deferred or canceled public transportation infrastructure.
An ethical approach? Hell no. What do you expect from an unregulated capitalistic system?
Competition, fortunately
So there's no competition when there are no rules and regulations...? Interesting.
All those sports without rules or regulations, like American football, where anything goes.
Highly regulated industries: healthcare, banking, aviation
Less regulated industries: web software, e-commerce, entertainment
It is easier for startups to get started in the latter, harder in the former.
[0] https://storage.googleapis.com/gweb-research2023-media/pubto...
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.
My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.
Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"
I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...
Nah, you just post it; if people point out the mistakes, the comment is treated as positive engagement by the algorithm anyway, unfortunately for anyone who cares.
AlphaFold is having a big influence in medical research. There's more to AI than chatbots.
The work they are doing there now is quite interesting; article on it: https://www.labiotech.eu/in-depth/alpha-fold-3-drug-discover... I've got a personal interest because my sister has ALS, and I think an in silico breakthrough is the only thing that could fix that before she dies.
Machine learning has been there for quite a while and is a useful tool. But it's only a tool among many other ones, like programming languages and libraries. It's not a product. At most it can be the engine of a specific feature.
A poor man's Gary Marcus, basically.
Thank you for your input!
I find that fact alone a bit alarming.
You don't need to be a technical expert to understand that it's worrying how the entire media industry is pushing for everyone, everywhere, all of the time, to lean on a tech where the biggest providers are not profitable.
You also don't need to be a technical expert to see how much of a failure it is for the entire media industry to interview Sam Altman, let him spew out utter gibberish, and not even question him on it.
Companies that are exploding in popularity and expanding as fast as possible are not expected to make a profit. This is not unusual in the slightest.
>the entire media industry is pushing for everyone, everywhere, all of the time
No, people use AI because they want to use AI. New users arrive on their own. If you take a closer look at what the legacy media is actually saying, they tend to have a negative slant against AI. Yet people still show up. And will continue to show up.
>Sam Altman, let him spew out utter gibberish, and not even question him on it
If Altman is pissing you and Ed off, he's doing at least something right. That said, I follow AI news every single day and I barely even glance at what Altman is saying. Here lies one of the biggest follies of the anti-AI crowd. Zitron et al. think that they can make AI go away by canceling Altman.
Also, a lot of AI 'users' arrive, search for an actual use case, find none, and then move on.
>>the entire media industry is pushing for everyone, everywhere, all of the time
>No
Yes it is. You can't just "no" this. People aren't just happily using AI and getting bothered by the EVIL AI haters. The push for AI is literally made of threats: "You WILL get left behind", "people WILL replace you". Even if it is undeniably disruptive, you can't call that anything but pressure on a very large scale.
Just because I ask copilot once every few days to write me a piece of boilerplate doesn't invalidate the fact that my workplace has been made to believe prompting is an "essential skill" that must be enforced via mandatory e-learnings.
>think that they can make AI go away by canceling Altman.
I didn't say that and I don't think that. That last paragraph's basically bait.
Yes, Zitron talks all the time. Loudly and insultingly. Here's a suggestion: he should put his money where his mouth is. If he honestly thinks the market for AI is actually 100x smaller than reported and ready to collapse any moment now, he could easily bet against the market. He could earn millions and look like a genius. Make it all public, let his fans join too. You could join. Why not? He supposedly knows the hard truths, he brought the receipts, etc.
But he won't. That's too real. Endlessly prophesying the end of AI is more lucrative.
>You can't just "no" this.
Fundamentally my argument is that the media has little say in this. Take all the new releases this week. Another ~200 million new users will shuffle their way in. Not through fear or fervor, just basic curiosity and a need for getting things done. Models will continue to get better and this trend will continue. It's just how it is.
>I didn't say that and I don't think that. That last paragraph's basically bait.
In any case, I'll leave the Sam Altman discussion to you and Ed.
Back to the media thing. I still don't fully agree that media coverage has absolutely no influence on the success of a technology or that the best solution always wins (which, correct me if I'm wrong, is the argument you're making). If my fundamental argument is "it has an impact" and yours is "it doesn't" then we've reached a stalemate and can drop that topic too.
Have a good day, it hasn't been nice talking with you, but it's helped me realize that I should care less when I have no stake in it.
The challenge all these frontier labs have is: Their existing models can have high token profitability, but they have to invest everything they've got (and everything they're given) into new model R&D, because if they don't xAI will beat them, or Anthropic will beat them, or Google will. That's the nature of frontier spaces like this.
But the flip side is that model capability will plateau (it probably already is plateauing), and as that happens it becomes safer to aim for profitability. And I have zero doubt that OpenAI and Anthropic can find profitability. xAI, Perplexity, Mistral, and the other labs, I'm less sure about.
I think the cost is more in the thousands to cover inference. And no, I don’t think it’s been proven that an engineer is so much more productive as to justify a cost of thousands of dollars a month. The models are great for greenfield projects. But a lot of engineering is iterating on and maintaining an existing code base, a code base the engineer is fluent in. So the time savings is the difference between two tasks: writing code specific enough to implement a new feature, versus writing a prompt specific enough that the AI can write code specific enough to implement that feature.
Say that difference is like 10%. You save 10% of your time by using AI, meaning you have 4 more hours a week than you did before. Are you going to spend 4 more hours writing code? No. Some will be spent in meetings. Some will be spent reading Hacker News. Maybe you’ll get two hours a week of additional coding time. So you’re really only increasing your output by 5%.
So the employer gets 5% more from you if you have AI. If your salary is $10k per month, they wouldn’t pay more than $500 per month for that. And you’re probably costing Anthropic >$10k in inference per _week_. The economics just don’t make sense.
You can sub out the numbers here and play around with the scenario. I think the cost of inference needs to fall drastically, and I don’t think that happens soon. What might happen 10 years from now is that developers are given a laptop with a built-in GPU that does much better AI code auto-complete locally. That’s something an employer can pay $3k-5k for _once_, as a hardware investment. But the future of AI coding won’t be agents. It won’t be prompt engineering. The models aren’t going to get much better. It will be simple and standard and useful but unimpressive. It’s going to feel boring. When it’s working, when it’s mature, when it becomes economical, it always feels boring. And that’s a good thing.
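For what it's worth, here is the back-of-envelope math from the comment above as a runnable sketch; every number is the commenter's illustrative guess, not a measurement:

```python
# Back-of-envelope economics of an AI coding assistant.
# All inputs are illustrative guesses from the comment above.
salary_per_month = 10_000       # engineer's cost to the employer, $/month
time_saved = 0.10               # AI saves 10% of working time
coding_recaptured = 0.50        # only half the saved time becomes extra output

effective_gain = time_saved * coding_recaptured        # 5% more output
breakeven_price = salary_per_month * effective_gain    # max the employer pays
print(f"Employer break-even: ${breakeven_price:.0f}/month")     # $500/month

inference_cost = 10_000 * 4     # assumed >$10k/week in inference, ~$40k/month
print(f"Assumed provider cost: ${inference_cost:,}/month")
print(f"Gap to close: {inference_cost / breakeven_price:.0f}x")  # ~80x
```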
Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.
Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set, many smart people confused the results with a generative rather than an extractive mechanism.
...To the point that the entire field is known as “generative” AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics and uses them to extrapolate from a seed.
There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.
All of this labor can be automated through the application of existing semantic patterns to the data being presented, and to do so we suddenly do not need to fully characterize or elaborate the required algorithm to achieve that goal.
We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.
But it only works on the class of fully solved problems. Insofar as unsolved problems can be characterized as a solved system of generating and testing hypotheses, we may potentially also assail unsolved problems with this tool.
That doesn’t make it non-useful. It just makes it non-innovative.
Trial and error within a defined problem space is an area where automation can definitely be useful. Once again though, the result is not innovation but rather automation of labor.
There is a -lot- of labor requiring mind numbing repetition or iteration. The vast majority of labor falls into this category, and exists in fundamentally solved problem spaces, but still is complex enough that the algorithms involved are opaque. This is where the current type of AI can work miracles when trained with enough oblique data.
Let's unpack that a bit.
Capex is spending on capital goods, with the spending being depreciated over the expected lifetime of the good. You can't compare a year of capex to a year of revenue: a truck doesn't need to pay for itself in year 1, it needs to pay for itself over 10 or 20 years. The projected lifetime of datacenter hardware bought today is probably something like 5-7 years (changes to the depreciation schedule are often flagged in earnings releases, so that's a good source for hard data). The projected lifetime of a new datacenter building is substantially longer than that.
Somehow Zitron manages to make a comparison that's even more invalid than comparing one year of capex to one year of revenue: he basically ends up comparing a year of revenue to two years of capex. So now the truck needs to pay for itself in six months.
The way you'd need to think about this is to, for example, consider what the 2025 return was on the capital goods bought in 2024. But that's not what's happening here. Instead, the article basically expects a GPU that's to be paid for and installed in late 2025 to produce revenue in early 2025. That's not going to happen. In a steady state this would not matter much. But this is not a steady state: both capex and revenue are growing rapidly, and revenue will lag behind.
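A minimal sketch of the accounting point, assuming straight-line depreciation and made-up dollar figures:

```python
# Why "one year of capex vs one year of revenue" misleads.
# Figures are made up; the depreciation logic is the point.
capex = 100.0            # $B of GPUs/datacenters bought this year
lifetime_years = 5       # assumed useful life of the hardware
depreciation = capex / lifetime_years    # straight-line: $20B/year of cost

revenue = 30.0           # $B of AI revenue this year
print(f"Naive view:      {revenue - capex:+.0f}B (looks like a disaster)")
print(f"Accounting view: {revenue - depreciation:+.0f}B (covers this year's cost)")
# And comparing one year of revenue to TWO years of capex, as the
# article effectively does, makes the naive view twice as wrong.
```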
What about the capex being inflated and the revenue being low-balled?
None of us really know for sure how much of the capex spending is on things one might call AI. But the pre-AI capex baseline of these companies was tens of billions each. Probably some non-AI projects no longer happen so that the companies can plow more money into AI capex, but it absolutely won't be all of it like the article assumes. As another example, why in the world is Tesla being included in the capex numbers? It's just blatant and desperate padding of the numbers.
As for the revenue, this is mostly analyst estimates rather than hard data (with the exception of Microsoft, though Zitron is misrepresenting the meaning of run rate). Given what he has to say about analysts elsewhere, it seems odd to trust them here. But more importantly, they are analyst estimates of a subset of the revenue that GPUs/TPUs would produce. What happens when Amazon buys a GPU? Some of those GPUs will be used internally. Some will be used to provide genAI API services. Some might be used to provide end-user AI products. And some will be rented out as raw GPUs. Only the middle two would be counted as AI revenue.
I don't know what the fair and comparable numbers would be, am not aware of a trustworthy public source, and won't even try to guess at them. But when we don't know what the real numbers are, the one thing we should not do is use obviously invalid ones and present them as facts.
> I am only writing with this aggressive tone because, for the best part of two years,
Zitron's entire griftluencer schtick has always been writing aggressive and often obscenity-laden diatribes. Anyway, please don't forget to subscribe for just $7/month, and remember that he just loves to write and has no motive for clickbait or stirring up some outrage.
I’m not accusing you of anything, just giving the feedback that this line makes your post sound like AI slop. It is an extremely typical phrase when you prompt any current AI with some variation of “explain this post”. Honestly, the verbosity of the rest of your post also reinforces this signal. The typo here also indicates cutting and pasting things together: “Given what he has to say about . But more importantly,”
If it is not AI slop, then hopefully you can use this feedback for future writing.
(I explained the issue briefly in the first paragraph, but the article is 15k words. Hard to convincingly rebut that without the details.)
But thanks for the superficial style feedback and ignoring the substance.
He appears to only be doing that for the seven companies cumulatively, and in each company's case is only comparing year with year.
> Both capex and revenue are growing rapidly, and revenue will lag behind.
Even if his capex estimates are inflated, unless they're off by orders of magnitude, isn't the ratio between the two figures still alarming? What was, say, Amazon's initial capex for AWS compared to its revenue? Or in any other case where long-term investment bore fruit?
> What happens when Amazon buys a GPU?
What else are they using GPUs for? Luna cloud gaming? Crypto mining?
Says the PR guy who discovered AI a couple of years ago and now knows it all and that all the AI experts are wrong.
I mean it's a good rant but I don't think he gets the bigger picture.
Quoting the end of the article verbatim:
> And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools.
You are currently being "these people".
You don't need a huge technical baggage to understand that OpenAI still operates at a loss, and that there are at the very least some risks to consider before trying to rebuild all of society on it.
I've seen many people on HN (or maybe it was also you the other times) give this same reply again and again, "what do you know? You've not made your research, and if you made research, you don't have reliable sources, and if you have reliable sources, you're not seeing the bigger picture, and if you are seeing the bigger picture, you're not a tech guy, so what do you know?"
This essentially comes back to what the article also says, you are somehow held to crazy fucking standards if you ever say anything remotely critical, and then people will come up in HN threads and say "the human brain is basically also autocomplete, so genAI will be as good as the human brain soon™" (hey, according to your reply, shouldn't people be experts in the human brain to be able to post stuff like this?)
I think this is among the most unhinged paragraphs I've ever read in my entire life. It deeply, metaphysically struggles to frame what it's presenting in a bad light, but the data is so overwhelmingly positive that it just can't do it.
"Ugh, there's only twelve companies basically none of which existed two years ago making over a hundred million dollars in revenue. What a failure of an industry. And only three of them are making a half a billion? What utter failures. See, no one is using any of this stuff!!"
I think that Apple will hold on to their "AI" stuff for a while longer and wait until it really dies down. Then they will introduce a much better Siri and get rid of the "summarize your email" and "re-write this sentence" bullshit.