Even if the models don't get better, there is so much demand, even just B2B.
We have all these problems we could fix, but we are totally limited by availability.
Cloud providers are overwhelmed by demand: you can see it in how stingy they are with rate limits, and in how they won't even talk to you unless you've already spent a lot with them.
Yes... there are a lot of expensive efforts of dubious actual value under way. Basically every company is trying to see if they can replace workers with AI, driving this demand. I've had insight into several of these, and from what I've seen, not a lot of people need to worry about their jobs in the next few years.
Moreover, slaves aren't productive over the medium/long term. You don't get useful and lengthy performance from raw coercion, the same way you don't get truthful and actionable intelligence from torturing prisoners.
So no, an LLM has very little in common with a slave.
No one is going to get a billion-dollar investment for that. That's why all the corporate speak and marketing harps on about productivity, robot takeovers, and deus ex machina.
Normal people will use it for creative writing aid, scrapbooking and other extremely unprofitable non-technical stuff.
Personally, I was able to get mockups of interior designs from a photo of a building under construction using ChatGPT; this would've cost me both time and money if I had gone to a real designer.
Gemini has been summarising meetings I could not attend (scheduling conflicts, etc.) and saving me from hours of watching meeting recordings.
I was really skeptical because of the horrible results I had two years ago with Copilot and ChatGPT, but things have improved drastically, to the point that it's already empowering certain people/jobs while having the opposite effect on others.
Is it perfect? Nope. The mockups did have weird glitches. But they were 75% there and good enough for the task at hand. The meeting notes were as good as a real human's.
So it's definitely eroding more of these kinds of jobs.
And they're right. They cited consumer research to show the ambivalence of consumers towards these products as well.
It must be truly abysmal everywhere else then, because it doesn't show much value on highly technical tasks when I try.
Not sure if they are 'excited', but they are definitely using it.
Lots of interns and students also use the bots.
What I am annoyed by is having to tell users and management "no, LLMs can't do that" over and over and over and over and over. There's so much overhype and just flat out lying about capabilities and people buy into it and want to give decision making power to the statistics model that's only right by accident. Which: No.
It's a fun toy to play with and it has some limited uses, but fundamentally it's basically another blockchain: a solution in search of a problem. The set of real world problems where you want a lot of human-like writing but don't need it to be accurate is basically just "autocomplete" and "spam".
I don't think it's this. At least, I don't see a lot of that. What I do see a lot of is people realizing that AI is massively overhyped, and a lot of companies are capitalizing on that.
Until/unless it moves on from the hype cycle, it's hard to take it that seriously.
The average consumer does not appear to be particularly excited about products with AI features, though. A big example that comes to mind is Apple Intelligence. It's not like the second coming of the iPhone, which it should be, given the insane amount of investment capital and press in the tech sphere.
Unless both the App Store and Google Play Store rankings are somehow determined primarily by HN users, then it seems like AI isn't only a thing on HN.
[0]: https://app.sensortower.com/overview/6448311069?tab=category...
And these people don't know a thing about C, Java, CPUs, or RAM. They are not tech people.
Over the decades, the moment I hear non-tech people in public talking about a certain piece of tech in real life, and being somewhat enthusiastic about it, is the moment that tech has reached escape velocity and will go mainstream. Somewhat strangely, I only started using ChatGPT more because non-tech people started using it. And they use it much more than I do.
Just like people laughed at the "smartphone" in the early iPhone era. Lots of tech people, including (I believe) MKBHD, only got their first smartphone with the iPhone 4, and most consumers came even later. Meanwhile, I had watched the iPhone introduction keynote a dozen times before the thing even shipped. The adoption curve of any tech will never be linear.
The example pointed out at the start of the article is somewhat bizarre. AWS is only pausing colo. Apple Intelligence has more to do with Apple themselves than with AI. Intel (or the PC makers) not selling AI-enhanced chips is because consumers don't buy AI hardware; they buy AI functions. And so far nothing in Windows seems to be AI-enhanced in a way that specifically requires Intel's AI CPUs.
And I'm not even pro-AI or an AI optimist, yet I can see all that.
That isn't pure doomerism - there's plenty of room for AI assist, and people like using AI experiences themselves. AI as a product is here to stay, but the second order of products openly using AI is showing its limits.
Preface: I'm generally an AI skeptic.
There are a LOT of people who are doing "business work" for a living, which is significantly different than hands-on coding. AI gives these people a way to just automate all of the (maybe necessary) work that they don't want to do.
The final product being 80% good enough is fine. It is done and doesn't require them to spend time on something they don't want to do.
More often than not, it is at 80% today.
That's true, but quality was never the problem. Business leaders typically don't bat an eye about outsourcing to the second world or third world, even if the quality might be subpar.
Being a business leader is not letting "perfect" be the enemy of "good enough;" and there's apparently a mountain of fields where AI is "good enough." Or, at least, good enough to replace where the third world would have been doing the work.
By the way, it's not even that workers in developing countries are intrinsically worse; it depends on the task and the people. But no matter how good they are, communicating half a world away and across cultures definitely makes turning your business requirements into good work harder.
Welcome to ~15 years ago when everything was labelled data science. Rebranding statistics as data science was hot because it got investor dollars, and it could get you hired if you went to a bootcamp. Companies everywhere were hiring data "scientists" that barely knew how to program because that's what someone on their board or their investors wanted to see (or they thought they wanted to see it). Today it's AI (machine learning) which is an extension of that earlier data science phase, which itself was an extension of applied statistics branded with a trendier name.
And LLMs (generative AI) fall under the same trend as crypto systems a decade or so ago. If you toss it into your product (actually or just nominally), you get investor money. Because it's a fad. There may be some value in it, but the majority is not valuable; it's just trend following.
AI (as Indian companies are currently panicking about) is good enough to mostly replace their role as the lower-tier budget option.
Obviously lots of people like it, especially when they don't specifically think of it as AI.
"Do you want an AI to filter all your information?" "No way!"
"Do you want Google to summarize your search results?" "Yes please!"
Today I heard my wife ask our HomePod which US state was most similar in size to Germany. First, I was absolutely shocked that it gave a useful and correct answer. Well done, little dingus, and sorry to have doubted you. But more relevant, her goal wasn't to do a search. Her goal was to get an answer.
For the most part, people want an answer from Google, not a list of pages that might potentially answer them if they're lucky. Sometimes I do want to see a long list of results I can skim through for the most likely answer, especially if I'm looking for technical details on something. But if I ask "how long do I bake a frozen 20 lb turkey?", I really just want a correct answer.
So maybe people wouldn't actually say they want Google to summarize their search results if you phrased it exactly like that. But I bet most people, most of the time, would say that they wish the thing would just look at the 437 pages of results and tell them the answer.
There's a lot of skimming and scanning that we've come to expect as part of the process in locating a piece of information, and we do it quickly with practice, but that doesn't mean it has to be that way.
I enthusiastically agree that the human artist still deserves and requires pay. I would just rather that happen in a way that demeans their skills less than having to subsist on makework like this garbage.
Art classes go gaga today over the Clothed and Nude Majas, when the whole reason they existed was so some rich noble could have a "respectable" decoration that he could then hoist away when the party got raunchy enough and go "heh, look, she's nekkid now."
I've used many features of Apple Intelligence and Google Gemini, and they have made me more productive once I learned how to use them. Generally you get more complainers about a new product than people who quietly use it. Being in the HN bubble doesn't help either, IMHO.
The rest? 95% of people haven't even heard of them.
> "Why don't people like AI?"
> many such cases
It's a damn cliche at this point, why does everyone still do it?
Money attracts attention, both passively and actively. People see OpenAI, etc. spending billions on training and figure there must be something to it. OpenAI and others also probably spend quite a bit on marketing and social media bombing too, and a lot of that will likely be done by humans. If you can spend billions on training, what's a few million more on social media?
It doesn't matter what AI can actually do now. If companies like OpenAI can attract enough investment and customers to stay afloat long enough, then they may yet become indispensable in the future.
The line between bubble and self-fulfilling prophecy is thin.
https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...
They find that the general public is overall much more skeptical that AI will benefit anyone, much more likely to view it as harmful and much less excited about its potential than "AI experts". A majority of Americans are more concerned than excited. There is interestingly a large gender gap between men and women -- women are much less likely to view AI favorably, to use it frequently or to be excited about its potential than men.
There is some research to suggest that consumers are less likely to buy a product and less likely to trust it (less "emotional trust") when AI is used prominently to market it:
https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...
So I think the data suggests that while there is excitement around AI, overall consumers are much less excited about AI than people in the industry think and that it may actually impact their buying decisions negatively. Will this gap go away over time? I don't know. For any of you working in tech at the time, was there a similar gap in perceptions around the Internet back in the days of the dot com bubble?
The other problem as pointed out is that MANY things are labeled as AI, ranging from logistic regression to chatbots, and probably there is more enthusiasm around some of these things than others.
Some credit card companies have botched chatbot flows: "lost/forged credit card report" and "talk to a person" are essential support processes, but the bot requires you to enter your PIN to get through to them.
(And if the request for a new credit card is faked, you're out of luck.)
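The broken rule here can be sketched in a few lines. This is a hypothetical illustration (the intent names and function are invented, not any real vendor's API): intents that exist precisely because the caller may have lost their credentials should never be gated behind those same credentials.

```python
# Hypothetical support-bot routing rule. Intents that exist because the
# caller may no longer hold their credentials (a stolen card, a forgotten
# PIN) must remain reachable without PIN entry.
UNAUTHENTICATED_INTENTS = {
    "report_lost_card",
    "report_forged_card",
    "talk_to_person",
}

def requires_pin(intent: str) -> bool:
    """Return True only for intents that can safely demand PIN entry first."""
    return intent not in UNAUTHENTICATED_INTENTS
```

The botched flows described above effectively make `requires_pin` return True for everything, locking out exactly the callers who need help most.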
During the dot-com bubble, inasmuch as it represented a turning tide, this trickle had reached a tipping point and we witnessed a tsunami of innovative products that consumers were genuinely fascinated by. There were just too many of them for the market to sustain them all, and a correction followed, as you would expect.
This AI story is basically the opposite, much like the blockchain story. Many investors, and some consumers, who have living or borrowed memory of the dot-com bubble or the smartphone explosion really, really want another opportunity to cash in on an exponentially expanding market and/or live through a new technological revolution, and are basically trying to will the next one into existence as soon as possible, independent of any organic demand or practicality.
In contrast to blockchain hype, maybe it'll work here. Maybe it won't. But it's fundamentally a different scenario from the dot-com bubble either way.
I think this hits the nail on the head. At least, it's the only explanation I've heard that makes any sense.
It's fine to express your opinion on AI, whether positive or negative. It's even fine to share anecdotes about how other people feel. Just don't say that's how "most people" feel without providing some actual evidence.
That's the ultimate test: how many users will pay for something like this?
I also have questions about usage patterns, but I don't think OpenAI publishes much data on how their platform is most commonly used.
Obviously one report is not the end of the discussion. And if more research is done that indicates that most people really are interested in AI, I'll shift my beliefs on the matter.
I was interested in that 400 million weekly user number you posted, so I did a little digging and found this source [1] (I also looked through their linked sources and double checked elsewhere, and this info seems reasonably accurate). It seems like that 400 million figure is what OpenAI is self-reporting, with no indication how that number is being calculated. Weekly user count is a figure that's fairly easy to manipulate or over-count, which makes me skeptical of the data. For example, is this figure just counting users that are directly interacting with ChatGPT, or is it counting users of services that utilize the ChatGPT API?
In addition, someone can use ChatGPT while having a neutral or negative opinion of it. My linked source [1] indicates that around 10 million people are actively paying for a ChatGPT subscription, which is a much more modest number than 400 million weekly users. There clearly are a lot of people who use and like AI, but that doesn't mean the majority of the population feels positively about it.
In my experience, this makes HN probably the most pro-AI spaces around. Most people in my life feel more negatively about AI, without a lot of defense for it (even if they do use it). The only space in my life that is more pro-AI than HN is when people from the C-suite are speaking about it at work meetings :/
If you can actually use it to build a better product on its own terms, great. But as has ALWAYS been true, a product has to actually be good.
I can't help but think of the iPhone 16 series's top-line marketing: "Built for Apple Intelligence." In practice, the use cases have been lackluster at best (e.g., Genmoji), if not outright garbage (e.g., misleading notification summaries).
I feel like a lot of AI use cases are solutions looking for a problem, and really sucking at solving those problems where the rubber meets the road. I can't even get something as low-stakes and well-bounded as accurate sports trivia and stats out of these systems reliably, and there's a plethora of good data on that out there.
I wasn't drawing a paycheck from tech at the time, but I was a massive nerd, and from my recollection: yes, absolutely. Dialup modems were slow, and you only had The Internet on a desktop computer. Websites were ugly (yes, the remaining 1.0 sites are charming, but that's mainly our nostalgia speaking) and frequently broke. It was (or could be) expensive: you had to pay for a second phone (land!) line (or else deal with the hassle of coordinating phone calls), plus probably an "internet package" from your phone company, or else pay by the minute to connect; and, of course, rural phone providers were slow to offer any of those options. Commerce, pre-PayPal, was difficult: I remember ordering things online and then mailing a paper check to the address on the invoice!
Above all, we underestimate (especially in fora like this) how few people actually were online. I don't remember exact numbers at any particular time, but I remember being astonished a few times; the 'net was so ubiquitous in my and my friends' lives that I'd think, "What do you mean, only a small minority of people have ever used the internet?" For people who weren't interested in tech (the vast majority), seeing web addresses and "e[Whatever]" all over the place was mainly irritating.
Those elements and attitudes are certainly analogous to AI Hype today. Whether everything else along that path will turn out roughly the same remains to be seen. From my point of view, looking back, the most-hyped (or maybe just most-memorable) 1.0 failures were fantastic ideas that just arrived ahead of their time. For instance, Webvan = InstaCart; Pets.com = Chewy; Netbank = any virtual bank you care to name; Broadcast.com = any streaming video company you care to name; honorable mention: Beenz (though this might be controversial) was the closest we ever came to a viable micro-payments model.
The necessary infrastructure for (love it or hate it) a commercialized web was the smart-phone, and 'always on' portable connectivity. By analogy, the necessary infrastructure for widespread, democratized AI (whether for good or for ill) may not yet exist.
These two groups hate each other, and AI promises the holy grail to both: devs can more easily learn the 1000th tech stack between yoga classes while pretending to work, and the business side dreams of finally firing all the devs and keeping all that money for themselves. Neither group has a clue how this dream will be fulfilled, but they want to believe, because computers can now talk. So enter the entrepreneurs and VCs to gaslight everyone involved with fake stories of how 35% of the LOC at Google was coded by AI (erm, accepted IDE autocomplete), laughing all the way to the bank while they vacuum up all that dumb money from poor befuddled executives who roleplayed their way into eight-figure budget responsibility by being tall white men with blue eyes.
This sounds like that Yogi Berra-ism: "Nobody goes there anymore; it's too crowded."
I suppose well over half of the people who read HN are too young to remember the dotcom craze. Everyone had every scrap of money tied up in tech stocks. IMO, the hype over AI is relatively small compared to other hype cycles. The goofiest part was the endless predictions about the singularity, and "you don't know what exponential growth looks like, man!". I mean, it can still happen, but for a while that's what AI was all about.
I think this is a really interesting observation actually
The dotcom boom describes a period of time where money was flying around like crazy, the economy went wild
During this "AI boom", investment and the economy are cratering. It's not even remotely comparable.
https://trends.google.com/trends/explore?date=today%203-m&ge...
Google Gemini has 350m monthly users
I hate generative AI and refuse to use it, but I hear of people using it all the time in low-stakes contexts:
1) recipes (the cookies might suck but they won't be poisonous)
2) low-quality infotainment (NotebookLM)
3) OpenAI proudly celebrating that horrible Studio Ghibli crap - unlike dishonest math benchmark scores, garish slop on demand actually brings in customers!
4) ChatGPT boyfriend scams :( https://news.ycombinator.com/item?id=42710976
And I've also heard of people using it at work and being severely criticized:
1) ChatGPT-drafted license agreements that the executives would never agree to
2) summarizing documents you were too lazy to read and missing crucial context
3) coworkers being personally offended (or superiors being angry) about a ChatGPT email
Programmers and bottom-barrel creatives have the only reliable success with LLMs when there's real money at stake. Then there are notable but low-margin use cases like dyslexia assistance, Be My Eyes, etc. For everyone else, it's just a nifty doo-dad.
Category. So generating some text, some images, or something else occasionally is pretty cool. Maybe asking some questions and getting something explained, or searching when needed. And of course, chatting when bored.
But for the general person, I don't really think the use is that frequent. And so I really doubt this can be sold as a service to the vast majority of the population, the same way search can't.
Literally every person I talk to in every single industry uses AI daily: Community managers for sending different email content, sales managers for emailing marketing content and researching prospects and are actively researching agents to help communicate with people automatically, govt workers for generating RFPs, defense industry, coders, band members researching audio engineering, real estate marketing house descriptions, just to name a few. Everyone also says they love it and makes their job so much easier. Not a single person has ever said what these articles headline or try to claim: “man this is awful seriously AI is such a dumb concept and it’s making life worse, no one asked for all this AI to get in my way all the time.”
Obviously my experience is anecdotal, but it makes it very hard for me to understand this kind of negative content: who it's for and whom it's serving. I think people are aware of auto-generated content, and the words here ring so empty to me that I feel it has to be the case for others as well.
Pessimists will get left in the dust of these machines whose shoulders the optimists ride on.
I sure hope this is the case, I sell services and contract with lots of governmental entities (municipal, county, regional, state, and federal) and I love to make money.
If an RFP has flaws I will ignore it at bid time and then get a nice fat contract change order out of it to pad my profit margins. I’m looking forward to clipping every government dumb enough to use AI to generate RFPs for as much money as I possibly can.
Any way you can share which governments are using AI for RFP generation? ;)
He skips the main reason, which is that it will get better in the future. Today, chatbots; tomorrow, I, Robot/Terminator/Her/AI/The Matrix, etc. Or better, since the movies tend to be biased towards disaster.
Incidentally there aren't really movies about future crypto/NFT/dotcom bubbles. AI and robots are different.
ChatGPT alone has over 100 million active users per month. Not necessarily paying users, and not necessarily a number that's going to double overnight again and require another server blitzscale; but it's comfortably cemented.
https://techcrunch.com/2025/03/06/chatgpt-doubled-its-weekly...