If they arrange financing but don't hold the loans themselves, they get paid without assuming any risk, yes?
Yes, that's correct. It's typically how investment banks operate, their main business is facilitating transactions. They'll turn around and sell the loans to hedge funds and private investors.
This obviously means they have an incentive to encourage buildouts and to downplay the risks of the loans.
No Javascript required
  x=https://www.ft.com/content/7052c560-4f31-4f45-bed0-cbc84453b3ce
  echo "url=$x" | curl -K /dev/stdin -A "Mozilla/5.0 (Java) outbrain" > 1.htm
  firefox ./1.htm

We had books before and after the printing press was adopted; we could just produce more of them, with lower-quality materials.
The areas where blockchain-cum-AI grifters think they will succeed are art (which is deliciously ironic) and thoughtful work. The areas where current AI actually delivers value are the margins few people enjoy: filler emails, blockbuster movies that few go to theaters to see, compliance checklists, boilerplate code, and repetitive low-value workflows.
Still, it will be interesting to see where else we can build margins. AlphaFold is a great example of where GenAI can do well.
They think the employers are going to line up to pay billions for AI workers in order to avoid paying trillions in benefits to human ones?
See, you can keep adding middle layers, but eventually you'll find there's no one with any money at the bottom of this pyramid to prop the whole thing up.
When the consumer driven economy has no critical mass of consumers, the whole model kinda goes belly up, no?
Also, most ChatGPT users have a "personalization" prefix in the system prompt (which contains things like the date/time), which would break caching of the actual user query.
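Prefix caching only helps while the leading tokens are byte-identical, so a per-user (and per-day, if it embeds the date) system prompt invalidates the shared prefix. A minimal sketch of why, using a hypothetical hash-keyed prefix cache (real serving stacks key on token blocks, but the principle is the same):

```python
import hashlib

def cache_key(system_prompt: str, user_query: str, prefix_len: int) -> str:
    """Key a (hypothetical) KV-prefix cache on the first prefix_len chars.

    Any byte that differs early in the prompt changes the key, so the
    cached prefix computed for one user is useless for another.
    """
    prefix = (system_prompt + user_query)[:prefix_len]
    return hashlib.sha256(prefix.encode()).hexdigest()

shared = "You are a helpful assistant."
query = "What is the capital of France?"

# Two users with the same static system prompt share a cache entry...
assert cache_key(shared, query, 32) == cache_key(shared, query, 32)

# ...but a personalization/date prefix makes every user (and day) a miss.
alice = "Date: 2025-06-01. User prefers terse answers. " + shared
bob = "Date: 2025-06-01. User is learning Spanish. " + shared
assert cache_key(alice, query, 32) != cache_key(bob, query, 32)
```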
So maybe not so much anymore? That would be true if it were *pure* search on Google's part, but it isn't anymore.
Definitely not for all. I haven't figured out what the differentiator is here, but many queries are excluded.
I'm as anti-AI as it gets - it has its uses, but it is still fundamentally built on outright sharting on all kinds of ethics, and that's just the training phase - actual usage is filled with even more snake-oil salesmen and fraudsters, to say nothing of all the human jobs that will be irreversibly replaced by AI.
But I think the AGI people are actually correct in their assumption - sometime in the next 10-20 years, the AGI milestone will be hit. Most probably not on an LLM basis, but it will be hit. And societies are absolutely not prepared to deal with the fallout - quite the contrary: the current US administration in particular is throwing us all to the multibillionaire wolves.
You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.
Second, if AGI means that ChatGPT doesn't hallucinate and has a practically infinite context window, that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention. We'll adapt just like we adapted to using LLMs.
Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination beyond looking at how exponential curves work.
> that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention.
A decent-enough AI, especially an AGI, will displace a lot of white-collar workers - creatives are already getting hit hard, and that's with AI still unable to paint realistic fingers - and the typical "paper pusher" jobs will also be replaced by AI. In "meatspace", i.e. robots doing tasks that are _for now_ not achievable by robots (say, because haptic feedback is lacking), there has been pretty impressive research in recent years. So a lot of blue-collar/trades jobs will go away as well once the mechanical bodies are linked up to an AI control system.
> We'll adapt just like we adapted to using LLMs.
Yeah, we just gave the finger to those affected. That's not adaptation; that's leaving people to be eaten by the wolves.
We're fast heading for a select few megacorporations holding all the power when it comes to AI, and everyone else will be serfs or outright slaves to them instead of the old scifi dreams where humans would be able to chill out and relax all day.
Only assuming there is something to be found apart from the imagination itself. We can imagine AGI easily, but that doesn't mean it exists - and even if it does, that we will discover it. By that logic - we want something and we spend a lot of compute resources on it - the success of a project like SETI would be guaranteed based on funding alone.
In other words, there is a huge gap between something we are sure can be done but that requires a lot of resources, like a round trip to Mars, where we can even speculate it can be done within 10-20 years (and still be wrong by a couple of decades), and something we merely hope to discover based on the number of GPUs available, without the slightest indicator of success other than funding and our desire for it to happen.
A huge amount of public-service and corporate clerkwork could be handled by an AI capable of understanding paperwork and applying a well-known set of rules to it. Take a building-permit application: to replace a public servant, an AI has to actually read a construction plan, cross-reference it with building codes and zoning, and check the math (e.g. statics). We're not quite there yet - with an emphasis on the yet - especially since, at the moment, even AI compositions with agents calling specialized models can't reliably detect when they lack sufficient input or knowledge, and just hallucinate.
But once this fundamental issue is solved, it's game over for clerkwork. Even assuming the Pareto principle (the first 80% are easy, only the remaining 20% are tough), that will cut 80% of employees and, with them, the managerial layers above. In the US alone, about 20 million people work in public service. Take 50% of that (to account for jobs that need a physical human, such as security guards, police and whatnot), which gives 10 million clerkwork jobs; take 80% of that and you get 8 million unemployed people, in government alone. There's no way any social safety net can absorb that much of an impact, and as said, that's just government - the private sector employs about 140 million people; do the same calculation and you get 56 million people out of a job.
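The back-of-the-envelope above, spelled out (the 50% physical-job share and 80% automatable share are the comment's assumptions, not data):

```python
def displaced(workforce_millions: float, desk_share: float,
              automatable: float) -> float:
    """Workers (millions) displaced if `automatable` of desk jobs go away."""
    return workforce_millions * desk_share * automatable

# Public sector: ~20M employees, assume half are desk jobs, 80% automatable.
gov = displaced(20, 0.50, 0.80)    # ~8 million
# Private sector: ~140M employees, same assumed shares.
priv = displaced(140, 0.50, 0.80)  # ~56 million
print(gov, priv)
```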
That is what scares me, because other than "AI doomers", no one on the Democratic side seems to have this issue even on their radar, and the Republicans want to axe all regulation of AI.
> without slightest clue of success other than funding and our desire for it to happen
The problem is, money is able to brute-force progress. And there is a lot of money floating around in AI these days, enough to actually make progress.
[1] https://www.statista.com/statistics/204535/number-of-governm...
However, at least for LLMs, the progress slowed down considerably so we're now at the place where they are a useful extension of a toolkit and not a replacement. Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt. (With a huge disclaimer: if history taught me anything, it is that all predictions are as useful as a coin toss.)
Yeah, but for that, politicians need to prepare as well, and they don't. All that many of today's politicians care about is getting reelected, or at the very least lining their pockets. In Germany, we call this "nach uns die Sintflut" [1], roughly "after us, the flood".
Here in Germany, we at least have set up programs to phase out coal over decades, but that was for a few hundred thousand workers - not even close to the scale that's looming over us with AI.
If this whole endeavor somehow becomes profitable it will be a miracle.
If they took away the free, I'd pay $20 and be thankful they kept it at $20.
I love doing things myself.. I mow my lawn, change my oil, change my water heater, and try to never use frameworks or libraries. But not using LLMs seems insane. If they weren't free, you wouldn't use them?
But as a general statement, you can't just Google a comprehensive summary about beta-glucans from chanterelle mushrooms - dosages, cooking methods, immune benefits and mechanisms - and get a 10-minute read about exactly what you asked for. But with Gemini Deep Research you can.
LLMs are good at generating text that sounds authoritative. They’re great for creative writing to share for a laugh with friends. I’m not at the point where I’m willing to use them for important work, let alone pay for them.
(I’ve yet to try them as a coding assistant, though. Maybe that’s the missing link.)
2. Unless I magically have a plan for talking to an expert in HVAC repair, and not just an idiot in HVAC, I can diagnose my HVAC unit with AI just fine. And I did. And no, it wasn't as simple as "well duh, every post online says it's the large capacitor".
Human: Provide some weight loss tips
AI: I'll get right on it! But before I do, have you had dinner yet? KFC's new finger-licking MEGA feast will bust your hunger for only $19.95. Click here to order.
The number of things I've learned by asking ChatGPT very specific, technical questions (mostly with web search turned on, though sometimes it's not even necessary) - things I can immediately verify and/or use, such as small bash commands/scripts, visualizations, and diagrams - is worth hundreds of dollars per month on its own. These are things I would never have learned because they were buried somewhere among 30 answers/comments, sometimes pointing to 20 more terribly-hard-to-read pages or manuals riddled with content irrelevant to my question, somewhere in the first page of web search results. Maybe it's an attention-span question? I certainly won't spend more than 10 minutes reading anything unless it's interesting or required in the most extreme sense of the word for my job (books on quantum mechanics, general relativity, and topology all fall into the former category - bash and pandas documentation fall into neither).
I'm convinced I've saved _at least_ low thousands per week by using coding assistants (mostly Claude Code in my case, though that's personal and likely to change at some point), as evidenced by the amount of work I'm able to finish, get paid for, and maintain. I'm not vibe coding, mind you - most of the time, I have an almost complete mental model of what I want after a couple of hours of thinking, and the only thing left is to type the code, at which point I'd previously feel bored, since the fun part (the thinking) was over.
Edit: I have 20 years of experience with code, 15 in the industry as a SWE (been coding since I was 13)
But I suppose that answers how they may make LLMs profitable. They could cripple or even eliminate normal search until paying for LLMs is the only option.
https://en.wikipedia.org/wiki/Dot-com_bubble#Bubble_in_telec...
https://www.historynewsnetwork.org/article/75-years-later-pu...
Indoor strawberry farms.
If chatbots turn out to be pointless for anything other than translation, copy-editing, and developer assistance, we can still use the GPUs for robotics. For example, farmers could have robots in the fields with cameras and arms for picking fruit, while outsourcing the machine-vision and motion-control tasks to a data centre. That makes a lot more sense than lugging batteries and GPUs around literally "in the field", with the mud, heat, humidity, and vibration!
PS: As far back as the early 1990s I remember reading articles about "nobody needs 'X' upgrade", which was invalidated immediately, every time. I now have more computer power in my pocket than my first six or seven computers combined. I "found a use" for that "absurd" computer power. Not to mention that 5G to my own personal phone has bandwidth exceeding the country's entire international telecommunications bandwidth in 1990!
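Whether remote inference works "in the field" comes down to the latency budget. A rough sketch with assumed numbers (the RTT and inference figures below are illustrative guesses, not measurements):

```python
# Rough round-trip latency budget for offloading a field robot's vision
# and motion planning to a data centre. All figures are assumptions.
budget_ms = {
    "camera capture + encode": 15,
    "5G uplink + downlink RTT": 25,
    "remote inference (vision + planning)": 40,
    "actuator command processing": 10,
}
total = sum(budget_ms.values())
print(f"round trip: {total} ms")

# Fruit picking tolerates a slow outer loop (on the order of 100 ms), so
# offloading is plausible; a fast inner stabilization loop would still
# need to run locally on the robot.
assert total <= 100
```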
Yeah, the author could have used "when" instead of "if".
The most realistic success scenario would be rampant inflation eating the debt.
This is why there is this huge gold rush for infrastructure, why these players have such sky-high valuations, and why investors are scrambling to pour in even more money. The focus on AGI and ASI is a distraction and only relevant to the frontier model labs (more on them later). Even if AGI/ASI is never achieved and all model development were frozen today, we'd have decades of growth ahead of us.
The only risk is that all these productivity gains are a mirage (cue that METR paper) and at some point people will realize it and the whole scheme will come tumbling down. However, studies like [1] contradict that premise and are already finding productivity gains that match various other RCT-based studies:
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
> Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
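The two quoted numbers are consistent with a simple time-share decomposition: a 33% boost during genAI-assisted hours yields a ~1.1% aggregate gain if roughly 3% of all work hours involve genAI. (The usage share here is backed out for illustration, not taken from the paper.)

```python
hourly_boost = 0.33      # productivity gain during genAI-assisted hours
aggregate_gain = 0.011   # economy-wide estimate from the quote

# Implied share of total work hours spent using genAI:
implied_share = aggregate_gain / hourly_boost
print(f"implied usage share: {implied_share:.1%}")  # ~3.3%

# Sanity check the other way: 3.3% of hours at +33% ~= +1.1% overall.
assert abs(implied_share * hourly_boost - aggregate_gain) < 1e-12
```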
(Not to mention my own anecdotal experiences and the rising frequency of people posting about their successes with tools like Claude Code on HN and other social media.)
To me there is virtually no risk that this data center capacity will be unused. There probably is a bubble, but only as far as the frontier labs are concerned; given that models costing millions to train still get commoditized rapidly, it is not clear if they can capture the value produced by their models to sustain their valuations.
But those models require infra to run and that's exactly what the hyperscalers are stockpiling. The frontier labs will need to get in on that game to survive long term.
[1] https://www.stlouisfed.org/on-the-economy/2024/sep/rapid-ado... (full paper: https://s3.amazonaws.com/real.stlouisfed.org/wp/2024/2024-02...)
[2] https://arstechnica.com/ai/2025/07/so-far-only-one-third-of-...
Imagine a Beowulf cluster of those. /s
Now seriously, why not give them to people who need them (shared access)? I know, I know: profit, capitalism.