I don't buy it at all.
This sounds like complete and total bullshit to me.
It's a fucking dud.
What evidence are you aware of that counters it?
In the real world, it's immensely useful to millions of people. It's possible for a thing to both be incredibly useful and overhyped at the same time.
Try asking "what evidence supports your conclusions?".
At least the statement starts with a conditional, even if it is a silly one.
If you know your growth curve is ultimately going to be a sigmoid, fitting a model with only data points before the inflection point is underdetermined.
> If AI stays on the trajectory that we think it will

is a statement that no amount of prior evidence can support.
AI boosters are going to spam the replies to your comment in attempts to muddy the waters.
That being said, the current models are transformative on their own; once the systems catch up to the models, that will be glaringly obvious to everyone.
Also, you can most certainly fit a sigmoid function from past data points alone. Any projection will obviously have error, but your error at any given point should be smaller than for an exponential function with the same sampling.
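To make the underdetermination point concrete, here's a minimal sketch (with made-up synthetic data, not anything from the thread): fitting a logistic curve to samples taken entirely before the inflection point converges fine on the observed data, but the ceiling parameter L comes back with an enormous uncertainty, because an early sigmoid and an exponential look nearly identical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    # Standard logistic: ceiling L, growth rate k, inflection at t0.
    return L / (1 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
true_L, true_k, true_t0 = 100.0, 1.0, 10.0

# All samples taken well before the inflection point at t0 = 10.
t_early = np.linspace(0, 6, 30)
y_early = logistic(t_early, true_L, true_k, true_t0) + rng.normal(0, 0.05, t_early.size)

popt, pcov = curve_fit(logistic, t_early, y_early, p0=[50, 0.5, 5], maxfev=10000)
perr = np.sqrt(np.diag(pcov))

# The fit matches the early data closely, but the standard error on the
# ceiling L is typically huge: pre-inflection data barely constrains it.
print(f"fitted L = {popt[0]:.1f} +/- {perr[0]:.1f}")
```

The same data fit with a pure exponential would have smaller pointwise error over the observed range, which is exactly why "it's been exponential so far" tells you little about where the curve flattens.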
The moat will be how efficiently you convert electricity into useful behavior. Whoever industrializes evaluation and feedback loops wins the next decade.
In order to help reduce global poverty (much of which was caused by colonialism), it is the moral and ethical duty of the Global North to adopt LLMs on a mass scale and use them in every field imaginable, and then give jobs to the global poor to fix the resulting mess.
I am only 10% joking.
You can get your drinking water from a utility, or you can get bottled water. Guess which one he's gonna be selling?
And if you think for a second that the "utility" models will be trained on data as pristine as the data that the "bottled" models will be trained on, I've got a bridge in Brooklyn to sell you. (The "utility" models will not even have any access to all of the "secret sauce" currently being hoarded inside these labs.)
Essentially we can all expect to go back to the Lexis-Google type dichotomy. You can go into court on Google searches, nothing's stopping you. But nearly everyone will pay for LexisNexis because they're not idiots and they actually want to compete.
Lots of assumptions about the path to get there, though.
And interesting that he's measuring intelligence in energy terms.
My product is going to be the fundamental driver of the economy. Even a human right!
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
How?
> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chip fabs and new energy production much faster than we are, and we want to help turn that tide.
There's the appeal to the current administration.
> Over the next couple of months, we’ll be talking about some of our plans and the partners we are working with to make this a reality. Later this year, we’ll talk about how we are financing it
Beyond parody.
But for real, the leap from GPT4 to GPT5 was nowhere near as impressive as from GPT3 to GPT4. They'll have to do a lot more to give any weight to their usual marketing ultra-hype.
All that being said, it does seem like OpenAI and Anthropic are on a quest for more dollars by promoting fantasy futures where there is not a clear path from A to B, at least to those of us on the outside.
Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI. AI nearly solved protein folding. It is beginning to unlock personalized medicine. AI absolutely will be a fundamental driver of the economy in the future.
Being skeptical is still reasonable, but flippant dismissal of legitimately amazing accomplishments is peak HN top comment.
* Nvidia invests 5 billion in Intel
* Nvidia and OpenAI announce partnership to deploy 10 gigawatts of NVIDIA systems (investment of up to 100 billion)
* This indirectly benefits TSMC (which implies they'll be investing more in the US)
Looks like the US is cooking something...
It could start by figuring out how to keep kids from using AI to write all their essays.
If a tenth of this happens, and we don't build a new power plant every ten weeks... then what?
The growth in energy use comes from the increase in output tokens, driven by rising demand for them.
Models do not get smarter the more they are used.
So why does he expect them to solve cancer if they haven't already?
And why do we need to solve cancer more than once?
Because I do agree with him on that front. The question is whether the AI industry will end up like airplanes: massively useful technology that somehow isn't a great business to be in. If indeed that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.
Think about the legal field. The masses tend to use Google, whereas the wealthy and powerful all use LexisNexis. Who do you think has been winning in court?
Apple: Privacy is a fundamental human right. That is why we must control everything, and stop our users from sharing any form of data with anyone other than Apple.
OpenAI: AI is a fundamental Human right.....
There is something about Silicon Valley that is philosophically very odd for the past 15 to 20 years.
Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than to "create unstoppable super-cancer"?
* AI is a “fucking dud” (you have to be either highly ignorant or trolling to say this)
* Altman is a “charlatan” (definitely no but it does look like he has some unsavory personal traits, quite common BTW for people at that level)
* the ridiculousness of touting a cancer cure (I guess the post is targeted to the technical hoi polloi, with whom such terminology resonates, but also see protein 3D structure discovery advances)
I found the following to be interesting in this post:

1. Altman clearly signaling affinity for the Abundance bandwagon with a clear reference right in the title. The post is shorter but has the flavor of Marc Andreessen's "It's Time to Build" post from 2020: https://a16z.com/its-time-to-build/
2. He advances the vision of "creat[ing] a factory that can produce a gigawatt of new AI infrastructure every week". This may be called frighteningly ambitious at a minimum: U.S. annual additions have been ~10-20 GW/year for solar builds (https://www.climatecentral.org/report/solar-and-wind-power-2...)
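A quick back-of-envelope check on that ambition, using the ~10-20 GW/year U.S. solar figure cited above (rounded assumed values, not precise statistics):

```python
# One gigawatt of new AI infrastructure per week, annualized, compared
# against recent U.S. annual solar capacity additions (~10-20 GW/year).
gw_per_week = 1
weeks_per_year = 52
ai_gw_per_year = gw_per_week * weeks_per_year   # 52 GW/year

us_solar_gw_per_year = (10, 20)                 # rough recent range
ratio = [ai_gw_per_year / s for s in us_solar_gw_per_year]

print(f"{ai_gw_per_year} GW/year of AI buildout, "
      f"i.e. {ratio[1]:.1f}x-{ratio[0]:.1f}x all current U.S. solar additions")
```

In other words, the proposed factory alone would add roughly 2.6x to 5.2x the entire country's recent annual solar buildout, every year.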
what's in between this line and the next:
"or it might not. Now give me moar money!!!!!"
Did Donald call him?
Interestingly, it doesn’t seem to be linked from the “news” section of their website.