> 2026 will likely see the arrival of systems that can figure out novel insights
Interesting to compare this level of confidence with recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.
[1] https://www.windowscentral.com/software-apps/google-ceo-agi-...
[2] https://www.tomshardware.com/tech-industry/artificial-intell...
well yeah, you can't continue the ponzi scheme if you say "aw shit, it's gonna take another 10 years"
Microsoft/Google will continue to exist without it; OpenAI won't
Here he says:
> Intelligence too cheap to meter is well within grasp.
Six months ago[0] he said:
> We are now confident we know how to build AGI as we have traditionally understood it.
This time:
> we have recently built systems that are smarter than people in many ways
My summary: ChatGPT is already pretty great and we can make it cheaper and that will help humanity because...etc
Which moves the goal posts quite a bit vs: we'll have AGI pretty soon.
Could be he didn't reiterate we'd have AGI soon because he thought that was obvious/off-topic. Or it could be that he's feeling less bullish, too.
Does anyone know if there are well-established scaling laws for reasoning models, similar to Chinchilla scaling? (i.e., is the above claim valid?)
I heard similar things in my college dorm, amid all the hazy smoke.
It’s very difficult to take this stuff seriously. It’s like the initial hype around self driving cars wound up by 1000x. Because we got from 1 to 100 of course we’ll get from 100 to 200 in the same amount of time. Or less! Why would you even question it?
I would not be opposed to living in a future where I can personally live in space. It would be quite fun.
[1] (paywalled) https://fortune.com/2025/06/06/google-deepmind-ceo-demis-has...
Unless people are envisioning living in magical holodecks all the time, with magical food replicators? But those don't come along automatically with "space", no matter how much Star Trek you've seen...
(my comparison is incomplete though, it doesn’t factor in that these two also have a huge financial incentive to be hyping this stuff up)
What is that man smoking and can I have some?
Five years wouldn’t be enough time to “colonise” Antarctica, let alone another planet (just one!), and certainly not anything at a larger scale, even if we were visited by aliens tomorrow and they gifted us five hundred spaceships to give us a boost.
> “If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.”
You are confusing "era of maximum human flourishing ... begin to happen" with "have colonized galaxy".
It turned out that automating creative output, like art or writing, at least well enough to be competitive with entry-level humans, was much easier, and the consequences of bad output are much less serious: far short of the death, serious injury, and major financial liability at stake with self-driving. Fields like concept art and copy editing are already being devastated by generative AI. Voice-overs too.
And looking at Google's Veo, I can easily see this tech being used to generate short ads, especially for YouTube, where before you would have had to hire human actors, cameramen, sound/lighting people, and an editor.
Wait... Self driving cars actually happened. Seems like the hype was true, but 5 years later than expected and now people just think it's normal so they feel the hype was overblown.
I've always believed we would see self-driving cars in our lifetime. But they have absolutely not arrived outside the sheltered enclaves of a handful of tech-centered cities.
In my opinion, it's similarly off-base to claim that 'mach-speed consumer air travel' "arrived" just because for a few decades you could catch a Concorde flight between NYC and London.
You’re right that they have a long way to go before being fully mainstream. But thousands of people are using them every day in multiple major cities. That’s a pretty big milestone.
Not able to handle what the streets of Los Angeles throw at it, however.
None of this is even close to happening. Waymo is impressive tech, for sure, but what they're proving is that this is just not currently solvable as a general problem. The best we can do is to meticulously craft a solution for one constrained use case - driving in SF, driving in LA, etc. And even then, we need to pick certain use cases - it's not like they could use their current tech to start autonomous service in Juneau Alaska. Or even NYC, most likely.
I'll believe self-driving is solved when I can order an automated ride to a BLM/forest service dirt road I only have GPS coordinates for and that has cell service on the hilltop only.
Might as well move it to "as long as there's no semi crossing the road" to cover for Tesla.
Centrally-managed clean and rigid setting is not the full extent of the problem domain.
As I said, I'll believe it's solved when these limitations are no longer necessary for it to work. You can choose to believe it based on a research project or marketing materials.
For most people, self-driving will have "happened" when most cars are self-driving. Similarly, they consider the smartphone as having "happened" not when the Palm came out, but somewhere around the iPhone 4.
Like, right now, immunotherapy for cancer has "happened" - there's real patients really doing it, for real. But most cancer patients don't consider immunotherapy as "happened", they're still getting chemo. Once chemo is obsolesced, we can consider immunotherapy as "happened". From then, to now, may be on the scale of decades.
This isn't correct: people want good software and good art, and the current trajectory of how LLMs are used on average in the real world unfortunately runs counter to that. This post doesn't offer any forecasts on the hallucination issues of LLMs.
> As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours
This is the first time a number has been given for ChatGPT's cost per query, and it is obviously much lower than the ~3 watt-hours still cited by detractors, but there are a lot of asterisks in how that number might be calculated (is watt-hours the right unit of measurement here?).
Most art is just meh and earns billions.
I don’t see AI stealing art. Most modern art has a huge social component.
One aspect about modern generative AI that has surprised me is that it should have resulted in a new age of creatively surreal shitposts, but instead we have people Ghiblifying themselves and pictures of Trump eating tacos.
That's how electricity is most commonly priced. What else would you want?
I still find the Technology Connections video counterintuitive. https://www.youtube.com/watch?v=OOK5xkFijPc
A watt needs a time unit to be converted to joules because a watt is a measure of power, while a joule is a measure of energy.
Think of it like speed and distance:
A watt is a RATE of energy use, just like speed (e.g., miles per hour) is a RATE of travel.
A joule is a total amount of energy, just like distance (e.g., miles) is a total amount of travel.
If you're traveling 60 mph but you don't specify for how long... You won't know how far you went, how much gas you used, etc.
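To put the same analogy in numbers, here's a tiny sketch (the bulb wattage and runtime are just illustrative assumptions):

    # Power is a rate; energy is power multiplied by time.
    POWER_W = 60        # e.g. an old 60 W incandescent bulb (illustrative)
    HOURS = 2.0         # how long it runs

    energy_wh = POWER_W * HOURS         # 60 W for 2 h = 120 Wh
    energy_j = energy_wh * 3600         # 1 Wh = 3600 J, so 432000 J
    print(energy_wh, energy_j)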
There are situations where watts per se matter, e.g. if you build a datacenter in an area that doesn't have enough spare generating capacity to cover you, and you'd end up browning out the grid. Or you have a faulty burner and can't pump enough watts into your water to boil it before the heat leaks out, no matter how long you run it.
what he has left unsaid, is that electricity demand will rise substantially
good luck heating your home when you're competing with "superintelligence" for energy
[0] To my knowledge, no desktops are built to max out their power supply for an extended time; this is a simplification for illustration purposes.
At some point you’ll need to measure in terms of $ earned per W-H, or just $out/$in.
Money is irrelevant to energy efficiency.
My point was twofold, “the average query” becomes less meaningful as the variance increases. Sure one can _in principle_ report W-h spent on your account or query, but I think it will get more opaque and hard to predict. Average becomes less useful when agents can do an unpredictable amount of work with increasingly large bounds.
Second, and this was perhaps worthy of expanding in my post - I think in the radical abundance world that Altman is describing, energy and efficiency stops being something that people talk about. Fusion, space based solar farms, whatever, it’s easy to imagine solving this stuff. And I think you can even imagine this happening sooner if for example one provider stamps a “100% renewable datacenter” option on their AI product. Then you might not care about energy efficiency at all; in which case you just care about profit.
You are getting confused here...
LLMs are terrible at doing software engineering jobs. They fall on their face trying to work in sprawling codebases.
However, they are excellent at writing small bespoke programs on behalf of regular people.
Becky doesn't need ChatGPT to one-shot Excel.exe to keep track of the things she sells at the craft fair that afternoon.
This is a huge blind-spot I keep seeing people miss. LLM "programmers" are way way more useful to regular people than they are to career programmers.
This smells like self-checkouts: the "perfect bespoke solution" so that everyone can check out their own groceries, no need to wait for a slow/expensive _human_ to get the job done, except you end up basically doing the cashier's job and waiting around inconveniently whenever you want to buy some beer.
I'm also pretty anti-social and actually prefer the robotic banter of quickly checking out with a person to the anxious nightmare of just trying to buy something real quick and now waiting because the machine hates life more than me.
Oh and many of the folks doing the bagging at some of the stores are disabled, and I dunno - I hope we're taking care of people in those jobs.
Instead of a guard there's a scanner by the exit gate where you scan your ticket. In my case just a small stub since the actual ticket is sent digitally.
I think it works so smoothly, much better than before this system was introduced. My recent experience buying groceries in Belgium was very similar.
And the scanner to exit is terrible for fire safety and accessibility. Also I've seen it break with the result that you had to trigger the alarm to leave.
It does not work smoothly at all, but perhaps you live in a different Sweden than I do.
I use this method of shopping whenever available.
And for items like bread and the like, you need to navigate a number of menus to find them.
And of course you are asked if you are a member, if you'd like to become a member, if you want to do a donation, and a number of other useless questions.
Yes you do need to be a member to use the system, but that's a very small price to pay for the convenience and speed of avoiding all lines and stress of the regular checkout.
Going from "a supermarket near my home" to "every single supermarket in Sweden" is kind of a big leap.
Try ICA in Olskroken or Godhemsgatan for example…
Most supermarkets have self-checkout, but even that isn't omnipresent (and it's usually slower).
The other problem is that, like self-checkout, it often still requires human intervention (buying alcohol or high-value items, for example). This can mean a long wait at the end. I once got so sick of waiting that I left my shopping, walked away, and went somewhere else.
That's exactly why she asks an LLM to do it for her. A program like this would be <1k LOC, well within even the current LLM one-shot-working-app domain.
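For a concrete sense of scale, here is a hypothetical sketch of the kind of tiny, single-purpose tool being described (everything below is illustrative, not taken from the comment above):

    # Minimal craft-fair sales tracker: the sort of throwaway, single-purpose
    # script an LLM can plausibly produce in one shot.
    import csv
    from datetime import datetime

    LOG = "sales.csv"

    def record_sale(item, price):
        # Append one sale to a CSV log.
        with open(LOG, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), item, f"{price:.2f}"])

    def total_sales():
        # Sum the price column; 0.0 if nothing sold yet.
        try:
            with open(LOG, newline="") as f:
                return sum(float(row[2]) for row in csv.reader(f))
        except FileNotFoundError:
            return 0.0

    if __name__ == "__main__":
        record_sale("knitted hat", 18.00)
        record_sale("candle", 7.50)
        print(f"Total so far: ${total_sales():.2f}")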
The lower barrier to entry might mean the average quality of software goes WAY down. But this could be great if it means that there are 10x as many little apps that make one person happy by doing only as much as they need.
The above is also consistent with the quantity of excellent software going way up as well (just slower than the below-average stuff).
Fortunately AI helps with that too.
We have these really amazing computers with very high-quality software but still, so many processes are manual because we have too many hops from tool to tool. And each hop is a marginal increase in complexity, and an increase in risk of things going wrong.
I don't think AI will solve this necessarily, and it could make it worse. But, if we did want to improve it, AI can act as a sort of spongy API. It can make decisions in the face of uncertainty.
I imagine a world where I want to make a doctor's appointment and I don't open any apps at all, or pick up the phone. I tell my AI assistant, and it works it out, and I have all my records in one place. Maybe the AI assistant works over a phone line, like some sort of prehistoric amalgamation of old technology and new. Maybe the AI assistant doesn't even talk to a receptionist, but to another AI assistant. Maybe it gives me spongy human information, but speaks to others of its kind in more defined interfaces and dialects. Maybe, in the future, there are no applications at all. Applications are, after all, a human construction. A device intended to replace labor, but a gross approximation of it.
>However, they are excellent at writing small bespoke programs
For programmers I predict a future of a million micro services.
Sprawling has always been an undesirable and unnecessary byproduct of growing code bases, but there's been no easy solution to that. I wonder if LLMs would perform better on a mono repo of many small services than one sprawling monolith.
It still happens but it's not a favorite experience anymore. It's just a source of loathing for MBA culture.
https://en.wikipedia.org/wiki/Apparatchik
Been saying that for years. Private equity is perhaps analogous to the Politburo.
Cloud providers make more money the more they can get people to use inefficient designs with more moving parts to rack up more charges. Bonus if it also locks you into managed services. Double bonus if those are proprietary. Complexity benefits cloud hosts.
Microservices, like all patterns, sometimes make sense. Like all patterns they often get overused.
But end users often only need a tiny fraction of what the software is fully capable of, leaving a situation where you need to purchase 100% to use 5%.
LLMs can break down this wall by offering the user the ability to just make a bespoke 5% utility. I personally have had enormous success doing this.
Sure, an LLM might be able to guide you through the steps, and even help when you stumble, but you still have to follow a hundred little steps exactly with no intuition whether things will work out at the end. I very much doubt that this is something many people will even think to ask for, and then follow-through with.
Especially since the code quality of LLMs is nowhere near what you make it out to be. You'll still need to bear with them and go through a lot of trial and error to get anything running, especially if you have no knowledge of the terms of art, nor any clue of what might be going wrong if it goes wrong. If you've ever seen consumer bug reports, you might have some idea of the quality of feedback the LLM may be getting back if something is not working perfectly the first time. It's very likely to be closer to "the button you added is not working" than to "when I click the button, the application crashes" or "when I click the button, the application freezes for a few seconds and then pops up an error saying that input validation failed" or "the button is not showing up on the screen" [...]
I’m a radiologist; I'd been paying about 200 USD per month for software that sped up my reporting. I remade all the functionality I need in one evening with Cursor, and added some things I found missing from the original software.
So far I have created 7 programs that are now used daily in production. One of them replaces a $1k/yr/usr CAD package, and another we used to bring in a contractor to write. The others are miscellaneous apps for automating/simplifying our in-house processes. None of the programs are more than 6k LOC.
Kinda. They're good, and I like them, but I think of them like a power tool: just because you can buy an angle grinder or a plasma cutter from a discount supermarket, doesn't mean the tool is safe in the hands of an untrained amateur.
Someone using it to help fix up a spreadsheet? Probably fine. But you should at least be able to read the code to the level you don't get this deliberate bad example to illustrate the point:
    #!/usr/bin/python3
    total_sales = 0.0

    def add_sale(total_sales, amount):
        total_sales = total_sales + amount
        print("total_sales: " + str(total_sales))
Still useful, still sufficiently advanced technology that for most people it is (Clarketech) magic, but also still has some rough edges.

Absolutely! We’re in an interesting time, and LLMs are certainly over-hyped.
With that said, I’d argue that most software today is _average_. Not necessarily bad by design, but average because of the (historic) economies of software development.
We’ve all seen it: some beloved software tool that fills a niche. They raise funding/start to scale and all of a sudden they need to make this software work for more people.
The developers remove poweruser features, add functionality to make it easier to use, and all of a sudden the product is a shell of its former self.
This is what excites me about LLMs. Now we can build software for 10s or 100s of users, instead of only building software with the goal (and financial stability) of a billion users.
My hope is that, even with tons of terrible vibe coded atrocities littering the App Store/web, we’ll at least start to see some software that was better than before, because the developers can focus on the forest rather than each individual tree (similar to Assembly -> C -> Python).
The body uses 25w resting and thus the brain is about 5w.
Source: biology degree but like I said please take with the same amount of weight as a hallucinating LLM.
Resting energy usage in humans is ~1200–1500 kcal/day, or about 60–70 watts, depending on the person. Logic holds, estimate is just low
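The unit conversion behind that estimate, as a quick back-of-envelope check:

    # kcal/day -> watts: 1 kcal = 4184 J, 1 day = 86400 s
    for kcal_per_day in (1200, 1500):
        watts = kcal_per_day * 4184 / 86400
        print(kcal_per_day, round(watts))    # 1200 -> 58 W, 1500 -> 73 W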
Sure, we'll all get a subscription and subject ourselves to biometric verification in order to continue our profession.
OpenAI's website looks pretty bad. Shouldn't it be the best website in existence if the new Übermensch programmers rely on "AI"?
What about the horrible code quality of "AI" applications in the "scientific" Python sphere? Shouldn't the code have been improved by "AI"?
So many questions and no one shows us the code.
I used to believe this wasn't worth considering (because while training is energy-intensive, once the model is trained we can use it forever.) But in practice it seems like we switch to newer/better models every few months, so a model has a limited period of usefulness.
> (is watt-hours the right unit of measurement here?)
Yes, because we want to measure energy consumed, while watts is only a measure of instantaneous power.
Other people know a novel cannot be written by a machine— because a novel is human by definition, and AI is not human, by definition.
It’s like wondering if a machine can express heartfelt condolences. Certainly a machine can string words together that humans associate with other humans who express heartfelt condolences, but when AI does this they are NOT condolences. Plagiarized emotional phrases do not indicate the presence of feeling, just the presence of bullshit.
I thought a novel was just a decent length fiction book?
The purpose of fiction at least isn't to express true feeling, it's to instill feeling in the reader. To that end it doesn't matter the process that produces the words.
Is AI going to win the Pulitzer any time soon? Nah.
Is AI going to be able to produce novel length works of fiction that are good enough for at least some people in the near future? Most likely.
Anything is good enough if your standards are low.
How about affordable housing?
At the very least, moving heavy extractive industries out of the biosphere would be a sensible allocation of resources: metal and energy are common in the solar system, but life supporting ecosystems are not.
(Or both)
The average person on Earth would love it if we could have readily available clean water, cheaper housing and food, reliable healthcare, and the ability to retire without worrying.
That would actually be blissful beyond our wildest dreams: everyone around you and beyond being able to have a peaceful life.
Then, I just buy houses in High COL areas and start popping them up and moving families in ASAP. What are they gonna do evict working families? Good luck
Then dissolve the underlying corporation before the municipal fines kick in and escape into the night. Render municipal and neighborhood control laws meaningless.
*may not be humanoid, I'm skirting that debate here.
This seems optimistic.
It's simply pitting the most dysfunctional municipal and judicial systems against each other with the default outcome being people being housed.
The municipality can’t take away the land (most of the value here) and putting the house on top will be relatively cheap, given the low total labor costs.
If the whole affair is revenue neutral, that's a huge win.
they do this all the time!
But space colonization is cool and poor people are yickie.
It was the one genuine, serious potential improvement.
But technology often can solve political problems by shifting the balance of power. Come up with more ways to make it easier for people to live away from high cost of living areas, for example, so that governments with high housing costs start losing tax base to other jurisdictions.
Between this and Ed Zitron at the other end of the spectrum, Ed's a lot more believable, to be honest.
"probably", "if they embrace the new tools". Hard to read anything but contempt for the role of humans in creative endeavors here, he advocates for quantity as a measure of success.
Gary has invested heavily in an anti-AI persona, continually forecasting AI Winters despite wave after wave of breakthroughs.
Sam, on the other hand, is not just an AI enthusiast; he speaks in a manner designed to build the brand, influence policy, and continuously boost OpenAI's valuation and consolidate its power. It's akin to asking the Pope whether Catholicism is true.
Of course, there might indeed be significant roadblocks ahead. It’s also possible that OpenAI might outpace its competitors—although, as of now, Gemini 2.5 Pro holds the lead. Nevertheless, whenever we listen to highly biased figures, we should always take their claims with a grain of salt.
That’s heresy, with this crowd. Everyone wants to be The Gatekeeper, and get filthy rich.
it's super competitive, which should be good for innovation, but there's also significant incentive to use PR tactics to sell that innovation for much more than it's worth.
Sam's comments about how we're super close to AGI fall flatter-than-ever, after the latest model releases (from all players) and the Apple paper confirming what everybody already knew.
If you're thinking about the apple paper, you should know that its methodology was flawed in many ways, and their findings absolutely do not support the catchy title. But a lot of slop was generated because negativity + catchy title + apple = hits.
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
I see a similar thing at work — the number of projects a developer can get through isn’t bounded by the lines of code they can churn out in a day. Instead it’s bounded by their appetite for getting shouted at when something goes wrong.
Until you can shout at an LLM, I don’t think they’re going to replace humans in the workplace.
Viewed through that lens, LLMs and their total lack of accountability are a perfect match for the modern-day business; that's probably part of why so many executives are hell-bent on putting them everywhere as quickly as possible.
I think you answered this with the rest of your comment.
It was not obvious to me that this was the author’s intent.
Our epistemological standards are so shitty and so pathetically poor, that it's hardly surprising that we're fooled by LLMs much as we are by academic bullshit artists.
We're obviously not quite there yet, but I don't see any inherent limitation to human managers' ability to shout at AI agents. I'll go even a step further and say that knowing that the AI doesn't actually have feelings that I can affect, and that it has limited context helps make my shouting more productive, focusing on the task at hand, rather than on its general faults, as I might when I'm angry at a human.
"Hey, it'd be a shame if somethin', uh, happened to that nice bit of expertise ya got there, y'know. A darn shame."
Famous last words.
>Intelligence too cheap to meter is well within grasp
And also:
>cost of intelligence should eventually converge to near the cost of electricity.
Which is a meter-worthy resource. So intelligence's effect on people's lives is on the order of one second of toaster use each day, in present value. Which raises the question: what could you do with a toaster-second, say, 5 years from today?
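Back-of-envelope, assuming a ~1 kW toaster (an assumption, not a figure from the essay):

    # How long a ~1 kW toaster takes to use the 0.34 Wh Altman cites per query.
    QUERY_WH = 0.34
    TOASTER_W = 1000                    # assumed; typical toasters are ~0.8-1.5 kW

    joules = QUERY_WH * 3600            # 0.34 Wh = 1224 J
    seconds = joules / TOASTER_W
    print(round(joules), round(seconds, 1))   # 1224 J, ~1.2 s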
Unsurprisingly, this scared the crap out of the fossil fuel industry in the US and countries like Russia that are net exporters of fossil fuels, so they've spent decades lobbying to bind nuclear plant construction up in red tape to prevent them being built and funding anti-nuclear propaganda.
You can see a lot of the same attempts being made with AI, proposals to ban it or regulate it etc., or make sure only large organizations have it.
But the difference is that power plants are inherently local. If the US makes it uneconomical to build new nuclear plants, US utility customers can't easily get electricity from power plants in China. That isn't really how it works with AI.
The primary cost of a traditional plant is fuel. However a nuclear plant needs a tad more (qualified) oversight than a coal plant.
In the same way that the navy has specialists for running nuclear propulsion systems versus crew needed for diesel engines. Not surprisingly nuclear engines cost "more than fuel".
That cost may end up being insignificant in the long run, but cost is not zero. And shortages of staff would matter (like it does with ATC at the moment.)
Construction cost should be much lower than it is, but I don't think it'll be as cheap as say coal or diesel. The nature of the fuel would always require some extras, if only because the penalty-for-failure is so high. There's a difference between a coal-plant catastrophe and Chernobyl.
So there are costs for running nuclear; I don't think it necessarily gets "too cheap to meter".
Solar is on its way to do something incredibly disruptive because it's the same "too cheap to meter" but only when the sun is shining, and then you still need the independent capacity to supply power from something else when it isn't. So now instead of "you pay ~$0.12/kWh all the time" you have a situation where power during sunshine hours is basically free but power at other times costs dramatically more than it used to because the infrastructure to supply that power has to recover its costs over significantly less usage.
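A toy illustration of that cost-recovery effect (all numbers invented for illustration):

    # If the same fixed cost must be recovered over fewer kWh, the per-kWh
    # price on the remaining usage rises proportionally.
    FIXED_COST = 1_000_000        # $/year to keep backup capacity available (made up)
    baseline_kwh = 10_000_000     # kWh/year it used to serve around the clock
    offpeak_kwh = 4_000_000       # kWh/year left once solar covers the daytime

    print(FIXED_COST / baseline_kwh)   # 0.10 $/kWh spread over all usage
    print(FIXED_COST / offpeak_kwh)    # 0.25 $/kWh once only off-peak usage remains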
Wouldn't there still be the OpEx of maintaining the power plants + power infrastructure (long-distance HVDC lines, transformer substations, etc)? That isn't negligible.
...I say that, but I do note that I live in a coastal city with a temperate climate near a mountain watershed, and I literally do have unmetered water. I suppose I pay for the OpEx of the infrastructure with my taxes.
These are fixed costs. They don't go down if people use less power, so the sensible way to pay for them is with some kind of fixed monthly connection charge (e.g. proportional to the size of your service) or via taxes. You're still not measuring how much power people use at any given time.
There are quite a few businesses that can scale limited only by power consumption — but due to that, they today need connections massively overbuilt compared to their current usage (as they project their usage growth to be extremely fast, potentially as much as doubling each year) in order to not need to be constantly re-trenching connections or browning out. Which in turn means that, under a leased-line electrical system, they'd be massively overpaying for unused power capacity at all times — possibly to the point of being unprofitable.
To achieve profitability, they'd need to negotiate billing based on only the fraction of the capacity of the available grid capacity that they're actually demanding on any given day... or, in other words, metered billing.
This is frankly nonsense, and my hope is that this nonsense is coming from a person too young to remember the real, valid fears from disasters like Three Mile Island, Chernobyl, and Fukushima.
Yes, I fully understand that over the long term coal plants cause many more deaths, and of course climate change is an order of magnitude worse, eventually. The issue is that human fears aren't based so much on "the long term" or "eventualities". When nuclear power plants failed, they had the unfortunate side effect of making hundreds or thousands of square miles uninhabitable for generations. The fact that societies demand heavy regulation for something that can fail so spectacularly isn't some underhanded conspiracy.
Just look at France, the country with probably the most successful wide-scale deployment of nuclear. They are rightfully proud of their nuclear industry there, but they are not a huge country (significantly smaller than Texas), and thus understand the importance of regulation to prevent any disasters. Electricity there is relatively cheap compared to the rest of Western Europe but still considerably higher than the US average.
And the fears from Chernobyl were MOSTLY irrational.
The reason for the extreme fears that are generated from even very moderate spills from nuclear plants comes in part from the association with nuclear bombs and in part from fear of the unknown.
A lot (if not most) people shut their rational thinking off when the word "nuclear" is used, even those who SHOULD understand that a lot more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.
Indeed, the safety level at Chernobyl may have been atrocious. But so was the coal industry in the USSR. Even considering just the USSR, coal alone caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].
[1] https://www.science.org/doi/pdf/10.1126/science.238.4823.11....
I'm so tired of hearing about how regulation is this magic salve that saves everything. Regulation is what caused Chernobyl. Soviet regulations mandated that the flawed RBMK reactor design be used. They knew it was flawed and they forced people to use it anyway. Because that's what government does. There's a similar story in western countries, where it hasn't been feasible to use better designs due to antiquated government regulations, it's just no one here has screwed up as badly as the Soviets did.
Here's an MIT study that dug into the reasons behind high nuclear construction cost. They found that regulatory burdens were only responsible for 30% of cost increases. Most of the cost overruns were because of needing to adapt the design of the nuclear plant to the site.
https://news.mit.edu/2020/reasons-nuclear-overruns-1118
Now, you can criticize the methodology of that study, but then you have to bring your own study that shows precisely which regulatory burdens are causing these cost overruns, and which of those are in excess. Is it in excess that we have strict workplace safety regulation now? Is it in excess that we demand reactor containment vessels to prevent meltdowns from contaminating ground water supplies? In order to make a good red tape argument I expect detail in what is excess regulation, and I've never seen that.
Besides, if "red tape" and fossil industry marketing was really the cause of unpopularity, and the business case for nuclear was real when installing plants at scale, you would see Russia and China have the majority of their electricity production from nuclear power.
- Russia is the most pro-nuclear country in the world, and even they didn't get past 20% of electricity share. They claim levelized cost for nuclear on the order of that of solar and wind, but I am very skeptical of that number, and anyone who knows anything about the Russian government's relation to the truth will understand why. When they build nuclear plants in other countries (e.g. the Bangladesh project) they are not that cheap.
- China sits at low single digit percentages of nuclear share, with a levelized cost that is significantly higher than Russia's and well above that of solar and wind. While they're planning to grow the nuclear share they assume it will be based on new nuclear technology that changes the business case.
Both Russia and China can rely on cheap skilled labor to bring down costs, a luxury western countries do not have.
And this is ultimately the issue: the nuclear industry has been promising designs that bring down costs for over half a century, and they have never delivered on that promise. It's all smoke and mirrors distracting from the fact that building nuclear plants is inherently freaking expensive. Maybe AI can help us design better and cheaper nuclear power plants, but as of today there is no proven nuclear plant design that is economical to build, and that is ultimately why you see so little new nuclear plant construction in the west.
Not France?
> France derives about 70% of its electricity from nuclear energy, due to a long-standing policy based on energy security.
— https://world-nuclear.org/information-library/country-profil...
[0] https://www.mdpi.com/1996-1073/10/12/2169 - Fig 3
This is conjecture. If you wanted to establish this, you would have to show that cost of (skilled) labor was unchanging or negligible.
It is also important to consider that nuclear power deaths/damages are much more localized and traceable than excess deaths from air pollution, and thus much less acceptable to the voting population-- you could argue that this should not make any difference (I disagree), but I don't want to digress here too much.
> we're all lucky that the Chinese chose a different tack to the West's policy of energy failure
What do you believe that is? Because from my point of view, China generates a negligible amount of electricity from nuclear power (<5%), this is not going to change within the next decades, and the main "purpose" from what I can tell is to in-house reactor/turbine know-how (instead of relying on Alstom/Siemens).
> It is also important to consider that nuclear power deaths/damages...
Maybe you can answer this for me - what deaths and damages? So far I've never been able to pin down any actual death or damage to a nuclear meltdown. I'm sure there are some, but most of the actual attempts to quantify it require appealing to hypothetical deaths and damages that no-one can specifically point to, or tiny numbers that are irrelevant to industrial policy.
I know people who lived in a town next to a lead-zinc mine. That appears to be about as bad as a nuclear crisis from what I can gather and it doesn't seem to be causing anyone undue stress. We're still using lead and zinc. People still live in the town.
> What do you believe that is?
They're building reactors. https://en.wikipedia.org/wiki/List_of_commercial_nuclear_rea... is a happy tale of new and planned plants.
Some of them are really cool too, there is one by the Gobi desert, apparently to prove that they don't need to use water as a coolant.
Many things are cheap when you ignore externalities.
It’ll never be too cheap to meter, but electricity will get much cheaper over the coming decades, and so will synthetic hydrocarbons on the back of it.
There are better options, and at scale they're capable of producing electricity that literally is too cheap to meter.
The reasons they haven't been built at scale are purely political.
Today's AI is computing's equivalent of nuclear energy - clumsy, centralised, crude, industrial, extractive, and massively overhyped and overpriced.
Real AI would be the step after that - distributed, decentralised, reliable, collaborative, free in all senses of the word.
(that excludes a brief period when I camped with a solar panel)
but here is some data bro! https://fred.stlouisfed.org/series/APU000072610
also this is weird i thought electricity prices only get cheaper? https://fred.stlouisfed.org/series/APU000072610
> The deal would help enable a revival of Unit 1 of the five-decades-old facility in Pennsylvania that was retired in 2019 due to economic reasons.
i wonder why they stopped producing energy in 2019 even though energy prices have gone up over the five decades?
Not sure if wishful thinking trying to LARP-manifest this future into being, or just more unfalsifiable thinking where we can always be said to be past the event horizon and near to the singularity, given sufficiently underwhelming definitions of "event horizon" and "nearness."
Of course, progress could stall out, but we appear to have sufficient compute to do anything a human brain can do, and in areas, AIs are already far better than humans. With the amount of brain power working in, and capital pouring into, this area, including work on improving algorithms, I think this essay is fundamentally correct that the takeoff has started.
You could say the same thing about a CPU from 40 years ago - they can do math far better than humans. The problem is that there are some very simple problems that LLMs can’t seem to reliably solve that a child easily could, and this shows that there likely isn’t actual intelligence going on under the hood.
I think LLMs are more like human simulators rather than actual intelligent agents. In other words, they can’t know or extrapolate more knowledge than the source material gives them, meaning they could never become more intelligent than humans. They’re like a very efficient search engine of existing human knowledge.
Can anyone give me an example of any true breakthrough that was generated by an LLM?
40 years ago we were clearly compute bound. Today, I think it's fairly clear we are not; if there is anything a human can do that an AI can't, it's because we lack the algorithms, not the compute.
So the question becomes, now that we have sufficient compute capacity, how long do you think it will take the army of intelligent creative humans (comp sci PhDs, and now accelerate by AI assistance) to develop the algorithmic improvements to take AI from LLMs to something human level?
Nobody knows the answer to the above, and I could be very wrong, but my money would bet on it being <30 years, if not dramatically sooner (my money is on under 10).
It seems to me like the building blocks are all here. Computers can now see, process scenes in real time, move through the world as robots, speak and converse with humans in real time, use tools, create images (imagine?), and so forth. Work is continuing to give LLMs memory, expanded context, and other improvements. As those areas all get improved on, tied together, recursively improved, etc., at some point I think it will be hard to argue it is not intelligence.
Where we are with LLMs is Kitty Hawk. The world now knows that flight (true human level intelligence) is possible and within reach, and I strongly believe the progress from here on out will continue to be rapid and extreme.
This assumes that the eventual breakthroughs start from something like LLMs. It's just as likely or more that LLMs are an evolutionary dead end or wrong turn, and whatever leads to AGI is completely unrelated. I agree that we are no longer compute bound, but that doesn't say anything about any of the other requirements.
"Rich" is a relative term, the existence of the rich requires the existence of the poor, and according to the article the rich will get richer much faster than the poor. There's nothing gentle about this singularity
I hope LLM use will drive efforts for testing and overall quality processes up. If such thing as an AGI ever exists, we'll still need output testing.
To me it does not matter if the person doing something for you is smarter than you, if it's not well specified and tested it is as good as a guess.
Can't wait for the AI that is almost unusable for someone without a defined problem.
More broadly, I wonder how many key insights he thinks are actually missing for AGI or ASI. This article suggests that we've already cleared the major hurdles, but I think there are still some major keys missing. Overall his predictions seem like fairly safe bets, but they don't necessarily suggest superintelligence as I expect most people would define the term.
If it is the context window, then you are limited to the size of said window and everything is lost on the next run.
Learning is memory; what you are describing is an LLM being the main character in the movie Memento, i.e. no long-term memories past what was trained in the last training run.
The crazy thing is that a well-crafted language model is a great product. A man should be content to say "my company did something akin to compressing the whole internet behind a single API" and take his just rewards. Why sully that reputation by boasting to have invented a singularity that solves every (economically registerable) problem on Earth?
Which underlying LLM powers your agent system doesn't matter. In fact you can swap it for any state-of-the-art model you like, or even point Cursor at your self-hosted LLM API.
So in a sense every advanced model today is AGI. We were already past the AGI "singularity" back in 2023 with GPT4. What we're going through now is a maybe-decades-long process of integrating AGI into each corner of society.
It's purely an interface problem. Coding agent products hook the LLM to the real world with [web_search, exec_command, read_file, write_file, delegate_subtask, ...] tools. Other professions may require vastly more complicated interfaces (such as "attend_meeting",) it takes more engineering effort, sure, but 100% those interfaces will be built at some point in the coming years.
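For what it's worth, that kind of interface is not exotic. A toy sketch of a tool registry (the function and tool names are hypothetical, not any particular product's API):

    # The harness exposes a set of named tools; the model proposes a tool name
    # plus arguments, and the agent loop dispatches the call.
    from typing import Callable

    TOOLS: dict[str, Callable[..., str]] = {}

    def tool(fn):
        # Register a function the model is allowed to call.
        TOOLS[fn.__name__] = fn
        return fn

    @tool
    def read_file(path: str) -> str:
        with open(path) as f:
            return f.read()

    @tool
    def write_file(path: str, content: str) -> str:
        with open(path, "w") as f:
            f.write(content)
        return f"wrote {len(content)} chars to {path}"

    def dispatch(name: str, **kwargs) -> str:
        # What the agent loop does with a model-proposed tool call.
        return TOOLS[name](**kwargs)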
I get the gist of what he is saying, but I really think that most of the "idea guys" who never got farther than an idea will stay that way. Sure, they might spit out a demo or something, but from what I've seen the "idea guys" tend to be the "I want to play business" guys who have read all the top books and refine their Powerpoints but never actually seem to get around to, you know, running a business. I think there is underlying difference there.
I do see AI as a great accelerator. Just as scripting languages suddenly unlocked some designers who could make great websites but couldn't hang with pointers and malloc, I think AI will unlock great idea guys who can make great apps or businesses. But it will be fewer people than you think, because "building the app" is rarely the biggest challenge - focused intent is much harder to come by.
I do think the age of decently robust apps getting shat out like Flash games is going to be fun and chaotic, and I am 100% here for it.
I enjoy technology but less and less so each year, because it increasingly feels like there’s some kind of disconnect with the real world that’s hard to put my finger on
> Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.
Does he mean "how much more we could have if our federal government wasn't actively destroying our science institutions"?
- Loans get issued when they're likely to make a banker rich. We know that's a lousy proxy for assurances that their associated ventures will be beneficial to the public, but it kinda sorta works enough.
- Those loans cause money to enter the system and then we all chase it around and we cooperate with people who give us some of it even though we know that having money is a lousy proxy for competence or vision or integrity or any other virtue that might actually warrant such deference, but it kinda sorta works enough.
- Research gets done in support of publications, the count of which is used to assess the capability of the researcher, even though we know that having many publications is a lousy proxy for having made a positive impact, but it kinda sorta works enough.
If we can fix these systems that are just limping along... If money can enter the system in support of ventures which are widely helpful, not just profitable... If we can support each other due to traits we find valuable in each other, not just because of some happenstance regarding scarcity. If we can encourage effective research in pursuit of widely sought after goals, rather than just counting publications... well that will be a whole new world with a new incentive structure. AI could make it happen if it manages to dispense with all these lousy proxies.
The thing that you, altman, and every other AI booster fail to explain is how AI will do this. Every argument is littered with "if"s and "could"s, and it's just taken for granted that AI will be smart enough to find a solution that solves all our problems. It's also taken for granted that we'll actually implement the solution! If AI said that curing cancer requires raising taxes, cancer would remain uncured.
The trouble with you naysayers is you always take as immutable things that we can change.
Let's be real here, US politics is fucked and none of us knows how to fix it.
as the top level comment pointed out, we have solutions to many of these things, but we choose not to do them. I don't think it's unfair to say "maybe we shouldn't spend billions of dollars on this thing that will probably just reinforce existing power structures".
> Let's be real here, US politics is fucked and none of us knows how to fix it.
Agreed.
This system exists and is optimized to make the bankers rich. The idea that it helps ensure that ventures using that money will be successful is the thin veneer used 100+ years ago to sell it to everyone. But the true purpose has always been to make bankers rich. If you try to institute any other system that would not achieve that purpose, you will find yourself battling enormous opposition.
The same things are true for huge parts of our society. Perhaps the most glaringly obvious is the US healthcare system. Experience from all over the world shows clearly that it is not an efficient way to organize healthcare, by any stretch of the imagination. Still, it won't change, not because people don't believe the outside examples, but because the system is working for what it was designed to do: transfer huge amounts of money to rich people. And it delivers just enough health care that people aren't rioting in the streets against it.
My take: echo chambers have become mainstream (ironically, aided by technology). Typically that's online for most people, but in SF, it's a physical echo chamber, too.
That echo chamber allows large numbers of people (coincidentally, the people shepherding a lot of the technology) to rationalize any position they come up with and fall into a semi-permanent state of cognitive dissonance ("of course my technological solution is the solution").
If other people are saying the same thing nearly everywhere you look, who's to say those who disagree are actually the correct ones?
Technology can scale up small conveniences into major economic and quality of life wins.
But when it's tolerated, technology also ramps up seemingly small conflicts of interest into economic and society-degrading monsters.
Our legal intolerance for conflicts of interest as business models needs to go up a lot. No amount of strongly worded letters, uncomfortable senate interviews, or unplug-the-system theater, are going to discourage billionaires farming people's behavior, attention and psychology from continuing to farm people's behavior, attention and psychology.
It's like the math that takes # of homeless people times cost of an apartment, and then people claim we can solve homelessness for [reasonable amount]. Makes sense until you look how much US spends on homelessness every year already.
Cell phone technology was the closest thing we got to teleporting technology and wealth to the rest of the world. Most developing countries completely skipped a phase of development that required crazy amounts of infrastructure to be built out. More people have cell phones than running water. It's pretty incredible if you think about it. Hopefully AI will be a similar leap frog.
Inequality is bad and getting worse. Big changes accelerate it because richer people can adapt and take advantage faster.
I enjoy playing around with AI for fun and find it amazingly useful. But I do not believe it can solve inequality - that’s a people problem.
AI, like most technology, makes these things easier, but it is power. It's all about what you do with power: you can build power plants or bombs. AI could (let's be hypothetical) free humans from all necessary labor, with robots making all the food, mining all the materials, and doing the whole pipeline. But that requires a fundamental rethinking of how we operate now. That world isn't socialism nor capitalism. It's a world where ownership becomes ill-defined in many settings but still strict in others.

It's easy to imagine Star Trek but hard to imagine how we get there from here. Do we trust a few people to make all the robots that do all those things and then just abdicate all that power? Do we trust governments to do it? There's reason to trust no one, because a deep dystopia can be created from any such scenario. Going toward that hypothetical future is dangerous, not because of superintelligence, but because of us. Those with power tend not to just freely abdicate it... that's a problem we're not discussing enough, and our more frequent conversations about ASI and the like just distract from it.
I'd like someone, amongst the tech bros for instance but it could be any influential politician in power, to set a target for when we stop making life more miserable than it could be for billions of people, by asking them to aim for no more than structural unemployment, 40+ hour weeks, and steady economic growth in the name of progress.
Because as long as the end game isn't defined (and some milestones towards it), we won't have Star Trek, but a sci-fi version of a Dickens or Zola book, or at least an eternal treadmill augmented with marginally less useful innovations.
That's how I project, over the next centuries, the failed prediction from Keynes about everyone working 15-hour weeks in the near future, in a western world that did achieve post-scarcity (at least for now).
What I mean is: let's say you are making something and you see an issue. You know it'll be an issue, and if fixed now it will cost $100, but if you fix it a year from now it'll cost $1000. Many people will choose the latter. On paper, I think most people would choose the former, but in practice the latter is often hard to determine. An example might be a security risk. It only gets harder to fix as you generate more code and complexity increases. But if hacked, this has high costs, both for the business and through lawsuits.
The amortized one is a nasty problem because it often goes unnoticed. Say a problem costs $100 to fix now, but costs you $1.00 on the first day it isn't fixed and 5% more each day after that (so $1, $1.05, $1.1025, ...). These sneak up on you, and you'll pay a lot more than that $100 by the time you notice.
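A quick sketch of how fast that sneaks up, using the same made-up numbers:

    # Deferred-fix cost: $1 on day 1, growing 5% per day, vs. a $100 fix today.
    FIX_NOW = 100.0
    daily, cumulative, day = 1.0, 0.0, 0

    while cumulative < FIX_NOW:
        day += 1
        cumulative += daily
        daily *= 1.05

    print(day, round(cumulative, 2))   # day 37: you've already paid ~$101.63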
A follow up would be to observe that in many places today, inequality causes substantial sub-populations to feel that they are not thriving, or even declining, though GDP is increasing. Which would explain the rage of middle Americans and many Europeans.
If you buy all this, then there is a clear path by which radical abundance resolves the problems. Same as how the Baby Boomers were a very low-polarization generation; when everybody is thriving it’s a lot easier to get along.
Personally I worry more about the possible worlds where technology doesn’t bring us radical abundance. Declining empires with nuclear weapons don’t sound conducive to a fun time.
These are political problems not technological problems.
Same here.
I want to be an engineer.
As a good engineer, I find a problem then recursively ask why.
I end up with a political root cause - not a technical one.
This makes me sad.
I wish I could just be an engineer.
What the richest and most powerful people want, happens to be "be even richer and even more powerful".
Banal, sad, and universal.
I don't think I understand your point. We shouldn't develop new treatments because one country won't supply them to the entire world for free? Why is it the US's responsibility to treat every disease everywhere in the world? Do these countries not have their own governments?
Either way, individuals are welcome to practice any sort of compassion they want with their own money. The government collects tax dollars from citizens under threat of violence, and their only responsibility should be to use that money to ensure the welfare of its citizens, not to engage in charity work in other countries. If citizens want to do that, have the government collect fewer tax dollars and citizens can give them to charity as they see fit.
Soft power is of obvious immense benefit to citizens of the United States, however you've rejected that in other comments.
The argument otherwise reads like a stereotypical "Not with my tax dollars!" argument. It's always fascinated me, that. Inevitably it's always an impassioned argument, regardless of the funding subject.
In this particular case, a very conservative estimate might put the number of child deaths in the tens of thousands. Reality is probably closer to hundreds of thousands at this point.
I pay taxes, a lot of them. I don't get angry when my taxes are used by the government to keep disadvantaged children in hellish conditions alive.
If you were to express the total cost of USAID's former budget against your tax bill as a binary choice between that and a few hundred thousand kids dying, I suspect it'd be much harder to maintain your current position.
Moreover, consider that central to this issue is the abrupt dismantling of an agency which was critical to global aid flow, and amounted to a rug pull. There was no justification for that.
Hopefully Sam's ASI is more compassionate than people, which frankly isn't a high bar lately.
https://www.npr.org/2025/05/28/nx-s1-5413322/aid-groups-say-...
https://www.npr.org/sections/goats-and-soda/2025/05/28/g-s1-...
I'm more than happy to see the "empire" die and willing to do anything I can to speed that along.
It seems people are infinitely compassionate when spending someone else’s money.
What of the argument against USAID’s rapid disassembly then? Is such an outcome permissible or even desired on account of sparing these funds sooner, aid continuity be damned?
I mean, your plan seems to be:
A. The US should give infinity money to everyone forever and never stop. If anyone ever dies, it's the US's fault for not supporting them enough.
B. If you are going to stop, do it on the schedule of the people who are getting free stuff, and only stop when they decide they don't want free stuff anymore (i.e. never).
A responsible pace that doesn't result in abrupt mass deaths due to the lack of aid continuity.
>A. The US should give infinity money to everyone forever and never stop. If anyone ever dies, it's the US's fault for not supporting them enough.
Nobody said that, but with operating the world's largest aid agency for the better part of a century comes massive responsibility.
>B. If you are going to stop, do it on the schedule of the people who are getting free stuff, and only stop when they decide they don't want free stuff anymore (i.e. never).
You're right. Hopefully those impoverished kids (many of whom are dead now) take some personal responsibility for themselves in the afterlife. To think we'd even entertain pulling their food and medicine on their schedule and not our schedule.
The children? No, but their parents and the other adults running their country. That is who is responsible for providing for them. Americans have their own children they need to take care of and do not need their money seized and sent overseas to take care of other people's children.
And yes, maybe it is a "rug pull" but it was always going to be. It is immoral to engender such dependence in the first place, like keeping someone slightly poisoned so they're constantly sick and dependent on you to take care of them. Let people grow strong so they can take care of themselves and treat with them as equals.
You talk as if it's a zero-sum game, as if the two choices are mutually exclusive.
>And yes, maybe it is a "rug pull" but it was always going to be.
No reason it had to be.
If you look at Africa, it has the most wealth by resources out of any continent. It's also the poorest continent nominally. We're getting a lot of good stuff at INSANE discounts.
What I'm describing is, of course, colonialist in nature. The US is an empire, not a nation. But, the hope is that as we help those countries develop they can help us stay developed, and we can eventually reach some mutually beneficial equilibrium. Instead of exploitation.
But, currently, the relationship is exploitative. It's a bit wild to me that you legitimately think the US, of all countries, is being exploited. No bubba... no. We do the exploiting. Everything you own is built with layers and layers of global exploitation built into it. You have a few hundred slaves working for you as we speak.
Absolutely we aren’t a Christian country. I personally don’t need Christianity to tell me charity for our fellow humans is a good thing. Plus, richest country in the history of the world remember?
The rich in the US enjoy their wealth at the “pleasure” of the lower classes. (And not just the American lower classes.) Those dollars they’re hoarding? Those were created by the people and have value because of the people. So, I’m all for confiscatory taxation to fund humane charitable endeavors and eliminate wealth hoarding. Someone will have to make do with one less yacht I suppose.
Finally, the amount we’re talking about here is a mere pittance. Let’s cut some other wasteful spending first (Pentagon) if you’re looking for savings.
1) The point is that global health is also about access to a drug, not just existence.
2) We don’t. USA’s foreign aid per capita is not that high (especially now). To mobilize private money for aid, you need a long-term, trusted infrastructure.
3) Countries that receive aid typically do not have functional governments.
And that point is irrelevant to whether or not we should develop new drugs.
> Countries that receive aid typically do not have functional governments.
Maybe they should work on fixing that, or just dissolve the country and get absorbed into a functional country if they can't manage to create a functional government on their own.
If we're non-functional, where is all the foreign aid providing free stuff for us?
C'mon other countries, give us free stuff. It will be great for your soft power, and prevent us from doing terrorist attacks on you.
You are getting a lot of free stuff, because you own the dollar. That keeps you afloat so far, but I don't think it will last for much longer.
Do you think a dysfunctional gov has a higher chance of fixing itself with widespread disease?
Maybe providing aid is just propping up dysfunctional governments by doing their job for them and it would be better in the long run if they were allowed to collapse and be replaced with something that was forced to be functional.
I have no fantasies that aid will magically make countries stable.
It’s not an either/or. While we figure out the practical solutions to corruption in impoverished nations, we can /also/ do other work to improve the situation on Earth. And, in doing so, we will make the poverty and corruption problems easier to solve.
Personally speaking I wish the US was drastically more involved with providing aid because it can help reduce the impact of individual catastrophes happening every day.
https://moderndiplomacy.eu/2025/02/04/beyond-rescue-why-huma...
How rude.
I suppose that my work as a coder would look equally odd and useless to someone looking at it from 1000 years ago, while being important and satisfying to me. I didn't sense any condescension in that.
Yeah dawg imma need a citation for that. Or maybe by "the world" he means "Silicon Valley oligarchs", who certainly have been "entertaining" all sorts of "new policy ideas" over the past half year.
I don't know if gentle is the right word. Maybe gentle face hugger and chest burster is more apt. It's slowly infecting us and eating us from the inside, but so gently we don't even care. We just complain on HN while doing nothing about it.
It was probably around 7 years ago when I first got interested in machine learning. Back then I followed a crude YouTube tutorial which consisted of downloading a Reddit comment dump and training an ML model on it to predict the next character for a given input. It was magical.
I always see LLMs as an evolution of that. Instead of the next character, it's now the next token. Instead of GBs of Reddit comments, it's now TBs of "everything". Instead of millions of parameters, it's now billions of parameters.
Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine". Maybe throwing more compute and data at it will at some point make it so great that it's worthy to be called "AGI" anyway? I don't know.
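(For the curious, a toy stand-in for that kind of tutorial, in plain Python with no neural net at all: just counting which character tends to follow which. The corpus string here is obviously invented; the real tutorial used a Reddit comment dump.)

    from collections import Counter, defaultdict

    # Tiny stand-in corpus; the original tutorial used a Reddit comment dump.
    corpus = "the cat sat on the mat. the cat ate the rat."

    # Count, for each character, which character tends to follow it.
    next_char_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_char_counts[current][nxt] += 1

    def predict_next(ch: str) -> str:
        """Return the most frequently observed character following `ch`."""
        counts = next_char_counts.get(ch)
        return counts.most_common(1)[0][0] if counts else " "

    # Greedily "generate" a few characters from a seed, like the tutorial did.
    text = "t"
    for _ in range(20):
        text += predict_next(text[-1])
    print(text)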
Well anyway, thanks for the nostalgia trip on my birthday! I don't entirely share the same optimism - but I guess optimism is a necessary trait for a CEO, isn't it?
If you think of the tokenization near the end as a serializer, something like turning an object model into json, you get a better understanding. The interesting part of an OOP program is not in the json, but in what happens in memory before the json is created.
Likewise, the interesting parts of a neural net model, whether it's LLM's, AlphaProteo or some diffusion based video model, happen in the steps that operate in their latent space, which is in many ways similar to our subconscious thinking.
In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text, images, video etc used to train them. Also, many of these patterns generalize when different modalities are combined.
From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all equally well, instead models are created that specialize on one modality.
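To make the serializer analogy concrete, here's a minimal Python sketch (the class and the "analysis" are invented purely for illustration): the interesting work happens on in-memory state, and the JSON at the end is just a flat projection of it, the way sampled tokens are a flat projection of what happened in latent space.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Summary:
        topic: str
        sentiment: float

    def analyze(text: str) -> Summary:
        # The "interesting" work happens here, on in-memory state...
        words = text.lower().split()
        sentiment = (words.count("good") - words.count("bad")) / max(len(words), 1)
        return Summary(topic=words[0] if words else "", sentiment=sentiment)

    # ...and only at the very end is that state serialized to JSON,
    # the way a model's latent state is only "serialized" into tokens at the output.
    result = analyze("good beer is good, bad beer is bad")
    print(json.dumps(asdict(result)))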
I think the step to AGI does not require throwing a lot more compute into the models, but rather to have them straddle multiple modalities better, in particular, these:
- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing of the best LLMs
- Ability to be goal oriented and iterate towards a goal, similar to the Alpha* family of systems
- Optionally: optimized for the use of a few specific tools, including a humanoid robot
Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.
this is just made up.
- We don't have any useful insight on human subconscious thinking.
- We don't have any useful insight on the structures that support human subconscious thinking.
- The mechanisms that support human cognition that we do know about are radically different from the mechanisms that current models use. For example, we know that biological neurons and synapses are structurally diverse, we know that suppression and control signals are used to change the behaviour of the networks, and we know that chemical control layers (hormones) transform the state of the system.
We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.
Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!
Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.
Obviously there are still gaps in ML architectures compared to biological brains, but there's no particular reason to believe they're fundamental to existence in silico, as opposed to myelinated bags of neurotransmitters.
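As a rough sketch of the masking point above, assuming PyTorch and a synthetic regression task (the architecture and numbers are arbitrary, not taken from any paper): zero out a random half of each weight matrix, observe that performance degrades but doesn't collapse, then retrain only the surviving weights.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic regression task: y = sum of inputs plus noise.
    x = torch.randn(2048, 32)
    y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(2048, 1)

    model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1))
    loss_fn = nn.MSELoss()

    def train(steps):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            # Keep masked-out weights at zero by zeroing their gradients.
            for p, m in zip(model.parameters(), masks):
                if m is not None:
                    p.grad *= m
            opt.step()
        return loss_fn(model(x), y).item()

    # Phase 1: train normally (no mask).
    masks = [None] * len(list(model.parameters()))
    print("trained loss:", train(500))

    # Phase 2: "injure" the model by zeroing a random half of each weight matrix.
    masks = []
    with torch.no_grad():
        for p in model.parameters():
            m = (torch.rand_like(p) > 0.5).float() if p.dim() > 1 else None
            if m is not None:
                p *= m
            masks.append(m)
    print("loss after masking half the weights:", loss_fn(model(x), y).item())

    # Phase 3: retrain only the unmasked weights and recover much of the loss.
    print("loss after retraining survivors:", train(500))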
I agree - for example octopuses are clearly somewhat intelligent, maybe very intelligent, and they have a very different brain architecture. Bees have a form of collective intelligence that seems to be emergent from many brains working together. Human cognition could arguably be identified as having a socially emergent component as well.
>Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.
I think we can only reasonably talk about the technology as it exists. I agree that there is no justifiable reason (that I know of) to claim that biology is unique as a substrate for intelligence or agency or consciousness or cognition or minds in general. But the history of AI is littered with stories of communities believing that a few minor problems just needed to be tidied up before everything works.
> We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.
This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.
> Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!
To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface-level association after surface-level association, but with enough levels to make it seem deep? I do not know how many grains make the heap.
Training more and learning online are really different processes. In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.
>To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am myself I making surface level association after surface level association, but have enough levels to make it seem deep? I do not know how many grains make the heap.
I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.
Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.
Given it can learn from unordered text of the entire internet, it can learn from chats.
> I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.
> Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.
Humans are very good at creating narratives about our minds, but in the cases where this can be tested, it is often found that our conscious experiences are preceded by other brain states in a predictable fashion, and that we confabulate explanations post-hoc.
So while I do not doubt that this is how it feels to be you, the very same lack of understanding of causal mechanisms within the human brain that makes it an error to confidently say that LLMs copy this behaviour also means we cannot truly be confident that the reasons we think we have for how we feel/think/learn/experience/remember are, in fact, the true reasons for how we feel/think/learn/experience/remember.
What are you talking about?
It has not had its own experiences, nor interacted with the outer world. Dunno, I don't want to rule out that something operating solely on language artifacts could develop intelligence or consciousness, whatever that is... but so far there are also enough humans we could care about and invest in.
Some LLMs have interacted with the outside world, such as through reinforcement learning while trying to complete tasks in simulated physics environments.
And the web contains a lot more than people's expressions: think of all the scientific papers with tables and tables of interesting measurements.
You must first invent the universe
If you wish to predict the next token really well
You must first model the universe
The "mere token prediction machine" criticism, like Pearl's "deep learning amounts to just curve fitting", is true but it also misses the point. AI in the end turns a mirror on humanity and will force us to accept that intelligence and consciousness can emerge from some pretty simple building blocks. That in some deep sense, all we are is curve fitting.
It reminds me of the lines from T.S. Eliot, “...And the end of all our exploring, Will be to arrive where we started, And know the place for the first time."
The Transformer Circuits work[0] suggests that this representation is not coherent at all.
It's doing compression which does not mean it's coherent.
> The addition circuits are also fairly easy to interpret.
The addition circuits make no sense whatsoever. It's doing great at guessing, that's all.
> To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.
> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.
This is an older model (Claude 3.5 Haiku) with no test time compute.
[0]: https://www.anthropic.com/news/tracing-thoughts-language-mod...
I'm really no expert in neural nets or LLMs, so my thinking here is not an expert opinion, but as a CS major reading that blog from Anthropic, I just cannot see how they provided any evidence for "thinking". To me it's pretty aggressive marketing to call this "thinking".
Either way: they're tampering with the inference process, turning circuits in the LLM on and off, in an attempt to show that those circuits are associated with a specific function. [0]
They noticed that circuits related to a token that only becomes relevant ~8 tokens later were already activated on the newline token. Rather than only looking backwards at the sequence of tokens generated so far and producing the next token from that information alone, the model activates circuits relevant not just to the next token but to specific tokens a handful of positions later.
So, information related to more than just the next upcoming token (including a reference to just one specific token) is being cached during a newline token. Wouldn't call that thinking, but I don't think calling it planning is misguided. Caching this sort of information in the hidden state would be an emergent feature, rather than a feature that was knowingly aimed at by following a specific training method, unlike with models that do test time compute. (DeepSeek-R1 paper being an example, with a very direct aim at turbocharging test time compute, aka 'reasoning'. [1])
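For a rough sense of what "turning circuits on and off" looks like mechanically, here's a toy PyTorch sketch of the generic intervention idea, using a forward hook to ablate one hidden unit. This is emphatically not Anthropic's attribution-graph tooling; the model and the unit index are made up for illustration.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a transformer block: all we need is an intermediate
    # activation we can intervene on during the forward pass.
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
    x = torch.randn(1, 16)

    UNIT = 3  # hypothetical "circuit" (here: a single hidden unit) to switch off

    def ablate_unit(module, inputs, output):
        # Zero the chosen unit's activation, i.e. turn that "circuit" off.
        output = output.clone()
        output[:, UNIT] = 0.0
        return output

    baseline = model(x)

    handle = model[1].register_forward_hook(ablate_unit)  # hook the ReLU output
    ablated = model(x)
    handle.remove()

    # If the output changes a lot, the unit mattered for this input; the
    # interpretability work does this (and the reverse: forcing features on)
    # at the scale of real features in real models.
    print("change in output:", (baseline - ablated).abs().max().item())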
The way they went about defining the function of a circuit was by using their circuit tracing method, which is open source so you can try it out for yourself. [2] Here's the method in short: [3]
> Our feature visualizations show snippets of samples from public datasets that most strongly activate the feature, as well as examples that activate the feature to varying degrees interpolating between the maximum activation and zero.
> Highlights indicate the strength of the feature’s activation at a given token position. We also show the output tokens that the feature most strongly promotes / inhibits via its direct connections through the unembedding layer (note that this information is typically more meaningful for features in later model layers).
[0]: https://transformer-circuits.pub/2025/attribution-graphs/bio...
[1]: https://arxiv.org/pdf/2501.12948
[2]: https://github.com/safety-research/circuit-tracer
[3]: https://transformer-circuits.pub/2025/attribution-graphs/met...
Yet. The human mind is a big bag of tricks. If the creators of AI can enumerate a large enough list of capabilities and implement those, then the product can be as good as 90% of humans, but at a fraction of the cost and a billion times the speed - then it doesn't matter if it's AGI or not. It will have economic consequences.
The observant will note that the word "knowing" kept appearing in the previous paragraph. Can that knowing also be reduced to LLM-like tricks? Or is it an additional step?
Me personally, I expect to see LLMs to be a mere part of whatever will be invented later.
Not quite yet, but I’m working on it. It’s ~~hard~~ impossible to get original ideas out of an LLM, so it’ll probably always be a human assisted effort.
Things like when to create an ugly hack because the perfect solution may result in your existing customers moving over to your competitor. When to remove some tech debt and when to add to tech debt.
When to do a soft delete vs when to do a purge. These things are learnt when a customer shouts at you and you realize that you may be the most intelligent kid on the block, but it won't really help the customer tonight: the code is already deployed, and your production deployment means a maintenance window.
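(For anyone who hasn't hit this distinction yet, a minimal sketch with invented table and column names:)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

    # Soft delete: keep the row, just mark it; easy to undo when the customer calls.
    conn.execute("UPDATE users SET deleted_at = datetime('now') WHERE name = 'alice'")

    # Purge: the row is gone for good; no maintenance-window heroics will bring it back.
    conn.execute("DELETE FROM users WHERE name = 'bob'")

    # Application queries filter out soft-deleted rows.
    print(conn.execute("SELECT name FROM users WHERE deleted_at IS NULL").fetchall())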
> If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.
It's really cool to hear a public figure seriously talk about self-replicating machines. To me this is the key to unlocking human potential and ending material scarcity.
If you owned a pair of robots that, with sufficient spare parts, could repair each other and do other useful work, you could effectively do anything by using them to do all the things necessary to build copies of themselves.
Once we as a species have exponential growth on that we can do anything. Clean energy, Carbon sequestration, Von Neumann probes, asteroid mining, O'Neill Cylinders, it's all possible.
I’m astonished by the sheer arrogance of a man who thinks we’re anywhere near the point where general purpose robots build general purpose robots.
What technical challenges do you foresee with such a device?
And you're right about planned obsolescence in theory, but self-replication is fundamentally different from a smartphone. It's like having a genie that grants more wishes. Once someone demonstrates true self-replication even in a proprietary system then the cat's out of the bag. Open source communities, countries with different IP laws, hobbyists -- eventually someone will reverse engineer it. The economic incentive is just too massive. It's not like trying to copy a luxury handbag. It's copying the means of production itself.
As for space applications, yes, mass matters, but that's exactly why modular, self-repairing systems make sense. Instead of one giant redundant system, you'd have swarms of smaller units that can reconfigure, repair each other, and even combine into larger structures when needed. Much more mass-efficient than traditional redundancy. And regarding humanoid form sure you're right it's not optimal for space, but it's optimal for working in human environments. Our entire infrastructure is built for human bodies. Any robot that can use our tools, navigate our spaces, and interface with our systems has an immediate advantage. They don't need to be perfect copies - maybe four arms instead of two, different joint configurations - but roughly humanoid makes sense for terrestrial applications. And space applications that are designed and built by humans or for humans.
The question isn't whether this will happen, but when. What's your estimate for when we'll see the first truly self-maintaining robotic systems, even if they're not fully self-replicating?
You are a self-replicating machine running on top of a massive web of other self replicating machines. You are fundamentally constrained by the energy and materials available to you, as is your entire operating stack. You are a petal on a fractal flower whose growth, already exponential, threatens to crack its pot.
Incidentally, crackpot would be a good way to describe these sorts of pieces, if the person writing them did not so obviously benefit from writing them.
But I think that there are limitations on these kinds of techniques and we can see them with the changing economics of our most advanced technologies -- semiconductors. Improvements have slowed, capital investments have increased, and cost per transistor has plateaued and is now starting to rise.
At the same time the way that we extract energy from our environment is not sustainable and is causing great imbalances in our atmosphere which are having cascading effects on the environment and this is all due to the fact that our energy systems are not closed loop.
If we're going to take manufacturing and industry as a whole to the next level just like the industrial revolution did we're going to need to take cues from biology. The next manufacturing revolution will be a merger of the ideas from the industrial revolution, the digital revolution and biological systems like the ones you describe above.
Self replicating systems that can heal, source their parts from the environment around them, and that can scale exponentially through processes akin to cellular division are inevitable.
They will allow us to offload the burden of mineral extraction and refining to the moon and asteroids, and will allow us to massively scale up the production of products on scales previously unimaginable and of goods full of elements like platinum or gold we consider obscenely expensive due to their relative rarity on Earth.
We don't know that. It seems to be a leap of faith, based on the possibility of numerous successive achievements (IA, then fusion, then robots, then interplanetary mining, etc.) where each new step is always reachable. Like in a video game.
Maybe the necessary amount of oil (or whatever) required to reach the next key milestone towards unlimited resources has never existed. Maybe it'd have required 1 billion more years before humankind reached the industrial age.
Maybe the trees on our island are just enough to build a raft, not a brig, and we should have used them to build a shelter and make the inevitable end more comfortable.
To unlock these resources we need to turn to self replicating machines that can stand up lunar and asteroid mining to build sufficient orbital manufacturing capacity.
If we deplete our oil stocks (even non-conventional) before discovering the multiple replacements we'd need at large scale for logistics, electronics, hardware, healthcare, heavy vehicles and tools, etc., we won't get anywhere close to your solar system matter. Maybe we should have rationed it 70 years ago, to give us more time to research the next breakthrough, instead of investing in all the modern life shenanigans like buying dozens of $5 pieces of clothing from Shein or Temu because it makes us feel good in the schoolyard or on Instagram. The "trust the science bro, if it's possible we'll make it, let the human genius do its thing" attitude is dangerous.
Sam you need to touch grass.
Like a storefront advertising "live your wildest dreams" in pink neon. A slightly obese Mediterranean man with questionable taste tries to get you into his fine establishment. And if you do enter the first thing that meets you is an interior with a little bit too many stains and a smell of cum.
That's the vibe I get whenever Sam goes on his bi-quarterly AGI hype spree.
I mean, I suppose Sam loves ChatGPT like his own child, but I would struggle to describe any of its output as 'beautiful'! 'Grating' would be the word that springs to mind, generally.
My blog posts didn't age all that well, and I've learned to be a little more sceptical about the speed of technological change, just as the political events over the intervening years have made me more aware of how fast political realities can change: https://benwheatley.github.io/blog/2016/04/12-00.31.55.html and https://benwheatley.github.io/blog/2022/09/20-18.35.10.html
* at least until the rate of change of curvature gets so high you're spaghetti, you're (approximately) co-moving with the light from your own body. This means that when you cross the event horizon, you still see your own legs, even though the space the light is in is moving towards the singularity faster than the light itself moves through that space: https://youtu.be/4rTv9wvvat8?feature=shared&t=516
(you can get a mathematical singularity if you consider amount of stuff that can be produced per hour of human labour which should go infinite around the robot uprising)
Nah, even then it's only exponential, not a mathematical singularity. Whatever the doubling period is for, e.g., the von Neumann machines, it doesn't go to infinity in finite time.
Though I suspect the actual number of human hours worked doesn't really go to zero for cultural/psychological reasons, your point remains valid as we will only find out if/when it happens.
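A back-of-the-envelope in Python, with made-up numbers for the doubling time and fleet size: plain exponential growth is enormous but finite at every finite time, while "output per human hour" only diverges because the denominator heads to zero.

    DOUBLING_YEARS = 1.0       # assumed doubling time for the machine fleet
    machines0 = 1_000_000      # the "first million robots" from the quote

    def fleet(t_years: float) -> float:
        # Exponential growth: huge, but finite for any finite t.
        return machines0 * 2 ** (t_years / DOUBLING_YEARS)

    for t in (5, 10, 20, 50):
        print(f"year {t}: {fleet(t):.3e} machines")

    # "Stuff produced per hour of human labour" diverges only because the
    # denominator heads towards zero, not because output reaches infinity.
    output = 1.0
    for human_hours in (1.0, 1e-3, 1e-6, 1e-9):
        print(human_hours, output / human_hours)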
On the other hand, we may need more practical/theoretical breakthroughs to be able to build AI models that are reliable and precise, so they stop making up stuff "whenever they feel like it." Unfortunately, the timing of breakthroughs is not predictable. Maybe it will take months. Maybe it will take a decade. No one knows for sure.
It left off ingredients. The very gentle singularity…
I also often retest new models with tasks old models failed, and see some improvements. I really liked “format this SQL generated by my ORM and explain it” last week.
I honestly have no insight on if the tasks it is failing to do are right around the corner or if they are decades away.
https://www.snexplores.org/article/explainer-where-fossil-fu...
Oh right, Sam Altman.
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before
Real wages haven’t risen since 1980. Wealth inequality has. Most people have much less political power than they used to as wealth - and thus power - have become concentrated. Today we have smartphones, but also algorithm-driven polarization and a worldwide rise in authoritarian leaders. Depression and anxiety affect roughly 30% of our population.
The rise of wealth inequality and the stagnation of wages corresponds to the collapse of the labor movement under globalization. Without a counterbalancing force from workers, wealth accrues to the business class. Technological advances have improved our lives in some ways but not on balance.
So if we look at people’s well-being, society as whole hasn’t progressed since the 1980s; in many ways it’s gotten worse. Thus the trajectory of progress described in the blog post is make believe. The utopia Altman describes won’t appear. Mass layoffs, if they happen, will further concentrate wealth. AI technology will be used more and more for mass surveillance, algorithmic decision making (that would make Kafka blush), and cost cutting.
What we can realistically expect is lowering of quality of life, an increased shift to precarious work, further concentration of wealth and power, and increasing rates of suffering.
What we need instead of science fiction is to rebuild the labor movement. Otherwise “value creation” and technology’s benefits will continue to accrue to a dwindling fraction of society. And more and more it will be at everyone else’s expense.
Source: https://fred.stlouisfed.org/graph/?g=1JxBn
Details: uses the "Wage and salary accruals per full-time-equivalent employee" time series, which is the broadest wage measure for FTE employees, and adjusts for inflation using the PCE price index, which is the most economically meaningful measure of "how much did prices change for consumers" (and is the inflation index that the Fed targets)
Source with details: https://fred.stlouisfed.org/graph/?g=1JxIa
Then calculate cumulative inflation as the proportional change in the price level, like this:
(P_final - P_initial) / P_initial = (125.880 - 37.124) / 37.124 = 2.39
This shows that the overall price level (the cumulative inflation embodied in the PCEPI) has increased by about 2.39 times over the period, which is 239%.
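The same arithmetic in a few lines of Python, using the two index values quoted above (the wage figure at the end is purely hypothetical, just to show how the deflator gets applied):

    # PCEPI index levels from the calculation above (start and end of the period).
    p_initial = 37.124
    p_final = 125.880

    price_ratio = p_final / p_initial            # ~3.39x price level
    cumulative_inflation = price_ratio - 1       # ~2.39, i.e. ~239% increase
    print(f"price ratio: {price_ratio:.2f}x, cumulative inflation: {cumulative_inflation:.0%}")

    # Deflating a nominal wage back to start-of-period dollars:
    def real_wage(nominal_wage: float, price_index: float, base_index: float = p_initial) -> float:
        return nominal_wage * base_index / price_index

    print(real_wage(60_000, p_final))  # hypothetical nominal wage, expressed in base-period dollars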
Overall, historical comparisons of inflation numbers are so imprecise as to be practically worthless the longer the timescale. You can expect the real figure to be much greater in reality for consumers, given the political incentive to lie over inflation data.
3.39 - 1 (to account for the increase) = 2.39 ≈ 239%
I do agree that it makes no logical sense to couple medical insurance to employment. This system was created sort of accidentally as a side effect of wartime tax law and has persisted mainly due to inertia.
That’s to say nothing of the fact that millions are uninsured in the US and have limited access to necessary medical treatment, never mind “cutting edge” treatments.
is a very US thing. In China they've probably 10xd over that time.
China: ~$10,000 – $12,000
US: ~$74,580 (U.S. Census Bureau, 2022)
Actually I doubt that economists even tried to calculate PPP of China 40 years ago because (even back then) the basket of goods used in the PPP calculation probably included gasoline and cars and such, which only the economic top 1% of China could afford 40 years ago, but if you forced the calculation somehow, you'd probably arrive at a GDPPPP/person not much lower than the current GDPPPP/person (i.e., China has grown spectacularly in GDP/person, but not in GDPPPP/person)
That shift opens the possibility of GDPPPP changes in excess (or under) strict GDP per capita growth.
Lovely tabulation of the data here: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)...
The US is kind of an outlier.
I guess if you have the opportunity to have your stuff made by cheap or free labour whether low paid Chinese or AI robots, societies have a choice how to distribute the benefits which has varied from everything to the rich in the US to fairly even in France say. Such choices will be important with AI.
That's why we have a whole thing about immigration going on. It's the one issue that the president is not underwater on right now [2]. You can't get much of a labor movement like this.
[1] https://www.texasobserver.org/white-people-rural-arkansas-re...
[2] https://www.natesilver.net/p/trump-approval-ratings-nate-sil...
I think the idea of a law that only allows a limited number of owned properties per person and requires them to actually be using those properties would be interesting to alleviate this.
This also goes without mentioning the restrictions on "just" building new housing (land, time, and space, particularly space located near job sites).
"Californians, used to spending crazy high prices for property"
Please dig into why that statement is true and re-read your parent statement. Your analysis can't just abruptly stop there. It all goes back to housing supply.
How did Hacker News already forget these things?
Do people really believe that? I think people either have too rosy a view of the 80s or consider that real wages should also adjust for lifestyle inflation.
Sure it will, as far as Altman is concerned. To make the whole post make sense, add "... for the rich" where appropriate.
>Air quality (no more leaded gasoline)
>Life expectancy
>Cancer survival rates
>Access to information
>Infant mortality
>Violent crime rates across the western world
>Access to education
>Clean water and food for 4+ billion people
>HIV treatment
>etc
The negativity on this site is insane. They will deny the greatest scientific achievements if it lets them dunk on AI or whoever is the enemy of the week.
No "the world" won't be getting richer. A small subset of individuals will be getting richer.
The "new policy ideas" (presumably to help those who are being f*d by all this) have been there all along. It's just that those with the wealth don't want to consider them. Those people having even more wealth does _not_ make them more likely to consider those ideas.
Honestly this drivel makes me want to puke.
If the "mistake" is that of concentrating too much power in too few hands, there's no recovery. Those with the willingness to adapt will not have the power to do so, and those with the power to adapt will not have the willingness. And it feels like we're halfway there. How do we establish a system of checks and balances to avoid this?
I bet he wants to be the first to "plug in" and become the first AI enhanced human.
> AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present
And then nothing substantial after this proclamatory hot-take. So let’s just choose to believe le ai propheté.
It reads like quick instructions to a PR team (then inflated by an LLM), written from the comfort of a warm and cozy Japanese seat.
Definitely not written by Gemini at least. Usually it does a better job than this. Well, at least, like Zuck, he eats his own food that he killed.
Will be looking forward to titles like “The Vehement Duality” &c in the near future.
So when AGI comes, I am curious what the new jobs are?
I see that prompt engineer is one of the jobs created, because it's the way to ask an LLM to do certain tasks, but now AI can do this too.
I'm thinking that any new jobs AI would make, AI would just take them anyway.
Are there new jobs coming from this abundance that is on the horizon?
There's no reason to be confident that such a future will arrive without difficult moral questions, or that it's as simple as a #define FREE_WILL 0 .
Yes: we still have a long way to go in restoring equilibrium to the climate and producing more sustainable alternatives to our current cities, products and materials. It's a megaproject which could take hundreds of years, and will require plenty of human involvement.
The planet is going to be fine either way, but if capitalism doesn't figure out how to price in externalities, it will slowly run out of human consumers.
All that I hope for in this case is that governments actually take this seriously and labs/governments/people work together to create better societal systems to handle that. Because as it stands, under capitalism I don't think anyone is going to willingly give up the wealth they made from AI to spread to the populace as UBI. This is necessary in some capitalist system (if we want to maintain that) since it's built on consumption and spending.
Though if it's truly an "abundance" scenario then I'd imagine it probably wouldn't matter that people don't have jobs, since I'd assume everything would be dirt cheap and quality of life would be very high. Though personally I am very cynical when it comes to "AGI is magic pixie dust that can solve any problem" takes, and I'd assume in the short term companies will lay off people in swathes since "AI can do your job," but AI will be nowhere close to increasing those laid-off people's quality of life. It'll be a tough few years if we don't actually get transformative AI.