Example: QAnon.
Because conspiracy theories and populism are like a sugar hit to people who don't want to think too deeply.
> Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate)
I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and that a byte or less is the more plausible estimate, but I would like to see a more rigorous argument that a synapse/neuron can't possibly have, say, a gigabyte of memory that it can access at the speed of computation.
The author has a note where they handwave away the possibility that chemical processes could meaningfully increase the operations per second, and I'm comfortable with that, but this point:
> Perhaps a more serious point is that that neurons often have rather complex time-integration properties
Seems more interesting, especially in the context of whether there's dramatically more storage available in neurons/synapses: if a neuron can do, say, a few operations per minute over 1 GB of data per synapse (which sounds absurdly high, but just for the sake of argument).
And I think putting in some absurdly generous upper bounds might be helpful. Since we're clearly past 100 TOPS, the question becomes: how many H100s would you need if we made some absurd suppositions about the capacity of human synapses and neurons? It seems like we probably have enough. But I also think you could make a case that only some of the largest supercomputing clusters can actually match the upper bound for the capacity of a single human brain.
Although I think someone might be able to convince me that a manageable cluster of H100s already meets the most generous possible upper bound.
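To put rough numbers on that, here is a back-of-envelope sketch; the synapse count, firing rate, per-signal fudge factor, and H100 throughput are all order-of-magnitude assumptions rather than measurements:

```python
# Back-of-envelope: how many H100s to match a deliberately generous
# upper bound on brain "compute"? Every constant here is an assumption.

synapses = 1e15           # upper-end synapse count (estimates run ~1e14-1e15)
firing_rate_hz = 100      # signals per synapse per second (Bostrom's figure)
ops_per_signal = 100      # generous fudge factor for per-synapse complexity

brain_ops_per_s = synapses * firing_rate_hz * ops_per_signal   # ~1e19 ops/s

h100_ops_per_s = 1e15     # ~1 PFLOP/s dense FP16, give or take

print(f"brain upper bound: {brain_ops_per_s:.0e} ops/s")
print(f"H100s needed:      {brain_ops_per_s / h100_ops_per_s:,.0f}")
# -> on the order of 10,000 H100s, i.e. a large but existing training cluster
```

Under those deliberately generous assumptions you land at around ten thousand H100s, i.e. a large but already-existing training cluster.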
That changes the calculus likely very little, but it feels more accurate.
1. https://www.cell.com/current-biology/pdf/S0960-9822(16)30489...
One could even argue you should only compare it back to the discovery of writing or similar.
Anyway, humanoid robots should be big in the next 10-20 years. The compute, the batteries, the algorithms are all coming together.
If I had to give an estimate, I would consider not so much the time taken to date as the current state of our knowledge of how the brain works, and how that knowledge has grown over the last decades. There is almost nothing we know as little about as the human brain and how thoughts are represented in it, modern imaging techniques notwithstanding.
If that's the bar, then anything else can fit in "a few decades", since that also rests "ON TOP of millions of years".
Although that's not looking at memory, and I'd also be interested in some explanation there: a 5090 has 32 GB, whereas a human brain has more like a petabyte of memory assuming 1 byte/synapse, which is to say a million GB. In which case even a large cluster of H100s has an absurd number of TOPS but nowhere near enough high-speed memory.
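The same kind of sketch for the memory side; the synapse count and bytes-per-synapse figures are assumptions for the sake of argument, and 80 GB is an approximate HBM capacity per H100:

```python
# Back-of-envelope: brain memory vs. GPU high-bandwidth memory.
# Both constants are assumptions for the sake of argument.

synapses = 1e15               # upper-end synapse count estimate
bytes_per_synapse = 1         # the "1 byte/synapse" assumption above

brain_bytes = synapses * bytes_per_synapse     # ~1 PB
h100_hbm_bytes = 80e9                          # an H100 carries roughly 80 GB of HBM

print(f"brain memory:       {brain_bytes / 1e15:.1f} PB")
print(f"H100s for capacity: {brain_bytes / h100_hbm_bytes:,.0f}")
# -> ~12,500 H100s' worth of HBM just to hold it once, before activations
# or redundancy; with the more generous per-synapse estimates floated
# above, the gap widens by several orders of magnitude.
```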
Perhaps AI companies don’t know how to run continuous learning on their models:
* it’s unrealistic to do it for one big model because it will instantly start shifting in an unknown direction
* they can’t make millions of clones of their model, run them separately and set them free like it happens with humans
My feeling is we have enough compute for ASI already, but not algorithms like the brain's. I'm not sure if it'll get solved by smart humans analysing it or by something like AlphaEvolve (https://news.ycombinator.com/item?id=43985489).
One advantage of computers being much quicker than needed is you can run lots of experiments.
Just the power requirements make me think current algorithms are pretty inefficient compared to the brain.
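For a sense of scale, a rough comparison with ballpark wattages and a hypothetical 10,000-GPU cluster:

```python
# Rough power comparison; both wattages and the cluster size are assumptions.

brain_watts = 20           # commonly cited estimate for the human brain
h100_watts = 700           # roughly the board power of one H100 SXM
cluster_gpus = 10_000      # hypothetical cluster size for the sake of argument

cluster_mw = cluster_gpus * h100_watts / 1e6
print(f"brain:   {brain_watts} W")
print(f"cluster: {cluster_mw:.0f} MW (GPUs alone, ignoring cooling and hosts)")
# -> roughly 7 MW vs. 20 W: a gap of a few hundred thousand times
```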
Even the most rudimentary AI would pick this up these days, ironically enough.
And that matches what we expect theoretically: of the difficult problems we can model mathematically, the vast majority benefit sub-linearly from a linear increase in processing power. And of the processes we can model in the physical world, many are chaotic in the formal sense, in that a linear increase in processing power provides a sublinear increase in the distance ahead in time that we can simulate. Such computational complexity results are set in stone, i.e. no amount of hand-wavy "superintelligence" could sort an array of arbitrary comparables in O(log(n)) time, any more than it could make 1+1=3.
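A toy illustration of that sub-linear benefit, assuming nothing more than a brute-force search over an exponentially large space (the compute figures are arbitrary):

```python
import math

# Toy illustration: for a brute-force search over 2**n candidates,
# multiplying compute by k only adds about log2(k) to the largest n
# you can handle in the same wall-clock time.

def max_n(ops_per_second: float, seconds: float) -> int:
    """Largest n such that 2**n candidate checks fit in the budget."""
    return int(math.log2(ops_per_second * seconds))

base = 1e15                          # ~one modern accelerator, order of magnitude
for k in (1, 1_000, 1_000_000):      # 1x, 1,000x, 1,000,000x more compute
    print(f"{k:>9,}x compute -> n ~ {max_n(k * base, seconds=3600)}")
# A million times more hardware buys only ~20 extra units of problem size.
```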
You don't get useful intelligence unless the software is also fit for purpose. Slow hardware can still outperform broken software.
Social status depends on factors like good looks, charm, connections, and general chutzpah, often with more or less overt hints of narcissism. That's an orthogonal set of skills to being able to do tensor calculus.
As for an impending AI singularity - no one has the first clue what the limits are. We like to believe in gods, and we love stories about god-like superpowers. But there are all kinds of issues which could prevent a true singularity - from stability constraints on a hypercomplex recursive system, to resource constraints, to physical limits we haven't encountered yet.
Even if none of those are a problem, for all we know an ASI may decide we're an irrelevance and just... disappear.
That's simply untrue. Theoretical computer scientists understand the lower bounds for many classes of problems, and know that for many problems it's mathematically impossible to significantly improve performance with only a linear increase in computing power, regardless of the algorithm/brain/intelligence. Many problems wouldn't even benefit much from a superlinear increase in computing power, because of the nature of exponential growth. For a chaotic system in the mathematical sense, where prediction grows exponentially harder with time, even exactly predicting one minute ahead could require more compute than could be provided by turning the entire known universe into a computer.
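A minimal sketch of the chaos point, using the logistic map as a stand-in chaotic system; the starting point, tolerance, and error sizes are arbitrary choices for illustration:

```python
# The logistic map at r=4 is chaotic: small initial errors grow roughly
# exponentially, so each extra digit of initial-condition precision buys
# only a constant number of additional predictable steps.

def steps_until_divergence(initial_error: float, tolerance: float = 0.1) -> int:
    x, y = 0.3, 0.3 + initial_error
    steps = 0
    while abs(x - y) < tolerance and steps < 10_000:
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        steps += 1
    return steps

for exponent in (3, 6, 9, 12):       # initial error 1e-3, 1e-6, 1e-9, 1e-12
    n = steps_until_divergence(10.0 ** -exponent)
    print(f"initial error 1e-{exponent:<2} -> ~{n} predictable steps")
# Each 1000x improvement in precision adds only about 10 more steps.
```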
But these don't really address the near-term question of "What if growth in AI capabilities continues, but becomes greatly sub-exponential in terms of resources spent?", which would put a huge damper on all the "AI takeoff" scenarios. Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.
Because they never give a rigorous definition of intelligence. The most rigorous definition in psychology is the G factor, which correlates with IQ and the ability to solve various tasks well, and which empirically shows diminishing returns in terms of productivity.
A more general definition is "the relative ability to solve problems (and relative speed at solving them)". Attempting to model this mathematically inevitably leads into theoretical computer science and computational complexity, because that's the field that tries to classify problems and their difficulty. But computational complexity theory shows that only a small class of the problems we can model achieve linear benefit from a linear increase in computing power, and of the problems we can't model, we have no reason to believe they mostly fall in this category. Whereas believers implicitly assume that the vast majority of problems fall into that category.
https://www.cremieux.xyz/p/brief-data-post?open=false#%C2%A7...
To me, that's a really good definition. "Much smarter than the best human brains in practically every field". It's going to be hard to weasel around that.
> Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.
This depends a great deal on what the shape of the processor-power-vs-better-algorithm curve is. If the AI can get you 1% better algorithms, and that gets you an AI that can in turn get you 0.5% better algorithms, and so on, then yes, you're still getting "a further boost", but it won't matter much.
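As a toy version of that curve, using the 1%-then-halving numbers from the comment above:

```python
# If each AI generation improves the algorithms by half as much as the
# previous one (1%, then 0.5%, then 0.25%, ...), the feedback loop
# converges to a modest total gain rather than exploding.

gain = 0.01
total = 1.0
for generation in range(50):
    total *= 1 + gain
    gain /= 2

print(f"total improvement after 50 generations: {(total - 1) * 100:.2f}%")
# -> about 2%: still "a further boost", but nowhere near a takeoff
```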
The article is super focused on the hardware side of things, and to a point, that makes sense. Your hardware has to be able to handle what you're simulating.
But it's not the hardware that's the difficult problem. We're nowhere close to hitting the limits of scaling hardware capability, and every time people declare that we are, they're proven wrong in just a few years, and sometimes even in just a few months.
It's the software. And we're so far away from being able to construct anything that could think like a human being that the beginning of it isn't even in sight.
LLMs are fantastic, but they're not a path to building something more intelligent than a human being, "Superintelligence". I would have a negative amount of surprise if LLMs are an evolutionary dead end as far as building superintelligence goes.
Is modeling neuron interactions the only way to achieve it? No idea. But even doing that for the number of neurons in a human brain is currently in fantasy land and most likely will be for at least a few decades, if not longer.
If I had to characterize the current state of things, we're like Leonardo Da Vinci and his aerial screw. We know what a helicopter could be and have ideas about how it could work, but the supporting things required to make it happen are a long, long way off.
Would note that we've only recently crossed Bostrom's 10^17 ops line [1].
To my knowledge, we don't have 10^14 to 10^17 ops of computing available for whole-brain simulation.
https://news.engineering.utoronto.ca/human-powered-ornithopt...
It's my understanding that we (as a species) are far from understanding what intelligence is and how it works in ourselves.
How are we going to model an unknown in a way that allows us to write software that logically represents it?
To me, things like MuZero (which learns Go etc. without even being told the rules) and LLMs recently getting gold at the math olympiad suggest we are quite close to something that can think like a human. Not quite there, but not a million miles off either.
Both of those, in human terms, involve thinking and are beyond what I can do personally. MuZero is already superintelligent at board games, but current AI can't do things like tidy your room or fix your plumbing. I think superintelligence will be achieved gradually in different specialities.
> like Leonardo Da Vinci and his aerial screw
That didn't function. Current AI functions to quite a large extent. I think we are maybe more like people trying to build something that will soar like an eagle, while what we presently have is the Wright brothers' plane making it 200 m.
No matter how impressive you find current LLMs, even if you're the sort of billionaire who predicts AGI before the end of 2025[0], the mechanism that Bostrom describes in this article is completely irrelevant.
We haven't figured out how to simulate human brains in a way that could create AI, and we're not anywhere close; we've just done something entirely different.
[0] Yes, I too think most of this is cynical salesmanship, not honest foolishness.
Whether or not LLMs are the correct algorithm, the hardware question is much more straightforward and that's what this paper is about.
> Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.
> The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.
> Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.
But still I think you're not engaging with the article properly - it doesn't say we will, it just talks about how much computing power you might need. And I think within the paper it suggests we don't have enough computing power yet, but it doesn't seem like you read deeply enough to engage with that conversation.
> This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieve a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
The paper very clearly suggests an estimate of the required hardware power for a particular strategy of imitating the brain. And it very clearly predicts we will achieve superintelligence by 2033.
If that strategy is a non-starter, which it is for the foreseeable future, then the hardware estimate is irrelevant, because the strategies we have available to us may require orders of magnitude more computing power (or even may simply fail to work with any amount of computing power).