Humans are the bar for general intelligence.
All of that just sounds hard, not mathematically impossible.
As I understand it, this is mostly a rehash of the dated Lucas-Penrose argument, which most philosophy-of-mind researchers reject.
So because of this we know reality is governed by maths. We just can't fully model the high-level consequences of emergent patterns, due to the sheer complexity of trillions of interacting atoms.
So it's not that there's some mysterious supernatural thing we don't understand. It's purely a complexity problem: we don't understand it only because it's too complex.
What does humility have to do with anything?
Speak for yourself. LLMs are a feedforward algorithm running inference over static weights to produce a tokenized response string.
We can compare that pretty trivially to the dynamic relationship of neurons and synapses in the human brain. It's not similar, case closed. That's the extent of serious discussion that can be had comparing LLMs to human thought, with apologies to Chomsky et al. It's like trying to find the anatomical differences between a medieval scribe and a fax machine.
If we're OK with descriptions so lossy that they fit in a sentence, we also understand the human brain:
An electrochemical network with external inputs and some feedback loops, pumping ions around to trigger voltage cascades to create muscle contractions as outputs.
No. The question is already settled: AI is not a brain; we can show this by characterizing both and using heuristic reasoning.
That "can" should be "could", else it presumes too much.
For both human brains and surprisingly small ANNs, far smaller than LLMs, humanity collectively does not yet know the defining characteristics of the aspects we care about.
I mean, humanity doesn't agree with itself on what any of the three initials of AGI mean, there are 40 definitions of the word "consciousness", there are arguments about whether there is exactly one or many independent G-factors in human IQ scores, and about whether those scores mean anything beyond correlating with school grades, and human neurodivergence covers various real states of existence that many of us find incomprehensible (sometimes mutually; see e.g. most discussions where aphantasia comes up).
The main reason I expect little from an AI is that we don't know what we're doing. The main reason I can't just assume the least is that evolution didn't know what it was doing either when we popped out.
https://www.reddit.com/r/singularity/comments/1lbbg0x/geoffr...
https://youtu.be/qrvK_KuIeJk?t=284
In the video above, Geoffrey Hinton directly says we don't understand how it works.
So I don't speak just for myself. I speak for the person who ushered in the AI revolution; I speak for experts in the field who know what they're talking about. I don't speak for people who don't know what they're talking about.
Even though we know it's a feedforward network and we know how the queries are tokenized, you cannot tell me what an LLM will say, nor why it said something for a given prompt. That shows we can't fully control an LLM, because we don't fully understand it.
Don't try to just argue with me. Argue with the experts. Argue with the people who know more than you: Hinton.
And that's okay - his humility isn't holding anyone back here. I'm not claiming to have memorized every model weight ever published, either. But saying that we don't know how AI works is empirically false; AI genuinely wouldn't exist if we weren't able to interpret and improve upon the transformer architecture. Your statement here is a dangerous extrapolation.
> you cannot tell me what an LLM will say, nor why it said something for a given prompt. That shows we can't fully control an LLM, because we don't fully understand it.
You'd think this, but it's actually wrong. If you remove all of the seeded RNG during inference (meaning: no random seeds, no temperature sampling, just weights and tokenizer), you can actually write down a function that deterministically gives you the same string of text every time. It's a lot of math, but it's wholly possible to compute exactly what the AI will say ahead of time if you solve for the seeded entropy, or remove it entirely.
LLM weights and the tokenizer are both fixed and deterministic; it's the inference software that often introduces randomness for more varied responses. Just so we're on the same page here.
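A minimal sketch of that point, assuming the Hugging Face transformers API and using "gpt2" purely as a stand-in model: with sampling disabled, generation is just repeated argmax over fixed weights, so the output string is the same on every run (modulo hardware-level numerical quirks).

    # Sketch: greedy decoding with sampling disabled is deterministic.
    # Assumes the Hugging Face transformers API; "gpt2" is only a stand-in model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, do_sample=False, max_new_tokens=10)

    # Same string every run: each step takes the argmax over fixed weights.
    print(tok.decode(out[0]))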
Your statement completely contradicts Hinton's statement. You didn't even address his point. Basically you're saying Hinton is wrong and you know better than him. If so, counter his argument; don't restate your argument in the form of an analogy.
> You'd think this, but it's actually wrong.
No, you're just trying to twist what I'm saying into something that's wrong. First, I never said it's not deterministic. All computers are deterministic, even RNGs. I'm saying we have no theory about it. A plane, for example: you can predict its motion via a theory. The theory allows us to understand and control an airplane and predict its motion. We have nothing like that for an LLM. No theory that helps us predict, no theory that helps us fully control it, and no theory that helps us understand it beyond the high-level abstraction of a best-fit curve in multidimensional space. All we have is an algorithm that allows an LLM to self-assemble as a side effect of emergent effects.
Rest assured I understand the transformer as much as you do (which is to say humanity has limited understanding of it); you don't need to assume I'm just going off Hinton's statements. He and I know and understand LLMs as much as you do, even though we didn't invent them. Please address what I said and what he said with a counter-argument, not an analogy that just reiterates an identical point.
Care to elaborate? Because that is utter nonsense.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
Prove or give a counter-example of the following statement:
In three space dimensions and time, given an initial velocity field, there exist a vector velocity field and a scalar pressure field, both smooth and globally defined, that solve the Navier–Stokes equations.
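For reference, the system in question (standard incompressible form, with ν the viscosity and f an external force; roughly the setting of the Clay Millennium problem):

    \frac{\partial u}{\partial t} + (u \cdot \nabla)\, u = -\nabla p + \nu \, \Delta u + f(x, t),
    \qquad \nabla \cdot u = 0,
    \qquad u(x, 0) = u_0(x), \quad x \in \mathbb{R}^3,\ t \ge 0.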
The above is a video clip of Hinton basically contradicting what you’re saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed that people responding to me are rude and completely dismiss me, and I don't get good-faith responses or intelligent discussion. I find that if people realize their statements contradict the statements of established industry experts, they tend to respond more charitably.
So please respond to me as if you had just told Hinton to his face that what he said is utter nonsense, because what I said is based on what he said. Thank you.
> So because of this we know reality is governed by maths.
That's not really true. You have a theory, and let's presume so far it's consistent with observations. But it doesn't mean it's 100% correct, and doesn't mean at some point in the future you won't observe something that invalidates the theory. In short, you don't know whether the theory is absolutely true and you can never know.
Without an absolutely true theory, all you have is belief or speculation that reality is governed by maths.
> What does humility have to do with anything?
Not the GP but I think humility is kinda relevant here.
Let me rephrase it. As far as we know, all of reality is governed by the principles of logic and therefore math. This is the most likely possibility, and we have based all of our technology, culture, and science around it. It is the fundamental assumption humanity has made about reality. We cannot consistently demonstrate a disproof of this assumption.
>Not the GP but I think humility is kinda relevant here.
How so? If I assume all of reality is governed by math but you don't, how does that make me not humble and you humble? Seems personal.
You make broadly valid points, particularly about the advantages of embodiment, but I just don't think they're good responses to the theoretical article under discussion (or the comment that you were responding to).
Some processes are undoubtedly learned from experience, but considering that people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.
AI currently has issues with seeing what's missing. Seeing the negative space.
When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles. You look at things from data structures, from code execution paths; basically, humans clearly have some pressure to go, "fuck, I think I lost the plot," and then approach it from another paradigm, try to narrow scope, or, with the increased information, isolate the core place where edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to translate data between semantic and visual structures, time series, and light algorithms (but not exponential algorithms; we have a known blind spot there).
Humans are better at seeing what's missing, better at not reaching premature closure, and better at reducing scope using many different approaches, and because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences people can be as low as 93% similar genetically.
We also have different brain structures, and I assume they don't all function on a single algorithmic substrate: visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts that handle illogic better. We can introspect on our own semantic saturation; we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, and we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing, and even then the message type used seems flexible enough that you can shove word data into a visual processing part and see what falls out, and this happens without us thinking about it explicitly.
It's a deeply philosophical question what constitutes a subjective experience of "green" or whatever... but intelligence is a bit more tractable IMHO.
Similar to how "computer code" and "video game world" are the same thing. Everything in the video game world is perfectly encoded in the programming. There is nothing transcendent happening, it's two different views of the same core object.
Have you not met the average person on the street? (/s)
Well, it in fact depends on what intelligence is to your understanding:
- If intelligence = IQ, i.e. the rational ability to infer, to detect/recognize and extrapolate patterns etc., then AI is or will soon be more intelligent than us, while we humans are just muddling through, or were simply lucky to have found relativity theory and other innovations at the convenient moment in time... So then, AI will soon also stumble onto all kinds of innovations. Neither will be able to deliberately think beyond what is thinkable at the respective present.
- But if intelligence is not only a level of pure rational cognition, but perhaps an ability to somehow overcome these frame-limits, then humans obviously exert some sort of ability that is beyond rational inference. Abilities that algorithms cannot possibly reach, as all they can do is compute.
- Or: intelligence = IQ, but it turns out to be useless in big, pivotal situations where you’re supposed to choose the “best” option — yet the set of possible options isn’t finite, knowable, or probabilistically definable. There’s no way to defer to probability, to optimize, or even to define what “best” means in a stable way. The whole logic of decision collapses — and IQ has nothing left to grab onto.
The main point is: neither algorithms nor rationality can point beyond themselves.
In other words: You cannot think out of the box - thinking IS the box.
(Maybe have a quick look at my first proof - last chapter before the conclusion - you will find a historical timeline on that IQ thing.)
More interestingly, humans are capable of assessing the results of their "neural misfires" ("hmm, there's something to this"), whereas even if we could make a computer do such mistakes, it wouldn't know its Penny Lane from its Daddy's Car[0], even if it managed to come up with one.
And we can get LLMs to do better by just prompting them to "think step by step" or replacing the first ten attempts to output a "stop" symbolic token with the token for "Wait… "?
I’ll read the paper but the title comes off as out of touch with reality.
Physics gives us a way to answer questions about nature, but it is not nature itself. It is also, so far (and probably forever), incomplete.
Math doesn't need to agree with nature, we can take it as far as we want, as long as it doesn't break its own rules. Physics uses it, but is not based on it.
The laws of physics can, as far as I can tell, be described using mathematics. That doesn't mean that we have a perfect mathematical model of the laws of physics yet, but I see no reason to believe that such a mathematical model shouldn't be possible. Existing models are already extremely good, and the only parts for which we don't yet have essentially perfect mathematical models are in areas where we don't yet have the equipment necessary to measure how the universe behaves. At no point have we encountered a sign that the universe is governed by laws which can't be expressed mathematically.
This necessarily means that everything in the universe can also be described mathematically. Since the human experience is entirely made up of material stuff governed by these mathematical laws (as per the assumption in the first paragraph), human intelligence can be described mathematically.
Now there's one possible counter to this: even if we can perfectly describe the universe using mathematics, we can't perfectly simulate those laws. Real simulations have limitations on precision, while the universe doesn't seem to. You could argue that intelligence somehow requires the universe's seemingly infinite precision, and that no finite-precision simulation could possibly give rise to intelligence. I would find that extremely weird, but I can't rule it out a priori.
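As a toy illustration of the precision point only (no claim about brains): a chaotic map run at two floating-point precisions drifts apart within a few dozen steps, so any finite-precision simulation of a chaotic system eventually stops tracking the "true" trajectory.

    # Toy example: the logistic map at r = 4 is chaotic, so float32 and float64
    # trajectories started from the same value diverge after a few dozen steps.
    import numpy as np

    x32 = np.float32(0.1)
    x64 = np.float64(0.1)
    for step in range(1, 61):
        x32 = np.float32(4.0) * x32 * (np.float32(1.0) - x32)
        x64 = 4.0 * x64 * (1.0 - x64)
        if step % 15 == 0:
            print(step, float(x32), float(x64), abs(float(x32) - float(x64)))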
I'm not a physicist, and I don't study machine intelligence, nor organic intelligence, so I may be missing something here, but this is my current view.
I'm just saying you're mistaking the thing for the tool we use to describe the thing.
I'm also not talking about simulations.
Epistemologically, I'm talking about unknown unknowns. There are things we don't know, and we still don't know we don't know them yet. Math and physics deal with known unknowns (we know we don't know) and known knowns (we know we know) only. Math and physics do not address unknown unknowns until they become known unknowns (we did not tackle quantum mechanics until we discovered it).
We don't know how humans think. It is a known unknown, tackled by many sciences, but so far, incomplete in its description. We think we have a good description, but we don't know how good it is.
If you think there are potential flaws in this line of reasoning other than the ones I already covered, I'm interested to hear.
Also, a simulation is not the thing. It's a simulation of the thing. See? The same issue. You're mistaking the thing for the tool we use to simulate the thing.
You could argue that the universe _is_ a simulation, or computational in nature. But that's speculation, not very different epistemologically from saying that a magic wizard made everything.
I don't understand what fundamental difference you see between a thing governed by a set of mathematical laws and an implementation of a simulation which follows the same mathematical laws. Why would intelligence be possible in the former but fundamentally impossible in the latter, aside from precision limitations?
FWIW, nothing I've said assumes that the universe is a simulation, and I don't personally believe it is.
Again, you're mistaking the thing for the tool we use to describe the thing.
> aside from precision limitations
It's not only about precision. There are things we don't know.
--
I think the universe always obeys rules for everything, but that's an educated guess. There could be rules we don't yet understand that are outside of what mathematics and physics can know. Again, there are many things we don't know. "We'll get there" is only good enough when we get there.
The difference is subtle. I require proof, you seem to be ok with not having it.
Also, interesting timing of this post - https://news.ycombinator.com/item?id=44348485
I can respect the first argument. I personally don't see any reason to believe AGI is impossible, but I also don't see evidence that it is possible with the current (very impressive) technology. We may never build an AGI in my lifetime, maybe not ever, but that doesn't mean it's not possible.
But the second argument, that humans do something machines aren't capable of always falls flat to me for lack of evidence. If we're going to dismiss the possibility of something, we shouldn't do it without evidence. We don't have a full model of human intelligence, so I think it's premature to assume we know what isn't possible. All the evidence we have is that humans are biological machines, everything follows the laws of physics, and yet, here we are. There isn't evidence that anything else is going on other than physical phenomenon, and there isn't any physical evidence that a biological machine can't be emulated.
AI via LLMs has limitations, but they don't come from computability.
[1] https://sortingsearching.com/2021/07/18/roger-penrose-ai-ske...
But this isn’t that, as I’m not making a claim about consciousness or invoking quantum physics or microtubules (which, I agree, are highly speculative).
The core of my argument is based on computability and information theory — not biology. Specifically: that algorithmic systems hit hard formal limits in decision contexts with irreducible complexity or semantic divergence, and those limits are provable using existing mathematical tools (Shannon, Rice, etc.).
So in some way, this is the non-microtubule version of AI critique. I don’t have the physics background to engage in Nobel-level quantum speculation — and, luckily, it’s not needed here.
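For anyone who wants the formal anchor behind the "Rice" reference, the theorem in its usual statement (my paraphrase, not the paper's exact wording) is:

    \text{For every non-trivial semantic property } P \text{ of partial computable functions,}
    \text{the set } L_P = \{\, e \mid \varphi_e \text{ has property } P \,\} \text{ is undecidable.}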
I hate "stopped reading at x" type comments but, well, I did. For those who got further, is this paper interesting at all?
Still an interesting take and will need to dive in more, but already if we assume the brain is doing information processing then the immediate question is how can the brain avoid this problem, as others are pointing out. Is biological computation/intelligence special?
Humans don't have the processing power to traverse such vast spaces. We use heuristics, in the same way a chess player does not iterate over all possible moves.
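The back-of-the-envelope version of the chess point (the classic Shannon game-tree estimate, assuming roughly 35 legal moves per position over an 80-ply game):

    35^{80} = 10^{80 \log_{10} 35} \approx 10^{80 \times 1.54} \approx 10^{123}

No player searches anything like that space; evaluation heuristics and pruning stand in for exhaustive traversal.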
It's a valid point to make, however I'd say this just points to any AGI-like system having the same epistemological issues as humans, and there's no way around it because of the nature of information.
Stephen Wolfram's computational irreducibility is another one of the issues any self-guided, physically grounded computing engine must have. There are problems that need to be calculated whole. Thinking long and hard about possible end-states won't help. So one would rather have 10000 AGIs doing somewhat similar random search in the hope that one finds something useful.
I guess this is what we do in global-scale scientific research.
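A minimal sketch of what computational irreducibility looks like in practice, using Wolfram's Rule 30 as the usual example: as far as anyone knows there is no shortcut formula for a given cell at step n, so you have to actually run all n steps.

    # Wolfram's Rule 30: new cell = left XOR (center OR right).
    # Standard example of computational irreducibility: to know the state at
    # step n, you (as far as anyone knows) must iterate through all n steps.
    WIDTH, STEPS = 79, 20
    row = [0] * WIDTH
    row[WIDTH // 2] = 1  # single live cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = [row[(i - 1) % WIDTH] ^ (row[i] | row[(i + 1) % WIDTH])
               for i in range(WIDTH)]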
> Anyway, this is not part of the questions this paper seeks to answer. Neither will we wonder in what way it could make sense to measure the strength of a model by its ability to find its relative position to the object it models. Instead, we chose to stay ignorant - or agnostic? - and take this fallible system called "human". As a point of reference.
Cowards.
That's the main counter argument and acknowledging its existence without addressing it is a craven dodge.
Assuming the assumptions[1] are true, human intelligence isn't even able to be formalized under the same pretext.
Either human intelligence isn't:
1. Algorithmic. The main point of contention. If humans aren't algorithmically reducible, even at the level of the computation of physics, then human cognition is supernatural.
2. Autonomous. Trivially true given that humans are the baseline.
3. Comprehensive (general): Trivially true since humans are the baseline.
4. Competent: Trivially true given humans are the baseline.
I'm not sure how they reconcile this given that they simply dodge the consequences that it implies.
Overall, not a great paper. It's much more likely that their formalism is wrong than their conclusion.
Footnotes
1. not even the consequences, unfortunately for the authors.
–Are we treating an arbitrary ontological assertion as if it’s a formal argument that needs to be heroically refuted? Or better: is that metaphysical setup an argument?
If that’s the game, fine. Here we go:
– The claim that one can build a true, perfectly detailed, exact map of reality is… well... ambitious. It sits remarkably far from anything resembling science, since it’s conveniently untouched by that nitpicky empirical thing called evidence. But sure: freed from falsifiability, it can dream big and give birth to its omnicartographic offspring.
– oh, quick follow-up: does that “perfect map” include itself? If so... say hi to Alan Turing. If not... well, greetings to Herr Goedel.
– Also: if the world only shows itself through perception and cognition, how exactly do you map it “as it truly is”? What are you comparing your map to — other observations? Another map?
– How many properties, relations, transformations, and dimensions does the world have? Over time? Across domains? Under multiple perspectives? Go ahead, I’ll wait... (oh, and: hi too.. you know who)
And btw the true detailed map of the world exists.... It’s the world.
It’s just sort of hard to get a copy of it. Not enough material available ... and/or not enough compute....
P.S. Sorry if that came off sharp — bit of a spur-of-the-moment reply. If you want to actually dig into this seriously, I’d be happy to.
ICBTheory•4h ago
The idea is called IOpenER: Information Opens, Entropy Rises. It builds on Shannon’s information theory to show that in specific problem classes (those with α ≤ 1), adding information doesn’t reduce uncertainty — it increases it. The system can’t converge, because meaning itself keeps multiplying.
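A quick numerical way to see the α ≤ 1 effect (a rough sketch for illustration, not the formal proof in the paper): for a truncated power law p(k) ∝ k^(−α) over k = 1…N, the Shannon entropy keeps growing as the symbol space N widens when α ≤ 1, while it levels off for α > 1.

    # Sketch: entropy of a truncated power law p(k) ~ k^(-alpha), k = 1..N.
    # For alpha <= 1 the entropy keeps growing with N; for alpha > 1 it saturates.
    import numpy as np

    def truncated_powerlaw_entropy(alpha, N):
        k = np.arange(1, N + 1, dtype=float)
        w = k ** (-alpha)
        p = w / w.sum()
        return float(-(p * np.log2(p)).sum())

    for N in (10**2, 10**4, 10**6):
        print(N,
              round(truncated_powerlaw_entropy(0.8, N), 2),  # heavy tail: keeps growing
              round(truncated_powerlaw_entropy(2.0, N), 2))  # light tail: levels off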
The core concept — entropy divergence in these spaces — was already present in my earlier paper, uploaded to PhilArchive on June 1. This version formalizes it. Apple’s study, The Illusion of Thinking, was published a few days later. It shows that frontier reasoning models like Claude 3.7 and DeepSeek-R1 break down exactly when problem complexity increases — despite adequate inference budget.
I didn’t write this paper in response to Apple’s work. But the alignment is striking. Their empirical findings seem to match what IOpenER predicts.
Curious what this community thinks: is this a meaningful convergence, or just an interesting coincidence?
Links:
This paper (entropy + IOpenER): https://philarchive.org/archive/SCHAIM-14
First paper (ICB + computability): https://philpapers.org/archive/SCHAII-17.pdf
Apple’s study: https://machinelearning.apple.com/research/illusion-of-think...
ben_w•3h ago
As you note in 2.1, there is widespread disagreement on what "AGI" means. I note that you list several definitions which are essentially "is human equivalent". As humans can be reduced to physics, and physics can be expressed as a computer program, obviously any such definition can be achieved by a sufficiently powerful computer.
For 3.1, you assert:
"""
Now, let's observe what happens when an AI system - equipped with state-of-the-art natural language processing, sentiment analysis, and social reasoning - attempts to navigate this question. The AI begins its analysis:
• Option 1: Truthful response based on biometric data → Calculates likely negative emotional impact → Adjusts for honesty parameter → But wait, what about relationship history? → Recalculating...
• Option 2: Diplomatic deflection → Analyzing 10,000 successful deflection patterns → But tone matters → Analyzing micro-expressions needed → But timing matters → But past conversations matter → Still calculating...
• Option 3: Affectionate redirect → Processing optimal sentiment → But what IS optimal here? The goal keeps shifting → Is it honesty? Harmony? Trust? → Parameters unstable → Still calculating...
• Option n: ....
Strange, isn't it? The AI hasn't crashed. It's still running. In fact, it's generating more and more nuanced analyses. Each additional factor may open ten new considerations. It's not getting closer to an answer - it's diverging.
"""
Which AI? ChatGPT just gives an answer. Your other supposed examples have similar issues, in that it looks like you've *imagined* an AI rather than having asked an AI to see what it actually does or doesn't do.
I'm not reading 47 pages to check for other similar issues.
vessenes•3h ago
That said, the most obvious objection that comes to mind about the title is that … well, I feel that I’m generally intelligent, and therefore general intelligence of some sort is clearly not impossible.
Can you give a short précis as to how you are distinguishing humans and the “A” in artificial?
ICBTheory•2h ago
Well, given the specific way you asked that question, I confirm your self-assertion - and am quite certain that your level of Artificiality converges to zero, which would make you a GI without the A...
- You stated that you "feel" generally intelligent (A's don't feel and don't have an "I" that can feel)
- Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity
A "précis" as you wished: Artificial — in the sense used here (apart from the usual "deliberately built/programmed system" etc.) — means algorithmic, formal, symbol-bound.
Humans as "cognitive systems" have some similar traits of course - but obviously, there seems to be more than that.
kevin42•22m ago
I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, therefore: AGI is not possible since machines don't have it.
I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?
What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?
WhitneyLand•3h ago
No it doesn’t.
Shannon entropy measures statistical uncertainty in data. It says nothing about whether an agent can invent new conceptual frames. Equating “frame changes” with rising entropy is a metaphor, not a theorem, so it doesn’t even make sense as a mathematical proof.
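For reference, that quantity is just

    H(X) = -\sum_{x} p(x) \log_2 p(x)

defined over a fixed symbol set with a known distribution; nothing in the definition speaks to inventing new frames.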
This is philosophical musing at best.
ICBTheory•2h ago
But the paper doesn’t just restate Shannon.
It extends this very formalism to semantic spaces where the symbol set itself becomes unstable. These situations arise when (a) entropy is calculated across interpretive layers (as in LLMs), and (b) the probability distribution follows a heavy-tailed regime (α ≤ 1). Under these conditions, entropy divergence becomes mathematically provable.
This is far from being metaphorical: it's backed by formal Coq-style proofs (see Appendix C in the paper).
AND: it is exactly the mechanism that can explain the results in Apple's paper.