General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.
Not morally, not practically. Mathematically.
The argument splits across three barriers:
1. Computability (Gödel, Turing, Rice): you can't decide what your system can't see.
2. Entropy (Shannon): beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): most real-world problems are fundamentally incompressible.
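For barrier (1), the classic illustration is Turing's diagonal argument. Here's a minimal Python sketch (my own illustration, not code from the paper) with a hypothetical `halts` oracle that the argument shows cannot exist:

```python
def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) halts.
    Turing's argument shows no such function can be implemented."""
    raise NotImplementedError("no total halting decider exists")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about `program`
    # when fed itself as input.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    # ...and halt immediately if the oracle says "loops".

# diagonal(diagonal) would halt if and only if it doesn't halt --
# a contradiction, so `halts` cannot exist as a total function.
```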
This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
In other words: you can’t generalize from what can’t be compressed.
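You can see the intuition with any off-the-shelf compressor: regular data has a short description, random data does not. A quick standard-library sketch (my illustration, not the paper's):

```python
import os
import zlib

structured = b"ab" * 500         # 1000 bytes with an obvious pattern
random_bytes = os.urandom(1000)  # 1000 bytes of noise

# The patterned string shrinks dramatically; the random one barely moves
# (with overwhelming probability) -- it's already near its shortest description.
print(len(zlib.compress(structured)))    # a couple dozen bytes
print(len(zlib.compress(random_bytes)))  # still close to 1000
```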
⸻
Here’s the abstract:
There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
This is not a performance issue. It's a mathematical wall, and it doesn't care how many tokens you've got.
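The standard counting argument behind the incompressibility claim fits in a few lines: there are 2^n bit strings of length n, but fewer than 2^(n-k) programs shorter than n-k bits, so at most a 2^-k fraction of strings can be compressed by k bits or more. A sketch of the arithmetic (my notation, not the paper's):

```python
from fractions import Fraction

def fraction_compressible(n, k):
    """Upper bound on the fraction of n-bit strings describable by a
    program at least k bits shorter than the string itself."""
    short_programs = 2 ** (n - k) - 1  # count of all bit strings shorter than n-k bits
    return Fraction(short_programs, 2 ** n)

print(float(fraction_compressible(50, 10)))  # just under 1/1024: <0.1% of strings
```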
The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.
https://philpapers.org/archive/SCHAII-18.pdf
Happy to read your views.
baq•6mo ago
In practical terms, the result doesn't matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.
delusional•6mo ago
Even that isn't needed. A "general intelligence" separable from ethics and rights is valuable in itself. It's valuable to subjugate, as long as the subjugated object produces more than it consumes.
mindcrime•6mo ago
Maybe I'm just being pedantic, but I'd argue that there's no particular reason to say that AGI involves "approximating the human thought process". That is, what matters is the result, not the process. If one can "get there" in a completely different manner than the human mind, then great.
That said, obviously there is some appeal to the "mimic human thought" approach since human thought is currently an existence proof that the kind of intelligence we are talking about is possible at all and mimicking that does seem like an obvious path to try.
aetherson•6mo ago
If you aren't arguing for a non-materialist position, then the distinction between "artificial" and human intelligence isn't meaningful. A powerful enough computer could simulate the material processes in your brain. If, as the OP claims, it is mathematically impossible for a computer to generate intelligence, no matter how powerful that computer, then it is impossible for your brain to do so (via material processes).
DragonStrength•6mo ago
Every generation tries to map its most complex technology onto its understanding of nature. "AGI" has a specific meaning today, but if you want it to mean atheism versus theism or whatever materialist argument, you're far outside of science and technology. Like our fathers of the Enlightenment with their watchmaker god. The idea that there is some way for humans to break free of nature seems like a religious belief to me, but whether you agree or not, there is certainly room for doubting that faith, since we're outside the realm of what science can explore.
aetherson•6mo ago
But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.
DragonStrength•6mo ago
> How is your brain doing it then?
Do you have an answer? What indication do we have that any AGI we create would have to follow the same process to achieve the result? Can humans recreate all phenomena observed in the universe? You're arguing yes to all of these, then? I'd love to read more of that argument. I don't care about this proof, though. I don't think I've indicated that I think AGI is impossible. I care far more about why someone would be convinced it must be possible for humans to recreate intelligence in exactly the manner of the brain, which this commenter and you seem to think. I know humans have not shown we can fully model our observations cohesively.
> But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.
*record scratch* What? Did I agree to all of these premises? Do you have some backing for the three or four assumptions you've made in this sentence? You still need to show that humans not only could be but are capable of replicating the system. I am asking you for some argument that says humans can replicate everything we observe in nature in the exact same way it occurs. That's a much stronger statement than "such intelligence exists." You can just link a book. I am not sold one way or the other, but you seem very confident. Is there some argument I can read? To me, our models in physics point to a fragmented and contradictory understanding of our world that still gets results. Yes, results are results, but that doesn't mean we are doing anything but modeling -- and can we model everything? Is that the implication of evolution?
We seem to be wandering into capital-S Science vs. science, and I'm not really into religious discussions here. I would love to understand why you seem to think I'm so dimwitted as to dismiss with an edge, when all of this stems from a glib reply to a glib reply that I am no less convinced is in fact glib and fatuous. (And that original comment was not yours, lest you feel insulted in the same way you have insulted me.)
bsindicatr•6mo ago
Quantum entanglement?:
https://www.popularmechanics.com/science/a65368553/quantum-e...
And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”
delusional•6mo ago
Wasn't this essentially Gödel's conclusion? Math, based on a set of axioms, either has to accept that there are statements that are true but can't be proven, or that it can prove statements that aren't true.