General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.
Not morally, not practically. Mathematically.
The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): you can’t decide what your system can’t see.
2. Entropy (Shannon): beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): most real-world problems are fundamentally incompressible.
This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
In other words: you can’t generalize from what can’t be compressed.
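To make that concrete, here is a minimal sketch (mine, not from the paper) that uses an off-the-shelf compressor as a crude upper bound on Kolmogorov complexity: structured data compresses to almost nothing, while random data barely compresses at all.

    import os
    import zlib

    # Compressed size over original size: a cheap upper bound on K(x)/|x|.
    def compression_ratio(data: bytes) -> float:
        return len(zlib.compress(data, 9)) / len(data)

    n = 100_000
    structured = b"ab" * (n // 2)   # a short description exists: "repeat 'ab'"
    random_data = os.urandom(n)     # incompressible with overwhelming probability

    print(f"structured: {compression_ratio(structured):.4f}")   # near 0
    print(f"random:     {compression_ratio(random_data):.4f}")  # near 1

No compressor can do better than K(x), so a ratio stuck near 1 is evidence (not proof, since K is uncomputable) that no short description exists.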
⸻
Here’s the abstract:
There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
This is not a performance issue. It’s a mathematical wall. And it doesn’t care how many tokens you’ve got.
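For readers who want the formal backbone behind "most strings are incompressible": it is the standard counting argument (not specific to this paper; K(x) below is the usual notation for Kolmogorov complexity). There are at most 2^(n−c) − 1 binary programs shorter than n − c bits, but 2^n strings of length n, so fewer than a 2^(−c) fraction of them can be compressed by c or more bits:

    \[
    \#\{\, x \in \{0,1\}^n : K(x) < n - c \,\}
      \;\le\; \sum_{i=0}^{n-c-1} 2^{i}
      \;=\; 2^{\,n-c} - 1
      \;<\; 2^{-c} \cdot 2^{n}.
    \]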
The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.
https://philpapers.org/archive/SCHAII-18.pdf
Happy to hear your views.
baq•6h ago
In practical terms, the result doesn’t matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.
delusional•5h ago
Even that isn't needed. A "general intelligence" separable from ethics and rights is valuable in itself. It's valuable to subjugate, as long as the subjugated object produces more than it consumes.
mindcrime•5h ago
Maybe I'm just being pedantic, but I'd argue that there's no particular reason to say that AGI involves "approximating the human thought process". That is, what matters is the result, not the process. If one can "get there" in a completely different manner than the human mind does, then great.
That said, there is obviously some appeal to the "mimic human thought" approach, since human thought is currently an existence proof that the kind of intelligence we are talking about is possible at all, and mimicking it does seem like an obvious path to try.
aetherson•5h ago
If you aren't arguing for a non-materialist position, then the distinction between "artificial" and human intelligence isn't meaningful. A powerful enough computer could simulate the material processes in your brain. If, as the OP claims, it is mathematically impossible for a computer to generate intelligence, no matter how powerful that computer, then it is impossible for your brain to do so (via material processes).
DragonStrength•4h ago
Every generation tries to map its most complex technology onto its understanding of nature. "AGI" has a specific meaning today, but if you want it to mean atheism versus theism or whatever materialist argument, you're far outside of science and technology. Like our fathers of the Enlightenment with their watchmaker god. The idea that there is some way for humans to break free of nature seems like a religious belief to me, but whether you agree or not, there is certainly room for doubting that faith, since we're outside the realm of what science can explore.
bsindicatr•5h ago
Quantum entanglement?
https://www.popularmechanics.com/science/a65368553/quantum-e...
And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”
delusional•5h ago
Wasn't this essentially Gödel's conclusion? Math, based on a set of axioms and strong enough to express arithmetic, either has to accept that there are statements that are true but can't be proven, or it ends up proving statements that aren't true.
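For reference, the usual statement of the first incompleteness theorem (my paraphrase, not the commenter's words): any consistent, recursively axiomatizable theory T containing basic arithmetic (e.g. Robinson's Q) leaves some sentence undecided:

    \[
    T \text{ consistent, r.e., } T \supseteq \mathsf{Q}
    \;\Longrightarrow\;
    \exists\, \varphi \;\; T \nvdash \varphi \ \text{ and } \ T \nvdash \neg\varphi .
    \]

The other horn of the dichotomy above ("proofs that aren't true") corresponds to giving up consistency instead.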