Ejabberd 25.07 / ProcessOne – Erlang Jabber/XMPP/Matrix Server – Communication

https://www.process-one.net/blog/ejabberd-25-07/
1•neustradamus•2m ago•0 comments

Trump Seeks to Cut Basic Scientific Research by Roughly One-Third, Report Shows

https://www.nytimes.com/2025/07/10/science/trump-science-budget-cuts.html
1•zzzeek•4m ago•0 comments

Distributed Cache for S3

https://clickhouse.com/blog/building-a-distributed-cache-for-s3
1•zX41ZdbW•7m ago•0 comments

Show HN: Free Online Text Compare – Instantly Spot Differences Between Two Texts

1•rahulbstomar•8m ago•0 comments

Workaround for Claude Code running `python` instead of `uv`

https://solmaz.io/log/2025/07/13/claude-code-python-override/
2•hosolmaz•11m ago•0 comments

America's soccer dad has some advice for the White House

https://www.politico.com/news/2025/07/13/world-cup-usa-2026-alan-rothenberg-1994-00448727
1•srameshc•17m ago•0 comments

Jekyll Companion App

https://hiyd.uk
1•TheChelsUK•21m ago•0 comments

Reflecting on PLDI 2025

https://people.csail.mit.edu/rachit/post/pldi-2025/
1•chriscbr•23m ago•0 comments

Databento

https://databento.com/
1•handfuloflight•25m ago•0 comments

Show HN: BloomSearch – Keyword search with hierarchical bloom filters

https://github.com/danthegoodman1/bloomsearch
1•dangoodmanUT•25m ago•0 comments

Illegal loggers profit from Brazil's carbon credit projects

https://www.reuters.com/business/environment/illegal-loggers-profit-brazils-carbon-credit-projects-2025-07-07/
1•Qem•26m ago•0 comments

Why Did Cars Get So Hard to See Out Of?

https://www.bloomberg.com/news/articles/2025-07-10/why-did-cars-get-so-hard-to-see-out-of-blame-the-a-pillars
3•pseudolus•26m ago•3 comments

How to cut U.S. residential solar costs in half

https://pv-magazine-usa.com/2025/07/11/how-to-cut-u-s-residential-solar-costs-in-half/
1•westurner•28m ago•0 comments

Zig's new I/O: function coloring is inevitable?

https://blog.ivnj.org/post/function-coloring-is-inevitable
3•ivanjermakov•30m ago•0 comments

I wrote a hypergraph causality framework

https://deepcausality.com/blog/towards-undamental-causality/
1•marvin-hansen•33m ago•0 comments

Study on the dynamics of an origami space plane during Earth atmospheric entry

https://www.sciencedirect.com/science/article/pii/S0094576525004047
1•bookofjoe•34m ago•0 comments

Floorp Browser

https://floorp.app/en-US
2•thunderbong•36m ago•0 comments

A cellular entity retaining only its replicative core

https://www.biorxiv.org/content/10.1101/2025.05.02.651781v1
3•gmays•37m ago•1 comments

Plasma proteomics links brain and immune system aging with healthspan

https://www.nature.com/articles/s41591-025-03798-1
1•baxtr•37m ago•0 comments

Eromanga Sea

https://en.wikipedia.org/wiki/Eromanga_Sea
1•nyc111•39m ago•0 comments

Microsoft Tay (Chatbot)

https://en.wikipedia.org/wiki/Tay_(chatbot)
1•microsoftedging•40m ago•0 comments

Parachute use prevents death when jumping from aircraft: randomized controlled trial

https://www.bmj.com/content/363/bmj.k5343
9•Bluestein•45m ago•2 comments

A quick look at unprivileged sandboxing

https://www.uninformativ.de/blog/postings/2025-07-13/0/POSTING-en.html
2•zdw•46m ago•0 comments

Facemash.in – Indian Facebook

1•iharshgarg•48m ago•2 comments

Aircela creates synthetic gasoline from thin air

https://abc7.com/post/aircela-uses-proven-science-create-synthetic-gasoline-thin-air/17005782/
3•lxm•49m ago•0 comments

Top Emerging Technologies of 2025 [pdf]

https://reports.weforum.org/docs/WEF_Top_10_Emerging_Technologies_of_2025.pdf
2•gmays•50m ago•0 comments

The Cost of Human-Centric Tools in LLM Workflows

https://www.joshbeckman.org/blog/the-hidden-cost-of-humancentric-tools-in-llm-workflows
2•bckmn•50m ago•1 comments

Efforts to Reconstruct Edo Castle Tower Keep Enter 18th Year

https://www.tokyoweekender.com/art_and_culture/japanese-culture/efforts-to-reconstruct-edo-castle-tower-keep-enter-18th-year/
1•PaulHoule•51m ago•0 comments

The beauty entrepreneur who made the Jheri curl a sensation

https://thehustle.co/originals/the-beauty-entrepreneur-who-made-the-jheri-curl-a-sensation
1•Anon84•52m ago•0 comments

NK's fake tech workers targeting European employers with help from UK operatives

https://www.theregister.com/2025/04/02/north_korean_fake_techies_target_europe/
6•Bluestein•53m ago•0 comments

AGI Is Mathematically Impossible (3): Kolmogorov Complexity

30•ICBTheory•7h ago
Hi folks. This is the third part in an ongoing theory I’ve been developing over the last few years called the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): You can’t decide what your system can’t see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.

This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.

In other words: you can’t generalize from what can’t be compressed.
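To make the compression intuition concrete, here is a toy Python sketch (not from the paper) that uses zlib as a rough, computable stand-in for Kolmogorov complexity, which is itself uncomputable. A structured string has a short description; random bytes are, with overwhelming probability, their own shortest description:

```python
# Toy illustration of the compression intuition. zlib only gives an upper
# bound on Kolmogorov complexity (K itself is uncomputable), but the gap
# between structured and random data is already visible.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size, zlib level 9."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the cat sat on the mat. " * 1000  # highly regular
random_ish = os.urandom(len(structured))         # incompressible with high probability

print(f"structured: {compression_ratio(structured):.3f}")  # ~0.01: a short description exists
print(f"random:     {compression_ratio(random_ish):.3f}")  # ~1.00: no shorter description found
```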

⸻

Here’s the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It’s a mathematical wall. And it doesn’t care how many tokens you’ve got.
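For context, the standard counting argument behind "most strings are incompressible" is elementary: there are 2^n binary strings of length n but fewer than 2^n programs shorter than n bits, so at most a 2^-k fraction of strings can be compressed by k or more bits. A trivial sketch of that bound (standard textbook material, not specific to the paper):

```python
# Counting bound: at most a 2**-k fraction of n-bit strings can have a
# description shorter than n - k bits, because there are only
# 2**(n-k) - 1 programs that short, versus 2**n strings.
def max_fraction_compressible(k: int) -> float:
    """Upper bound on the fraction of strings compressible by >= k bits."""
    return 2.0 ** -k

for k in (1, 8, 32):
    print(f"compressible by >= {k:2d} bits: at most {max_fraction_compressible(k):.2e}")
# compressible by >=  1 bits: at most 5.00e-01
# compressible by >=  8 bits: at most 3.91e-03
# compressible by >= 32 bits: at most 2.33e-10
```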

The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your view.

Comments

Tuna-Fish•6h ago
How is your brain doing it then?
automatic6131•6h ago
I could believe we're not generally intelligent.
Tuna-Fish•5h ago
Then your definition of general intelligence is useless.
Retric•4h ago
Being accurate is hardly useless. Individual people are worse at some tasks than others beyond what past training alone would produce.
baq•6h ago
It follows that it doesn’t.

In practical terms, the result doesn’t matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.

hahn-kev•5h ago
I don't think it even needs to be faster: if you can make an artificial brain that's useful, even if it's 100x slower, you can always run stuff in parallel.
trhway•5h ago
It is already faster, because you don't need to wait through 12 years of K-12 and 4 years of college before it can produce meaningful answers.
delusional•5h ago
> you can always run stuff in parallel.

Even that isn't needed. A "general intelligence" separable from ethics and rights is valuable in itself. It's valuable to subjugate, as long as the subjugated object produces more than it consumes.

Veen•5h ago
That's one possible inference. However, it would also be consistent to claim that there is a fundamentally uncomputable and impossible-to-artificially-replicate "mechanism" underlying human intelligence.
baq•4h ago
My inner philosopher agrees. My inner engineer doesn’t care; a good enough approximation will suffice.
mindcrime•5h ago
> The race to approximate the human thought process and call it AGI (which is what matters economically) is on

Maybe I'm just being pedantic, but I'd argue that there's no particular reason to say that AGI involves "approximating the human thought process". That is, what matters is the result, not the process. If one can find another way to "get there", in a completely different manner than the human mind, then great.

That said, obviously there is some appeal to the "mimic human thought" approach, since human thought is currently an existence proof that the kind of intelligence we are talking about is possible at all, and mimicking that does seem like an obvious path to try.

stogot•5h ago
Doing artificial?
Tuna-Fish•5h ago
The brain still exists in the same universe, running on the same mathematical laws. The "A" adds no constraints that can make anything impossible. There is nothing the brain is doing that we cannot also replicate; we just cannot get to the same scale yet.
DragonStrength•5h ago
The "A" stands for "Artificial" in contrast to what our brains do.
aetherson•5h ago
If you want to argue for a fundamentally non-material mind (i.e., that human cognition happens on some physically impossible, spiritual plane), then cool. Though you might want to give some consideration to how much it seems like physical processes in the brain can demonstrably affect cognition.

If you aren't arguing for a non-materialist position, then the distinction between "artificial" and human intelligence isn't meaningful. A powerful enough computer could simulate the material processes in your brain. If, as the OP claims, it is mathematically impossible for a computer to generate intelligence, no matter how powerful that computer, then it is impossible for your brain to do so (via material processes).

DragonStrength•4h ago
There's no reason to assume our current pursuit is not a dead end, for any number of reasons we do not yet understand. There is a lot of faith that we are capturing the same thing, based on perceptions, which have a lot to do with the individual observer. It seems very important to some folks that our natural process is a mirror of what our technology does. The same result does not mean it is the same process, or anything other than a mirage -- though one that may trick a lot of us.

Every generation tries to map its most complex technology onto its understanding of nature. "AGI" has a specific meaning today, but if you want it to mean atheism versus theism or whatever materialist argument, you're far outside of science and technology. Like our fathers of the Enlightenment with their watchmaker god. The idea there is some way for humans to break free of nature seems like a religious belief to me, but whether you agree or not, certainly there is room for doubting that faith, since we're outside the realm of what science can explore.

aetherson•2h ago
"Current LLMs are not going to get to AGI" is a different and much weaker claim than "AGI is mathematically impossible."
DragonStrength•34m ago
I was responding to the claim that an observer bound by a system may understand and replicate all phenomena within that system. It's quite a bold claim, which has already exited the bounds of science, IMHO. That you're using the language of religion and philosophy is the point.
mindcrime•5h ago
That's kind of a distinction with no distinction though, in this context. Our brains are physical machines, and computers are physical machines. Sure, one is wetware and based on chemistry, biology, and some electricity, while the other is based on electricity, logic gates, and bits and bytes, but still... if one can be intelligent, there doesn't seem to be any particular reason to think that the other can't as well.
DragonStrength•4h ago
Oh certainly there is: why assume we will ever be able to fully replicate the natural process? We may very well be bound here by our own bodies.
00deadbeef•5h ago
My brain isn’t artificial. I hope.
Aardwolf•5h ago
"Artificial" is just the distinction between biologically made and made by humans; does that mathematically matter?
bsindicatr•5h ago
> How is your brain doing it then?

Quantum entanglement?:

https://www.popularmechanics.com/science/a65368553/quantum-e...

And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”

Tuna-Fish•5h ago
Quantum entanglement is not magic that lets you bypass mathematics. Anything that's true with it is also true without it.
delusional•5h ago
> then mathematics is fallible

Wasn't this essentially the conclusion of Gödel? Math, based on a set of axioms, will either have to accept that there are things that are true but can't be proven, or that there are proofs of things that aren't true.

quotemstr•5h ago
Even if the brain were using quantum voodoo (and it's not: it's too messy and too hot), a machine could use the same techniques to implement AGI.
he0001•5h ago
Wouldn’t it be possible that not all brains can do it all, but some can specialize in certain problems? And when combined with everyone else’s, we can approach general intelligence?
motorest•5h ago
I think that any paper that argues something is impossible is fundamentally flawed, particularly when there are examples of it being possible.

Also, what's the point of telling others you believe what they are doing is impossible, especially after the results we are seeing even at the free-tier, open-to-the-public services?

Veen•5h ago
What examples are there of the possibility of artificial general intelligence?
mrjay42•5h ago
You might want to check out the works of that buzzkill that Gödel is ^^

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

" The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. "

:3

motorest•5h ago
> You might want to check out the works of that buzzkill that Gödel is ^^

Please explain why you believe this is relevant to the points I've made.

he0001•5h ago
It argues about “impossibilities” and also proves them.
motorest•3h ago
I think you need to read it again.
he0001•1h ago
Gödel wrote his theorem to test David Hilbert’s endeavor, Logic and the Foundation of Mathematics[0], to unify mathematics. Gödel proved that it is impossible to do.

But you may have a different version of history.

[0] https://www.famousscientists.org/david-hilbert/

calf•5h ago
I'll bite: there was a Curt Jaimungal interview yesterday explaining that the Navier-Stokes fluid equations are not only unpredictable (chaotic), but also uncomputable (in the Turing sense), if I recall it correctly.

But I take that to mean there's no general, universal algorithm to tell us anything we want to know. And that's not what intelligence is; we're not defining some kind of absolute intelligence, like an oracle for the halting problem. That definition would be a category error.
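(For readers who haven't seen it: the diagonalization that rules out such a halting oracle fits in a few lines. The Python below is a sketch; `halts` is the assumed oracle, and the point is that no correct, total implementation of it can exist.)

```python
# Classic diagonalization sketch: why a general halting oracle cannot exist.
def halts(program, arg) -> bool:
    """Assumed oracle: returns True iff program(arg) would halt."""
    raise NotImplementedError  # no correct total implementation can exist

def diag(program):
    """Do the opposite of whatever the oracle predicts for program(program)."""
    if halts(program, program):
        while True:  # oracle said "halts", so loop forever
            pass
    # oracle said "loops", so halt immediately

# Does diag(diag) halt? If halts(diag, diag) returns True, diag(diag) loops
# forever; if it returns False, diag(diag) halts. Either answer contradicts
# the oracle, so it cannot exist.
```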

pama•5h ago
I didn't read your draft paper, but your premise as stated on HN sounds a bit off to me. AGI does not assume the ability to find or learn an optimal solution to every problem (with that assumption it would be trivial to prove it impossible in many different ways). Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI. (If there exist a couple of incompressible problems along the way, you can memorize the human solution.) If you proved AGI impossible under such a (weaker?) definition, you would prove that humans can no longer improve in any domain (as the set of all humans is a general intelligence). Or you would need to assume that there is something special inside humans, which no technology can ever build. I disagree with both premises.
mindcrime•5h ago
> Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI.

Exactly. There's this "thing" you see in certain circles, where people (intentionally?) misinterpret the "G" in AGI as meaning "the most general possible intelligence". But that's not the reality. AGI has pretty much always been taken to mean "AI that is approximately human level". Going beyond that is getting into the realm of Artificial Super Intelligence, or Universal Artificial Intelligence.

orwin•5h ago
What I remember is that a lot of people used the word 'AI', others (including me) said 'that's not intelligence, it's too specific', and poof, a new word came to replace the word AI: 'AGI', meaning an AI that can adapt to new, unforeseen situations.

The LLM that will convince me that AGI is near is one that understands language well enough to find the linguistic rules of conlangs it wasn't trained on (or more specifically, engineered languages made by linguists with our current knowledge of how languages work), and to create grammatically correct sentences. That's something a trained person can do with great effort, though it's more due to breaking habits and limited brainpower than real complexity.

jhanschoo•4h ago
I think that OP's conclusion may be true in a not very meaningful sense: once a particular non-trivial threshold of competence is defined for every task (infinitely many), then any policy must be bad at some of them.
marvin-hansen•5h ago
Okay, I read the abstract and intro. Recently, in the paper

"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"

your thesis of AI's lack of capacity to abstract, or at least to extract understanding from noisy data, was largely confirmed experimentally. I am uncertain, though, about the exact mechanics, because as they used LLMs it's not transparent what happened internally that led to the constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of the Oracle, which literally enabled the LLM to solve the task that was previously impossible without it; this means it is at least possible that LLMs can reconstruct known rules. They just can't find new ones.

On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still made, since Judea Pearl already established about seven years ago in "Theoretical Impediments to Machine Learning" that all correlation-based methods are doomed, as they fail to understand anything. His point about causality is well placed, but it will not solve the problem either.

The question I have, though: if we ignore all existing methods for one moment, what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing allowed incomplete information to be reconstructed; would your assertion still hold true?

https://arxiv.org/abs/2507.06952 https://arxiv.org/abs/1801.04016

mindcrime•5h ago
> AGI Is Mathematically Impossible

Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, whose only shortcoming is that it requires infinite compute. But the quest of the AGI project is not "universal intelligence" but simply intelligence that approximates that of humans. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.

> Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

So you're saying the human brain can do something infinite then?

Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(

[1]: https://en.wikipedia.org/wiki/AIXI

chrsw•4h ago
Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.

The problem I see isn't that AGI isn't possible; that's not even surprising. The problem is that the term "AGI" caught on when people really meant "AHI", or artificial _human_ intelligence, which is fundamentally distinct from AGI.

AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable, but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs, or any other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.

Eddy_Viscosity2•4h ago
> Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.

The above is nonsense. Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't. That's what medical science is for. We used our intelligence to figure out antibiotics, for example.

The broadly accepted meaning of AGI is human intelligence on a machine. Redefining it to mean something else does nothing useful.

chrsw•1h ago
> Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't.

I think in some sense you're right. There's a higher level way to address disease that humans have made progress on.

But in the critical sense more related to the point I was making, I completely disagree. The sense I'm speaking of is that we do not (maybe one day we will) directly affect disease states with our brains the way our immune system does. It's a very complicated process that we know works, but we absolutely do not understand all the mechanisms involved the way we do with, say, solving a calculus equation.

My point is, if we could do this, our central nervous system would also be the immune system. But it is not, because the immune system operates in an entirely different cognitive space than our conscious brain. There are many examples of this, like regulating your body's blood sugar: we know the endocrine system is doing this, but we are not actively involved in the way we are when, say, speaking to one another. The examples are actually countless and go far beyond what the human cognitive system is currently capable of. AGI, by definition, would have to not only encompass intelligence in all these different cognitive spaces but also encompass intelligence in any arbitrary future space.

> The broadly accepted meaning of AGI is human intelligence on a machine.

Then the name is inaccurate to the point of deception. You've just described artificial human intelligence, not artificial general intelligence.

Eddy_Viscosity2•2m ago
Human intelligence is general intelligence.

Just because we don't have direct conscious control over our white blood cells or pancreas does not mean we don't have general intelligence. We may not control them, but we have the ability to figure out how they work. Our intelligence is general in the sense that we can understand body functions, or invent calculus, or develop relativity, or do any unlistable number of other things.

seu•4h ago
> another "system based on the laws of our physical universe."

Since when is mathematics based on the laws of our physical universe? Last time I checked, it's an abstract system with no material reality.

jacknews•4h ago
The OP isn't comparing maths.

They say humans exist and are intelligent; therefore intelligence is possible in this universe. And it might well be possible in other configurations within this universe, on computers for example.

edanm•2h ago
Either humans are not generally intelligent, or they are, in which case they're an existence proof of general intelligence. Math really has nothing to do with it beyond the most basic statements of logic.
glimshe•4h ago
This. First thing that came to my mind when I read the headline. Sounds like someone saying "Birds fly but we can't make planes because flying is mathematically impossible".

Or.. "After Johnny read the paper humanity disappeared in a puff of logic"

kamaal•4h ago
>>Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe.

Using your analogy, what this means is that we have to make humans to make human-like intelligence, not that we can make human-like intelligence outside of humans.

>>Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

What exactly does the brain do? Part of the problem with this is that language itself might be insufficient to describe intelligence. And language might be working a level below our thought. There are occasions where even the best of us fail to articulate how we think; we can get close, and it's not enough. A picture is worth a thousand words - why? Perhaps language is enough to display signs of intelligence, but it can't entirely contain or describe it.

Similarly, even in the case of LLMs, we have seen that showing spatial intelligence is a whole lot different than predicting text.

Heck, intelligence might not even be one monolithic thing. It could be a collection of several intelligences. And this whole idea of one grand AGI monolith could be wrong.

rotten•5h ago
The human brain does not have perfect memory. It is not always logical. And more often than not it is motivated and influenced by "external" forces - health, hunger, sex drive, environmental conditions, luck, spiritual inspiration, or whatever. The perfect worker is purely logical and has perfect memory and no external influences - never gets hungry or sick or wants to be the boss themselves. The AI race is funded by folks interested in creating the perfect worker, not a human. I have to agree with the conclusions of this paper that they won't be able to make humans. (But they don't really want to.) The Vatican has also published interesting works on this idea. The question is - if you take out everything that makes it human, can you call it intelligent?
PeterStuer•5h ago
At first I was thinking, let's see if an argument is made that is not applicable to GI, whether artificial or not, and if not, why even mention AI at all?

Then I started to read the paper, and it's worse.

Every one of his 'examples' would not just be 'solved' by any existing LLM; even a 'dumb' system that just spits out a random sentence in response to any question would pass his first two 'tests' with flying colors. I'm not kidding, he accepts "Leave the classroom and stop confusing everybody with your senseless questions" as a good solution.

In fact, the only system that would fail is this hypothetical AI he imagines that somehow gets stuck in infinite analysis loops.

Then his third test, an investment decision, gives the same outcome as his own, up until the point where he draws in extra information not available to the AI. At that point he flips his 'answer', labels it 'correct', and labels the previous answer, based on the original info, 'false' because he made some money on the bet a few weeks later. Seriously?

mkl•5h ago
The physics of our brains can in principle be simulated at a subatomic quantum level mathematically, even on a classical computer. It would be absurdly expensive and slow with current technology, but it is mathematically possible. Therefore our own generally intelligent brains can be considered a counterexample.

I think for your theory to hold up, you would need to show that physics cannot, even in principle, be simulated mathematically at sufficient scale (the number of interacting subatomic particles). That would be surprising.

At the moment it seems like your results contradict reality, meaning your starting assumptions cannot all be true.

al45tair•4h ago
And even if OP could show that physics couldn’t be simulated, it still wouldn’t follow that AGI was impossible, or even that it couldn’t be achieved by approximating the simulation that was proved to be impossible to do accurately.

AGI is clearly possible, because our brains are fundamentally machines, and there’s no reason in principle why we couldn’t build something similar. Right now we don’t - as human beings - have the ability to do that, but it clearly isn’t impossible since cellular machinery is able to build it in the first place.

xbmcuser•5h ago
My pet theory is that AGI is not possible until we have real quantum computing.
geldedus•5h ago
There is a thing called quantum computing. So nope.
tom_morrow•5h ago
I tried to understand your paper, but could not.

Then I understood why not. Your paper proves that I am unable to understand your paper. It also proves that you are unable to understand your paper.

anthk•3h ago
Consciousness = intrinsic information evaluating itself.

Like eval/apply under Lisp. Or Forth.
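(For the Lisp allusion, here is a minimal eval/apply loop in Python, a toy of "information evaluating itself"; the nested-tuple s-expression encoding is just an illustration, not anyone's actual proposal.)

```python
# Minimal eval/apply in the spirit of the Lisp allusion above: expressions
# are nested tuples like ("+", 1, ("*", 2, 3)), atoms evaluate to themselves.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Eval: atoms are self-evaluating; tuples are (op, arg, ...)."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))  # apply after evaluating args

print(evaluate(("+", 1, ("*", 2, 3))))  # 7
```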

effed3•3h ago
Probably every intelligence has its limits, as every system (e.g. mathematics, remembering Gödel) has its own. This kind of AGI seems like a deity, and it's hard to believe it's possible; but in practice many kinds of "smaller" intelligences exist (from ants to primates), less "general" but able to solve enough problems to live and evolve, and maybe they can even be created by other intelligences. IMHO it's reasonable to think of real intelligence as a property of complex evolving systems interacting with a complex environment, so to live in a complex world a not-so-general intelligence can be enough, even given some limits and errors.
jhanschoo•2h ago
> This is cognition at its weirdest: solving problems somewhat by accident, finding answers in the wrong place, connecting dots that aren’t even in the same picture.

If you solve a problem "by accident", well, there are very many people who make foolish decisions daily because they do not think, and some of those pan out too and lead to understanding. A resource-bounded agent can also maintain a notion of fuel and give a random answer when it has exhausted its fuel.

The structural incompleteness mentioned isn't really meaningful. Humans have not demonstrated the capacity to make epsilon-optimal decisions on an infinite number of tasks, since we do not do an infinite number of tasks anyway.

K-complexity and resource-bounded K-complexity are indeed extremely useful tools to talk about generalization, I'd agree, but I think the author has misunderstood the limits that K-complexity places on generalization.