PK may be shaped very differently from ordinary Kolmogorov complexity, because you can find primitive recursive functions whose implementation in a primitive recursive programming language is arbitrarily larger than their implementation in a Turing-complete programming language.
That said, I’m not sure what if anything this implies about AI.
I can tell, because it is garbage.
The AI's notion of PK is useless, for reasons captured by Blum's size theorem: because the invariance theorem fails for PR (PR is not universal), description-length gaps between a PR language and a Turing-complete one can grow without bound.
Essentially, a more expressive formalism can encode an interpreter for the weaker one and then diagonalise over it. By restricting yourself to a total language, you sacrifice potentially unbounded conciseness.
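For the curious, the standard construction behind that claim goes roughly like this (my sketch, not anything from the article):

    Let P_n be the (finite) set of PR programs of size at most n, and define
        g_n(x) = 1 + \max_{p \in P_n} [[p]](x).
    Each g_n is primitive recursive: it is 1 plus a maximum over finitely many
    PR functions. But g_n(x) exceeds [[p]](x) for every p in P_n and every x,
    so no PR program of size at most n can compute g_n. In a Turing-complete
    language, "interpret every PR program of size at most n on x and return
    1 + the max" is a single program of size O(log n) + O(1), and it is total
    precisely because every PR program is guaranteed to halt. So the gap between
    the shortest PR description of g_n and its shortest TC description grows
    without bound in n.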
This is a profound difference between TC and non-TC languages, and it manifests even for terminating functions. It's not just that a TC language can encode non-terminating computations; it's that a TC language can encode terminating functions more concisely than a non-TC language can. A terminating program expressed in a total language must manifest its termination proof in a strict way, with finite degrees of freedom, whatever the choice of total language might be. A TC language imposes no such artificial constraint.
In some sense, as program complexity grows (expressed in a total language), more and more of the program is dedicated just to encoding its own termination proof. We can sort of see a corollary of this experimentally with programs (proofs) written in theorem provers like Coq (which are total): giant proofs extract to very small programs. We don't see this sort of phenomenon in theorem provers using a Curry-style type system, for example Nuprl, where the underlying lambda calculus is Turing complete. This is experimental evidence that even though most interesting functions might be PR, a PR language might not be the best language in which to express them. And this seems to be the case even without choosing specially-crafted pathological examples.
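To make the termination-proof overhead concrete, here is a toy sketch of my own (not from the article or the parent comment), in Haskell. It contrasts Euclid's algorithm written with general recursion against the crude "fuel" pattern a total language pushes you toward when the recursion isn't structurally decreasing; in Coq or Agda you would instead discharge a well-founded-recursion obligation, but either way the termination evidence becomes part of the definition.

    -- General recursion, as any Turing-complete language allows.
    -- Nothing in the program text itself certifies termination.
    gcd' :: Integer -> Integer -> Integer
    gcd' a 0 = a
    gcd' a b = gcd' b (a `mod` b)

    -- Total-language style: recursion is structural on an explicit fuel
    -- argument, so the definition must also carry a bound and a fallback
    -- for the case where the fuel runs out.
    gcdFuel :: Integer -> Integer -> Integer -> Maybe Integer
    gcdFuel _    a 0 = Just a
    gcdFuel 0    _ _ = Nothing                  -- out of fuel: no answer
    gcdFuel fuel a b = gcdFuel (fuel - 1) b (a `mod` b)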
These are subtle issues and I can't fault the author for not knowing about them, but I can fault him for using AI to appear to say something profound when all that was said was woefully naïve.
This is perhaps pedantic, but this statement is a little misleading. Kolmogorov complexity and the halting problem both describe the same concept in different formulations. One could just as easily say that the halting problem arises directly from Kolmogorov complexity.
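For anyone who wants that connection spelled out, the textbook argument runs roughly as follows (my sketch, standard material rather than anything from the article):

    If the halting problem were decidable, K would be computable: on input x,
    for l = 0, 1, 2, ..., test each program p of length l for halting, run the
    ones that halt, and return the first l at which some p outputs x.
    Conversely, K cannot be computable: if it were, then "search for and print
    the first string x with K(x) > n" would be a program of length
    O(log n) + O(1) describing a string of complexity greater than n (Berry's
    paradox), a contradiction for large n. Chaining the two, the incomputability
    of K re-derives the undecidability of halting, just as the undecidability of
    halting is the usual route to showing K is incomputable.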
The article can't be arsed to define S. K is a measure of s (an object), so what on earth is S? Is it the result of using Solomonoff induction ... on a hypothesis based on K?
Then we suddenly get the science bit:
"Whilst we might one day care about the sort of intelligence that is defined as a full "search over turing machines", for today's real world practical general intelligence, a "search over primitive recursive functions" is likely sufficient."
So, sod the Halting problem, with some sleight of hand (waving) we get AGI tomorrowish or something.
The author seems to forget that "intelligence" might eventually be defined by a "full search over turing machines", and that "practical general intelligence" is not a particularly useful notion: it is merely a description of a next-token guesser and the like.