The Talent Stack [video]

https://www.youtube.com/watch?v=PP91WmrgpBE
1•gmays•1m ago•0 comments

I Made a Free AI Coloring Page Generator for Cats (and Humans Too)

https://www.cat-coloring-pages.org
1•sweeii•3m ago•1 comments

U.S. Sanctions Cloud Provider 'Funnull' as Top Source of 'Pig Butchering' Scams

https://krebsonsecurity.com/2025/05/u-s-sanctions-cloud-provider-funnull-as-top-source-of-pig-butchering-scams/
2•todsacerdoti•3m ago•0 comments

The Rise of the Shallow State

https://danieldrezner.substack.com/p/the-rise-of-the-shallow-state
1•mooreds•4m ago•0 comments

Show HN: AI powered enterprise architecture platform

https://enterprise.architectfwd.com
1•quintes•5m ago•0 comments

I built workflowhub.ai for users to buy and sell automations

https://www.workflowhub.ai/
1•workflowhubai•14m ago•0 comments

Ask HN: Launched image AI SaaS, struggling with initial traffic

1•rikies•14m ago•0 comments

What is Humphrey's Executor? A look at the 90-year-old Supreme Court decision

https://www.theusconstitution.org/news/what-is-humphreys-executor-a-look-at-the-90-year-old-supreme-court-decision-trump-is-targeting/
1•walterbell•15m ago•0 comments

Four Stages of Competence

https://en.wikipedia.org/wiki/Four_stages_of_competence
1•elsewhen•16m ago•0 comments

SWE-Rebench: Task Collection and Decontaminated Evaluation of SWE Agents

https://arxiv.org/abs/2505.20411
1•SerCe•16m ago•0 comments

Practical SDR: Getting Started with Software-Defined Radio

https://nostarch.com/practical-sdr
5•teleforce•27m ago•0 comments

Takeaways from Anthropic CEO Dario Amodei's CNN Interview

https://www.cnn.com/2025/05/29/business/anthropic-amodei-cnn-anderson-cooper-takeaways
1•andy99•27m ago•0 comments

Agentle

https://github.com/paragon-intelligence/agentle
1•handfuloflight•30m ago•0 comments

SpaceX Update on Martian Colonization

https://twitter.com/SpaceX/status/1928185351933239641
2•d_silin•31m ago•0 comments

An endless cycle of ignorance is the result of not offering classes on IQ [2024]

https://doi.org/10.1016/j.intell.2024.101827
3•alliancedamages•32m ago•1 comments

Show HN: PulpMiner – Converts Any Webpage into Realtime JSON API

https://www.pulpminer.com
1•pulpminer•32m ago•0 comments

EpicDetect Learn Platform

https://epicdetect.io
1•EpicDetect•36m ago•0 comments

Riot IQ

https://www.riotiq.com/
1•alliancedamages•37m ago•0 comments

IRS Direct File

https://github.com/IRS-Public/direct-file
6•nerevarthelame•42m ago•2 comments

Newtok, Alaska, Was Supposed to Be a Model for Climate Relocation. It Went Wrong

https://www.propublica.org/article/newtok-alaska-climate-relocation
1•speckx•45m ago•0 comments

Sony Shaped Steve Jobs' Vision

https://obsoletesony.substack.com/p/how-sony-shaped-steve-jobs-vision
2•tzury•45m ago•0 comments

Ask HN: Management wants to talk to my Datalake. What's the best way to do this?

2•GaiusCoffee•45m ago•0 comments

Can IOPS Be Estimated from Fsync?

https://justincartwright.com/2025/03/13/iops-and-fsync.html
2•jcartw•46m ago•0 comments

Share of Model

https://shareofmodel.ai/
1•handfuloflight•46m ago•0 comments

Advising Reasonable AI Criticism

https://vale.rocks/posts/ai-criticism
2•OuterVale•47m ago•0 comments

Show HN: Flux Kontext

https://www.fluxkontext.com/
1•chengzeyi•48m ago•0 comments

NATS GUI for Underground People

https://natsnui.app/
1•veggieroll•56m ago•0 comments

Can we take a moment to appreciate what kind of web experience we are building?

2•tomdesantis•57m ago•2 comments

Instacart Uses ML to Suggest Replacements for Out-of-Stock Products

https://tech.instacart.com/how-instacart-uses-machine-learning-to-suggest-replacements-for-out-of-stock-products-8f80d03bb5af
1•mooreds•58m ago•0 comments

Google's New A.I.-Powered Search Has Arrived. Proceed with Caution

https://www.nytimes.com/2025/05/29/technology/personaltech/google-ai-mode-search.html
2•mooreds•1h ago•1 comments

What does “Undecidable” mean, anyway

https://buttondown.com/hillelwayne/archive/what-does-undecidable-mean-anyway/
142•BerislavLopac•1d ago

Comments

skydhash•1d ago
One of the biggest boosts to my SWE career was studying theory of computation and programming language theory. My major was electronic engineering, so I didn't touch those at the university. But I used some books to at least grasp the introductory material.

The boost came from how easy it became to see the mechanism behind various abstractions. Instead of being amazing, complex tools that you have to use carefully, they became easier to use and reason about. It's very much like the study of form, value, and perspective for an artist, or music theory for a musician: it just makes everything easier. Especially the understanding of automata (regex), context-free grammars (programming languages), and the Turing machine (CPU).

Gruzzy•1d ago
Would you have book recommendations / resources to share?
skydhash•1d ago
Introduction to the Theory of Computation by Michael Sipser. There's also his course based on the book on YouTube[0].

Language Implementation Patterns by Terence Parr. It avoids the theory in other books, going for a more practical approach.

Then it was just trying a lot of programming paradigms like functional programming with Common Lisp and Clojure, logic programming with Prolog, array and stack programming with Uiua. And reading snippets of books and papers. It was chaotic.

But the most enlightening lesson was: formalism is the key. You define axioms, specify rules, and as long as you stay consistent, it's fine. The important thing is to remember the axioms and understand the rules.

[0]: https://www.youtube.com/playlist?list=PLUl4u3cNGP60_JNv2MmK3...

soulofmischief•1d ago
I'd recommend codifying such axioms, rules and grammar into a domain-specific language, and then writing your logic in that DSL.

It will keep things consistent and allow newcomers to quickly understand the domain, enabling them to contribute without need for deep institutional knowledge.
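
As a rough illustration of this suggestion (not something from the thread), here is a minimal sketch of a tiny embedded DSL in Python; the domain, the Order type, and the rule names (min_total, ships_to, all_of) are all hypothetical:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Order:
        total: float
        country: str

    # The "axioms": primitive predicates over the domain.
    Rule = Callable[[Order], bool]

    def min_total(amount: float) -> Rule:
        return lambda order: order.total >= amount

    def ships_to(*countries: str) -> Rule:
        return lambda order: order.country in countries

    def all_of(*rules: Rule) -> Rule:
        return lambda order: all(rule(order) for rule in rules)

    # Business logic written in the DSL, readable against the glossary.
    free_shipping = all_of(min_total(50.0), ships_to("US", "CA"))

    print(free_shipping(Order(total=60.0, country="US")))  # True
    print(free_shipping(Order(total=60.0, country="FR")))  # False

The only point is that the "axioms" (primitive predicates) and the composition rules are named explicitly, so the business logic reads like the glossary.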

skydhash•1d ago
That's one of the foundations of Domain-Driven Design. First you try to come up with a glossary (your axioms). Then you'll notice that some terms have relations with each other, and some terms may share a name but refer to two different concepts (or two parts of the same whole). So now you have your boundaries. Then you try to make each subdomain internally consistent. But subdomains still have to communicate with each other (to enact actions and query data), and those communications are just as important as the subdomains themselves. The hope is to end up with something that reflects the business domain, especially the cost of changes: something that's easy/hard to change in the business should be easy/hard to change in the code.

With OOP, this often results in verbose code, because each view of the data (with its own rules) gets its own class. With FP, because you often use more primitive data structures, it's easier to move between subsets of the data.

soulofmischief•21h ago
Exactly. Getting DDD implemented and followed in past orgs has been difficult, but it pays off in spades. It's hard enough to push startup engineering teams to use consistent terminology, much less adopt DDD.

I do find it can sometimes lead to less verbose code even in OOP, because it makes logic more focused and DRY. It certainly can greatly reduce the number of bugs in large codebases.

nixpulvis•1d ago
Might be hard to approach, but my favorite is Pierce's "Types and Programming Languages": https://www.cis.upenn.edu/~bcpierce/tapl/ (known as TAPL). I was grateful to take a graduate-level class using it in college.
skydhash•1d ago
TAPL is nice, and also very dense. I haven't fully read it, but my current understanding is the following: both the Turing machine and the lambda calculus only describe how to act on data. They don't actually describe what the data is. So you can craft an algorithm that works well, but if it's supplied with the wrong kind of data it will go haywire. Your only recourse is to trust the user of your algorithm.

What a type system does is define a set of classes of data, plus rules between those classes. You then annotate your code with those classes (combining them where possible), and the type system can verify that the relations between them hold. To the type system, your program is data.
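
To make "your program is data" concrete, here is a minimal, hypothetical sketch in Python: a toy expression AST plus a checker that walks it and verifies the typing rules hold. None of this is from TAPL itself; it only illustrates the shape of the idea.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Lit:      # integer literal
        value: int

    @dataclass
    class BoolLit:  # boolean literal
        value: bool

    @dataclass
    class Add:      # addition of two expressions
        left: "Expr"
        right: "Expr"

    @dataclass
    class If:       # conditional expression
        cond: "Expr"
        then: "Expr"
        other: "Expr"

    Expr = Union[Lit, BoolLit, Add, If]

    def typecheck(e: Expr) -> str:
        """Return 'int' or 'bool', or raise if a typing rule is violated."""
        if isinstance(e, Lit):
            return "int"
        if isinstance(e, BoolLit):
            return "bool"
        if isinstance(e, Add):
            if typecheck(e.left) == "int" and typecheck(e.right) == "int":
                return "int"
            raise TypeError("Add requires two ints")
        if isinstance(e, If):
            if typecheck(e.cond) != "bool":
                raise TypeError("If condition must be bool")
            t1, t2 = typecheck(e.then), typecheck(e.other)
            if t1 != t2:
                raise TypeError("If branches must have the same type")
            return t1
        raise TypeError("unknown expression")

    print(typecheck(If(BoolLit(True), Lit(1), Add(Lit(2), Lit(3)))))  # int
    # typecheck(Add(Lit(1), BoolLit(True)))  # would raise TypeError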

lanstin•1d ago
I think undecidable means more that there are classes of problems for which no algorithm (that runs in finite time) can give the answer. To square N, there is an algorithm that always works. To see if Turing machine N halts on input M, there is no such algorithm. The halting problem is undecidable.

There is a clever way to encode Turing machines into Diophantine equations, so determining whether a general Diophantine equation has integer solutions is also undecidable (unlike linear Diophantine equations, which are decidable).

tel•1d ago
I really like TAPL but would recommend Harper's Practical Foundations for Programming Languages (PFPL) first (though skip the first 2 chapters, I think?).

https://www.cs.cmu.edu/~rwh/pfpl.html

It’s far more directed than TAPL, so I think it’s easier to read from start to finish. TAPL feels better as a reference.

downboots•1d ago
Mertens Theory of Computation
Jtsummers•1d ago
> Mertens Theory of Computation

No such book exists. Do you mean The Nature of Computation by Moore and Mertens?

downboots•1d ago
Sure, M&M's
Aicy•1d ago
Really? I studied it in my undergrad degree, and since then I've worked as a SWE and haven't found it relevant at all, really.
skydhash•1d ago
It's the recursive nature of it.

Almost any program can be written in a DSL that solves the class of problems the program solves. So if you consider the user stories of your project as that class of problems, you can come up with a DSL (mostly in the form of pseudocode) that can describe the solution. But first you need to refine the terms down to some primitives (a context-free grammar). You then need to think about the implementation, which will be a graph of states (an automaton). The transitions between the states will be driven by an execution machine, and the latter need not be as basic as a Turing machine.

But often you do not need to do all this. You can just use common abstractions like design patterns, data structures, and basic algorithms as ready-made solutions. You still have to compose them, though, and if you understand how everything works, it's easier to do so.

drdrey•1d ago
It depends on what you're working on. If you do any form of program analysis (security, compiler stuff), you bump into undecidability every day.
nitwit005•1d ago
I have to vote the opposite direction. I don't think it's ever been relevant.

As an example, a practical walkthrough of what a simple CPU is doing was quite useful for getting the idea of what's going on. The concept of a Turing machine, which was originally intended for mathematical proofs, was not.

tialaramex•1d ago
I completely disagree. For example, understanding the theory gives me a very powerful bullshit detector, because entire categories of problems which might sound merely difficult are in fact undecidable.

Knowing that actual regular expressions can be recognised by a finite automaton, but PCRE and similar "LOL, just whatever neat string matching" extensions cannot, means you can clearly draw the line and not accidentally make a product which promises arbitrary computation for $1 per million queries.
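
A small illustration of where that line falls (my example, not the commenter's): in Python's re module, a single backreference lets a pattern recognise the language of doubled strings, which is not regular, so no finite automaton can recognise it.

    import re

    # \1 is a backreference: the pattern matches { ww : w nonempty }, a non-regular language.
    doubled = re.compile(r"^(.+)\1$")

    print(bool(doubled.match("abcabc")))  # True  ("abc" repeated)
    print(bool(doubled.match("abcabd")))  # False

    # A genuinely regular pattern like ^(ab)*$ can be compiled to a DFA and matched
    # in linear time; patterns with backreferences cannot in general, and backtracking
    # engines can take exponential time on adversarial inputs.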

I agree that understanding something of how the CPU works is also useful, but it's no substitute for a grounding in theory. The CPU is after all obliged to only do things which are theoretically possible, so it's not even a real difference of kind.

skydhash•1d ago
The CPU is an implementation of the theory. It just happens that several of the Turing machine's axioms can be realized physically, and the rest hold true. The nice thing about notation is that it's more flexible than building/creating/executing the thing it describes.
nitwit005•22h ago
If you look back at the history, CPUs were often designed for specific tasks, and some of the early ones lacked the conditional jumps that you'd normally consider necessary for a Turing machine.

We've made them more general purpose as that sells better, not because people cared about making Turing machines.

skydhash•20h ago
Jumps are not required for a Turing machine, only the ability to reach a specific cell by sliding along the tape. A conditional jump is an abstraction, just like looping.
nitwit005•8h ago
I suspect you have forgotten this was a conversation about relevance in the day to day.
skydhash•8h ago
The TM is the basic theory. It's a more advanced finite automaton that is able to modify its own instructions, and those instructions can be generated from a context-free grammar. Both CFGs and FAs have a recursive nature that allows us to build more advanced mechanisms from those simple elements. And that's how you get CPUs (and more specialised processors like GPUs, NPUs, etc.) and programming languages.

There are other computation mechanisms like the lambda calculus and Hoare logic. They can all be mapped to each other. So what we usually do is invent a set of abstractions (an abstract machine) that can be mapped down to a TM equivalent, denote those abstractions as primitives, and then build a CFG that we can use to instruct the machine.

In the case of CPUs, the abstractions are the control unit, the registers, the combinational logic, the memory, ... and the CFG is the instruction set architecture. On top of that we find the abstract machine for the C programming language. On top of the latter you find the abstract machines for programming languages like Common Lisp (SBCL) and Python.

It's turtles all the way down. You just choose where you want to stop: either at the Python level, the C level, the assembly level, the ISA level, or the theoretical TM level.

Dylan16807•6h ago
A Turing machine isn't the only way to improve a finite automaton to make it universal. It doesn't have to be the bottom turtle.
nitwit005•6h ago
I think it's pretty clear you just want to talk about the theory. You're simply ignoring what I'm saying.
andoando•1d ago
A Turing machine works by going left or right, reading and writing on a tape of 1s and 0s. Your CPU works on RAM, reading and writing 1s and 0s.

A CPU in principle isn't that different; it's largely just using a more sophisticated instruction set: "move 5" vs. right, right, right, right, right; "swap" vs. read, write, right, write; etc.
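
To make the tape-vs-RAM comparison concrete, here is a minimal, hypothetical Turing machine simulator sketch in Python (not from the thread); the example machine increments a binary number whose least significant bit is under the head.

    # rules: (state, read_symbol) -> (write_symbol, move, next_state)
    def run_tm(tape, head, state, rules, blank="_", max_steps=1000):
        tape = dict(enumerate(tape))            # sparse tape; missing cells are blank
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += {"L": -1, "R": +1}[move]
        cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
        return "".join(cells)

    # Increment: scan left turning 1s into 0s; turn the first 0 (or blank) into 1.
    increment = {
        ("carry", "1"): ("0", "L", "carry"),
        ("carry", "0"): ("1", "L", "halt"),
        ("carry", "_"): ("1", "L", "halt"),
    }

    print(run_tm("1011", head=3, state="carry", rules=increment))  # 1100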

Definitely not necessary for programming but it's not some completely theoretical mumbo jumbo.

Moreover, what's really interesting to me is that Turing came up with the idea by following what he himself was doing as he did computation on graph paper.

DonaldPShimoda•1d ago
> Definitely not necessary for programming but it's not some completely theoretical mumbo jumbo.

While I'm a big fan of teaching theory, I regret to inform you that the Turing machine is kind of completely theoretical mumbo jumbo. The theoretical equivalent of the modern processor is the Von Neumann machine. Certainly there is a direct connection to be made to the Turing machine, as all computation can be framed as a program run on such a theoretical device, but there is not really much practical use in teaching students that modern computers are Turing machines. There's just too much difference, I think.

The utility of the Turing machine as a pedagogical device stems more from, like, being able to formalize a decidable problem in a particular way, following a rigorous algorithmic approach to computation, thinking about abstraction, etc. I think the lambda calculus is also useful to teach for similar (though also different) pedagogical reasons, but I would never tell students that the lambda calculus "in principle isn't that different" from how their favorite language works. Practically all programming languages can be framed as lambda calculi, but some of them are sufficiently far removed from the theory for that comparison to be mostly useless to the majority of students.

andoando•1d ago
I'm not sure what we're arguing about, to be honest. You definitely don't need to understand Turing machines to understand how computers work, and certainly not how to do programming.

But as far as understanding computer science, computational theory, etc certainly you'd want to study Turing machines and lambda calculus. If you were say, writing a programming language, it would be nice to understand the fundamentals.

I mean, I don't think Turing machines or lambda calculus are even so far removed that you could call them completely theoretical. You can easily implement a few functions in lambda calculus that already resemble modern programming interfaces.

dooglius•15h ago
Suppose I were to teach theoretical CS using only RAM models of computation, with no reference to Turing Machine tapes. Would there be any downside to doing this, pedagogically? (Other than, of course, the backward compatibility concern of students being able to engage with existing literature, which is the main reason this isn't done I think)
skydhash•12h ago
You could do this with the C abstract machine, because it's Turing complete. But we go with TMs because they're the most basic; anything else is an abstraction. So you can stop at any level you like.
dooglius•10h ago
In what way is a TM the most basic? If you mean in terms of simplicity, I think a queue automaton is simpler to explain.
skydhash•9h ago
The usual theory of computation textbooks explain finite automata first, then context-free grammars, and finally the Turing machine. Why? Because the control part of a TM is a finite automaton, while the instructions can use the generative process of a CFG. And from those principles you get the whole computing world (the other chapters are mostly about what's possible and what's not, and an exploration of other properties).

The nice thing about finite automata is that they can be composed together. That leads to a recursive pattern, which leads to higher abstractions, leading to general-purpose CPUs and special processors (GPUs, NPUs, cryptographic chipsets, hardware encoders, ...). The same thing applied to CFGs leads to the creation of programming languages.

andoando•8h ago
If you were teaching theoretical CS, I think you'd certainly want to use a lower-level abstraction than RAM. (And RAM really is not unlike a Turing machine tape, except that it's chunked into bytes and addressable; in principle it plays the same role and supports the same operations: move left, move right, read at the current location, write to the current location. The modern CPU instruction set isn't really all that different in principle either: it mostly uses higher-level instructions to accomplish the aforementioned operations, e.g. "move x" versus 5 right or left operations. But you can certainly write a Turing machine which implements such an instruction set, and have a more easily programmable Turing machine.)

Now, TMs certainly are not the only more fundamental model of computing, but they are interesting nonetheless, and they were an influence on the von Neumann architecture and modern computers.

If I were studying theoretical CS, I'd want to learn about TMs, lambda calculus, FSMs, queue automata, all of it. If you just told me how modern computers work, I'd be left wondering how anyone even conceived the idea.

And as I said in my earlier comment, when you get down to the core of it, what's really interesting to me (and this is readily apparent if you read Turing's 1936 paper) is that Turing very much came up with the idea by thinking about how HE does computation on graph paper. That to me is such an essential fact that I would not have wanted to miss it, and I wouldn't have known it had I not actually read the paper myself.

Dylan16807•19h ago
If the only principle you look at is "does this compute?" then they're not that different. Otherwise they're about as far apart as you can get.

A Turing machine (at least one that isn't designed in some super wacky way to prove a point) has a few bits of internal state and no random access memory. If you want RAM you have to build a virtual machine on top of the Turing machine. If you try to program it directly it's going to be a byzantine nightmare.

Being restricted to tape, and in particular a single tape, makes Turing machines absolutely awful for teaching programming or how a CPU implements an algorithm. Instructions, data, and status all have to fit in the same place at the same time.

It's so bad that Brainfuck is an order of magnitude better, because it at least has separate instruction and data tapes.

From your other comment:

> But as far as understanding computer science, computational theory, etc certainly you'd want to study Turing machines and lambda calculus. If you were say, writing a programming language, it would be nice to understand the fundamentals.

Turing machines are not fundamental. They're just one way to achieve computation that is close to minimal. But they're not completely minimal, and I strongly doubt studying them is going to help you make a programming language. At best it'll help you figure out how to turn something that was never meant to compute into a very very bad computer.

andoando•16h ago
A Turing machine in essence is a finite state machine + memory (the tape) + some basic instructions for reading and writing to the memory.

It's a very simple, rudimentary computer, not some completely abstract mathematical object, which was what I was responding to.

With universal Turing machines, it's not difficult to start writing composable functions, like an assembly instruction set, adders, multipliers, etc.

TMs certainly aren't fundamental, but when you look at TMs and lambda calculus and understand why they are equivalent, wouldn't you say you gain an understanding of what is fundamental? Certainly constructions like for loops, the stack, etc. are not fundamental, so you'd want to go deeper in your study of languages.

skydhash•12h ago
One realization with TMs is that programs and data are essentially the same; the separation is usually imposed. When you think about your program as data, it's hard not to notice patterns, and you start to yearn for metaprogramming to express those more directly.
Dylan16807•6h ago
And a barebones traditional CPU is a finite state machine plus random access memory. It teaches you mostly the same things about how you put together simple components into universal computation, while having programs that are far easier to comprehend.

And then for another perspective on computation, lambda calculus is very different and can broaden your thoughts. Then you could look at Turing machines and get some value, but niche value at that point. I wouldn't call it important if you already understand the very low level, and you should not use it as the model for teaching the very low level.

andoando•6h ago
>while having programs that are far easier to comprehend.

If you want to learn the fundamentals of something, should you not wish to, you know, think about the fundamentals?

Dylan16807•5h ago
My argument is that FSM+tape and FSM+RAM are at the same level of "fundamental", but one is easier to understand so it should be the thing you teach with. Being more obtuse is not better.
DonaldPShimoda•1d ago
> One of the biggest boosts to my SWE career was studying theory of computation and programming language theory.

I totally agree with you.

> My major was electronic engineering, so I didn't touch those at the university.

Unfortunately, ever more computer science graduates lack the same belief, even if they go to "top" university programs.

Of course, I think part of the problem is the way the material is presented; "learn this to pass the exam" without fundamental motivation is pretty much always a bad set-up to get students interested. But, speaking anecdotally, a large part also seems to be a semi-recent surge of undergraduate students who genuinely believe that their academic career is nothing more than a hurdle to be muddled through in service of getting a piece of paper that lets them get a high-paying, low-labor-intensity job. They just don't engage with theory-focused material beyond what's strictly necessary to graduate, and then they dump the thoughts from their minds the second the semester is over.

This manifests in all sorts of ways, but a common one is undergrads telling one another how little their professors know about the "real world" because they use (not even that outdated) technology in their lectures. Or maybe the profs use some less popular language and the students think of it as a waste of time. There's comparatively little self-motivation among modern CS students, where historically I think there was rather a lot.

I suppose I'm not really going anywhere specific with this other than complaining, so maybe I can ask: What do you think it was that helped you realize that learning fundamental theory would be beneficial? (I don't have any indication that you were the same kind of student as these that I mention, since you were in a different major altogether, but I'm always looking for insights to help motivate students to study theory more, and insights can come from anywhere.)

gnulinux•1d ago
The reason decidability makes little or no sense to tons of CS or math majors is that the logical basis of decidability is almost never explained in school. Even in a logic/computability class you likely won't get a very concrete explanation, unless you're in a philosophy of math class.

What students usually aren't told is that decidability has different sorts of implications in classical mathematics and in constructive mathematics. Since most undergrad programs only teach mainstream (i.e. classical) math and don't even mention constructive logic, decidability becomes black magic for some confused students.

Here's the crucial part: decidability really only has a material impact on our reasoning if we're in some kind of constructive setting. Undecidability does not and cannot have any impact on the results of classical mathematics, period, but it does have an effect on the proofs of those classical theorems. It can also have an impact on classical mathematics if we're reasoning classically but describing results in constructive settings (e.g. this is what happens when we do computability theory in classical mathematics).

In short, classically we're always allowed to work around undecidability by using an "oracle". But it can be important to state that we can only do it this way (similar to how we can state the need to use the axiom of choice).

One way to re-phrase Aristotle's law of excluded middle (forall P, P or not P) is to say: "every relation is decidable" which is how you would postulate it in homotopy type theory: https://agda.github.io/agda-stdlib/master/Axiom.ExcludedMidd...
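
A minimal sketch of that postulate's constructive reading, written in Lean 4 rather than the Agda module linked above: a proposition that carries a Decidable instance satisfies excluded middle without any classical axiom.

    -- A decidable proposition satisfies excluded middle, with no classical axiom.
    example (p : Prop) [d : Decidable p] : p ∨ ¬p :=
      match d with
      | .isTrue h  => Or.inl h
      | .isFalse h => Or.inr h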

This doesn't mean that in classical mathematics every relation "literally" is decidable. Classical mathematics can still study constructive settings by creating a computational model (e.g. the Turing machine model). But it does mean that in classical mathematics every relation is effectively indistinguishable from a decidable relation, since given a statement like "program P will halt" you are always allowed to say "either (program P will halt) or (program P won't halt)", regardless of our inability to prove one of the cases in general.

Also note that we call this kind of constructive reasoning "neutral constructive mathematics" which is when we operate in a constructive reasoning setting (like Agda's type theory above) but allow ourselves to say things like "AxiomOfChoice implies ExcludedMiddle" etc, read: https://ncatlab.org/nlab/show/neutral+constructive+mathemati...

LPisGood•11h ago
I was with you until here:

> which is how you would postulate it in homotopy type theory: https://agda.github.io/agda-stdlib/master/Axiom.ExcludedMidd...

Why did you randomly shoehorn in homotopy type theory? Maybe I’m overreacting to this, but the only person I’ve ever known to shoehorn homotopy type theory into largely unrelated discussions has left quite a poor impression on me.

dunkeltaenzer•1d ago
The problem there is the overlap between fractal and non-fractal spaces. Our math and logic were created to work in non-fractal spaces. Whenever we cross the boundary into fractal spaces (recursion in programming, for example), our math and logic have a tendency to explode, because there are infinities flying around everywhere and things stop being guaranteed to be deterministic, because you have to assure determinability across several dimensions of infinities. You have to accept certain quantum rules in those spaces, and being deterministic isn't compatible with quantum rules. That's why quantum physics is struggling so badly to get rid of all those infinities flowing in with every new dimension added to the wave function: that machinery wasn't designed for fractal spaces, so it becomes harder and harder to contain all those leaking infinities in a workable magic ball. We stored most of our fractal remembrance in pictures for a reason. It's much easier to draw fractals than to compute them with math that was not made for fractal spaces.
paulddraper•1d ago
> I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong.

> The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not.

Or more simply:

No program can tell whether any program will halt or not.

cobbzilla•1d ago
I really enjoyed the author’s explanation of undecidability.

I found this bit was really helpful:

“This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.”

wat10000•1d ago
I'm not sure about that. These are also easily solved with a general-purpose CPU, if you don't care about running time. And there's nothing that says that a halt detector would be fast. The impossibility result is more interesting than just "it would take a million times the age of the universe to complete if every atom in it were put to the sole purpose of computing it" as we often see with cryptography.
hwayne•1d ago
That's not necessarily true. Consider the program `x := 4; loop {if !sum_of_two_primes(x) {return true}; x += 2}`. If we run this on a general purpose CPU, this will halt if and only if Goldbach's conjecture has a counterexample. Otherwise it will run forever. So even if a working halt detector takes 14 million billion years, it will definitely tell us if the conjecture is true or not. Whereas if the general purpose CPU is still running after that time, we still have no way of knowing whether it's because it's going to run forever or if it simply hasn't reached the first (ludicrously large) counterexample.
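
A runnable sketch of that pseudocode in Python (the naive primality test is my stand-in; the loop structure mirrors the comment's program):

    def isprime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def is_sum_of_two_primes(n: int) -> bool:
        """Check whether the even number n is a sum of two primes."""
        return any(isprime(a) and isprime(n - a) for a in range(2, n // 2 + 1))

    x = 4
    while True:                      # runs until a counterexample is found (possibly forever)
        if not is_sum_of_two_primes(x):
            print("Goldbach counterexample found:", x)
            break
        x += 2

Watching this run tells us nothing about whether it will ever halt, which is exactly the asymmetry the comment describes.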
wat10000•1d ago
I think you're agreeing with me. The problems listed as something that a halt detector can be "trivially repurposed" for are problems that a normal CPU can also be trivially repurposed for (ignoring the issue of execution time), and my understanding is that normal CPUs are not impossible. The halting problem is different because it is provably impossible even with unlimited resources unless you're somehow able to make something more capable than a Turing machine.
paulddraper•20h ago
They’re saying that chess and bcrypt and some others can be brute forced.

You are correct that Goldbach cannot be proven true via brute force. But again, a hypothetical general halting machine may require impractical time — 14 million billion years.

So the idea that “if this existed we could crack all sorts of hard problems / optimize” is not necessarily true.

nullc•1d ago
> No program can tell whether any program will halt or not.

You might try another wording which isn't easily misunderstood. I was going to suggest "every" instead of "any", but that supports a different misunderstanding.

zqna•1d ago
There will always be more questions a fool can ask than a wise man will be able to answer.
brap•1d ago
I sometimes wonder if the concept of “intelligence” is going to benefit from a formal model the way “computation” benefited from Turing Machines.

Are there classes of intelligence? Are there things that some classes can and cannot do? Is it a spectrum? Is the set of classes countable? Is it finite? Is there a maximum intelligence? One can dream…

ryandamm•1d ago
Purely intuitively, it seems like there should be a connection between the two (computation and intelligence). But I have not formally studied any relevant fields, so I would be interested to hear thoughts from those who have.
brap•1d ago
I was thinking the same thing, maybe a level above in the Chomsky Hierarchy…
wat10000•1d ago
The known laws of physics are computable, so if you believe that human intelligence is purely physical (and the known laws of physics are sufficient to explain it) then that means that human intelligence is in principle no more powerful than a Turing machine, since the Turing machine can emulate human intelligence by simulating physics.

If there's a nonphysical aspect to human intelligence and it's not computable, then that means computers can never match human intelligence even in theory.

bawolff•1d ago
Philosophers have been trying to define what it means to be conscious since forever. I think that is informally what you mean here.

If you just mean what problems it can solve, and how quickly, we already have a well-developed theory of that in terms of complexity classes: https://complexityzoo.net/Complexity_Zoo

mystified5016•1d ago
I think this is more about levels or classifications of intelligence.

If you've ever interacted with a very smart animal, it's easy to recognize that their reasoning abilities are on par with a human child in a very subjective and vague way. We can also say with extreme confidence that humans have wildly different levels of intelligence and intellectual ability.

The question is, how do we define what we mean by "Alice is smarter than Bob". Or more pertinently, how do we effectively compare the intelligence and ability of an AI to that of another intelligent entity?

Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?

Of course we can test specific skills: riddles, critical thinking, that sort of thing. The problem is that the results from a PhD will be indistinguishable from the results of a child with the answer key. You can't examine the mental state of others, so there's no way to know if they've synthesized the answer themselves or are simply parroting. (This is also a problem philosophers have been thinking about for millennia.)

Personally, I doubt we'll answer these questions any time soon. Unless we do actually develop a science of consciousness, we'll probably still be asking these questions in a century or two.

skydhash•1d ago
Intelligence is often a measure of how quickly we can embrace a new model and how effectively we can use it. Building such a model can be done haphazardly or be guided by a skill-transfer methodology. Once that's done, intelligence is how well we can select the correct model, filter out the relevant parameters, and then produce a correct answer.

There are a lot of factors there, and more that I haven't specified. But one thing that I believe is essential is a sense of whether an answer is correct or uncertain.

bawolff•1d ago
> Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?

That presumes a total ordering of intelligence. I think the balance of evidence is that no such total ordering exists.

There are things chatgpt can do that children (or adults) cannot. There are things that children can do that chatgpt cannot.

At best maybe the Turing test can give us a partial ordering.

I don't think there is much value in viewing "intelligence" as a whole. It's a combination of a multitude of factors that need to be dealt with independently.

joe_the_user•1d ago
I don't think either philosophical conceptions of consciousness or theories of computational complexity count as even "efforts to formalize intelligence". They are each focused on something significantly different.

The closest effort I know of, as far as characterizing intelligence as such, is Stephen Smale's 18th problem.

https://en.wikipedia.org/wiki/Smale%27s_problems

bawolff•1d ago
The wikipedia article is pretty useless here.

The original paper is better, but still seems to be too vague to be useful. Where it isn't vague it seems to point pretty strongly to computability/complexity theory.

Intelligence means many different things to different people. If we just gesture vaguely at it we aren't going to get anywhere; everyone will just talk past each other.

joe_the_user•6h ago
Yeah,

Smale is a very smart person, but his stuff indeed seems as much a vague gesture as the other efforts. I feel like neural networks have succeeded primarily because of the failure of theorists/developers/etc. to create any coherent theory of intelligence aside from formal logic (or Pearl-style formal probability). Nothing captures the ability of thinking to use very rough approximations. Nothing explains or accounts for Moravec's paradox, etc.

joe_the_user•1d ago
I agree the idea of a formal view of intelligence is appealing. The hurdle that any such view faces is that most intelligence seems to involve rough, approximate reasoning.

I think this approximate nature of intelligence is essentially why neural nets have been more successful than earlier GOFAI/logic-based systems. But I don't think that means formal approaches to intelligence are impossible, just that they face challenges.

mystified5016•1d ago
This is one of those ideas that great minds have been chewing on for all of recorded history.

In modern thinking, we do certainly recognize different classes of intelligence. Emotional intelligence, spatial reasoning, math, language, music.

I'd say the classes of intelligence are finite. Primarily because different types of intelligence (generally) rely on distinct parts of our finite brains.

As for maximum intelligence, I like the angle taken by several SciFi authors. Generally, when an AI is elevated to the extremes of intelligence, they detach from reality and devolve into infinite navel gazing: building simulated realities or forever lost in pondering some cosmic mystery. Humans brought to this state usually just die.

For physical, finite entities, there's almost certainly an upper limit. It's probably a lot higher than we think, though. Biological entities have hard limits on energy, nutrients, and physical size beyond which it's just impossible to maintain any organism. Infinities are rarely compatible with biology, and we will always be limited.

Even an AI is still beholden to the practicalities of moving energy and resources around. Your processor can only get so large and energy/information can only move so fast. Intelligence in AI is probably also bounded by physical constraints, same as biological entities.

ninkendo•1d ago
Kinda related, I’ve had a hunch for a while that we’re going to eventually learn that “the singularity” (AIs improving themselves ad infinitum) is impossible for similar reasons the halting problem is impossible. I can’t really articulate why though. It just seems similarly naive to think “if the AI becomes smarter than us, surely it can thus make a better AI than we could” as it is to think “if a computer can compute anything, surely it can compute whether this program will halt.”

My bet is there is some level of “incompleteness” to what an intelligence (ours or a machine’s) can do, and we can’t just assume that making one means it can become a singularity. More likely we’re just going to max out around human levels of intelligence, and we may never find a higher level than that.

pixl97•23h ago
>More likely we’re just going to max out around human levels of intelligence, and we may never find a higher level than that.

I've seen people state this before, but I don't think I've seen anyone make a scientific statement on why this would be the case. The human body has a rather tight power and cooling envelope itself. On top of that, we'd have to ask how and why our neural algorithm somehow found the global maximum of intelligence when we can see that other animals have higher local maxima of sensory processing.

More so, machine intelligence has more room to explore the problem space of survival (a la Mickey7), in that the death of any attached sensors/external network isn't the death of the AI itself. How does 'restore from backup' affect the evolution of intelligence?

Granted there are limits somewhere, and maybe those limits are just a few times what a human can do. Traversing the problem space in networks much larger than human sized capabilities might explode in time and memory complexity, or something weird like that.

Dylan16807•19h ago
For that definition, including the phrase "ad infinitum", then it's pretty unlikely.

But a lack of infinities won't prevent the basic scenario of tech improving tech until things are advancing too fast for humans to comprehend.

kazinator•1d ago
An undecidable problem in the Turing computational model is this: it is a problem for which no algorithm exists which can decide every input case.

This doesn't preclude some input cases from being decided.

What happens is that every attempt at making an algorithm for an undecidable problem will run into two issues: there will be input cases for which an answer exists, but for which the algorithm either gets into an infinite loop/recursion or else produces the wrong answer.

Why would a decision procedure ever yield the wrong answer? That is due to mitigations of the infinite looping problem. To avoid becoming trapped by the input cases that trigger infinite calculations, the algorithm has to resort to some mechanism of stopping arbitrarily, without having reached the decision: for instance, by capping the number of steps executed. But when that situation is reached, the correct answer is not yet known. The algorithm can return "yes" or "no", but that will sometimes be wrong. Or it can report "failed", which is always incorrect, not matching either "yes" or "no".

A classic example of an undecidable problem is the Halting Problem: the question of whether a given program (or algorithm) P, operating on an input I, will halt or not. So the space of inputs for this problem is <P, I> pairs.

The input space is infinite, but all instances in that space are finite: finite length programs P operating on finite inputs I.

The problem is that just because we have a given finite P and finite I doesn't mean that halting can be decided in finitely many steps.

No matter how cleverly someone constructs a halting decider, it is easy to produce test cases which will cause that decider to either stop and produce the wrong answer (decide incorrectly) or else iterate or recurse forever. (Demonstrating how such test cases can be produced for any decider is at the heart of a popular proof which shows that halting is undecidable.)

The Halting Problem is at the center of decidability, because every decision procedure of any kind (not necessarily a decision in the Halting Problem) is a <P, I> pair. For instance the decision whether a list of N integers is sorted in ascending order is a <P, I> pair. P is an algorithm for sweeping through the integers to check that they are ordered, and I is some list of integers. Since it is a <P, I> pair, it is in the domain of the Halting Problem. P is not in the Halting Problem: P is in the domain of processing a list of integers. But <P, I> is an input case of the Halting Problem: does list-processing algorithm P halt on the list of integers I.

The upshot is that when we are solving some decision problem by running an algorithm, and it is taking a long time, and we ask "will this decision terminate, or is it in a loop?", we are then posing another problem, and that problem is an input case of the Halting Problem.

And we know that the Halting Problem is undecidable.

What that means is that if we happen to be calculating a problem which is undecidable, the Halting Problem informs us that there is no guaranteed shortcut for us to know whether our calculation is hung, or whether it is working toward halting.

When we try to apply "meta reasoning" about an algorithm that appears stuck, our reasoning itself may also get stuck. We may have to give up, cut the investigation short, and report "we don't know". I.e. we don't know whether or not our decision program is forever stuck or working toward the solution.

Undecidability is a somewhat important topic for a software engineer for the following reasons. In your career you may from time to time encounter requirements that call for the calculation of an undecidable problem. If you can recognize it, you can take action to revise the requirements. For instance, perhaps you can negotiate the requirements such that it's acceptable for the decision to be incorrect, erring on one side or the other (false positive or false negative). Equivalently, maybe some inputs can be rejected up front, leaving a decidable subset. You may have a people problem there, because the requirements are coming from non-computer-scientists who don't know about decidability; you have to be careful not to make it look like you are refusing to do hard work out of a lack of confidence in your skill, or out of laziness.

As an example, we could use the Halting Problem itself. Say you work on some virtual machine for sandboxing untrusted programs, and your boss says: can we detect programs which loop infinitely, and not run them at all? Just reject them? If you know about undecidability, you can tell your boss, no, that is an undecidable problem! What we can do is limit the number of instructions or the run time, which will also stop some programs that are not looping infinitely but are just taking a long time to calculate something. Or here is another thing we can do instead: only admit programs written in a language that is not Turing complete, one with only bounded loops and no recursion; the sandbox compiler can reject all others. Or we can allow recursion, but only stack-consuming recursion (not tail calls), and limit the depth. Of course, that's just an easy example; undecidability can sneak into the requirements in forms that are not immediately recognizable.

cubefox•1d ago
Do you know whether there is an algorithm which "mostly" solves the halting problem, by returning either "halts", "doesn't halt" or "don't know", while only very rarely returning "don't know" for (what we would intuitively consider) natural algorithms?
JonChesterfield•1d ago
Yes. Take the one which might not halt and add an integer argument which you decrement on recursion/loop, possibly called "fuel", and return "don't know" when that hits zero. Then call your algorithm with a large integer for that argument and wait.
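
A minimal sketch of that "fuel" idea in Python (the step-function interface here is hypothetical): run the computation under a step budget and answer either "halts" or "don't know"; note that nothing here can ever answer "doesn't halt".

    def run_with_fuel(step, state, fuel=1_000_000):
        """step(state) returns either ('done', result) or ('continue', new_state)."""
        for _ in range(fuel):
            tag, value = step(state)
            if tag == "done":
                return ("halts", value)
            state = value
        return ("don't know", None)

    # Example: a countdown clearly halts within the budget; a much longer one exhausts the fuel.
    def countdown_step(n):
        return ("done", 0) if n == 0 else ("continue", n - 1)

    print(run_with_fuel(countdown_step, 10))      # ('halts', 0)
    print(run_with_fuel(countdown_step, 10**9))   # ("don't know", None)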
kazinator•1d ago
Practically speaking, if your goal is to protect system resources, which is a practical concern removed from theoretical computer science, the halting problem isn't even the right concern. It's good to know about and all, but the problem is that programs which halt can be a nuisance. You don't care about the difference between "runs for 5 days and halts" and "runs forever".

Some programs run indefinitely by design, like services. Those may be acceptable in a system, but not CPU-intensive, long-running programs.

So you in fact have to reject some programs which halt, and accept some which don't.

mrkeen•1d ago
I think it would be much more fruitful to flip it around the other way.

Start with small pieces which halt, and see how far you can get by combining them into larger programs. Compiler help is available: https://docs.idris-lang.org/en/latest/tutorial/theorems.html...

A downside of a yes/no/maybe halting checker is that it wouldn't tell you why. You'd get your "doesn't halt" result and then have to debug the algorithm.

kazinator•1d ago
If statements S1 and S2 both halt and we combine them as "S1; S2", then we know that halts. :)
skydhash•1d ago
> undecidability can sneak into the requirements in forms that are not immediately recognizable.

The whole issue is loops and recursion (more complicated loops), and the way to avoid the bad sort is to inspect your invariants. For classic loops, you need to prove that your condition will eventually become false. For iterators, you need to prove that your data is bounded. And for recursion, that the base case will eventually be reached. The first two are easier to enforce.

The goal is for your code to be linear. If you accept code as one of your inputs, you force it to be linear too.
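
One hypothetical sketch of what "prove the condition will eventually be false" can look like in ordinary code: carry an explicit variant, a non-negative measure that strictly decreases on every iteration, and assert it.

    def gcd(a: int, b: int) -> int:
        assert a > 0 and b > 0
        while b != 0:
            variant_before = b               # the decreasing measure
            a, b = b, a % b
            assert 0 <= b < variant_before   # strictly decreases and is bounded below,
                                             # so the loop must terminate
        return a

    print(gcd(252, 105))  # 21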

johnea•1d ago
I don't know, I can't decide...
cubefox•1d ago
There are at least two meanings of "undecidable". The one from computer science is discussed in the blog post. The other, from formal logic, is a synonym for "independent". A proposition (not a property) is independent of some axiomatic theory, with respect to a proof system, if and only if the proposition can neither be proved nor disproved in that theory.

For example, the continuum hypothesis is independent of ZFC. Another way to express this is to say that the continuum hypothesis is undecidable (in ZFC).

Kurt Gödel used this sense of "undecidable" in his famous paper "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I". ("On formally undecidable propositions in Principia Mathematica and related systems I")

Are these two meanings of "undecidable" related in some way? I guess probably yes. But I'm not sure how.

ttctciyf•1d ago
> Are these two meanings of "undecidable" related

I can't claim to answer this, but you might like to look at A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory[0] which (amongst other interesting things) does discuss the proposition "ZFC is consistent", which is independent from ZFC, in terms of a Turing Machine which enumerates all possible proofs in ZFC and halts when it finds a contradiction (e.g. a proof of 0=1).

But I don't think there's a simple equivalence here, all the same.

The logician's undecidable is always relative to a set of axioms, whereas the question of whether a property of strings can be decided by some TM doesn't place any constraints on the TM, which is to say the deciding TM is not required to, nor prohibited from, implementing any particular axiom set.

(It's tempting to try and show "provable in ZFC" to be a (CS)undecidable property by having a TM enumerate ZFC proofs and halt on finding a proof or disproof of the input string, and imagine the TM running forever when given a statement of the Continuum Hypothesis. But the TM could proceed by other means - for example [1] shows a proof of CH's independence in a theorem prover, so that a TM incorporating this construction could indeed reject CH statements as unprovable in ZFC. Which is not to say "ZFC-provable" *isn't* (CS)undecidable, just that showing this isn't as simple as constructing a ZFC-proof-enumerator and giving it the CH as input.)

0: https://arxiv.org/abs/1605.04343

1: https://arxiv.org/abs/2102.02901

tel•1d ago
Independent and undecidable aren't quite the same, even in formal logic. Or rather, sometimes they are but it’s worth being specific.

A proposition P being independent of a theory T means that both (T and P) and (T and not P) are consistent. T has nothing to say about P. This may very well be what Gödel was indicating in his paper.

On the other hand, undecidable has a sharper meaning in computation contexts as well as constructive logics without excluded middle. In these cases we can comprehend the “reachability” of propositions. A proposition is not true or false, but may instead be “constructively true”, “constructively false”, or “undecidable”.

So in a formal logic without excluded middle we have a new, more specific way of discussing undecidability. And this turns out to correspond to the computation idea, too.

alok-g•20h ago
Could I say that 'P is Undecidable' is defined as: It is False that {There exists T such that [(T and P) and (T and not P) are both consistent]}?
tel•11h ago
Quantifying over T is probably not going to work. In informal terms that reads like "No logic exists where P is independent", which probably wasn't quite what you wanted, but also we can trivially disprove that with T = {}. As long as P is self-consistent, then "not P" should be too.

We're interested in a proposition's status with respect to some theory that we enjoy (i.e. Zermelo–Fraenkel set theory).

cubefox•17h ago
I still think what I said was correct.

> On the other hand, undecidable has a sharper meaning in computation contexts as well as constructive logics without excluded middle. In these cases we can comprehend the “reachability” of propositions. A proposition is not true or false, but may instead be “constructively true”, “constructively false”, or “undecidable”.

Yes, but that just means that independence/undecidability depend on the proof system, as I said before. So when using a constructive proof system, more statements will turn out to be undecidable/independent of a theory than with a classical one, since the constructive proof system doesn't allow non-constructive proofs, but the classical one does.

tel•10h ago
Yeah, I agree. "Independence" is fundamentally a property of the formal system you're working within (or really, it's a property of the system you're using and of the axiomatic system under test, the system a proposition would be independent from). I'm holding out a bit to unify that with "undecidability" because undecidability takes on a particular character in constructive systems that happens to align with Turing's notion.

So at some level, this was just an acknowledgement that "undecidability" in this form is well represented in formal logic. In that sense, at least in constructive logics, it's not just a synonym for "independence".

aatd86•1d ago
And here I thought it was simply about a system of equations solving down to a single solution (decidable).

As opposed to having too many free variables.

taeric•1d ago
I'm curious if you could make an analogy to the idea of "underspecified" in FreeCAD. It isn't that the drawing doesn't exist. It is that you still have some freedom in how long certain parts could be. Crucially, not all. It could be that parts of the drawing are indeed fully specified.

Same can go with programs. You can constrain parts of it enough that you can answer some pretty specific questions. And similar to how a drawing with fewer lines is easier to constrain, a program that has restricted constructs can be easier to constrain.

constantcrying•1d ago
>I'm curious if you could make an analogy to the idea of "underspecified" in FreeCAD.

That is just solution theory of a non-linear system of equations. The solution space can have exactly one solution (fully constrained), more than one (under constrained) or zero (over constrained).

To be honest I do not really see a connection.

taeric•1d ago
I meant more as how to mentally model it in a way that can get someone across the line. My assertion is that caring about undecidability is almost certainly a waste of time for most people. That said, the reason we work in small chunks is often so that we can more easily answer questions about programs.

Moving the graphical drawing and constraints over to a symbolic system also helps see how many symbols it can take to cover a simple system.

Of course, the real reason for me thinking on this is that I'm playing with FreeCAD for the first time in a long time. :D

nialv7•1d ago
One of my favorite insights is that the existence of undecidable problems is the same thing as the uncountability of real numbers.

Too bad the author didn't get into it.

mreid•1d ago
I had exactly the same reaction, which prompted me to write this comment: https://news.ycombinator.com/item?id=44122045
bubblyworld•17h ago
...which is the same thing as Rice's theorem, and many other mind-bending results. It's all diagonalization under the hood =)
mreid•1d ago
This is a really nice explanation of decidability. One extra thing it might be worth mentioning is that there are many more functions `f : string -> boolean` than there are programs to implement them.

When I first encountered this topic I had trouble intuitively understanding how there could not exist an `IS_HALTING` function when it is also just a function that takes in a string (representing a program plus its inputs) and outputs True or False depending on whether it halts or not.

The argument in the article does a great job of showing that `IS_HALTING` cannot exist because it is in some sense "too powerful" but that means there is a mapping f : strings -> boolean that cannot be represented as a program, which seems weird if you've been programming for ages and every function you encounter is expressed as a program.

The result becomes less weird when you realize that almost all functions from string -> boolean are not expressible as a program. Why? Well, there are only countably many programs, since there are only countably many finite-length strings and every program, by definition, is a finite-length string. However, there are uncountably many functions from string -> boolean, since these functions map one-to-one to sets of strings (just let the set be all inputs that map to True) and the set of all sets of strings has uncountable cardinality.

This is essentially due to Cantor's diagonalization argument which shows you cannot put all elements in a set X into a 1-1 correspondence with all the subsets of X, even when X is countably infinite. This fact is at the heart of a lot of these computability results since it shows there is a gap between all functions (= any arbitrary subset of finite strings) and a program (= a finite string).

cyberax•23h ago
I wonder if this is a correct argument.

A function string -> boolean is always expressible? Simply because the set of all possible mappings from all possible finite strings to booleans is countable.

It's better to say that some functions like "does this program halt?" simply don't exist.

hwayne•23h ago
`S = {str -> bool}` is actually uncountable. `S` is isomorphic to the power set (set of all subsets) of `str`, and `2^str` is at least as big as the set of real numbers, as any real number (like π) can be mapped to the set of string prefixes (like `{"3", "3.1", "3.14", ...}`). Since the reals are uncountable, so is `2^str` and `S`.
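
A tiny sketch of that prefix mapping in Python (using a few digits of π purely for illustration; the real argument of course uses all infinitely many prefixes of a real's decimal expansion):

    # Map a decimal expansion to its set of string prefixes.
    # Distinct reals give distinct prefix sets, so this injects the reals into 2^str.
    def prefix_set(digits: str) -> set[str]:
        return {digits[:i] for i in range(1, len(digits) + 1)}

    print(prefix_set("3.14159"))  # e.g. {'3', '3.', '3.1', '3.14', ...} (set order varies)
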
cyberax•21h ago
> `S` is isomorphic to the power set

D'oh. I missed that.

mreid•20h ago
I think you are experiencing the same confusion I felt when I first started thinking about the difference between a program and a function.

The set of all possible mapping from all possible finite strings to booleans is definitely *not* countable.

What I (and the article) mean by a "function" `f : string -> boolean` here is any arbitrary assignment of a single boolean value to every possible string. Let's consider two examples:

1. Some "expressible" function like "f(s) returns True if the length of s is odd, and returns False otherwise".

2. Some "random" function where some magical process has listed out every possible string and then, for each string, flipped a coin, assigned that string True if the coin came up heads and False if it came up tails, and wrote down all the results in an infinite table and called that the function f.

The first type of "expressible" function is the type that we most commonly encounter and that we can implement as programs (i.e., a list of finitely many instructions to go from string to boolean).

Nearly all of the second type of function -- the "random" ones -- cannot be expressed as a program. The only way to capture such a function's behavior is to write down the infinite look-up table that assigns each string a boolean value.

Now you are probably asking, "How do you know that the infinite tables cannot be expressed using some program?" and the answer is because there are too many possible infinite tables.

To give you an intuition for this, consider all the ways we can assign boolean values to the strings "a", "b", and "c": there are 2^3 = 8. For any finite set X of n strings there will be 2^n possible tables that assign a boolean value to each string in X. The critical thing here is that 2^n is always strictly larger than n for all n >= 1.
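
For the finite case this is easy to play with; a small sketch that enumerates all 2^3 tables for the three strings above:

    # Enumerate every table assigning a boolean to each of "a", "b", "c".
    from itertools import product

    strings = ["a", "b", "c"]
    tables = [dict(zip(strings, bits))
              for bits in product([False, True], repeat=len(strings))]
    print(len(tables))  # 8 == 2**3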

This fact that there are more tables mapping strings to booleans than there are strings still holds even when there are infinitely many strings. What exactly we mean by "more" here is what Cantor and others developed. They said that a set A has more things than a set B if, for every possible way of pairing things from A with things from B, there are always things in A left over (i.e., not paired with anything from B).

Cantor's diagonalization argument applied here is the following: let S be the set of all finite strings and F be the set of all functions/tables that assign a boolean to each element in S (this is sometimes written F = 2^S). Now suppose F were countable. By definition, that would mean there is a pairing that assigns each natural number to exactly one function from F with none left over. The set S of finite strings is also countable, so there is also a pairing from natural numbers to all elements of S. This means we can pair each element of S with exactly one element of F by looking up the natural number n assigned to s \in S and pairing s with the element f \in F that was also assigned to n. Crucially, what assuming the countability of F means is that if you give me a string s then there is always a single f_s that is paired with s. Conversely, if you give me an f \in F there must be exactly one string s_f that is paired with that f.

We are going to construct a new function f' that cannot appear in this pairing. The way we do this is by defining f'(s) = not f_s(s). That is, f'(s) takes the input string s, looks up the function f_s that is paired with s, calculates the value of f_s(s), then flips its output.

Now we can argue that f' cannot appear anywhere in the pairing, since it differs from every paired function. Why? Well, suppose f' were paired with some string, that is, f' = f_s for some string s. Now if we look at the value of f'(s) it must be the same as f_s(s), since f' = f_s. But also f'(s) = not f_s(s) by the way we defined f', so we get f_s(s) = not f_s(s), which is a contradiction. Yet f' is a perfectly good function from strings to booleans, so it belongs to F. The pairing therefore misses at least one element of F, and our premise that F was countable must be wrong.

Another way to see this is that, by construction, f' differs from every paired function f_s, specifically on the input value s that is paired with it.

Thus, the set F of all functions from strings to boolean must be uncountable.
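
If it helps, the diagonal step can be written as a couple of lines. Here `paired_function` is a hypothetical stand-in for the supposed enumeration s -> f_s; no such total pairing can actually exist, the code only shows how f' would be built from it:

    from typing import Callable

    StrPredicate = Callable[[str], bool]

    def f_prime(s: str, paired_function: Callable[[str], StrPredicate]) -> bool:
        f_s = paired_function(s)  # the function the supposed pairing assigns to s
        return not f_s(s)         # disagree with f_s on the one input s itself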

yusina•18h ago
I appreciate your lengthy explanation and I largely agree with it, even though it probably doesn't help much, because anybody who has not understood this yet will have stopped reading by 20% in at the latest. That's not your fault, just an observation that attention spans are short and people strongly prefer to spend their time reading things they are interested in. And anybody interested in this subject who has this much time to spare has likely already learned it elsewhere. But I'd be happy to be wrong and would welcome it if even a single person gained a better understanding through your text.

What I would suggest though is to avoid using the term "random" for this. I know you put it in quotes, but that term is so often misused that we do it a disservice by adding more misuse. Numbers (or other objects) are not random; it's the process that produced them in some context that's random. Random processes have very specific, mathematically well-defined properties, just like the concept of decidability does, and throwing the term around with other meanings harms it in the same way misusing "decidability" does.

What you are probably looking for is a term like "arbitrary".

mreid•18h ago
Sure, “arbitrary” is a suitable term here too. A random process is just one way to generate an arbitrary function.

My use of "random" here was referring to the coin-flipping process and was more for building intuition than for precisely specifying all the other non-expressible functions. I was trying to allude to the fact that these other types of functions don't have any easily expressible process behind them. When I've taught this stuff in the past I've found that people latch onto coin flipping more easily than imagining some arbitrary assignment of values.

For what it’s worth, I used to be a researcher in probability and information theory and have published papers on those topics so I am aware of the various technical definitions of randomness (Kolmogorov axioms, algorithmic probability theory, etc.)

I think you’re right about my comment being a little too lengthy for most people to find useful. I started explaining this stuff and got carried away.

drewhk•18h ago
> The set of all possible mapping from all possible finite strings to booleans is definitely not countable.

Just to add another perspective to this, this is one of the places where classical and constructive mathematics diverge. Do those functions that are not expressible by algorithms (algorithms are countable) even exist? Of course you can define them into existence, but what does that mean?

Another bit of food for thought is to consider whether the limits imposed on computation by the Turing machine are a law of physics. Is this an actual physical limit? If so, what does that mean for the functions not expressible by algorithms? What is so exciting about programs/algorithms is that they are both a well-defined mathematical object suitable for formal analysis and actual machines, fully realizable physically, with properties that are physically falsifiable.

Before anyone starts to nit-pick: I just put this comment here as a conversation starter, not a precise train of thought. This is a deep rabbit hole that I think is worth exploring for everyone interested in the computational world. I am pretty sure other commenters can add more accurate details!

mreid•18h ago
These are definitely thought-provoking questions and there are branches of mathematical philosophy such as constructivism and mathematical intuitionism that explore these.

Even if computation is not directly part of the laws of physics, knowing that humans and our computers are limited to things that are finite and computable might place limits on how we can appreciate how the universe works.

This is kind of a digression, but if you (or others) are interested in some examples of things that are right on the edge of these questions you should check out the [busy beaver function](https://www.quantamagazine.org/amateur-mathematicians-find-f...). It tells you the maximum number of steps an n-state Turing machine can take before halting.

drewhk•18h ago
It is also interesting to consider, that if all transcendental numbers exist physically, then it basically means that there is an experiment that yields the Nth digit of such a number (for any N assuming unlimited physical resources to realize the experiment). If such experiment does NOT exist though, then there cannot be any relevance physically of that Nth digit (otherwise the "relevance" would materialize as an observable physical effect - an experiment!). This is something Turing machines cannot do for uncomputable numbers, like Chaitin's Omega, etc. We can yield the Nth digit of _many_ transcendental numbers (PI, e, trig functions, etc), but not all of them. It is so interesting that physics, machines and the existence of all the real numbers are so intertwined!

Of course one can also ponder: even if a mathematical object is "un-physical", can it still be useful? Like negative frequencies in Fourier analysis, non-real solutions to differential equations, etc. Under what conditions can "un-physical" numbers still be useful? How does this relate to physical observation?

And just for the fun of it: when you execute unit tests, you are actually performing physical experiments, trying to falsify your "theory" (program) :D

turboponyy•15h ago
> It's better to say that some functions like "does this program halt?" simply don't exist.

Let f : (p: String) -> Boolean equal the function that returns True if p is a halting program, and False otherwise

aeneasmackenzie•10h ago
This is really just the constructive/classical argument but I want to be specific.

You just named a function and specified a property you want it to have. However no function with this property meaningfully exists. We can manipulate the symbol just fine, but we can never look inside it because it’s not real. Classical mathematics was developed before computation was relevant and the question of decidability arose fairly late, it makes more sense to consider it an early attempt at what is now called intuitionistic mathematics. The halting problem disproved excluded middle.

pdpi•15h ago
> The result becomes less weird when you realize that that almost all functions from string -> boolean are not expressible as a program.

I think this is one of those cases where a maths background makes computer science much easier. It only takes enough calculus to get you to entry level differential equations before you’re confronted with the fact that most functions ℝ → ℝ aren’t elementary functions (or admit any closed-form expression at all). In a certain sense, “program” is really just a weird word for “closed-form expression”.

tliltocatl•13h ago
IMHO what just referring to uncountability misses is that not only are most functions inexpressible, so are many __useful ones__. Most ℝ→ℝ functions (and even most real numbers) are so unremarkable that there is no way to uniquely name them. The fact that some useful functions are undecidable is quite a bit less trivial than "there are more functions than there are programs to evaluate them".
SkyBelow•12h ago
Isn't this related to why the busy beaver is the fastest-growing function? At some number of states N a Turing machine can emulate any given closed-form function, and at some larger number of states N + M it can use that function inside more complex functions. That gives a lower bound: the actual busy beaver value for N + M states would likely be an even larger output than that of the same-sized Turing machine emulating whichever fast-growing function is used for comparison.
rstuart4133•4h ago
> This is a really nice explanation of decidability.

I'm not as enthused about it as you. It doesn't mention that every undecidability involves an infinity. What makes a problem undecidable is not that you can't write an algorithm for it, it's that you can't guarantee the algorithm spits out an answer in a finite number of steps.

Take the halting problem. He defines it as "does [the Turing] machine M halt on input i?". Famously, no algorithm can decide that in general. But you can write an algorithm for it, and if you restrict the Turing machine it's analyzing to having a finite tape, it will always spit out the correct answer. The problem is that the algorithm constructs a power set of the Turing machine's states. If you run it on a machine with infinitely many states, it potentially needs to construct the power set of an infinity. That's not just an infinity, it's a new, distinguishable, and in some sense higher-order infinity than the one you started with.

I can't call an explanation nice when it omits the central idea that explains what is going on under the hood. His description of the universality of Turing machines, on the other hand, was nice.

gerdesj•1d ago
I'm not sure
graycat•23h ago
> What does “Undecidable” mean, anyway

A big and relatively recent example is to take (1) the axioms of Zermelo-Fraenkel set theory and (2) the axiom of choice and ask if (1) can be used to prove or disprove (2). The surprising answer is "No", i.e., (1) cannot be used either to prove or to disprove (2). So, one can work with (1) and, whenever convenient, assume (2), continue on, and never encounter a problem.

So, given (1), (2) is undecidable.

The work was by Paul J. Cohen as at

https://en.wikipedia.org/wiki/Paul_Cohen

The field of research about what is undecidable also goes back to Kurt Gödel as at

https://en.wikipedia.org/wiki/Kurt_G%C3%B6del

Another example, starting with (1), of an undecidable statement is the continuum hypothesis:

     There is no set whose cardinality is
     strictly between that of the integers
     and the real numbers.
In simple terms, the claim is that there is no set X that is too large to be put into 1-1 correspondence with the set of integers yet too small to be put into 1-1 correspondence with the set of real numbers.
Animats•23h ago
Any deterministic system with a finite number of states is decidable, in the halting problem sense. Either it halts or repeats a state. The halting problem only applies for infinite memory.

Now, there are finite state systems where the halting problem is arbitrarily hard. But that's not undecidability. That's complexity. That's a problem in the space where P=NP lives.

The article does not make this distinction, and it's important.

ummonk•22h ago
Yes, the article is clearly confused about the concepts, given its mention of bcrypt cracking and chess engines.
dmurray•19h ago
Yes, I thought the article was really good until it got to that point.

> a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.)

A Turing machine can be trivially repurposed as a bcrypt cracker or chess engine (for the same definition of trivial the author is using), so if it's not intuitive that a computer can do this, then your intuition is wrong.

Program optimizers and theorem provers also exist, of course, but in those cases the problems really are undecidable so no Turing machine is guaranteed to work on any given program or theorem.

bubblyworld•18h ago
A solution to the Halting problem can be repurposed as a general-purpose theorem prover. The author is correct. You simply write a program that searches all possible valid proofs till it finds the one you are looking for (or maybe doesn't and runs forever). Then you check whether it halts with your Halting solution - if that returns true, you know that a proof exists, otherwise you know that one doesn't.

In other words, you can use undecidability of first-order logic to prove undecidability of the Halting problem if you like, although it's a bit of a chicken-egg thing historically (I believe).
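
For concreteness, a rough sketch of that construction. `halts` stands for the (impossible) halting oracle, while `is_valid_proof` and `decode_proof` are hypothetical stand-ins for an ordinary, computable proof checker and proof enumerator for your formal system:

    from itertools import count

    def is_provable(goal, halts, is_valid_proof, decode_proof):
        def searcher():
            for n in count():  # enumerate every finite proof candidate
                if is_valid_proof(decode_proof(n), goal):
                    return n   # halts exactly when some proof of `goal` exists
        return halts(searcher)  # True => provable, False => no proof exists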

dmurray•18h ago
I edited the last part; I meant that the Turing machine from my second paragraph can't do those things, though the Halting machine can.
bubblyworld•18h ago
Right, yeah that makes sense. But I think the author understands that very well.
dmurray•18h ago
Of course he must. But the fact that we're arguing about it (and there's another thread with the same conversation) suggests that, at the very least, the intent of this part wasn't clear.
Maxatar•7h ago
>A solution to the Halting problem can be repurposed as a general-purpose theorem prover. The author is correct. You simply write a program that searches all possible valid proofs till it finds the one you are looking for (or maybe doesn't and runs forever).

The author is not correct and it's a common misconception. Simple question for you... let's say I give you a magical black box that can solve the halting problem. Now please use it to prove or disprove the continuum hypothesis (within ZFC).

Hopefully you'll come back to me and say the black box shows that the continuum hypothesis cannot be proved or disproved (in ZFC). Okay, we agree so far, but note that we have established that having a solution to the halting problem cannot actually be repurposed as a general purpose theorem prover, since no such proof exists in the first place.

Okay, you might counter by saying that a solution to the halting problem can be repurposed as a general purpose theorem prover for propositions that have either a proof or a disproof, and just ignore those pesky propositions that are neither provable nor disprovable...

But then the solution to the halting problem doesn't get you anything... if you're dealing with a proposition that either has a proof or a proof of its negation, you don't need a solution to the halting problem to find it, you are guaranteed to find it eventually by definition.

The claim that a black box that solves the halting problem can be used as a general purpose theorem prover is simply untrue and confuses certain concepts together. At best you can claim it gives you a restricted theorem classifier.

Tainnor•53m ago
You're mixing things up. By a "general theorem prover", we typically mean "a program which can decide for every sentence in FOL whether it's valid" (I say FOL, because the situation is even worse for HOL), i.e. a decider for the set of valid FOL sentences. The sentence ZFC -> CH (where ZFC and CH are expanded as needed) is not valid because CH doesn't follow from the ZFC axioms. Neither is ZFC -> not(CH) valid. We know both of these results from proofs specific to this problem, but a general purpose theorem prover would give us both of these results for free. If we had such a tool, mathematics would essentially be "solved" (maybe apart from deciding what questions to ask). But since FOL is undecidable, it can't exist.

What you are asking for is a machine that for each pair of formulas A, B (correctly) decides that either A -> B or A -> not(B), but this is impossible for model-theoretic reasons (not all theories are complete) and that has nothing to do with decidability.

bubblyworld•19h ago
I don't think this is an important distinction. The point of Turing machines is that you can ask questions like "can all instances of this problem be solved uniformly, and how relatively expensive is it?". This requires infinite memory in a formal sense, because instances of (most) problems can be arbitrarily large.

Yes, if you ask the same question with a fixed (finite) memory restriction on everything, the answer is uninteresting. To me, this is... uninteresting. It tells you nothing about the underlying logical structure of your problem, which is what mathematicians are really trying to get at!

(first and foremost Turing machines are a tool for analysing problems mathematically, not a model of your laptop)

Also note that "states" in your sense are not the same as "states" in a Turing machine (which are just one of the inputs to the transition function). There are Turing machines with fewer than ten thousand states whose behaviour is undecidable in ZFC (https://arxiv.org/abs/1605.04343).

Ukv•14h ago
> Yes, if you ask the same question with a fixed (finite) memory restriction on everything, the answer is uninteresting. To me, this is... uninteresting. It tells you nothing about the underlying logical structure of your problem, which is what mathematicians are really trying to get at!

> (first and foremost Turing machines are a tool for analysing problems mathematically, not a model of your laptop)

I think what makes Animats's distinction worth stressing is that a lot of people miss this and fall into the trap of making false/misleading claims about the impact of the halting problem on actual computers/software.

For instance, claiming that the halting problem proves there are problems humans can solve that computers fundamentally can't, giving "the halting problem makes this impossible" as a reason against adding a halt/loop detector to a chip emulator, making all sorts of weird claims about AI[0][1], or arguably even this author's claim that a halt detector would make bcrypt cracking trivial.

[0]: https://mindmatters.ai/2025/02/agi-the-halting-problem-and-t...

[1]: https://arxiv.org/pdf/2407.16890

bubblyworld•13h ago
That's fair, I've run into a lot of misconceptions like that before. And probably believed some of them at earlier points in my life =P

(Roger Penrose himself makes a version of that claim in "the Emperor's New Mind", so we're in good company...)

About the bcrypt thing... yeah, one way you can do it is to write a program that generates all possible keys starting with the letter 'a' and tests each against the hash, halting as soon as one matches. Then you run your halting oracle on that program, and if it returns true you know the first letter of the key is 'a'.

Do this for each index/letter combination and you can crack bcrypt keys with a linear number of calls to your halting oracle.
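
Roughly, as a sketch: `halts` is again the hypothetical oracle, `bcrypt_matches` is ordinary (computable) bcrypt verification, and `strings_with_prefix` is a hypothetical enumerator of all finite strings beginning with a given prefix:

    def key_starts_with(prefix, target_hash, halts, bcrypt_matches, strings_with_prefix):
        def searcher():
            for candidate in strings_with_prefix(prefix):
                if bcrypt_matches(candidate, target_hash):
                    return candidate  # halts only if some matching key starts with `prefix`
        return halts(searcher)

Extending the confirmed prefix one character at a time then recovers the whole key with a number of oracle calls proportional to its length (times the alphabet size).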

Ukv•12h ago
> About the bcrypt thing... yeah, so one way you can do it is write a program that generates all possible keys starting with the letter 'a', and tests them. Then you run your halting oracle on that, and if it returns true you know the first letter of the key is 'a'.

And you really can do that. For actual computers we have cycle detection algorithms that can tell you definitively if your program will halt or not.

Determining whether the program halts can still be complex or slow, which is the case here, but the halting problem does not make it undecidable (because it doesn't apply to real computers) nor place any lower bound on that complexity (nothing preventing a more advanced halt detector from finding better shortcuts).

bubblyworld•11h ago
The conversation is going in circles here =) I agree with everything you have written.
Animats•8h ago
> (Roger Penrose himself makes a version of that claim in "the Emperor's New Mind", so we're in good company...)

Right. That leads to the whole 'brains are special and analog and quantum or something and thus can't be emulated digitally' line of thought. That line of argument is looking rather threadbare since LLMs got good. That's a different issue than decidability, though. Penrose does raise a good question as to how biological brains get so much done with so little power and rather low-frequency signals.

Xcelerate•11h ago
To be fair to [0], Leonid Levin (co-discoverer of NP-completeness) makes a similar but slightly weaker claim in a more formal sense: that no algorithm or even random process can increase mutual algorithmic information between two strings (beyond O(1)), which includes any process that attempts to increase mutual information with the halting sequence (https://cs-web.bu.edu/fac/lnd/dvi/IIjacm.pdf).

Nevertheless, we clearly do have some finite amount of information about this sequence, evident in the axioms of PA or ZFC or any other formal system that proves an infinite number of programs as non-halting (hence why the Busy Beaver project has been able to provably confirm BB(5)). We presume these systems are truly consistent and sound even if that fact is itself unprovable, so then where exactly did the non-halting information in the axioms of these systems “come from”?

Levin stops short of speculating about that, simply leaving it at the fact that what we have so far cannot be extended further by either algorithmic or random means. But if that’s the case, then either AI is capped at these same predictive limits as well (i.e., in the sense of problems AI could solve that humans could not, both given unlimited resources), or there is additional non-halting information embedded in the environment that an AI could “extract” better than a human. (I suppose it’s also possible we haven’t fully exploited the non-halting information we do have, but I think that’s unlikely since we’re not even sure whether BB(6) is ZFC-provable and that’s a rather tiny machine).

bubblyworld•10h ago
What is the halting sequence? It isn't mentioned in your linked article anywhere (or on google), and makes the rest of your response a little hard to make sense of.
Xcelerate•10h ago
He mentions it on the second page:

> any Martin-Lof random sequence that computes (i.e. allows computing from it) a consistent completion of PA also computes the Halting Problem H; and by deLeeuw et al. [1965], only a recursive sequence (which H is not) can be computed with a positive probability by randomized algorithms.

It's just the sequence of the solutions to the halting problem, i.e., the characteristic function of the halting set. Levin points out this sequence is not recursive/decidable.

bubblyworld•7h ago
Thanks - I have to admit there's far too much jargon in there for me to make heads or tails of the main claims, let alone say anything sensible about your original reply. I can't even work out how your statement that no algorithmic process can increase the mutual information between two given strings is formalised in the language of the paper, since there seems to be an obvious counterexample - the mutual information between an infinite string of 0s and the halting sequence can be made arbitrarily large by a program with oracle access to the halting sequence by simply transforming each element into the corresponding element of the halting sequence. So of course I am missing something.

But it seems interesting, thanks for the link =)

yusina•18h ago
The halting problem applies also for finite-but-unbounded memory.

If you give me a decider that can tell me, for any program with a state space up to size N, whether it halts or not, then I will be able to produce another program with a larger state space that your decider won't be able to decide. This new program doesn't use infinite space, just more than your decider can handle. You can't produce a single decider that works for all inputs.

It is often ok to approximate "finite but unbounded" with "infinite", and surely anything that can handle infinite inputs will be able to handle finite-but-unbounded inputs too, but in this context the two are not the same.

The term "infinite" is often misused to mean "finite but unbounded" but it is an important distinction.

dooglius•15h ago
That doesn't sound right. All my decider needs to do is wait for a repeated state (no halt) or a halt. If the state space is finite then one of these is guaranteed to happen.
yusina•15h ago
And how does your decider recognize that a state has been attained twice? If I make the input system large enough then you don't have enough space in your decider to save all the states which it has observed.
dooglius•15h ago
What do you mean "space in your decider"? My decider takes finite but unbounded memory, same as the machine it's deciding.
yusina•14h ago
Ok, if that's the class of systems we are talking about, then the system your decider wants to check does not need to attain the same state twice. It may or may not terminate, and your decider may never know. You can't have it both ways. Keep in mind that the input can be as adversarial as it wants, including having full knowledge of what your decider is trying to do.
dooglius•12h ago
What class of system are you talking about where the claim is false? Can you express your statement/definitions rigorously? A system that enters the same state twice will never halt, we know it will keep cycling through the same list of states over and over again.
Ukv•13h ago
> If I make the input system large enough then you don't have enough space in your decider to save all the states

We have cycle detection algorithms that don't require saving all states and work in the same big-O space complexity (constant factor overhead) as just running the input system normally without cycle detection.

To my understanding that means that, for any input system that runs in finite-but-unbounded space, the detector will determine whether it halts in finite-but-unbounded space - unless I am misunderstanding that term.

Dagonfly•9h ago
Assume your cycle detection D(TM, i) always outputs 1 when TM(i) doesn't halt (TM is the encoding of a Turing machine and i is the input to TM). Otherwise, if TM(i) halts, D does NOT halt. [1]

We can construct a new turing machine F(TM) = D(TM, TM). That is: We ask D to detect a cycle when a TM receives its own encoding as an input. What is the output of F(F)?

F(F) = D(F, F) = 1 can only happen if your cycle detector detected a cycle in F(F), but we just saw that F(F) then terminates with 1. Conversely, if F(F) doesn't halt, then D(F, F) doesn't halt either, which per the spec means the detector is claiming that F(F) terminated -- which it doesn't.

Therefore no such cycle detection can exist. It either has to bound the size of the input system or be non-exhaustive in its detection. The intuition here is: assume the encoding of D has size N and D can check all encodings up to size N; then D must be able to detect its own cycles. But the input to D(F, F) has size roughly 2N (because F is roughly the same size as D).

[1] If your original cycle detection outputs 0 if no cycle is present, then just wrap it in a TM that calls the original cycle detection and either returns on 1 or infinite loops on 0.
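
In code form (a sketch, where D is the hypothetical exhaustive detector just described, including the wrapper from [1]):

    # D(tm, i) returns 1 if tm(i) never halts, and loops forever if tm(i) halts.
    def F(tm):
        return D(tm, tm)

    # Case 1: F(F) halts. Then D(F, F) loops forever, so F(F) doesn't halt.
    # Case 2: F(F) doesn't halt. Then D(F, F) returns 1, so F(F) halts with 1.
    # Both cases are contradictions, so no unrestricted D of this kind can exist.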

Ukv•8h ago
I believe the problem there is that the constructed F(F) input itself would not run in finite space:

* F(F) internally runs its implementation of D's code on (F, F)

* ...which runs F(F) plus some constant-factor-overhead

* ...which runs its implementation of D's code on (F, F)

* ...which runs F(F) plus some constant-factor-overhead

* etc.

F ends up emulating itself recursively, with the overhead adding up.

Note that the claim I make of D is that for any input system that runs in finite-but-unbounded space it will determine whether it halts in finite-but-unbounded space - not for input systems that already by themselves use infinite space.

Animats•8h ago
Right. Cycle detection can be done by running two copies of the program in lockstep, but at a 2:1 step rate. If the states of the two copies ever match, you're in an infinite loop.
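
A sketch of that lockstep scheme (Floyd's tortoise and hare), assuming the emulated system is deterministic and exposed as a `step(state)` function plus a `halted(state)` predicate -- both placeholders, not any particular API:

    def halts_eventually(initial_state, step, halted):
        slow = fast = initial_state
        while True:
            if halted(fast):
                return True
            fast = step(fast)
            if halted(fast):
                return True
            fast = step(fast)  # the fast copy advances two steps per iteration
            slow = step(slow)  # the slow copy advances one
            if slow == fast:
                return False   # a repeated state: the machine is looping forever

It only stores two copies of the state, so the space overhead is a constant factor -- but it relies on the state space being finite, which is exactly the caveat under discussion.
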
LPisGood•11h ago
I don't think finite-but-unbounded and infinite make any difference in this setting, since at most finitely many cells of the memory tape may be used at any stage of computation.
yusina•4h ago
Exactly, that was my point. Thanks for putting it more succinctly.
Tainnor•1h ago
Decidability is a term from theoretical CS where languages can be arbitrarily big. It doesn't make sense to repurpose the term for actual finite-memory machines.
Aziell•22h ago
The first time I encountered the halting problem, I was honestly confused. I kept thinking there had to be another solution. But over time, I came to realize it wasn't a technical issue, it was about the limits of computation itself. This article explains the concept of undecidability really well. It breaks things down in a very practical way. That last part, about the limits of how we think, really hit me.
ummonk•22h ago
"This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible."

I don't buy this. Bcrypt cracking and solving chess is easy already if you don't care about runtime complexity (both have rather trivial finite -- albeit exponential -- runtime algorithms), and it wouldn't be any easier to transform them into halting problems.

jonahx•21h ago
What do you not buy? He's saying it could be used to do anything for free. He gave four examples. The first two don't have general solutions, and never will. You don't like the last 2 examples because we have algorithms for them, ok. It doesn't undermine the argument. And in any case both things were developed with effort and ingenuity... they weren't spit out of a halting problem calculator.
cvoss•21h ago
Having a decision algorithm does not make a computation free. It may take exponential time in the input size, doubly exponential time, some time involving Graham's number, ... etc. It may also take up an unreasonable amount of space. Indeed, some problems have such complex algorithms that you should expect the universe to end before the answer is determined and/or the computing device is doomed to collapse into a black hole, making the answer irretrievable.

So the blog post's claims that a hypothetical halting algorithm would solve anything "overnight" are exaggerated and naive.

jonahx•21h ago
"Free" in the sense of thought, cleverness, insight. You have an "everything" calculator. That is the sense in which it might be intuitive that it couldn't exist.

> So the blog post's claims that a hypothetical halting algorithm would solve anything "overnight" are exaggerated and naive.

I agree "overnight" is misleading in this context. However, I am fairly sure the author is aware of the point you are making.

Dylan16807•20h ago
> "Free" in the sense of thought, cleverness, insight. You have an "everything" calculator. That is the sense in which it might be intuitive that it couldn't exist.

Right.

But brute force solvers for bcrypt and chess are already "free" in the sense of thought, cleverness, insight. We already have the "everything" algorithm: iterate through all possible solutions in O(2^n) time and pick the best one.

A halting solver gains us nothing in these scenarios.

(The code for scoring a solution is the same code that would need to go into the halting solver. For bcrypt it's checking if the input matches, for chess it's the number of turns until checkmate.)
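
Something like this sketch, where `score` is a placeholder for whatever computable check the problem comes with (hash comparison for bcrypt, distance to checkmate for chess):

    from itertools import product

    def brute_force(alphabet, max_len, score):
        best = None
        for n in range(max_len + 1):
            for candidate in product(alphabet, repeat=n):
                if best is None or score(candidate) > score(best):
                    best = candidate
        return best  # exponentially many candidates, zero cleverness required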

jonahx•19h ago
Yeah, I thought I already addressed that above.

Can we do the same for a theorem prover? For proofs of some fixed finite length, I think the answer is yes, but without that constraint the answer is no. Whereas with a halting detector we could drop that constraint.

It still seems to me your complaint (and the other poster's) are just about these specific examples rather than general argument Hillel is making. Please clarify if that's not the case, and why.

ummonk•18h ago
The complaint is that Hillel is providing an intuitive explanation but that intuition is clearly faulty, as demonstrated by two of the examples he gave.

P.S., you can run that proof finding algorithm (iterate through every candidate proof one by one and check for validity) for proofs of finite length in general, not just some fixed finite length. Where the halting oracle comes in is that you can use it to check whether the proof finding algorithm will ever halt, and thereby find out whether the theorem is provable or not.

jonahx•18h ago
> for proofs of finite length in general, not just some fixed finite length.

For a brute force proof finder, for your program to be guaranteed to finish in theory, you have to pick a length. So it is fixed. Ofc you can choose whatever length you want. But you don't have that constraint with the halting oracle. Perhaps we're saying the same thing?

ummonk•16h ago
For the program to be guaranteed to finish in theory, all that is required is that a valid proof exists. You don't have to pick a length in advance - the program just has to keep trying proofs of progressively longer lengths.
jonahx•9h ago
But it won't finish if there is no proof! A halting oracle will finish either way.
Maxatar•7h ago
Sure, but neither bcrypt nor chess falls into the category of being unprovable, so having a halting detector doesn't help for those situations. The author is mixing up problems that are "hard" in the sense that we know in principle how to solve them but need a lot of resources, versus "hard" in the sense that we genuinely don't know how to solve the problem, even with access to infinite resources.

Mixing these two up is very misleading and detracts from what is otherwise a well written article.

Dylan16807•18h ago
The problem is the author's giving a misleading picture of the problem space with those examples.

Tasks like optimizing whole programs or proving theorems are difficult or impossible to do perfectly. We don't have a solution verifier that we can plug into the "free" brute force framework. With theorem provers, even when restricted to fixed finite (non-trivial) lengths, I don't think we have one that always gives the right answer. And fully optimizing programs is similarly impossible to be perfect at. But if you had a halting solver you could bypass those difficulties for all of those problems.

Tasks like breaking encryption or playing chess have super simple verifiers. A halting solver would solve them, sure, but we already have programs to solve them. We only lack a fast enough computer to run those programs.

These are both big and significant classes of problem. The latter is not just a couple scattered examples. It has its own answers to the important questions like whether you can try every answer to make an "everything" calculator. For the first class you can't, for the second class you can. The intuition that such a thing is "too powerful" is actually a pretty bad intuition here.

ComplexSystems•12h ago
My complaint is that I don't get what we are getting at when we reference "creativity," "cleverness," "ingenuity" and so on. The halting problem cannot be solved, in the general case, by any machine constructable in physical reality. That includes both computers and human beings. And computers can keep at it much longer than I can, come up with novel hypotheses to test much longer than I have the patience for, and can basically do anything I can do except try a bunch of random crap out of sheer frustration and eventually give up.
godelski•22h ago
I think it helps to understand my namesake's Incompleteness Theorem[0]. They are actually connected [1,2]

  1) no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. 
  For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

  2) the system cannot demonstrate its own consistency.
Any axiomatic system will have true statements that are unprovable. Which means basically any system of logic. If you've made an assumption you'll end up with this. You're always making assumptions, even if you don't realize it. It's usually a good idea to figure out what those are.

I hear a lot of people say "from first principles". If you're not starting from your axioms, you're not starting from first principles. Think Carl Sagan making a pie from scratch.

[0] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

[1] https://scottaaronson.blog/?p=710

[2] https://cstheory.stackexchange.com/questions/10635/halting-p...

Tainnor•47m ago
> Any axiomatic system will have true statements that are unprovable.

Any effectively computable axiomatic system of sufficient strength.

The "effectively computable" condition is probably uninteresting (if we couldn't effectively decide if something is an axiom we wouldn't be able to check proofs), but the "sufficient strength" part matters. There are interesting theories that are complete (i.e. for every sentence P, either P or not(P) is a consequence), such as the theory of the natural numbers with addition (but without multiplication).

irthomasthomas•14h ago
Ha, I bought https://undecidability.com, but haven't decided what to do with it yet, so for now it just redirects to my public bookmarks.
theodorethomas•10h ago
Can someone explain to me why, although we've known for at least 100 years that classical physics is not a correct description of the hardware of the universe, we are still hooked on 90-year-old Turing machines, which cannot physically exist (they violate quantum mechanics) and whose theoretical limitations are, consequently, irrelevant?
Tainnor•28m ago
My pet peeves when it comes to undecidability:

1. People saying "sure, problem A may be undecidable but it's decidable for instance X". That's not how the concept works - if you restrict a problem to a specific instance (in the sense of having a decider for the intersection of A and {X}), then it will always be decidable, because either the Turing Machine that always outputs true or the Turing Machine that always outputs false is a decider (it doesn't matter that we don't know which one; see the sketch below). I know what people are trying to say - "just because a problem is undecidable it doesn't mean you can't solve specific instances" - but this is arguing against a misconception of what decidability means.

2. "Decidability doesn't matter because real-world computers have finite memory". Yeah, but if your solution to problem X involves running an algorithm until you've run through all possible 2^N configurations for N=the size of your memory, you haven't really practically solved it either in any way that anyone would want to use.