The P vs NP problem is expressible as the question of whether a specific Π_2 sentence is true or not (more specifically, whether a particular base theory proves the sentence or its negation). Unlike problems involving higher-order set theory or analytical mathematics, I would think any claimed proof of a relatively short arithmetical sentence shouldn't be too difficult to write up formally for Lean (although I suppose Fermat's Last Theorem isn't quite there either, but my understanding is that we're getting closer to having a formal version of it).
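For instance, one standard way to write it down (hedged: the details depend on the machine model and step-counting conventions) makes the Π_2 shape explicit:

    P ≠ NP   iff   ∀ M ∀ k ∃ x :  machine M, run on input x for at most |x|^k + k steps,
                                  fails to decide correctly whether x ∈ SAT

The inner predicate is decidable by direct simulation, which is what makes the whole sentence Π_2.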
The impact of this would be that journals could publicly post which specific sentences they consider to represent various famous open problems, and a prerequisite for review would be a formal version of the purported proof that Lean validates automatically. This would nix the whole crank issue.
Then again, I may be way off base with how close we are to achieving something like I’ve described. My impression is that this goal seems about 5 years away for “arithmetical” mathematics, but someone better in the know feel free to correct me.
In many cases, the problem isn't that the statement of the theorem can't be properly expressed in Lean. It's very easy to write down the statement of FLT in Lean, for example. But for theorems whose proofs have been elusive for decades or centuries, the proofs very likely include objects in rather advanced mathematics that are not at all easy to encode in Lean. This is one of the current struggles of the FLT formalisation process, for example.
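For concreteness, a Lean 4 / Mathlib-style sketch of the statement (hedged: the actual FLT formalisation project states it via its own definitions, and conventions differ slightly):

    -- Sketch of FLT's statement; writing this down is easy, while
    -- producing the proof demanded by `sorry` is the multi-year effort.
    theorem flt_statement :
        ∀ (n : ℕ), 2 < n →
        ∀ (a b c : ℕ), a ≠ 0 → b ≠ 0 → c ≠ 0 →
        a ^ n + b ^ n ≠ c ^ n := by
      sorry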
In any case, the problem here has nothing to do with formalisation; it's simply a case of a bad submission that, for unclear reasons, didn't get properly peer reviewed.
Here is a business idea. Start a "Frontier Journal" published electronically as a PDF. Contact professors, grad students, etc. for paper submissions. Publish their papers for a fee, say $100. The papers will buff their resumes and pad their egos. Send out the proceedings as a PDF to participants. Everybody profits, including the environment. I have seen this scam play out.
> With respect to the assumption, we have clearly elaborated on why a mathematical assumption is essential for proving computational hardness results in Appendix A of our paper as follows: In fact, the Turing machine itself is an assumption about machines (i.e., mechanized finite formal systems) that can be realized in the physical universe. In essence, the Turing machine represents a fundamental physical assumption, and Turing’s findings on uncomputability signify the physical limits of mechanically solving all instances of a problem.
But Turing's assumption was necessary precisely because it was not mathematical. He had to cross the gap between physics (where computation takes place) and mathematics (which he wanted to use to discuss computation) with an assumption about physics that allowed him to construct a mathematical definition of computation. The authors of the paper, on the other hand, make an assumption within mathematics that could simply be mathematically wrong. It's like "proving" Goldbach's conjecture by assuming that if an even number were ever not the sum of two primes, then that number must also be divisible by 17, and proceeding from there. Their "proof" is essentially: we assume that if a tractable algorithm (a mathematical construct) for SAT existed, it would have to work in a certain way [1]; but that way isn't possible, ergo a tractable algorithm for SAT cannot exist.
Physics is based on (mathematically) unprovable assumptions. In mathematics, though, "assumptions" are conditionals. The paper should have said: we make a mathematical conjecture that, if true, would lead to our conclusion. You can't assume a mathematical conjecture and then call the entailment (from that conjecture) of X a proof of X. It's calling a proof of `A ⇒ B` a proof of `B`. I agree that a paper which presents a "mathematical proof" resting on a redefinition of what a mathematical proof means should have been rejected outright.
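In Lean terms (a toy sketch of mine; `Conjecture`, `P_ne_NP`, and `entailment` are hypothetical placeholders, not the paper's content):

    -- The paper's unproven assumption and its desired conclusion,
    -- declared as opaque propositions.
    axiom Conjecture : Prop
    axiom P_ne_NP : Prop

    -- What such a paper actually establishes: the implication.
    axiom entailment : Conjecture → P_ne_NP

    -- Lean will not accept `entailment` as a proof of `P_ne_NP` itself;
    -- you would still need a proof of `Conjecture` to apply it to.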
[1]: The paper says: "We only need to assume that this task is finished by dividing the original problem into subproblems" (... of the same kind)
What's funnier is that their assumption requires an even bigger leap of faith than their conclusion, i.e. more people are inclined to believe that P ≠ NP than that if someday somebody did find a tractable algorithm for SAT, that algorithm would surely work by division into smaller subproblems.
CJefferson•16h ago
It's easy to get a superficial understanding of the problem, and then very easy to subtly misunderstand it.
I've reviewed published papers by respectable people where they've made one of several easy mistakes:
* Be careful about how you encode your problem: it's easy to make the encoding "too large", at which point your problem can be solved in P in the (inflated) input size. The opposite pitfall: don't represent a sudoku as triples "x-pos,y-pos,value" with those 3 numbers encoded in binary, because now if I give you a problem with only 2 filled-in values, you can't take the solution as your 'certificate'; your input is about size 6 * log(n), but the solution will be n * n * log(n) (see the sketch after this list).
* It's also easy, if you write your problem as a little language, to accidentally make solutions impossible to check in P time.
* When proving a reduction (say, turning SAT into a Sudoku to prove Sudoku is NP-complete), it's (usually) easy to show how solutions map to solutions (so you show how the SAT instance's solution turns into a particular solution of the Sudoku). It's usually much harder, and easy to make mistakes, to show the Sudoku can't possibly have any other solutions.
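A back-of-the-envelope check of the arithmetic in the first bullet (my own sketch; `encoding_bits` and the choice of n = 256 are illustrative):

    import math

    # Rough bit counts for the sparse-triple sudoku encoding. With k
    # filled-in cells, the input is ~3k*log2(n) bits, but a full solution
    # is ~n^2*log2(n) bits: exponentially larger than the input when k is
    # tiny, so the solution cannot serve as a polynomial-size certificate
    # under that encoding.
    def encoding_bits(n, filled_cells):
        triple_bits = 3 * math.ceil(math.log2(n))  # one (row, col, value) triple
        input_bits = filled_cells * triple_bits
        certificate_bits = n * n * math.ceil(math.log2(n))  # the solved grid
        return input_bits, certificate_bits

    print(encoding_bits(n=256, filled_cells=2))  # (48, 524288)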
eru•15h ago
Basically, they show that you can use SAT to solve Sudoku, and then claim that this makes Sudoku NP-complete. (All it shows is that Sudoku is in NP.)
People often make a similar mistake when they want to show that a certain problem isn't solvable in linear time: they show that sorting can solve their problem. But that's the wrong direction; they would need to show that their problem can be used to sort.
dataflow•12h ago
Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A. Which certainly means in this case that Sudoku is NP-hard. And it doesn't (without introducing additional facts) imply Sudoku is in NP either. I don't see anything wrong here, do you?
andrewla•11h ago
That is, you've shown that an oracle that solves any SAT problem in constant time can solve any Sudoku in polynomial time.
The more difficult side is to show that for any SAT instance, you can reduce it to a Sudoku.
Proving that you can use SAT to solve Sudoku really is not a great or interesting result; it is very clear that Sudoku (as a decision problem) is in NP, since verifying that a Sudoku solution is correct is achievable in polynomial time.
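For instance, a polynomial-time verifier for generalised n²×n² Sudoku can be sketched like this (my own illustration; `verify_sudoku` and its conventions are not from the thread):

    # Verifier for generalised Sudoku: an n^2 x n^2 grid over symbols
    # 1..n^2, with `clues` fixing some cells. The checks below cost
    # O(n^4) cell reads -- polynomial in the grid size, which is exactly
    # what places (the decision version of) Sudoku in NP.
    def verify_sudoku(n, clues, solution):
        size = n * n
        symbols = set(range(1, size + 1))

        # The solution must respect every given clue.
        for (r, c), v in clues.items():
            if solution[r][c] != v:
                return False

        # Every row and every column must be a permutation of 1..n^2.
        for i in range(size):
            if set(solution[i]) != symbols:
                return False
            if {solution[r][i] for r in range(size)} != symbols:
                return False

        # Every n x n box must be a permutation of 1..n^2.
        for br in range(0, size, n):
            for bc in range(0, size, n):
                box = {solution[br + dr][bc + dc]
                       for dr in range(n) for dc in range(n)}
                if box != symbols:
                    return False
        return True

    grid = [[1, 2, 3, 4],
            [3, 4, 1, 2],
            [2, 1, 4, 3],
            [4, 3, 2, 1]]
    print(verify_sudoku(2, {(0, 0): 1}, grid))  # True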
mzl•11h ago
> Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A.
Using SAT to solve Sudoku is a reduction of Sudoku to SAT. The order of the problem names switches depending on how you write it.
jcranmer•11h ago
Reducing an unknown-complexity problem to an NP-complete one means you can bound the complexity from above, but not from below. I can reduce binary integer addition to SAT, which means that binary integer addition is no harder than SAT... but we also know, from other algorithms, that it is in fact easier than SAT.
To bound from below, you have to reduce an NP-complete (or just NP-hard, which suffices) problem to the unknown-complexity one.
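As a cheat sheet, with ≤p denoting a polynomial-time reduction:

    Sudoku ≤p SAT  ⇒  Sudoku is no harder than SAT                       (upper bound only)
    SAT ≤p Sudoku  ⇒  Sudoku is at least as hard as SAT, i.e. NP-hard    (the lower bound)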
almostgotcaught•8h ago
Lololol
Joel_Mckay•15h ago
https://www.youtube.com/watch?v=IuX8QMgy4qE
Algorithmic isomorphism practically ensures most CS approaches will fail to formally model the problem clearly. To be blunt, that million dollar prize will be waiting around a long time.
An infinite number of papers does not necessarily have to converge on a correct solution. =3
jojomodding•7h ago
Why would Gödel's Incompleteness Theorem be an obstacle for P=NP in particular?
Why is the current definition of P or NP insufficiently formal or clear?
PS: Citing YouTube videos in mathematical discussions is a big red flag indicating you have not really understood things.
Joel_Mckay•6h ago
> "not really understood things"
Something currently impossible to prove is by definition confusing. lol =3
https://www.youtube.com/watch?v=aNSHZG9blQQ
jojomodding•2h ago
PS: I read through the transcript of the YouTube video you linked (and I also know the material from my CS degree), so I do in fact know what Gödel's Incompleteness Theorem is. Note that the video is really not about P=NP at all, any more than it is about 1+1=2. The use of P=NP in the video was just to give an example of "what is a statement."
Joel_Mckay•1h ago
People can twist it any way they like, but stating they can "sort of" disprove or prove P=NP is 100% obfuscated BS. That is why the million dollar prize remains unclaimed.
Have a great day, =3
andrewla•11h ago
The insane thing is that non-deterministic Turing machines are computable at all! It really feels like they belong to a different class of "magical" computers, or some axiom-of-choice / Banach-Tarski navel-gazing infeasible trick. You mean, you just "guess" the answers and your program will get the right answer?
But they are computable; the Church-Turing model of computation is fantastically powerful. Now the problem is just "feasibility" and "complexity". It seems to the initiate that there must be an answer hiding just around the next corner because it's SO OBVIOUS, but in the end, if you give someone n^100 time they can solve just about any problem you care to pose, and that still counts as P, so you're not going to stumble upon some grand insight.
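To make the "guessing" concrete, here is a sketch (my own toy example) of the standard deterministic simulation of a nondeterministic machine for SAT: enumerate every candidate certificate and run the polynomial-time check on each. The nondeterminism is computable; the cost is the exponential enumeration, which is exactly where the P vs NP question lives.

    from itertools import product

    # Deterministic simulation of the nondeterministic "guess": try all
    # 2^n assignments and run the polynomial-time verifier on each.
    def sat_by_exhaustive_guessing(num_vars, clauses):
        # clauses: list of lists of nonzero ints (DIMACS-style); literal v
        # means variable abs(v) is True if v > 0, False if v < 0.
        for assignment in product([False, True], repeat=num_vars):
            if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses):
                return assignment  # a satisfying "guess" exists
        return None  # unsatisfiable

    # (x1 OR NOT x2) AND (x2 OR x3)
    print(sat_by_exhaustive_guessing(3, [[1, -2], [2, 3]]))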