
AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•3m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
1•andreabat•5m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•11m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•13m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•18m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•20m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•20m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•23m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•24m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•26m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•28m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
1•ramenbytes•30m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•31m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•34m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•35m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•35m ago•0 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•37m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•40m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•46m ago•1 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•46m ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•49m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
2•ryan_j_naughton•49m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
2•ravenical•50m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•51m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•53m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•54m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•59m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•1h ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
5•saubeidl•1h ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•1h ago•0 comments

Some thoughts on journals, refereeing, and the P vs NP problem

https://blog.computationalcomplexity.org/2025/08/some-thoughts-on-journals-refereeing.html
69•luu•6mo ago

Comments

qsort•6mo ago
Tangential, but for some reason P vs. NP attracts an ungodly number of cranks, probably because, as far as open problems of that importance go, it's easy to understand the question.
CJefferson•6mo ago
I agree, but think it's worse.

It's easy to get a superficial understanding of the problem, and then very easy to subtly misunderstand it.

I've reviewed published papers by respectable people where they've made one of several easy mistakes:

* Be careful how you encode your problem: it's easy to make the encoding "too large", at which point your problem can be solved in P in the input size. For example, don't represent a Sudoku as triples "x-pos,y-pos,value" where those 3 numbers are encoded in binary, because now if I give you a problem with only 2 filled-in values, you can't take the solution as your 'certificate': your input is about size 6 * log(n), but the solution will be n * n * log(n).

* It's also easy, if you write your problem as a little language, to accidentally make it impossible to check in P time.

* When proving a reduction (say turning SAT into a Sudoku, to prove Sudoku is NP-complete), it's (usually) easy to show how solutions map to solutions (so you show how the SAT instance's solution turns into a particular solution to the Sudoku). It's usually much harder, and easy to make mistakes, to show that the Sudoku can't possibly have any other solutions.
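The first pitfall is easy to see numerically. A hypothetical back-of-the-envelope sketch (the functions and constants here are illustrative, not from the comment): with the triple encoding, an instance with only 2 clues is O(log n) bits, while the solved grid is n² log n bits, which is exponential in that input size.

```python
import math

def input_bits(k, n):
    # k filled cells, each a (x-pos, y-pos, value) triple in binary
    return 3 * k * math.ceil(math.log2(n))

def certificate_bits(n):
    # a full n x n solution grid, one value per cell
    return n * n * math.ceil(math.log2(n))

# With only k = 2 clues, the input shrinks to O(log n) bits while the
# natural certificate (the solved grid) stays at n^2 * log(n) bits,
# so the certificate is no longer polynomial in the input size:
for n in (9, 64, 1024):
    print(f"n={n}: input {input_bits(2, n)} bits, "
          f"certificate {certificate_bits(n)} bits")
```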

eru•6mo ago
I've also seen people make the wrong direction of reduction.

Basically, they show that you can use SAT to solve Sudoku, and then claim that this makes Sudoku NP-complete. (All it shows is that Sudoku is in NP.)

People often make a similar mistake when they want to show that a certain problem isn't solvable in linear time: they show that sorting can solve their problem. But that's the wrong direction, too.
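The direction issue in miniature (a toy encoding and a brute-force "SAT oracle", both hypothetical): solving a problem A *via* SAT is a reduction of A to SAT, which shows A is no harder than SAT; it says nothing about SAT being reducible to A, which is what NP-hardness of A would need.

```python
from itertools import product

def sat_solve(n_vars, clauses):
    # Brute-force SAT oracle: clauses are lists of (var, polarity)
    # literals. Returns a satisfying assignment or None.
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] if pos else not bits[v] for v, pos in cl)
               for cl in clauses):
            return bits
    return None

# Reduce a tiny problem A ("pick exactly one of x0/x1") to SAT.
# This shows A <= SAT (A is no harder than SAT) -- NOT SAT <= A.
clauses = [
    [(0, True), (1, True)],    # x0 or x1
    [(0, False), (1, False)],  # not x0 or not x1
]
print(sat_solve(2, clauses))  # → (False, True)
```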

dataflow•6mo ago
> Basically, they show that you can use SAT to solve Sudoku, and then claim that this makes Sudoku NP-complete. (All it shows is that Sudoku is in NP.)

Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A. Which certainly means in this case that Sudoku is NP-hard. And it doesn't (without introducing additional facts) imply Sudoku is in NP either. I don't see anything wrong here, do you?

andrewla•6mo ago
No, the GP is correct. If you use SAT to solve Sudoku, you have reduced Sudoku to SAT, not the other way around.

That is, you've shown that an oracle that solves any SAT problem in constant time can solve any Sudoku in polynomial time.

The more difficult side is to show that for any SAT instance, you can reduce it to a Sudoku.

Really, proving that you can use SAT to solve Sudoku is not a great or interesting result; it is very clear that Sudoku is in NP, since verifying that a Sudoku solution is correct is achievable in polynomial time.
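For concreteness, a verifier of the kind that last sentence refers to (a sketch, assuming the grid is given as a list of lists of ints): checking a solved grid takes time polynomial in its size, which is exactly what membership in NP asks for.

```python
def is_valid_sudoku(grid):
    """Check a solved n x n grid (n a perfect square) in O(n^2) cell
    visits -- polynomial time, which is what puts Sudoku in NP."""
    size = len(grid)
    box = int(size ** 0.5)
    want = set(range(1, size + 1))
    rows_ok = all(set(row) == want for row in grid)
    cols_ok = all({grid[r][c] for r in range(size)} == want
                  for c in range(size))
    boxes_ok = all({grid[br + r][bc + c]
                    for r in range(box) for c in range(box)} == want
                   for br in range(0, size, box)
                   for bc in range(0, size, box))
    return rows_ok and cols_ok and boxes_ok

# A valid 4x4 instance:
print(is_valid_sudoku([[1, 2, 3, 4],
                       [3, 4, 1, 2],
                       [2, 1, 4, 3],
                       [4, 3, 2, 1]]))  # → True
```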

mzl•6mo ago
> > they show that you can use SAT to solve Sudoku

> Wait, did you mess up the direction here too, or am I confused? If you reduce problem A to B, then it means B is at least as hard as A, because solving it would solve A.

Using SAT to solve Sudoku is a reduction of Sudoku to SAT. The order of the problem names switches depending on how you write it.

dataflow•6mo ago
Thanks, yeah, that's what I messed up. I was so focused on the reduction direction that I misparsed the statement.
jcranmer•6mo ago
You've messed up the direction here.

Reducing unknown-complexity to NP-complete means you can bound the complexity from above, but not from below. I can reduce binary integer addition to SAT, which means that binary integer addition is no harder than SAT... but we also know by other algorithms that it is in fact easier than SAT.

To bound from below, you have to reduce NP-complete (or NP-hard suffices) to unknown-complexity.
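The addition example can be made concrete (a hypothetical sketch, not from the thread): binary addition written as a boolean circuit is exactly the kind of formula a SAT encoding would use, so "does a + b = c?" reduces to SAT, yet evaluating the circuit directly takes linear time, far below what we expect for SAT. The reduction only bounds addition from above.

```python
def ripple_add(a_bits, b_bits):
    # Binary addition as a ripple-carry boolean circuit (LSB first).
    # Each output bit is a small formula over the inputs, so the
    # relation "a + b = c" reduces to SAT -- but running the circuit
    # itself is linear in the bit width.
    out, carry = [], False
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # sum bit
        carry = (a and b) or (carry and (a ^ b))  # carry-out
    out.append(carry)
    return out

def to_bits(x, width):
    return [bool((x >> i) & 1) for i in range(width)]

def from_bits(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

print(from_bits(ripple_add(to_bits(13, 8), to_bits(29, 8))))  # → 42
```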

dataflow•6mo ago
Oh gosh. I was so focused on the reduction direction that I misread the premise of the comment -- it seems somehow my brain parsed "use SAT to solve Sudoku" as "solve SAT using Sudoku". That's why I was saying that reducing SAT to Sudoku would imply Sudoku is at least as hard as SAT. It indeed would if they had actually done that, but they did the opposite. Not sure how my wires got crossed when reading, but thanks!
almostgotcaught•6mo ago
> I don't see anything wrong here, do you?

Lololol

eru•6mo ago
Yeah, this is a confusing topic and easy to get wrong!
hejsansvejsan•6mo ago
There's nothing subtle about the mistake in the paper at hand. The reason everybody expects proving P != NP to be difficult is that it's very hard to say anything at all about arbitrary programs. The authors just assume without justification that any program that solves SAT must operate in a certain recursive way -- obvious rubbish. It's hard to overstate how embarrassing this is for the Springer journal where this nonsense is published.
Joel_Mckay•6mo ago
"Gödel's Incompleteness Theorem"

https://www.youtube.com/watch?v=IuX8QMgy4qE

Algorithmic isomorphism practically ensures most CS approaches will fail to formally model the problem clearly. To be blunt, that million dollar prize will be waiting around a long time.

An infinite number of papers does not necessarily have to converge on a correct solution. =3

jojomodding•6mo ago
Gödel's Incompleteness Theorem has little to do with complexity theory. Complexity theorists routinely find out if two complexity classes are included in each other or not.

Why would Gödel's Incompleteness Theorem stop them for P=NP in particular?

Why is the current definition of P or NP insufficiently formal or clear?

PS: Citing YouTube videos in mathematical discussions is a big red flag indicating that you have not really understood things.

Joel_Mckay•6mo ago
I would advise listening to Professor Thorsten Altenkirch's brief introduction to the subject, and delaying the argumentum ad hominem opinions for a few minutes.

> "not really understood things"

Something currently impossible to prove is by definition confusing. lol =3

https://www.youtube.com/watch?v=aNSHZG9blQQ

jojomodding•6mo ago
The ad-hominem was maybe unwarranted, sorry. Let's get back to the subject matter by me repeating the material question in my comment above, which you ignored: "Complexity theorists routinely find out if two complexity classes are included in each other or not. Why would Gödel's Incompleteness Theorem stop them for P=NP in particular?"

PS: I read through the transcript of the YouTube video you linked (and also know the material from my CS degree education) so I do in fact know what Gödel's Incompleteness Theorem is. Note that the video is really not about P=NP at all, not any more than it is about 1+1=2. The use of P=NP in the video was just to give an example of "what is a statement."

Joel_Mckay•6mo ago
Personally I never saw degrees as an excuse to post nonsensical answers or troll people with personal barbs. P!=NP can be shown true for certain constrained sets, but P=NP has been shown to be currently unprovable.

People can twist it anyway they like, but stating they can "sort of" disprove or prove P=NP is 100% obfuscated BS. That is why the million dollar prize remains unclaimed.

Have a great day, =3

shusaku•6mo ago
Complaining about ad hominem and then writing that awful comment, you should be embarrassed.
jojomodding•6mo ago
> P!=NP can be shown true for certain constrained sets

What? That's not even wrong. What do you mean by that?

Joel_Mckay•6mo ago
"Directions (I speak no English)"

https://www.youtube.com/watch?v=6vgoEhsJORU

Best of luck =3

andrewla•6mo ago
I think it's both easy to understand, and also it seems very obvious that non-deterministic Turing machines are more powerful than deterministic ones. It almost feels like a non-deterministic Turing machine is more powerful in the sense that a halting oracle is more powerful.

The insane thing is that non-deterministic Turing machines are computable at all! It really feels like they belong to a different class of "magical" computers, or to an axiom-of-choice / Banach-Tarski navel-gazing infeasible trick. You mean, you just "guess" the answers and your program will get the right answer?

But they are computable; the Church-Turing model of computation is fantastically powerful. Now the problem is just "feasibility" and "complexity". It seems to the initiate that there must be an answer hiding just around the next corner, because it's SO OBVIOUS. But in the end, if you give someone n^100 time they can solve any problem you care to pose, and that still counts as P, so you're not going to stumble upon some grand insight.
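The "just guess the answers" machine is computable in exactly this brute-force sense (a toy sketch; the verifier here is an arbitrary stand-in): deterministically enumerate every possible certificate and run a polynomial-time check on each. The simulation works, it just takes 2^n steps.

```python
from itertools import product

def nondet_solve(n, verify):
    """Deterministically simulate a nondeterministic 'guess': try every
    n-bit certificate and run the (poly-time) verifier on each one.
    Computable, but the loop is 2^n -- the P vs NP question is whether
    this exponential search can always be avoided."""
    for guess in product([0, 1], repeat=n):
        if verify(guess):
            return guess
    return None

# Toy verifier: accept certificates with exactly two 1-bits.
print(nondet_solve(4, lambda g: sum(g) == 2))  # → (0, 0, 1, 1)
```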

zero_k•6mo ago
I work on a (once top-of-the-line) SAT solver [1] and a (currently top-of-the-line) model counter [2]. Actually, I am very interested in the part of the rebuttal that says "when each constraint has at most two variables, then the constraint satisfaction problem (and even the more difficult problem of counting the number of solutions) can be solved in time less than the lower bound that is claimed". In the model counting competition [3] there are actually problems that are binary-clause only, and I have to admit I am having trouble counting them any smarter than I already count normal (i.e. >=3-length-clause) problems.

Is there some very fun algorithm I'm missing that I could use for binary-clause-only solution counting? I have thought about it, but I just... can't come up with anything smarter than compiling it into a d-DNNF form, which most SOTA model counters (and so I as well) do.

[1] https://github.com/msoos/cryptominisat/ [2] https://github.com/meelgroup/ganak/ [3] https://mccompetition.org/
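To fix terminology for the discussion above, here is what model counting means in the binary-clause case (a naive exhaustive baseline for illustration only; it has nothing of the d-DNNF compilation that real counters such as ganak use, and the clause representation is hypothetical):

```python
from itertools import product

def count_models(n_vars, clauses):
    # Naive exhaustive model counter for CNF. Each clause is a list of
    # (var, polarity) literals; a literal holds when bits[var] == polarity.
    # Exponential-time baseline -- the question in the thread is whether
    # binary-clause-only instances admit anything fundamentally smarter.
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] == pol for v, pol in cl) for cl in clauses):
            count += 1
    return count

# Binary-clause-only instance: (x0 or x1) and (not x1 or x2)
print(count_models(3, [[(0, True), (1, True)],
                       [(1, False), (2, True)]]))  # → 4
```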

simiones•6mo ago
Isn't this the claim for which they link to a paper [0] directly, which describes an algorithm by Ryan Williams for achieving this bound?

[0] https://www.sciencedirect.com/science/article/pii/S030439750...

zero_k•6mo ago
Ah, interesting. Thanks, I'll look into it!
Xcelerate•6mo ago
I’m not up-to-date on how advanced proof assistants have become, but are we not nearing the point where serious attempts at proving P vs NP can be automatically validated (or not) by a widely vetted collection of Lean libraries?

The P vs NP problem is expressible as the question of whether a specific Π_2 sentence is true or not (more specifically, whether a particular base theory proves the sentence or its negation). Unlike problems involving higher order set theory or analytical mathematics, I would think any claimed proof of a relatively short arithmetical sentence shouldn’t be too difficult to write up formally for Lean (although I suppose Fermat’s Last Theorem isn’t quite there either, but my understanding is we’re getting closer to having a formal version of it).

The impact of this would be that journals could publicly post which specific sentences they consider to represent various famous open problems, and a prerequisite for review would be that every purported proof come with a formal version that Lean validates automatically. This would nix the whole crank issue.

Then again, I may be way off base with how close we are to achieving something like I’ve described. My impression is that this goal seems about 5 years away for “arithmetical” mathematics, but someone better in the know feel free to correct me.

Tainnor•6mo ago
The effort from going to a paper proof, even one that everyone agrees is correct, to a formally verified one is still gigantic. Requiring all submissions in maths papers to come with a verified Lean proof would be a complete non-starter - most mathematicians don't even use Lean.

In many cases, the problem isn't that the statement of the theorem can't be properly expressed in Lean. It's very easy to write down the statement of FLT in Lean, for example. But for theorems whose proofs have been elusive for decades or centuries, the proofs very likely include objects in rather advanced mathematics that are not at all easy to encode in Lean. This is one of the current struggles of the FLT formalisation process, for example.

In any case, the problem in this case has nothing to do with formalisation, it's simply a case of a bad submission that for unclear reasons didn't get properly peer reviewed.

Xcelerate•6mo ago
Thanks for sharing! I didn’t think about all of the intermediate steps that might be difficult to encode, but that makes sense.
pron•6mo ago
The problem is this: a proof of P ≠ NP is likely to appeal to some theorem in some area of mathematics. Whether or not that theorem has already been formalised is actually unimportant, because it's already accepted as convincing. But the core of the proof would be some sort of a reduction from P ≠ NP to that theorem, and that reduction is likely to be novel and very arduous to formalise.
andrewla•6mo ago
I think it's reasonably clear at this point that an elementary proof that P != NP is unlikely to exist; in all likelihood such a proof would have to traverse many other fields of mathematics to get back to the main result, and would almost certainly hinge on areas of mathematics that have proven difficult to formalize so far.
shusaku•6mo ago
“Frontiers” journals are crap, so it’s no surprise. But I don’t understand why editors let this happen. It’s similar to academic fraud: if you lie about results nobody cares about, you’ll never get caught; if you lie about big things, suddenly the hammer comes down. And for what benefit? It’s not like they walked away with a payday from splitting the open-access fee. There’s nothing wrong with desk-rejecting an author by saying “if your result is true, publish it somewhere better”.
flobosg•6mo ago
Despite the similarity in naming, Frontiers of Computer Science (Front. Comput. Sci.) is not published by Frontiers Media SA, but Springer and Higher Education Press. Note, however, that Frontiers Media does publish Frontiers *in* Computer Science (Front. Comput. Sci. (Lausanne)).
bwfan123•6mo ago
The issue here is that the deputy editor-in-chief of the journal is also an author of the paper. As such, the conflict of interest should have raised the bar for accepting the paper.

Here is a business idea. Start a "Frontier Journal" published electronically as a pdf. Contact professors, grad students, etc for paper submissions. Publish their papers for a fee, say $100. The paper will buff their resumes and pad their egos. Send out proceedings as pdf to participants. Everybody profits including the environment. I have seen this scam play out.

jojomodding•6mo ago
Having such a paper-mill paper on your CV will do the opposite of "buffing" your CV when people from your field look at it. Usually this happens at your PhD defense at the latest.
pron•6mo ago
The paper's authors' reply to the comment is really bad. E.g., it says this:

> With respect to the assumption, we have clearly elaborated on why a mathematical assumption is essential for proving computational hardness results in Appendix A of our paper as follows: In fact, the Turing machine itself is an assumption about machines (i.e., mechanized finite formal systems) that can be realized in the physical universe. In essence, the Turing machine represents a fundamental physical assumption, and Turing’s findings on uncomputability signify the physical limits of mechanically solving all instances of a problem.

But Turing's assumption was necessary precisely because it was not mathematical. He had to cross the gap between physics (where computation takes place) and mathematics (which he wanted to use to discuss computation) with an assumption about physics that allowed him to construct a mathematical definition of computation. The authors of the paper, on the other hand, make an assumption within mathematics that could simply be mathematically wrong. It's like "proving" Goldbach's conjecture by assuming that if an even number were ever not the sum of two primes, then that number must also be divisible by 17, and proceeding from there. Their "proof" is essentially: we assume that if a tractable algorithm (a mathematical construct) for SAT existed, it would be done in a certain way [1], but that way isn't possible, ergo a tractable algorithm for SAT cannot exist.

Physics is based on (mathematically) unprovable assumptions. In mathematics, though, "assumptions" are conditionals. The paper should have said, we make a mathematical conjecture that, if true, would lead to our conclusion. You can't assume a mathematical conjecture and then call the entailment (from that conjecture) of X a proof of X. It's calling a proof of `A ⇒ B` a proof of `B`. I agree that a paper that aims to present a mathematical proof that is based on a redefinition of what a mathematical proof means should have led to the paper's outright rejection.

[1]: The paper says: "We only need to assume that this task is finished by dividing the original problem into subproblems" (... of the same kind)
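The `A ⇒ B` vs `B` distinction can be written down mechanically; a minimal illustrative sketch in Lean (not from the paper):

```lean
-- Modus ponens: to conclude B you need the conditional *and* a proof of A.
example (A B : Prop) (h : A → B) (ha : A) : B := h ha

-- With only h : A → B in scope, the goal B is unprovable in general:
-- assuming a conjecture A and deriving B establishes A → B, not B.
example (A B : Prop) (h : A → B) : A → B := h
```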

pron•6mo ago
P.S.

What's funnier is that their assumption requires an even bigger leap of faith than their conclusion, i.e. more people are inclined to believe that P ≠ NP than that if someday somebody did find a tractable algorithm for SAT, that algorithm would surely work by division into smaller subproblems.

brohee•6mo ago
Wait, the obviously wrong paper was written by GPT-4? https://arxiv.org/pdf/2309.05689 (from the comments on the original blog)