
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
553•klaussilveira•10h ago•157 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
876•xnx•15h ago•532 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
79•matheusalmeida•1d ago•18 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
8•helloplanets•4d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
13•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
191•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
190•dmpetrov•10h ago•84 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
303•vecti•12h ago•133 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
347•aktau•16h ago•169 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
347•ostacke•16h ago•90 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
75•quibono•4d ago•16 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
444•todsacerdoti•18h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
242•eljojo•13h ago•148 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
46•kmm•4d ago•3 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
17•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
379•lstoll•16h ago•258 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
225•i5heu•13h ago•171 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
103•SerCe•6h ago•84 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•85 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
131•vmatsiiako•15h ago•56 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
41•gfortaine•8h ago•11 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
63•phreda4•9h ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
20•gmays•5h ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
262•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1035•cdrnsf•19h ago•428 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
6•neogoose•2h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
56•rescrv•18h ago•19 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
85•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
20•denysonique•6h ago•3 comments

An 11-qubit atom processor in silicon with all fidelities from 99.10% to 99.99%

https://www.nature.com/articles/s41586-025-09827-w
86•giuliomagnifico•1mo ago

Comments

giuliomagnifico•1mo ago
Source and “readable” article: https://thequantuminsider.com/2025/12/17/sqc-study-shows-sil...
refulgentis•1mo ago
This is a press release meant to accompany the scientific work shown in the actual source link. I don't mean to be argumentative; I just would have taken back the time I spent reading it after reading the Nature version. It's just "go read Nature" + 3 bullet points + anodyne CXO quotes.
dvh•1mo ago
Can it run Shor's?
Ellipsis753•1mo ago
It should be able to factor 15.
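For scale, factoring 15 is small enough to run the whole of Shor's reduction classically; the sketch below brute-forces the period-finding step that a quantum computer would accelerate (a toy illustration, not a quantum run, with N and a chosen by me):

```python
from math import gcd

def shor_classical_postprocess(N=15, a=7):
    """Classically simulate the order-finding core of Shor's algorithm
    for a tiny N. Only the search for the period r is quantum-sped-up
    in the real algorithm; the rest is classical post-processing."""
    # Brute-force the multiplicative order r of a mod N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    # For a good choice of a, r is even and a^(r/2) is a nontrivial
    # square root of 1 mod N, so the gcds below yield the factors.
    assert r % 2 == 0
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    return p, q

print(shor_classical_postprocess())  # → (3, 5)
```

With a = 7 the period is r = 4, and gcd(7² ± 1, 15) recovers 3 and 5.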
YesThatTom2•1mo ago
So can a 10 year old. The breakthrough I'm waiting for is factoring something I can't do in my head.
thrance•1mo ago
And so can a dog: https://eprint.iacr.org/2025/1237.pdf
bgbntty2•1mo ago
Apart from being a fun read, I learned that I should be skeptical of papers claiming to have factored certain numbers. Thanks.
aipatselarom•1mo ago
How much money or time do they owe you, though?
iwontberude•1mo ago
But it can’t because the error rate is still too high even for the most trivial examples
vtomole•1mo ago
No, and Shor's is not a good benchmark for these early quantum computers: https://algassert.com/post/2500
sestep•1mo ago
That's a 404; here's a working link: https://algassert.com/post/2500
vtomole•1mo ago
Oops, updated. Thanks!
rowanG077•1mo ago
I'm not sure you can really call it "early days" anymore. The first quantum computer was in 1998. That's 27 years ago.
vtomole•1mo ago
"Early days" means that the 1998 computer didn't have qubits below the error-correction threshold. Now we have hundreds of qubits below threshold. We'll need millions of qubits like these for quantum computing to be useful. If that takes decades, then relatively speaking this is still the early days.

It's not only early days in hardware, it's early days in practical applications as well: https://arxiv.org/abs/2511.09124

rowanG077•1mo ago
I admit it's early days in practical application. But in hardware definitely not.
vtomole•1mo ago
Depends on what we mean by "early days on hardware".

If we mean "we've been working on this for almost 3 decades. That's a very long time to be working on something!", I agree.

If we mean "We just now only have a few logical qubits that outperform their physical counterparts and we'll need thousands of these logical qubits to run anything useful" then we are still in the early days.

sgt101•1mo ago
can you give a bit more information on 100's of qubits below threshold? I wasn't aware of 100's...
vtomole•1mo ago
https://www.nature.com/articles/s41586-025-09848-5 performs CZ gates on up to 256 qubits with fidelities of 99.5%, which is good enough to run surface codes below threshold.
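The "below threshold" claim can be sanity-checked with the standard back-of-the-envelope surface-code scaling, p_L ≈ A·(p/p_th)^((d+1)/2): once the physical error rate p is under the threshold p_th, increasing the code distance d suppresses the logical error rate exponentially. The constants A and p_th below are assumed illustrative values, not numbers from the paper:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Rough textbook scaling for a distance-d surface code:
    logical error rate falls as (p / p_th)^((d + 1) // 2).
    p_th and A are illustrative assumptions, not measured values."""
    return A * (p / p_th) ** ((d + 1) // 2)

# 99.5% CZ fidelity from the quoted result => 0.5% physical error rate.
p = 1 - 0.995
for d in (3, 5, 7):
    print(f"d={d}: p_L ~ {logical_error_rate(p, d):.4g}")
```

Since p/p_th is below 1 here, each step up in distance multiplies the logical error rate down, which is what "good enough to run surface codes below threshold" buys you.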
avadodin•1mo ago
Maybe the real quantum computing was the friends we made along the way
trebligdivad•1mo ago
The engineering at those scales is pretty magical isn't it! Getting a whole bunch of individual atoms exactly where they want them. I wonder what the success rate is - i.e. how many do they build to get one working.
krastanov•1mo ago
Usually they randomly shoot atoms at the substrate and then just search for a spot (among thousands) where it randomly has the configuration they want. Still pretty amazing.
trebligdivad•1mo ago
Can they do that here? They've got quite a few sets of 4/5 atoms which they've interconnected, so that's a lot to get by shotgunning it. I'd have assumed they were using something like an STM to nudge the atoms around.
wrs•1mo ago
The “precision manufacturing” reference in the paper is to this 2012 paper about an STM placement technique. [0]

[0] https://www.nature.com/articles/nnano.2012.21

8note•1mo ago
Hmm, I remember my electron microscopy prof being very excited about his ability to manipulate single atoms exactly where he wanted them ~10 years ago.

I'd have assumed the holography has gotten more common and able to operate on bigger volumes.

nikanj•1mo ago
This being a research paper, the rate is 1.0. They built one, then tinkered until it worked, then published.
colesantiago•1mo ago
Quantum Computing is a scam.

I have not seen any progress or breakthroughs in the QC field at all that are significant.

If the only goal for QC is to try to run Shor's algorithm or to "try to break the bitcoin blockchain" then it is worse than useless.

vtomole•1mo ago
QC progress happens super-exponentially: https://news.ycombinator.com/item?id=46383233
colesantiago•1mo ago
Graphs aren't telling me anything.

What are the real world use cases now, today? The only thing I see in the QC space, are QC stocks and funding paying for the employment of scientific experimentation, which isn't a real world application.

Do I have to wait 15 to 30 years for a series of real, world-changing breakthroughs that I can already get on an NVIDIA GPU card?

That doesn't sound exponential at all; in fact, that sounds very, very bearish.

vtomole•1mo ago
The graphs aren't telling you that QC hardware is improving at a super-exponential pace?

There are no real world use cases today. The hardware is not advanced enough yet, but it's improving exponentially.

iinnPP•1mo ago
I think the point being made is that the graphs don't show real world applications progress. Being 99.9999999% or 0.000001% of the way to a useful application could be argued as no progress given the stated metric. Is there a guarantee that these things can and will work given enough time?
vtomole•1mo ago
> Is there a guarantee that these things can and will work given enough time?

Quantum theory predicts that they will work given enough time. If they don't work, there is something about physics that we are missing.

pohl•1mo ago
Sounds like a pursuit where we win either way
ziofill•1mo ago
Unless the overall cost is too high, but yes it's definitely worth pursuing as far as we currently know.
stocksinsmocks•1mo ago
Publishing findings that amount to an admission that you and others spent a fortune studying a dead end is career suicide and guarantees your excommunication from the realm of study and polite society. If a popular theory is wrong, some unlucky martyr must first introduce incontrovertible proof and then humanity must wait for the entire generation of practitioners whose careers are built on it to die.
vtomole•1mo ago
Quantum theory is so unlikely to be wrong that if large-scale fault-tolerant quantum computers cannot be built, the effort to build them will not have been a dead end, but instead a revolution in physics.
zarzavat•1mo ago
Quantum theory says that quantum computers are mathematically plausible. It doesn't say anything about whether it's possible to construct a quantum computer in the real world of a given configuration. It's entirely possible that there's a physical limit that makes useful quantum computers impossible to construct.
vtomole•1mo ago
Quantum theory says that quantum computers are physically plausible. Quantum theory lies in the realm of physics, not mathematics. As a physical theory, it makes predictions about what is plausible in the real world. One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.

Dylan16807•1mo ago
> The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.

If "this experiment" is trying to build a machine, then failure doesn't give much evidence against the theory. Most machine-building failures are caused by insufficient hardware/engineering.

vtomole•1mo ago
Quantum theory predicts this: https://en.wikipedia.org/wiki/Threshold_theorem. An experiment can show that this prediction is false. This is a scientific problem, not an engineering one. Physical theories have to be verified with experiments. If the results of an experiment don't match what the theory predicts, then you have to do things like re-examine the data, revise the theory, etc.
Dylan16807•1mo ago
But that theorem being true doesn't mean "they will work given enough time". That's my objection. If a setup is physically possible but sufficiently thorny to actually build, there's a good chance it won't be built ever.

In the specific spot I commented, I guess you were just talking about the physics part? But the GP was talking about both physics and physical realization, so I thought you were also talking about the combination too.

Yes we can probably test the quantum theory. But verifying the physics isn't what this comment chain is really about. It's about working machines. With enough reliable qubits to do useful work.

vtomole•1mo ago
You're right. I didn't sufficiently separate experimental physics QC from engineering QC.

On the engineering end, the question of whether a large-scale quantum computer can be built is leaning toward "yes" so far. DARPA QBI https://www.darpa.mil/research/programs/quantum-benchmarking... was created to answer this question, and 11 teams have made it to Stage B. Of course, only people who believe DARPA will trust this evidence, but that's all I have to go on.

On the application front, the jury is still out for applications that are not related to simulation or cryptography: https://arxiv.org/abs/2511.09124

zarzavat•1mo ago
> One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

Quantum theory doesn't predict that it's possible to build a large scale quantum computer. It merely says that a large scale quantum computer is consistent with theory.

Dyson spheres and space elevators are also consistent with quantum theory, but that doesn't mean that it's possible to build one.

Physical theories are subtractive, something that is consistent with the lowest levels of theory can still be ruled out by higher levels.

vtomole•1mo ago
Good point. I didn't sufficiently delineate what counts as a scientific problem and what counts as an engineering problem in QC.

Quantum theory, like all physical theories, makes predictions. In this case, quantum theory predicts that if the physical error rate of qubits is below a threshold, then error correction can be used to increase the quality of a logical qubit to arbitrarily high levels. This prediction can be false. We currently don't know all of the potential noise sources that might prevent us from building a quantum logic gate of similar quality to a classical logic gate.

Building thousands of these logical qubits is an engineering problem, similar to Dyson spheres and space elevators. You're right that building one really good logical qubit at the lower level doesn't mean that we can build thousands of them.

In our case, even the lower levels haven't been validated. This is what I meant when I implied that the project of building a large-scale QC might teach us something new about physics.

kevlened•1mo ago
> The only thing I see in the QC space, are QC stocks and funding paying for the employment of scientific experimentation

Then invest accordingly, and later reinvest your winnings in a different direction.

throwaway_7274•1mo ago
It’s not, but I can understand how it might look that way to a tech industry professional used to dealing with scams (indeed, there are lots of scam-adjacent startups with quantum-flavored branding). Real science and engineering are just very difficult and take a long time. You can go to the arXiv, read the papers, and see the progress and breakthroughs that are made every year. But scientists are relatively honest, so even their breakthroughs are incremental.
throwaway_7274•1mo ago
Maybe I should clarify that this isn't meant in a combative way, although it is in defense of scientists, who shouldn't be liable for other people's marketing.

Here's what's going on here: there's a way that people talk past each other, because they mean different things by the same words, because they ultimately have different cultures and values.

There's one kind of person (let's call them "technologists," but I'm sure there's a better word) who feels deeply and intuitively that the point of a technology is to Create Shareholder Value. There's another kind (let's call them "scientists") who feels deeply and intuitively that the point of a technology is to Evince That We Have Known The Mind Of God. I think that these two kinds of people have a hard time understanding one another. Sometimes they don't realize, as strange as it sounds, that the other exists.

There are many scientists who have been working on problems falling loosely under the umbrella of "quantum computing" for a few decades now. Most of them are not literally Building A Quantum Computer, or even trying to. Not exactly. For this reason it might be better to call the field "things you can do with coherent control of isolated quantum systems" than "quantum computing." There are many strange and wonderful things that you can see when you have good coherent control of isolated quantum systems. The scientists are largely interested in seeing those things, in order to Evince That We Have Known The Mind Of God. One sort of strange and wonderful thing, way down the line, is maybe factoring big numbers? The scientists honestly call that a "goal," because it would be strange and wonderful indeed. But it's not really the goal. The scientists don't really care about it for its own sake, and certainly not for the sake of Creating Shareholder Value. It's just one thing that would Evince That We Have Known The Mind Of God.

Incidentally, over those last couple of decades, we've gotten way better at coherent control of isolated quantum systems, and have, in many ways, succeeded at Evincing That We Have Known The Mind Of God again and again. We have made, and continue to make, amazing progress. One day we probably will factor large numbers. But that's not really the goal for the scientists.

On the other hand, there are "technologists" who hear about the goal of factoring large numbers, take this to be, in some sense, "the point" (that is, a proxy for Creating Shareholder Value), and expect it to happen in short order. They raise lots of money and promise a payout. They might act in very "commercial" ways, telling people what things are going to happen when, using an idiosyncratic, personal definition of truth. This is understood and expected in commercial situations. They and their creditors may be disappointed.

The trouble is that it's hard for people on the outside to tell the difference between the scientists and the technologists! This makes things confusing. On some level, this is a failure of science communication: laypeople hear about breakthroughs (from scientists), then don't see the promises of technologists immediately fulfilled, they get confused, and they start to think the scientists are lying. But they're not! They're different people.

Another thing that laypeople don't really know is that there are commercially-useful and near-commercially-useful technologies using coherent control of isolated quantum systems. They've come out of the same research program, but aren't strictly "quantum computing." I don't know why it's not more widely known that quantum sensors made out of qubits (usually a different kind of qubit than the kind used for computing applications!) are on the market today, and beat other sensors along a variety of axes.

This might sound like goalpost-moving, but I promise you it's not. If it sounds like goalpost-moving, it's because there are two different relevant groups of people you hadn't previously resolved!

throwaway_7274•1mo ago
Here's an analogous situation that might clarify the dynamic somewhat:

1. Sam Altman: [tells a tall tale to raise 100 quintillion dollars]

2. Outside observer: "hey, these so-called AI researchers have been pulling the wool over our eyes! They've promised AGI for decades. Where's my robot maid?"

3. Researcher who's been making steady progress in a niche subfield of optimization algorithms at Nebraska State University for the last 20 years: "huh?"

8note•1mo ago
This does not explain something like the Manhattan Project.

It's not necessarily time that real science and engineering take, but resources.

There's lots of fast progress happening in areas that get a lot of resources invested in them, and much slower progress in areas that don't have financial champions. Moving fast doesn't necessitate that something is a scam.

throwaway_7274•1mo ago
Sorry, I'm not sure I follow what the disagreement is? I don't claim that moving fast necessitates that something is a scam.

In any case (and I don't think this bears on your point, it's just something I'd like to add), building a quantum computer is very unlike building a nuclear fission device. Echoing my other comments here, it's almost misleading to call it "building a quantum computer," as that puts people in mind of 'unlocking' some single discrete technology in a strategy game tech tree. It's not that at all; it's a huge umbrella of (in many cases) extremely sophisticated technologies. The Manhattan project, as complex and astonishing a feat as it was, was a little closer to the strategy-game vision of research in that way. There's a reason it was possible in 3-4 years in the 1940s!

vtomole•1mo ago
Silicon is not one of the leading modalities for quantum computers, but it has progressed a lot in the past ~2-3 years. Here are a few key advancements that have happened as of late:

- Intel can now do 2D layouts, which means a surface code can be run on these devices: https://arxiv.org/abs/2412.14918

- HRL can now do 2D as well: https://arxiv.org/abs/2502.08861

- They are solving the wiring problem: https://www.nature.com/articles/s41565-023-01491-3

- Their interconnects are high fidelity: https://www.nature.com/articles/s41586-025-09827-w

iwontberude•1mo ago
Ahh yes another quantum processor that creates noise.
vtomole•1mo ago
This processor is state-of-the-art for silicon quantum computing. It's where modalities like superconducting were 15 years ago, and superconducting does not create noise these days https://www.nature.com/articles/s41586-024-08449-y
iwontberude•1mo ago
Gate fidelity significantly less than 100% is always noisy, regardless of the qubit itself.
vtomole•1mo ago
Sure, I'm not disagreeing that this processor is noisy, just providing enough context to say that it's fine. Historically, these devices improve enough to get under threshold, at which point it doesn't matter that they're noisy, because error-correction protocols can be run on top of them.
nikanj•1mo ago
Quantum computers can almost, but not quite, factor numbers bigger than 10.

Time for git to break all workflows by showing huge alerts if a server is using crypto not proven quantum-proof!

notepad0x90•1mo ago
What's closer to practical application these days, photonic/optical computing or quantum computing (silicon or not)?
twosdai•1mo ago
Photonic computing has a lot of practical applications for signal transfer already.

Basically, any time we send a signal across a large fiber-optic cable, we need to convert the signal from light back to electricity, and that requires some level of photonic computing. It's used at scale today. https://www.ebsco.com/research-starters/science/photonics

However, I suspect you mean photonic computing where an on-chip device uses photons instead of electrons to compute. That, as far as I know, is still in the research phase.

notepad0x90•1mo ago
Thank you, I didn't know that. And you were right: I meant photons instead of electrons being used at the processor logic-gate level.