frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
140•theblazehen•2d ago•41 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
667•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•32 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
222•dmpetrov•14h ago•117 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
26•jesperordrup•4h ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
493•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
43•helloplanets•4d ago•41 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

IBM Delivers New Quantum Package

https://newsroom.ibm.com/2025-11-12-ibm-delivers-new-quantum-processors,-software,-and-algorithm-breakthroughs-on-path-to-advantage-and-fault-tolerance
54•donutloop•2mo ago

Comments

pm90•2mo ago
I've been bitten by the mass-marketing nonsense of "Watson", but IBM Research does some pretty good work, and their progress on quantum computing seems to be "real"; certainly more credible than Microsoft's (shocked!).
jimmar•2mo ago
> IBM anticipates that the first cases of verified quantum advantage will be confirmed by the wider community by the end of 2026.

In 2019, Google claimed quantum supremacy [1]. I'm truly confused about what quantum computing can do today, or what it's likely to be able to do in the next decade.

[1] https://www.nasa.gov/technology/computing/google-and-nasa-ac...

StableAlkyne•2mo ago
There's legitimately interesting research in using it to accelerate certain calculations. For example, usually you see a few talks at chemistry conferences on how it's gotten marginally faster at (very basic) electronic structure calculations. Also some neat stuff in the optimization space. Stuff you keep your eye on hoping it's useful in 10 years.

The closest comparison is AI, except even that has found some practical applications. Unlike AI, there isn't really much practical use for quantum computers right now beyond bumping up your h-index.

Well, maybe there is one. As a joke with some friends after a particularly bad string of natural 1's in D&D, I used IBM's free tier (IIRC it's 10 minutes per month) and wrote a dice roller to achieve maximum randomness.
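The classical half of such a dice roller is just turning uniform random bits into a fair d20. A minimal sketch, assuming the quantum backend only hands back uniform bits (the `random_bits` stub below is a hypothetical stand-in for the hardware call, simulated here with `random`); rejection sampling keeps every face equally likely:

```python
import random

def random_bits(n):
    # Hypothetical stand-in for the quantum backend: n uniform bits.
    # On real hardware this would be n qubits put into superposition
    # (e.g. via Hadamard gates) and then measured.
    return [random.randint(0, 1) for _ in range(n)]

def roll_d20():
    # 5 bits give a value in 0..31; reject 20..31 so each of the
    # 20 faces keeps exactly probability 1/20 (rejection sampling).
    while True:
        bits = random_bits(5)
        value = sum(b << i for i, b in enumerate(bits))
        if value < 20:
            return value + 1  # faces 1..20

print([roll_d20() for _ in range(10)])
```

Without the rejection step, mapping 32 bit-patterns onto 20 faces (say, with a modulo) would bias the low faces, which rather defeats the point of maximum randomness.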

NickC25•2mo ago
That was my understanding too: in chemistry, materials science, pharmaceutical development, etc., quantum tech is somewhat promising and might be viable in those specific niche fields within the decade.
Y_Y•2mo ago
The trouble with quantum supremacy results is they disappear as soon as you observe them (carefully).

Sorry for that, but seriously, I'd treat this kind of claim like any other putative breakthrough (room-temperature superconductors spring to mind): until it's independently verified, it's worthless. The punishment for crying wolf is minimal, and by the time you're shown to be bullshitting, the headlines have moved on.

The other method, of course, is to just obsessively check Scott Aaronson's blog.

mapmeld•2mo ago
IBM challenged that the 2019 case could be handled by a supercomputer [1].

The main issue is that the problems on which today's early quantum computers have an advantage were specifically designed as demonstrations. All of the tasks people actually wanted a quantum computer for are still impractical with today's hardware.

[1] https://www.quantamagazine.org/google-and-ibm-clash-over-qua...

hattmall•2mo ago
A decade from now, quantum computing will be in the same place it was a decade ago: on the cusp of proving a quantum advantage on tailor-made problems against commonly available supercomputers. Classical compute will advance over that same period and keep quantum computers perpetually on the cusp.

The major non-compute engineering breakthroughs needed for quantum computing to be advantageous in a revolutionary way are themselves so revolutionary that the advances in quantum computing would be vastly overshadowed. Again, those breakthroughs would so greatly enhance classical compute, in both performance and cost, that it still probably wouldn't be economically viable to produce general-purpose quantum computers.

knowitnone3•2mo ago
"Qiskit capabilities show 24 percent increase in accuracy": an increase from what baseline? What good is a computer that is not 100% accurate? Do I have to run a function 1,000 times to get, say, a 99% chance that the output is correct?
mushufasa•2mo ago
One of my colleagues read a paper about quantum computing techniques to solve complex optimization problems (the domain of complex mixed integer solvers) and tried it out for a financial portfolio optimization, replicating the examples provided by one of the quantum computing companies during a trial period.

The computer *did not* produce the same results each time, and often the results were wrong. The service provider's support staff didn't help -- their response was effectively "oh shucks."

We stopped considering quantum computing after that; it's not suitable for our use case.

Maybe quantum computing would be applicable if you were trying to crack encryption, wherein getting the right result once is helpful regardless of how many wrong answers you get in the process.

a_vanderbilt•2mo ago
Essentially correct. With a quantum computer you do multiple runs and average the result.
jfengel•2mo ago
(Right now "computers that aren't 100% accurate" are all the rage, even without quantum computing. Though a lot of people are wondering if that's any good, too.)

They're especially good for oracle-type problems, where you can verify an answer much faster than you can find one. NP problems are an especially prominent example. If the answer is wrong, you try again.

In theory it might take a very long time to find the answer. But even if each run is right only 25% of the time, the odds of being wrong 10 times in a row are only about 6%. Being wrong 100 times in a row has a probability so small it needs scientific notation (about 10^-13). That's worth it to be able to solve an otherwise exponential problem.
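That arithmetic checks out; with a hypothetical per-run success rate of 25%, the chance that every run fails is:

```python
def all_fail(p, k):
    # Probability that k independent runs, each succeeding with
    # probability p, ALL fail -- i.e. you still have no verified answer.
    return (1 - p) ** k

p = 0.25  # per-run success rate from the comment (assumed)
print(f"{all_fail(p, 10):.3f}")   # about 0.056, i.e. roughly 6%
print(f"{all_fail(p, 100):.1e}")  # about 3.2e-13
```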

Quantum computers have error bounds, and you can use them to tune your error rate down to a being-hit-by-a-cosmic-ray level of acceptability.

It's still far from clear that they can build general-purpose quantum computers big enough to do anything useful. But the built-in error factors are not, in themselves, a bar.

abdullahkhalids•2mo ago
Many classical information processing devices are less than 100% reliable. Wifi (or old school dialup) will drop a non-trivial number of packets. RAM chips have some non-zero amount of unreliability, but in most cases we don't notice [1]. Computer processors in space will similarly fail due to cosmic ray bombardment. In all cases, you mitigate such problems by adding redundancy or error correction.

Quantum computer hardware is similarly very error-prone, and it is unlikely that we will ever build quantum hardware with ignorable levels of error. However, people have developed many techniques, often much more sophisticated than in the classical domain, for handling the fragility of quantum hardware. I am not familiar with the details of the recent improvements in Qiskit, but they are referring to improvements in specific "error mitigation" techniques implemented within Qiskit. These techniques will be used in tandem with other methods, like error correction, to create quantum computers that give you answers with close to, but less than, a 100% chance of success.

As you say, in these cases, you will repeat your simulation a few times and take a majority vote.

[1] https://en.wikipedia.org/wiki/ECC_memory
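The repeat-and-vote idea above can be made concrete. A toy sketch, assuming independent errors (which real quantum error correction does not get for free) and a hypothetical per-run success rate of 90%: the majority vote over k runs is correct far more often than any single run.

```python
from math import comb

def majority_correct(p, k):
    # Probability that more than half of k independent runs are correct,
    # when each run is independently correct with probability p (k odd).
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

p = 0.9  # hypothetical per-run success rate
for k in (1, 3, 5, 7):
    print(k, round(majority_correct(p, k), 6))
```

With p = 0.9, three runs already push the vote to about 97% and five runs to about 99%; this is the same redundancy principle ECC memory relies on.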

boilerupnc•2mo ago
Related Qiskit Tutorial Video[0] "This tutorial covers advanced techniques for implementing the Quantum Approximate Optimization Algorithm (QAOA) at the utility scale using Qiskit. In this video, we walk through how to build, optimize, and run QAOA for real world optimization problems on real IBM Quantum hardware. This series is designed for quantum computing practitioners who are ready to move beyond basic examples and start running large scale, hardware aware algorithms. We explore how to transition from theory to practical execution, covering algorithm development, circuit optimization, hybrid workflows, and best practices for hardware performance. Whether you are expanding your QAOA skills or preparing to run your own research experiments, this tutorial will help you strengthen your understanding of utility scale quantum computing with Qiskit."

[0] https://www.youtube.com/watch?v=rBfK-l-qSNk

mushufasa•2mo ago
I happen to know IBM made some great hires: one of my classmates, who was excellent in the field and had impressive quantum-computing Nature publications before graduating, worked at IBM for the past several years.

Though it looks like he recently switched to working at Google AI...

https://scholar.google.com/citations?user=NaxMJzQAAAAJ&hl=en

IsTom•2mo ago
Sooo... are we factoring 21 without shortcuts yet?
piskov•2mo ago
How come IBM is still alive? Is it those sweet, sweet legacy COBOL/mainframe systems?

I wonder what would happen to them if Codex or the like helps migrate that code to C#,

and how long until an exodus to AWS/Azure would follow.

dudus•2mo ago
Outsourcing of software development to India and support to Latin America, paying pennies and charging high fees. They get contracts with all sorts of big companies, like telecoms and manufacturers.
vrighter•2mo ago
Because in most cases it's not about the quality of the product. I've had cases where using a (free, open-source) reverse proxy to implement SSO and TLS termination would have saved five-digit figures (on the side closer to six digits) yearly on licensing upgrades for a product we used. That was rejected, because then we wouldn't have anyone to point the finger at if something went wrong with the product. It's about the "support contracts", not about the products themselves.

Which is in itself a fucking joke, because now everything is outsourced to some clueless person in a call center halfway around the world, or you get to chat with an LLM. Either way, it has been ages since a "support contract" actually covered a problem that wasn't ultimately solved by ourselves rather than them.

pjmlp•2mo ago
By the way, it is IBM money that has kept many Linux projects going for the last 25 years.
horns4lyfe•2mo ago
IBM is just an Indian labor-arbitrage company at this point; why anyone believes they're capable of this type of advancement is beyond me.
pjmlp•2mo ago
Anyone getting use of their money via Red Hat-sponsored projects: the Linux kernel, GNOME, GCC, OpenJDK, Quarkus, and the VS Code plugins for Java, for example.