
France's homegrown open source online office suite

https://github.com/suitenumerique
361•nar001•3h ago•177 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
93•bookofjoe•1h ago•79 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
412•theblazehen•2d ago•152 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
77•AlexeyBrin•4h ago•15 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
10•thelok•1h ago•0 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
769•klaussilveira•19h ago•240 comments

First Proof

https://arxiv.org/abs/2602.05192
33•samasblack•1h ago•18 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
49•onurkanbkrc•4h ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
25•vinhnx•2h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1019•xnx•1d ago•580 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
155•alainrk•4h ago•191 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
158•jesperordrup•9h ago•56 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
8•marklit•5d ago•0 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
16•rbanffy•4d ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
10•mellosouls•2h ago•8 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
102•videotopia•4d ago•26 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
7•simonw•1h ago•1 comment

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•41 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
260•isitcontent•19h ago•33 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
34•matt_d•4d ago•9 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
273•dmpetrov•19h ago•145 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
15•sandGorgon•2d ago•3 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
98•tartoran•1h ago•27 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
544•todsacerdoti•1d ago•262 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
415•ostacke•1d ago•108 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
361•vecti•21h ago•161 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•64 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
332•eljojo•22h ago•205 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
456•lstoll•1d ago•298 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
370•aktau•1d ago•194 comments

IBM Delivers New Quantum Package

https://newsroom.ibm.com/2025-11-12-ibm-delivers-new-quantum-processors,-software,-and-algorithm-breakthroughs-on-path-to-advantage-and-fault-tolerance
54•donutloop•2mo ago

Comments

pm90•2mo ago
I've been bitten by the mass-marketing nonsense of "Watson", but IBM Research does some pretty good work, and their progress on quantum computing seems to be "real"; certainly more reliable than Microsoft's (shocked!).
jimmar•2mo ago
> IBM anticipates that the first cases of verified quantum advantage will be confirmed by the wider community by the end of 2026.

In 2019, Google claimed quantum supremacy [1]. I'm truly confused about what quantum computing can do today, or what it's likely to be able to do in the next decade.

[1] https://www.nasa.gov/technology/computing/google-and-nasa-ac...

StableAlkyne•2mo ago
There's legitimately interesting research in using it to accelerate certain calculations. For example, you usually see a few talks at chemistry conferences on how it's gotten marginally faster at (very basic) electronic structure calculations. Also some neat stuff in the optimization space. The kind of stuff you keep your eye on, hoping it's useful in 10 years.

The closest comparison is AI, except even AI has found some practical applications. Unlike AI, there isn't much practical use for quantum computers right now beyond bumping up your h-index.

Well, maybe there is one. As a joke with some friends after a particularly bad string of natural 1's in D&D, I used IBM's free tier (IIRC it's 10 minutes per month) and wrote a dice roller to achieve maximum randomness.
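The quantum part of such a dice roller is just a uniform bit source (one Hadamard per qubit, then measure); the only non-trivial logic is mapping uniform bits onto a die fairly. As a sketch (not the commenter's actual code), here is that logic in plain Python, with a classical PRNG standing in for the measured qubits:

```python
import random

def roll(sides: int, bit_source=lambda: random.getrandbits(1)) -> int:
    """Roll a fair `sides`-sided die from a stream of uniform bits.

    A quantum version would obtain its bits by putting n qubits into
    uniform superposition (one Hadamard gate each) and measuring them;
    here a classical PRNG stands in for the hardware.
    """
    n_bits = (sides - 1).bit_length()      # bits needed to cover 0..sides-1
    while True:
        value = 0
        for _ in range(n_bits):
            value = (value << 1) | bit_source()
        if value < sides:                  # rejection sampling keeps the die fair
            return value + 1               # dice are 1-based

rolls = [roll(20) for _ in range(1000)]
```

The rejection step matters: for a d20 you need 5 bits (32 outcomes), and simply taking the value modulo 20 would bias the low faces, so out-of-range samples are discarded and redrawn.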

NickC25•2mo ago
That was my understanding too - in the fields of chemistry, materials science, pharmaceutical development, etc., quantum tech is somewhat promising and might become viable in those specific niche fields within the decade.
Y_Y•2mo ago
The trouble with quantum supremacy results is they disappear as soon as you observe them (carefully).

Sorry for that, but seriously, I'd treat this kind of claim like any other putative breakthrough (room-temperature superconductors spring to mind), until it's independently verified it's worthless. The punishment for crying wolf is minimal and by the time you're shown to be bullshitting the headlines have moved on.

The other method, of course, is to just obsessively check Scott Aaronson's blog.

mapmeld•2mo ago
IBM countered that the 2019 task could have been handled by a classical supercomputer [1].

The main issue is that the algorithms where today's early quantum computers have an advantage were specifically designed as demonstration problems. All of the tasks that people previously wanted a quantum computer for are still impractical on today's hardware.

[1] https://www.quantamagazine.org/google-and-ibm-clash-over-qua...

hattmall•2mo ago
A decade from now, quantum computing will be in the same place it was a decade ago: on the cusp of proving a quantum advantage on tailor-made problems relative to commonly available supercomputers. Classical compute will advance over that period to keep quantum computers perpetually on the cusp.

The non-compute engineering breakthroughs needed for quantum computing to become advantageous in a genuinely revolutionary way are themselves so revolutionary that the advances in quantum computing would be vastly overshadowed. Again, those breakthroughs would so greatly enhance classical compute, in both processing power and cost reduction, that it still probably wouldn't be economically viable to produce general-purpose quantum computers.

knowitnone3•2mo ago
"Qiskit capabilities show 24 percent increase in accuracy" what was it before? What good is a computer that is not 100% accurate? Do I have to run a function 1000x to get some average 99% chance the output is correct?
mushufasa•2mo ago
One of my colleagues read a paper about quantum computing techniques to solve complex optimization problems (the domain of complex mixed integer solvers) and tried it out for a financial portfolio optimization, replicating the examples provided by one of the quantum computing companies during a trial period.

The computer *did not* produce the same results each time, and often the results were wrong. The service provider's support staff didn't help -- their response was effectively "oh shucks."

We discontinued considering quantum computing after that. Not suitable for our use-case.

Maybe quantum computing would be applicable if you were trying to crack encryption, wherein getting the right result once is helpful regardless of how many wrong answers you get in the process.

a_vanderbilt•2mo ago
Essentially correct. With a quantum computer you do multiple runs and average the result.
jfengel•2mo ago
(Right now "computers that aren't 100% accurate" are all the rage, even without quantum computing. Though a lot of people are wondering if that's any good, too.)

They're especially good for oracle-type problems, where you can verify an answer much faster than you can find one. NP problems are an especially prominent example. If the answer is wrong, you try again.

In theory it might take a very long time to find the answer. But even if you've only got 25% accuracy, the odds of you being wrong 10 times in a row are only 6%. Being wrong 100 times in a row is a number so small it requires scientific notation (10^-13). It's worth it to be able to solve an otherwise exponential problem.
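The arithmetic above checks out; with independent attempts, the chance of failing every one is (1 - p)^n:

```python
# Probability of failing every attempt when each independent attempt
# succeeds with probability p. The 25% figure is from the comment above.
p = 0.25

fail_10 = (1 - p) ** 10    # all 10 attempts wrong
fail_100 = (1 - p) ** 100  # all 100 attempts wrong

# 0.75**10 is about 0.056 (~6%), and 0.75**100 is about 3.2e-13,
# which indeed "requires scientific notation".
```

This is exactly why cheap verification is the key property: each extra run multiplies the failure probability by 0.75, so reliability improves exponentially in the number of retries.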

Quantum computers have error bounds, and you can use that to tune your error rate to being-hit-by-a-cosmic-ray level of acceptability.

It's still far from clear that they can build general-purpose quantum computers big enough to do anything useful. But the built-in error factors are not, in themselves, a bar.

abdullahkhalids•2mo ago
Many classical information processing devices are less than 100% reliable. Wifi (or old school dialup) will drop a non-trivial number of packets. RAM chips have some non-zero amount of unreliability, but in most cases we don't notice [1]. Computer processors in space will similarly fail due to cosmic ray bombardment. In all cases, you mitigate such problems by adding redundancy or error correction.

Quantum computer hardware is similarly very error-prone, and it is unlikely that we will ever build quantum hardware with ignorable levels of error. However, people have developed many techniques, often much more sophisticated than in the classical domain, for handling the fragility of quantum hardware. I am not familiar with the details of recent improvements in Qiskit, but they are referring to improvements in specific "error mitigation" techniques implemented within Qiskit. These techniques will be used in tandem with other methods, like error correction, to create quantum computers that give you answers with close to but less than a 100% chance of success.

As you say, in these cases, you will repeat your simulation a few times and take a majority vote.

[1] https://en.wikipedia.org/wiki/ECC_memory
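The repeat-and-majority-vote idea above can be quantified with a binomial sum, a minimal sketch (the per-run success rate of 90% is an illustrative assumption, not a figure from the thread):

```python
from math import comb

def majority_vote_success(p: float, n: int) -> float:
    """Probability that the majority of n independent runs is correct,
    given each run is independently correct with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# An assumed 90% per-run success rate climbs quickly under repetition:
p = 0.9
probs = {n: majority_vote_success(p, n) for n in (1, 5, 11, 21)}
```

With p = 0.9, five runs already push the voted answer above 99%, and 21 runs push it well past 99.9%; this is the same redundancy logic as ECC memory, just applied across repeated executions instead of extra bits.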

boilerupnc•2mo ago
Related Qiskit Tutorial Video[0] "This tutorial covers advanced techniques for implementing the Quantum Approximate Optimization Algorithm (QAOA) at the utility scale using Qiskit. In this video, we walk through how to build, optimize, and run QAOA for real world optimization problems on real IBM Quantum hardware. This series is designed for quantum computing practitioners who are ready to move beyond basic examples and start running large scale, hardware aware algorithms. We explore how to transition from theory to practical execution, covering algorithm development, circuit optimization, hybrid workflows, and best practices for hardware performance. Whether you are expanding your QAOA skills or preparing to run your own research experiments, this tutorial will help you strengthen your understanding of utility scale quantum computing with Qiskit."

[0] https://www.youtube.com/watch?v=rBfK-l-qSNk

mushufasa•2mo ago
I happen to know IBM made some great hires -- one of my classmates, who was excellent in the field and had impressive quantum computing Nature publications before graduation, worked at IBM for the past several years.

Though it looks like he recently switched to working at Google AI...

https://scholar.google.com/citations?user=NaxMJzQAAAAJ&hl=en

IsTom•2mo ago
Sooo... are we factoring 21 without shortcuts yet?
piskov•2mo ago
How come IBM is still alive? Is it those sweet, sweet legacy COBOL/mainframe systems?

I wonder what would happen to them if Codex or whatever helps migrate that to C#.

i.e., how long until an exodus to AWS/Azure follows?

dudus•2mo ago
Outsourcing of software dev to India and support to Latin America, paying pennies and charging high fees. They get contracts with all sorts of big companies, like telecoms and manufacturers.
vrighter•2mo ago
Because in most cases, it's not about the quality of the product. I've had cases where using a (free, open source) reverse proxy to implement SSO and TLS termination would have saved a 5-digit figure (on the side closer to 6 digits) yearly versus upgrading the licensing of a product we used. That was rejected because then we wouldn't have anyone to point the finger at if something went wrong with the product. It's about the "support contracts", not about the products themselves.

Which is in itself a fucking joke, because now everything is outsourced to some clueless person in a call center halfway around the world, or you get to chat with an LLM. Either way, it has been ages since a "support contract" resolved a problem that wasn't ultimately solved by ourselves, not them.

pjmlp•2mo ago
By the way, it is IBM money that has kept many Linux projects going for the last 25 years.
horns4lyfe•2mo ago
IBM is just an Indian labor-arbitrage company at this point; why anyone believes they're capable of this type of advancement is beyond me.
pjmlp•2mo ago
Anyone who gets use of their money via Red Hat-sponsored projects: the Linux kernel, GNOME, GCC, OpenJDK, Quarkus, and the VS Code plugins for Java, for example.