And while searching for this silly joke, I'm now baffled by the fact that it's still alive!
I expect this won't be the last time we hear about quantum research that was foundational to a lot of work turning out to have been manipulated, or designed poorly and never verified by other research labs.
Your words sound like what people said in the 40s and 50s about computers.
I'm not the OP, but when you're of a certain age, you don't need citations for that. Memory serves. And my family was saying those sorts of things and teasing me about being into computers as late as the 1970s.
By your own criteria, a citation better than "me" is needed.
Quite the opposite, in fact. It was pointing out that some supposed scams do turn out to be useful.
The Navy, Air Force, government, private institutions, etc. didn't dump billions of funding into computers because they thought they were overrated.
They've been plugging along at quantum computers for decades now and have not produced a single useful machine (although a lot of the math and science behind it has been useful for theoretical physics).
Ballistics tables, decryption of enemy messages, and more. From the moment they were turned on, early programmable general-purpose electronic computers could solve problems in minutes that would take human computers months or years. In the 40s, ENIAC proved the feasibility of thermonuclear weaponry.
By 1957 the promise and peril of computing had entered popular culture with the Spencer Tracy and Katharine Hepburn film "Desk Set," in which a computer is installed in a library and runs amok, firing everybody, all while romantic shenanigans occur. The film was sponsored by IBM and features one of the first instances of product placement in film.
People knew "electronic brains" were the future the second they started instantly spitting out solutions to practically unsolvable problems -- they just didn't (during your timeframe) predict the invention and adoption of the transistor and its miniaturization, which made computers ubiquitous household objects.
Even the quote about the supposed limited market for computers, trotted out from time to time to demonstrate the hesitance of industry and academia to adopt them, is wrong.
In 1953, when Thomas Watson supposedly said that "there's only a market for five computers," what he actually said (paraphrased) was: "When we developed the IBM 701 we created a customer list of 20 organizations that might want it, and because it is so expensive we expected to sign only five deals, but we ended up signing 18."
Militaries, universities, and industry all wanted every programmable general-purpose electronic computer they could afford the second they became available, because they all knew these machines could solve problems.
Included for comparison is a list of problems that quantum computing has solved:
https://www.reddit.com/r/QuantumComputing/comments/1535lii/w...
It's a shame. I was really looking forward to finding out what the prime factors of 34 are.
34 requires 6 bits, though
"As pointed out in [57], there has never been a genuine implementation of Shor’s algorithm. The only numbers ever to have been factored by that type of algorithm are 15 and 21, and those factorizations used a simplified version of Shor’s algorithm that requires one to know the factorization in advance..."
If you have a clue what these factors are, you can build an implementation of Shor's algorithm for them, I guess.
[1] https://fixupx.com/CraigGidney/status/1907199729362186309
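For the curious, the part of Shor's algorithm anyone can run is the classical number-theoretic reduction; the entire quantum speedup lives in the order-finding step, which this toy sketch (my own illustration, in Python) just brute-forces:

    from math import gcd

    def order(a, n):
        # Multiplicative order of a mod n, by brute force.
        # This is the step Shor's algorithm speeds up on quantum
        # hardware; done classically it scales exponentially.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_part(n, a):
        g = gcd(a, n)
        if g != 1:
            return g, n // g              # lucky guess, no order-finding needed
        r = order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None                   # unlucky base, retry with another a
        f = gcd(pow(a, r // 2) - 1, n)
        return f, n // f                  # guaranteed nontrivial factors

    print(shor_classical_part(15, 7))     # (3, 5)
    print(shor_classical_part(21, 2))     # (7, 3)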
Meanwhile, a networking company wants to "network" these chips - what does that even mean? And a GPU company produces a library for computing with quantum.
Smoke-and-mirrors can carry on for a long time, and fool the best of them. Isaac Newton was in on the alchemist bandwagon.
I'm maybe a little jaded having worked on whole products that had no market success, but were in fact just so that the company had something new to talk about.
1. 100% secure communication channels (even better, we can detect any attempt at eavesdropping, and whatever information is captured will be useless to the eavesdropper)
2. Building larger quantum computers. A high fidelity quantum network would allow you to compute simultaneously with multiple quantum chips by interfacing them.
The thing that makes quantum networking different from regular networking is that you have to be very careful to not disturb the state of the photons you are sending down the fiber optics.
I'm currently doing my PhD building quantum networking devices, so I'm a bit biased, but I think it's pretty cool :).
Now, does it matter? I'm not sure. Reason 1 isn't really that useful because existing encryption is very secure. However, if quantum computers start to scale up and some encryption methods get obsoleted, this could be nice. Also, having encryption that is provably secure would be nice regardless.
Reason 2, at the moment, seems like the only path to building large-scale quantum computing. Think a datacenter with many networked quantum chips.
1. What is it about quantum computers that can guarantee 100% secure communication channels?
2. If the communications are 100% secure, why are we worried about eavesdropping?
3. If it can detect eavesdropping, why do we need to concern ourselves with the information they might see/hear? Just respond to the detection.
4. What is it about quantum computing that would make an eavesdropper's overheard information useless to them, without also obviating said information to the intended recipients?
This is where the language used to discuss this topic turns into word salad for me. None of the things you said necessarily follow from the things said before them; they're just presented as accepted fact.
This seems like a decent overview if you want to learn more: https://www.chalmers.se/en/centres/wacqt/discover-quantum-te....
2. Because they're not 100% secure. Only the key exchange step with an authenticated endpoint is 100% secure.
3. Eavesdropping acts like a denial of service and breaks all communications on the channel.
4. It makes the information useless to everyone, both the eavesdropper and the recipients. Attempting to eavesdrop on a QKD channel randomizes the transmitted data. It's a DOS attack. The easier DOS attack is to break the fiber-optic cable transmitting the light pulses, since every endpoint needs a dedicated fiber to connect to every other endpoint.
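For anyone wondering how "attempting to eavesdrop randomizes the data" cashes out, here's a toy classical simulation of the BB84 intuition (my own sketch, not any real QKD stack): an intercept-resend attacker measuring in random bases corrupts about 25% of the sifted bits, which the endpoints catch by comparing a random sample.

    import random

    def bb84_error_rate(n=20000, eve=False):
        errors = kept = 0
        for _ in range(n):
            bit = random.randint(0, 1)
            basis_a = random.randint(0, 1)            # Alice picks a random basis
            photon_bit, photon_basis = bit, basis_a
            if eve:
                basis_e = random.randint(0, 1)        # intercept-resend attack
                if basis_e != photon_basis:
                    photon_bit = random.randint(0, 1) # wrong basis -> random outcome
                photon_basis = basis_e                # Eve resends in her own basis
            basis_b = random.randint(0, 1)            # Bob picks a random basis
            measured = photon_bit if basis_b == photon_basis else random.randint(0, 1)
            if basis_b == basis_a:                    # sifting: keep matching bases
                kept += 1
                errors += (measured != bit)
        return errors / kept

    print(bb84_error_rate(eve=False))  # ~0.00: clean channel
    print(bb84_error_rate(eve=True))   # ~0.25: eavesdropper exposed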
Still trying to figure out what is going on. Are they pre-positioning themselves for upcoming breakthroughs, so that until then it will be like the early days of AI, where many claimed to have it but were actually just pretending? Additionally, they likely want to tap into the money flow.
The most accurate and experimentally tested theory of reality is "smoke and mirrors".
There are so many other areas to say that about, even in physics. But this?...
Scaling is itself the open question. Gravitational effects start creeping in when you scale up sensitive entangled systems and we don't have a good understanding of how gravity interacts with entanglement. Entangled systems above a certain size may just be impossible.
There are maybe other reasons to invest, but this caused me to sell my shares.
This research wasn't foundational to a lot of work. Most of the important/foundational work in quantum (whether computing or in general, I'm not sure which one you meant) is verified. How could you possibly base your experimental work on someone else's work if you can't replicate it?
Only perception matters?
Guy Debord wrote a book about what he called "The Society of the Spectacle," wherein he argues that capitalism, mostly by virtue of transforming objects into cash at the point of exchange (that is, a person can take the money and run), tends to cause all things to become evacuated, reduced as much as possible to their image rather than their substance.
I believe even G.K. Chesterton understood this when he said that the purpose of a shovel is to dig, not to be sold, but that capitalism tends to see everything as something to be sold primarily, and only perhaps secondarily as something to be used.
There is some truth in all this, I think, though obviously the actual physical requirements of living and doing things place some restrictions on how far we can transform things into their images.
Averaging the positive and negative Vbias data and many other manipulations are hard to justify; this reeks of "desperate PhD student needing to publish at all costs." Yet at the same time I wouldn't fully disqualify the findings, but I would weaken the conclusion to "there might be something here."
All in all, it's in Microsoft's interest that the data is not cooked. They can only ride on vaporware for so long. Sooner or later the truth will come out; and if Microsoft is burning a lot of cash to lie to everyone, the only loser will be Microsoft.
Might as well draw a straight line through a cloud of data points that look like a dog
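(The jab lands because least-squares doesn't care: fit a line to pure noise and you still get a confident-looking slope, as in this toy sketch of my own.)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 50)
    y = rng.normal(0, 1, 50)              # pure noise, no relationship to x
    slope, intercept = np.polyfit(x, y, 1)
    print(slope, intercept)               # a tidy line through the dog anyway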
Having 5 working devices out of 21 is normal. The problem is that the other 16 weren't mentioned.
Not really - that cash could have been allocated to more productive work and more honest people.
[0]: https://backreaction.blogspot.com/2025/02/microsoft-exaggera...
And a little controversy is not automatically a problem, or a reason to discount/ignore someone anyway.
At least with fusion we've gotten some cool lasers, magnets, and test and measurement gear.
I think certain VCs are a little too optimistic about quantum computing timelines, but that doesn't mean it's not steadily progressing. I saw a comment pointing to a prime factorization from 2001 and claiming that people haven't been making progress on pure quantum computing since then?
It's really hard. It's still firmly academic, with the peculiar factor that much of it is industry backed. Google quantum was a UCSB research lab turned into a Google branch, while still being powered by grad students. You can begin to see how there's going to be some culture clash and unfortunate pressure to make claims and take research paths atypical of academia (not excusing any fraud, edit: also to be very clear, not accusing Google quantum of anything). It's a hard problem in a funky environment.
1) It's a really hard problem. Anything truly quantum is hard to deal with, especially if you require long coherence times. Consider the entire field of condensed matter (+ some AMO). Many of the experiments that measure special quantum properties/confirm theories do so in a destructive manner - I'm not talking only about the quantum measurement problem, I'm talking about the probes themselves physically altering the system such that you can only get one or maybe a few good measurements before the sample is useless. In quantum computing, things need to be cold and isolated, yet still read/write accessible over many, many cycles in order to be useful.
2) Given the difficulty, there have been many proposals for how to meet the "practical quantum computer" requirement. These range from giving up on a true general-purpose quantum computer (quantum annealers) to NV centers, neutral-atom/ion lattices, SQUID/Josephson-junction based, photonic, hybrid systems with mechanical resonators, and yeah, topological/anyon shit.
3) It's hard to predict what will actually work, so every approach is a gamble, and different groups take different gambles. Some take bigger gambles than others. I'd say topological quantum was a pretty damn big gamble given how new the theory was.
4) Then you need to gradually build up the actual system + infrastructure, validating each subsystem, then subsystem interactions, and finally full systems. Think system preparation, system readout, system manipulation, isolation, gate design... Each piece of this could be multiple physics, ECE/CSE, ME, and CS PhDs' + postdocs' worth of work. This is deep expertise and specialization.
5) Then if one approach seems to work, however poorly*, you need to improve it and scale it. Scaling is not guaranteed. This will mean many more PhDs' worth of work trying to improve subsystems.
6) Again, this is really hard. Truly, purely quantum systems are very difficult to work with. Classical computing is built on transistors, which operate just fine at room temperature** (plenty of noise, no need for cold isolation) with macroscopic classical observables/manipulations like current and voltage. Yes, transistors work because of quantum effects, and more recent transistors use quantum effects (tunneling) more directly. But the "atomic" units of memory are still effectively macroscopic. The systems as a whole are very well described classically, with only practical engineering concerns related to putting things too close together, impurities, and heat dissipation. Not to say that any of that is easy at all, but there's no question of principle like "will this even work?"
* With a bunch of people on HN shitting on how poorly it works + a bunch of other people saying it's a full-blown quantum computer + probably higher-ups trying to make you say it is a real quantum computer or something about quantum supremacy.
** Even in this classical regime, think how much effort went into redundancy and encoding/decoding schemes to deal with the very rare bit flips. Now think of what's needed to build a functioning quantum computer at similar scale.
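To make that footnote concrete: the simplest classical scheme is a 3-bit repetition code with majority-vote decoding (sketch below, with toy numbers of my own choosing). Quantum error correction needs vastly more overhead, because qubits can't simply be copied (no-cloning) and errors are continuous rather than clean flips.

    import random

    def encode(bit):
        return [bit] * 3                  # triple-redundancy encoding

    def noisy(bits, p=0.01):
        return [b ^ (random.random() < p) for b in bits]  # independent bit flips

    def decode(bits):
        return int(sum(bits) >= 2)        # majority vote

    trials = 100_000
    fails = sum(decode(noisy(encode(0))) != 0 for _ in range(trials))
    print(fails / trials)  # logical error rate ~3p^2 = 3e-4, vs raw p = 1e-2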
No, I don't work in quantum computing, don't invest in it, have no stake in it.
General computing is great, but we built the Large Hadron Collider to validate a few specific physics theories; couldn't we make do with a single-use quantum computer for important problems? Prove out some physics simulation, or break some military encryption or something?
IIRC some of them have done proof-of-principle solutions for the hydrogen atom ground state, for example. I haven't kept up, but I'm guessing they've solved more complicated systems by now. I don't know if they've gone beyond ground states.
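For flavor, here's the variational idea behind those ground-state demos as a purely classical toy (my own illustration, with a made-up 2x2 Hamiltonian): sweep a parametrized trial state, minimize the energy expectation, and check against exact diagonalization. VQE-style demos run the same loop, but evaluate the expectation value on quantum hardware.

    import numpy as np

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])           # made-up two-level Hamiltonian

    def energy(theta):
        psi = np.array([np.cos(theta), np.sin(theta)])  # normalized trial state
        return psi @ H @ psi              # <psi|H|psi>

    thetas = np.linspace(0, np.pi, 1000)
    best = min(thetas, key=energy)
    print(energy(best))                   # variational estimate
    print(np.linalg.eigvalsh(H)[0])       # exact ground-state energy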
Taking this particular problem as an example... The challenge, in my mind, is that we already have pretty good classical approaches to the problem. Say the limit of current approaches is characterized by something like the number of electrons (I don't know the actual scaling factors), and call that number N_C(lassical). I think the complexity, and thus the advances required, to build a special-purpose quantum ground-state solver that can handle N_Q >> N_C is similar enough to the difficulty of scaling a more general quantum computer to a "problem" size of moderately smaller magnitude that it's probably hard to justify the funding for the special-purpose one over the generic one.
I could be way off, and it's very possible there are new algorithms for specific problems that I'm unaware of. Such algorithms, with an accompanying special-purpose quantum computer, could make its construction investible in the sense that efficient solutions to the problem under consideration are worth enough to offset the cost. Sorry, that was very convoluted phrasing, but I'm on my phone and I gtg.
pc86•4h ago
Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
This chip is an extreme example, but we're talking about potentially millions of dollars of lost productivity: hundreds or even thousands of people spending months or years on something based on a fabrication.
The person or people directly responsible for this should never work again.
jakobgm•4h ago
In case anybody else also isn't familiar with "PI" as an abbreviation in this context:
> In many countries, the term principal investigator (PI) refers to the holder of an independent grant and the lead researcher for the grant project, usually in the sciences, such as a laboratory study or a clinical trial.
Source: https://en.wikipedia.org/wiki/Principal_investigator
NoMoreNicksLeft•4h ago
That might actually be a perverse incentive. If you've already nuked your career with some fraud, you can't make it worse with extra fraud... so why ever stop? People inclined to do this sort of thing, when faced with that deterrent, just double down and commit even more fraud; they figure the best they can hope for is to do it so much and so perfectly that they're never discovered.
The trouble is that the system for science worked well when there were only a tiny number of scientists, but now we're a planet of 8 billion where people tell their children they have to go to college and get a STEM degree. Hell, you can only become a scientist by producing new research, even if there's not much left to research in your field. And the only way to maintain that position as a scientist is to "publish or perish." We have finite avenues of research with an ever-growing population of scientists; bullshit is inevitable.
pc86•3h ago
"Well there's lots of people now" is not really a great justification. You become a low trust society by allowing trust to deteriorate. That happens in part because you choose not to punish people who violate that trust in the first place.
NoMoreNicksLeft•3h ago
I am not wishy-washy on punishment. A part of me that I do not deny nor suppress wants punishment for those who do wrong.
But sometimes punishments are counter-productive. The easiest example is the death penalty for heinous non-murder crimes. This incentivizes the rapist or child molester (or whatever) to kill the victim. You can't execute them twice, after all, so if they're already on the hook for a death-penalty crime, murdering the victim also gets rid of a prime witness who could get them the death penalty by testifying, without increasing the odds of the death penalty.
"Career death penalty" here is like that.
>"Well there's lots of people now" is not really a great justification.
It wasn't meant to be a justification. It was an explanation of the problem, and (in part, at least) an attempt to show that things need to change if we want the fraud to go away.
>You become a low trust society by allowing trust to deteriorate
We've been a low trust society for a long time now. People need to start thinking about how to accomplish the long, slow process of changing a low trust society to a high trust one.
hollerith•3h ago
Although trust has been decreasing, the US remains a high-trust society compared to the global average.
mschuster91•2h ago
The core problem is that most people define their self-worth by their employment, and no matter what, this is all going to crash hard due to automation. The generation currently in power is doing everything they can to deny and downplay what is about to happen, instead of helping our societies prepare.
We're all being thrown into the rat race. We are told, verbally and through personal experience, that there is no alternative but to become the top dog at all costs, because that will be the only chance to survive once automation truly hits home. The result is that those who feel they have failed the rat race and have no hope of catching up withdraw from the "societal contract" and just do whatever they want, at the expense of others if need be.
kevinventullo•1h ago
The fact is, we don’t want these people in academia at all. You want researchers who are naturally inclined not to fabricate data, not people who only play by the rules because they think they’re otherwise going to get caught.
nextos•4h ago
Sadly, the system often rewards fake or, especially, exaggerated/misrepresented data and conclusions. I think a significant proportion of articles exaggerate findings and deliberately cherry-pick data.
It's a market for lemons. Proving misrepresentation is really hard, and the rewards for getting away with it are immense. Publishing an article in Nature, Science, or Cell is a career-defining moment.
pc86•3h ago
But I do wonder: when someone's PhD thesis gets published and it turns out they plagiarized large parts of it, why isn't their degree revoked? When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
nextos•2h ago
> When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
Internal politics. Committees judging potential misconduct are not independent. If you are sufficiently high up the ladder, you can get away with many things. Sweden recently created a Swedish National Board for Assessment of Research Misconduct (Npof) to address this problem. I think this is a step in the right direction.
But, ultimately, I think academic fraud should be judged in court. However, e.g. Leonid Schneider (forbetterscience.com) has been taken to court several times for reporting fraud, including fraud that led to patient death, and some judges didn't seem to care much about data fabrication / misrepresentation.