(Most of the demand for random numbers, of course, comes from cryptography, in which case public verifiability of what the random value was is the last thing you want.)
Public randomness does have uses in cryptography; crypto is not only about secret keys.
If I think about it, I can come up with some. But they seem pretty niche relative to secret keys.
I don't know how statistically random it is, but I suspect it's quantum in nature since we're dealing with transistors.
(EDIT: checked with ChatGPT, has a sense of humor: "Be careful not to exceed the maximum reverse voltage ratings, or you’ll get more “magic smoke” than white noise.")
The public verifiability is the real "quantum" advance of this research; probably the title should say that. Of course, it's true that when you don't need public verifiability, your OS's entropy pool + PRNG is good enough for any currently known scenario.
Also, it is possible for any group to agree that they will all sign messages about a given source at a given time and put them on a blockchain. That then becomes proof that the whole group agreed on what was displayed at that time, which is a kind of public verification of what was there.
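A minimal sketch of that idea in Python, using ed25519 signatures from the `cryptography` package (the message fields, source name, and values below are purely illustrative, not any particular beacon's format):

    # Each member signs what they observed from the agreed source at the agreed
    # time; the bundle of signed statements is then published somewhere append-only
    # (e.g. a blockchain). Field names and values here are just for illustration.
    import json
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def attest(private_key, source_id, observed_value):
        """Sign a statement about what this member saw right now."""
        statement = json.dumps(
            {"source": source_id, "value": observed_value, "unix_time": int(time.time())},
            sort_keys=True,
        ).encode()
        return {
            "statement": statement,
            "signature": private_key.sign(statement),
            "public_key": private_key.public_key(),
        }

    # Three members each generate a key pair and sign what they observed.
    members = [Ed25519PrivateKey.generate() for _ in range(3)]
    attestations = [attest(k, "beacon-pulse-12345", "0xdeadbeef") for k in members]

    # Later, anyone can check every signature; verify() raises on any tampering.
    for a in attestations:
        a["public_key"].verify(a["signature"], a["statement"])
    print("all attestations verified")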
"This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date."
Is this not possibly just random-seeming to us, because we do not know or cannot measure all the variables?
> The process starts by generating a pair of entangled photons inside a special nonlinear crystal. The photons travel via optical fiber to separate labs at opposite ends of the hall.
> Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random.
I understand that obviously for our purposes (e.g. for encryption) this is safely random, but from a pure science perspective, have we actually proven that the wave function collapsing during measurement is "truly random"?
How could we possibly assert that we've accounted for all variables that could be affecting this? There could be variables at play that we don't even know exist, when it comes to quantum mechanics, no?
A coin toss is completely deterministic if you can account for wind, air resistance, momentum, starting state, mass, etc. But if you don't know that air resistance or wind exists, you could easily conclude it's random.
I ask this as a layman, and I'm really interested if anyone has insight into this.
I.e. if coin toss results skew towards heads, you can conclude some factor is biasing it that way, therefore if the results are (over the course of many tests) 'even', you can conclude the absence of biasing factors?
Let me take a crack at this. Quantum mechanics works like this: we write down an expression for the energy of a system in terms of position and momentum (the precise nature of what constitutes a momentum is a little abstract, but the physics-101 intuition of "something that characterizes how a position is changing" is fine). From this we develop both a wave function describing the system and a rule for time-evolving it. The wave function encodes everything we could learn about the physical system if we were to make a measurement, and is thus necessarily associated with a probability distribution from which the universe appears to sample when we measure.
It is totally reasonable to ask: maybe that probability distribution just means we don't know everything about the system, and if we had the extra theory and extra information we could predict the outcome of individual measurements, not just their distribution.
A totally reasonable idea. But quantum mechanics has certain features that are surprising if we assume it is true (that there are so-called hidden variables). In quantum mechanical systems (and in reality), once we make a measurement, all subsequent measurements of the system agree with the initial one (this is wave function collapse: before measurement we do not know what the outcome will be, but afterwards the wave function indicates just one state, which subsequent measurements necessarily reproduce). However, measurements are local (they happen at one point in spacetime), yet in quantum mechanics this update of the wave function from the pre- to post-measurement state happens all at once for the entire system, no matter its physical extent.
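A tiny numpy sketch of that last point, i.e. the Born rule for a single qubit (the particular state below is just an example):

    # The wave function (state vector) only fixes a probability distribution over
    # outcomes; a measurement samples from it, and the post-measurement state then
    # gives the same answer to all subsequent measurements ("collapse").
    import numpy as np

    rng = np.random.default_rng()

    # |psi> = (|0> + i|1>) / sqrt(2), an example superposition.
    psi = np.array([1.0, 1.0j]) / np.sqrt(2)

    # Born rule: probability of each outcome is |amplitude|^2.
    probs = np.abs(psi) ** 2              # -> [0.5, 0.5]

    # "Measurement": sample an outcome, then collapse onto it.
    outcome = rng.choice([0, 1], p=probs)
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0              # repeated measurements now all agree

    print(outcome, collapsed)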
In the Bell experiment we contrive to produce a system which is extended in space (two particles separated by a large distance) but for which the results of measurements on the two particles will be correlated. So if Alice measures spin up, then the theory predicts (and we observe) that Bob will measure spin down.
The question is: if Alice measures spin up at 10:00am on Earth and Bob measures his particle at 10:01am Earth time on Pluto, do they still get results that agree, even though the wave function would have to collapse faster than the speed of light for the two measurements to agree (since it takes far longer than a minute for light to travel from Earth to Pluto)?
This turns out to be a measurable fact of reality: Alice and Bob always get concordant measurements no matter when the measurements occur or who goes first (in fact, because of special relativity, there is really no fact of the matter about who measures first in this situation; which measurement happened first depends on the reference frame of whoever is asking).
Ok, so we love special relativity and we want to "fix" this problem. We wish to eliminate the idea that wave function collapse happens faster than the speed of light (indeed, we'd actually just like an account of reality where wave function collapse can be dispensed with entirely, because of the issue above). So we instead imagine that when particle B goes flying off to Pluto and particle A goes flying off to Earth for measurement, they each carry a little bit of hidden information to the effect of "when you are measured, give this result."
That is to say, we want to resolve the measurement problem by eliminating the measurement's causal role and just pre-determining locally which result will occur for each particle.
This would work for a simple classical system like a coin. Imagine I am on Mars: I flip a coin, then neatly cut it in half along its thin edge and mail one side to Earth and the other to Pluto. Whether Bob or Alice opens their envelope first, and in fact no matter when they do, if Alice gets the heads side, Bob will get the tails side.
This simple case fails to capture the quantum mechanical system because Alice and Bob have a choice of not just when to measure, but how (which orientation to use on their detector). Here is the rub: the correlation between Alice's and Bob's measurements depends on the relative orientation of their detectors, and even though each detector individually records a random result, that correlation comes out right even if Alice and Bob just randomly choose orientations for each measurement. That means quantum mechanics describes the system correctly even in situations where the outcomes would have had to be fully determined, for every possible pair of detector settings, at the moment the particles were separated.
Assuming that Alice and Bob are actually free to choose a random measuring orientation, there is no way to pre-decide the results for all pairs of measurement settings without knowing, at the time the particles are created, which way Alice and Bob will orient their detectors. That shows up in the Bell inequality, which shows that certain correlations between Alice's and Bob's detectors are impossible in a purely classical (local hidden-variable) universe.
Note that in any given single experiment, both Alice's and Bob's results are totally random; QM only governs the correlation between the measurements, so neither Alice nor Bob can communicate any information to each other this way.
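A small numerical sketch of what that correlation looks like (standard textbook CHSH settings, assuming the singlet-state prediction E(a, b) = -cos(a - b)):

    # For the spin singlet, quantum mechanics predicts a correlation of
    # E(a, b) = -cos(a - b) between Alice's and Bob's outcomes for detector
    # angles a and b. Any local hidden-variable model must satisfy |S| <= 2.
    import numpy as np

    def E(a, b):
        """Quantum prediction for the outcome correlation at angles a, b."""
        return -np.cos(a - b)

    # Textbook CHSH angle choices (radians).
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # ~2.828 = 2*sqrt(2), above the classical bound of 2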
15% of the time they get combined result A, and 15% of the time they get combined result B. Logically we would expect a result of A or B 30% of the time, and combined result C 70% of the time (there are only three combined output possibilities: A, B, C).
But when we set the detectors to rule out result C (so the result must be either A or B), we get A or B 50% of the time.
So it seems like the particle is able to change its result based on how you measure it. A local hidden variable would almost certainly be static regardless of how you determine it.
This is simplified and dumbified because I am no expert, but that is the gist of it.
Another comment basically answered this, but you are touching on hidden-variable theories in QM: the idea that there could be missing variables we can't currently measure that would explain the seeming randomness of QM. Various tests have shown, and most physicists agree, that (at least local) hidden variables are very unlikely at this point.
If the PRNG were weak, then the quantum circuit being simulated could be a series of operations that solves for the seed being used by the simulator, at which point collapses would be predictable. It would also become possible to do limited FTL communication. As an analogy, some people built a redstone computer in Minecraft that detonated TNT repeatedly, recorded the random directions objects were thrown, and solved for the PRNG's seed [1]. By solving at two different times, you can determine how many calls to the PRNG occurred in between, and so get a global count of various actions (like breaking a block) regardless of where they happened in the world.
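A toy version of that seed-recovery idea in Python, using a deliberately weak 16-bit LCG rather than Minecraft's actual generator:

    # Given a few observed outputs of a weak PRNG, brute-force its internal state,
    # then predict everything it will produce next. A 16-bit LCG keeps the search
    # instant; the same idea scales up with more effort for stronger-looking PRNGs.
    M, A, C = 2**16, 75, 74

    def lcg(state):
        while True:
            state = (A * state + C) % M
            yield state

    def recover_state(observed):
        """Find the seed whose output stream starts with the observed values."""
        for candidate in range(M):
            gen = lcg(candidate)
            if [next(gen) for _ in observed] == observed:
                return candidate
        return None

    # The "world" uses a secret seed; the attacker only sees a few outputs.
    secret = lcg(12345)
    observed = [next(secret) for _ in range(4)]

    gen = lcg(recover_state(observed))
    for _ in observed:                     # fast-forward past what we already saw
        next(gen)
    print(next(gen) == next(secret))       # True: future "randomness" is predictable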
Dupe the drive.
You now have a matching pair of "one-time pads" for what is, I have heard, the hardest form of encryption to decrypt. I would expect there is a business already doing this.
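A minimal sketch of how the duplicated drive would be used as a one-time pad (plain XOR with never-reused pad bytes):

    # Both parties hold identical copies of the same random bytes (the duplicated
    # drive). Encryption and decryption are the same XOR; each byte of the pad must
    # be used at most once, and discarded as it is consumed.
    import os

    pad = os.urandom(1 << 20)    # stand-in for one chunk of the shared drive

    def xor(data, key):
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"meet at dawn"
    ciphertext = xor(message, pad)     # sender uses the next unused pad bytes
    recovered  = xor(ciphertext, pad)  # receiver uses the same bytes of their copy
    assert recovered == message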
This is somewhat related to the idea of complexity. If you have a sequence of "random" numbers, how do you know they're random? Look at the Mandelbrot set and you would never guess that it isn't actually all that complex.
I really like the idea of Kolmogorov complexity [1], which is to say that the complexity of an object (including a sequence of numbers) is defined by the shortest program that can produce it. So a sequence of numbers generated by a PRNG isn't complex, because an arbitrarily long run of such numbers can be reduced to the (finite) size of the program plus its initial seed.
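A quick illustration of that point (Python 3.9+ for `randbytes`):

    # A megabyte of PRNG output is not "complex" in the Kolmogorov sense: this
    # short function plus the integer seed is a complete description of it.
    import random

    def prng_stream(seed, n_bytes):
        return random.Random(seed).randbytes(n_bytes)

    a = prng_stream(42, 1_000_000)
    b = prng_stream(42, 1_000_000)
    assert a == b   # the whole megabyte reduces to: this program + the seed 42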
There are various random number generators that use quantum effects to create random numbers. One interesting implication of this is that it ends the debate about whether quantum effects can affect the "classical" or "macro" world.
Of course, PRNGs should still be seeded with real entropy from the outside world, but even if that fails at some point, your PRNG will still be producing effectively unpredictable numbers for a long time.
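A rough sketch of that point, using SHAKE-256 as a stand-in for a proper DRBG (this is only an illustration of seed-stretching, not a recommended construction):

    # One good seed from the OS entropy pool, stretched with an extendable-output
    # hash, keeps producing output that is unpredictable to anyone without the
    # seed, even if no further entropy arrives for a long time.
    import hashlib
    import os

    seed = os.urandom(32)                              # real entropy, gathered once
    stream = hashlib.shake_256(seed).digest(1 << 20)   # a megabyte of derived output
    print(stream[:16].hex())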
The universe is a swirling vortex of entropy. In theory, with enough data, you can predict anything, at any point in time. There is no such thing as "truly random"
abdullahkhalids•4h ago
[1] https://arxiv.org/pdf/1409.1570
perching_aix•3h ago
As just a potential consumer of True (tm) Random (tm) Numbers (tm) [0] rather than a physicist, I'm still only vaguely sure this meta-review is actually assessing what I'm having a problem with. I'm also struggling with the language and layout a bit, but it's not too bad, and I do see that my phrasing above is incorrect (should have said ontic and epistemic).
Not sure what kind of report you were hoping to hear, though; it sounded a bit like you were waiting for a laugh?
[0] got a degree in compsci but I don't work in academia, or any industry fields where reading papers on the regular is a thing (AI, graphics, physics sim, etc.)
mjburgess•3h ago
So, for a given epistemic-random Y, "0 < P(Y) < 1" => Y is ontic-random iff there is no X s.t. P(Y|X) = 1 or P(Y|-X) = 1, where dim(X) is arbitrarily large.
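In typeset form (just a restatement of the condition above):

    \[
      0 < P(Y) < 1 \;\Longrightarrow\;
      \Big( Y \text{ is ontic-random} \iff
      \nexists X \,\big[\, P(Y \mid X) = 1 \ \lor\ P(Y \mid \lnot X) = 1 \,\big] \Big),
      \quad \dim(X) \text{ arbitrarily large.}
    \]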
The existence of X is not epistemic, and is decided by the best interpretation of the best available science.
Bell's theorem limits the conditions on `X` so that either (X does not exist) or (X is non-local).
If you take the former branch then ontic-randomness falls out "for free" from highly specific cases of epistemic; if you take the latter, then there is no case in all of physics where one implies the other.
Personally, I lean more towards saying there is no case of ontic randomness, only "ontic vagueness" or measurement-indeterminacy -- which gives rise to a necessary kind of epistemic randomness due to measurement.
So that P(Y|X) = 1 if X were known, but X isn't in principle knowable. This is a bit of a hybrid position which allows you to have the benefits of both: reality isn't random, but it necessarily must appear so because P(X|measure(X)) is necessarily not 1. (However this does require X to be non-local still).
This arises, in my view, because I think there are computability constraints on the epistemic P(Y|X, measure(X)), i.e., there has to be some f: X -> measure(X) which is computable -- but reality isn't computable, i.e., functions of the form f : Nat -> Nat do not describe reality.
This is not an issue for most macroscopic systems, because they have part-whole reductions that make "effectively computable" descriptions fine. But in systems where these part-whole reductions don't work, including QM, the non-computability of reality creates a necessary epistemic randomness in any possible description of it.