Look at 3-sub-10, i.e. tetration of height 3: ³10 = 10^(10^10). So that is 10 to the power of 10 billion. In regular decimal notation, that is a "1" with 10 billion "0"s following it. It takes about 10 gigabytes of RAM to represent the number in decimal notation, naively.
The number of atoms in the universe is only about 10^80, i.e. a 1 followed by 80 zeroes. Now consider 10-million-sub-10, i.e. 10↑↑10,000,000: the number is so huge that no amount of RAM could represent it.
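To make the tower concrete, here's a minimal Python sketch of tetration (the function name and the toy arguments are mine, just for illustration):

    def tetrate(base: int, height: int) -> int:
        """base↑↑height: a power tower of `height` copies of `base`."""
        result = 1
        for _ in range(height):
            result = base ** result
        return result

    print(tetrate(10, 2))   # 10^10 = 10,000,000,000
    # tetrate(10, 3) is 10^(10^10): just *writing down* its digits takes ~10 GB.
    # tetrate(10, 10_000_000) is 10↑↑10,000,000: no physical storage can hold it.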
This example is from https://www.statisticshowto.com/tetration-function-simple-de...
The mass-energy includes ordinary matter, dark matter, and dark energy. Current estimates suggest the observable universe contains roughly 10^53 kg of mass-energy equivalent.
Plugging these into S ≤ 2πER/ℏc gives something on the order of 10^124 bits of maximum information content.
S ≤ 2πER/ℏc
S ≤ (2 × 3.141593 × 3.036e+71 × 4.399e+26)/(1.055e-34 × 299792458)
S ≤ 2.654135e+124
S ≤ ~10^124 (in the ballpark of the oft-quoted ~10^120 figure)
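For anyone who wants to check the arithmetic, a quick Python sketch using the values quoted above (constants as given in the comment, not re-derived):

    import math

    E = 3.036e71       # mass-energy of the observable universe, joules (figure used above)
    R = 4.399e26       # radius of the observable universe, meters
    hbar = 1.055e-34   # reduced Planck constant, J*s
    c = 299792458.0    # speed of light, m/s

    # Bekenstein bound on the information content of a sphere of radius R and energy E
    S = 2 * math.pi * E * R / (hbar * c)
    print(f"S <= {S:.6e}")  # ~2.654e+124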
So, no.
Most of these 'uncomputable' problems are uncomputable in the sense of the halting problem: you can write down an algorithm that should compute them, but it might never halt. That's the sense in which BB(x) is uncomputable: you'll never know if you're done, because you can't distinguish a machine that never halts from one that just hasn't halted yet (since the infinite tape gives it infinitely many possible configurations, you can't just wait for a loop to repeat).
So presumably the independence of a number from ZFC is like that also: you can't prove some value is BB(745), because you'd never know when you've proved it; the only way to prove it is essentially to run those Turing machines until they stop, and you'll never know if you're done.
I'm guessing that for the very small Turing machines there is not enough structure possible to encode whatever infinitely complex states end up being impossible to deduce halting from, so they end up being Collatz-like and then you can go prove things about them using math. As you add states the possible iteration steps go wild and eventually do stuff that is beyond ZFC to analyze.
So the finite value 745 isn't really where the infinity/uncomputability comes from: it comes from the infinite tape, which can produce arbitrarily complex functions. (I wonder if over a certain number of states it becomes possible to encode a larger Turing machine on the tape somehow, causing a sort of divergence to infinite complexity?)
In a very real sense, a deep kind of infinite complexity can be generated from 745 objects of a certain kind, but not from 5 objects of that kind.
Turing machines have infinite tape, not infinite state. The entire set of all halting machines of a given size collectively only use finite tape. Totally finite. Only (some of) the non-halting machines use infinite tape.
The problem is that we don't know in advance how large the (definitely finite) upper bound on the amount of tape all the size-N halting machines use is, until enough of them (one per known equivalence class) have halted. And we don't know (in general) how to run all the halting ones until they halt without also running some non-halting program for an unbounded amount of time.
TL;DR: unbounded is not infinite, but big enough to be a problem.
But the overall 'complexity' (at a timestep, say) is going to be due to the states and the tape together. The BB(5) example that was analyzed, iirc, was a Collatz-like problem (Aaronson describes it here: https://scottaaronson.blog/?p=8088 ). My interpretation of this is that:
1. Collatz-like functions have a lot of complexity due to the math alone.
2. 5 states turned out to be enough to "reach" one of those.
3. More states means you're going to reach more possible Collatz-like functions (they don't have to be Collatz-like; it's just easier to think about them that way).
4. Eventually you reach ones that ZFC cannot show to halt, because there is effectively no way to prove it other than running them, and then you would have to solve the halting problem.
(A sketch of what "Collatz-like" means here follows.)
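A minimal Python sketch of a Collatz-like iteration; the map below is made up for illustration (it is *not* the map hidden inside the BB(5) champion), but it shows the shape of the problem: simple affine branches per residue class, with one branch that halts.

    def run_collatz_like(x: int, max_steps: int = 10_000):
        """Iterate a toy Collatz-like map until the halting residue is hit."""
        for step in range(max_steps):
            r = x % 3
            if r == 2:
                return step             # this branch "halts"
            # affine update per residue class (chosen so the result stays integral)
            x = (5 * x + 6) // 3 if r == 0 else (5 * x + 7) // 3
        return None  # gave up -- and proving "None forever" is the hard part

    print(run_collatz_like(7))  # 1: one update (7 -> 14), then 14 % 3 == 2 halts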
The part that was helpful for me in being less unsettled by BB(745) being independent of ZFC was the notion that it eventually boils down to a halting problem, and we're asking ZFC to "solve" it... which is more agreeable than the idea that "ZFC cannot compute a function that seems to be solvable by brute force".
Incidentally, Gödel's theorem eventually comes down to a halting-like argument as well (well, a diagonal argument). There is a presentation of it that takes less than a page in terms of the halting problem---all of the Gödel-numbering machinery is essentially an antiquated proof technique. I remember seeing this in a great paper which I can't find now, but it's also mentioned as an aside in this blog post: https://scottaaronson.blog/?p=710
wait jk I found it: https://arxiv.org/abs/1909.04569
Usually, "what happens" is that the machines become large enough to represent a form of induction too strong for the axioms to 'reason' about. It's a function of the axioms of your theory, and you can add more axioms to stave it off, but of course you can't prove that your new axioms are consistent without even more axioms.
> There is a presentation of it that is in like less than one page in terms of the halting problem---all of the Gödel-numbering stuff is essentially an antiquated proof.
Only insofar as you can put faith into the Church–Turing thesis to sort out all the technicalities of enumerating and verifying proofs. There still must be an encoding, just not the usual Gödel numbering.
> There is a presentation of it that is in like less than one page in terms of the halting problem
Those are two very different ideas. Your second sentence says that Gödel's theorem is easy to prove if you have results about the halting problem. Your first one says that in order to prove Gödel's theorem, you need to establish results about the halting problem.
But to me as a layman that seems true regardless of the formal axioms chosen; I guess I need to read that linked paper.
Two lenses for trying to understand this are Chaitin's limit on the output of a LISP program being more complex than the program itself [1], or Markov's proof that you can't classify manifolds in d >= 4.
If you try the latter and need/want to figure out how the Russian school is so different, this is helpful [2].
IMHO the former gives an intuition for why, and the latter makes it rigorous.
In ZFC, C (choice) actually ends up implying PEM (the principle of excluded middle) via Diaconescu's theorem, which is why using constructivism as a form of reverse math helped it click for me.
This is because in the presence of excluded middle, every sequentially complete metric space is a complete space, and we tend to care about useful things; for me, just how huge the search space grows was hidden by the typical (and useful) a priori assumption of PEM.
If you have a (in my view, understandable) dislike for the constructive approach, or don't want/have time to invest in learning an obscure school of it, this recent paper [3] on the limits of finding a quantum theory of everything is another lens.
Yet another path is through Type 2 TMs and the Borel hierarchy, where, while you can have an uncomputable number on the input tape, your algorithms themselves cannot use it; yet you can produce uncomputable numbers by randomly selecting and/or changing an infinite sequence.
Really it is the difference between expressibility and algorithms working within what you can express.
Hopefully someone else can provide more accessible resources. I think a partial understanding of the limits of algorithms and computation will become more important in this new era.
[1] https://arxiv.org/abs/chao-dyn/9407003 [2] https://arxiv.org/abs/1804.05495 [3] https://arxiv.org/abs/2505.11773
Like, a TOE is not expected to decide all statements expressible in the theory, only to predict particular future states from past states, with as much specificity as such past states actually determine the future states. It should not be expected to answer “given a physical setup where a Turing machine has been built, is there a time at which it halts?” but rather to answer “after N seconds, what state is the machine (as part of the physical system) in?” (for any particular choice of N).
Whether a particular statement expressed in the language of the theory is provable in the theory, is not a claim about the finite-time behavior of a physical system, unless your model of physics involves like, oracle machines or something like that.
Edit: it later says: “ Chaitin’s theorem states that there exists a constant K_{ℱ_{QG}} , determined by the axioms of ℱ_{QG} , such that no statement S with Kolmogorov complexity K(S) > K_{ℱ_{QG}} can be proven within ℱ_{QG} .”
But this, unless I’m badly misinterpreting it, seems very wrong? Most formal systems of interest have infinitely many distinct theorems, and given an infinite set of strings, there is no finite universal upper bound on the Kolmogorov complexity of the strings in that set (there are only finitely many strings whose complexity falls below any fixed bound).
Maybe this was just a typo or something?
They do then mention something about the Bekenstein bound, which I haven’t considered carefully yet but seems somewhat more promising than the parts of the article that preceded it.
If you are referring to the idea that nothing that can't exist in the real universe "really exists", then the "Busy Beaver" portion of that idea is extraneous, as 100% of integers can't exist in the real universe, and therefore 100% of integers are equally just "mathematical concepts". That one of them is identified by BB(748) isn't a particularly important aspect. But certainly, a very specific number is identified by that designation, though nothing in this universe is going to know what it is in any meaningful sense.
For that to be the case, there would have to be some symbol under the tape and some state the machine is in for which the action the machine takes and the next state it goes to would depend on the axioms taken somehow. There is no place where the Turing Machine has somehow been running for so long and just gotten so large that its behavior becomes non-deterministic somehow.
What this means is that even if we lived in a universe where we had the unfathomable resources to actually have this number somehow meaningfully "in hand", we would be unable to prove that it was the correct one with just ZFC. Maybe one of the really quite numerous other machines still spinning away would in fact terminate in the future and be the real BB winner, because even this staggeringly monstrously large universe is still piddling nothing next to infinity and the other machines still require infinite resources to run them to discover they never terminate. But that doesn't do anything to affect whether or not there in fact is a single concrete integer that corresponds to BB(748).
Although one imagines that any universe with the resources to "have" BB(748) in it might also have some much more powerful axiom systems to play with in the process. The amount of computational power this universe apparently possesses is beyond all comprehension and who knows what they could know. But even if they used a more powerful system, it wouldn't change what BB(748) is... it just might affect whether or not they were correct about it.
That's easy, you just have to be an ultrafinitist, and say, "The definition of a TM presupposes an infinite set of natural numbers for time steps and tape configurations. But there aren't actually infinitely many natural numbers, infinitely long executions, arbitrarily long proofs, etc., outside of the formalism. If a formal statement and its negation do not differ regarding any natural numbers small enough to actually exist (in whatever sense), then neither is more true than the other." In particular, consistency statements may have no definite truth value, if the hypothetical proof of an inconsistency would be too large.
Of course, metamathematics tells us "you can't do that, in principle you could tell the lie if you wrote out the whole proof!" But that principle also presupposes the existence of arbitrarily-long proofs.
(Personally, hearing some of the arguments people make about BB numbers, I've become attracted to agnosticism toward ultrafinitist ideas.)
Plus, ultrafinitism is essentially relative to the universe you find yourself in. I hypothesized a universe in which BB(748) could actually exist, but you can equally hypothesize ones in which not only can it exist, it exists comfortably and is considered a small number by its denizens. We can't conceive of such a thing, but there's no particular a priori reason to suppose it couldn't exist. If such a universe does actually "exist", does that mean our ultrafinitism is wrong? I'm actually sort of a proponent of knowing whether you're operating in a math space that corresponds to the universe (see also constructive mathematics), but concretely declaring that nothing could possibly exist that doesn't fit into our universe is a philosophical statement, not a mathematical one.
The formalism says that there's still a definite integer that it corresponds to. The ultrafinitist would deny that the formalism keeps capturing truth past where we've verified it to be true, or some unknown distance farther.
> I hypothesized a universe in which BB(748) could actually exist, but you can equally hypothesize ones in which not only can it exist, it exists comfortably and is considered a small number by its denizens.
Sure, but the ultrafinitist would argue, "All this is still just a shallow hypothesis: you've said the words, but that's not enough to breathe much 'life' into the concept. It is but the simplest of approximations that can fit into our heads, and such large things (if they could exist) would likely have an entirely different nature that is incomprehensible to us."
> We can't conceive of such a thing but there's no particular a priori reason to suppose it couldn't exist.
That's why I wouldn't call myself an ultrafinitist, but would prefer an agnostic approach. There may be no great a priori reason to suppose it cannot exist, but I similarly do not see any such reason it must necessarily exist. We empirically notice that our formalism works for numbers small enough to work with, and we pragmatically round it off to "this formalism is true", but one could argue that surprising claims about huge numbers need stronger support than mere pragmatism.
Every sentence ever spoken and every view ever looked at is also a number. It's not a freaky thing about "things like" busy beaver, it's a freaky thing about the concept of information.
But even though everything is a number, saying "it's crazy that a number can be X" is usually someone making a mistake, using the everyday concept of numbers in their head. If you replace "a number" with "some text and code and data", people wouldn't say it's surprising that "some text and code and data" can be unprovable in ZFC.
Technically a photograph is a number, but primarily it's something else. BB(748) is the same, technically a number but primarily it's a series of detailed computer calculations.
I'd say that's a bit of a wrong or misleading statement. I think the correct version is "everything[1] can be encoded as a number". The concept of number is a very particular concept! It's pretty absurd to say "a screwdriver is a number" or "a word is a number". That may be true under the Peano axiomatization of numbers, but to me in particular, numbers are a generalization (and formalization) of the idea or concept of quantity. There's a particular idea that refers to, say, 'two' apples: the quantity of apples. A word is not a quantity; it's a different concept, even though each of them could be encoded as a number somehow!
[1]: Everything that we believe to be finite and of interest, that is. We don't presently know of anything usable in reality (a piece of music, a picture, etc.) that can't in principle be encoded as a large enough number.
I think this is quite interesting, because this encoding is critical, and it completes the system. You essentially need a machine to turn things into numbers and numbers into things; this is unavoidable. You can actually encode this machine itself with numbers! This number (which encodes the transcoding machine) can even be decoded by its own machine! But we cannot avoid the machine itself, some actual realization in the real world, because any number, in order to represent something, has to be translated by some "machine" (which can be a computer, a mind, etc.).
Instead of thinking of machines, you can also think of conventions. So you can have a convention that says the number '5' encodes the concept 'word', or maybe it simply encodes the string of letters "word" ("w"+"o"+"r"+"d"). But the convention interpretation isn't complete, because you still need someone, or something, to interpret this convention in practice and potentially turn concepts into reality, or simply manipulate those concepts in significant and useful ways.
Some more examples: (1) you can encode objects by describing a series of solid operations, essentially CAD modelling, so you have numbers that represent solids. The machine that interprets this number and is able to translate it, for example, into a picture, or a series of instructions for a 3D printer, or performs operations (inferences) on the relevant solid model (a structural analysis, say) is your "machine", i.e. your software, without which a number or string of bits doesn't mean anything by itself (except perhaps the quantity associated with that binary number); and again this encoding is essentially arbitrary, it could be very different. (2) a JPEG file encodes an image that is read by a software stack (JPEG decoder + picture viewer + operating system + display driver) and forwarded to your monitor to be viewed as a pixel array. Again, the string of bits associated with any image could in principle represent anything else representable. (A tiny sketch of the convention-plus-machine idea is below.)
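A minimal Python sketch of that convention-plus-machine pairing; the convention here (UTF-8 bytes read as one big-endian integer) is just one arbitrary choice among many:

    def encode(s: str) -> int:
        """The convention: UTF-8 bytes of the string, read as one big integer."""
        return int.from_bytes(s.encode("utf-8"), "big")

    def decode(n: int) -> str:
        """The 'machine': without it, n is just a quantity, not a word."""
        return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

    n = encode("word")
    print(n)          # 2003792484 -- under a different convention, anything else
    print(decode(n))  # "word" again, but only because we share the decoder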
Information (Shannon information in particular) simply presupposes that some such encoding is possible.
It's really interesting that a lot of the time we are performing essentially translations between different representations of a thing: a series of bits into states of pixels on a screen (a picture), [a series of bits] into a 3d printed object, into a visualization of an object on a screen, etc. (one way), or a reading of a camera sensor (photograph) into a series of bits, a conception of an object (3d modelling), a conception of a story (writing), etc. (the other way). We of course can (and must for them to be built of course) conceptualize those "machines" themselves (e.g. the software part), represent them in some way (our encoding), and then turn this representation into a realization of those machines (a software, a piece of hardware, or just a representation convention, etc.).
In other words, the mind or computer itself is always an integral part of the process, and information in a vacuum doesn't represent anything necessarily.
Finally, most of what we do is some kind of translation, inference, and construction -- everything to assist our lives. Of course some "machines" are capable of generating new concepts, those are very interesting "machines" :)
Well, if we're using the more narrow view, then "BB(748)" isn't a number, it's an encoding of a partial algorithm. And it still shouldn't be surprising that a statement about an algorithm might be unprovable in ZFC.
The actual number, the quantity, is quite easy to write down inside ZFC. And so is the beaver Turing machine itself. The hard part is knowing which of the 748-state machines is the beaver.
[0] https://en.wikipedia.org/wiki/Zero:_The_Biography_of_a_Dange...
For zero I can not only do that, I can also count to it if you let me count both up and down, which seems like a very simple ask.
We just can't prove which number it is; we don't know which of the machines halt.
With this definition, we can say that "ZFC is inconsistent" is semidecidable: you run a program that searches for a contradiction.
The question BB(748) ≠ 1000 is similarly semidecidable: you can run a program that will rule out 1000 if it is not BB(748). (A generic sketch of such a loop is below.)
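Here is what a generic semidecision loop looks like in Python. The Goldbach example stands in for "search for a ZFC contradiction": if a witness exists the loop halts, otherwise it runs forever, and you never learn which from the outside.

    import itertools

    def semidecide(has_witness, candidates):
        """Halt (with a witness) iff one exists; otherwise loop forever."""
        for c in candidates:
            if has_witness(c):
                return c

    def is_goldbach_counterexample(n: int) -> bool:
        """True if the even number n > 2 is NOT a sum of two primes."""
        def is_prime(k: int) -> bool:
            return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))
        return not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

    # Semidecidable: halts iff Goldbach's conjecture is false.
    # semidecide(is_goldbach_counterexample, itertools.count(4, 2))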
So they are in the same "category", at least regarding their undecidability.
Also, if you turn "ZFC is consistent" into a number ({1 if ZFC is consistent; 0 if ZFC is inconsistent}), you will see that BB(748) is not very much different: both are defined (equivalently, even) using the halting of Turing machines, i.e. the result of an infinite search.
As soon as Gödel published his first incompleteness theorem, I would have thought the entire field of mathematics would have gone full throttle on trying to find more axioms. Instead, over the almost century since then, Gödel’s work has been treated more as an odd fact largely confined to niche foundational studies rather than any sort of mainstream program (I’m aware of Feferman, Friedman, etc., but my point is there is significantly less research in this area compared to most other topics in mathematics).
But why? Gödel's theorem does not depend on number of axioms but on them being recursively enumerable.
If you’re talking about every true sentence in the language of PA, then not all such sentences are derivable via the theory of PA. If you are talking about the theorems of PA, then these are missing an infinite number of true statements in the language of PA.
Harvey Friedman’s “grand conjecture” is that virtually every theorem that working mathematicians actually publish can already be proved in Elementary Function Arithmetic (much weaker than PA in fact). So the majority of mathematicians are not pushing the boundaries of the existing foundational theories of mathematics, although there is certainly plenty of activity regardless.
In fact, both Gödel and Turing worked on this problem quite a bit. Gödel thought we might be able to find some sort of “meta-principle” that could guide us toward discovering an ever increasing hierarchy of more powerful axioms, and Turing’s work on ordinal progressions followed exactly this line of thinking as well. Feferman’s completeness theorem even showed that all arithmetical truths could be discovered via an infinite process. (Now of course this process is not finitely axiomatizable, but one can certainly extract some useful finite axioms out of it — the strength of PA after all is equivalent to the recursive iteration up to ε_0 of Q_{n+1} = Q_n + Con(Q_n), where Q_0 is Robinson arithmetic.)
https://jdh.hamkins.org/the-universal-algorithm-a-new-simple...
This is similar to how there are countable models of ZFC but those models think of themselves as uncountable. They are countable externally and not internally.
Statements that are independent of ZFC are a dime a dozen when doing foundations of mathematics, but they're not so common in many other areas of math. Harvey Friedman has done interesting work on finding "natural" statements that are independent of ZFC, but there's dispute about how natural they are. https://mathoverflow.net/questions/1924/what-are-some-reason...
In fact, it turns out that a huge amount of mathematics does not even require set theory; it is just a habit for mathematicians to work in set theory: https://en.wikipedia.org/wiki/Reverse_mathematics
> This ignores the fact that it is not so easy to find natural interesting statements that are independent of ZFC.
I’m not ignoring this fact—just observing that the sheer difficulty of the task seems to have encouraged mathematicians to pursue other areas of work beside foundational topics, which is a bit unfortunate in my opinion.
> approximate those aspects of physical reality that are primarily relevant to the endeavors of humanity.
This is the comment that made me think that you were saying we needed more work on foundations for math as it is used in the sciences, and that doesn't match my understanding. Did I read it differently than you meant it?
Thus, any proof that BB(748) = N must either show that TM_ZF_INC halts within N steps or never halts. By Gödel's famous results, neither of those cases is possible if ZFC is assumed to be consistent.
I guess it's also hard when we have an arbitrary Turing machine and have to prove that what it's doing isn't equivalent to trying to prove an undecidable statement.
BB(748) is by definition a finite number, and it has some value - we just don't know what it is. If an oracle told us the number, and we ran TM_ZFC_INC that many steps we would know for sure whether ZFC was consistent or not based on whether it terminated.
The execution of the turing machine can be encoded in ZFC, so it really is the value of BB(748) that is the magic ingredient. Somehow even knowledge of the value of this finite number is a more potent axiomatic system than any we've developed.
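A sketch of why the value alone carries so much power. Everything named here is hypothetical (`tm`, `step`, `halted` are stand-ins; nobody has TM_ZFC_INC's transition table in runnable form, let alone BB(748) itself):

    def decide_zfc_consistency(bb_748: int, tm) -> bool:
        """Given the true value of BB(748) and a simulator for TM_ZFC_INC
        (the 748-state machine that halts iff ZFC proves a contradiction),
        Con(ZFC) reduces to a finite -- if utterly infeasible -- computation."""
        state = tm.initial_state()
        for _ in range(bb_748):
            state = tm.step(state)
            if tm.halted(state):
                return False   # it found a contradiction: ZFC is inconsistent
        # A 748-state machine still running after BB(748) steps never halts.
        return True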
BB(748) is very similar, in that I'd call it a 'definition' independent of ZF rather than a 'number' independent of ZF.
* ZFC is a set of axioms. A "model" is a structure that respects the axioms.
* By Gödel's completeness theorem, we know that ZFC proves a statement if and only if the statement is true in all models of ZFC.
* Therefore, the statement "BB(748) is independent of ZFC" is the same as the statement "There are two different models of ZFC in which BB(748) takes two different values."
* We can take one of these to be the "standard model"[1] that we all think of when we picture a Turing Machine. However, the other would be a strange "non-standard" model that includes finite "natural numbers" that are not in the set {0,1,2,3,...} and it includes Turing Machines that halt in "finite" time that we would not say halt at all in the standard model.
* So BB(748) is indeed a number as far as the standard model is concerned; the problem only comes from non-standard models.
TL;DR this is more about the fact that ZFC axioms allow weird models of Turing Machines that don't match how we think Turing Machines usually work.
[1] https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet...
I thought it was a typo. First time I've encountered tetration.
When arithmetic is introduced just as a way to, for example, count money, it's more directly practical in the moment, but you're not seeing the larger pattern.
Unlike Aaronson, he actually is on the forefront of Busy Beaver research, and is one of the people behind the https://bbchallenge.org website
Extremely bad ad hominem; I enjoyed Aaronson's read, nothing wrong with it.
Colloquially, I understand it's easy to think it means "saying something about someone that could be interpreted negatively", because that's the context it is read in when it is used.
The actual meaning is claiming a logical argument is incorrect because of who made the argument.
It is implying that claims from the article like "Then, three days ago, Tristan wrote again to say that mxdys has improved the bound again, to BB(6)>9_2_2_2" are not real results. The justification for these not being real results is solely based off whether author is actually on the forefront of research.
OP isn't making an ad hominem fallacy in the logical-argument sense - it's not saying "Aaronson is wrong because he's not a frontline researcher."
But you're absolutely right to feel uncomfortable with their approach. There's something off-putting about dismissing someone's reporting of research developments, even if you prefer more comprehensive coverage, or there are more interesting things to say.
The thing is, if that's ad hominem, so is any recommendation preferring one second-hand reporting over another -- ex. "if you want the actual news, read Tucker, not Krugman" isn't an ad hominem towards Krugman.
Another example we see often on HN: saying "you should read the actual paper instead of this pop science" is a quite frequent, quite agreeable, and yet dull, contribution on say, a Quanta article. Yet, I imagine we agree this isn't an ad hominem.
The real issue might be that OP conflates two different things: being a primary researcher versus being a good science communicator who accurately reports on others' work.
Both roles have value, and questioning whether someone has filled one role doesn't necessarily invalidate their ability to fill the other.
(this helped me understand my odd frustration with the dull comments on science articles: I emotionally engage with them as being mean / out of bounds, but they're true; what I'm really frustrated by is that there could always be a more detailed article, or even a paper, and yet we all must publish)
But how you justify that could be one. If you are just attacking the person instead of their reporting I would call that ad hominem.
>If you want to learn about actual Busy Beaver results [...]
This is saying there is no discussion of the results in the article, which is not true.
>Unlike Aaronson, he actually is on the forefront of Busy Beaver research [...]
This implies Aaronson has no (or lesser) authority on the subject and suggests we should listen to somebody else who purportedly has more.
Nowhere in @NooneAtAll3's comment is there an argument made against/for the contents of the article, an example of that would be:
"Aaronson mentions X but this is not correct because Y" or something along those lines.
Instead, the comment, in its full extent, is either discrediting (perhaps unintentionally) and/or appealing to the authority of the people involved. That's ad hominem.
I spent 5 minutes trying to verify that any link in the post above links to Scott Aaronson, or mentions him, and found nothing. :\ (both the sligocki blog, and when I found nothing there, the busy beaver site)
"One Collatz Coincidence", the 2nd story on the blog, also mentions Aaronson
People can't visualize numbers that big. There are more ways to express numbers than just counting them. For example, a single grain of sand has infinitely many states it can be in (there are infinitely many real numbers), so you could say a single grain of sand could represent BB(6). Combinations can grow exponentially, so that may be something useful for trying to express it.
I.e., how long can an inconsistent system fake being consistent before that fact is discovered? An inconsistent system faking consistency via BB(3) will be “found out” much quicker than a system faking consistency via BB(6). (What I mean by faking consistency is claiming that all programs that run longer than BB(n) steps for some n never halt.)
Using infinite precision to make things seem tractable is sleight of hand in my book. Stick with integers when you're describing scale.
We know from the functional busy beaver [1] that Graham behaviour can come surprisingly early; a 49-bit lambda term suffices. There are only 77519927606 closed lambda terms of at most that size [2], compared to 4^12*23836540=399910780272640 unique 6-state Turing Machines [3].
With the achievement of pentation in only 6 states, several people now believe that 7 states should suffice to surpass Graham's. I would still find that rather surprising. A few days ago, I made a large bet with one of them on whether we would see proof of BB(7)>Graham's within the next 10 years.
What do people here think?
BB has to grow faster than any computable sequence. What exactly that means concretely for BB(7) is... nothing other than handwaving... but it sort of means it needs to walk up the "operator strength" ladder very quickly... it eventually needs to grow faster than any computable operator we define (including, for example, up-arrow^n, and up-arrow^f(n) for any computable f).
My gut feeling is that the growth between 47 million and 2^^2^^2^^9 is qualitatively larger than the growth between 2^^2^^2^^9 and Graham's number in terms of how strong the operator we need is (with Graham's number being g_64, and g here being roughly one step "above" up_arrow^n). So probably we should have BB(7) > Graham's number.
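For reference, the up-arrow ladder being climbed here, as a minimal Python sketch (only sane for tiny arguments; even 3↑↑↑3 is already a tower of 7,625,597,484,987 threes):

    def up_arrow(a: int, n: int, b: int) -> int:
        """Knuth's up-arrow: n=1 is exponentiation, n=2 tetration, and so on."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(2, 2, 4))  # 2↑↑4 = 2^2^2^2 = 65536
    # Graham's number: g_1 = 3↑↑↑↑3, g_{k+1} = 3 ↑^(g_k) 3, Graham's = g_64.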
You can convert every atom of the observable Universe into substrate for a supercomputer, you can harness the energies of supermassive black holes to power it, but running a humble BB(6) champion to its halting state would be forever out of its reach.
I don't get this part. Is it really rounding away the volume of the observable universe divided by the average volume of a grain of sand? That is many more orders of magnitude than the amount of mass in the universe, which is a more usual comparison.
10↑↑10,000,000 / (sand grains per universe) is vastly larger than, say, 10↑↑9,999,999
So in the system we're using to write these numbers, there's really no better way to write (very big)/(merely universe-sized) than by writing exactly that, and then in the notation for very big, it pretty much rounds to just (very big).
In significant figures, 1.0 billion minus 1.0 million equals 1.0 billion.
However many universes are in question, there is a qualitative difference between that many empty universes (each with 1 grain) and that many completely packed with grains.
Ask anybody who lives in one!