
0.9999 ≊ 1

https://lcamtuf.substack.com/p/09999-1
34•zoidb•1d ago

Comments

sans_souse•1d ago
and 0.3… + 0.3… + 0.3… = 0.9… = 1.0

Maybe there is a difference, but it's intangible.

Maybe it is to the number line what Planck Length is to measures.

As a non-math-guy, I understand and accept it, but I feel like we can have both without breaking math.

In a non-idealized system, such as our physical reality; if we divide an object into 3 pieces, no matter what that object was we can never add our 3 pieces together in a way that recreates perfectly that object prior to division. Is there some sort of "unquantifiable loss" at play?

So yea, upvoting because I too am fascinated by this and its various connections in and out of math.

bardan•1d ago
0.3... is just the decimal representation of 1/3. So:

0.3... = 1/3

0.6... = 2/3

0.9... = 3/3 (= 1)

LiKao•1d ago
But you are assuming 0.3... is the representation of 1/3. We don't have to make this assumption, it's just the one we are usually taught. Math doesn't really break from making different assumptions, quite the opposite.

Let's make some different assumptions, not following high school math: when I divide 1 by 3, I always get a remainder. So it would be just as valid to introduce a mathematical object representing the remainder left over after I performed the infinite number of divisions. Then

1/3 = 0.3... + eps / 3

2/3 = 0.6... + 2eps / 3

3/3 = 0.9... + 3eps / 3

and since 0.9... = 1 - eps, we get 3/3 = 0.9... + eps = 1

It's all still sound (I haven't proven this, but so far I don't see any contradiction in my assumptions). And it comes out that 0.9... is not equal to 1, just because I added a mathematical object that forces this to come out.

Edit: Yes, I am breaking a lot of other stuff (e.g. standard calculus) by introducing this new eps object. But that is not an indicator that this is "wrong", just different from high school math.

pvdebbe•1d ago
Nothing is broken, just people stumbling on various notations.
spyrja•1d ago
Another easy way to understand it is to extend the idea of remainders to decimals. When we say N / D = Q r R that obviously means N = D * Q + R. For example 13 / 3 = 4 r 1 because 13 = 3 * 4 + 1. Likewise then 1 / 3 = 0.3(...) = 0.3 r 0.1 because 1 = 3 * 0.3 + 0.1, but also 1 / 3 = 0.3(...) = 0.33 r 0.01 because 1 = 3 * 0.33 + 0.01, etc. Hence 3 * 0.3(...) = 0.9(...) = 1.
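A minimal Python sketch of that remainder bookkeeping (using the standard fractions module; the loop bound is arbitrary): each truncation of 1/3 to n digits leaves an exact remainder of 10^-n, which shrinks but never quite disappears.

  from fractions import Fraction

  # 1 = 3*q + r for each truncation q of 1/3; the remainder r is exactly 10**-n
  for n in range(1, 6):
      q = Fraction(10**n // 3, 10**n)   # 0.3, 0.33, 0.333, ... as exact fractions
      r = 1 - 3 * q                     # 1/10, 1/100, 1/1000, ...
      print(n, q, r)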
Suppafly•1d ago
> if we divide an object into 3 pieces, no matter what that object was we can never add our 3 pieces together in a way that recreates perfectly that object prior to division. Is there some sort of "unquantifiable loss" at play?

When you cut a cake into 3 slices, there is always a little bit of cake stuck to the knife.

fouronnes3•1d ago
To me the most obvious proof is that there are no numbers in between 0.999... and 1. Therefore it must be the same number.
murkle•1d ago
Exactly, add them up and divide by 2. What's the answer?
fouronnes3•1d ago
TFA goes into this somehow but I fail to see why it's so hard to grasp that they are the same. Maybe I should read more crackpot blogs!
blackbear_•1d ago
The fact that there are no numbers in between is not obvious at all, and has to be proven formally!

In fact, there is a (rational) number between any two distinct real numbers, therefore your proof attempt only works if you assume that 0.999... equals 1. As that is circular reasoning, it is not a valid proof.

thaumasiotes•1d ago
> your proof attempt only works if you assume that 0.999... equals 1. As that is circular reasoning, it is not a valid proof.

No, his proof is fine. Take the standard definition of > as applied to decimal numbers when they're represented as strings. It's very easy to show that no x simultaneously satisfies x > 0.9999... and 1.0000... > x.

blackbear_•1d ago
You are right, that works if you assume that every number can be represented as a decimal string.

That is indeed true for real numbers, but not for hyper-reals (https://en.m.wikipedia.org/wiki/Hyperreal_number), which is what I had in mind when I originally said that it was not obvious.

thaumasiotes•1d ago
Well, for hyperreal numbers it's still true that every real number can be represented as a countably infinite string. The proof will still work fine if you say "the real number with expansion 0.9999...". In the hyperreals, given that not every value can be expressed as a decimal string, we'd need to establish what "0.9999..." meant before further commenting on the proof.

---

lcamtuf proposes treating 0.9999... as a representation of a hyperreal number in the halo of 1, which raises some notational questions. The logical extension of the notation in my mind would, given an assumed infinitesimal value epsilon, represent epsilon as 0.000000... -| (barrier between real and infinitesimal decimal places) |- ...00001 . It's not obvious to me how you'd work with the representation 0.000000... | 1000... (what's 1 + 20ε?).

But I don't think my instinct works either, because you can grade the levels of infiniteness more finely than you have room in a countable string to represent. (This is just a gut feeling.)

Going with a more formal definition, you could take the new decimal places as representing coefficients of 10^{-N}, where N is an infinite hypernatural number, but since those have no beginning or end it's not at all clear where you'd position the coefficients.

On the other hand, I don't really see the conceptual problem with treating 0.9999... as a value that is infinitely close to 1. What bothers me the most at that level is the comment somewhere in the thread asking, given the established infinitesimal difference between 0.99999... and 9/9, what's the difference between 0.33333... and 3/9? lcamtuf's derivation works the same way, suggesting that a difference exists. And you can say that it does; you could just declare that rational numbers with prime factors other than 2 and 5 in the denominator have no exact representation in the decimal system.

But there's no demand for this. People don't seem to have the same problem with the idea that 0.33333... is 1/3 as they do with the idea that 0.99999... is 3/3.

throwaway31131•1d ago
I usually use this idea to show that 0.999… is not less than 1 (or more simply, there is no nonzero number you can add to 0.999… to make it 1), then because it's not greater than 1, and there are only three possibilities (>,<,=), they must be equal.
lcrz•1d ago
So the author tries to be rigorous, but again falls into the same traps that the people who claim 0.9… != 1 fall into.

“0.999… = 1 - infinitesimal”

But this is simply not true. Only then they get back to a true statement:

“Inequality between two reals can be stated this way: if you subtract a from b, the result must be a nonzero real number c”.

This post doesn’t clear things up, nor is it mathematically rigorous.

Pointing towards hyperreals is another red herring, because again there 0.999… equals 1.

hinkley•1d ago
I don’t like any of his examples at the top. Look, it’s not that hard:

    x = 0.999…

    2x = 1.999…

    2x - x = 1

    x = 1
Multiplying by ten just confused things and the result doesn’t follow for most people.
derbaum•1d ago
Whether you multiply by 10 or 2, the same "counter" argument from the article stands. Only now you don't have a trailing zero after infinite nines, you have a trailing 8.
ndsipa_pomu•1d ago
I don't understand how you can even have a trailing zero after an infinite number of nines. Surely any place that someone would want to put the zero can be refuted by correctly stating that a nine goes there (it's an infinite number of them, after all) and there is literally no "last" place.
hinkley•1d ago
I’ve seen videos of actual mathematicians complaining to each other about how the general public thinks like GP. There is no last digit. Every time you reach the horizon there’s another horizon.
anthk•1d ago
Technically you don't have an '8': you keep doing a carried sum forever, think about it. The last eight keeps getting bumped up to a 9 with a new eight appended after it, forever. Thus, in practice you are getting a periodic 1.9_.
hinkley•1d ago
There is no eight. This is something I've heard actual mathematicians complain about to other actual mathematicians: the non-math public misunderstands infinite series as "imagine a number so big you can't fathom it and add 1 more to it". That's not how things work.

Going as far as you can imagine and a little farther is an infinitesimal of the real infinite.

tsimionescu•1d ago
The way I was taught decimals in school (in Romania) always made 0.99... seem like an absurdity to me: we were always taught that fractions are the "real" representation of rational numbers, and decimal notation is just a shorthand. Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions. So, for example, if a test asked you to calculate 2 × 0.2222... [which we notated as 2 × 0,(2)], then the right solution was to expand it:

  2 × 0.2222...
   = 2 × 2/9 
   = 4/9 
   = 0.444...
Once you're taught that this is how the numbers work, it's easy(ish) to accept that 0.999... is just a notational trick. At the very least, you're "immune" to certain legit-looking operations, like

  0.33... + 0.66...
    = 1/3 + 2/3
    = 3/3
    = 1
Instead of

  0.33... + 0.66...
    = 0.99...
So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.
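For what it's worth, Python's fractions module models exactly this fraction-first style of arithmetic; a small sketch mirroring the examples above:

  from fractions import Fraction

  # 2 x 0,(2) done the "fraction first" way: 2 x 2/9 = 4/9
  print(2 * Fraction(2, 9))                 # 4/9
  # 0,(3) + 0,(6) as 1/3 + 2/3, rather than digit-by-digit 0.99...
  print(Fraction(1, 3) + Fraction(2, 3))    # 1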
lmm•1d ago
> Doing arithmetic with decimal numbers was seen as suspect, and never allowed for decimals with infinite expansions.

With that attitude how do you handle e.g. pi or sqrt(2), which it's perfectly legitimate to do arithmetic with?

AndrewDucker•1d ago
Once you're dealing with irrational numbers you have to understand that all results are approximations.
lmm•1d ago
Well, sure, but you should still be able to ask and answer questions like "Is pi + sqrt(2) less than or greater than 4.553?"
AndrewDucker•1d ago
In that case you know how many decimal places you want to expand them to, in order to compare.
tsimionescu•1d ago
Note that "expanding them to some number of decimal places" gives a somewhat misleading idea about how this works. What you're actually doing is computing a good enough approximation of pi, and expressing that as a decimal. But this is not the same kind of simple process that naturally gives decimals as it is for a rational fraction. Instead, you have to find some series with rational elements which converges to pi, and then compute enough terms of that series that you have a good enough approximation of pi for your purpose. Ideally, since you're interested in an inequality, you'd pick a series which is monotoniclaly increasing or decreasing, so that you know that computing more terms can't put you below or above the target number after you've reached a conclusion. But there is no canonical answer, there are numerous series which converge to pi that you could use, and they would givw you different decimal expansions as you are computing them.
dagw•1d ago
When you have those sorts of problems the best way is to approach them using inequalities.

   3.1415<pi<3.1416 and 1.4142<sqrt(2)<1.4143, => 4.5557<pi + sqrt(2)<4.5559
   => 4.553 < 4.5557 < pi + sqrt(2) => 4.553 < pi + sqrt(2)
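The same interval bookkeeping can be done mechanically with exact rationals; a sketch in Python (the four-decimal bounds are the ones quoted above and assumed correct):

  from fractions import Fraction

  pi_lo, pi_hi   = Fraction('3.1415'), Fraction('3.1416')
  rt2_lo, rt2_hi = Fraction('1.4142'), Fraction('1.4143')

  lo, hi = pi_lo + rt2_lo, pi_hi + rt2_hi   # 4.5557 < pi + sqrt(2) < 4.5559
  print(lo, hi, Fraction('4.553') < lo)     # 45557/10000 45559/10000 True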
tsimionescu•1d ago
It's important to understand that this was a non-trivial question for thousands of years. The ancient Babylonians would have probably believed this to be false (their best known approximation had pi ≈ 25/8, which is too small). The right way to approach this problem from first principles would be to construct some geometrical objects that have these lengths and try to compare them (for example by taking the perimeter of a square inscribing a unit circle and a square inscribed in a unit circle as the upper and lower bounds for pi, though that may not be good enough for this particular problem).

When you're doing something like pi + sqrt(2) ≈ 3.14159 + 1.41421 = 4.5558, you're taking known good approximations of these two real numbers and adding them up. The heavy lifting was done over thousands of years to produce these good approximations. It's not the arithmetic on the decimal representations that's doing the heavy lifting, it's the algorithms needed to produce these good approximations in the first place that are the magic here.

And it would be just as easy to compute this if I told you that pi ≈ 314159/100000, and sqrt(2) ≈ 141421/100000, so that their sum is 455580/100000, which is clearly larger than 4553/1000.

mcphage•5h ago
> their best known approximation had pi ≈ 25/8, which is too small

I'm curious if they had a better one that we don't know of yet—their best known approximation of sqrt(2) is significantly more accurate.

qayxc•1d ago
Not really. Like the sibling comment said - you simply keep the symbolic values. I.e. instead of 4.442882938158... you write π√2, just like you would write ⅚ and not 0.8333...: in both cases you preserve the exact values. Decimal (or any other numbering system, really) approximations are only useful when you never want to do any further arithmetic with the result.
thaumasiotes•1d ago
> Decimal (or any other numbering system, really) approximations are only useful when you never want to do any further arithmetic with the result.

What? The opposite is the case. Anything you want to do something with, you can only measure inaccurately; arithmetic doesn't have any use if you can't apply it to inaccurate measurements. That's what we use it for!

qayxc•1d ago
So I take it you never wrote any numerical simulations or did symbolic calculations then?

Catastrophic cancellation and other failures are serious issues to consider when doing numerical analysis and can often be avoided completely by using symbolic calculation instead. You can easily end up with wrong results, especially when composing calculations. This would make it difficult to, for example, match your theoretical model against actual measurement results; particularly if the model includes expressions that don't have closed-form solutions.

rob_c•1d ago
Less approximations and more representations of complex things at times. (Just my opinion)

I prefer comparing it to complex numbers where I can't have "i" apples but I can calculate the phase difference between 2 power supplies in a circuit using such notation.

Nobody really cares about the 3rd decimal place when talking about a speeding car at a turn, but they do when talking about electrons in an accelerator, so accuracy and precision always feel mucky to talk about when dealing with irrationals (again my opinion).

tsimionescu•1d ago
Well, 3.141 is an approximation of pi, not a representation of it, insomuch as you use it in an arithmetic expression. Of course, you can write 3.141... to just represent pi, but you can't easily use that in an arithmetic expression. For example, I can't tell you from "mechanical" operations whether 3.141... - 3.1417 > 0, I have to look up how big pi actually is.
qsort•1d ago
You don't. You keep them in symbolic form until they simplify and you do arithmetic at the last possible moment.
lmm•1d ago
Sure, but when you reach that "last possible moment", what then?
dagw•1d ago
If it's a maths problem you just leave it as symbols. If it's a science or engineering problem you expand it to a decimal approximation with the precision needed for the specific problem you are dealing with.
tsimionescu•1d ago
Note that even for an engineering problem, you don't necessarily use a decimal representation. You may well want to represent pi as 3 or 4 or 22/7 or any other approximation that is good for your particular use case. Or you may even have use cases where you do things the opposite way - you may want to approximate 1 as pi/3 or something like that for certain kinds of problems (e.g. if you're going to take the sin of your result).
chongli•1d ago
Then you calculate the decimal expansion to the desired number of decimal places. This avoids accumulation of roundoff errors in intermediate results.

Note that writing sqrt(2) as 1.41 or 1.41421 or any other decimal expansion you might want to write is incorrect: you will always get some roundoff error. If you want to calculate that sqrt(2)*sqrt(2)=2 then you can’t do so by multiplying the decimal expansions.
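A small illustration of that point, with SymPy assumed available for the symbolic half:

  import math
  import sympy

  # A truncated expansion (here a binary float) picks up roundoff:
  print(math.sqrt(2) * math.sqrt(2))        # 2.0000000000000004
  print(math.sqrt(2) * math.sqrt(2) == 2)   # False

  # Keeping sqrt(2) symbolic until the end gives the exact result:
  print(sympy.sqrt(2) * sympy.sqrt(2))      # 2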

rob_c•1d ago
You never evaluate symbols until you're giving a numerical equivalent.

Sure, if a question asks for the escape velocity from Jupiter, this has an approximate numerical value, but you don't just start by throwing numbers at a wall; you get the simplest equation which represents the value you're interested in and then evaluate it once you have a single equation for that parameter.

Yes, sqrt(2)*pi has a numerical approximation, but you don't want that right at the start of talking about something like spin orbitals or momenta of spinning disks. Doing the latter compounds errors.

It's no different to keeping around "i"/"j" until you need to express a phase or angle as it's cleaner and avoids compounding accuracy errors.

tsimionescu•1d ago
This is a very strange question. With repeating decimals, it is technically possible, though very complicated, to do arithmetic directly on the representations. You have to remember a bunch of extra rules, but it can be done.

However, with numbers that have non-repeating infinite decimal expansions, it is completely impossible to do arithmetic in the decimal notation. I'm not exaggerating: it's literally physically impossible to represent on paper the result of doing 3pi in decimal notation in an unambiguous form other than 3pi. It's also completely impossible to use the decimal expansion of pi to compute that pi / pi = 1.

Here, I'll show you what it would be like to try:

  pi / pi
    =  3.141592653589793238462643383279502884197169399375105820949445923078164062862089986280348253421170679821480865132820664709384460955058223172.... 
Now, of course you can do arithmetic with certain approximations of pi. For example, I can do this:

  pi / pi
    ≈ 3.1415 / 3.1415
    = 1
Or even

  3 × pi 
   ≈ 3 × 3
   = 9
But this is not doing arithmetic with the decimal expansion of pi, this is doing arithmetic with rational numbers that are close enough to pi for some purpose (that has to be defined).
anthk•1d ago
pi/pi would evaluate to 1 as most proper languages would deal with pi symbolically and not arithmetically.
tsimionescu•1d ago
My point was only that even trivial arithmetic is impossible to do with the infinite decimal representations of irrational numbers.
librasteve•15h ago
in raku

  say pi/pi;   #1
lizmat•13h ago
Sadly, that's just Num.gist showing 1.0 as "1" though.

say (pi/pi).^name; # Num

librasteve•13h ago
lol … my bad I should have realized that pi is a Num since it’s an Irrational and a Num over a Num is a Num
librasteve•15h ago
in python

  import math

  result = math.pi / math.pi
  print(result)     #1.0
bit more long winded than raku, but nearly right

fwiw I want my pi/pi to be 1 (ie an Int) not 1.0 but then I’m a purist
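If you do want exactly that in Python, a symbolic library will oblige; a sketch assuming SymPy:

  import sympy

  print(sympy.pi / sympy.pi)                 # 1
  print((sympy.pi / sympy.pi).is_Integer)    # True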

anthk•1d ago
You would use rational approximations good enough for different scales and roundings.
cwmma•1d ago
in American math classes (as opposed to science classes) you almost never expand PI or sqrt(2), you either cancel them out or leave them in the answer until the end. Maybe if it's a word problem you sub them in the very last step but the problem itself is almost certainly going to be designed so it's not an issue.
Suppafly•1d ago
>in American math classes (as opposed to science classes) you almost never expand PI

Except we have some fascination with memorizing the digits of pi and having competitions for doing so for some reason.

BobaFloutist•1d ago
Just for fun.
mrguyorama•1d ago
There is no school math test that will require you to know the digits of Pi, except as a silly extra credit question.

The fascination is just dick measuring. "I'm smarter than you", for memorizing a longer string? It's quite dumb, but American media loves to use the dumbest possible ways of demonstrating that a character is intelligent, because uh it's really really hard to demonstrate "This person is very intelligent" to a subset of the population that is mostly at a middle school reading level and barely comprehends basic arithmetic, let alone algebra.

anthk•23h ago
And that's useless for actual math.
Suppafly•7h ago
>And that's useless for actual math.

Agreed. The schools always seem to have these learning adjacent things that are theoretically supposed to make subjects engaging, but in reality are so disconnected from the subject that they are meaningless.

anthk•6h ago
Even the games/puzzles from Martin Gardner are a much better solution than memorizing a random... string. Because pi is not about 3.1415... but a proportion.
lcrz•1d ago
Both ways are just notation. There’s nothing more real about 3/10 compared to 0.3.

Telling you otherwise might have worked as an educational "shorthand", but there are no mathematical difficulties as long as you use good definitions of what you mean when you write them down.

The issues people have with 0.333… and 0.999… is due to two things: not understanding what the notation means and not understanding sequences and limits.

tsimionescu•1d ago
I agree that ultimately both are just notations. I do think the fractional notation has some definite advantages and few disadvantages, so I think it's better to regard it as more canonical.

I disagree though that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.

For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the series of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 going to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.

So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this is equal to 1 for x = 9, muddies the waters a bit. It still gives this impression that you need to do an infinite addition to show that 0.999... is equal to 1, when it's in fact just a notation for 9/9 = 1.

It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that there exists no fraction whose decimal expansion is 0.9... repeating.
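That calculation is ordinary long division with remainder tracking; a sketch in Python (the helper name repeating_decimal is made up for illustration):

  def repeating_decimal(p, q):
      # Return (whole part, non-repeating digits, repeating digits) of p/q.
      whole, rem = divmod(p, q)
      digits, seen = [], {}
      while rem and rem not in seen:
          seen[rem] = len(digits)          # remember where this remainder first appeared
          rem *= 10
          digits.append(str(rem // q))
          rem %= q
      if not rem:                          # expansion terminates
          return whole, ''.join(digits), ''
      start = seen[rem]                    # expansion repeats from here on
      return whole, ''.join(digits[:start]), ''.join(digits[start:])

  print(repeating_decimal(1, 3))   # (0, '', '3')   -> 0.(3)
  print(repeating_decimal(1, 6))   # (0, '1', '6')  -> 0.1(6)
  print(repeating_decimal(9, 9))   # (1, '', '')    -> exactly 1; no fraction yields 0.(9)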

WorldMaker•1d ago
> The issues people have with 0.333… and 0.999… is due to two things: not understanding what the notation means and not understanding sequences and limits.

Also a possible third thing: not enjoying working in a Base that makes factors of 3 hard to write. Thirds seem like common enough fractions "naturally" but decimal (Base-10) makes them hard to write. It's one of the reasons there are a lot of proponents of Base-12 as a better base for people, especially children, because it has a factor of 3 and thirds have nice clean duodecimal representations. (Base-60 is another fun option; it's also Babylonian approved and how we got 60 minutes and 60 seconds as common unit sizes.)

tsimionescu•1d ago
You get the same problem with 0.44... + 0.55... - I don't think that makes it any easier to anyone who is confused. It's more likely just that 0.33... and 0.66... are very common and simple repeating fractions that lead to this issue.
WorldMaker•1d ago
Sure, I was just pointing out that Base you use for your math does affect how common repeating digits are, based on the available factors in that base.

In Base-12 math, 1/3 = 0.4 and 2/3 = 0.8. With the tradeoff that 1/5 is 0.2497 repeating (the entire 2497 has the repeating over-bar).

Base-10 only has the two main factors 2 and 5, so repeating fractions are much more common in decimal representation, making this overall problem much more common, than compared to duodecimal/dozenal/Base-12 (or even hexadecimal/Base-16). It's interesting that this is a trade-off directly related to the base number of digits we want to express rational numbers in.
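A quick check of those duodecimal claims (a throwaway digit-generating sketch; all digits here stay below ten, so no extra symbols are needed):

  def digits(p, q, base=12, n=8):
      out = []
      for _ in range(n):
          p *= base
          out.append(p // q)   # next base-`base` digit of p/q
          p %= q
      return out

  print(digits(1, 3))   # [4, 0, 0, 0, 0, 0, 0, 0] -> 0.4 in base 12
  print(digits(2, 3))   # [8, 0, 0, 0, 0, 0, 0, 0] -> 0.8 in base 12
  print(digits(1, 5))   # [2, 4, 9, 7, 2, 4, 9, 7] -> 0.(2497) repeating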

leereeves•1d ago
The fact that people are still debating whether 0.9999.... = 1 suggests that one notation is less confusing than the other.

Nobody debates whether 9/9 = 1.

olau•1d ago
I was taught something of the same.

But I think it was misguided. I'll note that 1/3 is not a number, it's a calculation. So more complicated.

And fractions are generally much more complicated than the decimal system. Beyond some simple fractions that you're bound to experience in your everyday life, I don't think it makes sense to drill fractions. In the end, when you actually need to know the answer to a computation as a number, you're more likely to make a mistake because you spend your time juggling fractions instead of handling numerical instability.

Decimal notation used to be impractical because calculating with multiple digits was slow and error-prone. But that's no longer the case.

tsimionescu•1d ago
This is ultimately a matter of definitions, and neither defining the fractions nor the decimals as the "true" representation of rationals is ultimately more or less correct.

But, operations on fractions are definitely easier than operations on decimals. And fractions have the nice property that they have finite representations for all rational numbers, whereas decimal notation requires infinite representations even for very simple numbers, such as 1/3.

Also, if you are going to do arithmetic with infinite decimal representations, then you have to be aware that the rules are more complex than simply doing digit-by-digit operations. That is, 0.77... + 0.44... ≠ 1.11... even though 7+4 = 11. And it gets even more complex for more complicated repeating patterns, such as 0.123123123... + 0.454545... (that is, 123/999 + 45/99). I doubt there is any reason whatsoever to attempt to learn the rules for these things, given that the arithmetic of fractions is much simpler and follows from the rules for division. The fact that a handful of simple cases work in simple ways doesn't make it a good idea to try.
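A concrete version of that point, using exact fractions to do the bookkeeping:

  from fractions import Fraction

  # Naive digit-by-digit addition would suggest 0.77... + 0.44... = 1.11..., but:
  print(Fraction(7, 9) + Fraction(4, 9))          # 11/9, i.e. 1.2222...

  # The messier example above, done the easy way:
  print(Fraction(123, 999) + Fraction(45, 99))    # 2116/3663 = 0.(577668)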

volemo•1d ago
> I'll note that 1/3 is not a number, it's a calculation. So more complicated.

1/3 is a calculation the same way 42 is a calculation (4*10^1 + 2*10^0). Nothing is real except sets containing sets! /j

DemocracyFTW2•14h ago
Yes, true. *BUT* 1/3 is a fraction with denominator 3. 1/5 is a fraction with another denominator, and 1/7 has yet another. So how much is 1/3 + 1/5 + 1/7? You can't just add up, you first have to multiply to get to common ground. The decimal expansions of these use the same base and are readily comparable.
anthk•6h ago
No, they aren't. Adding periodic decimals can yield terrible results. Just... don't.
anthk•1d ago
Rationals are numbers, not calculations. They can evaluate to themselves as members of a set.
anthk•1d ago
Forth and Lisp users often try to use rationals first and floats later. On Scheme Lisps, you have exact->inexact and inexact->exact functions, which convert rationals to floats and vice versa.
Suppafly•1d ago
>So, in this view, 0.3 or 0.333... are not numbers in the proper sense, they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999..., it's just an abuse of the decimal notation.

I wonder if students from Romania are hamstrung in more advanced mathematics from being taught this way.

leereeves•1d ago
I don't think they would be; I think they might even have an advantage. They'd understand that the only numbers that have infinite repeating expansions are rationals, and that decimals are, in general, just approximations.
tsimionescu•1d ago
I don't think decimals, especially repeated decimals or other infinite decimal expansions, show up much if at all in any advanced math subjects (beyond the study of themselves, of course). Higher math is almost exclusively symbolic. You're more likely to need to learn that "1" is just a notation for the set which contains the empty set than to learn that it's OK to add 0.22... + 0.44... = 0.66...
mcphage•5h ago
> we were always taught that fractions are the "real" representation of rational numbers

There is no "real" representation of rational numbers, and fractions are no more real—or fake—than decimals.

> And there simply is no number whose notation would be 0.999...

There is, though. It's 1.

mr_mitm•1d ago
Any confusion about this should go away as soon as you make clear what exactly you are talking about. If you construct the real numbers using Cauchy sequences and define the* decimal representation of a number using a Maclaurin series at x=1/10 then it's perfectly clear that 0.9... and 1.0... are two different representations of the same number. So it's the same equivalence class, but not the same representation. Thus, if you're talking about the representation of the abstract number 1, they're not equal but equivalent. If you're talking about the numbers they represent, they're equal.

* As the example shows, the decimal representation isn't unique, so perhaps we should say "_a_ decimal representation".
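A finite, exact-arithmetic sketch of that equivalence (partial sums standing in for the Cauchy sequences):

  from fractions import Fraction

  def nines(n):
      # n-th element of the sequence 0.9, 0.99, 0.999, ... as an exact fraction
      return sum(Fraction(9, 10**k) for k in range(1, n + 1))

  for n in (1, 5, 10):
      print(n, 1 - nines(n))   # 1/10, 1/100000, 1/10000000000 -> tends to 0

The element-wise difference between the sequence (0.9, 0.99, ...) and the constant sequence (1, 1, ...) goes to zero, which is exactly what puts the two representations in the same equivalence class.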

dagw•1d ago
The intersection between people who are both confused by this and are comfortable working with Cauchy sequences, Maclaurin series and equivalence classes, is probably pretty small.
quchen•1d ago
It baffles me how there are still blogposts with a serious attitude about this topic. It’s akin to discussing possible loopholes of how homeopathy might be medicinally helpful beyond placebo, again and again.

Why are hyperreals even mentioned? This post is not about hyperreals or non-standard math, it’s about standard math, very basic one at that, and then comes along with »well under these circumstances the statement is correct« – well no, absolutely not, these aren’t the circumstances the question was posed under.

We don’t see posts saying »1+2 = 1 because well acktchually if we think modulo 2«, what’s with this 0.9… thing then?

tsimionescu•1d ago
I think it's worse than this. Even with hyperreals, 0.999... = 1, I believe, since they have to obey all laws of arithmetic that are true for the reals. At the very least, 3 × 0.333... = 1, and not 0.999... even for the hyperreals.
qayxc•1d ago
IMHO the confusion arises, because the author failed to recognise that N cannot be a natural number if they go down the nonstandard analysis path. N would have to be elevated to a hyperinteger as well, which would eliminate the infinitesimal they end up with.
Tistron•1d ago
You're saying that 0.999...=1, and simultaneously you are saying that 3 × 0.333... = 1 and not 0.999...

What? How can it be that a=b and a≠c when b=c?

tsimionescu•1d ago
I'm saying that, in the hyperreals as well as the reals, I am 100% certain that 3 × 0.33... = 1. I am not as sure that 0.999... = 1 with the hyperreals, BUT, if it's true as the author claims that 0.99... ≠ 1 in the hyperreals, then it must follow that 3 × 0.33... ≠ 0.99... in the hyperreals.
LiKao•1d ago
I still think that the distinction is very important. With standard math (e.g. real numbers) we obviously have 0.9999... = 1 and this is actually very easy to prove using the assumptions that you are taught during high school math.

However, in higher math you are taught that all this is just based on certain assumptions and it is even possible to let go of these assumptions and replace them with different assumptions.

I think it is important to be clear about the assumptions one is making, and it is also important to have a common set of standard assumptions. Like high school math, which has its standard assumptions. But it is just as possible to make different assumptions and still be correct.

This kind of thinking has very important applications. We are all taught the angle sum in a triangle is 180 degrees. But again this assumes (by default) Euclidean geometry. And while this is sensible, because it makes things easy in day-to-day life, we find that Euclidean geometry almost never applies in real life; it is just a good approximation. The surface of the earth, which requires a lot of geometry, only follows this assumption approximately, and even space doesn't (theory of relativity). If we had never challenged this assumption, then we would never have gotten to the point where we could have GPS.

It is easy to assume that someone is wrong because they got a different result. But it is much harder to put yourself into someone's shoes and figure out if their result is really wrong (i.e. it may contradict their own assumptions or be a non sequitur) or if they are just using different assumptions. And to figure out what these assumptions are and what they entail.

For this assumption: Yes, you can construct systems where 0.9999... != 1, but then you also must use 1/3 != 0.33333... or you will end up contradicting yourself. In fact, when you assume 1 = 0.999999... + eps, then you most likely must also use 1/3 = 0.33333... + eps/3 to avoid contradicting yourself (I haven't proven the resulting axiom system is free of contradiction, this is left as an exercise to the reader).

cbolton•1d ago
The right way to approach this is to ask a question: What does 0.999... mean? What is the mathematical definition of this notation? It's not "what you get when you continue to infinity" (which is not clear). It's the value you are approaching as you continue to add digits.

When applying the correct definition for the notation (the limit of a sequence) there's no question of "do we ever get there?". The question is instead "can we get as close to the target as we want if we go far enough?". If the answer is yes, the notation can be used as another way to represent the target.
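In code, the "as close as we want" reading is just: for any positive tolerance, enough 9s get you within it. A sketch with exact fractions (the helper name nines_needed is made up):

  from fractions import Fraction

  def nines_needed(eps):
      # smallest n with 1 - 0.99...9 (n nines) < eps; the gap is exactly 10**-n
      n = 1
      while Fraction(1, 10**n) >= eps:
          n += 1
      return n

  print(nines_needed(Fraction(1, 1000)))     # 4
  print(nines_needed(Fraction(1, 10**12)))   # 13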

smidgeon•1d ago
Don't say that near Richard Dedekind, he'll cut you.
bubblyworld•1d ago
There's an extremely subtle point here about the hyperreals that the author glosses over (and is perhaps unaware of):

If you take 0.999... to mean the sum of 9/10^n where n ranges over every standard natural, then the author is correct that it equals 1-eps for some infinitesimal eps in the hyperreals.

This does not violate the transfer principle because there are nonstandard naturals in the hyperreals. If you take the above sum over all naturals, then 0.999... = 1 in the hyperreals too.

(this is how the transfer principle works - you map sums over N to sums over N* which includes the nonstandards as well)

The kicker is that as far as I know there cannot be any first-order predicate that distinguishes the two, so the author is on very confused ground mathematically imo.

(not to mention that defining the hyperreals in the first place requires extremely non-constructive objects like non-principal ultrafilters)

im3w1l•1d ago
So something I was thinking of: A number in decimal notation can be seen as a function from the integers to {0,1,2,3,4,5,6,7,8,9} (where negative numbers map to digits left of the decimal point and non-negative to digits right of the decimal point) such that only finitely many negative numbers map to non-zero.

Could you generalize this to include the hyperreals by lifting the restrictions on finitely many, and also adding in some transfinite ordinals to the domain of the function?

bubblyworld•1d ago
I suspect yes - no need to introduce transfinite ordinals, you simply map from the set Z*, which is the integers but including the nonstandard ones. In fact you don't even need to remove the finiteness hypothesis, the transfer principle should guarantee that every hyperreal has such a representation since you can prove that every real does for the standard version.

(if the finiteness thing seems confusing, remember that there are infinitely large nonstandard integers in the hyperreals, and you can't tell them apart from the others "from the inside")

yodsanklai•1d ago
I'd say the key point is to understand the difference between a number and the decimal representation of a number. 0.99999... is one possible representation of the number 1. 1 is another one. Once one understands the definition of the decimal representation, it's just a simple proof to show that 0.99999... = 1.
kypro•1d ago
I'm not a mathematician, but this is always the way I've looked at it too.

We can't represent values like 1/3 precisely in the decimal number system; the best we can do is represent them in a way that makes clear what's implied, with minimal error.

The representation isn't really supposed to be interpreted as an infinite decimal series, and depending on how you interpret 3.333... you could argue it's a slightly different value. And that's plainly obvious – 3.333... != 1/3

throwaway31131•1d ago
I think one other piece is that one needs to understand the number is not being built; the whole representation exists all at once. When I tutor students, the confusion is they think of each 9 like a brick being added to a wall, and for them the wall is never done; that's their argument for why 0.999… doesn't equal 1. Then when you explain numbers don't have a time dimension, they usually get it.
constantcrying•1d ago
>I'd say the key point is to understand the difference between a number, and the decimal representation

But there is no such distinction. In fact the decimal representation is "closer" to a real number than just 1.

>is one possible representation of number 1.

Why? You are just asserting things. You do not even give an argument why that should be the case. Why is 0.999... a representation of 1 and not 0.123?

yodsanklai•19h ago
> But there is no such distinction

Of course there's a distinction. A decimal representation is a sequence of digits, not a number

> Why?

It follows from the definition of the decimal representation and the limit of a geometric sequence.

https://en.wikipedia.org/wiki/Decimal_representation

constantcrying•17h ago
>Of course there's a distinction. A decimal representation is a sequence of digits, not a number

Oh, and what is a real number? Might a real number be a sequence of rationals? Or, more correctly, an equivalence class of Cauchy sequences.

>It boils down from the definition of the decimal representation and the limit of a geometrical sequence.

No, it doesn't. You are assuming the conclusion.

dominicrose•1d ago
I think rational thinking just doesn't work when it comes to infinity math. I'd say the same thing about probabilities.

ps: based on the title I thought this would be about IEEE 754 floats.

HourglassFR•1d ago
I don't get what the author is trying to do here. I mean, he complains that talking about the limit of a sequence is too abstract and unfamiliar to most people, so the explanation is not satisfying. But then he name-drops the notion of an Archimedean group and introduces, with a big ol' handwave, the hyperreals to solve this very straightforward high-school math problem…

Now don't get me wrong, it is nice and good to have blogs presenting these math ideas in an easy if not rigorous way by attaching them to known concepts. Maybe that was the real intent here, the 0.99… = 1 "controversy" is just bait, and I am too out of the loop to get the new meta.

A_D_E_P_T•1d ago
FWIW, there's an old Arxiv paper with this same argument:

https://arxiv.org/abs/0811.0164

It feels intuitively correct is what I'll say in its favor.

400thecat•1d ago
> The belief that 0.x must be less than 1.y makes perfect sense to rational people

what is meant here by this notation 0.x and 1.y ?

dmvjs•1d ago
are there no longer an infinite number of floating point numbers between every two floating point numbers?
anthk•23h ago
Two? It's the same number. There's no number between 0.9r and 1.0
singularity2001•1d ago
Maybe this can be fixed for good using (axiomatic) hyperreal numbers:

0.9̅ = 0.9̂ + ε = 1

For some definition of 0.9̂ = 1 - ε

anthk•1d ago
0.999... -> 1 because you are correcting a supposed carry from a decimal forever. This is close to adding +1 to every odd number ever. No matter how much you try, you will get an even number.
quitit•1d ago
Where school kids tend to get stuck is that they'll hold contradictory views on how fractions can be represented.

First it'll be uncontroversial that ⅓ = 0.333... usually because it's familiar to them and they've seen it frequently with calculators.

However, they'll then get stuck with 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small amount of difference from one".

However here lies the contradiction, because on one hand they accept that 0.333... is equal to ⅓, and not some infinitesimally small amount away from ⅓, but on the other hand they won't extend that standard to 0.999...

Once you tackle the problem of "you have to be consistent in your rules for representing fractions", then you've usually cracked the block in their thinking.

Another way of thinking about it is to suggest that 0.999... is indistinguishable from 1.

Suppafly•1d ago
>However, they'll then get stuck with 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small amount of difference from one".

Honestly teachers are half of the problem because they seem to make a game out of pointing out these sorts of contradictions instead of teaching the idea that you need "to be consistent in your rules for representing fractions".

That and every next step in math classes is the teacher explaining that most of how you were taught to think about math in the previous step was incorrect and you really should think about it this way, only to be told that again the next year.

neeeeeeal•1d ago
This is why I love HN. One post about advanced SQL ACID concepts, the next about mathematics, yet another about history.

What a community.

beyondCritics•1d ago
The explanation is that the number is _not_ the infinite string of characters, but the sum of the scaled digits of the string. This sum is defined as the limit of the partial sums. In Germany, you can understand this in high school.
constantcrying•1d ago
And why does that change anything? No, it comes down to the definition of "=", which is not explained in schools.
constantcrying•1d ago
All these supposed proofs are totally wrong. Students are correctly interpreting them as hand-waving by people who themselves do not have a good answer, because that is exactly the case.

The reason 0.999... and 1 are equal comes down to the definition of equality for real numbers. The informal formulation would be that two real numbers are equal if and only if their difference in magnitude is smaller than every positive rational number.

(Formally, two real numbers are equal iff they belong to the same equivalence class of Cauchy sequences, where two sequences are in the same equivalence class iff their element-wise difference converges to zero, i.e. eventually becomes smaller in magnitude than every positive rational number.)

implements•1d ago
“By definition, there is no real number between 0.9r and 1 therefore they are the same” … was how I heard it explained.
