In fact, there is a (rational) number between any two distinct real numbers; therefore your proof attempt only works if you assume that 0.999... equals 1. As that is circular reasoning, it is not a valid proof.
No, his proof is fine. Take the standard definition of > as applied to decimal numbers when they're represented as strings. It's very easy to show that no x simultaneously satisfies x > 0.9999... and 1.0000... > x.
That is indeed true for real numbers, but not for hyper-reals (https://en.m.wikipedia.org/wiki/Hyperreal_number), which is what I had in mind when I originally said that it was not obvious.
---
lcamtuf proposes treating 0.9999... as a representation of a hyperreal number in the halo of 1, which raises some notational questions. The logical extension of the notation in my mind would, given an assumed infinitesimal value epsilon, represent epsilon as 0.000000... -| (barrier between real and infinitesimal decimal places) |- ...00001 . It's not obvious to me how you'd work with the representation 0.000000... | 1000... (what's 1 + 20ε?).
But I don't think my instinct works either, because you can grade the levels of infiniteness more finely than you have room in a countable string to represent. (This is just a gut feeling.)
Going with a more formal definition, you could take the new decimal places as representing coefficients of 10^{-N}, where N is an infinite hypernatural number, but since those have no beginning or end it's not at all clear where you'd position the coefficients.
On the other hand, I don't really see the conceptual problem with treating 0.9999... as a value that is infinitely close to 1. What bothers me the most at that level is the comment somewhere in the thread asking, given the established infinitesimal difference between 0.99999... and 9/9, what's the difference between 0.33333... and 3/9? lcamtuf's derivation works the same way, suggesting that a difference exists. And you can say that it does; you could just declare that rational numbers with prime factors other than 2 and 5 in the denominator have no exact representation in the decimal system.
But there's no demand for this. People don't seem to have the same problem with the idea that 0.33333... is 1/3 as they do with the idea that 0.99999... is 3/3.
“0.999… = 1 - infinitesimal”
But this is simply not true. Only later do they get back to a true statement:
“Inequality between two reals can be stated this way: if you subtract a from b, the result must be a nonzero real number c”.
This post doesn’t clear things up, nor is it mathematically rigorous.
Pointing towards hyperreals is another red herring, because again there 0.999… equals 1.
x = 0.999…
2x = 1.999…
2x - x = 1
x = 1
Multiplying by ten just confused things, and the result doesn't follow for most people.

Going as far as you can imagine, and a little farther, is an infinitesimal of the real infinite.
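The multiply-and-subtract trick used above generalizes to any repeating block. A minimal sketch with Python's exact rationals (the function name is mine, not from the thread):

```python
from fractions import Fraction

def repeating_to_fraction(block):
    # For x = 0.(block) repeating: 10^len(block) * x - x = int(block),
    # so x = int(block) / (10^len(block) - 1).
    n = len(block)
    return Fraction(int(block), 10**n - 1)

print(repeating_to_fraction("9"))    # 0.999...    -> 1
print(repeating_to_fraction("3"))    # 0.333...    -> 1/3
print(repeating_to_fraction("123"))  # 0.123123... -> 41/333
```

Fraction reduces automatically, which is why 9/9 comes out as exactly 1.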
2 × 0.2222...
= 2 × 2/9
= 4/9
= 0.444...
Once you're taught that this is how the numbers work, it's easy(ish) to accept that 0.999... is just a notational trick. At the very least, you're "immune" to certain legit-looking operations, like 0.33... + 0.66...
= 1/3 + 2/3
= 3/3
= 1
Instead of 0.33... + 0.66...
= 0.99...
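Both computations above can be checked mechanically with exact rationals; a quick sketch using the stdlib fractions module:

```python
from fractions import Fraction

print(2 * Fraction(2, 9))               # 4/9  (0.222... doubled is 0.444...)
print(Fraction(1, 3) + Fraction(2, 3))  # 1    (0.333... + 0.666... is exactly 1)
```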
So, in this view, 0.3 or 0.333... are not numbers in the proper sense; they're just a convenient notation for 3/10 and 1/3 respectively. And there simply is no number whose notation would be 0.999...; it's just an abuse of the decimal notation.

With that attitude, how do you handle e.g. pi or sqrt(2), which it's perfectly legitimate to do arithmetic with?
3.1415<pi<3.1416 and 1.4142<sqrt(2)<1.4143, => 4.5557<pi + sqrt(2)<4.5559
=> 4.553 < 4.5557 < pi + sqrt(2) => 4.553 < pi + sqrt(2)
When you're doing something like pi + sqrt(2) ≈ 3.14159 + 1.41421 = 4.5558, you're taking known good approximations of these two real numbers and adding them up. The heavy lifting was done over thousands of years to produce these good approximations. It's not the arithmetic on the decimal representations that's doing the heavy lifting, it's the algorithms needed to produce these good approximations in the first place that are the magic here.
And it would be just as easy to compute this if I told you that pi ≈ 314159/100000, and sqrt(2) ≈ 141421/100000, so that their sum is 455580/100000, which is clearly larger than 4553/1000.
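That interval bookkeeping can be done with exact rationals; a sketch, using the bounds quoted above:

```python
from fractions import Fraction

# Known rational bounds: 3.1415 < pi < 3.1416 and 1.4142 < sqrt(2) < 1.4143.
pi_lo, pi_hi = Fraction(31415, 10000), Fraction(31416, 10000)
s2_lo, s2_hi = Fraction(14142, 10000), Fraction(14143, 10000)

lo, hi = pi_lo + s2_lo, pi_hi + s2_hi     # interval containing pi + sqrt(2)
print(lo, hi)                             # 45557/10000 45559/10000
print(Fraction(4553, 1000) < lo)          # True: 4.553 < pi + sqrt(2)
```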
I'm curious if they had a better one that we don't know of yet—their best known approximation of sqrt(2) is significantly more accurate.
What? The opposite is the case. Anything you want to do something with, you can only measure inaccurately; arithmetic doesn't have any use if you can't apply it to inaccurate measurements. That's what we use it for!
Catastrophic cancellation and other failures are serious issues to consider when doing numerical analysis and can often be avoided completely by using symbolic calculation instead. You can easily end up with wrong results, especially when composing calculations. This would make it difficult to, for example, match your theoretical model against actual measurement results; particularly if the model includes expressions that don't have closed-form solutions.
I prefer comparing it to complex numbers where I can't have "i" apples but I can calculate the phase difference between 2 power supplies in a circuit using such notation.
Nobody really cares about the 3rd decimal place when talking about a speeding car at a turn, but they do when talking about electrons in an accelerator, so accuracy and precision always feel mucky to talk about when dealing with irrationals (again my opinion).
Note that writing sqrt(2) as 1.41 or 1.41421 or any other decimal expansion you might want to write is incorrect: you will always get some roundoff error. If you want to calculate that sqrt(2)*sqrt(2)=2 then you can’t do so by multiplying the decimal expansions.
Sure, if a question asks for the escape velocity from Jupiter, this has an approximate numerical value, but you don't just start by throwing numbers at a wall; you get the simplest equation which represents the value you're interested in, and then evaluate it once you have a single equation for that parameter.
Yes, sqrt(2)*pi has a numerical approximation, but you don't want that right at the start of talking about something like spin orbitals or momenta of spinning disks. Doing the latter compounds errors.
It's no different to keeping around "i"/"j" until you need to express a phase or angle as it's cleaner and avoids compounding accuracy errors.
However, with numbers that have non-repeating infinite decimal expansions, it is completely impossible to do arithmetic in the decimal notation. I'm not exaggerating: it's literally physically impossible to represent on paper the result of doing 3pi in decimal notation in an unambiguous form other than 3pi. It's also completely impossible to use the decimal expansion of pi to compute that pi / pi = 1.
Here, I'll show you what it would be like to try:
pi / pi
= 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172....
Now, of course you can do arithmetic with certain approximations of pi. For example, I can do this: pi / pi
≈ 3.1415 / 3.1415
= 1
Or even 3 × pi
≈ 3 × 3
= 9
But this is not doing arithmetic with the decimal expansion of pi, this is doing arithmetic with rational numbers that are close enough to pi for some purpose (that has to be defined).

say pi/pi;         # 1
say (pi/pi).^name; # Num
import math

result = math.pi / math.pi
print(result)  # 1.0

A bit more long-winded than Raku, but nearly right. FWIW, I want my pi/pi to be 1 (i.e. an Int), not 1.0, but then I'm a purist.
Except we have some fascination with memorizing the digits of pi and having competitions for doing so for some reason.
The fascination is just dick measuring. "I'm smarter than you", for memorizing a longer string? It's quite dumb, but American media loves to use the dumbest possible ways of demonstrating that a character is intelligent, because uh it's really really hard to demonstrate "This person is very intelligent" to a subset of the population that is mostly at a middle school reading level and barely comprehends basic arithmetic, let alone algebra.
Agreed. The schools always seem to have these learning adjacent things that are theoretically supposed to make subjects engaging, but in reality are so disconnected from the subject that they are meaningless.
Telling you otherwise might have worked as an educational "shorthand", but there are no mathematical difficulties as long as you use good definitions of what you mean when you write them down.
The issues people have with 0.333… and 0.999… are due to two things: not understanding what the notation means, and not understanding sequences and limits.
I disagree though that it's necessary or even useful to think of 0.99... or 0.33... as sequences or limits. It's of course possible, but it complicates a very simple concept, in my opinion, and muddies the waters of how we should be using notions such as equality or infinity.
For example, it's generally seen as a bad idea to declare that some infinite sum is actually equal to its limit, because that only applies when the series of partial sums converges. It's more rigorous to say that sigma(1/2^n) for n from 1 going to infinity converges to 1, not that it is equal to 1; or to say that lim(sigma(1/2^n)) for n from 1 to infinity = 1.
So, to say that 0.xxx... = sigma(x/10^n) for n from 1 to infinity, and to show that this is equal to 1 for x = 9, muddies the waters a bit. It still gives this impression that you need to do an infinite addition to show that 0.999... is equal to 1, when it's in fact just a notation for 9/9 = 1.
It's better in my opinion to show how to calculate the repeating decimal expansion of a fraction, and to show that there exists no fraction whose decimal expansion is 0.9... repeating.
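A sketch of that calculation: long division, recording remainders until one recurs (the function name and return convention are mine):

```python
def repeating_decimal(num, den):
    """Return (integer part, non-repeating digits, repeating digits)
    of num/den via long division, detecting when a remainder recurs."""
    whole, rem = divmod(num, den)
    digits, seen = [], {}
    while rem and rem not in seen:
        seen[rem] = len(digits)       # remember where this remainder appeared
        rem *= 10
        d, rem = divmod(rem, den)
        digits.append(str(d))
    if rem == 0:                      # division terminated
        return whole, "".join(digits), ""
    start = seen[rem]                 # cycle starts where the remainder recurred
    return whole, "".join(digits[:start]), "".join(digits[start:])

print(repeating_decimal(1, 3))  # (0, '', '3')   i.e. 0.(3)
print(repeating_decimal(1, 6))  # (0, '1', '6')  i.e. 0.1(6)
print(repeating_decimal(9, 9))  # (1, '', '')    -- no 0.999... is ever produced
```

Note that long division on 9/9 terminates immediately at 1; no fraction yields 0.999... by this procedure.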
Also a possible third thing: not enjoying working in a Base that makes factors of 3 hard to write. Thirds seem like common enough fractions "naturally" but decimal (Base-10) makes them hard to write. It's one of the reasons there are a lot of proponents of Base-12 as a better base for people, especially children, because it has a factor of 3 and thirds have nice clean duodecimal representations. (Base-60 is another fun option; it's also Babylonian approved and how we got 60 minutes and 60 seconds as common unit sizes.)
In Base-12 math, 1/3 = 0.4 and 2/3 = 0.8. With the tradeoff that 1/5 is 0.2497 repeating (the entire 2497 has the repeating over-bar).
Base-10 only has the two main factors 2 and 5, so repeating fractions are much more common in decimal representation, making this overall problem much more common, than compared to duodecimal/dozenal/Base-12 (or even hexadecimal/Base-16). It's interesting that this is a trade-off directly related to the base number of digits we want to express rational numbers in.
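Those duodecimal expansions are easy to check by running long division in an arbitrary base (a sketch; the function name is mine):

```python
def digits_in_base(num, den, base, count):
    """First `count` fractional digits of num/den written in the given base."""
    out = []
    rem = num % den
    for _ in range(count):
        rem *= base
        d, rem = divmod(rem, den)
        out.append("0123456789AB"[d])   # digits up to base 12
    return "".join(out)

print(digits_in_base(1, 3, 12, 4))  # '4000'     -> 1/3 = 0.4 exactly in base 12
print(digits_in_base(1, 5, 12, 8))  # '24972497' -> 1/5 = 0.2497 repeating
print(digits_in_base(1, 3, 10, 4))  # '3333'     -> the familiar base-10 case
```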
Nobody debates whether 9/9 = 1.
But I think it was misguided. I'll note that 1/3 is not a number, it's a calculation. So more complicated.
And fractions are generally much more complicated than the decimal system. Beyond some simple fractions that you're bound to experience in your everyday life, I don't think it makes sense to drill fractions. In the end, when you actually need to know the answer to a computation as a number, you're more likely to make a mistake because you spend your time juggling fractions instead of handling numerical instability.
Decimal notation used to be impractical because calculating with multiple digits was slow and error-prone. But that's no longer the case.
But, operations on fractions are definitely easier than operations on decimals. And fractions have the nice property that they have finite representations for all rational numbers, whereas decimal notation requires infinite representations even for very simple numbers, such as 1/3.
Also, if you are going to do arithmetic with infinite decimal representations, then you have to be aware that the rules are more complex than simply doing digit-by-digit operations. That is, 0.77... + 0.44... ≠ 1.11... even though 7+4 = 11. And it gets even more complex for more complicated repeating patterns, such as 0.123123123... + 0.454545... (that is, 123/999 + 45/99). I doubt there is any reason whatsoever to attempt to learn the rules for these things, given that the arithmetic of fractions is much simpler and follows from the rules for division. The fact that a handful of simple cases work in simple ways doesn't make it a good idea to try.
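Checking those two examples with exact fractions (a sketch):

```python
from fractions import Fraction

a, b = Fraction(7, 9), Fraction(4, 9)      # 0.777... and 0.444...
print(a + b)                               # 11/9 = 1.222..., not 10/9 = 1.111...

c = Fraction(123, 999) + Fraction(45, 99)  # 0.123123... + 0.454545...
print(c)                                   # the exact sum, no digit rules needed
```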
1/3 is a calculation the same way 42 is a calculation (4*10^1 + 2*10^0). Nothing is real except sets containing sets! /j
I wonder if students from Romania are hamstrung in more advanced mathematics from being taught this way.
There is no "real" representation of rational numbers, and fractions are no more real—or fake—than decimals.
> And there simply is no number whose notation would be 0.999...
There is, though. It's 1.
* As the example shows, the decimal representation isn't unique, so perhaps we should say "_a_ decimal representation".
Why are hyperreals even mentioned? This post is not about hyperreals or non-standard math, it’s about standard math, very basic one at that, and then comes along with »well under these circumstances the statement is correct« – well no, absolutely not, these aren’t the circumstances the question was posed under.
We don’t see posts saying »1+2 = 1 because well acktchually if we think modulo 2«, what’s with this 0.9… thing then?
What? How can it be that a=b and a≠c when b=c?
However, in higher math you are taught that all this is just based on certain assumptions and it is even possible to let go of these assumptions and replace them with different assumptions.
I think it is important to be clear about the assumptions one is making, and it is also important to have a common set of standard assumptions. Like high school math, which has its standard assumptions. But it is just as possible to make different assumptions and still be correct.
This kind of thinking has very important applications. We are all taught the angle sum in a triangle is 180 degrees. But again this is assuming (default assumption) euclidean geometry. And while this is sensible, because it makes things easy in day to day life, we find that euclidean geometry almost never applies in real life, it is just a good approximation. The surface of the earth, which requires a lot of geometry only follows this assumption approximately, and even space doesn't (theory of relativity). If we would have never challenged this assumption, then we would have never gotten to the point where we could have GPS.
It is easy to assume that someone is wrong, because they got a different result. But it is much harder to put yourself into someones shoes and figure out if their result is really wrong (i.e. it may contradict their own assumption or be non-sequitur) or if they are just using different assumptions. And to figure out what these assumptions are and what they entail.
For this assumption: Yes, you can construct systems where 0.9999... != 1, but then you also must use 1/3 != 0.33333... or you will end up contradicting yourself. In fact, when you assume 1 = 0.999999... + eps, then you most likely also must use 1/3 = 0.33333... - eps/3 to avoid contradicting yourself (I haven't proven the resulting axiom system is free of contradiction, this is left as an exercise to the reader).
When applying the correct definition for the notation (the limit of a sequence) there's no question of "do we ever get there?". The question is instead "can we get as close to the target as we want if we go far enough?". If the answer is yes, the notation can be used as another way to represent the target.
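That "as close as we want" criterion can be sketched directly: for any positive rational tolerance, enough 9s bring the partial decimal within it (stdlib only; the function name is mine):

```python
from fractions import Fraction

def below(eps_num, eps_den):
    """Smallest n at which |1 - 0.99...9 (n nines)| < eps_num/eps_den.
    The gap after n nines is exactly 1/10^n."""
    eps = Fraction(eps_num, eps_den)
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

print(below(1, 1000))    # 4  -> 0.9999 is already within 1/1000 of 1
print(below(1, 10**12))  # 13 -> and so on for any tolerance
```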
If you take 0.999... to mean the sum of 9/10^n where n ranges over every standard natural, then the author is correct that it equals 1-eps for some infinitesimal eps in the hyperreals.
This does not violate the transfer principle because there are nonstandard naturals in the hyperreals. If you take the above sum over all naturals, then 0.999... = 1 in the hyperreals too.
(this is how the transfer principle works - you map sums over N to sums over N* which includes the nonstandards as well)
The kicker is that as far as I know there cannot be any first-order predicate that distinguishes the two, so the author is on very confused ground mathematically imo.
(not to mention that defining the hyperreals in the first place requires extremely non-constructive objects like non-principal ultrafilters)
Could you generalize this to include the hyperreals by lifting the restrictions on finitely many, and also adding in some transfinite ordinals to the domain of the function?
(if the finiteness thing seems confusing, remember that there are infinitely large nonstandard integers in the hyperreals, and you can't tell them apart from the others "from the inside")
We can't represent values like 1/3 precisely in the decimal number system, the best we can do is represent in a way that it's clear what's implied with minimal error.
The representation isn't really supposed to be interpreted as an infinite decimal series, and depending on how you interpret 3.333... you could argue it's a slightly different value. And that's plainly obvious: 3.333... != 1/3
But there is no such distinction. In fact the decimal representation is "closer" to a real number than just 1.
>is one possible representation of number 1.
Why? You are just asserting things. You do not even give an argument why that should be the case. Why is 0.999... a representation of 1 and not 0.123?
Of course there's a distinction. A decimal representation is a sequence of digits, not a number.
> Why?
It boils down from the definition of the decimal representation and the limit of a geometrical sequence.
Oh, and what is a real number? Might a real number be a sequence of rationals? Or, more correctly, an equivalence class of Cauchy sequences.
>It boils down from the definition of the decimal representation and the limit of a geometrical sequence.
No, it doesn't. You are assuming the conclusion.
ps: based on the title I thought this would be about IEEE 754 floats.
Now don't get me wrong, it is nice and good to have blogs presenting these math ideas in an easy if not rigorous way by attaching them to known concepts. Maybe that was the real intent here; the 0.99… = 1 "controversy" is just bait, and I am too out of the loop to get the new meta.
https://arxiv.org/abs/0811.0164
It feels intuitively correct is what I'll say in its favor.
What is meant here by this notation 0.x and 1.y?
0.9̅ = 0.9̂ + ε = 1
For some definition of 0.9̂ = 1 - ε
First it'll be uncontroversial that ⅓ = 0.333... usually because it's familiar to them and they've seen it frequently with calculators.
However, they'll then get stuck with 0.999... and posit that it is not equal to 1/1, because there must "always be some infinitesimally small amount of difference from one".
However here lies the contradiction, because on one hand they accept that 0.333... is equal to ⅓, and not some infinitesimally small amount away from ⅓, but on the other hand they won't extend that standard to 0.999...
Once you tackle the problem of "you have to be consistent in your rules for representing fractions", then you've usually cracked the block in their thinking.
Another way of thinking about it is to suggest that 0.999.. is indistinguishable from 1.
Honestly teachers are half of the problem because they seem to make a game out of pointing out these sorts of contradictions instead of teaching the idea that you need "to be consistent in your rules for representing fractions".
That and every next step in math classes is the teacher explaining that most of how you were taught to think about math in the previous step was incorrect and you really should think about it this way, only to be told that again the next year.
What a community.
The reason 0.999... and 1 are equal comes down to the definition of equality for real numbers. The informal formulation would be that two real numbers are equal if and only if their difference in magnitude is smaller than every positive rational number.
(Formally, two real numbers are equal iff they belong to the same equivalence class of Cauchy sequences, where two sequences are in the same class iff their element-wise difference eventually becomes smaller in magnitude than every positive rational number.)
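A sketch of that equivalence-class test on two concrete rational sequences, both representing the real number 1 (the names are mine):

```python
from fractions import Fraction

# Two Cauchy sequences of rationals converging to 1:
# a_n = 0.99...9 (n nines) and b_n = 1 - 1/2^n.
def a(n): return 1 - Fraction(1, 10**n)
def b(n): return 1 - Fraction(1, 2**n)

# They are in the same equivalence class because their term-wise gap
# eventually drops below any positive rational tolerance you pick.
eps = Fraction(1, 10**6)
n = 1
while abs(a(n) - b(n)) >= eps:
    n += 1
print(n)  # first index where the gap is below 10^-6
```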
sans_souse•1d ago
Maybe there is a difference, but it's intangible.
Maybe it is to the number line what Planck Length is to measures.
As a non-math-guy, I understand and accept it, but I feel like we can have both without breaking math.
In a non-idealized system, such as our physical reality; if we divide an object into 3 pieces, no matter what that object was we can never add our 3 pieces together in a way that recreates perfectly that object prior to division. Is there some sort of "unquantifiable loss" at play?
So yea, upvoting because I too am fascinated by this and its various connections in and out of math.
bardan•1d ago
0.3... = 1/3
0.6... = 2/3
0.9... = 3/3 (= 1)
LiKao•1d ago
Let's make some different assumptions, not following high school math: When I divide 1 by 3, I always get a remainder. So it would just be as equally valid to introduce a mathematical object representing this remainder after I performed the infinite number of divisions. Then
1/3 = 0.3... + eps / 3
2/3 = 0.6... + 2eps / 3
3/3 = 0.9... + 3eps / 3
and since 0.9... = 1 - eps, we get 3/3 = 0.9... + eps = 1
It's all still sound (I haven't proven this, but so far I don't see any contradiction in my assumptions). And it comes out where 0.9... is not equal to 1. Just because I added a mathematical object that forces this to come out.
Edit: Yes, I am breaking a lot of other stuff (e.g. standard calculus) by introducing this new eps object. But that is not an indicator that this is "wrong", just different from high school math.
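That eps bookkeeping can be sketched as ordered pairs (a, b) read as a + b·eps — a hypothetical toy for tracking the remainders in the comment above, not the actual hyperreals:

```python
from fractions import Fraction

class Eps:
    """Toy pair (a, b) standing for a + b*eps; addition only, for bookkeeping."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Eps(self.a + other.a, self.b + other.b)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __repr__(self):
        return f"{self.a} + ({self.b})*eps"

third = Eps(Fraction(1, 3), Fraction(-1, 3))  # "0.333..." = 1/3 - eps/3
nines = third + third + third                 # "0.999..."
print(nines)                                  # 1 + (-1)*eps, i.e. 1 - eps
print(nines + Eps(0, 1))                      # 1 + (0)*eps, i.e. exactly 1
```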
Suppafly•1d ago
When you cut a cake into 3 slices, there is always a little bit of cake stuck to the knife.