I have some notes for a fantasy computer which is maybe what would have happened if Chinese people [1] had evolved something like the PDP-10 [2]. Initially I wanted a 24-bit word size [3] but decided on 48-bit [4] because you can fit 48 bits into a double for a JavaScript implementation.
[1] There are instructions to scan UTF-8 characters and the display system supports double-wide bitmap characters that are split into halves that are indexed with 24-bit ints.
[2] It's a load-store architecture, but there are instructions to fetch and write 0 < n < 48 bits out of a word, even overlapping two words, which makes [1] possible; maybe the write part is a little unphysical
[3] I can't get over how a possible 24-bit generation didn't quite materialize in the 1980s, and I find the eZ80 evokes a kind of nostalgia for an alternate history
[4] In the backstory, it started with a 24-bit address space like the 360 but got extended to have "wide pointers" qualified by an address space identifier (instead of the paging-oriented architecture the industry really took to) as well as "deep pointers" which specify a bitmap. 48 bits is enough for a pointer to be deep and wide and have some tag bits. Address spaces can merge together contiguously or not depending on what you put in the address space table.
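The bit-field fetch from [2] can be sketched in a few lines. This is a hypothetical model of the fantasy ISA, not anything specified above: `fetch_bits`, the 48-bit word array `mem`, and the big-endian bit ordering are all my assumptions.

```python
WORD = 48  # fantasy machine word size, per the notes above

def fetch_bits(mem, bit_offset, n):
    # Hypothetical sketch: read n bits (0 < n < 48) starting at an absolute
    # bit offset into an array of 48-bit words, possibly spanning two words.
    word, off = divmod(bit_offset, WORD)
    # Concatenate the word and its successor into one 96-bit chunk,
    # then shift the wanted field down to the bottom and mask it off.
    chunk = (mem[word] << WORD) | mem[(word + 1) % len(mem)]
    return (chunk >> (2 * WORD - off - n)) & ((1 << n) - 1)

mem = [0xFFFFFFFFFFFF, 0x000000000000]
# 16 bits starting at bit 40: the last 8 bits of word 0, first 8 of word 1
print(hex(fetch_bits(mem, 40, 16)))  # 0xff00
```

A real implementation would also need the write side, which is where the "a little unphysical" caveat comes in: an unaligned store has to read-modify-write two words.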
Well... it depends on how you look at it.
While the marketers tried to cleanly delineate generations into 8-, 16-, and 32-bit eras, the reality was always messier. What exactly the "bits" were that were being measured was not consistent. The size of a machine word in the CPU was most common, and perhaps in some sense objectively the cleanest, but the width of the memory bus started to sneak in at times (like the "64-bit" Atari Jaguar with the 32-bit CPU, because one particular component was 64 bits wide). In reality the progress was always more incremental, and there are some 24-bit things: the 286 can use 24 bits to access memory, and a lot of "32-bit graphics" is really 24 bits, because 8 bits each for RGB gets you to 24 bits. The lack of a "24-bit generation" is arguably more about the marketing rhetoric than about a lack of things that were indeed based around 24 bits in some way.
Even today our "64-bit CPUs" are a lot messier than meets the eye. As far as I know, they can't actually address 64 bits of RAM, there are some reserved higher bits, and depending on which extensions you have, modern CPUs may be able to chew on up to 512 bits at a time with a single instruction, and I could well believe someone snuck something that can chew on 1024 bits without me noticing.
But I still see people building systems where implicit conversion of float to int is not allowed because "it would lose precision", but that allow int to float.
[0] don't reply to me about NaNs, please
Yes
They are different types
They are different things
They are related concepts, that is all
The point of the article is that this "however many digits" actually implies rounding many numbers that aren't that big. A single precision (i.e. 32-bit) float cannot exactly represent some 32-bit integers. For example 1_234_567_891f is actually rounded to 1_234_567_936. This is because there are only 23 bits of fraction available (https://en.wikipedia.org/wiki/Single-precision_floating-poin...).
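You can see this rounding directly from Python by round-tripping the value through single precision with `struct` (a small sketch; Python's own floats are doubles, so the `'f'` format is how you get at float32):

```python
import struct

def to_f32(x):
    # Round-trip a value through IEEE single precision (binary32)
    return struct.unpack('f', struct.pack('f', float(x)))[0]

print(int(to_f32(1_234_567_891)))  # 1234567936
```

The low bits were rounded away because the 31-bit value has to squeeze into a 24-bit significand.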
Which of those is better? It depends on the application. All you can do is hope the person who gave you the numbers chose the more appropriate representation, the less painful way to lose fidelity in a representation of the real world. By converting to the other representation you now have lost that fidelity in both ways.
Again, by a similar argument to above, something like this has to be true. You have exactly 32 bits of information either way, and both conversions lose the same amount of information - you end up with a 32-bit representation, but it only represents a domain of 2^27 (or whatever it is) distinct numbers.
This is exactly what's untrue. I said "orders of magnitude", but I could have said bits - they're analogous concepts. Ints with the high 9 bits set to zero lose no information when converting to single precision floating point. The rest lose progressively more information, concentrated in the low order bits, up to 8 bits. Certainly regrettable, but usually manageable. Converting from floats to ints might be representable exactly, but if the number is, say, near 1, then it will be rounded to 1, losing 23 bits of information. If the number is near 0 then it will be rounded to 0, losing 31 bits of information.
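A small sketch of that asymmetry, using a `struct` round-trip through float32 (`to_f32` is my helper, not anything from the thread):

```python
import struct

def to_f32(x):
    # Nearest IEEE single-precision value, widened back to Python's double
    return struct.unpack('f', struct.pack('f', float(x)))[0]

# int -> float32: only low-order bits can be lost
print(int(to_f32(2**24 + 1)))  # 16777216: one low bit gone

# float -> int: all the fraction bits are gone, however close to 2 we were
print(int(1.9999999))          # 1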
IEEE actually has a decimal float format defined. I don’t know if any system uses it, though (my suspicion is it must be pretty niche).
The precision of casting float to int, on the other hand, depends on the input.
And don't get me started on what happens when people do `i64 as usize` and friends
(This is one area where Pascal got it right, including the fact that you should do loops like `for I in low(nuts)..high(nuts)`)
P.S.: 'nuts' was the autocorrect choice that somehow is topical here, so I kept it.
The question is why they added so many NaNs to the spec, instead of just one. Probably for a signal, but who actually uses that?
For IEEE float16 the amount of lacking values due to an entire exponent value being needlessly taken up by NaNs is actually quite blatant.
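The waste is easy to count exhaustively. Assuming the standard binary16 layout (1 sign, 5 exponent, 10 mantissa bits), every pattern with an all-ones exponent and a nonzero mantissa decodes to a NaN:

```python
# Count the binary16 bit patterns that decode to NaN:
# exponent field all ones (0x1F) and mantissa field nonzero
nans = sum(1 for bits in range(1 << 16)
           if (bits >> 10) & 0x1F == 0x1F and bits & 0x3FF)
print(nans, "of", 1 << 16, "patterns are NaN")  # 2046 of 65536
```

That's 2 × 1023 patterns (both signs, every nonzero mantissa), against only 65536 total.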
Here is a visualization I made recently on the density of float32. It seems that float32 is basically just PCM, which was a lossy audio compression exploiting the fact that human hearing has logarithmic sensitivity. I’m not sure why they needed the mantissa though. If you give all 31 bits to the exponent, then normalize it to +/-2^7, you get a continuous version of the same function.
float has higher accuracy around 1.0 than around 2^24. This makes it quite a bit different from PCM, which is fully linear. Which is probably why floating-point PCM keeps its samples primarily between -1.0 and +1.0.
> which was a lossy audio compression
It's not lossy. Your bit depth simply defines the noise floor which is the smallest difference in volume you can represent. This may result in loss of information but at even 16 bits only the most sensitive of ears could even pretend to notice.
> If you give all 31 bits to the exponent, then normalize it to +/-2^7, you get a continuous version of the same function.
You'll extend the range but lose all the precision. This is probably the opposite of what any IEEE 754 user actually wants.
No, it's just it's more natural/intuitive to express algorithms in a normalized range if given the possibility.
Same with floating point RGBA (like in GPUs)
https://gist.github.com/deckar01/3f93802329debe116b0c3570bed...
exponent *= 2 ** (8 - E)
In the E=8 case this is just `* 1`. In the E=31 case this is now `* 2**-23`. Python is going to do all of this for you in the float64 domain. I think it's possible that you haven't graphed what you intended. You also don't have subnormals, infinities, or propagating NaNs. You manage to only retain the signed zero.
EDIT: And the midpoint of your system is 0.5. Which is a little uncomfortable.
Yes, exactly; the linear regions are needed to more evenly distribute precision, while the average precision remains the same. Alternatively, you can omit the mantissa, but use an exponent base much closer to 1 (perhaps 1 + 2⁻²³).
- Most floats are not ints.
- There are the same number of floats as ints.
- Therefore, most ints are not floats.
On the other hand, floating point is the gift that never stops giving.
A recent wtf I encountered was partly caused by the silent/automatic conversion/casting from a float to a double. Nearly all C-style languages do this, even though it's unsafe in its own way. It's kinda obvious to me now, and when I state it like this (using C-style syntax) it looks trivial, but: (double) 0.3f is not equal to 0.3d
The wtfness of it was mostly caused by other factors (involving overloaded method/functions and parsing user input) but I realized that I had never really thought about this case before - probably because floats are less common than doubles in general - and without thinking about it, sort of assumed it should be similar to int to long conversion for example (which is safe).
-Wfloat-conversion
will warn about this. Been bitten by it too. Floats are Sneaky and Not to be Trusted. :D
The float representation of 0.3 (e.g.) does not, when cast to double, represent 0.3 - in contrast the i32 representation of any number when cast to i64 represents the same number.
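A minimal sketch of that contrast in Python (a `struct` round-trip gives the nearest float32; Python's native floats are doubles, so the comparison is exactly the `(double) 0.3f == 0.3d` case):

```python
import struct

# The nearest float32 to 0.3, widened back to a double
f32_of_0_3 = struct.unpack('f', struct.pack('f', 0.3))[0]

print(f32_of_0_3 == 0.3)  # False: widening didn't recover 0.3
print(f32_of_0_3)         # 0.30000001192092896
```

The widening itself is exact; the problem is that the float32 value was never 0.3 to begin with, whereas any i32 bit pattern denotes the same integer after widening to i64.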
Perhaps it is the FORTRAN overhang of engineering education that predisposes folks to using floats when Int32/64 would be fine.
I shudder to think why, in JavaScript, they made Number a float/double instead of an integer. I constantly struggle to coerce JS to work with integers.
You can make the argument that "proper" integers are also bounded in practice by limitations of our universe :)
layer8•7mo ago
The important point is that the arithmetic operators on int perform modulo arithmetic, not the normal arithmetic you would expect on unbounded integers. This is often not explained when first teaching ints.
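A sketch of that modulo behavior, simulating 32-bit two's-complement wraparound in Python (whose native ints are unbounded; `wrap32` is a hypothetical helper, not a stdlib function):

```python
def wrap32(x):
    # Reduce an integer to 32-bit two's complement, as C int32 addition would
    x &= 0xFFFF_FFFF
    return x - 0x1_0000_0000 if x >= 0x8000_0000 else x

INT_MAX = 2**31 - 1
print(wrap32(INT_MAX + 1))  # -2147483648, not 2147483648
```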
MathMonkeyMan•7mo ago
Signed ints behave like the integers in some tiny subset of representable values. Maybe it's something like the interval (-sqrt(INT_MAX), sqrt(INT_MAX)).
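The bound can be checked concretely for 32-bit ints. Assuming the claim means "products of two values in the interval can't overflow", the cutoff is the integer square root of INT_MAX:

```python
import math

INT_MAX = 2**31 - 1
b = math.isqrt(INT_MAX)  # largest b with b*b <= INT_MAX

print(b)                             # 46340
print(b * b <= INT_MAX)              # True: still representable
print((b + 1) * (b + 1) > INT_MAX)   # True: one step out already overflows
```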
LegionMammal978•7mo ago
(Alas, most languages don't expose a convenient multiplicative inverse for their integer types, and it's a PITA to write a good implementation of the extended Euclidean algorithm every time.)
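For what it's worth, Python did grow a convenient spelling of this in 3.8: three-argument `pow` with exponent -1 computes the modular inverse, so there's no need to hand-roll extended Euclid there. Mod a power of two, every odd value is invertible:

```python
M = 2**64
a = 12345              # any odd value is invertible mod a power of two
inv = pow(a, -1, M)    # Python 3.8+: modular multiplicative inverse

print(a * inv % M)     # 1
```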
tialaramex•7mo ago
Take realistic::Rational::fraction(1, 3), i.e. one third. Floats can't represent that, but we don't need a whole lot of space for it; we're just storing the numerator and denominator.
If we say we actually want f64, the 8 byte IEEE float, we get only a weak approximation, 6004799503160661/18014398509481984 because 3 doesn't go neatly into any power of 2.
Edited: An earlier version of this comment provided the 32-bit fraction 11184811/33554432 instead of the 64-bit one.
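Python's `fractions` module shows the same thing without needing the `realistic` crate: `Fraction(1, 3)` stores the ratio exactly, while `Fraction(1/3)` exposes the exact value of the f64 approximation:

```python
from fractions import Fraction

exact = Fraction(1, 3)
approx = Fraction(1 / 3)  # the IEEE double nearest to 1/3, as an exact ratio

print(exact)   # 1/3
print(approx)  # 6004799503160661/18014398509481984
```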
MathMonkeyMan•7mo ago
Tom7 has a good video about this: https://www.youtube.com/watch?v=5TFDG-y-EHs
jcranmer•7mo ago
It's generally accurate to consider floats an acceptable approximation of the [extended] reals, since it's possible to do operations on them that don't exist for rational numbers, like sqrt or exp.
perching_aix•7mo ago
This kinda sent me on a spin, for a moment I thought my whole life was a lie and these functions don't take rationals as inputs somehow. Then I realized you mean rather that they typically produce non-rationals, so the outputs will be approximated.
ants_everywhere•7mo ago
Instead it's more accurate to think of them as being in scientific notation like 1.23E-1.
In this notation it's clearer that they're sparsely populated because some of the 32 bits encode the exponent, which grows and shrinks very quickly.
But yes rationals are reals. It's clear that you can't represent, say, all digits of pi in 32 bits, so the parent comment was not saying that 32 bit floats are all of the reals.
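To make the encoding concrete, here's a sketch that splits a float32's 32 bits into its sign/exponent/mantissa fields (assuming the standard IEEE binary32 layout):

```python
import struct

def f32_fields(x):
    # Bit pattern of the nearest float32: 1 sign, 8 exponent, 23 mantissa bits
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7F_FFFF

print(f32_fields(1.0))    # (0, 127, 0): exponent stored with a bias of 127
print(f32_fields(0.123))  # same 32 bits, but split like "1.23E-1" is
```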
taeric•7mo ago
That said, I'd argue that they are neither reals nor rationals. They are scientific numbers with no syntax to indicate "repeated" tails. Would be like saying that you want someone to represent 1/3 using 6 digits with no bar notation. Best you can do is "0.33333" and that just isn't the same. Moving it so that you have 6 digits with 2 being exponent, you are stuck with "3.333e-01". Which is just different still.
perching_aix•7mo ago
The way I like to frame it, and this will almost certainly not be mathematically rigorous, is that every number a float (as in, the formats defined in IEEE-754) can hold can be sufficiently described as at most a rational, though they cannot represent all rationals, and cannot represent anything "higher" than rationals. That's why I prefer to say they're representing rationals. It's in the sense that they're all representing some rational, not any arbitrary rational.
To do that would require infinite space. E.g. for one third, you'd need infinite binary digits (or even infinite decimal ones, as you say). It's just not how IEEE-754 floats work, as you mention.
This launched me into a research on "perfect numerical accuracy" a while back, and there I did find schemes where you store the numerator and the denominator separately, freeing you from this specific problem. It's a fun topic.
taeric•7mo ago
I don't tend to give much thought to real versus rational. Such that I still find it not that surprising to see people treat floats as reals. Especially since most languages make it hard to represent anything else. To that end, it will take me a bit to really internalize what you are saying here. I think I understand it.
That said, my favorite for the craziness of "old is new" is that the literal "1/3" works to represent a rational in Common Lisp. Indeed, I had originally thought that they had some specific constants for common fractions defined. Nope, they just support rational literals.
And I question if you need infinite space to represent repeated decimals. Strictly, you just need a way to indicate the repeating. No?
taeric•7mo ago
My point was strictly that we have "bar notation" in writing to show that 1.3 is not the same as 1.33 or 1.333 or 4/3. No matter how many 3s you put at the end. I don't know of any similar scheme in computers. I'm assuming it has been tried.
taeric•7mo ago
That is, yes, I know that 1/6 can be used to represent 0.1(6), but if you are already storing something in positional digits, there may have been a benefit to keeping it in positional digits? I'm assuming there was not, in fact, any benefit?
kazinator•7mo ago
(We could think of other representations like 0.12'34 or something where the '34 indicates repeating digits. I've not seen that anywhere either, but it would be easy to implement.)
taeric•7mo ago
Actually getting the overbar, I wasn't too concerned with. Just noting that we have a way to do it on paper that doesn't require using ratios directly. Or infinite paper. :D
At any rate, this also got me thinking about how to do operations on repeated digits. I'm assuming I would have learned something like this years ago, but I don't remember it. At all. I vaguely remember it was awkward to realize that 0.(9) == 1. But I don't recall playing with that too much. It's neat to see you should be able to make the general ideas work out just fine after you account for that: just widen any repeating groups so that they are the same size, and then add. Reduce using the 9 rule.
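The repeating-digit notation converts to an exact fraction with the usual denominator trick (r repeating digits contribute a factor of 10^r - 1). A sketch, where `repeating` is a hypothetical helper for values written as "0.<prefix>(<repeat>)":

```python
from fractions import Fraction

def repeating(prefix, repeat):
    # Exact value of "0.<prefix>(<repeat>)", e.g. repeating("1", "6") = 0.1(6).
    # Standard trick: the repeating block contributes digits/(10^r - 1),
    # shifted past the non-repeating prefix by 10^p.
    p, r = len(prefix), len(repeat)
    head = int(prefix) if prefix else 0
    return Fraction(int(prefix + repeat) - head, (10**r - 1) * 10**p)

print(repeating("", "9"))   # 1: the 0.(9) == 1 identity
print(repeating("1", "6"))  # 1/6
print(repeating("", "3"))   # 1/3
```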
perching_aix•7mo ago
It depends on your data structure, yes. You can also do the separate numerator / denominator thing, and I'm sure there are other ways too. Just that naively if you try representing it, that's when you need infinite space. Or if you do it the way IEEE-754 formats do.
taeric•7mo ago
Agreed that there are other ways, almost certainly. I was going with the assumption that we wanted to store positional values for that question.
That is, it is only when you assume that you have to write out all decimal digits that you get people writing the silly things that computers do all of the time; "0.66666...7" is an easy example. The irony, to me, is that in written assignments you likely would have gotten knocked off points for not writing 0.(6), where () means an overbar. You definitely would lose points for rounding at the end.
pklausler•7mo ago
But in our IEEE-754 modern world, no, it’s not.