Having equality return false on things that it doesn't make sense to compare seems wrong to me. That operation isn't defined to begin with.
By shipping with undefined, JavaScript could have been the only language whose type system makes sense... alas!
Please no, JS devs rely too much on boolean collapse for that. Undefined would pass as falsy in many places, causing hard-to-debug issues (see the sketch below).
Besides, conceptually speaking if two things are too different to be compared, doesn’t that tell you that they’re very unequal?
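For instance, a minimal sketch of the failure mode, assuming a hypothetical comparison that returned undefined for incomparable operands:

    // undefined already collapses to false in boolean contexts, so an
    // "incomparable" result would be indistinguishable from "not equal":
    const result = undefined;   // imagine a comparison returned this
    if (result) {
      console.log("equal");
    } else {
      console.log("not equal... or incomparable? no way to tell");
    }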
It kind of sounds like we need more type coercion because we already have too much type coercion!
I'm not sure what an ergonomic solution would look like though.
Lately I'm more in favour of "makes sense but is a little awkward to read and write" (but becomes effortless once you internalize it because it actually makes sense) over "convenient but not really designed so falls apart once you leave the happy path, and requires you to memorize a long list of exceptions and gotchas."
- In 1985 there were a ton of different hardware floating-point implementations with incompatible instructions, making it a nightmare to write floating-point code once and have it work on multiple machines
- To address the compatibility problem, IEEE came up with a hardware standard that could do error handling using only CPU registers (no software, since it's a hardware standard)
- With that design constraint, they (reasonably imo) chose to handle errors by making them "poisonous": once you have a NaN, all operations on it fail, including equality, so the error state propagates rather than potentially accidentally "un-erroring" if you do another operation, leading you into undefined-behavior territory (see the sketch after this list)
- The standard solved the problem when hardware manufacturers adopted it
- The downstream consequence for software is that if your programming language does anything other than these exact floating-point semantics, the cost is losing hardware acceleration, which makes your floating-point operations way slower
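A quick JS sketch of the "poisonous" propagation described in the list above:

    const x = 0 / 0;         // NaN -- the error enters here
    const y = x + 1;         // NaN -- and taints everything after it
    const z = y * 100;       // NaN
    console.log(z === z);    // false -- even self-equality "fails", so
                             // the error state can't silently vanish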
Or, maybe we could say that our variables just represent some ideal things, and if the ideal things they represent are equal, it is reasonable to call the variables equal. 1.0d0, 1.0, 1, and maybe “1” could be equal.
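JS's loose equality already behaves roughly this way for the last case (quick console check):

    console.log(1 == 1.0);    // true  -- one Number type, one value
    console.log(1 == "1");    // true  -- "1" coerces to the number 1
    console.log(1 === "1");   // false -- strict equality keeps types apart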
That said, I don’t think undefined in JS has the colloquial meaning you’re using here. The tradeoffs would be potentially much more confusing and error prone for that reason alone.
It might be more “correct” (logically; standard aside) to throw, as others suggest. But that would have considerable ergonomic tradeoffs that might make code implementing simple math incredibly hard to understand in practice.
A language with better error handling ergonomics overall might fare better though.
So what always trips me up about JavaScript is that if you make a mistake, it silently propagates nonsense through the program. There's no way to configure it to even warn you about it. (There's "use strict", and there should be "use stricter!")
And this aspect of the language is somehow considered sacred, load-bearing infrastructure that may never be altered. (Even though, with "use strict", we already demonstrated that we have a mechanism for fixing things without breaking them!)
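For example (a quick sketch; strict mode catches some silent mistakes, just not this one):

    "use strict";
    const x = 1 / "one";     // still NaN -- strict mode doesn't flag it
    console.log(x + 1);      // NaN, silently propagating
    // undeclaredVar = 5;    // this, by contrast, would throw a
                             // ReferenceError; sloppy mode would have
                             // silently created a global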
I think the existence of TS might unfortunately be an unhelpful influence on JS's soundness, because now there's even less pressure to fix it than there was before.
NaN is a value of the Number type; I think there are some problems with deciding that Number is not compatible with Number for equality.
We just need another value in the boolean type called NaB, and then NaN == NaN can return NaB.
To complement this, also if/then/else should get a new branch called otherwise that is taken when the if clause evaluates to NaB.
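A toy sketch of that three-valued world in JS (NaB and ifThenElseOtherwise are invented names for the joke, not real APIs):

    const NaB = Symbol("NaB");   // Not-a-Boolean
    const eq = (a, b) =>
      Number.isNaN(a) || Number.isNaN(b) ? NaB : a === b;

    const ifThenElseOtherwise = (cond, then_, else_, otherwise_) =>
      cond === NaB ? otherwise_() : cond ? then_() : else_();

    ifThenElseOtherwise(eq(NaN, NaN),
      () => console.log("then"),
      () => console.log("else"),
      () => console.log("otherwise"));   // prints "otherwise"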
As specified by the standard since its beginning, there are 2 methods for handling undefined operations:
1. Generate a dedicated exception.
2. Return the special value NaN.
The default is to return NaN because this means less work for the programmer, who does not have to write an exception handler, and also because on older CPUs it was expensive to add enough hardware to ensure that exceptions could be handled without slowing down all programs, regardless of whether they generated exceptions or not. On modern CPUs with speculative execution this is not really a problem, because they must be able to discard any executed instruction anyway while running at full speed. Therefore enabling additional reasons for discarding previously executed instructions, e.g. exceptional conditions, just reuses the speculative-execution mechanism.
Whoever does not want to handle NaNs must enable the exception for undefined operations and handle that; in that case no NaNs will ever be generated. Enabling this exception may in any case be needed when one sees unexpected NaNs, for debugging the program.
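For what it's worth, JavaScript only exposes method 2; as far as I know there is no way to enable the invalid-operation exception from script:

    console.log(0 / 0);          // NaN -- no exception raised
    console.log(Math.sqrt(-1));  // NaN -- same quiet behavior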
After all, the WTF lists for JS usually have a stringified NaN somewhere as part of the fun.
    const isNan = (n) => n !== n;   // true only for NaN: the one value not equal to itself
NaN should have been NaVN, not a valid number.
- NaN is a floating point number, and NaN != NaN by definition in the IEEE 754-2019 floating point number standard, regardless of the programming language, there's nothing JavaScript-specific here.
- In JS isNaN(v) returns true for NaN and anything that can't be coerced to a number. And in JS, s * n and n * s return NaN for any non-empty string s and any number n ("" * n returns 0).
Therefore, trying to do math with either (for example, NaN/NaN or inf/inf) was an attempt to pin them down to something tangible and no longer conceptual, and was therefore disallowed.
No? It is easy to verify that `"3" * 4` evaluates to 12. The full answer is that * converts its operands to primitives (with hint "number"), and any string that can be parsed as a number converts to that number; otherwise it converts to NaN.
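Runnable in any JS console:

    console.log("3" * 4);     // 12  -- "3" parses as the number 3
    console.log("abc" * 4);   // NaN -- "abc" can't be parsed as a number
    console.log("" * 4);      // 0   -- the empty string converts to 0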
because a <= b is defined as !(a > b)
then:
5 < NaN // false
5 == NaN // false
5 <= NaN // true
Edit: my bad, this does not work with NaN, but you can try `0 <= null`
Note that == has special rules, so 0 == null does NOT coerce to 0 == 0. If using == null, it only equals undefined and itself.
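Both behaviors side by side:

    console.log(0 <= null);          // true  -- <= coerces null to 0
    console.log(0 == null);          // false -- == null never coerces
    console.log(null == undefined);  // true  -- its only loose match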
While I'm not really against the concept of NaN not equaling itself, this reasoning makes no sense. Even if the standard said "NaN == NaN evaluates to true", there would be no reason why NaN/NaN should necessarily evaluate to 1.
Old-school base R is less type-sensitive and more "do what I mean", but that leads to slowness and bugs. Now we have the tidyverse, which among many other things provides a new generation of faster functions with vectorized C implementations under the hood, but this requires them to be more rigid and type-sensitive. When I want to stick an NA into one of these, I often have to give it the right type of NA (e.g. NA_real_ or NA_character_), or it'll default to the logical NA and I'll get type errors.
That's not true. For example: 0 == 0, but 0/0 != 1.
(See also +Infinity, -Infinity, and -0.)
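A few of their quirks, for reference:

    console.log(1 / 0);             // Infinity
    console.log(-1 / 0);            // -Infinity
    console.log(0 === -0);          // true  -- === can't tell them apart
    console.log(Object.is(0, -0));  // false -- Object.is can
    console.log(1 / -0);            // -Infinity, one way -0 leaks out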
Does Numpy do the same? That’s where I usually meet NaN.
https://en.wikipedia.org/wiki/NaN
Also you even have different kinds of NaN (signalling vs quiet)
While it’s common to see groaning about double-equal vs triple-equal comparison, and eye-rolling directed at absurdly large tables like the one in https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guid... , I think it’s genuinely great that we have the ability to distinguish between concepts like “explicitly not present” and “absent”.
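For instance (an illustrative object; the field names are made up):

    const user = { name: "Ada", middleName: null };  // explicitly "none"
    console.log(user.middleName);       // null      -- set, but empty
    console.log(user.nickname);         // undefined -- never set at all
    console.log("middleName" in user);  // true
    console.log("nickname" in user);    // false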
So, by that logic, if 0 behaved like a number and had a value equal to itself, well, you could accidentally do math with it: 0 / 0 would result in 1...
But as it turns out, 0 behaves like a number, has a value equal to itself, you can do math with it, and 0/0 results in NaN.
JavaScript is a quirky, badly-designed language and I think that is common knowledge at this point.