PLEASE don’t follow the code recipe in the accepted answer
(or the original question)
I’ve started a bit of a crusade to rid the world of dodgy “epsilon” advice, so I figured I might as well write an answer here, rather than just seconding Samuel’s good advice in a comment on the original answer.
The original question here is really about this:
What is Number.EPSILON supposed to be used for?
The accepted answer misses the intent of the example code quoted in the question and assumes it’s supposed to be an “approximately equal” test. That is not what the example code is trying to show!
The accepted answer then goes on to offer some dangerous advice on how to misuse Number.EPSILON. Number.EPSILON MUST NOT be used for any kind of “approximately equal” test!
So, now we seem to have four questions …
(damn, these things are multiplying!)
We’ve got two questions from the original question …
- What is Number.EPSILON supposed to be used for?
- Why would I want to use some code that compares an approximation error with Number.EPSILON?
… and two more raised by the accepted answer …
- Why would I want to avoid comparing numbers with == or === on a real website?
- If Number.EPSILON is no good as a “tolerance”, what should I use?
Q1: What is Number.EPSILON supposed to be used for?
Short answer: it’s just something that uber-nerdy computer scientists might make use of in their calculations.
It is not for mere-mortal programmers. Number.EPSILON is a measure of “approximation error”. To get any practical use out of it, you need to scale it according to the size of the numbers you’re working with.
If you’re not intimately familiar with all the internal workings of floating point numbers, then Number.EPSILON is not for you (frankly I’ve not found a use for it yet in anything I’ve done, so I count myself among the “mere mortals”).
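(For the curious: “scaling” means something like the sketch below. The function name `nearlyEqual` is mine, not a standard API, and this is illustration only – production-grade comparisons need extra care around zero, NaN and Infinity, and this is NOT a recommendation to use it in place of picking your own tolerance.)

```javascript
// Illustration only: scale Number.EPSILON by the magnitude of the
// inputs, so the tolerance grows along with the numbers being compared.
// "nearlyEqual" is a made-up name for this sketch.
function nearlyEqual(a, b) {
  const scale = Math.max(Math.abs(a), Math.abs(b), 1);
  return Math.abs(a - b) < Number.EPSILON * scale;
}

console.log(nearlyEqual(0.1 + 0.2, 0.3));         // true
console.log(nearlyEqual(10000.1 + 0.2, 10000.3)); // true (a raw Number.EPSILON test says "not equal" here)
```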
Q2: Why would I want to use some code that compares approximation errors with Number.EPSILON?
Short answer? You really don’t.
The example code shown in the original question is just super-nerdy proof-of-concept type stuff. It has no practical application (without being expanded on) in a real world program.
All that the Mozilla example does is demonstrate that the loss of precision you’ll get with some numbers smaller than one is less than Javascript’s Number.EPSILON. That does NOT mean you should use Number.EPSILON as a tolerance for approximation errors in numbers larger than 1.0!
For those reading along, the original question is referencing this page on Mozilla’s Development Network site: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/EPSILON – it’s a page that just lists technical properties of Number.EPSILON and doesn’t have any commentary on what it’s supposed to be used for.
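(From memory, the example quoted from that page is something along these lines – paraphrased, so treat it as an approximation of the actual snippet. It demonstrates a property of Number.EPSILON; it is not a recipe for comparing numbers:)

```javascript
// Proof-of-concept in the spirit of the MDN example (paraphrased):
// for operands smaller than 1, the rounding error of 0.1 + 0.2
// happens to be smaller than Number.EPSILON.
function equal(x, y) {
  return Math.abs(x - y) < Number.EPSILON;
}

console.log(equal(0.1 + 0.2, 0.3));       // true
console.log(equal(1000.1 + 0.2, 1000.3)); // false – the "test" falls apart above 1.0
```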
Q3: Why is it buggy to try to compare numbers in JS with == or === ?
Short Answer: Because JavaScript uses floating point numbers, and they are not precise. There will often be small “approximation errors”, and you’ll get “false” answers to things that you would expect to be “true”.
The classic example of this is: 0.1 + 0.2 != 0.3
(this expression is TRUE in JavaScript!)
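You can confirm it in any browser console:

```javascript
// What JavaScript actually computes:
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 + 0.2 != 0.3);  // true – the "classic example" above
```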
I’ve put together a reasonably decent primer on why that “fact” is so, which avoids getting overly technical. If you’re interested, have a look at https://dev.to/alldanielscott/why-floating-point-numbers-are-so-weird-e03
At the end of all the “reasons why” it comes down to this piece of practical advice:
Never use == or === to compare two floating point numbers! Instead check that the two numbers are “close enough” for your liking.
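A minimal sketch of that advice (the function name and the example tolerance are mine, picked purely for illustration):

```javascript
// "Close enough" comparison with an explicit, caller-chosen tolerance.
function closeEnough(a, b, tolerance) {
  return Math.abs(a - b) <= tolerance;
}

// Example: for money, half a cent might be a sensible tolerance.
const TOLERANCE = 0.005;

console.log(closeEnough(0.1 + 0.2, 0.3, TOLERANCE)); // true
console.log(closeEnough(0.10, 0.12, TOLERANCE));     // false
```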
Q4: What’s a good tolerance to use for comparing two floating point numbers?
Short Answer: You need to pick a sensible tolerance for your application. How close do YOU need two numbers to be before you consider them “equal enough”?
How close do values need to be in your program? +/- 0.1? +/- 0.000001? +/- 0.000000001? All of those values are orders of magnitude bigger than Number.EPSILON.
WARNING: If you ever see some very clever code that claims to have solved the problem of “wonky equality” for all floating point numbers and it doesn’t require you to specify your own tolerance, then it is bad advice (regardless of how clever it is).
I’ve gone into the reasons for that at: https://dev.to/alldanielscott/how-to-compare-numbers-correctly-in-javascript-1l4i
The short version is that Number.EPSILON is way too small to use as a fixed tolerance, and you’ll often encounter approximation errors much larger than Number.EPSILON in real-world applications.
Also, you shouldn’t use a constantly changing “tolerance” in your “equality” tests, or they won’t behave like equality tests and you’ll end up with subtle (and annoying) bugs in your program. Use a fixed tolerance, which you decide on, all through your program.**
** or, at least, all through a discrete component of your program – don’t allow two different tolerances to “mix”.
BONUS: Proof that Number.EPSILON is a bad “epsilon”
Want proof that Number.EPSILON is a bad choice? How about this?
for (let i = 1; i < 100000000000000000000; i *= 100) {
    const a = i + 0.1; // Base
    const b = 0.2;     // Additional value
    const c = i + 0.3; // Expected result of a + b
    console.log(
        'is ' + a + ' + ' + b + ' near ' + c + '? ... ' +
        (
            Math.abs(a + b - c) < Number.EPSILON ?
                'yes' :
                'NOPE! DANGIT!!! ... Missed by ' + Math.abs(a + b - c).toFixed(30) + '!'
        )
    );
}
That’ll output this:
is 1.1 + 0.2 near 1.3? ... yes
is 100.1 + 0.2 near 100.3? ... yes
is 10000.1 + 0.2 near 10000.3? ... NOPE! DANGIT!!! ... Missed by 0.000000000001818989403545856476!
is 1000000.1 + 0.2 near 1000000.3? ... NOPE! DANGIT!!! ... Missed by 0.000000000116415321826934814453!
is 100000000.1 + 0.2 near 100000000.3? ... yes
is 10000000000.1 + 0.2 near 10000000000.3? ... NOPE! DANGIT!!! ... Missed by 0.000001907348632812500000000000!
is 1000000000000.1 + 0.2 near 1000000000000.3? ... NOPE! DANGIT!!! ... Missed by 0.000122070312500000000000000000!
is 100000000000000.1 + 0.2 near 100000000000000.3? ... yes
is 10000000000000000 + 0.2 near 10000000000000000? ... yes
is 1000000000000000000 + 0.2 near 1000000000000000000? ... yes
Ouch!!! Things are behaving strangely, with no obvious pattern!
BONUS 2: More Floating-Point Oddities
If you are a “detail-oriented” kind of person, you might notice that the last couple of lines in the output above are missing the .1 and .3 in the expressions being evaluated.
The fractional parts haven’t just been chopped off in the output: they’re not there in the actual numbers being worked on. You’d be forgiven for thinking that the last two expressions should be “out” by 0.2 – but that’s not how floating point numbers work.
I’ll leave it as an exercise for the reader to figure out why those last two expressions come out as “almost-nearly-equal”: it’s not a bug!
Welcome to the world of floating point numbers with limited precision!