The NumPy docs give a hint:
For real-valued input, log1p is accurate also for x so small that 1 + x == 1 in floating-point accuracy.
So, for example, let's add a tiny non-zero number to 1.0. Rounding error makes the sum exactly 1.0:
>>> 1e-100 == 0.0
False
>>> 1e-100 + 1.0 == 1.0
True
If we try to take the log of that incorrect sum, we get an incorrect result (compare to WolframAlpha):
>>> np.log(1e-100 + 1)
0.0
But if we use log1p(), we get the correct result:
>>> np.log1p(1e-100)
1e-100
The same principle holds for expm1() and logaddexp(): they're more accurate for small x.