I am going to assume IEEE 754 binary floating point arithmetic, with `float` 32 bit and `double` 64 bit.
In general, there is no advantage to doing the calculation in `double`, and in some cases it may make things worse through doing two rounding steps.
Conversion from `float` to `double` is exact. For infinite, NaN, or zero divisor inputs it makes no difference. Given a finite result, the IEEE 754 standard requires the result to be the real-number quotient `f1/f2`, rounded to the type being used in the division.
If it is done as a `float` division, that is the closest `float` to the exact result. If it is done as a `double` division, it will be the closest `double`, with an additional rounding step for the assignment to `result`.
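Here is a minimal C sketch of the two paths (the inputs 1 and 3 are just illustrative values, not from the question; the names `f1` and `f2` follow the text above):

```c
#include <stdio.h>

int main(void) {
    float f1 = 1.0f, f2 = 3.0f;

    /* One rounding: the real quotient 1/3 rounded directly to float. */
    float direct = f1 / f2;

    /* Two roundings: 1/3 rounded to double, then to float on assignment. */
    float via_double = (float)((double)f1 / (double)f2);

    printf("direct:     %.9e\n", direct);     /* 3.333333433e-01 */
    printf("via double: %.9e\n", via_double); /* 3.333333433e-01, same here */
    return 0;
}
```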
For most inputs, the two will give the same answer. Any overflow or underflow that did not happen on the division because it was done in `double` will happen instead on the conversion back to `float`.
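For example, dividing `FLT_MAX` by 0.5 (illustrative values, not from the question) overflows either way, just at different points:

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    float f1 = FLT_MAX, f2 = 0.5f;

    /* float division: the exact quotient 2 * FLT_MAX overflows immediately. */
    float direct = f1 / f2;

    /* double division: about 6.8e38, comfortably finite in double... */
    double wide = (double)f1 / (double)f2;

    /* ...but the overflow just moves to the conversion back to float. */
    float narrowed = (float)wide;

    printf("direct:   %g\n", direct);   /* inf */
    printf("wide:     %g\n", wide);     /* 6.80565e+38 */
    printf("narrowed: %g\n", narrowed); /* inf */
    return 0;
}
```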
For a simple conversion, if the answer is very close to halfway between two `float` values, the two rounding steps may pick the wrong `float`. I had assumed this could also apply to division results. However, Pascal Cuoq, in a comment on this answer, has called attention to a very interesting paper, "Innocuous Double Rounding of Basic Arithmetic Operations" by Pierre Roux, claiming proof that double rounding is harmless for several operations, including division, under conditions that are implied by the assumptions I made at the start of this answer.
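To make the conversion case concrete, here is a small C sketch. It assumes the compiler rounds long decimal constants correctly (typical of modern toolchains, though the C standard only guarantees an adjacent value for constants with this many digits). The constant is 1 + 2^-24 + 10^-25, chosen to sit just above the exact midpoint between two consecutive `float` values:

```c
#include <stdio.h>

int main(void) {
    /* One rounding: the constant goes straight to float, and since it lies
       above the midpoint between 1.0 and 1 + 2^-23, it rounds up. */
    float once = 1.0000000596046447753906251f;

    /* Two roundings: first to double, landing exactly on the midpoint
       1 + 2^-24, then to float, where round-to-nearest-even breaks the
       tie downward to 1.0. */
    float twice = (float)1.0000000596046447753906251;

    printf("once:  %.9e\n", once);  /* 1.000000119e+00 */
    printf("twice: %.9e\n", twice); /* 1.000000000e+00 */
    return 0;
}
```

Note this demonstrates double rounding on a plain constant conversion, not on a division result, which is exactly the case the paper shows to be harmless.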