Evaluate whether two doubles are equal based on a given precision, not within a certain fixed tolerance

From MSDN:

By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.

Let’s assume 15, then.

So we could say that we want the tolerance to match that same degree of precision.

How many precise figures do we have after the decimal point? That depends on the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with Log10.

Then we divide 1 by 10^precision to get a tolerance at the precision we want.
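To make that concrete with the expected value used below: Log10(1632.4587642911599) is about 3.21, Floor gives 3, so the magnitude is 4. That leaves 15 - 4 = 11 precise figures after the decimal point, and the tolerance works out to 1e-11, comfortably more than the 3.4e-12 difference between the two values.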

Now, you’ll need to do more test cases than I have, but this seems to work:

  double expected = 1632.4587642911599d;
  double actual = 1632.4587642911633d; // really comes from a data import

  // Log10(100) = 2, so to get the magnitude we add 1. Math.Abs guards against
  // negative values, for which Log10 would return NaN.
  int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(Math.Abs(expected)))));
  int precision = 15 - magnitude;

  // 1 / 10^precision, e.g. 1e-11 here.
  double tolerance = 1.0 / Math.Pow(10, precision);

  Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
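
If this comes up in more than one test, the same steps can be wrapped in a small helper. This is only a sketch under the same assumptions as above (NUnit, 15 significant digits by default); the method name and the significantDigits parameter are mine, not something from NUnit:

  // Assumes using System; and using NUnit.Framework;
  public static void AssertEqualToPrecision(double expected, double actual, int significantDigits = 15)
  {
      // Same magnitude calculation as above; Math.Abs copes with negative values.
      int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(Math.Abs(expected)))));
      double tolerance = 1.0 / Math.Pow(10, significantDigits - magnitude);

      Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
  }

Calling AssertEqualToPrecision(expected, actual) reproduces the assertion above, and passing 14 loosens the tolerance in the same way that 14 - magnitude does below.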

It’s late – there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to be 16 - magnitude caused the test to fail. Setting it to 14 - magnitude obviously caused it to pass, as the tolerance was greater.
