
# inconsistent double precision math

I'm seeing a very strange behavior with double precision subtraction.
I'm using csUnit for testing. If I run the test by itself, the test
passes. When I run the batch of tests, the test fails. Here's the
test:

```csharp
[Test]
public void GetAcuteAngleDifference_PI()
{
    double a = 0;
    double b = Math.PI;

    Assert.Equals(b, Euclid.GetAcuteAngleDifference(a, b));
}

/// <summary>
/// Get the smallest acute difference between 2 angles. A positive result
/// shows that b is clockwise of a, while a negative shows that b is
/// counter-clockwise of a.
/// </summary>
/// <returns>smallest acute angle in radians</returns>
public static double GetAcuteAngleDifference(double a, double b)
{
    a = SnapAngle(a); // place the angle between 0 and 2 Pi
    b = SnapAngle(b);

    if (a == b) return 0;

    double result = b - a; ////////// This line gives me grief. //////////

    double absResult = Math.Abs(result);
    double direction = absResult / result;

    if (absResult > Math.PI) // the acute angle is in the opposite direction
        result = direction * (absResult - (2 * Math.PI));

    return result;
}
```

The test is looking for the smallest acute angle between 0 and Pi. The
expected result is Pi.

Like I said, if I run the test by itself, it passes. The 'result'
after the (b - a) subtraction is essentially (Math.PI - 0), the same
as Math.PI = 3.1415926535897931. However, the batch run shows the
'result' variable is slightly bigger, at 3.1415927410125732.

Has anyone else experienced this behavior? What can I do to correct it
or work around it?

BTW, the test is running a Debug build on a 2.8 GHz Pentium 4.

Thanks!

Marshall

Nov 17 '05 #1
Hi Marshall,
I am not sure why your test returns different results when run
individually versus as part of a batch. As we know, floating point
numbers cannot be precise in some cases, because they cannot represent
all possible contiguous numbers. I would pass an epsilon value (a small
error margin) to the Assert.Equals function -- it is available as one of
the overloads -- which would allow your test to pass even when the
numbers are not perfectly identical, as long as they are within the
accepted error margin.
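An epsilon comparison along these lines can be sketched outside any test framework (the NearlyEqual helper and the 1e-6 tolerance below are illustrative choices, not csUnit API):

```csharp
using System;

class ToleranceSketch
{
    // Illustrative tolerance: much larger than double rounding noise
    // near Pi (one ulp is ~4.4e-16), much smaller than a real logic error.
    const double Epsilon = 1e-6;

    static bool NearlyEqual(double expected, double actual)
    {
        return Math.Abs(expected - actual) <= Epsilon;
    }

    static void Main()
    {
        double batchResult = 3.1415927410125732; // the value seen in the batch run

        Console.WriteLine(NearlyEqual(Math.PI, batchResult)); // True: off by ~8.7e-8

        // Curiously, the batch value is exactly Math.PI rounded to single
        // precision -- a hint that something else in the batch run may be
        // lowering the x87 FPU's precision to 32 bits.
        Console.WriteLine((double)(float)Math.PI == batchResult); // True
    }
}
```

With an exact Assert.Equals the ~8.7e-8 discrepancy fails the test; with a tolerance it passes, though the single-precision coincidence may be worth chasing down in its own right.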

Mark R Dawson

"mbelew" wrote:

I'm seeing a very strange behavior with double precision subtraction.
I'm using csUnit for testing. If I run the test by itself, the test
passes. When I run the batch of tests, the test fails. Here's the
test:

[Test]
public void GetAcuteAngleDifference_PI()
{
double a = 0;
double b = Math.PI;

Assert.Equals( b, Euclid.GetAcuteAngleDifference(a, b));
}
/// <summary>
/// Get the smallest acute difference between 2 angles. A positive
result
/// shows that b is clockwise of a, while a negative shows that b is
/// counter clockwise of a.
/// </summary>
/// <returns>smallest accute angle in radians</returns>
public static double GetAcuteAngleDifference(double a, double b)
{
a = SnapAngle(a); //place the angle between 0 and 2 Pi
b = SnapAngle(b);

if (a == b) return 0;

double result = b-a; //////////This line gives me grief.///////

double absResult = Math.Abs(result);
double direction = absResult / result;

if (absResult > Math.PI) //the acute angle is the opposite
direction
result = direction * (absResult - (2 * Math.PI));

return result;
}

The test is looking for the smallest acute angle between 0 and Pi. The
expected result is Pi.

Like I said, if I run the test by iteself, it passes. The 'result'
after the (b-a) subtraction is essentially (Math.PI - 0) and the result
the same as Math.PI = 3.141592 6535897931. However, the batch run
shows 'result' variable is slightly bigger at 3.141592 7410125732.

Has anyone else experienced this behavior? What can I do to correct it
or work around it?

BTW, the test is running a Debug build on a 2.8 Ghz Pentium 4.

Thanks!

Marshall

Nov 17 '05 #2
Hi,

Take a look at Jon Skeet's article about floating point numbers:
http://www.yoda.arachsys.com/csharp/floatingpoint.html
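A minimal illustration of the article's central point -- values that look equal in decimal can differ in their underlying bits (a standalone sketch, not taken from the article):

```csharp
using System;

class BitsSketch
{
    static void Main()
    {
        double sum = 0.1 + 0.2; // rounds to 0.30000000000000004...
        double literal = 0.3;   // rounds to 0.29999999999999998...

        Console.WriteLine(sum == literal); // False

        // Comparing raw bit patterns shows they are adjacent doubles,
        // one ulp apart.
        long sumBits = BitConverter.DoubleToInt64Bits(sum);
        long litBits = BitConverter.DoubleToInt64Bits(literal);
        Console.WriteLine(sumBits - litBits); // 1
    }
}
```

This is the same reason exact equality on a computed (b - a) is fragile in the test above.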
cheers,

--
Ignacio Machin,
ignacio.machin AT dot.state.fl.us
Florida Department Of Transportation

"mbelew" <mb****@koiosworks.com> wrote in message

[original post quoted in full; snipped]

Nov 17 '05 #3

### This discussion thread is closed

Replies have been disabled for this discussion.