
# Float/Double arithmetic precision error

Hi all,
I have a problem with float/double arithmetic using only the + and -
operators, which gives imprecise results. I would like to know whether the
arithmetic operations are handled internally by C# or whether they are
hardware (processor) dependent. Basic addition/subtraction gives errors, for example:
3.6 - 2.4 = 1.19999999999 or 1.20000000003
These are the erroneous values I'm getting. I'm using C# .NET v1.1. Please
tell me how these operations are handled internally.
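The effect is easy to reproduce. A minimal sketch (not from the original thread) that prints the full stored value of the subtraction using the "R" round-trip format:

```csharp
using System;

class Repro
{
    static void Main()
    {
        double diff = 3.6 - 2.4;

        // Default formatting rounds for display; "R" (round-trip)
        // shows the value actually stored in the double.
        Console.WriteLine(diff.ToString("R"));  // 1.2000000000000002
        Console.WriteLine(diff == 1.2);         // False
    }
}
```

Neither 3.6 nor 2.4 has an exact binary representation, so the subtraction of their nearest representable values lands on the double just above 1.2.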
Nov 15 '05 #1
Floating-point representations are inherently imprecise for many numbers.
This paper does a good job of explaining it:

http://www.physics.ohio-state.edu/~d...point_math.pdf

--
Eric Gunnerson

Visit the C# product team at http://www.csharp.net
Eric's blog is at http://blogs.gotdotnet.com/ericgu/

This posting is provided "AS IS" with no warranties, and confers no rights.

Nov 15 '05 #2

See http://www.pobox.com/~skeet/csharp/floatingpoint.html

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
Nov 15 '05 #3
> I have a problem with float/double arithmetic using only the + and -
> operators, which gives imprecise results.
It may be semantics, but to me the representation of floating-point
numbers on a "binary" machine is the problem, not necessarily the
precision of the number, unless you want to specify exactly how many
decimal places are significant for every floating-point number and for
every calculation that involves floating-point numbers.
> I would like to know whether the arithmetic operations are handled
> internally by C# or whether they are hardware (processor) dependent.
The problems with floating-point representation are not specific to a
particular language. They are specific to representing floating-point
numbers on binary computers. Essentially, you are representing a
continuous quantity in a discrete environment. The practical approach is
to break up the range of values a floating-point number can take into
tiny intervals (the machine precision of the machine the code runs on).
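That interval can be probed directly. A minimal sketch (not from the thread) that halves a candidate epsilon until adding it to 1.0 no longer changes the result:

```csharp
using System;

class MachineEpsilon
{
    static void Main()
    {
        // Halve eps until 1.0 + eps/2 is indistinguishable from 1.0;
        // what remains is the machine precision for double (2^-52).
        double eps = 1.0;
        while (1.0 + eps / 2 != 1.0)
            eps /= 2;

        Console.WriteLine(eps.ToString("R"));
    }
}
```

This prints roughly 2.22E-16, which is 2^-52, the spacing of doubles in the interval [1, 2).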
> Basic addition/subtraction gives errors, for example:
> 3.6 - 2.4 = 1.19999999999 or 1.20000000003
> These are the erroneous values I'm getting.
Before declaring these erroneous, you should specify what precision you
are looking for. They are correct to 10 decimal places. If you want
higher precision, use a double instead of a float. If you want exact
decimal precision, use the System.Decimal type and specify the number of
decimal places.
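With System.Decimal the subtraction comes out exact, because decimal stores base-10 digits and both 3.6 and 2.4 are representable exactly. A minimal sketch:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // decimal stores base-10 digits, so 3.6 and 2.4 are exact
        // and the subtraction introduces no representation error.
        decimal a = 3.6m;
        decimal b = 2.4m;

        Console.WriteLine(a - b);          // 1.2
        Console.WriteLine(a - b == 1.2m);  // True
    }
}
```

The trade-off is a smaller range and slower arithmetic than double, which is why decimal is usually reserved for money and other base-10 quantities.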
> Please tell me how these operations are handled internally.

It is bad programming form to compare floating-point numbers and results
using the traditional "==" operator. You should calculate the machine
precision and then check whether the difference between two
floating-point numbers is smaller than some small value based on that
precision.
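For example, a hypothetical NearlyEqual helper (the name and tolerance here are illustrative, not from the thread) implements that comparison:

```csharp
using System;

class Compare
{
    // Hypothetical helper: treats two doubles as equal when their
    // difference is below a chosen tolerance.
    static bool NearlyEqual(double x, double y, double tolerance)
    {
        return Math.Abs(x - y) < tolerance;
    }

    static void Main()
    {
        double diff = 3.6 - 2.4;

        Console.WriteLine(diff == 1.2);                  // False
        Console.WriteLine(NearlyEqual(diff, 1.2, 1e-9)); // True
    }
}
```

The right tolerance depends on the magnitude of the operands; for values far from 1.0 a relative tolerance (scaled by the larger operand) is a safer choice than a fixed absolute one.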

Hope that helps.

Regards,

Randy
Nov 15 '05 #4

This thread has been closed and replies have been disabled. Please start a new discussion.