Madan,

I had a problem with float/double arithmetic, but only with the + and -
operations, which give imprecise results.

It may be semantics, but to me the representation of floating-point
numbers on a "binary" machine is the problem, not necessarily the
precision of the numbers, unless you want to specify exactly how many
decimal places are significant for every floating-point number and every
calculation that involves one.

I would like to know how the arithmetic operations are handled internally
by C#, or whether they are hardware (processor) dependent.

The problems with floating-point representation are not specific to a
particular language; they come from representing floating-point numbers
on binary computers. Essentially, you are representing a continuous
quantity in a discrete environment. The practical approach is to break
the range of values a floating-point number can take into tiny intervals
(the precision of the machine you are running the code on). In C#, float
and double are the IEEE 754 single- and double-precision formats, and the
arithmetic is ultimately carried out by the processor's floating-point
hardware, so every language on the same hardware sees the same behavior.
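For example, here is a small sketch you can run (the class name and the
printed values are mine, just for illustration) showing that the double
nearest to 3.6 minus the double nearest to 2.4 is not the double nearest
to 1.2:

    using System;

    class RepresentationDemo
    {
        static void Main()
        {
            double diff = 3.6 - 2.4;

            // The "R" (round-trip) format prints the full stored value
            // rather than the shorter default rounding.
            Console.WriteLine("{0:R}", diff);  // something like 1.2000000000000002
            Console.WriteLine(diff == 1.2);    // False on typical IEEE 754 hardware
        }
    }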

Basic arithmetic operation errors, for example:

3.6 - 2.4 = 1.19999999999 or 1.20000000003

These are the erroneous values I'm getting.

Before declaring these erroneous, you should specify what precision you
are looking for; they are correct to 10 decimal places. If you want
higher precision, use a double. If you want exact decimal arithmetic, use
the System.Decimal type and round to the number of decimal places you
need.
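As a rough sketch of the difference (using your example values; the class
name is just for illustration):

    using System;

    class DecimalDemo
    {
        static void Main()
        {
            double d = 3.6 - 2.4;     // binary floating point: inexact here
            decimal m = 3.6m - 2.4m;  // base-10 arithmetic: exact for these values

            Console.WriteLine("double : {0:R}", d);  // something like 1.2000000000000002
            Console.WriteLine("decimal: {0}", m);    // 1.2
            Console.WriteLine(Math.Round(m, 2));     // round to a chosen number of places
        }
    }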

I'm using C# .NET v1.1. Please tell me how these operations are handled
internally.

It is bad programming form to compare floating-point numbers and results
using the traditional "==" operator. You should calculate the machine
precision and then check whether the difference between two
floating-point numbers is smaller than some small tolerance based on that
precision.
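Something like this (NearlyEqual and the 1e-10 tolerance are my own
illustrative choices, not built-in API; pick a tolerance from the
precision your calculation actually needs):

    using System;

    class ToleranceDemo
    {
        // Illustrative helper: treat two doubles as equal when they
        // differ by less than a caller-supplied tolerance.
        static bool NearlyEqual(double x, double y, double tolerance)
        {
            return Math.Abs(x - y) < tolerance;
        }

        static void Main()
        {
            double result = 3.6 - 2.4;

            Console.WriteLine(result == 1.2);                    // False
            Console.WriteLine(NearlyEqual(result, 1.2, 1e-10));  // True
        }
    }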

Hope that helps.

Regards,

Randy