Göran Andersson <gu***@guffa.com> wrote:

> Jon Skeet [C# MVP] wrote:
>> On Sep 16, 12:49 pm, Göran Andersson <gu...@guffa.com> wrote:
>>> You can use a double if you just want better precision, but still want
>>> the performance of a type that is supported by the ALU of the processor.
>>>
>>> A decimal
>
> Obviously I meant a double.

Sorry, it wasn't obvious to me. I thought you really were casting
aspersions on decimal :)

>>> still doesn't represent values exactly, but you get something
>>> like 100000.990000002 rather than 100000.992.
>>
>> Both double and decimal represent exact values - just different sets
>> of values.
>
> What do you mean by that? A double is a floating point type, and
> approximates most values.

Decimal is a floating point type too - just a floating decimal point
instead of a floating binary point. Both types have a set of numbers
they can represent exactly. You can't represent 1/3 exactly in decimal
any more than you can represent 1/5 exactly in binary.
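To illustrate the point (my sketch, not part of the original exchange; the thread is about C#'s double and decimal, but Python's decimal module is also decimal floating point, so the same sets of exactly-representable numbers apply):

```python
from decimal import Decimal

# 1/5 is exact in decimal, but the binary double nearest 0.2 is not 0.2:
print(Decimal(1) / Decimal(5))   # 0.2 (exact)
print(Decimal(0.2))              # the exact value of the double nearest 0.2

# 1/3 is exact in neither base - decimal just rounds to 28 digits by default:
third = Decimal(1) / Decimal(3)
print(third * 3)                 # 0.9999999999999999999999999999, not 1
```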

The reason that decimal "feels" precise whereas binary "feels"
approximate is that humans tend to use a decimal system - we tend to
think that 1/5 is a more "normal" number than 1/3. When you look at it
in a mathematical sense, decimal has more precision and less range than
double, but both are just sets of numbers with conversions and
operations which will approximate the accurate value to the nearest
value in the target set.

>> And decimal certainly *can* represent 100000.992 exactly.
>
> A decimal stores 100000.99 as 100000.99, not as 100000.992.

Indeed - I didn't check the original post, just yours.
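For the record, the difference under discussion is easy to see; this is my illustration using Python's decimal module rather than System.Decimal, but the behaviour is the same - a decimal type holds 100000.99 exactly, while the nearest double does not:

```python
from decimal import Decimal

# Constructing a Decimal from a float exposes the exact value of the
# nearest binary double, which is close to but not exactly 100000.99:
as_double = Decimal(100000.99)
print(as_double)
print(as_double == Decimal("100000.99"))       # False

# Decimal arithmetic on 100000.99 is exact:
print(Decimal("100000.99") + Decimal("0.01"))  # 100001.00
```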

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com