Nicholas Paldino [.NET/C# MVP] <mv*@spam.guard.caspershouse.com> wrote:
> I would disagree slightly here.
>
> Your two points are valid, but in this case, the inability of a
> float/double type to faithfully represent a value with an exact
> representation such as "0.99" (as opposed to the decimal
> representation of 1/3, which is 0.33 with the 3 repeating - can't do
> overbars) comes across to me as pretty inaccurate. =)
1/3 is recurring in base 10, but not in every base - in base 3, the
representation is just 0.1. Decimal's failure to represent 0.1 base 3
is exactly the same kind of failure as float/double's failure to
represent 0.1 base 10.
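
To make that symmetry concrete, here's a minimal sketch (the class
name is just for illustration; "G17" prints enough digits to show the
double value actually stored, and the output I'd expect is in the
comments):

using System;

class Symmetry
{
    static void Main()
    {
        // 0.1 (base 10) recurs in binary, so double has to round it.
        Console.WriteLine(0.1.ToString("G17")); // 0.10000000000000001

        // 1/3 (0.1 in base 3) recurs in decimal, so decimal has to
        // round it after 28 significant digits.
        decimal third = 1m / 3m;
        Console.WriteLine(third);            // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m == 1m); // False
    }
}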
The OP happens to want to represent the value 0.99, so for that
*particular* value decimal is exact and float/double aren't - but that
doesn't make it true of the types in general. Would the decimal type
*as a whole* become inaccurate if the OP had wanted to represent 1/3?
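
For the OP's value itself, the asymmetry goes the other way - again
just a sketch, with the output I'd expect in the comments:

using System;

class ParticularValue
{
    static void Main()
    {
        // 0.99 is 99 / 10^2, which fits decimal's
        // integer-mantissa-over-a-power-of-ten form exactly.
        Console.WriteLine(0.99m == 99m / 100m);  // True

        // There's no way to write 0.99 as an integer divided by a
        // power of two, so double stores the nearest value it can.
        Console.WriteLine(0.99.ToString("G17")); // 0.98999999999999999
    }
}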
There's nothing particularly magical about base 10; it's just the base
humans usually happen to use to write down numbers. Maths itself
doesn't care.
Saying that decimal is accurate and float/double are inaccurate
suggests there's a big difference in what they do - there isn't.
They're all floating point types (admittedly with significant
differences in the available mantissa/exponent sizes); it's just that
decimal uses a floating *decimal* point and float/double use a
floating *binary* point.
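
You can actually see the two shapes directly. A sketch, assuming the
standard IEEE 754 layout for double (the masks and the 1075 bias
adjustment are mine, not framework constants):

using System;

class FloatingPoints
{
    static void Main()
    {
        // decimal: value = 96-bit integer mantissa / 10^scale.
        // GetBits exposes the mantissa (three ints) and the scale;
        // for 0.99m the high mantissa words are zero.
        int[] bits = decimal.GetBits(0.99m);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine("decimal: {0} / 10^{1}", bits[0], scale);
        // decimal: 99 / 10^2

        // double: value = 53-bit integer mantissa * 2^exponent.
        long raw = BitConverter.DoubleToInt64Bits(0.99);
        long mantissa = (raw & 0xFFFFFFFFFFFFFL) | (1L << 52); // implicit leading 1 bit
        int exponent = (int)((raw >> 52) & 0x7FF) - 1075;      // unbias; mantissa as integer
        Console.WriteLine("double:  {0} * 2^{1}", mantissa, exponent);
        // double:  8917127262193582 * 2^-53
    }
}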
Saying that decimal is accurate and float/double are sometimes
inaccurate for values which are exactly representable in decimal form
is correct (although I'd use the word "exact" rather than "accurate")
- saying that decimal is accurate as a blanket statement is dodgier,
IMO.
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog:
http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too