"Richard Thrasher" <rl*****@sbcglobal.net> wrote in message
news:03****************************@phx.gbl...
Writing my very first C# program, I found what appears to
be a compiler error. The statement
int temp = (int)(100 * 36.41);
assigns the value 3640 to temp. I've tried this code with
various numbers in place of 36.41, but 36.41 is the only
number I've found that produces the error.
My questions are:
(1) Can anybody else reproduce this error?
(2) How does one submit bug reports to MS?
Well... You may have blundered into one of the big "gotchas" of computer
programming as we know it.
36.41 is a number that cannot be expressed exactly in binary, for the same
reason that 1/3 cannot be expressed exactly in decimal: written out, it would
take an infinite number of digits. (36.41 is 3641/100, and any fraction whose
denominator has a prime factor other than 2 repeats forever in binary.)
So when you write "36.41" you actually get a binary number very slightly
less than 36.41.
Then you multiply it by 100 and get not 3641 but something just under it --
3640.9999999999995 in IEEE 754 double precision.
Then you convert -- and the int conversion (the explicit (int) cast in C#)
truncates toward zero rather than rounding, so you get 3640.
At least that's my educated guess about what's happening.
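You can check this chain of events directly. The sketch below is in Python
rather than C#, purely for illustration: Python's float is the same IEEE 754
double that C# uses for a literal like 36.41, so it reproduces the poster's
result exactly.

```python
x = 36.41                # stored as the nearest IEEE 754 double,
                         # which is a hair *below* 36.41
product = 100 * x        # lands just under 3641
print(product)           # -> 3640.9999999999995
print(int(product))      # int() truncates toward zero, like C#'s
                         # (int) cast -> 3640
```

Note that rounding instead of truncating (e.g. Math.Round in C#) would give
3641 here, which is why the surprise only shows up with a bare cast.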
The .NET "System.Decimal" numeric type overcomes this; it stores a number as
an integer scaled by a power of ten, so 36.41 is represented exactly as 36.41.
--
Michael A. Covington - Associate Director
Artificial Intelligence Center, The University of Georgia
http://www.ai.uga.edu/~mc