"Cristian Martinello" <ca*********@tiscali.it> writes:
> Try this code, and tell me how can I fix it, please.
> var num1=new Number(parseFloat("118.18"))
> var num2=new Number(parseFloat("50"))
> var num3=new Number(parseFloat("50"))
> alert(num1-num2-num3)
Just
alert(118.18-100)
will do the trick.
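For the record, the num1/num2/num3 detour changes nothing: plain literals reproduce the effect, and both forms compute the same double. A quick check (just the literals from the post, nothing else assumed):

```javascript
// The Number/parseFloat wrapping is unnecessary: plain literals
// already show the effect.
var a = 118.18 - 50 - 50;   // what the original code computes
var b = 118.18 - 100;       // the simplified version

// Subtracting the exact integers 50+50 or 100 loses no bits here,
// so both give the same double -- and neither is exactly 18.18,
// because 118.18 itself was never stored exactly.
var same  = (a === b);            // true
var exact = (a === 18.18);        // false
var gap   = Math.abs(a - 18.18);  // tiny, but not zero
```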
> This is very strange...
This is completely normal when calculating with finite-precision,
binary-based floating point numbers.
The computer cannot represent 118.18 exactly. It picks a
representation that is close enough that, when output, it rounds
back to 118.18. Now you subtract 100 from it. That makes the integer
part of the number smaller, so more bits become available for
the fractional part.
Imagine the number is stored as 15 decimal digits, but not exactly:
118.180000000001
When you output it, the least significant digit is rounded away.
Now subtract 100 and still use 15 decimal digits:
18.1800000000010
Suddenly the error is no longer in the least significant digit, so
it isn't rounded away when the number is displayed.
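You can watch the binary version of that happen through JavaScript's own number-to-string conversion, which by specification prints the shortest decimal string that reads back as the same double:

```javascript
// Before the subtraction the binary error is rounded away on
// output: "118.18" is already the shortest string that round-trips.
var shown = String(118.18);            // "118.18"

// After subtracting 100 the same absolute error is too significant
// to hide, so the shortest round-trip string must include it.
var revealed = String(118.18 - 100);
var looksClean = (revealed === "18.18");  // false
```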
The same problem happens with the binary representation, typically
when an operation makes the number smaller.
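As for making the output look right: round for display instead of expecting exact decimal arithmetic. A minimal sketch, assuming two decimal places is the precision you want (the FAQ entry below covers number formatting in more detail):

```javascript
// toFixed returns a *string* rounded to the given number of
// decimals; the tiny binary error disappears in the rounding.
function format2(n) {
  return n.toFixed(2);
}

var pretty = format2(118.18 - 50 - 50);  // "18.18"

// If you need a number rather than a string, round in hundredths:
var cents = Math.round((118.18 - 50 - 50) * 100) / 100;
```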
<URL:http://jibbering.com/faq/#FAQ4_7>
/L
--
Lasse Reichstein Nielsen -
lr*@hotpop.com
Art D'HTML: <URL:http://www.infimum.dk/HTML/randomArtSplit.html>
'Faith without judgement merely degrades the spirit divine.'