
# FAQ Topic - How do I convert a Number into a String with exactly 2 decimal places?


For example, when formatting money: how to format 6.57634 as 6.58,
6.5 as 6.50, and 6 as 6.00?

Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly. See section 4.7 for Rounding issues.

N = Math.round(N*100)/100 only converts N to a Number whose value is
close to a multiple of 0.01; document.write(N) does not then give
trailing zeroes.
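The point can be seen in a couple of lines (a minimal sketch of the behaviour just described):

```javascript
// Demonstrating the trailing-zero problem with the Math.round trick.
var N = 6.5;
N = Math.round(N * 100) / 100; // still exactly 6.5 as a Number
console.log(String(N));        // "6.5" — not the wanted "6.50"
```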

ECMAScript Ed. 3 (JScript 5.5 [but buggy] and JavaScript 1.5)
introduced N.toFixed; the main problem with it is the bugs in
JScript's implementation.

Most implementations fail with certain numbers, for example 0.07.
The following works for M>0, N>0 (at least M digits before the
decimal point and exactly N after it):

function Stretch(Q, L, c) { // left-pad string Q with c up to length L
  var S = Q;
  if (c.length > 0) {
    while (S.length < L) { S = c + S; }
  }
  return S;
}

function StrU(X, M, N) { // X >= 0.0; M digits before the point, N after
  var T, S = new String(Math.round(X * Number("1e" + N)));
  if (S.search && S.search(/\D/) != -1) { return '' + X; } // give up on overflow/NaN
  S = Stretch(S, M + N, '0');
  return S.substring(0, T = S.length - N) + '.' + S.substring(T);
}

function Sign(X) { return X < 0 ? '-' : ''; }

function StrS(X, M, N) { return Sign(X) + StrU(Math.abs(X), M, N); }

Number.prototype.toFixed = function (n) { return StrS(this, 1, n); };

http://www.merlyn.demon.co.uk/js-round.htm

http://msdn.microsoft.com/library/de...34c0f6d6f0.asp
===
Postings such as this are automatically sent once a day. Their
goal is to answer repeated questions, and to offer the content to
the community for continuous evaluation/improvement. The complete
comp.lang.javascript FAQ is at http://jibbering.com/faq/index.html.
The FAQ workers are a group of volunteers.

Feb 12 '07 #1
On Feb 12, 3:00 am, "FAQ server" <javascr...@dotinternet.be> wrote:
> Rounding of x.xx5 is uncertain, as such numbers are not
> represented exactly.

Another thing to fix - together with the rounding proc - after mine
is done.

1.035 is happily stored in IEEE-754 DP-FP without bit loss.

Same for, say, 1.055 - with a bit loss in IEEE-754 single precision,
but that is irrelevant for IEEE-754 DP-FP topics.

If "exactly" is used in some other special meaning then could anyone
please explain? AFAICT it is some legacy erroneous-results
interpretation to be corrected.

> See section 4.7 for Rounding issues.

That is really an interesting question to solve before a robust
rounding algorithm is released.

For the results check one needs any version of IE installed; others
have to trust me :-)

By spec both JavaScript and JScript implement IEEE-754 DP-FP, and the
same for VBScript Double numbers (?). I'm not totally sure about
VBScript, but with JavaScript/JScript it is taken as given.

I'm taking values from FAQ 4.7 and around them. Sorry for unwanted
line breaks if anyone gets them: the matter requires rather long
lines. So:

Addition 1 : 0.05 + 0.01

A   + 1.1001100110011001100110011001100110011001100110011010 * 2^-5 = 0.05
B   + 1.0100011110101110000101000111101011100001010001111011 * 2^-7 = 0.01

Alignment Step
A   + 1.1001100110011001100110011001100110011001100110011010|000 * 2^-5
B   + 0.0101000111101011100001010001111010111000010100011110|110 * 2^-5
A+B + 1.1110101110000101000111101011100001010001111010111000|110 * 2^-5

Postnormalization Step
A+B + 1.1110101110000101000111101011100001010001111010111000|11 * 2^-5

Possible outcome by the implicit rounding rule:

Round to Zero
A+B + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06

Round to Nearest Even
A+B + 1.1110101110000101000111101011100001010001111010111001 * 2^-5 = 0.060000000000000005

Round to Plus Infinity
A+B + 1.1110101110000101000111101011100001010001111010111001 * 2^-5 = 0.060000000000000005

Round to Minus Infinity
A+B + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06

Actual JS outcome:  0.060000000000000005
Actual VBS outcome: 0.06

----------------------------

Addition 2 : 0.06 + 0.01

A   + 1.1110101110000101000111101011100001010001111010111000 * 2^-5 = 0.06
B   + 1.0100011110101110000101000111101011100001010001111011 * 2^-7 = 0.01

Alignment Step
A   + 1.1110101110000101000111101011100001010001111010111000|000 * 2^-5
B   + 0.0101000111101011100001010001111010111000010100011110|110 * 2^-5
A+B + 10.0011110101110000101000111101011100001010001111010110|110 * 2^-5

Postnormalization Step
A+B + 1.0001111010111000010100011110101110000101000111101011|11 * 2^-4

Possible outcome by the implicit rounding rule:

Round to Zero
A+B + 1.0001111010111000010100011110101110000101000111101011 * 2^-4 = 0.06999999999999999

Round to Nearest Even
A+B + 1.0001111010111000010100011110101110000101000111101100 * 2^-4 = 0.07

Round to Plus Infinity
A+B + 1.0001111010111000010100011110101110000101000111101100 * 2^-4 = 0.07

Round to Minus Infinity
A+B + 1.0001111010111000010100011110101110000101000111101011 * 2^-4 = 0.06999999999999999

Actual JS outcome:  0.06999999999999999
Actual VBS outcome: 0.07
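The two additions can be reproduced directly; any current IEEE-754 JavaScript engine prints the "Actual JS outcome" values:

```javascript
// The engine's own results for the two sums analysed above.
console.log(0.05 + 0.01); // 0.060000000000000005
console.log(0.06 + 0.01); // 0.06999999999999999
```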

-----------------------------

By simple comparison it is obvious that neither J(ava)Script nor
VBScript conforms to the IEEE-754 DP-FP Round to Nearest Even rule -
which is supposed to be the default internal rounding per the
IEEE-754 specs. Moreover it is difficult to say _what_ rounding rule
is the default one in either case. I have an impression that there is
some extra run-time logic added atop of pure IEEE-754.
It is as well possible that I misinterpreted the results.

-----------------------------

The test results are obtained from the two nearly identical pages
below. The type support in VBScript is pretty much bastardized in
comparison to VBA and other more powerful Basic dialects, so to
ensure that there is no hidden downcasting I used a round-about way
via VarType. In either case the JavaScript results alone are rather
strange - again, if I'm right about the production schema.

-----------------------------

<html>
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=iso-8859-1">
<script type="text/javascript">
function init() {
var MyForm = document.forms[0];
MyForm.output.value += 'JS:\n' +
'0.05 + 0.01 = ' + (0.05 + 0.01) +
'\nIEEE-754 Double-Precision Floating-Point number';
DoVBTest(); // in IE, JScript can call VBScript procedures directly
}
</script>

<script type="text/vbscript">
Sub DoVBTest()

Dim MyForm
Dim Result, ResultType
Dim NL

Set MyForm = Document.forms.item(0)

Result = 0.05 + 0.01

If VarType(Result) = 5 Then
ResultType = "(IEEE-754 ?) Double-Precision Floating-Point number"
Else
ResultType = "an under-precision type"
End If

NL = vbNewLine

MyForm.output.value = MyForm.output.value & _
NL & NL & "VBS:" & NL & "0.05 + 0.01 = " & _
Result & NL & ResultType

End Sub
</script>
</head>

<body onload="init()">
<form action="">
<fieldset>
<legend>Output</legend>
<textarea name="output" cols="64" rows="8"></textarea>
</fieldset>
</form>
</body>
</html>

-----------------------------

<html>
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=iso-8859-1">
<script type="text/javascript">
function init() {
var MyForm = document.forms[0];
MyForm.output.value += 'JS:\n' +
'0.06 + 0.01 = ' + (0.06 + 0.01) +
'\nIEEE-754 Double-Precision Floating-Point number';
DoVBTest(); // in IE, JScript can call VBScript procedures directly
}
</script>

<script type="text/vbscript">
Sub DoVBTest()

Dim MyForm
Dim Result, ResultType
Dim NL

Set MyForm = Document.forms.item(0)

Result = 0.06 + 0.01

If VarType(Result) = 5 Then
ResultType = "(IEEE-754 ?) Double-Precision Floating-Point number"
Else
ResultType = "an under-precision type"
End If

NL = vbNewLine

MyForm.output.value = MyForm.output.value & _
NL & NL & "VBS:" & NL & "0.06 + 0.01 = " & _
Result & NL & ResultType

End Sub
</script>
</head>

<body onload="init()">
<form action="">
<fieldset>
<legend>Output</legend>
<textarea name="output" cols="64" rows="8"></textarea>
</fieldset>
</form>
</body>
</html>

Feb 12 '07 #2
VK wrote:
On Feb 12, 3:00 am, FAQ server wrote:
>Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly.

Another thing to fix - together with the rounding proc after
mine is done.

1.035 is happily stored in IEEE-754 DP-FP without bit loss.
If you are going to disagree with everyone about that, the least you
could do is post some sort of demonstration of whatever it was that
resulted in your making that conclusion. Then someone could tell you
which of your misconceptions resulted in your erroneous conclusion.
Same for say 1.055 - with a bit loss on IEEE-754 single-precision but
it is irrelevant for IEEE-754 topics.

> If "exactly" is used in some other special meaning then please
> could anyone explain? AFAICT it is some legacy erroneous results
> interpretation to be corrected.

Exactly means exactly. What you need to do is explain why you think
the statement is incorrect.

<snip>
By simple comparison it is obvious that neither J(ava)Script
nor VBScript are conforming IEEE-754 DP-FP Round To Nearest
Even rule - which is supposed to be the default internal
rounding by IEEE-754 specs. Moreover it is difficult to say
_what_ rounding rule is the default one in either case.
I have an impression that there is some extra run-time logic
added atop of pure IEEE-754. It is as well possible that I
misinterpreted the results.
<snip>

It is damn near certain that you misinterpreted the results. Start
with looking at how javascript transforms a numeric literal in its
source code into an IEEE double precision floating-point number (as I
told you last time, it is in ECMA 262, 3rd Ed. Section 7.8.3). Where
you find similar details for VBScript is a different matter, and
JScript is not without its own bugs, but you will not be able to say
anything useful about operations performed upon IEEE double precision
floating-point numbers until you know what numbers you are really
handling to start with. Until then all this noise from you is a waste
of everyone's time.

Richard.

Feb 12 '07 #3
On Feb 13, 1:58 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
VK wrote:
On Feb 12, 3:00 am, FAQ server wrote:
Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly.
Another thing to fix - together with the rounding proc after
mine is done.
1.035 is happily stored in IEEE-754 DP-FP without bit loss.

If you are going to disagree with everyone about that the least you could
do is post some sort of demonstration of whatever it was that resulted in
our making that conclusion.
I have no intention to disagree with "everyone". I have no intention
to argue with the IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

Can you point where exactly a bit loss may occur?

More probably someone just couldn't find the leading 1 in the
mantissa - because with a non-zero exponent it is _implied_ but not
stored: and from this a false "precision loss" conclusion was drawn.

Feb 12 '07 #4
On Feb 13, 9:46 am, "VK" <schools_r...@yahoo.com> wrote:
On Feb 13, 1:58 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
VK wrote:
On Feb 12, 3:00 am, FAQ server wrote:
>Rounding of x.xx5 is uncertain, as such numbers are not
>represented exactly.
Another thing to fix - together with the rounding proc after
mine is done.
1.035 is happily stored in IEEE-754 DP-FP without bit loss.
If you are going to disagree with everyone about that the least you could
do is post some sort of demonstration of whatever it was that resulted in
our making that conclusion.

I have no intention to disagree with "everyone". I have no intention
to argue with the IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

00111111111100001000111101011100001010001111010111 00001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF FFFFFFFFFFFFFF

Can you point where exactly a bit loss may occur?
When doing certain mathematical operations that are commonly used in
simple rounding algorithms:

var x = 1.035;

// Typical problematic rounding algorithm
var y = Math.round(x * 100) / 100;

// Confusion is caused by...
alert('x = ' + x + '\n'
      + '100x = ' + (100 * x)
      + '\ny = ' + y);
In Firefox I see:

x = 1.035
100x = 103.49999999999999 (or 103.49999999999998 in IE)
y = 1.03
Whereas most would expect y = 1.04

Confusion arises because the apparent anomaly occurs for only certain
cases.
--
Rob

Feb 13 '07 #5
On Feb 12, 11:46 pm, "VK" <schools_r...@yahoo.com> wrote:
On Feb 13, 1:58 am, "Richard Cornford" wrote:
>VK wrote:
>>On Feb 12, 3:00 am, FAQ server wrote:
Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly.
>>Another thing to fix - together with the rounding proc after
mine is done.
>>1.035 is happily stored in IEEE-754 DP-FP without bit loss.
>If you are going to disagree with everyone about that the least you could
do is post some sort of demonstration of whatever it was that resulted in
our making that conclusion.

I have no intention to disagree with "everyone". I have no intention
to argue with the IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
What makes you think that? But even if it does, that bit pattern does
not precisely represent 1.035; it is smaller.
Can you point where exactly a bit loss may occur?
In the bit pattern above (assuming you have reported it accurately),
the mantissa bits set (counting the leftmost as one and increasing to
the right) are 5, 9, 10, 11, 12, 14, 16, 17, 18, 23, 25, 29, 30, 31,
32, 34, 36, 37, 38, 43, 45, 49, 50, 51 and 52. The set bit implied to
the left of the mantissa (and so of the binary point) provides the
value 1, and so these set bits account for the fractional part of the
value, which you are asserting would be 0.035.

The value contributed to the total by each bit is (1/Math.pow(2,
Bit)), where 'Bit' is the number of the bit from the left starting at
one. Thus the first bit would contribute 1/2 to the value if it were
set [(1/Math.pow(2, 1)) or (1/2)], and the next bit ¼ if set.

The bits actually set contribute:-

Bit | 1/Math.pow(2, Bit)
------------------------------------------------------------
 5  | 1/32               == 140737488355328/4503599627370496
 9  | 1/512              == 8796093022208/4503599627370496
10  | 1/1024             == 4398046511104/4503599627370496
11  | 1/2048             == 2199023255552/4503599627370496
12  | 1/4096             == 1099511627776/4503599627370496
14  | 1/16384            == 274877906944/4503599627370496
16  | 1/65536            == 68719476736/4503599627370496
17  | 1/131072           == 34359738368/4503599627370496
18  | 1/262144           == 17179869184/4503599627370496
23  | 1/8388608          == 536870912/4503599627370496
25  | 1/33554432         == 134217728/4503599627370496
29  | 1/536870912        == 8388608/4503599627370496
30  | 1/1073741824       == 4194304/4503599627370496
31  | 1/2147483648       == 2097152/4503599627370496
32  | 1/4294967296       == 1048576/4503599627370496
34  | 1/17179869184      == 262144/4503599627370496
36  | 1/68719476736      == 65536/4503599627370496
37  | 1/137438953472     == 32768/4503599627370496
38  | 1/274877906944     == 16384/4503599627370496
43  | 1/8796093022208    == 512/4503599627370496
45  | 1/35184372088832   == 128/4503599627370496
49  | 1/562949953421312  == 8/4503599627370496
50  | 1/1125899906842624 == 4/4503599627370496
51  | 1/2251799813685248 == 2/4503599627370496
52  | 1/4503599627370496 == 1/4503599627370496
----------------------------------------------------------
Total:- 157625986957967/4503599627370496

- and the total of these represents the fraction part of the number
represented. So:-

fractionalPart == 157625986957967/4503599627370496

- and therefore:-

fractionalPart * 4503599627370496 == 157625986957967

To avoid having to work with fractions here, both sides can be
multiplied by 1000:-

(fractionalPart * 1000) * 4503599627370496 == (157625986957967 * 1000)

- to give:-

(fractionalPart * 1000) * 4503599627370496 == 157625986957967000

The fractional part of 1.035 is 0.035, and multiplying that by 1000
gives 35.

If the bit pattern above precisely represents the number 1.035 then:-

35 * 4503599627370496 == 157625986957967000

But (35 * 4503599627370496) is actually 157625986957967360, which
differs from 157625986957967000 by 360. The bit pattern presented
above _is_not_ the value 0.035, and the value 0.035 cannot be
precisely represented as an IEEE 754 double precision floating-point
number (the next bigger representable number is greater than 0.035).
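Richard's totals can be checked with exact double arithmetic (a sketch; it relies on 1.035 - 1 being exact by the Sterbenz lemma, and on 35 * 2^52 fitting in a double):

```javascript
// The exact numerator of the stored fraction of 1.035, over 2^52.
var fracTimes2to52 = (1.035 - 1) * Math.pow(2, 52);
console.log(fracTimes2to52);        // 157625986957967
console.log(35 * 4503599627370496); // 157625986957967360 (= 35 * 2^52, exact)
// In exact integer arithmetic 157625986957967 * 1000 = 157625986957967000,
// which misses 35 * 2^52 by 360: the stored fraction is not exactly 0.035.
```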
More probably someone just couldn't find the leading 1 in mantissa -
because it is _implied_ but not presented with a non-zero exponent:
and from this a false "precision loss" conclusion was drawn.
Given the evidence I don't think you should be assuming any failure to
understand on the part of anyone else, or any superior understanding
on your part. You are wasting everyone's time wittering on about a
subject that is clearly beyond you.

Richard.

Feb 13 '07 #6
Thanks for the helpful profound analysis - though your
binary-to-decimal conversion seems a bit too labor intensive. The
conventional IEEE <=> decimal pattern would show that I'm wrong in
much fewer steps.

Yes, 1.035 is not representable as a dyadic fraction - thus it cannot
be represented by a finite binary sequence.

Namely the sentence "Rounding of x.xx5 is uncertain, as such numbers
are not represented exactly" states that any decimal number in the
form x.xx5 is not representable as a dyadic fraction. More formally
it could be spelled as:

Any decimal floating-point number having the form x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 to the power
of Y) where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

I'm not sure what it would be - an axiom or a theorem - seems like a
theorem to me, but I am clueless about the proof pattern.

This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :-)
From the other side - and how else ;-) - it is semi-misleading IMO as
it fixates on a very narrow subset of non-dyadic fractions. Placed at
the top of the FAQ it makes you think that "ending on 5" floats are
the main source of evil and rounding errors in IEEE-754.
In fact some innocent-looking decimals such as 0.01 or 9.2 are not
dyadic fractions either - and with much nastier rounding outcomes
than, say, 0.035. Overall a decimal float representable as a dyadic
rational is more of a happy coincidence than something to expect on a
daily run. This is why the implied internal rounding is a vital part
of the IEEE-754 specs.

Note: whoever doesn't like the term "happy coincidence" is welcome to
study the topological group of the dyadic solenoid. I'm humbly
passing on that one.

This way all decimal integers are representable as dyadic rationals
with 2 to the power of 0, so they can be represented by finite binary
sequences: INT == INT/1 == INT/2^0. Respectively, the prevailing
majority of floats is "fragile stuff", being continuously rounded and
adjusted by internal algorithms.

The icing on the cake is that the given dyadic fraction definition
applies to abstract math - thus the binary sequence merely has to be
finite in a formally infinite space. On IEEE-754 DP-FP systems we
have the well finite space of 52 bits in the mantissa part, with the
implied msb set to 1 as a 53rd "hidden" bit.

If a proper term for what I describe below already exists then I will
gladly switch to it.

A 53-dyadic fraction would be one which is fully representable as a
binary sequence where - going from msb to lsb - the distance from the
first set bit to the end of the sequence is less than or equal to 53.

Otherwise, no matter dyadic or not, the number will be stored with
precision loss in IEEE-754 DP-FP.

The final note: with all the "mess" above, as a general rule
IEEE-754 DP-FP numbers are ambiguous. Once stored in IEEE-754 DP-FP
format it is not technically possible to determine whether say
0.6999999999999999 is "IEEE's sh** happens" or something intended.
That gives pretty much freedom to decide what rounding to use to
present final results - i.e. to transform them into string values.
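The ambiguity is easy to exhibit (a minimal sketch): a drifted sum and an intended literal produce two distinct doubles, and nothing in the bits says which one was meant.

```javascript
// Once the bits are stored there is no telling an intended
// 0.06999999999999999 from a drifted 0.07.
var drifted = 0.06 + 0.01;
console.log(drifted === 0.07);                // false — two distinct doubles
console.log(drifted === 0.06999999999999999); // true
```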

Feb 13 '07 #7
By simple comparison it is obvious that neither J(ava)Script nor
VBScript are conforming IEEE-754 DP-FP Round To Nearest Even rule -
which is supposed to be the default internal rounding by IEEE-754
specs. Moreover it is difficult to say _what_ rounding rule is the
default one in either case. I have an impression that there is some
extra run-time logic added atop of pure IEEE-754.
Eric Lippert remains my hero ! :-)

<http://blogs.msdn.com/ericlippert/archive/2005/01/26/361041.aspx>

Once again I see I have a talent for making right conclusions without
having the necessary information - and even sometimes based on wrong
reasoning. IMO it is a sure sign of a genius mind.

P.S. :-) / a.k.a. joking above /

P.P.S. I'm currently going through the entire IEEE-related blog
series; it took some time to get them all together. For whoever is
interested in the matter, the links are:

1 <http://blogs.msdn.com/ericlippert/archive/2005/01/10/350108.aspx>
2 <http://blogs.msdn.com/ericlippert/archive/2005/01/13/352284.aspx>
3 <http://blogs.msdn.com/ericlippert/archive/2005/01/17/354658.aspx>
4 <http://blogs.msdn.com/ericlippert/archive/2005/01/18/355351.aspx>
5 <http://blogs.msdn.com/ericlippert/archive/2005/01/20/357407.aspx>

Feb 13 '07 #8
VK said the following on 2/13/2007 12:29 PM:

<snip>
This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :-)
It is nice to see you finally realized what everybody else already knew.
On both accounts.
--
Randy
Chance Favors The Prepared Mind
comp.lang.javascript FAQ - http://jibbering.com/faq/index.html
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Feb 13 '07 #9
In comp.lang.javascript message
<11**********************@j27g2000cwj.googlegroups.com>,
Mon, 12 Feb 2007 15:46:58, VK <sc**********@yahoo.com> posted:
>On Feb 13, 1:58 am, "Richard Cornford" <Rich...@litotes.demon.co.uk>
wrote:
>VK wrote:
On Feb 12, 3:00 am, FAQ server wrote:
Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly.
Another thing to fix - together with the rounding proc after
mine is done.
1.035 is happily stored in IEEE-754 DP-FP without bit loss.

If you are going to disagree with everyone about that the least you could
do is post some sort of demonstration of whatever it was that resulted in
our making that conclusion.

I have no intention to disagree with "everyone". I have no intention
to argue with the IEEE-754 standards for instance - though the actual
functioning may differ between implementations.

1.035 in IEEE-754 DP-FP form is stored as

0011111111110000100011110101110000101000111101011100001010001111
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
The exponent is 15 bits, not 11.

The EXACT value of that IEEE Double is not 1.035, but
1.0349999999999999200639422269887290894985198974609375

In Javascript,
String(1.035 - 1.0) == "0.03499999999999992"
String(1.035)       == "1.035"

--
(c) John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Delphi 3? Turnpike 6.05
<URL:http://www.bancoems.com/CompLangPascalDelphiMisc-MiniFAQ.htmclpdmFAQ;
<URL:http://www.borland.com/newsgroups/guide.htmlnews:borland.* Guidelines
Feb 13 '07 #10
On Feb 13, 4:17 pm, Dr J R Stockton <reply0...@merlyn.demon.co.uk>
wrote:
> The exponent is 15 bits, not 11.
In IEEE-754 DP-FP the exponent is 11 bits, biased by 1023 to
represent both positive and negative powers, so the actual value to
apply is ExpValue - 1023.

01111111111 == 1023, thus the actual value is 1023 - 1023 = 0: a zero
exponent.
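The bias arithmetic is quickly checked in JavaScript itself (a sketch of the decoding step):

```javascript
// The 11 stored exponent bits for 1.035, biased by 1023.
var field = parseInt('01111111111', 2); // 1023
console.log(field - 1023);              // 0 — i.e. a scale of 2^0
```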
> The EXACT value of that IEEE Double is not 1.035
> In Javascript,
> String(1.035 - 1.0) == "0.03499999999999992"
> String(1.035)       == "1.035"
The first is the rounding result of the actual value. The second one
is a "convenience cheat" added into the calculator logic atop of the
IEEE-754 DP-FP rules. Otherwise the life of a regular user would
become a really confusing hell.

Feb 13 '07 #11
VK <sc**********@yahoo.com> wrote:
> Thanks for the helpful profound analysis - though your
> binary-to-decimal conversion seems a bit too labor
> intensive.

vaunted next rounding function will be as big a joke as your last effort.

> The conventional IEEE <=> decimal pattern would show that
> I'm wrong in much lesser steps.

So taking a few simple steps beforehand may have saved you: had they
been taken before posting in response to my request that you justify
your nonsense assertion, you could have avoided compounding your
error with repetition.

> Yes, 1.035 is not representable as a dyadic fraction - thus
> it cannot be represented by a finite binary sequence.
<snip>

Hasn't someone already mentioned that to you, a dozen times or so by now.

Richard.

Feb 13 '07 #12
In comp.lang.javascript message
<11*********************@h3g2000cwc.googlegroups.com>,
Tue, 13 Feb 2007 09:29:41, VK <sc**********@yahoo.com> posted:
>
Yes, 1.035 is not representable as a dyadic fraction - thus it cannot
be represented by a finite binary sequence.

Namely the sentence "Rounding of x.xx5 is uncertain, as such numbers
are not represented exactly" states that any decimal number in form of
n.nn5 is not representable as a dyadic fraction. More formally it
could be spelled as:

Any decimal floating-point number having form of x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 in power of Y)
where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

I'm not sure what would it be - an axiom or a theorem - seems like a
theorem to me, but clueless about proof pattern.

This way the sentence from the FAQ is formally correct - and I was
factually wrong. Everyone is allowed to enjoy :-)

It's interesting that you claim to have shown that the sentence in the
FAQ is factually correct, because it is actually not completely correct.

To be accurate, it should say "Rounding of x.xx5 is generally uncertain,
as most such numbers are not represented exactly." (addition of
generally & most).

Until the integer part x gets very large, x.125, x.375, x.625 and
x.875 are each represented exactly in an IEEE Double. If you had
sufficiently studied what the FAQ links to, you might have been aware
of that.

Those numbers will always round in the manner which a fairly[*]
simple-minded appreciation of the situation would lead you to
believe.
[*] Having read ISO 16262 15.8.2.15.
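Dr Stockton's point can be checked in any modern engine; a sketch contrasting an exact x.125 case with the inexact 1.035:

```javascript
// x.125-style values are exact doubles, so scaling is exact and the
// usual rounding trick behaves predictably — unlike x.xx5 values
// such as 1.035.
console.log(1.125 * 100);                   // 112.5 — exact
console.log(Math.round(1.125 * 100) / 100); // 1.13
console.log(1.035 * 100);                   // 103.49999999999999
console.log(Math.round(1.035 * 100) / 100); // 1.03, not 1.04
```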

--
(c) John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v6.05 IE 6.
Web <URL:http://www.merlyn.demon.co.uk/- FAQish topics, acronyms, & links.
I find MiniTrue useful for viewing/searching/altering files, at a DOS prompt;
free, DOS/Win/UNIX, <URL:http://www.idiotsdelight.net/minitrue/>
Feb 13 '07 #13
In comp.lang.javascript message <11*********************@p10g2000cwp.goo
glegroups.com>, Tue, 13 Feb 2007 11:54:39, VK <sc**********@yahoo.com>
posted:
>On Feb 13, 4:17 pm, Dr J R Stockton <reply0...@merlyn.demon.co.uk>
wrote:
>The exponent is 15 bits, not 11.

On IEEE-754-DP-FP the exponent is 11 bits
Correct - I was thinking of the Extended ten-byte IEEE type.

--
(c) John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v6.05 IE 6.
Web <URL:http://www.merlyn.demon.co.uk/- FAQish topics, acronyms, & links.
I find MiniTrue useful for viewing/searching/altering files, at a DOS prompt;
free, DOS/Win/UNIX, <URL:http://www.idiotsdelight.net/minitrue/>
Feb 14 '07 #14
On Feb 13, 10:41 pm, Dr J R Stockton <reply0...@merlyn.demon.co.uk>
wrote:
It's interesting that you claim to have shown that the sentence in the
FAQ is factually correct, because it is actually not completely correct.

To be accurate, it should say "Rounding of x.xx5 is generally uncertain,
as most such numbers are not represented exactly." (addition of
generally & most).

Until the integer part x gets very large, x.125 x.375 x.625 x.875 are
each represented exactly in an IEEE Double. If you had sufficiently
studied what the FAQ links to, you might have been aware of that.
In my free time I was studying IEEE papers and related math topics. I
had not had so much pure math since my bachelory tortures way over a
decade ago :-) First started for the FAQ arguments, then simply went
curious. I never had to read so much incomplete or erroneous junk
since I was studying the prototype matter in javascript.

Neither 1.035 nor 1.375 nor 1.625 can be represented as a dyadic
fraction, so by definition they cannot be stored as finite binary
sequences. So in order to argue with the quoted statement one has to
do either of two things:

-1-
Prove wrong the "1st VK's theorem" which is - and I quote -

Any decimal floating-point number having the form x.xx5 cannot be
represented as a rational number X/2^Y (X divided by 2 to the power
of Y) where:
X is an integer and
Y is a natural number in the CS sense, thus including 0

-2-
Prove wrong the underlying lemma (not mine!) that "Any non-dyadic
fraction cannot be represented as a finite binary sequence". I really
hope the 2nd will not happen, as in this case a good part of current
math would go to the trash can :-)
P.S. I also asked at comp.arch.arithmetic as my mind - however
brilliant it would be :-) - still needs an extra check. See:
d27030f7c099d140>

Feb 14 '07 #15
On Feb 14, 6:37 pm, "VK" <schools_r...@yahoo.com> wrote:
On Feb 13, 10:41 pm, Dr J R Stockton wrote:
>It's interesting that you claim to have shown that the sentence in the
FAQ is factually correct, because it is actually not completely correct.
>To be accurate, it should say "Rounding of x.xx5 is generally uncertain,
as most such numbers are not represented exactly." (addition of
generally & most).
>Until the integer part x gets very large, x.125 x.375 x.625 x.875 are
each represented exactly in an IEEE Double. If you had sufficiently
studied what the FAQ links to, you might have been aware of that.

At my free time I was studying IEEE papers and related math topics.
Just looking at the documents does not qualify as 'studying'.

> Did not have so much pure math ever since my bachelory tortures
> way over a decade ago :-)

In the past you have frequently demonstrated an inability to do basic
arithmetic.

> First started for the FAQ arguments, then simply went curious.
> ever since I was studying prototype matter in javascript.

So once again everyone else is wrong because the VK understanding
must be correct?
1.035 nor 1.375 nor 1.625 cannot be represented as dyadic fraction
so by definition cannot be stored as finite binary sequences.
So in order to argue with the quoted statement one has to do either
of two things:

-1-
Prove wrong the "1st VK's theorem" which is - and I quote -
A theorem is proven wrong when one single example is demonstrated to
contradict it.
Try - 0.125 - as it has that form.
cannot be represented as a rational number X/2^Y
(X divided by 2 in power of Y)
where:
X is an integer
The integer - 1 - will do in this case.
and
Y is an natural number in CS sense thus including 0
Is - 3 - natural enough for you?

1/(2 to the power of 3) is 1/8, which is also 0.125, and 0.125 has the
form x.xx5.

Now can you give up wasting people's time with this stream of
inaccurate and uninformed posts and either write and post the rounding

<snip>
P.S. I also asked at comp.arch.arithmetic as my mind - however
brilliant it would be :-) - still needs an extra check. See:
d27030f7c099d140>
You didn't manage to learn anything from that exchange because your
statements about what is happening in ECMAScript were false (and/or
incoherent), so the response you elicited is no more than an
impression based upon false data.

Richard.
Feb 14 '07 #16
A theorem is proven wrong when one single example is demonstrated to
Correct: that is not strict but often the quickest way of proving.
Try - 0.125 - as it has that form.
OK
(X divided by 2 in power of Y)
where:
X is an integer

The integer - 1 - will do in this case.
OK
and
Y is an natural number in CS sense thus including 0

Is - 3 - natural enough for you?
Perfect
1/(2 to the power of 3) is 1/8, which is also 0.125 and 0.125 has the
form x.xx5.
Congratulations! You just successfully constructed a valid "negating
case" for the 1st VK's theorem. Alas the Academy of Science didn't set
a prize yet for this theorem, so nothing but verbal congratulations so
far.
:-)

This way Dr.Stockton's correction should go into production on the
next scheduled update:
is:
"Rounding of x.xx5 is uncertain, as such numbers are not represented
exactly."
should be:
"Rounding of x.xx5 is often uncertain, as the majority of such numbers
is not represented exactly." (can be a better wording of course).
Now can you give up wasting people's time with this stream of
inaccurate and uninformed posts and either write and post the rounding
function or admit the task is beyond you.
First I want to disambiguate what "is to God and what is to Caesar" -
so I want to define what is coming out from the core of IEEE-754-DP-FP
and what is per-implementation heuristic added on top of it. Some may be
happy with patching black box outcomes: I do not feel comfortable with
it.
P.S. I also asked at comp.arch.arithmetic as my mind - however
brilliant it would be :-) - still needs an extra check. See:
d27030f7c099d140>

You didn't manage to learn anything from that exchange because your
statements about what is happening in ECMAScript were false (and/or
incoherent), so the response you elicited is no more than an
impression based upon false data.
My question was not about my "1st VK's theorem" but about 1.035 and the possibility to get back the
original value despite it was not stored exactly, namely:
var probe = 1.035;

btw the fact that no one pointed to the 0.125 case suggests that the
respondents might not be attentive or knowledgeable enough, so the
point needs a bit more study.

Feb 14 '07 #17
On Feb 14, 10:46 pm, "VK" <schools_r...@yahoo.com> wrote:
This way Dr.Stockton's correction should go into production on the
next scheduled update:
is:
"Rounding of x.xx5 is uncertain, as such numbers are not represented
exactly."
should be:
"Rounding of x.xx5 is often uncertain, as the majority of such numbers
is not represented exactly." (can be a better wording of course).
The funny thing is that the JRS correction was initially right by
itself - despite that the spelled-out rationale behind it was completely
wrong.

Yet another proof that such things happen in the real life.

Feb 14 '07 #18
VK <sc**********@yahoo.com> wrote:
On Feb 14, 10:46 pm, "VK" <schools_r...@yahoo.com> wrote:
>This way Dr.Stockton's correction should go into production
on the next scheduled update:
is:
"Rounding of x.xx5 is uncertain, as such numbers are not
represented exactly."
should be:
"Rounding of x.xx5 is often uncertain, as the majority of
such numbers is not represented exactly." (can be a better
wording of course).

The funny thing is that the JRS correction was initially
right by itself
Not as humorous as your not being able to tell that he was correct from
what he wrote, and instead deciding to embarrass yourself even more with
yet another nonsense post.
- despite that the spelled-out rationale behind it was
completely wrong.
Dr Stockton's post included no 'rationale', only a statement of
self-evident facts. If you see one, and see it as wrong, then that is
probably just another symptom of your deluded mind.
Yet another proof that such things happen in the real life.
You are always happiest to think that you know something that someone
else doesn't. Regardless of the fact that whenever you manage to put any
of these notions into statements that can be understood they are promptly
demonstrated to be false statements, as is happening here repeatedly.

Richard.

Feb 14 '07 #19
VK <sc**********@yahoo.com> wrote:
>A theorem is proven wrong when one single example is

Correct: that is not strict
It is absolutely strict. Any clear, non-metaphysical statement
contradicted by a valid empirical test is a false statement.
but often the quickest way of proving.
It is not a way of proving anything, it is a way of disproving things.
>>Any decimal floating-point number having form of x.xx5
>Try - 0.125 - as it has that form.

OK
>>cannot be represented as a rational number X/2^Y
(X divided by 2 to the power of Y)
where:
X is an integer

The integer - 1 - will do in this case.

OK
and
Y is a natural number in the CS sense, thus including 0

Is - 3 - natural enough for you?

Perfect
>1/(2 to the power of 3) is 1/8, which is also 0.125 and
0.125 has the form x.xx5.

Congratulations!
Hardly, the number 1.125 was listed in Dr Stockton's post so it should
have been obvious to anyone (else) that if that number could be
precisely represented then 0.125 also could be, along with many other
numbers in the form x.xx5.
You just successfully constructed a valid "negating
case" for the 1st VK's theorem. Alas the Academy of
Science didn't set a prize yet for this theorem,
Well of course not, it was just more "off the top of your head" bullshit.

<snip>
>Now can you give up wasting people's time with this
stream of inaccurate and uninformed posts and either write
and post the rounding function or admit the task is beyond you.

First I want to disambiguate what "is to God and what is to Caesar" -
so I want to define what is coming out from the core of IEEE-754-DP-FP
and what is per-implementation heuristic added on top of it.
The "per-implementation heuristic added on top" is a figment of your
imagination, so you are wasting your time trying to attribute anything to
it.
Some may be happy with patching black box outcomes: I do not
feel comfortable with it.
Some will also read what the language specification has to say about
turning numeric literals into numbers, strings to numbers and numbers
into strings. But you do not feel comfortable with that either.
>>P.S. I also asked at comp.arch.arithmetic as my mind -
however brilliant it would be :-) - still needs an extra
check. See:

You didn't manage to learn anything from that exchange because
your statements about what is happening in ECMAScript were false
(and/or incoherent), so the response you elicited is no more
than an impression based upon false data.

And exactly what do you base that assumption upon?
My question was not about my "1st VK's theorem"
Did I say it was?
but about 1.035 and the possibility to get back the
original value despite it was not stored exactly, namely:
var probe = 1.035;
Which is just misdirection. You don't have a number to start with, you
have a numeric literal, and you don't have a number in the end, you have
a string. There is no "original value" and you never get back to it.
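(Editorial sketch of the literal-to-string behaviour under discussion: ECMAScript's ToString for numbers emits the shortest decimal digits that uniquely identify the stored double, which is why the literal's digits reappear even though the double itself is inexact.)

```javascript
var probe = 1.035;      // the literal is converted to the nearest IEEE double
var s = String(probe);  // ToString emits the shortest digits identifying that double
console.log(s);         // "1.035" - the string reappears, though the double is inexact
```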
btw the fact that no one pointed to the 0.125 case suggests
that the respondents might not be attentive or knowledgeable
enough,
My impression of the respondents in that thread was that they were
bewildered by the incoherence and irrelevance of it.
so the point needs a bit more study.
Just remember that if those studies are successful you will finally know
as much as the people you have been disagreeing with here. In the
meanwhile we don't need to hear any more about the misconceptions and
false conclusions you come to along the way.

Richard.

Feb 14 '07 #20
VK said the following on 2/14/2007 2:46 PM:

<snip>
btw the fact that no one pointed to the 0.125 case suggests that the
respondents might not be attentive or knowledgeable enough, so the
point needs a bit more study.
Lacking attentiveness to your posts isn't a drawback. It's a compliment
to anyone that can successfully ignore your babbling.

--
Randy
Chance Favors The Prepared Mind
comp.lang.javascript FAQ - http://jibbering.com/faq/index.html
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Feb 14 '07 #21
On Feb 15, 2:43 am, Randy Webb <HikksNotAtH...@aol.com> wrote:
Lacking attentiveness to your posts isn't a drawback. It's a compliment
to anyone that can successfully ignore your babbling.
Whatever. For the time being simply mark the erroneous statement to
correct: unless now _you_ have some valid counterarguments remaining.

Feb 15 '07 #22
VK said the following on 2/15/2007 12:18 AM:
On Feb 15, 2:43 am, Randy Webb <HikksNotAtH...@aol.com> wrote:
>Lacking attentiveness to your posts isn't a drawback. It's a compliment
to anyone that can successfully ignore your babbling.

Whatever. For the time being simply mark the erroneous statement to
correct: unless now _you_ have some valid counterarguments remaining.
So that you can reply with more BS that you dreamed up and waste my time
trying to get you to understand what any ninth grade CS student already
understands? No thanks.

--
Randy
Chance Favors The Prepared Mind
comp.lang.javascript FAQ - http://jibbering.com/faq/index.html
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Feb 15 '07 #23
In comp.lang.javascript message
<11**********************@k78g2000cwa.googlegroups.com>,
Wed, 14 Feb 2007 10:37:14, VK <sc**********@yahoo.com> posted:
>
Neither 1.035 nor 1.375 nor 1.625 can be represented as a dyadic fraction,
so by definition they cannot be stored as finite binary sequences.
+"1.375" gives the IEEE Double
0011111111110110 0000000000000000 0000000000000000 0000000000000000
which represents it exactly. The visible mantissa is 0110 followed by
zeroes; that's no halves, one quarter, one eighth, and nothing else.
Total, with the suppressed 1, exactly 1.375.
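(Editorial sketch: on a modern engine the quoted bit pattern can be reproduced with typed arrays, which postdate this thread; a little-endian host is assumed.)

```javascript
// Dump the 64 bits of an IEEE double, sign and exponent bits first
function doubleBits(x) {
  var bytes = new Uint8Array(new Float64Array([x]).buffer);
  var bits = "";
  for (var i = 7; i >= 0; i--) {  // reverse byte order on little-endian hosts
    bits += ("00000000" + bytes[i].toString(2)).slice(-8);
  }
  return bits;
}
console.log(doubleBits(1.375));
// "0011111111110110" followed by 48 zeros, matching the pattern above
```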

It's a good idea to read the newsgroup and its FAQ. See below.

--
(c) John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v6.05 IE 6
news:comp.lang.javascript FAQ <URL:http://www.jibbering.com/faq/index.html>.
<URL:http://www.merlyn.demon.co.uk/js-index.htm> jscr maths, dates, sources.
Feb 15 '07 #24
On Feb 15, 3:43 pm, Dr J R Stockton <reply0...@merlyn.demon.co.uk>
wrote:
In comp.lang.javascript message
<1171478234.447700.227...@k78g2000cwa.googlegroups.com>,
Wed, 14 Feb 2007 10:37:14, VK <schools_r...@yahoo.com> posted:
Neither 1.035 nor 1.375 nor 1.625 can be represented as a dyadic fraction,
so by definition they cannot be stored as finite binary sequences.

+"1.375" gives the IEEE Double
0011111111110110 0000000000000000 0000000000000000 0000000000000000
which represents it exactly. The visible mantissa is 0110 followed by
zeroes; that's no halves, one quarter, one eighth, and nothing else.
Total, with the suppressed 1, exactly 1.375.
So the "1st VK's theorem" was proven wrong. The Andrew Wiles of c.l.j.
is - as expected - Mr.Cornford: though your merlyn.demon.co.uk
counter-examples seem to predate his posting in this thread. You may
discuss the glory's ownership tete-a-tete.

:-)

I posted another question in the comp.arch.arithmetic thread - though
now it is more suitable for sci.math

If you have an answer then it would be great as well:

<quote>
any decimal floating-point
number in the form
x.125
x.375
x.625
x.875
can be presented as a dyadic fraction and so as a finite binary sequence.
I tried with xy.125 and it is still true. Is it true that any decimal
number ending with 125, 375, 625 or 875 is representable as a dyadic
fraction? Sounds awfully non-mathematical as a statement: but the lack
of knowledge prevents me from seeing the general internal pattern.
</quote>

Feb 15 '07 #25
In comp.lang.javascript message
<11*********************@v33g2000cwv.googlegroups.com>,
Thu, 15 Feb 2007 14:19:45, VK <sc**********@yahoo.com> posted:
>
<quote>
any decimal floating-point
number in the form
x.125
x.375
x.625
x.875
can be presented as a dyadic fraction and so as a finite binary sequence.
I tried with xy.125 and it is still true. Is it true that any decimal
number ending with 125, 375, 625 or 875 is representable as a dyadic
fraction? Sounds awfully non-mathematical as a statement: but the lack
of knowledge prevents me from seeing the general internal pattern.
</quote>
An almost-infinite number of numbers can be presented as a finite binary
sequence. For sequences of length N there are 2^N possibilities.

An IEEE Double can have two signs, 2^52 mantissas, and almost 2^11
ordinary exponents; or it can be NaN or +Infinity or -Infinity. It can
be +0 or -0, but I forget where they fit into that scheme.

Any number which can be expressed exactly as a sum of powers of 0.5 (or
2.0) can be held exactly, provided that the range of the powers does not
exceed about 53, and the extreme powers are neither too big nor too
small (960 to -960 should be safe).
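(Editorial sketch illustrating the point: sums of powers of 0.5 within range compare exactly, while non-dyadic decimals do not.)

```javascript
// 0.875 = 1/2 + 1/4 + 1/8 is a sum of powers of 0.5, so it is held exactly
console.log(0.5 + 0.25 + 0.125 === 0.875);  // true
// 0.1, 0.2 and 0.3 are not dyadic, so only approximations are stored
console.log(0.1 + 0.2 === 0.3);             // false
```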

--
(c) John Stockton, Surrey, UK. *@merlyn.demon.co.uk / ??*********@physics.org
Web <URL:http://www.merlyn.demon.co.uk/> - FAQish topics, acronyms, & links.
Correct <= 4-line sig. separator as above, a line precisely "-- " (SoRFC1036)
Do not Mail News to me. Before a reply, quote with ">" or "" (SoRFC1036)
Feb 16 '07 #26
An almost-infinite number of numbers can be presented as a finite binary
sequence. For sequences of length N there are 2^N possibilities.
That was the starting point of my research. At that time I wanted
nothing but to write a simple rounding proc with the limitations
imposed by computer math spelled out in advance, like "here it will be
pretty exact, here it will not, and this is why". Later I noticed - or
thought to notice - a semi-mystical correlation of these limits with
Nalimov's semantical spectra if ported onto computer programs, so the
problem got some linguistic - so some personal - interest. That is not
"showing off", I just wanted to explain why I'm spending more time on
this than was originally planned - with the manual rounding question
itself shifted to the periphery.

The answer to "What decimal float can be stored exactly in IEEE-754
form?" appeared to be surprisingly hard to find for a non-mathematician.
The majority of online resources simply state that "Not all numbers can
be stored exactly in IEEE-754. For instance N cannot." I gathered a
whole collection of these "for instance"s from all around and nearly
went berserk over it. By pure chance, while checking for exact English
math terms, I found the definition of the dyadic rational, and my
question got answered for the Turing machine, thus - besides everything
else - for unlimited internal storage space:

Any decimal number representable as a dyadic rational is also
representable as a finite binary sequence thus can be stored exactly
in binary form.

Including the definition of dyadic rational into the theorem itself,
the "1st VK's theorem" would be:
---
Any decimal number can be stored exactly in binary form if and only if
the decimal number in question is representable as a rational X/2^Y (X
divided by 2 to the power of Y) where X is an integer and Y is a
natural number including 0
---

This way the reduced-form vulgar fraction can always tell whether the
number is representable exactly in binary form:
92 = 92/1 = 92/2^0 <= dyadic fraction, so can be stored exactly
9.2 = 9/1 + 2/10 = 90/10 + 2/10 = 92/10 = 46/5 <= non-dyadic, so
cannot be stored exactly

As the joke with the "1st VK's theorem" is getting a bit old, we can
call it something more serious from here on, for instance "the theorem
of the dyadic subset".
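(Editorial sketch: the reduced-fraction test above can be mechanized. Since 10^k = 2^k * 5^k, a terminating decimal with k fraction digits is dyadic exactly when 5^k divides its digit string read as an integer. `isDyadic` is a hypothetical helper, not from the original posts.)

```javascript
// Decide whether a non-negative terminating decimal (given as a string)
// is representable as X / 2^Y, i.e. whether it is a dyadic rational
function isDyadic(decimalString) {
  var parts = decimalString.split(".");
  var k = parts.length > 1 ? parts[1].length : 0;  // number of fraction digits
  var numerator = Number(parts.join(""));          // d = numerator / 10^k
  for (var i = 0; i < k; i++) {                    // try to strip the factor 5^k
    if (numerator % 5 !== 0) return false;
    numerator /= 5;
  }
  return true;
}
console.log(isDyadic("92"));     // true:  92 = 92 / 2^0
console.log(isDyadic("9.2"));    // false: 9.2 = 46/5, non-dyadic
console.log(isDyadic("0.125"));  // true:  0.125 = 1 / 2^3
```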

It is interesting to point out that the theorem applies to any binary
storage medium, so it doesn't depend on the exact standard or size:
IEEE-754 single, double or n-byte is irrelevant.

This way for any set N of real numbers expressed in decimal numeral
form, the subset of numbers stored exactly in binary form is formed by
the numbers for which this equation can be solved: R = X/2^Y

Math-savvies may try to find a formula for the subset size relative to
the whole set size. Empirically it is possible to say that binary
storage space is rather an unfortunate medium for decimal input, so most
of the time any engine will be losing/factoring/restoring and overall
playing with the precision.
An IEEE Double can have two signs, 2^52 mantissas, and almost 2^11
ordinary exponents; or it can be NaN or +Infinity or -Infinity. It can
be +0 or -0. but I forget where they fit into that scheme.
IEEE-754 is rather tricky in this aspect, and soon here comes the "2nd
VK's theorem" :-)

But before that some grounds:

IEEE-754-DP-FP (double-precision floating-point) number takes 64 bit
of storage space with msb for sign, 11 bits for exponent and 52 bits
for mantissa: 1 + 11 + 52 = 64

Trick 1:
Because the exponent part must be able to represent both positive and
negative exponents, it is not stored directly but as the result of the
addition ActualValue+1023 (biased by 1023). This way if the
IEEE-754-DP-FP form has the exponent part 1023 then the actual exponent
is 1023-1023=0, and if the exponent part is 1 then the actual exponent
is 1-1023=-1022

Trick 2:
The mantissa is presumed to be in normalized form, thus with the radix
point placed after the first non-zero value: 1.23, 2.0456 etc. Because
in the binary system the only non-zero value is 1, this allows the
first bit not to be stored but simply presumed ("hidden" bit). It means
that if the mantissa part stores, say,
0010100000001010000000101000000010100000001010000011 then the actual
bit sequence it represents is
1.0010100000001010000000101000000010100000001010000011
It allows IEEE-754-DP-FP to have 53 bits for mantissa with only 52
physical bits.
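(Editorial sketch: tricks 1 and 2 can be observed directly by picking the bit fields out of the top 32 bits of a double; typed arrays and a little-endian host are assumed.)

```javascript
// Extract the sign and exponent fields of an IEEE-754 double
function fields(x) {
  var hi = new Uint32Array(new Float64Array([x]).buffer)[1];  // high word on little-endian
  var biased = (hi >>> 20) & 0x7ff;  // the 11-bit exponent field
  return { sign: hi >>> 31, biased: biased, actual: biased - 1023 };
}
console.log(fields(1.375));  // { sign: 0, biased: 1023, actual: 0 }
console.log(fields(0.5));    // { sign: 0, biased: 1022, actual: -1 }
console.log(fields(-2));     // { sign: 1, biased: 1024, actual: 1 }
```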

Trick 3:
If the exponent part contains only zeros: 00000000000, then both tricks
1 and 2 are overridden and the stored number becomes denormalized. In
this case the exponent is assumed to be -1022 and the mantissa doesn't
have any hidden bit: "what you see is what you get". Such a shift helps
to operate with very small numbers; in particular it keeps comparison
operations for very small values correct.

Extra 1
The normalized numbers always have the "hidden" a.k.a. "implicit" bit
added - see Trick 2. Because of that, the only way to represent zero
(0) in IEEE-754 is by using a denormalized number. Indeed 0 in
IEEE-754 is the number where both the exponent and mantissa parts
contain all zeros. Because the sign bit is always present, it still can
be set to 0 or 1: thus there can be a positive zero 0 and a negative
zero -0.
In JavaScript the engine is instructed to say that -0 == 0
That has nothing to do with some particular IEEE-754 demands. This is
just a convenience heuristic added on top to keep regular users in
better sanity :-) Such a heuristic is neither required nor uniform
across programming languages. Say in Perl -0 != 0, to the endless
surprise of newbies.
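(Editorial sketch: whatever the rationale, the observable JavaScript behaviour is that the two zeros compare equal, while division still exposes the sign bit.)

```javascript
console.log(0 === -0);  // true: the comparison rule treats the zeros as equal
console.log(1 / 0);     // Infinity
console.log(1 / -0);    // -Infinity: the sign bit of the zero is still there
```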

Extra 2
IEEE-754 number with exponent all 1s and mantissa all 0s has special
meaning: it denotes Infinity. With sign bit cleared or set we are
getting +Infinity and -Infinity

Extra 3
IEEE-754 number with exponent all 1s and mantissa with at least one
bit set denotes NaN (not a number). There are many kinds of NaN
depending on mantissa bit pattern. The most regular one is Quiet NaN
(QNaN) with msb of mantissa set:

? 11111111111 1... further whatever
S EEEEEEEEEEE M...

QNaN simply informs that the performed operation has no mathematically
defined return value. You are getting QNaN when, say, performing
parseInt('foobar', 10);

There is also Signalling NaN (SNaN) with msb of mantissa cleared:

? 11111111111 0... further whatever but at least one bit 1
S EEEEEEEEEEE M...

SNaN is used to raise exceptions in math operations, AFAIK it is not
used in ECMAScript implementations.

Because the mantissa content is not regulated for NaN except for the
msb, there can be potentially billions of NaN values. This way I'm
taking back my older opinion that "there are not any physical reasons
behind the rule that NaN != NaN". In fact there is one, and a good one.
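(Editorial sketch: the rule is easy to observe; parseInt on a non-numeric string yields NaN, and NaN compares unequal even to itself.)

```javascript
var n = parseInt('foobar', 10);
console.log(isNaN(n));  // true
console.log(n === n);   // false: NaN is the only value not equal to itself
```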
Any number which can be expressed exactly as a sum of powers of 0.5 (or
2.0) can be held exactly,
That is the "1st VK's theorem" back-reversed - expressed via an
Egyptian-fraction-like method. IMO the definition via the dyadic
rational is clearer and stricter, though I may be biased.
provided that the range of the powers does not
exceed about 53, and the extreme powers are neither too big nor too
small (960 to -960 should be safe).
Right, the "1st VK's theorem" - in any form - applies to the Turing
machine. On a real PC, for IEEE-754-DP-FP we are hitting the mantissa
storage space limit of 52+1 bits.

So the actual set N of real decimal numbers represented exactly in
IEEE-754-DP-FP will be a subset of a subset: first the numbers
satisfying the dyadic subset theorem, then the subset of these
satisfying the storage limit condition.

I'm playing with it right now.

Feb 17 '07 #27
On Feb 17, 11:50 pm, "VK" <schools_r...@yahoo.com> wrote:
I'm playing with it right now.
Still terribly busy with several projects at once. If I survive to the
end of the week then I hope to post some code - for the sadistic
pleasure of Mr.Cornford :-)
Feb 22 '07 #28
VK said the following on 2/22/2007 1:51 PM:
On Feb 17, 11:50 pm, "VK" <schools_r...@yahoo.com> wrote:
>I'm playing with it right now.

Still terribly busy with several projects at once.
I am sure I am not alone when I say I hope you stay terribly busy with
them for a long while to come.
If I survive to the end of the week then I hope to post some code -
for the sadistic pleasure of Mr.Cornford :-)
Or the headache it will give anybody trying to understand code you write.

--
Randy
Chance Favors The Prepared Mind
comp.lang.javascript FAQ - http://jibbering.com/faq/index.html
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Feb 23 '07 #29
