
Hi
I'm having some problems understanding how JS numbers are represented
internally.
Take this code for an example of weirdness:
var biggest = Number.MAX_VALUE;
var smaller = Number.MAX_VALUE - 1;
alert(biggest > smaller);
alert(biggest == smaller);
This outputs "false" then "true" not "true" then "false" as I'd expect!
What's going on here? Is this to do with precision?
What I'm looking for is the largest possible integer representable by
javascript, but I want it in non-exponential form, i.e.
123456789012345678901234567890 NOT 1.234e+123.
Thx  
bo******@gmx.net writes: I'm having some problems understanding how JS numbers are represented internally.
They are specified to work as 64 bit IEEE floating point numbers.
Take this code for an example of weirdness:
var biggest = Number.MAX_VALUE; var smaller = Number.MAX_VALUE - 1;
alert(biggest > smaller); alert(biggest == smaller);
This outputs "false" then "true" not "true" then "false" as I'd expect!
What's going on here? Is this to do with precision?
Yes. The first integer that cannot be represented by a 64-bit
floating point number is 2^52+1. This is because the number is
represented as 52 bits of mantissa (+ 1 sign bit) and a 10-bit exponent
(+ 1 sign bit). You can at most have 52 significant bits in this way,
and 2^52+1 is binary
10000000000000000000000000000000000000000000000000001
which needs 53 bits of precision.
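A quick probe of this limit in any JS engine (note: as a later reply in this thread corrects, the implicit leading mantissa bit actually gives 53 significant bits, so the first unrepresentable integer is 2^53 + 1):

```javascript
// Probe where consecutive integers stop being distinguishable.
// 2^53 itself is exact; 2^53 + 1 rounds back down to 2^53.
var limit = Math.pow(2, 53);                   // 9007199254740992
var belowIsExact = (limit - 1) + 1 === limit;  // true: arithmetic below 2^53 is exact
var onePastIsLost = (limit + 1) === limit;     // true: 2^53 + 1 is not representable
```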
What I'm looking for is the largest possible integer representable by javascript, but I want it in non-exponential form, i.e. 123456789012345678901234567890 NOT 1.234e+123.
The number is (2^52-1)*2^(2^10-52). It is this number that Javascript
typically outputs as 1.7976931348623157e+308 (which is not exact,
but does suggest that you need 309 decimal digits to write it :)
It's easy to do in binary: 52 "1"'s followed by 972 "0"'s.
In decimal it's:
179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520
(look out for line breaks :)
/L

Lasse Reichstein Nielsen  lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'  

> > What I'm looking for is the largest possible integer representable by javascript, but I want it in non-exponential form, i.e. 123456789012345678901234567890 NOT 1.234e+123.
The number is (2^52-1)*2^(2^10-52). It is this number that Javascript typically outputs as 1.7976931348623157e+308 (which is not exact, but does suggest that you need 309 decimal digits to write it :)
It's easy to do in binary: 52 "1"'s followed by 972 "0"'s. In decimal it's:
179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520
(look out for line breaks :)
Thank you, that's great!
Do you know of a way to output the above number (or any arbitrary
number) in javascript as a string?
Number.MAX_VALUE.toString() just gives me the exponential form.
I guess it's got something to do with manipulating the binary number
directly and converting it into decimal form using bitwise shifts and
iteration (??), but I have no clue as to where to start (not used to
working directly with binary numbers). Could you point me in the right
direction? Thanks!  
bo******@gmx.net said the following on 3/17/2006 8:00 PM: What I'm looking for is the largest possible integer representable by javascript, but I want it in non-exponential form, i.e. 123456789012345678901234567890 NOT 1.234e+123. The number is (2^52-1)*2^(2^10-52). It is this number that Javascript typically outputs as 1.7976931348623157e+308 (which is not exact, but does suggest that you need 309 decimal digits to write it :)
It's easy to do in binary: 52 "1"'s followed by 972 "0"'s. In decimal it's:
179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520
(look out for line breaks :)
Thank you, that's great!
Do you know of a way to output the above number (or any arbitrary number) in javascript as a string?
var maxValue = "179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520";
Now, it's a string :)
Number.MAX_VALUE.toString() just gives me the exponential form.
Due to its precision abilities.
I guess it's got something to do with manipulating the binary number directly and converting it into decimal form using bitwise shifts and iteration (??), but I have no clue as to where to start (not used to working directly with binary numbers).
Has nothing to do with that.
Could you point me in the right direction? Thanks!
See above.

Randy
comp.lang.javascript FAQ  http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices  http://www.JavascriptToolbox.com/bestpractices/  

> > Do you know of a way to output the above number (or any arbitrary number) in javascript as a string?
var maxValue = "179769313486231550856124328384506240234343437157459335924404872448581845754556114388470639943126220321960804027157371570809852884964511743044087662767600909594331927728237078876188760579532563768698654064825262115771015791463983014857704008123419459386245141723703148097529108423358883457665451722744025579520";
Now, it's a string :)
Number.MAX_VALUE.toString() just gives me the exponential form.
Due to its precision abilities.
Thanks, but I'm really looking for a way to do this for any *arbitrary*
number that's too long to be represented in standard decimal form.
So for example if given an integer between 1 and 1000 (x), how could I
output the decimal (not exponential) form of the following:
Number.MAX_VALUE - x
Basically I need an algorithm for how you obtained the long form above,
but for any integer, not just Number.MAX_VALUE.
Thanks for your help so far!  
bo******@gmx.net wrote: Thanks, but I'm really looking for a way to do this for any *arbitrary* number that's too long to be represented in standard decimal form.
So for example if given an integer between 1 and 1000 (x), how could I output the decimal (not exponential) form of the following:
Number.MAX_VALUE - x
Basically I need an algorithm for how you obtained the long form above, but for any integer, not just Number.MAX_VALUE.
Doesn't answer your question directly, but here is a good point to
consider:
The biggest JavaScript/JScript integer still returned by the toString
method "as it is" is 999999999999999930000 or around that. A bigger integer
will be brought into exponential form.
But long before that number the built-in math will stop working
properly (from the human point of view of course). Say
999999999999999930000 and 999999999999999900000 have the same string
form 999999999999999900000, so your error may be up to 50000 and even
higher, which is doubtfully acceptable :)
Usually (unless custom BigMath libraries are used) on 32-bit platforms
like Windows you can work reliably only with numbers up to 0xFFFFFFFF
(decimal 4294967295). After this "magic border" you are already dealing not
with real numbers, but with machine fantasies.
As 0xFFFFFFFF and lesser are not converted into exponential form by
the toString method, your problem has a simple solution: do not go over
0xFFFFFFFF; there is nothing useful there anyway.
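A small check of the toString part of this (with one aside: per the ECMAScript ToString algorithm, the switch to exponential form actually happens at 10^21, well above 0xFFFFFFFF):

```javascript
// 0xFFFFFFFF stringifies as plain decimal; the exponential form only
// appears once the value reaches 1e21 (per ECMA-262 ToString).
var border = 0xFFFFFFFF;
var s = border.toString();    // "4294967295"
var t = (1e21).toString();    // "1e+21"
```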

> Usually (unless custom BigMath libraries are used) on 32-bit platforms like Windows you can work reliably only with numbers up to 0xFFFFFFFF (decimal 4294967295). After this "magic border" you are already dealing not with real numbers, but with machine fantasies.
As 0xFFFFFFFF and lesser are not converted into exponential form by the toString method, your problem has a simple solution: do not go over 0xFFFFFFFF, there is nothing useful there anyway.
Thank you, that'll be absolutely fine for what I'm doing. Makes perfect
sense as well... and I don't like the sound of machine fantasies too much
:)
Thanks everyone.  

JRS: In article <y7**********@hotpop.com>, dated Fri, 17 Mar 2006
21:44:55 remote, seen in news:comp.lang.javascript, Lasse Reichstein
Nielsen <lr*@hotpop.com> posted : bo******@gmx.net writes:
Yes. The first integer that cannot be represented by a 64-bit IEEE
floating point number is 2^52+1. This is because the number is represented as 52 bits of mantissa (+ 1 sign bit) and a 10-bit exponent (+ 1 sign bit).
Strictly, not quite. The exponent is 11-bit offset binary, rather than
sign-and-10-bit-magnitude. My js-misc0.htm#CDC code shows that; you may
recall the question here that prompted the work.
What I'm looking for is the largest possible integer representable by javascript,
Strings can be used to represent integers, so the largest possible is
probably two or four gigabytes of nines. If that's too small, use a
base higher than 10. If one restricts it to a javascript Number, the
answer is about 10^308 and the answer to the probably-intended question
is about 9x10^15, both as given by LRN.
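The 11-bit offset-binary exponent can be observed directly in engines that support typed arrays (an anachronism for this thread, offered only as an illustrative sketch; it assumes a little-endian host):

```javascript
// Read the raw IEEE-754 bits of Number.MAX_VALUE and pull out the
// 11-bit biased exponent field (offset binary, bias 1023).
var buf = new ArrayBuffer(8);
var f64 = new Float64Array(buf);
var u32 = new Uint32Array(buf);
f64[0] = Number.MAX_VALUE;
var hi = u32[1];                      // high 32 bits (little-endian host assumed)
var biasedExp = (hi >>> 20) & 0x7FF;  // 2046, the largest finite exponent field
var realExp = biasedExp - 1023;       // 1023
```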

© John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v4.00 IE 4 ©
<URL:http://www.jibbering.com/faq/> JL/RC: FAQ of news:comp.lang.javascript
<URL:http://www.merlyn.demon.co.uk/jsindex.htm> jscr maths, dates, sources.
<URL:http://www.merlyn.demon.co.uk/> TP/BP/Delphi/jscr/&c, FAQ items, links.  

VK wrote: bo******@gmx.net wrote: Thanks, but I'm really looking for a way to do this for any *arbitrary* number that's too long to be represented in standard decimal form.
So for example if given an integer between 1 and 1000 (x), how could I output the decimal (not exponential) form of the following:
Number.MAX_VALUE - x
Basically I need an algorithm for how you obtained the long form above, but for any integer, not just Number.MAX_VALUE.
Doesn't answer your question directly, but here is a good point to consider:
The biggest JavaScript/JScript integer still returned by toString method "as it is" 999999999999999930000 or round that. Bigger integer will be brought into exponential form.
But long before that number the buildin math will stop working properly (from the human point of view of course). Say 999999999999999930000 and 999999999999999900000 have the same string form 999999999999999900000 so your error may be up to 50000 and even higher which is doubtfully acceptable :)
Usually (unless custom BigMath libraries are used) on 32-bit platforms like Windows you can work reliably only with numbers up to 0xFFFFFFFF (decimal 4294967295). After this "magic border" you are already dealing not with real numbers, but with machine fantasies.
Utter nonsense.
1. It is only a secondary matter of the operating system. It is rather
a matter of Integer arithmetic (with Integer meaning the generic
machine data type), which can only be performed if there is a processor
register that can hold the input and output value of that operation.
On a 32-bit platform, with a 32 bits wide data bus, the largest
register is also 32 bits wide, therefore the largest (unsigned)
integer value that can be stored in such a register is 2^32-1
(0..4294967295, 0x0..0xFFFFFFFF hexadecimal).
2. If the input or output value exceeds that value, floating-point
arithmetic has to be used, through use or emulation of a Floating-Point
Unit (FPU); such a unit is embedded in the CPU since the Intel
80386DX/486DX and Pentium processor family. Using an FPU inevitably
involves a potential rounding error in computation, because the number
of bits available for storing numbers is still limited, and so the
value is no longer displayed as a sequence of bits representing the
decimal value in binary, but as a combination of bits representing the
mantissa, and bits representing the exponent of that floating-point
value.
3. ECMAScript implementations, such as JavaScript, use IEEE-754
(ANSI/IEEE Std 754-1985; IEC 60559) double-precision floating-point
(doubles) arithmetic always. That means they reserve 64 bits for
each value: 52 for the mantissa, 11 bits for the exponent, and 1 for
the sign bit. Therefore, there can be no true representation of an
integer number above a certain value; there are just not enough bits
left to represent it as-is.
There is no magic and no fantasy involved here, it is all pure mechanical
logic implemented in hardware (FPU) and software (in this case: the OS and
applications built for the platform, and any ECMAScript implementation
running on that platform and within the application's environment).
PointedEars  

Thanks but I need to work within a precision of 1:
alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.
I need to replace Number.MAX_VALUE in the above with the *highest
integer capable of making the expression evaluate to false*.
I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is
a 32-bit number and someone else said that numbers are represented as
64-bit internally. Can you confirm this or am I safest working within
the 32-bit limits?
Thanks  

bobal...@gmx.net wrote: Thanks but I need to work within a precision of 1:
alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.
I need to replace Number.MAX_VALUE in the above with the *highest integer capable of making the expression evaluate to false*.
I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is a 32-bit number and someone else said that numbers are represented as 64-bit internally. Can you confirm this or am I safest working within the 32-bit limits?
That's going to be a lot of excited advice here very soon (I think).
So you better just spit on everyone (including myself) and check the
precision borders by yourself. You may start with the numbers in my
post and play with other numbers on either side (up and down).
My bye-bye hint: a number cannot be "represented 64-bit internally" on a
32-bit platform for the same reason as a double-byte Unicode character
cannot be sent "as it is" in an 8-bit TCP/IP stream or a 4-dimensional
tesseract drawn on a flat sheet of paper: there is no "unit" to hold
it. Everything has to be emulated by the available units: the Unicode char
brought into an 8-bit sequence, the 64-bit number split into 32-bit parts.
I did not look yet at this part of the ECMA specs, but if it indeed says
"represented 64-bit _internally_" then it's just a clueless statement.

I played with integers around 0xFFFFFFFF and I seem to be able to add
and subtract integers from that number with no problem and no loss of
precision, but I'm not sure if this behaviour will be consistent on all
machines.
PointedEars' post was very informative (thanks) but not that
practically useful due to my experiment.
I also need to be able to determine the minimum integer value that can
be represented to a precision of 1, and again I added/subtracted
integers to 0xFFFFFFFF and it worked ok too.
Like I said before it's not (yet) necessary for my application to work
with signed integers outside the range +/- 0xFFFFFFFF, but I'd like to
find out what these across-the-board limits are, out of interest.
Cheers  
 bo******@gmx.net wrote: I played with integers around 0xFFFFFFFF and I seem to be able to add and subtract integers from that number no problem with no loss of precision,
Of course.
but I'm not sure if this behaviour will be consistent on all machines.
Of course it will.
PointedEars' post was very informative (thanks)
You are welcome.
but not that practically useful due to my experiment.
Well, it is rather a matter of understanding ...
I also need to be able to determine the minimum integer value that can be represented to a precision of 1, and again I added/subtracted integers to 0xFFFFFFFF and it worked ok too.
Of course it did. You have not read thoroughly enough. VK was right
about the precision limit for integer values, but his explanation was
wrong/gibberish. First, I said there is a "potential rounding error"
when floating-point arithmetic is done; that should read as a possibility,
not a necessity. Second, I said that ECMAScript implementations use
IEEE-754 doubles always, so the 32-bit Integer border does not really
matter here. If you follow the specified algorithm defined by the
latter international standard, the representation of
n = 4294967295 (or 2^32-1)
can be computed as follows (unless specified otherwise with "(digits)base",
all values are decimal):
1. Convert the number n to binary.

        ,------- M: 32 bits -------.    e
   N := (11111111111111111111111111111111)2 * 2^0

2. Let the mantissa m be 1 <= m < 2.

          ,------- 31 bits -------.    e
   N := (1.1111111111111111111111111111111)2 * 2^31

   (e := 31)

3. Ignore the 1 before the point (normalization, allows for greater
   precision), and round the mantissa to 52 bits (since we needed
   less than 52 bits for n, rounding it merely fills the remaining
   bits with zeroes).

        ,----------------- 52 bits -----------------.    e
   N := (1111111111111111111111111111111000000000000000000000)2 * 2^31

   or, IOW: M := (1111111111111111111111111111111000000000000000000000)2
            e := 31

4. Add the bias value 1023 (for double precision) to the value of e.

   e := 31 + 1023 = 1054 = (10000011110)2 =: E

5. n is a positive number, so the sign bit S of N is 0.

6. n is stored as

   S ,---- E ----. ,------------------------ M ------------------------.
   0 10000011110 1111111111111111111111111111111000000000000000000000
     `- 11 bits -' `---------------------- 52 bits ---------------------'

As you can see, there are plenty of bits left for greater precision
(greater integer numbers, or more decimals). No wonder you do not
experience any problems with this "small" and "unprecise" a number
as 2^32-1 (and neighbors). Likewise for 2^31-1.
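The result of this walkthrough can be checked mechanically in a modern engine (a sketch assuming typed arrays and a little-endian host, neither of which was a given when this was written):

```javascript
// Store 4294967295 as a double and read back the two 32-bit halves.
var buf = new ArrayBuffer(8);
new Float64Array(buf)[0] = 4294967295;
var u32 = new Uint32Array(buf);   // little-endian host assumed
var hi = u32[1];   // sign, 11-bit exponent, top 20 mantissa bits
var lo = u32[0];   // low 32 mantissa bits
// hi === 0x41EFFFFF: sign 0, E = 10000011110 (1054), mantissa starts with ones
// lo === 0xFFE00000: the remaining 11 ones, then the 21 trailing zeroes
```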
Like I said before it's not (yet) necessary for my application to work with signed integers outside the range +/ 0xFFFFFFFF, but I'd like to find out what these acrosstheboard limits are, out of interest.
Reversing the (above) algorithm with extremal input/output values is left
as an exercise to the reader. Bear in mind that there are special values:
denormalized numbers, NaN, Infinity, and -Infinity.
See also <URL:http://en.wikipedia.org/wiki/IEEE_floating-point_standard>
(I had expected you to find this and similar Web resources by yourself,
now that you had been given so many hints.)
HTH
PointedEars  
 bo******@gmx.net writes: Thanks but I need to work within a precision of 1:
alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.
I need to replace Number.MAX_VALUE in the above with the *highest integer capable of making the expression evaluate to false*.
That would be 2^53 (not 2^52 as I said earlier - IEEE floating point
numbers are smart and add an implicit 1 in some cases, so you can
get 53 bits of precision (they are pretty complicated, so if you
want to understand them in detail, read Dr. Stockton's link and/or
the IEEE 754 specification, I'm sure to have forgotten details)).
Actually, since rounding is downwards, 2^53+2 will satisfy your equation,
but only because (2^53+2)-1 evaluates to 2^53. A better comparison would
be
MAXNUMBER == MAXNUMBER + 1
and your MAXNUMBER is the lowest number satisfying this, i.e., one
below the first integer that cannot be represented.
I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is a 32-bit number and someone else said that numbers are represented as 64-bit internally. Can you confirm this or am I safest working within the 32-bit limits?
If you need to do bit operations (shifts, and/or/xor), you're restricted
to 32-bit numbers. Otherwise, you can stay in the range [-2^53..2^53]
where all integers can be represented exactly.
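Both halves of this advice can be illustrated compactly (assuming any standard JS engine):

```javascript
// Bitwise operators go through ToInt32, so they only see the low 32 bits:
var big = Math.pow(2, 40) + 5;
var chopped = big | 0;                 // 5: everything above bit 31 is gone
// Plain arithmetic, by contrast, is exact for all integers in [-2^53, 2^53]:
var max = Math.pow(2, 53);
var ok = (max - 1) - (max - 2) === 1;  // true
```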
/L

Lasse Reichstein Nielsen  lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'  

Thomas 'PointedEars' Lahn wrote: Of course it did. You have not read thoroughly enough. VK was right about the precision limit for integer values, but his explanation was wrong/gibberish. First, I said there is a "potential rounding error" when floating-point arithmetic is done; that should read as a possibility, not a necessity. Second, I said that ECMAScript implementations use IEEE-754 doubles always, so the 32-bit Integer border does not really matter here. If you follow the specified algorithm defined by the latter international standard, the representation of
n = 4294967295 (or 2^32-1)
can be computed as follows (unless specified otherwise with "(digits)base", all values are decimal):
When it's asked "how to retrieve a form element value": are we also
starting with the form definition, the history of the Web, the CGI standard
etc., leaving the OP's question to be answered independently? ;)
That was a clearly stated question: "From what point does JavaScript/JScript
math for integers get too inaccurate to be useful?".
The answer:
IEEE-754 reference in ECMA is gibberish: it was a "reserved for future
use" statement. In reality JavaScript still has relatively very
weak math which mainly emulates IEEE behavior but by its precision and
"capacity" stays below many other known languages, even below VBA
(Visual Basic for Applications).
That was one of the main improvements planned in JavaScript 2.0, but the
project seems to have never come to a successful end.
In application to positive integers there are four main borders anyone
has to be aware of:
1) 0x0 - 0xFFFFFFFF (0 - 4294967295)
"Level of the reality". Here we are dealing with regular "human" math
where for example
( x > (x-1) ) is always true.
Another important feature of this range is that we can apply both
regular math operations and bitwise operations w/o
losing/transforming/converting the nature of the involved number.
A no less important feature of this range is that these numbers can be
handled by 32-bit systems natively, thus with the maximum speed.
Unless you are using Itanium or another 64-bit environment (or unless you
really have to) it is always wise to stay within this range. One has
to admit that it is big enough for the majority of the most common
tasks :)
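One caveat to the bitwise half of this range, offered as a correction: bitwise operators use *signed* 32-bit conversion (ToInt32), so values above 2^31 - 1 already come back negative even though they fit in 32 unsigned bits:

```javascript
var x = 0xFFFFFFFF;   // 4294967295, the top of the range above
var y = x | 0;        // -1: ToInt32 reinterprets the bit pattern as signed
var z = x >>> 0;      // 4294967295: only >>> yields the unsigned view
```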
2) 0x100000000 - 0x38D7EA4C67FFF (4294967296 - 999999999999999)
"Level of fluctuations"
Primitive math is still mainly working, so say ( x > (x-1) ) is still
*mainly* true, but all kinds of implementation differences may take
effect in math-intensive expressions.
Also these numbers do not fit into 32 bits, so bitwise operations are their
killers.
Also on 32-bit systems all of them have to be emulated by 32-bit numbers,
so you have a serious impact on productivity.
3) 0x38D7EA4C68000 - 0x2386F26FC10000 (999999999999999 -
9999999999999999)
"Twilight zone"
Spit over your shoulder before any operation - and do not take the
results too seriously. Say ( x > (x-1) ) very rarely will be true -
but it may happen once with good weather conditions.
4) 0x16345785D8A0000 - Number.MAX_VALUE (100000000000000000 -
Number.MAX_VALUE)
"Crazy Land"
IEEE emulators are still working so you will continue to get different
cool-looking numbers. But nothing of it has any correlation with the
human math and a one-time error can be anywhere from 10,000 to 100,000.
P.S. A "rule of thumb": the Crazy Land in JavaScript starts guaranteed
for any number containing 17 digits or more. It is absolutely
irrelevant to the number value: only the amount of digits used to write
this number is important. So if you are wondering if you can do anything
useful with some long number, just count its digits.
P.P.S. Math specialists are welcome to scream now. But before, one may
want to test and to read the Web a bit.
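One concrete check of the rule of thumb (the precise border is 2^53, roughly 9.007x10^15, so 16-digit integers are already unreliable):

```javascript
// 9999999999999999 (sixteen nines) exceeds 2^53, so the literal is
// rounded to the nearest representable double, which is 10^16.
var sixteenNines = 9999999999999999;
var roundedUp = sixteenNines === 10000000000000000;   // true
var safe = 9007199254740991;                          // 2^53 - 1: still exact
var stillExact = safe - 1 !== safe;                   // true
```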

VK wrote: Thomas 'PointedEars' Lahn wrote: Of course it did. You have not read thoroughly enough. VK was right about the precision limit for integer values, but his explanation was wrong/gibberish. First, I said there is a "potential rounding error" when floating-point arithmetic is done; that should read as a possibility, not a necessity. Second, I said that ECMAScript implementations use IEEE-754 doubles always, so the 32-bit Integer border does not really matter here. If you follow the specified algorithm defined by the latter international standard, the representation of
n = 4294967295 (or 2^32-1)
can be computed as follows (unless specified otherwise with "(digits)base", all values are decimal): When it's asked "how to retrieve a form element value": are we also starting with the form definition, the history of the Web, the CGI standard etc., leaving the OP's question to be answered independently? ;)
Troll elsewhere.
That was a clearly stated question: "From what point does JavaScript/JScript math for integers get too inaccurate to be useful?".
To be able to answer this question, one must first understand how
numbers work in JavaScript/JScript. Making wild assumptions based
on misconceptions and flawed testing, as you do, does not help.
The answer: IEEE-754 reference in ECMA is gibberish:
Nonsense. It works in practice as it is specified in theory, you are just
unable to draw meaning from technical language. And it is the _ECMAScript_
specification, with the ECMA being the standardization body that issued it.
This is about the ... uh ... tenth time you have been told this.
PointedEars  

Thomas 'PointedEars' Lahn wrote: Troll elsewhere.
Troll? I'm answering the OP's question. The border numbers were collected
from different math-related articles and the described behavior checked on
IE, FF, Opera before posting. There is always a place for adjustments
and clarifications of course.
From the developer point of view IMHO it is important to know exactly
the border after which say ((x-1) == x) is true or say alert(x) displays
a value which is 50,000 (fifty thousand) less than the actual value.
It is great of course to also know why it is correct and expected for
a given value by IEEE standards, but that is already a secondary question
for math savvies.
That may not make any sense - but it sounds rather reasonable for
my twisted mind. :)

VK wrote: Thomas 'PointedEars' Lahn wrote: Troll elsewhere. Troll? I'm answering the OP's question.
You have been misinforming the OP. Again. Because you have no clue what
you are talking about.
From the developer point of view IMHO it is important to know exactly the border after which say ((x-1) == x) is true or say alert(x) displays a value which is 50,000 (fifty thousand) less than the actual value.
And this value can be easily computed using the algorithm described.
It cannot be obtained by making wild guesses, as you did.
That may not make any sense - but it sounds rather reasonable for my twisted mind. :)
No surprise here.
PointedEars  

Thomas 'PointedEars' Lahn wrote: From the developer point of view IMHO it is important to know exactly the border after which say ((x-1) == x) is true or say alert(x) displays a value which is 50,000 (fifty thousand) less than the actual value.
And this value can be easily computed using the algorithm described.
Right, this is called BigMath and a number obtained this way is called a
BigInt. BigMath is very resource-expensive but it is used in many domains
where the regular machine precision limits are too narrow.
It has nothing to do with the OP's question; rather, the question
could be rephrased: "From what point can I not use the default language
math for integers, so that I have to use 3rd-party BigMath libraries?"
I never had to use BigMath in JavaScript for my projects, but a friend
of mine suggested (with no obligations) this library:
<http://www.leemon.com/crypto/BigInt.html>
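For what it's worth, engines that postdate this thread ship a native BigInt type, which answers the original question directly without a library; a minimal sketch:

```javascript
// Number.MAX_VALUE holds an integer value, so it converts to BigInt
// exactly; toString() then yields the full 309-digit decimal form.
var exact = BigInt(Number.MAX_VALUE);
var digits = exact.toString();
var len = digits.length;          // 309 digits, no exponent
```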

VK wrote: Thomas 'PointedEars' Lahn wrote: > From the developer point of view IMHO it is important to know exactly > the border after which say ((x-1) == x) is true or say alert(x) displays > a value which is 50,000 (fifty thousand) less than the actual value. And this value can be easily computed using the algorithm described.
Right, this is called BigMath
Are you reading what you are replying to? BigMath/BigInt libraries are
about calculating great integer values. IEEE-754 as used in the named
ECMAScript implementations (JavaScript and JScript) is about floating-point
values. Inevitable potential precision loss with floating-point numbers
is the issue here, and the OP wanted to know which is the greatest integer
number that can be stored as an IEEE-754 double without precision loss.
That is not anywhere near 2^32-1, of course.
As I said already, you have no clue what you are talking about.
PointedEars  

Thomas 'PointedEars' Lahn wrote: As I said already, you have no clue what you are talking about.
1]
var x = 1;
alert(x == (x - 1));
How big must the number be to get "true" in the alert?
2]
var x = 1;
alert(x);
How big must the number be to get something in the alert that only slightly
reflects the real x value?
I gave the answer; it can possibly be narrowed in some parts for some
implementations.
Your IEEE mentions do not have any practical use so far. Even if you
link the IEEE specs a thousand times in this thread, it still doesn't answer
the question. And if questions 1 and 2 for positive integers indeed
can be so easily and evidently deduced from the IEEE specs, then where
are *your* answers?

VK wrote: Thomas 'PointedEars' Lahn wrote: As I said already, you have no clue what you are talking about.
1] var x = 1; alert(x == (x - 1));
How big must the number be to get "true" in the alert?
2] var x = 1; alert(x);
How big must the number be to get something in the alert that only slightly reflects the real x value?
I gave the answer,
Not at all.
PointedEars  

JRS: In article <11**********************@p10g2000cwp.googlegroups .com>
, dated Sat, 18 Mar 2006 13:52:30 remote, seen in
news:comp.lang.javascript, bo******@gmx.net posted : Thanks but I need to work within a precision of 1:
alert(Number.MAX_VALUE == (Number.MAX_VALUE - 1)) evaluates to true.
I need to replace Number.MAX_VALUE in the above with the *highest integer capable of making the expression evaluate to false*.
I think I'm gonna go with the 0xFFFFFFFF suggestion above, but this is a 32-bit number and someone else said that numbers are represented as 64-bit internally. Can you confirm this or am I safest working within the 32-bit limits?
Please read the newsgroup FAQ on how to construct Usenet responses in
Google.

© John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v4.00 IE 4 ©
<URL:http://www.jibbering.com/faq/> JL/RC: FAQ of news:comp.lang.javascript
<URL:http://www.merlyn.demon.co.uk/jsindex.htm> jscr maths, dates, sources.
<URL:http://www.merlyn.demon.co.uk/> TP/BP/Delphi/jscr/&c, FAQ items, links.  

JRS: In article <11**********************@z34g2000cwc.googlegroups .com>
, dated Sat, 18 Mar 2006 10:42:52 remote, seen in
news:comp.lang.javascript, VK <sc**********@yahoo.com> posted : Usually (unless custom BigMath libraries are used) on 32-bit platforms like Windows you can work reliably only with numbers up to 0xFFFFFFFF (decimal 4294967295). After this "magic border" you are already dealing not with real numbers, but with machine fantasies.
You are inadequately informed.
Current Delphi has 64-bit integers.
For over a decade, at least, the standard PC FPU has supported,
directly, a 64-bit integer type, called "comp" in Borland Pascal and
Delphi.
It is never _necessary_ to use a library, since one can always write the
corresponding code in the main body of the program.
The OP needs to read up about floating-point formats and properties.

© John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v4.00 IE 4 ©
<URL:http://www.jibbering.com/faq/> JL/RC: FAQ of news:comp.lang.javascript
<URL:http://www.merlyn.demon.co.uk/jsindex.htm> jscr maths, dates, sources.
<URL:http://www.merlyn.demon.co.uk/> TP/BP/Delphi/jscr/&c, FAQ items, links.  
P: n/a

JRS: In article <12****************@PointedEars.de>, dated Sat, 18 Mar
2006 22:50:04 remote, seen in news:comp.lang.javascript, Thomas
'PointedEars' Lahn <Po*********@web.de> posted : Utter nonsense.
1. It is only a secondary matter of the operating system. It is rather a matter of Integer arithmetic (with Integer meaning the generic machine data type), which can only be performed if there is a processor register that can hold the input and output value of that operation. On a 32-bit platform, with a 32 bits wide data bus, the largest register is also 32 bits wide, therefore the largest (unsigned) integer value that can be stored in such a register is 2^32-1 (0..4294967295, 0x0..0xFFFFFFFF hexadecimal)
Incorrect. For example, Turbo Pascal runs on 16-bit machines, and does
not need (though can use) 32-bit registers and/or an FPU. But, since
1988 or earlier, it has provided the 32-bit LongInt type. LongInt
addition, for example, is provided by two 16-bit ops and a carry.
Note that integer multiplication frequently involves the use of a
register pair for the result.
2. If the input or output value exceeds that value, floating-point arithmetic has to be used, through use or emulation of a Floating-Point Unit (FPU); such a unit is embedded in the CPU since the Intel 80386DX/486DX and Pentium processor family. Using an FPU inevitably involves a potential rounding error in computation, because the number of bits available for storing numbers is still limited, and so the value is no longer displayed as a sequence of bits representing the decimal value in binary, but as a combination of bits representing the mantissa, and bits representing the exponent of that floating-point value.
Insufficiently correct. The 64-bit "comp" type is implemented *exactly*
in the FPU (and is 2's complement IIRC). It has integer values.
Also, longer arithmetic can be implemented outside the FPU; floating-point
is not necessary.
Your use of the word "decimal" is superfluous and potentially
misleading.
3. ECMAScript implementations, such as JavaScript, always use IEEE 754 (ANSI/IEEE Std 754-1985; IEC 60559) double-precision floating-point (doubles) arithmetic. That means they reserve 64 bits for each value, 52 for the mantissa, 11 bits for the exponent, and 1 for the sign bit. Therefore, there can be no true representation of an integer number above a certain value; there are just not enough bits left to represent it as-is.
Incorrect. 2^99 is an integer, and it is represented exactly. I know
what you have in mind; but your words do not express it.

© John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v4.00 MIME. ©
Web <URL:http://www.merlyn.demon.co.uk/>  FAQqish topics, acronyms & links;
Astro stuff via astron1.htm, gravity0.htm ; quotings.htm, pascal.htm, etc.
No Encoding. Quotes before replies. Snip well. Write clearly. Don't Mail News.  
P: n/a

Dr John Stockton wrote: [...] Thomas 'PointedEars' Lahn [...] posted: Utter nonsense.
1. It is only a secondary matter of the operating system. It is rather a matter of Integer arithmetic (with Integer meaning the generic machine data type), which can only be performed if there is a processor register that can hold the input and output value of that operation. On a 32-bit platform, with a 32 bits wide data bus, the largest register is also 32 bits wide, therefore the largest (unsigned) integer value that can be stored in such a register is 2^32-1 (0..4294967295, 0x0..0xFFFFFFFF hexadecimal) Incorrect.
No, it is correct.
For example, Turbo Pascal runs on 16-bit machines, and does not need (though can use) 32-bit registers and/or an FPU.
I have programmed in several Pascal dialects before for years. As you well
know (<URL:http://www.merlyn.demon.co.uk/pasreal.htm#FloatTypes>), Comp is
a special floating-point data type in Turbo Pascal 5.0 and later, to hold
larger integer values (integer != Integer). Like Single, Double, and
Extended, it can only be used if FPU (80x87) software emulation is enabled
(through a compiler switch), or an FPU is present. I mentioned the
possibility of FPU emulation in point 2.
I described general restrictions for Integer (not: integer) arithmetic
here, though.
But, since 1988 or earlier, it has provided the 32-bit LongInt type. LongInt addition, for example, is provided by two 16-bit ops and a carry.
Note that integer multiplication frequently involves the use of a register pair for the result.
Irrelevant. 2. If the input or output value exceeds that value, floating-point arithmetic has to be used, through use or emulation of a Floating-Point Unit (FPU); such a unit is embedded in the CPU since the Intel 80386DX/486DX and Pentium processor family. Using an FPU inevitably involves a potential rounding error in computation, because the number of bits available for storing numbers is still limited, and so the value is no longer displayed as a sequence of bits representing the decimal value in binary, but as a combination of bits representing the mantissa, and bits representing the exponent of that floating-point value.
Insufficiently correct.
Nonsense.
The 64-bit "comp" type is implemented *exactly* in the FPU (and is 2's complement IIRC). It has integer values.
"Integer" refers to the generic Integer machine type, not to the integer
set defined in math, as I already have said.
Also, longer arithmetic can be implemented outside the FPU; floating-point is not necessary.
I was talking about machine types.
Your use of the word "decimal" is superfluous and potentially misleading.
Your entire posting is superfluous and potentially misleading. 3. ECMAScript implementations, such as JavaScript, always use IEEE 754 (ANSI/IEEE Std 754-1985; IEC 60559) double-precision floating-point (doubles) arithmetic. That means they reserve 64 bits for each value, 52 for the mantissa, 11 bits for the exponent, and 1 for the sign bit. Therefore, there can be no true representation of an integer number above a certain value; there are just not enough bits left to represent it as-is.
Incorrect.
Nonsense.
2^99 is an integer, and it is represented exactly.
I have not said anything that contradicts this.
I know what you have in mind; but your words do not express it.
Or maybe, just /maybe/, you (deliberately) misunderstood completely.
PointedEars  
P: n/a

Thanks everyone for your help.
Can I just rein this back in to my original question, which was more
to do with the max/min limits that are represented in standard decimal
form by javascript:
Q: What is the highest integer (x) that can be represented by the
expression x.toString() such that the returned string does not contain
the letter 'e' (i.e. is in pure decimal form, not exponential notation)?  
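One way to probe this question empirically is to scan upward for the point where toString() switches to exponential notation (a rough sketch; `usesExponent` and `findSwitchOver` are illustrative names, not standard functions):

```javascript
// True when the default string form of x uses exponential ("e") notation.
function usesExponent(x) {
  return String(x).indexOf("e") !== -1;
}

// Scan upward by powers of ten to locate the switch-over point.
// Powers of ten up to 10^21 are exactly representable, so each
// multiplication here is exact.
function findSwitchOver() {
  var x = 1;
  while (!usesExponent(x)) {
    x = x * 10;
  }
  return x; // first power of ten displayed with an "e"
}

var boundary = findSwitchOver(); // 1e21
```

This only brackets the answer to within a power of ten; the exact limit is pinned down later in the thread.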
P: n/a
 bo******@gmx.net wrote: Thanks everyone for your help.
Can I just reign this back in to my original question, which was more to do with the max/min limits that are represented in standard decimal form by javascript:
Q: What is the highest integer (x) that can be represented by the expression x.toString() such that the returned string does not contain the letter 'e' (i.e. is in pure decimal form, not exponential notation)?
I believe it was already answered in this thread (skipping on
irrelevant IEEE side branches).
The biggest number still returned by toString() method "without e"
(thus not converted into exponential form) is 999999999999999930000
But this number is located above the limits of acceptable math I
described in another post. This way, say, 999999999999999930000 and
999999999999999900000 will both be returned by the toString() method as
"999999999999999900000" (30000 rounding error).
This way your question is incorrect as asked. The right question is:
Q: What is the highest integer (x) that can be represented by the
expression x.toString() such that the returned string does not contain
the letter 'e' (i.e. is in pure decimal form, not exponential notation)
AND
does follow the regular human math (so say x > x-1 is true)?
A:
999999999999999 (15 digits "9") and less if you do not plan to use
bitwise operations.
4294967295 and less if you plan to use bitwise operations.  
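Both limits named above can be checked directly (a minimal sketch, plain ECMAScript assumed):

```javascript
// 1. Exact integer arithmetic: fine up to 2^53, then gaps appear.
var p53 = 9007199254740992;        // 2^53
var distinct = (p53 - 1 < p53);    // true: 2^53 - 1 is still exact
var merged = (p53 + 1 == p53);     // true: 2^53 + 1 rounds back to 2^53

// 2. Bitwise operators apply ToInt32, so only 32 bits take part.
var wrapped = (4294967296 | 0);    // 2^32 wraps to 0
```

Note the 15-nines figure above is a conservative round decimal bound; the binary limit 2^53 is somewhat higher.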
P: n/a

P.S. By using special BigMath libraries able to handle BigInt
numbers (like the one linked in my previous post) your limit is up to
Number.MAX_VALUE.
With special BigMath libraries used in, say, astronomy your limit is from
NEGATIVE_INFINITY to POSITIVE_INFINITY.
But these libraries are very resource-expensive, and in relatively weak
higher-level languages like JavaScript they are on the borderline of
being usable. Say, a BigIntN - 1 statement may take from 1 sec to 10 sec to
be executed.  
P: n/a

VK wrote: bo******@gmx.net wrote: Q: What is the highest integer (x) that can be represented by the expression x.toString() such that the returned string does not contain the letter 'e' (i.e. is in pure decimal form, not exponential notation)? I believe it was already answered in this thread (skipping on irrelevant IEEE side branches).
NO, it was not!
The biggest number still returned by toString() method "without e" (thus not converted into exponential form) is 999999999999999930000
NO, it is not! Try alert(999999999999999930001), fool.
PointedEars  
P: n/a

Thomas 'PointedEars' Lahn wrote: NO, it is not! Try alert(999999999999999930001), fool.
It was originally said "...or round that".
999999999999999934469 to be totally exact.
But starting at 999999999999999900000 all numbers in place of the zeros are
being lost (rounded), so toString() always returns
"999999999999999900000", so the above pseudo-precision is completely
useless unless we are serving the values into a BigMath library.
999999999999999 (15 digits) is the upper limit for the OP's question  
P: n/a
 bo******@gmx.net wrote: Q: What is the highest integer (x) that can be represented by the expression x.toString() such that the returned string does not contain the letter 'e' (i.e. is in pure decimal form, not exponential notation)?
Interpolation showed it is
999999999999999934463
in
- JavaScript 1.3 (Netscape/4.8; build target: i386),
- JavaScript 1.5 (Mozilla/5.0 rv:1.7.12; build target: i686-pc-linux-gnu),
- JavaScript 1.6 (Firefox/1.5.0.1; same target),
- Opera/8.52 (build target: i386), and
- KHTML 3.5.1 (Konqueror/3.5; same target).
Tested on GNU/Linux 2.6.15.6 i686.
However, you will observe that truncation of decimal places has had to occur
at this point, since it is way above 2^52-1 (4503599627370495) _and_ the
number of bits to represent the value exactly exceeds the number of
available mantissa bits (52).
See ECMAScript Edition 3 Final, subsection 15.7.4.2
(Number.prototype.toString) referring to subsection 9.8.1.
("ToString applied to the Number type"), for the specified
value.
PointedEars  
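The boundary specified in subsection 9.8.1 can be seen directly (a sketch, assuming an ES3-conformant engine):

```javascript
// ToString uses plain positional notation for values below 10^21
// and exponential notation from 10^21 upward.
var below = (999999999999999868928).toString(); // "999999999999999900000"
var at = (1e21).toString();                     // "1e+21"
```

The first literal is 10^21 minus one double-spacing step (2^17) at this magnitude, i.e. the largest representable value below the switch-over.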
P: n/a

VK wrote: Thomas 'PointedEars' Lahn wrote: NO, it is not! Try alert(999999999999999930001), fool. It was originally said "...or round that".
But not here.
999999999999999934469 to be totally exact.
Not here. Which UAs have you tested with, with which OSs, on which
platforms?
[...] 999999999999999 (15 digits) is the upper limit for the OP's question
Wrong. The number of decimal digits does not matter because the value
is not stored in decimal.
PointedEars  
P: n/a

999999999999999934469
in
IE 6.0 Windows XP SP1
IE 6.0 Windows 98 SE
999999999999999934463
in
Firefox 1.0.7 Windows XP SP1
Firefox 1.5.0.1 Windows 98 SE
Opera 8.52 on both OS
(and still rounded in all cases to 999999999999999930000 by toString)
Pseudo-better pseudo-precision :) in IE may be explained by sharing
common internal libraries with VBScript, so JScript inherits
semi-better math than it would have by itself.  
P: n/a

Thank you both very much.
999999999999999934463 is the lucky number here for me. Truncation of
decimal places doesn't matter as I'm dealing with integers only.
Was surprised to hear how slooow BigMath is (1-10 seconds for a simple
decrement!!)  I'll avoid that at all costs.
Cheers
Rob  
P: n/a

Rob wrote:
^^^
This may cause problems, as we already have at least one regular Rob here :) Thank you both very much.
You are welcome.
999999999999999934463 is the lucky number here for me. Truncation of decimal places doesn't matter as I'm dealing with integers only.
I meant the truncation of binary "decimal" places of the mantissaexponent
representation of the stored floatingpoint value. Just follow the
algorithm:
0. Let n be 999999999999999934463.
1. Convert n to binary:
(70 bits)
N := 1101100011010111001001101011011100010111011110100111101111111111111111
[bc(1) rulez :)]
2. Let the mantissa M be 1 <= M < 10 (binary):
(69 bits)
M := 1.101100011010111001001101011011100010111011110100111101111111111111111
^[1]
E := 1000101 (69d)
3.1 Ignore the "1." to allow for greater precision:
(52 bits)
M := 1011000110101110010011010110111000101110111101001111101111111111111111
These are 69 of available 52 bits for the mantissa M. Therefore,
3.2. Truncating the "binary" decimal places[^1]
leads to
S := 0
E := 1000101 (69d) + bias
M := (1)1011000110101110010011010110111000101110111101001111
Therefore, the actual binary value stored is
1101100011010111001001101011011100010111011110100111100000000000000000
and the actual decimal value stored is
999999999999999868928(d)
^^
which is displayed rounded by .toString() as
999999999999999900000
^^^^^
Now compare with the intended value:
999999999999999934463
^^^^^
The difference to the intended value is 65535 when stored, 34463 when
displayed. Most certainly that does matter here, even if you are only
dealing with integers. I thought that would be clear to you already
by VK mentioning it correctly several times in this thread.
PointedEars  
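The stored value derived above can be confirmed without any bit twiddling, since two literals that denote the same double compare equal (a quick sketch):

```javascript
var intended = 999999999999999934463; // what the source code says
var stored = 999999999999999868928;   // what the derivation says is kept
var same = (intended == stored);      // true: both parse to one double
var shown = intended.toString();      // "999999999999999900000"
```

This matches the derivation: a stored difference of 65535, a displayed difference of 34463.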
P: n/a

Thomas 'PointedEars' Lahn <Po*********@web.de> writes:
[a very nice and precise derivation of the limit]
So, in summary:
The limit on integers that can be used with bitwise operations:
2^32-1 = 4294967295
The limit on integers that can all be represented exactly:
2^53 = 9007199254740992
(i.e., 2^53+1 is the first integer that cannot be represented by
the number type)
The limit on representable numbers that does not display in scientific
notation (largest representable number below 10^21):
10^21-2^17 = 999999999999999868928
Limit on number literals that are converted to this number:
10^21-2^16-1 = 999999999999999934463
(above this, the number is closer to 10^21, which can itself be
represented exactly)
/L

Lasse Reichstein Nielsen  lr*@hotpop.com
DHTML Death Colors: <URL:http://www.infimum.dk/HTML/rasterTriangleDOM.html>
'Faith without judgement merely degrades the spirit divine.'  
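The four limits in this summary can be verified in a few lines (era-appropriate ECMAScript, hence Math.pow rather than newer syntax):

```javascript
// Bitwise limit: unsigned shift by 0 round-trips values up to 2^32 - 1.
var bitwiseOk = ((4294967295 >>> 0) === 4294967295); // true

// Exactness limit: 2^53 + 1 is the first integer that collapses.
var collapses = (Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true

// Display limit: 10^21 - 2^17 still prints plainly; 10^21 does not.
var plainMax = 1e21 - 131072; // 999999999999999868928, exactly representable
var plain = (plainMax.toString().indexOf("e") === -1);     // true
var exp = ((1e21).toString().indexOf("e") !== -1);         // true

// Literal limit: 10^21 - 2^16 - 1 parses to plainMax itself.
var literalMax = (999999999999999934463 === plainMax);     // true
```
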
P: n/a

<http://groups.google.com/group/comp.lang.javascript/tree/browse_frm/thread/38d21acb4d4509ce/605c4236958ed554?rnum=31&hl=en&_done=%2Fgroup%2Fcomp.lang.javascript%2Fbrowse_frm%2Fthread%2F38d21acb4d4509ce%2Fc61f73ac60f10e2c%3Fhl%3Den%26#doc_3833df1762d81fee>
Clear, plain and simple! :)
Should it be a <FAQENTRY> or a FAQ Note now? (with the necessary mention
that it is correct for 32-bit machines and of some JavaScript/JScript
math discrepancies)  
P: n/a

VK said the following on 3/21/2006 3:49 AM: <http://groups.google.com/group/comp.lang.javascript/tree/browse_frm/thread/38d21acb4d4509ce/605c4236958ed554?rnum=31&hl=en&_done=%2Fgroup%2Fcomp.lang.javascript%2Fbrowse_frm%2Fthread%2F38d21acb4d4509ce%2Fc61f73ac60f10e2c%3Fhl%3Den%26#doc_3833df1762d81fee>
Clear, plain and simple! :)
Should it be a <FAQENTRY>
No. To be an entry it has to be a *frequently* asked question. And I
think this is the third time in about 6 or 7 years it has been talked
about. Not very frequent....
or a FAQ Note now?
Notes maybe, but not the FAQ itself.

Randy
comp.lang.javascript FAQ  http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices  http://www.JavascriptToolbox.com/bestpractices/  
P: n/a

Randy Webb wrote: No. To be an entry it has to be a *frequently* asked question. And I think this is the third time in about 6 or 7 years it has been talked about. Not very frequent....
Right, one doesn't use too often numbers like 9007199254740992 or
above. :)
A relevant wiki article should be edited for sure (or created if it
doesn't exist yet). It would be a shame to let it be buried in Usenet
archives.
Also I guess (only guess) that indirectly it answers another
occasional question: "What is the longest string value allowed in
JavaScript?" Skipping mechanical limits (memory), by the language itself
I would say that it's 9007199254740992 characters or less, to be able
to use any of the string methods (otherwise the rounding error for
length will kill them).  
P: n/a

Lasse Reichstein Nielsen wrote: Thomas 'PointedEars' Lahn <Po*********@web.de> writes:
[a very nice and precise derivation of the limit]
Thank you :)
So, in summary:
The limit on integers that can be used with bitwise operations: 2^32-1 = 4294967295
The limit on integers that can all be represented exactly: 2^53 = 9007199254740992
= 9.007199254740992E15
(i.e., 2^53+1 is the first integer that cannot be represented by the number type)
True. However, I think the _greatest_ integer that can be represented
exactly, is
(2^54-1)*(2^11-2-1023)
= (2^54-1)*(2^10-1)
= 18428729675200068609
= 1.8428729675200068609E19
because there are 52 bits for the mantissa (the leading 1 of 2^54-1, which
requires 53 bits, stripped), and the bias (+1023) for the exponent makes
the latter different from 2^11-1 = 2047 (Infinity/NaN) then.
Let L be the least integer that cannot be represented exactly, and let G be
the greatest integer that can be represented exactly: It is a peculiarity
of floating-point formats such as IEEE 754 that there are integers N with
L < N < G that can be represented exactly anyway; take 2^54-2 and
2^55-4, for example.[1] (However, there are more integers in the named
range that cannot be represented exactly, so this knowledge is merely of
academic value, or useful when you know which numbers you will be dealing
with.)
[...]
PointedEars
___________
[1] ISTM that this set is defined as follows:
N := {x : 2^(m + 1),
: 2^(m + n) - 2^(n - 1); n elementOf(ℕ), n > 1}
where m is the length of the mantissa.  
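The footnoted peculiarity is easy to demonstrate for the two example values named above (a sketch; only those two cases are checked):

```javascript
// Above 2^53 only some integers survive. 2^53 + 1 collapses onto 2^53,
// but the footnote's examples are stored exactly:
// 2^54 - 2 = 2 * (2^53 - 1) and 2^55 - 4 = 4 * (2^53 - 1),
// since each needs only 53 significant bits.
var p53 = Math.pow(2, 53);
var collapses = (p53 + 1 === p53);            // true
var a = Math.pow(2, 54) - 2;                  // exact: even multiple
var b = Math.pow(2, 55) - 4;                  // exact: multiple of 4
var aExact = (a + 2 === Math.pow(2, 54));     // true: no rounding occurred
var bExact = (b + 4 === Math.pow(2, 55));     // true: no rounding occurred
```
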
P: n/a

JRS: In article <11**********************@e56g2000cwe.googlegroups .com>
, dated Tue, 21 Mar 2006 00:49:29 remote, seen in
news:comp.lang.javascript, VK <sc**********@yahoo.com> posted : <http://groups.google.com/group/comp....rm/thread/38d2 1acb4d4509ce/605c4236958ed554?rnum=31&hl=en&_done=%2Fgroup%2Fco mp.lang.javascrip t%2Fbrowse_frm%2Fthread%2F38d21acb4d4509ce%2Fc61f 73ac60f10e2c%3Fhl%3Den%26#doc_3 833df1762d81fee>
Clear, plain and simple! :)
Should it be a <FAQENTRY> or a FAQ Note now? (with necessary mention that it is correct for 32-bit machines and of some JavaScript/JScript math discrepancies)
The 32-bit limit on logical operations is in ECMA-262 and applies
independently of the bit-size of the machine, whatever it may be.
Likewise the Number type is defined as an IEEE Double independently of
the machine architecture.
Of course, on machines which don't have a 32-bit architecture and/or
don't have an IEEE 754 compatible FPU, there's an increased risk of
non-compliance with ECMA.

P: n/a

JRS: In article <11**********************@i39g2000cwa.googlegroups .com>
, dated Tue, 21 Mar 2006 04:10:23 remote, seen in
news:comp.lang.javascript, VK <sc**********@yahoo.com> posted: Also I guess (only guess) that indirectly it answers another occasional question: "What is the longest string value allowed in JavaScript?" Skipping mechanical limits (memory), by the language itself I would say that it's 9007199254740992 characters or less, to be able to use any of the string methods (otherwise the rounding error for length will kill them).
Characters are Unicode, so one should probably think of a number and
halve it, allowing 2 bytes per character. ECMA says they are 16 bits.
ISTM much more likely that the internal indexing will be done with a
true integer and not a float.
ECMA says that strings consist of all finite sequences, which means that
the length is unbounded. I think they need to think that out again;
there's not room in the observable universe for all finite numbers; and
not for even the infinitesimal fraction smaller than, say, 10^1000.

P: n/a

Dr John Stockton wrote: [...] VK <sc**********@yahoo.com> posted: Also I guess (only guess) that indirectly it answers another occasional question: "What is the longest string value allowed in JavaScript?" Skipping mechanical limits (memory), by the language itself I would say that it's 9007199254740992 characters or less, to be able to use any of the string methods (otherwise the rounding error for length will kill them).
Characters are Unicode, so one should probably think of a number and halve it, allowing 2 bytes per character. ECMA says they are 16 bits.
They are 16 bits _at least_. ECMAScript Edition 3 (not ECMA; ECMAScript is
also an ISO/IEC Standard) says that string values are encoded using UTF-16.
It is true that one UTF-16 code unit is 16 bits (hence the name), but one
Unicode character can be required to be encoded with more than one UTF-16
code unit.
PointedEars  
P: n/a

Thomas 'PointedEars' Lahn wrote: Dr John Stockton wrote: Characters are Unicode, so one should probably think of a number and halve it, allowing 2 bytes per character. ECMA says they are 16 bits.
They are 16 bits _at least_. ECMAScript Edition 3 (not ECMA; ECMAScript is also an ISO/IEC Standard) says that string values are encoded using UTF-16.
It is true that one UTF-16 code unit is 16 bits (hence the name), but one Unicode character can be required to be encoded with more than one UTF-16 code unit.
I don't think that the internal representation of characters is important
here, because we are not interested in the factual String object size
but in the limits of its "methodability". Whether 1, 2, or even 4
bytes per character, string methods deal with the string .length
counted in character units, not in the bytes used to represent those
units.
As 8,388,608 TB (over 8 million terabytes, if I'm counting right) or
even half of it is beyond testing on my current machines :) this
will remain a theoretical suggestion for a long while.

Question stats: viewed 1920; replies 45; date asked Mar 17 '06