Bytes IT Community

C90 long long

From an article about implementing a C99 to C90 translator...

How does someone use integer arithmetic of at least 64 bits, and write in
pure C90?

As I understand C90 (the standard is very elusive), long int is guaranteed
to be at least 32 bits only.

So, do people rely on their known compiler limits, use clunky emulations, or
do not bother with 64 bits when writing ultra-portable code?

--
Bart

Apr 6 '08 #1
28 Replies


On Sun, 06 Apr 2008 15:37:28 +0000, Bartc wrote:
From an article about implementing a C99 to C90 translator...

How does someone use integer arithmetic of at least 64 bits, and write
in pure C90?

As I understand C90 (the standard is very elusive), long int is
guaranteed to be at least 32 bits only.
Correct. And as long int is the widest type, C90 does not guarantee the
presence of any 64-bit integer type.
So, do people rely on their known compiler limits, use clunky
emulations, or do not bother with 64 bits when writing ultra-portable
code?
People do whatever works best for their situation. Sometimes, that may
involve redefining the problem to fit 32-bit arithmetic (or floating-
point). Sometimes, that may be to rely on long being 64 bits. Sometimes,
that may be to use implementation extensions. In any case, it's very much
the same as how C90 programs have to deal with complex arithmetic. The
language and library don't provide any possibility, so portable programs
have no choice but to implement it themselves.
Apr 6 '08 #2
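The "implement it themselves" route described above can be sketched in pure C90. This is only an illustrative sketch (the type and helper names are hypothetical): a 64-bit unsigned value held as two 32-bit halves in `unsigned long`, which C90 does guarantee is at least 32 bits wide.

```c
/* Hypothetical emulation: a 64-bit unsigned value as two 32-bit
   halves, each stored in an unsigned long (>= 32 bits in C90). */
#define MASK32 0xFFFFFFFFUL

typedef struct { unsigned long hi, lo; } u64emu;

/* r = a + b (mod 2^64), propagating the carry out of the low half.
   After masking, r.lo < a.lo exactly when the low half wrapped. */
static u64emu u64_add(u64emu a, u64emu b)
{
    u64emu r;
    r.lo = (a.lo + b.lo) & MASK32;
    r.hi = (a.hi + b.hi + (r.lo < a.lo ? 1UL : 0UL)) & MASK32;
    return r;
}
```

Multiplication, division, and shifting need similar half-word treatment, which is why the original question calls such emulations "clunky".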

On Sun, 06 Apr 2008 15:37:28 GMT, "Bartc" <bc@freeuk.com> wrote:
>From an article about implementing a C99 to C90 translator...

How does someone use integer arithmetic of at least 64 bits, and write in
pure C90?

As I understand C90 (the standard is very elusive), long int is guaranteed
to be at least 32 bits only.

So, do people rely on their known compiler limits, use clunky emulations, or
do not bother with 64 bits when writing ultra-portable code?
Many use a "big number library" that simulates arithmetic on very wide
integers. I think Richard Heathfield put one on the web and google
can probably find you others.
Remove del for email
Apr 6 '08 #3

Barry Schwarz wrote:
On Sun, 06 Apr 2008 15:37:28 GMT, "Bartc" <bc@freeuk.com> wrote:
>>From an article about implementing a C99 to C90 translator...
How does someone use integer arithmetic of at least 64 bits, and write in
pure C90?

As I understand C90 (the standard is very elusive), long int is guaranteed
to be at least 32 bits only.

So, do people rely on their known compiler limits, use clunky emulations, or
do not bother with 64 bits when writing ultra-portable code?

Many use a "big number library" that simulates arithmetic on very wide
integers. I think Richard Heathfield put one on the web and google
can probably find you others.
Remove del for email
That GUARANTEES that the software will run VERY slowly using
ALL the CPU for nothing. Heathfield's big numbers use characters
as the base representation.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Apr 6 '08 #4

Barry Schwarz said:

<snip>
Many use a "big number library" that simulates arithmetic on very wide
integers.
Right.
I think Richard Heathfield put one on the web
Actually, no, I haven't. I've got one, but I haven't published it.

As Mr Navia notes elsethread, it uses an array of unsigned char to store
the data. He seems to think that this "GUARANTEES that the software will
run VERY slowly using ALL the CPU for nothing". It doesn't, of course.
Nevertheless, performance was not my highest priority. Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 6 '08 #5
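For concreteness, a bignum stored as an array of `unsigned char` works like pencil-and-paper arithmetic in base 256. A minimal sketch of schoolbook addition (hypothetical code, not Heathfield's actual library):

```c
#include <stddef.h>

/* Schoolbook addition over big-endian base-256 digits, one digit per
   unsigned char.  Hypothetical sketch, not Heathfield's library. */
static void big_add(const unsigned char *a, const unsigned char *b,
                    unsigned char *sum, size_t n)  /* all n digits long */
{
    unsigned int carry = 0;
    size_t i = n;
    while (i-- > 0) {               /* least significant digit is last */
        unsigned int t = (unsigned int)a[i] + b[i] + carry;
        sum[i] = (unsigned char)(t & 0xFFu);
        carry  = t >> 8;
    }
    /* a real library would grow the result rather than drop the carry */
}
```

The base is the whole performance argument in this subthread: the algorithm is identical with 32-bit limbs, but each pass then handles four times as much of the number.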

jacob navia <ja***@nospam.com> writes:
Bartc wrote:
>From an article about implementing a C99 to C90 translator...

Interesting interesting.

And when are you going to do a C89 to K&R translator?
It's been done; google "ansi2knr". Though as I recall it's not a
complete translator; mostly it converts prototypes to old-style
function declarations.

The point, of course, is that a C89 to K&R translator, though it was
quite useful years ago, is no longer of much interest today, since the
C89 standard is almost universally supported (it would be difficult to
find a platform that has a K&R compiler but no C89 compiler). This is
not yet the case for C99, a fact that you insist on ignoring, even
though your own lcc-win doesn't support the full language.

A C99 compiler that uses C89 as its intermediate language might be a
very useful thing. If it managed to generate portable C89 code, it
could even permit the use of C99 on platforms where no C99 compiler
yet exists (something I would think you'd welcome).

I suspect that modifying an existing C89 compiler to handle C99 would
be easier than writing a complete translator (but the latter would
potentially only have to be done once for all target platforms).
How does someone use integer arithmetic of at least 64 bits, and
write in pure C90?

There is no 64 bit arithmetic in C89. And you will have to do it
yourself!
Yes, that's probably one of the more difficult aspects of translating
C99 to C89. A quibble, though: C89 has no *guaranteed* 64-bit
arithmetic; it's perfectly legal to make long 64 bits.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Apr 6 '08 #6
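The prototype-to-old-style rewrite that ansi2knr performs can be seen on a single function (illustrative, not the tool's literal output):

```c
/* C89 input to the translator:
 *
 *     long scale(long value, int factor) { return value * factor; }
 *
 * K&R-style output: the parameter types move out of the parentheses
 * into a separate declaration list, which pre-C89 compilers accept. */
long scale(value, factor)
long value;
int factor;
{
    return value * factor;
}
```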

Richard Heathfield wrote:
Barry Schwarz said:

<snip>
>Many use a "big number library" that simulates arithmetic on very wide
integers.

Right.
>I think Richard Heathfield put one on the web

Actually, no, I haven't. I've got one, but I haven't published it.

As Mr Navia notes elsethread, it uses an array of unsigned char to store
the data. He seems to think that this "GUARANTEES that the software will
run VERY slowly using ALL the CPU for nothing". It doesn't, of course.
Nevertheless, performance was not my highest priority. Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Using a char to store bignum data is like using a scooter to
move a 38 ton trailer.

It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...

Who cares?

That solution is PORTABLE. You can carry a scooter anywhere, they
are standard solutions, etc.
Portability means very often that you make mediocre software
that runs anywhere because it never uses the hardware/OS to
its maximum possibilities but just uses the least common
denominator of each one.

This is fine if you do not care about usability or similar problems.
GUIs?

Not portable. The command line is portable. Use the command line

Network?
Not portable...

Etc.

But this is more a philosophical question. Heathfield's viewpoint
is perfectly acceptable if you say:

Usability is less valuable than portability.
As he said, performance wasn't a concern for his software.
Portability was.

Result?

A mediocre but perfectly portable software.

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Apr 6 '08 #7

jacob navia <ja***@nospam.com> writes:
Richard Heathfield wrote:
>Barry Schwarz said:

<snip>
>>Many use a "big number library" that simulates arithmetic on very wide
integers.

Right.
>>I think Richard Heathfield put one on the web

Actually, no, I haven't. I've got one, but I haven't published it.

As Mr Navia notes elsethread, it uses an array of unsigned char to
store the data. He seems to think that this "GUARANTEES that the
software will run VERY slowly using ALL the CPU for nothing". It
doesn't, of course. Nevertheless, performance was not my highest
priority. Those who want a super-fast bignum library would be well
advised to use GMP or Miracl.

Using a char to store bignum data is like using a scooter to
move a 38 ton trailer.

It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...

Who cares?

That solution is PORTABLE. You can carry a scooter anywhere, they
are standard solutions, etc.
Portability means very often that you make mediocre software
that runs anywhere because it never uses the hardware/OS to
its maximum possibilities but just uses the least common
denominator of each one.

This is fine if you do not care about usability or similar problems.
GUIs?

Not portable. The command line is portable. Use the command line

Network?
Not portable...

Etc.

But this is more a philosophical question. Heathfield's viewpoint
is perfectly acceptable if you say:

Usability is less valuable than portability.
As he said, performance wasn't a concern for his software.
Portability was.

Result?

A mediocre but perfectly portable software.
And guess what? Most of the blowhards in this group write SW which never
gets ported anywhere anyway.

Apr 6 '08 #8

Richard wrote:
And guess what? Most of the blowhards in this group write SW which never
gets ported anywhere anyway.
Umm - before you make a statement like that you might want to
check with some of those blowhards who make some of their C
source code available for free download...

....and on the other hand, you might prefer to not know.

;-)

--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto/
Apr 6 '08 #9


"Bartc" <bc@freeuk.com> wrote in message
news:Y6*****************@text.news.virginmedia.com...
From an article about implementing a C99 to C90 translator...

How does someone use integer arithmetic of at least 64 bits, and write in
pure C90?

As I understand C90 (the standard is very elusive), long int is guaranteed
to be at least 32 bits only.

So, do people rely on their known compiler limits, use clunky emulations,
or do not bother with 64 bits when writing ultra-portable code?
OK, so for this to be practical, such a translator needs to know the
capabilities of a C90 installation. If it has 64+ bit ints, then use them,
otherwise make other arrangements. A pure translator would be of academic
interest only.

Attend to 1000 other details and such a translator may be feasible.

(Note: I am not creating such a software, just interested in the issues.)

--
Bart
Apr 6 '08 #10

jacob navia <ja***@nospam.com> writes:
Bartc wrote:
>OK, so for this to be practical, such a translator needs to know the
capabilities of a C90 installation. If it has 64+ bit ints, then use
them, otherwise make other arrangements. A pure translator would be
of academic interest only.

Attend to 1000 other details and such a translator may be feasible.

(Note: I am not creating such a software, just interested in the issues.)


The bad news is that you will need:

o alloca() for implementing VLA arrays.
[snip]

No, the usual behavior of alloca() (space is deallocated on return
from the calling function) is incompatible with the requirements for
VLAs. For example, given this:

    for (int i = 1; i <= 1000; i++) {
        int vla[i];
        // ...
    }

an alloca-based implementation would allocate space for all 1000
instances of ``vla'' before deallocating any of them. I suppose you
could argue that the standard doesn't actually require that the
storage be physically deallocated on leaving the object's scope, but
it would be a seriously flawed implementation.

If you have an alloca()-like function that frees the storage on exit
from the scope, then that's not a problem (though it wouldn't satisfy
the usual, somewhat informal, requirements for alloca()).

(Didn't we discuss this just recently?)

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Apr 7 '08 #11

Flash Gordon said:

<snip>
Richard has on more than one occasion explicitly recommended using other
bignum libraries if you want one with high performance.
Right. In fact, I explicitly recommend using other bignum libraries even if
you *don't* care about performance, because *my* bignum library is not
generally available, so you can't use it even if you want to.
He even provided
such recommendations in the post you responded to.
Right.

[A troll said]
>And guess what? Most of the blowhards in this group write SW which never
gets ported anywhere anyway.

Where is your proof that people have been lying when they have been
talking about the porting of their SW that occurs in real life?
You're arguing with a troll, remember. They don't need proof. They don't
even need evidence. They just need a good supply of mud.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 7 '08 #12

jacob navia said:
Richard Heathfield wrote:
>Barry Schwarz said:

<snip>
>>Many use a "big number library" that simulates arithmetic on very wide
integers.

Right.
>>I think Richard Heathfield put one on the web

Actually, no, I haven't. I've got one, but I haven't published it.

As Mr Navia notes elsethread, it uses an array of unsigned char to store
the data. He seems to think that this "GUARANTEES that the software will
run VERY slowly using ALL the CPU for nothing". It doesn't, of course.
Nevertheless, performance was not my highest priority. Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.

Using a char to store bignum data is like using a scooter to
move a 38 ton trailer.
If you say so. In my experience, the performance is perfectly acceptable in
those situations where I need bignums - and, if ever the performance
*isn't* acceptable to me, there are other options available to me, such as
the ones I mentioned in my earlier article.
It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...
Fine, if you say so - but that's *my* problem, right? Nobody else's. I'm
not advocating that anyone else should use this software. For the record,
however, yes, my bignum library is significantly slower than GMP (it takes
about 20 times as long as GMP to calculate factorials, for instance). So
what?
Who cares?

That solution is PORTABLE. You can carry a scooter anywhere, they
are standard solutions, etc.
That's a large part of it, yes. Licensing is another issue.

Portability means very often that you make mediocre software
that runs anywhere because it never uses the hardware/OS to
its maximum possibilities but just uses the least common
denominator of each one.
That's one way to do portability (although the result need *not* be
mediocre - the fact that it usually *is* mediocre says a lot more about
programmers than it does about portability). Another way is to recognise
the reality that some code is portable and some isn't, and generally the
parts that are portable can easily be isolated from those that aren't.
This is fine if you do not care about usability or similar problems.
If I thought you were genuinely interested in learning how to solve these
problems, I would continue to explain.
But this is more a philosophical question. Heathfield's viewpoint
is perfectly acceptable if you say:

Usability is less valuable than portability.
If it won't port to the user's system, how can it be called usable? Duh.
As he said, performance wasn't a concern for his software.
I said no such thing. In the very article to which you replied, I said:
"Nevertheless, performance was not my highest priority." There is a major
semantic difference between "is not the highest priority" and "is not a
concern". You often misinterpret people (especially me) in this way. That
is either deliberate or accidental. If it is deliberate, then you are
trolling. If it is accidental, then you need to learn to read for
comprehension before trying to argue with those who can already do so.
Portability was.

Result?

A mediocre but perfectly portable software.
Please explain how a program that *doesn't work* on the target platform is
superior to one that does.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 7 '08 #13

Richard Heathfield wrote:
jacob navia said:
[big num stuff]
>It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...

Fine, if you say so - but that's *my* problem, right? Nobody else's.
I'm not advocating that anyone else should use this software. For the
record, however, yes, my bignum library is significantly slower than
GMP (it takes about 20 times as long as GMP to calculate factorials,
for instance). So what?
How long does it take to calculate 5^(4^(3^(2^1)))? (Ie. raising to the
power.)

This came up in comp.programming a few weeks back. I knocked up a quick
library that used byte arrays and stored a single decimal digit in each
byte. And that was in interpreted code! (Only add unsigned and multiply by
single digit were compiled.)

This took some 15 minutes to calculate the 183000-digit result (on a slowish
Pentium cpu).

Just interested in how much extra work is needed...

--
Bart
Apr 7 '08 #14

Bartc wrote:
Richard Heathfield wrote:
>jacob navia said:

[big num stuff]
>>It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...
Fine, if you say so - but that's *my* problem, right? Nobody else's.
I'm not advocating that anyone else should use this software. For the
record, however, yes, my bignum library is significantly slower than
GMP (it takes about 20 times as long as GMP to calculate factorials,
for instance). So what?

How long does it take to calculate 5^(4^(3^(2^1)))? (Ie. raising to the
power.)

This came up in comp.programming a few weeks back. I knocked up a quick
library that used byte arrays and stored a single decimal digit in each
byte. And that was in interpreted code! (Only add unsigned and multiply by
single digit were compiled.)

This took some 15 minutes to calculate the 183000-digit result (on a slowish
Pentium cpu).

Just interested in how much extra work is needed...
The french package "pari" calculates that in around 0.5 seconds
(In a dual core amd)

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Apr 7 '08 #15

jacob navia wrote:
Bartc wrote:
>Richard Heathfield wrote:
>>jacob navia said:

[big num stuff]
>>>It will work of course, and the 38 ton trailer will have a cruising
speed of several centimeters/hour...
Fine, if you say so - but that's *my* problem, right? Nobody else's.
I'm not advocating that anyone else should use this software. For
the record, however, yes, my bignum library is significantly slower
than GMP (it takes about 20 times as long as GMP to calculate
factorials, for instance). So what?

How long does it take to calculate 5^(4^(3^(2^1)))? (Ie. raising to
the power.)

This came up in comp.programming a few weeks back. I knocked up a
quick library that used byte arrays and stored a single decimal
digit in each byte. And that was in interpreted code! (Only add
unsigned and multiply by single digit were compiled.)

This took some 15 minutes to calculate the 183000-digit result (on a
slowish Pentium cpu).

Just interested in how much extra work is needed...

The french package "pari" calculates that in around 0.5 seconds
(In a dual core amd)
So I need to make it about 1000 times faster then, on my slower machine. No
problem..

--
Bart
Apr 7 '08 #16

Bartc said:

<snip>
How long does [RH's bignum lib] take to calculate 5^(4^(3^(2^1)))?
(Ie. raising to the power.)
Around 25 seconds on a 1.4GHz Athlon.
This came up in comp.programming a few weeks back. I knocked up a quick
library that used byte arrays and stored a single decimal digit in each
byte. And that was in interpreted code! (Only add unsigned and multiply
by single digit were compiled.)

This took some 15 minutes to calculate the 183000-digit result (on a
slowish Pentium cpu).
The difference is partly explained by your use of a single decimal digit
(whether it be 0 to 9 or '0' to '9') in each byte rather than a full
2-to-the-CHAR_BIT possible values. Bearing in mind that most uses of
bignum libraries are *not* for calculating 183000-digit numbers or
anything like them, the "inefficiency" of your library is unlikely to be
particularly significant. If you're only dealing with numbers of, say, one
or two hundred decimal digits (which is quite typical for modern
cryptographic applications), I doubt very much whether your losses through
inefficiency will be anything like as big as your gains in terms of
readability, simplicity, and maintainability.

And if you /do/ need the extra speed, you know how to get it.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 7 '08 #17

"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:yN******************************@bt.com...
Bartc said:

<snip>
>How long does [RH's bignum lib] take to calculate 5^(4^(3^(2^1)))?
(Ie. raising to the power.)

Around 25 seconds on a 1.4GHz Athlon.
>This came up in comp.programming a few weeks back. I knocked up a quick
library that used byte arrays and stored a single decimal digit in each
byte. And that was in interpreted code! (Only add unsigned and multiply
by single digit were compiled.)

This took some 15 minutes to calculate the 183000-digit result (on a
slowish Pentium cpu).

The difference is partly explained by your use of a single decimal digit
(whether it be 0 to 9 or '0' to '9') in each byte rather than a full
2-to-the-CHAR_BIT possible values. Bearing in mind that most uses of
bignum libraries are *not* for calculating 183000-digit numbers or
anything like them, the "inefficiency" of your library is unlikely to be
particularly significant. If you're only dealing with numbers of, say, one
or two hundred decimal digits (which is quite typical for modern
cryptographic applications), I doubt very much whether your losses through
inefficiency will be anything like as big as your gains in terms of
readability, simplicity, and maintainability.

And if you /do/ need the extra speed, you know how to get it.
With wide numbers (in the hundreds of bits) the schoolbook algorithm for
multiplication is a bad idea.

For numbers in this range, Karatsuba or Toom-Cook multiplication is used.

For very wide numbers, multiplication is performed using FFTs.

The best way to speed up the fundamental arithmetic operations for large
numbers is to change the fundamental underlying algorithm that performs
them.
See (for instance):
http://mathworld.wolfram.com/Karatsu...plication.html
http://numbers.computation.free.fr/C...ithms/fft.html
** Posted from http://www.teranews.com **
Apr 8 '08 #18
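The saving Karatsuba gives is visible at a single level: splitting each operand at a half-width base B turns four half-size multiplications into three, via z1 = (x1+x0)(y1+y0) - z2 - z0. A sketch on 32-bit operands (using C99's `unsigned long long` for the demonstration; real bignum code applies this recursively over digit arrays):

```c
/* One Karatsuba level on 32-bit operands, base B = 2^16.
   With x = x1*B + x0 and y = y1*B + y0,
       x*y = z2*B^2 + z1*B + z0,  z1 = (x1+x0)*(y1+y0) - z2 - z0,
   costing three half-width multiplications instead of four. */
static unsigned long long karatsuba32(unsigned long x, unsigned long y)
{
    unsigned long x1 = x >> 16, x0 = x & 0xFFFFUL;
    unsigned long y1 = y >> 16, y0 = y & 0xFFFFUL;
    unsigned long long z2 = (unsigned long long)x1 * y1;
    unsigned long long z0 = (unsigned long long)x0 * y0;
    unsigned long long z1 =
        (unsigned long long)(x1 + x0) * (y1 + y0) - z2 - z0;
    return (z2 << 32) + (z1 << 16) + z0;
}
```

Applied recursively, this is what takes multiplication from O(n^2) schoolbook cost down to about O(n^1.585).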

Dann Corbit said:
"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...
<snip>
>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.

Neither of those libraries are a good choice for 64 bit operations.
Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is calcs
involving RSA and D-H, for both of which 64 bits isn't anywhere like
enough.

<snip>

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 8 '08 #19

On Apr 7, 11:23 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
Dann Corbit said:
"Richard Heathfield" <r...@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>
[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is calcs
involving RSA and D-H, for both of which 64 bits isn't anywhere like
enough.
There are also lots of useful calculations that benefit directly from
64 bit math like Chess programming (uses a 64 bit integer to hold the
board representation frequently) and things like the UMAC hash:
http://fastcrypto.org/umac/

Database systems often need to use 64 bit values, because 32 bit
values (e.g. transaction IDs or record serial ID values) tend to
overflow over time.
Apr 8 '08 #20
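The chess bitboard trick mentioned above illustrates why a native 64-bit type matters: the 64 squares of the board map one-to-one onto the bits of a single integer (here C99's `unsigned long long`, exactly the type C90 lacks). A minimal sketch:

```c
/* One bit per square: square (file, rank), with file and rank each
   in 0..7, maps to bit rank*8 + file of a 64-bit word. */
typedef unsigned long long Bitboard;

static Bitboard set_square(Bitboard b, int file, int rank)
{
    return b | (1ULL << (rank * 8 + file));
}

static int occupied(Bitboard b, int file, int rank)
{
    return (int)((b >> (rank * 8 + file)) & 1ULL);
}
```

Whole-board set operations (all occupied squares, attack masks) then become single AND/OR/shift instructions on 64-bit hardware.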

Richard Heathfield wrote:
Dann Corbit said:
>"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>
>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is calcs
involving RSA and D-H, for both of which 64 bits isn't anywhere like
enough.

<snip>
Never used a file bigger than 4GB?

With disks of 500GB now, a database of more than 4GB is
quite common.

Timestamps are 64 bits now, under windows.

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Apr 8 '08 #21

jacob navia said:
Richard Heathfield wrote:
>Dann Corbit said:
>>"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>
>>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is
calcs involving RSA and D-H, for both of which 64 bits isn't anywhere
like enough.

<snip>

Never used a file bigger than 4GB?
You really don't get it, do you? Right - file sizes are growing rapidly.
What makes you think they will stop growing when they hit 2^64?
With disks of 500GB now, a database of more than 4GB is
quite common.
At least one UK organisation (the Met Office) currently adds more than a
Terabyte to its data store *every day*, and the rate of increase has
itself been increasing over the years. My point is not that 32 bits are
enough, but that 64 are *not* enough.
Timestamps are 64 bits now, under windows.
That's nice.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Apr 8 '08 #22

On Apr 8, 3:42 am, Richard Heathfield <r...@see.sig.invalid> wrote:
jacob navia said:


Richard Heathfield wrote:
Dann Corbit said:
>"Richard Heathfield" <r...@see.sig.invalid> wrote in message
news:hN******************************@bt.com...
<snip>
>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.
Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is
calcs involving RSA and D-H, for both of which 64 bits isn't anywhere
like enough.
<snip>
Never used a file bigger than 4GB?

You really don't get it, do you? Right - file sizes are growing rapidly.
What makes you think they will stop growing when they hit 2^64?
With disks of 500GB now, a database of more than 4GB is
quite common.

At least one UK organisation (the Met Office) currently adds more than a
Terabyte to its data store *every day*, and the rate of increase has
itself been increasing over the years. My point is not that 32 bits are
enough, but that 64 are *not* enough.
You really don't get it, do you? Real hardware
and real files are a little different from imaginary
exponential growth of the Earth's population. 2^64 bytes is
2^24 (sixteen million) terabytes; at a hundred terabytes a day
that is some 495 years. Well, nevermind, that organization
perhaps can have 16 million times a terabyte of
storage. That organization probably can write a new OS
with fancy filesystem to make that storage into
one big file. Well, okay. Is it going to use
bignum for file offsets? Still unlikely. Most likely
it will use 128 bits and that's going to be enough
for next ten years for sure.

Anyway, is it really such a strange idea that
for some range of problems 32 bits are enough,
and for some bigger range of problems 64 bits
are enough? Say, 64 bits *are* enough if you
want to deal with files today...

Yevgen
Apr 8 '08 #23

Richard Heathfield wrote:
>
At least one UK organisation (the Met Office) currently adds more than a
Terabyte to its data store *every day*, and the rate of increase has
itself been increasing over the years. My point is not that 32 bits are
enough, but that 64 are *not* enough.
Support is already there, the largest supported file size in the largest
file system I know of (ZFS) is 16 exbibytes (2^64 bytes). Mind you, the
largest pool size is 256 zebibytes, so you could store quite a few of them.

--
Ian Collins.
Apr 8 '08 #24

On Apr 8, 8:42 am, Richard Heathfield <r...@see.sig.invalid> wrote:
jacob navia said:
Richard Heathfield wrote:
Dann Corbit said:
>"Richard Heathfield" <r...@see.sig.invalid> wrote in message
news:hN******************************@bt.com...
<snip>
>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.
Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is
calcs involving RSA and D-H, for both of which 64 bits isn't anywhere
like enough.
<snip>
Never used a file bigger than 4GB?

You really don't get it, do you? Right - file sizes are growing rapidly.
What makes you think they will stop growing when they hit 2^64?
64-bits is useful /now/ (and has been for a while) for such things as
file sizes.

64-bits datatypes are commonly available to anyone who doesn't insist
on C90 for example.

Exceeding 64-bits for file sizes, that's not going to be common, and
there are a number of techniques for dealing with that: emulate 96/128-
bit values, use a 2-part value, address files in units other that
bytes, and so on. Most people won't have to bother.

Totally disregarding migrating from 32 to 64-bits, simply because one
day in the future 64-bits may not quite be enough, is silly. Like not
buying that medium sized car now because, in ten years, you might need
a bigger one...

You may have noticed that many machines now have a 64-bit native type,
and totally ignoring that fact is also silly.
--
Bart
Apr 8 '08 #25

Richard Heathfield <rj*@see.sig.invalid> writes:
jacob navia said:
>Richard Heathfield wrote:
>>Dann Corbit said:

"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>

[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.
Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is
calcs involving RSA and D-H, for both of which 64 bits isn't anywhere
like enough.

<snip>

Never used a file bigger than 4GB?

You really don't get it, do you? Right - file sizes are growing rapidly.
What makes you think they will stop growing when they hit 2^64?
They won't - but do you realise just how big a file can get with 64 bits?
Apr 8 '08 #26

Richard wrote:
>What makes you think they will stop growing when they hit 2^64?

They won't - but do you realise just how big a file can get with 64 bits?
Let's suppose the machines arrive at the density of DNA, i.e. around
(very roughly) 50 atoms/bit.

We have:
50*2^64 = 922337203685477580800
Divided by Avogadro's number, 6.022E23, that

is 0.001531612 liters

1.5 ml. The volume then, is just 1.5 ml.

Of course there is the packing overhead; let's say that
you multiply the volume per bit by 10. You arrive at
about 15 ml for a file of 2^64 bits!

In bytes you multiply by 8, and you get roughly 120 ml as the volume
needed to store a file of 2^64 bytes.

Very feasible.

Probably by 2020-2025
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Apr 8 '08 #27

Richard Heathfield <rj*@see.sig.invalid> writes:
Dann Corbit said:
>"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>
>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.

Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is calcs
involving RSA and D-H, for both of which 64 bits isn't anywhere like
enough.
No, but every doubling of integer width that is allowed roughly
doubles the speed of RSA and D-H calculations. If C99 (or C0x)
becomes widespread, faster portable RSA and D-H code will be possible.
The fastest cryptographic code will always use non-portable
extensions, but a portable 32-bit multiply (=64 bit result) would be
very useful.

--
Ben.
Apr 8 '08 #28
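The "portable 32-bit multiply (=64 bit result)" Ben mentions can in fact be written in pure C90 with 16-bit partial products. A sketch (function and macro names are hypothetical):

```c
/* Full 32x32 -> 64-bit unsigned multiply in C90: split each operand
   into 16-bit halves, form the four partial products, and carry the
   middle column into the high word.  The result is returned as two
   32-bit words, since C90 has no 64-bit type to hold it. */
#define LO16(x) ((x) & 0xFFFFUL)
#define HI16(x) (((x) >> 16) & 0xFFFFUL)

static void mul32x32(unsigned long a, unsigned long b,
                     unsigned long *hi, unsigned long *lo)
{
    unsigned long ll = LO16(a) * LO16(b);
    unsigned long lh = LO16(a) * HI16(b);
    unsigned long hl = HI16(a) * LO16(b);
    unsigned long hh = HI16(a) * HI16(b);
    unsigned long mid = HI16(ll) + LO16(lh) + LO16(hl); /* middle column */

    *lo = (LO16(ll) | (LO16(mid) << 16)) & 0xFFFFFFFFUL;
    *hi = (hh + HI16(lh) + HI16(hl) + HI16(mid)) & 0xFFFFFFFFUL;
}
```

This is precisely the limb-by-limb primitive that bignum and cryptographic libraries build their wide multiplications from, which is why a guaranteed widening multiply would speed up portable RSA and D-H code.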

On Tue, 08 Apr 2008 06:23:16 +0000, Richard Heathfield
<rj*@see.sig.invalid> wrote:
>Dann Corbit said:
>"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:hN******************************@bt.com...

<snip>
>>[...] Those who want a
super-fast bignum library would be well advised to use GMP or Miracl.

Neither of those libraries are a good choice for 64 bit operations.

Quite so, although the way I see it, if you need more than 32 bits, you
probably need arbitrarily many bits, or at least way more than 64. The
most common use I'm aware of for needing more than C90 gives you is calcs
involving RSA and D-H, for both of which 64 bits isn't anywhere like
enough.
Financial calculations and reporting by governments and
corporations. 32 bits is not enough; 48 bits suffices for
now; 64 bits is insurance.


Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
Apr 8 '08 #29
