P: n/a

Hi everyone.
I am a little confused and frustrated as to how the maximum lengths for
integers and floating point numbers came about, and whilst you usually
shouldn't need a number bigger than 2147483647, I am sure the need is
sometimes there.
Please could someone explain roughly the key things that I should be
looking for, as my Google searches don't tend to yield the answers to my
questions. I understand that it varies from machine to machine due to
the word size of that machine, but that's about all I do get. Why
couldn't integers be dynamic like strings? Why are they limited to 64
bits, and floats (I think) to 128 bits?
I am contemplating making a class that can handle numbers of almost
unlimited length, as I can't seem to find any online. Then again I
haven't looked very hard, as I only want to make this class to see if I
can do it, and to broaden my knowledge a little more. I think it's safe
to assume that there are plenty of people who have thought about this
besides me.
Your input would be much appreciated as always.  
P: n/a

"Daz" writes: I am a little confused and frustrated as to how the maximum lengths for integers and floating point numbers came about [...] I think it's safe to assume that there are plenty of people who have thought about this besides me.
Computers sold to compute, as opposed to embedded computers, are almost
always based on an eight-bit byte. Hardware and eight-bit bytes are a
comfortable match.
If I were to set out to do what you describe, I would use what is called BCD
(binary-coded decimal) encoding. This eliminates some potentially nasty
conversions between binary and a decimal number that a human can deal with.
Each byte would contain only the values 0 through 9, and the sign of the
number would be maintained separately. You will need to know about carry in
and carry out. I think Wikipedia is the best place to start on this
problem.
This looks pretty good: http://en.wikipedia.org/wiki/Binary-coded_decimal
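A minimal sketch of the scheme described above, with one decimal digit per byte and the carry handled column by column (the names and layout here are my own, not from any particular library):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One decimal digit (0-9) per byte, least significant digit first.
// The sign would be tracked separately; this handles magnitudes only.
typedef std::vector<unsigned char> Digits;

Digits add_digits(const Digits& a, const Digits& b)
{
    Digits sum;
    unsigned carry = 0;                         // carry in for each column
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        unsigned d = carry;
        if (i < a.size()) d += a[i];
        if (i < b.size()) d += b[i];
        sum.push_back((unsigned char)(d % 10)); // digit stays in 0-9
        carry = d / 10;                         // carry out to next column
    }
    return sum;
}
```

Subtraction, multiplication and division follow the same pencil-and-paper patterns, which is why carry in and carry out are the first things to read up on.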
P: n/a

"Daz" <cu********@gmail.com> wrote: I am a little confused and frustrated as to how the maximum lengths for integers and floating point numbers came about, and whilst you shouldn't need a number bigger than 2147483647, I am sure the need is there.
You are right that there is, on occasion, a need for larger integers. A
common application today is cryptography, which uses very large integers
(over 100 digits, in some cases). There are multiple-precision (sometimes
called "bignum") libraries which allow you to do this. It's a lot slower
than fixed-length math, which is why it's not the default in most
languages. Some languages do have built-in bignum support:
Python 2.3.4 (#3, Jun 29 2004, 21:48:03)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 123456789 * 987654321
121932631112635269L
I am contemplating making a class that can handle numbers of almost unlimited length, as I can't seem to see any online.
Search for "bignum" or "multiple precision", and you'll find plenty of them.
Then again I haven't looked very hard, as I only want to make this class to see if I can do it, and to broaden my knowledge a little more.
This is indeed an excellent learning exercise. Go for it.
I think it's safe to assume that there are plenty of people who have thought about this besides me.
This is certainly true, but there's no harm in you thinking about it too.
People are constantly looking at "solved problems" and finding better ways
to do it.
Here are a couple of good Wikipedia articles that you might find useful:
http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
http://en.wikipedia.org/wiki/IEEE_floating-point_standard
P: n/a

Hi Daz,
The project I'm working on is to make integers unlimited, stored as
doubly linked bytes, and I'm working on its mathematical operations like
addition, subtraction...
It's called Multiple Byte integers.
Shuga.  
P: n/a

Daz wrote: Hi everyone.
I am a little confused and frustrated as to how the maximum lengths for integers and floating point numbers came about, and whilst you shouldn't need a number bigger than 2147483647, I am sure the need is there.
In the back of an elementary school math textbook, you may find a
"multiplication table". It lists the numbers from 1 to 10 at the heads of
columns and rows, and it shows the product of each row and column in a
cell. Here's its upper left corner:
        1   2   3
      +-----------
    1 | 1   2   3
    2 | 2   4   6
    3 | 3   6   9
To multiply, a computer draws one of those grids in hardware. The number 2
is relayed into this grid and latches (switches on) the horizontal 2 wire.
The number 3 is then relayed into the grid and latches the vertical 3 wire.
Where the wires cross, the 6 register receives a complete electrical
signal, and it latches a 6 output. Further relays (actually "gates") turn
the 6 wire into a binary 6, 0110, and this latches into the output
register.
The width of a CPU's internal bus determines the width of the grids in its
arithmetic logic unit. So an old 16-bit processor would have ALU grids
with 65,536 wires on each side.
That's why CPUs are so big and hot, and why math with numbers that fit in
a CPU's bus runs faster than math with larger numbers. Larger numbers
require small subroutines to decompose them first, like long multiplication.
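That decompose-and-loop step is exactly schoolbook long multiplication. A rough illustration (my own names, not any particular library; 16-bit "limbs" chosen so each partial product fits in 32 bits):

```cpp
#include <cassert>
#include <cstddef>
#include <stdint.h>
#include <vector>

// A large number as 16-bit "limbs", least significant limb first.
typedef std::vector<uint16_t> Limbs;

// Schoolbook long multiplication: multiply every pair of limbs and
// accumulate each 32-bit partial product into the right columns.
Limbs multiply(const Limbs& a, const Limbs& b)
{
    // 32-bit columns are wide enough here; a production version would
    // use 64-bit accumulators to be safe with very long inputs.
    std::vector<uint32_t> acc(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i) {
        for (std::size_t j = 0; j < b.size(); ++j) {
            uint32_t p = (uint32_t)a[i] * b[j];  // 16x16 -> at most 32 bits
            acc[i + j]     += p & 0xFFFFu;       // low half of the product
            acc[i + j + 1] += p >> 16;           // high half, one column up
        }
    }
    Limbs out(acc.size());
    uint32_t carry = 0;
    for (std::size_t k = 0; k < acc.size(); ++k) {
        uint32_t v = acc[k] + carry;
        out[k] = (uint16_t)(v & 0xFFFFu);
        carry = v >> 16;
    }
    while (out.size() > 1 && out.back() == 0)
        out.pop_back();                          // trim leading zero limbs
    return out;
}
```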
I am contemplating making a class that can handle numbers of almost unlimited length, as I can't seem to see any online.
There are zillions, all called "bigint", so Google that.

Phlip  
P: n/a

Great links everyone.
I have no idea why I didn't think of looking at Wikipedia in the first
place...
Thanks for such great input, as always. :)
Daz  
P: n/a

Daz wrote: Please could someone explain vaguely the key things that I should be looking for as my Google searches don't tend to yield the answers to my questions. I understand that it varies from one machine machine due to the wordsize of that machine, but that's about all I do get. Why couldn't integers be dynamic like strings?
They can; it's just that this is not commonly done in hardware.
Processors are much easier to design when the data types that they work
with are fixed-length records with a rigid structure. This is true of
the instruction opcodes also, not just the data types.
Note that dynamic strings are implemented by the runtime support of
your higher level programming language. Chances are that the processor
you are using has next to no support for strings.
Likewise, "bignum" integers are implemented in libraries. In some
programming languages, like Common Lisp, such numbers are built in,
seamlessly integrated with the type system's ``numeric tower''.
It's unlikely that moving this functionality into the processor would
result in any kind of performance improvement. The processor would have
to implement the same algorithms as the library: basically treat the
big numbers as being made up of arrays of fixed-size smaller numbers,
and use loops to perform the algorithms like addition, long division
and so on. These procedures would probably be written in microcode, and
occupy quite a lot of precious silicon on the processor chip.
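The "array of fixed-size smaller numbers plus a loop" scheme can be sketched for addition like this (illustrative names; real libraries add many refinements):

```cpp
#include <cassert>
#include <cstddef>
#include <stdint.h>
#include <vector>

// Bignum magnitude as 32-bit limbs, least significant first -- an
// array of fixed-size smaller numbers driven by a simple loop.
typedef std::vector<uint32_t> Bignum;

Bignum big_add(const Bignum& a, const Bignum& b)
{
    Bignum sum;
    uint64_t carry = 0;  // 64-bit scratch: limb + limb + carry always fits
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        uint64_t t = carry;
        if (i < a.size()) t += a[i];
        if (i < b.size()) t += b[i];
        sum.push_back((uint32_t)t);   // low 32 bits become this limb
        carry = t >> 32;              // carry out into the next limb
    }
    return sum;
}
```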
On a modern, heavily cached processor, there isn't much difference
between a function that is written in software and a function that is
in microcode. The instructions are so simple to decode that they are
almost like microcode already, and sufficiently small sequences of code
can fit into the instruction cache entirely. Moreover, an instruction
cache can partially decode the instructions, so they resemble microcode
even more. For example, a loop which adds or multiplies two bignum
integers will probably fit entirely into the instruction cache. The nice
thing about that is that when the algorithm is done, the cache can be
reused for something else. This is a better use of silicon than
dedicating it to a library of microcode for complex operations that not
every application needs.
I am contemplating making a class that can handle numbers of almost unlimited length, as I can't seem to see any online. Then again I haven't looked very hard, as I only want to make this class to see if I can do it, and to broaden my knowledge a little more. I think it's safe to assume that there are plenty of people who have thought about this besides me.
You think? Nah ... Only the brightest minds on the bleeding edge of
computer science research are just beginning to think about large
integers and their potential applications, such as public key
cryptography. :)  
P: n/a

"osmium" <r1********@comcast.net> wrote in message
news:4f*************@individual.net...
If I were to set out to do what you describe, I would use what is called BCD (binary coded decimal) encoding. This eliminates some potentially nasty conversions between binary and a decimal number that a human can deal with. Each byte would contain only the values 0 through 9, and the sign of the number would be maintained separately. You will need to know about carry in and carry out. I think Wikipedia is the best place to start on this problem.
Normally, in BCD each byte contains two digits: one digit in the lower
nibble (half a byte), one in the upper nibble.
Question stats: replies: 7; date asked: Jun 18 '06
