
# Some questions, big numbers library

Hi,

I have to implement a fast library for big-number arithmetic. I'll store
the number in an array of unsigned long variables, and now I have a
question: which will be faster?

a) storing values from 0 to 999999999 in each cell of the array,
so for example the number 1111222233334444 will look like this in the array:
[233334444] [1111222] // <- I'm reversing the cells' order
+ with this method, number<->ASCII string conversion will be fast
- the arithmetic (for example multiplication) will be slow (because for
carry detection I'll have to divide a lot by 1000000000)

b) storing values from 0 to (2^32)-1 in each cell (so every bit
combination). But with this method I have to change base from
decimal to binary, which is too slow for very big numbers.
So for example the number 1111222233334444 will look like this in the array:
[111111001010100110] [10010110011111000000001010101100] // <- these
numbers are in binary
- with this method, number<->ASCII string conversion will be slow
+ the arithmetic (for example multiplication) will be fast

In my program I'm running a lot of arithmetic algorithms, and at the
beginning I'm printing the result. I was thinking the second method
would be much faster, but... converting from number to ASCII string
takes a lot of time (when the number has more than 100000 digits). Does
anyone have a fast base-conversion algorithm for very long numbers, or
should I use method a)?
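One standard improvement over digit-at-a-time conversion is to peel nine decimal digits per pass by long-dividing the whole binary-limb number by 10^9. It is still O(n^2) overall, but with roughly a 9x smaller constant; the asymptotically fast approach is divide-and-conquer splitting on powers of 10, which needs fast multiplication. A hedged sketch (function name and layout are my own, not from any library):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Convert a base-2^32 number (least significant limb first) to a decimal
// string, nine digits per division pass.
std::string to_decimal(std::vector<uint32_t> n) {
    std::vector<uint32_t> chunks;  // base-10^9 digits, least significant first
    do {
        uint64_t rem = 0;
        for (size_t i = n.size(); i-- > 0; ) {   // long division by 10^9
            uint64_t cur = (rem << 32) | n[i];
            n[i] = static_cast<uint32_t>(cur / 1000000000);
            rem = cur % 1000000000;
        }
        chunks.push_back(static_cast<uint32_t>(rem));
        while (!n.empty() && n.back() == 0) n.pop_back();
    } while (!n.empty());
    std::string s = std::to_string(chunks.back());  // top chunk: no padding
    char buf[16];
    for (size_t i = chunks.size() - 1; i-- > 0; ) {
        std::snprintf(buf, sizeof buf, "%09u", chunks[i]);  // zero-pad inner chunks
        s += buf;
    }
    return s;
}
```

On the example from the question, the two binary limbs shown above convert back to "1111222233334444".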

Hope you understood all this...

Greets&Thanks&Happy Easter
peter_k

Apr 16 '06 #1
pk*******@gmail.com wrote:
[..]
In my program I'm running a lot of arithmetic algorithms, and at the
beginning I'm printing the result. I was thinking the second method
would be much faster, but... converting from number to ASCII string
takes a lot of time (when the number has more than 100000 digits). Does
anyone have a fast base-conversion algorithm for very long numbers, or
should I use method a)?

Several thoughts. First, there are existing implementations of arbitrary-
precision arithmetic; why not use one of them? Second, BCD encoding is
probably your best bet; see Knuth for recommendations and algorithms.
Third, unless somebody here has already implemented several different
arbitrary-precision libraries, who in c.l.c++ would know which way is
faster? Besides, faster for what? How much calculation versus I/O does
your program do? Unless you know exactly, there is no way to
tell *before* your program is ready. Only by testing and clocking the
execution will you be able to tell which parts are fast and which are
slow, and then decide what to do about it. Do not optimize prematurely.
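The "test and clock" advice can be as small as this (a sketch; `compute` is a placeholder for whichever arithmetic or conversion step you want to measure):

```cpp
#include <cassert>
#include <chrono>

// Time one call of any callable with std::chrono's monotonic clock.
template <class F>
double seconds(F&& compute) {
    auto t0 = std::chrono::steady_clock::now();
    compute();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```

Run both limb schemes through the same workload with something like this before committing to either.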

V
--
Please remove capital As from my address when replying by mail
Apr 16 '06 #2
