Bytes IT Community

Is there an algorithm for integer multiply and divide at the same time?

Hi,

If I have three 64 bit integers and I want to do this operation on them:

x*y/z

Let's say that what we are multiplying by (y) is offset by what we are
dividing by (z) so that the final answer will fit in a 64-bit integer.

Let me simplify it by using unsigned chars (8 bits):

254*253/252 = 255

But, if we only had 8 bit integers to work with, is there an algorithm that
would use 8 bit integers only to come up with the right answer of 255 ? The
obvious problem is that the multiplication yields a number much larger than
an 8-bit integer.

Where I am going with this is that sometimes I need to multiply and divide
large 64-bit integers and as long as the resulting number fits in a 64-bit
integer, can I do this with 64-bit integers alone?

Thanks,

Alan
www.sadevelopment.com
Jul 2 '08 #1
14 Replies


On Jul 1, 10:49 pm, "Default User" <nospam38...@forme.com> wrote:
Hi,

If I have three 64 bit integers and I want to do this operation on them:

x*y/z

But, if we only had 8 bit integers to work with, is there an algorithm that
would use 8 bit integers only to come up with the right answer of 255 ? The
obvious problem is that the multiplication yields a number much larger than
an 8-bit integer.
I think this should work well

x*(y/z) or try this (x/z)*y

Darío
Jul 2 '08 #2

Default User wrote:
Hi,

If I have three 64 bit integers and I want to do this operation on them:

x*y/z

Lets say that what we are multiplying by (y) is offset by what we are
dividing by (z) so that the final answer will fit in a 64-bit integer.

Let me simplify it by using unsigned chars (8 bits):

254*253/252 = 255

But, if we only had 8 bit integers to work with, is there an algorithm
that
would use 8 bit integers only to come up with the right answer of 255 ?
The obvious problem is that the multiplication yields a number much larger
than an 8-bit integer.

Where I am going with this is that sometimes I need to multiply and divide
large 64-bit integers and as long as the resulting number fits in a 64-bit
integer, can I do this with 64-bit integers alone?
It might be easier (and maybe also more efficient) to use double or long
double for the computation and then cast back (provided they have more than
a 64-bit mantissa on your target platform). However, there could be some
tricky analysis involved to prove the code correct.
Best

Kai-Uwe Bux
Jul 2 '08 #3

On Jul 2, 4:29 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
Default User wrote:
If I have three 64 bit integers and I want to do this
operation on them:
x*y/z
Lets say that what we are multiplying by (y) is offset by
what we are dividing by (z) so that the final answer will
fit in a 64-bit integer.
Let me simplify it by using unsigned chars (8 bits):
254*253/252 = 255
But, if we only had 8 bit integers to work with, is there an
algorithm that would use 8 bit integers only to come up with
the right answer of 255 ? The obvious problem is that the
multiplication yields a number much larger than an 8-bit
integer.
Where I am going with this is that sometimes I need to
multiply and divide large 64-bit integers and as long as the
resulting number fits in a 64-bit integer, can I do this
with 64-bit integers alone?
It might be easier (and maybe also more efficient) to use
double or long double for the computation and then case back
(provided they have more than 64 bit mantissa on your target
platform).
Do you know of such a platform? The intermediate value he's
concerned with can have up to 127 bits, so you'd need a mantissa
of 127 bits. I think some have existed (perhaps some of the old
CDC mainframes), but I don't know of any today.
However, there could be some tricky analysis involved to prove
the code correct.
*If* the conditions are exactly as he stated, then it's easy to
prove the code incorrect (and even find example values which
would fail a test) for IEEE double, or any representation which
uses less than 127 bits in the mantissa.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 2 '08 #4


Where I am going with this is that sometimes I need to multiply and divide
large 64-bit integers and as long as the resulting number fits in a 64-bit
integer, can I do this with 64-bit integers alone?
There's the MulDiv function from Windows, which is implemented in ReactOS
like:
http://www.reactos.org/generated/dox...8c-source.html
Jul 2 '08 #5

Hi,
There's the MulDiv function from Windows, which is implemented in ReactOS
http://www.reactos.org/generated/dox...8c-source.html

Thanks for the ideas guys, this is basically what I am looking for, but this
code counts on RtlEnlargedUnsignedDivide which in turn counts on
(ULONGLONG).

I realize there are bignum libraries out there, but I am really only looking
for a small single algorithm to do this. I will be dealing with integers
only, and in such a way that I balance the Numerator/Denominator so that I
know the result will fit properly.

In this case I am looking to make 64-bit integers perform 128-bit-like
multiplication/division, but I would think I could easily adapt code
designed to make 32-bit integers perform 64-bit-like
multiplication/division. Does anyone have any simple/short code that made
64-bit integers from 32-bit integers, from a time before 64-bit integers
were available natively in the compiler?

The calculations I am trying to perform can be done in double, but it is 150
times slower according to some testing I've done which is why I'd rather
stick to integers if I can.

Thanks,

Alan
Jul 2 '08 #6

Default User wrote:
Hi,
>There's the MulDiv function from Windows, which is implemented in ReactOS
http://www.reactos.org/generated/dox...8c-source.html


Thanks for the ideas guys, this is basically what I am looking for, but
this code counts on RtlEnlargedUnsignedDivide which in turn counts on
(ULONGLONG).

I realize there are bignum libraries out there, but I am really only
looking
for a small single algorithm to do this. I will be dealing with integers
only, and in such a way that I balance the Numerator/Denominator so that I
know the result will fit properly.

In this case I am looking to make 64 bit integers perform 128 bit like
multiplication/division, but I would think I could easily adapt code
designed to make 32 bit integers perform 64 bit like
multiplication/division. Does anyone have any simple/short code made 64
bit integers from 32 bit integers, from a time before 64 bit integers were
available natively in the compiler?
Addition is easy.

Multiplication can be done like so: split the 64-bit factors into high and
low unsigned longs of 32 bits. Then you have to compute

( a * 2^32 + b ) * ( c * 2^32 + d )
=
a*c *2^64
+
(a-b)*(d-c) *2^32
+
a*c *2^32
+
b*d *2^32
+
b*d

which requires computing the three 64-bit products ac, (a-b)(d-c), and bd.
The rest is bit shifting and addition (which we agreed is easy :-).

Alternatively, you could also do:

( a * 2^32 + b ) * ( c * 2^32 + d )
=
a*c *2^64
+
a*d *2^32
+
b*c *2^32
+
b*d

which requires computing four 64-bit products but is slightly less demanding
on additions and bit shifting (also it does not involve dealing with
differences).

The tricky part is division. I have no idea how to do that efficiently.

The calculations I am trying to perform can be done in double, but it is
150 times slower according to some testing I've done which is why I'd
rather stick to integers if I can.
150 times slower compared to what? If you already have a solution that beats
floating point arithmetic by a factor of 150, you are probably all set and
it won't get any better. If you don't, you seem to be comparing a solution
to a non-solution.
Best

Kai-Uwe Bux
Jul 2 '08 #7

P: n/a
In article <48**********************@news.astraweb.com>,
Default User <no*********@forme.comwrote:
>Hi,
>There's the MulDiv function from Windows, which is implemented in ReactOS
http://www.reactos.org/generated/dox...8c-source.html


Thanks for the ideas guys, this is basically what I am looking for, but this
code counts on RtlEnlargedUnsignedDivide which in turn counts on
(ULONGLONG).

I realize there are bignum libraries out there, but I am really only looking
for a small single algorithm to do this. I will be dealing with integers
only, and in such a way that I balance the Numerator/Denominator so that I
know the result will fit properly.

In this case I am looking to make 64 bit integers perform 128 bit like
multiplication/division, but I would think I could easily adapt code
designed to make 32 bit integers perform 64 bit like
multiplication/division. Does anyone have any simple/short code made 64 bit
integers from 32 bit integers, from a time before 64 bit integers were
available natively in the compiler?

The calculations I am trying to perform can be done in double, but it is 150
times slower according to some testing I've done which is why I'd rather
stick to integers if I can.

Thanks,

Alan
Take a look at
The Art of Computer Programming
Seminumerical Algorithms
Vol 2.
by Donald Knuth

You don't need to go to a full-blown bignum package, since what you're
really wanting is just double precision. So it should be possible for you
to write some small double-precision functions that will provide what
you're looking for.

For instance, you can do a double precision integer multiplication by
doing 3 single precision multiplies along with a few shifts and adds.

Good luck on your search. I'd suggest looking for some double precision
integer packages.
Jul 2 '08 #8

P: n/a
In article <2c978e11-405e-4add-977f-
7e**********@l64g2000hse.googlegroups.com>, ja*********@gmail.com
says...

[ ... ]
Do you know of such a platform? The intermediate value he's
concerned with can have up to 127 bits, so you'd need a mantissa
of 127 bits. I think some have existed (perhaps some of the old
CDC mainframes), but I don't know of any today.
Nope -- on the old CDC mainframes, single precision was 60 bits and
double precision was 100 bits -- but that's still for the whole thing,
including a 20-bit exponent, so the largest significand was 80 bits.

I believe some compilers for the Crays and DEC Alphas did marginally
better, with a 128-bit floating point number, but again that was for the
significand AND exponent, not the significand alone.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 2 '08 #9

On Jul 2, 7:16 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
Default User wrote:
[snip]
>The tricky part is division. I have no idea how to do that efficiently.
[snip]
not sure but if one splits the 128-bit product into 64-bit digits:

int64 proH, proL;

then (s)he might be able to use the algorithm that has been used - for
hundreds of years - to manually calculate the decimal division of
numbers. I am a computer-generation guy and do not remember how I used
to divide two numbers in my primary school mathematics course, but
somebody must remember it.

regards,
FM.
Jul 2 '08 #10

On Jul 2, 12:50 pm, terminator <farid.mehr...@gmail.com> wrote:
I am a computer generation guy and do not remember how I used
to divide two numbers in primary school mathematics course but
somebody must remember it.
how sad :(
Jul 2 '08 #11

terminator wrote:
On Jul 2, 7:16 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
>Default User wrote:
[snip]
In this case I am looking to make 64 bit integers perform 128 bit like
multiplication/division, but I would think I could easily adapt code
designed to make 32 bit integers perform 64 bit like
multiplication/division. *Does anyone have any simple/short code made
64 bit integers from 32 bit integers, from a time before 64 bit
integers were available natively in the compiler?
[snip]
>The tricky part is division. I have no idea how to do that efficiently.
[snip]
>
not sure but if one splits the 128-bit product in to 64-bit digits:

int64 proH,proL;

then (s)he might be able to use the algorithm that has been used - for
hundreds of years - to manually calculate the decimal division of
numbers.I am a computer generation guy and do not remember how I used
to divide two numbers in primary school mathematics course but
somebody must remember it.
Well, the paper and pencil method of long division requires that you _know_
the multiplication table for one-digit numbers, i.e., you can divide
two-digit numbers by one-digit numbers provided the result has only one
digit. So using base 2^64 for the computation will not quite work. You
could use base 2^32 and use four digits.

However, even then the pencil-and-paper method you were taught requires a
certain degree of guesswork which has to be replaced by rigorous analysis.
Knuth has an exposition of that.
Best

Kai-Uwe Bux
Jul 2 '08 #12

In article <g4**********@aioe.org>, jk********@gmx.net says...

[ ... ]
However, even then the pencil and paper method you were taught requires a
certain degree of guesswork which has to be replaces by rigorous analysis.
Knuth has an exposition of that.
There's no guesswork needed when you're working in binary. When you're
working in decimal, you need to figure out one digit of the answer --
i.e. figure out the largest multiple of the divisor (shifted left N
digits) that's smaller than the dividend. Since you're working in
decimal, that number can be anywhere from 0 to 9.

When you're working in binary, that number can only be from 0 to 1, so
you don't have to guess at anything -- either the shifted divisor is
smaller than the current dividend, or it's not. If it's smaller, the
current digit in the answer is a 1. If it's not, the current digit in
the answer is a zero.

Here's a small class that does bitwise multiplication and division on
integers (unsigned, in this case):

#include <iostream>

class integer {

unsigned value;

public:
    integer(unsigned init) : value(init) { }

    integer operator/(integer const &other) const {
        unsigned divisor = other.value;
        int shift_count = 0;
        unsigned answer = 0;
        unsigned dividend = value;

        if (divisor == 0)
            return integer(0);

        // Shift the divisor up toward the dividend, stopping before
        // the top bit shifts out (otherwise the loop could overflow).
        while (!(divisor & 0x80000000u) && divisor < dividend) {
            divisor <<= 1;
            ++shift_count;
        }

        for (; shift_count > -1; --shift_count, divisor >>= 1) {
            if (divisor <= dividend) {
                answer |= (1u << shift_count);
                dividend -= divisor;
            }
        }
        return integer(answer);
    }

    integer operator*(integer const &other) const {
        unsigned temp1 = other.value;
        unsigned temp2 = value;
        unsigned answer = 0;

        while (temp1 != 0) {
            if (temp1 & 1)
                answer += temp2;
            temp2 <<= 1;
            temp1 >>= 1;
        }
        return integer(answer);
    }

    integer operator+(integer const &other) const {
        return integer(value + other.value);
    }

    integer operator-(integer const &other) const {
        return integer(value - other.value);
    }

    bool operator<(integer const &other) const {
        return value < other.value;
    }

    friend std::ostream &operator<<(std::ostream &os, integer const &i) {
        return os << i.value;
    }

    friend std::istream &operator>>(std::istream &is, integer &i) {
        return is >> i.value;
    }
};

Of course, there are other multiplication and division algorithms around
-- in fact, I don't believe this division algorithm is what's used in
most modern hardware.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jul 3 '08 #13

Jerry Coffin wrote:
In article <g4**********@aioe.org>, jk********@gmx.net says...

[ ... ]
>However, even then the pencil and paper method you were taught requires a
certain degree of guesswork which has to be replaces by rigorous
analysis. Knuth has an exposition of that.

There's no guesswork needed when you're working in binary. When you're
working in decimal, you need to figure out one digit of the answer --
i.e. figure out the largest multiple of the divisor (shifted left N
digits) that's smaller than the dividend. Since you're working in
decimal, that number can be anywhere from 0 to 9.

When you're working in binary, that number can only be from 0 to 1, so
you don't have to guess at anything -- either the shifted divisor is
smaller than the current dividend, or it's not. If it's smaller, the
current digit in the answer is a 1. If it's not, the current digit in
the answer is a zero.
[code snipped]

The point is that the OP wants to work in base 2^64 not in base 2. Of
course, it is easy to translate; however, the resulting algorithm is not
all that efficient since it has to extract the binary digits of the
quotient one by one. On the plus side: all other operations are just
shifts, subtractions, and comparisons, which are all fairly cheap.

Elsethread the OP already remarked that he could use double but was afraid
that the performance penalty might be too high. I would not expect the
paper and pencil method in binary to beat hardware division. However, only
measurement will show.
Best

Kai-Uwe Bux
Jul 3 '08 #14

On Jul 3, 7:06 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
Jerry Coffin wrote:
In article <g4geok$fv...@aioe.org>, jkherci...@gmx.net says...
[ ... ]
Elsethread the OP already remarked that he could use double
but was afraid that the performance penalty might be too high.
Which makes me wonder. On my machine, a multiply followed by a
divide is faster in double than in int.

If speed is really an issue, and the hardware is an x86-64, then
his problem can be solved with two machine instructions:
multiplying two 64 bit values results in a 128 bit value, and
division is a 128 bit value divided by a 64 bit value.
Historically, having a multiply instruction which returned a
value with twice the width of its operands, and having division
which took a numerator of twice the width of the denominator and
the result were pretty much standard practice at the assembler
level. Modern architectures seem to have gotten away from this,
however, because no high level language supports it, and the
extra registers it uses introduced awkward constraints in the
register allocation routines in the compiler. (On the IBM 360's
architecture, the C compiler I used always used R0/R1 for
multiplication and division, and never used these registers for
anything else. Which, of course, doesn't result in particularly
good optimization.)
I would not expect the paper and pencil method in binary to
beat hardware division. However, only measurement will show.
I would normally expect that if you wrote a*b/c, a good compiler
on an Intel architecture would generate something which would
actually give the right results, even if there was an
intervening overflow (at least for signed values---the standard
doesn't allow it for unsigned), not for any intrinsic reasons,
but because the fastest code to evaluate this expression will
give the right results (and according to the standard, if
overflow occurs, the behavior is undefined, so it's certainly
not incorrect to give the correct results). A quick look at the
code generated by g++, however (for 32 bits, since the only
Intel architectures I have access to here are 32 bits), shows
that it systematically inserts an extra cltd instruction between
the multiply and the divide. (CLTD seems to correspond to what
Intel names CDQ.) Whether this is intentional, because g++
defines behavior in case of overflow, or whether it is simply
careless code generation (if the operand to the multiply comes
from any source other than the results of a division, it is
necessary), I don't know.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jul 3 '08 #15

This discussion thread is closed

Replies have been disabled for this discussion.