Bytes | Software Development & Data Engineering Community

How to write a small graceful gcd function?

My small function works, but I have some questions, and I would like to
hear your comments on how it is implemented.

1. The function does not check whether parameter x is larger or smaller
than parameter y.

2. Is it better to use unsigned int or unsigned long as the type of
parameters x and y? That change might make the if statement unnecessary.

More comments are still welcome.

/*The Greatest Common Divisor, GCD for short, of two positive integers
can be computed with Euclid's division algorithm. Let the given numbers
be a and b, a >= b. Euclid's division algorithm has the following
steps:
1. Compute the remainder c of dividing a by b.
2. If the remainder c is zero, b is the greatest common divisor.
3. If c is not zero, replace a with b and b with the remainder c. Go
back to step (1).

http://www.cs.mtu.edu/~shene/COURSES...hap04/gcd.html */

/*computes the GCD (Greatest Common Divisor) of positive integers x and
y with Euclid's algorithm. return the GCD, or -1 for failure. - jhl,
Jul 2006*/

int gcd(int x, int y){
    if (x > 0 && y > 0){
        while (x % y){
            /* The next three steps set (x, y) = (y, x % y) without a
               temporary variable; note that x + y can overflow for
               large int values. */
            y = x + y;
            x = y - x;
            y = (y - x) % x;
        }
    } else {
        y = -1; /*input invalid*/
    }
    return y;
}
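
On question 2: unsigned parameters do remove the need for the sign test,
and the argument order does not matter, because if x < y the first
remainder operation simply swaps the operands. A minimal sketch (the name
gcd_u and the convention of returning 0 for gcd(0, 0) are illustrative,
not from the original post):

/* Sketch: unsigned Euclid. No validity check needed; any argument
   order works. By the convention assumed here, gcd(0, 0) = 0. */
unsigned int gcd_u(unsigned int x, unsigned int y)
{
    while (y != 0) {
        unsigned int r = x % y;  /* (x, y) <- (y, x mod y) */
        x = y;
        y = r;
    }
    return x;
}

For example, gcd_u(12, 18) steps through (12, 18) -> (18, 12) ->
(12, 6) -> (6, 0) and returns 6.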

lovecreatesbeauty

Jul 15 '06 · 74 posts · 7491 views
"Tom St Denis" <to********@gma il.comwrites:
Ben Pfaff wrote:
>Are we talking about the same binary GCD algorithm that's in
Knuth? There's a wikipedia page about it:
http://en.wikipedia.org/wiki/Binary_GCD_algorithm

(Every algorithm is in Knuth if you look hard enough.)

Hmm, lots of them are, but not all. [...]
For what it's worth, that was meant as a joke.
--
"Give me a couple of years and a large research grant,
and I'll give you a receipt." --Richard Heathfield
Jul 17 '06 #31
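
For readers following the reference: a sketch of the binary GCD described
on the cited Wikipedia page, which trades Euclid's division for shifts and
subtraction. The function name and exact structure here are illustrative:

/* Sketch of the binary GCD: only shifts, subtraction, and parity
   tests. Based on the algorithm on the cited Wikipedia page. */
unsigned int gcd_binary(unsigned int u, unsigned int v)
{
    unsigned int shift = 0;

    if (u == 0) return v;
    if (v == 0) return u;

    /* Factor out common powers of two: gcd(2u', 2v') = 2*gcd(u', v'). */
    while (((u | v) & 1u) == 0) {
        u >>= 1;
        v >>= 1;
        shift++;
    }

    /* Remaining factors of two in u are not common; drop them. */
    while ((u & 1u) == 0)
        u >>= 1;

    do {
        while ((v & 1u) == 0)    /* likewise not common */
            v >>= 1;
        if (u > v) {             /* keep u <= v, then subtract */
            unsigned int t = u; u = v; v = t;
        }
        v -= u;                  /* gcd(u, v) = gcd(u, v - u) */
    } while (v != 0);

    return u << shift;           /* restore the common factor of two */
}

On hardware without a divide instruction this is typically faster than the
remainder-based loop, which is the point of the algorithm.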

"Ben Pfaff" <bl*@cs.stanfor d.eduwrote in message
news:87******** ****@benpfaff.o rg...
"Tom St Denis" <to********@gma il.comwrites:
>Ben Pfaff wrote:
>>Are we talking about the same binary GCD algorithm that's in
Knuth? There's a wikipedia page about it:
http://en.wikipedia.org/wiki/Binary_GCD_algorithm

(Every algorithm is in Knuth if you look hard enough.)

Hmm, lots of them are, but not all. [...]

For what it's worth, that was meant as a joke.
--
"Give me a couple of years and a large research grant,
and I'll give you a receipt." --Richard Heathfield

Jul 17 '06 #32
"Ben Pfaff" <bl*@cs.stanfor d.eduwrote in message
news:87******** ****@benpfaff.o rg...
"Tom St Denis" <to********@gma il.comwrites:
>Ben Pfaff wrote:
>>Are we talking about the same binary GCD algorithm that's in
Knuth? There's a wikipedia page about it:
http://en.wikipedia.org/wiki/Binary_GCD_algorithm

(Every algorithm is in Knuth if you look hard enough.)

Hmm, lots of them are, but not all. [...]

For what it's worth, that was meant as a joke.
It's pretty darn close to correct. It is very unusual to need an algorithm
that is not covered in one of the volumes.

As an aside, some of his algorithms are superior to those in common use...
{And TAOCP was written when?!}

E.g. Merge insertion sorting is fabulous.
--
"Give me a couple of years and a large research grant,
and I'll give you a receipt." --Richard Heathfield

Jul 17 '06 #33
"Tom St Denis" <to********@gma il.comwrote in message
news:11******** **************@ i42g2000cwa.goo glegroups.com.. .
Dann Corbit wrote:
Since division is quadratic, doing gcd with it is also quadratic.
think about it.

Division is not quadratic in hardware.
It will take a definite, fixed number of cycles at most. (17 for a 64 bit
AMD CPU, IIRC).

Actually you're still wrong. It's either quadratic in time or space
[or both]. You can make it O(1) time by taking O(n^2) space, but that's
still "quadratic". You can linearize it, taking O(n) space for O(n)
time, which is NOT quadratic [just not practical for the size of numbers
you work with in processors], by using interpolation [e.g. Karatsuba].

Hint: why do you think multipliers are fast? It isn't because they
violated the O(n^2) rule. It's because the multipliers are big[*].
They take up nearly O(n^2) space to do the multiplication in a trivial
amount of time.
I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
means.

If something takes a fixed number of cycles to complete at most, then it is
not O(log(n)) or O(n^2) or anything else. It is O(1).

Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern
64 bit hardware if the integer values are also 64 bit.
>Subtraction will be faster, but only by a small, fixed constant.

If your hardware does not support division, then the subtraction method
would be a significant advantage.

Or if you have to work with larger numbers. Even with an O(1) division
opcode, the division of n-digit numbers is O(n^2) time. So even if
divide == sub for time on a single digit it'd be slower for larger
numbers [proof: division by shift/subtract is slower than a single
subtraction].
I am not arguing this point. I also agree that the subtraction method is
better. The notion that the hardware version of the algorithm is O(n^2) is
pure fantasy on your part, though.
Tom
[*] Some processors use a Karatsuba trick but so far I haven't heard of
any x86 series using it as a fact....

Jul 17 '06 #34
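
To make the quadratic claim concrete: schoolbook (restoring) division
produces one quotient bit per step, and each step does a full-width
compare and subtract, so n-bit operands cost O(n^2) bit operations when
no hardware divider is assumed. A sketch, with illustrative identifiers:

#include <stdint.h>

/* Sketch of restoring shift/subtract division. Assumes den != 0.
   32 iterations, each doing a full-width compare and subtract:
   n steps of n-bit work, i.e. O(n^2) bit operations -- the cost a
   hardware divider hides behind a fixed cycle count. */
uint32_t div_restoring(uint32_t num, uint32_t den, uint32_t *rem)
{
    uint64_t r = 0;   /* wide partial remainder so r << 1 cannot overflow */
    uint32_t q = 0;
    int i;

    for (i = 31; i >= 0; i--) {
        r = (r << 1) | ((num >> i) & 1u);  /* bring down the next bit */
        if (r >= den) {                    /* full-width compare... */
            r -= den;                      /* ...and subtract */
            q |= (uint32_t)1 << i;
        }
    }
    *rem = (uint32_t)r;
    return q;
}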
"Dann Corbit" <dc*****@connx. comwrites:
[...]
I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
means.

If something takes a fixed number of cycles to complete at most, then it is
not O(log(n)) or O(n^2) or anything else. It is O(1).

Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern
64 bit hardware if the integer values are also 64 bit.
In this case, n is fixed at 64, so O(n) and O(1) are effectively the
same thing.

If you want to handle n > 64 (multiplication of arbitrarily large
numbers), it's probably going to exceed O(1).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 17 '06 #35
Dann Corbit wrote:
I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
means.

If something takes a fixed number of cycles to complete at most, then it is
not O(log(n)) or O(n^2) or anything else. It is O(1).
No, that's wrong. In the case of multiplication they traded space for
performance. That's still a O(n^2) algorithm even though it may take
O(1) time.

What do you think processors just magically multiply 64 bit quantities
with a O(1) space algorithm?

If that were the case we'd be doing RSA in x86 processors in a dozen
cycles with a single opcode.
Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern
64 bit hardware if the integer values are also 64 bit.
They may take O(1) TIME [which even then is debatable] but they
certainly are not O(1) algorithms.

Think of O() as a unit of resources. Multiplication takes n^2
bit-multiplications to compute for n bits [where here a mult is an
AND]. You can compute it serially in n^2 time or if you have n^2
parallel multipliers in O(1) time. The algorithm still takes O(n^2)
resources.

By your logic, expanding a basic multiplier would always be more
efficient than Karatsuba since "it's linear" which simply isn't true.
Or if you have to work with larger numbers. Even with an O(1) division
opcode, the division of n-digit numbers is O(n^2) time. So even if
divide == sub for time on a single digit it'd be slower for larger
numbers [proof: division by shift/subtract is slower than a single
subtraction].

I am not arguing this point. I also agree that the subtraction method is
better. The notion that the hardware version of the algorithm is O(n^2) is
pure fantasy on your part, though.
I said the ALGORITHM is quadratic. Not the implementation. For all I
know your processor HAS a linear time divider [a real one] and
therefore the algorithm is linear. But using the typical notions of a
processor that algorithm is fully quadratic.

Think about it, replace "unsigned long" with "bigint_t". The ALGORITHM
is the same just the numbers changed size. Why is it now quadratic and
before it wasn't?

Tom

Jul 17 '06 #36
"Tom St Denis" <to********@gma il.comwrites:
Dann Corbit wrote:
>I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
means.

If something takes a fixed number of cycles to complete at most, then it is
not O(log(n)) or O(n^2) or anything else. It is O(1).

No, that's wrong. In the case of multiplication they traded space for
performance. That's still a O(n^2) algorithm even though it may take
O(1) time.
You seem to believe that you multiply time by space to obtain the
cost of an algorithm. I've never seen anyone do that before.
Merge sort takes O(n lg n) time and O(n) additional space, so
does that make it an O(n^2 lg n) algorithm?

I could accept that multiplication of fixed-size integers can be
implemented given O(1) time and O(n^2) space. That doesn't make
it "an O(n^2) algorithm"; that's a meaningless thing to say. It
makes it an O(1) time, O(n^2) space algorithm.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Jul 17 '06 #37
Ben Pfaff wrote:
You seem to believe that you multiply time by space to obtain the
cost of an algorithm. I've never seen anyone do that before.
Merge sort takes O(n lg n) time and O(n) additional space, so
does that make it an O(n^2 lg n) algorithm?
No, because it doesn't take n space per unit of time.

Basic multiplication is O(n^2) because there are n^2 steps. Whether
they happen in parallel or not doesn't matter.

Karatsuba is an O(n^1.583) time algorithm because there are n^1.583
multiplications. Whether you do them all in parallel doesn't matter
(and in fact in hardware that's usually the case).

The complexity of an algorithm doesn't change just because you trade
memory or space for time.

Besides, you could do all compares in parallel. Therefore, merge sort
is a O(ln n) algorithm. But wait, space complexity doesn't matter. So
therefore all sorting should be O(1) complexity!!!
I could accept that multiplication of fixed-size integers can be
implemented given O(1) time and O(n^2) space. That doesn't make
it "an O(n^2) algorithm"; that's a meaningless thing to say. It
makes it an O(1) time, O(n^2) space algorithm.
Well first off O() refers to complexity. If you TRADE space for time
that doesn't lower the complexity. It lowers the time. I'll grant you
the TIME complexity of a O(n^2) space multiplier is O(1).

Tom

Jul 17 '06 #38
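
For reference, the Karatsuba trade being argued about replaces four
half-width multiplications with three, at the cost of extra additions;
applied recursively that yields the sub-quadratic bound. A one-level
sketch on 32-bit halves (purely illustrative; no claim about any real
multiplier circuit):

#include <stdint.h>

/* One level of the Karatsuba identity, with B = 2^32:
   (x1*B + x0)(y1*B + y0)
     = x1*y1*B^2 + ((x0 + x1)(y0 + y1) - x0*y0 - x1*y1)*B + x0*y0
   Three multiplications instead of four. All arithmetic wraps mod
   2^64, so the function returns the low 64 bits of the product,
   matching what x * y would give. */
uint64_t mul_karatsuba64(uint64_t x, uint64_t y)
{
    uint64_t x0 = x & 0xFFFFFFFFu, x1 = x >> 32;
    uint64_t y0 = y & 0xFFFFFFFFu, y1 = y >> 32;

    uint64_t lo  = x0 * y0;                          /* mult 1 */
    uint64_t hi  = x1 * y1;                          /* mult 2 */
    uint64_t mid = (x0 + x1) * (y0 + y1) - lo - hi;  /* mult 3 */

    return lo + (mid << 32);  /* hi*B^2 overflows out of the low word */
}

On a machine with a single-cycle multiply this is of course slower than
plain x * y; the trick pays off only once operands exceed the hardware
word, which is why it appears in bignum libraries rather than in word-size
multipliers.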
"Tom St Denis" <to********@gma il.comwrote in message
news:11******** **************@ h48g2000cwc.goo glegroups.com.. .
Dann Corbit wrote:
>I'm sorry, but I'm not wrong. You fundamentally misunderstand what
O(f(n))
means.

If something takes a fixed number of cycles to complete at most, then it
is
not O(log(n)) or O(n^2) or anything else. It is O(1).

No, that's wrong. In the case of multiplication they traded space for
performance. That's still a O(n^2) algorithm even though it may take
O(1) time.
If you use the martian definition.
What do you think processors just magically multiply 64 bit quantities
with a O(1) space algorithm?
Space is constant also. Do you think that the table grows and shrinks as I
multiply different numbers? It does not.
If that were the case we'd be doing RSA in x86 processors in a dozen
cycles with a single opcode.
RSA has to worry about 512 bit numbers and larger, which are not even in
this conversation.
>Subtraction, division, multiplication, modulus, etc. are ALL O(1) on
modern
64 bit hardware if the integer values are also 64 bit.

They may take O(1) TIME [which even then is debatable] but they
certainly are not O(1) algorithms.
O(1) means that it completes in constant time. I guess that A*B or A/C or
A-C will complete in constant time if A,B,C are hardware integers.
Think of O() as a unit of resources. Multiplication takes n^2
bit-multiplications to compute for n bits [where here a mult is an
AND]. You can compute it serially in n^2 time or if you have n^2
parallel multipliers in O(1) time. The algorithm still takes O(n^2)
resources.
The size of the lookup table is fixed. The time to complete the unit of
work is fixed. Both are O(1) by the raw definition of the terms.

If you are talking about implementing arbitrary precision operations, then
your statements make sense. They make no sense at all in this context.
By your logic, expanding a basic multiplier would always be more
efficient than Karatsuba since "it's linear" which simply isn't true.
Something that executes in a fixed amount of time is O(1). That's the
definition of O(1). An O(1) algorithm could take a century to complete
(that's a fixed amount of time) and an exponential algorithm could take a
nanosecond to complete (because of the problem given to the algorithm).
Or if you have to work with larger numbers. Even with an O(1) division
opcode, the division of n-digit numbers is O(n^2) time. So even if
divide == sub for time on a single digit it'd be slower for larger
numbers [proof: division by shift/subtract is slower than a single
subtraction].

I am not arguing this point. I also agree that the subtraction method is
better. The notion that the hardware version of the algorithm is O(n^2)
is
pure fantasy on your part, though.

I said the ALGORITHM is quadratic. Not the implementation. For all I
know your processor HAS a linear time divider [a real one] and
therefore the algorithm is linear. But using the typical notions of a
processor that algorithm is fully quadratic.
The algorithm is not quadratic. The divider finishes in constant time.
Think about it, replace "unsigned long" with "bigint_t". The ALGORITHM
is the same just the numbers changed size. Why is it now quadratic and
before it wasn't?
Why should I replace unsigned long long with arbitrary precision? I made it
very clear that I was discussing the algorithm for hardware implementations.
Tom

Jul 18 '06 #39
Dann Corbit wrote:
No, that's wrong. In the case of multiplication they traded space for
performance. That's still a O(n^2) algorithm even though it may take
O(1) time.

If you use the martian definition.
I don't see the problem here. The algorithm has a complexity of O(n^2)
whether that is space or time it's still that complex.

Why do things get more efficient as you use more resources?
What do you think processors just magically multiply 64 bit quantities
with a O(1) space algorithm?

Space is constant also. Do you think that the table grows and shrinks as I
multiply different numbers? It does not.
Different SIZED NUMBERS. Yes, in fact the size grows quadratically
with the size of the input if you want to keep O(1) time.

What? You think a 16-bit and 64-bit multiplier are the same size for
constant time?
If that were the case we'd be doing RSA in x86 processors in a dozen
cycles with a single opcode.

RSA has to worry about 512 bit numbers and larger, which are not even in
this conversation.
They're the same thing. Fundamentally a "multiplication algorithm" is
used even in the cpu. You're treating the processor as a huge black
box where everything is linear time/space. That's just not the case.
[hint: if it were processors would have more multipliers]
Subtraction, division, multiplication, modulus, etc. are ALL O(1) on
modern
64 bit hardware if the integer values are also 64 bit.
They may take O(1) TIME [which even then is debatable] but they
certainly are not O(1) algorithms.

O(1) means that it completes in constant time. I guess that A*B or A/C or
A-C will complete in constant time if A,B,C are hardware integers.
TIME is not the only measure of complexity.
Think of O() as a unit of resources. Multiplication takes n^2
bit-multiplications to compute for n bits [where here a mult is an
AND]. You can compute it serially in n^2 time or if you have n^2
parallel multipliers in O(1) time. The algorithm still takes O(n^2)
resources.

The size of the lookup table is fixed. The time to complete the unit of
work is fixed. Both are O(1) by the raw definition of the terms.
You're either ignorant of computer science or really obtuse. O()
notation is in terms of the size of the input. And no, for O(1) time
multiplication the table size is neither fixed, nor linearly dependent
on the size of the input.
If you are talking about implementing arbitrary precision operations, then
your statements make sense. They make no sense at all in this context.
It applies just as much to processor components as it does to software.
You're just really naive and think otherwise [regardless, gcd is still
quadratic unless you specify a different way to divide, so it doesn't
matter]
By your logic, expanding a basic multiplier would always be more
efficient than Karatsuba since "it's linear" which simply isn't true.

Something that executes in a fixed amount of time is O(1). That's the
definition of O(1). An O(1) algorithm could take a century to complete
(that's a fixed amount of time) and an exponential algorithm could take a
nanosecond to complete (because of the problem given to the algorithm).
TIME is not the only measure of complexity.
I said the ALGORITHM is quadratic. Not the implementation. For all I
know your processor HAS a linear time divider [a real one] and
therefore the algorithm is linear. But using the typical notions of a
processor that algorithm is fully quadratic.

The algorithm is not quadratic. The divider finishes in constant time.
This is impossible. The algorithm isn't constant. The implementation
may be [due to a specific multiplier] but as you wrote it the algorithm
itself is quadratic time because division is of O(n^2) complexity.
Think about it, replace "unsigned long" with "bigint_t". The ALGORITHM
is the same just the numbers changed size. Why is it now quadratic and
before it wasn't?

Why should I replace unsigned long long with arbitrary precision? I made it
very clear that I was discussing the algorithm for hardware implementations.
Ok but unless you use a specific division opcode which is of O(1)
complexity [none exist btw] then the algorithm itself isn't of O(1)
complexity.

You're confusing runtime with the actual implementation details. Your
multiplier is fast because it takes O(n^2) space. It's still the same
O(n^2) multiplication algorithm you learned as a 6 year old just all of
the mults are done in parallel.

Complexity is a measure of both space and time not just time.

I suppose it's too much to tell you that addition is O(n) complexity
too then right?

....

Tom

Jul 18 '06 #40
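
Tom's parting claim about addition can be made concrete the same way: as
a bit-level algorithm, addition is n full-adder steps with the carry
rippling through every position. A sketch (illustrative only):

#include <stdint.h>

/* Sketch of addition as the O(n) bit-serial algorithm alluded to
   above: one full-adder step per bit, the carry rippling through all
   n positions. Hardware spends gates (carry-lookahead) to cut the
   O(n) latency -- the same space-for-time trade discussed above for
   multipliers. */
uint32_t add_ripple(uint32_t a, uint32_t b)
{
    uint32_t sum = 0, carry = 0;
    int i;

    for (i = 0; i < 32; i++) {
        uint32_t ai = (a >> i) & 1u, bi = (b >> i) & 1u;
        sum  |= (ai ^ bi ^ carry) << i;            /* sum bit */
        carry = (ai & bi) | (carry & (ai ^ bi));   /* carry out */
    }
    return sum;
}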
