Bytes | Software Development & Data Engineering Community

How to write a small graceful gcd function?

My small function works, but I have some questions, and I'd like to
hear your comments on how it is implemented.

1. The function does not check if parameter x is larger or smaller than
parameter y.

2. Is it better to use unsigned int or unsigned long as the type of
parameters x and y? That change might make the if statement unnecessary.

More comments are still welcome.

/*The Greatest Common Divisor, GCD for short, of two positive integers
can be computed with Euclid's division algorithm. Let the given numbers
be a and b, a >= b. Euclid's division algorithm has the following
steps:
1. Compute the remainder c of dividing a by b.
2. If the remainder c is zero, b is the greatest common divisor.
3. If c is not zero, replace a with b and b with the remainder c. Go
back to step (1).

http://www.cs.mtu.edu/~shene/COURSES...hap04/gcd.html */

/*computes the GCD (Greatest Common Divisor) of positive integers x and
y with Euclid's algorithm. return the GCD, or -1 for failure. - jhl,
Jul 2006*/

int gcd(int x, int y)
{
    if (x > 0 && y > 0) {
        while (x % y) {
            y = x + y;       /* these three steps perform        */
            x = y - x;       /* (x, y) <- (y, x % y) without a   */
            y = (y - x) % x; /* temporary variable               */
        }
    } else {
        y = -1; /* input invalid */
    }
    return y;
}
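On question 2: a sketch of what an unsigned version might look like (the name `gcd_u` is mine, not from the thread). With unsigned parameters, negative input is impossible by construction, so the validity check shrinks to deciding what gcd(x, 0) should mean:

```c
#include <assert.h>

/* Euclid's algorithm with unsigned parameters. Negative input cannot
   occur, so no -1 error path is needed; gcd(x, 0) is taken to be x,
   the usual mathematical convention. A sketch, not a drop-in
   replacement for the int version above. */
unsigned int gcd_u(unsigned int x, unsigned int y)
{
    while (y != 0) {
        unsigned int r = x % y; /* step 1: remainder of x / y   */
        x = y;                  /* step 3: replace x with y ... */
        y = r;                  /* ... and y with the remainder */
    }
    return x; /* step 2: remainder hit zero, x holds the GCD */
}
```

Note this version still returns something (zero) for gcd(0, 0); whether that needs a separate error indicator is a design choice, not something the algorithm decides for you.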

lovecreatesbeauty

Jul 15 '06
"Tom St Denis" <to********@gmail.com> wrote in message
news:11**********************@b28g2000cwb.googlegroups.com...
Dann Corbit wrote:
No, that's wrong. In the case of multiplication they traded space for
performance. That's still a O(n^2) algorithm even though it may take
O(1) time.

If you use the martian definition.

I don't see the problem here. The algorithm has a complexity of O(n^2)
whether that is space or time it's still that complex.
If an algorithm finishes in a fixed time and never takes more than a limited
number of clocks to complete, then it is a constant time algorithm. That's
the definition. I am talking about a hardware multiply of a hardware
integer on some C implementation.
Why do things get more efficient as you use more resources?
There is no claim on my part of increased efficiency with greater resources.
What do you think processors just magically multiply 64 bit quantities
with a O(1) space algorithm?

Space is constant also. Do you think that the table grows and shrinks
as I multiply different numbers? It does not.

Different SIZED NUMBERS. Yes, in fact the size grows quadratically
with the size of the input if you want to keep O(1) time.
All the integers involved are 64 bit (or less). The algorithm I supplied
(and the one that you supplied) do not use variable sized registers or
numbers. It is an implementation that uses integers of a single, constant
size.
What? You think a 16-bit and 64-bit multiplier are the same size for
constant time?
In what way does the implementation that you supplied change the size of its
registers as it runs? The answer is: 'the integers in this algorithm are
all of the same size and they do not change width during the operation of
the algorithm'.

I think that a 64-bit hardware number uses an ALU and multiplies in constant
time.
I have not made any claims about changing the size of the multiplier.

The algorithm that YOU supplied uses a constant sized hardware integer.

I think that you are smart enough to know that you are wrong. If not, then
I don't see any point in continuing the discussion.

It's OK to be wrong. I'm wrong a lot. But when you are wrong and you know
you are wrong and you insist that you are right it looks pretty strange.

[snip]
Jul 18 '06 #41
Dann Corbit wrote:
If an algorithm finishes in a fixed time and never takes more than a limited
number of clocks to complete, then it is a constant time algorithm. That's
the definition. I am talking about a hardware multiply of a hardware
integer on some C implementation.
I never said the algorithm was quadratic time. I said it's O(n^2)
complexity. It's linear because the processor trades space for
division time (even then though division algos usually have early outs
so they're not constant time anyways but let's disregard reality for
this conversation since "facts" have waved bye bye long ago).
Different SIZED NUMBERS. Yes, in fact the size grows quadratically
with the size of the input if you want to keep O(1) time.

All the integers involved are 64 bit (or less). The algorithm I supplied
(and the one that you supplied) do not use variable sized registers or
numbers. It is an implementation that uses integers of a single, constant
size.
Your implementation may be constant time (though I wouldn't say that in
a portable C group) but the algorithm is not constant complexity.
I think that you are smart enough to know that you are wrong. If not, then
I don't see any point in continuing the discussion.
You're asserting something I'm not trying to maintain. I said your
algorithm is O(n^2) complexity. I never said it takes that time [or if
I did originally that was in error, and I've been saying otherwise for
the last 10 replies or so...]
It's OK to be wrong. I'm wrong a lot. But when you are wrong and you know
you are wrong and you insist that you are right it looks pretty strange.
Well given that bignum math is one of my fortes, I'm comfortable in my
position. It's you who doesn't really seem to understand complexity
notation or discussion.

If we used your logic, sorting N numbers with bubble sort is N^2 steps,
but what if N is constant? Well, now bubble sort is constant time.
Therefore, bubble sort is the most efficient sort algorithm.

No, bubble sort is still N^2 but your instance is constant time because
the size doesn't change. Similarly, if you always multiply 64-bit
quantities you have a constant time, but the complexity is not constant
since multiplication is not constant complexity. Big-Oh notation is
about asymptotics and in our case about the complexity of something as
the size of the input changes.

Getting back to it. Division is a O(n^2) complexity problem when
implemented serially. There is no getting around that. You have n
bits and you have to perform n n-bit subtractions. That's n^2 bit
operations. You may implement various operations in parallel but that
doesn't change the complexity just the time.
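The "n n-bit subtractions" argument above corresponds to schoolbook (restoring) division done serially. A sketch for 32-bit operands (the name `div_serial` is hypothetical; this illustrates the bit-operation count, not how any particular ALU is wired):

```c
/* Restoring division of a 32-bit dividend: one n-bit compare/subtract
   per quotient bit, i.e. n passes of n-bit work = O(n^2) bit
   operations when the width n is treated as the variable.
   Assumes den != 0. */
unsigned int div_serial(unsigned int num, unsigned int den,
                        unsigned int *rem)
{
    unsigned int q = 0, r = 0;
    for (int i = 31; i >= 0; i--) {
        r = (r << 1) | ((num >> i) & 1u); /* bring down next bit   */
        if (r >= den) {                   /* n-bit compare/subtract */
            r -= den;
            q |= 1u << i;                 /* set this quotient bit  */
        }
    }
    *rem = r;
    return q;
}
```

Fixing the width at 32 (or 64) makes any single call take bounded time, which is exactly the instance-versus-algorithm distinction being argued over.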

Anyways enough ranting. You're too hung up on the specifics of the
problem and are ignoring the actual meaning of what I'm trying to say.
So think whatever the hell you want. Doesn't really matter I guess.

Tom

Jul 18 '06 #42
"Tom St Denis" <to********@gmail.com> wrote in message
news:11*********************@b28g2000cwb.googlegroups.com...
[snip]
According to your definition this algorithm:

int multiply(int a, int b)
{
    return a * b;
}

is O(n^2)

and this algorithm:

int subtract(int a, int b)
{
    return a - b;
}

is O(log(n))

It just shows that you have no idea what the terms mean.

Both of those algorithms are O(1).

Now (on the other hand) if we were implementing multiple precision
algorithms to perform the fundamental operations, then (depending on the
implementation) your argument might have some validity.

As it is, you have just demonstrated your preposterous ignorance for all time
in USENET, to be salted away into the archives for posterity.
Congratulations.
At least some future denizens of earth may get a good chuckle from it.
Jul 18 '06 #43
Dann Corbit wrote:
According to your definition this algorithm:

int multiply(int a, int b)
{
return a*b;
}
is O(n^2)
*complexity*. Yes, that's true.

You can't use O() notation for constant-sized data sets. You're
basically saying "as 64 → ∞ this function takes constant time".
That's nonsense.

However, for the general term "int" without upper bounds the algorithm
has a O(n^2) complexity.
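The O(n^2) complexity being claimed here is that of schoolbook shift-and-add multiplication when the operand width n is the variable. A toy sketch (the name `mul_serial` is mine, not from the thread):

```c
/* Shift-and-add multiplication: one n-bit conditional addition per
   bit of the multiplier, i.e. n passes of n-bit work = O(n^2) bit
   operations in the width n. For a fixed width (say 32 or 64 bits in
   hardware) any one call is bounded-time, but the algorithm's
   complexity as a function of n is still quadratic. */
unsigned int mul_serial(unsigned int a, unsigned int b)
{
    unsigned int product = 0;
    while (b != 0) {
        if (b & 1u)
            product += a; /* n-bit addition */
        a <<= 1;          /* next partial product */
        b >>= 1;          /* next multiplier bit  */
    }
    return product;
}
```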
and this algorithm:

int subtract(int a, int b)
{
return a-b;
}
is O(log(n))
Actually, addition and subtraction have O(n) complexity.
It just shows that you have no idea what the terms mean.
Funny, I was just going to say the same thing.
Both of those algorithms are O(1).
No. They're not. And for the very reason you think, O() notation does
not apply.
Now (on the other hand) if we were implementing multiple precision
algorithms to perform the fundamental operations, then (depending on the
implementation) your argument might have some validity.
Might ... does ... whatever.
As it is, you have just demonstrated your preposterous ignorance for all time
in USENET, to be salted away into the archives for posterity.
Congratulations.
At least some future denizens of earth may get a good chuckle from it.
First off, supposing I was wrong [which in this regard I'm not], this
is far from the worst post I've ever made. I've been on usenet since I
was ~16 or so and have enough good and bad posts to be rather "famous",
at least in the sci.crypt world.

I don't get why you're arguing this. You can't seem to discuss
complexity with any sense of proficiency.

Tom

Jul 18 '06 #44
"Tom St Denis" <to********@gmail.com> writes:
Dann Corbit wrote:
[snip]

I wonder if both of you would consider taking this elsewhere.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 18 '06 #45
Dann Corbit wrote:
I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
means.

Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern
64 bit hardware if the integer values are also 64 bit.
I'll add my voice to the chorus: you fundamentally misunderstand
what it means.

To say that an algorithm is O(n^2) means that as n approaches
infinity, the complexity of the algorithm approaches n^2 (perhaps
multiplied by some constant).

It is meaningless to assert "Operation X is O(1) on two 64-bit
numbers". Big-O notation is for describing how the metrics of
an algorithm grow as the size of the parameters grows. It
cannot be used to assess one specific instance.

To say that multiplication is O(n^2) is to say that a 128-bit
multiplication takes roughly 4 times as many bit operations
as a 64-bit multiplication. It says nothing about how long a 64-bit
multiplication takes in real time, nor about how a 64-bit
multiplication compares to a 64-bit addition.

A further note: in this case, "n" is the size of the number (eg.
the number of bits) and complexity is taken as the number of
bit-operations needed.

Other types of complexity can also be measured by this notation,
eg. time-complexity and space-complexity. Also, complexity can
be measured against other parameters besides the size of the inputs.

Jul 18 '06 #46

lovecreatesbeauty wrote:
Keith Thompson wrote:
"lovecreatesbeauty" <lo***************@gmail.com> writes:
Keith Thompson wrote:
>"Julian V. Noble" <jv*@virginia.edu> writes:
>[...]
int gcd( int m, int n ) // recursive
{
    if (n == 0) { return m; }
    else {
        int answer = gcd(n, m % n);
        return answer;
    }
}
>>
>The temporary is unnecessary, and the formatting is just odd. Here's
>how I'd write it:
>>
>int gcd(int m, int n)
>{
> if (n == 0) {
> return m;
> }
> else {
> return gcd(n, m % n);
> }
>}
>
Thanks.
>
The function accepts zeros and negative integers as arguments.
How are users without much mathematical knowledge, like me, supposed to
provide correct input to use it?
The obvious solution to that is to change the arguments and result
from int to unsigned.

Different mathematical functions have different domains and ranges
(values that make sense for arguments and results). C, unlike some
other languages, doesn't let you declare a type with a specified range
of values -- but in this case, unsigned happens to be just what you
want.

The recursive version then becomes the worst choice, since it cannot
easily be extended with more parameter validation. If the function
accepts input from other automated software systems, then someone should
still keep an eye on it, because the result may be wrong without warning.

int gcd(int x, int y)
{
    if (x > 0 && y > 0) {
        while (x % y) {
            y = x + y;       /* these three steps perform        */
            x = y - x;       /* (x, y) <- (y, x % y) without a   */
            y = (y - x) % x; /* temporary variable               */
        }
    } else {
        y = -1; /* input invalid */
    }
    return y;
}

int gcd3(int m, int n)
{
    if (n == 0) {
        return m;
    } else {
        return gcd3(n, m % n);
    }
}
...
The greatest common divisor is perfectly well
defined for negative integers: gcd(-4,-6) == 2.

int gcd(int a, int b)
{ return b ? gcd(b, a % b) : a < 0 ? -a : a ? a : 1; }

Change the 1 to a -1 if you want gcd(0,0) to
return an "error indicator".

By the way, any halfway decent compiler will turn
a recursive version like the one shown here into
an iterative one. No need to micro-optimize.
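The tail-call elimination mentioned above turns the recursive version into a loop along these lines (a sketch of what a compiler effectively produces; the name `gcd_iter` is mine):

```c
/* Iterative form of the recursive gcd: the tail call gcd(n, m % n)
   becomes a loop assignment, so there is no stack growth regardless
   of input. Behaves like the recursive version, including for n == 0
   (returns m). */
int gcd_iter(int m, int n)
{
    while (n != 0) {
        int r = m % n; /* the m % n of the tail call */
        m = n;         /* first argument of the call  */
        n = r;         /* second argument of the call */
    }
    return m;
}
```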

Jul 18 '06 #47
On 17 Jul 2006 20:33:49 -0700, in comp.lang.c, "Tom St Denis"
<to********@gmail.com> wrote:
>You can't use O() notation for constant-sized data sets. You're
basically saying "as 64 → ∞ this function takes constant time".
That's nonsense.
You apparently have absolutely no idea what this notation means.
Perhaps you should read some books, and then creep quietly back.
>I don't get why you're arguing this. You can't seem to discuss
complexity with any sense of proficiency.
Most amusing.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 18 '06 #48
Mark McIntyre wrote:
On 17 Jul 2006 20:33:49 -0700, in comp.lang.c, "Tom St Denis"
<to********@gmail.com> wrote:
You can't use O() notation for constant-sized data sets. You're
basically saying "as 64 → ∞ this function takes constant time".
That's nonsense.

You apparently have absolutely no idea what this notation means.
Perhaps you should read some books, and then creep quietly back.
Um ok. Show me where O() notation applies to a constant sized input.

I realize this is clc and not some math group but computer scientists
should understand how to speak about complexity. Saying multiplication
is O(1) a surefire sign that you don't know what the heck you're
talking about.

Tom

Jul 18 '06 #49
Tom St Denis wrote:
Mark McIntyre wrote:
[snip]

Um ok. Show me where O() notation applies to a constant sized input.

I realize this is clc and not some math group, but computer scientists
should understand how to speak about complexity. Saying multiplication
is O(1) is a surefire sign that you don't know what the heck you're
talking about.
Isn't saying "multiplication is O(1)" saying "multiplication takes
constant time", independent of the operands?

Which - for fixed-size arguments, eg 32-bit int(egers) such as are
found on many machines & in many programming languages - is, I thought,
true.

Of course (as someone alluded to) if the operands are of variable
size, not fixed in advance, as might be found in an unbounded-precision
library or (sort of equivalently) in Lisp/Pop11/Smalltalk general arithmetic,
then multiplication is ... unlikely ... to be O(1).

[It /could/ be O(1). But I'd prefer not to use that implementation.]

--
Chris "I'd rather owe(1) than owe(10)" Dollin
"People are part of the design. It's dangerous to forget that." /Star Cops/

Jul 18 '06 #50
