
How to write a small graceful gcd function?

My small function works, but I have some questions, and I would like
to hear your comments on how it is implemented.

1. The function does not check if parameter x is larger or smaller than
parameter y.

2. Is it better to use unsigned int or unsigned long as the type of
parameters x and y? That change might remove the need for the if
statement.

More comments are still welcome.

/*The Greatest Common Divisor, GCD for short, of two positive integers
can be computed with Euclid's division algorithm. Let the given numbers
be a and b, a >= b. Euclid's division algorithm has the following
steps:
1. Compute the remainder c of dividing a by b.
2. If the remainder c is zero, b is the greatest common divisor.
3. If c is not zero, replace a with b and b with the remainder c. Go
back to step (1).

http://www.cs.mtu.edu/~shene/COURSES...hap04/gcd.html */

/*computes the GCD (Greatest Common Divisor) of positive integers x and
y with Euclid's algorithm. return the GCD, or -1 for failure. - jhl,
Jul 2006*/

int gcd(int x, int y){
    if (x > 0 && y > 0){
        while (x % y){
            /* the next three lines replace (x, y) with
               (y, x % y) without using a temporary */
            y = x + y;
            x = y - x;
            y = (y - x) % x;
        }
    } else {
        y = -1; /*input invalid*/
    }
    return y;
}
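For comparison, here is a minimal sketch of the textbook formulation,
assuming unsigned parameters are acceptable (the name gcd_u and the
convention that gcd_u(0, 0) is 0 are choices made for this sketch, not
part of the post above):

/* Euclid's algorithm with unsigned operands -- a sketch, not a
   drop-in replacement for the function above. */
unsigned int gcd_u(unsigned int x, unsigned int y)
{
    while (y != 0) {
        unsigned int r = x % y;  /* step 1: remainder c of a / b   */
        x = y;                   /* step 3: replace a with b...    */
        y = r;                   /* ...and b with the remainder c  */
    }
    return x;                    /* step 2: when c is 0, b is the GCD */
}

Because x % y already handles x < y (the first iteration simply swaps
the operands), no size check is needed, which answers question 1; and
with unsigned parameters the validity test disappears, which answers
question 2.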

lovecreatesbeauty

Jul 15 '06

In article <11**********************@p79g2000cwp.googlegroups.com>, "Tom St Denis" <to********@gmail.com> writes:
>Third, most processors don't even have division opcodes [mips, ARM, PPC
>anyone?]
According to my PPC assembly reference (_PowerPC Microprocessor
Developer's Guide_, Bunda et al), PPC does have integer divide
opcodes (divd[o][.], divdu[o][.], divw[o][.], and divwu[o][.]) and
floating-point divide opcodes (fdiv[.] and fdivs[.]). It states that
some of those are only available in the 64-bit versions of the
processor, but divw is available in all versions.

I suppose that might have changed since the book was published.

--
Michael Wojcik mi************@microfocus.com

See who I'm! -- Jackie Chan and unknown subtitler, _Dragons Forever_
Jul 18 '06 #61
In article <11**********************@i42g2000cwa.googlegroups.com>,
Tom St Denis <to********@gmail.com> wrote:
>For constant sized implementations of an algorithm O() doesn't apply.
>O() applies to the algorithm itself when not restrained to given
>operand sizes.
But there is an obvious extension, generally clear from the context,
to implementations limited to a maximum size. For example, if an
operation on a 64-bit number takes time proportional to the position
of the highest bit set in the number, we can say that it is O(log N)
even though you can't make N tend to infinity.
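For instance, a sketch of such an operation (the function below is
invented for illustration, not from the thread): the loop runs once
per bit position up to the highest set bit, so its running time grows
like log2(N) even though the operand type caps N at 64 bits.

#include <stdint.h>

/* Sketch: returns the position of the highest set bit, or -1 for 0.
   The iteration count is proportional to that position, which is
   what makes "O(log N)" a natural description. */
static int highest_bit(uint64_t n)
{
    int pos = -1;           /* -1 reported for n == 0 */
    while (n != 0) {
        n >>= 1;            /* one iteration per bit position */
        pos++;
    }
    return pos;
}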

-- Richard
Jul 18 '06 #62
On 2006-07-18, Michael Wojcik <mw*****@newsguy.com> wrote:
>In article <11**********************@p79g2000cwp.googlegroups.com>, "Tom St Denis" <to********@gmail.com> writes:
>>Third, most processors don't even have division opcodes [mips, ARM, PPC
>>anyone?]

>According to my PPC assembly reference (_PowerPC Microprocessor
>Developer's Guide_, Bunda et al), PPC does have integer divide
>opcodes (divd[o][.], divdu[o][.], divw[o][.], and divwu[o][.]) and
>floating-point divide opcodes (fdiv[.] and fdivs[.]). It states that
>some of those are only available in the 64-bit versions of the
>processor, but divw is available in all versions.
>
>I suppose that might have changed since the book was published.
I recently ran into this stuff when I started porting my C compiler to
PowerPC. Apparently integer division instructions are available in PPC,
but not in POWER. The IBM AIX assembler has a compatibility mode (-mcom)
in which instructions that are not available in both POWER and PowerPC
are rejected, and it rejects division. This is also the default setting.
The -many mode can be used to allow for any available instruction to be
used.
grep div y.asm
divw 3, 3, 4
as y.asm -u
Assembler:
y.asm: line 53: 1252-149 Instruction divw is not implemented in the
current assembly mode COM.
as y.asm -many -u
--
Nils R. Weller, Bremen (Germany)
My real email address is ``nils<at>gnulinux<dot>nl''
.... but I'm not speaking for the Software Libre Foundation!
Jul 18 '06 #63
"Tom St Denis" <to********@gma il.comwrites:
Chris Dollin wrote:
>Isn't saying "multiplica tion is O(1)" saying "multiplica tion takes
constant time", independent of the operands?

No, it means independent of operand SIZE [or complexity if you will].

In the computer science world O() is typically a function of input
size, e.g. sort set size, input width, etc...
>Which - for fixed-size arguments, eg 32-bit int(egers) such as are
found on many machines & in many programming languages - is, I thought,
true.

For constant sized implementations of an algorithm O() doesn't apply.
O() applies to the algorithm itself when not restrained to given
operand sizes.
Then why are you wasting time arguing about how O() notation applies
to multiplication of fixed-size operands?

Multiplication of fixed-size operands is typically either O(1) or
O(meaningless) (for the moment, I really don't care which). The time
it takes to perform a 32-bit multiplication is (typically) independent
of the values of the operands. (It doesn't depend in any meaningful
way on the *sizes* of the operands, since we're assuming those sizes
are fixed.)

For example, if I have an algorithm that works on an array of 32-bit
integers, its performance is dominated by 32-bit multiplications, and
it performs N log N multiplications for an N-element array, then it's
an O(N log N) algorithm. I don't have to multiply N log N by some
term that accounts for the complexity of an individual multiplication;
that's just a constant, and it disappears.
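To make that concrete, a hypothetical sketch (the routine and its name
are invented for illustration): the function below performs roughly
N * log2(N) 32-bit multiplications, and since each multiplication takes
constant time, the routine as a whole is O(N log N).

#include <stddef.h>

/* Sketch: about log2(n) passes, each doing n constant-time
   multiplies, for roughly n*log2(n) multiplications in total.
   Unsigned overflow wraps, which is fine here -- the point is
   the operation count, not the arithmetic. */
void scale_passes(unsigned int *a, size_t n)
{
    size_t span, i;

    for (span = 1; span < n; span *= 2)   /* about log2(n) passes  */
        for (i = 0; i < n; i++)           /* n multiplies per pass */
            a[i] *= 3u;                   /* one 32-bit multiply   */
}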

Multiplication of variable-size operands (e.g., bignums) is O(something
bigger), perhaps O(N**2).

You're talking about two different things. Each of them is reasonably
well understood, but you're creating confusion by conflating them.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 18 '06 #64
On 18 Jul 2006 08:59:40 -0700, in comp.lang.c, "Tom St Denis"
<to********@gmail.com> wrote:
>Mark McIntyre wrote:
>>Well, O(1) means "takes the same time irrespective of the magnitude of
>>the argument". I strongly suspect this is true for actual real
>>computer systems.
>
>Except that O() notation doesn't apply to fixed implementations of an
>algorithm because the size doesn't change [nor approach oo].
Bollocks. And End of Discussion.

--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 18 '06 #65
Chris Torek <no****@torek.net> wrote:
>>Dann Corbit wrote:
>>>I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n))
>>>means.
>>>
>>>Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern
>>>64 bit hardware if the integer values are also 64 bit.
>
>In article <11**********************@h48g2000cwc.googlegroups.com>,
>Old Wolf <ol*****@inspire.net.nz> wrote:
>>I'll add my voice to the chorus: you fundamentally misunderstand
>>what it means.
>>
>>To say that an algorithm is O(n^2) means that as n approaches
>>infinity, the complexity of the algorithm approaches n^2 (perhaps
>>multiplied by some constant).
>
>This is true -- or more precisely, g(n) = O(f(n)) means g(n) is
>bounded above by f(n). That is, there is some constant c and value
>x-sub-0 such that for all x > x-sub-0, g(x) <= c f(x). (There are
>also Omega and Theta notations for bounded below, and bounded on
>both sides.)
>
>But:
>>It is meaningless to assert "Operation X is O(1) on two 64-bit
>>numbers".
>
>This is *not* true. If k is any constant, O(k) and O(1) mean
>the same thing, since the constant can be "moved out": if k is
>a constant, then g(k) = k g(1), so we can replace c f(x) with
>(k*c) f(x) above.
I hate to jump into such a heated debate, and particularly to correct
a post of yours since I usually find them so informative, but you have
used some algebraic smoke and mirrors there!

Yes, O(k) is O(1) if k is constant but for an operation on a
fixed-sized input we do not know if k is a constant -- it is a
function defined at only one point: 64 (or, if you insist, it can be
thought of as defined on the domain [0, 64]). Big O is only defined
for functions over R (or sometimes R+ or Z+). It has no formal
meaning (as far I know) for functions defined over finite sets.

=== On the more general argument ===

Posters on Usenet are often rather loose with big Os. To avoid
arguments such as this one must be clear about how the problem is
"measured" and exactly what the primitive operations of the abstract
machine are. Saying that multiplication of fixed-size numbers is
O(anything) is lax at best (meaningless at worst) but to describe
a given gcd algorithm as "O(whatever) in the number of arithmetic
operations performed" is fine if a little informal.

Often, if someone says "operation X is O(1)" what they really mean is
"I consider operation X to be one of the primitive operations in term
of which I want to analyse my algorithm".

For many who write gcd algorithms, multiplication is not a primitive
operation because they are thinking in terms of bignums and
cryptography (certainly true the only time I had to do it) but for
someone doing matrix operations it may be perfectly reasonable to
consider all fixed-size arithmetic ops as "primitive".

To make matters worse (as happened here) people will often post some
code in C and say it has such-and-such a time (or space) complexity
when, formally, it can't have one because the inputs are bounded by
some C type or other. What they want the reader to do is abstract the
algorithm out and consider it running on some ideal C machine but it
is not always clear (as in this case) if that just involves having
arbitrary resources available or also includes unbounded integers (or
some other idealised types).

--
Ben.
Jul 19 '06 #66
av
On Tue, 18 Jul 2006 10:36:26 -0700, "Dann Corbit" wrote:
>"Richard Heathfield" wrote in message
>Dann Corbit said:
<snip>
>>Both of those algorithms are O(1).
yes but in what i know division for 2 registers is slow than sub for 2
register
>>Now (on the other hand) if we were implementing multiple precision
algorithms to perform the fundamental operations, then (depending on the
implementatio n) your argument might have some validity.

That's basically what he's talking about.
>>As it is, you have just demonstrated you preposterous ignorance for all
time in USENET, to be salted away into the archives for posterity.
>Tom St Denis has little reason to count me among his greatest admirers,
but
even I think you're over-egging it here. He certainly knows his bignums.

I have downloaded some of his work, a long time ago. I know that he knows
what he is talking about when it comes to bignums. I know that he is
intelligent and good at mathematics. Be that as it may, none of the GCD
algorithms presented were O(n^2).
yes is O(1) but who use 20 divisions is slower than who use 20 subs
>I think his analysis is probably correct if we are finding the GCD of two
billion digit numbers with arbitrary precision number classes. But that is
not the algorithm that was up for discussion.
seems i remember O(f(n)) and n is the size in bit for input
>I am through arguing about it. In fact, this thread was a reminder to me of
why I stopped posting on USENET for a couple years.
usenet is a wonderful place for to learn
in usenet is enough say our own position without much discussion
it is the reader that see who is right and for me almost always is
right who speak few
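On that point: the binary GCD (Stein's algorithm) avoids division
entirely, using only shifts and subtractions. A sketch follows; the
function name and the zero-argument convention are chosen here for
illustration, they are not from the thread.

/* Binary GCD: no division, only shifts and subtraction --
   relevant on processors where divide is slow or absent. */
unsigned int gcd_binary(unsigned int u, unsigned int v)
{
    int shift = 0;

    if (u == 0) return v;             /* gcd(0, v) == v */
    if (v == 0) return u;
    while (((u | v) & 1u) == 0) {     /* both even: 2 divides the gcd */
        u >>= 1;
        v >>= 1;
        shift++;
    }
    while ((u & 1u) == 0)             /* strip remaining 2s from u */
        u >>= 1;
    do {                              /* invariant: u is odd */
        while ((v & 1u) == 0)
            v >>= 1;                  /* strip factors of 2 from v */
        if (u > v) {                  /* keep u <= v so the        */
            unsigned int t = u;       /* subtraction cannot wrap   */
            u = v;
            v = t;
        }
        v = v - u;                    /* odd minus odd: even next pass */
    } while (v != 0);
    return u << shift;                /* restore the common 2s */
}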
Jul 19 '06 #67
In article <44***********************@news.zen.co.uk>
Ben Bacarisse <sp**@bsb.me.uk> wrote:
>Yes, O(k) is O(1) if k is constant but for an operation on a
>fixed-sized input we do not know if k is a constant -- it is a
>function defined at only one point: 64 (or, if you insist, it can be
>thought of as defined on the domain [0, 64]). Big O is only defined
>for functions over R (or sometimes R+ or Z+). It has no formal
>meaning (as far I know) for functions defined over finite sets.
Well, it works fine (math-wise) for finite sets. It just turns
degenerate: *everything* is O(1).

Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
:-) The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 tera-items of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
>Posters on Usenet are often rather loose with big Os. To avoid
>arguments such as this one must be clear about how the problem is
>"measured" and exactly what the primitive operations of the abstract
>machine are.
Indeed.

[additional details snipped]
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Jul 19 '06 #68
"Chris Torek" <no****@torek.n etwrote in message
news:e9******** *@news1.newsguy .com...
In article <44************ ***********@new s.zen.co.uk>
Ben Bacarisse <sp**@bsb.me.uk wrote:
>>Yes, O(k) is O(1) if k is constant but for an operation on a
fixed-sized input we do not know if k is a constant -- it is a
function defined at only one point: 64 (or, if you insist, it can be
thought of as defined on the domain [0, 64]). Big O is only defined
for functions over R (or sometimes R+ or Z+). It has no formal
meaning (as far I know) for functions defined over finite sets.

Well, it works fine (math-wise) for finite sets. It just turns
degenerate: *everything* is O(1).

Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
:-) The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 tera-items of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
>>Posters on Usenet are often rather loose with big Os. To avoid
arguments such as this one must be clear about how the problem is
"measured" and exactly what the primitive operations of the abstract
machine are.

Indeed.
Consider the game of chess. It has a branching factor of about 35. So, on
average, there are about 35 possible moves you can make at a given stage of
the game.

By using the alpha-beta search algorithm, you can reduce the branching
factor to roughly sqrt(35), which is about 6. Through various other tricks
like null move pruning, transposition tables, etc. you can reduce the
branching factor to about 2 if you have a very smart program. Now, it has
been formally proven that there are fewer than 6000 full moves (a full move
is one move for each side) in the game of chess.

So a really powerful chess program could {approximately} solve the game of
chess with only 6000 plies searched, which would amount to 2^6000 positions
analyzed.

As you know, 2^6000 is just a large constant. But it is so large that it
may as well be infinity (at the present stage of compute power). So
describing the branching factor of chess and considering chess as an
exponential problem still makes sense.
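To make the cutoff concrete, here is a toy sketch of negamax with
alpha-beta over a fixed tree of leaf scores (the tree shape, the
scores, and all the names are invented for illustration; a real
program would generate moves instead):

#include <stdio.h>

#define BRANCH 2
#define DEPTH  3

/* leaf scores of a complete binary tree of depth 3, from the
   first player's point of view */
static const int leaves[8] = { 3, 5, 6, 9, 1, 2, 0, -1 };

/* When one reply already refutes this line (score >= beta), the
   remaining siblings are never searched -- that cutoff is what
   shrinks the effective branching factor toward sqrt(b). */
static int negamax(int node, int depth, int alpha, int beta, int sign)
{
    int i;

    if (depth == 0)
        return sign * leaves[node];
    for (i = 0; i < BRANCH; i++) {
        int score = -negamax(node * BRANCH + i, depth - 1,
                             -beta, -alpha, -sign);
        if (score >= beta)
            return beta;        /* cutoff: skip remaining siblings */
        if (score > alpha)
            alpha = score;      /* new best line so far */
    }
    return alpha;
}

int main(void)
{
    printf("root value: %d\n", negamax(0, DEPTH, -1000, 1000, 1));
    return 0;
}

With the wide starting window (-1000, 1000) this returns the full
minimax value, 5; a narrower window prunes more of the tree.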

The notion of O(f(n)) is useful for average case projections of how a
problem set behaves. The travelling salesman problem is another hard
problem. But there are not an infinite number of cities in the world. So
(again) we could consider TSP to be O(1) with a huge constant of
proportionality. However, that is not really useful because we can't
calculate an optimal path to visit every city on earth with minimal travel
distance.

Similarly with sorting, we want to know how the problem scales over
different vector lengths. This is useful so that we can calculate how much
it will cost us to sort a given vector.

I'm not sure there is any relevance to C in this post, so probably it is
misplaced.
Jul 19 '06 #69
On Wed, 19 Jul 2006 11:00:57 -0700, "Dann Corbit" <dc*****@connx.com>
wrote:
>"Chris Torek" <no****@torek.n etwrote in message
news:e9******* **@news1.newsgu y.com...
>In article <44************ ***********@new s.zen.co.uk>
Ben Bacarisse <sp**@bsb.me.uk wrote:
>>>Yes, O(k) is O(1) if k is constant but for an operation on a
fixed-sized input we do not know if k is a constant -- it is a
function defined at only one point: 64 (or, if you insist, it can be
thought of as defined on the domain [0, 64]). Big O is only defined
for functions over R (or sometimes R+ or Z+). It has no formal
meaning (as far I know) for functions defined over finite sets.

Well, it works fine (math-wise) for finite sets. It just turns
degenerate: *everything* is O(1).

Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
:-) The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 tera-items of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
>>>Posters on Usenet are often rather loose with big Os. To avoid
arguments such as this one must be clear about how the problem is
"measured" and exactly what the primitive operations of the abstract
machine are.

Indeed.

Consider the game of chess. It has a branching factor of about 35. So, on
average, there are about 35 possible moves you can make at a given stage of
the game.

By using the alpha-beta search algorithm, you can reduce the branching
factor to ~sqrt(35) which is about 6. Through various other tricks like
null move pruning, transposition tables, etc. you can reduce the branching
factor to about 2 if you have a very smart program. Now, it has been
formally proven that there are less than 6000 full moves (a full move is one
move for each side) in the game of chess.

So a really powerful chess program could {approximately} solve the game of
chess with only 6000 plies searched, which would amount to 2^6000 positions
analyzed.

As you know, 2^6000 is just a large constant. But it is so large that it
may as well be infinity (at the present stage of compute power). So
describing the branching factor of chess and considering chess as an
exponential problem still makes sense.

The notion of O(f(n)) is useful for average case projections of how a
problem set behaves. The travelling salesman problem is another hard
problem. But there are not an infinite number of cities in the world. So
(again) we could consider TSP to be O(1) with a huge constant of
proportionalit y. However, that is not really useful because we can't
calculate an optimal path to visit every city on earth with minimal travel
distance.

Similarly with sorting, we want to know how the problem scales over
different vector lengths. This is useful so that we can calculate how much
it will cost us to sort a given vector.

I'm not sure there is any relevance to C in this post, so probably it is
misplaced.
It wasn't misplaced for me. It made me go off and research
"alpha-beta" search algorithms and "branching factors" and "null move
pruning", inter alia. As a C programmer, I can imagine those things
might be useful someday.

Immeasurable Thanks
--
jay
Jul 20 '06 #70
