My small function works, but I have some questions, and I would like to
hear your comments on how it is implemented.
1. The function does not check if parameter x is larger or smaller than
parameter y.
2. Is it better to use unsigned int or unsigned long as the type of
parameters x and y? That change might remove the need for the if statement.
More comments are still welcome.
/*The Greatest Common Divisor, GCD for short, of two positive integers
can be computed with Euclid's division algorithm. Let the given numbers
be a and b, a >= b. Euclid's division algorithm has the following
steps:
1. Compute the remainder c of dividing a by b.
2. If the remainder c is zero, b is the greatest common divisor.
3. If c is not zero, replace a with b and b with the remainder c. Go
back to step (1). http://www.cs.mtu.edu/~shene/COURSES...hap04/gcd.html */
/* Computes the GCD (Greatest Common Divisor) of positive integers x and
y with Euclid's algorithm. Returns the GCD, or 1 for failure.  jhl,
Jul 2006 */
int gcd(int x, int y)
{
    if (x > 0 && y > 0) {
        while (x % y) {
            /* swap x and y and take the remainder, using only + and - */
            y = x + y;
            x = y - x;        /* x is now the old y */
            y = (y - x) % x;  /* y is now old x % old y */
        }
    } else {
        y = 1; /* input invalid */
    }
    return y;
}
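Regarding question 2, here is a hedged sketch of an unsigned variant (the name gcd_u is hypothetical). Dropping signed inputs does remove the negative-value check, but zero still has to be rejected, since x % 0 is undefined; and note that question 1 takes care of itself, because when x < y the first remainder step simply swaps the two values:

```c
#include <assert.h>

/* Hypothetical unsigned variant: no check for negative inputs is
   needed, but y == 0 must still be rejected (x % 0 is undefined).
   Returns 1 for invalid input, matching the original's convention. */
unsigned gcd_u(unsigned x, unsigned y)
{
    if (x == 0 || y == 0)
        return 1; /* input invalid */
    while (y != 0) {
        unsigned r = x % y; /* if x < y, this first step swaps them */
        x = y;
        y = r;
    }
    return x;
}
```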
lovecreatesbeauty
Jul 15 '06
Chris Dollin wrote:
Isn't saying "multiplication is O(1)" saying "multiplication takes
constant time", independent of the operands?
No, it means independent of operand SIZE [or complexity if you will].
In the computer science world O() is typically a function of input
size, e.g. sort set size, input width, etc...
Which -- for fixed-size arguments, e.g. 32-bit int(egers) such as are
found on many machines & in many programming languages -- is, I thought,
true.
For constant sized implementations of an algorithm O() doesn't apply.
O() applies to the algorithm itself when not restrained to given
operand sizes.
Of course (as someone alluded to) if the operands are of variable
size, not fixed in advance, as might be found in an unbounded-precision
library or (sort of equivalently) in Lisp/Pop11/Smalltalk general arithmetic,
then multiplication is ... unlikely ... to be O(1).
Multiplication can never be O(1) complexity. At best you can
"linearize" it with O(n^k) for some ridiculously small k. However, as
k gets smaller the runtime of the operation goes up and only proves
useful for very large inputs.
Old Wolf summed up the argument well. O() notation only applies to
functions that are unbounded.
Tom
Dann Corbit said:
<snip>
>
It just shows that you have no idea what the terms mean.
No, I think Tom St D has a pretty good handle on what the terms mean -- and I
think you do too. But you're talking at cross-purposes.
Tom is talking about the true complexity of multiplication, which is
actually O(m*n) when multiplying an m-digit number by an n-digit number.
You appear to be talking about the special case where m and n are both 1,
calculating in base (INTEGERTYPE_MAX + 1).
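The schoolbook method behind that O(m*n) figure can be sketched as follows (a hypothetical illustration over base-10 digit arrays, least-significant digit first, not anyone's production bignum code):

```c
#include <string.h>

/* Schoolbook multiplication of an m-digit number by an n-digit number,
   digits stored least-significant first. The nested loops make the
   O(m*n) digit-operation count explicit. `out` must hold m+n digits. */
void bignum_mul(const int *a, int m, const int *b, int n, int *out)
{
    memset(out, 0, (size_t)(m + n) * sizeof *out);
    for (int i = 0; i < m; i++) {
        int carry = 0;
        for (int j = 0; j < n; j++) {
            int t = out[i + j] + a[i] * b[j] + carry;
            out[i + j] = t % 10;
            carry = t / 10;
        }
        out[i + n] += carry;
    }
}
```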
>
Both of those algorithms are O(1).
Now (on the other hand) if we were implementing multiple precision
algorithms to perform the fundamental operations, then (depending on the
implementation) your argument might have some validity.
That's basically what he's talking about.
As it is, you have just demonstrated your preposterous ignorance for all
time in USENET, to be salted away into the archives for posterity.
Tom St Denis has little reason to count me among his greatest admirers, but
even I think you're over-egging it here. He certainly knows his bignums.

Richard Heathfield
"Usenet is a strange place" -- dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Tom St Denis wrote:
Chris Dollin wrote:
>Of course (as someone alluded to) if the operands are of variable size, not fixed in advance, as might be found in an unbounded-precision library or (sort of equivalently) in Lisp/Pop11/Smalltalk general arithmetic, then multiplication is ... unlikely ... to be O(1).
Multiplication can never be O(1) complexity. At best you can
"linearize" it with O(n^k) for some ridiculously small k. However, as
k gets smaller the runtime of the operation goes up and only proves
useful for very large inputs.
I was thinking of something a little more drastic, for the ordinary
sort of bignum implementations on machines with known limits on their
(virtual) address space.
(More drastic and less useful.)
Old Wolf summed up the argument well. O() notation only applies to
functions that are unbounded.
So not bignums then?
(fx:innocentLook)

Chris "delay loops R us" Dollin
"Who are you? What do you want?" /Babylon 5/
Chris Dollin wrote:
Multiplication can never be O(1) complexity. At best you can
"linearize" it with O(n^k) for some ridiculously small k. However, as
k gets smaller the runtime of the operation goes up and only proves
useful for very large inputs.
I was thinking of something a little more drastic, for the ordinary
sort of bignum implementations on machines with known limits on their
(virtual) address space.
(More drastic and less useful.)
Even if you're thinking about a huge lookup table that's still an
O(n^2) algorithm. Again you just traded space for time. You didn't
change the complexity of the algorithm.
You can linearize multiplication by CHANGING THE ALGORITHM not the
implementation.
Old Wolf summed up the argument well. O() notation only applies to
functions that are unbounded.
So not bignums then?
You're impossibly stupid, how do you even function in society?
Tom
Tom St Denis wrote:
Chris Dollin wrote:
Multiplication can never be O(1) complexity. At best you can
"linearize" it with O(n^k) for some ridiculously small k. However, as
k gets smaller the runtime of the operation goes up and only proves
useful for very large inputs.
I was thinking of something a little more drastic, for the ordinary sort of bignum implementations on machines with known limits on their (virtual) address space.
(More drastic and less useful.)
Even if you're thinking about a huge lookup table
No.
that's still an
O(n^2) algorithm. Again you just traded space for time. You didn't
change the complexity of the algorithm.
You can linearize multiplication by CHANGING THE ALGORITHM not the
implementation.
I don't think you're looking carefully enough at what I wrote -- including
the giveaway signatures -- and considering how deliberately stupid an algorithm
I'm implying. (I also think your deadpan humour detector is broken.)
Old Wolf summed up the argument well. O() notation only applies to
functions that are unbounded.
So not bignums then?
You're impossibly stupid, how do you even function in society?
How on Earth can I be /impossibly/ stupid? If I'm that stupid, it must
be /possible/ to be that stupid. Duh.
I should of course have written "So not functions over bignums then?".
how do you even function in society?
Is that bounded or unbounded functions you're thinking of?

Chris "Hello! I'm PULLING YOUR LEG." Dollin
"A facility for quotation covers the absence of original thought." /Gaudy Night/
On 18 Jul 2006 06:07:18 -0700, in comp.lang.c , "Tom St Denis"
<to********@gmail.com> wrote:
>Mark McIntyre wrote:
>On 17 Jul 2006 20:33:49 -0700, in comp.lang.c , "Tom St Denis" <to********@gmail.com> wrote:
>You can't use O() notation for constant-sized data sets. You're basically saying "as 64 -> oo this function takes constant time". That's nonsense.
You apparently have absolutely no idea what this notation means. Perhaps you should read some books, and then creep quietly back.
Um ok. Show me where O() notation applies to a constant sized input.
I realize this is clc and not some math group but computer scientists should understand how to speak about complexity.
I strongly suspect we do. However bear in mind that computing is not
abstract maths, its applied maths working with finite and discrete
data sets. This has an appreciable effect.
>Saying multiplication is O(1) is a sure-fire sign that you don't know what the heck you're talking about.
Well, O(1) means "takes the same time irrespective of the magnitude of
the argument". I strongly suspect this is true for actual real
computer systems.

Mark McIntyre
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
Brian Kernighan
Mark McIntyre wrote:
Well, O(1) means "takes the same time irrespective of the magnitude of
the argument". I strongly suspect this is true for actual real
computer systems.
Except that O() notation doesn't apply to fixed implementations of an
algorithm because the size doesn't change [nor approach oo].
What it does apply to is the algorithm in general. For example, as you
scale the multiplier you're placing in silicon, its size grows with the
square of the operand length -- that, or the time it takes to complete
grows as the square of the operand size. This is how CPU designers can
model the trade-off between multiplier size and opcode timing.
For instance, the AMD multiplier does 64 bits in 5 cycles [after a 2-cycle
delay it's pipelined], which suggests[*] that they use a
Karatsuba trick. The ARM7 multiplier takes 1 cycle per non-trivial
byte in the right operand, which suggests they implemented it as an 8x32
with the appropriate adder. Etc, etc. They can make these decisions
because they can correctly model the algorithm's space/time trade-offs.
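The Karatsuba trick mentioned above can be shown in miniature: one level of Karatsuba builds a 32x32->64 multiply from three 16x16 products instead of four (a sketch only; this makes no claim about how AMD's silicon is actually wired):

```c
#include <stdint.h>

/* One level of Karatsuba: a 32x32->64 multiply using three 16x16
   products (p0, p2, and the cross term) instead of four. The sums
   a0+a1 and b0+b1 are at most 17 bits each, so their product fits
   comfortably in a uint64_t. */
uint64_t mul32_karatsuba(uint32_t a, uint32_t b)
{
    uint64_t a0 = a & 0xFFFFu, a1 = a >> 16;
    uint64_t b0 = b & 0xFFFFu, b1 = b >> 16;

    uint64_t p0  = a0 * b0;                         /* low halves   */
    uint64_t p2  = a1 * b1;                         /* high halves  */
    uint64_t mid = (a0 + a1) * (b0 + b1) - p0 - p2; /* cross terms  */

    return p0 + (mid << 16) + (p2 << 32);
}
```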
Totally OT for this group and I suggest we just leave it at that. If
you really want to understand O() better, read the Wikipedia page on it. http://en.wikipedia.org/wiki/Big_oh
Tom
"Richard Heathfield" <in*****@invalid.invalid> wrote in message
news:gM******************************@bt.com...
Dann Corbit said:
<snip>
>> It just shows that you have no idea what the terms mean.
No, I think Tom St D has a pretty good handle on what the terms mean -- and I
think you do too. But you're talking at cross-purposes.
Tom is talking about the true complexity of multiplication, which is
actually O(m*n) when multiplying an m-digit number by an n-digit number.
You appear to be talking about the special case where m and n are both 1,
calculating in base (INTEGERTYPE_MAX + 1).
I have no argument with bignum multiplication complexity.
If his analysis were correct, then any algorithm that contained a division
would be O(n^2).
>Both of those algorithms are O(1).
Now (on the other hand) if we were implementing multiple precision algorithms to perform the fundamental operations, then (depending on the implementation) your argument might have some validity.
That's basically what he's talking about.
>As it is, you have just demonstrated your preposterous ignorance for all time in USENET, to be salted away into the archives for posterity.
Tom St Denis has little reason to count me among his greatest admirers, but
even I think you're over-egging it here. He certainly knows his bignums.
I have downloaded some of his work, a long time ago. I know that he knows
what he is talking about when it comes to bignums. I know that he is
intelligent and good at mathematics. Be that as it may, none of the GCD
algorithms presented were O(n^2).
I think his analysis is probably correct if we are finding the GCD of two
billion digit numbers with arbitrary precision number classes. But that is
not the algorithm that was up for discussion.
I am through arguing about it. In fact, this thread was a reminder to me of
why I stopped posting on USENET for a couple years.

Richard Heathfield
"Usenet is a strange place" -- dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
"Dann Corbit" <dc*****@connx.com> writes:
I am through arguing about it. In fact, this thread was a reminder to me of
why I stopped posting on USENET for a couple years.
It'd be a shame if you stopped again. I learn so much from the
articles you post about sorting (and other algorithms).

Go not to Usenet for counsel, for they will say both no and yes.
>Dann Corbit wrote:
>I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n)) means.
Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern 64-bit hardware if the integer values are also 64-bit.
In article <11**********************@h48g2000cwc.googlegroups.com>,
Old Wolf <ol*****@inspire.net.nz> wrote:
>I'll add my voice to the chorus: you fundamentally misunderstand what it means.
To say that an algorithm is O(n^2) means that as n approaches infinity, the complexity of the algorithm approaches n^2 (perhaps multiplied by some constant).
This is true -- or more precisely, g(n) = O(f(n)) means g(n) is
bounded above by f(n). That is, there is some constant c and value
xsub0 such that for all x > xsub0, g(x) <= c f(x). (There are
also Omega and Theta notations for bounded below, and bounded on
both sides.)
But:
>It is meaningless to assert "Operation X is O(1) on two 64bit numbers".
This is *not* true. If k is any constant, O(k) and O(1) mean
the same thing, since the constant can be "moved out": if k is
a constant, then g(k) = k g(1), so we can replace c f(x) with
(k*c) f(x) above.
As an aside, if some algorithm is O(1), it is also automatically
O(n) and O(n^2) and O(exp(n)), since bigO notation merely gives
you an upper bound. (One should use Theta(n) instead of O(n) if
one wishes to disallow this.)
>BigO notation is for describing how the metrics of an algorithm grow as the size of the parameters grows. It cannot be used to assess one specific instance.
Again, it *can*. It is just not particularly *useful* to do so.
But we do it all the time. :)
>To say that multiplication is O(n^2) is to say that a 128-bit multiplication takes roughly 4 times as many bit operations as a 64-bit multiplication.
(Again, this is only strictly true if you say it is Theta(n^2).)
In any case, the real problems here (there are two) are:
(a) In C, we do not compute a 2n-bit product when multiplying
two n-bit numbers. Instead, we prefer to get the wrong answer
as fast as possible, so we retain only the lower n bits of
the result (with -- for signed integers -- undefined behavior
if the result does not fit in those n bits, although typically
overflow is ignored; for unsigned integers, overflow is always
deliberately ignored.)
(b) Again, in order to get the wrong answer as fast as possible,
we set "n" to a constant. And when n is a constant, O(n)
*is* O(1) (because O(32) is just O(1), as is O(64), or even
O(1048976)).
To avoid both of those problems, we have to write functions to
operate on "bignums", and replace the simple:
result = n1 * n2;
syntax with, e.g.:
struct bignum *n1, *n2, *result;
...
result = bignum_multiply(n1, n2);
or similar. Now we can calculate the correct answer slowly, and
use an algorithm in which the number of bits is not a constant,
and therefore get multiplication that is indeed O(mn) (where m
and n are the number of bits needed for n1 and n2 respectively).
If C were C++, we could use operator overloading to make:
result = n1 * n2;
into syntactic sugar for a call to bignum_multiply(), and therefore
compute the right answer slowly, with an O(n^2) algorithm; but C
is not C++ (for which, for some bizarre reason, some of us are
actually grateful :) ).

InRealLife: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
In article <11**********************@p79g2000cwp.googlegroups.com>, "Tom St Denis" <to********@gmail.com> writes:
>
Third, most processors don't even have division opcodes [mips, ARM, PPC
anyone?]
According to my PPC assembly reference (_PowerPC Microprocessor
Developer's Guide_, Bunda et al), PPC does have integer divide
opcodes (divd[o][.], divdu[o][.], divw[o][.], and divwu[o][.]) and
floating-point divide opcodes (fdiv[.] and fdivs[.]). It states that
some of those are only available in the 64-bit versions of the
processor, but divw is available in all versions.
I suppose that might have changed since the book was published.

Michael Wojcik mi************@microfocus.com
See who I'm!  Jackie Chan and unknown subtitler, _Dragons Forever_
In article <11**********************@i42g2000cwa.googlegroups.com>,
Tom St Denis <to********@gmail.com> wrote:
>For constant sized implementations of an algorithm O() doesn't apply. O() applies to the algorithm itself when not restrained to given operand sizes.
But there is an obvious extension, generally clear from the context,
to implementations limited to a maximum size. For example, if an
operation on a 64-bit number takes time proportional to the position
of the highest bit set in the number, we can say that it is O(log N)
even though you can't make N tend to infinity.
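Richard's example can be sketched as follows (a hypothetical helper, not from the thread):

```c
/* Position of the highest set bit, looping once per bit of the value:
   time proportional to log2(n), illustrating the "O(log N) even with
   a fixed word size" reading described above. Returns -1 for 0. */
int high_bit(unsigned long n)
{
    int pos = -1;
    while (n != 0) {
        n >>= 1;
        pos++;
    }
    return pos;
}
```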
 Richard
On 2006-07-18, Michael Wojcik <mw*****@newsguy.com> wrote:
>
In article <11**********************@p79g2000cwp.googlegroups.com>, "Tom St Denis" <to********@gmail.com> writes:
>> Third, most processors don't even have division opcodes [mips, ARM, PPC anyone?]
According to my PPC assembly reference (_PowerPC Microprocessor
Developer's Guide_, Bunda et al), PPC does have integer divide
opcodes (divd[o][.], divdu[o][.], divw[o][.], and divwu[o][.]) and
floating-point divide opcodes (fdiv[.] and fdivs[.]). It states that
some of those are only available in the 64-bit versions of the
processor, but divw is available in all versions.
I suppose that might have changed since the book was published.
I recently ran into this stuff when I started porting my C compiler to
PowerPC. Apparently integer division instructions are available in PPC,
but not in POWER. The IBM AIX assembler has a compatibility mode (-mcom)
in which instructions that are not available in both POWER and PowerPC
are rejected, and it rejects division. This is also the default setting.
The -many mode can be used to allow any available instruction to be
used.
$ grep div y.asm
        divw 3, 3, 4
$ as y.asm -u
Assembler:
y.asm: line 53: 1252-149 Instruction divw is not implemented in the
current assembly mode COM.
$ as y.asm -many -u

Nils R. Weller, Bremen (Germany)
My real email address is ``nils<at>gnulinux<dot>nl''
.... but I'm not speaking for the Software Libre Foundation!
"Tom St Denis" <to********@gmail.com> writes:
Chris Dollin wrote:
>Isn't saying "multiplication is O(1)" saying "multiplication takes constant time", independent of the operands?
No, it means independent of operand SIZE [or complexity if you will].
In the computer science world O() is typically a function of input
size, e.g. sort set size, input width, etc...
>Which -- for fixed-size arguments, e.g. 32-bit int(egers) such as are found on many machines & in many programming languages -- is, I thought, true.
For constant sized implementations of an algorithm O() doesn't apply.
O() applies to the algorithm itself when not restrained to given
operand sizes.
Then why are you wasting time arguing about how O() notation applies
to multiplication of fixed-size operands?
Multiplication of fixed-size operands is typically either O(1) or
O(meaningless) (for the moment, I really don't care which). The time
it takes to perform a 32-bit multiplication is (typically) independent
of the values of the operands. (It doesn't depend in any meaningful
way on the *sizes* of the operands, since we're assuming those sizes
are fixed.)
For example, if I have an algorithm that works on an array of 32-bit
integers, its performance is dominated by 32-bit multiplications, and
it performs N log N multiplications for an N-element array, then it's
an O(N log N) algorithm. I don't have to multiply N log N by some
term that accounts for the complexity of an individual multiplication;
that's just a constant, and it disappears.
Multiplication of variable-size operands (e.g., bignums) is O(something
bigger), perhaps O(N**2).
You're talking about two different things. Each of them is reasonably
well understood, but you're creating confusion by conflating them.

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
On 18 Jul 2006 08:59:40 -0700, in comp.lang.c , "Tom St Denis"
<to********@gmail.com> wrote:
>Mark McIntyre wrote:
>Well, O(1) means "takes the same time irrespective of the magnitude of the argument". I strongly suspect this is true for actual real computer systems.
Except that O() notation doesn't apply to fixed implementations of an algorithm because the size doesn't change [nor approach oo].
Bollocks. And End of Discussion.

Mark McIntyre
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
Brian Kernighan
Chris Torek <no****@torek.net> wrote:
>>Dann Corbit wrote:
>>I'm sorry, but I'm not wrong. You fundamentally misunderstand what O(f(n)) means.
Subtraction, division, multiplication, modulus, etc. are ALL O(1) on modern 64 bit hardware if the integer values are also 64 bit.
In article <11**********************@h48g2000cwc.googlegroups.com>,
Old Wolf <ol*****@inspire.net.nz> wrote:
>>I'll add my voice to the chorus: you fundamentally misunderstand what it means.
To say that an algorithm is O(n^2) means that as n approaches infinity, the complexity of the algorithm approaches n^2 (perhaps multiplied by some constant).
This is true -- or more precisely, g(n) = O(f(n)) means g(n) is
bounded above by f(n). That is, there is some constant c and value
xsub0 such that for all x > xsub0, g(x) <= c f(x). (There are
also Omega and Theta notations for bounded below, and bounded on
both sides.)
But:
>>It is meaningless to assert "Operation X is O(1) on two 64bit numbers".
This is *not* true. If k is any constant, O(k) and O(1) mean
the same thing, since the constant can be "moved out": if k is
a constant, then g(k) = k g(1), so we can replace c f(x) with
(k*c) f(x) above.
I hate to jump into such a heated debate, and particularly to correct
a post of yours since I usually find them so informative, but you have
used some algebraic smoke and mirrors there!
Yes, O(k) is O(1) if k is constant but for an operation on a
fixed-sized input we do not know if k is a constant -- it is a
function defined at only one point: 64 (or, if you insist, it can be
thought of as defined on the domain [0, 64]). Big O is only defined
for functions over R (or sometimes R+ or Z+). It has no formal
meaning (as far I know) for functions defined over finite sets.
=== On the more general argument ===
Posters on Usenet are often rather loose with big Os. To avoid
arguments such as this, one must be clear about how the problem is
"measured" and exactly what the primitive operations of the abstract
machine are. Saying that multiplication of fixed-size numbers is
O(anything) is lax at best (meaningless at worst) but to describe
a given gcd algorithm as "O(whatever) in the number of arithmetic
operations performed" is fine if a little informal.
Often, if someone says "operation X is O(1)" what they really mean is
"I consider operation X to be one of the primitive operations in terms
of which I want to analyse my algorithm".
For many who write gcd algorithms, multiplication is not a primitive
operation because they are thinking in terms of bignums and
cryptography (certainly true the only time I had to do it) but for
someone doing matrix operations it may be perfectly reasonable to
consider all fixedsize arithmetic ops as "primitive".
To make matters worse (as happened here) people will often post some
code in C and say it has such-and-such a time (or space) complexity
when, formally, it can't have one because the inputs are bounded by
some C type or other. What they want the reader to do is abstract the
algorithm out and consider it running on some ideal C machine but it
is not always clear (as in this case) if that just involves having
arbitrary resources available or also includes unbounded integers (or
some other idealised types).

Ben.
On Tue, 18 Jul 2006 10:36:26 -0700, "Dann Corbit" wrote:
>"Richard Heathfield" wrote in message
>Dann Corbit said: <snip>
>>Both of those algorithms are O(1).
Yes, but as far as I know, division of two registers is slower than
subtraction of two registers.
>>Now (on the other hand) if we were implementing multiple precision algorithms to perform the fundamental operations, then (depending on the implementation) your argument might have some validity.
That's basically what he's talking about.
>>As it is, you have just demonstrated your preposterous ignorance for all time in USENET, to be salted away into the archives for posterity.
>Tom St Denis has little reason to count me among his greatest admirers, but even I think you're over-egging it here. He certainly knows his bignums.
I have downloaded some of his work, a long time ago. I know that he knows what he is talking about when it comes to bignums. I know that he is intelligent and good at mathematics. Be that as it may, none of the GCD algorithms presented were O(n^2).
Yes, it is O(1), but a version that uses 20 divisions is slower than one that uses 20 subtractions.
>I think his analysis is probably correct if we are finding the GCD of two billion digit numbers with arbitrary precision number classes. But that is not the algorithm that was up for discussion.
As I remember it, in O(f(n)) the n is the size of the input in bits.
>I am through arguing about it. In fact, this thread was a reminder to me of why I stopped posting on USENET for a couple years.
Usenet is a wonderful place to learn.
On Usenet it is enough to state your own position without much
argument; it is the reader who sees who is right, and for me it is
almost always the one who speaks least.
In article <44***********************@news.zen.co.uk>
Ben Bacarisse <sp**@bsb.me.uk> wrote:
>Yes, O(k) is O(1) if k is constant but for an operation on a fixed-sized input we do not know if k is a constant -- it is a function defined at only one point: 64 (or, if you insist, it can be thought of as defined on the domain [0, 64]). Big O is only defined for functions over R (or sometimes R+ or Z+). It has no formal meaning (as far I know) for functions defined over finite sets.
Well, it works fine (mathwise) for finite sets. It just turns
degenerate: *everything* is O(1).
Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
:) The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 teraitems of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
>Posters on Usenet are often rather loose with big Os. To avoid arguments such as this, one must be clear about how the problem is "measured" and exactly what the primitive operations of the abstract machine are.
Indeed.
[additional details snipped]

InRealLife: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
"Chris Torek" <no****@torek.net> wrote in message
news:e9*********@news1.newsguy.com...
In article <44***********************@news.zen.co.uk>
Ben Bacarisse <sp**@bsb.me.uk> wrote:
>>Yes, O(k) is O(1) if k is constant but for an operation on a fixed-sized input we do not know if k is a constant -- it is a function defined at only one point: 64 (or, if you insist, it can be thought of as defined on the domain [0, 64]). Big O is only defined for functions over R (or sometimes R+ or Z+). It has no formal meaning (as far I know) for functions defined over finite sets.
Well, it works fine (mathwise) for finite sets. It just turns
degenerate: *everything* is O(1).
Of course, "real computers" are finite, so this means that every
algorithm can be called O(1). Clearly this is not what we want!
:) The "obvious" solution is to allow anything that is a variable
to go to infinity, even though technically the size of, e.g., an
input being sorted with an "O(n log n)" sort routine is in fact
bounded by attached storage. If you only have 10 TB of file space,
you will not be sorting 200 teraitems of data. But if we claim
that this sort is therefore O(10T log 10T) and thus O(1) ... well,
that may be *true*, but it is not a useful claim.
>>Posters on Usenet are often rather loose with big Os. To avoid arguments such as this one must be clear about how the problem is "measured" and exactly what the primitive operations of the abstract machine are.
Indeed.
Consider the game of chess. It has a branching factor of about 35. So, on
average, there are about 35 possible moves you can make at a given stage of
the game.
By using the alpha-beta search algorithm, you can reduce the branching
factor to ~sqrt(35), which is about 6. Through various other tricks like
null-move pruning, transposition tables, etc. you can reduce the branching
factor to about 2 if you have a very smart program. Now, it has been
formally proven that there are fewer than 6000 full moves (a full move is one
move for each side) in the game of chess.
So a really powerful chess program could {approximately} solve the game of
chess with only 6000 plies searched, which would amount to 2^6000 positions
analyzed.
As you know, 2^6000 is just a large constant. But it is so large that it
may as well be infinity (at the present stage of compute power). So
describing the branching factor of chess and considering chess as an
exponential problem still makes sense.
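A minimal sketch of the alpha-beta pruning mentioned above, over a hypothetical toy game tree (scores at the leaves, max and min levels alternating; real chess programs are vastly more elaborate):

```c
#include <stddef.h>

/* A toy game-tree node: leaves (nchildren == 0) hold a score. */
struct node {
    int score;
    size_t nchildren;
    const struct node *children;
};

/* Alpha-beta search: the cutoff (alpha >= beta) prunes remaining
   siblings, which is what cuts the effective branching factor
   toward sqrt(b), as described above. */
int alphabeta(const struct node *n, int alpha, int beta, int maximizing)
{
    if (n->nchildren == 0)
        return n->score;
    for (size_t i = 0; i < n->nchildren; i++) {
        int v = alphabeta(&n->children[i], alpha, beta, !maximizing);
        if (maximizing) {
            if (v > alpha) alpha = v;
        } else {
            if (v < beta) beta = v;
        }
        if (alpha >= beta)
            break;  /* cutoff: the remaining siblings are pruned */
    }
    return maximizing ? alpha : beta;
}
```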
The notion of O(f(n)) is useful for average case projections of how a
problem set behaves. The travelling salesman problem is another hard
problem. But there are not an infinite number of cities in the world. So
(again) we could consider TSP to be O(1) with a huge constant of
proportionality. However, that is not really useful because we can't
calculate an optimal path to visit every city on earth with minimal travel
distance.
Similarly with sorting, we want to know how the problem scales over
different vector lengths. This is useful so that we can calculate how much
it will cost us to sort a given vector.
I'm not sure there is any relevance to C in this post, so probably it is
misplaced.
On Wed, 19 Jul 2006 11:00:57 -0700, "Dann Corbit" <dc*****@connx.com>
wrote:
>"Chris Torek" <no****@torek.net> wrote in message news:e9*********@news1.newsguy.com...
>In article <44***********************@news.zen.co.uk> Ben Bacarisse <sp**@bsb.me.uk> wrote:
>>>Yes, O(k) is O(1) if k is constant but for an operation on a fixed-sized input we do not know if k is a constant -- it is a function defined at only one point: 64 (or, if you insist, it can be thought of as defined on the domain [0, 64]). Big O is only defined for functions over R (or sometimes R+ or Z+). It has no formal meaning (as far I know) for functions defined over finite sets.
Well, it works fine (mathwise) for finite sets. It just turns degenerate: *everything* is O(1).
Of course, "real computers" are finite, so this means that every algorithm can be called O(1). Clearly this is not what we want! :) The "obvious" solution is to allow anything that is a variable to go to infinity, even though technically the size of, e.g., an input being sorted with an "O(n log n)" sort routine is in fact bounded by attached storage. If you only have 10 TB of file space, you will not be sorting 200 teraitems of data. But if we claim that this sort is therefore O(10T log 10T) and thus O(1) ... well, that may be *true*, but it is not a useful claim.
>>>Posters on Usenet are often rather loose with big Os. To avoid arguments such as this one must be clear about how the problem is "measured" and exactly what the primitive operations of the abstract machine are.
Indeed.
Consider the game of chess. It has a branching factor of about 35. So, on average, there are about 35 possible moves you can make at a given stage of the game.
By using the alpha-beta search algorithm, you can reduce the branching factor to ~sqrt(35), which is about 6. Through various other tricks like null-move pruning, transposition tables, etc. you can reduce the branching factor to about 2 if you have a very smart program. Now, it has been formally proven that there are fewer than 6000 full moves (a full move is one move for each side) in the game of chess.
So a really powerful chess program could {approximately} solve the game of chess with only 6000 plies searched, which would amount to 2^6000 positions analyzed.
As you know, 2^6000 is just a large constant. But it is so large that it may as well be infinity (at the present stage of compute power). So describing the branching factor of chess and considering chess as an exponential problem still makes sense.
The notion of O(f(n)) is useful for average case projections of how a problem set behaves. The travelling salesman problem is another hard problem. But there are not an infinite number of cities in the world. So (again) we could consider TSP to be O(1) with a huge constant of proportionality. However, that is not really useful because we can't calculate an optimal path to visit every city on earth with minimal travel distance.
Similarly with sorting, we want to know how the problem scales over different vector lengths. This is useful so that we can calculate how much it will cost us to sort a given vector.
I'm not sure there is any relevance to C in this post, so probably it is misplaced.
It wasn't misplaced for me. It made me go off and research
"alpha-beta" search algorithms and "branching factors" and "null move
pruning", inter alia. As a C programmer, I can imagine those things
might be useful someday.
Immeasurable Thanks

jay
Keith Thompson wrote:
"Julian V. Noble" <jv*@virginia.edu> writes:
[...]
int gcd( int m, int n ) // recursive
{
if(n==0) { return m; }
else
{ int answer = gcd(n,m%n);
return answer; }
}
The temporary is unnecessary, and the formatting is just odd. Here's
how I'd write it:
int gcd(int m, int n)
{
    if (n == 0) {
        return m;
    }
    else {
        return gcd(n, m % n);
    }
}
int gcd(int m, int n)
{
return n ? gcd(n, m%n) : m;
}
Don
"Don" <do*******@gmail.com> writes:
Keith Thompson wrote:
>"Julian V. Noble" <jv*@virginia.edu> writes: [...]
int gcd( int m, int n ) // recursive
{
if(n==0) { return m; }
else
{ int answer = gcd(n,m%n);
return answer; }
}
The temporary is unnecessary, and the formatting is just odd. Here's how I'd write it:
int gcd(int m, int n)
{
    if (n == 0) {
        return m;
    }
    else {
        return gcd(n, m % n);
    }
}
int gcd(int m, int n)
{
return n ? gcd(n, m%n) : m;
}
Sure, that's also valid, but I prefer my version. Personally, I
dislike treating an integer as a condition rather than explicitly
comparing it to 0 *unless* it's actually being used as a boolean. I
also tend to prefer if/else to ?: in most cases. For something this
small, it's not a big deal, but it can quickly grow into IOCCC
material.
<OT>
To the person who emailed me regarding this thread: anonymous emails
are unwelcome and will be ignored. If you want to comment on
something I've written here, do so here. If you must contact me by
email, have the courtesy to identify yourself.
</OT>

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Keith Thompson wrote:
"Don" <do*******@gmail.com> writes:
int gcd(int m, int n)
{
return n ? gcd(n, m%n) : m;
}
Sure, that's also valid, but I prefer my version. Personally, I
dislike treating an integer as a condition rather than explicitly
comparing it to 0 *unless* it's actually being used as a boolean.
I remember you said in an earlier post that you dislike the _Bool
keyword and the bool, true, false macros.
And, in K&R2 sec. 3.2, it says that the shortcut is preferred when an
expression's value is being tested, even when the expression is
numeric. It says, `the most obvious is writing if (expression) instead
of if (expression != 0)'.
In sec. 5.5, it says a comparison against 0 ('\0' equals 0, right?) is
redundant. And it gives the final abbreviated version of strcpy,
containing this single statement: while (*s++ = *t++) ;
"lovecreatesbeauty" <lo***************@gmail.com> writes:
Keith Thompson wrote:
>"Don" <do*******@gmail.com> writes:
int gcd(int m, int n)
{
return n ? gcd(n, m%n) : m;
}
Sure, that's also valid, but I prefer my version. Personally, I dislike treating an integer as a condition rather than explicitly comparing it to 0 *unless* it's actually being used as a boolean.
I remember you said in an earlier post that you dislike the _Bool
keyword and the bool, true, false macros.
No, I don't believe I did. In fact, I wish that bool, true, and false
had been in the language from the very beginning <OT>as they were in
C++</OT>. Of course we can't yet assume that they're supported by all
compilers, but it's easy enough to define them yourself if you need
them. (I find the keyword _Bool a bit ugly, but I understand the need
to avoid stepping on existing code.)
And, in K&R2 sec. 3.2, it says that the shortcut is preferred when an
expression's value is being tested, even when the expression is
numeric. It says, `the most obvious is writing if (expression) instead
of if (expression != 0)'.
Ok, so I disagree with K&R2 on this point. I don't have my copy of
K&R2 handy; I'll look up the exact wording later, and perhaps comment
further.
In sec. 5.5, it says a comparison against 0 ('\0' equals 0, right?) is
redundant. And it gives the final abbreviated version of strcpy,
containing this single statement: while (*s++ = *t++) ;
That particular form is idiomatic enough that it doesn't bother me too
much, but in general I prefer to be more explicit, even at the expense
of a few extra keystrokes. I understand that lots of very smart
people don't share this opinion, and we all need to be able to read
code in a variety of styles even if we're more restrictive in what we
write.

On Sun, 23 Jul 2006 06:47:12 GMT, Keith Thompson <ks***@mib.org>
wrote:
>"lovecreatesbeauty" <lo***************@gmail.com> writes:
>Keith Thompson wrote:
>>"Don" <do*******@gmail.com> writes: int gcd(int m, int n) { return n ? gcd(n, m%n) : m; } Sure, that's also valid, but I prefer my version. Personally, I dislike treating an integer as a condition rather than explicitly comparing it to 0 *unless* it's actually being used as a boolean.
I remember that you said you dislike the _Bool keyword and bool, true, false macros in a post before.
No, I don't believe I did. In fact, I wish that bool, true, and false had been in the language from the very beginning <OT>as they were in C++</OT>. Of course we can't yet assume that they're supported by all
<snip>
There is no such language as C++< / OT>
Haven't you been reading the rants about this sort of joining?
:O)
Oz

A: Because it fouls the order in which people normally read text.
Q: Why is topposting such a bad thing?
A: Topposting.
Q: What is the most annoying thing on usenet and in email?