Greater-than operator vs. equal-to operator: which one is more efficient?

Hello All,

Which one is the more expensive operation?
'>' or '==' ?

I know this won't matter much. However, if I'm executing this
operation a million times, I would prefer the better choice.

My gut feeling is that '>' should be more efficient. However, when
translated to assembly code, both generate a "cmpl" instruction
(gcc -S <prog.c>). So my question here is: how is the "cmpl"
instruction executed by the CPU?

Ex: (1) if (n == 0) vs if (n > 0)
(2) if (len == 12) vs if (len < 13)
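For reference, a minimal test file for the gcc -S experiment described above might look like the following (the file and function names are illustrative only, not from the original post); diffing the assembly produced for the two functions shows how each comparison is lowered:

/* compare.c -- hypothetical test file: compile with "gcc -O2 -S compare.c"
 * and compare the assembly generated for the two comparison styles.
 * On x86 both typically become a single test/cmp instruction followed by
 * a conditional set or jump; only the condition code tested differs. */
int is_zero(int n)
{
    return n == 0;
}

int is_positive(int n)
{
    return n > 0;
}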

Thanks in advance.

Regards,
Vivek

web: http://students.iiit.net/~vivek/
Nov 13 '05 #1
hy********@yahoo.com (Vivek Mandava) writes:
Which one is the more expensive operation?
'>' or '==' ?
The C standard doesn't define the performance of these operations.
I know this won't matter much. However, if I'm executing this
operation a million times, I would prefer the better choice.

My gut feeling is that '>' should be more efficient. However, when
translated to assembly code, both generate a "cmpl" instruction
(gcc -S <prog.c>). So my question here is: how is the "cmpl"
instruction executed by the CPU?


This is not a C question, but a question about assembler/machine code
on a specific platform. It is off-topic here; please ask in a newsgroup
dedicated to programming the CPU in question.

Martin
Nov 13 '05 #2
On Fri, 05 Sep 2003 01:25:30 -0700, Vivek Mandava wrote:
Hello All,

Which one is the more expensive operation?


That would depend on both platform and compiler, I think.

#include <limits.h>

void print_system_time(void);   /* assumed to exist elsewhere: prints the current time */

int main(void)
{
    int i, j;

    i = INT_MAX;
    j = INT_MAX;

    print_system_time();

    /* Count down rather than up: incrementing past INT_MAX is undefined
     * behaviour, and the loop would never reach the test value anyway.
     * Note that an optimizing compiler may remove these empty loops. */
    while (1) {
        if (i == 0) {
            break;
        } else {
            --i;
        }
    }

    print_system_time();

    while (1) {
        if (j < 1) {
            break;
        } else {
            --j;
        }
    }

    print_system_time();

    return 0;
}

hth
--
NPV
"Linux is to Lego as Windows is to Fisher Price." - Doctor J Frink

Nov 13 '05 #3

"Vivek Mandava" <hy********@yahoo.com> wrote in message
news:bb**************************@posting.google.c om...
Hello All,

Which one is the more expensive operation?
'>' or '==' ?

I know this won't matter much. However, if I'm executing this
operation a million times, I would prefer the better choice.

My gut feeling is that '>' should be more efficient. However, when
translated to assembly code, both generate a "cmpl" instruction
(gcc -S <prog.c>). So my question here is: how is the "cmpl"
instruction executed by the CPU?


Why don't you ask in your platform-specific newsgroup?
--
Jeff
Nov 13 '05 #4
Vivek Mandava wrote:
Which one is the more expensive operation?
'>' or '==' ?


That's highly system-dependent and therefore OT here. That said, I'd
be surprised if there were any difference. It should use some kind
of "compare" instruction either way.

Why not put a tight loop of 10^7 of them in a function and run your
profiler on it?
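A rough sketch of that experiment, with one variant per function so a profiler such as gprof can attribute time to each (all names here are hypothetical, and an optimizing compiler may well emit identical code for both loops):

#include <stdio.h>

/* Hypothetical harness: build with "gcc -pg harness.c", run it, then
 * inspect the per-function times with gprof.  The volatile sink keeps
 * the compiler from discarding the comparisons outright. */
static volatile int sink;

static void loop_equal(void)
{
    long i;
    for (i = 0; i < 10000000L; i++)   /* 10^7 iterations */
        sink = (sink == 0);
}

static void loop_greater(void)
{
    long i;
    for (i = 0; i < 10000000L; i++)
        sink = (sink > 0);
}

int main(void)
{
    loop_equal();
    loop_greater();
    printf("done (sink=%d)\n", sink);
    return 0;
}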

--
Tom Zych
This email address will expire at some point to thwart spammers.
Permanent address: echo 'g******@cbobk.pbz' | rot13
Nov 13 '05 #5
hy********@yahoo.com (Vivek Mandava) wrote in message news:<bb**************************@posting.google. com>...
Hello All,

Which one is the more expensive operation?
'>' or '==' ?

That's entirely a function of the target instruction set, not the C
language.
I know this won't matter much. However, if I'm executing this
operation a million times, I would prefer the better choice.

The only way to know for sure is to write code using each operator and
gather performance statistics on that code, recognizing that your
results are only valid for that particular architecture. FWIW,
they're equally expensive on all the platforms I'm familiar with (they
all generate similar instructions). I'd honestly be surprised if
there was a platform where one was significantly more expensive than
the other.
My gut feeling is that '>' should be more efficient. However, when
translated to assembly code, both generate a "cmpl" instruction
(gcc -S <prog.c>). So my question here is: how is the "cmpl"
instruction executed by the CPU?

Assuming you're on x86:
http://www.penguin.cz/~literakl/intel/intel.html
Ex: (1) if (n == 0) vs if (n > 0)
(2) if (len == 12) vs if (len < 13)

Thanks in advance.

Regards,
Vivek

web: http://students.iiit.net/~vivek/

Nov 13 '05 #7
Vivek Mandava wrote:
Which one is the more expensive operation?

'>' or '==' ?

I know this won't matter much.
However, if I'm executing this operation million times,
I would prefer the better choice.

My gut feeling is that '>' should be efficient.
However, when translated to assembly code
all it generates is "cmpl" instruction (gcc -S <prog.c>).
So, my question here is,
"How is the 'cmpl' instruction executed by the cpu?"

Ex: (1) if (n == 0) vs if (n > 0)
(2) if (len == 12) vs if (len < 13)


The cmpl instruction is a subtraction which does *not* save a result.
It just sets some status register flags.
The branch instructions (je, jne, jl, jle, jg, jge, etc.)
test these status bits and branch accordingly.
None of them are inherently faster than the others.
Your CPU almost certainly includes branch prediction hardware
which it can use to prefetch instructions for the most likely branch.
Your compiler almost certainly includes optimizations
to assist branch prediction.

It is almost always a bad idea to attempt to outsmart
your optimizing compiler.
You will probably only succeed in frustrating these optimizations.
Write your code in the most straightforward and readable way
that you can and let your optimizing compiler figure out
how to generate instructions that yield the highest performance
and efficiency. If performance is not satisfactory,
use a profiler to locate the code that produces the bottleneck
and concentrate on optimizing that code first.
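As one concrete illustration of the compiler-level branch handling mentioned above, gcc offers the __builtin_expect() hint (a gcc extension, not standard C). It is the kind of thing to reach for only after profiling has identified a hot branch, and the macro and function names below are purely illustrative:

/* likely/unlikely: the usual wrappers around gcc's __builtin_expect.
 * They tell the compiler which way a branch usually goes so it can lay
 * out the common path without a taken jump.  Hypothetical example. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

long count_nonzero(const int *v, long n)
{
    long i, count = 0;
    for (i = 0; i < n; i++)
        if (likely(v[i] != 0))   /* hint: most elements assumed nonzero */
            count++;
    return count;
}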

Nov 13 '05 #10
In article <bb**************************@posting.google.com >,
hy********@yahoo.com (Vivek Mandava) wrote:
Hello All,

Which one is the more expensive operation?
'>' or '==' ?

I know this won't matter much. However, if I'm executing this
operation a million times, I would prefer the better choice.


There are three things you can do:

1. Don't worry. Unless you get a measurable benefit from it, try to
write code that is readable and easy to maintain instead of trying to
make it fast. This will mean that you finish writing your software
earlier.

2. Check the assembler code. Check the manufacturers' web sites for
manuals; absolutely everything you need is available; most will send you
a CD with the manuals for free.

3. Measure. Write two versions of the code and measure their speed.

Before you start doing anything: Make sure your code is correct. Once it
is correct, use a profiler which will tell you which parts of your
program spend the most time.
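For the "measure" option, here is a minimal sketch using the standard clock() function, applied to the second of the original examples (len == 12 vs. len < 13). The iteration count is arbitrary, volatile forces the comparison to actually happen each time, and on most compilers the two timings will come out essentially identical:

#include <stdio.h>
#include <time.h>

#define ITERATIONS 100000000L   /* arbitrary; pick something that runs ~1 s */

static volatile long len = 12;  /* volatile: force a real load and compare */
static volatile long hits;

int main(void)
{
    long i;
    clock_t t0, t1, t2;

    t0 = clock();
    for (i = 0; i < ITERATIONS; i++)
        if (len == 12)
            hits++;

    t1 = clock();
    for (i = 0; i < ITERATIONS; i++)
        if (len < 13)
            hits++;

    t2 = clock();
    printf("==: %.3f s   <: %.3f s   (hits=%ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC,
           (long)hits);
    return 0;
}

If the two timings do differ on a particular compiler and machine, that measurement, not intuition, answers the original question, and only for that platform.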
Nov 13 '05 #14
John Bode wrote:
I'd honestly be surprised if
there was a platform where one was significantly more expensive than
the other.


I've seen old 16-bit systems (CHAR_BIT==8, sizeof(int)==2)
where equality of long int
was determined by a bitwise XOR of the high words (2 bytes == word),
followed by a zero-flag check, and then, if zero,
by an XOR of the low words and another zero-flag check.
So it was possible to bail out before entirely evaluating
either operand.
A long was compared for equality against zero by ORing
the high word with the low word and then checking the zero flag.

Equality vs. inequality seems to me to be a simpler concept
than greater-than, so when the equality or inequality operator
does what I want, that's what I use.
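A rough sketch in C of the word-at-a-time sequence described above, purely for illustration (on a real 16-bit target the compiler emits this pattern itself for a 32-bit long; the exact-width types assume a C99 <stdint.h>):

#include <stdint.h>

/* Compare two 32-bit values for equality using only 16-bit halves,
 * mimicking the code an old 16-bit compiler might generate. */
int long_equal(uint32_t a, uint32_t b)
{
    uint16_t ahi = (uint16_t)(a >> 16), alo = (uint16_t)a;
    uint16_t bhi = (uint16_t)(b >> 16), blo = (uint16_t)b;

    if (ahi ^ bhi)              /* high words differ: bail out early */
        return 0;
    return (uint16_t)(alo ^ blo) == 0;
}

/* Test a 32-bit value against zero by ORing its two halves. */
int long_is_zero(uint32_t a)
{
    return (uint16_t)((uint16_t)(a >> 16) | (uint16_t)a) == 0;
}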

--
pete
Nov 13 '05 #15

"pete" <pf*****@mindspring.com> wrote in message
news:3F***********@mindspring.com...

(snip regarding the performance of == and >)
Equality vs. inequality seems to me to be a simpler concept
than greater-than, so when the equality or inequality operator
does what I want, that's what I use.


But consider that, with twos complement arithmetic, <0 can be evaluated by
checking only one bit, where ==0 must test them all.

-- glen
Nov 13 '05 #18
Glen Herrmannsfeldt wrote:
But consider that, with twos complement arithmetic,
<0 can be evaluated by checking only one bit,
where ==0 must test them all.


But this is done in hardware and *not* in software.
The typical implementation does both in parallel.
cmpl sets the "negative" bit in the status register
if the sign bit is set and it sets the "zero" bit
in the status register if all the bits are zero.
Branch instructions test one or both
of these two status register bits and update
the instruction pointer (program counter) accordingly.

Nov 13 '05 #19
E. Robert Tisdale wrote:

Glen Herrmannsfeldt wrote:
But consider that, with twos complement arithmetic,
<0 can be evaluated by checking only one bit,


That's true with any integer arithmetic, actually.

--
pete
Nov 13 '05 #20

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:3F**************@jpl.nasa.gov...
Glen Herrmannsfeldt wrote:
But consider that, with twos complement arithmetic,
<0 can be evaluated by checking only one bit,
where ==0 must test them all.


But this is done in hardware and *not* in software.
The typical implementation does both in parallel.
cmpl sets the "negative" bit in the status register
if the sign bit is set and it sets the "zero" bit
in the status register if all the bits are zero.
Branch instructions test one or both
of these two status register bits and update
the instruction pointer (program counter) accordingly.


The hardware may supply a bit test instruction, which could test the sign
bit of a multibyte value. Though on modern machines the whole word would
go into the cache, anyway. It would be hard to guess whether the bit test
or compare instructions would be faster.

-- glen
Nov 13 '05 #21

"pete" <pf*****@mindspring.com> wrote in message
news:3F***********@mindspring.com...
E. Robert Tisdale wrote:

Glen Herrmannsfeldt wrote:
But consider that, with twos complement arithmetic,
<0 can be evaluated by checking only one bit,


That's true with any integer arithmetic, actually.


It depends on how you treat negative zero. If the sign bit is set, it could
be negative zero for sign magnitude or ones complement, and so should not be
treated as <0.

-- glen
Nov 13 '05 #22
In article <yba7b.399461$uu5.73515@sccrnsc04>,
"Glen Herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote:
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:3F**************@jpl.nasa.gov...
Glen Herrmannsfeldt wrote:
But consider that, with twos complement arithmetic,
<0 can be evaluated by checking only one bit,
where ==0 must test them all.


But this is done in hardware and *not* in software.
The typical implementation does both in parallel.
cmpl sets the "negative" bit in the status register
if the sign bit is set and it sets the "zero" bit
in the status register if all the bits are zero.
Branch instructions test one or both
of these two status register bits and update
the instruction pointer (program counter) accordingly.


The hardware may supply a bit test instruction, which could test the sign
bit of a multibyte value. Though on modern machines the whole word would
go into the cache, anyway. It would be hard to guess whether the bit test
or compare instructions would be faster.


Wouldn't any half decent compiler know which is the best way to
implement comparisons with constants and choose the best method anyway?
It is quite possible that

if (x >= 0)...
if (x > -1)...
if ((((unsigned long) x) & 0x80000000) == 0)...

produce identical code (on machines where the non-portable third line
produces the same result as the first two).
Nov 13 '05 #23
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...

Wouldn't any half decent compiler know which is the best way to
implement comparisons with constants and choose the best method anyway?
It is quite possible that

if (x >= 0)...
if (x > -1)...
if ((((unsigned long) x) & 0x80000000) == 0)...

produce identical code (on machines where the non-portable third line
produces the same result as the first two).


Quoting from someone named Christian Bau:
"There are three things you can do:
1. Don't worry. 2. Check the assembler code. 3. Measure.
Before you start doing anything: Make sure your code is correct. Once it is
correct, use a profiler which will tell you which parts of your program
spend the most time."

Guys, we are talking about 3000MHz computers here. Do 2, 5 or 10 machine
code instructions matter at that speed?

--
ArWeGod
Nov 13 '05 #24
ArWeGod wrote:
....
Guys, we are talking about 3000MHz computers here. Do 2, 5 or 10 machine
code instructions matter at that speed?


In an inner loop? Yes, they will lead to 2, 5 or 10 years of computation time,
roughly.

Jirka

Nov 13 '05 #25
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...

Wouldn't any half decent compiler know which is the best way to
implement comparisons with constants and choose the best method anyway?
It is quite possible that

if (x >= 0)...
if (x > -1)...
if ((((unsigned long) x) & 0x80000000) == 0)...

produce identical code (on machines where the non-portable third line
produces the same result as the first two).
Quoting from someone named Christian Bau:
"There are three things you can do:
1. Don't worry. 2. Check the assembler code. 3. Measure.
Before you start doing anything: Make sure your code is correct. Once it is
correct, use a profiler which will tell you which parts of your program
spend the most time."

Guys, we are talking about 3000MHz computers here.


Says you.
Do 2, 5 or 10 machine code instructions matter at that speed?


They do, if they're repeated a million times.

The _real_ problem with this kind of micro-optimisation, from a
comp.lang.c POV, is that there's nothing portable about it. As far as
the Standard is concerned, speed cannot be regulated. However, sometimes
you _do_ want to speed up a certain part of a program. In those cases,
Christian's last suggestion is the only real solution: measure which
code is faster, and don't expect your optimisations to work anywhere
else.

Richard
Nov 13 '05 #26

"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <yba7b.399461$uu5.73515@sccrnsc04>,
"Glen Herrmannsfeldt" <ga*@ugcs.caltech.edu> wrote:

(snip)
Glen Herrmannsfeldt wrote:

> But consider that, with twos complement arithmetic,
> <0 can be evaluated by checking only one bit,
> where ==0 must test them all.

(snip)
The hardware may supply a bit test instruction, which could test the sign bit of a multibyte value. Though on modern machines the whole word would go into the cache, anyway. It would be hard to guess whether the bit test or compare instructions would be faster.


Wouldn't any half decent compiler know which is the best way to
implement comparisons with constants and choose the best method anyway?
It is quite possible that

if (x >= 0)...
if (x > -1)...
if ((((unsigned long) x) & 0x80000000) == 0)...

produce identical code (on machines where the non-portable third line
produces the same result as the first two).


That is a good question. One might hope so, but I don't really know. The
original question asked which was faster, so the question is really what
will a compiler generate.

But consider this: Pretty much everyone codes their for() loops with <= or
< test for increasing loops, and >= or > for decreasing loops.

for(i=0;i!=10;i++)

should work just as well as i <10, unless i accidentally becomes greater
than 10.

If a compiler determined that i wasn't modified elsewhere in the loop, it could
generate code using an inequality test instead, if that were faster.

-- glen
Nov 13 '05 #27
In article <sm***************@newssvr29.news.prodigy.com>,
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...

Wouldn't any half decent compiler know which is the best way to
implement comparisons with constants and choose the best method anyway?
It is quite possible that

if (x >= 0)...
if (x > -1)...
if ((((unsigned long) x) & 0x80000000) == 0)...

produce identical code (on machines where the non-portable third line
produces the same result as the first two).


Quoting from someone named Christian Bau:
"There are three things you can do:
1. Don't worry. 2. Check the assembler code. 3. Measure.
Before you start doing anything: Make sure your code is correct. Once it is
correct, use a profiler which will tell you which parts of your program
spend the most time."

Guys, we are talking about 3000MHz computers here. Do 2, 5 or 10 machine
code instructions matter at that speed?


Machine instructions don't matter, memory bandwidth does :-)

But the point of my post above was that any trivial micro-optimisations
that a programmer might do could quite easily be done by any decent
compiler, so we are back to (1) don't worry.
Nov 13 '05 #28
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
Guys, we are talking about 3000MHz computers here.


Says you.


Well, how many man-hours will you spend optimizing before just spending the
$300 to upgrade?

--
ArWeGod
Nov 13 '05 #29
In article <RK***************@newssvr29.news.prodigy.com>,
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
Guys, we are talking about 3000MHz computers here.


Says you.


Well, how many man-hours will you spend optimizing before just spending the
$300 to upgrade?


Do you write software that runs only on one machine?
Nov 13 '05 #30
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <RK***************@newssvr29.news.prodigy.com>,
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
> Guys, we are talking about 3000MHz computers here.

Says you.


Well, how many man-hours will you spend optimizing before just spending the $300 to upgrade?


Do you write software that runs only on one machine?


Well, I don't run a million operations on my Palm! ;-)
My point is, when you are trying to optimize between == and <=, you might be
wasting your employer's money. I agree with your original response: "Don't
worry" about it.
I mean, come on, if you're using some underpowered hardware that runs so
slow that you need to worry about it, you might be using the wrong hardware.
Time to port it over. Good thing you wrote it in C, eh? Oh, sorry, some of
those "optimizations" will have to be re-written since they use specifics of
the old hardware, what a shame.

--
ArWeGod
Nov 13 '05 #31
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
Guys, we are talking about 3000MHz computers here.


Says you.


Well, how many man-hours will you spend optimizing before just spending the
$300 to upgrade?


Swapping your microwave for one that contains a fully operational PC
costs only $300? Wow, get me one of those.

Richard
Nov 13 '05 #32
On Tue, 09 Sep 2003 22:46:09 GMT
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
Guys, we are talking about 3000MHz computers here.


Says you.


Well, how many man-hours will you spend optimizing before just
spending the $300 to upgrade?


What makes you think that a faster processor would not exceed power
consumption or heat dissipation requirements? Heat dissipation in
particular can be a bitch in some applications.

What makes you think $300 is irrelevant if the code is in an embedded
system? Car manufacturers, for example, will find ways to reduce the
number of screws to save a few pence per car, how do you think they
feel about $300 per car? Or perhaps the code will be embedded in a toy
that costs $19.95
--
Mark Gordon
Nov 13 '05 #33
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
> Guys, we are talking about 3000MHz computers here.

Says you.


Well, how many man-hours will you spend optimizing before just spending the $300 to upgrade?


Swapping your microwave for one that contains a fully operational PC
costs only $300? Wow, get me one of those.

Richard


I don't think you can process a million transactions on a microwave oven...
Or even a million popcorns.

IBM and Microsoft have always trusted that hardware would catch up with
their slow speeds. Everyone complains when the new OS is slow and a year
later it's just fine.

I bought Quake III Arena and loaded it on my 400MHz system. The "bots" ran
rings around me before I could even take 10 steps. My new machine ($250 for
mobo, cpu, fan, video, and memory - everything but the case and hard disk)
plays like a dream. Thank you Frys!

--
ArWeGod
Nov 13 '05 #34
"Mark Gordon" <sp******@flash-gordon.me.uk> wrote in message
news:20030910083642.367bec9e.sp******@flash-gordon.me.uk...
On Tue, 09 Sep 2003 22:46:09 GMT
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
> Guys, we are talking about 3000MHz computers here.

Says you.


Well, how many man-hours will you spend optimizing before just
spending the $300 to upgrade?


What makes you think that a faster processor would not exceed power
consumption or heat dissipation requirements? Heat dissipation in
particular can be a bitch in some applications.

What makes you think $300 is irrelevant if the code is in an embedded
system? Car manufacturers, for example, will find ways to reduce the
number of screws to save a few pence per car, how do you think they
feel about $300 per car? Or perhaps the code will be embedded in a toy
that costs $19.95
--
Mark Gordon


OK, everyone's jumping on the upgrade idea! Whee!

The man is processing 1000000 transactions in some unknown (but, presumably
Right Now) time period. OK?!

It's not embedded, it's not the space shuttle, it's a transaction processing
system!

It _could_ be Big Iron. But, he's on the C ng so it's probably not, or it
would be CICS ( "Get your CICS, on IBM 666")

--
ArWeGod
Nov 13 '05 #35
On Wed, 10 Sep 2003 08:10:11 GMT
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Mark Gordon" <sp******@flash-gordon.me.uk> wrote in message
news:20030910083642.367bec9e.sp******@flash-gordon.me.uk...
On Tue, 09 Sep 2003 22:46:09 GMT
"ArWeGod" <Ar*****@sbcglobal.net> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
news:3f*****************@news.nl.net...
> "ArWeGod" <Ar*****@sbcglobal.net> wrote:
> > Guys, we are talking about 3000MHz computers here.
>
> Says you.

Well, how many man-hours will you spend optimizing before just
spending the $300 to upgrade?


What makes you think that a faster processor would not exceed power
consumption or heat dissipation requirements? Heat dissipation in
particular can be a bitch in some applications.

What makes you think $300 is irrelevant if the code is in an
embedded system? Car manufacturers, for example, will find ways to
reduce the number of screws to save a few pence per car, how do you
think they feel about $300 per car? Or perhaps the code will be
embedded in a toy that costs $19.95


OK, everyone's jumping on the upgrade idea! Whee!

The man is processing 1000000 transactions in some unknown (but,
presumably Right Now) time period. OK?!

It's not embedded, it's not the space shuttle, it's a transaction
processing system!

It _could_ be Big Iron. But, he's on the C ng so it's probably not, or
it would be CICS ( "Get your CICS, on IBM 666")


The OP never mentioned transactions; the OP was discussing doing a
million "==" or ">" operations a second.

There are a number of embedded systems where you do rather more
processing than that, for example real time video processing. Depending
on what is being done it could be on a processor that is only running at
a few MHz.
--
Mark Gordon
Nov 13 '05 #36
"Mark Gordon" <sp******@flash-gordon.me.uk> wrote in message
news:20030910202749.7d35419c.sp******@flash-gordon.me.uk...

The OP never mentioned transaction, the OP was discussing doing a
million "==" or ">" operations a second.

There are a number of embedded systems where you do rather more
processing than that, for example real time video processing. Depending
on what is being done it could be on a processor that is only running at
a few MHz.
--
Mark Gordon


Wow! You win. I was wrong.

--
ArWeGod
Nov 13 '05 #37
