
Faster for() loops?

Neo
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?

Thanks,
-Neo
"Do U really think, what U think real is really real?"
Nov 15 '05 #1
109 Replies
Hi,

Neo wrote:
[...] In tight loops, this makes a considerable difference.
How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?


There is nothing like an experiment to test a theory. I just tried with
AVRGCC

void countDown(void){
    int i;
    for(i=10; i!=0; i--) doSomething();
}

void countUp(void){
    int i;
    for(i=0;i<10;i++) doSomething();
}

The generated code is

000000ce <countDown>:
}

void countDown(void){
ce: cf 93 push r28
d0: df 93 push r29
int i;
for(i=10; i!=0; i--) doSomething();
d2: ca e0 ldi r28, 0x0A ; 10
d4: d0 e0 ldi r29, 0x00 ; 0
d6: 0e 94 5d 00 call 0xba
da: 21 97 sbiw r28, 0x01 ; 1
dc: e1 f7 brne .-8 ; 0xd6
de: df 91 pop r29
e0: cf 91 pop r28
e2: 08 95 ret

000000e4 <countUp>:
}
void countUp(void){
e4: cf 93 push r28
e6: df 93 push r29
e8: c9 e0 ldi r28, 0x09 ; 9
ea: d0 e0 ldi r29, 0x00 ; 0
int i;
for(i=0;i<10;i++) doSomething();
ec: 0e 94 5d 00 call 0xba
f0: 21 97 sbiw r28, 0x01 ; 1
f2: d7 ff sbrs r29, 7
f4: fb cf rjmp .-10 ; 0xec
f6: df 91 pop r29
f8: cf 91 pop r28
fa: 08 95 ret

Counting down instead of up saves one whole instruction. It could make a
difference I suppose.

However, the compiler cannot optimise as well if anything in the loop
depends on the value of 'i'.
void countDown(void){
    int i;
    for(i=10; i!=0; i--) doSomething(i);
}

void countUp(void){
    int i;
    for(i=0;i<10;i++) doSomething(i);
}

Becomes

void countDown(void){
ce: cf 93 push r28
d0: df 93 push r29
int i;
for(i=10; i!=0; i--) doSomething(i);
d2: ca e0 ldi r28, 0x0A ; 10
d4: d0 e0 ldi r29, 0x00 ; 0
d6: ce 01 movw r24, r28
d8: 0e 94 5d 00 call 0xba
dc: 21 97 sbiw r28, 0x01 ; 1
de: d9 f7 brne .-10 ; 0xd6
e0: df 91 pop r29
e2: cf 91 pop r28
e4: 08 95 ret

000000e6 <countUp>:
}
void countUp(void){
e6: cf 93 push r28
e8: df 93 push r29
int i;
for(i=0;i<10;i++) doSomething(i);
ea: c0 e0 ldi r28, 0x00 ; 0
ec: d0 e0 ldi r29, 0x00 ; 0
ee: ce 01 movw r24, r28
f0: 0e 94 5d 00 call 0xba
f4: 21 96 adiw r28, 0x01 ; 1
f6: ca 30 cpi r28, 0x0A ; 10
f8: d1 05 cpc r29, r1
fa: cc f3 brlt .-14 ; 0xee
fc: df 91 pop r29
fe: cf 91 pop r28
100: 08 95 ret

This time there are a whole 2 extra instructions. I don't think this is
such a big deal. Unrolling the loop would give a better result.

cheers,

Al
Nov 15 '05 #2
Neo wrote:
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?


Many micros have a decrement-and-jump-if-zero (or non-zero) machine
instruction, so a decent optimising compiler should know this and use it in
count-down-to-zero loops. Counting up often needs a compare followed by a
jump-if-zero (or non-zero), which will be a tad slower.
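
(For illustration, a minimal sketch of the count-down idiom that such an
instruction targets; process() and send_block() are placeholder names, not
from this thread:)

#include <stdint.h>

void process(uint8_t byte); /* stand-in for the real loop body */

/* Count down to zero: on many micros (the AVR sbiw+brne pair shown
   elsewhere in this thread, or an 8051 DJNZ) the "n--" test maps onto
   a single decrement-and-branch sequence with no separate compare. */
void send_block(const uint8_t *buf, uint8_t n)
{
    while (n--)          /* test against zero, no compare against a limit */
        process(buf[n]); /* note: visits the bytes in reverse order */
}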

Ian

Nov 15 '05 #3
"Neo" <ti*********************@yahoo.com> wrote in message
news:43******@news.microsoft.com...
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?

Thanks,
-Neo
"Do U really think, what U think real is really real?"


The answer is "implementation-dependent".

A major advantage of writing in C is that you can, if you choose, write
understandable, maintainable code. This kind of hand-optimisation has the
opposite effect. If you really need to care about exactly how many
instruction cycles a loop takes, code it in assembly language. Otherwise, for
the sake of those that come after you, please write your C readably and
leave the compiler to do the optimisation. These days, most compilers can
optimise almost as well as you can, for most "normal" operations.

Regards,
--
Peter Bushell
http://www.software-integrity.com/
Nov 15 '05 #4
Neo wrote:
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?

Regardless of the performance issue, I'd like to point out that after
for( i=10; i--; ) finishes, i will have the value -1, since the
decrement is performed even if i is zero. This is counterintuitive, so
it's worth noting. It also means the following is not equivalent:

for (i = 10; i != 0; --i)

Since here one less decrement is performed. Incidentally, my
compiler/platform generates better code with this version -- it compares
i to -1 in the other, which is no better than comparing it to 10! If you
want to count down, I suggest writing what you mean and separating the
test and decrement parts -- it has the added bonus of making things more
readable. The rest is best left to the compiler.
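
(As an aside, a tiny compilable sketch of the difference described above,
not taken from the original post; the printf calls are only there to show
the final value of i:)

#include <stdio.h>

int main(void)
{
    int i;

    for (i = 10; i--; )        /* body sees 9..0 */
        ;
    printf("after the 'i--' test form:    i == %d\n", i);  /* prints -1 */

    for (i = 10; i != 0; --i)  /* body sees 10..1 */
        ;
    printf("after the explicit test form: i == %d\n", i);  /* prints 0 */

    return 0;
}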

S.
Nov 15 '05 #5
Neo wrote on 09/25/05 23:41:
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?

Thanks,
-Neo
"Do U really think, what U think real is really real?"


Unroll it completely.
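
(A minimal sketch of what that looks like for the 10-iteration loop from
the earlier example; doSomething() is the placeholder body used upthread,
and countUpUnrolled() is just an illustrative name:)

void doSomething(int i);

/* The loop unrolled completely: no counter, no test, no branch -
   just ten straight-line calls. */
void countUpUnrolled(void)
{
    doSomething(0); doSomething(1); doSomething(2); doSomething(3);
    doSomething(4); doSomething(5); doSomething(6); doSomething(7);
    doSomething(8); doSomething(9);
}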

Nov 15 '05 #6
"Neo" wrote:
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster.
[...]
How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?


It may or may not save a couple of assembly language instructions (of
course depending on the compiler and processor used), but I doubt this
"noptimization" will make any noticeable change in the performance of
a program, unless your code consists mainly of empty for() loops.

What impact can a minuscule reduction in the time required to decide
whether the loop has ended have, if the body of the loop, for
example, calls functions that format a CAN message, deliver it, wait
for a response, retry if there were errors or timeouts, decode the
response, store the values in a serial EEPROM, and based on them start
a few motors, open pneumatic valves, optionally sending an email
message to Katmandu?

That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...

Roberto Waltman

[ Please reply to the group, ]
[ return address is invalid. ]
Nov 15 '05 #7
That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...


What if the difference is between fitting into memory and not?


Nov 15 '05 #8
On Mon, 26 Sep 2005 12:11:23 +0530, in comp.lang.c , "Neo"
<ti*********************@yahoo.com> wrote:
Hi Folks, the page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR
states
(that reversing loop order is faster)

The page is talking rot. It *may* be faster. It *may* be slower. The
only way to know is to benchmark your particular implementation in the
specific case you're examining.
How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?


Benchmark.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #9
Depends what you're doing. If you're accessing a large chunk of memory on a system with
cache, you want to go through incrementing addresses to maximize the use of cache.
Decrementing through memory is generally pessimal.
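
(A minimal sketch of the point, with sum_forward()/sum_backward() as
illustrative names rather than anyone's actual code:)

#include <stddef.h>

/* Walking a large buffer with ascending addresses lets caches and
   prefetch hardware work in your favour. */
long sum_forward(const int *buf, size_t n)
{
    long sum = 0;
    size_t i;
    for (i = 0; i < n; i++)   /* ascending addresses */
        sum += buf[i];
    return sum;
}

/* The same walk with descending addresses saves the compare in the
   loop test but defeats most prefetch logic on cached systems. */
long sum_backward(const int *buf, size_t n)
{
    long sum = 0;
    while (n--)               /* descending addresses */
        sum += buf[n];
    return sum;
}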

--
#include <standard.disclaimer>
_
Kevin D Quitt USA 91387-4454 96.37% of all statistics are made up
Nov 15 '05 #10
In article <dh**********@slavica.ukpost.com>,
Ian Bell <ru*********@yahoo.com> wrote:
Many micros have a decrement jmp if zero (or non zero) machine instruction
so a decent optimising compiler should know this and use it in count down
to zero loops. Counting up often needs a compare followed by a jmp zero (or
non zero) which will be a tad slower.


The Pentium processors have a loop instruction. Every decent compiler
knows it and avoids it like hell because it runs slower than a subtract
+ compare + conditional branch :-)
Nov 15 '05 #11
In article <43********@mk-nntp-2.news.uk.tiscali.com>,
"Peter Bushell" <NO*************@SPAMsoftware-integrity.com> wrote:
These days, most compilers can
optimise almost as well as you can, for most "normal" operations.


Question: How can I optimise code better than the compiler?
Answer: If you ask, then you can't.
Nov 15 '05 #12
Joe Butler wrote:
That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...


What if the difference is between fitting into memory and not?


If you are hunting for that small an amount to get the program to fit
then you are in trouble anyway. Something will need changing making it
no longer fit!
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #13
I think I disagree.

If you can fit something into a cheaper processor model because you save a
couple of bytes by changing 1 or two loops, then you are not in trouble
anymore.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:rq************@brenda.flash-gordon.me.uk...
Joe Butler wrote:
That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...


What if the difference is between fitting into memory and not?


If you are hunting for that small an amount to get the program to fit
then you are in trouble anyway. Something will need changing making it
no longer fit!
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.

Nov 15 '05 #14
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:rq************@brenda.flash-gordon.me.uk...
Joe Butler wrote:
That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...

What if the difference is between fitting into memory and not?
If you are hunting for that small an amount to get the program to fit
then you are in trouble anyway. Something will need changing making it
no longer fit!
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.


Don't include peoples signatures unless you are commenting on them.
I think I disagree.

If you can fit something into a cheaper processor model because you
save a couple of bytes by changing 1 or two loops, then you are not in
trouble anymore.


I'll be more explicit then. EVERY SINGLE TIME I have come across a
system where people have tried to squeeze the code in believing it will
just about fit (either size or speed) one of the following has happened:

1) Customer required a subsequent change which proved to be impossible
unless the card was redesigned because there was no space for the new
code.
2) A bug fix requires some additional code and oops, there is no more
space.
3) By the time all the required stuff (which the person who thought it
would only just fit had forgotten) was added, it did NOT fit by a mile,
so it did not even come close to meeting the customer's requirements.
4) It turned out there were massive savings to be had elsewhere because
of higher level problems, allowing me to save far more space/time than
you could possibly save by such micro-optimisations.

Only with the third of those possibilities was it possible to meet the
requirements using the existing hardware, and meeting the requirements
involved fixing the algorithms or doing large scale changes where the
coding was just plain atrocious.

So my experience is that it is never worth bothering with such
micro-optimisations.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #15
OK, point taken. Although, when working with very small memories, it can
make all the difference if a byte can be saved here and there. After all, 50
such 'optimisations' could amount to 10% of the total memory available. I'm
not necessarily suggesting this should be done from day 1, but have found it
useful just to get a feel for what the compiler works best with.

"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:sm************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:rq************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

>That is not an optimization, but a total waste of time. Read the first
>example in "Elements of programming style" and learn...

What if the difference is between fitting into memory and not?

If you are hunting for that small an amount to get the program to fit
then you are in trouble anyway. Something will need changing making it
no longer fit!
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.


Don't include peoples signatures unless you are commenting on them.
> I think I disagree.
>
> If you can fit something into a cheaper processor model because you
> save a couple of bytes by changing 1 or two loops, then you are not in
> trouble anymore.


I'll be more explicit then. EVERY SINGLE TIME I have come across a
system where people have tried to squeeze the code in believing it will
just about fit (either size or speed) one of the following has happened:

1) Customer required a subsequent change which proved to be impossible
unless the card was redesigned because there was no space for the new
code.
2) A bug fix requires some additional code and oops, there is no more
space.
3) By the time all the required stuff (which the person who thought it
would only just fit had forgotten) was added, it did NOT fit by a mile,
so it did not even come close to meeting the customer's requirements.
4) It turned out there were massive savings to be had elsewhere because
of higher level problems, allowing me to save far more space/time than
you could possibly save by such micro-optimisations.

Only with the third of those possibilities was it possible to meet the
requirements using the existing hardware, and meeting the requirements
involved fixing the algorithms or doing large scale changes where the
coding was just plain atrocious.

So my experience is that it is never worth bothering with such
micro-optimisations.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.

Nov 15 '05 #16
["Followup-To:" header set to comp.arch.embedded.]
On 2005-09-26, Neo <ti*********************@yahoo.com> wrote:
Hi Folks,

The page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR states:

for( i=0; i<10; i++){ ... }

i loops through the values 0,1,2,3,4,5,6,7,8,9. If you don't care about
the order of the loop counter, you can do this instead:

for( i=10; i--; ) { ... }

Using this code, i loops through the values 9,8,7,6,5,4,3,2,1,0, and the
loop should be faster. This works because it is quicker to process "i--"
as the test condition, which says "is i non-zero? If so, decrement it and
continue." For the original code, the processor has to calculate
"subtract i from 10. Is the result non-zero? If so, increment i and
continue." In tight loops, this makes a considerable difference.

How far does this hold true in the light of modern optimizing compilers?
And will it make a significant difference on embedded systems?
You could just test it.

I think it's a mistake to obfuscate your loops just for the sake of
(what is probably) saving one instruction which in all likelihood
isn't on the critical path of your application _anyway_. If, as you say,
you don't use the loop index, you could indeed do without the one
extra compare instruction, but you'd probably benefit from loop unrolling
too.

Premature optimization is a hindrance to software development.

Mark
Thanks,
-Neo
"Do U really think, what U think real is really real?"

Nov 15 '05 #17
Mark McIntyre wrote:
On Mon, 26 Sep 2005 12:11:23 +0530, in comp.lang.c , "Neo"
<ti*********************@yahoo.com> wrote:

Hi Folks, the page at http://www.abarnett.demon.co.uk/tutorial.html#FASTFOR
states

(that reversing loop order is faster)

The page is talking rot. It *may* be faster. It *may* be slower. The
only way to know is to benchmark your particular implementation in the
specific case you're examining.


Actually, the page is talking rubbish about a great deal more than just
this case. It's full of generalisations that depend highly on the
compiler and target in question (the post is cross-posted to
comp.arch.embedded, so we are looking at a wide range of targets). "Use
switch instead of if...else..." (varies widely according to
target/compiler and the size of the switch), "Avoid ++, -- in while ()
expressions" (good compilers work well with such expressions), "Use
word-size variables instead of chars" (great for PPC, indifferent for
msp430, terrible for AVR), "Addition is faster than multiplication - use
'val + val + val' instead of 'val * 3' " (wrong for most compiler/target
combinations).

It's a nice idea to try to list such tips, but the page is badly out of
date, and makes all sorts of unwarranted assumptions.

So, as Mark says, benchmark your implementation. Also examine the
generated assembly code (you do understand the generated assembly? If
not, forget about such minor "optimisations".) And remember Knuth's
rules regarding such code-level optimisations:

1. Don't do it.
2. (For experts only) Don't do it yet.
Nov 15 '05 #18
In article <43******@news.microsoft.com>,
Neo <ti*********************@yahoo.com> wrote:
for( i=10; i--; )
{ ... }


I tend to avoid this kind of loop because it's a bit less
intuitive to use with unsigned loop counters. After the
loop is done, an unsigned i would be set to some very high
implementation-defined number.
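
(A small compilable illustration of that point; main() here is just a test
harness, not from the original post:)

#include <stdio.h>

int main(void)
{
    unsigned int i;

    for (i = 10; i--; )   /* body sees 9..0 */
        ;

    /* The final "i--" wraps i around to UINT_MAX (how big that is
       depends on the implementation), so the counter is useless for
       anything after the loop. */
    printf("i after the loop: %u\n", i);
    return 0;
}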

There is not much to be gained on loops that only count to
10... that extra instruction 10 times through the loop
would only add an extra 10 nanoseconds. This is likely to
pale in significance to any useful work done in the body of
the loop.

Loops that range over memory should never count backwards,
at least not when speed is important. For better or worse,
operating systems and memory caches only prefetch when
reading ascending addresses.
Nov 15 '05 #19
On Tue, 27 Sep 2005 18:21:59 GMT, an******@example.com (Anonymous
7843) wrote:
In article <43******@news.microsoft.com>,
Neo <ti*********************@yahoo.com> wrote:
for( i=10; i--; )
{ ... }


I tend to avoid this kind of loop because it's a bit less
intuitive to use with unsigned loop counters. After the
loop is done, an unsigned i would be set to some very high
implementation-defined number.


FWIW, my bit-bang SPI output function looks something like

bit_ctr = 8;
do
{
    Set_IO(SPI_DATA, (data&0x80) != 0);
    Set_IO(SPI_CLOCK, 1);
    data <<= 1;
    Set_IO(SPI_CLOCK, 0);

} while (--bit_ctr);

which seems intuitive for the function at hand, and generates nearly
optimal assembly on all the platforms it's used on.

Regards,

-=Dave
--
Change is inevitable, progress is not.
Nov 15 '05 #20
Joe Butler wrote:
OK, point taken.
[snip]
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:sm************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.

You need to get the other point, the one about not top-posting.

Brian
Nov 15 '05 #21
prick.

"Default User" <de***********@yahoo.com> wrote in message
news:3p************@individual.net...
Joe Butler wrote:
OK, point taken.


[snip]
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:sm************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.

You need to get the other point, the one about not top-posting.

Brian

Nov 15 '05 #22
"Joe Butler" <ff***********@hotmail-spammers-paradise.com> writes:
prick.

"Default User" <de***********@yahoo.com> wrote in message
news:3p************@individual.net...
Joe Butler wrote:
> OK, point taken.


[snip]
> "Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
> news:sm************@brenda.flash-gordon.me.uk...
> > Joe Butler wrote:
> >
> > Don't top post. Replies belong after the text you are replying to.

You need to get the other point, the one about not top-posting.


I'm going to make one and only one attempt to make this clear to you.
It's arguably more than you deserve, but if you improve your posting
habits it will benefit all of us. If not, we can achieve the same
benefit by killfiling you -- which I'm sure many people already have.

We don't discourage top-posting because we like to enforce arbitrary
rules. We do so because top-posting makes articles more difficult to
read, especially in an environment like this one where bottom-posting
is a long-standing tradition. (In other words, top-posting in an
environment where it's commonly accepted is ok; top-posting where it's
not commonly accepted makes *your* article more difficult to read
because we have to make an extra effort to read a different format.)

Usenet is an asynchronous medium. Parent articles can arrive after
followups, or not arrive at all. It's important for each individual
article to be readable by itself, from top to bottom, providing as
much context as necessary from previous articles. It's not reasonable
to expect your readers to jump around within your article to
understand what you're talking about.

Quoting one of CBFalconer's sig quotes:

A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

See also <http://www.caliburn.nl/topposting.html>.

I'm giving you good advice. You can believe me and follow it, or you
can ignore it and never be taken seriously here again. It's your
call.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #23
In article <43***********************@news.zen.co.uk>,
Joe Butler <ff***********@hotmail-spammers-paradise.com> wrote:
OK, point taken. Although, when working with very small memories, it can
make all the difference if a byte can be saved here and there. Afterall, 50
such 'optimisations' could amount to 10% of the total memory available. I'm
not necessarily suggesting this should be done from day 1, but have found it
useful just to get a feel for what the compiler works best with.


If you are working with just 512 bytes of program memory, then you
probably should not be writing in C. C as a programming language makes
no attempt to minimize code space. And though it is a matter outside of the
standards, most compilers prefer to trade off space for increased speed.
--
These .signatures are sold by volume, and not by weight.
Nov 15 '05 #24
Kevin D. Quitt <KQ****@IEEInc.com> wrote:

Where is the context? You've got me all twisted up.
Depends what you're doing.
I'm reading Usenet articles at the moment. "Depends what..." That's
what I am doing. What...?
If you're accessing a large chunk of memory on a system with
cache, you want to go through incrementing addresses to maximize the use of cache.
OK, I am not doing that at this particular moment. Why are you
advising me on my disposition to go for large chunks? I do sometimes
"swing the other way" after all. You may hate me, but don't judge me!
Decrementing through memory is generally pessimal.


I do it all the time. I'm bad, I know, but had to be told. Shame on
me.

--
Dan Henry
Nov 15 '05 #25
Joe Butler wrote:
prick.

"Default User" <de***********@yahoo.com> wrote in message
news:3p************@individual.net...
Joe Butler wrote:


<snip information on correct posting which has been ignored>

Well, one thing you seem to have learned is how to avoid getting help
when you want it. The convention of bottom posting has been established
for a long time and the reasons for it have been discussed here and else
where on many occasions, so I'm not going to debate them. Had you taken
not of the practices of this group you would have known this.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #26
Joe Butler wrote:
OK, point taken. Although, when working with very small memories, it can
make all the difference if a byte can be saved here and there. Afterall, 50
such 'optimisations' could amount to 10% of the total memory available. I'm
not necessarily suggesting this should be done from day 1, but have found it
useful just to get a feel for what the compiler works best with.
If saving 50 or even 100 machine code instructions saves you 10% of
memory then you only have space for 1000 instructions and would, in my
opinion, be better off programming in assembler where you actually have
control over what happens. Otherwise you might well find changes in the
compiler and library from version to version are more significant.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:sm************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.


<snip>

I would refer you to the above, but based on your response to Brian I'm
guessing that you have no consideration for other users of this group.
Don't be surprised if this leaves you with only the ignorant to talk to.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #27
You need to get the other point, the one about not top-posting.


I'm going to make one and only one attempt to make this clear to you.
It's arguably more than you deserve, but if you improve your posting
habits it will benefit all of us. If not, we can achieve the same
benefit by killfiling you -- which I'm sure many people already have.


<big snip>

See also <http://www.caliburn.nl/topposting.html>.


See http://alpage.ath.cx/toppose/toppost.htm as well.

Thanks,

Al
Nov 15 '05 #28
See http://alpage.ath.cx/toppose/toppost.htm as well.


Oops. I can't spell.

Make that http://alpage.ath.cx/toppost/toppost.htm

Al
Nov 15 '05 #29
To Flash and Walter,

Maybe 10% saved for a 1k memory was a bit of an exaggeration on my part.

I'm currently working with some inherited AVR GNU code that was incomplete,
by quite some margin, yet close to the preferred memory limit with decent debug
info, and too big to feel comfortable without the debug. Checking out the
size differences resulting from alternative ways of writing the same code
has resulted in a worthwhile amount of memory being recovered (I believe) -
i.e. I'm more relaxed about the situation now. I'm telling the compiler to
optimise for size, since speed is not a problem. Of course, if things won't
fit at the end, then I'll have to look for other optimisations, such as the
assembler suggested. It's unlikely that I'll be forced to switch to a
different version of the compiler or libs. Changing the code's architecture
and data structures results in bigger savings - I'm almost thru doing that
(for other reasons) and it looks like I'll recover over 1k of mem (out of
8k), and the resulting code is cleaner and easier to understand too.

You both probably know this through experience, but one trick I've found -
simply making a local copy of a global variable that is used a fair bit in
a function, and then copying it back to the global afterwards - saves a
reasonable amount of code size over the more obvious code, enough to make
this particular trick worth knowing/trying when things do get tight - it's
obviously (to me now) a bigger saving if the global happens to be volatile
as well.
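
(For concreteness, a minimal sketch of the trick; the names and the ISR are
made up for illustration, not the actual project code:)

volatile unsigned char rx_count;      /* global shared with mainline code */

void uart_rx_isr(void)
{
    unsigned char count = rx_count;   /* read the volatile global once */

    /* ... use and update 'count' freely; the compiler can keep it in a
       register instead of re-reading/re-writing memory at every access ... */
    count++;

    rx_count = count;                 /* write it back once at the end */
}

(Note this changes the access pattern of a volatile object, so it is only
safe when nothing else can touch the global while the ISR runs - see the
exchange further down the thread.)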

I have no personal problem trying these things out if it helps me to
understand what the compiler is likely to do with new code that I author
(after all, it takes about 6 minutes to try a little trick out, which means
that it'll take about an hour to know 10 new things about the behaviour of
the compiler and to recognise opportunities, etc. in the future.) Perhaps at
the end of the project, I would have had plenty of room anyway, but, as I
said things were tight, I was feeling uneasy, and every time I wanted more
debug info, it meant choosing something else for temporary culling which was
beginning to make things thoroughly difficult.

Things seem to be going 'swimmingly' now - I hope that holds up to the end.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:e2************@brenda.flash-gordon.me.uk...
Joe Butler wrote:
OK, point taken. Although, when working with very small memories, it can make all the difference if a byte can be saved here and there. Afterall, 50 such 'optimisations' could amount to 10% of the total memory available. I'm not necessarily suggesting this should be done from day 1, but have found it useful just to get a feel for what the compiler works best with.


If saving 50 or even 100 machine code instructions saves you 10% of
memory then you only have space for 1000 instructions and would, in my
opinion, be better off programming in assembler where you actually have
control over what happens. Otherwise you might well find changes in the
compiler and library from version to version are more significant.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:sm************@brenda.flash-gordon.me.uk...
Joe Butler wrote:

Don't top post. Replies belong after the text you are replying to.


<snip>

I would refer you to the above, but based on your response to Brian I'm
guessing that you have no consideration for other users of this group.
Don't be surprised if this leaves you with only the ignorant to talk to.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.

Nov 15 '05 #30
Al Borowski <al*********@EraseThis.gmail.com> writes:
You need to get the other point, the one about not top-posting.

I'm going to make one and only one attempt to make this clear to you.
It's arguably more than you deserve, but if you improve your posting
habits it will benefit all of us. If not, we can achieve the same
benefit by killfiling you -- which I'm sure many people already have.


<big snip>
See also <http://www.caliburn.nl/topposting.html>.


See http://alpage.ath.cx/toppose/toppost.htm as well.


Snipping attributions is considered rude. The quoted material
starting with "I'm going to make one and only one attempt" is mine.
I don't remember who wrote the line above that.

There is one valid point in the referenced web page (titled "In
Defence of Top Posting"):

Bottom posting without snipping irrelevant parts is at least as
annoying as a top-post.

Which is why nobody recommends bottom-posting without snipping
irrelevant parts.

Apart from that, the argument seems to be based on assumptions about
the software people use to read Usenet, and the environment in which
it runs.

Once again: Usenet is an asynchronous medium. The parent article may
not have arrived yet. It may have expired. It may never arrive at
all. I may have read it a week ago and forgotten it, and I very likely
have my newsreader configured not to display articles I've already
read.

A simple command to jump to the parent article is a useful feature in
a newsreader, and one that exists in the one I use. I have no idea
which other newsreaders have such a command. That's why I try to make
each article readable by itself.

Perhaps the most telling point in the web page is:

A Threaded newsreader. These days, pretty much every PC has one.

Not everyone reads Usenet on a PC.

I realize this is cross-posted to comp.lang.c and comp.arch.embedded.
Perhaps top-posting is tolerated in comp.arch.embedded. In
comp.lang.c, we've reached a general consensus that top-posting is
discouraged, for perfectly valid reasons. And even if I *liked*
top-posting, I wouldn't do it in comp.lang.c; consistency is even more
important than the arguments in favor of a given style.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #31
Al Borowski wrote:
http://alpage.ath.cx/toppost/toppost.htm
You might like to google for "fallacious arguments".
From your page:


"...embarked on a crusade...
"...top-posting isn't the spawn of Satan...
"...Some...mantras...
"And to those of you who act like this is a religious issue,
get a life."

The only person bringing religion into this is yourself.

--
Peter

Nov 15 '05 #32
Hi,

Peter Nilsson wrote:
Al Borowski wrote:
http://alpage.ath.cx/toppost/toppost.htm

You might like to google for "fallacious arguments".
From your page:


"...embarked on a crusade...
"...top-posting isn't the spawn of Satan...
"...Some...mantras...
"And to those of you who act like this is a religious issue,
get a life."

The only person in bringing religion into this is yourself.

I wrote that page some time ago in response to an argument on a
different newsgroup. The language I used was very tame compared to some
of the abuse heaped on top-posters at the time.

I'm not getting into an online debate on top-posting. That page has my
views and I won't repeat them here.

thanks,

Al

Nov 15 '05 #33
Keith Thompson wrote:
<snip>

I realize this is cross-posted to comp.lang.c and comp.arch.embedded.
Perhaps top-posting is tolerated in comp.arch.embedded. In
comp.lang.c, we've reached a general consensus that top-posting is
discouraged, for perfectly valid reasons. And even if I *liked*
top-posting, I wouldn't do it in comp.lang.c; consistency is even more
important than the arguments in favor of a given style.


Top-posting is strongly discouraged in comp.arch.embedded as well. It's
not as bad as google groups posts that fail to include any context,
however, which seems to be considered the worst sin at the moment (I'm
not sure where posting in html ranks these days - I haven't seen any
html posts here for a while).

David.
Nov 15 '05 #34
Joe Butler wrote:
To Flash and Walter,

Maybe 10% saved for a 1k memory was a bit of an exageration on my part.
If it's less, then it is even less worthwhile.

<snip>
different version of the compiler or libs. Changing the code's architecture
and data structures results in bigger savings - I'm almost thru doing that
(for other reasons) and it looks like I'll recover over 1k of mem (out of
8k). and the resulting code is cleaner and easier to understand too.
Which just goes to show that you should concentrate on high level
optimisations not micro-optimisations.
You both probably know this through experience, but one trick I've found of
simply making a local copy of a global variable, that is used a fair bit in
a function, and then coping it back to the global afterwards saves a
reasonable amount of code size, over the more obvious code,
I've used processors where it would be one instruction to set up the
offset then another to read/write the value for a local as compared to
in the worst case a single instruction the same size as that pair of
instructions for a global. Of course, the compiler optimisations make
this irrelevant with most compilers since they will keep it in a
register if it is used much.
that it makes
this particular trick worth knowing/trying when things do get tight - it's
obviously (to me now) a bigger saving if the global happens to be volatile
as well.
If it is volatile you are significantly changing the semantics, so
either the original code was wrong or your "optimised" code is wrong.
I have no personal problem trying these things out if it helps me to
understand what the compiler is likely to do with new code that I author
(afterall, it takes about 6 minutes to try a little trick out, which means
that it'll take about an hour to know 10 new things about the behaviour of
the compiler and to recognise oportunities, etc. in the future.)
All that you learn about micro-optimisation on one system you have to
forget on the next where your "optimisations" actually make it worse.
Also, if your optimisations make the code harder to read you have just
increased the cost to the company of maintenance, and since maintenance
is often a major part of the cost of SW over its lifetime this is not good.
Perhaps at
the end of the project, I would have had plenty of room anyway, but, as I
said things were tight, I was feeling uneasy, and every time I wanted more
debug info, it meant choosing something else for temporary culling which was
beginning to make things thorougly difficult.
It would almost certainly have saved you time and effort to restructure
the code and optimise the algorithms first (which you say you are doing)
since then you would not have found the need for "temporary culling"
because you have already admitted that it is saving you a significant
amount.
Things seem to be going 'swimmingly' now - I hope that holds up to the end.
Not really. You included in your message that restructuring is saving
you a vast amount of space which is further evidence that you should
always start at the algorithm and work down, not start with
micro-optimisation.
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:e2************@brenda.flash-gordon.me.uk...


<snip>

For continual refusal to post properly despite being informed that top
posting is not accepted here...

*PLONK*
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #35

"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:r5************@brenda.flash-gordon.me.uk...
Joe Butler wrote:
To Flash and Walter,

Maybe 10% saved for a 1k memory was a bit of an exageration on my part.
If it's less then it is even less worth while.


If you say so. It seems to me that I've just given myself a bit more memory
to play with in a tight situation.

<snip>
different version of the compiler or libs. Changing the code's architecture and data structures results in bigger savings - I'm almost thru doing that (for other reasons) and it looks like I'll recover over 1k of mem (out of 8k). and the resulting code is cleaner and easier to understand too.
Which just goes to show that you should concentrate on high level
optimisations not micro-optimisations.


Yes, that's correct. It does not mean that so-called micro-optimisations
should not be attempted.
You both probably know this through experience, but one trick I've found of simply making a local copy of a global variable, that is used a fair bit in a function, and then coping it back to the global afterwards saves a
reasonable amount of code size, over the more obvious code,
I've used processors where it would be one instruction to set up the
offset then another to read/write the value for a local as compared to
in the worst case a single instruction the same size as that pair of
instructions for a global. Of course, the compiler optimisations make
this irrelevant with most compilers since they will keep it in a
register if it is used much.


Well, in my case, GNU didn't seem to choose this optimisation - so, the
trick saved some bytes - that's indisputable, and as far as I can see, odd
that you'd want to dismiss the idea.
> that it makes
this particular trick worth knowing/trying when things do get tight - it's obviously (to me now) a bigger saving if the global happens to be volatile as well.
If it is volatile you are significantly changing the semantics, so
either the original code was wrong or your "optimised" code is wrong.


I don't think so. The code is in an ISR. While the ISR is running, the
global won't be changed externally. The little trick saved something like
28 bytes and yet the global was only touched twice, if I remember correctly.

However, if someone has a counter example, I would be interested in knowing.
I have no personal problem trying these things out if it helps me to
understand what the compiler is likely to do with new code that I author
(afterall, it takes about 6 minutes to try a little trick out, which means that it'll take about an hour to know 10 new things about the behaviour of the compiler and to recognise oportunities, etc. in the future.)
All that you learn about micro-optimisation on one system you have to
forget on the next where your "optimisations" actually make it worse.


Perhaps. Yet, for very little effort, I have given myself more room to play
with in the current system, I've got a better feeling for the compiler's
quirks, and have saved (with just a few micro-optimisations) about 250 bytes
(out of 8k) - that's plenty more to extend an existing parameter look-up
table built into the system in both directions (thus increasing the
versatility of the system), or to include more human-readable debug output
and even to use more sophisticated code for some other operations that are
yet to be written. So, as far as I can see, I've gained by this exercise.
I've given myself an advantage, and you are still telling me it was next to
worthless because it comes under the label 'micro-optimisation' or
'premature optimisation' - to me, it's just an achieved gain.
Also, if your optimisations make the code harder to read you have just
increased the cost to the company of maintenance, and since maintenance
is often a major part of the cost of SW over its lifetime this is not good.

Errm, in this case, I don't think the code is harder to read - it's
straightforward C code - nothing particularly difficult. There are ample
comments documenting reasons for some of the slightly non-obvious ways of
doing something, along with the 'obvious' code snippet for ready reference.
The code will go into a mass-produced product that is to be thoroughly tested
before release - maintenance is not an option.
> Perhaps at
the end of the project, I would have had plenty of room anyway, but, as I said things were tight, I was feeling uneasy, and every time I wanted more debug info, it meant choosing something else for temporary culling which was beginning to make things thorougly difficult.
It would almost certainly have saved you time and effort to restructure
the code and optimise the algorithms first (which you say you are doing)
since then you would not have found the need for "temporary culling"
because you have already admitted that it is saving you a significant
amount.


I can't see this. I've positively gained thru this exercise. I have not
lost anything.
Things seem to be going 'swimmingly' now - I hope that holds up to the
end.
Not really. You included in your message that restructuring is saving
you a vast amount of space which is further evidence that you should
always start at the algorithm and work down, not start with
micro-optimisation.


It looks like I will save about 1k with the restructure (but I cannot tell
for sure until I get the full re-structure into the embedded compiler - I'm
currently writing and testing the restructured part under Windows). So, the
'micro-optimisations' would amount to another 25% on top of that - hardly a
worthless effort, wouldn't you say?

Nov 15 '05 #36
On 28 Sep 2005 09:30:44 +0200, David Brown
<da***@westcontrol.removethisbit.com> wrote:
Top-posting is strongly discouraged in comp.arch.embedded as well...


It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.
-Robert Scott
Ypsilanti, Michigan
Nov 15 '05 #37
Robert Scott wrote:

On 28 Sep 2005 09:30:44 +0200, David Brown
<da***@westcontrol.removethisbit.com> wrote:
Top-posting is strongly discouraged in comp.arch.embedded as well...


It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.


I don't like top posting.

--
pete
Nov 15 '05 #38
Joe Butler wrote:
prick.

*plonk*


Brian
Nov 15 '05 #39
"Robert Scott" <no****@dont-mail-me.com> wrote in message
news:43***************@news.provide.net...
On 28 Sep 2005 09:30:44 +0200, David Brown
<da***@westcontrol.removethisbit.com> wrote:
Top-posting is strongly discouraged in comp.arch.embedded as well...


It amazes me how some people can claim to speak for the whole group
with no documentation at all. I, for one, appreciate top-posting when
it is appropriate. At least I won't claim to speak for a whole group
who did not elect me to represent them.


For the record, I simply don't care. I'm happy to make allowances for all
kinds. Life's too short.

Steve
http://www.fivetrees.com
Nov 15 '05 #40
On Wed, 28 Sep 2005 03:29:35 +0100, in comp.lang.c , "Joe Butler"
<ff***********@hotmail-spammers-paradise.com> wrote:
To Flash and Walter,


Are you clinically thick? You're /still/ top posting. In CLC that's the
height of rudeness. Stop it.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #41
On Wed, 28 Sep 2005 12:24:54 +1000, in comp.lang.c , Al Borowski
<al*********@EraseThis.gmail.com> wrote:
See http://alpage.ath.cx/toppose/toppost.htm as well.


Oops. I can't spell.

Make that http://alpage.ath.cx/toppost/toppost.htm


For what it's worth, arguments based on threading are bunk. Thread
members arrive out-of-order, not at all, and disappear from servers.

And arguments based on catering for cretinous users who're too thick
to snip are hardly likely to impress smart people...

I slightly agree with your conclusions tho. I'd modify (1) and (2) as
follows

1) Top Post (and totally remove the old message) if you are only
making 1 brief point which has no relation to any of the old message.
And then ask yourself why the hell you're replying to this post with
an irrelevant remark.

2) otherwise middle post

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #42

"Dave Hansen" <id**@hotmail.com> wrote in message
news:1127846384.a408ac38deacc6faada150be098c0112@teranews...
On Tue, 27 Sep 2005 18:21:59 GMT, an******@example.com (Anonymous
7843) wrote:
In article <43******@news.microsoft.com>,
Neo <ti*********************@yahoo.com> wrote:
for( i=10; i--; )
{ ... }


I tend to avoid this kind of loop because it's a bit less
intuitive to use with unsigned loop counters. After the
loop is done, an unsigned i would be set to some very high
implementation-defined number.


FWIW, my bit-bang SPI output function looks something like

bit_ctr = 8;
do
{
Set_IO(SPI_DATA, (data&0x80) != 0);
Set_IO(SPI_CLOCK, 1);
data <<= 1;
Set_IO(SPI_CLOCK, 0);

} while (--bit_ctr);

which seems intuitive for the function at hand, and generates nearly
optimal assembly on all the platforms it's used on.


A little alarm goes off in my brain every time I see an arithmetic variable
(bit_ctr) used as the parameter of a logical expression (while).

The following line better expresses what I think you mean:
while (--bit_ctr != 0);

The compiler optimizer may well deliver the same machine code for both
methods.

And please don't let me get started with the use of NULL pointers in similar
situations.

Nov 15 '05 #43

Joe Butler wrote:

That is not an optimization, but a total waste of time. Read the first
example in "Elements of programming style" and learn...


What if the difference is between fitting into memory and not?


The answer to that is the same remedy offered to all sufferers of
premature optimisation: Measure First.

Nov 15 '05 #44

David Brown wrote:
..."Use
word-size variables instead of chars" (great for PPC, indifferent for
msp430, terrible for AVR),
Catastrophic for PIC18.
"Addition is faster than multiplication - use
'val + val + val' instead of 'val * 3' " (wrong for most compiler/target
combinations).


And the implication is especially wrong for any val*n where n>3.

--T

Nov 15 '05 #45
Mark McIntyre <ma**********@spamcop.net> writes:
[...]
I slightly agree with your conclusions tho. I'd modify (1) and (2) as
follows

1) Top Post (and totally remove the old message) if you are only
making 1 brief point which has no relation to any of the old message.
And then ask yourself why the hell you're replying to this post with
an irrelevant remark.
Top-posting is posting new text *above* any quoted text; if there is
no quoted text, it's not top-posting (or at best it's a degenerate
case that could as easily be called bottom-posting or middle-posting).
What you're describing is, or should be, simply posting a new article.
2) otherwise middle post


--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #46
"Richard Henry" <rp*****@home.com> writes:
"Dave Hansen" <id**@hotmail.com> wrote in message
news:1127846384.a408ac38deacc6faada150be098c0112@teranews...

[...]
FWIW, my bit-bang SPI output function looks something like

bit_ctr = 8;
do
{
Set_IO(SPI_DATA, (data&0x80) != 0);
Set_IO(SPI_CLOCK, 1);
data <<= 1;
Set_IO(SPI_CLOCK, 0);

} while (--bit_ctr);

which seems intuitive for the function at hand, and generates nearly
optimal assembly on all the platforms it's used on.


A little alarm goes off in my brain every time I see an arithmetic variable
(bit_ctr) used as the parameter of a logical expression (while).

The following line better expresses what I think you mean:
while (--bit_ctr != 0);

The compiler optimizer may well deliver the same machine code for both
methods.

And please don't let me get started with the use of NULL pointers in similar
situations.


I tend to agree, in that I try to use only expressions that are
logically Boolean values as conditions, adding an explicit comparison
for anything else. But things like "while (--bit_ctr)", "if (ptr)",
and "if (!ptr)" are common C idioms. You might consider disconnecting
the little alarm in your brain, or at least turning down the volume.
The set of C code that you can easily read should be much wider than
the set of C code that you'd be willing to write.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #47
In article <3q************@individual.net>,
"Richard Henry" <rp*****@home.com> wrote:
"Dave Hansen" <id**@hotmail.com> wrote in message
news:1127846384.a408ac38deacc6faada150be098c0112@teranews...
FWIW, my bit-bang SPI output function looks something like

bit_ctr = 8;
do
{
Set_IO(SPI_DATA, (data&0x80) != 0);
Set_IO(SPI_CLOCK, 1);
data <<= 1;
Set_IO(SPI_CLOCK, 0);

} while (--bit_ctr);

which seems intuitive for the function at hand, and generates nearly
optimal assembly on all the platforms it's used on.


A little alarm goes off in my brain every time I see an arithmetic variable
(bit_ctr) used as the parameter of a logical expression (while).

The following line better expresses what I think you mean:
while (--bit_ctr != 0);

The compiler optimizer may well deliver the same machine code for both
methods.

And please don't let me get started with the use of NULL pointers in similar
situations.


Well, if anyone writes code like
char* p = /* whatever */
do
{
/* somestuff */

} while (--p);


then they will get what they deserve!
Nov 15 '05 #48

Christian Bau wrote:
...
Well, if anyone writes code like
char* p = /* whatever */
do
{
/* somestuff */

} while (--p);


then they will get what they deserve!


Do you really think

for( p = /*whatever*/ ; p ; --p ){
/*somestuff*/
}

is any better?

Of course, the idea of counting a *pointer* down to NULL is rather
implausible in real code. Maybe you intended an integer variable.

Nov 15 '05 #49

"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
"Richard Henry" <rp*****@home.com> writes:
"Dave Hansen" <id**@hotmail.com> wrote in message
news:1127846384.a408ac38deacc6faada150be098c0112@teranews...

[...]
FWIW, my bit-bang SPI output function looks something like

bit_ctr = 8;
do
{
Set_IO(SPI_DATA, (data&0x80) != 0);
Set_IO(SPI_CLOCK, 1);
data <<= 1;
Set_IO(SPI_CLOCK, 0);

} while (--bit_ctr);

which seems intuitive for the function at hand, and generates nearly
optimal assembly on all the platforms it's used on.


A little alarm goes off in my brain every time I see an arithmetic variable (bit_ctr) used as the parameter of a logical expression (while).

The following line better expresses what I think you mean:
while (--bit_ctr != 0);

The compiler optimizer may well deliver the same machine code for both
methods.

And please don't let me get started with the use of NULL pointers in similar situations.


I tend to agree, in that I try to use only expressions that are
logically Boolean values as conditions, adding an explicit comparison
for anything else. But things like "while (--bit_ctr)", "if (ptr)",
and "if (!ptr)" are common C idioms. You might consider disconnecting
the little alarm in your brain, or at least turning down the volume.
The set of C code that you can easily read should be much wider than
the set of C code that you'd be willing to write.


I read it ok. I can see that it will work. But when I encounter these
situations, I just stop to think "What will happen here? How can this go
wrong?"

Nov 15 '05 #50