"xianwei" <ba*********@gmail.com> wrote in the news message
11**********************@k79g200...legroups.com...
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(int argc, char *argv[])
{
    long i = 10000000L;
    clock_t start, end;
    double duration;

    printf("Time to do %ld loops: ", i);
    start = clock();
    while (i--)
        ;
    end = clock();
    duration = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%f seconds\n", duration);
    return EXIT_SUCCESS;
} /* ---------- end of function main ---------- */
I ran the above program.
The first time, it took 0.031000 seconds.
The second time, it took 0.015000 seconds.
If I try again and again, the time spent is always 0.031 or 0.015
seconds.
Why is there such a big difference?
I suspect the clock() function on your system has a granularity of around 15
milliseconds. If this is the case, clock() will return the same value for
all calls during each 15-millisecond interval. Depending on when exactly you
start your measurements within that interval, a task lasting less than 15 ms
can be "clocked" as lasting 0 ms or 15 ms. Similarly, one that takes between
15 and 30 ms might be reported as taking exactly 15 ms or exactly 30 ms.
On top of this granularity issue, you should look into what the clock()
function actually measures. Does it measure elapsed time? Total processor
time? Processor time spent in your program vs. time spent in the system? Or
something else entirely? Your "timings" will also be affected by other tasks
the computer performs while your program executes, and by many other
characteristics of your system (cache memory, bus sharing with I/O devices,
etc.).
For your particular concern, I suggest you try to synchronize your timing
efforts with this small loop:

clock_t last, start;

last = start = clock();
while (start == last) {
    start = clock();
}
You should try to measure longer computations, by repeating them in a loop
or by increasing the loop count.
You should consider using more accurate timing functions, such as the
non-standard gettimeofday() available on Linux and other POSIX systems.
You should repeat the tests many many times, and average the results,
discarding extreme values.
Effective code profiling is *very* difficult. Drawing conclusions or making
changes from profiling data is not easy either: what holds on one
architecture does not necessarily hold on another, even a slightly different
one. There is no definitive truth in this domain.
--
Chqrlie