Longfellow (in
12*************@corp.supernews.com) said:
 Question from a C duffer par excellence, who has occasionally
 written some useful C code and knows enough about C to get some
 idea of the enormity of the amount he does not know.

 Interesting and long thread about Nilges and the Sieve of
 Eratosthenes, and I, as I suppose most did, compiled both Nilges'
 and Heathfield's code (the former with some finagling, of course).
 In general, I duplicated the values in Heathfield's table of run
 times, but I couldn't figure out how he got values in thousandths
 of a second.

 I looked at time.h in K&R2 and got the definitions of clock_t and
 time_t values. Heathfield's usage looked as though it could
 reasonably return time values of that precision; i.e., there was no
 statement that time_t only resolves time to the nearest second. So
 I changed the fprintf from %.0f to %.3f, but all I got was zeros to
 three places, indicating precision only to the nearest second, IIUC.

 clock()/CLOCKS_PER_SEC is defined as returning processor time used
 since the start of execution, so I substituted it for the
 "difftime(end, start)" call. I got a value to the required
 precision, but it didn't match
 Heathfield's results and it did not change with a change in the
 range of the sieve. I concluded that I was misunderstanding the
 K&R2 definition. I even engaged in some desultory substitutions of
 clock_t for time_t with and without various inline type
 assignments, and got no results as I expected, but I figured I'd
 blindly do a little due diligence in case I could stumble across
 something.

 How did Heathfield get the degree of precision in his tables? What
 am I missing here?
The easy way is to run the timed code multiple times (say, 1000 for
the sake of argument) in a single run, and then divide the total by
the number of repetitions.

Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto