
# Same program in C and in C#. C# is faster than C. How Come ?

P: n/a
Hi everyone. My cousin and I were talking about C and C#. I love C and he loves C#, and we were arguing that C is ...blah blah... and C# is blah blah... etc. Then we decided to each write a program that calculates the factorial of 10, 10 million times, and prints the result to a file named log.txt. I wrote something like this:

```c
#include <stdio.h>

unsigned int fib (int n);

int main()
{
    FILE *fp;
    unsigned int loop = 1;
    if ((fp = fopen("log.txt", "a")) != NULL) {
        for (loop; loop <= 10000000; loop++) {
            fprintf(fp, "%u\n", fib(10));
        }
        fclose(fp);
    }
    return 0;
}

unsigned int fib (int n)
{
    if (n != 1)
        return n * fib(n-1);
    else
        return 1;
}
```

He did the same thing in C#, and we both have the same laptop, a Dell Inspiron 6000. I ran my program and it took 18 seconds to finish; his program took 7 seconds. Wow. Then I asked him to let me run my program on his laptop (it's identical, but I wanted to be sure). I ran it and it gave the same time. How come?!

The next day, I tried some optimization and unrolled the loop like this:

```c
for (loop; loop <= 1000000; loop++) {
    fprintf(fp, "%u\n %u\n %u\n %u\n %u\n ", fib(10), fib(10), fib(10), fib(10), fib(10));
    fprintf(fp, "%u\n %u\n %u\n %u\n %u\n ", fib(10), fib(10), fib(10), fib(10), fib(10));
}
```

But his program is still faster than mine. Then I tried the program under Slackware 12 and it took 3.8 seconds to finish. Wow, I won the challenge! Anyway, he wants me to beat him under Windows XP too. Please help me out, guys.

Dec 12 '07 #1
41 Replies

P: n/a
In article <...>, c wrote:

> unsigned int fib (int n)
> {
>     if (n != 1)
>         return n * fib(n-1);
>     else
>         return 1;
> }

I think "fact" would be a better name for this function.

> and then we all have the same laptop..DELL Inspiron 6000.
> I ran my program, I took 18 seconds to get done

What compiler did you use? What optimisation settings? You can't tell much about the relative advantages of the languages just from a random figure like that.

> and developed the loop and wrote something like this
> for (loop; loop <= 1000000 ; loop++) {
>     fprintf(fp,"%u\n %u\n %u\n %u\n %u\n ",fib(10),fib(10),fib(10),fib(10),fib(10));
>     fprintf(fp,"%u\n %u\n %u\n %u\n %u\n ",fib(10),fib(10),fib(10),fib(10),fib(10));
> }

This is likely to be a pointless change. The loop overhead is small compared with the time taken by the factorial function. Most likely your program would be significantly faster if you used a loop to calculate the factorial rather than recursion.

-- Richard
-- :wq
Dec 12 '07 #2

 P: n/a On Dec 13, 2:35 am, rich...@cogsci.ed.ac.uk (Richard Tobin) wrote: In article , c

P: n/a
On Dec 13, 2:41 am, Richard Tobin wrote:
> I think "fact" would be a better name for this function.
> What compiler did you use? What optimisation settings?
> [...] Most likely your program would be significantly faster if you used a loop to calculate the factorial rather than recursion.

I tried the Tiny C Compiler, and also Turbo C. We both (my cousin and I) used recursion in our programs. Thank you. One more thing: why does my program go faster under Slackware with GCC? I tested it with time:

    [slackware] time ./fact

Dec 12 '07 #4

 P: n/a On Dec 12, 11:45 pm, c , c

P: n/a
On Dec 12, 3:23 pm, c wrote:
> [original program and timings snipped]

The recursive factorial function will be a lot slower than an iterative one. But the lion's share of the time is going to be in writing out the text file; it occupies 160 MB on my machine. It appears that your friend has a faster disk than you do.

This might be marginally faster:

```c
#include <stdio.h>

unsigned fact(unsigned n)
{
    unsigned result = 1;
    while (n-- > 1)
        result *= n;
    return result;
}

int main()
{
    FILE *fp;
    unsigned loop;
    if ((fp = fopen("log.txt", "a")) != NULL) {
        setvbuf(fp, NULL, _IOFBF, 16000);
        for (loop = 1; loop <= 10000000; loop++) {
            fprintf(fp, "%u\n", fact(10));
        }
        fclose(fp);
    }
    return 0;
}
```

This program executes in less than one second on my machine (showing that the time is almost exclusively I/O):

```c
#include <stdio.h>

unsigned fact(unsigned n)
{
    unsigned result = 1;
    while (n-- > 1)
        result *= n;
    return result;
}

int main()
{
    FILE *fp;
    unsigned loop;
    double sum = 0;
    if ((fp = fopen("log.txt", "a")) != NULL) {
        setvbuf(fp, NULL, _IOFBF, 16000);
        for (loop = 1; loop <= 10000000; loop++) {
            sum += fact(10);
        }
        printf("sum was %f\n", sum);
        fclose(fp);
    }
    return 0;
}
/* C:\tmp>foo
   sum was 3628800000000.000000 */
```

Dec 12 '07 #6

P: n/a
On Dec 13, 2:54 am, user923005 wrote:
> The recursive factorial function will be a lot slower than an iterative one. But the lion's share of the time is going to be in writing out the text file. It occupies 160 MB on my machine. It appears that your friend has a faster disk than you do.
> [code snipped]

Thanks, sir, for your reply. You mentioned "It appears that your friend has a faster disk than you do". We both have the same laptop, same model; anyway, I tested my program on his laptop just in case. I also compiled the code you posted; it saves a 0-byte text file on my machine. I will try another compiler and get back to you.

Dec 13 '07 #7

P: n/a
On Dec 13, 2:51 am, Francine.Ne...@googlemail.com wrote:
> The answer to your question is 42.

What do you mean by 42? Please give me more details.

Dec 13 '07 #8

P: n/a
c wrote:
> [original program and timings snipped]

My first guess is that fprintf is your slowest dog. The second most obvious improvement would be to remove the recursion from your fib function. Your program runs in 19 cursor blinks on my machine. My program, new.c, runs in 11 cursor blinks.

```c
/* BEGIN new.c */
#include <stdio.h>
#include <limits.h>

long unsigned fib(int n);
void lutoa(long unsigned n, char *s);

int main(void)
{
    FILE *fp;
    long unsigned loop = 10000000;
    char lutoa_buff[(sizeof(long) * CHAR_BIT) / 3 + 1];

    if ((fp = fopen("log.txt", "w")) != NULL) {
        do {
            lutoa(fib(10), lutoa_buff);
            fputs(lutoa_buff, fp);
            putc('\n', fp);
        } while (--loop != 0);
        fclose(fp);
    }
    return 0;
}

long unsigned fib(int n)
{
    long unsigned r = 1;

    while (n > 1) {
        r *= n--;
    }
    return r;
}

void lutoa(long unsigned n, char *s)
{
    long unsigned tenth;
    char *p, swap;

    p = s;
    tenth = n;
    do {
        tenth /= 10;
        *p++ = (char)(n - 10 * tenth + '0');
        n = tenth;
    } while (tenth != 0);
    *p-- = '\0';
    while (p > s) {
        swap = *s;
        *s++ = *p;
        *p-- = swap;
    }
}
/* END new.c */
```

-- pete
Dec 13 '07 #9

P: n/a
On Dec 13, 3:07 am, pete wrote:
> My first guess is that fprintf is your slowest dog. The second most obvious improvement would be to remove the recursion from your fib function.
> [new.c snipped]

Thanks, pete. But I've got to use recursion; otherwise it's not fair. My cousin uses recursion in the program he wrote in C#, so I have to as well :-) otherwise I am cheating. Thank you very much.

Dec 13 '07 #10

P: n/a
c wrote:
> On Dec 13, 2:51 am, Francine.Ne...@googlemail.com wrote:
>> The answer to your question is 42.
> What do you mean by 42..please give me more details..

Brian
Dec 13 '07 #11

P: n/a
c wrote:
> But I gotta use recursion..It's not fair..My friend uses recursion in his program

Here is the same new.c with the recursive factorial kept:

```c
/* BEGIN new.c */
#include <stdio.h>
#include <limits.h>

long unsigned fib(int n);
void lutoa(long unsigned n, char *s);

int main(void)
{
    FILE *fp;
    long unsigned loop = 10000000;
    char lutoa_buff[(sizeof(long) * CHAR_BIT) / 3 + 1];

    if ((fp = fopen("log.txt", "w")) != NULL) {
        do {
            lutoa(fib(10), lutoa_buff);
            fputs(lutoa_buff, fp);
            putc('\n', fp);
        } while (--loop != 0);
        fclose(fp);
    }
    return 0;
}

long unsigned fib(int n)
{
    return n != 1 ? n * fib(n - 1) : 1;
}

void lutoa(long unsigned n, char *s)
{
    long unsigned tenth;
    char *p, swap;

    p = s;
    tenth = n;
    do {
        tenth /= 10;
        *p++ = (char)(n - 10 * tenth + '0');
        n = tenth;
    } while (tenth != 0);
    *p-- = '\0';
    while (p > s) {
        swap = *s;
        *s++ = *p;
        *p-- = swap;
    }
}
/* END new.c */
```

-- pete
Dec 13 '07 #12

P: n/a
On Dec 13, 3:43 am, pete wrote:
> [revised new.c snipped]

Thank you very much, that's very sweet of you.

Dec 13 '07 #13

P: n/a
On Dec 13, 3:46 am, "Default User" (Brian) wrote:

I still don't know what you guys mean by 42!!

Dec 13 '07 #14

P: n/a
c wrote:
> Me and my Cousin were talking about C and C#, I love C and he loves C#
> [...] we decided to write a program that will calculate the factorial
> of 10, 10 million times and print the result in a file named log.txt.

Methinks you are not really measuring program speed, but I/O performance. You were printing '3628800' 10 million times; that's only 7 bytes per I/O. In C, try using fwrite() instead, and issue only one I/O per 4 kb; even better would be one I/O per 64 kb. If you do that, I guess you drop below 2 seconds on XP.

-- Tor
Dec 13 '07 #15

P: n/a
On Dec 13, 12:27 am, c wrote:
> My friend uses recursion in his program, the program that he wrote in C#..So I have to :-) otherwise, I am cheating.

Hi c,

At the risk of starting yet another flame war on topicality... this is all OT here, and you'd be better off asking... mmm, I'm really not sure, actually. A Windows programming group, maybe.

Have you checked the compiler settings you used when you compiled on XP? (Richard Tobin mentioned this above; I'm not sure if you spotted it.)

The fib() function you list (which, as others have said, should probably be called fact()!) is an excellent candidate for optimisation. Google for "tail recursion". If you turn up the optimisation on your compiler then this might kick in and help a bit. But I think that might be cheating too. AFAIK, the current C# compiler doesn't use this optimisation. (The CLR supports it, but the C# compiler doesn't yet emit it.) So I don't think your cousin's program benefits from it. (Don't take my word as gospel.)

But as others have stated, it actually looks like it's the I/O that's being compared at the moment. I guess you're really comparing your C standard library implementation against the CLR, in one particular use case. Which is interesting, but probably not what you and your cousin set out to compare.

Maybe you and your cousin could come up with a range of other tests, like purely compute-bound (crunch some matrices or calculate some primes) or I/O-bound (copy some files).

Doug
Dec 13 '07 #16

P: n/a
c wrote:
> and then we all have the same laptop..DELL Inspiron 6000.
> I ran my program, I took 18 seconds to get done..his program took 7 seconds..Wow

Must be down to your compiler or options; your code ran in 3 seconds on my laptop (Dell M60, same CPU as the Inspiron 6000) and 1 second on my desktop...

-- Ian Collins.
Dec 13 '07 #17

 P: n/a On Dec 12, 6:23 pm, c

P: n/a
On Dec 12, 4:05 pm, c wrote:
> anyway, I compiled the code you posted..its save a 0-byte text file on machine..
> I will try with another compiler..I'll get back to you..

If your disk drives are the same, that indicates that the buffered I/O of C# is superior to the buffered I/O of C. You must have compiled the second program, which does not write anything to disk. The first program writes a 160 MB file. Here is a very cheesy alternative:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>

unsigned fact(unsigned n)
{
    unsigned result = 1;
    while (n-- > 1)
        result *= n;
    return result;
}

static char absurd_buffer[160000000];

int main(void)
{
    FILE *fp;
    unsigned loop;

    if ((fp = fopen("log.txt", "w")) != NULL) {
        if (setvbuf(fp, absurd_buffer, _IOFBF, sizeof absurd_buffer) != 0) {
            int e = errno;
            puts(strerror(e));
            printf("Incorrect type or size of buffer for log.txt. "
                   "Value of errno is %d.\n", e);
        }
        for (loop = 1; loop <= 10000000; loop++) {
            fprintf(fp, "%u\n", fact(10));
        }
        fclose(fp);
    }
    return 0;
}
```

Dec 13 '07 #19

P: n/a
c wrote:
> still don't know that do you guys mean by 42!!

If you followed up that wikipedia link, you should know that '42' is meant as a nonsense answer to a question that hasn't been well thought out.

Dec 13 '07 #20

P: n/a
On Dec 13, 4:52 am, user923005 wrote:
> Here is a very cheesy alternative:
> [code snipped]

I compiled your code with Borland C++ and used the option "optimize for speed":

    bcc32.exe -O2 fact.c

and compiled the C# code, which uses recursion (yours doesn't), all on one computer, the same computer, all on his laptop. Guess what: his is faster, way faster. I tried all of the code posted in this thread, every single piece of code you guys posted, and his is still faster, way faster. But under Slackware, any code will be faster, even stupid code, with recursion or without.

Dec 13 '07 #21

P: n/a
c wrote:
> [original program snipped]

As others have mentioned, you don't actually seem to be measuring your fib() (should be called fact() or factorial()) function. On my system, your program ran in 4.98 seconds. When I changed the line

    fprintf(fp,"%u\n",fib(10));

to

    fprintf(fp, "%u\n", 3628800u);

it ran in 5.27 seconds. I don't believe the apparent slowdown is significant; it's probably within the margin of error of my measurement. But it suggests that the performance of the program is dominated by writing 153 megabytes (!) of output. If you really want to measure the speed of the computations, don't intersperse them with I/O.

-- Keith Thompson (The_Other_Keith)
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Dec 13 '07 #22

P: n/a
Tor Rustad wrote:
> Methinks, you are not really measuring program speed, but I/O performance. You was printing '3628800' 10 million times, that's only 7 bytes per I/O. In C, try using fwrite() instead, and issue only one I/O per 4 kb, even better should be one I/O per 64 kb. If doing that, I guess you drop below 2 seconds on XP.

I doubt it. Output using stdio is buffered; most of those fprintf calls are just writing to memory. Probably the overhead of fprintf processing the format string is significant; I just sped it up by a factor of 3 or so by using fputs() rather than fprintf() (with a constant string in both cases).

Actually, that could be an interesting exercise: find out where the program is really spending its time. (Note that such an exercise isn't necessarily topical here in comp.lang.c.)

-- Keith Thompson (The_Other_Keith)
Dec 13 '07 #23

P: n/a
Richard Tobin wrote:
> I think "fact" would be a better name for this function.

You can't trust people who tell fibs.

Dec 13 '07 #24

 P: n/a On 13 Des, 03:42, Keith Thompson

 P: n/a On 13 Des, 03:46, Keith Thompson

P: n/a
c wrote:
> unsigned int fib (int n)
> {
>     if (n != 1)
>         return n * fib(n-1);
>     else
>         return 1;
> }
> and he did the something in C#

1. Use the iterative variant, as was pointed out.
2. Mark the function "const"!

I wonder why nobody has pointed out #2 so far; probably because it's not a language thing, but a compiler specification. When using gcc it will work, and it might with other compilers as well. When optimization is used, it will make your program run like hell, as the factorial is only calculated one single time. Kind of unfair, yes, but a cool optimization. And you're comparing apples to oranges anyway, so...

Define/declare your function like this:

```c
#include <stdio.h>

unsigned fact(unsigned n) __attribute__((const));

unsigned fact(unsigned n)
{
    unsigned result = 1;
    while (n-- > 1)
        result *= n;
    return result;
}

int main()
{
    int i;
    for (i = 0; i < 1000000; i++) {
        printf("%u\n", fact(10));
    }
    return 0;
}
```

Greetings,
Johannes

-- "Many of the mathematicians' theories are wrong and clearly blasphemous. I suspect that these wrong theories are loved precisely because of that." -- Prophet and visionary Hans Joss aka HJP in de.sci.mathematik <47**********************@news.sunrise.ch>
Dec 13 '07 #27

P: n/a
Johannes Bauer wrote:
> 1. Use the iterative variant, as was pointed out.
> 2. Mark the function "const"!
> I wonder why nobody has pointed out #2 so far - probably because it's not a language thing, but a compiler specification.

Exactly.

> When using gcc, it will work, it might with other compilers as well. When optimization is used it will make your program run like hell,

Haven't various posters established that the time /of the program being tested/ is dominated by /output/? Having fact-misnamed-fib take /zero time/ won't help appreciably.

-- Chris "expanding head" Dollin
Hewlett-Packard Limited
registered office: Cain Road, Bracknell, Berks RG12 1HN; registered no: 690597 England
Dec 13 '07 #28

 P: n/a On 12 Dec, 23:23, c wrote:

> #include <stdio.h>
>
> unsigned int fib (int n);
>
> int main()
> {
>     FILE *fp;
>     unsigned int loop = 1;
>
>     if ((fp = fopen("log.txt", "a")) != NULL)
>         for (loop; loop <= 10000000; loop++) {
>             fprintf(fp, "%u\n", fib(10));
>         }
>     fclose(fp);
>     return 0;
> }
>
> unsigned int fib (int n)
> {
>     if (n != 1)
>         return n * fib(n-1);
>     else
>         return 1;
> }

Since everything here is hard-wired, why not simply:

#include <stdio.h>

int main()
{
    FILE *fp;
    unsigned int loop = 1;

    if ((fp = fopen("log.txt", "a")) != NULL) {
        for (loop; loop <= 10000000; loop++) {
            fprintf(fp, "3628800\n");
        }
        fclose(fp);
    }
    return 0;
}

I betcha that's faster than the recursion...

Dec 13 '07 #29


 P: n/a Tor Rustad writes:
>> I don't believe the apparent slowdown is significant; it's probably
>> within the margin of error of my measurement. But it suggests that the
>> performance of the program is dominated by writing 153 megabytes (!)
>> of output.
>
> Keith, can you explain how writing 7 bytes 10 million times is
> generating a 153 Mb file on your system? Did you run the program twice? ;-)

Yes, I did -- without noticing that the "log.txt" file is opened in append mode. (Apparently when you're writing 3628800 ten million times, it's important not to overwrite the previous ten million occurrences of 3628800.)

-- Keith Thompson (The_Other_Keith) Looking for software development work in the San Diego area. "We must do something. This is something. Therefore, we must do this." -- Antony Jay and Jonathan Lynn, "Yes Minister" Dec 13 '07 #31


 P: n/a Keith Thompson wrote:
> Tor Rustad writes:
>> Regarding 7 bytes fprintf(), odd plus odd is even, even plus odd is
>> odd. So it goes 10 million times, that's at least 5 million
>> non-optimal memory accesses... Perhaps the bottleneck on most systems
>> is the disk subsystem, but the least we can do is make sure that every
>> IO is accessing aligned memory, and change the buffer from 7 bytes to
>> something bigger (e.g. 64 kb).
>
> The output statement is fprintf(fp,"%u\n",fib(10)); where fib(10) is
> 3628800, so that's 7 digits *plus a new-line*. If a new-line is written
> as a single character, that's 8 bytes per call.

Oooops, right... I didn't notice that new-line! *red-face*

However, a 1-byte new-line is a UNIX thing; on Windows it's usually two bytes, so my point still holds. ;-)

I did a benchmark; the best IO result was the "Standard C IO no buffering" case, where each fwrite() call used a buffer with 64 kb of data. Re-running the same bench under Linux gave quite a surprise (see below)!!!

On Windows XP I got:

printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 7.03
DiskIO 11.11 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 4.36
DiskIO 17.93 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 2.53
DiskIO 30.86 Mb/s

Under Linux:

$ gcc challenge.c stdio_c.c -O3
$ time ./a.out
printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 2.74
DiskIO 28.51 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.27
DiskIO 289.42 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.26
DiskIO 300.55 Mb/s

real 0m7.834s
user 0m2.528s
sys 0m0.756s

300.55 Mb/s is too good; the theoretical peak should have been 150 Mb/s. I don't know why Linux is this blitzing fast! Does it have anything to do with the dual core CPU??? *strange*

/*-------------- listing 'challenge.c' ------------------------*/
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <time.h>

#define FILE_NAME "log.txt"
#define IO_BLOCK_SIZE (64*1024)

unsigned int fac(int n)
{
    if (n != 1)
        return n * fac(n - 1);
    else
        return 1;
}

/* make_io_buf: from number 'u', fill a buffer with this value 'count' times
 * return length of buffer */
static size_t make_io_buf(char *buf, size_t max_buf, size_t *count, unsigned u)
{
    int n, len;
    char u_buf[32];

    n = sprintf(u_buf, "%u\n", u);
    assert(n == 8);

    /* fill up the buffer */
    for (len = 0; len + n <= (int)max_buf; len += n) {
        strcpy(&buf[len], u_buf);
        *count = *count + 1;
    }
    assert(len <= (int)max_buf);
    assert(len + n > (int)max_buf);
    return len;
}

static void print_diff(clock_t start, clock_t stop, size_t wr_cnt)
{
    double s = (double)(stop - start) / CLOCKS_PER_SEC,
           Mb = (double)wr_cnt / 1024000.0;

    printf("Written %lu pages of ca. size %d (%lu bytes)\n",
           (unsigned long)(wr_cnt / IO_BLOCK_SIZE), IO_BLOCK_SIZE,
           (unsigned long)wr_cnt);
    printf("CPU time %.2f\n", s);
    printf("DiskIO %.2f Mb/s\n", Mb / s);
}

extern int stdio_op(const char *fname, const unsigned char *buf, size_t buf_len);
extern int stdio_c(const char *fname, const unsigned char *buf, size_t buf_len);
extern int stdio_nobuf(const char *fname, const unsigned char *buf, size_t buf_len);

int main(void)
{
    static unsigned char buffer[IO_BLOCK_SIZE];
    size_t buf_len = 0, count = 0, wr_cnt = 0;
    clock_t start, stop;

    printf("printing '%u' 10 million times\n", fac(10));

    printf("------------------------------\nStandard C IO by OP\n");
    start = clock();
    wr_cnt = stdio_op(FILE_NAME, NULL, 0);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    printf("------------------------------\nStandard C IO\n");
    start = clock();
    buf_len = make_io_buf((char *)buffer, sizeof buffer, &count, fac(10));
    wr_cnt = stdio_c(FILE_NAME, buffer, buf_len);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    printf("------------------------------\nStandard C IO no buffering\n");
    start = clock();
    buf_len = make_io_buf((char *)buffer, sizeof buffer, &count, fac(10));
    wr_cnt = stdio_nobuf(FILE_NAME, buffer, buf_len);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    remove(FILE_NAME);
    return 0;
}

/*-------------- listing stdio_c.c ------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

extern unsigned int fac(int n);   /* defined in challenge.c */

int stdio_op(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    unsigned int loop, n = 0;

    if ((fp = fopen("log.txt", "a")) != NULL) {
        for (loop = 1; loop <= 10000000; loop++) {
            n += fprintf(fp, "%u\n", fac(10));
        }
        fclose(fp);
    }
    return n;
}

int stdio_c(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    size_t loop, count = buf_len / 8;
    int wr_cnt = 0;

    if ((fp = fopen(fname, "w+b")) != NULL) {
        for (loop = 0; loop <= 10000000; loop += count) {
            wr_cnt += fwrite(buf, 1, buf_len, fp);
        }
        fclose(fp);
    }
    return wr_cnt;
}

int stdio_nobuf(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    size_t loop, count = buf_len / 8;
    int wr_cnt = 0;

    if ((fp = fopen(fname, "w+b")) != NULL) {
        if (0 == setvbuf(fp, NULL, _IONBF, 0))
            puts("Buffering turned off");
        for (loop = 0; loop <= 10000000; loop += count) {
            wr_cnt += fwrite(buf, 1, buf_len, fp);
        }
        fclose(fp);
    }
    return wr_cnt;
}

-- Tor Dec 14 '07 #33


 P: n/a Stephen Sprunk wrote:
> "Tor Rustad" writes:
>> Standard C IO no buffering
>> Buffering turned off
>> Written 1221 pages of ca. size 65536 (80019456 bytes)
>> CPU time 0.26
>> DiskIO 300.55 Mb/s
>> ...
>> 300.55 Mb/s is too good; the theoretical peak should have been
>> 150 Mb/s. I don't know why Linux is this blitzing fast! Does it have
>> anything to do with the dual core CPU??? *strange*
>
> The speed you're measuring is how fast the fwrite() call returns, not
> the disk write performance. Most likely, fwrite() returns after putting
> the data into a buffer but before the hardware actually acknowledges
> it's been written to disk. Modern OSes do all sorts of things like that
> to improve performance.

Nope, the clock was started *before* fopen(), and stopped *after* fclose().

If Linux is using lazy commit, i.e. allowing a file to be e.g. closed before flushing system IO buffers, that would be highly interesting/bad, particularly for those working with critical data.

Added another test case, this time using low-level POSIX IO functions and sync'ing the file *before* close'ing it; the result was sky-high IO, 434 Mb/s! So the IO subsystem file writes are more than 10 times faster on Linux than under Windows XP; the tests were done with identical HW and C source. *amazing*

$ ./a.out
printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 2.80
DiskIO 27.90 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.28
DiskIO 279.09 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.24
DiskIO 325.60 Mb/s
------------------------------
Low-level Linux IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.18
DiskIO 434.13 Mb/s

/*------- listing lowio_linux.c -----------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int lowio_linux(const char *fname, const unsigned char *buf, size_t buf_len)
{
    unsigned int loop, n = 0, chunk = buf_len / 8;
    int fd, flags = O_WRONLY | O_CREAT | O_SYNC;
    ssize_t rc;

    fd = open(fname, flags, S_IRUSR | S_IWUSR);
    if (fd != -1) {
        for (loop = 0; loop <= 10000000; loop += chunk) {
            rc = write(fd, buf, buf_len);
            if (rc == -1)
                puts("write error"), exit(EXIT_FAILURE);
            n += rc;
        }
        rc = fsync(fd);
        if (rc == -1)
            puts("fsync error"), exit(EXIT_FAILURE);
        close(fd);
    }
    return n;
}

-- Tor Dec 14 '07 #36

 P: n/a Tor Rustad wrote: [...]
> Added another test case, this time using low-level POSIX IO functions
> and sync'ing the file *before* close'ing it; the result was sky-high
> IO, 434 Mb/s! So the IO subsystem file writes are more than 10 times
> faster on Linux than under Windows XP; the tests were done with
> identical HW and C source. *amazing*

I knew something was *wrong*. Grrr... these numbers are not valid!

Why? What basic mistake did I make? Hint: what does clock() measure, and where does the program spend the time?

-- Tor Dec 14 '07 #37

 P: n/a "Tor Rustad" wrote:
> Stephen Sprunk wrote:
>> The speed you're measuring is how fast the fwrite() call returns, not
>> the disk write performance. Most likely, fwrite() returns after
>> putting the data into a buffer but before the hardware actually
>> acknowledges it's been written to disk. Modern OSes do all sorts of
>> things like that to improve performance.
>
> Nope, the clock was started *before* fopen(), and stopped *after*
> fclose().

fclose() just closes the FILE* (and related fd); it does _not_ guarantee that the data is actually physically on the disk. There may be some OS-specific function that will give you the indication you're wrongly assuming you're getting, but it's not on by default.

> If Linux is using lazy commit, i.e. allowing a file to be e.g. closed
> before flushing system IO buffers, that would be highly
> interesting/bad, particularly for those working with critical data.

It's completely normal. That's why one should always shut down machines cleanly instead of pulling the plug -- and why data (and filesystems) tend to get corrupted when the power goes out. Even the disks lie to the OS about when data is written; as soon as the data is in the drive's cache, it tells the OS it's done writing so that the OS can reuse the buffer(s). Only top-of-the-line controllers with battery-backed caches are immune from these sorts of problems (and even then, you have to boot the machine again and let it finish writing before removing the disk from the system).

> Added another test case, this time using low-level POSIX IO functions
> and sync'ing the file *before* close'ing it; the result was sky-high
> IO, 434 Mb/s! So the IO subsystem file writes are more than 10 times
> faster on Linux than under Windows XP; the tests were done with
> identical HW and C source. *amazing*

It's not so amazing when you realize you're measuring the speed of the OS's I/O system, not the hardware's.

S

-- Stephen Sprunk "God does not play dice." --Albert Einstein CCIE #3723 "God is an inveterate gambler, and He throws the K5SSS dice at every possible opportunity." --Stephen Hawking Dec 14 '07 #38

 P: n/a Stephen Sprunk wrote:
> "Tor Rustad" writes:
>> If Linux is using lazy commit, i.e. allowing a file to be e.g. closed
>> before flushing system IO buffers, that would be highly
>> interesting/bad, particularly for those working with critical data.
>
> It's completely normal. That's why one should always shut down
> machines cleanly instead of pulling the plug -- and why data (and
> filesystems) tend to get corrupted when the power goes out. Even the
> disks lie to the OS about when data is written; as soon as the data is
> in the drive's cache, it tells the OS it's done writing so that the OS
> can reuse the buffer(s). Only top-of-the-line controllers with
> battery-backed caches are immune from these sorts of problems (and
> even then, you have to boot the machine again and let it finish
> writing before removing the disk from the system).

Some years ago, I was told that lazy commit was the reason we didn't run *any* production systems on Linux; now it's allowed in *some* cases to use this OS. If an OS doesn't flush system buffers when a file is closed, or is doing a fflush() in an async manner, then it is impossible to code a bullet-proof C program with file I/O in that environment.

The C standard is silent here; data has only to be transferred to the host environment. Depending on QoI, this data may be written to persistent storage before fclose() returns, or at program exit().

Just consider this: a DB program starts a transaction with "begin work", then performs lots of updates, before "commit work"... if you can't trust that the data has been saved on disk, then you risk that the transaction can be lost, in case of e.g. power outage or HW failures (e.g. disk controller failure).

If a disk controller lies, the flaw is limited to that unit. If an OS lies, that is an important thing to know about, and I don't think I would like to use such an OS for work-related production systems.

> It's not so amazing when you realize you're measuring the speed of the
> OS's I/O system, not the hardware's.

Well, in the benchmark I ran, the result couldn't be explained by disk cache alone, since that is somewhere in the range of 8 Mb - 32 Mb, while the measured IO peak was >150 Mb/s too high. Also, since I did an explicit call to fsync() in the last test case, the data should have been committed to disk before the stop timer was set. A clear sign that something was wrong was that fsync()'ing boosted the performance by another 100 Mb/s!

I think the simple answer here is that the clock() implementation on Linux measured processor time used in *user* space only, ignoring the (significant) time spent in system calls. :)

-- Tor Dec 14 '07 #39

 P: n/a Tor wrote: ) Some years ago, I was told that lazy commit, was the reason we didn't ) run *any* production systems on Linux, now it's allowed in *some* cases ) to use this OS. If an OS don't flush system buffers when a file is ) closed, or is doing a fflush() in an async manner, then it is impossible ) to code a bullet-proof C program with file I/O, in that environment. You can actually turn that off in Linux, you know. ) I think the simple answer here, is that the clock() implementation on ) Linux, measured processor time used in *user* space only, ignoring the ) (significant) time spent in system calls. :) Hmm, not quite. The clock() function is supposed to only measure processor time used, so time spent waiting for I/O to finish will not be counted. (After all, during that time, other tasks can get their share of CPU time.) It does count CPU time spent in the system call though, AFAIK. SaSW, Willem -- Disclaimer: I am in no way responsible for any of the statements made in the above text. For all I know I might be drugged or something.. No I'm not paranoid. You all think I'm paranoid, don't you ! #EOT Dec 15 '07 #40

 P: n/a Willem wrote: Tor wrote: ) Some years ago, I was told that lazy commit, was the reason we didn't ) run *any* production systems on Linux, now it's allowed in *some* cases ) to use this OS. If an OS don't flush system buffers when a file is ) closed, or is doing a fflush() in an async manner, then it is impossible ) to code a bullet-proof C program with file I/O, in that environment. You can actually turn that off in Linux, you know. It appears, that write-back cache is turned off by default on e.g. Windows and Solaris, but not on my Linux distro. I see that InnoDB MySQL people suggest doing e.g. hdparm -W 0 /dev/sda to turn it off on Linux. BUT to my mind, this must be a work around, since fsync() should do the job. If fsync() cannot be trusted to do the right thing, that's a nasty Linux bug, and write-back cache must/should be turned off manually. ) I think the simple answer here, is that the clock() implementation on ) Linux, measured processor time used in *user* space only, ignoring the ) (significant) time spent in system calls. :) Hmm, not quite. The clock() function is supposed to only measure processor time used, so time spent waiting for I/O to finish will not be counted. (After all, during that time, other tasks can get their share of CPU time.) It does count CPU time spent in the system call though, AFAIK. You might be right, the point is anyway that the actual I/O was not counted by clock(), as such, the clock() function is rather useless for measuring performance of I/O bound programs. -- Tor Dec 15 '07 #41

 P: n/a Tor Rustad Tor wrote:) Some years ago, I was told that lazy commit, was the reason wedidn't ) run *any* production systems on Linux, now it's allowed in*some* cases ) to use this OS. If an OS don't flush system bufferswhen a file is ) closed, or is doing a fflush() in an async manner,then it is impossible ) to code a bullet-proof C program with fileI/O, in that environment.You can actually turn that off in Linux, you know. It appears, that write-back cache is turned off by default on e.g. Windows and Solaris, but not on my Linux distro. I see that InnoDB MySQL people suggest doing e.g. hdparm -W 0 /dev/sda to turn it off on Linux. BUT to my mind, this must be a work around, since fsync() should do the job. If fsync() cannot be trusted to do the right thing, that's a nasty Linux bug, and write-back cache must/should be turned off manually. Assuming journal markers are stored on the same device then I don't see what the issue is. Lazy on or off, there is still the opportunity for data loss and any restarting program (application or os) must examine journal marks to determine the last safe write. Dec 15 '07 #42

### This discussion thread is closed

Replies have been disabled for this discussion.