
# finding how much time is consumed?

Hi there. If I make a function in C (I actually use GNU right now), is there any way to find out how many clocksycluses that function takes? If I divide some numbers, e.g. Var1 = Var2/Var3, is a fixed number of clocksycluses used for that division, or does it vary?

Raymond

Aug 18 '06 #1
5 Replies

ra*****@yahoo.no wrote On 08/18/06 09:29:

> Hi there. If I make a function in C (I actually use GNU right now),
> is there any way to find out how many clocksycluses that function
> takes?

("Clocksycluses?" After some puzzlement the light dawned: I *think* what you mean is the two-word phrase "clock cycles." However, the word "clocksycluses" has a certain fascination, and people may take it up and start using it. You may have gained immortality by enriching the lexicon!)

C provides the clock() function, which returns the amount of CPU time consumed by your program since some fixed arbitrary moment. You use it like this:

```c
#include <stdio.h>
#include <time.h>
...
clock_t t0, t1;
t0 = clock();
do_something();
t1 = clock();
printf("Used %g CPU seconds\n",
       (t1 - t0) / (double)CLOCKS_PER_SEC);
```

> If I divide some numbers, e.g. Var1 = Var2/Var3, is a fixed number
> of clocksycluses used for that division, or does it vary?

There are at least two problems here. First, the Standard says nothing about how precise the clock() measurement is, how rapidly the clock "ticks." On typical systems, the "tick rate" is somewhere between 18 Hz and 1000 Hz; 100 Hz is a fairly common value. What this means is that clock() is probably too coarse-grained to measure the execution time of a few instructions, or even a few tens of instructions; the measured time for something as short as one division will probably be zero.

The second problem is that the C language says nothing about how much time various operations take.
On actual machines, the time taken for your division will probably be affected by many influences, such as:

- Operand type: floating-point divisions and integer divisions might run at different speeds
- Operand values: dividing by a denormal might take more or less time than dividing by a normalized value
- Operand location: there's probably a cascade of different places the operands might reside (CPU, various caches, main memory, swap device), all with different speeds
- Interference: the division might compete with other operations for scarce resources like pipelines, floating-point units, internal CPU latches, and whatnot

... and, of course, many more. Modern computers are complicated systems, and it is all but meaningless to speak of "the" amount of time a single operation takes.

--
Er*********@sun.com

Aug 18 '06 #2

Eric Sosman wrote:

> ra*****@yahoo.no wrote On 08/18/06 09:29:
> [... question about timing "clocksycluses," and the clock() example,
> snipped ...]
>
> There are at least two problems here. First, the Standard says
> nothing about how precise the clock() measurement is, how rapidly
> the clock "ticks." On typical systems, the "tick rate" is somewhere
> between 18 Hz and 1000 Hz; 100 Hz is a fairly common value. [...]
> The measured time for something as short as one division will
> probably be zero.

Most platforms do have useful (but non-portable) ways to measure clock ticks. Often, it is done by counting bus clock ticks and multiplying by a factor burned into the internal ROM of the CPU. It likely takes an indeterminate number of ticks to obtain the result. See the _rdtsc() macros built into certain compilers for x86 and related platforms.
There is not likely to be any relationship between native clock ticks and the integral count returned by clock(); in fact, most implementations cite POSIX standards as requiring that clock() must return some large increment after each native time interval, to permit "posix" applications to avoid using CLOCKS_PER_SEC, thus throwing away a number of useful bits.

> The second problem is that the C language says nothing about how
> much time various operations take. [... list of influences
> snipped ...] Modern computers are complicated systems, and it is
> all but meaningless to speak of "the" amount of time a single
> operation takes.

However, there are many architectures where a division stalls the floating-point pipeline for a fixed number of cycles once it begins execution, depending on the width of the operands.

Aug 18 '06 #3