time.time or time.clock


I'm having some cross platform issues with timing loops. It seems
time.time is better for some computers/platforms and time.clock others, but
it's not always clear which, so I came up with the following to try to
determine which.
import time

# Determine if time.time is better than time.clock
# The one with better resolution should be lower.
if time.clock() - time.clock() < time.time() - time.time():
    clock = time.clock
else:
    clock = time.time
Will this work most of the time, or is there something better?
Ron

Jan 13 '08 #1
On Jan 14, 7:05 am, Ron Adam <r...@ronadam.com> wrote:
I'm having some cross platform issues with timing loops. It seems
time.time is better for some computers/platforms and time.clock others, but
Care to explain why it seems so?
it's not always clear which, so I came up with the following to try to
determine which.

import time

# Determine if time.time is better than time.clock
# The one with better resolution should be lower.
if time.clock() - time.clock() < time.time() - time.time():
    clock = time.clock
else:
    clock = time.time

Will this work most of the time, or is there something better?
Manual:
"""
clock( )

On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of ``processor time'', depends on that of the C
function of the same name, but in any case, this is the function to
use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
[snip]

time( )

Return the time as a floating point number expressed in seconds since
the epoch, in UTC. Note that even though the time is always returned
as a floating point number, not all systems provide time with a better
precision than 1 second. While this function normally returns non-
decreasing values, it can return a lower value than a previous call if
the system clock has been set back between the two calls.
"""

AFAICT that was enough indication for most people to use time.clock on
all platforms ... before the introduction of the timeit module; have
you considered it?

It looks like your method is right sometimes by accident. func() -
func() will give a negative answer with a high resolution timer and a
meaningless answer with a low resolution timer, where "high" and "low"
are relative to the time taken for the function call, so you will pick
the high resolution one most of the time because the meaningless
answer is ZERO (no tick, no change). Some small fraction of the time
the low resolution timer will have a tick between the two calls and
you will get the wrong answer (-big < -small). In the case of two
"low" resolution timers, both will give a meaningless answer and you
will choose arbitrarily.
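
A rough way to see this is to sample each difference many times instead of
relying on a single subtraction (a minimal sketch; the sample count is
arbitrary):

import time

def sample_diffs(f, n=1000):
    # Collect f() - f() repeatedly: a low-resolution timer returns mostly
    # exact zeros with the occasional large negative value (a tick landed
    # between the two calls); a high-resolution timer returns small negatives.
    diffs = [f() - f() for i in range(n)]
    return diffs.count(0.0), min(diffs), max(diffs)

print "time.time :", sample_diffs(time.time)
print "time.clock:", sample_diffs(time.clock)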

HTH,
John
Jan 13 '08 #2
John Machin wrote:
AFAICT that was enough indication for most people to use time.clock on
all platforms ...
which was unfortunate, given that time.clock() isn't even a proper clock
on most Unix systems; it's a low-resolution sample counter that can
happily assign all time to a process that uses, say, 2% CPU and zero
time to one that uses 98% CPU.
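
the difference is easy to demonstrate (a minimal sketch; sleep is used only
because it burns wall time while using almost no CPU time):

import time

t0, c0 = time.time(), time.clock()
time.sleep(1)                      # ~1 s of wall time, ~0 s of CPU time
t1, c1 = time.time(), time.clock()
print "time.time  advanced %.3f s" % (t1 - t0)   # ~1.0 everywhere
print "time.clock advanced %.3f s" % (c1 - c0)   # ~0.0 on Unix (CPU time), ~1.0 on Windows
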
before the introduction of the timeit module; have you considered it?
whether or not "timeit" suites his requirements, he can at least replace
his code with

clock = timeit.default_timer

which returns a good wall-time clock (which happens to be time.time() on
Unix and time.clock() on Windows).
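
for example (just a sketch; the summation is a placeholder for whatever is
being timed):

import timeit

clock = timeit.default_timer       # time.time() on Unix, time.clock() on Windows

start = clock()
total = sum(range(1000000))        # placeholder for the code being timed
elapsed = clock() - start
print "elapsed: %.6f seconds" % elapsed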

</F>

Jan 13 '08 #3


John Machin wrote:
On Jan 14, 7:05 am, Ron Adam <r...@ronadam.com> wrote:
>I'm having some cross platform issues with timing loops. It seems
time.time is better for some computers/platforms and time.clock others, but

Care to explain why it seems so?
>it's not always clear which, so I came up with the following to try to
determine which.

import time

# Determine if time.time is better than time.clock
# The one with better resolution should be lower.
if time.clock() - time.clock() < time.time() - time.time():
    clock = time.clock
else:
    clock = time.time

Will this work most of the time, or is there something better?

Manual:
"""
clock( )

On Unix, return the current processor time as a floating point number
expressed in seconds. The precision, and in fact the very definition
of the meaning of ``processor time'', depends on that of the C
function of the same name, but in any case, this is the function to
use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the
first call to this function, as a floating point number, based on the
Win32 function QueryPerformanceCounter(). The resolution is typically
better than one microsecond.
[snip]

time( )

Return the time as a floating point number expressed in seconds since
the epoch, in UTC. Note that even though the time is always returned
as a floating point number, not all systems provide time with a better
precision than 1 second. While this function normally returns non-
decreasing values, it can return a lower value than a previous call if
the system clock has been set back between the two calls.
"""

AFAICT that was enough indication for most people to use time.clock on
all platforms ... before the introduction of the timeit module; have
you considered it?
I use it to time a Visual Python loop which controls frame rate updates and
sets velocities according to the time between frames, rather than the frame count.
The time between frames depends both on the desired frame rate and on the
background load on the computer, so it isn't constant.
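
The pattern is roughly the following (only a sketch; the 30 fps target and
update_positions are placeholders):

import time
import timeit

clock = timeit.default_timer           # wall-clock timer on both platforms
frame_interval = 1.0 / 30              # placeholder target frame rate

def update_positions(dt):
    # placeholder for the real Visual Python update: motion scaled by dt
    pass

last = clock()
for frame in range(300):               # a finite number of frames for the sketch
    now = clock()
    dt = now - last                    # wall-clock time since the previous frame
    last = now
    update_positions(dt)               # apply velocity * dt, not velocity per frame
    time.sleep(max(0.0, frame_interval - (clock() - now)))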

time.clock() isn't high enough resolution on Ubuntu, and time.time() isn't
high enough resolution on Windows.

I do use timeit for benchmarking, but I haven't tried using it in a situation
like this.

It looks like your method is right sometimes by accident. func() -
func() will give a negative answer with a high resolution timer and a
meaningless answer with a low resolution timer, where "high" and "low"
are relative to the time taken for the function call, so you will pick
the high resolution one most of the time because the meaningless
answer is ZERO (no tick, no change). Some small fraction of the time
the low resolution timer will have a tick between the two calls and
you will get the wrong answer (-big < -small).
If the difference is between two high resolution timers then it will be
good enough. I think the time between two consecutive func() calls is
probably short enough to rule out low resolution timers.
In the case of two
"low" resolution timers, both will give a meaningless answer and you
will choose arbitrarily.
In the case of two low resolution timers, it will use time.time. In this
case I probably need to raise an exception. My program won't work
correctly with a low resolution timer.
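
Something like this might do as a guard (only a sketch; the 1 ms threshold
and the name assert_fine_timer are arbitrary):

import time

def assert_fine_timer(f, max_tick=0.001, probes=5):
    # Raise if f's observable tick is coarser than max_tick seconds.
    for i in range(probes):
        t0 = f()
        t1 = f()
        while t1 == t0:            # busy-wait until the timer value changes
            t1 = f()
        if t1 - t0 > max_tick:
            raise RuntimeError("timer ticks every %.6f s; too coarse" % (t1 - t0))

assert_fine_timer(time.time)       # ~15 ms ticks on Windows would raise here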

Thanks for the feedback, I will try to find something more dependable.

Ron

Jan 14 '08 #4


Fredrik Lundh wrote:
John Machin wrote:
>AFAICT that was enough indication for most people to use time.clock on
all platforms ...

which was unfortunate, given that time.clock() isn't even a proper clock
on most Unix systems; it's a low-resolution sample counter that can
happily assign all time to a process that uses, say, 2% CPU and zero
time to one that uses 98% CPU.
before the introduction of the timeit module; have you considered it?

whether or not "timeit" suites his requirements, he can at least replace
his code with

clock = timeit.default_timer

which returns a good wall-time clock (which happens to be time.time() on
Unix and time.clock() on Windows).

Thanks for the suggestion Fredrik, I looked at timeit and it does the
following.
import sys
import time

if sys.platform == "win32":
# On Windows, the best timer is time.clock()
default_timer = time.clock
else:
# On most other platforms the best timer is time.time()
default_timer = time.time

I was hoping I could determine which to use by the values returned. But
maybe that isn't as easy as it seems it would be.
Ron

Jan 14 '08 #6


"""
<snipped>
time.clock() isn't high enough resolution on Ubuntu, and time.time() isn't
high enough resolution on Windows.

Take a look at datetime. It is good to the micro-second on Linux and
milli-second on Windows.
"""

import datetime
begin_time=datetime.datetime.now()
for j in range(100000):
    x = j+1 # wait a small amount of time
print "Elapsed time =", datetime.datetime.now()-begin_time

## You can also access the individual time values
print begin_time.second
print begin_time.microsecond ## etc.
Jan 14 '08 #8
dw****@gmail.com wrote:
"""
<snipped>
time.clock() isn't high enough resolution on Ubuntu, and time.time()
isn't high enough resolution on Windows.

Take a look at datetime. It is good to the micro-second on Linux and
milli-second on Windows.
datetime.datetime.now() does the same thing as time.time(); it uses the
gettimeofday() API for platforms that have it (and so does time.time()),
and calls the fallback implementation in time.time() if gettimeofday()
isn't supported. From the datetime sources:

#ifdef HAVE_GETTIMEOFDAY
struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
gettimeofday(&t);
#else
gettimeofday(&t, (struct timezone *)NULL);
#endif
...
#else /* ! HAVE_GETTIMEOFDAY */

/* No flavor of gettimeofday exists on this platform. Python's
* time.time() does a lot of other platform tricks to get the
* best time it can on the platform, and we're not going to do
* better than that (if we could, the better code would belong
* in time.time()!) We're limited by the precision of a double,
* though.
*/

(note the "if we could" part).

</F>

Jan 14 '08 #9
On Jan 15, 4:50 am, dwb...@gmail.com wrote:
"""
<snipped>
time.clock() isn't high enough resolution on Ubuntu, and time.time() isn't
high enough resolution on Windows.

Take a look at datetime. It is good to the micro-second on Linux and
milli-second on Windows.
"""
On Windows, time.clock has MICROsecond resolution, but your method
appears to have exactly the same (MILLIsecond) resolution as
time.time, but with greater overhead, especially when the result is
required in seconds-and-a-fraction as a float:
>>> def datetimer(start=datetime.datetime(1970,1,1,0,0,0), nowfunc=datetime.datetime.now):
...     delta = nowfunc() - start
...     return delta.days * 86400 + delta.seconds + delta.microseconds / 1000000.0
...
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341583.484', '1200381183.484', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341596.484', '1200381196.484', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341609.4530001', '1200381209.4530001', '39600.0']
>>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
['1200341622.562', '1200381222.562', '39600.0']
>>>
The difference of 39600 seconds (11 hours) would be removed by using
datetime.datetime.utcnow.
import datetime
begin_time=datetime.datetime.now()
for j in range(100000):
    x = j+1 # wait a small amount of time
print "Elapsed time =", datetime.datetime.now()-begin_time

## You can also access the individual time values
print begin_time.second
print begin_time.microsecond ## etc.
Running that on my Windows system (XP Pro, Python 2.5.1, AMD Turion 64
Mobile CPU rated at 2.0 GHz), I get
Elapsed time = 0:00:00.031000
or
Elapsed time = 0:00:00.047000
Using 50000 iterations, I get it down to 15 or 16 milliseconds. 15 ms
is the lowest non-zero interval that can be procured.

This is consistent with results obtained by using time.time.

Approach: get first result from timer function; call timer in a tight
loop until returned value changes; ignore the first difference so
found and save the next n differences.

Windows time.time appears to tick at 15 or 16 ms intervals, averaging
about 15.6 ms. For comparison, Windows time.clock appears to tick at
about 2.3 MICROsecond intervals.
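
In rough code, that approach looks like this (a sketch; the names and the
number of samples are arbitrary):

import time

def tick_sizes(f, n=5):
    # Spin until f's value changes, discard that first interval, then
    # record the next n intervals between successive distinct values.
    prev = f()
    while f() == prev:             # first difference is ignored
        pass
    prev = f()
    intervals = []
    while len(intervals) < n:
        cur = f()
        if cur != prev:
            intervals.append(cur - prev)
            prev = cur
    return intervals

print "time.time  ticks:", tick_sizes(time.time)
print "time.clock ticks:", tick_sizes(time.clock)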

Finally, some comments from the Python 2.5.1 datetimemodule.c:

/* No flavor of gettimeofday exists on this platform. Python's
* time.time() does a lot of other platform tricks to get the
* best time it can on the platform, and we're not going to do
* better than that (if we could, the better code would belong
* in time.time()!) We're limited by the precision of a double,
* though.
*/

HTH,
John
Jan 14 '08 #10
