
Calculating Elapsed Time

Hello -

I have a start and end time that is written using the
following:

time.strftime("%b %d %Y %H:%M:%S")

How do I calculate the elapsed time?

JJ


Dec 6 '05 #1
On Tue, 6 Dec 2005 12:36:55 -0800 (PST), Jean Johnson <je***********@yahoo.com> wrote:
Hello -

I have a start and end time that is written using the
following:

time.strftime("%b %d %Y %H:%M:%S")

How do I calculate the elapsed time?
>>> tf1 = time.strftime("%b %d %Y %H:%M:%S")
>>> tf1
'Dec 06 2005 14:07:11'
>>> tt1 = time.strptime(tf1, "%b %d %Y %H:%M:%S")
>>> tt1
(2005, 12, 6, 14, 7, 11, 1, 340, -1)
>>> t1 = time.mktime(tt1)
>>> t1
1133906831.0
>>> tf2 = time.strftime("%b %d %Y %H:%M:%S")
>>> tf2
'Dec 06 2005 14:08:34'
>>> tt2 = time.strptime(tf2, "%b %d %Y %H:%M:%S")
>>> tt2
(2005, 12, 6, 14, 8, 34, 1, 340, -1)
>>> t2 = time.mktime(tt2)
>>> t2
1133906914.0
>>> t2 - t1
83.0

(seconds elapsed)

Perhaps the time was available as a number before strftime was used? E.g., t2 can round-trip:

>>> time.ctime(t2)
'Tue Dec 06 14:08:34 2005'
>>> time.strftime("%b %d %Y %H:%M:%S", time.localtime(t2))
'Dec 06 2005 14:08:34'
>>> time.strftime("%b %d %Y %H:%M:%S", time.gmtime(t2))
'Dec 06 2005 22:08:34'
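The strptime/mktime round trip above can be wrapped in a small helper. This is just a sketch using only the stdlib time module (elapsed_seconds is a made-up name, and print-as-function assumes a later Python than the session above):

```python
import time

FMT = "%b %d %Y %H:%M:%S"

def elapsed_seconds(start_str, end_str, fmt=FMT):
    # Parse each formatted timestamp back into a struct_time,
    # convert to seconds since the epoch, and subtract.
    t_start = time.mktime(time.strptime(start_str, fmt))
    t_end = time.mktime(time.strptime(end_str, fmt))
    return t_end - t_start

print(elapsed_seconds("Dec 06 2005 14:07:11", "Dec 06 2005 14:08:34"))  # 83.0
```

Note that mktime interprets the struct_time as local time; since both stamps go through the same conversion, the difference is unaffected.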

http://www.python.org/doc/current/lib/module-time.html

and in general, learn how to find info starting at

http://www.python.org/doc/

also, interactively:

>>> import time
>>> help(time)
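In later Python versions (2.5+ for datetime.strptime, 2.7+ for total_seconds), the datetime module makes this even shorter, since subtracting two datetimes yields a timedelta directly:

```python
from datetime import datetime

fmt = "%b %d %Y %H:%M:%S"
d1 = datetime.strptime("Dec 06 2005 14:07:11", fmt)
d2 = datetime.strptime("Dec 06 2005 14:08:34", fmt)
delta = d2 - d1               # a timedelta object
print(delta.total_seconds())  # 83.0
```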

Regards,
Bengt Richter
Dec 6 '05 #2
"Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second." says the doc.
So why do I see figures after the decimal point?
Thx.
malv

Dec 7 '05 #3
malv wrote:
"Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second." says the doc.
So why do I see figures after the decimal point?

A few things.

1. "Precision" is probably the wrong word there. "Resolution" seems
more correct.

2. If your system returns figures after the decimal point, it probably
has better resolution than one second (go figure). Depending on what
system it is, your best bet to determine why is to check the
documentation for your system (also go figure), since the details are
not really handled by Python. Going by memory, Linux will generally be
1ms resolution (I might be off by 10 there...), while Windows XP has
about 64 ticks per second, so .015625 resolution...

-Peter

Dec 7 '05 #4
Peter Hansen wrote:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

here's a silly little script that measures the difference between
two distinct return values, and reports the maximum frequency
it has seen this far:

import time

def test(func):
    mm = 0
    t0 = func()
    while 1:
        t1 = func()
        if t0 != t1:
            m = max(1 / (t1 - t0), mm)
            if m != mm:
                print m
            mm = m
            t0 = t1

test(time.time)
# test(time.clock)

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.
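A modern rewrite of the same probe, for comparison (a sketch: min_tick is a made-up name, and time.perf_counter is from Python 3.3+, where it replaces both time.time and time.clock for this kind of measurement):

```python
import time

def min_tick(func, samples=100000):
    """Smallest nonzero delta between successive return values of func."""
    best = float("inf")
    prev = func()
    for _ in range(samples):
        cur = func()
        if cur != prev:
            best = min(best, cur - prev)
            prev = cur
    return best

# 1 / min_tick(clock) approximates the apparent tick frequency
print(min_tick(time.perf_counter))
```

Like the original script, this really measures the larger of the clock's tick and the call overhead, as the follow-ups below point out.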

</F>

Dec 7 '05 #5
On 2005-12-07, Peter Hansen <pe***@engcorp.com> wrote:
2. If your system returns figures after the decimal point, it
probably has better resolution than one second (go figure).
Depending on what system it is, your best bet to determine
why is to check the documentation for your system (also go
figure), since the details are not really handled by
Python. Going by memory, Linux will generally be 1ms
resolution (I might be off by 10 there...),

In my experience, time.time() on Linux has a resolution of
about 1us. The delta I get when I do

print time.time()-time.time()

is usually about 2-3us, but some of that is probably due to the function-call overhead itself.

--
Grant Edwards (grante at visi.com)          Yow! TAILFINS!!...click...
Dec 7 '05 #6
On 2005-12-07, Fredrik Lundh <fr*****@pythonware.com> wrote:
Peter Hansen wrote:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

here's a silly little script that measures the difference between
two distinct return values, and reports the maximum frequency
it has seen this far:

import time

def test(func):
    mm = 0
    t0 = func()
    while 1:
        t1 = func()
        if t0 != t1:
            m = max(1 / (t1 - t0), mm)
            if m != mm:
                print m
            mm = m
            t0 = t1

test(time.time)
# test(time.clock)

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement. I don't
know which library call the time modules uses, but if it's
gettimeofday(), that is limited to 1us resolution.
clock_gettime() provides an API with 1ns resolution. Not sure
what the actual data resolution is...

--
Grant Edwards (grante at visi.com)          Yow! Yow! I just went below the poverty line!
Dec 7 '05 #7
On 2005-12-07, Grant Edwards <gr****@visi.com> wrote:
if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement. I don't
know which library call the time modules uses, but if it's
gettimeofday(), that is limited to 1us resolution.
clock_gettime() provides an API with 1ns resolution. Not sure
what the actual data resolution is...

Depending on which clock is used, the resolution of
clock_gettime() appears to be as low as 1ns. I usually get
deltas of 136ns when calling clock_gettime() using CLOCK_PROCESS_CPUTIME_ID.
I suspect the function/system call overhead is larger than the
clock resolution.

IIRC, time.time() uses gettimeofday() under Linux, so you can
expect 1us resolution.
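Later Pythons expose these POSIX calls directly, so the advertised resolution can be queried instead of guessed (time.clock_getres and the CLOCK_* constants are from Python 3.3+ and Unix-only; the getattr check hedges against platforms without them):

```python
import time

for name in ("CLOCK_REALTIME", "CLOCK_MONOTONIC"):
    clk = getattr(time, name, None)
    if clk is not None:
        # clock_getres() reports the clock's advertised resolution in seconds
        print(name, time.clock_getres(clk))
```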

--
Grant Edwards (grante at visi.com)          Yow! I want to kill everyone here with a cute colorful Hydrogen Bomb!!
Dec 7 '05 #8
Grant Edwards wrote:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

here's a silly little script that measures the difference between
two distinct return values, and reports the maximum frequency
it has seen this far:

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement.

Yeah, I said it was silly. On the other hand, the Linux box is a lot faster
than the Windows box I'm using, and I do get the same result no matter
what Python version I'm using...

(and in this context, neither 262144 nor 1789772 are random numbers...)

</F>

Dec 7 '05 #9
On 2005-12-07, Fredrik Lundh <fr*****@pythonware.com> wrote:
if I run this on the Windows 2K box I'm sitting at right now,
it settles at 100 for time.time, and 1789772 for time.clock.
on linux, I get 100 for time.clock instead, and 262144 for
time.time.

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement.

Yeah, I said it was silly. On the other hand, the Linux box is a lot faster
than the Windows box I'm using, and I do get the same result no matter
what Python version I'm using...

(and in this context, neither 262144 nor 1789772 are random numbers...)

262144 Hz corresponds to about 3.8us between ticks. That seems
pretty large. What do you get when you do this:

import time
for i in range(10):
    print time.time()-time.time()

After the first loop, I usually get one of three values:

3.099us, 2.14us, 2.86us.

In any case, the resolution of time.time() _appears_ to be less
than 1us.

--
Grant Edwards (grante at visi.com)          Yow! Alright, you!! Imitate a WOUNDED SEAL pleading for a PARKING SPACE!!
Dec 7 '05 #10
On Wed, 07 Dec 2005 16:35:15 -0000, Grant Edwards <gr****@visi.com> wrote:
On 2005-12-07, Peter Hansen <pe***@engcorp.com> wrote:
2. If your system returns figures after the decimal point, it
probably has better resolution than one second (go figure).
Depending on what system it is, your best bet to determine
why is to check the documentation for your system (also go
figure), since the details are not really handled by
Python. Going by memory, Linux will generally be 1ms
resolution (I might be off by 10 there...),

In my experience, time.time() on Linux has a resolution of
about 1us. The delta I get when I do

print time.time()-time.time()

is usually about 2-3us, but some of that is probably due to the

Try

>>> import time
>>> t=time.time; c=time.clock
>>> min(filter(None,(-float.__sub__(c(),c()) for x in xrange(10000))))*1e3
0.0058666657878347905
>>> min(filter(None,(-float.__sub__(t(),t()) for x in xrange(10000))))*1e3
9.9999904632568359

(This NT4 box is slow ;-)
BTW time.time is just the 100hz scheduling slice:

>>> min(filter(None,(-float.__sub__(t(),t()) for x in xrange(10000))))**-1
100.00009536752259
>>> min(filter(None,(-float.__sub__(t(),t()) for x in xrange(10000))))**-1
100.00009536752259
>>> min(filter(None,(-float.__sub__(c(),c()) for x in xrange(10000))))**-1
149147.75106031806

Regards,
Bengt Richter
Dec 7 '05 #11
On 2005-12-07, Bengt Richter <bo**@oz.net> wrote:
In my experience, time.time() on Linux has a resolution of
about 1us. The delta I get when I do

print time.time()-time.time()

is usually about 2-3us, but some of that is probably due to the

Try

>>> import time
>>> t=time.time; c=time.clock
>>> min(filter(None,(-float.__sub__(c(),c()) for x in xrange(10000))))*1e3
0.0058666657878347905
>>> min(filter(None,(-float.__sub__(t(),t()) for x in xrange(10000))))*1e3
9.9999904632568359

>>> import time
>>> t=time.time; c=time.clock
>>> min(filter(None,(-float.__sub__(c(),c()) for x in xrange(10000))))*1e3
10.000000000000002
>>> min(filter(None,(-float.__sub__(t(),t()) for x in xrange(10000))))*1e3
0.00095367431640625

Yup. That has less overhead than my original example because
you've avoided the extra name lookup:

>>> for f in range(10):
...     print t()-t()
...
-4.05311584473e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-2.86102294922e-06
-1.90734863281e-06
-2.14576721191e-06
-2.14576721191e-06
-9.53674316406e-07
-1.90734863281e-06

The min delta seen is 0.95us. I'm guessing that's
function/system call overhead and not timer resolution.

--
Grant Edwards (grante at visi.com)          Yow! I HAVE a towel.
Dec 7 '05 #12
On 2005-12-07, Grant Edwards <gr****@visi.com> wrote:
>>> for f in range(10):
...     print t()-t()
...
-4.05311584473e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-2.86102294922e-06
-1.90734863281e-06
-2.14576721191e-06
-2.14576721191e-06
-9.53674316406e-07
-1.90734863281e-06

The min delta seen is 0.95us. I'm guessing that's
function/system call overhead and not timer resolution.

After looking at the implementation of time.time under Linux,
it should have _exactly_ 1us resolution. I suspect that on a
Linux platform the resolution of time.time() is being limited
by IEEE double representation.

Yup:

>>> print len("%0.0f" % (time.time()*1e6,))
16

1us resolution for the time.time() value requires 16
significant digits. That's more than IEEE 64bit floating point
can represent accurately, and why the apparent resolution of
time.time() under Linux is only approximately 1us.
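The spacing of doubles near an epoch value from 2005 can be computed directly (math.ulp is from Python 3.9+, used here only to illustrate the point):

```python
import math

t = 1133906831.0   # epoch seconds, Dec 2005: between 2**30 and 2**31
ulp = math.ulp(t)  # spacing of adjacent doubles near t: 2**-22 s, about 0.24us
print(ulp)
# The deltas quoted in this thread are exact multiples of that spacing:
print(round(9.53674316406e-07 / ulp))  # 4
print(round(1.19209289551e-06 / ulp))  # 5
```

So near 2005 epoch values, time.time() differences can only be multiples of roughly a quarter microsecond, which matches the -9.5e-07, -1.19e-06, -1.9e-06 figures seen above.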

--
Grant Edwards (grante at visi.com)          Yow! Should I do my BOBBIE VINTON medley?
Dec 7 '05 #13
Fredrik Lundh wrote:
Peter Hansen wrote:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

[snip script] if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

For the record, the WinXP box I'm on is also 100 for time.time. Which
is quite odd, as I have a distinct memory of us having done the above
type of measurement before and having had it come out at 64. Colour me
puzzled.

-Peter

Dec 7 '05 #14
Peter Hansen wrote:
A few things.

1. "Precision" is probably the wrong word there. "Resolution" seems
more correct.

2. If your system returns figures after the decimal point, it probably
has better resolution than one second (go figure). Depending on what
system it is, your best bet to determine why is to check the
documentation for your system (also go figure), since the details are
not really handled by Python. Going by memory, Linux will generally be
1ms resolution (I might be off by 10 there...), while Windows XP has
about 64 ticks per second, so .015625 resolution...

One caveat: on Windows systems, time.clock() is actually the
high-precision clock (and on *nix, it's an entirely different
performance counter). Its semantics for time differential, IIRC, are
exactly the same, so if that's all you're using it for it might be worth
wrapping time.time / time.clock as a module-local timer function
depending on sys.platform.
Dec 7 '05 #15
Fredrik Lundh wrote:
if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

Aren't the time.clock semantics different on 'nix? I thought, at least
on some 'nix systems, time.clock returned a "cpu time" value that
measured actual computation time, rather than wall-clock time [meaning
stalls in IO don't count].

This is pretty easily confirmed, at least on one particular system
(interactive prompt, so the delay is because of typing):

Python 2.2.3 (#1, Nov 12 2004, 13:02:04)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2
>>> import time
>>> (c,t) = (time.clock, time.time)
>>> (nowc, nowt) = (c(), t())
>>> print (c() - nowc, t() - nowt)

(0.0019519999999999989, 7.6953330039978027)

So caveat programmer when using time.clock; its meaning is different on
different platforms.
Dec 7 '05 #16
Grant Edwards wrote:
Yeah, I said it was silly. On the other hand, the Linux box is a lot faster
than the Windows box I'm using, and I do get the same result no matter
what Python version I'm using...
except if I run it under my latest 2.4 build, where I get 524288 ...

(and in this context, neither 262144 nor 1789772 are random numbers...)

262144 Hz corresponds to about 3.8us between ticks. That seems
pretty large. What do you get when you do this:

import time
for i in range(10):
    print time.time()-time.time()

After the first loop, I usually get one of three values:

3.099us, 2.14us, 2.86us.

I get two different values:

-1.90734863281e-06
-2.14576721191e-06

on this hardware (faster than the PC I'm using right now, but still not a
very fast machine). let's check a faster linux box:

\$ python2.4 test.py
-6.91413879395e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-1.90734863281e-06

if I keep running the script over and over again, I do get individual

-1.19209289551e-06

items from time to time on both machines...

</F>

Dec 7 '05 #17
Peter Hansen wrote:
Fredrik Lundh wrote:
Peter Hansen wrote:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

For the record, the WinXP box I'm on is also 100 for time.time. Which
is quite odd, as I have a distinct memory of us having done the above
type of measurement before and having had it come out at 64. Colour me
puzzled.

But another XP (SP1) box at a customer site is reporting 64Hz. Mine is
SP2. It isn't reasonable to think that actually changed with SP2.
Colour me even more puzzled... time for some research, or for an expert
to weigh in.

-Peter

Dec 7 '05 #18
On 2005-12-07, Fredrik Lundh <fr*****@pythonware.com> wrote:
import time
for i in range(10):
    print time.time()-time.time()

After the first loop, I usually get one of three values:

3.099us, 2.14us, 2.86us.

I get two different values:

-1.90734863281e-06
-2.14576721191e-06

on this hardware (faster than the PC I'm using right now, but still not a
very fast machine). let's check a faster linux box:

\$ python2.4 test.py
-6.91413879395e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-1.90734863281e-06

if I keep running the script over and over again, I do get individual

-1.19209289551e-06

items from time to time on both machines...

We're seeing floating point representation issues.

The resolution of the underlying call is exactly 1us. Calling
gettimeofday() in a loop in C results in deltas of exactly 1 or
2 us. Python uses a C double to represent time, and a double
doesn't have enough bits to accurately represent 1us resolution.

--
Grant Edwards (grante at visi.com)          Yow! I'm a fuschia bowling ball somewhere in Brittany
Dec 7 '05 #19
On Wed, 07 Dec 2005 18:32:50 -0000, Grant Edwards <gr****@visi.com> wrote:
On 2005-12-07, Fredrik Lundh <fr*****@pythonware.com> wrote:

....
if I keep running the script over and over again, I do get individual

-1.19209289551e-06

items from time to time on both machines...

We're seeing floating point representation issues.

The resolution of the underlying call is exactly 1us. Calling
gettimeofday() in a loop in C results in deltas of exactly 1 or
2 us. Python uses a C double to represent time, and a double
doesn't have enough bits to accurately represent 1us resolution.

Is there a timer chip that is programmed to count at exactly 1us steps?
If this is trying to be platform independent, I think it has to be
faking it sometimes. E.g., I thought on windows you could sometimes
get a time based on a pentium time stamp counter, which gets 64 bits
with a RDTSC instruction and counts at full CPU clock rate (probably
affected by power management slowdown when applicable, but I don't know this),
If you write in C, you can set a base value (which ISTR clock does
the first time it's called) and return deltas that could fit at full time
stamp counter LSB resolution in the 53 bits of a double for quite a while,
even at a GHz (over 100 days, I think ... let's see:

>>> (2**53/1e9)/(24*3600)
104.24999137431705

yep). So there has to be a floating convert and multiply in order to get seconds.

It would be interesting to dedicate a byte code to optimized inline raw time-stamp reading
into successive 64-bit slots of a pre-allocated space, and have a way to get a call to
an identified function name translated to this byte code instead of the normal function-call
instructions. One could do it with a byte-code-hacking decorator for code within a function,
if the byte code were available and a module giving access to the buffer were available.
Then the smallest interval would be a loop through the byte-code switch (I think you
can read the register in user mode, unless a bit has been set to prevent it, so there
shouldn't even be kernel-call and context-switch overhead). Of course, if no counter is available,
the byte code would probably have to raise an exception instead, or fake it with a timer-chip register.

I have an rdtsc.dll module, but I haven't re-compiled it for the current version. Another
back-burner thing. (I was trying to get CPU chip-version detection right, so the module
would refuse to load if there wasn't a suitable chip, IIRC.) Sigh.

Regards,
Bengt Richter
Dec 8 '05 #20
On 2005-12-08, Bengt Richter <bo**@oz.net> wrote:
We're seeing floating point representation issues.

The resolution of the underlying call is exactly 1us. Calling
gettimeofday() in a loop in C results in deltas of exactly 1 or
2 us. Python uses a C double to represent time, and a double
doesn't have enough bit to accurately represent 1us resolution.
Is there a timer chip that is programmed to count at exactly
1us steps?

No, but the value returned by gettimeofday() is a long integer
that counts seconds alongside a long integer that counts
microseconds. The resolution of the data seen by Python's time
module is 1us.

The underlying hardware has a much finer resolution (as shown
by the clock_gettime() call), but the resolution of the system
call used by Python's time module on Unix is exactly 1us.
If this is trying to be platform independent, I think it has
to be faking it sometimes. E.g., I thought on windows you
could sometimes get a time based on a pentium time stamp
counter, which gets 64 bits with a RDTSC instruction and
counts at full CPU clock rate

I assume that's what the underlying Linux system call is doing
(I haven't looked). Then it just rounds/truncates to the
nearest microsecond (because that's what the BSD/SysV/Posix API
specifies) when it returns the answer that Python sees.
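Much later, Python 3.7 added integer-nanosecond variants of the clock functions, which sidestep the double-precision ceiling discussed in this thread entirely:

```python
import time

t0 = time.time_ns()  # integer nanoseconds since the epoch: no float rounding
t1 = time.time_ns()
print(t1 - t0)       # an exact integer delta
```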

--
Grant Edwards (grante at visi.com)          Yow! Hmmm... a CRIPPLED ACCOUNTANT with a FALAFEL sandwich is HIT by a TROLLEY-CAR...
Dec 8 '05 #21