
formatting time as milliseconds in logging module

Hi!

The logging module nicely prepends each line with a formatted date.

However, I'm not interested in the actual date but only in the number of
milliseconds that passed since the start of the program. What is the best
way of doing that?

Thanks,
Stefan
Jul 18 '05 #1
6 Replies


> The logging module nicely prepends each line with a formatted date.
>
> However, I'm not interested in the actual date but only in the number of
> milliseconds that passed since the start of the program. What is the best
> way of doing that?


In Windows: time.clock() will get you the number of seconds since you
last called it, down to the microsecond (trim as you see fit).

In linux, time.clock() gives you the amount of processor time used, so
it is not quite as useful in this case.
In both, time.time() will give you the number of seconds since the epoch,
so you can call it once when the program starts, and later just
subtract the times. In Windows, it is only accurate to .01 seconds, and
I don't know how accurate the linux version is (I expect about as
accurate, but I can certainly be wrong).
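
Something along these lines, for example (untested sketch; the names are just
illustrative):

import time

_start = time.time()   # record once, at program start

def elapsed_ms():
    # milliseconds since _start; actual resolution depends on the platform
    return (time.time() - _start) * 1000.0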

- Josiah

Jul 18 '05 #2

On Tue, 19 Oct 2004 09:47:30 -0700, Josiah Carlson <jc******@uci.edu> wrote:
>> The logging module nicely prepends each line with a formatted date.
>>
>> However, I'm not interested in the actual date but only in the number of
>> milliseconds that passed since the start of the program. What is the best
>> way of doing that?
>
> In Windows: time.clock() will get you the number of seconds since you
> last called it, down to the microsecond (trim as you see fit).

  ^^^^ first, it seems (certainly not a delta since the last time.clock() call)

>>> if 1: #postpone interactive start of suite

... print 'starting'
... import time
... time.sleep(5)
... print time.clock()
... for i in xrange(10):
... print time.clock()
... time.sleep(1.0)
...
starting
1.25714266558e-005
0.00067466656386
1.00140967598
2.00285203766
3.00429188506
4.00575352293
5.00716152272
6.0086038844
7.01003954133
8.01151542682
9.01291420756

Don't know why the first value is not a pure 0.0 though.

> In linux, time.clock() gives you the amount of processor time used, so
> it is not quite as useful in this case.
> In both, time.time() will give you the number of seconds since the epoch,
> so you can call it once when the program starts, and later just
> subtract the times. In Windows, it is only accurate to .01 seconds, and

Maybe not even all windows. E.g., for NT that is one scheduling time slice,
but other variants might have other granularity, IWT.

> I don't know how accurate the linux version is (I expect about as
> accurate, but I can certainly be wrong).

Unless your process is running with hard real time priority, you have to throw in
os multitasking and interrupt processing latency as well (try using the mouse to drag
a complex GUI window of another app around continuously while your test runs ;-).

Regards,
Bengt Richter
Jul 18 '05 #3

>> In Windows: time.clock() will get you the number of seconds since you
>> last called it, down to the microsecond (trim as you see fit).
>  ^^^^ first, it seems (certainly not a delta since the last time.clock() call)


Oops, word slip. My bad, I meant first.
> Don't know why the first value is not a pure 0.0 though.

The C source tells us that in Windows, the performance counter call that
records the start value occurs just before the counter call that actually
reads the elapsed time, so the first result is the small gap between those
two calls rather than 0.0.

> Maybe not even all windows. E.g., for NT that is one scheduling time slice,
> but other variants might have other granularity, IWT.

Good point, though I think that 9x variants also had .01 second
resolution.

> Unless your process is running with hard real time priority, you have to throw in
> os multitasking and interrupt processing latency as well (try using the mouse to drag
> a complex GUI window of another app around continuously while your test runs ;-).


Now you are talking about external system loads and how they affect
timings. That wasn't the question.

- Josiah

Jul 18 '05 #4

On Tue, 19 Oct 2004 11:24:15 -0700, Josiah Carlson <jc******@uci.edu> wrote:
>>> In Windows: time.clock() will get you the number of seconds since you
>>> last called it, down to the microsecond (trim as you see fit).
>>  ^^^^ first, it seems (certainly not a delta since the last time.clock() call)
>
> Oops, word slip. My bad, I meant first.
>
>> Don't know why the first value is not a pure 0.0 though.
>
> The C source tells us that in Windows, the performance counter call that
> records the start value occurs just before the counter call that actually
> reads the elapsed time, so the first result is the small gap between those
> two calls rather than 0.0.

Yes I suspected as much. You are presumably referring to
-----------------
if (divisor == 0.0) {
    LARGE_INTEGER freq;
    QueryPerformanceCounter(&ctrStart);
    if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
        /* Unlikely to happen - this works on all intel
           machines at least! Revert to clock() */
        return PyFloat_FromDouble(clock());
    }
    divisor = (double)freq.QuadPart;
}
QueryPerformanceCounter(&now);
diff = (double)(now.QuadPart - ctrStart.QuadPart);
return PyFloat_FromDouble(diff / divisor);
-----------------
which IMO really should be (untested):
-----------------
if (divisor == 0.0) {
    LARGE_INTEGER freq;
    QueryPerformanceCounter(&ctrStart);
    if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
        /* Unlikely to happen - this works on all intel
           machines at least! Revert to clock() */
        return PyFloat_FromDouble(clock());
    }
    divisor = (double)freq.QuadPart;
    now = ctrStart;
} else {
    QueryPerformanceCounter(&now);
}
diff = (double)(now.QuadPart - ctrStart.QuadPart);
return PyFloat_FromDouble(diff / divisor);
-----------------
>> Maybe not even all windows. E.g., for NT that is one scheduling time slice,
>> but other variants might have other granularity, IWT.
>
> Good point, though I think that 9x variants also had .01 second
> resolution.

I don't recall, but the old ~55ms tick was used a lot in the old days.

>> Unless your process is running with hard real time priority, you have to throw in
>> os multitasking and interrupt processing latency as well (try using the mouse to drag
>> a complex GUI window of another app around continuously while your test runs ;-).


> Now you are talking about external system loads and how they affect
> timings. That wasn't the question.

True.

Regards,
Bengt Richter
Jul 18 '05 #5


bo**@oz.net (Bengt Richter) wrote:
> Yes I suspected as much. You are presumably referring to
> [snip original C source]
Yes.

> which IMO really should be (untested):
> -----------------
> [snip unchanged code]
>     divisor = (double)freq.QuadPart;
>     now = ctrStart;
> } else {
>     QueryPerformanceCounter(&now);
> }
> diff = (double)(now.QuadPart - ctrStart.QuadPart);
> return PyFloat_FromDouble(diff / divisor);
> -----------------


That would be arbitrarily more accurate, but on my machine it is a
difference of around 10^-5 seconds. I don't think it really makes a
difference.

Feel free to submit a patch.

In the context of the original question, the poster would likely be
rounding to the nearest 1/1000 second, so a few tens of microseconds
don't seem to be a concern.

>>> Maybe not even all windows. E.g., for NT that is one scheduling time slice,
>>> but other variants might have other granularity, IWT.
>>
>> Good point, though I think that 9x variants also had .01 second
>> resolution.
>
> I don't recall, but the old ~55ms tick was used a lot in the old days.


According to a few sources, Windows 98 time slices were around 20ms.
Whether or not you can get time resolution finer than the time slice with
time.time(), let's find out. I just happen to have a P166 running
Windows 98...
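
(A rough sketch of that kind of resolution check, for illustration; not
necessarily the exact code run:)

import time

# rough check: how many distinct time.time() values appear in one second?
seen = {}
stop = time.time() + 1.0
while time.time() < stop:
    seen[time.time()] = 1
print len(seen), "distinct time.time() values in one second"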

Hrm, looks like 18 or 19 unique times each second, which is around the
55ms tick time you offer. Ick.

- Josiah

Jul 18 '05 #6

Stefan Behnel <be*******@dvs1.informatik.tu-darmstadt.de> wrote in message news:<cl**********@lnx107.hrz.tu-darmstadt.de>...
> Hi!
>
> The logging module nicely prepends each line with a formatted date.
>
> However, I'm not interested in the actual date but only in the number of
> milliseconds that passed since the start of the program. What is the best
> way of doing that?
>
> Thanks,
> Stefan


Use the relativeCreated attribute of the LogRecord in your message format. See

http://www.red-dove.com/logging/publ...ter-class.html
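
For example, something like this (a minimal sketch; the logger name and
message are arbitrary):

import logging

# relativeCreated is a standard LogRecord attribute: the number of
# milliseconds since the logging module was loaded (effectively program start).
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(relativeCreated)d ms %(levelname)s %(message)s"))

log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

log.info("stamped with milliseconds since start")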

Regards,
Vinay Sajip
Jul 18 '05 #7
