Bytes IT Community

response time

I'm writing a simple script to calculate the response time from one
server to another.
I store the time in seconds in a variable, fetch the web page, and take the
time again; the difference is the time one server takes to serve the page.
I can only measure it in seconds. Is there a way to do it in milliseconds?

Thanks

Jul 18 '05 #1
12 Replies


José wrote:
I'm writing a simple script to calculate the response time from one
server to another.
I store the time in seconds in a variable, fetch the web page, and take the
time again; the difference is the time one server takes to serve the page.
I can only measure it in seconds. Is there a way to do it in
milliseconds?


After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is
therefore in seconds _and fractions_; whether the precision is (e.g.)
1/100 of a second, or 1/1000 of a second, or whatever, depends on
what platform you're running. In any case, just multiply that difference
by 1000 and you'll have it in milliseconds (possibly rounded e.g. to the
closest 10 milliseconds if your underlying platform doesn't provide
better precision than that, of course).
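For example, a minimal sketch of that recipe (the time.sleep call here is just a stand-in for whatever page-fetch the script actually does):

```python
import time

start = time.time()              # float seconds since the epoch
time.sleep(0.25)                 # stand-in for fetching the page
elapsed_ms = (time.time() - start) * 1000
print("response time: %.1f ms" % elapsed_ms)
```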
Alex

Jul 18 '05 #2

Alex Martelli <al***@aleax.it> writes:
José wrote:
I'm writing a simple script to calculate the response time from one
server to another. [...] I can only measure it in seconds. Is there a way to do it in
milliseconds?


After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is

[...]

Also note that Windows' time(), in particular, has a precision of only
around 50 milliseconds (according to Tim Peters, so I haven't bothered
to test it myself ;-). Pretty strange.
John
Jul 18 '05 #3

forgot to add: time.clock() might be more useful on Windows, if you
want high precision.
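A sketch that prefers time.clock() where it exists (note that time.clock() was removed in Python 3.8; time.perf_counter() now plays its high-resolution role on all platforms):

```python
import time

# time.clock() was a high-resolution wall-clock timer on Windows but
# measured CPU time on Unix; it was removed in Python 3.8.  Fall back
# to time.perf_counter(), the modern high-resolution timer.
timer = getattr(time, "clock", time.perf_counter)

start = timer()
total = sum(range(100000))       # some work to time
elapsed_ms = (timer() - start) * 1000
print("took %.3f ms" % elapsed_ms)
```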
John
Jul 18 '05 #4

"John J. Lee" wrote:

Alex Martelli <al***@aleax.it> writes:
José wrote:
I'm writing a simple script to calculate the response time from one
server to another. [...] I can only measure it in seconds. Is there a way to do it in
milliseconds?


After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is

[...]

Also note that Windows' time(), in particular, has a precision of only
around 50 milliseconds (according to Tim Peters, so I haven't bothered
to test it myself ;-). Pretty strange.


Strange, but based on a relatively mundane thing: the frequency (14.31818 MHz)
of the NTSC color sub-carrier, which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77 MHz clock for the
original IBM PC (because oscillators were/are relatively expensive, so you
wanted to re-use them whenever possible, even if just at a submultiple) and
then by 4 again to produce the clock signal that went to the chip involved
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.93 ms, which is about 18.2 ticks
per second. So other than it being closer to 55 ms than 50, you're right.

Google searches with "18.2 14.31818" will produce lots of background for
all that.
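The arithmetic above can be checked directly:

```python
crystal_hz = 14.31818e6               # NTSC color sub-carrier crystal
timer_hz = crystal_hz / 12            # /3 for the 4.77 MHz CPU clock, /4 again
period_ms = 65536 / timer_hz * 1000   # 16-bit counter wraps every 65536 counts
ticks_per_sec = timer_hz / 65536
print("%.2f ms per interrupt, %.2f interrupts per second"
      % (period_ms, ticks_per_sec))
```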

-Peter
Jul 18 '05 #5

Thank you very much. I don't have a problem with the precision because I'm
using it on a machine running Red Hat Linux, but thank you for the explanations
anyway.

Jul 18 '05 #6

Peter Hansen <pe***@engcorp.com> writes:
[...]
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the [...] in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.93 ms, which is about 18.2 ticks

[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.
John
Jul 18 '05 #7

"John J. Lee" wrote:

Peter Hansen <pe***@engcorp.com> writes:
[...]
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the

[...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.93 ms, which is about 18.2 ticks

[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.


Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?! It would have crippled the system. In fact, from what
I recall of the overhead associated with that interrupt, that might well
have consumed every last microsecond of CPU time.

Also, the hardware probably doesn't even support an "eight bit counter".
That is, there's a good chance that the behaviour described comes entirely
"for free", after setup, whereas using any other value would have required
a periodic reload, in software, which would have been deemed an unacceptable
burden on performance. I believe one of the first links to the Google
search I mentioned has the part number of the timer chip in question, so
you could investigate further if you're curious.

And if you wonder why Windows still had to stick with the same value,
well, let's just say that it's one of the best proofs that I've seen
that even Windows 98 is nothing more than a glossy GUI shell on top
of DOS.

-Peter
Jul 18 '05 #8

John J. Lee wrote:
Peter Hansen <pe***@engcorp.com> writes:
[...]
Strange, but based on a relatively mundane thing: the frequency
(14.31818MHz) of the NTSC color sub-carrier which was used when
displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for
the

[...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.93 ms, which is about 18.2 ticks

[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.


The original IBM PC (8088, 64KB of memory if you were lucky, and
two 160 KB floppies), which is where all of these numbers come from,
didn't exactly have all that much power to spare. Dealing with 18.2
clock interrupts a second was plenty -- dealing with way more was
probably considered out of the question by the original designers.

We _are_ talking about more than 20 years ago, after all (and I'm
sure none of those designers could possibly dream that their numbers
had to be chosen, not for ONE computer model, but for models that
would span 15 or more turns of Moore's Law's wheel...!).
Alex

Jul 18 '05 #9

Peter Hansen wrote:

"John J. Lee" wrote:

Peter Hansen <pe***@engcorp.com> writes:
[...]
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the

[...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.93 ms, which is about 18.2 ticks

[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.


Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?! It would have crippled the system.


Oops: 5000 times a second, even worse. :-) I have a vague memory that
the DOS timer interrupt could take well over a millisecond to execute
on the old machines, so it simply wasn't feasible in any case.
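For reference, the same arithmetic with the hypothetical eight-bit (256-count) wraparound gives the rate in question:

```python
timer_hz = 14.31818e6 / 12         # timer chip input clock, ~1.19 MHz
rate = timer_hz / 256              # interrupts per second with an 8-bit wrap
print("%.0f interrupts per second" % rate)
```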
Jul 18 '05 #10

Alex:
We _are_ talking about more than 20 years ago, after all (and I'm
sure none of those designers could possibly dream that their numbers
had to be chosen, not for ONE computer model, but for models that
would span 15 or more turns of Moore's Law's wheel...!).


I am hoping for symbolic reasons that in another couple of years it
will be possible to buy a 4.77 GHz processor. Then place it
side-by-side with an original PC and gape at the differences. 1000x
clock speed (and 100,000x performance?), 200,000x more
memory, 1,000,000x more disk space.

Andrew
da***@dalkescientific.com

Jul 18 '05 #11

Peter Hansen <pe***@engcorp.com> writes:
"John J. Lee" wrote: [...]
That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.


Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?!


No.

It would have crippled the system. In fact, from what
I recall of the overhead associated with that interrupt, that might well
have consumed every last microsecond of CPU time.
I see. :-)

[...] burden on performance. I believe one of the first links to the Google
search I mentioned has the part number of the timer chip in question, so
you could investigate further if you're curious.

[...]

No thanks!-)
John
Jul 18 '05 #12

Andrew Dalke:
I am hoping for symbolic reasons that in another couple of years it
will be possible to buy a 4.77 GHz processor.


Great! Another good reason to _not_ clean out the garage.

Emile van Sebille
em***@fenx.com
Jul 18 '05 #13
