
python thread scheduler?

I'm doing some benchmarking, and Python is certainly fast enough (the
resolution of time.time() is more than good enough).
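
As a quick sanity check of that resolution claim, here is a minimal
sketch (illustrative, not part of the original benchmark): spin until
time.time() returns a new value and report the step size.

    import time

    # spin until time.time() reports a new value; the step is an
    # estimate of the clock's effective resolution on this platform
    t0 = time.time()
    t1 = time.time()
    while t1 == t0:
        t1 = time.time()
    print(t1 - t0)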

I am using the threading module to implement worker threads that hit
a server with varying levels of workload.

However, my result graphs (of server response time) are very flat, and
the flat curves sit higher as the number of threads increases.

This suggests to me that the barrier I'm hitting is the Python thread
scheduler, not the server software being tested.
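
One way to test that suspicion without touching the server at all is a
sleep-lateness probe: have N threads do nothing but sleep and measure
how late their wakeups fire. If the average lateness grows with the
thread count, the bottleneck is on the client side. A minimal sketch
(the 0.01 s sleep, the counts, and the names here are my own choices,
not from the original harness):

    import threading, time

    def sleeper(lateness):
        # each wakeup should take ~0.01 s; record how much longer it took
        for _ in range(100):
            start = time.time()
            time.sleep(0.01)
            lateness.append(time.time() - start - 0.01)

    lateness = []
    threads = [threading.Thread(target=sleeper, args=(lateness,))
               for _ in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sum(lateness) / len(lateness))  # mean oversleep per wakeup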

I've verified this with two different machines, each running the Python
client against the server, and the graphs are still flat.

Any ideas about how I can verify my suspicions, and how to overcome
them? Is there a switch in Python that will give me "very independent"
threads? Forking takes too much memory: the client machines get bogged
down before the server software under test does.
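
For what it's worth, the closest thing to such a switch in the CPython
of this era is sys.setcheckinterval(), which sets how many bytecode
instructions run between thread-switch checks (default 100). It only
tunes switching overhead; it cannot make threads truly independent,
because the GIL still lets only one thread execute Python bytecode at a
time. A hedged sketch:

    import sys

    # fewer switch checks: less interpreter overhead, coarser scheduling
    sys.setcheckinterval(1000)

    # more switch checks: threads interleave more finely, at some cost
    # sys.setcheckinterval(10)
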
Jul 18 '05 #1
1 Reply



To clarify: I'm seeing response times as low as 0.3 seconds when the
client has 5 worker threads, rising to an average of about 8 seconds
with 50 threads, and higher still with 100 threads.

Here is what the Python threads do (simplified):

    import time

    def worker(units_remaining, interval):
        while units_remaining > 0:
            start = time.time()            # take time
            send_request_to_server()       # placeholder for the real send
            get_response_from_server()     # placeholder for the real receive
            record(time.time() - start)    # record the round-trip time
            time.sleep(interval)           # yield, and vary the request rate
            units_remaining -= 1           # reduce work units by 1
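
For completeness, the workers above would presumably be launched along
these lines (a sketch; the thread count and arguments are mine, not the
original poster's):

    import threading

    threads = [threading.Thread(target=worker, args=(10, 0.05))
               for _ in range(50)]    # e.g. 50 workers, 10 units each
    for t in threads:
        t.start()
    for t in threads:
        t.join()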

I get flat graphs of response time against request rate (the request
rate doubles a specified number of times, every 10 work units).
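
If that doubling is implemented by halving the per-thread sleep
interval, the schedule might look like this (purely illustrative; the
original post does not show this code, and the starting rate is my
assumption):

    # hypothetical sketch: halve the sleep interval (i.e. double the
    # request rate) after every 10 work units
    total_units = 40
    interval = 0.25                        # start at 4 requests/second
    for unit in range(total_units):
        if unit > 0 and unit % 10 == 0:
            interval = interval / 2.0      # double the request rate
            print(interval)
        # ... send one request here, then time.sleep(interval) ...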

I would expect the graphs to start to vary non-linearly, especially at
higher thread counts (say 50 or more) and request rates of 256 per
second or more...
Jul 18 '05 #2
