Bytes IT Community

with timeout(...):

Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 26 '07 #1
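For contrast with what Nick is asking for, the only fully portable approach is cooperative: the code inside the block has to poll a deadline itself rather than being interrupted. A minimal sketch in modern Python syntax (the class and method names are illustrative, not from any stdlib):

```python
import time

class cooperative_timeout(object):
    """Portable but cooperative: the body must poll expired() itself."""
    def __init__(self, limit):
        self.deadline = time.monotonic() + limit

    def expired(self):
        # True once the deadline has passed
        return time.monotonic() >= self.deadline

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False  # never suppress exceptions

# Usage: the loop checks the deadline on every iteration
with cooperative_timeout(0.5) as t:
    while not t.expired():
        pass  # one bounded "step" of work per iteration
```

This sidesteps all the reliability problems discussed below, at the price that a step which blocks indefinitely is never interrupted.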
22 Replies


Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?
I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?

I'm not sure if it's exactly what you are looking for, but I wrote a
timer class that does something like you describe:

http://aspn.activestate.com/ASPN/Coo.../Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.

This works like a screen saver, etc.

James
Mar 26 '07 #2

Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!
Cross-platform isn't the issue here - reliability though is. To put it
simply: it can't be done that way. You could of course add a timer to
the Python bytecode core that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no side
effects occur while in some_long_running_stuff - which doesn't only
extend to Python itself, but also to external modules and systems (file
writing, network communications...). That can't be done, unless you use
a time machine. Which I'd take as a personal insult, because in that
rolled-back timeframe I would possibly be proposing to my future wife or
something...

Diez
Mar 26 '07 #3

On Mar 26, 3:16 pm, "Diez B. Roggisch" <d...@nospam.web.de> wrote:
But to make that work reliably, it has to be ensured that no sideeffects
occur while being in some_long_running_stuff. which doesn't only extend to
python itself, but also external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time-machine.
Hey hey, isn't the Python mantra that we're all adults here? It'd
be the programmer's responsibility to use only code that has no
side effects. I certainly can ensure that no side effects occur in the
following code: 1+2. I didn't even need a time machine to do that :P
Or the primitive could be implemented so that Python throws a
TimeoutException at the earliest opportunity. Then one could write
except-blocks which deal with rolling back any undesirable side
effects. (I'm not saying such a timeout feature could be implemented
in Python itself, but it could be done by modifying the CPython
implementation.)

Mar 26 '07 #4

On Mar 26, 3:30 am, Nick Craig-Wood <n...@craig-wood.com> wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!
Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the C API call PyThreadState_SetAsyncExc)

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.

-Mike

Mar 27 '07 #5
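For the record, Mike's sketch can be completed on CPython by making the '...' a callback that fires PyThreadState_SetAsyncExc at the thread that entered the block. The version below is a sketch in modern Python syntax (`__bool__` instead of 2.x's `__nonzero__`, `threading.get_ident`), with all the caveats above still applying - it is a CPython-only hack and only interrupts at bytecode boundaries:

```python
import ctypes
import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False

    def __bool__(self):
        return self.timed_out

    @staticmethod
    def _interrupt(thread_id):
        # CPython-only: ask the interpreter to raise TimeoutException
        # in the target thread at its next bytecode boundary
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_ulong(thread_id), ctypes.py_object(TimeoutException))

    def __enter__(self):
        # the timer thread will interrupt whichever thread entered the block
        self.timer = threading.Timer(
            self.limit_t, self._interrupt, args=(threading.get_ident(),))
        self.timer.start()
        return self

    def __exit__(self, exc_c, exc, tb):
        self.timer.cancel()  # no-op if it has already fired
        if exc_c is TimeoutException:
            self.timed_out = True
            return True   # suppress the injected exception
        return False      # let any other exception propagate
```

Cancelling the timer on normal exit matters: otherwise a late-firing timer could inject the exception into unrelated code further down.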

ir****@gmail.com <ir****@gmail.com> wrote:
On Mar 26, 3:16 pm, "Diez B. Roggisch" <d...@nospam.web.de> wrote:
But to make that work reliably, it has to be ensured that no sideeffects
occur while being in some_long_running_stuff. which doesn't only extend to
python itself, but also external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time-machine.

Hey hey, isn't the Python mantra that we're all adults here?
Yes, the timeout could happen at any time, but at a defined moment in
the Python bytecode interpreter's life, so it wouldn't mess up its
internal state.
It'd be the programmers responsibility to use only code that has no
side effects. I certainly can ensure that no side-effects occur in
the following code: 1+2. I didn't even need a time machine to do
that :P Or the primitive could be implemented so that Python throws
a TimeoutException at the earliest opportunity. Then one could
write except-blocks which deal with rolling back any undesirable
side effects. (I'm not saying such timeout feature could be
implemented in Python, but it could be made by modifying the
CPython implementation)
I don't think timeouts would be any more difficult that using threads.

It is impossible to implement reliably at the moment though because it
is impossible to kill one thread from another thread. There is a
ctypes hack to do it, which sort of works... It needs some core
support I think.

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #6

James Stroud <js*****@mbi.ucla.edu> wrote:
Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?

I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?
Yes.
I'm not sure if its exactly what you are looking for, but I wrote a
timer class that does something like you describe:

http://aspn.activestate.com/ASPN/Coo.../Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.
I don't think your code implements quite what I meant!
--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #7

"Diez B. Roggisch" <de***@nospam.web.de> wrote:

Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Cross platform isn't the issue here - reliability though is. To put it
simple: can't be done that way. You could of course add a timer to the
python bytecode core, that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no sideeffects
occur while being in some_long_running_stuff. which doesn't only extend to
python itself, but also external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time-machine.
Which I'd take as an personal insult, because in that rolled-back timeframe
I will be possibly proposing to my future wife or something...
How does the timed callback in the Tkinter stuff work? In my experience
so far it seems that it does the timed callback quite reliably...

It probably has to do with the fact that the mainloop runs as a
stand-alone process, and that you set the timer up when you do the
"after" call.

So it probably means that to emulate that kind of thing you need a
separate "thread" that is in a loop to monitor the timer's expiry, and
that somehow gains control from the "long running stuff" periodically...

So Diez is probably right that the way to go is to put the timer in the
python interpreter loop, as it's the only thing around that you could
more or less trust to run all the time.

But then it will not read as nice as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly.
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?

- Hendrik


Mar 27 '07 #8
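Hendrik's setup_callback/cancel_callback names are hypothetical; a minimal stand-in can be built today with threading.Timer, though the callback then runs in a separate timer thread rather than in the interpreter loop he proposes, so it can only run alongside the long-running code, not interrupt it:

```python
import threading

def setup_callback(error_routine, timeout_in_milliseconds):
    # Stand-in for the hypothetical primitive sketched above: the
    # callback fires in a daemon timer thread, not in the interpreter
    # loop, so it cannot preempt the long-running code - only run
    # concurrently with it.
    timer = threading.Timer(timeout_in_milliseconds / 1000.0, error_routine)
    timer.daemon = True
    timer.start()
    return timer

def cancel_callback(timer):
    # Cancelling before expiry means error_routine never runs
    timer.cancel()
```

This gives the shape of Hendrik's API but not its semantics: the error routine can alarm and attempt recovery, yet the "long running stuff" keeps running unless it cooperates.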

Klaas <mi********@gmail.com> wrote:
On Mar 26, 3:30 am, Nick Craig-Wood <n...@craig-wood.com> wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the capi call PyThreadState_SetAsyncExc)

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.
Here is my effort... You'll note from the comments that there are
lots of tricky bits.

It isn't perfect though as it sometimes leaves behind threads (see the
FIXME). I don't think it crashes any more though!

------------------------------------------------------------

"""
General purpose timeout mechanism not using alarm(), ie cross platform

Eg

from timeout import Timeout, TimeoutError

def might_infinite_loop(arg):
while 1:
pass

try:
Timeout(10, might_infinite_loop, "some arg")
except TimeoutError:
print "Oops took too long"
else:
print "Ran just fine"

"""

import threading
import time
import sys
import ctypes
import os

class TimeoutError(Exception):
"""Thrown on a timeout"""
PyThreadState_SetAsyncExc = ctypes.pythonapi.PyThreadState_SetAsyncExc
_c_TimeoutError = ctypes.py_object(TimeoutError)

class Timeout(threading.Thread):
"""
A General purpose timeout class
timeout is int/float in seconds
action is a callable
*args, **kwargs are passed to the callable
"""
def __init__(self, timeout, action, *args, **kwargs):
threading.Thread.__init__(self)
self.action = action
self.args = args
self.kwargs = kwargs
self.stopped = False
self.exc_value = None
self.end_lock = threading.Lock()
# start subtask
self.setDaemon(True) # FIXME this shouldn't be needed but is, indicating sub tasks aren't ending
self.start()
# Wait for subtask to end naturally
self.join(timeout)
# Use end_lock to kill the thread in a non-racy
# fashion. (Using isAlive is racy). Poking exceptions into
# the Thread cleanup code isn't a good idea either
if self.end_lock.acquire(False):
# gained end_lock =sub thread is still running
# sub thread is still running so kill it with a TimeoutError
self.exc_value = TimeoutError()
PyThreadState_SetAsyncExc(self.id, _c_TimeoutError)
# release the lock so it can progress into thread cleanup
self.end_lock.release()
# shouldn't block since we've killed the thread
self.join()
# re-raise any exception
if self.exc_value:
raise self.exc_value
def run(self):
self.id = threading._get_ident()
try:
self.action(*self.args, **self.kwargs)
except:
self.exc_value = sys.exc_value
# only end if we can acquire the end_lock
self.end_lock.acquire()

if __name__ == "__main__":

def _spin(t):
"""Spins for t seconds"""
start = time.time()
end = start + t
while time.time() < end:
pass

def _test_time_limit(name, expecting_time_out, t_limit, fn, *args, **kwargs):
"""Test Timeout"""
start = time.time()

if expecting_time_out:
print "Test",name,"should timeout"
else:
print "Test",name,"shouldn't timeout"

try:
Timeout(t_limit, fn, *args, **kwargs)
except TimeoutError, e:
if expecting_time_out:
print "Timeout generated OK"
else:
raise RuntimeError("Wasn't expecting TimeoutError Here")
else:
if expecting_time_out:
raise RuntimeError("Was expecting TimeoutError Here")
else:
print "No TimeoutError generated OK"

elapsed = time.time() - start
print "That took",elapsed,"seconds for timeout of",t_limit

def test():
"""Test code"""

# no nesting
_test_time_limit("simple #1", True, 5, _spin, 10)
_test_time_limit("simple #2", False, 10, _spin, 5)

# 1 level of nesting
_test_time_limit("nested #1", True, 4, _test_time_limit,
"nested #1a", True, 5, _spin, 10)
_test_time_limit("nested #2", False, 6, _test_time_limit,
"nested #2a", True, 5, _spin, 10)
_test_time_limit("nested #4", False, 6, _test_time_limit,
"nested #4a", False, 10, _spin, 5)

# 2 level of nesting
_test_time_limit("nested #5", True, 3, _test_time_limit,
"nested #5a", True, 4, _test_time_limit,
"nested #5b", True, 5, _spin, 10)
_test_time_limit("nested #9", False, 7, _test_time_limit,
"nested #9a", True, 4, _test_time_limit,
"nested #9b", True, 5, _spin, 10)
_test_time_limit("nested #10", False, 7, _test_time_limit,
"nested #10a",False, 6, _test_time_limit,
"nested #10b",True, 5, _spin, 10)
_test_time_limit("nested #12", False, 7, _test_time_limit,
"nested #12a",False, 6, _test_time_limit,
"nested #12b",False, 10, _spin, 5)

print "All tests OK"

test()
--
Nick Craig-Wood <ni**@craig-wood.com-- http://www.craig-wood.com/nick
Mar 27 '07 #9

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
so Diez is probably right that the way to go is to put the timer in the
python interpreter loop, as its the only thing around that you could
more or less trust to run all the time.

But then it will not read as nice as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly.
I could live with that!

It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?
I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #10

Nick Craig-Wood <ni**@craig-wood.com> writes:
It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).
Is there some reason not to use sigalarm for this?
Mar 27 '07 #11

On Mar 27, 3:28 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Nick Craig-Wood <n...@craig-wood.com> writes:
It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).

Is there some reason not to use sigalarm for this?
* doesn't work with threads
* requires global state/handler
* cross-platform?

-Mike

Mar 28 '07 #12
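For completeness, the sigalarm route Paul suggests does give the desired with-block on Unix, and it illustrates Mike's caveats directly: it relies on a process-global handler, works only in the main thread, and signal.alarm() takes whole seconds and doesn't exist on Windows. A sketch using Python 3's built-in TimeoutError (class name invented here):

```python
import signal

class alarm_timeout(object):
    """Unix-only, main-thread-only timeout using SIGALRM."""
    def __init__(self, seconds):
        self.seconds = seconds       # signal.alarm only takes whole seconds
        self.timed_out = False

    def _handler(self, signum, frame):
        # Runs in the main thread between bytecodes when SIGALRM arrives
        raise TimeoutError("alarm expired")

    def __enter__(self):
        self._old = signal.signal(signal.SIGALRM, self._handler)
        signal.alarm(self.seconds)
        return self

    def __exit__(self, exc_type, exc, tb):
        signal.alarm(0)                           # clear any pending alarm
        signal.signal(signal.SIGALRM, self._old)  # restore previous handler
        if exc_type is TimeoutError:
            self.timed_out = True
            return True  # suppress the timeout
        return False
```

Like the ctypes approach, the interruption is only delivered while the interpreter is executing bytecode or a system call that honours EINTR, so a C extension that doesn't check signals can still overrun.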

"Nick Craig-Wood" <ni**@craig-wood.com> wrote:

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?

I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!
Don't the bytecodes call underlying OS functions? So is there not a
case where a particular bytecode could block, or are they all protected
by timeouts?
Embedded code would handle this sort of thing by interrupting anyway
and trying to clear the mess up afterward - if the limit switch does not
appear after some elapsed time, while you are moving the 100 ton mass,
you should abort and alarm, regardless of anything else...
And if the limit switch sits on a LAN device, the OS timeouts could be
wholly inappropriate...

- Hendrik

Mar 28 '07 #13

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <ni**@craig-wood.com> wrote:
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?
I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!

Don't the bytecodes call underlying OS functions? - so is there not a case
where a particular bytecode could block, or all they all protected by
time outs?
I believe the convention is that when calling an OS function which
might block, the global interpreter lock is dropped, thus allowing
other Python bytecode to run.
Embedded code would handle this sort of thing by interrupting
anyway and trying to clear the mess up afterward - if the limit
switch does not appear after some elapsed time, while you are
moving the 100 ton mass, you should abort and alarm, regardless of
anything else... And if the limit switch sits on a LAN device, the
OS timeouts could be wholly inappropriate...
Well, yes there are different levels of potential reliability with
different implementation strategies for each!
--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 28 '07 #14

"Nick Craig-Wood" <nick@crai....od.com> wrote:
Well, yes there are different levels of potential reliability with
different implementation strategies for each!
Gadzooks! Foiled again by the horses for courses argument.

; - )

- Hendrik

Mar 29 '07 #15

I believe the convention is that when calling an OS function which
might block, the global interpreter lock is dropped, thus allowing
other Python bytecode to run.

So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

Diez
Mar 29 '07 #16

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <nick@crai....od.com> wrote:
Well, yes there are different levels of potential reliability with
different implementation strategies for each!

Gadzooks! Foiled again by the horses for courses argument.

; - )
;-)

I'd like there to be something which works well enough for day-to-day
use, i.e. doesn't ever wreck the internals of Python. It could have
some caveats like "may not time out during C functions which haven't
released the GIL" and that would still make it very usable.

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 29 '07 #17

Diez B. Roggisch <de***@nospam.web.de> wrote:

I believe the convention is that when calling an OS function which
might block, the global interpreter lock is dropped, thus allowing
other Python bytecode to run.


So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.
I'm assuming that the timeout function is running in a thread...

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 29 '07 #18

Diez B. Roggisch wrote:
Nick Craig-Wood wrote:

Did anyone write a contextmanager implementing a timeout for
python2.5?

And have it work reliably and in a cross platform way!

Cross platform isn't the issue here - reliability though is. To put it
simple: can't be done that way. You could of course add a timer to the
python bytecode core, that would "jump back" to a stored savepoint or
something like that.
Early versions of Scheme had a neat solution to this problem.
You could run a function with a limited amount of "fuel". When the
"fuel" ran out, the call returned with a closure. You could
run the closure again and pick up from where the function had been
interrupted, or just discard the closure.

So there's conceptually a clean way to do this. It's probably
not worth having in Python, but there is an approach that will work.

LISP-type systems tend to be more suitable for this sort of thing.
Traditionally, LISP had the concept of a "break", where
execution could stop and the programmer (never the end user) could
interact with the computation in progress.

John Nagle
Mar 29 '07 #19

"Nick Craig-Wood" <n...k@cra..od.com> wrote:
I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.
I second this (or third or whatever if my post is slow).
It is tremendously useful to start something and to be told it has timed
out by a call, rather than to have to unblock the i/o yourself and
"busy-loop" to see if it's successful. And from what I can see
the select functionality is not much different from busy looping...

- Hendrik

Mar 30 '07 #20

Nick Craig-Wood wrote:
Diez B. Roggisch <de***@nospam.web.de> wrote:

I believe the convention is that when calling an OS function which
might block, the global interpreter lock is dropped, thus allowing
other Python bytecode to run.

So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

I'm assuming that the timeout function is running in a thread...
I wouldn't assume that - it could be a python on a platform without threads.

I really don't think that your idea is worth the effort. If there is
something that can be safely interrupted at any given point in time -
which is the exception, not the rule - then one can "code around" the
missing feature by spawning a subprocess python, using pyro to
communicate, and terminating it. I've done so before.

Some people here said "we're adults, we make sure our code will be
safely interruptable". But first of all, even adults make errors, and
even more importantly: most of the time such a feature is wanted, it's
about limiting some scripts that come from an untrusted source - like
user-written plugins. Such a feature would encourage people to use it in
such cases, but not stand up for the nastiness it may provoke.

Diez
Mar 30 '07 #21

John Nagle <na***@animats.com> wrote:
Diez B. Roggisch wrote:
Nick Craig-Wood wrote:

Did anyone write a contextmanager implementing a timeout for
python2.5?

And have it work reliably and in a cross platform way!
Cross platform isn't the issue here - reliability though is. To put it
simple: can't be done that way. You could of course add a timer to the
python bytecode core, that would "jump back" to a stored savepoint or
something like that.

Early versions of Scheme had a neat solution to this problem.
You could run a function with a limited amount of "fuel". When the
"fuel" ran out, the call returned with a closure. You could
run the closure again and pick up from where the function had been
interrupted, or just discard the closure.
That sounds like a really nice concept. That would enable you to make
long running stuff yield without threads too.

I wonder if it is possible in python...

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 31 '07 #22
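The Scheme "fuel" idea can at least be approximated with generators, provided the long-running code is willing to yield at each step - the paused generator then plays the role of the returned closure. An illustrative sketch in modern Python (the names are invented here):

```python
def run_with_fuel(gen, fuel):
    # Advance a generator at most `fuel` steps.  Returns ("done", result)
    # if it finished, else ("paused", gen) so the caller can resume the
    # suspended computation later - or simply discard it.
    for _ in range(fuel):
        try:
            next(gen)
        except StopIteration as stop:
            return ("done", stop.value)
    return ("paused", gen)

def countdown(n):
    # A toy long-running computation that yields once per step
    while n:
        n -= 1
        yield
    return "finished"

state, paused = run_with_fuel(countdown(10), 3)   # fuel runs out after 3 steps
state2, result = run_with_fuel(paused, 100)       # resume and finish
```

Unlike preemptive timeouts this never interrupts mid-step, so there is no internal state to wreck - but it only works for code written as a generator.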

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <n...k@cra..od.com> wrote:
I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.

I second this (or third or whatever if my post is slow).
It is tremendously useful to start something and to be told it has timed
out by a call, rather than to have to unblock the i/o yourself and
to "busy-loop" to see if it's successful.
Yes, exactly!
And from what I can see the select functionality is not much
different from busy looping...
Conceptually it is no different. In practice your process goes to
sleep until the OS wakes it up again with more data so it is much more
CPU efficient (and possibly lower latency) than busy waiting.

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 31 '07 #23
