with timeout(...):

Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 26 '07 #1
Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?
I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?

I'm not sure if it's exactly what you are looking for, but I wrote a
timer class that does something like what you describe:

http://aspn.activestate.com/ASPN/Coo.../Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.

This works like a screen saver, etc.

James
Mar 26 '07 #2
Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!
Cross-platform isn't the issue here - reliability is, though. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff, which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.
Which I'd take as a personal insult, because in that rolled-back timeframe
I will possibly be proposing to my future wife or something...

Diez
Mar 26 '07 #3
On Mar 26, 3:16 pm, "Diez B. Roggisch" <d...@nospam.web.de> wrote:
But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff, which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.
Hey hey, isn't the Python mantra that we're all adults here? It'd
be the programmer's responsibility to use only code that has no
side effects. I certainly can ensure that no side-effects occur in the
following code: 1+2. I didn't even need a time machine to do that :P
Or the primitive could be implemented so that Python
throws a TimeoutException at the earliest opportunity. Then one
could write except-blocks which deal with rolling back any undesirable
side effects. (I'm not saying such a timeout feature could be
implemented in Python, but it could be made by modifying the CPython
implementation.)

Mar 26 '07 #4
On Mar 26, 3:30 am, Nick Craig-Wood <n...@craig-wood.com> wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!
Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        # truth value of "exceeded" after the with block
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        self.timer.cancel()  # don't fire after the block finished in time
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the C API call PyThreadState_SetAsyncExc).
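
For concreteness, a minimal sketch of what that callback could look like -
purely illustrative, not from the original post. It assumes the target
thread's id is captured when the with-block is entered (e.g. via
thread.get_ident() in __enter__) and reuses the TimeoutException class above:

import ctypes
import thread

def _async_raise(thread_id):
    # Ask the interpreter to raise TimeoutException in the thread with the
    # given id; it is only delivered the next time that thread runs bytecode.
    ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread_id), ctypes.py_object(TimeoutException))

# e.g. in __enter__:
#   self.timer = threading.Timer(self.limit_t, _async_raise, [thread.get_ident()])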

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.

-Mike

Mar 27 '07 #5
ir****@gmail.com <ir****@gmail.com> wrote:
On Mar 26, 3:16 pm, "Diez B. Roggisch" <d...@nospam.web.de> wrote:
But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff, which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.

Hey hey, isn't the Python mantra that we're all adults here?
Yes, the timeout could happen at any time, but at a defined moment in
the Python bytecode interpreter's life, so it wouldn't mess up its
internal state.
It'd be the programmer's responsibility to use only code that has no
side effects. I certainly can ensure that no side-effects occur in
the following code: 1+2. I didn't even need a time machine to do
that :P Or the primitive could be implemented so that Python throws
a TimeoutException at the earliest opportunity. Then one could
write except-blocks which deal with rolling back any undesirable
side effects. (I'm not saying such a timeout feature could be
implemented in Python, but it could be made by modifying the
CPython implementation.)
I don't think timeouts would be any more difficult than using threads.

It is impossible to implement reliably at the moment, though, because it
is impossible to kill one thread from another thread. There is a
ctypes hack to do it, which sort of works... It needs some core
support, I think.

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #6
James Stroud <js*****@mbi.ucla.edu> wrote:
Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?

I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?
Yes.
I'm not sure if it's exactly what you are looking for, but I wrote a
timer class that does something like what you describe:

http://aspn.activestate.com/ASPN/Coo.../Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.
I don't think your code implements quite what I meant!
--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #7
"Diez B. Roggisch" <de***@nospam.web.de> wrote:

Nick Craig-Wood wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Cross-platform isn't the issue here - reliability is, though. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff, which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.
Which I'd take as a personal insult, because in that rolled-back timeframe
I will possibly be proposing to my future wife or something...
How does the timed callback in the Tkinter stuff work? In my experience so
far it seems that it does the timed callback quite reliably...

It probably has to do with the fact that the mainloop runs as a stand-alone
process, and that you set the timer up when you do the "after" call.

So it probably means that to emulate that kind of thing you need a
separate "thread" that is in a loop to monitor the timer's expiry, that
somehow gains control from the "long running stuff" periodically...

So Diez is probably right that the way to go is to put the timer in the
Python interpreter loop, as it's the only thing around that you could
more or less trust to run all the time.

But then it will not read as nicely as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly.
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?

- Hendrik


Mar 27 '07 #8
Klaas <mi********@gmail.com> wrote:
On Mar 26, 3:30 am, Nick Craig-Wood <n...@craig-wood.com> wrote:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        # truth value of "exceeded" after the with block
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        self.timer.cancel()  # don't fire after the block finished in time
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the capi call PyThreadState_SetAsyncExc)

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.
Here is my effort... You'll note from the comments that there are
lots of tricky bits.

It isn't perfect though as it sometimes leaves behind threads (see the
FIXME). I don't think it crashes any more though!

------------------------------------------------------------

"""
General purpose timeout mechanism not using alarm(), ie cross platform

Eg

from timeout import Timeout, TimeoutError

def might_infinite_loop(arg):
    while 1:
        pass

try:
    Timeout(10, might_infinite_loop, "some arg")
except TimeoutError:
    print "Oops took too long"
else:
    print "Ran just fine"

"""

import threading
import time
import sys
import ctypes
import os

class TimeoutError(Exception):
    """Thrown on a timeout"""

PyThreadState_SetAsyncExc = ctypes.pythonapi.PyThreadState_SetAsyncExc
_c_TimeoutError = ctypes.py_object(TimeoutError)

class Timeout(threading.Thread):
    """
    A General purpose timeout class
    timeout is int/float in seconds
    action is a callable
    *args, **kwargs are passed to the callable
    """
    def __init__(self, timeout, action, *args, **kwargs):
        threading.Thread.__init__(self)
        self.action = action
        self.args = args
        self.kwargs = kwargs
        self.stopped = False
        self.exc_value = None
        self.end_lock = threading.Lock()
        # start subtask
        self.setDaemon(True) # FIXME this shouldn't be needed but is, indicating sub tasks aren't ending
        self.start()
        # Wait for subtask to end naturally
        self.join(timeout)
        # Use end_lock to kill the thread in a non-racy
        # fashion. (Using isAlive is racy). Poking exceptions into
        # the Thread cleanup code isn't a good idea either
        if self.end_lock.acquire(False):
            # gained end_lock => sub thread is still running
            # sub thread is still running so kill it with a TimeoutError
            self.exc_value = TimeoutError()
            PyThreadState_SetAsyncExc(self.id, _c_TimeoutError)
            # release the lock so it can progress into thread cleanup
            self.end_lock.release()
            # shouldn't block since we've killed the thread
            self.join()
        # re-raise any exception
        if self.exc_value:
            raise self.exc_value
    def run(self):
        self.id = threading._get_ident()
        try:
            self.action(*self.args, **self.kwargs)
        except:
            self.exc_value = sys.exc_value
        # only end if we can acquire the end_lock
        self.end_lock.acquire()

if __name__ == "__main__":

    def _spin(t):
        """Spins for t seconds"""
        start = time.time()
        end = start + t
        while time.time() < end:
            pass

    def _test_time_limit(name, expecting_time_out, t_limit, fn, *args, **kwargs):
        """Test Timeout"""
        start = time.time()

        if expecting_time_out:
            print "Test", name, "should timeout"
        else:
            print "Test", name, "shouldn't timeout"

        try:
            Timeout(t_limit, fn, *args, **kwargs)
        except TimeoutError, e:
            if expecting_time_out:
                print "Timeout generated OK"
            else:
                raise RuntimeError("Wasn't expecting TimeoutError Here")
        else:
            if expecting_time_out:
                raise RuntimeError("Was expecting TimeoutError Here")
            else:
                print "No TimeoutError generated OK"

        elapsed = time.time() - start
        print "That took", elapsed, "seconds for timeout of", t_limit

    def test():
        """Test code"""

        # no nesting
        _test_time_limit("simple #1", True, 5, _spin, 10)
        _test_time_limit("simple #2", False, 10, _spin, 5)

        # 1 level of nesting
        _test_time_limit("nested #1", True, 4, _test_time_limit,
                         "nested #1a", True, 5, _spin, 10)
        _test_time_limit("nested #2", False, 6, _test_time_limit,
                         "nested #2a", True, 5, _spin, 10)
        _test_time_limit("nested #4", False, 6, _test_time_limit,
                         "nested #4a", False, 10, _spin, 5)

        # 2 level of nesting
        _test_time_limit("nested #5", True, 3, _test_time_limit,
                         "nested #5a", True, 4, _test_time_limit,
                         "nested #5b", True, 5, _spin, 10)
        _test_time_limit("nested #9", False, 7, _test_time_limit,
                         "nested #9a", True, 4, _test_time_limit,
                         "nested #9b", True, 5, _spin, 10)
        _test_time_limit("nested #10", False, 7, _test_time_limit,
                         "nested #10a", False, 6, _test_time_limit,
                         "nested #10b", True, 5, _spin, 10)
        _test_time_limit("nested #12", False, 7, _test_time_limit,
                         "nested #12a", False, 6, _test_time_limit,
                         "nested #12b", False, 10, _spin, 5)

        print "All tests OK"

    test()
--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #9
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
So Diez is probably right that the way to go is to put the timer in the
Python interpreter loop, as it's the only thing around that you could
more or less trust to run all the time.

But then it will not read as nicely as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly.
I could live with that!

It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).
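
(The "every few hundred bytecodes" check being referred to is exposed in
Python 2 as sys.setcheckinterval - shown here only to illustrate the existing
knob, not as a timeout implementation:)

import sys

print sys.getcheckinterval()  # default is 100 bytecodes between checks
sys.setcheckinterval(1000)    # let more bytecodes run between checks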
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?
I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 27 '07 #10
Nick Craig-Wood <ni**@craig-wood.com> writes:
It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).
Is there some reason not to use sigalarm for this?
Mar 27 '07 #11
On Mar 27, 3:28 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Nick Craig-Wood <n...@craig-wood.com> writes:
It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).

Is there some reason not to use sigalarm for this?
* doesn't work with threads
* requires global state/handler
* cross-platform?
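
For reference, a minimal Unix-only sketch of the sigalarm approach being
discussed (the busy loop just stands in for some long-running work; the
handler is global state, which is one of the caveats above):

import signal

class TimeoutError(Exception):
    pass

def on_alarm(signum, frame):
    raise TimeoutError()

def busy_loop():
    while True:
        pass

signal.signal(signal.SIGALRM, on_alarm)  # install the (global) handler
signal.alarm(5)                          # deliver SIGALRM in 5 seconds
try:
    busy_loop()
except TimeoutError:
    print "Oops - took too long!"
finally:
    signal.alarm(0)                      # cancel any pending alarm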

-Mike

Mar 28 '07 #12
"Nick Craig-Wood" <ni**@craig-wood.com> wrote:

Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
>
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?

I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!
Don't the bytecodes call underlying OS functions? - so is there not a case
where a particular bytecode could block, or are they all protected by
timeouts?
Embedded code would handle this sort of thing by interrupting anyway
and trying to clear the mess up afterward - if the limit switch does not
appear after some elapsed time, while you are moving the 100 ton mass,
you should abort and alarm, regardless of anything else...
And if the limit switch sits on a LAN device, the OS timeouts could be
wholly inappropriate...

- Hendrik

Mar 28 '07 #13
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <ni**@craig-wood.com> wrote:
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?
I think if we aren't executing python bytecodes (ie are blocked in the
kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under unix you'd send a signal -
which python would act upon next time it got control back to the
interpreter, but I don't think it would buy us anything except a whole
host of problems!

Don't the bytecodes call underlying OS functions? - so is there not a case
where a particular bytecode could block, or are they all protected by
timeouts?
I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.
Embedded code would handle this sort of thing by interrupting
anyway and trying to clear the mess up afterward - if the limit
switch does not appear after some elapsed time, while you are
moving the 100 ton mass, you should abort and alarm, regardless of
anything else... And if the limit switch sits on a LAN device, the
OS timeouts could be wholly inappropriate...
Well, yes there are different levels of potential reliability with
different implementation strategies for each!
--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 28 '07 #14
"Nick Craig-Wood" <nick@crai....od.com> wrote:
Well, yes there are different levels of potential reliability with
different implementation strategies for each!
Gadzooks! Foiled again by the horses for courses argument.

; - )

- Hendrik

Mar 29 '07 #15
>
I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.

So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

Diez
Mar 29 '07 #16
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <nick@crai....od.com> wrote:
Well, yes there are different levels of potential reliability with
different implementation strategies for each!

Gadzooks! Foiled again by the horses for courses argument.

; - )
;-)

I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 29 '07 #17
Diez B. Roggisch <de***@nospam.web.de> wrote:

I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.


So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.
I'm assuming that the timeout function is running in a thread...

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 29 '07 #18
Diez B. Roggisch wrote:
Nick Craig-Wood wrote:

>>Did anyone write a contextmanager implementing a timeout for
python2.5?

And have it work reliably and in a cross platform way!

Cross-platform isn't the issue here - reliability is, though. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.
Early versions of Scheme had a neat solution to this problem.
You could run a function with a limited amount of "fuel". When the
"fuel" ran out, the call returned with a closure. You could
run the closure again and pick up from where the function had been
interrupted, or just discard the closure.

So there's conceptually a clean way to do this. It's probably
not worth having in Python, but there is an approach that will work.

LISP-type systems tend to be more suitable for this sort of thing.
Traditionally, LISP had the concept of a "break", where
execution could stop and the programmer (never the end user) could
interact with the computation in progress.

John Nagle
Mar 29 '07 #19
"Nick Craig-Wood" <n...k@cra..od.com> wrote:
I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.
I second this (or third or whatever if my post is slow).
It is tremendously useful to start something and to be told it has timed
out by a call, rather than to have to unblock the i/o yourself and
to "busy-loop" to see if it's successful. And from what I can see
the select functionality is not much different from busy looping...

- Hendrik

Mar 30 '07 #20
Nick Craig-Wood wrote:
Diez B. Roggisch <de***@nospam.web.de> wrote:
>>I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.

So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

I'm assuming that the timeout function is running in a thread...
I wouldn't assume that - it could be a Python on a platform without threads.

I really don't think that your idea is worth the effort. If there is
something that can be safely interrupted at any given point in time -
which is the exception, not the rule - then one can "code around" that
missing feature by spawning a subprocess Python, using Pyro to
communicate, and terminating it. I've done so before.
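
A rough sketch of that spawn-and-kill approach, leaving out the Pyro
communication layer and using os.kill directly (so Unix-only; Python 2.5's
subprocess has no terminate(); the child script name is made up):

import os
import signal
import subprocess
import time

# run the untrusted work in its own interpreter so it can be killed safely
child = subprocess.Popen(["python", "untrusted_plugin.py"])
deadline = time.time() + 5.0
while child.poll() is None and time.time() < deadline:
    time.sleep(0.1)
if child.poll() is None:
    os.kill(child.pid, signal.SIGTERM)  # timed out - kill the whole process
    print "Oops - took too long!"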

Some people here said "we're adults, we make sure our code will be
safely interruptible". But first of all, even adults make errors, and
more importantly: most of the time such a feature is wanted, it's
about limiting some scripts that come from an untrusted source - like
user-written plugins. Such a feature would encourage people to use it in
such cases, but it would not stand up to the nastiness it may provoke.

Diez
Mar 30 '07 #21
John Nagle <na***@animats.com> wrote:
Diez B. Roggisch wrote:
Nick Craig-Wood wrote:

>Did anyone write a contextmanager implementing a timeout for
python2.5?

And have it work reliably and in a cross platform way!
Cross-platform isn't the issue here - reliability is, though. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

Early versions of Scheme had a neat solution to this problem.
You could run a function with a limited amount of "fuel". When the
"fuel" ran out, the call returned with a closure. You could
run the closure again and pick up from where the function had been
interrupted, or just discard the closure.
That sounds like a really nice concept. That would enable you to make
long running stuff yield without threads too.

I wonder if it is possible in python...
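
A rough, purely cooperative sketch of the "fuel" idea using a generator -
nothing here can interrupt code that doesn't yield, so it only approximates
the Scheme behaviour; the names are made up:

def run_with_fuel(task, fuel):
    """Resume the generator task at most fuel times.
    Returns None if it finished, or the generator (the 'closure') so the
    caller can resume it later or just discard it."""
    for _ in xrange(fuel):
        try:
            task.next()
        except StopIteration:
            return None
    return task

def count_up(n):
    for i in xrange(n):
        yield i

leftover = run_with_fuel(count_up(1000), 100)
if leftover is not None:
    print "ran out of fuel - resuming with more"
    run_with_fuel(leftover, 10000)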

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 31 '07 #22
Hendrik van Rooyen <ma**@microcorp.co.za> wrote:
"Nick Craig-Wood" <n...k@cra..od.com> wrote:
I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.

I second this (or third or whatever if my post is slow).
It is tremendously useful to start something and to be told it has timed
out by a call, rather than to have to unblock the i/o yourself and
to "busy-loop" to see if it's successful.
Yes, exactly!
And from what I can see the select functionality is not much
different from busy looping...
Conceptually it is no different. In practice your process goes to
sleep until the OS wakes it up again with more data so it is much more
CPU efficient (and possibly lower latency) than busy waiting.
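
For comparison, the select-with-timeout pattern being contrasted with
busy-waiting - the process sleeps in the kernel until data arrives or the
five second timeout expires (the throwaway UDP socket is just for
illustration):

import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))  # nothing will ever send to this socket
readable, _, _ = select.select([sock], [], [], 5.0)
if readable:
    print "data ready"
else:
    print "timed out after 5 seconds without busy-waiting"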

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Mar 31 '07 #23

This thread has been closed and replies have been disabled.
