Bytes IT Community

python's threading has no "interrupt"?

As far as I know, Python's threading module is modeled after Java's.
However, I can't find anything equivalent to Java's interrupt and
isInterrupted methods, or to InterruptedException:
"somethread.interrupt()" wakes somethread up when it is in a
sleeping/waiting state.

Is there any way of doing this with Python's threads? I would have
thought thread interruption was a fairly basic facility for stopping a
blocked thread.

Jane
Jul 18 '05 #1
19 Replies


ja***********@hotmail.com (Jane Austine) wrote in message news:<ba**************************@posting.google.com>...
As far as I know python's threading module models after Java's.
However, I can't find something equivalent to Java's interrupt and
isInterrupted methods, along with InterruptedException. [snip]


Well, I haven't gotten any answers since I posted this. Meanwhile, I
have been searching myself. Something new was added to the thread
module in 2.3: interrupt_main. But, unfortunately, it is the opposite
of what I expected; it interrupts the main thread.

After all this, I am a bit disappointed in Python. (It's sad.)
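For reference, interrupt_main raises KeyboardInterrupt in the main thread (the module was renamed _thread in Python 3). A minimal sketch of the behavior; the 0.2-second delay is an arbitrary choice for illustration:

```python
import _thread    # Python 2 spells this "import thread"
import threading
import time

def interrupter():
    time.sleep(0.2)             # let the main thread start blocking first
    _thread.interrupt_main()    # raise KeyboardInterrupt in the main thread

threading.Thread(target=interrupter).start()

try:
    time.sleep(60)              # the main thread blocks here...
except KeyboardInterrupt:
    print("main thread interrupted")   # ...and wakes up early
```

So it really is the inverse of Java's somethread.interrupt(): any thread can interrupt the main thread, but not the other way around.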

Jane
Jul 18 '05 #2

As far as I know python's threading module models after Java's. [snip]

After all this, I am a bit disappointed about Python. (it's sad)

Did you try condition objects? threading.Condition

Example from the library reference:

# Consume one item
cv.acquire()
while not an_item_is_available():
    cv.wait()
get_an_available_item()
cv.release()

# Produce one item
cv.acquire()
make_an_item_available()
cv.notify()
cv.release()
You can block a thread with a condition object by calling its 'wait()'
method. You can call the 'notify()' method from another thread, and
that will 'interrupt' the blocked thread.

It is not the very same thing, but I suspect you can use it for the
same purposes.
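To make the library-reference fragment above concrete, here is a self-contained version; the an_item_is_available / make_an_item_available names from the fragment become a plain list check (illustrative only, not a real API):

```python
import threading

items = []                       # shared state guarded by the condition
cv = threading.Condition()

def consumer():
    with cv:                     # equivalent to cv.acquire()/cv.release()
        while not items:         # "while not an_item_is_available()"
            cv.wait()            # blocks until another thread notifies
        print("consumed", items.pop())

def producer():
    with cv:
        items.append("spam")     # "make_an_item_available()"
        cv.notify()              # wakes the blocked consumer

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()                         # prints: consumed spam
```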

Cheers,

Laci 1.0


Jul 18 '05 #3

Did you try condition objects? threading.Condition

[snip]


Thanks, but it doesn't give a solution. The problem is that there can
be multiple condition variables, and I have to interrupt the thread no
matter which condition variable it is waiting on.
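One way around the multiple-condition-variable problem (a sketch, not a standard recipe: the shared Event, the exception class, and the polling interval are all invented here) is a single "interrupted" flag polled inside every wait loop with a timeout:

```python
import threading

interrupted = threading.Event()          # one flag shared by all waits

class ThreadInterrupt(Exception):
    """Raised inside a thread when another thread interrupts its wait."""

def interruptible_wait(cv, predicate, poll=0.1):
    # Wait on any Condition until predicate() holds, but give up
    # (with an exception) as soon as 'interrupted' is set.
    with cv:
        while not predicate():
            if interrupted.is_set():
                raise ThreadInterrupt
            cv.wait(timeout=poll)        # wake up to re-check the flag

# interrupted.set() from any thread then "interrupts" every such wait,
# no matter which condition variable it is blocked on.
```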
Jul 18 '05 #4

Jane wrote:
>[snip]
Well, I haven't got any answer since I posted it. Meanwhile, I have
been searching for it myself. Something new has been added in 2.3 in
thread module. That's interrupt_main. But, unfortunately, it is the
opposite of what I expected; It interrupts the main thread.

After all this, I am a bit disappointed about Python. (it's sad)

Did you try condition objects? threading.Condition

[snip]
Thanks, but it doesn't give a solution. The problem is that there can
be multiple condition variables and I have to interrupt the thread no
matter what condition variable is waiting.


What's your actual use case (IOW, what programming problem are you trying to
solve)? Maybe there is a Python solution that will work for you, but in order
to help you find it, people here will need to better understand what you're
trying to do.

You've noticed that there isn't an identical construct for what you were doing
in Java, so it may be that the Python way will be a completely different
approach to the problem rather than just a direct conversion from Java to
Python syntax.

-Dave
Jul 18 '05 #5

Dave Brueck wrote:
What's your actual use case (IOW, what programming problem are you trying to
solve)? Maybe there is a Python solution that will work for you, but in order
to help you find it, people here will need to better understand what you're
trying to do.

You've noticed that there isn't an identical construct for what you were doing
in Java, so it may be that the Python way will be a completely different
approach to the problem rather than just a direct conversion from Java to
Python syntax.

Well, I'll give an example from something I tried to do a few years ago
which I couldn't translate to Python, though I took a stab at it.

The language in question was a VM language with its own internal
process model. The process model was entirely internal to one OS
process, so it was a cooperative model of multithreading.

The application I wrote was a web server on the front of an application,
so that web requests were all handled in the application. (IOW, the
application had a web server interface.)

The language process model had both the ability to set process
priorities, as well as allow processes to sleep and cede control to
other processes.

When a web request would come in, the handling of the request would be
forked as a separate process so that the server could accept the next
request. Both the server process and the handling process(es) were at
the same priority level, so in theory each process would run to
completion before allowing another process to run. For this reason,
both the server process and the handling processes would issue a 'yield'
at periodic strategic points, allowing themselves to temporarily halt
so that the next waiting process of equal priority could run.
This allowed both the server to remain responsive and all handling
processes to run efficiently.

Behind all of this was a higher-priority process running, but it
basically would sleep for several minutes (sleeping allowed lower
priority processes, like the server process, to run). When the
background process would wake up, since it was a higher priority, it
would immediately take control. Its main job was to check the list of
handling processes for any that had been running too long (long-running
processes in a web server meant that something had gone wrong) and
terminate them (freeing up the process and socket resources, etc.).
Then it (the cleanup process) would go back to sleep and let the lower
priority processes run.

At one point I attempted to translate this all into Python, but the
lack of the ability to set process priorities, or for processes to
yield control to other processes, or an effective way to terminate
processes, kept me from doing this. The Python threading model seemed
very primitive compared to what I was used to with regard to how
threads can be controlled.

Rather than being a knock on Python, I would prefer to know how to do
all this, if it can be done.
Jul 18 '05 #6

Jay O'Connor wrote:

.... When the background
process would wake up, since it was a higher priority, it would
immediately take control. It's main job was to check the list of
handling processes for any that had been running too long (long running
processes in a web server meant that something had gone wrong) and
terminate them (freeing up the process and socket resources, etc..).


Can you describe the nature of the "something had gone wrongs" that
you were trying to handle? It's a very important point for a design
like this. Could these processes launch external programs which
might not exit in time? Were they possibly buggy, encountering for
example endless loops? Were they dynamically loaded code written by
others, which could mean malicious behaviour was possible? Or something
else?

Depending on the answer, it will be either very easy to handle in Python,
or very hard, or potentially impossible in a straightforward fashion.

-Peter
Jul 18 '05 #7

Peter Hansen wrote:
Jay O'Connor wrote:

[snip]


Can you describe the nature of the "something had gone wrongs" that
you were trying to handle? It's a very important point for a design
like this. Could these processes launch external programs which
might not exit in time? Were they possibly buggy, encountering for
example endless loops? Were they dynamically loaded code written by
others, which could mean malicious behaviour was possible? Or something
else?

Depending on the answer, it will be either very easy to handle in Python,
or very hard, or potentially impossible in a straightforward fashion.


The problem is that we didn't know what it could be. We had exception
handlers at lower levels sufficient to handle coding bugs and file I/O
issues. What usually killed us was a network error that terminated
the connection in mid-transaction at some point, or even just the user
hitting 'cancel' mid-stream.

FWIW - the 'cleanup' process running at higher priority also took care
of some bookkeeping, stats collecting, some memory management
(releasing cached objects untouched for a while so they could be GC'd),
etc.
Jul 18 '05 #8

Jay wrote:
You've noticed that there isn't an identical construct for what you were
doing in Java, so it may be that the Python way will be a completely different
approach to the problem rather than just a direct conversion from Java to
Python syntax.
Well, I'll give an example from something I tried to do a few years ago
which I couldn't translate to Python, though I took a stab at it.

[snip]


So, if I understand this correctly, this amounted to a manual task switcher,
right? (as no more than one job was allowed to run concurrently). Was this
really the desired behavior? If each process halts while the others run and
does so only to prevent starvation in the other processes, wouldn't you get
more or less the same results by just using normal forking/threading wherein
the OS ensures that each process/thread gets a slice of the CPU pie? (IOW,
unless there was some other reason, it seems that the explicit yielding was to
work around the constraints of the process model).
[snip] Its main job was to check the list of handling processes for any
that had been running too long and terminate them (freeing up the
process and socket resources, etc.).


If you're using forking to handle new processes, then you can simply kill them
from Python and you'll get the same behavior you specify above. Stopping a
thread isn't supported because there's no way to know the state of the
resources that were in use when the thread was killed (e.g. what if it was at
the time holding a mutex that other threads will want to use?) - this isn't
really a Python issue but an OS one. The fact that Java threads even _had_ a
stop() method seems dangerous, and it looks like Sun thinks so too:

http://java.sun.com/j2se/1.4.2/docs/...ng/Thread.html - "void stop()
Deprecated. This method is inherently unsafe. "
(I mention this since that's what the OP was using)

So... what was going wrong that warranted killing the process in the first
place? In practice the need for that is pretty rare, especially for a server,
the main case coming to mind being the execution of user-supplied code (which
is pretty scary in and of itself!).
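As a footnote, if killable workers really are required, the usual escape hatch today is to run each handler in a separate process, which the OS can terminate without corrupting the parent's state. A sketch using the multiprocessing module (which postdates this thread); the half-second timeout is an arbitrary stand-in for a "too long" threshold:

```python
import multiprocessing
import time

def handler():
    time.sleep(60)    # stands in for a request that runs too long

if __name__ == "__main__":
    p = multiprocessing.Process(target=handler)
    p.start()
    p.join(timeout=0.5)      # our "running too long" threshold
    if p.is_alive():
        p.terminate()        # the OS reclaims the process's resources
        p.join()
    print("worker exit code:", p.exitcode)  # negative on Unix: killed by a signal
```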

-Dave
Jul 18 '05 #9

Jay O'Connor fed this fish to the penguins on Tuesday 02 December 2003
07:48 am:

The language in question was a VM language with it's own internal
process model. The process model was all inteneral to on eOS process
to it was a cooperative model of multi-threading
There's the first point of departure: Python threads (as long as you
aren't in a C language number crunching extension) are preemptive
scheduled, on something like a 10-20 byte-code interval. Cooperative
basically means /you/ had to handle the scheduling of threads; in
Python you don't, it happens automatically.

The language process model had both the ability to set process
priorities, as well as allow processes to sleep and cede control to
other processes.
Besides the built-in scheduler, invoking any type of blocking
operation (sleep, wait for a condition, etc.) will also trigger
scheduler swaps.
When a web request would come in, the handling of the request would be
forked as a seperate process so that the server could accept the next
request. Both the server process and the handling process(es) were at
the same priority level so in theory each process would run to
completion before allowing another process to run. For this reason,
Not a matter for Python... Let the main server create a new thread
with the connection information, and let it go back to waiting for
another connection request -- while it is waiting, the other thread(s)
take turns running.
both the server process and the handling processes would issue a
'yield' at periodic strategic points, allowing themselves to
temporarily halt and for the next process of equal priority that was
waiting to run. This allowed both the server to remain responsive and
all handling processes to run efficiently.
No need for an explicit yield; the scheduler will swap among ready
threads as needed (or as triggered by anything that causes a blocking
operation -- I'd expect even I/O requests to cause a suspend and thread
swap).
immediately take control. It's main job was to check the list of
handling processes for any that had been running too long (long
running processes in a web server meant that something had gone wrong)
and terminate them (freeing up the process and socket resources,
etc..). Then it (the cleanup process) would go back to sleep and let
the lower priority processes run.
Now that may be the tricky part -- though if you read the Windows
programming guides, force termination of threads is strongly
discouraged as there is pretty much no possibility of recovering
resources. If the thread has a possibility of running overly long, I'd
expect there is either a loop or an unsatisfied read. For the read
request, I'd try to code a time-out into the thread (which may then
loop a few times retrying the operation before aborting the thread --
from inside, not from outside). For the loop -- embed a non-blocking
test on some event/lock/queue which your manager task can set.
Heck, define "running too long" as a time, and you could maybe even use
a timer thread rather than a background manager. For each connection,
you spawn the handler thread AND start a timer thread, passing it the
handler ID; when the timer expires, let IT set the "abort" signal for
the handler, and then maybe do a thread join (wait for termination of
the handler) before the timer itself ends its existence.
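That timer-per-connection idea is easy to sketch with threading.Timer plus an Event; everything here (the names, the 0.2-second limit) is illustrative:

```python
import threading

def handler(abort):
    # A handler that polls its own abort flag between units of work.
    while not abort.is_set():
        abort.wait(0.05)     # stands in for one chunk of real work
    print("handler shut itself down")

abort = threading.Event()
worker = threading.Thread(target=handler, args=(abort,))
watchdog = threading.Timer(0.2, abort.set)   # "running too long" = 0.2s

worker.start()
watchdog.start()
worker.join()   # returns once the watchdog fires and the loop exits
```

Note that the handler terminates itself in response to the signal; nothing is force-killed from outside.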
At one point I attempted to translate this all into Python, but the
lack of the ability to set process priorities, or for processes to
I suspect process/thread priorities are OS dependent (on Windows the
PROCESS has a priority /class/, and each thread in the process has a
priority level relative to the base for the process). You likely have
to invoke the actual OS specific calls for that.
yield control to other processes, or an effective way to terminate
"Yield"ing takes place automatically. And for terminating, that is
really OS dependent -- I know of two OSs where one it is recommended
that one never force terminate; one is supposed to have the thread
terminate in response to some condition sent to it. (BTW: in Windows
nomenclature, multiple threads are all part of a single process,
sharing one memory space).
proceses,
kept me from doing this. The Python threading model seemed very
primitve compared to what I was used to in regards to how threads can
be controlled.
Whereas I view COOPERATIVE tasking as the primitive model (it's what
WfW3.11 used at the user interface level -- every program ran as a
cooperatively scheduled thread of the window manager).
Rather than being a knock on Python, I would prefer to know how to do
all this, if it can be done.
By throwing out any preconception of the threading model and starting
from scratch... Being preemptive, the task scheduling is a given, you
don't worry about it. The rest is a matter of determining a responsive
form of IPC, letting each thread handle its own termination.
--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Bestiaria Home Page: http://www.beastie.dm.net/
Home Page: http://www.dm.net/~wulfraed/


Jul 18 '05 #10

Dave Brueck wrote:
Jay wrote:

[snip]

So, if I understand this correctly, this amounted to a manual task
switcher, right? (as no more than one job was allowed to run
concurrently) [snip]


Partially, the processes were sharing a (sometimes very large) common
in-memory object web, so forking separate OS-level threads was not
feasible. Also, the server ran on multiple OS platforms, so OS-level
tricks or threading mechanisms were not viable.
So... what was going wrong that warranted killing the process in the first
place? In practice the need for that is pretty rare, especially for a server,
the main case coming to mind being the execution of user-supplied code (which
is pretty scary in and of itself!).


It didn't happen that often, but it was mostly due to the client
disconnecting prematurely, and when your server is handling heavy
traffic, it does become a resource issue after a while. Like I said in
another post, there was also bookkeeping done during this time as well.
Jul 18 '05 #11

Jay O'Connor fed this fish to the penguins on Tuesday 02 December 2003
08:59 am:


The problem is that we didn't know what it could be. We had exception
handlers at lower levels sufficient to handle coding bugs and file i/o
issues. What ususally killed us was a network error that terminated
the connection in mid transaction at some point or even just the user
hitting 'cancel' mid stream
Should have been able to code handlers for connection problems in the
same thread handling the connection... I/O with time-outs rather than
indefinite blocks, that sort of thing. (I'll have to confess I've not
done much socket I/O).
FWIW - the 'cleanup' process running at higher priority also took care
of some bookkeeping, stats collecting, some memory management
I suspect the emphasis would have to be on the "collecting" <G>. Don't
know what type of statistics you were gathering, but if it were things
like counting bytes transferred, clock time used (I don't think there
is a means of getting thread-specific processor time), etc., I'd
probably create a thread that blocks on a queue.get(), have the
individual handler threads collect the statistics, and send them via a
queue.put() to the logger as part of the thread shutdown.
(releasing cached objects untouched for awhile so they could be GC),
I'd create a "cache" module, and let /it/ do the clean-up whenever a
handler thread does a look-up into the cache -- maybe let the cache run
as a thread too, using queues (oh, I should emphasize that when I use
"queue" here, I mean those implemented by Python's Queue module -- an
IPC structure).

Cache thread pseudo-code:

while 1:
    # blocking call; objectData is None for a lookup
    (requestor, opcode, objectID, objectData) = cachequeue.get()
    if opcode == SHUTDOWN:
        # do whatever is needed to clean up, then exit the thread
        break
    if opcode == LOOKUP:
        # see if objectID is in the cache; if it is, return it
        requestor.put((SUCCESS, objectID, objectData))
        # else return failure and let the requestor find it, or
        # have this thread do the finding and caching, then return it
    if opcode == CACHE:
        # if the requestor had to find it, it can give it to the cache;
        # do whatever is needed to store objectData under objectID
        pass

    # do timestamp maintenance for objectID, then scan and purge
    # unused items

handler thread pseudo-code:

....

cachequeue.put((myQueue, LOOKUP, objectID, None))  # unblocks cache
(result, confirmID, objectData) = myQueue.get()    # blocks for cache

....

The cachequeue instance would be process global (all threads know of
it), myQueue is thread local, and passed to the cache thread as part of
the request so the cache knows who to send the response to.

etc...
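A runnable miniature of that request/response protocol, using Python's Queue module as described (the opcodes and the four-field message format here are made up for illustration):

```python
import queue        # spelled "Queue" in Python 2
import threading

LOOKUP, CACHE, SHUTDOWN = range(3)
SUCCESS, FAILURE = range(2)

cachequeue = queue.Queue()          # process-global: all threads know it

def cache_thread():
    store = {}
    while True:
        requestor, opcode, objectID, objectData = cachequeue.get()
        if opcode == SHUTDOWN:
            break
        elif opcode == LOOKUP:
            if objectID in store:
                requestor.put((SUCCESS, objectID, store[objectID]))
            else:
                requestor.put((FAILURE, objectID, None))
        elif opcode == CACHE:
            store[objectID] = objectData

t = threading.Thread(target=cache_thread)
t.start()

my_queue = queue.Queue()                         # thread-local reply queue
cachequeue.put((my_queue, CACHE, "obj1", 42))    # store something
cachequeue.put((my_queue, LOOKUP, "obj1", None)) # then look it up
print(my_queue.get())                            # (SUCCESS, 'obj1', 42)
cachequeue.put((None, SHUTDOWN, None, None))
t.join()
```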




Jul 18 '05 #12

Dennis Lee Bieber wrote:
There's the first point of departure: Python threads (as long as you
aren't in a C language number crunching extension) are preemptive
scheduled, on something like a 10-20 byte-code interval. Cooperative
basically means /you/ had to handle the scheduling of threads; in
Python you don't, it happens automatically.


I stand corrected. The following code behaves exactly like I would want
it to rather than how I feared it would. Is this a change? I don't
seem to recall the same result when I tried it initially with Python
1.5.2....

--------
from threading import Thread

def test1():
    while True:
        print "test1"

def test2():
    while True:
        print "test2"

t1 = Thread(target=test1)
t2 = Thread(target=test2)

print "starting"

t1.start()
t2.start()
print "started"

while True:
    print "main"
---------------

Jul 18 '05 #13

Jay wrote:
So, if I understand this correctly, this amounted to a manual task switcher,
right? (as no more than one job was allowed to run concurrently). Was this
really the desired behavior? If each process halts while the others run and
does so only to prevent starvation in the other processes, wouldn't you get
more or less the same results by just using normal forking/threading wherein
the OS ensures that each process/thread gets a slice of the CPU pie? (IOW,
unless there was some other reason, it seems that the explicit yielding was to
work around the constraints of the process model)
Partially, the processes were sharing a (sometimes very large) common
in-memory object web, so forking separate OS-level threads was not
feasible. Also, the server ran on multiple OS platforms, so OS-level
tricks or threading mechanisms were not viable.


Ok, but it should still be very doable with just threads (as opposed to forking
new processes - although forking might still work well with shared memory), and
preemptive threading is more the norm than not these days, at least on the OS's
that Python runs on - so having the OS schedule the threads doesn't require any
special "tricks", it just works.
So... what was going wrong that warranted killing the process in the first
place? In practice the need for that is pretty rare, especially for a server,
the main case coming to mind being the execution of user-supplied code (which
is pretty scary in and of itself!).
It didn't happen that often, but it was mostly due to the client
disconnecting prematurely, and when your server is handling heavy
traffic, it does become a resource issue after a while.


Having the handler detect and clean up by itself after a premature client
disconnect is both cleaner and results in _lower_ average resource usage
because it could be detected much more quickly and reliably than some watchdog
thread (this approach can work well on both poll-based and threaded connection
handling). You'd also get the benefit of more information in making the
decision: a watchdog cannot distinguish between slow progress and no progress.
Like I said in
another post, there was also bookkeeping done during this time as well.


Again though, you get the scheduling "for free", so e.g. there's no problem
with having a bookkeeping thread in Python either.

-Dave
Jul 18 '05 #14

In article <9b************@beastie.ix.netcom.com>,
Dennis Lee Bieber <wl*****@ix.netcom.com> wrote:

There's the first point of departure: Python threads (as long as you
aren't in a C language number crunching extension) are preemptive
scheduled, on something like a 10-20 byte-code interval. Cooperative
basically means /you/ had to handle the scheduling of threads; in
Python you don't, it happens automatically.


Actually, that's not really correct. Cooperative threading means that
there's no way to force a thread to yield up a timeslice, which is
precisely the way Python works, because any bytecode can last an
arbitrarily long time (consider 100**100**100). It is true that the
Python core does switch between bytecodes, but that should not be
considered preemptive in the traditional sense. OTOH, Python threads
are built on top of OS-level threads, so extensions that release the GIL
do run under the preemptive OS scheduler. To put it another way, the
reason why Python threads aren't preemptive is because each Python
thread acquires an OS-level lock that only the Python core will release,
one thread at a time.
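(A historical footnote: the "10-20 byte-code interval" became the 100-bytecode default of sys.setcheckinterval, and in Python 3.2+ a time-based switch interval. The effect both posters describe is easy to see: the spin loop below never yields voluntarily, yet the main thread still gets scheduled because the interpreter forcibly drops the GIL:)

```python
import sys
import threading

print(sys.getswitchinterval())   # Python 3.2+: seconds between forced GIL drops

done = threading.Event()

def spin():
    # Pure-Python busy loop: it never blocks or yields on purpose,
    # but the interpreter preempts it at every switch interval.
    while not done.is_set():
        pass

t = threading.Thread(target=spin)
t.start()
done.set()      # this line still gets CPU time despite the busy loop
t.join()
print("main thread ran anyway")
```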
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

Weinberg's Second Law: If builders built buildings the way programmers wrote
programs, then the first woodpecker that came along would destroy civilization.
Jul 18 '05 #15

Jay O'Connor fed this fish to the penguins on Tuesday 02 December 2003
09:55 am:


I stand corrected. The following code behaves exactly like I would want
it to rather than how I feared it would. Is this a change? I don't
seem to recall the same result when I tried it initially with Python
1.5.2....
I'll have to confess that I didn't even know 1.5.2 had threads (and
since I purged myself of all old books, I still don't know if it does).



Jul 18 '05 #16

Aahz fed this fish to the penguins on Tuesday 02 December 2003 17:26 pm:

Actually, that's not really correct. Cooperative threading means that
there's no way to force a thread to yield up a timeslice, which is
precisely the way Python works, because any bytecode can last an
arbitrarily long time. [snip]
A matter of level -- cooperative, to me, implies the application code
has to explicitly invoke the scheduler by some means. The Python
byte-code interpreter is not the "application" at that point, but part
of the runtime system. From the viewpoint of the end-user code, it
appears preemptive. Long running operations could be looked on as a
very high priority thread causing starvation of regular threads.

Of course, all this is just an attempt to describe the behavior as it
looks to a user.
--
==============================================================
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net       | Bestiaria Support Staff
==============================================================
Bestiaria Home Page: http://www.beastie.dm.net/
Home Page: http://www.dm.net/~wulfraed/


Jul 18 '05 #17

Aahz wrote:

In article <9b************@beastie.ix.netcom.com>,
Dennis Lee Bieber <wl*****@ix.netcom.com> wrote:

There's the first point of departure: Python threads (as long as you
aren't in a C language number crunching extension) are preemptive
scheduled, on something like a 10-20 byte-code interval. Cooperative
basically means /you/ had to handle the scheduling of threads; in
Python you don't, it happens automatically.


Actually, that's not really correct. Cooperative threading means that
there's no way to force a thread to yield up a timeslice, which is
precisely the way Python works, because any bytecode can last an
arbitrarily long time (consider 100**100**100). It is true that the
Python core does switch between bytecodes, but that should not be
considered preemptive in the traditional sense.


I would say that Python is actually much closer to preemptive in the
traditional sense than it is to being cooperative.

Yes, some bytecodes could have almost unbounded duration under some
(perverse?) conditions, but the parallel between bytecodes and opcodes
in machine language is very strong, and you can't interrupt (most)
machine opcodes either... they just take much less time to execute,
and their durations generally span many times fewer orders of
magnitude...

Thinking of Python as preemptive, but in a "softer" sense, is in my
opinion the most useful view...

-Peter
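The interpreter-driven switching discussed above can be observed directly: neither thread below ever yields explicitly, yet both run to completion under the interpreter's own scheduling (the thread names are arbitrary):

```python
import threading

results = []

def worker(name, count):
    # No explicit yield anywhere: the interpreter switches between
    # threads on its own while both loops run.
    for i in range(count):
        results.append((name, i))

threads = [threading.Thread(target=worker, args=(name, 1000))
           for name in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both threads ran to completion; each list.append executes atomically
# under the GIL, so no appends are lost.
```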
Jul 18 '05 #18

You might be able to solve your problem in a couple of ways.
A sample scenario might be that you are using
a condition variable and are blocked in
cv.wait(). The user selects "exit" from a menu and you would like to
interrupt the cv.wait() so that the program may exit.

while not cond():
    cv.wait()
processData()

These lines of code require another thread to modify the condition and
call cv.notifyAll()

The cond() function could be modified so that it also becomes true once
"exit" has been selected:

def cond():
    return origCond() or exitSelected
Alternatively a semaphore could be used:

s = Semaphore(2)

# on startup
s.acquire()  # This one will be released upon selecting "exit".

# later on
s.acquire()  # Consumes a slot and continues, or waits for an available slot.
processData()

On "exit", calling s.release() will wake up the above code.

Finally, since both of these ideas may be a bit of a headache you
might find
setting Thread.setDaemon(True) to be useful. This allows your program
to terminate when the main thread terminates even if other threads are
running.
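A minimal runnable sketch of the flag-plus-notifyAll idea above (cond, exit_selected, and worker are illustrative names, not library API; the modern notify_all spelling and with-statement are used):

```python
import threading

# Illustrative names (cond, exit_selected, worker) -- not library API.
cv = threading.Condition()
data_ready = False
exit_selected = False
processed = []

def cond():
    # True when there is data to process *or* the user chose "exit".
    return data_ready or exit_selected

def worker():
    with cv:
        while not cond():
            cv.wait()
        if exit_selected:
            return  # "interrupted": unwind instead of processing
        processed.append("data")

t = threading.Thread(target=worker)
t.start()

# Later, e.g. from the GUI's "exit" handler: set the flag, wake the waiter.
with cv:
    exit_selected = True
    cv.notify_all()
t.join()
```

The blocked thread wakes from cv.wait(), sees the exit flag, and returns without processing.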
Jul 18 '05 #19

Python has two thread APIs: the primitive 'thread' API and the more
recent 'threading' API.

The preferred way to do thread programming in Python is to derive your
own thread class from the 'Thread' class in the 'threading' module and
implement your own interrupt and terminate methods for the thread. The
API as such does not provide any such methods.

For example, a 'dumb' thread, which simply runs until the function
passed as its target finishes, can be created as:

import threading

t = threading.Thread(target=target_func)
t.start()

whereas, a better way is ...

import time

class MyThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.end_flag = False
        self.thread_suspend = False
        self._sleeptime = 0.0
        self.thread_sleep = False

    def run(self):
        while not self.end_flag:
            # Optional sleep
            if self.thread_sleep:
                time.sleep(self._sleeptime)
            # Optional suspend
            while self.thread_suspend:
                time.sleep(1.0)
            self.target_func()

    def target_func(self):
        # Do your stuff here
        pass

    def terminate(self):
        """ Thread termination routine """
        self.end_flag = True

    def set_sleep(self, sleeptime):
        self.thread_sleep = True
        self._sleeptime = sleeptime

    def suspend_thread(self):
        self.thread_suspend = True

    def resume_thread(self):
        self.thread_suspend = False

The above thread class is a more intelligent thread which can be
terminated, suspended, resumed and forced to sleep for some fixed
time.

You can add more attributes to this class to make it richer, like
possibly adding signal handling routines.
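The same terminate pattern can also be written with threading.Event, whose wait() doubles as an interruptible sleep; a hedged sketch (StoppableThread is an illustrative name, not part of the API):

```python
import threading
import time

class StoppableThread(threading.Thread):
    # "StoppableThread" is an illustrative name; this mirrors the
    # terminate() idea above but uses threading.Event instead of a bool.
    def __init__(self):
        threading.Thread.__init__(self)
        self._stop_event = threading.Event()
        self.iterations = 0

    def run(self):
        while True:
            self.iterations += 1
            # wait() doubles as an interruptible sleep: it returns True
            # the moment terminate() sets the event, ending the loop at
            # once instead of after a full sleep.
            if self._stop_event.wait(0.01):
                break

    def terminate(self):
        self._stop_event.set()

t = StoppableThread()
t.start()
time.sleep(0.05)
t.terminate()
t.join()
```

The advantage over a plain boolean flag is that a sleeping thread wakes immediately when terminated, rather than only at its next loop check.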

rgds

-Anand Pillai

Jay O'Connor <jo******@cybermesa.com> wrote in message news:<bq**********@reader2.nmix.net>...
Dave Brueck wrote:
What's your actual use case (IOW, what programming problem are you trying to
solve)? Maybe there is a Python solution that will work for you, but in order
to help you find it, people here will need to better understand what you're
trying to do.

You've noticed that there isn't an identical construct for what you were doing
in Java, so it may be that the Python way will be a completely different
approach to the problem rather than just a direct conversion from Java to
Python syntax.

Well, I'll give an example from something I tried to do a few years ago
which I couldn't translate to Python, though I took a stab at it.

The language in question was a VM language with its own internal
process model. The process model was all internal to one OS process,
so it was a cooperative model of multi-threading.

The application I wrote was a web server on the front of an application,
so that web requests were all handled in the application (IOW, the
application had a web server interface).

The language process model had both the ability to set process
priorities, as well as allow processes to sleep and cede control to
other processes.

When a web request would come in, the handling of the request would be
forked as a separate process so that the server could accept the next
request. Both the server process and the handling process(es) were at
the same priority level so in theory each process would run to
completion before allowing another process to run. For this reason,
both the server process and the handling processes would issue a 'yield'
at periodic strategic points, allowing themselves to temporarily halt
and letting the next waiting process of equal priority run.
This allowed both the server to remain responsive and all handling
processes to run efficiently.

Behind all of this was a higher priority process running, but it
basically would sleep for several minutes (sleeping allowed lower
priority process, like the server process to run). When the background
process would wake up, since it was a higher priority, it would
immediately take control. Its main job was to check the list of
handling processes for any that had been running too long (long running
processes in a web server meant that something had gone wrong) and
terminate them (freeing up the process and socket resources, etc..).
Then it (the cleanup process) would go back to sleep and let the lower
priority processes run.

At one point I attempted to translate this all into Python, but the lack
of the ability to set process priorities, for processes to yield
control to other processes, or an effective way to terminate processes,
kept me from doing this. The Python threading model seemed very
primitive compared to what I was used to with regard to how threads can
be controlled.

Rather than being a knock on Python, I would prefer to know how to do
all this, if it can be done.

Jul 18 '05 #20
