Will Python ever have signal handlers in threads?

I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?

--
Antoon Pardon
Jul 18 '05 #1
On 2004-11-09, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?

Well I guess that the lack of response means the answer is:

Not likely.
Pity!

--
Antoon Pardon
Jul 18 '05 #2
Antoon Pardon wrote:
On 2004-11-09, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?

Well I guess that the lack of response means the answer is:

Not likely.


Sorry that I don't have time to go through the code you mention, but recently
for ipython I had to deal with signal handling in threads. You can look in
IPython's Shell.py for the hacks I ended up using, if you care.

The gist of it is this function, which gets installed into a gtk/WX timer:

def runcode(self):
    """Execute a code object.

    Multithreaded wrapper around IPython's runcode()."""

    # lock thread-protected stuff
    self.ready.acquire()

    # Install sigint handler
    signal.signal(signal.SIGINT, sigint_handler)

    if self._kill:
        print >>Term.cout, 'Closing threads...',
        Term.cout.flush()
        for tokill in self.on_kill:
            tokill()
        print >>Term.cout, 'Done.'

    # Run pending code by calling parent class
    if self.code_to_run is not None:
        self.ready.notify()
        self.parent_runcode(self.code_to_run)
        # Flush out code object which has been run
        self.code_to_run = None

    # We're done with thread-protected variables
    self.ready.release()
    # This MUST return true for gtk threading to work
    return True

As you see, I ended up installing the handler on *every* invocation of the
function, after taking a lock. I couldn't find how to assign the handler to
the thread, so this was my brute-force approach.

This is incredibly ugly, and there may be a better solution. But in this
particular case, the hack works and the penalty is not noticeable to end users.

Best,

f

Jul 18 '05 #3
> I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?


I think according to the POSIX thread specs, signals are only delivered to
the process and never to threads... I could be wrong though...

--
damjan
Jul 18 '05 #4
On 9 Nov 2004 11:56:32 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?

Do you have a specific use in mind? E.g., I believe there are things you can
do in the main thread that won't work in others, and other things
can be solved by appropriate communication between threads, etc.,
but without more clues it's hard to discuss it. I didn't look at
the example because I don't want to guess what "similar code" you
really are interested in ;-)

Regards,
Bengt Richter
Jul 18 '05 #5

Fernando,

Just curious, does 'gtk/WX' in your message above mean wxPython?

If so, does this signal handling code actually work with wxPython?

/Jean Brouwers

Jul 18 '05 #6
Jean Brouwers wrote:

Fernando,

Just curious, does 'gtk/WX' in your message below mean wxPython?
Yes.
If so, does this signal handling code actually work with wxPython?


It does, but not in a generic manner: this is code for ipython to support
matplotlib's WX backend in an interactive shell. It allows you to type
plotting commands into ipython that cause matplotlib to open a WX plotting
window, while the interactive terminal continues to function. You can have
multiple WX plotting windows open, and the command line keeps on chugging.

But this relies on a special collaborative hack between matplotlib and ipython.
matplotlib, in its WX and GTK backends (Tk doesn't need this), has a special
flag to indicate who is in control of the mainloop. In standalone scripts,
everything works in the typical manner. But if ipython comes in, it will set
this flag, telling matplotlib to keep off the mainloop.

It's pretty hackish, but it works in practice pretty well.

here's the relevant matplotlib WX code (trimmed of docstring):

def show():

    for figwin in Gcf.get_all_fig_managers():
        figwin.frame.Show()
        figwin.canvas.realize()
        figwin.canvas.draw()

    if show._needmain and not matplotlib.is_interactive():
        wxapp.MainLoop()
        show._needmain = False

show._needmain = True

When ipython starts up, it sets show._needmain to False, so that the MainLoop()
call is never made. You can look at the whole code from ipython if you wish,
it's in IPython/Shell.py.

best,

f

Jul 18 '05 #7

Quite interesting, I'll check the Shell.py for more details. Thank you.

/Jean Brouwers


Jul 18 '05 #8
On 2004-11-12, Bengt Richter <bo**@oz.net> wrote:
On 9 Nov 2004 11:56:32 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
I have had a look at the signal module and the example
and came to the conclusion that the example won't work
if you try to do this in a thread.

So is there a chance similar code will work in a thread?

Do you have a specific use in mind?


Any code that can block or run for an unbounded time (in extension code)
and that you wish to interrupt after a certain timeout.

One thing where it could be useful is the Queue module.

AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.
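
For reference, here is a minimal sketch of the caller-side timeout behaviour
being discussed (the stdlib Queue module, renamed queue in Python 3); names
and values are illustrative only:

try:
    from queue import Queue, Empty    # Python 3
except ImportError:
    from Queue import Queue, Empty    # Python 2, as in this thread

q = Queue()

try:
    # In the Python of this era, the timeout is implemented with the
    # sleep/check loop quoted later in the thread, not a single blocking
    # wait that a per-thread signal could cut short.
    item = q.get(block=True, timeout=2.0)
except Empty:
    print("gave up after 2 seconds")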

--
Antoon Pardon
Jul 18 '05 #9
Antoon Pardon wrote:
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.


I'm fairly sure that the Queue uses an internal Event
(or REvent?) to signal when space or a new item is
available, so I believe your description (and possibly
conclusion) above is wrong. There should be no
"periodical check", as that would imply polling. Check
the source if you're interested.

-Peter
Jul 18 '05 #10
On 2004-11-15, Peter Hansen <pe***@engcorp.com> wrote:
Antoon Pardon wrote:
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.


I'm fairly sure that the Queue uses an internal Event
(or REvent?) to signal when space or a new item is
available, so I believe your description (and possibly
conclusion) above is wrong. There should be no
"periodical check", as that would imply polling. Check
the source if you're interested.


I did check the source, I just didn't study it carefully
and got my understanding mostly from the comments.
But anyway here is the relevant part and IMO we have
a polling loop here.
class Queue:

    ...

    def get(self, block=True, timeout=None):
        """Remove and return an item from the queue.

        If optional args 'block' is true and 'timeout' is None (the default),
        block if necessary until an item is available. If 'timeout' is
        a positive number, it blocks at most 'timeout' seconds and raises
        the Empty exception if no item was available within that time.
        Otherwise ('block' is false), return an item if one is immediately
        available, else raise the Empty exception ('timeout' is ignored
        in that case).
        """
        if block:
            if timeout is None:
                # blocking, w/o timeout, i.e. forever
                self.esema.acquire()
            elif timeout >= 0:
                # waiting max. 'timeout' seconds.
                # this code snippet is from threading.py: _Event.wait():
                # Balancing act: We can't afford a pure busy loop, so we
                # have to sleep; but if we sleep the whole timeout time,
                # we'll be unresponsive. The scheme here sleeps very
                # little at first, longer as time goes on, but never longer
                # than 20 times per second (or the timeout time remaining).
                delay = 0.0005  # 500 us -> initial delay of 1 ms
                endtime = _time() + timeout
                while 1:
                    if self.esema.acquire(0):
                        break
                    remaining = endtime - _time()
                    if remaining <= 0:  # time is over and no element arrived
                        raise Empty
                    delay = min(delay * 2, remaining, .05)
                    _sleep(delay)  # reduce CPU usage by using a sleep
--
Antoon Pardon
Jul 18 '05 #11
[Antoon Pardon]
...
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.


The only things CPython requires of the *platform* thread implementation are:

1. A way to start a new thread, and obtain a unique C long "identifier"
for it.

2. Enough gimmicks to build a lock object with Python's core
   threading.Lock semantics (non-reentrant; any thread T can release
   a lock in the acquired state, regardless of whether T acquired it).

Everything else is built on those, and CPython's thread implementation
is extremely portable -- and easy to port -- as a result.

Nothing in CPython requires that platform threads support directing a
signal to a thread, nor that the platform C library support an
alarm() function, nor even that the platform have a SIGALRM signal.

It's possible to build a better Queue implementation that runs only on
POSIX systems, or only on Windows systems, or only on one of a dozen
other less-popular target platforms. The current implementation works
fine on all of them, although it is suboptimal compared to what could be
done in platform-specific Queue implementations.

For that matter, even something as simple as threading.RLock could be
implemented in platform-specific ways that are more efficient than the
current portable implementation (which builds RLock semantics on top
of #2 above).
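
To illustrate point #2: here is a minimal sketch (not the actual threading.py
code) of how reentrant-lock semantics can be layered on top of the core
non-reentrant lock, which is roughly what the portable RLock implementation does:

try:
    from thread import get_ident, allocate_lock    # Python 2, the era of this thread
except ImportError:
    from threading import get_ident
    from _thread import allocate_lock              # Python 3

class SimpleRLock(object):
    """Sketch only: reentrant semantics built on a plain, non-reentrant lock."""

    def __init__(self):
        self._block = allocate_lock()   # the core lock described in #2
        self._owner = None              # ident of the owning thread
        self._count = 0                 # recursion depth

    def acquire(self):
        me = get_ident()
        if self._owner == me:           # re-entry by the owner: just count it
            self._count += 1
            return True
        self._block.acquire()           # everyone else blocks on the core lock
        self._owner = me
        self._count = 1
        return True

    def release(self):
        if self._owner != get_ident():
            raise RuntimeError("cannot release un-acquired lock")
        self._count -= 1
        if self._count == 0:            # outermost release frees the core lock
            self._owner = None
            self._block.release()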
Jul 18 '05 #12
On 15 Nov 2004 11:44:31 GMT, Antoon Pardon <ap*****@forel.vub.ac.be> wrote:
On 2004-11-15, Peter Hansen <pe***@engcorp.com> wrote:
Antoon Pardon wrote:
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.
I'm fairly sure that the Queue uses an internal Event
(or REvent?) to signal when space or a new item is
available, so I believe your description (and possibly
conclusion) above is wrong. There should be no
"periodical check", as that would imply polling. Check
the source if you're interested.


I did check the source, I just didn't study it carefully
and got my understanding mostly from the comments.
But anyway here is the relevant part and IMO we have
a polling loop here.

It looks like you are right, at least on NT, which is a shame,
since thread_nt.h only uses WaitForSingleObject with INFINITE timeout
specified, and it would seem the ideal timeout support API could be provided.
Maybe the lock acquire method could have an optional keyword arg timeout=floatingseconds,
and for NT that could be converted to milliseconds and used to wait for the mutex
with a non-infinite wait. But it's platform-dependent what low-level OS support
there is for locks with timeouts. However, a Win32 solution should benefit
a fair proportion of users, if many Windows users do threading.


So, it would seem this kind of thing could be hidden in platform-specific
files where it was really necessary. Probably the people who can do it are
too busy ;-)
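
As a historical footnote, the API sketched above did eventually appear: from
Python 3.2 on, lock.acquire() accepts an optional floating-point timeout that
is handled in the C-level lock implementation rather than by a Python-level
sleep loop. A minimal usage sketch:

import threading

lock = threading.Lock()

def worker():
    # Python 3.2+: acquire() takes a timeout in seconds and returns False
    # if the lock could not be obtained in time.
    if lock.acquire(timeout=2.0):
        try:
            pass    # critical section
        finally:
            lock.release()
    else:
        print("could not get the lock within 2 seconds")

threading.Thread(target=worker).start()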

Regards,
Bengt Richter
Jul 18 '05 #13
On Mon, 15 Nov 2004 10:56:15 -0500, Tim Peters <ti********@gmail.com> wrote:
[Antoon Pardon]
...
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.
The only things CPython requires of the *platform* thread implementation are:

1. A way to start a new thread, and obtain a unique C long "identifier"
for it.

2. Enough gimmicks to build a lock object with Python's core
threading.Lock semantics (non-reentrant; any thread T can release
a lock in the acquired state, regardless of whether T acquired it).


Would it be a big deal to give the acquire method an optional
timeout=floating_seconds keyword arg, and have e.g. thread_nt.h use
the timeout parameter in WaitForSingleObject? This could ripple up
to Condition.acquire etc. and provide improved waiting over polling.
(Of course, there's probably a number of polling processes going on
all the time in the average Windows session, but why add more ;-)
Everything else is built on those, and CPython's thread implementation
is extremely portable-- and easy to port --as a result.
Do you think just a lock timeout would make it significantly harder?
Nothing in CPython requires that platform threads support directing a
signal to a thread, neither that the platform C library support an
alarm() function, nor even that the platform have a SIGALRM signal.

It's possible to build a better Queue implementation that runs only on
POSIX systems, or only on Windows systems, or only on one of a dozen
other less-popular target platforms. The current implementation works
fine on all of them, although it is suboptimal compared to what could be
done in platform-specific Queue implementations.

For that matter, even something as simple as threading.RLock could be
implemented in platform-specific ways that are more efficient than the
current portable implementation (which builds RLock semantics on top
of #2 above).

So what do you think about a lock timeout? Just that would let you build
efficient terminatable substitutes for sleep and other timing things.

Regards,
Bengt Richter
Jul 18 '05 #14
[Bengt Richter]
Would it be a big deal to give the acquire method an optional
timeout=floating_seconds keyword arg, and have e.g. thread_nt.h use
the timeout parameter in WaitForSingleObject? This could ripple up
to Condition.acquire etc. and provide improved waiting over polling.
(Of course, there's probably a number of polling processes going on
all the time in the average Windows session, but why add more ;-)
...
Do you think just a lock timeout would make it significantly harder?
...
So what do you think about a lock timeout? Just that would let you
build efficient terminatable substitutes for sleep and other timing things.


I rarely use timeouts, and, in the cases I do, haven't seen a
measurable burden due to the current wake-up-20-times-a-second
approach. So I have no motivation to change anything here.

Someone who does would have to go through Python's 11 platform-specific
thread wrappers (these are C files in the Python/ directory, with
names of the form thread_PLATFORM.h), and figure out how to do it for
all of them -- or successfully argue that Python should no longer
support threads on those platforms for which they don't know how to
add the new core lock functionality -- or successfully argue (this one
is closer to impossible than unlikely) that platforms are free to
ignore the timeout argument.
Jul 18 '05 #15
On 2004-11-15, Tim Peters <ti********@gmail.com> wrote:
[Antoon Pardon]
...
AFAIU the Queue module doesn't block on a full/empty queue when
a timeout is specified but goes into a loop, sleeping and periodically
checking whether space/items are available. With signals that
can be sent to a thread the queue could just set an alarm and
then block as if no timeout value was set and either unblock
when space/items are available or get signalled when the timeout
period is over.


The only things CPython requires of the *platform* thread implementation are:

1. A way to start a new thread, and obtain a unique C long "identifier"
for it.

2. Enough gimmicks to build a lock object with Python's core
threading.Lock semantics (non-reentrant; any thread T can release
a lock in the acquired state, regardless of whether T acquired it).

Everything else is built on those, and CPython's thread implementation
is extremely portable-- and easy to port --as a result.

Nothing in CPython requires that platform threads support directing a
signal to a thread, neither that the platform C library support an
alarm() function, nor even that the platform have a SIGALRM signal.

It's possible to build a better Queue implementation that runs only on
POSIX systems, or only on Windows systems, or only on one of a dozen
other less-popular target platforms. The current implementation works
fine on all of them, although it is suboptimal compared to what could be
done in platform-specific Queue implementations.

For that matter, even something as simple as threading.RLock could be
implemented in platform-specific ways that are more efficient than the
current portable implementation (which builds RLock semantics on top
of #2 above).


I don't fault the current Queue implementation. I think your arguments
are very strong and I'm not arguing that the current Queue implementation
should be replaced by a dozen non-portable, system-dependent
implementations. But by limiting the signal module as it is now you make
it that much harder for people on POSIX systems to come up with a
different implementation on those systems.

The problem I have with the current implementation is not so much one
of burden on the system but one of possible "starvation" of a thread.

Suppose we have a number of consumers on a Queue, some simply blocking and
others using timeouts. The current implementation disfavors those threads
with a timeout too much IMO, because the blocking threads ask for the lock
continuously while the timeout threads only ask for the lock
periodically.

--
Antoon Pardon
Jul 18 '05 #16
[Antoon Pardon, on thread gimmicks]
Couldn't it be possible to have a general solution that works on
any kind of box
Yes, because we have that now.
and provide (or let interested parties provide) an implementation that
is better suited for particular boxes?
For the most part, that hasn't happened. Someone who cares enough
would need to volunteer (or fund) API redesign to make it feasible.

....
Look, I understand perfectly. Use of signals is never my first choice.
However there sometimes seem to be circumstances when signals
are the lesser evil. My impression is also that the Python people
seem to have some obligation to help in debugging people's
programs. This is laudable in general but I also have the impression
that this obligation puts limits on what they want to allow in the
language.

If I hear people arguing about why some things are missing I get
the impression something like the following argument line is followed:

I don't want to be bothered by the mess that can originate from
ill use of such a feature; I don't want to look through code
that doesn't work from someone who doesn't understand how to
use this feature properly. So we don't include the feature in
the language (or only put the C function in).

And that is something I find regrettable.
Look at the other ("supply") end of the development process: nothing
can possibly go into Python until someone wants it enough to volunteer
to do all the work, or enough to pay someone else to do all the work.
The former is most common, but the latter has happened too.

Now in this case, who cares? Enough to actually do it, that is? As I
said before, my experience with signals was mostly negative, so
there's no chance I'm going to volunteer my time to advance "a
feature" I dislike and won't use. Someone *could* pay me to work on
it, and I'd do that if I couldn't find a less objectionable way to get
fed <wink>.

Nothing gets in just because someone asks for it, or even just because
everyone asks for it. Someone has to do the work.

Who's that someone in this case?

The only currently active contributor I can think of who cares about
signals enough to actually work on them is Michael Hudson, but his
interest seems limited to keeping the interactions between Python's GNU
readline wrapper and signals working across incompatible details of
signal semantics on Unixish boxes. It's possible I'm wrong, and he'd
really love to work much more on signals, but feels inhibited by
Guido's well-known dislike of the beasts.

That's not what I'd bet on, though. Historically, even the limited
signal support Python supplies has been an endless maintenance burden,
hard to compare with anything else on the low end of the
bang-for-the-buck scale; maybe with the endless battle to try to
support threads *at all* across a gazillion incompatible flavors of
HP-UX.

And because it's sitting in the volunteer-development ghetto, you bet
it would be a hard sell to try to introduce "more of the same".
Initial volunteers often vanish, leaving their maintenance problems to
others. I expect (and you do too, so don't act surprised <wink>) that
"new signal features" could get vetoed for that reason alone -- the
simple observation that no current long-time contributor pushes in
this direction means there's no reason to presume that long-time
support for new signal gimmicks would exist.

....
Sure but simple Locks can't have a timeout. So if you only need locks
with a timeout then you use Queues that can only have 1 element.
I agree that having multiple threads accessing the same queue, some
with a timeout and others not, is strange. But I think it is less
strange if you have multiple threads accessing the same lock, some
with timeouts and others not.


In this case I expect it's 1000x more common to use a
threading.Condition than a Queue.Queue. Condition.wait() supports a
timeout, and a condvar is a much more obvious model for mutual
exclusion with a timeout than is a 1-element Queue.

It's still (of course) the case that Condition.wait() with a timeout
uses the same sleep/check/loop approach in the current implementation.
BTW, the Queue implementation in 2.4 doesn't have any timeout code of
its own; instead its timeout behavior is supplied by
Condition.wait()'s. From a lower-level view, confining timeout
*implementation* to Condition.wait() is a good idea for pthreads
systems, where pthread_cond_timedwait() could be used pretty much
directly.
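
A minimal sketch of the condvar-with-timeout pattern described above
(wait_for() is a convenience added in Python 3.2; on older versions you would
loop around wait() yourself, re-checking the predicate and the remaining time):

import threading

cond = threading.Condition()
items = []                      # shared state guarded by the condition's lock

def consumer(timeout=2.0):
    with cond:
        # Wait up to `timeout` seconds for the predicate to become true.
        if cond.wait_for(lambda: len(items) > 0, timeout):
            return items.pop(0)
        return None             # timed out

def producer(value):
    with cond:
        items.append(value)
        cond.notify()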
Jul 18 '05 #17
On 2004-11-17, Tim Peters <ti********@gmail.com> wrote:
[Antoon Pardon, on thread gimmicks]
Couldn't it be possible to have a general solution that works on
any kind of box
Yes, because we have that now.
and provide (or let interested parties provide) an implementation that
is better suited for particular boxes?


For the most part, that hasn't happened. Someone who cares enough
would need to volunteer (or fund) API redesign to make it feasible.

...
Look, I understand perfectly. Use of signals is never my first choice.
However there sometimes seem to be circumstances when signals
are the lesser evil. My impression is also that the Python people
seem to have some obligation to help in debugging people's
programs. This is laudable in general but I also have the impression
that this obligation puts limits on what they want to allow in the
language.

If I hear people arguing about why some things are missing I get
the impression something like the following argument line is followed:

I don't want to be bothered by the mess that can originate from
ill use of such a feature; I don't want to look through code
that doesn't work from someone who doesn't understand how to
use this feature properly. So we don't include the feature in
the language (or only put the C function in).

And that is something I find regrettable.


Look at the other ("supply") end of the development process: nothing
can possibly go into Python until someone wants it enough to volunteer
to do all the work, or enough to pay someone else to do all the work.
The former is most common, but the latter has happened too.

Now in this case, who cares? Enough to actually do it, that is? As I
said before, my experience with signals was mostly negative, so
there's no chance I'm going to volunteer my time to advance "a
feature" I dislike and won't use. Someone *could* pay me to work on
it, and I'd do that if I couldn't find a less objectionable way to get
fed <wink>.

Nothing gets in just because someone asks for it, or even just because
everyone asks for it. Someone has to do the work.


Sure, but that is not enough, is it? The work for letting one thread
raise an exception in another thread is done. But it still didn't
really get into the language. All that was provided was the C interface
and people who want to use it must provide the Python interface
themselves.
Who's that someone in this case?

The only currently active contributor I can think of who cares about
signals enough to actually work on them is Michael Hudson, but his
interest seems limited to keeping the interactions between Python's GNU
readline wrapper and signals working across incompatible details of
signal semantics on Unixish boxes. It's possible I'm wrong, and he'd
really love to work much more on signals, but feels inhibited by
Guido's well-known dislike of the beasts.

That's not what I'd bet on, though. Historically, even the limited
signal support Python supplies has been an endless maintenance burden,
hard to compare with anything else on the low end of the
bang-for-the-buck scale; maybe with the endless battle to try to
support threads *at all* across a gazillion incompatible flavors of
HP-UX.

And because it's sitting in the volunteer-development ghetto, you bet
it would be a hard sell to try to introduce "more of the same".
Initial volunteers often vanish, leaving their maintenance problems to
others. I expect (and you do too, so don't act surprised <wink>) that
"new signal features" could get vetoed for that reason alone -- the
simple observation that no current long-time contributor pushes in
this direction means there's no reason to presume that long-time
support for new signal gimmicks would exist.


I understand, but that also means one can't expect short-term
contributors to put time in this, because they are likely to
see their time invested in this go to waste.
Now I may invest some time in this anyway, purely for my own
interest. The question I have is: will I just have to fight
the normal signal behaviour, or do I have to fight Python as
well? To be more specific, the documentation of the signal
module states the following.

... only the main thread can set a new signal handler, and the main
thread will be the only one to receive signals (this is enforced by
the Python signal module, even if the underlying thread implementation
supports sending signals to individual threads).

The question I have is the following: that the main thread is the only
one to receive signals, is that purely implemented in the signal module,
or is there some interpreter magic that supports this which can
cause problems for an alternative signal module?
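
For what it's worth, the first half of that restriction is easy to observe from
Python: signal.signal() itself refuses to run outside the main thread. A minimal
sketch (the exact error message varies between versions):

import signal
import threading

def try_install():
    try:
        signal.signal(signal.SIGINT, lambda signum, frame: None)
    except ValueError as exc:
        # CPython rejects this outside the main thread, e.g.
        # "signal only works in main thread"
        print("worker thread got: %s" % exc)

t = threading.Thread(target=try_install)
t.start()
t.join()

As far as I can tell, the second half -- that handlers only ever run in the main
thread -- comes from how the interpreter dispatches pending signals (the C-level
handler merely sets a flag, and the eval loop invokes the Python-level handler
in the main thread), not from the signal module alone.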
--
Antoon Pardon
Jul 18 '05 #18
....

[Antoon Pardon]
Sure, but that is not enough, is it? The work for letting one thread
raise an exception in another thread is done.
Are you talking about the internal PyThreadState_SetAsyncExc gimmick?
If so, that's got nothing to do with signals. If that's all you want,
say so, and leave signals out of it.

As to whether the work is done, emphatically no, it isn't. I hadn't
looked at its implementation before, so just did. That uncovered a
critical bug "by eyeball", which may delay the release of Python 2.4.
Happy now <wink>?

Seriously, that's just an error in the C coding. The work for a
Python-level feature hasn't even begun. I'd guess it's about 5% of
the way there (believe it or not, writing C code is typically the
least time-consuming part of any language feature).

If you want to do that, fine. Then because this functionality is
controversial, it needs a PEP. That isn't "punishment", it's that
controversial decisions need a record of design rationale, and PEPs
are Python's way of making such records. The Python-level API also
needs design and debate. For example, the C-level API takes a thread
ID as argument, but that's a poor fit with threading.py's higher-level
view of threads. In this case it probably needs a higher-level API.
The C-level API also has a bizarre gimmick where you're supposed to do an
obscure dance if the function returns a value greater than 1. Offhand
I have no idea what that's all about, but it's clearly unacceptable
for a Python-level function (maybe some good news: I suspect that
when the critical bug just found is fixed, that fix will automatically
stop the indeterminism driving the need for the goofy API dance).
Thought also needs to be given to whether other implementations of
Python *can* supply this gimmick. It's unclear to me. The CPython
implementation of it is very simple, but relies in all respects on
implementation details unique to CPython (primarily the GIL, and that
there's an exhaustive linked list of thread states to crawl over). A
new Python-level function also needs documentation, and a good test
suite. The C-level API here isn't tested at all, which isn't good.
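
For the curious, the usual way this C API is reached without writing an
extension module is via ctypes (not yet in the standard library when this
thread was written); a sketch only, with all the caveats above still applying:

import ctypes

def async_raise(tid, exctype):
    """Ask CPython to raise `exctype` in the thread whose ident is `tid`.

    Sketch of the C-level PyThreadState_SetAsyncExc gimmick discussed
    above. The exception is only noticed while that thread is executing
    Python bytecode, never while it is blocked inside a C call.
    """
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(tid), ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("no thread with ident %r" % (tid,))
    if res > 1:
        # The "obscure dance": more than one thread state was modified,
        # so undo it by clearing the pending exception again.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")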
But it still didn't really get into the language.
Of course it wasn't. As above, this isn't yet anywhere near being a
language-level feature. To judge from Guido's checkin comment, he was
also reluctant to add it even at the C level, which may (or may not)
be another battle. FWIW, it's fine by me if this got exposed at the
Python level. People will certainly get themselves in trouble by
using it, but the CPython implementation of this is so toothless I
don't think they'll be able to screw up internal states of the
platform C library by, e.g., leaving *its* synch gimmicks in
ill-defined states when they provoke a Python thread "externally" into
async surprises.
All that was provided was the C interface and people who want to use
it must provide the python interface themselves.
Yes. And looking at the history, the C interface was in fact all that
was asked for at the time.

....
I understand, but that also means one can't expect short term
contributors to put time in this, because they are likely to
see their time invested in this, going to waste.
That's their call. Beyond noting that few things are achieved by
those who give up easily, "that's life".
Now I may invest some time in this anyway, purely for my own
interest. The question I have is, will I just have to fight
the normal signal behaviour or do I have to fight python as
well.
I'm confused now. Up until this point, it appeared that "this", in
this message, was talking about the lack of access to
PyThreadState_SetAsyncExc from the Python level. If so, that's got
nothing to do with signals, at least not in CPython. The
implementation doesn't use signals, and is 100% portable exactly the
way it is. And it's very simple. Nobody can object to *this*
function on porting or maintenance-burden grounds. They may object to
it on other grounds.
To be more specific, the documentation of the signal
module states the following.

... only the main thread can set a new signal handler, and the main
thread will be the only one to receive signals (this is enforced by
the Python signal module, even if the underlying thread implementation
supports sending signals to individual threads).

The question I have is the following. That the main thread is the only
one to receive signals, is that purely implemented in the signal module
or is there some interpreter magic that supports this, which can
cause problems for an alternative signal module.


Read the code? As I said before, I pay no attention to the signal
module, and the only way I could answer these questions is by studying
the code too. Maybe someone else here already knows, and will chip
in.
Jul 18 '05 #19
On 2004-11-19, Tim Peters <ti********@gmail.com> wrote:
...

[Antoon Pardon]
Sure, but that is not enough, is it? The work for letting one thread
raise an exception in another thread is done.
Are you talking about the internal PyThreadState_SetAsyncExc gimmick?
If so, that's got nothing to do with signals. If that's all you want,
say so, and leave signals out of it.


I'm talking about PyThreadState_SetAsyncExc only as a throw-in
response to your previous remark. The main subject is signals.
As to whether the work is done, emphatically no, it isn't. I hadn't
looked at its implementation before, so just did. That uncovered a
critical bug "by eyeball", which may delay the release of Python 2.4.
Happy now <wink>?
Well, whether the work is done or not hardly seems to matter. The
documentation as worded now seems to make it clear that this isn't
going to get into the language.
Seriously, that's just an error in the C coding. The work for a
Python-level feature hasn't even begun. I'd guess it's about 5% of
the way there (believe it or not, writing C code is typically the
least time-consuming part of any language feature).


As I understand the doc, one doesn't plan to begin working on a
Python-level feature and even if someone else implements it, it
has no chance of getting in the language. As I read the docs it
is not so much a question of the feature not being ready for the
language, but a question of Guido not wanting the feature to be
in the language.
This is just to show that having someone implement it is not
the biggest hurdle, as your previous remark seemed to suggest
to me.

But that is enough of this tangent (although I did find your
remarks about it interesting)
To be more specific, the documentation of the signal
module states the following.

... only the main thread can set a new signal handler, and the main
thread will be the only one to receive signals (this is enforced by
the Python signal module, even if the underlying thread implementation
supports sending signals to individual threads).

The question I have is the following. That the main thread is the only
one to receive signals, is that purely implemented in the signal module
or is there some interpreter magic that supports this, which can
cause problems for an alternative signal module.


Read the code? As I said before, I pay no attention to the signal
module, and the only way I could answer these questions is by studying
the code too. Maybe someone else here already knows, and will chip
in.


Well, I'll wait a few days to see if someone does; otherwise I'll see
if I can find enough time to dive into the code.
Although you can't help me with my main question, I'd like to express
my appreciation for your responses. They have been very insightful
and I'll be sure to use them to my advantage.

--
Antoon Pardon
Jul 18 '05 #20
[Antoon Pardon]
I'm talking about PyThreadState_SetAsyncExc only as a throw-in
response to your previous remark. The main subject is signals.
OK.
....
Well, whether the work is done or not hardly seems to matter. The
documentation as worded now seems to make it clear that this isn't
going to get into the language.
But you're still talking about PyThreadState_SetAsyncExc here, right?

Explained last time that it can't possibly "get into the language" in
the state it's in now. The people who originally did the work got
everything they wanted at the time: the C API function, and the
then-new thread.interrupt_main() function in Python. They didn't ask
for more than that, and they didn't work on more than that.
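
For completeness, the thread.interrupt_main() mentioned above (spelled
_thread.interrupt_main in Python 3) lets a worker thread schedule a
KeyboardInterrupt in the main thread; a minimal sketch:

import threading
import time

try:
    from thread import interrupt_main      # Python 2, the era of this thread
except ImportError:
    from _thread import interrupt_main     # Python 3

def watchdog(seconds):
    time.sleep(seconds)
    interrupt_main()    # schedule a KeyboardInterrupt in the main thread

t = threading.Thread(target=watchdog, args=(1.0,))
t.start()

try:
    while True:         # the main thread must be running Python code to notice it
        time.sleep(0.1)
except KeyboardInterrupt:
    print("interrupted by the watchdog thread")
t.join()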

Doing more work is a necessary prerequisite if people want more than
that in the language. Making the case that it should be in the
language is part of that work, but, if you haven't noticed, PEPs that
have working implementations fare much better than PEPs that don't.
Indeed, no PEP without an implementation has ever been released
<wink>.

....
As I understand the doc, one doesn't plan to begin working on a
Python-level feature and even if someone else implements it, it
has no chance of getting in the language.
Sorry, couldn't make sense of that sentence.
As I read the docs it is not so much a question of the feature not
being ready for the language, but a question of Guido not wanting the
feature to be in the language.
You're still talking about PyThreadState_SetAsyncExc? I haven't asked
Guido about it, and I can't find any design discussion of that
function anywhere. I think it got added during a European sprint. If
you want to know what he thinks, ask him. If you want a definitive
ruling, write a PEP.
This is just to show that having someone implement it is not
the biggest hurdle, as your previous remark seemed to suggest
to me.
An implementation is prerequisite to release, but isn't necessarily
sufficient. I explained in detail last time why the current C code
has no claim to being "an implementation" of a *Python*-level spelling
of this functionality, so neither a suitable implementation nor the
necessary design discussion has been done in this case.

If this is something you want, but you also want guaranteed acceptance
in advance of doing more than writing about it, that won't work. If
you ask Guido, and he can make time to answer, you may get guaranteed
rejection in advance -- or you may not. A PEP would be a good thing
to have in *either* case.

...
Although you can't help me with my main question, I'd like to express
my appreciation for your responses. They have been very insightful
and I'll be sure to use them to my advantage.


My fondest hope is that you'll be able to use them to crush Guido into
submission <wink>.
Jul 18 '05 #21
On 2004-11-21, Tim Peters <ti********@gmail.com> wrote:
[Antoon Pardon]
I'm talking about PyThreadState_SetAsyncExc only as a throw in
response to your previous remark. The main subject is signals.
OK.
...
Well, whether the work is done or not hardly seems to matter. The
documentation as worded now seems to make it clear that this isn't
going to get into the language.


But you're still talking about PyThreadState_SetAsyncExc here, right?


Yes.
Explained last time that it can't possibly "get into the language" in
the state it's in now.
And my response to that is that the documentation suggests that
that is not the main reason why it is not in the language.
The people who originally did the work got
everything they wanted at the time: the C API function, and the
then-new thread.interrupt_main() function in Python. They didn't ask
for more than that, and they didn't work on more than that.

Doing more work is a necessary prerequisite if people want more than
that in the language. Making the case that it should be in the
language is part of that work, but, if you haven't noticed, PEPs that
have working implementations fare much better than PEPs that don't.
Indeed, no PEP without an implementation has ever been released
<wink>.
Look, the documentation states this:

To prevent naive misuse, you must write your own C extension to call this.
IMO this says the reason for not having it in the language has
nothing to do with lack of implementation. In this case it seems
lack of implementation is the consequence of Guido and others
finding it to be too dangerous to be in the language. It is not that
a lack of implementation is the reason it didn't make it into the
language yet.
...
As I understand the doc, one doesn't plan to begin working on a
Python-level feature and even if someone else implements it, it
has no chance of getting in the language.
Sorry, couldn't make sense of that sentence.


That raising an exception from one thread in another thread didn't
make it into the language is so by design, not by lack of implementation.
As the documentation is now worded, it suggests very strongly that if
someone does implement it tomorrow it will simply be rejected, because
they want to prevent naive misuse.
As I read the docs it is not so much a question of the feature not
being ready for the language, but a question of Guido not wanting the
feature to be in the language.


You're still talking about PyThreadState_SetAsyncExc?


Yes.
I haven't asked
Guido about it, and I can't find any design discussion of that
function anywhere.

Well, that sentence is in the documentation. I don't know, but
something like "To prevent ..." does sound like a design decision
to me.
I think it got added during a European sprint. If
you want to know what he thinks, ask him. If you want a definitive
ruling, write a PEP.


That seems a bit odd to me. The information seems to be available
in the documentation, so why should I ask?
This is just to show that having someone implement it is not
the biggest hurdle, as your previous remark seemed to suggest
to me.


An implementation is prerequisite to release, but isn't necessarily
sufficient. I explained in detail last time why the current C code
has no claim to being "an implementation" of a *Python*-level spelling
of this functionality, so neither a suitable implementation nor the
necessary design discussion has been done in this case.

If this is something you want, but you also want guaranteed acceptance
in advance of doing more than writing about it, that won't work. If
you ask Guido, and he can make time to answer, you may get guaranteed
rejection in advance -- or you may not. A PEP would be a good thing
to have in *either* case.


I don't want guaranteed acceptance in advance. But if I have the
impression that there is rejection in advance then there seems to
be no point in starting. The documentation as worded now seems to
imply such a rejection in advance.

So, for the sake of argument, even if I wanted this, I would be reluctant
to spend time on it, because I only have limited amounts of it and
this battle seems to be lost from the beginning. I would think my
time would be better spent fighting other battles.

--
Antoon Pardon
Jul 18 '05 #22
I'm not really reading comp.lang.python at the moment, but found this
thread via the Python-URL...

Tim Peters <ti********@gmail.com> writes:
The only currently active contributor I can think of who cares about
signals enough to actually work on them is Michael Hudson, but his
interest seems limited to keeping the interactions between Python's GNU
readline wrapper and signals working across incompatible details of
signal semantics on Unixish boxes. It's possible I'm wrong, and he'd
really love to work much more on signals, but feels inhibited by
Guido's well-known dislike of the beasts.


I think you're more or less right. A couple of years ago I got pretty
badly discouraged attempting to support sigprocmask and friends from
Python, and came to the conclusion that no one who cares about portability
-- not just no Python programmers, but *no one at all* -- mixes threads
and signals, because the cross-platform behaviour is a total mess, and
in many cases is just plain buggy (I believe the FreeBSD version of
libc_r for 5.something contains a fix that is a direct consequence of
my efforts, for example). Even linking with the threading libraries
was enough to stuff things up mightily in some circumstances.

We Do Not Want To Go There.

Cheers,
mwh

--
Well, you pretty much need Microsoft stuff to get misbehaviours
bad enough to actually tear the time-space continuum. Luckily
for you, MS Internet Explorer is available for Solaris.
-- Calle Dybedahl, alt.sysadmin.recovery
Jul 18 '05 #23
On 2004-11-22, Michael Hudson <mw*@python.net> wrote:
I'm not really reading comp.lang.python at the moment, but found this
thread via the Python-URL...

Tim Peters <ti********@gmail.com> writes:
The only currently active contributor I can think of who cares about
signals enough to actually work on them is Michael Hudson, but his
interest seems limited to keeping the interactions between Python's GNU
readline wrapper and signals working across incompatible details of
signal semantics on Unixish boxes. It's possible I'm wrong, and he'd
really love to work much more on signals, but feels inhibited by
Guido's well-known dislike of the beasts.


I think you're more or less right. A couple of years ago I got pretty
badly discouraged attempting to support sigprocmask and friends from
Python, and came to the conclusion that no one who cares about portability
-- not just no Python programmers, but *no one at all* -- mixes threads
and signals, because the cross-platform behaviour is a total mess, and
in many cases is just plain buggy (I believe the FreeBSD version of
libc_r for 5.something contains a fix that is a direct consequence of
my efforts, for example). Even linking with the threading libraries
was enough to stuff things up mightily in some circumstances.

We Do Not Want To Go There.


Well, for the moment I don't care about portability that much. I do
understand this is a big issue for something in the standard library,
and with your remarks I consider my question in the subject answered
with "no".

I do think it is a pity that the cross-platform behaviour is such a
total mess, but for the moment that can't be helped.

I'll just experiment a bit myself on the boxes I'm interested in.

--
Antoon Pardon
Jul 18 '05 #24
