Bytes | Software Development & Data Engineering Community

global interpreter lock

km
Hi all,

Is true parallelism possible in Python, or at least in the coming versions?
Is the global interpreter lock a bane in this context?

regards,
KM
Aug 19 '05
Dennis Lee Bieber wrote:
Well, at that point, you could substitute "waiting on a queue" with
"waiting on a socket" and still have the same problem -- regardless of
the nature of the language/libraries for threading; it's a problem with
the design of the classes as applied to a threaded environment.


It's a problem for all levels; pretty much any of them can botch
it. Not only do we want to be able to wait on two queues or two
sockets, we might have good cause to wait on two queues *and*
two sockets. Win32 provides WaitForMultipleObjects, which lets the
programmer efficiently wait on many objects of various types at
one call; the Linux kernel gurus are looking at supporting a
similar feature.
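For concreteness, the Unix analogue of such a single-call wait is select() (or poll): one call takes several descriptors and returns only the ready ones. A minimal sketch in modern Python, where a socketpair stands in for each real network socket:

```python
import select
import socket

# Two connected pairs stand in for two independent sockets we must watch.
a_recv, a_send = socket.socketpair()
b_recv, b_send = socket.socketpair()

b_send.send(b"hello")  # only the second socket has data pending

# One call waits on both sockets; only the ready ones are returned.
readable, _, _ = select.select([a_recv, b_recv], [], [], 1.0)
ready = readable[0]
data = ready.recv(5)

for s in (a_recv, a_send, b_recv, b_send):
    s.close()
```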

Library designers can botch the functionality by wrapping the
waitable object in their own classes with their own special wait
operation, and hiding the OS object.
--
--Bryan
Sep 1 '05 #51
Bryan Olson <fa*********@nowhere.org> writes:
Mike Meyer wrote:
> Bryan Olson writes:
>>System support for threads has advanced far beyond what Mr. Meyer
>>dealt with in programming the Amiga.

>
> I don't think it has - but see below.
>
>>In industry, the two major camps are Posix threads, and Microsoft's
>>Win32 threads (on NT or better). Some commercial Unix vendors have
>>mature support for Posix threads; on Linux, the NPTL is young but
>>clearly the way to move forward.

>
> I haven't looked at Win32 threading. Maybe it's better than Posix
> threads. Sure, Posix threads is better than what I dealt with 10 years
> ago, but there's no way I'd call it "advanced beyond" that model. They
> aren't even as good as the Python Threading/Queue model.

With Python threads/queues how do I wait for two queues (or
locks or semaphores) at one call? (I know some methods to
accomplish the same effect, but they suck.)
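One such workaround (a sketch in modern Python, with the renamed queue module): funnel every source queue into a single merged queue, one forwarder thread apiece, then wait on the merged queue alone. It works, but it burns a thread per source, which is roughly why such methods suck.

```python
import queue
import threading

def funnel(sources):
    """Forward items from several queues into one, tagged by source index.
    A workaround for the missing multi-queue wait, at a thread per source."""
    merged = queue.Queue()
    def forward(i, src):
        while True:
            merged.put((i, src.get()))
    for i, src in enumerate(sources):
        threading.Thread(target=forward, args=(i, src), daemon=True).start()
    return merged

q1, q2 = queue.Queue(), queue.Queue()
merged = funnel([q1, q2])
q2.put("from q2")
source_index, item = merged.get()  # a single blocking wait on both queues
```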


By "not as good as", I meant the model they provide isn't as manageable
as the one provided by Queue/Threading. Like async I/O,
Queue/Threading provides a better model at the cost of
generality. There are things it doesn't do well, and others it doesn't
do at all. If it didn't have those problems, I wouldn't be looking for
alternatives.
>>Java and Ada will wrap the native thread package, while
>>C(++) offers it directly.

> Obviously, any good solution will wrap the native threads [...]

I recommend looking at how software that implements
sophisticated services actually works. Many things one
might think to be obvious turn out not to be true.


Instead of making vague assertions, why don't you provide us with
facts? I.e. - what are the things you think are obvious that turned
out not to be true? Name some software that implements sophisticated
services that we can go look at. And so on...

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Sep 2 '05 #52
Mike Meyer wrote:
Bryan Olson writes:
With Python threads/queues how do I wait for two queues (or
locks or semaphores) at one call? (I know some methods to
accomplish the same effect, but they suck.)
By "not as good as", I meant the model they provide isn't as manageable
as the one provided by Queue/Threading. Like async I/O,
Queue/Threading provides a better model at the cost of
generality.


I can't tell why you think that.
Instead of making vague assertions, why don't you provide us
with facts?
Yeah, I'll keep doing that. You're the one proclaiming a 'model'
to be more manageable with no evidence.
I.e. - what are the things you think are obvious that turned
out not to be true? Name some software that implements sophisticated
services that we can go look at. And so on...


Thought we went over that. Look at the popular relational-
database engines. Implementing such a service with one line of
execution and async I/O is theoretically possible, but I've not
heard of anyone who has managed to do it. MySQL, PostgreSQL,
IBPhoenix, MaxDB, all have multiple simultaneous threads and/or
processes (as do the competitive commercial database engines,
though you can't look under the hood so easily).
--
--Bryan
Sep 2 '05 #53
Dennis Lee Bieber <wl*****@ix.netcom.com> writes:
On Wed, 31 Aug 2005 22:44:06 -0400, Mike Meyer <mw*@mired.org> declaimed
the following in comp.lang.python:
I don't know what Ada offers. Java gives you pseudo-monitors. I'm
From the days of mil-std 1815, Ada has supported "tasks" which
communicate via "rendezvous"... The receiving task waits on an "accept"
statement (simplified -- there is a means to wait on multiple different
accepts, and/or time-out). The "sending" task calls the "entry" (looks
like a regular procedure call with in and/or out parameters -- matches
the signature of the waiting "accept"). As with "accept", there are
selective entry calls, wherein whichever task is waiting on the
matching accept will be invoked. During the rendezvous, the "sending"
task blocks until the "receiving" task exits the "accept" block -- at
which point both tasks may proceed concurrently.


Thank you for providing the description. That was sufficient context
that Google found the GNAT documentation, which was very detailed.

Based on that, it seems that entry/accept are just a synchronization
construct - with some RPC semantics thrown in.
As you might notice -- data can go both ways: in at the top of the
rendezvous, and out at the end.

Tasks are created by declaring them (there are also task types, so
one can easily create a slew of identical tasks).

    procedure xyz is

        a : task;  -- not real Ada, again, simplified
        b : task;

    begin  -- the tasks begin execution here
        -- do stuff in the procedure itself, maybe call task entries
    end;
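A rough Python analogue of the rendezvous (a sketch only, not full Ada semantics): the caller blocks on a private reply queue until the accepting task finishes the accept body, so data flows in at the top of the rendezvous and out at the end.

```python
import queue
import threading

# The "entry": callers put (in-parameters, reply queue) here.
entry = queue.Queue()

def server():
    # "accept" one entry call: take the in-parameters, run the accept
    # body, then release the caller by sending the out-parameters.
    args, reply = entry.get()
    reply.put(args * 2)

threading.Thread(target=server, daemon=True).start()

def call_entry(arg):
    reply = queue.Queue()
    entry.put((arg, reply))   # the "entry call"
    return reply.get()        # caller blocks until the rendezvous completes

result = call_entry(21)
```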


The problem is that this doesn't really provide any extra protection
for the programmer. You get language facilities that will provide the
protection, but the programmer has to remember to use them in every
case. If you forget to declare a method as protected, then nothing
stops two tasks from entering it and screwing up the object's data with
unsynchronized access. This should be compared to SCOOP, where trying
to do something like that is impossible.

Thanks again,
<mike

--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Sep 4 '05 #54
Michael Sparks <ms@cerenity.org> writes:
But I think to do it on Erlang's scale, Python needs user-level
microthreads and not just OS threads.


You've just described Kamaelia* BTW, except substitute micro-thread
with generator :-) (Also we call the queues outboxes and inboxes, and
the combination of a generator in a class with inboxes and outboxes
a component)
* http://kamaelia.sf.net/


I don't see how generators substitute for microthreads. In your example
from another post:

    class encoder(component):
        def __init__(self, **args):
            self.encoder = unbreakable_encryption.encoder(**args)
        def main(self):
            while 1:
                if self.dataReady("inbox"):
                    data = self.recv("inbox")
                    encoded = self.encoder.encode(data)
                    self.send(encoded, "outbox")
                yield 1

You've got the "main" method creating a generator that has its own
event loop that yields after each event it processes. Notice the kludge

    if self.dataReady("inbox"):
        data = self.recv("inbox")

instead of just saying something like:

data = self.get_an_event("inbox")

where .get_an_event "blocks" (i.e. yields) if no event is pending.
The reason for that is that Python generators aren't really coroutines
and you can't yield except from the top level function in the generator.
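A tiny sketch of that limitation and its standard workaround (all names hypothetical): a helper generator cannot suspend its caller, so the outer generator must explicitly re-yield everything the helper produces, one item at a time — the loop that later Pythons automated as `yield from`.

```python
def get_an_event(pending):
    # A helper that "blocks" by yielding markers until an event appears.
    while not pending:
        yield "waiting"
    yield pending.pop(0)

def main(pending):
    # No nested yield exists: delegate by re-yielding each item by hand.
    for step in get_an_event(pending):
        yield step

events = []
gen = main(events)
first = next(gen)         # no event yet, so the helper yields "waiting"
events.append("data")
second = next(gen)        # now the pending event comes through
```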

In that particular example, the yield is only at the end, so the
generator isn't doing anything that an ordinary function closure
couldn't:

    def main(self):
        def run_event():
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
        return run_event

Now instead of calling .next on a generator every time you want to let
your microthread run, just call the run_event function that main has
returned. However, I suppose there's times when you'd want to read an
event, do something with it, yield, read another event, and do
something different with it, before looping. In that case you can use
yields in different parts of that state machine. But it's not that
big a deal; you could just use multiple functions otherwise.
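That shape can be sketched as follows (hypothetical names): a generator with yields at two different points, so alternating events are handled differently depending on where the generator is in its cycle.

```python
def two_phase(events):
    # Yields in two places: the same generator treats the first event of
    # each pair as a header and the second as a body.
    while True:
        header = events.pop(0)
        yield ("got header", header)
        body = events.pop(0)
        yield ("got body", body)

events = ["h1", "b1", "h2", "b2"]
gen = two_phase(events)
steps = [next(gen) for _ in range(4)]
```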

All in all, maybe I'm missing something but I don't see generators as
being that much help here. With first-class continuations like
Stackless used to have, the story would be different, of course.

Sep 10 '05 #55
Paul Rubin wrote:
....
I don't see how generators substitute for microthreads. In your example
from another post:
I've done some digging and found what you mean by microthreads -
specifically I suspect you're referring to the microthreads package for
stackless? (I tend to view an activated generator as having a thread of
control, and since it's not a true thread, but is similar, I tend to view
that as a microthread. However your term and mine don't coincide, and it
appears to cause confusion, so I'll switch my definition to match yours,
given the microthreads package, etc)

You're right, generators aren't a substitute for microthreads. However I do
see them as being a useful alternative to microthreads. Indeed the fact
that you're limited to a single stack frame I think has actually helped our
architecture.

The reason I say this is because it naturally encourages small components
which are highly focussed in what they do. For example, when I was
originally looking at how to wrap network handling up, it was logical to
want to do this:

[ writing something probably implementable using greenlets, but definitely
pseudocode ]

    @Nestedgenerator
    def runProtocol(...)
        while:
            data = get_data_from_connection( ... )

    # Assume non-blocking socket
    def get_data_from_connection(...)
        try:
            data = sock.recv()
            return data
        except ... :
            Yield(WaitSocketDataReady(sock))
        except ... :
            return failure

Or something - you get the idea (the above code is naff, but that's because
it's late here) - the operation that would block normally you yield inside
until given a message.

The thing about this is that we wouldn't have arrived at the structure we
do have - which is to have components for dealing with connected sockets,
listening sockets and so on. We've been able to reuse the connected socket
code between systems much more cleanly (I suspect) than we would have done
if we'd been able to nest yields (as I once asked about here)
or have true co-routines.

At some point it would be interesting to rewrite our entire system based on
greenlets and see if that works out with more or less reuse. (And more or
less ability to make code more parallel or not)
[ re-arranging order of comments slightly ]

    class encoder(component):
        def __init__(self, **args):
            self.encoder = unbreakable_encryption.encoder(**args)
        def main(self):
            while 1:
                if self.dataReady("inbox"):
                    data = self.recv("inbox")
                    encoded = self.encoder.encode(data)
                    self.send(encoded, "outbox")
                yield 1
.... In that particular example, the yield is only at the end, so the
generator isn't doing anything that an ordinary function closure
couldn't:

    def main(self):
        def run_event():
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
        return run_event
Indeed, in particular we can currently rewrite that particular example as:

    class encoder(component):
        def __init__(self, **args):
            self.encoder = unbreakable_encryption.encoder(**args)
        def mainLoop(self):
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
            return 1

That's a bad example though. A more useful example is probably something
more like this:

    class Multicast_sender(Axon.Component.component):
        def __init__(self, local_addr, local_port, remote_addr, remote_port):
            super(Multicast_sender, self).__init__()
            self.local_addr = local_addr
            self.local_port = local_port
            self.remote_addr = remote_addr
            self.remote_port = remote_port

        def main(self):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                 socket.IPPROTO_UDP)
            sock.bind((self.local_addr, self.local_port))
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 10)
            while 1:
                if self.dataReady("inbox"):
                    data = self.recv()
                    l = sock.sendto(data, (self.remote_addr, self.remote_port))
                yield 1

With a bit of fun with decorators, that can actually be collapsed into
something more like:

    @component
    def Multicast_sender(self, local_addr, local_port, remote_addr,
                         remote_port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.IPPROTO_UDP)
        sock.bind((self.local_addr, self.local_port))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 10)
        while 1:
            if self.dataReady("inbox"):
                data = self.recv()
                l = sock.sendto(data, (self.remote_addr, self.remote_port))
            yield 1


You've got the "main" method creating a generator that has its own
event loop that yields after each event it processes. Notice the kludge

if self.dataReady("inbox"):
data = self.recv("inbox")

instead of just saying something like:

data = self.get_an_event("inbox")

where .get_an_event "blocks" (i.e. yields) if no event is pending.
The reason for that is that Python generators aren't really coroutines
and you can't yield except from the top level function in the generator.

Now instead of calling .next on a generator every time you want to let
your microthread run, just call the run_event function that main has
returned. However, I suppose there's times when you'd want to read an
event, do something with it, yield, read another event, and do
something different with it, before looping. In that case you can use
yields in different parts of that state machine. But it's not that
big a deal; you could just use multiple functions otherwise.

All in all, maybe I'm missing something but I don't see generators as
being that much help here. With first-class continuations like
Stackless used to have, the story would be different, of course.


Sep 14 '05 #56
arrgh... hit wrong keystroke which caused an early send before I'd finished
typing... (skip the message I'm replying to here for a minute please :-)
Michael.

Sep 14 '05 #57
[ Second time lucky... ]
Paul Rubin wrote:
....
I don't see how generators substitute for microthreads. In your example
from another post:
I've done some digging and found what you mean by microthreads -
specifically I suspect you're referring to the microthreads package for
stackless? (I tend to view an activated generator as having a thread of
control, and since it's not a true thread, but is similar, I tend to view
that as a microthread. However your term and mine don't coincide, and it
appears to cause confusion, so I'll switch my definition to match yours,
given the microthreads package, etc)

The reason I say this is because it naturally encourages small components
which are highly focussed in what they do. For example, when I was
originally looking at how to wrap network handling up, it was logical to
want to do this:

[ writing something probably implementable using greenlets, but definitely
pseudocode ]

    @Nestedgenerator
    def runProtocol(...)
        while:
            data = get_data_from_connection( ... )

    # Assume non-blocking socket
    def get_data_from_connection(...)
        try:
            data = sock.recv()
            return data
        except ... :
            Yield(WaitSocketDataReady(sock))
        except ... :
            return failure

Or something - you get the idea (the above code is naff, but that's because
it's late here) - the operation that would block normally you yield inside
until given a message.

The thing about this is that we wouldn't have arrived at the structure we
do have - which is to have components for dealing with connected sockets,
listening sockets and so on. We've been able to reuse the connected socket
code between systems much more cleanly (I suspect) than we would have done
if we'd been able to nest yields (as I once asked about here)
or have true co-routines.

At some point it would be interesting to rewrite our entire system based on
greenlets and see if that works out with more or less reuse. (And more or
less ability to make code more parallel or not)

[ re-arranging order of comments slightly ]

    class encoder(component):
        def __init__(self, **args):
            self.encoder = unbreakable_encryption.encoder(**args)
        def main(self):
            while 1:
                if self.dataReady("inbox"):
                    data = self.recv("inbox")
                    encoded = self.encoder.encode(data)
                    self.send(encoded, "outbox")
                yield 1
.... In that particular example, the yield is only at the end, so the
generator isn't doing anything that an ordinary function closure
couldn't:

    def main(self):
        def run_event():
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
        return run_event

Indeed, in particular we can currently rewrite that particular example as:

    class encoder(component):
        def __init__(self, **args):
            self.encoder = unbreakable_encryption.encoder(**args)
        def mainLoop(self):
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
            return 1

And that will work today. (We have a 3 callback form available for people
who aren't very au fait with generators, or are just more comfortable with
callbacks)
That's a bad example though. A more useful example is probably something
more like this: (changed example from accidental early post)
....
    center = list(self.rect.center)
    self.image = self.original
    current = self.image
    scale = 1.0
    angle = 1
    pos = center
    while 1:
        self.image = current
        if self.dataReady("imaging"):
            self.image = self.recv("imaging")
            current = self.image
        if self.dataReady("scaler"):
            # Scaling
            scale = self.recv("scaler")
            w,h = self.image.get_size()
            self.image = pygame.transform.scale(self.image, (w*scale, h*scale))
        if self.dataReady("rotator"):
            angle = self.recv("rotator")
            # Rotation
            self.image = pygame.transform.rotate(self.image, angle)
        if self.dataReady("translation"):
            # Translation
            pos = self.recv("translation")
        self.rect = self.image.get_rect()
        self.rect.center = pos
        yield 1
(this code is from Kamaelia.UI.Pygame.BasicSprite)

Can it be transformed to something event based? Yes of course. Is it clear
what's happening though? I would say yes. Currently we encourage the user
to look to see if data is ready before taking it, simply because it's the
simplest interface that we can guarantee consistency with.

For example, currently the exception based equivalent would be:
    try:
        pos = self.recv("translation")
    except IndexError:
        pass

Which isn't necessarily ideal, because we haven't really finalised the
implementation of inboxes and outboxes (eg will we always throw
IndexError?). We are certain though that the behaviour of
send/recv/dataReady can remain consistent until then.

(In some discussions, Twisted people have suggested that Twisted's deferred
queues might be useful here, but I haven't had a chance to look at them in
detail.)
At the moment, one option that springs to mind is this:

    yield WaitDataAvailable("inbox")

(This is largely because we're looking at how to add syntactic sugar for
synchronous bidirectional messaging), allowing the scheduler to suspend the
generator until data is ready. This however doesn't work for the example
above.

Whereas currently the following:

    self.pause()
    yield 1
Will prevent the component being run until one of the inboxes has a delivery
made to it *or* a message is taken from its outboxes. (Very coarse-grained.)
Notice the kludge
FWIW, it's deliberate because we can maintain API consistency, until we
decide on better syntactic sugar.
The reason for that is that Python generators aren't really coroutines
and you can't yield except from the top level function in the generator.
Agreed - as noted above. We're finding this to be a strength though. (Though
to confirm/deny this properly would require a rewrite using greenlets or
similar)
Now instead of calling .next on a generator every time you want to let
your microthread run, just call the run_event function that main has
returned.
But then you're building state machines. We're using generators because they
allow people to write code looking completely single threaded, throw in
yields in key locations, abstract out input/output and do all this in small
gradual steps. (I reference an example of this below)

Relevant quote that might help explain where I'm coming from is this:

"Threads are for people who cant program state machines." -- Alan Cox

I'd agree really on some level, but I'm always left wondering - what about
the people who can't program state machines, but don't want to use threads
etc? (For whatever reason - maybe the architecture they're running on has a
poor threads implementation)

Initially co-routines struck me as the halfway house, but we decided to
stick with standard python and explore a generator based approach.
However, I suppose there's times when you'd want to read an
event, do something with it, yield, read another event, and do
something different with it, before looping. In that case you can use
yields in different parts of that state machine.
That's indeed what we do for a number of different existing components. Also
there's the viewpoint aspect - you can view the system as event based
(receiving a message as an event) or you can view it as dataflow. From our
perspective we view it as a dataflow system.
But it's not that
big a deal; you could just use multiple functions otherwise.

Having a single function with yields peppering it provides a simpler path
from single program single threaded to sitting inside a larger system whilst
remaining single threaded. We have a walk through of how to write a
component here [1] which is based on the experience of writing components
for multicast handling.

[1]
http://kamaelia.sourceforge.net/cgi-...tid=1113495151

The components written are sufficient for the tasks we need them for at
present but probably need work for the general case. However the resulting
code remains close to looking single threaded - lowering the barrier to bug
finding. (I'm a firm believer that > 90% of the population can't write bug
free code - me included.)

The final multicast transceiver may have some issues that jump out to
someone else which wouldn't necessarily jump out if I'd turned the code
inside out into separate state functions. I'm fairly certain it would've
been less clear (to someone coming along later) how to join the
sender/receiver code into a single transceiver.

The other thing is the alternatives to generators/coroutines are:
* threads/processes
* State machine style approaches

Having worked on a (very) large project (in C++) which was very state
machine based, I've come to have a natural dislike for them, and wondered
at the time if a generator/coroutine approach would be easier to pick up and
more maintainable. It might be, it might not be.

The idea behind our work is to have a go, build something and see if it
really is better or worse. If it's worse, that's life. If it's better,
hopefully other people will copy the approach or use the tools we release.

Until then (he says optimistically) other people do have GOOD systems
like twisted, which is one of the nicest systems of its kind. (Personally
I'd expect that if our stuff pans out we'd need to do a partial rewrite to
simplify the process for people to cherry pick code into twisted (or
whatever), if they want it.)
All in all, maybe I'm missing something but I don't see generators as
being that much help here. With first-class continuations like
Stackless used to have, the story would be different, of course.


I suppose what I'm saying is what you're losing isn't as large as you think
it is, and brings benefits of its own along the way. This does mean though
that we now have the ability to compose interesting systems in a unix
pipeline approach using graphical pipeline editors that produce code
that looks like this:

    pipeline( ReadFileAdaptor( filename = '/data/dirac-video/bar.drc',
                               readmode = 'bitrate', bitrate = 480000 ),
              SingleServer( ),
    ).activate()

    pipeline( TCPClient( host = "127.0.0.1", port = 1601 ),
              DiracDecoder( ),
              RateLimit( messages_per_second = 15, buffer=2 ),
              VideoOverlay( ),
    ).run()

.... which creates 2 pipelines - one represents a server sending data out
over a network socket, the other represents a client that connects, decodes
and displays the video.

The Tk integration was relatively quick to write, because it /couldn't/ be
complex. The Pygame integration was fairly simple, because it /couldn't/
be complex. (Which may be fringe benefits of generators.) We haven't looked
at integrating gtk, wx or qt yet.

From our perspective the implementation of pipeline is the interesting part.
Currently this is simply a wrapper component, however it is responsible for
activating the components passed over, and /could/ run the generator
based components in different processes (and hence processors potentially).
Alternatively that could be left to the scheduler to do, but I suspect
something with a bit of control would be nice.

None of this is really special to generators as is probably obvious, but
that's where we started because we hypothesised that the resulting code
*might* be cleaner, whilst potentially able to be just as efficient as more
state machine based approaches. If greenlets had been available when we
started I suspect we would have used those.

We rejected stackless at the time because generators were available, and
whilst not as good from some perspectives /are/ part of the standard
language since 2.2.something. That decision has meant that we're able to
(and do) run on things like mobiles, and upwards without changes, except to
packaging.

At the end of the day, the only reason I'm talking about this stuff at
all is because we're finding it useful - perhaps more so than I expected
when I first realised the limitations of generators :-) If you don't find it
useful, then fair enough :)

Best Regards,
Michael.

Sep 15 '05 #58
On 15/09/05, Michael Sparks <ms@cerenity.org> wrote:
At the moment, one option that springs to mind is this:
yield WaitDataAvailable("inbox")


Twisted supports this.

help("twisted.internet.defer.waitForDeferred")

example usage is:

    @deferredGenerator
    def thingummy():
        thing = waitForDeferred(makeSomeRequestResultingInDeferred())
        yield thing
        thing = thing.getResult()
        print thing  # the result! hoorj!

With the new generator syntax, it becomes somewhat less clunky,
allowing for the syntax:

    @defgen
    def foo():
        somereturnvalue = yield SomeLongRunningOperation()
        print somereturnvalue

http://svn.twistedmatrix.com/cvs/san...rkup&rev=14348

--
Stephen Thorne
Development Engineer
Sep 15 '05 #59
Stephen Thorne wrote:
On 15/09/05, Michael Sparks <ms@cerenity.org> wrote:
At the moment, one option that springs to mind is this:
yield WaitDataAvailable("inbox")


Twisted supports this.

help("twisted.internet.defer.waitForDeferred")


Thanks for this. I'll take a look and either we'll use that or we'll use
something that maps cleanly. (Reason for pause is because running on
mobiles is important to us.) Thanks for the example too :)

Best Regards,
Michael.
--
Mi************@rd.bbc.co.uk, http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This message (and any attachments) may contain personal views
which are not the views of the BBC unless specifically stated.

Sep 15 '05 #60
It looks like I am reinventing Twisted and/or Kamaelia.
This is code I wrote just today to simulate Python 2.5
generators in current Python:

    import Queue

    class coroutine(object):
        def __init__(self, *args, **kw):
            self.queue = Queue.Queue()
            self.it = self.__cor__(*args, **kw)
        def start(self):
            return self.it.next()
        def next(self):
            return self.send(None)
        def __iter__(self):
            return self
        def send(self, *args):
            self.queue.put(args)
            return self.it.next()
        def recv(self):
            return self.queue.get()
        @classmethod
        def generator(cls, gen):
            return type(gen.__name__, (cls,), dict(__cor__=gen))

    @coroutine.generator
    def consumer(self, N):
        for i in xrange(N):
            yield i
            cmd = self.recv()
            if cmd == "exit":
                break

    c = consumer(100)
    print c.start()
    for cmd in ["", "", "", "", "exit"]:
        print c.send(cmd)
Michele Simionato

Sep 15 '05 #61
Michele Simionato wrote:
It looks like I am reinventing Twisted and/or Kamaelia.


If it's /fun/ , is that a problem ? ;) (Interesting implementation BTW :)

FWIW, about a year ago it wasn't clear if we would be able to release
our stuff, so as part of a presentation I included a minimalistic decorator
based version of our system that has some similarities to yours. (The idea
was that then at least the ideas had been shared, if not the main code -
which had approval)

Posted below in case it's of interest:

    import copy

    def wrapgenerator(bases=object, **attrs):
        def decorate(func):
            class statefulgenerator(bases):
                __doc__ = func.__doc__
                def __init__(self, *args):
                    super(statefulgenerator, self).__init__(*args)
                    self.func = func(self, *args)
                    for k in attrs.keys():
                        self.__dict__[k] = copy.deepcopy(attrs[k])
                    self.next = self.__iter__().next
                def __iter__(self):
                    return iter(self.func)
            return statefulgenerator
        return decorate

    class com(object):
        def __init__(_, *args):
            # Default queues
            _.queues = {"inbox": [], "control": [],
                        "outbox": [], "signal": []}
        def send(_, box, obj): _.queues[box].append(obj)
        def dataReady(_, box): return len(_.queues[box]) > 0
        def recv(_, box):  # NB. Exceptions aren't caught
            X = _.queues[box][0]
            del _.queues[box][0]
            return X
A sample component written using this approach then looks like this:

    @wrapgenerator(com)
    def forwarder(self):
        "Simple data forwarding generator"
        while 1:
            if self.dataReady("inbox"):
                self.send("outbox", self.recv("inbox"))
            elif self.dataReady("control"):
                if self.recv("control") == "shutdown":
                    break
            yield 1
        self.send("signal", "shutdown")
        yield 0

Since we're not actually using this approach, there's likely to be border
issues here. I'm not actually sure I like this particular approach, but it
was an interesting experiment.

Best Regards,
Michael.

Sep 15 '05 #62
