
global interpreter lock

km
Hi all,

is true parallelism possible in python? or at least in the coming versions?
is the global interpreter lock a bane in this context?

regards,
KM
Aug 19 '05 #1


km <km@mrna.tn.nic.in> writes:
is true parallelism possible in python ? or atleast in the coming versions ?
is global interpreter lock a bane in this context ?


http://poshmodule.sf.net
Aug 19 '05 #2

Paul Rubin wrote:
km <km@mrna.tn.nic.in> writes:
is true parallelism possible in python ? or atleast in the coming versions ?
is global interpreter lock a bane in this context ?

http://poshmodule.sf.net


Is posh maintained? The page mentions 2003 as the last date.
--
Robin Becker

Aug 19 '05 #3

Robin Becker <ro***@reportlab.com> writes:
http://poshmodule.sf.net

Is posh maintained? The page mentions 2003 as the last date.


Dunno, and I suspect not. I've been wondering about it myself.
Aug 19 '05 #4

On 2005-08-20, km <km@mrna.tn.nic.in> wrote:
is true parallelism possible in python?
No, not for some values of "true parallelism".
or atleast in the coming versions?
Not that I'm aware of.
is global interpreter lock a bane in this context?


In what context?

--
Grant Edwards   grante at visi.com
Yow! I think I'll do BOTH if I can get RESIDUALS!!
Aug 19 '05 #5

km wrote:
Hi all,

is true parallelism possible in python ? or atleast in the
coming versions ? is global interpreter lock a bane in this
context ?


No; maybe; and currently, not usually.

On a uniprocessor system, the GIL is no problem. On multi-
processor/core systems, it's a big loser.
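
A minimal sketch of the multi-core loser case (loop size and setup are illustrative, not from this thread): a pure-Python, CPU-bound function gains nothing from running in two threads, because the GIL lets only one thread execute bytecode at a time.

import threading
import time

def burn(n):
    # Pure-Python busy loop; holds the GIL the whole time it runs.
    while n:
        n -= 1

N = 10000000

start = time.time()
burn(N); burn(N)                      # two runs, one after the other
print("sequential :", time.time() - start)

start = time.time()
t1 = threading.Thread(target=burn, args=(N,))
t2 = threading.Thread(target=burn, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()                  # two runs "in parallel" -- no faster
print("two threads:", time.time() - start)
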
--
--Bryan
Aug 19 '05 #6

In article <91***************@newssvr29.news.prodigy.net>,
Bryan Olson <fa*********@nowhere.org> wrote:
km wrote:
> Hi all,
>
> is true parallelism possible in python ? or atleast in the
> coming versions ? is global interpreter lock a bane in this
> context ?


No; maybe; and currently, not usually.

On a uniprocessor system, the GIL is no problem. On multi-
processor/core systems, it's a big loser.


I rather suspect it's a bigger winner there.

Someone who needs to execute Python instructions in parallel
is out of luck, of course, but that has to be a small crowd.
I would have to assume that most applications that need
the kind of computational support that implies are doing most
of the actual computation in C, in functions that run with the
lock released. Runnable threads is 1 interpreter, plus N
"allow threads" C functions, where N is whatever the OS will bear.

Meanwhile, the interpreter's serial concurrency limits the
damage. The unfortunate reality is that concurrency is a
bane, so to speak -- programming for concurrency takes skill
and discipline and a supportive environment, and Python's
interpreter provides a cheap and moderately effective support
that compensates for most programmers' unrealistic assessment
of their skill and discipline. Not that you can't go wrong,
but the chances you'll get nailed for it are greatly reduced -
especially in an SMP environment.

Donn Cave, do**@u.washington.edu
Aug 19 '05 #7

Donn Cave wrote:
Bryan Olson wrote:
On a uniprocessor system, the GIL is no problem. On multi-
processor/core systems, it's a big loser.

I rather suspect it's a bigger winner there.

Someone who needs to execute Python instructions in parallel
is out of luck, of course, but that has to be a small crowd.


Today, sure. The chip guys have spoken and the future is multi-core.
I would have to assume that in most applications that need
the kind of computational support that implies, are doing most
of the actual computation in C, in functions that run with the
lock released.
That seems an odd thing to assume.
Runnable threads is 1 interpreter, plus N
"allow threads" C functions, where N is whatever the OS will bear.

Meanwhile, the interpreter's serial concurrency limits the
damage. The unfortunate reality is that concurrency is a
bane, so to speak -- programming for concurrency takes skill
and discipline and a supportive environment, and Python's
interpreter provides a cheap and moderately effective support
that compensates for most programmers' unrealistic assessment
of their skill and discipline. Not that you can't go wrong,
but the chances you'll get nailed for it are greatly reduced -
especially in an SMP environment.


I don't see much point in trying to convince programmers that
they don't really want concurrent threads. They really do. Some
don't know how to use them, but that's largely because they
haven't had them. I doubt a language for thread-phobes has much
of a future.
--
--Bryan
Aug 19 '05 #8

Bryan Olson <fa*********@nowhere.org> writes:
I don't see much point in trying to convince programmers that
they don't really want concurrent threads. They really do. Some
don't know how to use them, but that's largely because they
haven't had them. I doubt a language for thread-phobes has much
of a future.


The real problem is that the concurrency models available in currently
popular languages are still at the "goto" stage of language
development. Better models exist, have existed for decades, and are
available in a variety of languages.

It's not that these languages are for "thread-phobes", either. They
don't lose power any more than Python loses power by not having a
goto. The languages haven't taken off for reasons unrelated to the
threading model(*).

The rule I follow in choosing my tools is "Use the least complex tool
that will get the job done." Given that the threading models in
popular languages are complex and hard to work with, I look elsewhere
for solutions. I've had good luck using async I/O in lieu of
threads. It won't solve every problem, but where it does, it's much
simpler to work with.

<mike

*) I recently saw a discussion elsewhere that touched on almost the
same topic, lamenting that diagnostic tools in popular programming
languages pretty much sucked, being at best no better than they were
30 years ago. The two together seem to indicate that something is
fundamentally broken somewhere.
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 20 '05 #9

Mike Meyer <mw*@mired.org> writes:
I don't see much point in trying to convince programmers that
they don't really want concurrent threads. They really do. Some
don't know how to use them, but that's largely because they
haven't had them. I doubt a language for thread-phobes has much
of a future.


The real problem is that the concurrency models available in currently
popular languages are still at the "goto" stage of language
development. Better models exist, have existed for decades, and are
available in a variety of languages.


But Python's threading system is designed to be like Java's, and
actual Java implementations seem to support concurrent threads just fine.

One problem with Python is it doesn't support synchronized objects
nearly as conveniently as Java, though. You need messy explicit
locking and unlocking all over the place. But it's not mysterious how
to do those explicit locks; it's just inconvenient.
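
For illustration, a small sketch of that explicit locking, with an invented Counter class: nothing ties the lock to the data it guards, so every access site has to remember to take and release it.

import threading

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        self.lock.acquire()           # explicit lock around the critical section
        try:
            self.value += 1
        finally:
            self.lock.release()       # easy to get wrong without try/finally

counter = Counter()
workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(10000)])
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(counter.value)                  # 40000; correctness relies on every caller locking
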
Aug 20 '05 #10

[km]
is true parallelism possible in python ?
cpython: no.
jython: yes.
ironpython: yes.
or atleast in the coming versions ?
cpython: unknown.
pypy: don't have time to research. Anyone know?
is global interpreter lock a bane in this context ?


beauty/bane-is-in-the-eye-of-the-beholder-ly y'rs

--
alan kennedy
------------------------------------------------------
email alan: http://xhaus.com/contact/alan
Aug 20 '05 #11

Quoth Paul Rubin <http://ph****@NOSPAM.invalid>:
| Mike Meyer <mw*@mired.org> writes:

|> The real problem is that the concurrency models available in currently
|> popular languages are still at the "goto" stage of language
|> development. Better models exist, have existed for decades, and are
|> available in a variety of languages.
|
| But Python's threading system is designed to be like Java's, and
| actual Java implementations seem to support concurrent threads just fine.

I don't see a contradiction here. "goto" is "just fine", too --
you can write excellent programs with goto. 20 years of one very
successful software engineering crusade against this feature have
made it a household word for brokenness, but most current programming
languages have more problems in that vein that pass without question.
If you want to see progress, it's important to remember that goto
was a workable, useful, powerful construct that worked fine in the
right hands - and that wasn't enough.

Anyway, to return to the subject, I believe if you follow this
subthread back you will see that it has diverged a little from
simply whether or how Python could support SMP.

Mike, care to mention an example or two of the better models you
had in mind there?

Donn Cave, do**@drizzle.com
Aug 20 '05 #12

Mike Meyer wrote:
The real problem is that the concurrency models available in currently
popular languages are still at the "goto" stage of language
development. Better models exist, have existed for decades, and are
available in a variety of languages.
That's not "the real problem"; it's a different and arguable
problem. The GIL isn't even part of Python's threading model;
it's part of the implementation.
It's not that these languages are for "thread-phobes", either. They
don't lose power any more than Python looses power by not having a
goto. They languages haven't taken off for reasons unrelated to the
threading model(*).

The rule I follow in choosing my tools is "Use the least complex tool
that will get the job done."
Even if a more complex tool could do the job better?
Given that the threading models in
popular languages are complex and hard to work with, I look elsewhere
for solutions. I've had good luck using async I/O in lieue of
theards. It's won't solve every problem, but where it does, it's much
simpler to work with.


I've found single-line-of-execution async I/O to be worse than
threads. I guess that puts me in the Tanenbaum camp and not the
Ousterhout camp. Guido and Tanenbaum worked together on Amoeba
(and other stuff), which featured threads with semaphores and
seemed to work well.

Now I've gotten off-topic. Threads are winning, and the industry
is going to multiple processors even for PC-class machines.
Might as well learn to use that power.
--
--Bryan
Aug 20 '05 #13

[Bryan Olson]
I don't see much point in trying to convince programmers that
they don't really want concurrent threads. They really do. Some
don't know how to use them, but that's largely because they
haven't had them. I doubt a language for thread-phobes has much
of a future.

[Mike Meyer]
The real problem is that the concurrency models available in currently
popular languages are still at the "goto" stage of language
development. Better models exist, have existed for decades, and are
available in a variety of languages.


I think that having a concurrency mechanism that doesn't use goto will
require a fundamental redesign of the underlying execution hardware,
i.e. the CPU.

All modern CPUs allow flow control through the use of
machine-code/assembly instructions which branch, either conditionally or
unconditionally, to either a relative or absolute memory address, i.e. a
GOTO.

Modern languages wrap this goto nicely using constructs such as
generators, coroutines or continuations, which allow preservation and
restoration of the execution context, e.g. through closures, evaluation
stacks, etc. But underneath the hood, they're just gotos. And I have no
problem with that.

To really have parallel execution with clean modularity requires a
hardware redesign at the CPU level, where code units, executing in
parallel, are fed a series of data/work-units. When they finish
processing an individual unit, it gets passed (physically, at a hardware
level) to another code unit, executing in parallel on another execution
unit/CPU. To achieve multi-stage processing of data would require
breaking up the processing into a pipeline of modular operations, which
communicate through dedicated hardware channels.

I don't think I've described it very clearly above, but you can read a
good high-level overview of a likely model from the 1980's, the
Transputer, here

http://en.wikipedia.org/wiki/Transputer

Transputers never took off, for a variety of technical and commercial
reasons, even though there was full high-level programming language
support in the form of Occam: I think it was just too brain-bending for
most programmers at the time. (I personally *almost* took on the task of
developing a debugger for transputer arrays for my undergrad thesis in
1988, but when I realised the complexity of the problem, I picked a
hypertext project instead ;-)

http://en.wikipedia.org/wiki/Occam_programming_language

IMHO, python generators (which BTW are implemented with a JVM goto
instruction in jython 2.2) are a nice programming model that fits neatly
with this hardware model. Although not today.

--
alan kennedy
------------------------------------------------------
email alan: http://xhaus.com/contact/alan
Aug 20 '05 #14

"Donn Cave" <do**@drizzle.com> writes:
Quoth Paul Rubin <http://ph****@NOSPAM.invalid>:
| Mike Meyer <mw*@mired.org> writes:
|> The real problem is that the concurrency models available in currently
|> popular languages are still at the "goto" stage of language
|> development. Better models exist, have existed for decades, and are
|> available in a variety of languages.
| But Python's threading system is designed to be like Java's, and
| actual Java implementations seem to support concurrent threads just fine.
I don't see a contradiction here. "goto" is "just fine", too --
you can write excellent programs with goto.
Right. The only thing wrong with "goto" is that we've since found
better ways to describe program flow. These ways are less complex,
hence easier to use and understand.
Mike, care to mention an example or two of the better models you
had in mind there?
I've seen a couple of such, but have never been able to find the one I
really liked in Google again :-(. That leaves Eiffel's SCOOP (aka
Concurrent Eiffel). You can find a short intro at <URL:
http://archive.eiffel.com/doc/manual...hort/page.html.


Even simpler to program in is the model used by Erlang. It's more CSP
than threading, though, as it doesn't have shared memory as part of
the model. But if you can use the simpler model to solve your problem
- you probably should.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 21 '05 #15

Bryan Olson <fa*********@nowhere.org> writes:
Mike Meyer wrote:
> The real problem is that the concurrency models available in currently
> popular languages are still at the "goto" stage of language
> development. Better models exist, have existed for decades, and are
> available in a variety of languages.
That's not "the real problem"; it's a different and arguable
problem. The GIL isn't even part of Python's threading model;
it's part of the implementation.


Depends on what point you consider the problem.
> It's not that these languages are for "thread-phobes", either. They
> don't lose power any more than Python looses power by not having a
> goto. They languages haven't taken off for reasons unrelated to the
> threading model(*).
> The rule I follow in choosing my tools is "Use the least complex tool
> that will get the job done."

Even if a more complex tool could do the job better?


In that case, the simpler model isn't necessarily getting the job
done. I purposely didn't refine the word "job" just so this would be
the case.
Now I've gotten off-topic. Threads are winning, and the industry
is going to multiple processors even for PC-class machines.
Might as well learn to use that power.


I own too many orphans to ever confuse popularity with technical
superiority. I've learned how to use threads, and done some
non-trivial thread programming, and hope to never repeat that
experience. It was the second most difficult programming task I've
ever attempted(*). As I said above, the real problem isn't threads per
se, it's that the model for programming them in popular languages is
still primitive. So far, to achieve the non-repetition goal, I've used
async I/O, restricted my use of real threads in popular languages to
trivial cases, and started using servers so someone else gets to deal
with these issues. If I ever find myself having to have non-trivial
threads again, I'll check the state of the threading models in other
languages, and make a serious push for implementing parts of the
program in a less popular language with a less primitive threading
model.

<mike

*) The most difficult task was writing horizontal microcode, which
also had serious concurrency issues in the form of device settling
times. I dealt with that by inventing a programming model that hid
most of the timing details from the programmer. It occasionally lost a
cycle, but the people who used it after me were *very* happy with it
compared to the previous model.

--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 21 '05 #16

Mike Meyer <mw*@mired.org> writes:
Even simpler to program in is the model used by Erlang. It's more CSP
than threading, though, as it doesn't have shared memory as part of
the model. But if you can use the simpler model to solve your problem
- you probably should.


Well, ok, the Python equivalent would be wrapping every shareable
object in its own thread, that communicates with other threads through
Queues. This is how some Pythonistas suggest writing practically all
multi-threaded Python code. It does a reasonable job of avoiding
synchronization headaches and it's not that hard to code that way.
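
A rough sketch of that style, with invented names (Adder, the message tuple): the shared state lives in one thread, and other threads talk to it only through Queues.

import threading
import queue                      # the 'Queue' module in the Python of 2005

class Adder(threading.Thread):
    """Owns a running total; serves one request at a time from its inbox."""
    def __init__(self):
        threading.Thread.__init__(self)
        self.inbox = queue.Queue()
        self.total = 0            # never touched by any other thread

    def run(self):
        while True:
            amount, reply_q = self.inbox.get()
            if amount is None:    # shutdown sentinel
                return
            self.total += amount
            reply_q.put(self.total)

adder = Adder()
adder.start()

reply = queue.Queue()
adder.inbox.put((5, reply))
print(reply.get())                # 5 -- no explicit locks in sight
adder.inbox.put((None, reply))    # tell the actor to stop
adder.join()
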

But I think to do it on Erlang's scale, Python needs user-level
microthreads and not just OS threads. Maybe Python 3000 can add some
language support, though an opportunity was missed when Python's
generator syntax got defined the way it did.

I've been reading a bit about Erlang and am impressed with it. Here
is a good thesis about parallelizing Erlang, link courtesy of Ulf
Wiger on comp.lang.functional:

http://www.erlang.se/publications/xj...9-hedqvist.pdf

The thesis also gives a good general description of how Erlang works.
Aug 21 '05 #17

P: n/a
Quoth Mike Meyer <mw*@mired.org>:
[... wandering from the nominal topic ...]

| *) The most difficult task was writing horizontal microcode, which
| also had serious concurrency issues in the form of device settling
| times. I dealt with that by inventing a programming model that hid
| most of the timing details from the programmer. It occasionally lost a
| cycle, but the people who used it after me were *very* happy with it
| compared to the previous model.

My favorite concurrency model comes with a Haskell variant called
O'Haskell, and it was last seen calling itself "Timber" with some
added support for time as an event source. The most on topic thing
about it -- its author implemented a robot controller in Timber, and
the robot is a little 4-wheeler called ... "Timbot".

Donn Cave, do**@drizzle.com
Aug 21 '05 #18

On Sat, 20 Aug 2005 22:30:43 -0400, Mike Meyer <mw*@mired.org> declaimed
the following in comp.lang.python:
with these issues. If I ever find myself having to have non-trivial
threads again, I'll check the state of the threading models in other
languages, and make a serious push for implementing parts of the
program in a less popular language with a less primitive threading
model.
The June edition of "SIGPLAN Notices" (the PLDI'05 proceeding issue)
has a paper titled "Threads Cannot Be Implemented As a Library" -- which
is primarily concerned with the problems of threading being done, well,
via an add-on library (as opposed to a native part of the language
specification: C#, Ada, Java).

I suspect Python falls into the "library" category.
--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Aug 21 '05 #19

km wrote:

is true parallelism possible in python ? or atleast in the coming versions ?
is global interpreter lock a bane in this context ?


I've had absolutely zero problems implementing truly parallel programs
in python. All of my parallel programs have been multiprocess
architectures, though--the GIL doesn't affect multiprocess
architectures.

Good support for multiple process architectures was one of the things
that initially led me to pick Python over Java in the first place
(Java was woefully lacking in support facilities for this kind of
architecture at that time; it's improved somewhat since then but still
requires some custom C coding). I don't have much desire to throw out
decades of work by OS implementors on protected memory without a pretty
darn good reason.
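
A sketch of the multi-process shape being described. The multiprocessing module below postdates this thread (it arrived in Python 2.6); at the time you would have forked or spawned workers by hand, but the structure is the same: separate interpreters, separate GILs, data passed by message rather than shared.

from multiprocessing import Pool

def cpu_bound(n):
    # CPU-bound work that would be serialized by the GIL within one process.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    with Pool(processes=4) as pool:                 # four independent interpreters
        results = pool.map(cpu_bound, [2000000] * 4)
    print(sum(results))
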

Aug 22 '05 #20

Paul Rubin <http://ph****@NOSPAM.invalid> writes:
Mike Meyer <mw*@mired.org> writes:
Even simpler to program in is the model used by Erlang. It's more CSP
than threading, though, as it doesn't have shared memory as part of
the model. But if you can use the simpler model to solve your problem
- you probably should.
Well, ok, the Python equivalent would be wrapping every shareable
object in its own thread, that communicates with other threads through
Queues. This is how some Pythonistas suggest writing practically all
multi-threaded Python code. It does a reasonable job of avoiding
synchronization headaches and it's not that hard to code that way.


This sort of feels like writing your while loops/etc. with if and
goto. Sure, they really are that at the hardware level, but you'd like
the constructs you work with to be at a higher level. It's not really
that bad, because Queue is a higher level construct, but it's still
not quite as good as it could be.
But I think to do it on Erlang's scale, Python needs user-level
microthreads and not just OS threads. Maybe Python 3000 can add some
language support, though an opportunity was missed when Python's
generator syntax got defined the way it did.


I'm not sure we need to go as far as Erlang does. On the other hand,
I'm also not sure we can get a "much better" threading model without
language support of some kind. Threading and Queues are all well and
good, but they still leave the programmer handling primitive threading
objects.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 22 '05 #21

Dennis Lee Bieber <wl*****@ix.netcom.com> writes:
On Sat, 20 Aug 2005 22:30:43 -0400, Mike Meyer <mw*@mired.org> declaimed
the following in comp.lang.python:
with these issues. If I ever find myself having to have non-trivial
threads again, I'll check the state of the threading models in other
languages, and make a serious push for implementing parts of the
program in a less popular language with a less primitive threading
model.
The June edition of "SIGPLAN Notices" (the PLDI'05 proceeding issue)
has a paper titled "Threads Cannot Be Implemented As a Library" -- which
is primarily concerned with the problems of threading being done, well,
via an add-on library (as opposed to a native part of the language
specification: C#, Ada, Java).


Thanks for the reference. A little googling turns up a copy published
via HP at <URL:
http://www.hpl.hp.com/techreports/20...-2004-209.html >.
I suspect Python falls into the "library" category.


Well, that's what it's got now, so that seems likely.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 22 '05 #22

Paul Rubin wrote:
Mike Meyer <mw*@mired.org> writes:
Even simpler to program in is the model used by Erlang. It's more CSP
than threading, though, as it doesn't have shared memory as part of
the model. But if you can use the simpler model to solve your problem
- you probably should.


Well, ok, the Python equivalent would be wrapping every shareable
object in its own thread, that communicates with other threads through
Queues. This is how some Pythonistas suggest writing practically all
multi-threaded Python code. It does a reasonable job of avoiding
synchronization headaches and it's not that hard to code that way.

But I think to do it on Erlang's scale, Python needs user-level
microthreads and not just OS threads.


You've just described Kamaelia* BTW, except substitute micro-thread
with generator :-) (Also we call the queues outboxes and inboxes, and
the combination of a generator in a class with inboxes and outboxes
components)
* http://kamaelia.sf.net/

For those who really want threads as well, there's a threaded component-based
class that uses Queues instead :)
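
A toy version of the generators-as-components idea, for flavour only (this is not Kamaelia's actual API): each component is a generator that reads from an inbox list and appends to an outbox list, and a trivial scheduler round-robins between them.

def producer(outbox):
    for i in range(5):
        outbox.append(i)
        yield                     # hand control back to the scheduler

def doubler(inbox, outbox):
    while True:
        if inbox:
            outbox.append(inbox.pop(0) * 2)
        yield

def printer(inbox):
    while True:
        if inbox:
            print(inbox.pop(0))
        yield

a, b = [], []                     # the "linkages" between components
components = [producer(a), doubler(a, b), printer(b)]

for _ in range(30):               # crude round-robin scheduler
    for c in list(components):
        try:
            next(c)
        except StopIteration:     # the producer finished; drop it
            components.remove(c)
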

Best Regards,
Michael.

Aug 22 '05 #23

Mike Meyer wrote:
Bryan Olson writes:
Mike Meyer wrote:
> The rule I follow in choosing my tools is "Use the least complex tool
> that will get the job done."
Even if a more complex tool could do the job better?


In that case, the simpler model isn't necessarily getting the job
done. I purposely didn't refine the word "job" just so this would be
the case.


I didn't ask about any particular case. You stated a general
rule you follow, and I think that rule is nuts.

Now I've gotten off-topic. Threads are winning, and the industry
is going to multiple processors even for PC-class machines.
Might as well learn to use that power.


I own too many orphans to ever confuse popularity with technical
superiority.


The issue here is whether to confuse reality with what one might
wish reality to be.
I've learned how to use threads, and done some
non-trivial thread proramming, and hope to never repeat that
experience. It was the second most difficult programming task I've
ever attempted(*).
Great -- let's see it! Can you point out what parts were so
hard? How would you have solved the same problems without
threads?

As I said above, the real problem isn't threads per
se, it's that the model for programming them in popular languages is
still primitive. So far, to achieve the non-repitition goal, I've used
async I/O, restricted my use of real threads in popular languages to
trivial cases, and started using servers so someone else gets tod eal
with these issues.


Then maybe we should listen to those other people. Is there a
successful single-line-of-execution async-I/O based server that
provides a service as sophisticated as the modern relational-
database engines? Why do you think that is?
--
--Bryan
Aug 24 '05 #24

Bryan Olson <fa*********@nowhere.org> writes:
Mike Meyer wrote:
> Bryan Olson writes:
>>Mike Meyer wrote:
>> > The rule I follow in choosing my tools is "Use the least complex tool
>> > that will get the job done."
>>Even if a more complex tool could do the job better?
> In that case, the simpler model isn't necessarily getting the job
> done. I purposely didn't refine the word "job" just so this would be
> the case.

I didn't ask about any particular case. You stated a general
rule you follow, and I think that rule is nuts.


You're entitled to write code as complex and unmanageable as you
wish. Me, I'll stick with the simplest thing that solves the problem.
>>Now I've gotten off-topic. Threads are winning, and the industry
>>is going to multiple processors even for PC-class machines.
>>Might as well learn to use that power.

> I own too many orphans to ever confuse popularity with technical
> superiority.

The issue here is whether to confuse reality with what one might
wish reality to be.


Let's see. Reality is that writing correct programs is hard. Writing
correct programs that use concurrency is even harder, because of the
exponential explosion of the order that operations can happen
in. Personally, I'm willing to use anything I can find that makes
those tasks easier.
> I've learned how to use threads, and done some
> non-trivial thread proramming, and hope to never repeat that
> experience. It was the second most difficult programming task I've
> ever attempted(*).

Great -- lets see it! Can you point out what parts were so
hard? How would you have solved the same problems without
threads?


Google for "aws amiga web server". Somebody is liable to still have
the source around. The hard part was dealing with making sure that
every sequence of operations that actually happened was correct, of
course. The web server I wrote after that used async i/o, thus
avoiding the problem completely.
> As I said above, the real problem isn't threads per
> se, it's that the model for programming them in popular languages is
> still primitive. So far, to achieve the non-repitition goal, I've used
> async I/O, restricted my use of real threads in popular languages to
> trivial cases, and started using servers so someone else gets tod eal
> with these issues.

Then maybe we should listen to those other people.


Yes, we probably should. I do. The problem is, the designers of
popular languages apparently don't, so I'm stuck with lousy tools like
thread libraries (meaning you get no compile-time help in avoiding the
problems that plague concurrent programs) and Java's pseudo-monitors.
Is there a successful single-line-of-execution async-I/O based
server that provides a service as sophisticated as the modern
relational- database engines? Why do you think that is?


I don't know - is there? There have certainly been some sophisticated
network servers using async I/O and a single thread of execution. Are
they as sophisticated as a modern relational database? I dunno. Then
again, I already know that async i/o with a single thread of execution
isn't as powerful as threads, so there are almost certainly problem
areas where it isn't suitable and threads are. So what? That doesn't
make the async I/O model any less useful. Of course, if you're only
able to learn one tool, you should probably learn the most powerful
one you can. But just because you only know how to use a hammer
doesn't automatically make everything you encounter a nail.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 25 '05 #25

Mike Meyer wrote:
Bryan Olson <fa*********@nowhere.org> writes:
The issue here is whether to confuse reality with what one might
wish reality to be.


Let's see. Reality is that writing correct programs is hard. Writing
correct programs that use concurrency is even harder, because of the
exponential explosion of the order that operations can happen
in.


And don't forget:
Writing concurrent programs without protected memory between execution
contexts is even harder than with it.

Aug 26 '05 #26

On Thu, 25 Aug 2005 00:56:10 -0400, Mike Meyer <mw*@mired.org> wrote:
The issue here is whether to confuse reality with what one might
wish reality to be.


Let's see. Reality is that writing correct programs is hard. Writing
correct programs that use concurrency is even harder, because of the
exponential explosion of the order that operations can happen
in. Personally, I'm willing to use anything I can find that makes
those tasks easier.


Indeed so. Use threading (or whatever) when one has to, use an
asynchronous single-threaded process whenever you can.
--
Email: zen19725 at zen dot co dot uk
Aug 26 '05 #27

ze******@zen.co.uk (phil hunt) writes:
Let's see. Reality is that writing correct programs is hard. Writing
correct programs that use concurrency is even harder, because of the
exponential explosion of the order that operations can happen
in. Personally, I'm willing to use anything I can find that makes
those tasks easier.


Indeed so. Use threading (or whatever) when one has to, use an
asynchronous single-threaded process whenever you can.


This is silly. You could say the exact same thing about if
statements. The number of paths through the program is exponential in
the number of if statements executed. So we better get rid of if
statements.

Really, the essence of programming is to find ways of organizing the
program to stay reliable and maintainable in the face of that
combinatorial explosion. That means facing the problem and finding
solutions, not running away. The principle is no different for
threads than it is for if statements.
Aug 26 '05 #28

On 26 Aug 2005 14:35:03 -0700, Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
ze******@zen.co.uk (phil hunt) writes:
>Let's see. Reality is that writing correct programs is hard. Writing
>correct programs that use concurrency is even harder, because of the
>exponential explosion of the order that operations can happen
>in. Personally, I'm willing to use anything I can find that makes
>those tasks easier.
Indeed so. Use threading (or whatever) when one has to, use an
asynchronous single-threaded process whenever you can.


This is silly. You could say the exact same thing about if
statements. The number of paths through the program is exponential in
the number of if statements executed. So we better get rid of if
statements.


It's not the number of paths that's important.

What's important is *predictability*, e.g. which instruction will
the computer execute next?

If you only have one thread, you can tell by looking at the code
what gets executed next. It's very simple.

If you have 2 threads you can easily have a timing-based situation
that occurs rarely but which causes your program to behave weirdly.
This sort of bug is very hard to reproduce and therefore to fix.
Really, the essence of programming is to find ways of organizing the
program to stay reliable and maintainable in the face of that
combinatorial explosion.
Yes, and introducing code that makes randomly-occurring bugs more
likely makes debugging inherently harder.
That means facing the problem and finding
solutions, not running away.


Yes, find solutions. Don't find dangerous dead-ends that look like
solutions but which will give you lots of trouble.

--
Email: zen19725 at zen dot co dot uk
Aug 27 '05 #29

Paul Rubin <http://ph****@NOSPAM.invalid> writes:
ze******@zen.co.uk (phil hunt) writes:
>Let's see. Reality is that writing correct programs is hard. Writing
>correct programs that use concurrency is even harder, because of the
>exponential explosion of the order that operations can happen
>in. Personally, I'm willing to use anything I can find that makes
>those tasks easier.
Indeed so. Use threading (or whatever) when one has to, use an
asynchronous single-threaded process whenever you can.

This is silly. You could say the exact same thing about if
statements. The number of paths through the program is exponential in
the number of if statements executed. So we better get rid of if
statements.


The number of paths through a program isn't exponential in the number
of if statements, it's multiplicative. Each if statement multiplies
the number of paths through the program by 2, no matter how many other
statements you have.

On the other hand, with threads, the number of possible execution
orders is the number of threads raised to the power of the number of
instructions (assuming that instructions are atomic, which is probably
false) in the shared code segment. It's a *much* nastier problem.
Really, the essence of programming is to find ways of organizing the
program to stay reliable and maintainable in the face of that
combinatorial explosion. That means facing the problem and finding
solutions, not running away. The principle is no different for
threads than it is for if statements.


Correct. But choosing to use a tool that has a less complex model but
solves the problem is *not* running away. If it were, you'd have to
call using if statements rather than a goto running away, because
that's exactly what you're doing.

I do agree that we should face the problems and look for solutions,
because some problems can't be solved with async I/O. That's why I
posted the article titled "Static vs. dynamic checking for support of
concurrent programming" - I'm trying to find out if one potential
solution could be adapted for Python.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #30

>>>>> Paul Rubin <http://ph****@NOSPAM.invalid> (PR) wrote:
PR> ze******@zen.co.uk (phil hunt) writes:
>Let's see. Reality is that writing correct programs is hard. Writing
>correct programs that use concurrency is even harder, because of the
>exponential explosion of the order that operations can happen
>in. Personally, I'm willing to use anything I can find that makes
>those tasks easier.

Indeed so. Use threading (or whatever) when one has to, use an
asynchronous single-threaded process whenever you can.
PR> This is silly. You could say the exact same thing about if
PR> statements. The number of paths through the program is exponential in
PR> the number of if statements executed. So we better get rid of if
PR> statements. PR> Really, the essence of programming is to find ways of organizing the
PR> program to stay reliable and maintainable in the face of that
PR> combinatorial explosion. That means facing the problem and finding
PR> solutions, not running away. The principle is no different for
PR> threads than it is for if statements.


The principle is (more or less) similar, but for parallel programs it is an
order of magnitude more complicated. Compare the correctness proofs of
parallel programs with those of sequential programs.
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP 8DAE142BE17999C4]
Private email: pi**@vanoostrum.org
Aug 27 '05 #31

phil hunt wrote:
It's not the number of paths that's important.
Absolutely right. Non-trivial software always has too many paths
to consider them individually, so we have to reason generally.
What's important is *predictability*, e.g. which instruction will
the computer execute next?

If you only have one thread, you can tell by looking at the code
what gets executed next. It's very simple.
Not really. Trivially, an 'if' statement that depends upon input
data is statically predictable. Use of async I/O makes the
program's execution dependent upon external timing.

If you have 2 threads you can easily have a timing-based situation
that occurs rarely but which causes your program to behave wierdly.
This sort of bug is very hard to reproduce and therefore to fix.
So we need to learn to avoid it.
[...] Yes, find solutions. Don't find dangerous dead-ends that look like
solutions but which will give you lots of trouble.


If concurrency is a dead end, why do the programs that provide
the most sophisticated services of any in the world rely on it
so heavily?
--
--Bryan
Aug 28 '05 #32

Piet van Oostrum wrote:
>>Paul Rubin <http://ph****@NOSPAM.invalid> (PR) wrote:
PR> Really, the essence of programming is to find ways of organizing the
PR> program to stay reliable and maintainable in the face of that
PR> combinatorial explosion. That means facing the problem and finding
PR> solutions, not running away. The principle is no different for
PR> threads than it is for if statements.


The principle is (more or less) similar, but for parallel programs it
is an order of magnitude more complicated. Compare the correctness proofs of
parallel programs with those of sequential programs.


That's an artifact of what the research community is trying to
accomplish with the proof. Proving non-trivial programs correct
is currently beyond the state of the art.
--
--Bryan
Aug 28 '05 #33

Bryan Olson <fa*********@nowhere.org> writes:
phil hunt wrote:
> What's important is *predictability*, e.g. which instruction will
> the computer execute next?
>
> If you only have one thread, you can tell by looking at the code
> what gets executed next. It's very simple.
Not really. Trivially, an 'if' statement that depends upon input
data is statically predictable. Use of async I/O means makes the
programs execution dependent upon external timing.


Yes, but that dependency is tied to a single point - the select
call. The paths after that are statically predictable. This makes the
code very manageable.
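
A minimal sketch of that single-wait-point structure (an echo server on an arbitrary port, purely for illustration): select() is the only place the program blocks, and everything after it is plain sequential code.

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 7777))
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])  # the single blocking point
    for s in readable:
        if s is server:
            conn, _ = s.accept()                     # new client connected
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                s.send(data)                         # echo back (no partial-write
                                                     # handling in this sketch)
            else:
                sockets.remove(s)                    # client closed the connection
                s.close()
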
> If you have 2 threads you can easily have a timing-based situation
> that occurs rarely but which causes your program to behave wierdly.
> This sort of bug is very hard to reproduce and therefore to fix.

So we need to learn to avoid it.


No, we need tools that make it impossible to write code that triggers
it. Async I/O is one such tool, but it has lots of other limitations
that make it unsuitable for many applications.
[...]
> Yes, find solutions. Don't find dangerous dead-ends that look like
> solutions but which will give you lots of trouble.

If concurrency is a dead end, why do the programs that provide
the most sophisticated services of any in the world rely on it
so heavily?


I don't know what Phil is saying, but I'm not calling concurrency a
dead end. I'm calling the tools available in most programming
languages for dealing with it primitive.

We need better tools.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 28 '05 #34

Mike Meyer wrote:
Bryan Olson writes:
phil hunt wrote:
> What's important is *predictability*, e.g. which instruction will
> the computer execute next?
>
> If you only have one thread, you can tell by looking at the code
> what gets executed next. It's very simple.
Not really. Trivially, an 'if' statement that depends upon input
data is statically predictable. Use of async I/O means makes the
programs execution dependent upon external timing.

Yes, but that depenency is tied to a single point - the select
call. The paths after that are statically predictable. This makes the
code very managable.


Wow -- I could not disagree more. Returning back to some single
point for every possibly-blocking operation is painful to manage
even for simple GUIs, and humanly intractable for sophisticated
services.

Select is certainly useful, but it scales badly and isn't as
general as better tools.
[...] I'm calling the tools available in most programming
languages for dealing with it primitive.

We need better tools.


Agreed, but if 'select' is someone's idea of the state of the
art, they have little clue as to the tools already available.
Bringing the tools to Python remains a bit of a challenge, largely
because so many Pythoners are unaware.
--
--Bryan
Aug 29 '05 #35

Bryan Olson <fa*********@nowhere.org> writes:
Mike Meyer wrote:
> Bryan Olson writes:
> phil hunt wrote:
>> > What's important is *predictability*, e.g. which instruction will
>> > the computer execute next?
>> > If you only have one thread, you can tell by looking at the code
>> > what gets executed next. It's very simple.
>>Not really. Trivially, an 'if' statement that depends upon input
>>data is statically predictable. Use of async I/O means makes the
>>programs execution dependent upon external timing.
> Yes, but that depenency is tied to a single point - the select
> call. The paths after that are statically predictable. This makes the
> code very managable.

Wow -- I could not disagree more. Returning back to some single
point for every possibly-blocking operation is painful to manage
even for simple GUIs, and humanly intractable for sophisticated
services.


I'd be interested in what you're trying to do that winds up as
unmanageable. There are clearly things select+async IO is unsuitable
for. You may be running into problems because you're trying to use it
in such an environment. For instance, it's not clear to me that it
will work well for any kind of GUI programming, though I've had good
luck with it for command line interfaces.
Select is certainly useful, but it scales badly and isn't as
general as better tools.
It can't take advantage of multiple CPUs. I've not run into scaling
problems on single-CPU systems.
> [...] I'm calling the tools available in most programming
> languages for dealing with it primitive.
> We need better tools.

Agreed, but if 'select' is someone's idea of the state of the
art, they have little clue as to the tools already available.


Well, share! If you know of tools that make dealing with concurrent
code more manageable than an async I/O loop and have fewer
restrictions, I'm certainly interested in hearing about them.
Bringing the tools to Python remains a bit of challenge, largely
because so many Pythoners are unaware.


Well, the way to fix that is to talk about them!

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 30 '05 #36

Mike Meyer wrote:
Bryan Olson <fa*********@nowhere.org> writes:
> Bryan Olson writes:
> Trivially, an 'if' statement that depends upon input
>>data is statically predictable. Use of async I/O means makes the
>>programs execution dependent upon external timing.
Mike Meyer wrote:
> Yes, but that depenency is tied to a single point - the select
> call. The paths after that are statically predictable. This makes the
> code very managable.

Wow -- I could not disagree more. Returning back to some single
point for every possibly-blocking operation is painful to manage
even for simple GUIs, and humanly intractable for sophisticated
services.


I'd be interested in what you're trying to do that winds up as
unmanagable.


I'd like to call a utility in someone else's code, but it might
do something that could block. I find re-writing everyone's code
into a state-machine that can trap back to the central I/O loop
and later resume where it left off, to be hard to manage. When
I'm writing my own base classes, I find it hard to support the
back-to-the-I/O-loop-and-resume thing so that method over-rides
can call blocking operations when they need to.

There are clearly things select+async IO is unsuitable
for. You may be running into problems because you're trying to use it
in such an environment. For instance, it's not clear to me that it
will work well for any kind of GUI programming, though I've had good
look with it for command line interfaces.


Uh, not sure where you're coming from there. Are you unaware of
the 'event loop' in the GUIs, or unaware that it's async I/O?
Select is certainly useful, but it scales badly and isn't as
general as better tools.


It can't take advantage of multiple CPUs. I've not run into scaling
problems on single-CPU systems.


Select() is linear-time in the number of sockets to be checked
(not just the number found to be ready). There's a good write-up
of the problem and solutions Google-able as "The C10K problem".

> [...] I'm calling the tools available in most programming
> languages for dealing with it primitive.
> We need better tools.

Agreed, but if 'select' is someone's idea of the state of the
art, they have little clue as to the tools already available.


Well, share!


Uh, where have you been? I keep explaining that concurrency
systems have improved vastly in recent years. For a long time,
the most sophisticated software services generally have used
multiple lines of execution, and now that's mostly in the form
of threads. No one actually disagrees, but they go right on
knocking the modern methods.
--
--Bryan
Aug 30 '05 #37

P: n/a
On Sun, 28 Aug 2005 20:34:07 GMT, Bryan Olson <fa*********@nowhere.org> wrote:
phil hunt wrote:
Yes, find solutions. Don't find dangerous dead-ends that look like
solutions but which will give you lots of trouble.


If concurrency is a dead end, why do the programs that provide
the most sophisticated services of any in the world rely on it
so heavily?


Sometimes concurrency is the best (or only) way to do a job. Other
times, it's more trouble than its worth. A good programmer will know
which is which, and will not use an overly complex solution for the
project he is writing.

--
Email: zen19725 at zen dot co dot uk
Aug 30 '05 #38

On Sun, 28 Aug 2005 19:25:55 -0400, Mike Meyer <mw*@mired.org> wrote:
Bryan Olson <fa*********@nowhere.org> writes:
phil hunt wrote:
> Yes, find solutions. Don't find dangerous dead-ends that look like
> solutions but which will give you lots of trouble. If concurrency is a dead end, why do the programs that provide
the most sophisticated services of any in the world rely on it
so heavily?


I don't know what Phil is saying, but I'm not calling concurrency a
dead end.


In general it isn't. However, in many programs, it might be, in that
by using it you might end up with a very complex program that fails
unpredictably and is hard to debug: if you get in that situation,
you may have to start again, in which case your previous work will
have been a dead end.

(Actually I would suggest that knowing when to throw something away
and start again is something that differentiates between good and
bad programmers).
I'm calling the tools available in most programming
languages for dealing with it primitive.

We need better tools.


I agree.

--
Email: zen19725 at zen dot co dot uk
Aug 30 '05 #39

On Tue, 30 Aug 2005 05:15:34 GMT, Bryan Olson <fa*********@nowhere.org> wrote:
Mike Meyer wrote:
Bryan Olson <fa*********@nowhere.org> writes:
> Bryan Olson writes:
> Trivially, an 'if' statement that depends upon input
>>data is statically predictable. Use of async I/O means makes the
>>programs execution dependent upon external timing.
Mike Meyer wrote:
[...]
[...] I'm calling the tools available in most programming
> languages for dealing with it primitive.
> We need better tools.
Agreed, but if 'select' is someone's idea of the state of the
art, they have little clue as to the tools already available.


Well, share!


Uh, where have you been? I keep explaining that concurrency
systems have improved vastly in recent years. For a long time,
the most sophisticated software services generally have used
multiple lines of execution, and now that's mostly in the form
of threads. No one actually disagrees, but they go right on
knocking the modern methods.


I think Mike is asking for references/citations/links to the
"concurrency systems" and "modern methods" you are talking about ;-)
(I'd be interested too ;-)

Regards,
Bengt Richter
Aug 30 '05 #40

bo**@oz.net (Bengt Richter) writes:
On Tue, 30 Aug 2005 05:15:34 GMT, Bryan Olson <fa*********@nowhere.org> wrote:
Mike Meyer wrote:
> Bryan Olson <fa*********@nowhere.org> writes:
>> > Bryan Olson writes:
>> > Trivially, an 'if' statement that depends upon input
>> >>data is statically predictable. Use of async I/O means makes the
>> >>programs execution dependent upon external timing.
>>Mike Meyer wrote:

[...]
>> > [...] I'm calling the tools available in most programming
>> > languages for dealing with it primitive.
>> > We need better tools.
>>Agreed, but if 'select' is someone's idea of the state of the
>>art, they have little clue as to the tools already available.
>
> Well, share!


Uh, where have you been? I keep explaining that concurrency
systems have improved vastly in recent years. For a long time,
the most sophisticated software services generally have used
multiple lines of execution, and now that's mostly in the form
of threads. No one actually disagrees, but they go right on
knocking the modern methods.


I think Mike is asking for references/citations/links to the
"concurrency systems" and "modern methods" you are talking about ;-)
(I'd be interested too ;-)


Yup. I know systems are getting more concurrent. I also find that the
tools in popular languages for dealing with concurrency suck. I know
of some of these tools myself, but they either have restrictions on
the problems they can solve (like async I/O) or don't integrate well
with Python (like SCOOP).

So I'm definitely interested in learning about other alternatives!

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 31 '05 #41

phil hunt wrote:
Some times concurrency is the best (or only) way to do a job. Other
times, it's more trouble than its worth. A good programmer will know
which is which, and will not use an overly complex solution for the
project he is writing.


Also, a good programmer won't conflate concurrency with threads.

Aug 31 '05 #42

Bengt Richter wrote:
Bryan Olson wrote:
For a long time,
the most sophisticated software services generally have used
multiple lines of execution, and now that's mostly in the form
of threads. No one actually disagrees, but they go right on
knocking the modern methods.


I think Mike is asking for references/citations/links to the
"concurrency systems" and "modern methods" you are talking about ;-)
(I'd be interested too ;-)


Sure. I tried to be helpful there, but maybe I need to be more
specific. The ref from my previous post, Google-able as "The
C10K problem" is good but now a little dated. System support for
threads has advanced far beyond what Mr. Meyer dealt with in
programming the Amiga. In industry, the two major camps are
Posix threads, and Microsoft's Win32 threads (on NT or better).
Some commercial Unix vendors have mature support for Posix
threads; on Linux, the NPTL is young but clearly the way to move
forward. Java and Ada will wrap the native thread package, while
C(++) offers it directly. Microsoft's threading now works really
well. The WaitForMultipleObjects idea is a huge winner.
--
--Bryan
Aug 31 '05 #43

Bryan Olson <fa*********@nowhere.org> writes:
Bengt Richter wrote:
> Bryan Olson wrote:
>>For a long time,
>>the most sophisticated software services generally have used
>>multiple lines of execution, and now that's mostly in the form
>>of threads. No one actually disagrees, but they go right on
>>knocking the modern methods.
> I think Mike is asking for references/citations/links to the
> "concurrency systems" and "modern methods" you are talking about ;-)
> (I'd be interested too ;-)

Sure. I tried to be helpful there, but maybe I need to be more
specific. The ref from my previous post, Google-able as "The
C10K problem" is good but now a little dated.


That appears to be a discussion on squeezing the most out of a network
server, touching on threading models only so far as to mention what's
available for popular languages on some popular server OS's.
System support for threads has advanced far beyond what Mr. Meyer
dealt with in programming the Amiga.
I don't think it has - but see below.
In industry, the two major camps are Posix threads, and Microsoft's
Win32 threads (on NT or better). Some commercial Unix vendors have
mature support for Posix threads; on Linux, the NPTL is young but
clearly the way to move forward.
I haven't looked at Win32 threading. Maybe it's better than Posix
threads. Sure, Posix threads is better than what I dealt with 10 years
ago, but there's no way I'd call it "advanced beyond" that model. They
aren't even as good as the Python Threading/Queue model.
Java and Ada will wrap the native thread package, which
C(++) offers it directly.


Obviously, any good solution will wrap the native threads
package. Just like it wraps the I/O package. That doesn't make the
native threads package good. You also have to deal with the cost the
compiler pays when you implement threading primitives as library
calls. I'm surprised the paper didn't mention that problem, as it
affects performance so directly.

I don't know what Ada offers. Java gives you pseudo-monitors. I'm
almost willing to call them "advanced", but they still don't
really help much. There are some good threading models
available. Posix threads isn't one of them, nor is Java's
"synchronized".

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Sep 1 '05 #44

Mike Meyer wrote:
Bryan Olson writes:
System support for threads has advanced far beyond what Mr. Meyer
dealt with in programming the Amiga.


I don't think it has - but see below.
In industry, the two major camps are Posix threads, and Microsoft's
Win32 threads (on NT or better). Some commercial Unix vendors have
mature support for Posix threads; on Linux, the NPTL is young but
clearly the way to move forward.


I haven't looked at Win32 threading. Maybe it's better than Posix
threads. Sure, Posix threads is better than what I dealt with 10 years
ago, but there's no way I'd call it "advanced beyond" that model. They
aren't even as good as the Python Threading/Queue model.


Ever looked under the hood to see what happens when you wait
with a timeout on a Python queue/semaphore?

With Python threads/queues how do I wait for two queues (or
locks or semaphores) at one call? (I know some methods to
accomplish the same effect, but they suck.)
Java and Ada will wrap the native thread package, which
C(++) offers it directly.


Obviously, any good solution will wrap the native threads [...]


I recommend looking at how software that implements
sophisticated services actually works. Many things one
might think to be obvious turn out not to be true.
--
--Bryan
Sep 1 '05 #45

Mike Meyer <mw*@mired.org> writes:
Sure. I tried to be helpful there, but maybe I need to be more
specific. The ref from my previous post, Google-able as "The
C10K problem" is good but now a little dated.


That appears to be a discussion on squeezing the most out of a network
server, touching on threading models only so far as to mention what's
available for popular languages on some popular server OS's.


The C10K paper is really interesting as are several of the papers that
it links to. I didn't realize that opening a disk file couldn't be
done really asynchronously, though I'd consider that an OS bug.
Harder to deal with: if you mmap some data and touch an address that's
not in ram, it's inherently impossible (in the normal notion of user
processes) for the executing thread to continue doing any processing
while waiting for the page fault to be serviced.

So if you're doing any significant amount of paging (such as using an
mmapped database with some infrequently accessed bits), you really
can't use your whole computer (all the cpu cycles) without multiple
threads or processes, even with a single processor.
Sep 1 '05 #46

On Wed, 31 Aug 2005 22:44:06 -0400, Mike Meyer <mw*@mired.org> declaimed
the following in comp.lang.python:

I don't know what Ada offers. Java gives you pseudo-monitors. I'm
From the days of mil-std 1815, Ada has supported "tasks" which
communicate via "rendezvous"... The receiving task waits on an "accept"
statement (simplified -- there is a means to wait on multiple different
accepts, and/or time-out). The "sending" task calls the "entry" (looks
like a regular procedure call with in and/or out parameters -- matches
the signature of the waiting "accept"). As with "accept", there are
selective entry calls, wherein whichever task is waiting on the
matching accept will be invoked. During the rendezvous, the "sending"
task blocks until the "receiving" task exits the "accept" block -- at
which point both tasks may proceed concurrently.

As you might notice -- data can go both ways: in at the top of the
rendezvous, and out at the end.

Tasks are created by declaring them (there are also task types, so
one can easily create a slew of identical tasks).

procedure xyz is

a : task; -- not real Ada, again, simplified
b : task;

begin -- the tasks begin execution here
-- do stuff in the procedure itself, maybe call task entries
end;

Ada ALSO has protected objects, which have task style entries,
procedures, and probably functions, along with the data items being
protected. These may be closer to "monitors" or Java's "synchronized" in
behavior -- on the simple level, only one of the procedure/functions may
be active at a time. There can be guards on entries (for both protected
and task I believe) -- so an entry will not activate unless some
condition already exists (you can't get an item from a round-robin
buffer, say, if there isn't any data in the buffer).

--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Sep 1 '05 #47

On Thu, 01 Sep 2005 06:15:38 GMT, Bryan Olson <fa*********@nowhere.org>
declaimed the following in comp.lang.python:


With Python threads/queues how do I wait for two queues (or
Why have two queues? Use one queue and tag the items with the
sender's "id" (or return queue).
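
A small sketch of that single-tagged-queue pattern (the tags and messages are invented): both producers share one queue, so a single blocking get() covers everything the consumer waits for.

import threading
import queue                      # the 'Queue' module in the Python of 2005

events = queue.Queue()

def network_worker():
    events.put(("net", "packet arrived"))    # the tag says who sent it

def timer_worker():
    events.put(("timer", "tick"))

threading.Thread(target=network_worker).start()
threading.Thread(target=timer_worker).start()

for _ in range(2):
    source, payload = events.get()           # one wait covers both producers
    if source == "net":
        print("network event:", payload)
    else:
        print("timer event:", payload)
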
--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Sep 1 '05 #48

Dennis Lee Bieber wrote:
On Thu, 01 Sep 2005 06:15:38 GMT, Bryan Olson <fa*********@nowhere.org>
declaimed the following in comp.lang.python:
With Python threads/queues how do I wait for two queues (or


Why have two queues? Use one queue and tag the items with the
sender's "id" (or return queue).


I've faced the same issue, and it stems from having existing classes
which do not already support this sort of mechanism. Sure, you can
sometimes design/redesign stuff to do exactly that, but much of the time
you don't want to go rewriting some existing, stable code that waits on
one queue just because you now need to throw another into the mix.

-Peter
Sep 1 '05 #49

On Thu, 01 Sep 2005 07:59:16 -0400, Peter Hansen <pe***@engcorp.com>
declaimed the following in comp.lang.python:
I've faced the same issue, and it stems from having existing classes
which do not already support this sort of mechanism. Sure, you can
sometimes design/redesign stuff to do exactly that, but much of the time
you don't want to go rewriting some existing, stable code that waits on
one queue just because you now need to throw another into the mix.
Well, at that point, you could substitute "waiting on a queue" with
"waiting on a socket" and still have the same problem -- regardless of
the nature of the language/libraries for threading; it's a problem with
the design of the classes as applied to a threaded environment.
--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Sep 1 '05 #50
