Bytes | Software Development & Data Engineering Community

Will Python 3.0 remove the global interpreter lock (GIL)

I'm afraid that the GIL is killing the usefulness of Python for some
types of applications, now that chips with 4, 8, or 64 hardware threads
are here or coming soon.

What is the status about that for the future of python?

I know that at the moment almost nobody in the scripting world has
solved this problem, but it bites and it bites hard. Only Groovy, as a
Java plugin, has support, but I never tried it. Writing an interpreter
that does multithreading right seems to be extremely difficult, with
lots of advanced techniques like CAS and lock-free programming.

Even Smalltalk and Common Lisp haven't gotten it until now (with the
exception of certain experiments).

Sep 3 '07 #1
On 9/2/07, llothar <ll*****@web.de> wrote:
I'm afraid that the GIL is killing the usefulness of Python for some
types of applications, now that chips with 4, 8, or 64 hardware threads
are here or coming soon.

What is the status about that for the future of python?

I know that at the moment almost nobody in the scripting world has
solved this problem, but it bites and it bites hard. Only Groovy, as a
Java plugin, has support, but I never tried it. Writing an interpreter
that does multithreading right seems to be extremely difficult, with
lots of advanced techniques like CAS and lock-free programming.

Even Smalltalk and Common Lisp haven't gotten it until now (with the
exception of certain experiments).

No. http://www.artima.com/weblogs/viewpo...?thread=211430

--
http://www.advogato.org/person/eopadoan/
Bookmarks: http://del.icio.us/edcrypt
Sep 3 '07 #2
On 3 Sep., 07:38, "Eduardo O. Padoan" <eduardo.pad...@gmail.com>
wrote:
No. http://www.artima.com/weblogs/viewpo...?thread=211430

Oops, I meant: http://www.artima.com/forums/threade...&thread=211200
Thanks. I wish there were a project for rewriting the C interpreter
to make it better and more usable for threaded use.

But the CPU infrastructure is also not settled enough, so maybe it's
good to wait a few more years with this until Intel and AMD know what
they are doing.
Sep 3 '07 #4
On Sep 2, 11:16 pm, llothar <llot...@web.de> wrote:
On 3 Sep., 07:38, "Eduardo O. Padoan" <eduardo.pad...@gmail.com>
wrote:
No. http://www.artima.com/weblogs/viewpo...?thread=211430
Oops, I meant: http://www.artima.com/forums/threade...&thread=211200

Thanks. I wish there were a project for rewriting the C interpreter
to make it better and more usable for threaded use.

But the CPU infrastructure is also not settled enough, so maybe it's
good to wait a few more years with this until Intel and AMD know what
they are doing.

I read somewhere that PyPy won't have the interpreter lock (I may be
wrong though).
Check it out: http://codespeak.net/pypy
Sep 3 '07 #5
On Sun, 2007-09-02 at 17:21 -0700, llothar wrote:
I'm afraid that the GIL is killing the usefulness of Python for some
types of applications, now that chips with 4, 8, or 64 hardware threads
are here or coming soon.

What is the status about that for the future of python?
The GIL is an implementation-specific issue with CPython. It will not be
removed from CPython for the foreseeable future, but you can already get a
GIL-free interpreter with Jython and IronPython. AFAIK there are plans
to remove the GIL in PyPy.

According to the last PyPy release announcement, they're running at
about half the speed of CPython, and have a preliminary JIT that can
translate certain integer operations into assembly, and will be expanded
upon in future releases. If you're looking for a progressive alternative
to CPython, I'd keep an eye on that project ;-)

--
Evan Klitzke <ev**@yelp.com>

Sep 3 '07 #6
On Sep 3, 2:21 am, llothar <llot...@web.de> wrote:
I'm afraid that the GIL is killing the usefulness of Python for some
types of applications, now that chips with 4, 8, or 64 hardware threads
are here or coming soon.

What is the status about that for the future of python?
This is a FAQ; you will find thousands of discussions on the net about
it.
My personal opinion (and I am not the only one in the Python
community) is that if you want to scale, the way to go is to use
processes, not threads, so removing the GIL would be a waste of effort
anyway.
Look at the 'processing' module on PyPI.
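A minimal sketch of that process-based approach, using the standard-library multiprocessing API (the descendant of the third-party 'processing' module mentioned above; the worker function and pool size here are illustrative, not from the thread):

```python
from multiprocessing import Pool

def square(n):
    # CPU-bound work runs in separate worker processes, each with
    # its own interpreter and therefore its own GIL.
    return n * n

if __name__ == "__main__":
    # Pool size is illustrative; Pool() defaults to os.cpu_count().
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))
        # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the workers are separate OS processes, the operating system can schedule them on different cores, sidestepping the GIL entirely for CPU-bound work.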

Michele Simionato

Sep 3 '07 #7
On Sep 3, 9:15 am, Michele Simionato <michele.simion...@gmail.com>
wrote:
On Sep 3, 2:21 am, llothar <llot...@web.de> wrote:
My personal opinion (and I am not the only one in the Python
community) is that if you want to scale, the way to go is to use
processes, not threads, so removing the GIL would be a waste of effort
anyway.
Look at the 'processing' module on PyPI.

Michele Simionato

I second that. You may also look here:
http://www.parallelpython.com/

I tested it and it works as expected. You can see all your processor
cores working, nicely balanced.

Sep 3 '07 #8
Michele Simionato <mi***************@gmail.com> writes:
On Sep 3, 2:21 am, llothar <llot...@web.de> wrote:
I'm afraid that the GIL is killing the usefulness of Python for
some types of applications, now that chips with 4, 8, or 64 hardware
threads are here or coming soon.

This is FAQ. You will find thousands of discussion on the net about
that.
My personal opinion (and I am not the only one in the Python
community) is that if you want to scale the way to go is to use
processes, not threads, so removing the GIL would be a waste of
effort anyway.
Yes. Processes are cheap on well-designed operating systems, and using
processes to subdivide your processor usage encourages simple, modular
interfaces between parts of a program. Threads, while also cheap, are
much more difficult and fiddly to program correctly, hence are rarely
an overall win.

One common response to that is "Processes are expensive on Win32". My
response to that is that if you're programming on Win32 and expecting
the application to scale well, you already have problems that must
first be addressed that are far more fundamental than the GIL.

--
\ "Pinky, are you pondering what I'm pondering?" "Umm, I think |
`\ so, Brain, but three men in a tub? Ooh, that's unsanitary!" -- |
_o__) _Pinky and The Brain_ |
Ben Finney
Sep 3 '07 #9
I was wondering (and maybe I still do) about this GIL "problem". I am
relatively new to Python (less than a year), and when I started to
think about it I said: "Oh, this IS a problem". But when I dug a
little deeper, I found that "Ah, maybe it isn't".
I strongly believe that the best usage of multi-core processors
will be achieved if programming languages are modified to support this
at their hearts: code blocks that would be identified by the
compiler and run in parallel, and such things. Laboratories are working
on this stuff, but I do not expect anything in the very near future.

So, as I mentioned above, there are solutions for that right now
("parallel python" and others) that enable us, with little effort, to
spawn a new Python interpreter, thus allowing the OS to schedule it on
a different core and do the job this way relatively cheaply.
I wouldn't recommend going to IronPython despite the fact that the CLR
better utilizes MP. The reason for this is that I would NEVER give up
the freedom that CPython gives me in exchange for "better" usage of MP
and platform lock-in.

Sep 3 '07 #10
On Sep 2, 5:38 pm, "Eduardo O. Padoan" <eduardo.pad...@gmail.com>
wrote:
No. http://www.artima.com/weblogs/viewpo...?thread=211430

Oops, I meant: http://www.artima.com/forums/threade...&thread=211200

-- http://www.advogato.org/person/eopadoan/
Bookmarks: http://del.icio.us/edcrypt
"No. We're not changing the CPython implementation much. Getting rid
of the GIL would be a massive rewrite of the interpreter because all
the internal data structures (and the reference counting operations)
would have to be made thread-safe. This was tried once before (in the
late '90s by Greg Stein) and the resulting interpreter ran twice as
slow."

How much faster/slower would Greg Stein's code be on today's
processors versus CPython running on the processors of the late
1990's? And if you decide to answer, please add a true/false response
to this statement - "CPython in the late 1990's ran too slow".

Sep 19 '07 #13

"TheFlyingDutchman" <zz******@aol.com> wrote in message
news:11**********************@o80g2000hse.googlegroups.com...
| On Sep 2, 5:38 pm, "Eduardo O. Padoan" <eduardo.pad...@gmail.com>
| wrote:
| No. http://www.artima.com/weblogs/viewpo...?thread=211430
| >
| Oops, I meant: http://www.artima.com/forums/threade...&thread=211200
| >
| -- http://www.advogato.org/person/eopadoan/
| Bookmarks: http://del.icio.us/edcrypt
|
| "No. We're not changing the CPython implementation much. Getting rid
| of the GIL would be a massive rewrite of the interpreter because all
| the internal data structures (and the reference counting operations)
| would have to be made thread-safe. This was tried once before (in the
| late '90s by Greg Stein) and the resulting interpreter ran twice as
| slow."

Since Guido wrote that, more ideas, interest, and promises of effort have
been put forth to remove or revise the GIL or do other things to make
using multiple cores easier. (The latter being the point of the concern
over the GIL.)

| How much faster/slower would Greg Stein's code be on today's
| processors versus CPython running on the processors of the late
| 1990's?

Perhaps a bit faster, though processor speeds have not increased that
much in the last couple of years.

|And if you decide to answer, please add a true/false response
| to this statement - "CPython in the late 1990's ran too slow".

False by late 1990's standards, True by today's standards ;-).

Most people are not currently bothered by the GIL and would not want its
speed halved.

In any case, any of the anti-GIL people are free to update Stein's code
and distribute a GIL-less version of CPython. (Or to use Jython or
IronPython.)

tjr

Sep 19 '07 #14
On Tue, 18 Sep 2007 18:09:26 -0700, TheFlyingDutchman wrote:
How much faster/slower would Greg Stein's code be on today's processors
versus CPython running on the processors of the late 1990's?
I think a better question is, how much faster/slower would Stein's code
be on today's processors, versus CPython being hand-simulated in a giant
virtual machine made of clockwork?

--
Steven.
Sep 19 '07 #15
On 2007-09-19, Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> wrote:
On Tue, 18 Sep 2007 18:09:26 -0700, TheFlyingDutchman wrote:
>How much faster/slower would Greg Stein's code be on today's
>processors versus CPython running on the processors of the
>late 1990's?

I think a better question is, how much faster/slower would
Stein's code be on today's processors, versus CPython being
hand-simulated in a giant virtual machine made of clockwork?
That depends on whether you have the steam-driven model or the
water-wheel driven model. The steam-drive one is quite a bit
faster once you get it going, but the water-wheel model has a
much shorter start-up time (though it is more weather
dependent).

--
Grant Edwards grante Yow! Hey, waiter! I want
at a NEW SHIRT and a PONY TAIL
visi.com with lemon sauce!
Sep 19 '07 #16
On Sep 19, 8:51 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Tue, 18 Sep 2007 18:09:26 -0700, TheFlyingDutchman wrote:
How much faster/slower would Greg Stein's code be on today's processors
versus CPython running on the processors of the late 1990's?

I think a better question is, how much faster/slower would Stein's code
be on today's processors, versus CPython being hand-simulated in a giant
virtual machine made of clockwork?

--
Steven.
Steven, You forgot this part:

"And if you decide to answer, please add a true/false response
to this statement - "CPython in the late 1990's ran too slow"'.
Sep 19 '07 #17
On 19 Sep, 03:09, TheFlyingDutchman <zzbba...@aol.com> wrote:
How much faster/slower would Greg Stein's code be on today's
processors versus CPython running on the processors of the late
1990's? And if you decide to answer, please add a true/false response
to this statement - "CPython in the late 1990's ran too slow".
Too slow for what? And what's the fixation with CPython, anyway? Other
implementations of Python 2.x don't have the GIL. Contrary to popular
folklore, Jython has been quite a reasonable implementation of Python
for about half as long as CPython has been around, if you don't mind
the JVM. I'm sure people have lots of complaints about Jython like
they do about CPython and the GIL, thinking that complaining about it
is going to make the situation better, or that they're imparting some
kind of "wisdom" to which the people who actually wrote the code must
be oblivious, but nobody is withholding the code from anyone who wants
to actually improve it.

And there we learn something: that plenty of people are willing to
prod others into providing them with something that will make their
lives better, their jobs easier, and their profits greater, but not so
many are interested in contributing back to the cause and taking on
very much of the work themselves. Anyway, the response to your
statement is "false". Now you'll have to provide us with the insight
we're all missing. Don't disappoint!

Paul

Sep 19 '07 #18
On Sep 19, 3:41 pm, Paul Boddie <p...@boddie.org.uk> wrote:
On 19 Sep, 03:09, TheFlyingDutchman <zzbba...@aol.com> wrote:
How much faster/slower would Greg Stein's code be on today's
processors versus CPython running on the processors of the late
1990's? And if you decide to answer, please add a true/false response
to this statement - "CPython in the late 1990's ran too slow".

Too slow for what? And what's the fixation with CPython, anyway? Other
implementations of Python 2.x don't have the GIL. Contrary to popular
folklore, Jython has been quite a reasonable implementation of Python
for about half as long as CPython has been around, if you don't mind
the JVM. I'm sure people have lots of complaints about Jython like
they do about CPython and the GIL, thinking that complaining about it
is going to make the situation better, or that they're imparting some
kind of "wisdom" to which the people who actually wrote the code must
be oblivious, but nobody is withholding the code from anyone who wants
to actually improve it.
>
And there we learn something: that plenty of people are willing to
prod others into providing them with something that will make their
lives better, their jobs easier, and their profits greater, but not so
many are interested in contributing back to the cause and taking on
very much of the work themselves. Anyway, the response to your
statement is "false". Now you'll have to provide us with the insight
we're all missing. Don't disappoint!

Paul
Paul, it's a pleasure to see that you are not entirely against
complaints.

The very fastest Intel processor of the late 1990's that I found came
out in October 1999 and had a speed around 783 MHz. Current fastest
processors are something like 3.74 GHz, with larger caches. Memory is
also faster and larger. It appears that someone running a non-GIL
implementation of CPython today would have significantly faster
performance than a GIL CPython implementation had in the late 1990's.
Correct me if I am wrong, but it seems that saying non-GIL CPython is
too slow, while once valid, has become invalid due to the increase in
computing power that has taken place.

Sep 19 '07 #19

"Terry Reedy" <tj*****@udel.edu> wrote in message
news:fc**********@sea.gmane.org...
|
| "TheFlyingDutchman" <zz******@aol.com> wrote in message
| news:11**********************@o80g2000hse.googlegroups.com...

| Since Guido wrote that, more ideas, interest, and promises of effort
| have been put forth to remove or revise the GIL or do other things to
| make using multiple cores easier. (The latter being the point of the
| concern over the GIL.)

A few days ago, an undergraduate posted on the dev list that he just
started an independent study project on removing the GIL. Maybe we'll get
a report early next year.

Guido also said that he is willing to make changes to the CPython internals
to aid multiproccessor usage [as long, presumably, as it does not cut speed
in half].

|| How much faster/slower would Greg Stein's code be on today's
|| processors versus CPython running on the processors of the late
|| 1990's?
|
| Perhaps a bit faster, though processor speeds have not increased so much
| the last couple of years.

This assumes that comparing versions of 1.5 is still relevant. As far as I
know, his patch has not been maintained to apply against current Python.
This tells me that no one to date really wants to dump the GIL at the cost
of half Python's speed. Of course not. The point of dumping the GIL is to
use multiprocessors to get more speed! So with two cores and extra
overhead, Stein-patched 1.5 would not even break even.

Quad (and more) cores are a different matter. Hence, I think, the
resurgence of interest.
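The break-even arithmetic behind that point can be sketched in a few lines. This is a toy model, not a benchmark: the 0.5 factor is the roughly-half speed reported for Stein's patch, and it optimistically assumes a perfectly parallel workload with no extra synchronization overhead.

```python
# Toy model: a GIL-free interpreter that runs each thread at half the
# single-threaded speed of stock CPython. Relative throughput on a
# perfectly parallel workload is then just cores * 0.5.
def relative_throughput(cores, per_thread_speed=0.5):
    return cores * per_thread_speed

for cores in (1, 2, 4, 8):
    print("%d cores -> %.1fx" % (cores, relative_throughput(cores)))
# 1 -> 0.5x, 2 -> 1.0x (break even at best), 4 -> 2.0x, 8 -> 4.0x
```

Two cores merely recover the speed you gave up, which is why quad and higher core counts change the calculus.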

||And if you decide to answer, please add a true/false response
|| to this statement - "CPython in the late 1990's ran too slow".
|
| False by late 1990's standards, True by today's standards ;-).

So now this question for you: "CPython 2.5 runs too slow in 2007: true or
false?"

If you answer false, then there is no need for GIL removal.
If you answer true, then cutting its speed for 90+% of people is bad.

| Most people are not currently bothered by the GIL and would not want its
| speed halved.

And another question: why should such people spend time they do not have to
make Python worse for themselves?

Terry Jan Reedy


Sep 20 '07 #20
On Wed, 19 Sep 2007 11:07:48 -0700, TheFlyingDutchman wrote:
On Sep 19, 8:51 am, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
>On Tue, 18 Sep 2007 18:09:26 -0700, TheFlyingDutchman wrote:
How much faster/slower would Greg Stein's code be on today's
processors versus CPython running on the processors of the late
1990's?

I think a better question is, how much faster/slower would Stein's code
be on today's processors, versus CPython being hand-simulated in a
giant virtual machine made of clockwork?

--
Steven.

Steven, You forgot this part:

"And if you decide to answer, please add a true/false response to this
statement - "CPython in the late 1990's ran too slow"'.

No, I ignored it, because it doesn't have a true/false response. It's a
malformed request. "Too slow" for what task? Compared to what
alternative? Fast and slow are not absolute terms, they are relative. A
sloth is "fast" compared to continental drift, but "slow" compared to the
space shuttle.

BUT even if we all agreed that CPython was (or wasn't) "too slow" in the
late 1990s, why on earth do you imagine that is important? It is no
longer the late 1990s, it is now 2007, and we are not using Python 1.4
any more.

--
Steven.
Sep 20 '07 #21
On Wed, 19 Sep 2007 15:59:59 -0700, TheFlyingDutchman wrote:
Paul it's a pleasure to see that you are not entirely against
complaints.
I'm not against complaints either, so long as they are well-thought out.
I've made a few of my own over the years, some of which may have been
less well-thought out than others.

The very fastest Intel processor of the late 1990's that I found came
out in October 1999 and had a speed around 783 MHz. Current fastest
processors are something like 3.74 GHz, with larger caches. Memory is
also faster and larger. It appears that someone running a non-GIL
implementation of CPython today would have significantly faster
performance than a GIL CPython implementation had in the late 1990's.
That's an irrelevant comparison. It's a STUPID comparison. The two
alternatives aren't "non-GIL CPython on 2007 hardware" versus "GIL
CPython on 1999 hardware" because we aren't using GIL CPython on 1999
hardware, we're using it on 2007 hardware. *That's* the alternative to
the non-GIL CPython that you need to compare against.

Why turn your back on eight years of faster hardware? What's the point of
getting rid of the GIL unless it leads to faster code? "Get the speed and
performance of 1999 today!" doesn't seem much of a selling point in 2007.

Correct me if I am wrong, but it seems that saying non-GIL CPython is
too slow, while once valid, has become invalid due to the increase in
computing power that has taken place.
You're wrong, because the finishing line has shifted -- performance we
were satisfied with in 1998 would be considered unbearable to work with
in 2007.

I remember in 1996 (give or take a year) being pleased that my new
computer allowed my Pascal compiler to compile a basic, bare-bones GUI
text editor in a mere two or four hours, because it used to take up to
half a day on my older computer. Now, I expect to compile a basic text
editor in minutes, not hours.

According to http://linuxreviews.org/gentoo/compiletimes/

the whole of Openoffice-ximian takes around six hours to compile. Given
the speed of my 1996 computer, it would probably take six YEARS to
compile something of Openoffice's complexity.
As a purely academic exercise, we might concede that the non-GIL version
of CPython 1.5 running on a modern, dual-core CPU with lots of RAM will
be faster than CPython 2.5 running on an eight-year old CPU with minimal
RAM. But so what? That's of zero practical interest for anyone running
CPython 2.5 on a modern PC.

If you are running a 1999 PC, your best bet is to stick with the standard
CPython 1.5 including the GIL, because it is faster than the non-GIL
version.

If you are running a 2007 PC, your best bet is *still* to stick with the
standard CPython (version 2.5 now, not 1.5), because it will still be
faster than the non-GIL version (unless you have four or more processors,
and maybe not even then).

Otherwise, there's always Jython or IronPython.

--
Steven.
Sep 20 '07 #22
On Sep 19, 8:54 pm, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
On Wed, 19 Sep 2007 19:14:39 -0700, Paul Rubin wrote:
>
etc. is at best an excuse for laziness.

What are you doing about solving the problem? Apart from standing on the
side-lines calling out "Get yer lazy behinds movin', yer lazy bums!!!" at
the people who aren't even convinced there is a problem that needs
solving?
He's trying to convince the developers that there is a problem. That
is not the same as your strawman argument.
>
And more and more often, in the
application areas where Python is deployed, it's just plain wrong. Take
web servers: a big site like Google has something like a half million of
them. Even the comparatively wimpy site where I work has a couple
thousand. If each server uses 150 watts of power (plus air
conditioning), then if making the software 2x faster lets us shut down
1000 of them,

What on earth makes you think that would be anything more than a
temporary, VERY temporary, shutdown? My prediction is that the last of
the machines wouldn't have even been unplugged before management decided
that running twice as fast, or servicing twice as many people at the same
speed, is more important than saving on the electricity bill, and they'd
be plugged back in.
Plugging back in 1000 servers would be preferable to buying and
plugging in 2000 new servers, which is what would occur if the software
in this example had not been sped up 2x and management had still
desired a 2x speed-up in system performance, as you suggest.

Sep 20 '07 #23
On Sep 19, 5:08 pm, "Terry Reedy" <tjre...@udel.edu> wrote:
"Terry Reedy" <tjre...@udel.edu> wrote in message
This is a little confusing, because Google Groups does not show your
original post (not uncommon for them to lose a post in a thread, but
somehow still reflect the fact that it exists in the total-posts
number that they display) that you are replying to.

>
This assumes that comparing versions of 1.5 is still relevant. As far as I
know, his patch has not been maintained to apply against current Python.
This tells me that no one to date really wants to dump the GIL at the cost
of half Python's speed. Of course not. The point of dumping the GIL is to
use multiprocessors to get more speed! So with two cores and extra
overhead, Stein-patched 1.5 would not even break even.

Quad (and more) cores are a different matter. Hence, I think, the
resurgence of interest.
I am confused about the benefits/disadvantages of the "GIL removal".
Is it correct that the GIL is preventing CPython from having threads?

Is it correct that the only issue with the GIL is the prevention of
being able to do multi-threading?

If you only planned on writing single-threaded applications would GIL-
removal have no benefit?

Can threading have a performance benefit on a single-core machine
versus running multiple processes?
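One data point on that last question: even with the GIL, CPython threads can overlap blocking I/O, because the interpreter releases the GIL while a thread waits. A small illustrative sketch (the `fetch` function and timings are stand-ins, not from this thread):

```python
import threading
import time

def fetch(delay):
    # Stands in for a blocking call (socket read, disk I/O, ...);
    # CPython releases the GIL while a thread blocks, so the waits
    # overlap even on a single core.
    time.sleep(delay)

start = time.time()
threads = [threading.Thread(target=fetch, args=(0.2,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print("elapsed: %.2fs" % elapsed)  # roughly 0.2 s, not 1.0 s
```

So for I/O-bound work, threads can beat sequential code on one core; it is CPU-bound Python bytecode that the GIL serializes.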
So now this question for you: "CPython 2.5 runs too slow in 2007: true or
false?"
I guess I gotta go with Steven D'Aprano - both true and false
depending on your situation.
If you answer false, then there is no need for GIL removal.
OK, I see that.
If you answer true, then cutting its speed for 90+% of people is bad.
OK, seems reasonable, assuming that multi-threading cannot be
implemented without a performance hit on single-threaded applications.
Is it a computer science maxim that giving an interpreted language
multi-threading will always negatively impact the performance of
single-threaded applications?
>
| Most people are not currently bothered by the GIL and would not want its
| speed halved.

And another question: why should such people spend time they do not have to
make Python worse for themselves?
Saying they don't have time to make a change, any change, is always
valid in my book. I cannot argue against that. Ditto for them saying
they don't want to make a change with no explanation. But it seems if
they make statements about why a change is not good, then it is fair
to make a counter-argument. I do agree with the theme of Steven
D'Aprano's comments in that it should be a cordial counter-argument
and not a demand.
Sep 20 '07 #24

On Sep 19, 5:08 pm, "Terry Reedy" <tjre...@udel.edu> wrote:
"Terry Reedy" <tjre...@udel.edu> wrote in message
>
This assumes that comparing versions of 1.5 is still relevant. As far as I
know, his patch has not been maintained to apply against current Python.
This tells me that no one to date really wants to dump the GIL at the cost
of half Python's speed. Of course not. The point of dumping the GIL is to
use multiprocessors to get more speed! So with two cores and extra
overhead, Stein-patched 1.5 would not even break even.
Is the only point in getting rid of the GIL to allow multi-threaded
applications?

Can't multiple threads also provide a performance boost versus
multiple processes on a single-core machine?
>
So now this question for you: "CPython 2.5 runs too slow in 2007: true or
false?"
Ugh, I guess I have to agree with Steven D'Aprano - it depends.
>
If you answer false, then there is no need for GIL removal.
OK, I can see that.
If you answer true, then cutting its speed for 90+% of people is bad.
OK, have to agree. Sounds like it could be a good candidate for a
fork. One question - is it a computer science maxim that an
interpreter that implements multi-threading will always be slower when
running single threaded apps?
>
And another question: why should such people spend time they do not have to
make Python worse for themselves?
I can't make an argument for someone doing something for free that
they don't have the time for. Ditto for doing something for free that
they don't want to do. But it does seem that if they give a reason for
why it's the wrong thing to do, it's fair to make a counter-argument.
Although I agree with Steven D'Aprano's theme in that it should be a
cordial rebuttal and not a demand.
Sep 20 '07 #25

"Steven D'Aprano" <steve@REMOVEau> wrote:
>
I think a better question is, how much faster/slower would Stein's code
be on today's processors, versus CPython being hand-simulated in a giant
virtual machine made of clockwork?
This obviously depends on whether or not the clockwork is orange

- Hendrik

Sep 20 '07 #26
TheFlyingDutchman wrote:
(snip)
I am confused about the benefits/disadvantages of the "GIL removal".
Is it correct that the GIL is preventing CPython from having threads?

Is it correct that the only issue with the GIL is the prevention of
being able to do multi-threading?
http://docs.python.org/lib/module-thread.html
http://docs.python.org/lib/module-threading.html

Sep 20 '07 #27
On 2007-09-20, TheFlyingDutchman <zz******@aol.com> wrote:
Is the only point in getting rid of the GIL to allow multi-threaded
applications?
That's the main point.
Can't multiple threads also provide a performance boost versus
multiple processes on a single-core machine?
That depends on the algorithm, the code, and the
synchronization requirements.
OK, have to agree. Sounds like it could be a good candidate
for a fork. One question - is it a computer science maxim that
an interpreter that implements multi-threading will always be
slower when running single threaded apps?
I presume you're referring to Amdahl's law.

http://en.wikipedia.org/wiki/Amdahl's_law
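For reference, Amdahl's law bounds the speedup by the serial fraction of the work. A short sketch (the example fractions are illustrative):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only `parallel_fraction` of the
    work can be spread across `cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

print(round(amdahl_speedup(0.9, 4), 2))      # 3.08 on 4 cores
print(round(amdahl_speedup(0.9, 10**6), 2))  # ~10: the serial 10% dominates
```

Even a GIL-free interpreter only helps with the parallelizable fraction; a 90%-parallel program can never exceed 10x no matter how many cores you add.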

Remember, there are reasons other than speed on a
multi-processor platform for wanting to do multi-threading.
Sometimes it just maps onto the application better than
a single-threaded solution.

--
Grant Edwards grante Yow! I want you to MEMORIZE
at the collected poems of
visi.com EDNA ST VINCENT MILLAY
... BACKWARDS!!
Sep 20 '07 #28
Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> writes:
That's why your "comparatively wimpy site" preferred to throw extra web
servers at the job of serving webpages rather than investing in smarter,
harder-working programmers to pull the last skerricks of performance out
of the hardware you already had.
The compute intensive stuff (image rendering and crunching) has
already had most of those skerricks pulled out. It is written in C
and assembler (not by us). Only a small part of our stuff is written
in Python: it just happens to be the part I'm involved with.
But Python speed ups don't come for free. For instance, I'd *really*
object if Python ran twice as fast for users with a quad-core CPU, but
twice as slow for users like me with only a dual-core CPU.
Hmm. Well if the tradeoff were selectable at python configuration
time, then this option would certainly be worth doing. You might not
have a 4-core cpu today but you WILL have one soon.
What on earth makes you think that would be anything more than a
temporary, VERY temporary, shutdown? My prediction is that the last of
the machines wouldn't have even been unplugged
Of course that example was a reductio ad absurdum. In reality they'd
use the speedup to compute 2x as much stuff, rather than ever powering
any servers down. Getting the extra computation is more valuable than
saving the electricity. It's just easier to put a dollar value on
electricity than on computation in an example like this. It's also
the case for our specific site that our server cluster is in large
part a disk farm and not just a compute farm, so even if we sped up
the software infinitely we'd still need a lot of boxes to bolt the
disks into and keep them spinning.
Now there's a thought... given that Google:

(1) has lots of money;
(2) uses Python a lot;
(3) already employs both Guido and (I think...) Alex Martelli and
possibly other Python gurus;
(4) is not shy in investing in Open Source projects;
(5) and most importantly uses technologies that need to be used across
multiple processors and multiple machines

one wonders if Google's opinion of where core Python development needs to
go is the same as your opinion?
I think Google's approach has been to do cpu-intensive tasks in other
languages, primarily C++. It would still be great if they put some
funding into PyPy development, since I think I saw something about the
EU funding being interrupted.
Sep 20 '07 #29
On 20 Sep 2007 07:43:18 -0700, Paul Rubin
<"http://phr.cx"@nospam.invalid> wrote:
Steven D'Aprano <st***@REMOVE-THIS-cybersource.com.au> writes:
That's why your "comparatively wimpy site" preferred to throw extra web
servers at the job of serving webpages rather than investing in smarter,
harder-working programmers to pull the last skerricks of performance out
of the hardware you already had.

The compute intensive stuff (image rendering and crunching) has
already had most of those skerricks pulled out. It is written in C
and assembler (not by us). Only a small part of our stuff is written
in Python: it just happens to be the part I'm involved with.
That means that this part is also unaffected by the GIL.
But Python speed ups don't come for free. For instance, I'd *really*
object if Python ran twice as fast for users with a quad-core CPU, but
twice as slow for users like me with only a dual-core CPU.

Hmm. Well if the tradeoff were selectable at python configuration
time, then this option would certainly be worth doing. You might not
have a 4-core cpu today but you WILL have one soon.
What on earth makes you think that would be anything more than a
temporary, VERY temporary, shutdown? My prediction is that the last of
the machines wouldn't have even been unplugged

Of course that example was a reductio ad absurdum. In reality they'd
use the speedup to compute 2x as much stuff, rather than ever powering
any servers down. Getting the extra computation is more valuable than
saving the electricity. It's just easier to put a dollar value on
electricity than on computation in an example like this. It's also
the case for our specific site that our server cluster is in large
part a disk farm and not just a compute farm, so even if we sped up
the software infinitely we'd still need a lot of boxes to bolt the
disks into and keep them spinning.
I think this is instructive, because it's pretty typical of GIL
complaints. Someone gives an example where the GIL is limited, but
upon inspection it turns out that the actual bottleneck is elsewhere,
that the GIL is being sidestepped anyway, and that the supposed
benefits of removing the GIL wouldn't materialize because the problem
space isn't really as described.
Now there's a thought... given that Google:

(1) has lots of money;
(2) uses Python a lot;
(3) already employs both Guido and (I think...) Alex Martelli and
possibly other Python gurus;
(4) is not shy in investing in Open Source projects;
(5) and most importantly uses technologies that need to be used across
multiple processors and multiple machines

one wonders if Google's opinion of where core Python development needs to
go is the same as your opinion?

I think Google's approach has been to do cpu-intensive tasks in other
languages, primarily C++. It would still be great if they put some
funding into PyPy development, since I think I saw something about the
EU funding being interrupted.
--
At the really high levels of scalability, such as across a server
farm, threading is useless. The entire point of threads, rather than
processes, is that you've got shared, mutable state. A shared nothing
process (or Actor, if you will) model is the only one that makes sense
if you really want to scale because it's the only one that allows you
to distribute over machines. The fact that it also scales very well
over multiple cores (better than threads, in many cases) is just
gravy.

The only hard example I've seen given of the GIL actually limiting
scalability is on single server, high volume Django sites, and I don't
think that the architecture of those sites is very scalable anyway.
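A shared-nothing setup along those lines can be sketched with the stdlib multiprocessing module (added after this thread was written, in Python 2.6): each worker runs in its own interpreter with its own GIL, and results come back over a pipe rather than through shared mutable state. The `crunch` function and its inputs are just illustrative.

```python
# Shared-nothing sketch: each worker process has its own interpreter
# (and its own GIL); results are serialized back to the parent.
from multiprocessing import Pool

def crunch(n):
    # CPU-bound work, unconstrained by the parent interpreter's GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        totals = pool.map(crunch, [10, 100, 1000])
    print(totals)  # [285, 328350, 332833500]
```

The same structure scales past one machine by swapping the pool for a job queue, which is the point being made about Actor-style designs.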
Sep 20 '07 #30
"Chris Mellon" <ar*****@gmail.comwrites:
The compute intensive stuff (image rendering and crunching) has
already had most of those skerricks pulled out. It is written in C
and assembler
That means that this part is also unaffected by the GIL.
Right, it was a counterexample against the "speed doesn't matter"
meme, not specifically against the GIL. And that code is fast because
someone undertook comparatively enormous effort to code it in messy,
unsafe languages instead of Python, because Python is so slow.
At the really high levels of scalability, such as across a server
farm, threading is useless. The entire point of threads, rather than
processes, is that you've got shared, mutable state. A shared nothing
process (or Actor, if you will) model is the only one that makes sense
if you really want to scale because it's the only one that allows you
to distribute over machines. The fact that it also scales very well
over multiple cores (better than threads, in many cases) is just
gravy.
In reality you want to organize the problem so that memory intensive
stuff is kept local, and that's where you want threads, to avoid the
communications costs of serializing stuff between processes, either
between boxes or between cores. If communications costs could be
ignored there would be no need for gigabytes of ram in computers.
We'd just use disks for everything. As it is, we use tons of ram,
most of which is usually twiddling its thumbs doing nothing (as DJ
Bernstein put it) because the cpu isn't addressing it at that instant.
The memory just sits there waiting for the cpu to access it. We
actually can get better-than-linear speedups by designing the hardware
to avoid this. See:
http://cr.yp.to/snuffle/bruteforce-20050425.pdf
for an example.
The only hard example I've seen given of the GIL actually limiting
scalability is on single server, high volume Django sites, and I don't
think that the architecture of those sites is very scalable anyway.
The stuff I'm doing now happens to work ok with multiple processes but
would have been easier to write with threads.
Sep 20 '07 #31

"Paul Rubin" <"http://phr.cx"@NOSPAM.invalidwrote in message
news:7x************@ruckus.brouhaha.com...
| funding into PyPy development, since I think I saw something about the
| EU funding being interrupted.

As far as I know, the project was completed and promised funds paid. But I
don't know of any major follow-on funding, which I am sure they could use.

Sep 20 '07 #32

"Paul Rubin" <"http://phr.cx"@NOSPAM.invalidwrote in message
news:7x************@ruckus.brouhaha.com...
| It does sound like removing the GIL from CPython would have very high
| costs in more than one area. Is my hope that Python will transition
| from CPython to PyPy overoptimistic?

I presume you mean 'will the leading-edge reference version transition...'
Or more plainly, "will Guido switch to PyPy for further development of
Python?" I once thought so, but 1) Google sped the arrival of Py3.0 by
hiring Guido with a major chunk of time devoted to Python development, so
he started before PyPy was even remotely ready (and it still is not); and
2) PyPy did not focus only or specifically on being a CPython replacement
but became an umbrella for a variety of experiments (including, for
instance, a Scheme frontend).

Sep 20 '07 #33
Ben Finney wrote:
(snip)
One common response to that is "Processes are expensive on Win32". My
response to that is that if you're programming on Win32 and expecting
the application to scale well, you already have problems that must
first be addressed that are far more fundamental than the GIL.
Lol ! +1 QOTW !

Sep 20 '07 #34
