
Can Python threads take advantage of a dual-core processor?

What are the implications of the Global Interpreter Lock in Python?
Does this mean that Python threads cannot exploit a dual-core
processor, and that the only advantage of using threads is that
computation and IO-bound operations can continue in parallel?

Thanks,
Nikhil

Aug 17 '07 #1


nikhilketkar wrote:
> What are the implications of the Global Interpreter Lock in Python?
> Does this mean that Python threads cannot exploit a dual-core
> processor, and that the only advantage of using threads is that
> computation and IO-bound operations can continue in parallel?

Essentially, yes. That is, unless the computation is done in C code that
releases the GIL beforehand. But a certain trade-off is to be expected
nonetheless.
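
A minimal sketch that illustrates the point (the count() helper and the
iteration count are just for illustration): on CPython, two CPU-bound
pure-Python threads typically take about as long as doing the same work
sequentially, because only one thread can execute bytecode at a time.

    import time
    import threading

    def count(n):
        # pure-Python loop: the running thread holds the GIL the whole time
        while n:
            n -= 1

    N = 10 * 1000 * 1000

    # sequential baseline
    t0 = time.time()
    count(N)
    count(N)
    print("sequential:", time.time() - t0)

    # two threads -- on CPython this is usually no faster, often slower
    t0 = time.time()
    a = threading.Thread(target=count, args=(N,))
    b = threading.Thread(target=count, args=(N,))
    a.start(); b.start()
    a.join(); b.join()
    print("threaded:  ", time.time() - t0)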

Diez
Aug 17 '07 #2

Diez B. Roggisch wrote:
> nikhilketkar wrote:
>> What are the implications of the Global Interpreter Lock in Python?
>> Does this mean that Python threads cannot exploit a dual-core
>> processor, and that the only advantage of using threads is that
>> computation and IO-bound operations can continue in parallel?
>
> Essentially, yes. That is, unless the computation is done in C code that
> releases the GIL beforehand.
Which virtually all computation-intensive extensions do. Also, note the
processing package, which allows you to use a separate process more or less
like a thread, thus avoiding GIL issues completely.
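
For reference, the processing package was later folded into the standard
library as multiprocessing; here is a rough sketch of the idea using that
interface (the square() function is just an example):

    from multiprocessing import Pool

    def square(x):
        # runs in a separate worker process, each with its own
        # interpreter and its own GIL
        return x * x

    if __name__ == "__main__":
        pool = Pool(processes=2)          # e.g. one worker per core
        print(pool.map(square, range(10)))
        pool.close()
        pool.join()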

Stefan
Aug 17 '07 #3

nikhilketkar wrote:
> What are the implications of the Global Interpreter Lock in Python?

Please have a look at the archives. This topic is brought up anew
every few weeks.

> Does this mean that Python threads cannot exploit a dual-core
> processor, and that the only advantage of using threads is that
> computation and IO-bound operations can continue in parallel?

Not generally, no.

Regards,
Björn

--
BOFH excuse #93:

Feature not yet implemented

Aug 17 '07 #4

On Aug 17, 6:00 pm, nikhilketkar <nikhilket...@gmail.com> wrote:

> What are the implications of the Global Interpreter Lock in Python?
This is asked every second week or so.

The GIL is similar to the Big Kernel Lock (BKL) used to support SMP in
older versions of the Linux kernel. The GIL prevents the Python
interpreter from being active in more than one thread at a time.

If you have a single CPU with a single core, the GIL has no
consequence whatsoever.

If you have multiple CPUs and/or multiple cores, the GIL has the
consequence that you can forget about SMP scalability using Python
threads. SMP scalability requires finer-grained object locking,
which is what the Java and .NET runtimes do, as well as current
versions of the Linux kernel. Note that IronPython and Jython use
fine-grained locking instead of a GIL.
> Does this mean that Python threads cannot exploit a dual-core
> processor, and that the only advantage of using threads is that
> computation and IO-bound operations can continue in parallel?

The Python standard library releases the GIL around read/write operations
on, e.g., files and sockets. This can be combined with threads to allow
multiple IO operations to continue in parallel. The GIL therefore has
little or no significance for IO-bound scalability in Python.
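
A small sketch of that pattern (the URLs are placeholders, and it is shown
with Python 3's urllib.request; at the time of this thread the equivalent
was urllib2): each thread spends most of its time blocked in a socket read
with the GIL released, so the downloads overlap.

    import threading
    import urllib.request   # urllib2 at the time this thread was written

    def fetch(url):
        # urlopen()/read() block on the socket with the GIL released,
        # so several of these can be in flight at once
        with urllib.request.urlopen(url) as resp:
            print(url, len(resp.read()))

    urls = ["http://example.com/", "http://example.org/"]   # placeholders
    threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()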

CPU-bound tasks are a different game. This is where the GIL matters.
You must either release the GIL or use multiple processes to exploit
multiple processors in an SMP computer with Python. The GIL can be
released explicitly in C extension code; f2py and ctypes can also
release the GIL.
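
As a sketch of the ctypes route (the library name libcrunch.so and its
crunch() function are hypothetical): ctypes.CDLL releases the GIL for the
duration of each foreign call, so two Python threads calling into such a
library can genuinely run on two cores at once.

    import ctypes
    import threading

    # Hypothetical shared library exporting:  void crunch(long n);
    # ctypes.CDLL (unlike ctypes.PyDLL) drops the GIL while the foreign
    # call executes, so the two calls below can run concurrently.
    lib = ctypes.CDLL("./libcrunch.so")
    lib.crunch.argtypes = [ctypes.c_long]
    lib.crunch.restype = None

    threads = [threading.Thread(target=lib.crunch, args=(10 ** 8,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()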

Note that you would NOT use threads for speeding up CPU-bound
operations, even when programming in C or Fortran. Threads are neither
the only nor the preferred way to exploit multicore CPUs for CPU-bound
tasks. Instead of threads, use either an MPI library or OpenMP
compiler pragmas. You can use MPI directly from Python (e.g. mpi4py),
or you can use OpenMP pragmas in C or Fortran code which you call
using ctypes or f2py.
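
A minimal mpi4py sketch of that idea (assuming mpi4py is installed and the
script is launched with something like "mpiexec -n 2 python script.py"):
each MPI rank is a full interpreter process, so the GIL never comes into
play between them.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # each rank takes its own slice of the work
    local_sum = sum(x * x for x in range(rank, 1000000, size))

    # combine the partial results on rank 0
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("total:", total)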

Summary:

Use Python threads if you need to run IO operations in parallel.
Do not use Python threads if you need to run computations in parallel.

Regards,
Sturla Molden

Aug 17 '07 #5

Stefan Behnel <st******************@web.de> wrote:
...
> Which virtually all computation-intensive extensions do. Also, note the
> processing package, which allows you to use a separate process more or
> less like a thread, thus avoiding GIL issues completely.
gmpy doesn't (release the GIL), even though it IS computationally
intensive -- I tried, but it slows things down horribly even on an Intel
Core Duo. I suspect that may partly be due to the locking strategy of
the underlying GMP 4.2 library (which I haven't analyzed in depth). In
practice, when I want to exploit both cores to the hilt with gmpy-based
computations, I run multiple processes.
Alex
Aug 18 '07 #7
