Bytes IT Community

Poor python and/or Zope performance on Sparc

Hello everybody,

I'm posting this message because I'm quite frustrated.

We recently bought software from a small vendor. In the beginning he
hosted our application on a small server at his office; I think it was
a Fujitsu-Siemens x86 running Debian Linux. The performance of the DSL
line was very poor, so we decided to buy our own machine and host the
application ourselves.

The application is based on the Zope application server (2.8.8-final
with Python 2.3.6) along with many other packages such as ghostview,
postgres, freetype, the Python Imaging Library, etc. I once saw the
application running at my vendor's office and it ran very well.

So I thought that once the DSL bottleneck was gone, the software would
run as fast as it did at his office. But I was wrong. The performance
is worse than poor.

We have a Sun T1000 with 8 cores and 8 GB of RAM. First, I installed
Solaris 10 because I know this OS better than Debian Linux. Result:
poor performance. After that I decided to migrate to Debian:

root@carmanager uname -a
Linux carmanager 2.6.18-5-sparc64-smp #1 SMP Wed Oct 3 04:16:38 UTC
2007 sparc64 GNU/Linux

Result: if anything, even worse. The application is not scaling at
all. Every request pins a single CPU at about 90-100% load, while the
other 31 CPUs shown in "mpstat" sit idle at 0% load.

If anybody needs further information about installed packages, I'll
post it here.

Any hints are appreciated!
Thanks, Joe.

PS: Fortunately we haven't bought the Sun yet; it's a "try & buy" from
my local dealer. So if there are any hints like "buy a new machine
because Sun is crap", I will _not_ refuse obedience.

Nov 3 '07 #1
4 Replies


On Nov 3, 9:35 am, joa2212 <joa2...@yahoo.de> wrote:
> Result: if anything, even worse. The application is not scaling at
> all. Every request pins a single CPU at about 90-100% load, while the
> other 31 CPUs shown in "mpstat" sit idle at 0% load.
You are probably not aware of Python's Global Interpreter Lock:
http://docs.python.org/api/threads.html.
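To see the effect concretely, here is a minimal sketch (written for modern Python with `concurrent.futures`, not the Python 2.3 in Joe's setup) that runs the same made-up CPU-bound function `burn` under a thread pool and a process pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python CPU-bound loop: it holds the GIL the whole time it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, workers, jobs=4, n=2_000_000):
    # Run `jobs` copies of burn() through the given executor and time them.
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        results = list(ex.map(burn, [n] * jobs))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    t_threads, _ = timed(ThreadPoolExecutor, 4)
    t_procs, _ = timed(ProcessPoolExecutor, 4)
    # Threads typically show little or no speedup here, because only one
    # thread can execute Python bytecode at a time; processes each get
    # their own interpreter and their own GIL.
    print(f"4 threads:   {t_threads:.2f}s")
    print(f"4 processes: {t_procs:.2f}s")
```

On a multi-core box the process pool usually finishes several times faster, which is exactly the behavior the GIL link above describes.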

George

Nov 3 '07 #2

On 3 Nov., 17:19, George Sakkis <george.sak...@gmail.com> wrote:
> On Nov 3, 9:35 am, joa2212 <joa2...@yahoo.de> wrote:
> > Result: if anything, even worse. The application is not scaling at
> > all. Every request pins a single CPU at about 90-100% load, while
> > the other 31 CPUs shown in "mpstat" sit idle at 0% load.
>
> You are probably not aware of Python's Global Interpreter Lock:
> http://docs.python.org/api/threads.html.
>
> George
Hi George,

yes, that's right, I wasn't aware of this. If I understand you
correctly, we have a threading problem in the software we are using.
But tell me one thing: why does this software run like crazy on an
Intel machine with a single CPU (perhaps dual core?) and crawl like a
snail on Sparc? It's exactly the same source code on both platforms.
Of course, all relevant packages (python + zope) were recompiled on
the Sparc.

Sorry for my questions, I'm really no software developer. I'm just a
little helpless because my software vendor can't tell me anything
about my concerns.

Joe.

Nov 3 '07 #3

joa2212 wrote:
> We have a Sun T1000 with 8 cores and 8 GB of RAM. First, I installed
> Solaris 10 because I know this OS better than Debian Linux. Result:
> poor performance. After that I decided to migrate to Debian:
Do you know the architecture of this machine? It's extremely
streamlined for data throughput (I/O) at the expense of computational
ability. In particular:

- it runs at a relatively low clock speed (1 GHz - 1.4 GHz)
- it's terrible for floating-point calculations because there is only
one FPU shared by all 32 logical processors

While it will be screamingly fast at serving static content, and
pretty decent for light database jobs, it is not an optimal platform
for dynamic web applications, especially in Python, since the language
is so dynamic and a single interpreter doesn't make use of SMP. It's
so optimized for one purpose that you can fairly consider it a
special-purpose machine rather than a general-purpose one.

Even if you manage to get Zope to spawn parallel request handlers
(probably via something like FastCGI), you won't be happy with the
performance if the web application is CPU-intensive (and for
CPU-intensive tasks you probably don't want to spawn more than 8
handlers, since that's the number of physical cores in the CPU).
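That cap on handler count can be sketched with the `multiprocessing` module (modern Python, not the 2.3 of the original setup; `handle_request` is a hypothetical stand-in for one CPU-heavy request):

```python
import multiprocessing

def handle_request(payload):
    # Hypothetical stand-in for a CPU-intensive request handler.
    return sum(x * x for x in range(payload))

if __name__ == "__main__":
    # multiprocessing.cpu_count() reports 32 on a T1000 (hardware
    # threads), but only 8 physical cores do the actual integer work,
    # so cap the pool at the physical-core count instead.
    physical_cores = 8  # assumption, per the T1000 described above
    workers = min(multiprocessing.cpu_count(), physical_cores)
    with multiprocessing.Pool(processes=workers) as pool:
        results = pool.map(handle_request, [10_000] * 16)
    print(len(results), "requests handled by", workers, "workers")
```

More workers than physical cores just adds scheduling overhead for CPU-bound work; for I/O-bound work the trade-off is different and a larger pool can pay off.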


Nov 4 '07 #4

On Nov 3, 2:35 pm, joa2212 <joa2...@yahoo.de> wrote:
> Result: if anything, even worse. The application is not scaling at
> all. Every request pins a single CPU at about 90-100% load, while the
> other 31 CPUs shown in "mpstat" sit idle at 0% load.
Like others have already pointed out: performance is going to suffer
if you run something single-threaded on a T1000.

We've been using the T1000 for a while (though not for Python) and
it's actually really fast. The performance is way better than the 8
cores imply; in fact, it'll do close to 32 times the work of a single
thread.

Of course, it all depends on what your application is like, but try
experimenting a bit with parallelism. Try anywhere from 4 up to 64
parallel worker processes to see what kind of effect you get.

I'm guessing that whatever caching of persistent data you can do will
help as well, since the disk might end up working really hard.
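The experiment suggested above, sweeping the worker count from 4 to 64, might look like this rough sketch (`work` is a made-up stand-in for one request; real numbers depend entirely on the actual application):

```python
import time
from multiprocessing import Pool

def work(n):
    # Made-up stand-in for one request's worth of CPU-bound work.
    return sum(i * i for i in range(n))

def sweep(worker_counts, jobs=64, n=200_000):
    # Time the same fixed batch of jobs at each pool size.
    timings = {}
    for w in worker_counts:
        start = time.perf_counter()
        with Pool(processes=w) as pool:
            pool.map(work, [n] * jobs)
        timings[w] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    for w, t in sweep([4, 8, 16, 32, 64]).items():
        print(f"{w:2d} workers: {t:.2f}s")
```

Where the timings stop improving tells you how much parallelism the machine and workload actually support.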

Nov 5 '07 #5
