Bytes IT Community

Reducing load for LAMP app?

P: n/a
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, i.e. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-known ways to optimize PHP apps?

Thank you.
Jan 4 '08 #1
39 Replies


P: n/a
On Jan 4, 10:34 am, Gilles Ganault <nos...@nospam.com> wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?
Are you sure keeping data in RAM yourself is a good solution? Doesn't
the MySQL cache help?
>
- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.
For an easy test, try installing a PHP accelerator (I've tested
eAccelerator, http://eaccelerator.net/ ). It can do a pretty good job.
Another thing: make sure the server is overloaded because it is
actually processing work, and not because of a bad caching mechanism
(google for browser-caching). As a rule of thumb, do not use
PHP-generated images or other dynamically generated content if it's not
really needed.
There is no easy way to optimize PHP; as with any other language, you
have to know exactly what goes wrong in your scripts. You can use
xdebug (as a profiler) to see which functions eat a lot of resources.
You can also monitor the server to see why it is slowing down (does it
need more RAM, more CPUs... more disk?).
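On the browser-caching point, a minimal Apache mod_expires sketch — assuming mod_expires is available on the host; the content types and lifetimes here are examples only:

```apache
# Let browsers cache static assets instead of re-requesting them
# on every page view.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 week"
    ExpiresByType text/css  "access plus 1 week"
</IfModule>
```

Whether this helps depends on how much of the load is static-asset requests rather than PHP execution.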
Jan 4 '08 #2

P: n/a
On Fri, 4 Jan 2008 02:23:20 -0800 (PST), "_q_u_a_m_i_s's"
<qu****@gmail.com> wrote:
>Are you sure keeping data in ram is a good solution? MySQL cache
dosen`t help?
By RAM, I meant keeping stuff in memory vs. re-reading it from the hard
disk, even though that's not necessary when the data didn't change.
>For an easy test try to install a PHP accelerator
I'll check it out.
>There is no easy way to optimize PHP,as with any other language you
have to know exactly what goes wrong with your scripts.
That's the problem he has: If I had to investigate where things can be
optimized in Linux + Apache + PHP + MySQL, what tool should I use?

Thanks
Jan 4 '08 #3

P: n/a
_q_u_a_m_i_s's wrote:
On Jan 4, 10:34 am, Gilles Ganault <nos...@nospam.com> wrote:
>Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

Are you sure keeping data in ram is a good solution? MySQL cache
dosen`t help?
It's all held in RAM by disk caching anyway.

Probably :-)

If you are running queries against the same sets of tables time and
again, those tables will be cached by the OS.

If the computer runs out of free RAM though, expect a sudden and huge
downturn in performance.

That's something for the machine owner to fix, though.

The other thing that sometimes screws up DB apps is searches on
un-indexed fields. Adding indexes can often help a lot, as can
optimising the way nested select statements are done.

The key is to reduce the amount of data as early as possible, if
possible by searching on indexed fields first.
>- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

For an easy test try to install a PHP accelerator (i`ve testesd
eaccelerator http://eaccelerator.net/ ). It can do a pretty good job.
And another thing , try to make sure that the server is overloaded
procesing stuff and not because of a bad caching mechanism (google for
browser-caching). As a rule of thumb , do not use php-generated
images, or other dinamically generated content if it`s not really
needed.
It's a hard call to know what is slow without access to the machine. It
may be I/O bound due to low RAM and too many disk accesses, process
bound - too many processes for the RAM available - or simply CPU bound,
having to do too much computation. My experience suggests that you
mostly don't need huge CPU power in a *server*; a fast disk (or better,
several disks) and huge amounts of RAM are the key. As is a fast
network... only in some of the nastier SQL queries does CPU power
become an issue, and those are generally fixed by rewriting the query
or indexing.

Any on-the-fly graphics computation will of course hammer the CPU, but
most people don't do that... it's more likely the images are just being
retrieved, not generated.
>
There is no easy way to optimize PHP,as with any other language you
have to know exactly what goes wrong with your scripts. You can use
xdebug (as a profiler) to see what functions eat a lot of resources.
You can monitor the server and see why he is slowing down like
that(need more ram, more cpu`s...more hdd?).
That's an interesting point... can PHP easily get at things like CPU
utilisation, memory resource allocation and the like, so one could
build a web page to peer into the server?

I wouldn't mind one of those here..
Jan 4 '08 #4

P: n/a
On Fri, 04 Jan 2008 09:34:20 +0100, Gilles Ganault <no****@nospam.com>
wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?
AFAIK, more RAM is not always a good thing for MySQL. I've known setups
where actually reducing RAM usage/cache sizes sped it up tremendously.
It's somewhat out of my expertise though. Consult the MySQL manual, and
possibly ask for advice in comp.databases.mysql.
--
Rik Wasmus
Jan 4 '08 #5

P: n/a
Gilles Ganault wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?
Try comp.databases.mysql - where the MySQL experts hang out. They can
give you more details, but generally the caches are more efficient at
this than user apps.
- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?
Before you can optimize, you need to find where the bottlenecks are.
Otherwise you're going to waste a lot of time "fixing" things which
aren't broken.
Thank you.


--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 4 '08 #6

P: n/a
On Jan 4, 3:34 am, Gilles Ganault <nos...@nospam.com> wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.
I will agree with the statement made in the post that 'before you can
optimize, you need to know where you are optimizing'.

1. Eaccel (or any pre-compiler) will certainly help if the bottleneck
is PHP (to a point.)
2. Use the XDebug package & Kcachegrind (or wincachegrind) to look at
what the code is doing and if it can be optimized.
3. Depending on site needs, consider caching options such as
memcached.

Read the IBM series of articles on tuning/performance for LAMP

http://www.ibm.com/developerworks/li...p-1/#resources

faulkes

Jan 4 '08 #7

P: n/a
Gilles Ganault wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.
memcached: http://memcached.sf.net (for caching dynamic data in memory)
apc: http://us2.php.net/apc (caches script bytecode to reduce
compilation overhead)

These two modules will help enormously. I guarantee it. Using apc is
pretty much transparent, but memcached will require modifying your
database abstraction layer using the memcached functions
(http://php.net/manual/en/ref.memcache.php).
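The read-through (cache-aside) pattern behind those functions can be sketched with a plain array standing in for the memcached server — the function name, key scheme, and TTL here are illustrative, not the real extension API:

```php
<?php
// Illustrative cache-aside sketch: $cache is an in-process array
// standing in for a memcached server, so this runs anywhere.
function cachedQuery(&$cache, $sql, $runQuery, $ttl = 60)
{
    $key = md5($sql); // identical queries share one cache slot
    if (isset($cache[$key]) && $cache[$key]['expires'] > time()) {
        return $cache[$key]['value'];   // hit: skip the database
    }
    $value = $runQuery($sql);           // miss: hit the database once
    $cache[$key] = array('value' => $value, 'expires' => time() + $ttl);
    return $value;
}

// Usage: the query closure only runs on a cache miss.
$cache = array();
$hits  = 0;
$runner = function ($sql) use (&$hits) {
    $hits++;                            // counts real "database" calls
    return array(array('col' => 1));
};
$rows = cachedQuery($cache, 'SELECT 1', $runner);
$rows = cachedQuery($cache, 'SELECT 1', $runner);
// $hits is 1: the second call was served from the cache.
```

With real memcached, the array would be replaced by Memcache::get()/Memcache::set() calls against the shared daemon, so every Apache child (and every web host) shares one cache.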

Also try connection pooling using PHP's persistent connections. These
make life harder for you, but eliminate a lot of connection/tear-down
overhead with the database. You need to be able to configure your own
apache and mysql for this to really work.
(http://php.net/manual/en/features.pe...onnections.php)

Jeremy
Jan 5 '08 #8

P: n/a

"Gilles Ganault" <no****@nospam.com> wrote in message
news:iq********************************@4ax.com...
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is
a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the
maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.
Why (or how) are the queries "session-specific"?
No way to get rid of that?

Richard.
Jan 5 '08 #9

P: n/a

"Richard" <root@localhost> wrote in message
news:47***********************@news.euronet.nl...
>
"Gilles Ganault" <no****@nospam.com> wrote in message
news:iq********************************@4ax.com...
>Hello,

I'm no LAMP expert, and a friend of mine is running a site which is
a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results
can't
be shared with other users.
Is there something that can be done in that area, ie. keep the
maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

Why (or how) are the queries "session-specific"?
No way to get rid of that?

Richard.
Oh and BTW:
>- MySQL : as much as possible, he keeps query results in RAM
It seems like a waste of memory to keep the results in RAM when you
know they will not be of any use to you afterwards. Immediately freeing
the memory for the next results sounds better to me.

Richard.
Jan 5 '08 #10

P: n/a
Jeremy wrote:
Gilles Ganault wrote:
>Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

memcached: http://memcached.sf.net (for caching dynamic data in memory)
apc: http://us2.php.net/apc (caches script bytecode to reduce
compilation overhead)

These two modules will help enormously. I guarantee it. Using apc is
pretty much transparent, but memcached will require modifying your
database abstraction layer using the memcached functions
(http://php.net/manual/en/ref.memcache.php).
Not necessarily. MySQL and the OS both cache data - and generally do a
better job at a lower cost than using memcached.

apc *could* help. But it might not, either. It all depends on where
the bottleneck is.
Also try connection pooling using PHP's persistent connections. These
make life harder for you, but eliminate a lot of connection/tear-down
overhead with the database. You need to be able to configure your own
apache and mysql for this to really work.
(http://php.net/manual/en/features.pe...onnections.php)
WRONG, WRONG, WRONG! This INCREASES resource usage on the server. With
persistent connections, you must have the maximum number of connections
*ever* required allocated *all of the time* - even if no one is using
your server.

MySQL persistent connections should not be used except in extreme cases
- like when you're running 100+ connections per second.
Jeremy
The FIRST thing to do when having performance problems is to identify
the cause of the performance problem. You're making suggestions without
having any idea where the hold up is. In at least two cases, your
suggestions could actually HURT performance. And the third case may or
may not help. If, for instance, the majority of the resource usage is
spent in long MySQL queries, apc will have little effect.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 5 '08 #11

P: n/a
Richard wrote:
"Gilles Ganault" <no****@nospam.com> wrote in message
news:iq********************************@4ax.com...
>Hello,

I'm no LAMP expert, and a friend of mine is running a site which is
a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the
maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

Why (or how) are the queries "session-specific"?
No way to get rid of that?

Richard.
Maybe he's storing it in the $_SESSION variable - which isn't actually
kept in memory, but written to disk. And that is very slow for large
amounts of data.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 5 '08 #12

P: n/a
On Sat, 5 Jan 2008 02:55:24 +0100, "Richard" <root@localhost> wrote:
>Why (or how) are the queries "session-specific"?
No way to get rid of that?
I don't know what it means, really. I don't have enough experience
with PHP and MySQL to know the details.

Generally speaking, is there a way to share query results among all
viewers using the site, and if yes, what's the right way to do this?

Thanks for all the suggestions guys. It's very helpful.
Jan 5 '08 #13

P: n/a

"Gilles Ganault" <no****@nospam.com> wrote in message
news:kr********************************@4ax.com...
On Sat, 5 Jan 2008 02:55:24 +0100, "Richard" <root@localhost> wrote:
>>Why (or how) are the queries "session-specific"?
No way to get rid of that?

I don't know what it means, really. I don't have enough experience
with PHP and MySQL to know the details.

Generally speaking, is there a way to share query results among all
viewers using the site, and if yes, what's the right way to do this?

Thanks for all the suggestions guys. It's very helpful.
MySQL has a query cache. Check the manual or the site.
The queries have to be identical for it to work, so that's why I was
asking about the sessions.

Maybe just do a generic query and handle the session specific things
in PHP afterwards?
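If the server's MySQL build ships with the query cache, a hedged my.cnf starting point — the sizes are illustrative; compare with SHOW VARIABLES LIKE 'query_cache%' first:

```ini
# my.cnf - shared query cache (MySQL 4.x/5.x era); sizes are examples.
[mysqld]
query_cache_type  = 1      # cache every cacheable SELECT
query_cache_size  = 32M    # one cache shared by all connections
query_cache_limit = 1M     # skip oversized result sets
```

Note the cache is invalidated whenever a cached table is written to, so it helps most on read-heavy tables.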

R.
Jan 5 '08 #14

P: n/a
On Sat, 5 Jan 2008 18:29:50 +0100, "Richard" <root@localhost> wrote:
>MySQL has a query cache. Check the manual or the site.
On my way.
>The queries have to be identical for it to work, so thats why I was
asking about the sessions. Maybe just do a generic query and
handle the session specific things in PHP afterwards?
I'll check how queries are used in the pages, and see if it can be
done. Thanks.
Jan 5 '08 #15

P: n/a
Gilles Ganault wrote:
On Sat, 5 Jan 2008 02:55:24 +0100, "Richard" <root@localhost> wrote:
>Why (or how) are the queries "session-specific"?
No way to get rid of that?

I don't know what it means, really. I don't have enough experience
with PHP and MySQL to know the details.

Generally speaking, is there a way to share query results among all
viewers using the site, and if yes, what's the right way to do this?

Thanks for all the suggestions guys. It's very helpful.
Not from PHP, there isn't. Every user is considered independent. Just
like every program running in a system is considered independent.

Yes, you can share information between users - just like you can share
information between programs, i.e. shared memory. But it's not done by
default.

And as Richard indicated, the MySQL query cache may be more efficient.

However, right now it looks to me like you are trying to optimize your
system - but haven't even identified the source of your problem.

The FIRST thing you need to do is find out what's causing the high
resource utilization!

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 6 '08 #16

P: n/a
On Sat, 05 Jan 2008 23:23:42 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>The FIRST thing you need to do is find out what's causing the high
resource utilization!
I know, and I'm looking for an article that would summarize how to
check the different components of the stack: Linux, Apache, PHP, and
MySQL.

For instance, I know about those:
- OS : top, ps, vmstat
- PHP : xdebug
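For the PHP side, xdebug's profiler can be switched on in php.ini and the resulting cachegrind files opened in KCachegrind or WinCacheGrind. A sketch using the xdebug 2.x directive names (the output directory is only an example):

```ini
; php.ini - xdebug 2.x profiler (sketch)
zend_extension = xdebug.so
xdebug.profiler_enable     = 1     ; profile every request
xdebug.profiler_output_dir = /tmp  ; cachegrind.out.* files land here
```

Profiling every request is itself expensive, so this is something to enable briefly on a test box, not permanently in production.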
Jan 6 '08 #17

P: n/a
Gilles Ganault wrote:
On Sat, 05 Jan 2008 23:23:42 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>The FIRST thing you need to do is find out what's causing the high
resource utilization!

I know, and I'm looking for an article that would summarize how to
check the different components of the stack: Linux, Apache, PHP, and
MySQL.

For instance, I know about those:
- OS : top, ps, vmstat
- PHP : xdebug
Well, to start, you don't need any tools. microtime() can help you
locate bottlenecks.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 6 '08 #18

P: n/a
On Sun, 06 Jan 2008 10:19:53 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>Well, to start, you don't need any tools. microtime() can help you
locate bottlenecks.
OK, I'll tell him to put some calls to microtime() in the PHP pages,
and see what we get.

$start_time = microtime();
// run script code here
$end_time = microtime();
$total_time = $end_time - $start_time;

Thanks.
Jan 6 '08 #19

P: n/a
On Sun, 06 Jan 2008 19:21:46 +0100, Gilles Ganault <no****@nospam.com>
wrote:
On Sun, 06 Jan 2008 10:19:53 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>Well, to start, you don't need any tools. microtime() can help you
locate bottlenecks.

OK, I'll tell him to put some calls to Microtime() in the PHP pages,
and see what we get.

$start_time = microtime();
microtime(true);, check the output :P
// run script code here
$end_time = microtime();
$total_time = $end_time - $start_time;

--
Rik Wasmus
Jan 6 '08 #20

P: n/a
On Sun, 06 Jan 2008 19:54:05 +0100, "Rik Wasmus"
<lu************@hotmail.com> wrote:
>microtime(true);, check the output :P
Yup :-) This looks more usable:

$starttimer = time()+microtime();
// run script code here
$stoptimer = time()+microtime();
$timer = round($stoptimer-$starttimer,4);
echo "Page created in $timer seconds.";
Jan 6 '08 #21

P: n/a
Gilles Ganault wrote:
On Sun, 06 Jan 2008 19:54:05 +0100, "Rik Wasmus"
<lu************@hotmail.com> wrote:
>microtime(true);, check the output :P

Yup :-) This looks more usable:

$starttimer = time()+microtime();
// run script code here
$stoptimer = time()+microtime();
$timer = round($stoptimer-$starttimer,4);
echo "Page created in $timer seconds.";
Gilles,

You don't need time() calls. Just:

$starttimer = microtime(true);
// ...
$stoptimer = microtime(true);
... etc...

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 6 '08 #22

P: n/a
On Sun, 06 Jan 2008 14:19:14 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>You don't need time() calls. Just:

$starttimer = microtime(true);
// ...
$stoptimer = microtime(true);
... etc...
I'll have to find a function to turn

2.598762512207E-5

into seconds.
Jan 6 '08 #23

P: n/a
Gilles Ganault wrote:
On Sun, 06 Jan 2008 14:19:14 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>You don't need time() calls. Just:

$starttimer = microtime(true);
// ...
$stoptimer = microtime(true);
... etc...

I'll have to find a function to turn

2.598762512207E-5

into seconds.
It's already in seconds.
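It's already in seconds; only the display uses scientific notation. A sprintf() format prints it as a plain decimal — a sketch, where the usleep() call just stands in for the work being timed:

```php
<?php
$start = microtime(true);
usleep(2000);                         // stand-in for the work being timed
$elapsed = microtime(true) - $start;

// %.6f prints a plain decimal, never the scientific "E" notation.
echo sprintf("Page created in %.6f seconds.\n", $elapsed);
```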

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 6 '08 #24

P: n/a
On Sun, 06 Jan 2008 15:54:17 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>It's already in seconds.
OK. I can live with the trailing E :-)

Looks like a very busy host:
======
load average: 61.88, 51.08, 74.30
Tasks: 476 total, 7 running, 467 sleeping, 0 stopped, 2 zombie
Cpu(s): 78.8% us, 18.7% sy, 0.0% ni, 0.7% id, 0.3% wa, 0.5% hi, 1.0% si
Mem:   1015484k total,   993344k used,    22140k free,    76920k buffers
Swap:   514040k total,   101032k used,   413008k free,   208496k cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
10685 ze-card 19 19 984 736 724 R N 93.3 0.0 5671m webalizer
32072 nobody 9 0 16932 15M 12156 S 6.0 0.3 0:01 httpd
32196 nobody 9 0 16224 14M 12144 S 3.5 0.3 0:01 httpd
1868 nobody 9 0 16284 14M 12620 S 3.1 0.3 0:00 httpd
2136 nobody 9 0 16080 14M 12164 S 2.5 0.3 0:00 httpd
32205 nobody 9 0 16300 14M 12136 S 2.3 0.3 0:00 httpd
32231 nobody 9 0 16316 14M 12172 S 2.3 0.3 0:00 httpd
32124 nobody 9 0 16620 14M 12184 S 1.9 0.3 0:01 httpd
======

If I understood the output of top and vmstat correctly, this host has
enough RAM, but the CPU is heavily used (load average much higher than
1), with lots of sleeping processes.

I have to find out if it's just PHP or MySQL, or even the network that
is the bottleneck.
Jan 6 '08 #25

P: n/a
On Jan 6, 5:43 pm, Gilles Ganault <nos...@nospam.com> wrote:
On Sun, 06 Jan 2008 15:54:17 -0500, Jerry Stuckle

======
load average: 61.88, 51.08, 74.30
Tasks: 476 total, 7 running, 467 sleeping, 0 stopped, 2 zombie
Cpu(s): 78.8% us, 18.7% sy, 0.0% ni, 0.7% id, 0.3% wa, 0.5% hi, 1.0% si
This looks like the hallmark of a virtual server. Depending on what
the host is serving or doing, though, I would be concerned about the
number of processes.
>
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
10685 ze-card 19 19 984 736 724 R N 93.3 0.0 5671m webalizer
That would be your primary CPU hog at the current time; considering
how long it's been running, it is probably on a rampage.
>
If I understood what I read on top and vmstat, this host has enough
RAM, but the CPU is heavily used (load average much higher than 1),
with lots of sleeping processes.
467 processes? I'd sure like to see what count is on those particular
processes (httpd, etc.. etc..)
I have to find out if it's just PHP or MySQL, or even the network that
is the bottleneck.
While it could be all three, I would say check out the processlist
first and that would likely be your first clue.

Jan 7 '08 #26

P: n/a
Gilles Ganault wrote:
On Sun, 06 Jan 2008 15:54:17 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>It's already in seconds.

OK. I can live with the trailing E :-)
The trailing E is not in the number - that's how very small floating
point numbers are displayed.
Looks like a very busy host:
======
load average: 61.88, 51.08, 74.30
Tasks: 476 total, 7 running, 467 sleeping, 0 stopped, 2 zombie
Cpu(s): 78.8% us, 18.7% sy, 0.0% ni, 0.7% id, 0.3% wa, 0.5% hi, 1.0% si
Mem:   1015484k total,   993344k used,    22140k free,    76920k buffers
Swap:   514040k total,   101032k used,   413008k free,   208496k cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
10685 ze-card 19 19 984 736 724 R N 93.3 0.0 5671m webalizer
32072 nobody 9 0 16932 15M 12156 S 6.0 0.3 0:01 httpd
32196 nobody 9 0 16224 14M 12144 S 3.5 0.3 0:01 httpd
1868 nobody 9 0 16284 14M 12620 S 3.1 0.3 0:00 httpd
2136 nobody 9 0 16080 14M 12164 S 2.5 0.3 0:00 httpd
32205 nobody 9 0 16300 14M 12136 S 2.3 0.3 0:00 httpd
32231 nobody 9 0 16316 14M 12172 S 2.3 0.3 0:00 httpd
32124 nobody 9 0 16620 14M 12184 S 1.9 0.3 0:01 httpd
======

If I understood what I read on top and vmstat, this host has enough
RAM, but the CPU is heavily used (load average much higher than 1),
with lots of sleeping processes.

I have to find out if it's just PHP or MySQL, or even the network that
is the bottleneck.
Lots of sleeping processes is normal. To me, memory looks a bit
marginal. Not enough to necessarily cause problems, because you're only
using a little swap space. The real question would be if this is
normal, a peak - or maybe a lull in the traffic. In the last case it
could be a problem.

And yes, the CPU usage is high, but not necessarily that high.

I'm really wondering if your host is trying to put too many people on
your server. Monitoring the results of top over a period of time will
help show you.
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 7 '08 #27

P: n/a
On Sun, 6 Jan 2008 16:04:40 -0800 (PST), faulkes
<mi*************@gmail.com> wrote:
>This looks like the hallmark of a virtual server, depending on what
the host is serving or doing though I would be concerned at the
number of processes.
Indeed, it's running on a shared virtual server. Isn't each account
isolated from the others? I find it surprising to be able to see the
total number of processes running on the host, across all accounts.
>PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
10685 ze-card 19 19 984 736 724 R N 93.3 0.0 5671m webalizer

That would be your primary cpu hog at the current time and considering
the length of time it's been there, probably on a rampage.
Webalizer is running as Nice, so I didn't think it'd be a problem, but
I'll tell him to restart it, and see how it goes.
>467 processes? I'd sure like to see what count is on those particular
processes (httpd, etc.. etc..)
What do you mean by "count"?
>While it could be all three, I would say check out the processlist
first and that would likely be your first clue.
Since I posted the top processes above, do you mean running "ps -edf"
or "ps -aux"?

Thank you.
Jan 7 '08 #28

P: n/a
On Sun, 06 Jan 2008 20:47:11 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>The training E is not in the number - that's how very small floating
point numbers are displayed.
I'll look at how to format the output of microtime() so it's displayed
as plain seconds instead:

======
$start_time = microtime(true);
$end_time = microtime(true);
$total_time = $end_time - $start_time;
print "Using microtime() as-is : $total_time<p>";

$starttimer = time()+microtime();
$stoptimer = time()+microtime();
$timer = round($stoptimer-$starttimer,4);
echo "Using Time + Microtime() : $timer";
======
Using microtime() as-is : 2.6941299438477E-5

Using Time + Microtime() : 0.2018
======
The real question would be if this is normal, a peak - or
maybe a lull in the traffic. In the last case it could be a problem.
It's peak time. He had about 400 users logged on, with an unknown
number of guests lurking, which is the highest number he ever had.
>And yes, the CPU usage is high, but not necessarily that high.
The reason I thought it was a problem is that this article on "top"
says that "load average" should not be much higher than 4, i.e. about
four runnable processes per processor (it's a single-CPU host):

"The higher the number for load average, the more likely your system
is starting to suffer under an excessive load. As the saying goes,
your mileage may vary, but I tend to think of anything under four as
acceptable. Any higher and it starts feeling slow. I've seen systems
running around 15 to 20 and let me tell you, it's ugly."
http://www.linuxjournal.com/article/5309
>I'm really wondering if your host is trying to put too many people on
your server. Monitoring the results of top over a period of time will
help show you.
If it helps, it's very responsive with about 200 users logged on, but
it crawls to a halt with 400.

Generally speaking, and considering the number of web apps being
written these days, especially in LAMP, I'm surprised Google didn't
return an article on what to do to investigate a slow web application.

Thanks.
Jan 7 '08 #29

On Jan 7, 12:43 am, Gilles Ganault <nos...@nospam.com> wrote:
On Sun, 06 Jan 2008 15:54:17 -0500, Jerry Stuckle

<jstuck...@attglobal.net> wrote:
It's already in seconds.

OK. I can live with the trailing E :-)

Looks like a very busy host:
======
load average: 61.88, 51.08, 74.30
Tasks: 476 total, 7 running, 467 sleeping, 0 stopped, 2 zombie
Cpu(s): 78.8% us, 18.7% sy, 0.0% ni, 0.7% id, 0.3% wa, 0.5% hi, 1.0% si
Mem:  1015484k total,  993344k used,   22140k free,   76920k buffers
Swap:  514040k total,  101032k used,  413008k free,  208496k cached
You do not have enough free RAM. If this was a peak time then this
could be OK... but looking at the numbers:
- you have 1GB of RAM; of that, 22MB is free and 76MB is used for
file buffers (open files). The rest is taken by applications.
- swap is in use; you should not have any data swapped if you need a
FAST webserver. Keep everything in RAM.

- your load avg is 61??? This is waaay over; the load avg should stay
below 10, at least from what I saw while working.
- 78% of the CPU time is spent in user mode (Apache & webalizer in
your case). Look for more ways to optimize Apache there.
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
10685 ze-card 19 19 984 736 724 R N 93.3 0.0 5671m webalizer
webalizer should not be in this list; it should start/stop quickly.
32072 nobody 9 0 16932 15M 12156 S 6.0 0.3 0:01 httpd
Are your scripts memory-intensive? I do not know how much a usual PHP
process takes, but 14MB is a lot of data for one page request.
32196 nobody 9 0 16224 14M 12144 S 3.5 0.3 0:01 httpd
1868 nobody 9 0 16284 14M 12620 S 3.1 0.3 0:00 httpd
2136 nobody 9 0 16080 14M 12164 S 2.5 0.3 0:00 httpd
32205 nobody 9 0 16300 14M 12136 S 2.3 0.3 0:00 httpd
32231 nobody 9 0 16316 14M 12172 S 2.3 0.3 0:00 httpd
32124 nobody 9 0 16620 14M 12184 S 1.9 0.3 0:01 httpd
======

If I understood what I read on top and vmstat, this host has enough
RAM, but the CPU is heavily used (load average much higher than 1),
with lots of sleeping processes.

I have to find out if it's just PHP or MySQL, or even the network that
is the bottleneck.
Try to use Xdebug & cachegrind to see what is wrong with your scripts,
and determine some points where to create "caching points", so that
redundant data is not recalculated over and over again.

If the web pages can be cached, then cache them; do not regenerate
them every time.
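A minimal sketch of that kind of page cache, using plain files; the temp-dir location, key scheme, and 60-second TTL are illustrative assumptions (and it needs PHP 5.3+ for the closure), not anything prescribed in the thread:

```php
<?php
// Minimal file-based page cache: serve the stored copy while it is fresh,
// otherwise regenerate the page and store it for the next request.
function cached_page($key, $ttl, $generate)
{
    $file = sys_get_temp_dir() . '/pagecache_' . md5($key) . '.html';

    // Fresh copy on disk? Return it without re-running the generator.
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);
    }

    // Regenerate and store; LOCK_EX avoids interleaved concurrent writes.
    $html = $generate();
    file_put_contents($file, $html, LOCK_EX);
    return $html;
}

// Example: this "expensive" page body is built at most once per minute.
echo cached_page('front-page', 60, function () {
    return "<html><body>Generated at " . date('c') . "</body></html>";
});
```

Memcached or a shared-memory cache does the same job with less disk I/O, but even a file cache like this avoids re-running the PHP and MySQL work on every hit.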

Jan 7 '08 #30

On Mon, 7 Jan 2008 00:55:35 -0800 (PST), "_q_u_a_m_i_s's"
<qu****@gmail.com> wrote:
>You do not have enough free RAM.
- your load avg is 61??? This is waaay over.
- 78% of the CPU time is spent in user mode (Apache & webalizer in
your case). Look for more ways to optimize Apache there.
webalizer should not be in this list; it should start/stop quickly.
Are your scripts memory-intensive? I do not know how much a usual PHP
process takes, but 14MB is a lot of data for one page request.
I don't know, but we'll look at how to tweak Apache.
>Try to use xdebug & cachegrind to see what is wrong with your scripts,
and determine some points where to create some "caching points" so
that redundant data is not re-calculated over and over and over
and......
I'll tell him to add RAM, but I also asked him to install the sysstat
package so he can run sar + mpstat + iostat and get some I/O data, to
see whether the hard disk is to blame.

Thanks a bunch!
Jan 7 '08 #31

Gilles Ganault wrote:
On Sun, 06 Jan 2008 20:47:11 -0500, Jerry Stuckle
<js*******@attglobal.net> wrote:
>The trailing E is not in the number - that's how very small floating
point numbers are displayed.

I'll look at how to format the output of microtime() so it's displayed
in plain decimal seconds instead:

======
$start_time = microtime(true);
$end_time = microtime(true);
$total_time = $end_time - $start_time;
print "Using microtime() as-is : $total_time<p>";

$starttimer = time()+microtime();
$stoptimer = time()+microtime();
$timer = round($stoptimer-$starttimer,4);
echo "Using Time + Microtime() : $timer";
======
Using microtime() as-is : 2.6941299438477E-5

Using Time + Microtime() : 0.2018
======
Which is incorrect.

2.6941299438477E-5 is standard scientific notation for display of very
large or very small numbers. In this case, the actual value would be:

0.000026941299438477 seconds - about 27 microseconds.

Another point here is that floating point numbers have up to 15
significant digits (actual digits, not decimal places). When you add
time() to it, you're adding 9 digits to the left - which only leaves you
with six to the right of the decimal point.

So for several reasons your "fix" breaks more than it fixes.
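The digit budget is easy to demonstrate; the timestamp below is just an illustrative early-2008 value, not taken from the thread:

```php
<?php
// A 64-bit float carries roughly 15-16 significant decimal digits. Adding a
// ~10-digit Unix timestamp spends most of that budget on the integer part,
// so a microsecond-scale fraction is rounded the moment it is added.
$tiny = 2.6941299438477E-5;   // the measured interval from above
$base = 1199664000;           // an illustrative Jan 2008 timestamp

$roundtrip = ($base + $tiny) - $base;
printf("original : %.15e\n", $tiny);
printf("roundtrip: %.15e\n", $roundtrip); // no longer the same value
```

At a magnitude around 1.2e9 the spacing between adjacent doubles is about 2.4e-7, so nothing much finer than a quarter of a microsecond survives the addition.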

>The real question would be if this is normal, a peak - or
maybe a lull in the traffic. In the last case it could be a problem.

It's peak time. He had about 400 users logged on, with an unknown
number of guests lurking, which is the highest number he ever had.
OK, that's bad, but it's not as bad as if it were off-peak :-)
>And yes, the CPU usage is high, but not necessarily that high.

The reason I thought it was a problem is that this article on "top"
says that "load average" should not be much higher than 4, i.e. about
4 runnable processes per processor (it's a single-CPU host):
YMMV. So that means you should only be running 4 processes. MySQL and
Apache will each run more than that when they're not doing anything.

So you need to dig deeper. Most of your processes are sitting idle -
and that 4:1 ratio could be valid for active tasks. Also, this can be
affected by other problems - such as lack of memory slowing things down.
"The higher the number for load average, the more likely your system
is starting to suffer under an excessive load. As the saying goes,
your mileage may vary, but I tend to think of anything under four as
acceptable. Any higher and it starts feeling slow. I've seen systems
running around 15 to 20 and let me tell you, it's ugly."
http://www.linuxjournal.com/article/5309
Sure, and I've seen systems running around 15 to 20 and they've been
doing fine. As he says - YMMV. It's only part of the equation. But it
is something you need to include in the equation.
>I'm really wondering if your host is trying to put too many people on
your server. Monitoring the results of top over a period of time will
help show you.

If it helps, it's very responsive with about 200 users logged on, but
it crawls to a halt with 400.
Yes, that definitely helps. What does top show when you've only got 200
users? It's the comparisons I look for, not actual numbers.
Generally speaking, and considering the number of web apps being
written these days, especially in LAMP, I'm surprised Google didn't
return an article on what to do to investigate a slow web application.

Thanks.
Yep, I agree.

But at this point it doesn't look like you've got a PHP problem. I'd
suggest you follow up in the Linux admin groups - where the experts hang
out. Most of us here know just enough Linux admin to get ourselves into
trouble :-)

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 7 '08 #32

On 4 Jan, 10:47, Gilles Ganault <nos...@nospam.com> wrote:
On Fri, 4 Jan 2008 02:23:20 -0800 (PST), "_q_u_a_m_i_s's"
There is no easy way to optimize PHP; as with any other language, you
have to know exactly what goes wrong with your scripts.

That's the problem he has: If I had to investigate where things can be
optimized in Linux + Apache + PHP + MySQL, what tool should I use?

Thanks
None that any hosting company would let you run on a non-dedicated
server.....

Make sure you're running an opcode cache like PHPAccelerator.

Make sure you've got compression enabled for all non-image transfers.

Make sure you're using a Unix socket if MySQL is running on the same
host as the webserver.

Typically, the best performance benefits will come from tuning your DB
schema - meaning you should enable long query logging on the DBMS,
then suck out the data and anonymise the SQL (strip out the query
parameters so that 'SELECT * FROM widgets WHERE id=2334' becomes
something like 'SELECT * FROM widgets WHERE id=$p1') and look at the
queries with the highest sum of run times.
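That anonymising pass can be sketched with a single regex; the $p1 placeholder style follows the example above, and this is only a rough sketch rather than a real SQL tokenizer (it ignores escaped quotes, floats, NULLs and the like):

```php
<?php
// Replace quoted strings and standalone integer literals in a logged query
// with numbered placeholders, so identical query *shapes* group together.
function anonymise_sql($sql)
{
    $n = 0;
    return preg_replace_callback(
        '~\'[^\']*\'|\b\d+\b~',      // a quoted string, or a bare integer
        function ($m) use (&$n) {
            $n++;
            return '$p' . $n;        // $p1, $p2, ...
        },
        $sql
    );
}

echo anonymise_sql("SELECT * FROM widgets WHERE id=2334"), "\n";
// SELECT * FROM widgets WHERE id=$p1
```

Feeding the slow query log through something like this before summing run times per normalised statement makes the worst offenders stand out.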

Analyse page rendering times - %T in Apache custom logs will only
resolve to integer seconds, but it's a starting point for a badly
performing system - for preference, use a proper end-to-end response
time tool like PastMon or Client Vantage, and again identify the pages
with the highest hit rate × response time.

Use XDebug and KCachegrind to find out which bits of code are running
badly and refactor them.

C.
Jan 7 '08 #33

On Jan 7, 1:32 am, Gilles Ganault <nos...@nospam.com> wrote:
On Sun, 6 Jan 2008 16:04:40 -0800 (PST), faulkes

<michael.behr...@gmail.com> wrote:

Indeed, it's running on a shared virtual server. Isn't each account
isolated from the others? I find it surprising to be able to see the
total number of processes running on the host, across all accounts.
Yes, they are, but all of those isolated accounts are still bound by
the # of physical CPUs the *real* machine has. So if your hosting
provider is running 6:1 (6 virts to 1 real CPU) or greater, and let's
face it, many of them do, you have a problem which is beyond your
control (short of a dedicated server or moving to a provider with a
better ratio).
What do you mean by "count"?
# of Apache children
# of children of any other applications
Since I posted the top processes above, do you mean running "ps -edf"
or "ps -aux"?
ps -ef | grep httpd | wc -l
ps -ef | grep otherproc | wc -l

Jan 7 '08 #34

Jerry Stuckle wrote:
Jeremy wrote:
>Gilles Ganault wrote:
>>Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

memcached: http://memcached.sf.net (for caching dynamic data in memory)
apc: http://us2.php.net/apc (caches script bytecode to reduce
compilation overhead)

These two modules will help enormously. I guarantee it. Using apc is
pretty much transparent, but memcached will require modifying your
database abstraction layer using the memcached functions
(http://php.net/manual/en/ref.memcache.php).

Not necessarily. MySQL and the OS both cache data - and generally do a
better job at a lower cost than using memcached.
Not necessarily. Using memcached on the application layer can save a
lot of network traffic to and from the database server, and can help
immensely. Quite immensely. The OS doesn't cache data from a separate
database server. I doubt the OP is running a local database server, as
he explicitly refers to the "MySQL server".
><Snipped re: connection pooling>

WRONG, WRONG, WRONG! This INCREASES resource usage on the server. With
persistent connections, you must have the maximum number of connections
*ever* required allocated *all of the time* - even if no one is using
your server.

MySQL persistent connections should not be used except in extreme cases
- like when you're running 100+ connections per second.
Sorry, but have you ever done any research on this topic? Connection
pooling is a well-established mechanism for boosting performance on
high-traffic servers. Keeping connections open is not a significant
drain on server resources, and can save a lot of overhead in creating
and tearing down connections. Also, you don't really know how many
connections per second his application is handling. I suggested he try
it and gauge the results for himself. If it really is slower, don't do it.
>
The FIRST thing to do when having performance problems is to identify
the cause of the performance problem.
That is certainly true.
You're making suggestions without
having any idea where the hold up is.
And what would you suggest? Should I do a code review for him? Rewrite
his queries? Or should I suggest a few common tweaks that might help
from a configuration standpoint? Or maybe we should all just stick to
flame wars instead of generating any useful discussion?
In at least two cases, your
suggestions could actually HURT performance. And the third case may or
may not help. If, for instance, the majority of the resource usage is
spent in long MySQL queries,
People are always saying "X is the bottleneck, so improving Y won't
help." This is just not true. It won't help as much as improving X,
but every cycle saved on redundantly compiling PHP code is a cycle that
can be used doing useful computation. NOT running a bytecode cache is
pretty pointless. Why not do it?

Jeremy

Jan 8 '08 #35

Jeremy wrote:
>every cycle saved on redundantly compiling PHP code is a cycle that
can be used doing useful computation.
Er.. what planet are YOU on?

Unless you are totally CPU bound, most of the time your processor is
doing precisely nothing. That's why it's important to work out what
the bottleneck is. Boosting CPU power may have as much effect as
fitting a turbocharger to a car that spends all its time in a traffic
jam.
NOT running a bytecode cache is
pretty pointless. Why not do it?
Because it may make no difference whatsoever. And adds more complexity.
Jeremy
Jan 8 '08 #36

The Natural Philosopher wrote:
Jeremy wrote:
>every cycle saved on redundantly compiling PHP code is a cycle that
can be used doing useful computation.

Er.. what planet are YOU on?

Unless you are totally CPU bound, most of the time your processor is
doing precisely nothing. That's why it's important to work out what
the bottleneck is. Boosting CPU power may have as much effect as
fitting a turbocharger to a car that spends all its time in a traffic
jam.
>NOT running a bytecode cache is pretty pointless. Why not do it?

Because it may make no difference whatsoever. And adds more complexity.
>Jeremy
So, you're saying it doesn't make any sense to eliminate the compilation
step on every request? That the request will be handled just as fast if
many thousands of lines of code must be compiled first?

Whatever planet I'm on, I want to move to yours - because computers
there seem to be magical.

Let's look at some benchmark tests:

http://2bits.com/articles/benchmarki...ng-drupal.html

In this particular case, adding APC resulted in a ~494% increase in
performance over PHP alone. Maybe that's a result of Drupal being
poorly written - couldn't tell you; I've never used it. But it seems
like something that could potentially be pretty helpful, so I suggested
he try it. I use it everywhere; it requires no maintenance, it installs
in seconds, it has concretely improved the performance of my
applications, and my servers have never crashed (with the exception of
occasional fan failures) or required restarts. Anecdotally, I would say
it's a pretty good thing to consider.

Jeremy
Jan 8 '08 #37

Jeremy wrote:
Jerry Stuckle wrote:
>Jeremy wrote:
>>Gilles Ganault wrote:
Hello,

I'm no LAMP expert, and a friend of mine is running a site which is a
bit overloaded. Before upgrading, he'd like to make sure there's no
easy way to improve efficiency.

A couple of things:
- MySQL : as much as possible, he keeps query results in RAM, but
apparently, each is session-specific, which means that results can't
be shared with other users.
Is there something that can be done in that area, ie. keep the maximum
amount of MySQL data in RAM, to avoid users (both logged-on and
guests) hitting the single MySQL server again and again?

- His hoster says that Apache server is under significant load. At
this point, I don't have more details, but generally speaking, what
are the well-know ways to optimize PHP apps?

Thank you.

memcached: http://memcached.sf.net (for caching dynamic data in memory)
apc: http://us2.php.net/apc (caches script bytecode to reduce
compilation overhead)

These two modules will help enormously. I guarantee it. Using apc
is pretty much transparent, but memcached will require modifying your
database abstraction layer using the memcached functions
(http://php.net/manual/en/ref.memcache.php).

Not necessarily. MySQL and the OS both cache data - and generally do
a better job at a lower cost than using memcached.

Not necessarily. Using memcached on the application layer can save a
lot of network traffic to and from the database server, and can help
immensely. Quite immensely. The OS doesn't cache data from a separate
database server. I doubt the OP is running a local database server, as
he explicitly refers to the "MySQL server".
No, it doesn't. But I'll bet he does run it locally. Virtually
everyone I know refers to their MySQL database as "the MySQL Server".
In fact, I find it's more often assumed to be on the same server when
not specified.

But whether it's local or remote, that's only going to help if you know
you're fetching the same data, and that data hasn't changed on the
server. With a database, that's often not a safe assumption.
>><Snipped re: connection pooling>

WRONG, WRONG, WRONG! This INCREASES resource usage on the server.
With persistent connections, you must have the maximum number of
connections *ever* required allocated *all of the time* - even if no
one is using your server.

MySQL persistent connections should not be used except in extreme
cases - like when you're running 100+ connections per second.

Sorry, but have you ever done any research on this topic? Connection
pooling is a well-established mechanism for boosting performance on
high-traffic servers. Keeping connections open is not a significant
drain on server resources, and can save a lot of overhead in creating
and tearing down connections. Also, you don't really know how many
connections per second his application is handling. I suggested he try
it and gauge the results for himself. If it really is slower, don't do it.
Yes, I have. In fact, I have many years of experience with persistent
connections, going back to DB2 in the 80's. And several years with
MySQL. And my comments stand.

Yes, persistent connections can help with very heavily used sites - in
the thousands of hits per second. But below that, the overhead of
making a connection - especially on a local machine - is small compared
to the need to maintain potentially hundreds of open connections because
you might use them at some point in time.

What you're suggesting is he shoot in the dark and hope he hits
something. And if he hits the wrong thing, he could easily hurt system
performance more than he helps it.
>>
The FIRST thing to do when having performance problems is to identify
the cause of the performance problem.

That is certainly true.
>You're making suggestions without having any idea where the hold up is.

And what would you suggest? Should I do a code review for him? Rewrite
his queries? Or should I suggest a few common tweaks that might help
from a configuration standpoint? Or maybe we should all just stick to
flame wars instead of generating any useful discussion?
Just what I did suggest. Identify the source of the problem first. And
I gave him some suggestions. Looks like he followed them up and found
some problems. And from what he found, the problem is not related to
database - but general resource shortage. So your suggestion of
persistent connections would take more resources and aggravate that
problem instead of helping it.

I have over 20 years of experience in tuning systems. I never make
changes until I've identified the cause(s) of the problem.
> In at least two cases, your suggestions could actually HURT
performance. And the third case may or may not help. If, for
instance, the majority of the resource usage is spent in long MySQL
queries,

People are always saying "X is the bottleneck, so improving Y won't
help." This is just not true. It won't help as much as improving X,
but every cycle saved on redundantly compiling PHP code is a cycle that
can be used doing useful computation. NOT running a bytecode cache is
pretty pointless. Why not do it?
Oh yes, it is very true. If X is the bottleneck, you aren't going to
make any significant improvement by changing Y. In fact, if you fix X,
you may not even need to change Y.

Of course, there is always the possibility that Y is a secondary
problem. If so, that will show up after fixing X.
Jeremy


--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 8 '08 #38

Jeremy wrote:
The Natural Philosopher wrote:
>Jeremy wrote:
>>every cycle saved on redundantly compiling PHP code is a cycle that
can be used doing useful computation.

Er.. what planet are YOU on?

Unless you are totally CPU bound, most of the time your processor is
doing precisely nothing. That's why it's important to work out what
the bottleneck is. Boosting CPU power may have as much effect as
fitting a turbocharger to a car that spends all its time in a traffic
jam.
>>NOT running a bytecode cache is pretty pointless. Why not do it?

Because it may make no difference whatsoever. And adds more complexity.
>>Jeremy

So, you're saying it doesn't make any sense to eliminate the compilation
step on every request? That the request will be handled just as fast if
many thousands of lines of code must be compiled first?

Whatever planet I'm on, I want to move to yours - because computers
there seem to be magical.
Sorry, I agree with Philo here. If the CPU is idle anyway, it's not
going to make any difference.
Let's look at some benchmark tests:

http://2bits.com/articles/benchmarki...ng-drupal.html
In this particular case, adding APC resulted in a ~494% increase in
performance over PHP alone. Maybe that's a result of Drupal being
poorly written - couldn't tell you; I've never used it. But it seems
like something that could potentially be pretty helpful, so I suggested
he try it. I use it everywhere; it requires no maintenance, it installs
in seconds, it has concretely improved the performance of my
applications, and my servers have never crashed (with the exception of
occasional fan failures) or required restarts. Anecdotally, I would say
it's a pretty good thing to consider.

Jeremy
Yes - IN THIS PARTICULAR CASE. That is not always true.

And in this case it would have hurt performance, because a major
portion of his bottleneck was a shortage of memory. Now you've just
taken more of it.

And BTW - your memcache idea would have taken even more of what he was
short of.

You're suggesting he steal bread from a starving man...

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================

Jan 8 '08 #39

Jerry Stuckle wrote:
In fact, I find it's more often assumed to be on the same server
when not specified.
the overhead of making a connection - especially on a local machine
- is small..
the problem is not related to database - but general resource
shortage.

Interesting and valid points all around. I'm glad you were able to help
the OP. Many of my assumptions about his situation were incorrect.

Memcached was not a helpful suggestion here, but I still think that apc
is helpful in most situations. Even if memory is short - a smart opcode
cache won't particularly hurt memory use. In fact, it may even help -
instead of compiling the code (into memory, I might add) separately for
multiple concurrent requests, suddenly there is already a compiled copy
in shared memory that can be utilized. True, there may be less memory
used when the server is completely idle - but that is not particularly
helpful, right? Before denigrating it, I suggest you try it out on one
of your applications; it might surprise you.

I stand by my conviction that explicit caching is a good idea more often
than not, but then I rarely work with a single-server (or significantly
memory-bound) environment so maybe it's my lack of experience there
that's distorting my perception in your view.

In any case, I appreciate the discussion.

Jeremy
Jan 8 '08 #40

This discussion thread is closed

Replies have been disabled for this discussion.