
performance issues

Hi all, just had a couple of performance questions for you to ponder...

I am running a fairly large php/mysql app. In total it is only about
100 or so separate php files, but it is large because I have multiple
clients and each currently gets their own directory with a copy of each
of these files, which adds up quickly.

My question is this... would it be better to give each client their
own actual copy of each file, or to keep a single copy of each file in
a central location and serve the right dynamic content depending on
what certain session variables contain? (This would not be difficult
to do.) My original thinking was that giving each client their own
copy of all files would prevent problems with too many people trying
to access a single file at the same time, but now I am wondering
whether accessing many files in different places on the server would
itself create problems. Also, the single-copy method makes updates
much easier, since I only have to update one location. Does anyone
know a rough threshold for the number of simultaneous or
near-simultaneous requests for a single file that Apache 1.3.x can
handle without problems?

My second and last question has to do with my database structure...
currently, each client has their own tables in a single database,
with a unique name they choose used as a prefix on all of their table
names. Would it instead be wiser to create an individual database for
each client? My concern is that once too many tables exist in my
current database, there might be more overhead in performing
queries... I am not sure which method would be more beneficial.

Thanks so much in advance for any help you can provide.

- Marcus

Jul 17 '05 #1


On Mon, 13 Oct 2003 12:20:16 -0500, Marcus <Ju********@aol.com> scrawled:
> Hi all, just had a couple of performance questions for you to ponder...
>
> I am running a fairly large php/mysql app. In total it is only about
> 100 or so separate php files, but it is large because I have multiple
> clients and each currently gets their own directory with a copy of each
> of these files, which adds up quickly.
>
> My question is this... would it be better to give each client their
> own actual copy of each file, or to keep a single copy of each file in
> a central location and serve the right dynamic content depending on
> what certain session variables contain? (This would not be difficult
> to do.) My original thinking was that giving each client their own
> copy of all files would prevent problems with too many people trying
> to access a single file at the same time, but now I am wondering
> whether accessing many files in different places on the server would
> itself create problems. Also, the single-copy method makes updates
> much easier, since I only have to update one location. Does anyone
> know a rough threshold for the number of simultaneous or
> near-simultaneous requests for a single file that Apache 1.3.x can
> handle without problems?
Don't use sessions - use a nice trick....

Create a config directory which contains files of the form:

www.client1_site.com.inc
www.client2_site.com.inc
www.client3_site.com.inc

and choose the appropriate config based on the server name; then you
need just one copy of all the files. (This include file contains the
configuration for databases, security, etc.)

If the clients have different static content you may still need
separate directories for that.
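
Something like this would do it (just a sketch - the config path and
the variable names are only examples, adjust to taste):

    <?php
    // Pick the per-site config based on the host name being served.
    $host = $_SERVER['SERVER_NAME'];              // e.g. www.client1_site.com
    $config = '/path/to/config/' . basename($host) . '.inc';

    if (!file_exists($config)) {
        die('Unknown site: ' . htmlspecialchars($host));
    }

    require $config;   // defines things like $db_host, $db_name, $db_user, $db_pass
    ?>

basename() strips any path components from the host name before it is
used to build the filename, so a forged Host header can't pull in a
file from outside the config directory.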
> My second and last question has to do with my database structure...
> currently, each client has their own tables in a single database,
> with a unique name they choose used as a prefix on all of their table
> names. Would it instead be wiser to create an individual database for
> each client? My concern is that once too many tables exist in my
> current database, there might be more overhead in performing
> queries... I am not sure which method would be more beneficial.

I would choose a different database for each client - backups become
much easier, and your code becomes simpler - no more

"select * from $CLIENT"."_table1 where ID = 1"

style queries

just "select * from $CLIENT"."_table1 where ID = 1"

Security of data is also easier - you can set separate permissions for
each client's database.

Maintenance is easier - you want a new database?
mysql ... NEWCLIENT < schema.sql
and away you go....
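
And once each client has their own database, the PHP itself can stay
identical for every client - just point the connection at the right
database from the per-site config include. Rough sketch (variable
names are only examples, using the old mysql_* functions):

    <?php
    // $db_host, $db_user, $db_pass and $db_name would come from the
    // per-site config include chosen above.
    $link = mysql_connect($db_host, $db_user, $db_pass)
        or die('Could not connect: ' . mysql_error());

    mysql_select_db($db_name, $link);   // e.g. 'client1'

    // No per-client table prefixes needed any more:
    $result = mysql_query("select * from table1 where ID = 1", $link);
    ?>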
> Thanks so much in advance for any help you can provide.

Jul 17 '05 #2
