
application level database connection

I have a PHP5 application that accepts external messages, and inserts
them into a database. It's rather high traffic.. the server could be
receiving thousands at a time, and recently mysql has been locking up
because it's exceeding the max_connections limit. I raised it for now,
but that's only a temporary fix.

My question is: is there a way to make a connection to mysql that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?
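
For reference, something like this is what I have in mind (class name and credentials below are just placeholders):

<?php
// Sketch of a singleton connection wrapper. As far as I understand, each
// web request runs in its own PHP process, so this static instance only
// lives for the duration of one request.
class Db
{
    private static $link = null;

    public static function get()
    {
        if (self::$link === null) {
            self::$link = mysql_connect('localhost', 'user', 'pass');
            mysql_select_db('messages', self::$link);
        }
        return self::$link;
    }
}

// Every call within the same request reuses the one connection:
mysql_query("SELECT 1", Db::get());
?>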

Nov 16 '06 #1
10 Replies


"rich" <rb*****@gmail.comwrote in message
news:11**********************@b28g2000cwb.googlegr oups.com...
My question is is there a way to make a connection to mysql that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a per
instance basis, or am I incorrect?
We'd all need to know a little bit more about your application, and whether
the updates have to be done right away.

The obvious solution that comes to mind is just to have each PHP web process
append to a file, and have a separate daemon that flushes the file to the
database. Unix file semantics guarantee that there is a way to do this with
guaranteed results (for example, the daemon mv's the file then operates on
it ... this guarantees that any append operations either occur before or
after the mv but won't collide with it).

You could have the daemon just look at the file from time to time (once a
second?), or you could use a more fancy triggering mechanism.
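
Roughly like this (file names, table, and credentials below are just placeholders, not a finished implementation):

<?php
// Web-facing side (sketch): append one message per line to a spool file.
// LOCK_EX is belt-and-braces against interleaved writes from concurrent requests.
$line = str_replace("\n", " ", $_POST['message']) . "\n";
file_put_contents('/var/spool/myapp/messages.log', $line, FILE_APPEND | LOCK_EX);
?>

<?php
// Daemon side (sketch): run periodically, e.g. once a second.
$spool = '/var/spool/myapp/messages.log';
$work  = '/var/spool/myapp/messages.' . getmypid();

if (file_exists($spool) && rename($spool, $work)) {   // the "mv" step
    $db = mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('messages', $db);
    foreach (file($work, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $msg) {
        mysql_query("INSERT INTO incoming (body) VALUES ('" .
                    mysql_real_escape_string($msg, $db) . "')", $db);
    }
    mysql_close($db);
    unlink($work);
}
?>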

The key design factor seems to be whether the database updates have to
complete by the time the PHP web script ends ...

Dave.

Nov 16 '06 #2


David T. Ashley wrote:
"rich" <rb*****@gmail.comwrote in message
news:11**********************@b28g2000cwb.googlegr oups.com...
My question is is there a way to make a connection to mysql that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a per
instance basis, or am I incorrect?

We'd all need to know a little bit more about your application, and whether
the updates have to be done right away.

The obvious solution that comes to mind is just to have each PHP web process
append to a file, and have a separate daemon that flushes the file to the
database. Unix file semantics guarantee that there is a way to do this with
guaranteed results (for example, the daemon mv's the file then operates on
it ... this guarantees that any append operations either occur before or
after the mv but won't collide with it).

You could have the daemon just look at the file from time to time (once a
second?), or you could use a more fancy triggering mechanism.

They key design factor seems to be whether the database updates have to
complete by the time the PHP web script ends ...

Dave.
No, it doesn't have to be instant, and I definitely thought about this
method. At this point though, it seems like if the application could
hold a connection, that would be a heck of a lot better than batching
these inserts. I just don't know if it can. Perhaps an import would
be better anyway for performance reasons...

Nov 16 '06 #3

rich wrote:
I have a PHP5 application that accepts external messages, and inserts
them into a database. It's rather high traffic.. the server could be
receiving thousands at a time, and recently mysql has been locking up
because it's exceeding the max_connections limit. I raised it for now,
but that's only a temporary fix.

My question is is there a way to make a connection to mysql that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a per
instance basis, or am I incorrect?
Hi,

I understand that if the same login credentials (username/password) are
used from the same host in your scripts, the connection will be recycled
if you use mysql_pconnect().

Are you using pconnect?
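
The change is basically just this (credentials and table name below are placeholders):

<?php
// Persistent connection (sketch): with the same credentials and host in
// every script, a connection left open by an earlier request can be reused.
$db = mysql_pconnect('localhost', 'user', 'pass');
mysql_select_db('messages', $db);
mysql_query("INSERT INTO incoming (body) VALUES ('" .
            mysql_real_escape_string($_POST['message'], $db) . "')", $db);
// Note: mysql_close() does not close persistent links; PHP keeps them open.
?>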

Also have a look at your database structure: is every insert causing
triggers to run, or touching things like primary keys, constraints, and indexes?
Indexes especially can slow your inserts down, since the index must be
updated for each insert.

If your table IS using indexes, it might help to:
- do your inserts into a very plain table (MyISAM) for storage
- move them to the real table(s) later, when demand on your server is
lower (see the sketch below).
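
Something like this for the plain storage table (table and column names are placeholders):

<?php
// One-time setup (sketch): a MyISAM table with no secondary indexes, so
// inserts stay cheap; a later job moves rows into the real tables.
$db = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('messages', $db);
mysql_query("CREATE TABLE IF NOT EXISTS incoming_raw (
                 id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
                 body TEXT NOT NULL,
                 PRIMARY KEY (id)
             ) ENGINE=MyISAM", $db);
mysql_close($db);
?>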

Regards,
Erwin Moller
Nov 16 '06 #4


Erwin Moller wrote:
Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?
No, I'm not. I think my biggest problem here, which I don't fully
understand, is how I can (or whether it's even possible to) use one
resource across multiple instances of an object. I understand that
something like a static object will live throughout an object and its
children. But if 5 different processes open 5 different instances of the
same object, is it even possible in PHP to give them the same resource if
they never know about each other? I suppose if what you say about pconnect
is true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.

Nov 16 '06 #5

rich wrote:
>
Erwin Moller wrote:
>Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so next
time you need it you won't have to go through the authorisation again.

I understand that something like a
static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?
Yes, that is possible. PHP will handle that; you don't have to worry about
it. PHP must use some connection pooling mechanism behind the scenes, but I
do not know HOW.
But from a coder's point of view: when PHP needs a connection, it checks if
it has one already (for the same user/password/host):
- If no, it creates a fresh one.
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.
First check the pconnect; if that doesn't help enough, redesign your
database, since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller
Nov 17 '06 #6

Erwin Moller wrote:
rich wrote:

>>Erwin Moller wrote:
>>>Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so next
time you need it you won't have to go through the authorisation again.

I understand that something like a
>>static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?


Yes, that is possible. PHP will handle that, you don't have to worry about
it. PHP must use some connectionpoolingmechanism behind the scenes, but I
do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks if
it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
>>true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.


First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller
This will only make the situation worse.

Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.

I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #7

Jerry Stuckle wrote:
Erwin Moller wrote:
>rich wrote:

>>>Erwin Moller wrote:

Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?
No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so
next time you need it you won't have to go through the authorisation
again.

I understand that something like a
>>>static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?


Yes, that is possible. PHP will handle that, you don't have to worry
about it. PHP must use some connectionpoolingmechanism behind the scenes,
but I do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks
if it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
>>>true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.


First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller

This will only make the situation worse.
Will it?
You sound very sure.
>
Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.
That depends.
If the time it takes to run one request is short, then a relatively large
amount of time is spent on creating a connection.
If the script runs long, that overhead is relatively small.

In the first case it might help to use pconnect().

But you knew that because you describe it yourself.
So why say it will make the situation worse?

The OP doesn't have to redesign his whole app or anything.
It is easily checked with minimum effort.
I think it is definitely worth a try to add the 'p' and see if things run
smoother.

Regards,
Erwin Moller
>
I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.
Nov 17 '06 #8

rich wrote:
I have a PHP5 application that accepts external messages, and inserts
them into a database. It's rather high traffic.. the server could be
receiving thousands at a time, and recently mysql has been locking up
because it's exceeding the max_connections limit. I raised it for now,
but that's only a temporary fix.

My question is is there a way to make a connection to mysql that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a per
instance basis, or am I incorrect?
That's a lot of messages to be handling. How are they getting to your
server - via web pages, a socket, or some other means? That will make a
difference.

From the MySQL end, you should start by ensuring you're using good
programming practices: things like closing the database connection as
soon as you're through with it, rather than waiting until the end of the
script (or, even worse, depending on the garbage collector to do it for you).
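
For example (table, column, and credentials below are placeholders):

<?php
// Sketch: open, insert, close right away -- don't hold the connection while
// the rest of the script (parsing, logging, building the response) runs.
$db = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('messages', $db);
mysql_query("INSERT INTO incoming (body) VALUES ('" .
            mysql_real_escape_string($_POST['message'], $db) . "')", $db);
mysql_close($db);   // free the connection for the next request immediately

// ... remaining non-database work continues here ...
?>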

If you have SELECTs running concurrently, especially with INNODB tables,
you may be running into locking issues. This will delay requests,
resulting in a larger number of concurrent connections.

Also, the more indexes you have on your table(s), the longer INSERT
operations will take. Check out your indexes and get rid of any you
don't need.

The recommendation of spooling to the file system can be awkward at
best. You have to ensure the file is locked before anyone writes to it,
or you can end up with a corrupted file.

Personally, I like writing data to a "scratch" table in MySQL. This
would be a MyISAM table with no indexes. It may have only two columns -
an id and the unparsed data - or, if the data always have the same
structure, it could have several columns.

The idea is to have a table that the application can insert data into as
quickly as possible.

Then I have a batch script fetch the unparsed info from this table and
insert it into the appropriate tables. Depending on how busy the
system is, how much data is being spooled, how long it takes to parse
the data, etc., I might fetch one row at a time - or I might fetch a
dozen, read them all into an array then process them.

The only thing you need to watch out for is that you don't exceed the
processing time limit for the PHP script.
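
Roughly like this (table and column names are placeholders; the parsing step is application-specific):

<?php
// Batch mover (sketch): drain the no-index scratch table into the real
// table in small chunks so one run stays under the PHP time limit.
$db = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('messages', $db);

$res = mysql_query("SELECT id, body FROM incoming_raw ORDER BY id LIMIT 100", $db);
while ($row = mysql_fetch_assoc($res)) {
    // parse $row['body'] into real columns here (application-specific)
    mysql_query("INSERT INTO messages_parsed (raw_body) VALUES ('" .
                mysql_real_escape_string($row['body'], $db) . "')", $db);
    mysql_query("DELETE FROM incoming_raw WHERE id = " . (int) $row['id'], $db);
}
mysql_close($db);
?>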

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #9

Erwin Moller wrote:
Jerry Stuckle wrote:

>>Erwin Moller wrote:
>>>rich wrote:

Erwin Moller wrote:
>Hi,
>
>I understood that if:
>- the same login-credentials (username/password)
>- from the same host
>are used in your scripts, the connection will be recycled if you use
>mysql_pconnect().
>
>Are you using pconnect?
>

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so
next time you need it you won't have to go through the authorisation
again.

I understand that something like a
static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?
Yes, that is possible. PHP will handle that, you don't have to worry
about it. PHP must use some connectionpoolingmechanism behind the scenes,
but I do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks
if it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.
First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller

This will only make the situation worse.


Will it?
You sound very sure.
Yes, I am.
>
>>Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.


That depends.
If the time it takes to run 1 request is short, then a relatively large
amount of time is spend on creating a connection.
If the script runs long, this is relatively short.
Actually, the time spent making a connection is not all that long.
Sure, it's longer than reusing a permanent connection, but it's still
not that long.
In the first case it might help to use pconnect().

But you knew that because you describe it yourself.
So why say it will make the situation worse?
Because with permanent connections you have to have the maximum number
of permanent connections required available all the time. This uses up
a huge amount of server resources 24/7 just to handle an occasional peak
in requirements. The result is system and MySQL resources are tied up
unnecessarily almost all of the time.
The OP doesn't have to redesign his whole app or something.
It is easily checked with minimum effort.
I think it it definitely worth a try to add the 'p' and see if things run
smoother.
I repeat - I have YET to see where mysql_pconnect SOLVES a problem. It
can create more. It can slow down the entire system during normal
operation while providing only a marginal increase in performance during
peak times.

Have you ever done performance tests to see how much time
mysql_pconnect() actually saves? It's not much.

He would be much better off restructuring his application to provide
better performance.
Regards,
Erwin Moller

>>I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.


--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #10


Jerry Stuckle wrote:
That's a lot of messages to be handling. How are they getting to your
server - via web pages, a socket, or some other means? That will make a
difference.
All HTTP requests... about half and half POST and GET.
From the MySQL end, you should start by ensuring you're using good
programming practices. Things like closing the database connection as
soon as you're through with it, rather than waiting until the end of the
script (or even worse, depending on the garbage collector to do it for you.
Yeah, when the reality of the traffic hit me the other night with 3 DB
crashes in a row, I made sure there's no way a close doesn't happen.
Personally, I like writing data to a "scratch" table in MySQL. This
would be a MyISAM table with no indexes. It may only have two columns -
an id and unparsed data, or, if the data are always the same, it could
have several columns.
Very good suggestion... I'll throw that in my list of possibilities I'm
going back and forth on. Thank you.

Nov 17 '06 #11
