application level database connection

I have a PHP5 application that accepts external messages and inserts
them into a database. It's rather high traffic... the server could be
receiving thousands at a time, and recently MySQL has been locking up
because it's exceeding the max_connections limit. I raised the limit for
now, but that's only a temporary fix.

My question is: is there a way to make a connection to MySQL that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?

Nov 16 '06 #1
"rich" <rb*****@gmail.comwrote in message
news:11**********************@b28g2000cwb.googlegr oups.com...
My question is: is there a way to make a connection to MySQL that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?
We'd all need to know a little bit more about your application, and whether
the updates have to be done right away.

The obvious solution that comes to mind is just to have each PHP web process
append to a file, and have a separate daemon that flushes the file to the
database. Unix file semantics make this safe: for example, the daemon mv's
the file and then operates on the moved copy, which guarantees that any
append operation happens either before or after the mv and won't collide
with it.

You could have the daemon just look at the file from time to time (once a
second?), or you could use a more fancy triggering mechanism.

The key design factor seems to be whether the database updates have to
complete by the time the PHP web script ends ...
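
Just as a sketch of the idea (the spool path, table name and credentials
below are made-up placeholders, and the mysql_* calls could be swapped for
whatever API you're using):

<?php
// Web request side: append one encoded message per line to a spool file.
// flock() serializes concurrent appends from separate PHP processes.
function spool_message($spoolFile, $message)
{
    $fh = fopen($spoolFile, 'ab');
    if ($fh === false) {
        return false;
    }
    $ok = false;
    if (flock($fh, LOCK_EX)) {
        // base64 keeps embedded newlines from splitting a record across lines
        $ok = (fwrite($fh, base64_encode($message) . "\n") !== false);
        flock($fh, LOCK_UN);
    }
    fclose($fh);
    return $ok;
}

// Daemon side (cron or a loop): mv the spool out of the way, then load it.
function flush_spool($spoolFile, $link)
{
    if (!file_exists($spoolFile)) {
        return;
    }
    $batch = $spoolFile . '.processing';
    rename($spoolFile, $batch);   // new appends go to a fresh spool file
    sleep(1);                     // let any append already in progress finish
    foreach (file($batch, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $msg = mysql_real_escape_string(base64_decode($line), $link);
        mysql_query("INSERT INTO messages (body) VALUES ('$msg')", $link);
    }
    unlink($batch);
}
?>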

Dave.

Nov 16 '06 #2

David T. Ashley wrote:
"rich" <rb*****@gmail.comwrote in message
news:11**********************@b28g2000cwb.googlegr oups.com...
My question is: is there a way to make a connection to MySQL that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?

We'd all need to know a little bit more about your application, and whether
the updates have to be done right away.

The obvious solution that comes to mind is just to have each PHP web process
append to a file, and have a separate daemon that flushes the file to the
database. Unix file semantics make this safe: for example, the daemon mv's
the file and then operates on the moved copy, which guarantees that any
append operation happens either before or after the mv and won't collide
with it.

You could have the daemon just look at the file from time to time (once a
second?), or you could use a more fancy triggering mechanism.

The key design factor seems to be whether the database updates have to
complete by the time the PHP web script ends ...

Dave.
No, it doesn't have to be instant, and I definitely thought about this
method. At this point, though, it seems like if the application can
hold a connection, that would be a heck of a lot better than batching
these inserts. I just don't know if it can. Perhaps an import would
be better anyway for performance reasons...

Nov 16 '06 #3
rich wrote:
I have a PHP5 application that accepts external messages, and inserts
them into a database. It's rather high traffic.. the server could be
receiving thousands at a time, and recently mysql has been locking up
because it's exceeding the max_connections limit. I raised it for now,
but that's only a temporary fix.

My question is: is there a way to make a connection to MySQL that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?
Hi,

My understanding is that if:
- the same login credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?
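
If not, it is a very small change to try. A minimal sketch (credentials,
database and table names are just placeholders):

<?php
// mysql_pconnect() reuses an already-open connection for the same
// host/user/password combination instead of opening a new one per request.
$link = mysql_pconnect('localhost', 'app_user', 'secret');   // placeholder credentials
if ($link === false) {
    die('Could not connect: ' . mysql_error());
}
mysql_select_db('messages_db', $link);                        // placeholder database

$message = isset($_POST['msg']) ? $_POST['msg'] : '';         // assumed incoming payload
mysql_query(
    "INSERT INTO messages (body) VALUES ('"
    . mysql_real_escape_string($message, $link) . "')",
    $link
);
// Note: mysql_close() has no effect on a persistent connection.
?>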

Also have a look at your database structure: is every insert causing extra
work to run (primary keys, constraints, indexes)?
Indexes especially can slow your inserts down: for each insert, every index
must be updated.

If your table IS using indexes, it might help to:
- do your inserts into a very plain table (MyISAM) for storage
- move them one by one to the real tables later, when demand on your
server is lower.

Regards,
Erwin Moller
Nov 16 '06 #4

Erwin Moller wrote:
Hi,

My understanding is that if:
- the same login credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?
No, I'm not. I think the biggest thing I don't fully understand here is
how I can (or whether it's even possible to) use one resource for
multiple instances of an object. I understand that something like a
static object will live throughout an object and its children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other? I suppose if what you say about pconnect is
true, then you just answered my question.
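
Just to be concrete, the kind of thing I was picturing is below (names are
made up) - which, if I follow you, only shares the link between objects
within a single request/process, so it wouldn't help across processes:

<?php
// One shared MySQL link for every object created during a single request.
// Separate PHP processes still open their own links; only pconnect-style
// persistence reuses links across requests.
class Db
{
    private static $link = null;

    public static function link()
    {
        if (self::$link === null) {
            self::$link = mysql_connect('localhost', 'app_user', 'secret'); // placeholders
            mysql_select_db('messages_db', self::$link);
        }
        return self::$link;
    }
}

// Any object in this request can call Db::link() and get the same resource,
// so building five objects still opens only one connection.
?>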

Thanks for the other tips on indexing too.. that'll definitely help.

Nov 16 '06 #5
rich wrote:
>
Erwin Moller wrote:
>Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so next
time you need it you won't have to go through the authorisation again.

I understand that something like a
static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?
Yes, that is possible. PHP will handle that, you don't have to worry about
it. PHP must use some connection-pooling mechanism behind the scenes, but I
do not know HOW.
But from a coder's point of view: when PHP needs a connection, it checks if
it has one already (for the same user/password/host):
- If no, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.
First check pconnect; if that doesn't help enough, redesign your
database, since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller
Nov 17 '06 #6
Erwin Moller wrote:
rich wrote:

>>Erwin Moller wrote:
>>>Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so next
time you need it you won't have to go through the authorisation again.

I understand that something like a
>>static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?


Yes, that is possible. PHP will handle that, you don't have to worry about
it. PHP must use some connection-pooling mechanism behind the scenes, but I
do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks if
it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
>>true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.


First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller
This will only make the situation worse.

Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.

I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #7
Jerry Stuckle wrote:
Erwin Moller wrote:
>rich wrote:

>>>Erwin Moller wrote:

Hi,

I understood that if:
- the same login-credentials (username/password)
- from the same host
are used in your scripts, the connection will be recycled if you use
mysql_pconnect().

Are you using pconnect?
No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so
next time you need it you won't have to go through the authorisation
again.

I understand that something like a
>>>static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?


Yes, that is possible. PHP will handle that, you don't have to worry
about it. PHP must use some connection-pooling mechanism behind the scenes,
but I do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks
if it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
>>>true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.


First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller

This will only make the situation worse.
Will it?
You sound very sure.
>
Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.
That depends.
If the time it takes to run one request is short, then a relatively large
amount of time is spent on creating a connection.
If the script runs long, that overhead is relatively small.

In the first case it might help to use pconnect().

But you knew that, because you describe it yourself.
So why say it will make the situation worse?

The OP doesn't have to redesign his whole app or anything.
It is easily checked with minimum effort.
I think it is definitely worth a try to add the 'p' and see if things run
smoother.

Regards,
Erwin Moller
>
I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.
Nov 17 '06 #8
rich wrote:
I have a PHP5 application that accepts external messages, and inserts
them into a database. It's rather high traffic.. the server could be
receiving thousands at a time, and recently mysql has been locking up
because it's exceeding the max_connections limit. I raised it for now,
but that's only a temporary fix.

My question is: is there a way to make a connection to MySQL that
multiple instances of an object will use together? I'm under the
impression that something like a singleton would only live on a
per-instance basis, or am I incorrect?
That's a lot of messages to be handling. How are they getting to your
server - via web pages, a socket, or some other means? That will make a
difference.

From the MySQL end, you should start by ensuring you're using good
programming practices - things like closing the database connection as
soon as you're through with it, rather than waiting until the end of the
script (or, even worse, depending on the garbage collector to do it for you).
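
For example (connection details and table name are placeholders):

<?php
$link = mysql_connect('localhost', 'app_user', 'secret');   // placeholder credentials
mysql_select_db('messages_db', $link);

mysql_query("INSERT INTO messages (body) VALUES ('...')", $link);

// Free the connection the moment the last query is done, instead of
// holding the slot while the rest of the script keeps running.
mysql_close($link);

// ... any remaining work (logging, building the response) now runs without
// a database connection held open.
?>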

If you have SELECTs running concurrently, especially with INNODB tables,
you may be running into locking issues. This will delay requests,
resulting in a larger number of concurrent connections.

Also, the more indexes you have on your table(s), the longer INSERT
operations will take. Check out your indexes and get rid of any you
don't need.

The recommendation of spooling to the file system can be awkward at
best. You have to ensure the file is locked before anyone writes to it,
or you can end up with a corrupted file.

Personally, I like writing data to a "scratch" table in MySQL. This
would be a MyISAM table with no indexes. It may have only two columns -
an id and the unparsed data - or, if the data always have the same
format, it could have several columns.

The idea is to have a table that the application can insert data into as
quickly as possible.
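
Something along these lines - a sketch only, with made-up table, column and
variable names:

<?php
// Scratch table - MyISAM, no secondary indexes, so each INSERT is as cheap
// as possible (the AUTO_INCREMENT id needs a key, but that's all):
//
//   CREATE TABLE message_spool (
//       id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
//       raw TEXT NOT NULL
//   ) ENGINE = MyISAM;

// Fast path in the web request: one plain insert, then close.
$link = mysql_connect('localhost', 'app_user', 'secret');    // placeholder credentials
mysql_select_db('messages_db', $link);

$incoming = isset($_REQUEST['msg']) ? $_REQUEST['msg'] : '';  // assumed incoming payload
mysql_query(
    "INSERT INTO message_spool (raw) VALUES ('"
    . mysql_real_escape_string($incoming, $link) . "')",
    $link
);
mysql_close($link);
?>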

Then I have a batch script fetch the unparsed info from this table and
insert it into the appropriate tables. Depending on how busy the
system is, how much data is being spooled, how long it takes to parse
the data, etc., I might fetch one row at a time - or I might fetch a
dozen, read them all into an array, then process them.

The only thing you need to watch out for is that you don't exceed the
processing time limit for the PHP script.
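
A rough sketch of that batch job (parse_message() below is a stand-in for
whatever parsing the data needs; the table names and the 30-second limit
are assumptions):

<?php
// Cron job: move rows from the scratch table into the real tables in
// chunks, stopping well before PHP's max_execution_time is reached.
function parse_message($raw)
{
    // Hypothetical format: "sender|body".
    $parts = explode('|', $raw, 2);
    return array(
        'sender' => $parts[0],
        'body'   => isset($parts[1]) ? $parts[1] : '',
    );
}

$link = mysql_connect('localhost', 'app_user', 'secret');    // placeholder credentials
mysql_select_db('messages_db', $link);

$deadline = time() + 25;                                      // stay under a 30-second limit
while (time() < $deadline) {
    $res = mysql_query("SELECT id, raw FROM message_spool ORDER BY id LIMIT 50", $link);
    if (mysql_num_rows($res) == 0) {
        break;                                                // spool is empty
    }
    while ($row = mysql_fetch_assoc($res)) {
        $fields = parse_message($row['raw']);
        mysql_query(
            "INSERT INTO messages (sender, body) VALUES ('"
            . mysql_real_escape_string($fields['sender'], $link) . "', '"
            . mysql_real_escape_string($fields['body'], $link) . "')",
            $link
        );
        mysql_query("DELETE FROM message_spool WHERE id = " . (int) $row['id'], $link);
    }
}
mysql_close($link);
?>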

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #9
Erwin Moller wrote:
Jerry Stuckle wrote:

>>Erwin Moller wrote:
>>>rich wrote:

Erwin Moller wrote:
>Hi,
>
>I understood that if:
>- the same login-credentials (username/password)
>- from the same host
>are used in your scripts, the connection will be recycled if you use
>mysql_pconnect().
>
>Are you using pconnect?
>

No, I'm not. I think my biggest problem here that I don't fully
understand is how can I (or is it possible to) use a resource for
multiple instances of an object.

Hi,

I think you ONLY have to use mysql_pconnect instead of mysql_connect. PHP
will make sure the same connection isn't used at the same time.
I am not sure how it is implemented, but I expect PHP flags a connection
idle/finished when the script ends, but leaves the connection open, so
next time you need it you won't have to go through the authorisation
again.

I understand that something like a
static object will live throughout an object and it's children. But if
5 different processes open 5 different instances of the same object, is
it even possible in PHP to give them the same resource if they never
know about each other?
Yes, that is possible. PHP will handle that, you don't have to worry
about it. PHP must use some connection-pooling mechanism behind the scenes,
but I do not know HOW.
But from a coders point of view: When PHP needs a connection, it checks
if it has one already (for the same user/password/host):
- If No, it creates a fresh one
- If yes (AND it is free), it returns that connection.
I suppose if what you say about pconnect is
true, then you just answered my question.

Thanks for the other tips on indexing too.. that'll definitely help.
First check the pconnect, if that doesn't help enough, redesign your
database since that will be a lot more work than adding a 'p' to your
script :-).
Long tables with indexes get very slow for inserts/updates.
Indexes only speed things up when querying (SELECT), but slow down
update/insert instructions.

Good luck!

Regards,
Erwin Moller

This will only make the situation worse.


Will it?
You sound very sure.
Yes, I am.
>
>>Right now he has a problem with the maximum number of connections being
occasionally exceeded. mysql_pconnect() requires the maximum number of
connections to be allocated ALL the time.


That depends.
If the time it takes to run one request is short, then a relatively large
amount of time is spent on creating a connection.
If the script runs long, that overhead is relatively small.
Actually, the time spent making a connection is not all that long.
Sure, it's longer than reusing a permanent connection, but it's still
not that long.
In the first case it might help to use pconnect().

But you knew that because you describe it yourself.
So why say it will make the situation worse?
Because with permanent connections you have to have the maximum number
of permanent connections required available all the time. This uses up
a huge amount of server resources 24/7 just to handle an occasional peak
in requirements. The result is system and MySQL resources are tied up
unnecessarily almost all of the time.
The OP doesn't have to redesign his whole app or something.
It is easily checked with minimum effort.
I think it is definitely worth a try to add the 'p' and see if things run
smoother.
I repeat - I have YET to see where mysql_pconnect SOLVES a problem. It
can create more. It can slow down the entire system during normal
operation while providing only a marginal increase in performance during
peak times.

Have you ever done performance tests to see how much time
mysql_pconnect() actually saves? It's not much.

He would be much better off restructuring his application to provide
better performance.
Regards,
Erwin Moller

>>I have yet to see where mysql_pconnect SOLVES a problem. Normally it
creates MORE. Yes, it's theoretically possible for it to help on very
busy systems where connections are used for a very short time. But
that's not the case here.


--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Nov 17 '06 #10

Jerry Stuckle wrote:
That's a lot of messages to be handling. How are they getting to your
server - via web pages, a socket, or some other means? That will make a
difference.
All HTTP requests... about half and half POST and GET.
From the MySQL end, you should start by ensuring you're using good
programming practices. Things like closing the database connection as
soon as you're through with it, rather than waiting until the end of the
script (or even worse, depending on the garbage collector to do it for you.
Yeah, when the reality of the traffic hit me the other night with 3 DB
crashes in a row, I made sure there's no way a close doesn't happen.
Personally, I like writing data to a "scratch" table in MySQL. This
would be a MyISAM table with no indexes. It may only have two columns -
an id and unparsed data, or, if the data are always the same, it could
have several columns.
Very good suggestion... I'll throw that in my list of possibilities I'm
going back and forth on. Thank you.

Nov 17 '06 #11
