Bytes IT Community

Single database transaction across a multiple HTTP requests?

Does PHP have a feature to associate Cookie sessions with a persistent
database connection that will allow a single transaction across
multiple HTTP requests?

Here is how I imagine my process: I have a series of interactive HTML
forms. The user begins a Cookie session. A database connection is
opened and a transaction is begun. After the user goes through any
number of pages where they update the database they finish on a page
where they may commit or rollback the transaction.

This is not possible by default. PHP database connections implicitly
perform a commit or a rollback at the end of a script.

In other words, when I go to a page with a valid Cookie session I want
to be able to retrieve the same database connection I opened
previously. I don't want an automatic commit to be performed at the end
of the PHP script.

I'm using PostgreSQL. I'm aware of the difference between pg_pconnect()
and pg_connect(), but as I understand it this only does connection
pooling. It does not guarantee that you will get the SAME database
connection. Nor does it allow for transactions across multiple HTTP
requests. Am I wrong? Will pg_pconnect() do what I want?

I've seen some third party services that maintain database connections
and allow them to be retrieved keyed to a Cookie. For example,
SQLRelay (http://sqlrelay.sourceforge.net/index.html), but this seems a
bit immature.

Yours,
Noah

Jul 17 '05 #1
16 Replies


> Does PHP have a feature to associate Cookie sessions with a persistent
> database connection that will allow a single transaction across
> multiple HTTP requests?

No.
> Here is how I imagine my process: I have a series of interactive HTML
> forms. The user begins a Cookie session. A database connection is
> opened and a transaction is begun. After the user goes through any
> number of pages where they update the database they finish on a page
> where they may commit or rollback the transaction.

This is an unworkable approach. Users do not usually finish on a
page where they commit or roll back the transaction - usually they
just leave without finishing. You don't want your database left
with that many uncommitted transactions.
> This is not possible by default. PHP database connections implicitly
> perform a commit or a rollback at the end of a script.


Regardless of what PHP does, you don't want to accumulate large numbers
of uncommitted transactions, which you WILL have when users
leave your pages to eat lunch, get married and raise a family,
die, or buy a new computer, and then never come back to the session.

Gordon L. Burditt
Jul 17 '05 #2

Those are good points to consider, but these are things you have to
consider for any transaction oriented client application. As I expand
the scope of the application I intend to address issues such as stale or
abandoned transactions. The intended user base is a small group of
system administrators, so deliberate abuse is unlikely and easily
punished if it occurs.

Yours,
Noah

Jul 17 '05 #3

Hello,

on 05/06/2005 05:17 PM no**@noah.org said the following:
> Those are good points to consider, but these are things you have to
> consider for any transaction oriented client application. As I expand
> the scope of the application I intend to address issues such as stale or
> abandoned transactions. The intended user base is a small group of
> system administrators, so deliberate abuse is unlikely and easily
> punished if it occurs.


I am afraid you are misunderstanding what database transactions are.
Database transactions should be short lived. If you do not commit them
soon enough (think about seconds, not minutes and even less hours or
days), the database server will abort (rollback) them.

What you want to do must be achieved some other way, like job queues
that include state information.

--

Regards,
Manuel Lemos

PHP Classes - Free ready to use OOP components written in PHP
http://www.phpclasses.org/

PHP Reviews - Reviews of PHP books and other products
http://www.phpclasses.org/reviews/

Metastorage - Data object relational mapping layer generator
http://www.meta-language.net/metastorage.html
Jul 17 '05 #4

> Those are good points to consider, but these are things you have to
> consider for any transaction oriented client application. As I expand
Yes, and for any transaction-oriented client application where the
transaction lasts so long that people might go to lunch, get called
into meetings, have their computer reboot, or whatever, you've got
a BIG problem. (I recommend stuffing the data needed to DO the
changes into a session, then actually doing them on the final page.
Try to keep it so the start and end of transaction don't need ANY
user intervention between them. Check the input to the extent you
can beforehand, even if the transaction will also check it.)
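A minimal sketch of that advice (table and column names are hypothetical; the pg_* calls assume the pgsql extension): each wizard page only queues its validated change in the session, and the final page replays the queue inside one short transaction, with no user input between BEGIN and COMMIT.

```php
<?php
// Each intermediate wizard page calls queue_change() instead of writing
// to the database; $session stands in for $_SESSION.
function queue_change(array &$session, string $table, array $values): void
{
    $session['pending'][] = ['table' => $table, 'values' => $values];
}

// The final page, after the user confirms, applies everything in one
// short BEGIN..COMMIT so no transaction ever waits on user input.
function apply_changes($conn, array $pending): bool
{
    pg_query($conn, 'BEGIN');
    foreach ($pending as $change) {
        $cols   = implode(', ', array_keys($change['values']));
        $params = array_values($change['values']);
        $holes  = implode(', ', array_map(
            fn($n) => '$' . ($n + 1), array_keys($params)));
        $sql = "INSERT INTO {$change['table']} ($cols) VALUES ($holes)";
        if (pg_query_params($conn, $sql, $params) === false) {
            pg_query($conn, 'ROLLBACK');   // all or nothing
            return false;
        }
    }
    return pg_query($conn, 'COMMIT') !== false;
}
```

If any queued change fails at apply time, the whole batch rolls back and the user is sent back to fix their input, which is exactly the check-beforehand-and-check-again pattern described above.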

It gets especially bad if it is likely that more than one person
may try to edit the same record (people don't edit just their own
data). In a call center you'll have lots of instances of people
asking who's locked up some particular record, or who reverted their
changes, or however the transaction mechanism deals with two people
trying to edit the same record at the same time. You'll also be
cursed to eternity if there isn't some mechanism to figure out who's
holding the lock so that person can be told to let go of it.
> the scope of the application I intend to address issues such as stale or
> abandoned transactions. The intended user base is a small group of
> system administrators, so deliberate abuse is unlikely and easily
> punished if it occurs.


Especially for system administrators, getting interrupted in the
middle of a multi-web-page transaction is going to be a problem.
With a transaction that long, it is also likely that you will ask
for some information that the administrator doesn't HAVE, and has
to go look up. Deliberate abuse is not really what I'm most concerned
about (and many people wouldn't class closing their browser and
going to lunch as abuse of any kind: that's bad design of
an application if it causes enough trouble to be called 'abuse').

Gordon L. Burditt
Jul 17 '05 #5

no**@noah.org wrote:
: Does PHP have a feature to associate Cookie sessions with a persistent
: database connection that will allow a single transaction across
: multiple HTTP requests?

: Here is how I imagine my process: I have a series of interactive HTML
: forms. The user begins a Cookie session. A database connection is
: opened and a transaction is begun. After the user goes through any
: number of pages where they update the database they finish on a page
: where they may commit or rollback the transaction.

: This is not possible by default. PHP database connections implicitly
: perform a commit or a rollback at the end of a script.

: In other words, when I go to a page with a valid Cookie session I want
: to be able to retrieve the same database connection I opened
: previously. I don't want an automatic commit to be performed at the end
: of the PHP script.

: I'm using PostgreSQL. I'm aware of the difference between pg_pconnect()
: and pg_connect(), but as I understand it this only does connection
: pooling. It does not guarantee that you will get the SAME database
: connection. Nor does it allow for transactions across multiple HTTP
: requests. Am I wrong? Will pg_pconnect() do what I want?

: I've seen some third party services that maintain database connections
: and allow them to be retrieved keyed to a Cookie. For example,
: SQLRelay (http://sqlrelay.sourceforge.net/index.html), but this seems a
: bit immature.
I think the normal approach is to just _prepare_ a set of changes as part
of the session. You do lookups to confirm the values, but don't save
anything yet.

At some point the user presses the [COMMIT ALL MY CHANGES DAMN IT!]
button.

At that point you try to apply all the changes. Normally they will all
work as expected. Sometimes an underlying value that you checked earlier
has changed and the changes no longer work and you'll get an error. The
user will have to go back and make changes and try again.

You'll notice that if you get something like a plane ticket then it works
that way. If you take too long between reserving your seat and paying for
it then the system might reject your reservation because someone else paid
for it first. That's because your reservation is not actually a
reservation until the moment you pay for it. The system doesn't "hold"
the seat for you, if the flight is busy you have to pay quick or lose the
seat.
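That booking model can be sketched with a toy in-memory "seats table" (all names hypothetical). Against PostgreSQL the same idea is a conditional UPDATE, e.g. `UPDATE seats SET owner = $1 WHERE seat_id = $2 AND owner IS NULL`, followed by a check of pg_affected_rows(): the claim succeeds only if nobody paid first.

```php
<?php
// Toy simulation of "pay quick or lose the seat": nothing is held while
// the user decides; the claim is checked only at commit time.
function try_claim_seat(array &$seats, int $seatId, string $buyer): bool
{
    if (!array_key_exists($seatId, $seats) || $seats[$seatId] !== null) {
        return false;   // unknown seat, or someone else paid first
    }
    $seats[$seatId] = $buyer;
    return true;
}
```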

If you use something like Oracle then you can have a normal php
application that displays lots of php pages, but in which your key
database transaction is handled by the Oracle forms applet. The applet is
a single php page, but can have any number of oracle screens - that part
of your app is just a normal oracle database app that does whatever you
need with transactions. You wouldn't use that in a public web app, but it
makes sense in a closed environment like an office with sys admins.

--

This space not for rent.
Jul 17 '05 #6

no**@noah.org wrote:
> Does PHP have a feature to associate Cookie sessions with a persistent
> database connection that will allow a single transaction across
> multiple HTTP requests?
>
> Here is how I imagine my process: I have a series of interactive HTML
> forms. The user begins a Cookie session. A database connection is
> opened and a transaction is begun. After the user goes through any
> number of pages where they update the database they finish on a page
> where they may commit or rollback the transaction.
>
> This is not possible by default. PHP database connections implicitly
> perform a commit or a rollback at the end of a script.
>
> In other words, when I go to a page with a valid Cookie session I want
> to be able to retrieve the same database connection I opened
> previously. I don't want an automatic commit to be performed at the end
> of the PHP script.
>
> I'm using PostgreSQL. I'm aware of the difference between pg_pconnect()
> and pg_connect(), but as I understand it this only does connection
> pooling. It does not guarantee that you will get the SAME database
> connection. Nor does it allow for transactions across multiple HTTP
> requests. Am I wrong? Will pg_pconnect() do what I want?
>
> I've seen some third party services that maintain database connections
> and allow them to be retrieved keyed to a Cookie. For example,
> SQLRelay (http://sqlrelay.sourceforge.net/index.html), but this seems a
> bit immature.


Generally speaking this can be done without too many hoops, but only if the
users have their own real PostgreSQL accounts, and only if you use
pg_pconnect. If you use pg_pconnect and create each connection with the
user's own credentials, they will get the same connection each time. But
of course if they have two sessions open that fails.

But I would suggest a different approach to the problem. Countless web
sites around the world do what you are trying to do without using this
technique. Why not try to find out what techniques they are using?

On our sites, with "wizards" like this we save the answers in the $_SESSION
array, and only when they commit do we open a database connection and write
the data. This is only one of countless simple and easy approaches.

--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth@(Sec)ure(Dat)a(.com)
Jul 17 '05 #7

Building up a list of changes and then executing all changes at once
from a single PHP page is a good idea. This was my first choice, but in
our application this turns out to be difficult because changes you make
in one step affect the decision of how subsequent changes are made.
For example, some steps invoke stored procedures which change rows
based on values changed in previous steps. Changes can have
cascading effects. In effect I would have to make my script emulate the
database engine.

A DBA said that I should avoid transactions that are open while waiting
for user input, but considering how much of our system is built with
stored procedures, I'm not sure how I can do that. One suggestion was
to use temporary tables with a copy of the data I need, but those are
also persistent only in a single database connection.

My limited understanding of transactions is that rows are locked only
when the transaction is committed. At least this is how it works in
PostgreSQL. Other users are isolated from my changes during my
transaction.

It seems like I would have this dilemma even if I were writing a desktop
application. A user could start the application and then walk away from
the desk. For that matter you would have the same problem if a user
logged into a shell; started psql; typed "begin;" and then walked away.

At this point, I think I'm going to request a new database instance. I
could copy the data I need from production to the scratch database;
edit the data on the scratch database; then synchronize the data when
the sysadmin is finished. The data I need is only a few megabytes, so
performance shouldn't be an issue, but it seems like a lot of
development overhead.

Yours,
Noah

Jul 17 '05 #8

> Building up a list of changes and then executing all changes at once
> from a single PHP page is a good idea. This was my first choice, but in
> our application this turns out to be difficult because changes you make
> in one step affect the decision of how subsequent changes are made.
> For example, some steps invoke stored procedures which change rows
> based on values changed in previous steps. Changes can have
> cascading effects. In effect I would have to make my script emulate the
> database engine.
One approach I have used on occasion is to query for a record,
present a page to edit values BUT WITH THE ORIGINAL VALUES IN HIDDEN
FIELDS. When the user submits the form, if any of the database
values have changed with respect to the original values in the form,
abort the change, otherwise accept it if the new values are acceptable.
Among other things, this means that double-submitting the form is
rejected (assuming the user actually made a change). If anyone changed a
value which the user might have used to decide on his edits, the
change is also rejected.
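A minimal sketch of that check (field names hypothetical): $original comes from the hidden fields, $current from re-querying the row at submit time, and the edit is applied only if nothing changed in between.

```php
<?php
// Optimistic check for the hidden-field technique described above.
// Returns true only if every field still holds the value the user saw
// when the form was rendered.
function edit_is_safe(array $original, array $current): bool
{
    foreach ($original as $field => $value) {
        if (!array_key_exists($field, $current) || $current[$field] !== $value) {
            return false;   // row changed underneath the user; abort
        }
    }
    return true;
}
```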

What you could do for Page 2, for example, is to apply the changes
the admin submitted in Page 1, produce a Page 2 with the decisions
to be made, then roll back the changes. When the admin submits
page 2, make sure that applying the changes for Page 1 come out the
same, then apply the changes for page 2, produce a Page 3 with the
decisions to be made, then roll back the changes. This can get
very messy if you routinely end up with trivial differences like
the values of auto-increment primary keys that are otherwise
insignificant as long as the references all match up.
> A DBA said that I should avoid transactions that are open while waiting
> for user input, but considering how much of our system is built with
> stored procedures, I'm not sure how I can do that.

Good advice.
> One suggestion was
> to use temporary tables with a copy of the data I need, but those are
> also persistent only in a single database connection.
Another possibility is to use a temporary copy of the data you
need in a session, or in an update-in-progress table keyed to the
session. (Session data can be stored in a database rather than
temporary files by providing a handler).
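A sketch of that handler idea: PHP lets you replace the default file-based session storage with your own read/write callbacks, registered via session_set_save_handler() before session_start(). Here the backing "sessions table" is an in-memory array so the logic is visible; a real handler would SELECT/UPSERT by session id instead.

```php
<?php
// Stand-ins for DB-backed session read/write callbacks; $store plays
// the role of a sessions table keyed by session id.
function sess_read(array &$store, string $id): string
{
    // Real handler: SELECT data FROM sessions WHERE id = $1
    return $store[$id] ?? '';
}

function sess_write(array &$store, string $id, string $data): bool
{
    // Real handler: INSERT ... ON CONFLICT (id) DO UPDATE SET data = ...
    $store[$id] = $data;
    return true;
}
```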
> My limited understanding of transactions is that rows are locked only
> when the transaction is committed. At least this is how it works in
> PostgreSQL. Other users are isolated from my changes during my
> transaction.

So what happens when two independent DB connections try to edit the same
record in incompatible ways? One has to wait? One gets aborted?
The user making the second change may NOT want to be "isolated"
from the first change, as the old data may be used to decide what
the new values are.
> It seems like I would have this dilemma even if I were writing a desktop
> application. A user could start the application and then walk away from
> the desk. For that matter you would have the same problem if a user
> logged into a shell; started psql; typed "begin;" and then walked away.
No, I think you'd need to update a heavily-used record or many of
them, THEN walk away, to cause havoc. And if record-level locking
is done, you'd only tie up the records you changed. That might
still be enough to lock out the overnight billing process, leading
to some furious admins.
> At this point, I think I'm going to request a new database instance. I
> could copy the data I need from production to the scratch database;
> edit the data on the scratch database; then synchronize the data when
> the sysadmin is finished. The data I need is only a few megabytes, so
> performance shouldn't be an issue, but it seems like a lot of
> development overhead.


Is there a way to modify some of the stored procedures to optionally
return what they would do without actually doing it? Or doing
it to a different set of tables/databases?

Gordon L. Burditt
Jul 17 '05 #9


<no**@noah.org> wrote in message
news:11**********************@z14g2000cwz.googlegroups.com...
> Building up a list of changes and then executing all changes at once
> from a single PHP page is a good idea. This was my first choice, but in
> our application this turns out to be difficult because changes you make
> in one step affect the decision of how subsequent changes are made.
> For example, some steps invoke stored procedures which change rows
> based on values changed in previous steps.
Either this transaction is poorly designed, or your entire system of stored
procedures and triggers is poorly designed.

You will need to change one or the other until you obtain a design that will
work.
> Changes can have
> cascading effects. In effect I would have to make my script emulate the
> database engine.
>
> A DBA said that I should avoid transactions that are open while waiting
> for user input,
Your DBA is absolutely correct. Having a database lock or a transaction in
progress while waiting for user input is a disaster waiting to happen. I
have been programming in various languages with various databases for 25+
years and I can state quite categorically that the CORRECT way is to delay
the lock-update-unlock cycle until as late as possible, and to NEVER wait on
input from another source, user or otherwise, anywhere within this cycle. In
that way you can ensure that any database locks are in place for the
shortest time possible, thus reducing any delays to other transactions, and,
even more importantly, reducing the possibility of deadlocks or deadly
embraces.
> but considering how much of our system is built with
> stored procedures, I'm not sure how I can do that. One suggestion was
> to use temporary tables with a copy of the data I need, but those are
> also persistent only in a single database connection.

> My limited understanding of transactions is that rows are locked only
> when the transaction is committed.
It is normal practice to issue the locks manually just before performing the
first update. This lock must cover ALL the tables that are going to be
modified within that transaction. Most databases will report an error if you
try to update a table which is not part of the current lock. All locks are
released when the COMMIT is issued. This procedure helps prevent a situation
known as a "deadly embrace" where transaction #1 locks table A, and
transaction #2 locks table B. If transaction #1 then tries to lock table B
it is forced to wait until transaction #2 releases the lock. A deadly
embrace occurs when transaction #2, instead of releasing the lock on table B
attempts instead to lock table A. Neither transaction can proceed until the
other one releases its database locks, and neither transaction is willing to
release the locks it already has in place. Thus the two transactions are
stuck and will not terminate of their own accord. This usually requires
someone to abort the two processes so that the database locks can be
released.
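The classic cure for that deadly embrace is to have every transaction acquire its locks in one agreed-upon order (for example, alphabetical table order), so no two transactions can each hold what the other needs. A sketch (real PostgreSQL LOCK syntax; the canonical-ordering rule is the point):

```php
<?php
// Build LOCK statements in a canonical (sorted) order so that any two
// transactions locking tables A and B both lock A first, which makes
// the A-waits-for-B / B-waits-for-A cycle impossible.
function lock_statements(array $tables): array
{
    sort($tables);   // everyone agrees on the same order
    return array_map(
        fn(string $t) => "LOCK TABLE $t IN ACCESS EXCLUSIVE MODE",
        $tables
    );
}
```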
> At least this is how it works in
> PostgreSQL. Other users are isolated from my changes during my
> transaction.
Other transactions are suspended until database locks that you have in
place, and which they need, are released. Database locks are the key (pardon
the pun).
> It seems like I would have this dilemma even if I were writing a desktop
> application.
Not in a properly designed application, desktop or otherwise.
> A user could start the application and then walk away from
> the desk.
Starting the application itself should NEVER issue a database lock. Nothing
should be locked until AFTER the user has input his data and then pressed
the SUBMIT button. It is the application's responsibility to take that data,
validate it, issue any database locks, apply the updates then release the
locks BEFORE returning control to the user. UNDER NO CIRCUMSTANCES do you
EVER pause in the middle of a transaction and ask the user for more input.
> For that matter you would have the same problem if a user
> logged into a shell; started psql; typed "begin;" and then walked away.
Shoot that user.
> At this point, I think I'm going to request a new database instance.
You don't want a new database instance, you want a new database design, or a
new transaction design.

--
Tony Marston

http://www.tonymarston.net

> I could copy the data I need from production to the scratch database;
> edit the data on the scratch database; then synchronize the data when
> the sysadmin is finished. The data I need is only a few megabytes, so
> performance shouldn't be an issue, but it seems like a lot of
> development overhead.
>
> Yours,
> Noah

Jul 17 '05 #10

In MS SQL Server you use a named transaction. There isn't a similar
concept in Postgres?

Jul 17 '05 #11

Chung Leong wrote:
> In MS SQL Server you use a named transaction. There isn't a similar
> concept in Postgres?


nope.
--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth@(Sec)ure(Dat)a(.com)
Jul 17 '05 #12

On Fri, 06 May 2005 11:12:44 -0700, noah wrote:
> In other words, when I go to a page with a valid Cookie session I want
> to be able to retrieve the same database connection I opened
> previously. I don't want an automatic commit to be performed at the end
> of the PHP script.


Noah, HTTP is a stateless protocol. That means that no permanent
connection is possible. Each HTTP session is not only a separate
transaction, it is also a separate session. You can have an approximation
of permanent connections if you have a set of predefined connections and
services running in parallel with your web server and performing RDBMS
requests on your behalf. That is the case with BEA WebLogic, IBM WebSphere,
Oracle iAS, SUN iPlanet and some other products, collectively known as
application servers.
Permanent connections are a concept from the client-server world, which is
now increasingly dying out, because it was restricting choice of machines,
operating systems and clients that were able to access the application.
Permanent connections are also a great burden for the server which has to
maintain connection state information for each and every client connection.
Trust me, you're better off without the permanent connections.

--
Egoist: A person of low taste, more interested in themselves than in me.

Jul 17 '05 #13

In article <pa****************************@sbcglobal.net>,
Mladen Gogala <go****@sbcglobal.net> wrote:
> On Fri, 06 May 2005 11:12:44 -0700, noah wrote:
> > In other words, when I go to a page with a valid Cookie session I want
> > to be able to retrieve the same database connection I opened
> > previously. I don't want an automatic commit to be performed at the end
> > of the PHP script.
>
> Noah, HTTP is a stateless protocol. That means that no permanent
> connection is possible. Each HTTP session is not only a separate
> transaction, it is also a separate session. You can have an approximation
> of permanent connections if you have a set of predefined connections and
> services running in parallel with your web server and performing RDBMS
> requests on your behalf. That is the case with BEA WebLogic, IBM WebSphere,
> Oracle iAS, SUN iPlanet and some other products, collectively known as
> application servers.
> Permanent connections are a concept from the client-server world, which is
> now increasingly dying out, because it was restricting choice of machines,
> operating systems and clients that were able to access the application.
> Permanent connections are also a great burden for the server which has to
> maintain connection state information for each and every client connection.
> Trust me, you're better off without the permanent connections.


Correct me if I'm wrong here, but because of the nature of php's
"transactions", e.g. a script runs then exits, there's no possibility
to create transactions spanning multiple pages. You can collect information
on multiple pages and store it in a database keyed to a session id in
between pages, but any sort of database transaction needs to happen
within the context of a page.

If the current business model you're using doesn't allow this, you
either need to change that model or change the way you're accessing the
database. It's not currently suited to a web-based application. It's
more suited to a dedicated client/server model or batch submission (e.g.
go throught the pages to display and validate the data for submission,
then assemble a 'batch file' which is used to do the transaction, then
submit it).

--
DeeDee, don't press that button! DeeDee! NO! Dee...

Jul 17 '05 #14


"Michael Vilain" <vi****@spamcop.net> wrote in message
news:vi**************************@comcast.dca.giganews.com...
> In article <pa****************************@sbcglobal.net>,
> Mladen Gogala <go****@sbcglobal.net> wrote:
> > On Fri, 06 May 2005 11:12:44 -0700, noah wrote:
> > > In other words, when I go to a page with a valid Cookie session I want
> > > to be able to retrieve the same database connection I opened
> > > previously. I don't want an automatic commit to be performed at the end
> > > of the PHP script. <snip>
>
> Correct me if I'm wrong here, but because of the nature of php's
> "transactions", e.g. a script runs then exits, there's no possibility
> to create transactions spanning multiple pages.


This is not a limitation that exists only within PHP, it is a limitation of
the HTTP protocol which is completely stateless.
> You can collect information
> on multiple pages and store it in a database keyed to a session id in
> between pages, but any sort of database transaction needs to happen
> within the context of a page.
If you wish to collect information from several pages before writing them to
the database then you must wait until the last page before you commit that
transaction's data to the database. This means that the data from previous
pages must be stored in a temporary location before being written to the
database. This "temporary" location can be any of the following:
(a) In hidden field inside each web page. (not recommended)
(b) In temporary database tables.
(c) Inside PHP's session array, which may be either a disk file or a
database table.

Personally I use option (c), and have done for several years.
> If the current business model you're using doesn't allow this, you
> either need to change that model or change the way you're accessing the
> database. It's not currently suited to a web-based application. It's
> more suited to a dedicated client/server model or batch submission (e.g.
> go through the pages to display and validate the data for submission,
> then assemble a 'batch file' which is used to do the transaction, then
> submit it).


Agreed. If you *think* that you need a database transaction that spans
several pages then your thinking is *off* by a large margin.

--
Tony Marston

http://www.tonymarston.net

Jul 17 '05 #15

Mladen Gogala (go****@sbcglobal.net) wrote:
: On Fri, 06 May 2005 11:12:44 -0700, noah wrote:

: > In other words, when I go to a page with a valid Cookie session I want
: > to be able to retrieve the same database connection I opened
: > previously. I don't want an automatic commit to be performed at the end
: > of the PHP script.

: Noah, HTTP is a stateless protocol. That means that no permanent
: connection is possible.

A permanent _database connection_ is certainly possible. That is exactly
how a database connection cache works. The web server opens the
connections, and then doles them out to the scripts as they need them.
When a script exits then the web server does not close the connection,
instead it keeps it open for later. The next script that "opens" a
database connection is simply given one of the open connections.

There is no theoretical reason why a script that is using one of the cache
connections could not start a database transaction, and later another
script, using the same connection, could not finish the transaction.

Database connection caches do not normally work that way because there are
practical problems, not because it can't be done.

--

This space not for rent.
Jul 17 '05 #16


"Malcolm Dew-Jones" <yf***@vtn1.victoria.tc.ca> wrote in message
news:42******@news.victoria.tc.ca...
Mladen Gogala (go****@sbcglobal.net) wrote:
: On Fri, 06 May 2005 11:12:44 -0700, noah wrote:

: > In other words, when I go to a page with a valid Cookie session I want
: > to be able to retrieve the same database connection I opened
: > previously. I don't want an automatic commit to be performed at the end
: > of the PHP script.

: Noah, HTTP is a stateless protocol. That means that no permanent
: connection is possible.

A permanent _database connection_ is certainly possible. That is exactly
how a database connection cache works. The web server opens the
connections, and then doles them out to the scripts as they need them.
The problem with this is that you cannot guarantee that a subsequent request
belonging to a session will be given the same connection as was used
previously by the same session. Each of those pooled connections could be
given to anybody as they are not tied to a single session.

--
Tony Marston

http://www.tonymarston.net

> When a script exits then the web server does not close the connection,
> instead it keeps it open for later. The next script that "opens" a
> database connection is simply given one of the open connections.
>
> There is no theoretical reason why a script that is using one of the cache
> connections could not start a database transaction, and later another
> script, using the same connection, could not finish the transaction.
>
> Database connection caches do not normally work that way because there are
> practical problems, not because it can't be done.
>
> --
>
> This space not for rent.

Jul 17 '05 #17

This discussion thread is closed

Replies have been disabled for this discussion.