Bytes | Software Development & Data Engineering Community

Single database transaction across multiple HTTP requests?

Does PHP have a feature to associate Cookie sessions with a persistent
database connection that will allow a single transaction across
multiple HTTP requests?

Here is how I imagine my process: I have a series of interactive HTML
forms. The user begins a Cookie session. A database connection is
opened and a transaction is begun. After the user goes through any
number of pages where they update the database they finish on a page
where they may commit or rollback the transaction.

This is not possible by default. PHP database connections implicitly
perform a commit or a rollback at the end of a script.

In other words, when I go to a page with a valid Cookie session I want
to be able to retrieve the same database connection I opened
previously. I don't want an automatic commit to be performed at the end
of the PHP script.

I'm using PostgreSQL. I'm aware of the difference between pg_pconnect()
and pg_connect(), but as I understand it this only does connection
pooling. It does not guarantee that you will get the SAME database
connection. Nor does it allow for transactions across multiple HTTP
requests. Am I wrong? Will pg_pconnect() do what I want?

I've seen some third-party services that maintain database connections
and allow them to be retrieved keyed to a Cookie. For example,
SQLRelay (http://sqlrelay.sourceforge.net/index.html), but this seems a
bit immature.

Yours,
Noah

Jul 17 '05 #1
> Does PHP have a feature to associate Cookie sessions with a persistent
> database connection that will allow a single transaction across
> multiple HTTP requests?

No.
> Here is how I imagine my process: I have a series of interactive HTML
> forms. The user begins a Cookie session. A database connection is
> opened and a transaction is begun. After the user goes through any
> number of pages where they update the database they finish on a page
> where they may commit or rollback the transaction.

This is an unworkable approach. Users do not usually finish on a
page where they commit or roll back the transaction - usually they
just leave without finishing. You don't want your database left
with that many uncommitted transactions.
> This is not possible by default. PHP database connections implicitly
> perform a commit or a rollback at the end of a script.


Regardless of what PHP does, you don't want to accumulate large numbers
of uncommitted transactions, which you WILL have when users
leave your pages to eat lunch, get married and raise a family,
die, or buy a new computer, and then never come back to the session.

Gordon L. Burditt
Jul 17 '05 #2
Those are good points to consider, but these are things you have to
consider for any transaction oriented client application. As I expand
the scope of the application I intend to address issues such as stale or
abandoned transactions. The intended user base is a small group of
system administrators, so deliberate abuse is unlikely and easily
punished if it occurs.

Yours,
Noah

Jul 17 '05 #3
Hello,

on 05/06/2005 05:17 PM no**@noah.org said the following:
> Those are good points to consider, but these are things you have to
> consider for any transaction oriented client application. As I expand
> the scope of the application I intend to address issues such as stale or
> abandoned transactions. The intended user base is a small group of
> system administrators, so deliberate abuse is unlikely and easily
> punished if it occurs.


I am afraid you are misunderstanding what database transactions are.
Database transactions should be short-lived. If you do not commit them
soon enough (think seconds, not minutes, much less hours or days), the
database server will abort (roll back) them.

What you want to do must be achieved some other way, like job queues
that include state information.
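One way such a job queue could look, sketched in PHP against PostgreSQL. The `pending_changes` table, its columns, and the connection string are all invented for illustration; the point is only that each intended change is recorded with state information, keyed to the session, instead of being held open in a live transaction:

```php
<?php
// Sketch only -- table, column names, and connection string are invented.
// Each row records one intended change, keyed to the PHP session, with a
// state column so the final "commit" page (or a worker) can apply or
// discard the whole batch at once in a short-lived transaction.
session_start();

$db = pg_connect('dbname=app');  // assumed connection string

// Assumed schema:
// CREATE TABLE pending_changes (
//     id         serial PRIMARY KEY,
//     session_id text NOT NULL,
//     action     text NOT NULL,   -- e.g. 'update_quota'
//     payload    text NOT NULL,   -- serialized parameters
//     state      text NOT NULL DEFAULT 'pending'  -- pending|applied|abandoned
// );

// Queue a change instead of executing it immediately:
pg_query_params($db,
    'INSERT INTO pending_changes (session_id, action, payload) VALUES ($1, $2, $3)',
    array(session_id(), 'update_quota',
          serialize(array('user' => 42, 'quota' => 100))));
```

Abandoned batches are then just rows whose state never leaves 'pending', and can be swept up by a cron job rather than left as open transactions.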

--

Regards,
Manuel Lemos

PHP Classes - Free ready to use OOP components written in PHP
http://www.phpclasses.org/

PHP Reviews - Reviews of PHP books and other products
http://www.phpclasses.org/reviews/

Metastorage - Data object relational mapping layer generator
http://www.meta-language.net/metastorage.html
Jul 17 '05 #4
> Those are good points to consider, but these are things you have to
> consider for any transaction oriented client application. As I expand
Yes, and for any transaction-oriented client application where the
transaction lasts so long that people might go to lunch, get called
into meetings, have their computer reboot, or whatever, you've got
a BIG problem. (I recommend stuffing the data needed to DO the
changes into a session, then actually doing them on the final page.
Try to keep it so the start and end of transaction don't need ANY
user intervention between them. Check the input to the extent you
can beforehand, even if the transaction will also check it.)
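That session-stashing approach might look like this on each form page (a minimal sketch; the field names and change structure are invented):

```php
<?php
// Sketch: accumulate the intended changes in the session on each form page,
// so no database transaction is open while waiting on the user.
session_start();

if (!isset($_SESSION['changes'])) {
    $_SESSION['changes'] = array();
}

// On page N, record what the user asked for instead of writing it to the
// database (table/column names here are illustrative, not a real schema):
$_SESSION['changes'][] = array(
    'table'  => 'accounts',
    'id'     => (int) $_POST['id'],
    'column' => 'quota',
    'value'  => $_POST['quota'],
);
// Nothing touches the database until the final confirmation page, where
// all queued changes are applied inside one short transaction.
```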

It gets especially bad if it is likely that more than one person
may try to edit the same record (people don't edit just their own
data). In a call center you'll have lots of instances of people
asking who's locked up some particular record, or who reverted their
changes, or however the transaction mechanism deals with two people
trying to edit the same record at the same time. You'll also be
cursed to eternity if there isn't some mechanism to figure out who's
holding the lock so that person can be told to let go of it.
> the scope of the application I intend to address issues such as stale or
> abandoned transactions. The intended user base is a small group of
> system administrators, so deliberate abuse is unlikely and easily
> punished if it occurs.


Especially for system administrators, getting interrupted in the
middle of a multi-web-page transaction is going to be a problem.
With a transaction that long, it is also likely that you will ask
for some information that the administrator doesn't HAVE, and has
to go look up. Deliberate abuse is not really what I'm most concerned
about (and many people wouldn't class closing their browser and
going to lunch as abuse of any kind: that's bad design of
an application if it causes enough trouble to be called 'abuse').

Gordon L. Burditt
Jul 17 '05 #5
no**@noah.org wrote:
: Does PHP have a feature to associate Cookie sessions with a persistent
: database connection that will allow a single transaction across
: multiple HTTP requests?
: [rest of the question, quoted in full above, snipped]
I think the normal approach is to just _prepare_ a set of changes as part
of the session. You do lookups to confirm the values, but don't save
anything yet.

At some point the user presses the [COMMIT ALL MY CHANGES DAMN IT!]
button.

At that point you try to apply all the changes. Normally they will all
work as expected. Sometimes an underlying value that you checked earlier
has changed and the changes no longer work and you'll get an error. The
user will have to go back and make changes and try again.
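A minimal sketch of that final "apply everything" page, assuming the queued changes were stored in `$_SESSION['changes']` as table/column/value entries (an invented structure, as is the connection string):

```php
<?php
// Sketch: replay the queued changes inside one short transaction; if any
// step fails, roll the whole batch back and send the user back to edit.
session_start();
$db = pg_connect('dbname=app');  // assumed connection string

pg_query($db, 'BEGIN');
$ok = true;
foreach ($_SESSION['changes'] as $c) {
    // Table/column names come from our own page code, never from user
    // input, so interpolating them is safe; values go through parameters.
    $r = pg_query_params($db,
        "UPDATE {$c['table']} SET {$c['column']} = \$1 WHERE id = \$2",
        array($c['value'], $c['id']));
    if ($r === false || pg_affected_rows($r) !== 1) {
        $ok = false;  // row vanished or the update failed
        break;
    }
}
pg_query($db, $ok ? 'COMMIT' : 'ROLLBACK');

if (!$ok) {
    // An underlying value changed since the user started; re-display the
    // forms with fresh data and ask them to try again.
}
```

The transaction lives only for the duration of this one script, which is exactly the short-lived shape the replies above are arguing for.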

You'll notice that if you get something like a plane ticket then it works
that way. If you take too long between reserving your seat and paying for
it then the system might reject your reservation because someone else paid
for it first. That's because your reservation is not actually a
reservation until the moment you pay for it. The system doesn't "hold"
the seat for you, if the flight is busy you have to pay quick or lose the
seat.

If you use something like Oracle then you can have a normal php
application that displays lots of php pages, but in which your key
database transaction is handled by the Oracle forms applet. The applet is
a single php page, but can have any number of oracle screens - that part
of your app is just a normal oracle database app that does whatever you
need with transactions. You wouldn't use that in a public web app, but it
makes sense in a closed environment like an office with sys admins.

--

This space not for rent.
Jul 17 '05 #6
no**@noah.org wrote:
> Does PHP have a feature to associate Cookie sessions with a persistent
> database connection that will allow a single transaction across
> multiple HTTP requests?
> [rest of the question, quoted in full above, snipped]


Generally speaking this can be done without too many hoops, but only if the
users have their own real PostgreSQL accounts, and only if you use
pg_pconnect. If you use pg_pconnect and create each connection with the
user's own credentials, they will get the same connection each time. But
of course if they have two sessions open that fails.

But I would suggest a different approach to the problem. Countless web
sites around the world do what you are trying to do without using this
technique. Why not try to find out what techniques they are using?

On our sites, with "wizards" like this we save the answers in the $_SESSION
array, and only when they commit do we open a database connection and write
the data. This is only one of countless simple and easy approaches.

--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth@(Sec)ure(Dat)a(.com)
Jul 17 '05 #7
Building up a list of changes and then executing all changes at once
from a single PHP page is a good idea. This was my first choice, but in
our application this turns out to be difficult because changes you make
in one step affect the decision of how subsequent changes are made.
For example, some steps invoke stored procedures which change rows
based on values changed in previous steps. Changes can have
cascading effects. In effect I would have to make my script emulate the
database engine.

A DBA said that I should avoid transactions that are open while waiting
for user input, but considering how much of our system is built with
stored procedures, I'm not sure how I can do that. One suggestion was
to use temporary tables with a copy of the data I need, but those are
also persistent only in a single database connection.

My limited understanding of transactions is that rows are locked only
when the transaction is committed. At least this is how it works in
PostgreSQL. Other users are isolated from my changes during my
transaction.

It seems like I would have this dilemma even if I were writing a desktop
application. A user could start the application and then walk away from
the desk. For that matter you would have the same problem if a user
logged into a shell; started psql; typed "begin;" and then walked away.

At this point, I think I'm going to request a new database instance. I
could copy the data I need from production to the scratch database;
edit the data on the scratch database; then synchronize the data when
the sysadmin is finished. The data I need is only a few megabytes, so
performance shouldn't be an issue, but it seems like a lot of
development overhead.

Yours,
Noah

Jul 17 '05 #8
> Building up a list of changes and then executing all changes at once
> from a single PHP page is a good idea. This was my first choice, but in
> our application this turns out to be difficult because changes you make
> in one step affect the decision of how subsequent changes are made.
> For example, some steps invoke stored procedures which change rows
> based on values changed in previous steps. Changes can have
> cascading effects. In effect I would have to make my script emulate the
> database engine.
One approach I have used on occasion is to query for a record,
present a page to edit values BUT WITH THE ORIGINAL VALUES IN HIDDEN
FIELDS. When the user submits the form, if any of the database
values have changed with respect to the original values in the form,
abort the change, otherwise accept it if the new values are acceptable.
Among other things, assuming the user actually made a change,
double-submitting the form is rejected. If anyone changed a
value which the user might have used to decide on his edits, the
change is also rejected.
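That hidden-field check can be folded into the UPDATE itself. A sketch, assuming an `accounts` table with a `quota` column and a form field `orig_quota` carrying the hidden original value (all invented names):

```php
<?php
// Sketch: the form carried the original value in a hidden input; the
// UPDATE only succeeds if the row still holds that original value.
$db = pg_connect('dbname=app');  // assumed connection string

$result = pg_query_params($db,
    'UPDATE accounts SET quota = $1 WHERE id = $2 AND quota = $3',
    array($_POST['quota'], $_POST['id'], $_POST['orig_quota']));

if (pg_affected_rows($result) === 0) {
    // Someone else changed the row since the form was rendered (or the
    // form was double-submitted); abort and re-display with fresh values.
}
```

This is optimistic concurrency control: no lock is held between requests, and the price is that a stale edit is rejected rather than blocked.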

What you could do for Page 2, for example, is to apply the changes
the admin submitted in Page 1, produce a Page 2 with the decisions
to be made, then roll back the changes. When the admin submits
page 2, make sure that applying the changes for Page 1 come out the
same, then apply the changes for page 2, produce a Page 3 with the
decisions to be made, then roll back the changes. This can get
very messy if you routinely end up with trivial differences like
the values of auto-increment primary keys that are otherwise
insignificant as long as the references all match up.
> A DBA said that I should avoid transactions that are open while waiting
> for user input, but considering how much of our system is built with
> stored procedures, I'm not sure how I can do that.
Good advice.
> One suggestion was
> to use temporary tables with a copy of the data I need, but those are
> also persistent only in a single database connection.
Another possibility is to use a temporary copy of the data you
need in a session, or in an update-in-progress table keyed to the
session. (Session data can be stored in a database rather than in
temporary files by providing a handler.)
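Such a handler might be sketched like this, using the callback form of `session_set_save_handler()` from the PHP manual of that era. The `php_sessions` table and the connection string are invented, and expiry handling is omitted to keep the sketch short:

```php
<?php
// Sketch of a database-backed session handler. Assumed schema:
// CREATE TABLE php_sessions (id text PRIMARY KEY, data text NOT NULL);
$db = pg_connect('dbname=app');  // assumed connection string

function sess_open($save_path, $name) { return true; }
function sess_close() { return true; }

function sess_read($id) {
    global $db;
    $r = pg_query_params($db,
        'SELECT data FROM php_sessions WHERE id = $1', array($id));
    $row = pg_fetch_assoc($r);
    return $row ? $row['data'] : '';   // empty string means "new session"
}

function sess_write($id, $data) {
    global $db;
    // Delete-then-insert stands in for an upsert in this sketch.
    pg_query_params($db, 'DELETE FROM php_sessions WHERE id = $1', array($id));
    pg_query_params($db,
        'INSERT INTO php_sessions (id, data) VALUES ($1, $2)',
        array($id, $data));
    return true;
}

function sess_destroy($id) {
    global $db;
    pg_query_params($db, 'DELETE FROM php_sessions WHERE id = $1', array($id));
    return true;
}

function sess_gc($maxlifetime) { return true; }  // no expiry column in sketch

// Must be registered before session_start().
session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();
```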
> My limited understanding of transactions is that rows are locked only
> when the transaction is committed. At least this is how it works in
> PostgreSQL. Other users are isolated from my changes during my
> transaction.
So what happens when two independent DB connections try to edit the same
record in incompatible ways? One has to wait? One gets aborted?
The user making the second change may NOT want to be "isolated"
from the first change, as the old data may be used to decide what
the new values are.
> It seems like I would have this dilemma even if I were writing a desktop
> application. A user could start the application and then walk away from
> the desk. For that matter you would have the same problem if a user
> logged into a shell; started psql; typed "begin;" and then walked away.
No, I think you'd need to update a heavily-used record or many of
them, THEN walk away, to cause havoc. And if record-level locking
is done, you'd only tie up the records you changed. That might
still be enough to lock out the overnight billing process, leading
to some furious admins.
> At this point, I think I'm going to request a new database instance. I
> could copy the data I need from production to the scratch database;
> edit the data on the scratch database; then synchronize the data when
> the sysadmin is finished. The data I need is only a few megabytes, so
> performance shouldn't be an issue, but it seems like a lot of
> development overhead.


Is there a way to modify some of the stored procedures to optionally
return what they would do without actually doing it? Or doing
it to a different set of tables/databases?

Gordon L. Burditt
Jul 17 '05 #9

<no**@noah.org> wrote in message
news:11**********************@z14g2000cwz.googlegroups.com...
> Building up a list of changes and then executing all changes at once
> from a single PHP page is a good idea. This was my first choice, but in
> our application this turns out to be difficult because changes you make
> in one step affect the decision of how subsequent changes are made.
> For example, some steps invoke stored procedures which change rows
> based on values changed in previous steps.
Either this transaction is poorly designed, or your entire system of stored
procedures and triggers is poorly designed.

You will need to change one or the other until you obtain a design that will
work.
> Changes can have
> cascading effects. In effect I would have to make my script emulate the
> database engine.
>
> A DBA said that I should avoid transactions that are open while waiting
> for user input,
Your DBA is absolutely correct. Having a database lock or a transaction in
progress while waiting for user input is a disaster waiting to happen. I
have been programming in various languages with various databases for 25+
years and I can state quite categorically that the CORRECT way is to delay
the lock-update-unlock cycle until as late as possible, and to NEVER wait on
input from another source, user or otherwise, anywhere within this cycle. In
that way you can ensure that any database locks are in place for the
shortest time possible, thus reducing any delays to other transactions, and,
even more importantly, reducing the possibility of deadlocks or deadly
embraces.
> but considering how much of our system is built with
> stored procedures, I'm not sure how I can do that. One suggestion was
> to use temporary tables with a copy of the data I need, but those are
> also persistent only in a single database connection.
>
> My limited understanding of transactions is that rows are locked only
> when the transaction is committed.
It is normal practice to issue the locks manually just before performing the
first update. This lock must cover ALL the tables that are going to be
modified within that transaction. Most databases will report an error if you
try to update a table which is not part of the current lock. All locks are
released when the COMMIT is issued. This procedure helps prevent a situation
known as a "deadly embrace" where transaction #1 locks table A, and
transaction #2 locks table B. If transaction #1 then tries to lock table B
it is forced to wait until transaction #2 releases the lock. A deadly
embrace occurs when transaction #2, instead of releasing the lock on table B
attempts instead to lock table A. Neither transaction can proceed until the
other one releases its database locks, and neither transaction is willing to
release the locks it already has in place. Thus the two transactions are
stuck and will not terminate of their own accord. This usually requires
someone to abort the two processes so that the database locks can be
released.
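The fix described above, in a PostgreSQL-flavoured sketch (table names and the update statements are invented; the connection string is assumed): both transactions take every lock they will need in one statement, always listing the tables in the same order, so neither can end up waiting on a lock the other holds.

```php
<?php
// Sketch: acquire all table locks up front, in one statement, in a fixed
// order used everywhere in the application, so the deadly embrace above
// cannot arise.
$db = pg_connect('dbname=app');  // assumed connection string

pg_query($db, 'BEGIN');
// One LOCK statement covering every table this transaction will modify:
pg_query($db, 'LOCK TABLE table_a, table_b IN SHARE ROW EXCLUSIVE MODE');
pg_query($db, 'UPDATE table_a SET n = n + 1 WHERE id = 1');
pg_query($db, 'UPDATE table_b SET n = n + 1 WHERE id = 1');
pg_query($db, 'COMMIT');  // all locks are released here
```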
> At least this is how it works in
> PostgreSQL. Other users are isolated from my changes during my
> transaction.
Other transactions are suspended until database locks that you have in
place, and which they need, are released. Database locks are the key (pardon
the pun).
> It seems like I would have this dilemma even if I were writing a desktop
> application.
Not in a properly designed application, desktop or otherwise.
> A user could start the application and then walk away from
> the desk.
Starting the application itself should NEVER issue a database lock. Nothing
should be locked until AFTER the user has input his data and then pressed
the SUBMIT button. It is the application's responsibility to take that data,
validate it, issue any database locks, apply the updates then release the
locks BEFORE returning control to the user. UNDER NO CIRCUMSTANCES do you
EVER pause in the middle of a transaction and ask the user for more input.
> For that matter you would have the same problem if a user
> logged into a shell; started psql; typed "begin;" and then walked away.
Shoot that user.
> At this point, I think I'm going to request a new database instance.
You don't want a new database instance, you want a new database design, or a
new transaction design.

--
Tony Marston

http://www.tonymarston.net


Jul 17 '05 #10

This thread has been closed and replies have been disabled. Please start a new discussion.
