How do I handle people going into the same record at the same time in a web app?

I am building an app using PHP and PostgreSQL. My question is this:
How do you handle people wanting to make parallel changes to a record?
Since in web apps you are doing a select, bringing over a record, then
updating it, there is no lock on that record while you are making the
changes in your browser window. Do transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.

May 19 '06 #1
rich wrote:
I am building an app using PHP and PostgreSQL. My question is this:
How do you handle people wanting to make parallel changes to a record?
Since in web apps you are doing a select, bringing over a record, then
updating it, there is no lock on that record while you are making the
changes in your browser window. Do transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.


Keep the original values. Before updating, read the record in again and compare
the values to the old ones. If they don't match, put the window back up with
the new values and tell the user.

You can also be a little more creative. Check the columns which are being
updated. If they haven't changed (but perhaps others have), go ahead and allow
the update to the changed columns. However, if one or more columns have been
changed, send a message back to the user with the new values from the database.

A lot of hassle, I know. However, you wouldn't want to lock the row anyway.
What happens if someone displays the record then closes his browser? Now you
have a locked record which you can't unlock (until perhaps some timeout value
expires).
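
For what it's worth, a minimal sketch of that re-read-and-compare approach in PHP against PostgreSQL might look like the following. The customers table, its columns, and the $id/$old/$new variables are purely illustrative, and the SELECT ... FOR UPDATE is an extra touch that closes the tiny race window discussed further down the thread:

<?php
// Sketch only: re-read the row, compare it with what the user originally saw,
// then update inside one short transaction. Names are made up for illustration.
$conn = pg_connect("dbname=myapp");

pg_query($conn, "BEGIN");
$res = pg_query_params($conn,
    "SELECT name, phone FROM customers WHERE id = $1 FOR UPDATE",
    array($id));
$current = pg_fetch_assoc($res);

if ($current['name'] !== $old['name'] || $current['phone'] !== $old['phone']) {
    pg_query($conn, "ROLLBACK");
    // Redisplay the form with $current and tell the user someone else got there first.
} else {
    pg_query_params($conn,
        "UPDATE customers SET name = $1, phone = $2 WHERE id = $3",
        array($new['name'], $new['phone'], $id));
    pg_query($conn, "COMMIT");
}
?>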

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
May 19 '06 #2
>I am building an app using PHP and PostgreSQL. My question is this:
>How do you handle people wanting to make parallel changes to a record?
>Since in web apps you are doing a select, bringing over a record, then
>updating it, there is no lock on that record while you are making the
>changes in your browser window. Do transactions handle that? Do you
>make your selection and update all part of the same transaction? This
>whole scenario has me stumped.


One approach is to have the OLD values of (the relevant fields of) the
record embedded in the form as hidden fields (or perhaps in a
session, but I think it works better on the form, since then the values
reflect exactly what the person doing the editing originally saw on
the change form). If the OLD values in the form do not match the
values in the record, abort the update, saying that someone has
edited the record.
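
A rough sketch of carrying those OLD values as hidden fields; the field names and the $row array below are hypothetical:

<?php
// $row holds the record as it was read when the edit form was built.
// The "old_*" hidden fields travel with the form and come back in $_POST.
?>
<form method="post" action="update.php">
  <input type="hidden" name="old_name"  value="<?php echo htmlspecialchars($row['name']); ?>">
  <input type="hidden" name="old_phone" value="<?php echo htmlspecialchars($row['phone']); ?>">
  Name:  <input type="text" name="name"  value="<?php echo htmlspecialchars($row['name']); ?>"><br>
  Phone: <input type="text" name="phone" value="<?php echo htmlspecialchars($row['phone']); ?>"><br>
  <input type="submit" value="Save">
</form>

On submit, update.php would compare $_POST['old_name'] and $_POST['old_phone'] against the current row (or fold them into the UPDATE's WHERE clause, as suggested later in the thread) before applying the change.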

You could try to merge the changes. This has potential problems,
especially if someone used the old values in creating the new values.
If, for example, someone gets a 5% cost-of-living raise and a 5%
bonus for work performance, and two HR people input this at the
same time, you could lose one raise even though it *LOOKS* like two
people tried to make the same change.

You could try to merge unrelated changes only, and reject overlapping
changes. For example, if one change only changed the customer's
address, and the fields changed "behind the form's back" involved
adding services, that's OK. If there were two service changes,
that may not be OK.

This approach also prevents submitting the form, then several hours
later pressing BACK and resubmitting the form, undoing changes
made elsewhere. Always make sure that the person submitting the form
has the authority to make the changes being made *AT THE TIME THE
FORM IS SUBMITTED*. The fact that you checked when the update form
was generated is not an excuse. Don't let ex-employees with cached
forms in their browsers wreak havoc on people's accounts. Don't
let customers suspended for abuse un-suspend their own accounts.

Gordon L. Burditt
May 19 '06 #3
Rik
Jerry Stuckle wrote:
rich wrote:
I am building an app using PHP and PostgreSQL. My question is this:
How do you handle people wanting to make parallel changes to a record?
Since in web apps you are doing a select, bringing over a record, then
updating it, there is no lock on that record while you are making the
changes in your browser window. Do transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.


Keep the original values. Before updating, read the record in again
and compare the values to the old ones. If they don't match, put the
window back up with the new values and tell the user.

You can also be a little more creative. Check the columns which are
being updated. If they haven't changed (but perhaps others have), go
ahead and allow the update to the changed columns. However, if one
or more columns have been changed, send a message back to the user
with the new values from the database.

A lot of hassle, I know. However, you wouldn't want to lock the row
anyway. What happens if someone displays the record then closes his
browser? Now you have a locked record which you can't unlock (until
perhaps some timeout value expires).


And still, it's possible that you check whether the values are the same, someone
immediately thereafter changes the values, and only then does your script
reach the point of updating them. So it's by no means foolproof. It
would have to happen in a very small timeframe, though, and it's not very
likely to happen. Yet, given enough time and transactions, someday it might.

Then again, I haven't heard of a slim workable solution without that
weakness either.

Grtz,
--
Rik Wasmus
May 19 '06 #4
The first question is: Can people access the same record in parallel?

When defining tables, I always recognize 3 categories:
- Definition tables (better known as lookup tables): these contain only
  read-only data that rarely changes. No concurrency problems.
- Live data tables: these can be a problem.
- Log tables: these are add-only. I never put transactions on them,
  because I'd rather have an illogical order of the records
  than missing records. Log tables should never be locked.

When records are only available for a single user, there's only the
possibility of the same user (or someone abusing an account) messing
with his own data (two different browsers, maybe?).
If your database brand (I am not familiar with PostgreSQL) does not
support transactions or locking, you can invent your own. With a
timestamp field and a user (or session ID) field, you can see which user
has requested a lock. If the lock was taken too long ago, it is free. If it is
held by another user or session, it is locked. As SQL commands are
usually atomic (I have yet to encounter a database brand with unsafe
commands), an update command can be crafted that puts the full locking
condition in its WHERE clause.
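
A sketch of such a home-made lock as one atomic UPDATE; the lock_user and lock_time columns, the customers table, and the five-minute timeout are all illustrative assumptions:

<?php
// Try to claim the record. The WHERE clause carries the whole locking condition,
// so the check and the claim happen in a single atomic statement.
$res = pg_query_params($conn,
    "UPDATE customers
        SET lock_user = $1, lock_time = now()
      WHERE id = $2
        AND (lock_user IS NULL
             OR lock_user = $1
             OR lock_time < now() - interval '5 minutes')",
    array($username, $recordId));

if (pg_affected_rows($res) == 1) {
    // We hold the lock; go ahead and show the edit form.
} else {
    // Someone else holds a fresh lock on this record.
}
?>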

Look up the section about transactions in the database's documentation.
The fear of "hanging" open transactions is often solved by the fact that
connections are reset by PHP after executing the script, and the
database rolls back any open transactions in a connection when it is
reset/closed. Again, I do not know how PostgreSQL handles transactions
and connections.

Best regards

rich wrote:
I am building an app using php and postgresql. My questionis this.
How do you handle people wanting to make parallel changes to a record.
Since in web apps you are doing a select, bring over a record then
updating it, there is no lock on that record while you are making the
changes in your browser window. Does transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.

May 19 '06 #5
>If your database brand (I am not familiar with PostgreSQL) does not
>support transactions or locking, you can invent your own.

Web-based transactions that require user interaction in the middle
of them are dangerous and pretty much useless. Part of the problem
is that there is no timeout value which is not too short, too long,
or both at the same time. In a tech support or customer service
environment, where techs edit customer records, consider the
possibilities for getting fired or murdered by co-workers if your
computer crashes in the middle of an edit and delays the daily
billing run.

>With a timestamp field and a user (or session ID) field, you can see which user
>has requested a lock. If the lock was taken too long ago, it is free. If it is
>held by another user or session, it is locked. As SQL commands are
>usually atomic (I have yet to encounter a database brand with unsafe
>commands), an update command can be crafted that puts the full locking
>condition in its WHERE clause.
>
>Look up the section about transactions in the database's documentation.
>The fear of "hanging" open transactions is often solved by the fact that
>connections are reset by PHP after executing the script, and the
>database rolls back any open transactions in a connection when it is
>reset/closed. Again, I do not know how PostgreSQL handles transactions
>and connections.

In other words, in a PHP-based setup, the transaction is reset
before the user has a chance to answer the "are you sure?" prompt.
(Note to self: never permit creation of a user by the name of
"sure".) Net effect: they're useless since the transaction is
always rolled back.


Gordon L. Burditt
May 19 '06 #6
>> If your database brand (I am not familiar with PostgreSQL) does not
>> support transactions or locking, you can invent your own.
>
> Web-based transactions that require user interaction in the middle
> of them are dangerous and pretty much useless. Part of the problem
> is that there is no timeout value which is not too short, too long,
> or both at the same time. In a tech support or customer service
> environment, where techs edit customer records, consider the
> possibilities for getting fired or murdered by co-workers if your
> computer crashes in the middle of an edit and delays the daily
> billing run.

If only one "finance" user can edit the record, there's no problem. Tech
support is a separate problem altogether, as they can access the
database without your application. So whatever you come up with, an
administrator can always circumvent it.

Apart from that, I think that getting a message "this record is locked
by ..." is by far more acceptable than randomly incorrect bills. That is
why I started my previous mail with: the first question is, can two
users access the same record at all? For many systems, this is not the case.
A good second question is: what is the impact of concurrency errors? For
a billing system, I'd take my time to investigate all the consequences. For
a hit counter, I don't mind missing a count once in a while.

>> With a timestamp field and a user (or session ID) field, you can see which user
>> has requested a lock. If the lock was taken too long ago, it is free. If it is
>> held by another user or session, it is locked. As SQL commands are
>> usually atomic (I have yet to encounter a database brand with unsafe
>> commands), an update command can be crafted that puts the full locking
>> condition in its WHERE clause.
>>
>> Look up the section about transactions in the database's documentation.
>> The fear of "hanging" open transactions is often solved by the fact that
>> connections are reset by PHP after executing the script, and the
>> database rolls back any open transactions in a connection when it is
>> reset/closed.
>
> In other words, in a PHP-based setup, the transaction is reset
> before the user has a chance to answer the "are you sure?" prompt.
> (Note to self: never permit creation of a user by the name of
> "sure".) Net effect: they're useless since the transaction is
> always rolled back.

No. Like a previous poster wrote, you can check the state of the record
before updating. You do _that_ in the same transaction as the update. If
you detect a collision, it is up to you what you do with it: not
updating and getting back to the user is just one option. Only if the
script crashes during the collision check would the transaction be
rolled back automatically, instead of keeping a lock that is never needed
anymore.
I would never keep a lock between two calls to the webserver, as nobody
can guarantee that the second call will ever be made.

Best regards
May 19 '06 #7
Jerry Stuckle wrote:
Keep the original values. Before updating, read the record in again and compare
the values to the old ones. If they don't match, put the window back up with
the new values and tell the user.


Or just add the original values to the WHERE clause of the UPDATE
statement. It's easier, probably faster, and avoids the race condition.

May 19 '06 #8
Chung Leong wrote:
Jerry Stuckle wrote:
Keep the original values. Before updating, read the record in again and
compare
the values to the old ones. If they don't match, put the window back up
with the new values and tell the user.


Or just add the original value to the where clause of the update
statement. It's easier, probably fast, and avoids a race condition.


A nice touch.

I would add that you do need to then check for affected rows, so you can
notify the user if the update did not go through.
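
Since the original poster is on PostgreSQL, here is a hedged sketch of that combination (old values in the WHERE clause, then an affected-rows check); the table and column names are again illustrative, and $old/$new hold the values that went out on the form and came back with it:

<?php
// One statement does both the comparison and the update, so there is no race window.
$res = pg_query_params($conn,
    "UPDATE customers
        SET name = $1, phone = $2
      WHERE id = $3 AND name = $4 AND phone = $5",
    array($new['name'], $new['phone'], $id, $old['name'], $old['phone']));

if (pg_affected_rows($res) != 1) {
    // The row changed (or disappeared) since it was read: re-fetch it and tell the user.
}
?>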

--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth@(Sec)ure(Dat)a(.com)
May 20 '06 #9
rich wrote:
I am building an app using PHP and PostgreSQL. My question is this:
How do you handle people wanting to make parallel changes to a record?
Since in web apps you are doing a select, bringing over a record, then
updating it, there is no lock on that record while you are making the
changes in your browser window. Do transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.


Rich,

You have more or less asked *the* *question* of multiple user n-tier
software development.

One big school of thought is the so-called "last save wins" school, which, as
the name implies, suggests that the last person to update wins. The most
common variant is to track old values, as has been suggested, and to update
only changed values. The argument for this method is that the user is most
likely to change only those values that are relevant to their task, and if
they are changing them, they probably need to be changed. If somebody else
is trying to change the same values, there may be other issues in the
database design and in the procedures of the company.

Chung's solution is your best bet if you want to prevent all possibility of
overwrites. This would be the "no dirty writes" school, that says you can
only write to the row if it is in the same condition you found it in when
you started.

It is very important to realize that the nature of the problem precludes a
perfect answer. It is a trade-off, and somebody will object to whichever
choice you make. I go for the last-save-wins school, with changed values
only.

It also just so happens that the scenario does not happen that often (all
flames to /dev/null).

As for the transaction, Postgres will block all writes to any row you update
until the transaction is finished. After that, it is as I said before,
last write wins. If you do not use explicit transactions, then each update
statement will be a transaction. And always remember the first rule of
transactions: the shorter the better.
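
As a rough illustration of an explicit (and short) transaction, the sketch below groups a customer update with a write to a hypothetical change_log table; without the BEGIN/COMMIT, each statement would commit on its own:

<?php
// Keep the transaction short: no user interaction between BEGIN and COMMIT.
// Both statements succeed or fail together, and the row stays write-locked only briefly.
pg_query($conn, "BEGIN");
pg_query_params($conn,
    "UPDATE customers SET phone = $1 WHERE id = $2",
    array($newPhone, $id));
pg_query_params($conn,
    "INSERT INTO change_log (customer_id, changed_by, changed_at) VALUES ($1, $2, now())",
    array($id, $username));
pg_query($conn, "COMMIT");
?>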

--
Kenneth Downs
Secure Data Software, Inc.
(Ken)nneth@(Sec)ure(Dat)a(.com)
May 20 '06 #10
Rik wrote:
Jerry Stuckle wrote:
rich wrote:
I am building an app using PHP and PostgreSQL. My question is this:
How do you handle people wanting to make parallel changes to a record?
Since in web apps you are doing a select, bringing over a record, then
updating it, there is no lock on that record while you are making the
changes in your browser window. Do transactions handle that? Do you
make your selection and update all part of the same transaction? This
whole scenario has me stumped.


Keep the original values. Before updating, read the record in again
and compare the values to the old ones. If they don't match, put the
window back up with the new values and tell the user.

You can also be a little more creative. Check the columns which are
being updated. If they haven't changed (but perhaps others have), go
ahead and allow the update to the changed columns. However, if one
or more columns have been changed, send a message back to the user
with the new values from the database.

A lot of hassle, I know. However, you wouldn't want to lock the row
anyway. What happens if someone displays the record then closes his
browser? Now you have a locked record which you can't unlock (until
perhaps some timeout value expires).

And still, it's possible that you check whether the values are the same, someone
immediately thereafter changes the values, and only then does your script
reach the point of updating them. So it's by no means foolproof. It
would have to happen in a very small timeframe, though, and it's not very
likely to happen. Yet, given enough time and transactions, someday it might.

Then again, I haven't heard of a slim workable solution without that
weakness either.

Grtz,


Sure you can guarantee it.

$result = mysql_query("UPDATE myTable SET col1='$col1', col2='$col2' " .
                      "WHERE col1='$oldcol1' AND col2='$oldcol2'");
if (mysql_affected_rows() != 1)
    echo "Data has changed!<br>";

If either col1 or col2 has been changed, the update will fail and
mysql_affected_rows() will return zero.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
May 20 '06 #11
Rik
Jerry Stuckle wrote:
Sure you can guarantee it.

$result = mysql_query("UPDATE myTable SET col1='$col1', col2='$col2' " .
                      "WHERE col1='$oldcol1' AND col2='$oldcol2'");
if (mysql_affected_rows() != 1)
    echo "Data has changed!<br>";

If either col1 or col2 has been changed, the update will fail and
mysql_affected_rows() will return zero.


This doesn't guarantee it, though I admit we're talking milli-, no, microseconds
here. The problem is queries aren't instantaneous, they take time.

Two queries run:
USER1 UPDATE WHERE is searched
USER2 UPDATE WHERE is searched
USER1 SETS
USER2 SETS

So user 2 has overwritten changes made by user 1.

Likely no, possible yes.

Grtz,
--
Rik Wasmus
May 21 '06 #12
Rik wrote:
Jerry Stuckle wrote:
Sure you can guarantee it.

$result = mysql_query("UPDATE myTable SET col1='$col1', col2='$col2' " .
                      "WHERE col1='$oldcol1' AND col2='$oldcol2'");
if (mysql_affected_rows() != 1)
    echo "Data has changed!<br>";

If either col1 or col2 has been changed, the update will fail and
mysql_affected_rows() will return zero.

This doesn't guarantee it, though I admit we're talking milli-, no, microseconds
here. The problem is queries aren't instantaneous, they take time.

Two queries run:
USER1 UPDATE WHERE is searched
USER2 UPDATE WHERE is searched
USER1 SETS
USER2 SETS

So user 2 has overwritten changes made by user 1.

Likely no, possible yes.

Grtz,


Actually, it does guarantee it.

During the update the row is locked so it can't be changed. This has been true
in MySQL for many releases.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
May 21 '06 #13
Someone gave me an even better idea: use sessions in a table. Add to
each user session a field for the key field of each table in your
database that is important. For example, use the key field for a customer
record. When you query the record for a change, write to that customer
field in your session table the index of the record you are on.
Whenever anyone opens that record, the first thing you do is query for
the record number you are going to change, and if it is found you cannot
select the record because someone else has it open. Set your session
max lifetime low enough so that a user can't sit on a record too long.
This saves a person a lot of work entering data only to be told someone
else was using the record. I think this is a nice way of doing it.

May 23 '06 #14
rich wrote:
Someone gave me an even better idea: use sessions in a table. Add to
each user session a field for the key field of each table in your
database that is important. For example, use the key field for a customer
record. When you query the record for a change, write to that customer
field in your session table the index of the record you are on.
Whenever anyone opens that record, the first thing you do is query for
the record number you are going to change, and if it is found you cannot
select the record because someone else has it open. Set your session
max lifetime low enough so that a user can't sit on a record too long.
This saves a person a lot of work entering data only to be told someone
else was using the record. I think this is a nice way of doing it.


First of all, one client cannot generally access the session data of another
client. You'd have to have some special handling in there.

Second, session data is buffered. It isn't necessarily written to the session
when you save it in the $_SESSION variable. So even if you do have a special
session handler, it may not have received the data from the first client when
the second client searches sessions.

Next, if the first client retrieves a record then shuts down his browser, the
second client will find the data in the session until the session times out -
which may be minutes or days (or even years) later. You should avoid locking
rows while waiting for user input.

All kinds of problems here - one of the worst solutions.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
May 23 '06 #15
(First of all, one client cannot generally access the session data of
another
client. You'd have to have some special handling in there.)
If you put it in a table they can, right? If you do this, can't you use
the write callback registered with session_set_save_handler() at all
instances when you write to the session, to write the data to the table?
I see what you mean about the problem with the session timeout, but
can't this time be set to, say, 5 minutes? If you are idle, it destroys
the session? I am pretty new to this. Thanks for your input.

May 23 '06 #16
rich wrote:
(First of all, one client cannot generally access the session data of
another
client. You'd have to have some special handling in there.)
If you put it in a table they can, right? If you do this, can't you use
the write callback registered with session_set_save_handler() at all
instances when you write to the session, to write the data to the table?
I see what you mean about the problem with the session timeout, but
can't this time be set to, say, 5 minutes? If you are idle, it destroys
the session? I am pretty new to this. Thanks for your input.


Yes, if you put it in a table. That's what I meant by special session handling.

However, the rest of the comments stand. Data is not necessarily sent to the
session handler when you do something like $_SESSION['recid'] = 123; It can be
queued by PHP until some later time. So just because you've set the parameter
doesn't mean it's in the table.

Same thing when you clear the session. The request is not necessarily sent to
the session handler immediately.

The bottom line is - you can expect all kinds of problems if you try to use
sessions to lock rows or otherwise stop concurrent access to a resource. The
timing will get you sooner or later.

And BTW - ok, so you do set the session timeout to 5 minutes. You bring up a
screen. User fills out most of the data. Then he has to get some info from the
guy in the next cubicle, someone on the phone - or maybe he just gets a phone
call. Suddenly his 5 minutes have expired and he's got to start all over again.

The bottom line is - you should NEVER lock a resource (especially one or more
rows in a database) while waiting for user input.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
May 23 '06 #17
Method 1) The easiest way would be to have on each table a last-modified
timestamp and user.
Before writing, check the timestamp and user (see Method 2) to see if
the record was changed since you read it last. If it was, then pop up a
message for appropriate action (force overwrite or reload).

Caveat: if other apps modify the database and don't honor the timestamp,
you could have a problem. Of course, you can have a trigger in the
database to force a timestamp write on each update so no one forgets.

Method 2) Test the columns as suggested above and display a message.
BUT you still need transaction-based updates in case more than one
table is involved.

If you have to apply the changes to multiple tables (relational stuff),
then you do need the begin transaction/commit/rollback in case any
tables have changed.

Begin Transaction

Select * from A with lock
Select * from B with lock

If any columns (or timestamp/user) of above cursors changed
Rollback/Error
else
Update
Commit
endif
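
In PostgreSQL/PHP terms, that flow might look roughly like the sketch below; SELECT ... FOR UPDATE takes the row locks, and the table names (a, b), the columns (modified_at, amount, status), and the $oldA/$oldB variables are only placeholders following the pseudocode above:

<?php
// Sketch of the multi-table case: lock both rows, compare, then update or roll back.
// $oldA and $oldB hold the values read when the edit form was originally built.
pg_query($conn, "BEGIN");

$a = pg_fetch_assoc(pg_query_params($conn,
        "SELECT * FROM a WHERE id = $1 FOR UPDATE", array($aId)));
$b = pg_fetch_assoc(pg_query_params($conn,
        "SELECT * FROM b WHERE id = $1 FOR UPDATE", array($bId)));

if ($a['modified_at'] != $oldA['modified_at'] || $b['modified_at'] != $oldB['modified_at']) {
    pg_query($conn, "ROLLBACK");
    // Report the collision and let the user decide (reload or force overwrite).
} else {
    pg_query_params($conn, "UPDATE a SET amount = $1 WHERE id = $2", array($newAmount, $aId));
    pg_query_params($conn, "UPDATE b SET status = $1 WHERE id = $2", array($newStatus, $bId));
    pg_query($conn, "COMMIT");
}
?>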

May 23 '06 #18
I guess if this was easy, no one would get paid for it. Thanks for all
the input. I will read this over again to get my head around it
better.

May 23 '06 #19
rich wrote:
Someone gave me an even better idea: use sessions in a table. Add to
each user session a field for the key field of each table in your
database that is important. For example, use the key field for a customer
record. When you query the record for a change, write to that customer
field in your session table the index of the record you are on.
Whenever anyone opens that record, the first thing you do is query for
the record number you are going to change, and if it is found you cannot
select the record because someone else has it open. Set your session
max lifetime low enough so that a user can't sit on a record too long.
This saves a person a lot of work entering data only to be told someone
else was using the record. I think this is a nice way of doing it.


That's a very bad idea. Whoever recommended it doesn't have a good
understanding of PHP's session garbage collection mechanism. There is
no timer somewhere that causes a session to become expired. Instead,
garbage collection *could* happen at the end of every request, with the
probability of it happening determined by session.gc_probability /
session.gc_divisor (in php.ini). The default is one in a hundred. If
site traffic is low, your record would be locked for many times the
session max life. Setting the probability to unity, on the other hand,
would be a serious drag on performance/scalability.
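
For reference, the php.ini settings involved, shown with what are (to the best of my knowledge) their usual defaults:

; Garbage collection of old session data is probabilistic, not timer-driven.
session.gc_probability = 1
session.gc_divisor     = 100
; Seconds a session may sit idle before it becomes eligible for garbage collection.
session.gc_maxlifetime = 1440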

There is nothing clever at all about the idea. It's just a locking
mechanism. You can easily implement a proper one without a custom
session handler.

May 23 '06 #20
