Besides the performance issues, I think it's correct to detect refreshes
before sending data to the database.
In our applications, we ignore refreshes from the beginning.
We do it by sending a serial number, which we keep in a session
variable.
Every time we send a page to a client, we increment this number, update
the session variable, and send it as a hidden field within the page.
When the user submits the page (and the hidden value), we check that the
submitted serial number equals the session's serial number. If it
does, we write to the database; if it doesn't, we just skip the writing
code.
This way, when the user refreshes the page, the session variable always
gets incremented at the server, but the submitted value stays the
same.
You can implement this in an easy, generic way, and you don't waste time
sending updates to the database that you know will fail anyway.
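As a rough illustration (not the original code; the session object here is just a plain dict standing in for whatever session mechanism your framework provides), the technique above might look like this:

```python
# Hypothetical sketch of the serial-number refresh-detection scheme.
# "session" is per-user server-side state; the serial is echoed back
# by the browser as a hidden form field.

def render_page(session):
    """Called on every page render: bump the serial and return the
    value to embed as a hidden field in the form."""
    session["serial"] = session.get("serial", 0) + 1
    return session["serial"]

def handle_submit(session, submitted_serial):
    """Called when the form comes back. Only write to the database
    when the submitted serial matches the current session serial;
    a refresh or double-click re-submits a stale serial."""
    if submitted_serial == session["serial"]:
        # ... write to the database here ...
        return True   # fresh submission, write performed
    return False      # stale serial: skip the writing code
```

A refresh re-requests the page, which increments the session serial, so the old hidden value no longer matches and the duplicate write is skipped.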
Hope it helps :)
On Mon, 2003-08-25 at 14:44, Brian Maguire wrote:
Could someone provide me with some information in regards to the
performance implications of applying constraints on a table? Are there
any safe guards?
The basic scenario is that there is a table that has 80% updates and
inserts and 20% selects. I would like to restrict duplicate inserts
from user double clicking or other user behavior such as refresh.
I am after advice on what the performance implications would be on
placing restraints on a table vs. writing business logic code.
Thanks,
Brian