
table constraints and performance

Could someone give me some information regarding the performance
implications of applying constraints to a table? Are there any
safeguards?

The basic scenario is a table that gets 80% updates and inserts and
20% selects. I would like to prevent duplicate inserts caused by a
user double-clicking or by other behavior such as a page refresh.

I am after advice on the performance implications of placing
constraints on the table vs. writing business-logic code.

Thanks,
Brian




Nov 11 '05 #1
2 Replies

Besides the performance issue, I think the right approach is to detect
refreshes before the data is ever sent to the database.

In our applications, we filter out refreshes from the beginning.
We do it by sending a serial number, which we keep in a session
variable. Every time we send a page to a client, we increment this
number, update the session variable, and embed it as a hidden field in
the page. When the user submits the page (and the hidden value with
it), we check that the submitted serial number equals the session's
serial number. If it does, we write to the database; if not, we simply
skip the writing code.

This way, when the user refreshes the page, the session variable is
incremented again on the server, but the submitted value stays the
same. You can implement this in an easy, generic way, and you don't
waste time sending the database updates that you know will fail
anyway. A sketch of the idea follows below.
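
Here is a minimal Python sketch of that serial-number check. It is
framework-neutral: "session" stands in for whatever per-user session
store you have, and all names (render_form, handle_submit, form_serial)
are illustrative rather than taken from any particular library.

def render_form(session):
    # Issue a fresh serial number and embed it as a hidden field.
    session["form_serial"] = session.get("form_serial", 0) + 1
    return (
        f'<form method="post" action="/submit">'
        f'<input type="hidden" name="serial" value="{session["form_serial"]}">'
        f'<input type="submit"></form>'
    )

def handle_submit(session, form):
    # Write only when the submitted serial matches the one issued last.
    if form.get("serial") != str(session.get("form_serial")):
        return False  # refresh or double-click: skip the write
    # ... perform the INSERT/UPDATE here ...
    # The next render_form() call issues a new serial, so re-posting
    # this same form (e.g. via browser refresh) no longer matches.
    return True

# Example: a stale re-submit is silently ignored.
session = {}
render_form(session)                      # issues serial 1
handle_submit(session, {"serial": "1"})   # True: the write happens
render_form(session)                      # next page, serial 2
handle_submit(session, {"serial": "1"})   # False: refresh re-post skipped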

Hope it helps :)

Nov 11 '05 #2

What Franco Bruno Borghesi says is a good idea: you can prevent the
user from submitting duplicates in the first place.

But if you want to keep some fields unique in a table, please rely on
the DBMS. PostgreSQL supports primary keys, foreign keys, unique
indexes and other facilities for keeping data integrity, and most of
the time it does this better than application code. A unique
constraint is enforced through an index, so every insert pays a small
index-maintenance cost, but that is usually cheaper and safer than an
equivalent duplicate check written in business logic.
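
As a sketch of that approach (the table, column names, and connection
parameters below are invented for illustration, and psycopg2 is assumed
as the Python driver), the application can let the UNIQUE constraint do
the checking and treat a rejected duplicate as a no-op:

# Assumed schema, enforced by the database rather than by business logic:
#   CREATE TABLE orders (
#       id       serial PRIMARY KEY,
#       order_no text NOT NULL UNIQUE,  -- the constraint doing the work
#       item     text NOT NULL
#   );
import psycopg2

def insert_order(conn, order_no, item):
    # Returns True if the row went in, False if it was a duplicate.
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.execute(
                "INSERT INTO orders (order_no, item) VALUES (%s, %s)",
                (order_no, item),
            )
        return True
    except psycopg2.IntegrityError:  # unique_violation from the UNIQUE constraint
        return False

conn = psycopg2.connect("dbname=app")   # illustrative connection string
insert_order(conn, "A-1001", "widget")  # True: row inserted
insert_order(conn, "A-1001", "widget")  # False: duplicate, no error raised

On PostgreSQL 9.5 and later, INSERT ... ON CONFLICT (order_no) DO
NOTHING avoids raising the error in the first place.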
Nov 11 '05 #3
