Hi,
First of all, my apologies if you have seen this mail already; I am
re-sending it as there were some initial problems.
This query relates to defining indexes as unique or non-unique and the
consequences thereof.
Some documented facts that I am aware of include the following (a small
sketch illustrating both points follows the list):
1. Defining uniqueness allows the optimiser to create better plans, e.g.
a select on the full key of such an index lets the optimiser determine
that at most one row will be returned.
2. Defining uniqueness ensures that the rule (business rule / primary
key) is enforced, regardless of how the data is entered.
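To make the two points concrete, here is a minimal sketch; the table,
column and index names are purely hypothetical and not from our actual
schema.

CREATE TABLE dbo.Customer (
    CustomerID   int          NOT NULL,
    CustomerCode varchar(20)  NOT NULL,
    CustomerName varchar(100) NOT NULL
)

-- Unique index: duplicate CustomerCode values are rejected at
-- insert/update time, and a query such as
--     SELECT * FROM dbo.Customer WHERE CustomerCode = @code
-- is known by the optimiser to return at most one row.
CREATE UNIQUE INDEX IX_Customer_Code ON dbo.Customer (CustomerCode)

-- Non-unique alternative: the rule is not enforced, and the optimiser
-- has to rely on statistics to estimate how many rows will match.
-- CREATE INDEX IX_Customer_Code ON dbo.Customer (CustomerCode)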
We have many cases where non-unique indexes are defined. The approach to
date has been that, even though we are aware of some of the benefits
offered by defining uniqueness, we have chosen not to add extra key
columns to non-unique indexes just to make them unique. The primary
reason for this was that we did not want to make the index keys
unnecessarily wide, with the ensuing consequences when DML statements
are performed (the trade-off is sketched below).
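For clarity, the widening approach we have been avoiding looks roughly
like this (again, purely hypothetical names): the unique variant appends
the primary key column solely to guarantee uniqueness, at the cost of a
wider key to maintain on every insert, update and delete.

CREATE TABLE dbo.Orders (
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
)

-- Narrow, non-unique index as we define them today:
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)

-- Widened, unique variant we have avoided: OrderID is appended only to
-- make the key combination unique, which widens the key that every DML
-- statement against dbo.Orders has to maintain.
-- CREATE UNIQUE INDEX IX_Orders_CustomerID_OrderID
--     ON dbo.Orders (CustomerID, OrderID)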
However, I have concerns that highly duplicated indexes can have
performance impacts, including deadlocking. I am also aware that Sybase
used to store duplicate values in overflow pages, with the resulting
performance consequences. Could SQL Server 2000 have the same behaviour?
Thanking you in advance
Puvendran