On 11 Apr 2005 08:07:42 -0700,
im*******************@yahoo.com wrote:
We were trying to remove duplicates and came up with two solutions.
One solution is similar to the one found in a book called "Advanced
Transact-SQL for SQL Server 2000" by Ben-Gan & Moreau. This solution
uses temp tables for removing duplicates. A co-worker created a
different solution that also removes duplicates, but uses subqueries
instead of temp tables.
Theoretically, which solution would result in faster performance with
large tables? Would using temp tables perform faster when the source
table has 100,000 records, for example, or would the subquery function
more quickly in that situation?
Hi imani,
That question is impossible to answer without knowing your table
structure or the actual queries you use. Even with that knowledge, the
best answer to "which one performs best" is usually "test them both in
your environment, on your hardware and against your data".
The speed of queries depends on lots of factors; there is no generic
answer.
But the real question here is: why would you care? Cleaning duplicates
should always be a one-time operation - typically the kind of operation
where development time is much more important than execution time. Just
run one of your queries and be done with it, then proceed to the really
important issue: take steps to ensure you'll never have to do it again.
(No, wait - reverse that: FIRST take steps to prevent new duplicates,
then take out the existing ones).
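Just to illustrate the kind of one-off query I mean (table and column
names below are made up, and I'm assuming the table has an IDENTITY
column that can act as a tiebreaker between otherwise identical rows),
a subquery-based cleanup could look like this:

  -- Keep the row with the lowest id per natural-key group, delete the rest.
  -- dbo.Customers, id, LastName, FirstName and Email are hypothetical names.
  DELETE FROM dbo.Customers
  WHERE id NOT IN (SELECT MIN(id)
                   FROM dbo.Customers
                   GROUP BY LastName, FirstName, Email)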
For regular tables, the way to prevent duplicates is to find the natural
key and declare it as either PRIMARY KEY or UNIQUE.
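For example (again with made-up names), once the duplicates are gone you
could declare the natural key like this:

  -- Assuming Email is the natural key of a hypothetical dbo.Customers table
  ALTER TABLE dbo.Customers
    ADD CONSTRAINT UQ_Customers_Email UNIQUE (Email)
  -- or, if the table has no primary key yet and Email is NOT NULL:
  -- ALTER TABLE dbo.Customers
  --   ADD CONSTRAINT PK_Customers PRIMARY KEY (Email)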
If you are dealing with a staging table that's used for a data import
where you receive duplicates beyond your control, then you might want to
create a UNIQUE INDEX with the IGNORE_DUP_KEY option. Absolutely *NOT*
recommended for normal tables, but for this specific situation (import
of data known to have duplicates), it might be useful. You can read
about it in Books Online. Remember that IGNORE_DUP_KEY can result in
loss of data, and that you can't control WHICH of the duplicate rows is
dropped.
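In SQL Server 2000 that would look something like this (the staging table
and column names are made up):

  -- Duplicate rows in the incoming feed are silently discarded;
  -- the INSERT reports "Duplicate key was ignored." and continues.
  CREATE UNIQUE INDEX UQ_Staging_CustomerCode
  ON dbo.CustomerStaging (CustomerCode)
  WITH IGNORE_DUP_KEY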
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)