
SubQuery or Temp Table?

We were trying to remove duplicates and came up with two solutions.
One is similar to a solution in the book "Advanced Transact-SQL for
SQL Server 2000" by Ben-Gan & Moreau and uses temp tables to remove
the duplicates. A co-worker wrote a different solution that also
removes duplicates but uses subqueries instead of temp tables.

Theoretically, which solution should perform faster on large tables?
For example, with a source table of 100,000 rows, would the temp table
approach be faster, or would the subquery run more quickly?
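
To make the comparison concrete, here is a simplified sketch of both
approaches (table and column names are hypothetical; assume dbo.Orders
has an IDENTITY column OrderID, and rows count as duplicates when
CustNo, OrderDate and Amount all match; our real queries differ, but
the shape is the same):

-- Approach 1: temp table, in the spirit of Ben-Gan & Moreau
-- Step 1: capture one surviving OrderID per duplicate group
SELECT CustNo, OrderDate, Amount, MIN(OrderID) AS KeepID
INTO #Keep
FROM dbo.Orders
GROUP BY CustNo, OrderDate, Amount

-- Step 2: delete every row that is not the survivor of its group
DELETE O
FROM dbo.Orders AS O
JOIN #Keep AS K
  ON O.CustNo = K.CustNo
 AND O.OrderDate = K.OrderDate
 AND O.Amount = K.Amount
WHERE O.OrderID <> K.KeepID

DROP TABLE #Keep

-- Approach 2: correlated subquery, no temp table
-- Keep the lowest OrderID in each group, delete the rest
DELETE O
FROM dbo.Orders AS O
WHERE O.OrderID > (SELECT MIN(O2.OrderID)
                   FROM dbo.Orders AS O2
                   WHERE O2.CustNo = O.CustNo
                     AND O2.OrderDate = O.OrderDate
                     AND O2.Amount = O.Amount)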

Jul 23 '05 #1
On 11 Apr 2005 08:07:42 -0700, im*******************@yahoo.com wrote:
<snip>


Hi imani,

That question is impossible to answer without knowing anything about
your table structure or the actual queries you use. Even with that
knowledge, the best answer to "which one performs best" is usually "test
them both in your environment, on your hardware and against your data".
The speed of queries depends on lots of factors; there is no generic
answer.

But the real question here is: why would you care? Cleaning duplicates
should always be a one-time operation - typically the kind of operation
where development time is much more important than execution time. Just
run one of your queries and be done with it, then proceed to the really
important issue: take steps to ensure you'll never have to do it again.
(No, wait - reverse that: FIRST take steps to prevent new duplicates,
then take out the existing ones).

For regular tables, the way to prevent duplicates is to find the natural
key and declare that as either PRIMARY KEY or UNIQUE.
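
For example (hypothetical table and key columns; substitute your own
natural key):

ALTER TABLE dbo.Orders
  ADD CONSTRAINT UQ_Orders_NaturalKey UNIQUE (CustNo, OrderDate, Amount)

Once the constraint is in place, any INSERT or UPDATE that would create
a duplicate fails with an error instead of silently polluting the table.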

If you are dealing with a staging table that's used for a data import
where you receive duplicates beyond your control, then you might want to
create a UNIQUE INDEX with the IGNORE_DUP_KEY option. Absolutely *NOT*
recommended for normal tables, but for this specific situation (import
of data known to have duplicates), it might be useful. You can read
about it in Books Online. Remember that IGNORE_DUP_KEY can result in
loss of data, and that you can't control WHICH of the duplicate rows is
dropped.
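
As a sketch, with a hypothetical staging table dbo.StagingOrders (this
is the SQL Server 2000 syntax; later versions spell it
WITH (IGNORE_DUP_KEY = ON)):

CREATE UNIQUE INDEX UQ_StagingOrders_NaturalKey
ON dbo.StagingOrders (CustNo, OrderDate, Amount)
WITH IGNORE_DUP_KEY

With this index, an INSERT that would create a duplicate key is
discarded with a "Duplicate key was ignored." warning instead of
failing the whole batch, which is exactly the loss of data warned
about above.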

Best, Hugo
Jul 23 '05 #2
The advantage of subqueries over temp tables is that the intermediate
results do not have to be written to disk as long as there is enough
internal memory. This saves (expensive) I/O.

Temp tables have the advantage that they can be indexed, and that you
can remove duplicates in batches. Both techniques can greatly benefit
your operation.
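
For example, on SQL Server 2000 the usual batching idiom is SET
ROWCOUNT (a sketch only, reusing the hypothetical dbo.Orders from the
original post; note that SET ROWCOUNT for data modification statements
is deprecated in later versions):

SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    -- Delete up to 1000 duplicate rows per iteration
    DELETE O
    FROM dbo.Orders AS O
    WHERE O.OrderID > (SELECT MIN(O2.OrderID)
                       FROM dbo.Orders AS O2
                       WHERE O2.CustNo = O.CustNo
                         AND O2.OrderDate = O.OrderDate
                         AND O2.Amount = O.Amount)
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

Smaller batches keep each statement's transaction log usage and lock
footprint manageable on a large table.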

But as Hugo noted, you have not posted enough information to really
answer the question. It *will* depend on your situation: the hardware,
the table sizes, the query, and possibly even the SQL Server version
and edition.

HTH,
Gert-Jan
"im*******************@yahoo.com" wrote:

<snip>

Jul 23 '05 #3
As a very gross generalization, use derived tables and subquery
expressions. The temp table model in SQL Server is highly proprietary,
so it will not port. A temp table is a separate object that has to be
materialized. A derived table gets optimized as a part of the whole
query, so it might not need to be materialized and processed as a
separate step. Unless you add them, a temp table has no constraints,
indexes, etc.
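
Reusing the hypothetical dbo.Orders example from the original post,
the temp table version folds into a single statement with a derived
table:

DELETE O
FROM dbo.Orders AS O
JOIN (SELECT CustNo, OrderDate, Amount, MIN(OrderID) AS KeepID
      FROM dbo.Orders
      GROUP BY CustNo, OrderDate, Amount) AS D
  ON O.CustNo = D.CustNo
 AND O.OrderDate = D.OrderDate
 AND O.Amount = D.Amount
WHERE O.OrderID <> D.KeepID

Because the optimizer sees the whole statement at once, it may avoid
materializing the grouped result as a separate step. (The
DELETE...FROM...JOIN syntax is itself proprietary, though; only the
derived table part ports.)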

Jul 23 '05 #4
