If SQL Server is designed for multi-processor systems, how can running
a query in parallel make such a dramatic difference to performance?
We have a reasonably simple query that brings in data from a few
non-complex views. If we run it on our 2 x 2.4 GHz Xeon server it takes
over six minutes. If we run the same query on the same server with
OPTION (MAXDOP 1) appended, it takes less than a second.
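For reference, this is the shape of the hint we are using (the view and
column names here are made up for illustration):

```sql
-- Hypothetical query; the hint at the end forces a serial plan
SELECT o.OrderID, c.CustomerName
FROM dbo.vwOrders AS o                      -- made-up view names
JOIN dbo.vwCustomers AS c
    ON c.CustomerID = o.CustomerID
ORDER BY c.CustomerName
OPTION (MAXDOP 1);                          -- limit the plan to one CPU
```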
Examining the execution plans, the only difference I have been able to
see is that parallelism takes up 96% of the run time when both
processors are used. With one processor that disappears, and a sort
takes up the vast majority of the query's run time instead.
OK, running in parallel should mean the query is split into parts that
run concurrently and are 'joined up' afterwards for a performance gain,
but how can it get it so wrong (time-wise)?
If this is the case, would I see a significant difference by changing
our server to use a single processor? That seems completely the wrong
approach (or should I add the hint to each query in each app - eek)?
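One middle ground I am aware of, rather than hinting every query or
removing a CPU: SQL Server has a server-wide configuration option that
caps the degree of parallelism for all queries. A sketch (run as a
sysadmin; this affects the whole instance, so it may not be the right
answer either):

```sql
-- 'max degree of parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap all queries at one CPU instance-wide (0 = use all processors)
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```

Raising the related 'cost threshold for parallelism' option is another
way to discourage parallel plans for cheap queries without banning them
outright.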
Do we have a problem we don't know about that is causing it to take
this long?
What can we do? Ideally, using both processors would seem preferable.