
View performance, linked servers, query specifiying uniqueidentifier

Greetings,

I have 3 servers, all running SQL Server 2000 (8.00.818). Let's call
them parent, child1, and child2.

On parent, I create a view called item as follows:

CREATE VIEW Item AS
SELECT * FROM child1.DBChild1.dbo.Item UNION ALL
SELECT * FROM child2.DBChild2.dbo.Item

On child1 and child2, I have a table "item" with a column named "id"
datatype uniqueidentifier (and many other columns). There is a
non-clustered index created over column "id".
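For concreteness, the setup described above can be sketched as follows (only the "id" column, the index, and the linked-server names come from this post; the other columns are illustrative placeholders):

```sql
-- On child1 and child2: the partitioned slice of the data.
CREATE TABLE dbo.Item (
    id   uniqueidentifier NOT NULL,  -- from the post
    col1 int              NULL,      -- placeholder
    col2 varchar(50)      NULL       -- placeholder
)
GO
CREATE NONCLUSTERED INDEX IX_Item_id ON dbo.Item (id)
GO

-- On parent: a view unioning the remote tables over the
-- linked servers child1 and child2.
CREATE VIEW dbo.Item AS
    SELECT * FROM child1.DBChild1.dbo.Item
    UNION ALL
    SELECT * FROM child2.DBChild2.dbo.Item
GO
```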

When I connect to the parent server and select from the view

Select id, col1, col2, ... From item where id =
'280A33E0-5B61-4194-B242-0E184C46BB59'

The query is distributed to the children "correctly" (meaning it
executes entirely (including the where clause) on the children server
and one row is returned to the parent).

However, when I select based on a list of ids

Select id, col1, col2, ... From item where id in
('280A33E0-5B61-4194-B242-0E184C46BB59',
'376FA839-B48A-4599-BC67-25C6820FE105')

the plan shows that the entire contents of both children's item tables
(millions of rows each) are pulled from the children to the parent,
and THEN the where criteria are applied.

Oddly enough, if I put the list of ids I want into a temp table

select * from #bv1

id
------------------------------------
280A33E0-5B61-4194-B242-0E184C46BB59
376FA839-B48A-4599-BC67-25C6820FE105

and then

Select id, col1, col2, ... From item where id in (select * from #bv1)

the query executes with the where criteria applied on the children
databases, saving millions of rows from being copied back to the
parent server.
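Spelled out end to end, the temp-table workaround is (a sketch assembled from the fragments above; SQL Server 2000 syntax, so one INSERT per row):

```sql
-- Stage the GUIDs of interest in a local temp table.
CREATE TABLE #bv1 (id uniqueidentifier NOT NULL PRIMARY KEY)

INSERT INTO #bv1 (id) VALUES ('280A33E0-5B61-4194-B242-0E184C46BB59')
INSERT INTO #bv1 (id) VALUES ('376FA839-B48A-4599-BC67-25C6820FE105')

-- Filtering the view through a subquery let the optimizer push
-- the WHERE clause to the children; a literal IN list did not.
SELECT id, col1, col2
FROM   Item
WHERE  id IN (SELECT id FROM #bv1)
```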

So, I have a hack that works (using the temp table) for this case, but
I really don't understand the root cause. After reading Books Online,
I am somewhat confused as to why ANY of the processing is done on the
children servers. I quote:

================================================
Remote Query Execution
SQL Server attempts to delegate as much of the evaluation of a
distributed query to the SQL Command Provider as possible. An SQL
query that accesses only the remote tables stored in the provider's
data source is extracted from the original distributed query and
executed against the provider. This reduces the number of rows
returned from the provider and allows the provider to use its indexes
in evaluating the query.
Considerations that affect how much of the original distributed query
gets delegated to the SQL Command Provider include:
The dialect level supported by the SQL Command Provider
SQL Server delegates operations only if they are supported by the
specific dialect level. The dialect levels from highest to lowest are:
SQL Server, SQL-92 Entry level, ODBC core, and Jet. The higher the
dialect level, the more operations SQL Server can delegate to the
provider.

Note The SQL Server dialect level is used when the provider
corresponds to a SQL Server linked server.
Each dialect level is a superset of the lower levels. Therefore, if an
operation is delegated to a particular level, it is also delegated to
all higher levels.
Queries involving the following are never delegated to a provider and
are always evaluated locally:
bit
uniqueidentifier
================================================

This suggests to me that any query having where criteria applied to a
datatype uniqueidentifier will have the where criteria applied AFTER
data is returned from the linked server.

Any ideas on the root problem, and a better solution that gets the query
and all the where criteria applied on the remote linked server?

Thanks,
Bernie
Jul 20 '05 #1
5 Replies


[posted and mailed]

Bernie (ve*****@ix.netcom.com) writes:
So, I have a hack that works (using the temp table) for this case, but
I really don't understand the root cause. After reading online books,
in a way I am confused why ANY of the processing is done on the
children servers. I quote:
Yes, according to Books Online you are seeing the expected behaviour,
except when you submit a single GUID or use the temp table.

I played with this, and used a table that had both a GUID and an integer
key. (In fact I added the GUID column for this test.) When using the
profiler on the target server, I noticed a difference for the two queries

SELECT * FROM distrib_view WHERE acsid = 30455
SELECT * FROM distrib_view
WHERE guid = '5B6BC4B3-83F0-4749-B341-3C17C9046404'

This is what I saw for the GUID query:

declare @p1 int
set @p1=22
exec sp_prepexec @p1 output, N'@P1 uniqueidentifier',
N'SELECT Col1078,Col1073,Col1072
FROM (SELECT Tbl1003."guid" Col1072, Tbl1003."acsid" Col1073,
Tbl1003."busdate" Col1078
FROM "bos_sommar"."dbo"."myacs" Tbl1003) Qry1104
WHERE Col1072=@P1',
'5B6BC4B3-83F0-4749-B341-3C17C9046404'
select @p1

Whereas the acsid query looked like this:

declare @p1 int
set @p1=1
exec sp_prepexec @p1 output, NULL,
N'SELECT Col1077,Col1072,Col1071
FROM (SELECT Tbl1003."guid" Col1071, Tbl1003."acsid" Col1072,
Tbl1003."busdate" Col1077
FROM "bos_sommar"."dbo"."myacs" Tbl1003) Qry1103
WHERE Col1072= 30455
select @p1

So the GUID value is passed differently, and in a way that would be
difficult to do when you add more values. Then again when I use the
temp table I see two calls to sp_prepexec.
Any ideas on the root problem, and a better solution that gets the query
and all the where criteria applied on the remote linked server?


I don't really know what you consider the root problem. It appears
clear from the documentation that you have run into a dead end, so
you probably need to find a different solution to your actual
business problem. Of which you have not told us anything, so at
this stage it is difficult to come up with suggestions.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #2

Erland,

It has been many years since I was in Sweden but the memories of my
visit and the mid-summer celebrations are still strong (as was the
schnapps)!

Thank you for taking the time to experiment with my case and share
your observations.

Erland Sommarskog <es****@sommarskog.se> wrote in message
I don't really know what you consider the root problem. It appears
clear from the documentation that you have run into a dead end, so
you probably need to find a different solution to your actual
business problem. Of which you have not told us anything, so at
this stage it is difficult to come up with suggestions.


The "root problem" I was referring to was: WHY does the optimizer
create a bad plan when I specify more than one GUID in the WHERE
criteria? Since the documentation clearly states that GUIDs in WHERE
clauses will be processed on the local server, I have indeed met a
dead end. I am resigned now to be satisfied that the optimizer applies
GUID WHERE criteria on the remote server when I pass a single GUID or
a list via a subselect against a temp table. The good news is that this
is a predictable "feature" and not a bug, and I have a workaround.

As for the business need, I am working with an application in
production which, for performance reasons, was partitioned over 5
servers. It is a financial application where data is directed to a
specific server for processing. So, for all intents and purposes, the
application is not aware that this is a distributed environment.

There is however one process that looks for anomalies in the union of
all item tables. It goes through a complex selection procedure (based
on item columns that are not GUIDs), the result being a list of GUIDs
that uniquely identify rows of interest. As this application is
already deployed, we needed to find a relatively painless workaround
to this specific issue. At least now I understand why the temp table
worked, and that the poor performance will be isolated to remote
queries with GUIDs in the WHERE clause.

Cheers,
Bernie
Jul 20 '05 #3

Bernie (ve*****@ix.netcom.com) writes:
It has been many years since I was in Sweden but the memories of my
visit and the mid-summer celebrations are still strong (as was the
schnapps)!
As chance would have it, Midsummer is this weekend. I stayed away from
the celebrations, though, and just relaxed at home.
The "root problem" I was referring to was, WHY does the optimizer
create a bad plan when I specify more then one GUID in the WHERE
criteria.
I think we have the answer to that one. :-)
As for the business need, I am working with an application in
production which, for performance reasons, was partitioned over 5
servers. It is a financial application where data is directed to a
specific server for processing. So, for all intents and purposes, the
application is not aware that this is a distributed environment.
As long as the main part of the application is working well, I would
not worry. I have never worked with distributed partitioned views, but
I have the impression that if you don't do it right, you will
get more problems than you solve.
There is however one process that looks for anomalies in the union of
all item tables. It goes through a complex selection procedure (based
on item columns that are not GUIDs), the result being a list of GUIDs
that uniquely identify rows of interest. As this application is
already deployed, we needed to find a relatively painless workaround
to this specific issue. At least now I understand why the temp table
worked, and that the poor performance will be isolated to remote
queries with GUIDs in the WHERE clause.


I am not sure that I know why the temp table worked. :-) And I would
also suspect that if you throw more rows into that temp table it
may not work equally well. Say that you have 250 rows. Is it really
a good thing to make 250 remote calls to each linked server? 250 may
not be the number, but at some point I would expect it to be cheaper
to pull the tables to the local server.

If there are a lot of these queries in this process, maybe one idea is
to have an initial query that gets just the key values to the local
server, and then saves those in a table, together with information
from which server they came. Then you know which server to query for
each value.
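That suggestion might be sketched like this (the #keymap table, the servername column, and the filter are all invented for illustration; the linked-server names follow the original post):

```sql
-- One initial pass collects only the keys, tagged with the
-- server each one came from.
CREATE TABLE #keymap (
    id         uniqueidentifier NOT NULL,
    servername sysname          NOT NULL
)

INSERT INTO #keymap (id, servername)
SELECT id, 'child1' FROM child1.DBChild1.dbo.Item WHERE col1 = 42  -- placeholder criteria
UNION ALL
SELECT id, 'child2' FROM child2.DBChild2.dbo.Item WHERE col1 = 42  -- placeholder criteria

-- Later lookups then go only to the server known to hold each row.
SELECT i.id, i.col1, i.col2
FROM   child1.DBChild1.dbo.Item AS i
WHERE  i.id IN (SELECT id FROM #keymap WHERE servername = 'child1')
```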
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #4

Erland,
I would also suspect that if you throw in more rows in that temp table it
may not work equally well. Say that you have 250 rows. Is it really
a good thing to make 250 remote calls to each linked server? 250 may
not be the number, but at some point I would expect it to be cheaper
to get the tables to the local server.
I have done further testing and found that if the temp table gets large
enough, the optimizer will revert to pulling back all the data and applying
the WHERE criteria locally.

I don't know the exact crossover point, but at 16,000 rows in the temp
table the query runs remotely; at 20,000 it executes locally. This won't be
an issue for us, as our result set for this query is limited to 1000 rows. I
suspect the crossover point is a function of the ratio of the temp table
size to the size of the largest remote table. In this case, the item tables
on our two test servers are 349,073 and 194,495 rows. If I had to guess, I
would say the crossover occurs when the temp table row count >= 5% of the
size of the largest remote table included in the union.

-Bernie

Jul 20 '05 #5

Bernie Velivis (ve*****@ix.netcom.com) writes:
I have done further testing and found that if the temp table gets large
enough, the optimizer will revert to pulling back all the data and
applying the WHERE criteria locally.

I don't know the exact crossover point, but at 16,000 rows in the temp
table the query runs remotely; at 20,000 it executes locally. This won't
be an issue for us, as our result set for this query is limited to 1000
rows. I suspect the crossover point is a function of the ratio of the
temp table size to the size of the largest remote table. In this case,
the item tables on our two test servers are 349,073 and 194,495 rows. If
I had to guess, I would say the crossover occurs when the temp table
row count >= 5% of the size of the largest remote table included in the
union.


Thanks for reporting back about your findings! It seems that you don't
have to lose any more sleep over this for the moment.

...but keep in mind that the optimizer is a moving target; there are
improvements in about every service pack, and sometimes they backfire.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #6
