Bytes IT Community

Vanishing parallelism

I have a function that returns a table of information about
residential properties. The main input is a property type and
a location in grid coordinates. Because I want to get only a
certain number of properties, ordered by distance from the
location, I get the properties from a cursor ordered by distance,
and stop when the number is reached. (Not really possible to
determine the distance analytically in advance.) The cursor also
involves joins to a table of grid coordinates vs. postcodes (the
properties are identified mainly by postcode), and to a table
that maps the input property type into what types to search for.
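The setup described above might look roughly like the following multi-statement function. This is only a sketch for illustration; every object, column, and parameter name here is an assumption, not the poster's actual schema:

```sql
-- Hypothetical sketch of the function described above.
CREATE FUNCTION dbo.NearestProperties
    (@PropType varchar(10), @GridX int, @GridY int, @MaxRows int)
RETURNS @result TABLE (PropertyId int, Postcode varchar(8), Distance float)
AS
BEGIN
    DECLARE @cnt int, @id int, @pc varchar(8), @dist float
    SET @cnt = 0

    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT p.PropertyId, p.Postcode,
               SQRT(SQUARE(g.GridX - @GridX) + SQUARE(g.GridY - @GridY))
        FROM   dbo.Properties p
        JOIN   dbo.PostcodeGrid g ON g.Postcode = p.Postcode   -- grid vs. postcode
        JOIN   dbo.PropTypeMap  m ON m.SearchType = p.PropType -- type mapping
        WHERE  m.InputType = @PropType
        ORDER  BY 3                     -- ordered by distance from the location

    OPEN cur                            -- this is the step that got the parallel plan
    FETCH NEXT FROM cur INTO @id, @pc, @dist
    WHILE @@FETCH_STATUS = 0 AND @cnt < @MaxRows
    BEGIN
        INSERT @result VALUES (@id, @pc, @dist)
        SET @cnt = @cnt + 1
        FETCH NEXT FROM cur INTO @id, @pc, @dist
    END
    CLOSE cur
    DEALLOCATE cur
    RETURN
END
```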

Opening the cursor typically results in the creation of six to
eight parallel threads, and takes approx 1 second, which is about
half of the total time for the function.

Recently the main property table grew from 4 million to 6.5
million records, and suddenly the parallelism is lost. Taking
the identical code and executing it as a script gives parallelism.
Turning it into a SP that inserts into a #temp table and then
selects * from that table as the last statement also gives
parallelism. But when it's in the form of a function, there is
only one thread -- and the execution time has gone from ~2 sec
to ~8 sec. I updated the statistics on the table, but still
no parallelism.

I could turn it into a SP easily enough, but that would involve
a change to the C++ program that calls it, which takes a while
to get through the pipeline. In the meantime, is there some way
to induce the optimizer to use parallelism? It used to.

Jul 23 '05 #1
3 Replies


Because (as I understand it) UDFs are implemented as macros and
recompiled every time, I assumed they would also get a new access plan
every time. However, I learned that the access plan is cached, and I
needed to drop and recreate the function to have the plan updated to
reflect the new statistics for the table. That did the trick.
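In SQL Server 2000 terms, the fix amounts to something like this (the function and table names are hypothetical):

```sql
UPDATE STATISTICS dbo.Properties        -- refresh the stats on the grown table

DROP FUNCTION dbo.NearestProperties
GO
-- ...then re-run the original CREATE FUNCTION script; the next call
-- compiles a fresh plan against the current statistics.
```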

Jul 23 '05 #2

(ji**********@countrywide.com) writes:
> Opening the cursor typically results in the creation of six to
> eight parallel threads, and takes approx 1 second, which is about
> half of the total time for the function.
>
> Recently the main property table grew from 4 million to 6.5
> million records, and suddenly the parallelism is lost. Taking
> the identical code and executing it as a script gives parallelism.
> Turning it into a SP that inserts into a #temp table and then
> selects * from that table as the last statement also gives
> parallelism. But when it's in the form of a function, there is
> only one thread -- and the execution time has gone from ~2 sec
> to ~8 sec. I updated the statistics on the table, but still
> no parallelism.


Hm, I would not say that I have a very good answer - my problem with
parallelism is usually the opposite: I notice it when SQL Server has
been over-optimistic about the virtues of parallelism and produced a
plan that sinks the server.

But since you say that this happened when the table grew, it could be
a resource problem. Search for "parallelism" in Books Online, and
look at the third hit (SQL Server Architecture -> Relational Database
Engine Architecture -> Query Processor Architecture -> Parallel Query
Processing). It sounds funny that you get parallelism when you use an
SP, but it could be that you are just on the brink of where parallelism
does or does not get used.

So one approach could be to throw hardware at the problem.
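If it is a resource or cost-estimate issue, the two server-wide settings that gate parallel plans can at least be inspected with sp_configure (the values in the comments are the SQL Server 2000 defaults):

```sql
-- 'show advanced options' must be on to see these settings.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'max degree of parallelism'       -- 0 = use all available CPUs
EXEC sp_configure 'cost threshold for parallelism'  -- default 5; below this
                                                    -- estimated cost, only a
                                                    -- serial plan is considered
```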

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #3

(ji**********@countrywide.com) writes:
> Because (as I understand it) UDFs are implemented as macros and
> recompiled every time, I assumed they would also get a new access plan
> every time. However, I learned that the access plan is cached, and I
> needed to drop and recreate the function to have the plan updated to
> reflect the new statistics for the table. That did the trick.


Glad that you got it working!

Inline table functions are indeed macros - they are essentially
parameterised views. However, scalar UDFs and multi-statement UDFs are
just like stored procedures in this regard and have plans of their
own. And since your UDF involves a cursor, it has to be one of the
latter two.
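A minimal illustration of the difference, with hypothetical names: an inline function is a single RETURN of a SELECT and gets expanded into the calling query like a view, while a multi-statement (or scalar) function has a BEGIN...END body of its own, which is also the only place a cursor can live:

```sql
-- Inline table function: no plan of its own; the optimizer expands it
-- into the calling query, just like a view.
CREATE FUNCTION dbo.PropsInGrid (@GridX int, @GridY int)
RETURNS TABLE
AS
RETURN (SELECT p.PropertyId, p.Postcode
        FROM   dbo.Properties p
        JOIN   dbo.PostcodeGrid g ON g.Postcode = p.Postcode
        WHERE  g.GridX = @GridX AND g.GridY = @GridY)
GO
-- Scalar function: a BEGIN...END body with its own cached plan,
-- compiled and reused like a stored procedure's.
CREATE FUNCTION dbo.PropCount (@Postcode varchar(8))
RETURNS int
AS
BEGIN
    DECLARE @n int
    SELECT @n = COUNT(*) FROM dbo.Properties WHERE Postcode = @Postcode
    RETURN @n
END
```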

Instead of dropping and reloading the function, running sp_recompile
on the function, or on any of the tables it refers to, would have been
sufficient.
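For example (object names are hypothetical):

```sql
-- Either call marks the cached plan for recompilation on next use,
-- without touching the object definitions.
EXEC sp_recompile 'dbo.NearestProperties'   -- the function itself
EXEC sp_recompile 'dbo.Properties'          -- or a table it references
```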
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #4

This discussion thread is closed