Bytes | Software Development & Data Engineering Community

Weird Optimizer Choice

I'm very puzzled by the choice of NC index being made by the optimizer
in this example. I don't actually think it should use an NC index at
all.

I have:

Table: CustomerStatus_T
Single data page
19 records
Clustered Index on CustomerStatusID:

CREATE TABLE [CustomerStatus_T] (
    [CustomerStatusID] [int] NOT NULL ,
    [Name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Description] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [code] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CodeAlt] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Ordinal] [int] NULL ,
    [Default] [int] NULL ,
    [Display] [bit] NOT NULL ,
    [StatusType] [varchar] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [DateCreated] [smalldatetime] NULL ,
    [DateUpdated] [smalldatetime] NULL ,
    [DateArchived] [smalldatetime] NULL ,
    CONSTRAINT [PK_ROMS_CustomerStatus] PRIMARY KEY CLUSTERED
    (
        [CustomerStatusID]
    ) ON [PRIMARY]
) ON [PRIMARY]
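
(The page and row counts above can be confirmed with the usual tools; a quick sketch, nothing more:)

DBCC SHOWCONTIG ('CustomerStatus_T')   -- "Pages Scanned" shows the data page count
EXEC sp_spaceused 'CustomerStatus_T'   -- row count and reserved/data space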
If I run the following query, it does exactly what I expect and scans
the clustered index:

SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

and gives the following QEP and IO statistics:

|--Clustered Index Scan
   (OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[PK_ROMS_CustomerStatus]),
   WHERE:(([CustomerStatus_T].[DateArchived]=NULL AND
   [CustomerStatus_T].[StatusType]='Q') AND
   Convert([CustomerStatus_T].[Display])=1))

Table 'CustomerStatus_T'. Scan count 1, logical reads 2, physical reads 0,
read-ahead reads 0.
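
(For anyone wanting to reproduce this, a rough sketch of the session options that produce this kind of output, assuming the same table and column names as above:)

SET STATISTICS IO ON        -- prints the "Scan count ... logical reads ..." line
SET STATISTICS PROFILE ON   -- returns the executed plan operators with the result set
GO
SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'
GO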
If I now put a NC index on the statustype column:
create index ix_nci_statustype on customerstatus_t(statustype)

the query plan changes to:

SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

|--Filter(WHERE:([CustomerStatus_T].[DateArchived]=NULL AND
   Convert([CustomerStatus_T].[Display])=1))
     |--Bookmark Lookup(BOOKMARK:([Bmk1000]),
        OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T]))
          |--Index Seek(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[ix_nci_statustype]),
             SEEK:([CustomerStatus_T].[StatusType]='Q') ORDERED FORWARD)

Table 'CustomerStatus_T'. Scan count 1, logical reads 7,
physical reads 0, read-ahead reads 0.

For some bizarre reason, the optimizer thinks that an NC index lookup
on a single-page table, which ultimately costs 7 logical reads, is
cheaper than a table (or clustered index) scan of a single page. Why?
The showplan cost also shows that it expects the NC index plan to be
cheaper (which is presumably why it uses it), but even after running
UPDATE STATISTICS on the table it still chooses the same idiotic query
plan.
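
(For what it's worth, while investigating I can force the clustered-index plan back with a table hint; a sketch only, and not something I'd leave in production code:)

-- INDEX(0) forces a clustered index scan (or a table scan on a heap)
SELECT customerStatusID, [Name]
FROM CustomerStatus_T WITH (INDEX(0))
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'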

Any thoughts, or has anyone seen similar behaviour before, and can
anyone please explain it to me?

p.s. I don't actually WANT to put a NC index on this table, but I
noticed the behaviour by accident which is why I'm asking the question
:-)
Jul 20 '05 #1
With such a small table, the results mean almost nothing.

The way the optimizer works is that it first looks for any obvious query
plans, and if a well-performing query plan (based on estimations) is
found, it is used. In that case no exhaustive search is done to find an
even better plan.

And that is probably what happens here.

For any serious evaluation, make sure your table has a lot of pages, say
100 or so...
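
A rough sketch of one way to do that without touching the real table, using a hypothetical TestStatus copy:

-- Copy the rows into a heap, then keep doubling them until the table spans many pages
SELECT * INTO TestStatus FROM CustomerStatus_T

DECLARE @i int
SET @i = 1
WHILE @i <= 10                       -- 19 rows doubled 10 times is roughly 19,000 rows
BEGIN
    INSERT INTO TestStatus SELECT * FROM TestStatus
    SET @i = @i + 1
END

UPDATE STATISTICS TestStatus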

Gert-Jan
Philip Yale wrote:

I'm very puzzled by the choice of NC index being made by the optimizer
in this example. I don't actually think it should use an NC index at
all.
<snip>


--
(Please reply only to the newsgroup)
Jul 20 '05 #2
Gert-Jan Strik <so***@toomuchspamalready.nl> wrote in message news:<40***************@toomuchspamalready.nl>...
With such a small table, the results mean almost nothing.

The way the optimizer works is that it first looks for any obvious query
plans, and if a well-performing query plan (based on estimations) is
found, it is used. In that case no exhaustive search is done to find an
even better plan.

And that is probably what happens here.

For any serious evaluation, make sure your table has a lot of pages, say
100 or so...

Gert-Jan
Philip Yale wrote:

I'm very puzzled by the choice of NC index being made by the optimizer
in this example. I don't actually think it should use an NC index at
all......


Thanks for that, Gert. I can imagine that what you say is true.
However, I wonder at what point the optimizer decides that a table is
now big enough to warrant a bit more thought when selecting a plan?
Alternatively, when does a plan become expensive enough that the
optimizer looks for better ones? These are rhetorical questions, I
know, and the answer is probably hidden away somewhere where only
engineers know (or care) how to find it. Still, this sort of thing
interests me, I'm afraid (sad, isn't it?)
Jul 20 '05 #3
I have no hard evidence for it, but I have read in this newsgroup that
the limit for the obvious query plans is about 1 second (estimated
elapsed time).
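
The estimate the optimizer is working from can at least be inspected without running the query; a sketch using the standard showplan output (the TotalSubtreeCost column on the top row is the figure in question, reportedly calibrated as seconds on a reference machine):

SET SHOWPLAN_ALL ON
GO
SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'
GO
SET SHOWPLAN_ALL OFF
GO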

Gert-Jan
Philip Yale wrote:
<snip>

Thanks for that, Gert. I can imagine that what you say is true.
However, I wonder at what point the optimizer decides that a table is
now big enough to warrant a bit more thought when selecting a plan?
<snip>


--
(Please reply only to the newsgroup)
Jul 20 '05 #4
