Bytes | Developer Community
Indexes on SQL Server 7.0

I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is, not surprisingly, an issue, and I've targeted two rather
large tables to see what I can do with the indexes.

The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now the MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 million rows. There is only one
index on each table, each on the table's primary key. The Matters
field in the MatterConflicts table joins back to a table that users
access directly.

Question is, would it be beneficial to add to, or modify, the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?

It doesn't seem to me to be very beneficial to have an index that only
includes the primary key. So, for example, I'm thinking of creating an
index for MatterConflicts that includes the fields MatterConflicts,
Matters, and HitMatters.

Thoughts? Suggestions? Thanks...
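As a sketch, the statement I have in mind would be something like this (the index name is my own invention):

```sql
-- Hypothetical composite index for MatterConflicts covering the
-- primary key plus the join columns. Name and column order are
-- illustrative, not a recommendation.
CREATE INDEX IX_MatterConflicts_Keys
ON MatterConflicts (MatterConflicts, Matters, HitMatters)
```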

Mar 23 '07 #1
22 Replies
On Mar 23, 12:08 pm, "Zamdrist" <zamdr...@gmail.com> wrote:
I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.

The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows. There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.

Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?

Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.

Thoughts? Suggestions? Thanks...
Is this a reporting and analytical system or a transactional system?

Mar 23 '07 #2
On Mar 23, 12:20 pm, "Brad" <Brad.Marsh...@Teksouth.com> wrote:
On Mar 23, 12:08 pm, "Zamdrist" <zamdr...@gmail.com> wrote:
I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.
The 2 tables are described as follows:
MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts
MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits
Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows. There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.
Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?
Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.
Thoughts? Suggestions? Thanks...

Is this a reporting and analytical system or a transactional system?
Transactional

Mar 23 '07 #3
"Zamdrist" <za******@gmail.com> wrote in message
news:11**********************@n59g2000hsh.googlegroups.com...
>I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.

The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows.
Just so you know, these aren't very large tables.

However, I'd definitely agree you probably want some indexes.

That said, the question you're asking is a bit too generic. You probably need
to look at what queries you're running and optimize for those specifically.

And generally you want to find not necessarily the longest running queries,
but the ones called the most. If you have one query called 10 times a day
that runs for 10 minutes and optimize it 10%, you'll save 10 minutes a day.

If you have one query called 10,000 times a day for a minute and optimize it
10%, you'll save 1000 minutes.

>There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.

Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?

Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.

Thoughts? Suggestions? Thanks...


--
Greg Moore
SQL Server DBA Consulting
Email: sql (at) greenms.com http://www.greenms.com
Mar 23 '07 #4
On Mar 23, 12:38 pm, "Greg D. Moore \(Strider\)"
<mooregr_deletet...@greenms.com> wrote:
>
Just so you know, these aren't very large tables.

However, I'd definitely agree you probably want some indexes.

However, the question you're asking is a bit too generic. You probably need
to look at what queries you're doing and optimize for those specifically.

And generally you want to find not necessarily the longest running queries,
but the ones called the most. If you have one query called 10 times a day
that runs for 10 minutes and optimize it 10%, you'll save 10 minutes a day.

If you have one query called 10,000 times a day for a minute and optimize it
10%, you'll save 1000 minutes.
A million records isn't large? Ok.

An application I didn't write and have NO support for is accessing the
data, reading & writing to these tables. There *are* indexes but only
one each and only on the primary key field.

I have no access to the queries as the application is reading from the
tables, probably using in-line string & code queries. There are no
views or procedures used by the application (well very few and not in
this instance). For all I know it could be using "Select * From
TableName...".

There are only 4 fields in each table, so I suppose the queries
could only be as complex as four fields would allow. I know that one
of the fields ties back into a more widely used table that does have
more indexes, ones that appear to be useful.

Thanks

Mar 23 '07 #5
See inline

Zamdrist wrote:
>
I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.
Don't expect SQL Server 2000 or 2005 to be any faster if you keep the
current table and index structures... The fact you are using SQL Server
7.0 is no reason to have poor performance.
The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows. There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.
Those queries would most likely benefit from an index on this column.
But it depends on the selectivity of the column. If it is highly
selective (many different values), then an index is probably very
useful.
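A quick way to estimate that selectivity (a sketch using the table and column names from your post):

```sql
-- Ratio of distinct values to total rows: the closer to 1, the more
-- selective the column, and the more useful a nonclustered index on it.
SELECT COUNT(DISTINCT Matters) AS distinct_matters,
       COUNT(*)                AS total_rows
FROM MatterConflicts
```

If an index already exists on the column, DBCC SHOW_STATISTICS on that table and index will show you the stored density figures as well.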
Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?
It would surely be worth a try. Just keep in mind that there is a cost
associated with creating an additional index, because it will require
disk space, memory (for caching) and can affect concurrency because of
added blocking. Blocking could really hurt an OLTP system.

HTH,
Gert-Jan
Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.

Thoughts? Suggestions? Thanks...
Mar 23 '07 #6
"Zamdrist" <za******@gmail.com> wrote in message
news:11**********************@l75g2000hse.googlegroups.com...
On Mar 23, 12:38 pm, "Greg D. Moore \(Strider\)"
<mooregr_deletet...@greenms.com> wrote:
>>
Just so you know, these aren't very large tables.

However, I'd definitely agree you probably want some indexes.

However, the question you're asking is a bit too generic. You probably
need
to look at what queries you're doing and optimize for those specifically.

And generally you want to find not necessarily the longest running
queries,
but the ones called the most. If you have one query called 10 times a
day
that runs for 10 minutes and optimize it 10%, you'll save 10 minutes a
day.

If you have one query called 10,000 times a day for a minute and optimize
it
10%, you'll save 1000 minutes.

A million records isn't large? Ok.
Nah, rather trivial these days. ;-)

>
An application I didn't write and have NO support for is accessing the
data, reading & writing to these tables. There *are* indexes but only
one each and only on the primary key field.
If you have no access to these tables, you can't put indexes on them. So
perhaps I misunderstand.

>
I have no access to the queries as the application is reading from the
tables, probably using in-line string & code queries. There are no
views or procedures used by the application (well very few and not in
this instance). For all I know it could be using "Select * From
TableName...".
Ack.

In any case, you probably CAN get this information via profiler and looking
at the queries as they come through the machine.

Also, there are some tools (the only source I know of is the SQL Server
Magazine website, so they're copyrighted, I believe) to follow wait
statistics, which can be VERY powerful for finding out where your application is
doing a lot of querying.

You can also try (though generally I don't find it useful) the Index Wizard
in EM.
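For a rough look at wait statistics without those tools, something like the following may work (DBCC SQLPERF with the WAITSTATS option; it's undocumented, but I believe it is present in 7.0):

```sql
-- Clear the accumulated wait statistics, let the workload run for a
-- representative period, then sample the counters.
DBCC SQLPERF (WAITSTATS, CLEAR)
-- ... let the application run for a while ...
DBCC SQLPERF (WAITSTATS)
```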
>
There are only 4 fields in each table, so I suppose the query(ies)
could only so complex as four fields would allow for.
And any joins.

I know what one
of the fields ties back into a more widely used table that does have
more indexes, ones that appear to be useful.

Thanks


--
Greg Moore
SQL Server DBA Consulting
Email: sql (at) greenms.com http://www.greenms.com
Mar 23 '07 #7
On Mar 23, 1:59 pm, "Greg D. Moore \(Strider\)"
<mooregr_deletet...@greenms.com> wrote:
>
If you have no access to these tables, you can't put indexes on it. So
perhaps I misunderstand.
No, I have access to all the objects via Enterprise Manager; I just
don't have access to the application code to see what it is doing.
There are, however, only a handful of queries/procedures, and over 70
tables, so I fear the worst.
>
In any case, you probably CAN get this information via profiler and looking
at the queries as they come through the machine.

Also, there's some tools (only source I know of is via the SQL Server
Magazine website so they're copyrighted I believe) to follow wait
statistics, which can be VERY powerful to find out where your application is
doing a lot of querying.

You can also try (though generally I don't find it useful) the Index Wizard
in EM.
Thanks Greg.

Mar 23 '07 #8

"Greg D. Moore (Strider)" <mo****************@greenms.com> wrote in message
news:Jk*****************@newsread3.news.pas.earthlink.net...
"Zamdrist" <za******@gmail.com> wrote in message
news:11**********************@l75g2000hse.googlegroups.com...
On Mar 23, 12:38 pm, "Greg D. Moore \(Strider\)"
<mooregr_deletet...@greenms.com> wrote:
>
Just so you know, these aren't very large tables.

However, I'd definitely agree you probably want some indexes.

However, the question you're asking is a bit too generic. You probably
need
to look at what queries you're doing and optimize for those
specifically.
>
And generally you want to find not necessarily the longest running
queries,
but the ones called the most. If you have one query called 10 times a
day
that runs for 10 minutes and optimize it 10%, you'll save 10 minutes a
day.

If you have one query called 10,000 times a day for a minute and
optimize
it
10%, you'll save 1000 minutes.
A million records isn't large? Ok.

Nah, rather trivial these days. ;-)
Does "trivial" mean easy or unimportant?
Mar 24 '07 #9
"David Cressey" <cr*******@verizon.net> wrote in message
news:s2eNh.248$E46.187@trndny09...
>
A million records isn't large? Ok.

Nah, rather trivial these days. ;-)

Does "trivial" mean easy or unimportant?
No, in this case it means rather small, which impacts how you approach
maintenance issues. And to some extent how you solve problems.

For example, for some databases, it may be "simpler" to simply throw more
memory at the problem. For a database 10x the size, more memory might not
even make a dent.

>
--
Greg Moore
SQL Server DBA Consulting
Email: sql (at) greenms.com http://www.greenms.com
Mar 24 '07 #10

"Greg D. Moore (Strider)" <mo****************@greenms.com> wrote in message
news:4f******************@newsread3.news.pas.earthlink.net...
"David Cressey" <cr*******@verizon.net> wrote in message
news:s2eNh.248$E46.187@trndny09...
A million records isn't large? Ok.

Nah, rather trivial these days. ;-)
Does "trivial" mean easy or unimportant?

No, in this case it means rather small which impacts how you approach
maintainence issues. And to some extent how you solve problems.
I'd suggest that that's a misuse of the word "trivial", but that you might
have meant "small enough to be unimportant".

For example, for some databases, it may be "simpler" to simply through
more
memory at the problem. For a database 10x the size, more memory might not
even make a dent.
This is because a lot of the work involved in sorting and searching expands
non-linearly with regard to volume of data (row cardinality in this case).
In every database I've worked on, the difference between a table scan and
an index lookup has resulted in a "nontrivial" performance difference with a
million rows in the table.

But it depends on what you mean by "non trivial", I suppose.
Mar 24 '07 #11
Hi

Greg had very good advice that you may have missed if you have never used
SQL Profiler. SQL Profiler will show you the actual SQL code that is being
executed by the application (if it is inline SQL) or what Stored Procedures
are being called.

All this might be of limited usefulness if you don't have access to
programmers who can change poorly written code in the application. I am
thinking someone is perceiving "the application is slow" and has decided to
kick the DB guy, when in fact the problem is with the application. Perhaps
the app is marching through ADO Recordsets to join data when it could be
done using basic SQL functionality.

For your original question: if you have

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

MatterConflicts.MatterConflicts should be the primary key of MatterConflicts
(I think you said it was).

MatterConflictHits.MatterConflictHits should be the primary key of
MatterConflictHits (I think you said it was).

Add an index on MatterConflictHits.MatterConflicts; this should speed up join
operations between the two tables.
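In other words, something along these lines (the index name is mine):

```sql
-- Nonclustered index on the foreign-key column used to join back
-- to MatterConflicts.
CREATE INDEX IX_MatterConflictHits_MatterConflicts
ON MatterConflictHits (MatterConflicts)
```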

Best of Luck.

-Dick Christoph
"Zamdrist" <za******@gmail.com> wrote in message
news:11*********************@y66g2000hsf.googlegroups.com...
On Mar 23, 1:59 pm, "Greg D. Moore \(Strider\)"
<mooregr_deletet...@greenms.com> wrote:
>>
If you have no access to these tables, you can't put indexes on it. So
perhaps I misunderstand.

No I have access to all the objects via Enterprise Manager, I just
don't have access to the application code to see what it is doing,
there are however only a handful of queries/procedures, and over 70+
tables, so I fear the worst.
>>
In any case, you probably CAN get this information via profiler and
looking
at the queries as they come through the machine.

Also, there's some tools (only source I know of is via the SQL Server
Magazine website so they're copyrighted I believe) to follow wait
statistics, which can be VERY powerful to find out where your application
is
doing a lot of querying.

You can also try (though generally I don't find it useful) the Index
Wizard
in EM.

Thanks Greg.

Mar 26 '07 #12
On Mar 23, 11:08 pm, "Zamdrist" <zamdr...@gmail.com> wrote:
I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.

The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows. There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.

Drop the index on the primary key and create an index on the Matters field
instead, since this is the column users are using to access data. Also,
since these two tables are related, it will be worthwhile to consider
an index on the joining column MatterConflicts in both tables.

Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?

Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.
No. Don't use composite indexes, as they are seldom useful.
Thoughts? Suggestions? Thanks...
By the way, these tables are fairly large, and you are right to have
started to address the performance issue.
Mar 27 '07 #13
On 26 Mar 2007 21:29:32 -0700, ot*******@yahoo.com wrote:
>Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.

no. Don't use composite indexs as they seldom are useful.
Hi othellomy,

Do you have any source for that sweeping statement?

AFAIK, composite indexes can be VERY useful. For queries that filter or
join on two or more columns, a composite index is much better than
separate indexes on the individual columns.
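To make that concrete with the tables from this thread (index name and filter values are invented for illustration):

```sql
-- One composite index can satisfy both predicates with a single seek,
-- where two single-column indexes typically cannot.
CREATE INDEX IX_MatterConflicts_Matters_HitMatters
ON MatterConflicts (Matters, HitMatters)

SELECT MatterConflicts
FROM MatterConflicts
WHERE Matters = 'M0000000000001'      -- invented value
  AND HitMatters = 'M0000000000002'  -- invented value
```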

--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Mar 27 '07 #14
Thank you all for the advice and suggestions!

I did run Profiler and found that my suspicions are correct: all
queries are being run via in-line text statements; no application
procedures exist, much less are being used.

Quite a bit of usage of sp_cursorprepare, execute, close, etc. also.
I don't know if this is normal or not.

So yeah, basically I'm up a river with no paddle. We have no access to
the application code or the programmer, nor do we even have a support
license agreement anymore with the company who wrote the application.

I did end up archiving off most of the data from the tables in
question and wrote a custom interface to that using ASP.Net. Works
like a charm, no indexes whatsoever. Archiving and subsequently
deleting the data from the production tables, and rebuilding existing
indexes had little to no effect on performance unfortunately.

Again...up a river with no paddle, lol.

Thanks

Mar 29 '07 #15
On Mar 30, 12:48 am, "Zamdrist" <zamdr...@gmail.com> wrote:
>
I did end up archiving off most of the data from the tables in
question and wrote a custom interface to that using ASP.Net. Works
like a charm, no indexes whatsoever.
Does that mean you have no I/O problems if you use ASP.NET?

Archiving and subsequently
deleting the data from the production tables, and rebuilding existing
indexes had little to no effect on performance unfortunately.
Again...up a river with no paddle, lol.
The indexes you have are useless; therefore, rebuilding them will not
make any difference. You need to identify which columns (preferably
one, at most two) you need your clustered indexes on. Hopefully the
system will use those indexes.
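A sketch of what that could look like (this assumes the clustered index is backed by a primary key constraint named PK_MatterConflicts; check the real constraint name first, and expect the rebuild to take a while on tables this size):

```sql
-- Move the primary key to a nonclustered index, then cluster on the
-- column used for joins. Constraint and index names are assumptions;
-- dropping the constraint will fail if foreign keys reference it.
ALTER TABLE MatterConflicts DROP CONSTRAINT PK_MatterConflicts
ALTER TABLE MatterConflicts
    ADD CONSTRAINT PK_MatterConflicts
    PRIMARY KEY NONCLUSTERED (MatterConflicts)
CREATE CLUSTERED INDEX IX_MatterConflicts_Matters
    ON MatterConflicts (Matters)
```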
Mar 30 '07 #16
On Mar 30, 4:54 am, othell...@yahoo.com wrote:
On Mar 30, 12:48 am, "Zamdrist" <zamdr...@gmail.com> wrote:
I did end up archiving off most of the data from the tables in
question and wrote a custom interface to that using ASP.Net. Works
like a charm, no indexes whatsoever.

Does that mean who have no io problems if you use ASP.NET?

Archiving and subsequently
deleting the data from the production tables, and rebuilding existing
indexes had little to no effect on performance unfortunately.
Again...up a river with no paddle, lol.

The indexes you have are useless. Therefore, rebuilding them will not
make any difference. You need to identify what columns (preferably
one, at most 2) you need your clustered indexes for. Hopefully the
system will use those indexes.
I copied (and subsequently deleted) from production a large amount of
the data to a different table and server, and wrote an interface to
that data in ASP.Net so the users, if needed, could look back on older,
infrequently used data. Accessing that data works great.

I could build more intelligent indexes for the data in production,
although I doubt the application would behave any differently (faster)
as I've verified it uses no views or stored procedures...it only uses
in-line text based queries...Select fields From table Where...and so
on.

I have no access to the application code to change this behavior
unfortunately.

Mar 30 '07 #17
On Mar 30, 5:57 pm, "Zamdrist" <zamdr...@gmail.com> wrote:
On Mar 30, 4:54 am, othell...@yahoo.com wrote:


On Mar 30, 12:48 am, "Zamdrist" <zamdr...@gmail.com> wrote:
I did end up archiving off most of the data from the tables in
question and wrote a custom interface to that using ASP.Net. Works
like a charm, no indexes whatsoever.
Does that mean who have no io problems if you use ASP.NET?
Archiving and subsequently
deleting the data from the production tables, and rebuilding existing
indexes had little to no effect on performance unfortunately.
Again...up a river with no paddle, lol.
The indexes you have are useless. Therefore, rebuilding them will not
make any difference. You need to identify what columns (preferably
one, at most 2) you need your clustered indexes for. Hopefully the
system will use those indexes.

I copied (and subsequently deleted) from production a large amount of
the data to a different table and server, and wrote an interface to
that data in ASP.Net so the users, if needed could look back on older
infrequently used data. Accessing that data works great.

I could build more intelligent indexes for the data in production,
although I doubt the application would behave any differently (faster)
as I've verified it uses no views or stored procedures...it only uses
in-line text based queries...Select fields From table Where...and so
on.

I have no access to the application code to change this behavior
unfortunately.
You might consider creating a new clustered index on a column that is
frequently used to join with other tables. Copy and paste your code into QA
and run it with showplan on and NOEXEC on, and find out whether it is doing a
table scan or an index seek.
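In QA that is along the lines of the following sketch (the pasted query is just an example built from the tables in this thread):

```sql
-- Show the estimated plan without executing the statement.
SET SHOWPLAN_TEXT ON
GO
-- paste the query captured from Profiler here, e.g.:
SELECT MC.MatterConflicts
FROM MatterConflicts MC
INNER JOIN MatterConflictHits MCH
    ON MCH.MatterConflicts = MC.MatterConflicts
GO
SET SHOWPLAN_TEXT OFF
GO
```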

Mar 30 '07 #18
On Mar 30, 6:11 am, othell...@yahoo.com wrote:
>
You might consider creating a new clustered index on a column that is
frequently used to join with other tables. Copy paste your code to QA
and run a showplan with no exec, on and find out if it is doing a
table scan or index search.
There is an index already on each of the two tables in question, one
each on the primary key only, and it is clustered.

When I run my own query with Show Execution Plan on, I see that there
are several Clustered Index Scans, Index Scans & Parallelism/
Repartition operations.

But this is my code, running in QA. I have no control over how the
application accesses the data.

Mar 30 '07 #19
Zamdrist wrote:
On Mar 30, 6:11 am, othell...@yahoo.com wrote:
>You might consider creating a new clustered index on a column that is
frequently used to join with other tables. Copy paste your code to QA
and run a showplan with no exec, on and find out if it is doing a
table scan or index search.

There is an index already on each of the two tables in question, one
each on the primary key only, and it is clustered.

When I run my own query with Show Execution Plan on, I see that there
are several Clustered Index Scans, Index Scans & Parallelism/
Repartition operations.

But this is my code, running in QA. I have no control over how the
application accesses the data.
If your query is the same as what the application runs, except for
specific values being plugged in here and there, then the application
will generally get the same execution plan that you do.

Does QA run your query reasonably quickly? Based on the Profiler
trace, does the application seem slow because it runs slow queries,
or because it runs an inefficiently large number of queries which
are reasonably fast individually?
Mar 30 '07 #20
On Mar 30, 8:31 am, Ed Murphy <emurph...@socal.rr.com> wrote:
>
If your query is the same as what the application runs, except for
specific values being plugged in here and there, then the application
will generally get the same execution plan that you do.

Does QA run your query reasonably quickly? Based on the Profiler
trace, does the application seem slow because it runs slow queries,
or because it runs an inefficiently large number of queries which
are reasonably fast individually?
Here is the query I ran, it returns 70.8K rows in 2 seconds via QA:

Select M.MatterID From Matters M
Inner Join MatterConflicts MC On MC.Matters = M.Matters
Inner Join Matters M2 On M2.Matters = MC.HitMatters
Inner Join MatterConflictHits MCH On MCH.MatterConflicts = MC.MatterConflicts

Now the application is doing all kinds of things, probably more
complicated than my query above. Honestly I don't know enough about
Profiler to isolate one operation. I did check it out and there are
many, many sp_cursorexecute, prepare and close statements, along with
many select fields from tables queries.

I dunno, I doubt I will be able to make any sort of significant impact
on performance without access to the code.

FYI: This program is a legal case management software called Prolaw by
Thomson-Elite...it *sucks* royally! LOL.

Mar 30 '07 #21
On Mar 30, 8:51 pm, "Zamdrist" <zamdr...@gmail.com> wrote:
On Mar 30, 8:31 am, Ed Murphy <emurph...@socal.rr.com> wrote:
If your query is the same as what the application runs, except for
specific values being plugged in here and there, then the application
will generally get the same execution plan that you do.
Does QA run your query reasonably quickly? Based on the Profiler
trace, does the application seem slow because it runs slow queries,
or because it runs an inefficiently large number of queries which
are reasonably fast individually?

Here is the query I ran, it returns 70.8K rows in 2 seconds via QA:

Select M.MatterID From Matters M
Inner Join MatterConflicts MC On MC.Matters = M.Matters
Inner Join Matters M2 On M2.Matters = MC.HitMatters
Inner Join MatterConflictHits MCH On MCH.MatterConflicts =
MC.MatterConflicts

Now the application is doing all kinds of things, probably more
complicated than my query above. Honestly I don't know enough about
Profiler to isolate one operation. I did check it out and there are
many, many sp_cursorexecute, prepare and close statements, along with
many select fields from tables queries.

I dunno, I doubt I will be able to make any sort of significant impact
on performance without access to the code.

FYI: This program is a legal case management software called Prolaw by
Thomson-Elite...it *sucks* royally! LOL.
Cursors are slow. If it is opening cursors on tables with a million rows,
then I am afraid there is not much you can do. Opening a cursor on a table
with a million records, even with indexes, can be slow, and I don't think
the application developers had performance in mind when they wrote the
code initially. Besides, handling most of the processing on the client
side without stored procedures will also slow things down considerably,
especially for large tables.

Apr 2 '07 #22
DH
"Zamdrist" <za******@gmail.com> wrote in message
news:11**********************@n59g2000hsh.googlegroups.com...
>I am tasked with maintaining a large database still on SQL Server 7.0.
Performance is an issue not surprisingly and I've targeted two rather
large tables to see what I can do with the indexes.

The 2 tables are described as follows:

MatterConflicts:
Fields: MatterConflicts varchar(16), Matters varchar(16), HitMatters
varchar(16), IsInclude varchar(1)
Index: MatterConflicts

MatterConflictHits:
Fields: MatterConflictHits varchar(16), MatterConflicts varchar(16),
ColumnLabel varchar(40), Hit varchar(100)
Index: MatterConflictHits

Now MatterConflicts row count is approaching 500K and
MatterConflictHits is approaching 1 Million rows. There are only one
index on each table, each for the table's primary key. The Matters
field in MatterConflicts table joins back with a table that users
access directly.

Question is, would it be beneficial to add, or modify the existing
indexes for these tables to include both the primary and foreign keys,
as well as an additional field?

Doesn't seem to be to be very beneficial to have an index that only
includes the primary key. So for example I'm thinking of creating an
index for MatterConflicts that includes the fields: MatterConflicts,
Matters, and HitMatters.

Thoughts? Suggestions? Thanks...
What are you using for hardware? Something from the year 1999? Do you have
at least 512MB of RAM in it? With 512MB of RAM, unless there are a lot more
tolerably large tables involved, your entire database should fit into main
memory, which is a good place to have it. If you don't already have enough
memory on the box to comfortably fit SQL Server, the OS and any other
ancillary processes into main memory, I'd recommend you buy some. Buy a
gigabyte. In fact, if it's a decent quality server, buy a gigabyte in 4 x
256MB cards, which will usually allow maximum memory interleaving on good
hardware. Memory, nowadays, costs very little compared to your time.

In terms of contemporary Wintel servers, your database really isn't all that
large. Guessing at something like 75% use of the varchar() fields involved,
you've got something less than 200MB in those two tables. Can you sneak a new
server in under the database? Would the app run against a SQL Server 2000
back end?

I've got a two- or three-year-old, 4 x 700MHz CPU server here with 16GB of
memory, and if I run a "select count(*) from <table> where <check something
that's not indexed>" on a 9 million row, 5GB table, it takes 27 seconds the
first time I execute that command, and the second time I run it, it takes 4
seconds (the second time, the table is in memory). You don't need that much
server to get similar performance.


Apr 16 '07 #23

This discussion thread is closed

Replies have been disabled for this discussion.
