
Database performance degrading (again)

We have an application with a SQL Server 2000 back end that is fairly
database intensive -- lots of fairly frequent queries, inserts, updates
-- the gamut. The application does not make use of performance hogs
like cursors, but I know there are lots of ways the application could
be made more efficient database-wise. The server code is running VB6
of all things, using COM+ database interfaces. There are some
clustered and non-clustered indexes defined, but I'm pretty sure
there's room for improvement there as well.

Some tables have grown into the millions of records in recent months,
and performance of the application slowed to a crawl. Optimizing the
database helped a little, but not much. We know that several million
records in a table is a lot, but one would think that SQL Server should
be able to still handle that pretty well. We do have plans to archive
a lot of old data, but in the meantime, we were hurting for a quick
fix.

So we threw hardware at the problem, and transferred the database to a
new, more powerful server. The performance improvement was dramatic.
Transactions were many many times faster than before. Without
implementing any of the other more difficult performance improvements
we have planned, we suddenly became minor heroes. :-)

Well, the honeymoon seems to be somewhat over. While performance is
still much better than when the database resided on our old server,
performance appears to have degraded rather significantly again.
Performance is also not significantly better with fewer users on our
system. What the heck?

Yes, the database continues to grow unchecked as we haven't quite got
an archive utility in place yet, but the growth is relatively gradual,
so you wouldn't think that would be the issue. The database is
optimized on a weekly basis, and our web and database servers are both
rebooted monthly. Our database administrators don't seem to have
answers, so I appeal to the experts reading this forum to maybe offer
some clues.

Prior to posting I did do a fair amount of research to see what people
have suggested in similar situations, and ran this by our database
admin. Here's what I can tell you from this research:

- Statistics are updated weekly along with whatever else the database
optimization does

- We do not use the "autoshrink" option for automatically shrinking log
files

- Regarding preallocating space and setting growth factors for log and
data files to minimize time spent allocating disk space, our admin
says, "We do allow database files to grow unchecked, but we do monitor
growth and manually expand as needed. Autogrow is typically set in
50MB increments or less as the amount of time it takes to expand this
amount is negligible."

- Transaction logging is turned on, and data and log devices are on
separate physical disks

- The database server is monitored to ensure no process is hogging all
of the CPU, I/O or memory
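
For reference, the settings above can be checked or set with roughly the
following T-SQL (just a sketch; the database and file names are placeholders,
not our real ones):

    -- Is autoshrink (or auto update statistics) turned on for the database?
    SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink')           AS IsAutoShrink,
           DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics') AS IsAutoUpdateStats

    -- Current size and growth settings for the data and log files
    EXEC sp_helpfile

    -- Fixed 50MB growth increment instead of a percentage
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 50MB)

    -- Refresh statistics on all user tables (roughly what the weekly optimization job does)
    EXEC sp_updatestats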

Jul 23 '05 #1
> We know that several million records in a table is a lot
Not really. It all depends on your hardware and how your application
has been written. I've used tables that are in the tens of millions of
records (even close to 100 million) without encountering performance
problems.

Have you run Profiler to see where the slowness is originating? A good
idea would be to run profiler, logging to a table, for a day. Use that
data to find the longest running SQL statements as well as the most
commonly used SQL statements. Concentrate your efforts on improving
those queries that appear in both categories - i.e. SQL statements that
take a long time to run and which are frequently used. If you check out
Microsoft's website I'm sure that you can find a whitepaper on using
Profiler to optimize a database application.
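
Once the trace has been logged to a table, pulling out those two lists can be
as simple as something like the following (a sketch only -- it assumes the
trace was saved to a table called dbo.TraceLog with the standard
TextData/Duration columns; adjust to whatever you actually capture):

    -- Statements with the highest total duration (Duration is in milliseconds)
    SELECT TOP 20
           CAST(TextData AS NVARCHAR(4000)) AS SqlText,
           COUNT(*)                         AS Executions,
           SUM(Duration)                    AS TotalDurationMs,
           AVG(Duration)                    AS AvgDurationMs
    FROM   dbo.TraceLog
    WHERE  TextData IS NOT NULL
    GROUP BY CAST(TextData AS NVARCHAR(4000))
    ORDER BY TotalDurationMs DESC

    -- Most frequently executed statements
    SELECT TOP 20
           CAST(TextData AS NVARCHAR(4000)) AS SqlText,
           COUNT(*)                         AS Executions
    FROM   dbo.TraceLog
    WHERE  TextData IS NOT NULL
    GROUP BY CAST(TextData AS NVARCHAR(4000))
    ORDER BY Executions DESC

Statements that appear near the top of both lists are the ones worth tuning
first.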

Another thing to look at is how the front end application is accessing
the server. Is it making a lot of unnecessary round trips to the SQL
Server? Network latency can also play into this a lot. If your frontend
application is scrolling through records in your table one at a time,
retrieving each one individually, it doesn't matter how fast your
database is, the application will be slow.
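
To make that concrete with a contrived example (table and parameter names are
invented): the first pattern below, issued once per row from the client, pays
the network latency for every single row; the second returns the same data in
one round trip:

    -- Row-by-row: the client loops and sends this once per record
    SELECT OrderID, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  OrderID = @CurrentOrderID

    -- Set-based: one call returns everything the screen actually needs
    SELECT OrderID, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID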

This is one problem with throwing hardware at a database issue. The
problem that is causing the slowness will almost always come up again
until you fix the part of the database/application that is causing the
issue in the first place.

Hopefully these ideas will yield some improvements for you.

Good luck,
-Tom.

Jul 23 '05 #2
(te*****@hotmail.com) writes:
> Some tables have grown into the millions of records in recent months,
> and performance of the application slowed to a crawl. Optimizing the
> database helped a little, but not much. We know that several million
> records in a table is a lot, but one would think that SQL Server should
> be able to still handle that pretty well.
Well, let me put it this way: a table with a couple of million rows is
enough to make SQL Server slower than a snail if you have poor indexing
or malformed queries. But as Thomas said: you can have over 100 million
rows and excellent performance. It all depends on how well the queries
and the indexes match.
> So we threw hardware at the problem, and transferred the database to a
> new, more powerful server. The performance improvement was dramatic.
> Transactions were many many times faster than before.
Since SQL Server uses a cost-based optimizer, the same query with the
same indexes can give different query plans, given only a small difference
in statistics. If there was such a drastic change, it seems that the
tables were turned in favour of better plans.
> Well, the honeymoon seems to be somewhat over. While performance is
> still much better than when the database resided on our old server,
> performance appears to have degraded rather significantly again.
> Performance is also not significantly better with fewer users on our
> system. What the heck?
And you ask us who don't even see the system!

As Thomas said, use the Profiler to track down slow-running queries,
and address them one by one, either by rewriting them or by modifying
indexes. Or take a shortcut: save a day's workload, and feed it to the
Index Tuning Wizard.
> - Statistics are updated weekly along with whatever else the database
> optimization does


Nevertheless, it can be a good idea to run DBCC SHOWCONTIG on some
big tables, to check for fragmentation.
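
For example, something like the following (the table name is only an
example); in the output, look mainly at Scan Density (should be close
to 100%) and Logical Scan Fragmentation (low values are healthy):

    -- Fragmentation report for all indexes on one large table
    DBCC SHOWCONTIG ('BigTable') WITH TABLERESULTS, ALL_INDEXES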
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #3
Thanks for your replies, Tom and Erland.

Yes, I know that several million records in a table is not necessarily
a lot, especially if the application, database, indexes, etc. are all
well designed. I'll concede that that's probably not the case here!

And yes, I'm sure that network latency contributes to our problems, as
there are a fair amount of round trips going on that probably wouldn't
be necessary if the application were better designed. The consistency
of the bad performance seems to suggest that the degradation in
performance has more to do with the database, however.

I like the Profiler suggestion, which we've been talking about doing
anyway. It's too bad that we didn't do that right after moving to this
new server so we could maybe pinpoint what has changed to make it run
slower.

I also like the suggestion of running DBCC SHOWCONTIG on the bigger
tables to see if there's significant fragmentation even though we've
been optimizing every week.

I also appreciate Erland's comment that it's difficult for someone on
the outside to know what's really going on. I just figured that there
*might* be fairly common/general things to look for/do in the situation
where 1) performance was awful on an old server, 2) performance was
vastly improved when the database was moved to a new server, and 3)
performance quickly degraded again on the new server.

Yes, I know that throwing hardware at the problem is not a silver
bullet. Our old server needed replacing anyway, so we figured it was
worth doing, at least as a stopgap measure to improve performance,
while we were still working on the more difficult fixes. And while I
am not surprised that some slowness would arise again as Tom suggests,
I am rather surprised that the degradation was as significant as it was
in a relatively few short months since the new server was put into
place. In fact, my theory is that reformatting the new server and
starting completely from scratch would suddenly give us improved
performance again -- just a hunch.

One thing that confuses me a little in relation to Erland's comments
about SQL Server's cost-based optimizer. I've read a little about
this, how SQL Server continually tries to find better execution plans,
etc. But wouldn't that lead to *improved* performance instead of worse
performance? Or is it possible that SQL Server will occasionally make
the "wrong choice" in terms of finding execution plans that offer
better performance?

Thanks much again for your suggestions. They're probably about as good
as I can expect given the lack of specific information that I can offer
on this problem.

Jul 23 '05 #4
> One thing that confuses me a little in relation to Erland's comments
> about SQL Server's cost-based optimizer. I've read a little about
> this, how SQL Server continually tries to find better execution plans,
> etc. But wouldn't that lead to *improved* performance instead of worse
> performance?
Indexes are the key to an efficient execution plan. Statistics on
non-indexed columns can help too (but to a lesser extent) and SQL Server can
generate column statistics automatically. However, indexes are entirely up
to you because you need to balance the cost of maintaining indexes with data
retrieval performance benefits. This is where Profiler and execution plan
analysis can help you out.

When you start with a well-designed database and application, the SQL Server
cost-based optimizer usually does a pretty good job of adapting to changes
in data volume and cardinality. When you don't have appropriate indexes,
the optimizer can't do much to help you out and performance degrades with
data volume. This appears to be the case in your environment based on your
description of the problem.
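
As a purely illustrative sketch (hypothetical table, columns and query): once
a frequent, slow statement has been identified, the typical fix is an index
whose leading columns match the WHERE clause so the optimizer can seek rather
than scan:

    -- Hypothetical hot query found in the trace:
    --   SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @Cust AND Status = 'OPEN'
    -- A matching non-clustered index:
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Status
    ON dbo.Orders (CustomerID, Status)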
> Or is it possible that SQL Server will occasionally make
> the "wrong choice" in terms of finding execution plans that offer
> better performance?
Yes, it is possible that the optimizer makes the wrong choice. When you find
an execution plan that isn't using indexes effectively, first ensure stats
are up-to-date and expressions are sargable. You can specify hints to
override the cost-based optimizer's choices, but this should be done only
as a last resort.
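
A quick illustration of the sargable point and of an index hint, with
invented table and index names:

    -- Non-sargable: the function on the column rules out an index seek on OrderDate
    SELECT OrderID FROM dbo.Orders WHERE DATEPART(yyyy, OrderDate) = 2005

    -- Sargable: a range predicate on the bare column can use an index on OrderDate
    SELECT OrderID FROM dbo.Orders
    WHERE OrderDate >= '20050101' AND OrderDate < '20060101'

    -- Hint, only as a last resort: forces a particular index regardless of the optimizer's estimate
    SELECT OrderID FROM dbo.Orders WITH (INDEX(IX_Orders_OrderDate))
    WHERE OrderDate >= '20050101' AND OrderDate < '20060101'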

--
Hope this helps.

Dan Guzman
SQL Server MVP

Jul 23 '05 #5
(te*****@hotmail.com) writes:
> I also appreciate Erland's comment that it's difficult for someone on
> the outside to know what's really going on. I just figured that there
> *might* be fairly common/general things to look for/do in the situation
> where 1) performance was awful on an old server, 2) performance was
> vastly improved when the database was moved to a new server, and 3)
> performance quickly degraded again on the new server.
So how did you migrate the database? If you now say "Copy Database
Wizard" or any other method that includes a rebuild of the database,
I'm tempted to place my bets on fragmentation as the main issue. To
wit, if you move the database in this way, you get quite a well-
defragmented database at the other end.

If you migrated the database by means of backup/restore or detach/attach,
you also copied the fragmentation. In that case... I could go into
speculation, but it would not really address the actual problem.
> One thing that confuses me a little in relation to Erland's comments
> about SQL Server's cost-based optimizer. I've read a little about
> this, how SQL Server continually tries to find better execution plans,
> etc. But wouldn't that lead to *improved* performance instead of worse
> performance? Or is it possible that SQL Server will occasionally make
> the "wrong choice" in terms of finding execution plans that offer
> better performance?


To say that SQL Server constantly tries to find better query plans is
perhaps not really correct. (That's more applicable to the SQL Server
dev team.) But SQL Server does occasionally build a new query plan,
because statistics have changed - or because the query plan has fallen
out of cache.

And plans are built on estimates from statistics, and they are indeed
a gamble. Sometimes the optimizer makes very bad picks for one reason
or another. This also applies across SQL Server versions: MS does make
tweaks to the optimizer which in many cases are beneficial, but which
at some sites can backfire.
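
If you want to test whether stale statistics or a stale cached plan is behind
one of the slow queries, a rough experiment (the table name is just a
placeholder) is to refresh the statistics, force a recompile, and compare
timings before and after:

    -- Rebuild statistics for the table with a full scan instead of sampling
    UPDATE STATISTICS dbo.BigTable WITH FULLSCAN

    -- Mark cached plans that reference the table for recompilation on next use
    EXEC sp_recompile 'dbo.BigTable'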
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #6
Hi

Glancing through the posts it is not clear what maintenance you are doing
other than the weekly statistics update, so I will add my two pence
worth!

Your DBCC SHOWCONTIG will probably indicate fragmentation if you don't have
maintenance in place that calls DBCC INDEXDEFRAG and/or DBCC DBREINDEX. You
may also want to look at running the Index Tuning Wizard on your Profiler
traces to see if it suggests alternate fill factors. You could try alternate
fill factors on a separate (controlled) system and replay the traces
against it to see what fragmentation and speed you can achieve.
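
A minimal sketch of such a maintenance step, with placeholder database, table
and index names and an example fill factor (DBCC DBREINDEX rebuilds offline;
DBCC INDEXDEFRAG can run while the table is in use):

    -- Rebuild all indexes on a table with a 90% fill factor
    DBCC DBREINDEX ('dbo.BigTable', '', 90)

    -- Or defragment a single index in place, with much less blocking
    DBCC INDEXDEFRAG (MyDatabase, BigTable, IX_BigTable_SomeColumn)

    -- DBREINDEX refreshes statistics as a side effect; after INDEXDEFRAG, update them separately
    UPDATE STATISTICS dbo.BigTable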

John

Jul 23 '05 #7
