Hi.
We have a .NET programmer using our SQL server. The database is roughly
8GB at this point, full of all of our inventory, payroll data, etc. for
running reports.
This is on a dual-Xeon 3.2GHz, 4GB DDR, 3x15k RPM SCSI RAID-5, gigabit
ethernet, etc. server.
Yet, every once in a while it slows to an absolute crawl. IIS6 is the
front-end, and it's limited to a web garden with 4 threads. However,
sometimes only 1 processor does any real work, even when multiple
people are running reports.
Blocking is NOT a problem, however I've seen 300,000+ locks/sec
requested occasionally (not waiting or anything like that, just
opened).
Cache hit ratios are roughly 99%.
One issue that I suspect is that the programmer is using varchar(50)
for EVERY column in EVERY table (except unique keys).
My question: In this configuration, would varchar(50)s be the cause of
the awful slowdowns?
I know it's all cached in memory, but on these older Xeons (533MHz FSB,
DDR266) the memory subsystem could be better. If we are pulling THAT
much excessive data on an already congested bus (since it's dual
Intels), would it cause a dramatic slowdown?
The reports we run are highly intensive, but usually do not take a
fraction of the time that they do when the server exhibits these
behaviors.
I really could use anyone's input on the matter. I've read online
guides, 2 performance optimization books, and seemingly created optimal
indexes, but none of that has led me to a real solution.
Thanks
Mortrek
"Mortrek" <mo*****@yahoo.com> wrote in message
news:11**********************@o13g2000cwo.googlegroups.com... Hi.
We have a .NET programmer using our SQL server. The database is roughly 8GB at this point, full of all of our inventory, payroll data, etc. for running reports. This is on a dual-xeon 3.2Ghz, 4GB DDR, 3x15k RPM SCSI RAID-5, gigabit ethernet, etc server. Yet, every once in a while it slows to an absolute crawl. IIS6 is the front-end, and its limited to a web garden with 4 threads. However, sometimes only 1 processor does any real work, even when multiple people are running reports.
What exactly "slows"? All processes on the server? Just MSSQL? If it's just
MSSQL, does everything execute slowly, or only certain procedures? Does the
server 'speed up' again, or do you have to restart it? If it's only certain
procedures, then you can use Profiler to trace them and see where the time
is being spent.
As for the multiple processors, if the optimizer doesn't think that a
parallel plan will improve the performance of a particular batch, it won't
use them. You could experiment with the 'cost threshold for parallelism',
but I don't believe there's any reliable way to force the use of multiple
processors, and it could make things worse anyway. I've seen some queries
where the optimizer picked a parallel plan, but forcing it to use only one
processor turned out to give a much faster plan - the overhead of
distributing work to the processors and then merging the results wasn't
worth it.
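To test whether a parallel plan is hurting a particular query, you can force a serial plan per-statement and compare timings; the table and column names below are made up for illustration, and the threshold value is just an example:

```sql
-- Hypothetical report query: force a serial plan for this statement only,
-- to compare against the parallel plan the optimizer chose.
SELECT c.CustomerName, SUM(o.Amount)
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerName
OPTION (MAXDOP 1)

-- Raise the cost threshold so only genuinely expensive queries go parallel
-- (the default is 5; the right value depends on your workload).
EXEC sp_configure 'cost threshold for parallelism', 25
RECONFIGURE
```

If the serial version is consistently faster, that points at parallelism overhead rather than the query itself.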
Blocking is NOT a problem, however I've seen 300,000+ locks/sec requested occasionally (not waiting or anything like that, just opened).
Cache hit ratios are roughly 99%.
One issue that I suspect is that the programmer is using varchar(50) for EVERY column in EVERY table (except unique keys).
My question: In this configuration, would varchar(50)s be the cause of the awful slowdowns?
It's certainly going to cause some awful problems (data integrity,
maintenance, ease of development, documentation), but I don't immediately
see how a sudden decrease in performance could be one of them.
I know its all cached in memory, but on these older xeons (533fsb, DDR266) the memory subsystem could be better. If we are pulling THAT much excessive data on an already congested bus (since its dual intels), would it cause a dramatic slowdown? The reports we run are highly intensive, but usually do not take a fraction of the time that they do when the server exhibits these behaviors.
One possible cause of sudden apparent slowdowns is database
autogrow/autoshrink kicking in. If your reports use large temp tables, for
example, then tempdb can fill up and suddenly everything stops while the
server increases the size - not just reports, but everything else that
requires tempdb as well. If you currently use this for any database, I'd
suggest turning it off and setting the files to be 20% larger than you think
or know they need to be (as a starting point).
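A sketch of pre-sizing tempdb along those lines; the sizes here are examples only, and you'd substitute figures ~20% above your observed peak:

```sql
-- Pre-size tempdb's data file so autogrow doesn't stall reports mid-run.
-- tempdev is the default logical file name; verify with sp_helpfile.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 500MB, FILEGROWTH = 100MB)
```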
I really could use anyone's input on the matter. I've read online guides, 2 performance optimization books, and seemingly created optimal indexes, but none of that has led me to a real solution.
Thanks Mortrek
You could also consider running a trace on the server, to see if you can
identify any common query or event which happens when the slowdowns occur.
Simon
Mortrek (mo*****@yahoo.com) writes: We have a .NET programmer using our SQL server. The database is roughly 8GB at this point, full of all of our inventory, payroll data, etc. for running reports. This is on a dual-xeon 3.2Ghz, 4GB DDR, 3x15k RPM SCSI RAID-5, gigabit ethernet, etc server. Yet, every once in a while it slows to an absolute crawl. IIS6 is the front-end, and its limited to a web garden with 4 threads. However, sometimes only 1 processor does any real work, even when multiple people are running reports. Blocking is NOT a problem, however I've seen 300,000+ locks/sec requested occasionally (not waiting or anything like that, just opened).
Cache hit ratios are roughly 99%.
One issue that I suspect is that the programmer is using varchar(50) for EVERY column in EVERY table (except unique keys).
My question: In this configuration, would varchar(50)s be the cause of the awful slowdowns?
It certainly sounds like a poor database design, but as Simon said, this
alone is not the root of the problem.
My initial thought when I read this is that you have a query that is
extremely CPU-intensive. The cache hit ratio appears to be high, so it
may not be a query that goes off reading large tables from disk. But it
is possible to bring the machine to its knees with cache-intensive queries.
Since you appear to find the machine slow overall, it appears that the
queries engage in parallelism, and as Simon said, parallel plans
are not always good plans. So turning off parallelism by setting "max
degree of parallelism" to 1 could be an idea. But since parallelism
is sometimes useful, I don't recommend this as the first attempt.
I would recommend setting up a Profiler trace where you trace for
SP:StmtCompleted, RPC:Completed, SP:Completed, SQL:StmtCompleted and
SQL:BatchCompleted with a duration filter of at least 5000 milliseconds.
You could set up a second trace to catch all auto-grow events.
The purpose of this is to give you more information. This sort of
performance problem is detective work, and you have to
try to get more information about what is going on when these slowdowns
occur.
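A server-side trace along those lines can be set up without the Profiler GUI; this is a minimal sketch, and the file path is an example you'd replace. The event and column IDs (12 = SQL:BatchCompleted, 1 = TextData, 13 = Duration) should be double-checked against Books Online for your version; note that from SQL Server 2005 on, Duration at the trace level is in microseconds, not milliseconds:

```sql
-- Minimal server-side trace: capture batches taking >= 5000 ms.
DECLARE @traceid int, @maxsize bigint, @dur bigint, @on bit
SET @maxsize = 50          -- max trace file size, in MB
SET @dur = 5000            -- Duration filter (milliseconds on SQL 2000)
SET @on = 1
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\traces\slow_queries', @maxsize
EXEC sp_trace_setevent @traceid, 12, 1, @on    -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 13, @on   -- SQL:BatchCompleted, Duration
EXEC sp_trace_setfilter @traceid, 13, 0, 4, @dur   -- Duration >= @dur
EXEC sp_trace_setstatus @traceid, 1            -- start the trace
```

Stop and close it later with `sp_trace_setstatus @traceid, 0` and `sp_trace_setstatus @traceid, 2`, then open the .trc file in Profiler.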
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp
What slows is any queries/SPs that are run. In profiler, they take
pretty much an order of magnitude more time than normal. All queries
slow down. 0ms duration queries will take like 50ms, 50ms queries take
like 2 seconds, etc. Some of the larger queries go from maybe taking 5
seconds to over a minute. Since the reports can take hundreds of
queries, they can time-out the web server when this happens...
It seems to speed up over time, but it takes a while to do so.
Resetting it helps, but as soon as people start using it again and load
it down much at all, it just messes up the same way. Normally it can
handle the load reasonably well.
I don't care about parallelism of single queries, I just want multiple
queries executing at once. Nothing runs in parallel on this server. Even
though the reports are all run with write-locking (they are only read
operations), it still seems like only single queries are executing on a
single processor at once. That may just be a side-effect of the
slowness problem though, as one large query may choke up one thread and
make it look like something isn't working right.
The reason I somewhat suspect the varchar(50) for causing part of these
issues is that the reports pull a massive amount of data out of this
database. I don't care so much about how theoretically fast the reports
run, just that they use up the whole server, and that individual
queries run at a decent speed relative to their theoretical speed. When
you have some tables with 20+ columns and every column is a
varchar(50), it seems to me some pretty horrible bottlenecks could be
created artificially.
Database autogrow, perhaps, but the slowdown is happening even when the
DB has 5GB free space. I'm also trying to use simple logging just to
see if the transaction log could be causing some sort of bottleneck.
Autoshrink is disabled. I shrink it manually every once in a while to
make sure it isn't getting too big for its own good.
The queries, like stated above, are ALL slowed down.
Does anyone know if a web garden w/4 threads really only allows 4
requests to be executed at once? I could understand if all 18 locations
were running queries at the same time, but that is not what we want
happening. I'm somewhat new to IIS6 and its completely revamped inner
workings.
This slowdown could be happening during payroll when everyone runs
reports at once, but like I said, I only want a few threads executing
queries at a time.
Also, as before, there seems to be no physical bottleneck on this
server, besides perhaps memory bandwidth or processor speed. Still,
that's DDR266 (dual channel I think, at least interleaved since it's a
server chipset) and 2x3.2GHz Xeon 2MBs.
Very high cache hits, very low avg disk queues, low page faults. The
majority of the DB is archival, so the important working data fits
easily into memory (SQL Standard, so about 1.7GB RAM for an 8GB database).
No disk paging, no antivirus currently enabled, web server is connected
to database via gigabit link.
Just to reiterate, the stats of the database server are:
2x3.2GHz Xeon, 4GB ECC DDR266, 3x15k SCSI HDs in RAID 5, Gb network
link, Win2k3 Server, SQL Server 2000 Standard, almost entirely
dedicated to this database (a few other very small, rarely-accessed DBs
exist on it as well).
Web server:
1x2.8GHz Xeon, 512MB RAM, 2x10k SATA RAID1, Gb network link, Win2k3
Server, SQL Server Personal (for IIS6 session management). Always has
at least 100MB free, even when the ASP.net threads are being big memory
hogs.
Anyway, thanks again to anyone who will help
Mortrek
Thanks for the reply.
I've been working on this problem for a while and have been trying to
diagnose it.
Also, I have Hyperthreading disabled. I was unsure if it'd offer a
performance benefit or reduction with this sort of situation, since it
seems like it only works well for parallel code that is aware of its
limitations, not individual threads that think each logical CPU is a
real one (and therefore stress it just as much, causing the CPU resource
performance hits associated with HT).
It's very hard to create a clean environment for diagnosing this. People
use the server from 4 am to midnight (sometimes even longer),
7 days a week.
We have a much lower-powered development server, but it's hard to stress
it in the same way and get predictable results.
Mortrek
>> The reason I somewhat suspect the VARCHAR(50) for causing part of
these issues is that the reports pull a massive amount of data out of
this database. <<
That is not the right question. First, you get the *logical* design
right, then you worry about *physical* tuning.
Does the data actually need to be stored as VARCHAR(50), or is this the
stupid "magic datatype" that newbies pull out of thin air when they
are too lazy to design a schema? Forget about whether it makes the
query slow; first think about what it does to the data integrity. If
you use an overly long column, some day it will get filled with garbage
and your reports will be wrong.
After you clean up the schema, then ask someone who knows how to tune
the server about the physical side of the house.
Mortrek (mo*****@yahoo.com) writes: What slows is any queries/SPs that are run. In profiler, they take pretty much an order of magnitude more time than normal.
But this could still be due to a killer query that scans some large
table, causing a lot of data to go out of cache, so a lot of access
will have to be done from disk.
It seems to speed up over time, but it takes a while to do so. Resetting it helps, but as soon as people start using it again and load it down much at all, it just messes up the same way. Normally it can handle the load reasonably well.
Resetting what?
Important question: if you have this slowdown, does the system come
back to normal after some time, even if there is still load?
I dont care about parallelism of single queries, I just want multiple queries executing at once. Nothing is run parallel on this server.
If you don't care about parallelism, you should certainly try
sp_configure 'max degree of parallelism', 1
It is possible, though, that once you've done this you will care about
parallelism for queries that are suddenly running slowly. But at
least no single query can then monopolize the CPUs.
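The full sequence looks like this ('max degree of parallelism' is an advanced option, so it has to be exposed first):

```sql
-- Turn off parallel query plans server-wide (0 = use all CPUs, 1 = serial).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 1
RECONFIGURE
```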
The reason I somewhat suspect the varchar(50) for causing part of these issues is that the reports pull a massive amount of data out of this database. I dont care so much about how theoretically fast the reports run, just that they use up the whole server, and that individual queries run at a decent speed relative to their theoretical speed. When you have some tables with 20+ columns and every column is a varchar(50), it seems to me some pretty horrible bottlenecks could be created artificially.
Just because it's varchar(50) does not mean that all values are 50 bytes
long. If the data is really numeric, you do get more overhead because
the character representation takes up more space. But, no, that is
definitely not what is bringing your server to its knees.
But if these varchar(50) columns appear in WHERE clauses, and there is
poor indexing, or an index is not used because of an implicit conversion,
then that might be the culprit.
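A sketch of the implicit-conversion trap, with a made-up table and column; because int has higher precedence than varchar, the column itself gets converted, which can defeat an index:

```sql
-- EmployeeID stored as varchar(50); comparing it to an integer literal
-- forces a conversion of the COLUMN, so an index seek may become a scan.
SELECT * FROM Payroll WHERE EmployeeID = 1234

-- Comparing against a string literal keeps the predicate sargable,
-- so an index on EmployeeID can be seeked.
SELECT * FROM Payroll WHERE EmployeeID = '1234'
```

Checking the query plans of the report queries for CONVERT() wrapped around column references is a quick way to spot this.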
Database autogrow, perhaps, but the slowdown is happening even when the DB has 5GB free space.
Don't forget tempdb!
By the way, have you checked the SQL Server error log to see if
there are any interesting messages?
In any case, I hope you understand that from a distance it is
very difficult to diagnose a problem like this. You may consider
opening a case with Microsoft, or finding an experienced consultant
in your area to have a look at your system. It will not be free,
but right now you have the cost of a slow system.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server SP3 at http://www.microsoft.com/sql/techinf...2000/books.asp