I am working with a report generator that is based on SQL Server 2000 and
uses ASP as the UI. Basically we have a set of reports that end users can
execute through a web browser. In general the model works fine, but we are
running into some scaling issues.
What I'm trying to determine is the optimal configuration for this system. It
is currently a 2.4 GHz Pentium with a large RAID array and 1 GB of RAM. We
have been using the "fixed" memory configuration, allocating 864 MB to SQL
Server. This is on a Windows 2003 Server box.
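For reference, this is roughly how the fixed allocation is set up; the 864 figure is ours, and the sp_configure calls are the standard way to pin min and max server memory to the same value:

```sql
-- Pin SQL Server to a fixed memory footprint by setting
-- min and max server memory to the same value (in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory', 864;
EXEC sp_configure 'max server memory', 864;
RECONFIGURE;
```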
This works fine when a "small" query or two is executed, but the performance
suffers terribly when several users try to run reports in parallel. A single
query might take 10 minutes to run if nothing else is happening on the box,
but if additional users log on and run reports, it's almost impossible to
predict when the queries will finish.
I am also looking at the effect of database size on performance, running
tests against a database with 1 month, 3 months, and say 12 months of data,
running the same query against 2 databases in parallel. With the original
configuration, the results were all over the place: sometimes the 12-month
database outperformed the smaller dbs, while other times there was little
difference. It seems that once the system starts paging heavily, it's over;
the system never "recovers", and queries that previously ran in a few minutes
now take hours.
I added 3 G more memory to the system, and modified boot.ini to include the
/3GB switch. Now when I run the same tests, the results are much more
consistent, as the system rarely has to swap. Then again, I've never seen it
go past 1.7 GB in Task Manager, which makes me think that anything beyond,
say, 2.5 GB of memory is wasted?
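For anyone wanting to reproduce the test, the boot.ini change was just adding the /3GB switch to the OS load line; the ARC path below is a typical example and will differ per machine:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
; /3GB shifts the user/kernel split so user-mode processes
; (like sqlservr.exe) can address up to 3 GB instead of 2 GB.
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```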
Things we are trying to determine are:
- in the SQL Server memory configuration, is Fixed better than Dynamic? We
have read that Dynamic is not good at returning memory to the OS once it's
been allocated
- What else can we do to optimize performance for this application? It seems
to me that if the indexes are properly designed, database size shouldn't have
much impact on performance, but this appears to be true only up to a point. In
comparing the execution plans between, say, a 12-month and a 3-month database,
the plans are sometimes dramatically different. I assume this is the optimizer
deciding that going directly to the base tables, rather than using an index,
will be faster, when in reality this doesn't always appear to be true.
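In case it helps, here is how we've been capturing the plans for comparison; SET SHOWPLAN_TEXT is standard T-SQL, while the table and query are placeholders standing in for one of our report queries:

```sql
-- Capture the estimated plan without actually running the query.
SET SHOWPLAN_TEXT ON;
GO
-- Placeholder report query; substitute one of the real ones.
SELECT SUM(amount)
FROM ReportData
WHERE report_date >= '20040101';
GO
SET SHOWPLAN_TEXT OFF;
GO

-- Stale statistics can push the optimizer toward table scans,
-- so refreshing them before comparing plans may be worthwhile.
UPDATE STATISTICS ReportData WITH FULLSCAN;
```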
- Are there other SQL Server switches I should be tweaking? Is there some
number of simultaneous queries that this configuration should be limited to?
- What about other versions of SQL Server (e.g., Enterprise, Data Center,
etc.)? Would these buy us anything?
Thanks for any advice,
-Gary