Magnus Österberg (ma**************@abo.fi) writes:
> I am experiencing the following problem:
> I boot my Windows 2000 SP4 machine. I check sqlservr.exe's memory usage
> in Task Manager. It is about 20 MB, everything is OK. Then I run this
> query:
>
> select postoffice, count(*) as counter from tblTable
> where postoffice is not null
> group by postoffice
> order by counter DESC
>
> There's no index or constraint on the column "postoffice", and tblTable
> contains about 916,000 rows. I thought this query would still execute in
> only a few seconds, but it takes minutes. And the worst thing is that
> sqlservr.exe's memory usage grows to about 300-400 MB when the query
> runs. What am I doing wrong?
So what is the average row size of this table? Say it is 300 bytes; then
916,000 rows times 300 bytes is roughly 275 MB of data to read. That is
not very likely to be done instantly.
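
You can check this yourself with sp_spaceused (tblTable is the name from
your post); divide the data size it reports by the row count to get the
average row size:

-- Reports rows, reserved, data and index size for the table.
-- Average row size is roughly the data figure divided by rows.
EXEC sp_spaceused 'tblTable'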

SQL Server's memory consumption will increase, as it will read the entire
table into cache, and the table will stay in the cache as long as no
other data competes for the space. This means that if you resubmit the
query, the response time will be significantly shorter.
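
You can observe the cache effect yourself (a quick sketch; note that
DBCC DROPCLEANBUFFERS throws out the entire data cache, so only run it
on a test server):

-- Empty the data cache, so the first run has to read from disk.
DBCC DROPCLEANBUFFERS

-- Report elapsed time and physical vs logical reads per statement.
SET STATISTICS TIME ON
SET STATISTICS IO ON

-- Run the query twice: the second run finds the pages in cache, so it
-- shows few or no physical reads and a much shorter elapsed time.
select postoffice, count(*) as counter from tblTable
where postoffice is not null
group by postoffice
order by counter DESC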

SQL Server is designed to grab as much memory as it can, since the more
data it can keep in cache, the better the response times. If there are
other applications competing for memory on the machine, SQL Server will
yield, but in this case it may be better to configure how much memory you
want SQL Server to use.
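
For instance, to cap the buffer pool at 512 MB (the number is only an
example; pick a value that leaves room for your other applications):

-- 'max server memory' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Upper limit for SQL Server's memory usage, in MB.
EXEC sp_configure 'max server memory', 512
RECONFIGURE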

Note also that fragmentation could cause extra delay. Use DBCC SHOWCONTIG
to see what shape the table is in. To defragment it, you would have to
create a clustered index on the table, and then drop that index.
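
That is, something like this (ix_tmp is just a throwaway name; building
the clustered index rewrites the table in contiguous order, and dropping
it leaves the compacted table behind):

-- Check scan density and extent fragmentation first.
DBCC SHOWCONTIG ('tblTable')

-- Rebuild the table, then drop the index again.
CREATE CLUSTERED INDEX ix_tmp ON tblTable (postoffice)
DROP INDEX tblTable.ix_tmp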

The query itself would benefit enormously from a non-clustered index on
postoffice.
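
That is (the index name is only a suggestion):

-- The query can then scan this narrow index instead of the whole
-- table, and the rows come back already ordered by postoffice, which
-- makes the GROUP BY cheap.
CREATE NONCLUSTERED INDEX ix_postoffice ON tblTable (postoffice)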
--
Erland Sommarskog, SQL Server MVP,
es****@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp