Memory usage\Performance problem

I am experiencing the following problem:

I boot my Windows 2000 SP4 machine. I check sqlservr.exe's memory usage in
Task Manager. It is ca 20 MB, and everything is OK. Then I run this query:

select postoffice, count(*) as counter from tblTable
where postoffice is not null
group by postoffice
order by counter DESC

There's no index or constraint on the "postoffice" column, and tblTable contains
ca 916,000 rows. I thought this query would still execute in only a few
seconds, but it takes minutes. And the worst thing is that sqlservr.exe's
memory usage grows to about 300-400 MB while the query runs. What am I doing
wrong?

Tnx in advance!

/Magnus
Aug 29 '05 #1
7 Replies | 4848 Views
Magnus Österberg (ma**************@abo.fi) writes:
I am experiencing the following problem:

I boot my Windows 2000 SP4 machine. I check sqlservr.exe's memory usage
in Task Manager. It is ca 20 MB, and everything is OK. Then I run this
query:

select postoffice, count(*) as counter from tblTable
where postoffice is not null
group by postoffice
order by counter DESC

There's no index or constraint on the "postoffice" column, and tblTable
contains ca 916,000 rows. I thought this query would still execute in
only a few seconds, but it takes minutes. And the worst thing is that
sqlservr.exe's memory usage grows to about 300-400 MB while the query
runs. What am I doing wrong?


So what is the average row size of this table? Say that is 300 bytes,
then that is 300 MB of data to read. That is not very likely to be done
instantly.

SQL Server's memory consumption will increase, as it reads the entire
table into cache, and the table will stay in the cache as long as no
other data competes for the space. This means that if you resubmit the
query, the response time will be significantly shorter.

SQL Server is designed to grab as much memory as it can, as the more data
it can have in cache, the better the response times. If there are other
applications competing for memory on the machine, SQL Server will yield,
but in that case it may be better to configure how much memory you want
SQL Server to use.

Note also that fragmentation could cause extra delay. Use DBCC SHOWCONTIG
to see what shape the table is in. To defragment it, you would have to
create a clustered index on the table, and then drop that index.

The query itself would benefit enormously from a non-clustered index on
postoffice.
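
For example, assuming the table from the original post (the index names
below are just placeholders), the checks and indexes could look roughly
like this:

```sql
-- Check fragmentation (SQL Server 2000 syntax)
DBCC SHOWCONTIG ('tblTable');

-- One way to defragment: create a clustered index, then drop it
CREATE CLUSTERED INDEX IX_tblTable_defrag ON tblTable (postoffice);
DROP INDEX tblTable.IX_tblTable_defrag;

-- Non-clustered index to support the GROUP BY on postoffice
CREATE NONCLUSTERED INDEX IX_tblTable_postoffice ON tblTable (postoffice);
```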
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Aug 29 '05 #2

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
<snip>

Thanks for a well-written answer!

I guess my expectations for selecting on a non-indexed column were
somewhat high. When I added an index, my query executes in seconds.
One index solved this problem, but my table contains 30-40 similar columns,
and I don't think indexing every column is a good idea.

Aren't there any other ways of speeding up selects? Well, I guess not...

/Magnus
Aug 29 '05 #3
Magnus Österberg (ma**************@abo.fi) writes:
I guess my expectations on selecting based on a non-indexed column were
somewhat high. When I added an index, my query executes in seconds. One
index solved this problem, but my table contains 30-40 similar columns,
and I don't think indexing every column is a good idea.

Aren't there any other ways of speeding up selects? Well, I guess not...


Well, once data is in cache it will be faster. Or at least less slow.

But if you need to do this on every column, it sounds to me like one
of those things Analysis Services is good for.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Aug 29 '05 #4
Magnus Österberg wrote:
I guess my expectations on selecting based on a non-indexed column
were somewhat high. When I added an index, my query executes in
seconds.
One index solved this problem, but my table contains 30-40 similar
columns, and I don't think indexing every column is a good idea.
Two things come to mind:

1. Rethink your table design. If these columns are so similar, you might
be better off with a second table which is joined. But that of course
depends on your data - just an idea.

2. Create indexes (possibly composite indexes) for your most-used queries. You
might even get away with a single composite index (or a few) if your queries
only use a leading portion of that index's fields.
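
For example, if many queries filter or group on postoffice first and then
on some second column, one composite index can cover both patterns (the
column name "region" and the index name here are invented for illustration):

```sql
-- Serves queries on (postoffice) alone and on (postoffice, region);
-- a query filtering only on region would NOT use this index,
-- since it skips the leading portion
CREATE NONCLUSTERED INDEX IX_tblTable_post_region
    ON tblTable (postoffice, region);
```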
Aren't there any other ways of speeding up selects? Well, I guess
not...


Well, there are tons of other options, like having data files on several
disks, putting tx log on a separate disk, adding memory, .... It all
depends. :-)

Kind regards

robert

Aug 29 '05 #5
Thank you Robert, these are all things that I'll keep in mind.

One more thing, though: why is the sqlservr.exe process consuming 395 MB of RAM
when the entire server is idle?
My "cron job" inserts a few thousand rows of data at night, but now it is
late morning here in Finland, and the entire machine is more or less idle.
On my other servers, RAM consumption at idle is only about 20-30 MB.
Anyone got any ideas?

/Magnus

"Robert Klemme" <bo******@gmx.net> wrote in message
news:3n************@individual.net...
<snip>

Aug 31 '05 #6
Magnus Österberg (ma**************@abo.fi) writes:
One more thing, though: why is the sqlservr.exe process consuming 395 MB
of RAM when the entire server is idle?


I believe that was in my first reply.

SQL Server is designed to get as much memory as it can, and only yields
if another application needs it. This is because it keeps data in
cache so that future requests for the same data can be answered without
reading from disk.

Thus, this is perfectly normal behaviour.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Aug 31 '05 #7
Erland Sommarskog wrote:
Magnus Österberg (ma**************@abo.fi) writes:
One more thing though; why is the sqlservr.exe process consuming 395
mb RAM when the entire server is idle?


I believe that was in my first reply.

SQL Server is designed to get as much memory as it can, and only yields
if another application needs it. This is because it keeps data in
cache so that future requests for the same data can be answered
without reading from disk.

Thus, this is perfectly normal behaviour.


Adding to that: the maximum memory consumption is easily configurable, so if 400 MB
is too much for you, simply turn it down.
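
On SQL Server 2000 that is done with sp_configure, roughly like this
(256 MB is just an example value):

```sql
-- 'max server memory' is an advanced option, so expose those first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server's memory usage at 256 MB
EXEC sp_configure 'max server memory', 256;
RECONFIGURE;
```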

Kind regards

robert

Aug 31 '05 #8
