Bytes IT Community

Is it normal for a query on a 90k record table to take 1+ seconds?

I have a table that is --
30 Megabytes
90,000 rows
~65 columns

My query goes

SELECT city FROM table WHERE zip = 90210;

It will then find about 10 matching records.

I have an index on both zip and city.

This query takes on average anywhere from 0.80 seconds to 2.5 seconds.
Is this normal speed? I've been breaking my back trying to get it in
the sub 0.1 second range.


Nov 23 '05 #1
5 Replies

You should put an index on the zip column. That's all you can (and should)
do from the query-optimization point of view.
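
In MySQL syntax that would be something like the following (the index name idx_zip is illustrative, and `table` stands in for the real table name from the original post):

```sql
-- Single-column index on zip; backticks because "table" is a reserved word
CREATE INDEX idx_zip ON `table` (zip);
```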

Nov 23 '05 #2

Mark, I already have two separate indexes in place: one on zip, one on city.

Is the physical size of the table slowing down the query?

Is it the 90k records, or the 65 columns?

Nov 23 '05 #3

For this particular query, only the index on the zip column will help you.
That doesn't mean the index on the city column can't help other queries; it
simply doesn't apply to this one. As a general rule, you should also take
care not to create indexes that are not necessary, because they slow down
INSERTs. Indexes should always be balanced against the concrete needs of a
table. A good choice is to place indexes on columns that reference other
tables or that are often used in a WHERE clause (like the zip column in
your query).
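
One thing worth double-checking before restructuring anything (this is an assumption about the schema, not something stated in the thread): if the zip column is a character type such as VARCHAR, then the literal 90210 in the WHERE clause is a number, and MySQL will convert every row's zip value to compare them, which prevents the index from being used at all. Quoting the literal avoids that:

```sql
-- If zip is VARCHAR, a quoted string literal lets the zip index be used;
-- an unquoted number forces a per-row type conversion instead.
SELECT city FROM `table` WHERE zip = '90210';
```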

65 columns is quite a lot for one table. Are you sure the data is properly
normalized? If it is not, normalization would very likely lead to better
performance.

Even if it is normalized (although it's rare to need 65 columns in a
normalized table), it could make sense to split it into two or more tables
that reference each other in a 1 : 1 relationship. Group the columns so
that those which are usually queried together live in the same table. The
fewer columns a table has, the better the performance will be.

The 90,000 rows should not be the problem because if you have that amount of
data to store, there's probably no way to reduce the number of rows. The
solution will most likely be to split the data up into more tables.
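
As a sketch of the 1 : 1 split described above (all table and column names here are hypothetical; the idea is a narrow table holding the frequently queried columns and a wide table for everything else, joined on the same primary key):

```sql
-- Narrow "hot" table: the columns most queries touch.
CREATE TABLE address_core (
    id   INT NOT NULL PRIMARY KEY,
    zip  VARCHAR(10),
    city VARCHAR(64),
    INDEX (zip)
);

-- Wide "cold" table: the remaining, rarely queried columns,
-- in a 1 : 1 relationship with address_core.
CREATE TABLE address_detail (
    id    INT NOT NULL PRIMARY KEY,
    notes TEXT,
    FOREIGN KEY (id) REFERENCES address_core (id)
);
```

Queries like the zip lookup then only ever touch the narrow table, so each disk page holds many more rows and far less data has to be read.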

Nov 23 '05 #4


Thank you for the very informative reply. It makes complete sense.

I started splitting the table up per your suggestion and saw immediate
improvements.
It's not my system, I'm just doing some contract work, so hopefully the
maintainers will be able to make some permanent normalization updates.

Thanks again,

Nov 23 '05 #5

On 14/11/2005, MattPF wrote:
SELECT city FROM table WHERE zip = 90210;

It will then find about 10 matching records.

I have an index on both zip and city.

A covering composite index on (zip, city) may help here.

ALTER TABLE table ADD INDEX (zip, city);
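
With that composite index the query can become index-only: both the WHERE column (zip) and the selected column (city) live in the index, so the 30 MB table itself never has to be read. You can verify this with EXPLAIN (names mirror the post; this assumes a MySQL server):

```sql
-- "key" should name the (zip, city) index; Extra showing "Using index"
-- indicates an index-only (covering) scan.
EXPLAIN SELECT city FROM `table` WHERE zip = 90210;
```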

Nov 23 '05 #6
