Bytes IT Community

Performance on select on index and large tables

Hello all,

I have a table with about 400,000 records and a btree index (numeric). A
simple SELECT * FROM table WHERE id = ... takes more than a second for
every query and I need to query each record at least once. It helps to
do an UPDATE table WHERE id IN (..., ...), but I don't have the patience
to wait for more than 400,000 seconds to pass...

I'm looking for a solution, and I think the main problem is that
PostgreSQL may not be able to keep the index on this particular table
entirely in memory. If so, a 'simple' memory upgrade (the server
currently has 1GB) would help a great deal, but could there be other
causes of this problem?
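Before buying memory, it's worth asking the planner what it is actually doing. A quick sanity check (using placeholder names mytable/id, since the real names aren't given) would look something like:

```sql
-- Does the planner use the btree index at all, or fall back to a seq scan?
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 12345;

-- With a numeric column, a literal compared as a different type can
-- prevent index use in old PostgreSQL versions; an explicit cast rules that out:
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 12345::numeric;
```

If the output shows a sequential scan rather than an index scan, the problem isn't memory but the plan itself (stale statistics, a type mismatch, or planner settings).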

It would be a bit of a waste to have convinced my boss that his servers
(there are a development and a production server involved) need a memory
upgrade, when it turns out not to solve this problem... :(

Regards,

Alban Hertroys,
MAG Productions.
Jul 19 '05 #1
2 Replies

Alban Hertroys wrote:
every query and I need to query each record at least once. It helps to
do an UPDATE table WHERE id IN (..., ...), but I don't have the patience
to wait for more than 400,000 seconds to pass...


One thing that I noticed works significantly faster than "WHERE id
IN (...)" is to be smarter and use "WHERE (id BETWEEN ... AND
...) OR (id BETWEEN ... AND ...) OR ...".

Of course, this requires you/me to make sorted lists of IDs, and it can
only be used effectively if there are groups of sequential IDs to
select. If they're not sequential, use "IN (...)" instead in such cases.

The queries involved went from about 2 minutes to about 15 seconds each.
A significant improvement ;)
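The range-building step described above can be sketched roughly like this (in Python; the function names and the id column name are just placeholders):

```python
def to_ranges(ids):
    """Collapse integer IDs into (low, high) ranges of consecutive values."""
    ids = sorted(set(ids))
    if not ids:
        return []
    ranges = []
    start = prev = ids[0]
    for i in ids[1:]:
        if i == prev + 1:
            prev = i              # still consecutive: extend the current range
        else:
            ranges.append((start, prev))
            start = prev = i      # gap found: start a new range
    ranges.append((start, prev))
    return ranges

def where_clause(ids):
    """Build the BETWEEN ... OR ... predicate; singletons go into one IN (...) list."""
    ranges = to_ranges(ids)
    singles = [str(lo) for lo, hi in ranges if lo == hi]
    betweens = [f"(id BETWEEN {lo} AND {hi})" for lo, hi in ranges if lo != hi]
    clauses = betweens + ([f"id IN ({', '.join(singles)})"] if singles else [])
    return " OR ".join(clauses)
```

For example, where_clause([10, 1, 2, 3, 8, 7]) produces "(id BETWEEN 1 AND 3) OR (id BETWEEN 7 AND 8) OR id IN (10)". A range scan over each BETWEEN is one index traversal per group instead of one probe per ID, which is where the speedup comes from.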

Alban.
Jul 19 '05 #2

Hello Alban,

One thing you might try is creating a hash index on the id field instead
of a btree. I'm not at all sure this will help, but I've read that for
equality searches (id = ?) hash indexes can often provide the best
performance. I'd try it myself, but I don't have any tables nearly that
large. ;-) If you do try making a hash index on the id field, I'd be most
interested in hearing the results.
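For reference, that would look something like this (table name is a placeholder):

```sql
-- Hash indexes support only equality lookups (=), which matches this workload:
CREATE INDEX mytable_id_hash ON mytable USING hash (id);
```

One caveat: in PostgreSQL versions of that era, hash indexes were not WAL-logged, so they had to be rebuilt with REINDEX after a crash and didn't replicate.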

Cheers,
w.k.

On Tue, 7 Dec 2004, Alban Hertroys wrote:
I have a table with about 400,000 records and a btree index (numeric). A
simple SELECT * FROM table WHERE id = ... takes more than a second for
every query and I need to query each record at least once. [...]
Jul 19 '05 #3
