
Performance on select on index and large tables

Hello all,

I have a table with about 400,000 records and a btree index (numeric). A
simple SELECT * FROM table WHERE id = ... takes more than a second for
every query, and I need to query each record at least once. It helps to
do an UPDATE table WHERE id IN (..., ...), but I don't have the patience
to wait for more than 400,000 seconds to pass...

I'm looking for a solution, and I think the main problem is that
PostgreSQL may not be able to keep the index on this particular table
entirely in memory. If so, a 'simple' memory upgrade (the server
currently has 1GB) would help a great deal, but could there be other
causes of this problem?
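Before concluding it's a memory problem, it's worth checking whether the index is being used at all; a sequential scan over 400,000 rows would also explain a second per query. A diagnostic sketch, with a hypothetical table name and id value:

```sql
-- Show the plan the planner actually chooses (Seq Scan vs Index Scan):
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 12345;

-- With a NUMERIC column, an integer literal can defeat the index on
-- older PostgreSQL releases (pre-8.0 cross-type comparisons);
-- casting the literal to the column's type can restore index use:
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 12345::numeric;
```

If the first plan shows a Seq Scan and the second an Index Scan, the fix is a cast (or an integer column), not more RAM.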

It would be a bit of a waste to have convinced my boss that his servers
(there are a development and a production server involved) need a memory
upgrade, when it turns out not to solve this problem... :(

Regards,

Alban Hertroys,
MAG Productions.
Jul 19 '05 #1
Alban Hertroys wrote:
every query, and I need to query each record at least once. It helps to
do an UPDATE table WHERE id IN (..., ...), but I don't have the patience
to wait for more than 400,000 seconds to pass...


One thing that I noticed works significantly faster than "WHERE id
IN (...)" is to be smarter and use "WHERE (id BETWEEN ... AND
...) OR (id BETWEEN ... AND ...) OR ...".

Of course, this requires you/me to build sorted lists of IDs, and it is
only effective when there are runs of sequential IDs to select. Where
they're not sequential, use "IN (...)" instead.

The queries involved went from about 2 minutes to about 15 seconds each.
A significant improvement ;)

Alban.
Jul 19 '05 #2
Hello Alban,

One thing you might try is creating a hash index on the id field instead
of a btree. I'm not at all sure this will help, but I've read that for
equality searches (id = ?) hash indexes can often provide the best
performance. I'd try it myself, but I don't have any tables nearly that
large. ;-) If you do try making a hash index on the id field, I'd be most
interested in hearing the results.
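In case it helps, the syntax would be something like this (table and index names hypothetical):

```sql
-- Hash indexes support only equality (=) lookups, which matches
-- this workload. Caveat: before PostgreSQL 10 they were not
-- WAL-logged, so they could need a REINDEX after a crash and did
-- not replicate.
CREATE INDEX mytable_id_hash ON mytable USING hash (id);
```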

Cheers,
w.k.

On Tue, 7 Dec 2004, Alban Hertroys wrote:
Hello all,

I have a table with about 400,000 records and a btree index (numeric). A
simple SELECT * FROM table WHERE id = ... takes more than a second for
every query, and I need to query each record at least once. It helps to
do an UPDATE table WHERE id IN (..., ...), but I don't have the patience
to wait for more than 400,000 seconds to pass...

I'm looking for a solution, and I think the main problem is that
PostgreSQL may not be able to keep the index on this particular table
entirely in memory. If so, a 'simple' memory upgrade (the server
currently has 1GB) would help a great deal, but could there be other
causes of this problem?

It would be a bit of a waste to have convinced my boss that his servers
(there are a development and a production server involved) need a memory
upgrade, when it turns out not to solve this problem... :(

Regards,

Alban Hertroys,
MAG Productions.

Jul 19 '05 #3


