
The database is very slow!

I currently have PostgreSQL 7.1 installed on a server with about 700
MB of RAM.

I am having a lot of speed problems with a database I created. For example,
it takes almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields.
And the whole application built on this database is very, very slow
(some pages take almost 20 seconds to load!).

I verified the indexes and I think they are OK, and I tried to make my
queries as short as possible (no "select *", but "select
field1, field2, ...").

But anyway, I guess there is a speed problem with the database itself,
because I don't think it is normal to need 12 seconds to run a
query on a table with only 4000 records.

Does anybody have an idea?
Thanks
Krystoffff
Nov 11 '05 #1
5 Replies

pr******@silesky.com (krystoffff) writes:
I currently have PostgreSQL 7.1 installed on a server with about 700
MB of RAM.

I am having a lot of speed problems with a database I created. For example,
it takes almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields.
And the whole application built on this database is very, very slow
(some pages take almost 20 seconds to load!).

I verified the indexes and I think they are OK, and I tried to make my
queries as short as possible (no "select *", but "select
field1, field2, ...").

But anyway, I guess there is a speed problem with the database itself,
because I don't think it is normal to need 12 seconds to run a
query on a table with only 4000 records.


The perpetual first question...

"Have you run VACUUM ANALYZE?"

You may want to run VACUUM VERBOSE on the database, and see how many
dead tuples get deleted; if a lot of data gets thrown away, that's a
good sign that things ought to shortly get faster.

ANALYZE will recalculate statistics that are used for query planning,
and that, too, is likely to be helpful.
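
A minimal sketch, assuming the slow table is called mytable (substitute
your real table and database names):

mydb=# VACUUM VERBOSE ANALYZE mytable;   -- vacuum and re-analyze one table
mydb=# VACUUM VERBOSE ANALYZE;           -- or the whole database at once

The VERBOSE output reports, for each table, how many tuples were removed;
that is the number to watch.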

If you can, head over to the [Performance] list, and supply some
sample queries with the output of running "EXPLAIN ANALYZE" on them.

Here's a sample for a query that nests together a whole bunch of
views.

portfolio=# explain analyze select * from cdnportfolio;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Subquery Scan cdnportfolio (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.63..3330.07 rows=25 loops=1)
  -> Aggregate (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.62..3329.91 rows=25 loops=1)
-> Group (cost=6311.41..11473.17 rows=224 width=153) (actual time=3327.38..3328.26 rows=39 loops=1)
-> Merge Join (cost=6311.41..11472.61 rows=224 width=153) (actual time=3327.36..3327.99 rows=39 loops=1)
Merge Cond: ("outer".symbol = "inner".symbol)
-> Sort (cost=263.39..263.41 rows=8 width=102) (actual time=50.49..50.51 rows=39 loops=1)
Sort Key: stocks.symbol
-> Hash Join (cost=261.57..263.27 rows=8 width=102) (actual time=49.58..50.16 rows=39 loops=1)
Hash Cond: ("outer".symbol = "inner".symbol)
-> Seq Scan on stocktxns (cost=0.00..1.40 rows=40 width=31) (actual time=0.08..0.23 rows=40 loops=1)
-> Hash (cost=261.55..261.55 rows=5 width=71) (actual time=49.40..49.40 rows=0 loops=1)
-> Hash Join (cost=260.10..261.55 rows=5 width=71) (actual time=49.05..49.33 rows=25 loops=1)
Hash Cond: ("outer".exchange = "inner".exchange)
-> Seq Scan on stocks (cost=0.00..1.25 rows=25 width=15) (actual time=0.08..0.16 rows=25 loops=1)
-> Hash (cost=260.10..260.10 rows=1 width=56) (actual time=48.87..48.87 rows=0 loops=1)
-> Nested Loop (cost=220.18..260.10 rows=1 width=56) (actual time=48.71..48.84 rows=4 loops=1)
-> Merge Join (cost=220.18..221.97 rows=7 width=29) (actual time=48.62..48.65 rows=4 loops=1)
Merge Cond: ("outer".from_curr = "inner".currency)
-> Sort (cost=219.10..219.94 rows=338 width=22) (actual time=48.35..48.35 rows=2 loops=1)
Sort Key: lastcurrdate.from_curr
-> Subquery Scan lastcurrdate (cost=0.00..204.92 rows=338 width=22) (actual time=0.51..48.24 rows=2 loops=1)
-> Aggregate (cost=0.00..204.92 rows=338 width=22) (actual time=0.50..48.22 rows=2 loops=1)
-> Group (cost=0.00..196.47 rows=3377 width=22) (actual time=0.41..43.22 rows=3376 loops=1)
-> Index Scan using currency_conversion_idx on currency_conversion (cost=0.00..179.59 rows=3377 width=22) (actual time=0.40..28.93 rows=3376 loops=1)
Filter: (to_curr = 'CDN'::bpchar)
-> Sort (cost=1.08..1.09 rows=4 width=14) (actual time=0.24..0.24 rows=4 loops=1)
Sort Key: exchanges.currency
-> Seq Scan on exchanges (cost=0.00..1.04 rows=4 width=14) (actual time=0.10..0.12 rows=4 loops=1)
-> Index Scan using currency_conversion_idx on currency_conversion (cost=0.00..5.63 rows=1 width=27) (actual time=0.03..0.04 rows=1 loops=4)
Index Cond: (("outer".from_curr = currency_conversion.from_curr) AND (currency_conversion.to_curr = 'CDN'::bpchar) AND (currency_conversion.date = "outer".lastdate))
-> Materialize (cost=11203.81..11203.81 rows=990 width=51) (actual time=3276.83..3276.87 rows=53 loops=1)
-> Merge Join (cost=6048.02..11203.81 rows=990 width=51) (actual time=1797.46..3276.67 rows=39 loops=1)
Merge Cond: (("outer".symbol = "inner".symbol) AND ("outer".date = "inner".lastdate))
  -> Index Scan using stockprice_by_date on stockprices (cost=0.00..4827.31 rows=104549 width=27) (actual time=0.46..921.54 rows=104549 loops=1)
-> Sort (cost=6048.02..6074.15 rows=10455 width=16) (actual time=1713.56..1713.63 rows=39 loops=1)
Sort Key: lastdate.symbol, lastdate.lastdate
-> Subquery Scan lastdate (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.81..1713.10 rows=39 loops=1)
-> Aggregate (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.80..1712.77 rows=39 loops=1)
-> Group (cost=0.00..5088.68 rows=104549 width=16) (actual time=0.03..1550.08 rows=104549 loops=1)
-> Index Scan using stockprice_by_date on stockprices (cost=0.00..4827.31 rows=104549 width=16) (actual time=0.02..1062.58 rows=104549 loops=1)
Total runtime: 3332.18 msec
(41 rows)
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/postgresql.html
It's easy to identify people who can't count to ten. They're in front
of you in the supermarket express lane. -June Henderson
Nov 11 '05 #2

pr******@silesky.com (krystoffff) writes:
I am having a lot of speed problems with a database I created. For example,
it takes almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields.


When did you last VACUUM this table?

regards, tom lane

Nov 11 '05 #3

Thanks for your answers

I vacuumed the database just before the test, and it didn't change
anything... Sorry not to have mentioned it.

Any other suggestions?
Nov 11 '05 #4

Hi Krystoffff,

I haven't read the other suggestions, but I'm running a database where some tables
and views have over a million records, without any issues, on similar
hardware.

Out of curiosity - have you actually performance-tuned your OS and the
postmaster (through postgresql.conf)? If you haven't done any of that, I
wouldn't be surprised if the database was slow - I think the default install of
most Linux distros leaves you with 32 MB of shared memory, which makes your 700 MB
of RAM useless. As far as the 60 fields are concerned, I doubt they would be a
problem, although I've never used a table with more than 20 - does anybody out
there know whether the number of fields in a table can create performance issues?
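
As a very rough illustration only (starting-point values for a box with about
700 MB of RAM, not a recipe - shared_buffers and effective_cache_size are
counted in 8 kB pages, sort_mem in kB, and the postmaster has to be restarted
for shared_buffers to change), the relevant postgresql.conf lines might look
something like this:

shared_buffers = 8192          # 64 MB of shared buffer cache
sort_mem = 8192                # 8 MB per sort before spilling to disk
effective_cache_size = 65536   # tell the planner the OS caches roughly 512 MB

Even modest values like these are a big step up from the compiled-in defaults.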

Lastly - how complicated are your indexes? If you have indexed every field on
that table, that could be an obvious issue. How many fields in the
table have foreign key constraints?
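
To double-check what indexes actually exist on the table (again assuming it is
called mytable - substitute your own table name), psql can list them:

mydb=# \d mytable
mydb=# SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'mytable';

The first command shows the columns together with every index and constraint;
the second pulls the index definitions out of the pg_indexes system view.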

Let me know how you go,

Jason

On Wed, 13 Aug 2003 11:22 pm, krystoffff wrote:
Thanks for your answers

I vacuumed the database just before the test, and it didn't change
anything... Sorry not to have mentioned it.

Any other suggestions?

Nov 11 '05 #5

postmaster (through postgresql.conf)? If you haven't done any of that, I
wouldn't be surprised if the database was slow - I think the default install of
most Linux distros leaves you with 32 MB of shared memory, which makes your 700 MB
of RAM useless. As far as the 60 fields are concerned, I doubt they would be a
problem, although I've never used a table with more than 20 - does anybody out
there know whether the number of fields in a table can create performance issues?

I'm having performance issues as well: high load averages from all the
postmaster processes. I performed two tests:

1) Lowered the number of fields/columns from 273 to 60. No effect.

2) Changed from adding a new record/row once per second to every other
second. The load average dropped 30-40%.

I plan to try raising the shared memory from 32 MB to 256 MB today.
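
For reference, a rough sketch of the two pieces this usually involves (numbers
purely illustrative, and the postmaster needs a restart afterwards): first
raise the kernel's shared memory ceiling, e.g. on Linux as root

sysctl -w kernel.shmmax=268435456

(that is 256 MB), and then ask PostgreSQL for a bigger buffer cache in
postgresql.conf, remembering that shared_buffers is counted in 8 kB pages:

shared_buffers = 16384         # 16384 x 8 kB = 128 MB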

--Chris
Nov 11 '05 #6
