
The database is very slow!

I currently have PostgreSQL 7.1 installed on a server with about 700
MB of RAM.

I have a lot of speed problems with a database I created. For example,
it takes almost 12 seconds to run the query "select * from table"
directly in PostgreSQL, on a table with 4000 records and 60 fields...
And the whole application built on this database is very, very slow
(some pages take almost 20 seconds to load!).

I verified the indexes and I think they are OK, and I tried to make my
queries as short as possible (no "select *", but "select field1,
field2, ...").

Still, I suspect there is a speed problem with the database itself,
because it doesn't seem normal to need 12 seconds to run a query on a
table with only 4000 records...

Does anybody have an idea?
Thanks
Krystoffff
Nov 11 '05 #1
pr******@silesky.com (krystoffff) writes:
> I currently have PostgreSQL 7.1 installed on a server with about 700
> MB of RAM.
>
> I have a lot of speed problems with a database I created. For example,
> it takes almost 12 seconds to run the query "select * from table"
> directly in PostgreSQL, on a table with 4000 records and 60 fields...
> And the whole application built on this database is very, very slow
> (some pages take almost 20 seconds to load!).
>
> I verified the indexes and I think they are OK, and I tried to make my
> queries as short as possible (no "select *", but "select field1,
> field2, ...").
>
> Still, I suspect there is a speed problem with the database itself,
> because it doesn't seem normal to need 12 seconds to run a query on a
> table with only 4000 records...


The perpetual first question...

"Have you run VACUUM ANALYZE?"

You may want to run VACUUM VERBOSE on the database and see how many
dead tuples get deleted; if a lot of data gets thrown away, that's a
good sign that things ought to get faster shortly.

ANALYZE will recalculate statistics that are used for query planning,
and that, too, is likely to be helpful.
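
A minimal sketch of the commands (7.x syntax; "yourtable" is just a
placeholder for one of your real table names):

    -- per-table: reclaim dead tuples, report what was done, update stats
    VACUUM VERBOSE ANALYZE yourtable;

    -- or the whole database in one shot
    VACUUM ANALYZE;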

If you can, head over to the [Performance] list, and supply some
sample queries with the output of running "EXPLAIN ANALYZE" on them.

Here's a sample for a query that nests together a whole bunch of
views.

portfolio=# explain analyze select * from cdnportfolio;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Subquery Scan cdnportfolio  (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.63..3330.07 rows=25 loops=1)
   ->  Aggregate  (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.62..3329.91 rows=25 loops=1)
         ->  Group  (cost=6311.41..11473.17 rows=224 width=153) (actual time=3327.38..3328.26 rows=39 loops=1)
               ->  Merge Join  (cost=6311.41..11472.61 rows=224 width=153) (actual time=3327.36..3327.99 rows=39 loops=1)
                     Merge Cond: ("outer".symbol = "inner".symbol)
                     ->  Sort  (cost=263.39..263.41 rows=8 width=102) (actual time=50.49..50.51 rows=39 loops=1)
                           Sort Key: stocks.symbol
                           ->  Hash Join  (cost=261.57..263.27 rows=8 width=102) (actual time=49.58..50.16 rows=39 loops=1)
                                 Hash Cond: ("outer".symbol = "inner".symbol)
                                 ->  Seq Scan on stocktxns  (cost=0.00..1.40 rows=40 width=31) (actual time=0.08..0.23 rows=40 loops=1)
                                 ->  Hash  (cost=261.55..261.55 rows=5 width=71) (actual time=49.40..49.40 rows=0 loops=1)
                                       ->  Hash Join  (cost=260.10..261.55 rows=5 width=71) (actual time=49.05..49.33 rows=25 loops=1)
                                             Hash Cond: ("outer".exchange = "inner".exchange)
                                             ->  Seq Scan on stocks  (cost=0.00..1.25 rows=25 width=15) (actual time=0.08..0.16 rows=25 loops=1)
                                             ->  Hash  (cost=260.10..260.10 rows=1 width=56) (actual time=48.87..48.87 rows=0 loops=1)
                                                   ->  Nested Loop  (cost=220.18..260.10 rows=1 width=56) (actual time=48.71..48.84 rows=4 loops=1)
                                                         ->  Merge Join  (cost=220.18..221.97 rows=7 width=29) (actual time=48.62..48.65 rows=4 loops=1)
                                                               Merge Cond: ("outer".from_curr = "inner".currency)
                                                               ->  Sort  (cost=219.10..219.94 rows=338 width=22) (actual time=48.35..48.35 rows=2 loops=1)
                                                                     Sort Key: lastcurrdate.from_curr
                                                                     ->  Subquery Scan lastcurrdate  (cost=0.00..204.92 rows=338 width=22) (actual time=0.51..48.24 rows=2 loops=1)
                                                                           ->  Aggregate  (cost=0.00..204.92 rows=338 width=22) (actual time=0.50..48.22 rows=2 loops=1)
                                                                                 ->  Group  (cost=0.00..196.47 rows=3377 width=22) (actual time=0.41..43.22 rows=3376 loops=1)
                                                                                       ->  Index Scan using currency_conversion_idx on currency_conversion  (cost=0.00..179.59 rows=3377 width=22) (actual time=0.40..28.93 rows=3376 loops=1)
                                                                                             Filter: (to_curr = 'CDN'::bpchar)
                                                               ->  Sort  (cost=1.08..1.09 rows=4 width=14) (actual time=0.24..0.24 rows=4 loops=1)
                                                                     Sort Key: exchanges.currency
                                                                     ->  Seq Scan on exchanges  (cost=0.00..1.04 rows=4 width=14) (actual time=0.10..0.12 rows=4 loops=1)
                                                         ->  Index Scan using currency_conversion_idx on currency_conversion  (cost=0.00..5.63 rows=1 width=27) (actual time=0.03..0.04 rows=1 loops=4)
                                                               Index Cond: (("outer".from_curr = currency_conversion.from_curr) AND (currency_conversion.to_curr = 'CDN'::bpchar) AND (currency_conversion.date = "outer".lastdate))
                     ->  Materialize  (cost=11203.81..11203.81 rows=990 width=51) (actual time=3276.83..3276.87 rows=53 loops=1)
                           ->  Merge Join  (cost=6048.02..11203.81 rows=990 width=51) (actual time=1797.46..3276.67 rows=39 loops=1)
                                 Merge Cond: (("outer".symbol = "inner".symbol) AND ("outer".date = "inner".lastdate))
                                 ->  Index Scan using stockprice_by_date on stockprices  (cost=0.00..4827.31 rows=104549 width=27) (actual time=0.46..921.54 rows=104549 loops=1)
                                 ->  Sort  (cost=6048.02..6074.15 rows=10455 width=16) (actual time=1713.56..1713.63 rows=39 loops=1)
                                       Sort Key: lastdate.symbol, lastdate.lastdate
                                       ->  Subquery Scan lastdate  (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.81..1713.10 rows=39 loops=1)
                                             ->  Aggregate  (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.80..1712.77 rows=39 loops=1)
                                                   ->  Group  (cost=0.00..5088.68 rows=104549 width=16) (actual time=0.03..1550.08 rows=104549 loops=1)
                                                         ->  Index Scan using stockprice_by_date on stockprices  (cost=0.00..4827.31 rows=104549 width=16) (actual time=0.02..1062.58 rows=104549 loops=1)
 Total runtime: 3332.18 msec
(41 rows)
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/postgresql.html
It's easy to identify people who can't count to ten. They're in front
of you in the supermarket express lane. -June Henderson
Nov 11 '05 #2
pr******@silesky.com (krystoffff) writes:
> I have a lot of speed problems with a database I created. For example,
> it takes almost 12 seconds to run the query "select * from table"
> directly in PostgreSQL, on a table with 4000 records and 60 fields...


When did you last VACUUM this table?

regards, tom lane


Nov 11 '05 #3
Thanks for your answers

I vacuumed the database just before the test, and it didn't change
anything... Sorry for not having mentioned it.

Any other suggestions?
Nov 11 '05 #4
Hi Krystoff,

I haven't read the other suggestions, but I'm running a db where some
tables and views have over a million records, without any issues, on
similar hardware.

Out of curiosity: have you actually performance-tuned your OS and the
postmaster (through postgresql.conf)? If you haven't done any of that,
I wouldn't be surprised the db is slow; I think the default install of
most Linux distros leaves you with 32 MB of shared memory, which makes
your 700 MB of RAM useless (a sketch of the kind of tuning I mean is
below). As far as the 60 fields are concerned, I doubt they would be a
problem, although I've never used a table with more than 20. Does
anybody out there know whether the number of fields in a table can
create performance issues?
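
A minimal sketch of that tuning, assuming a mostly dedicated box with
~700 MB of RAM (the values are illustrative only, not a recommendation;
in 7.1 shared_buffers is counted in 8 kB pages, and the postmaster must
be restarted, with a large enough kernel SHMMAX, for the change to take
effect):

    # postgresql.conf -- illustrative values only
    shared_buffers = 8192    # 8192 x 8 kB pages = 64 MB of buffer cache
    sort_mem = 8192          # per-sort working memory, in kB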

Lastly, how complicated are your indexes? If you have indexed every
field on that table, that could be an obvious issue (a quick way to
check is sketched below). How many fields in the table have foreign
key constraints?
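
To see what's actually defined, psql's "\d yourtable" shows the
indexes, or (assuming your version has the pg_indexes system view;
"yourtable" is a placeholder) something like:

    -- list every index on the table, with its definition
    SELECT indexname, indexdef
      FROM pg_indexes
     WHERE tablename = 'yourtable';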

Let me know how you go,

Jason

On Wed, 13 Aug 2003 11:22 pm, krystoffff wrote:
> Thanks for your answers
>
> I vacuumed the database just before the test, and it didn't change
> anything... Sorry for not having mentioned it.
>
> Any other suggestions?


Nov 11 '05 #5
> postmaster (through postgresql.conf)? If you haven't done any of that,
> I wouldn't be surprised the db is slow; I think the default install of
> most Linux distros leaves you with 32 MB of shared memory, which makes
> your 700 MB of RAM useless. As far as the 60 fields are concerned, I
> doubt they would be a problem, although I've never used a table with
> more than 20. Does anybody out there know whether the number of fields
> in a table can create performance issues?

I'm having performance issues also: high load averages on all
postmaster processes. I performed two tests:

1) Lowered the number of fields/columns from 273 to 60. No effect.

2) Changed from adding a new record/row once per second to every other
second. The load average dropped 30-40%.

I plan on trying to raise the shared memory from 32 MB to 256 MB
today; a sketch of how is below.
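
A minimal sketch of that change on Linux (the value is illustrative;
run as root, then restart the postmaster with the larger
shared_buffers setting):

    # allow up to 256 MB of shared memory, effective immediately
    echo 268435456 > /proc/sys/kernel/shmmax

    # or, to make it persist across reboots, add to /etc/sysctl.conf:
    #   kernel.shmmax = 268435456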

--Chris

Nov 11 '05 #6
