Bytes | Software Development & Data Engineering Community

The database is very slow!

I currently have PostgreSQL 7.1 installed on a server with about 700
MB of RAM.

I have many speed problems with a database I created. For example, it
took almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields ...
And the whole application built on this database is very, very slow
(some pages take almost 20 seconds to load!)

I verified the indexes, I think they are OK, and I tried to make my
queries as short as possible (not "select *" but "select field1,
field2, ...").

But anyway, I guess there is a speed problem with the database itself,
because it is not normal to need 12 seconds to run a query on a table
with only 4000 records ...

Does anybody have an idea?
Thanks
Krystoffff
Nov 11 '05 #1
pr******@silesky.com (krystoffff) writes:
I currently have PostgreSQL 7.1 installed on a server with about 700
MB of RAM.

I have many speed problems with a database I created. For example, it
took almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields ...
And the whole application built on this database is very, very slow
(some pages take almost 20 seconds to load!)

I verified the indexes, I think they are OK, and I tried to make my
queries as short as possible (not "select *" but "select field1,
field2, ...").

But anyway, I guess there is a speed problem with the database itself,
because it is not normal to need 12 seconds to run a query on a table
with only 4000 records ...


The perpetual first question...

"Have you run VACUUM ANALYZE?"

You may want to run VACUUM VERBOSE on the database, and see how many
dead tuples get deleted; if a lot of data gets thrown away, that's a
good sign that things ought to shortly get faster.

ANALYZE will recalculate statistics that are used for query planning,
and that, too, is likely to be helpful.
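In the 7.x series both steps can be combined into a single command; a
minimal sketch, where "mytable" is a placeholder for your own table name:

```sql
-- Reclaim dead tuples, report how many were removed (VERBOSE), and
-- refresh the planner's statistics (ANALYZE) in one pass
VACUUM VERBOSE ANALYZE mytable;
```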

If you can, head over to the [Performance] list, and supply some
sample queries with the output of running "EXPLAIN ANALYZE" on them.

Here's a sample for a query that nests together a whole bunch of
views.

portfolio=# explain analyze select * from cdnportfolio;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Subquery Scan cdnportfolio  (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.63..3330.07 rows=25 loops=1)
   ->  Aggregate  (cost=6311.41..11477.08 rows=22 width=153) (actual time=3327.62..3329.91 rows=25 loops=1)
         ->  Group  (cost=6311.41..11473.17 rows=224 width=153) (actual time=3327.38..3328.26 rows=39 loops=1)
               ->  Merge Join  (cost=6311.41..11472.61 rows=224 width=153) (actual time=3327.36..3327.99 rows=39 loops=1)
                     Merge Cond: ("outer".symbol = "inner".symbol)
                     ->  Sort  (cost=263.39..263.41 rows=8 width=102) (actual time=50.49..50.51 rows=39 loops=1)
                           Sort Key: stocks.symbol
                           ->  Hash Join  (cost=261.57..263.27 rows=8 width=102) (actual time=49.58..50.16 rows=39 loops=1)
                                 Hash Cond: ("outer".symbol = "inner".symbol)
                                 ->  Seq Scan on stocktxns  (cost=0.00..1.40 rows=40 width=31) (actual time=0.08..0.23 rows=40 loops=1)
                                 ->  Hash  (cost=261.55..261.55 rows=5 width=71) (actual time=49.40..49.40 rows=0 loops=1)
                                       ->  Hash Join  (cost=260.10..261.55 rows=5 width=71) (actual time=49.05..49.33 rows=25 loops=1)
                                             Hash Cond: ("outer".exchange = "inner".exchange)
                                             ->  Seq Scan on stocks  (cost=0.00..1.25 rows=25 width=15) (actual time=0.08..0.16 rows=25 loops=1)
                                             ->  Hash  (cost=260.10..260.10 rows=1 width=56) (actual time=48.87..48.87 rows=0 loops=1)
                                                   ->  Nested Loop  (cost=220.18..260.10 rows=1 width=56) (actual time=48.71..48.84 rows=4 loops=1)
                                                         ->  Merge Join  (cost=220.18..221.97 rows=7 width=29) (actual time=48.62..48.65 rows=4 loops=1)
                                                               Merge Cond: ("outer".from_curr = "inner".currency)
                                                               ->  Sort  (cost=219.10..219.94 rows=338 width=22) (actual time=48.35..48.35 rows=2 loops=1)
                                                                     Sort Key: lastcurrdate.from_curr
                                                                     ->  Subquery Scan lastcurrdate  (cost=0.00..204.92 rows=338 width=22) (actual time=0.51..48.24 rows=2 loops=1)
                                                                           ->  Aggregate  (cost=0.00..204.92 rows=338 width=22) (actual time=0.50..48.22 rows=2 loops=1)
                                                                                 ->  Group  (cost=0.00..196.47 rows=3377 width=22) (actual time=0.41..43.22 rows=3376 loops=1)
                                                                                       ->  Index Scan using currency_conversion_idx on currency_conversion  (cost=0.00..179.59 rows=3377 width=22) (actual time=0.40..28.93 rows=3376 loops=1)
                                                                                             Filter: (to_curr = 'CDN'::bpchar)
                                                               ->  Sort  (cost=1.08..1.09 rows=4 width=14) (actual time=0.24..0.24 rows=4 loops=1)
                                                                     Sort Key: exchanges.currency
                                                                     ->  Seq Scan on exchanges  (cost=0.00..1.04 rows=4 width=14) (actual time=0.10..0.12 rows=4 loops=1)
                                                         ->  Index Scan using currency_conversion_idx on currency_conversion  (cost=0.00..5.63 rows=1 width=27) (actual time=0.03..0.04 rows=1 loops=4)
                                                               Index Cond: (("outer".from_curr = currency_conversion.from_curr) AND (currency_conversion.to_curr = 'CDN'::bpchar) AND (currency_conversion.date = "outer".lastdate))
                     ->  Materialize  (cost=11203.81..11203.81 rows=990 width=51) (actual time=3276.83..3276.87 rows=53 loops=1)
                           ->  Merge Join  (cost=6048.02..11203.81 rows=990 width=51) (actual time=1797.46..3276.67 rows=39 loops=1)
                                 Merge Cond: (("outer".symbol = "inner".symbol) AND ("outer".date = "inner".lastdate))
                                 ->  Index Scan using stockprice_by_date on stockprices  (cost=0.00..4827.31 rows=104549 width=27) (actual time=0.46..921.54 rows=104549 loops=1)
                                 ->  Sort  (cost=6048.02..6074.15 rows=10455 width=16) (actual time=1713.56..1713.63 rows=39 loops=1)
                                       Sort Key: lastdate.symbol, lastdate.lastdate
                                       ->  Subquery Scan lastdate  (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.81..1713.10 rows=39 loops=1)
                                             ->  Aggregate  (cost=0.00..5350.05 rows=10455 width=16) (actual time=178.80..1712.77 rows=39 loops=1)
                                                   ->  Group  (cost=0.00..5088.68 rows=104549 width=16) (actual time=0.03..1550.08 rows=104549 loops=1)
                                                         ->  Index Scan using stockprice_by_date on stockprices  (cost=0.00..4827.31 rows=104549 width=16) (actual time=0.02..1062.58 rows=104549 loops=1)
 Total runtime: 3332.18 msec
(41 rows)
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc") )
http://cbbrowne.com/info/postgresql.html
It's easy to identify people who can't count to ten. They're in front
of you in the supermarket express lane. -June Henderson
Nov 11 '05 #2
pr******@silesky.com (krystoffff) writes:
I have many speed problems with a database I created. For example, it
took almost 12 seconds to run the query "select * from table" directly
in PostgreSQL, on a table with 4000 records and 60 fields ...


When did you last VACUUM this table?

regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #3
Thanks for your answers

I vacuumed the database just before a test, and it didn't change
anything ... Sorry not to have mentioned it.

Any other suggestions ?
Nov 11 '05 #4
Hi Krystoffff,

I haven't read the other suggestions, but I'm running a DB where some
tables and views have over a million records, without any issues, on
similar hardware.

Out of curiosity - have you actually performance-tuned your OS and the
postmaster (through postgresql.conf)? If you haven't done either of these,
then I wouldn't be surprised if the DB was slow - I think the default
install of most Linux distros leaves you with 32 MB of shared memory, which
makes your 700 MB of RAM useless. As far as 60 fields are concerned, I
doubt that would be a problem, although I've never used a table with more
than 20 - anybody out there know if the number of fields on a table can
create performance issues?
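For reference, a minimal sketch of the 7.1-era knobs involved; the values
below are illustrative assumptions, not tuned recommendations for any
particular workload:

```
# postgresql.conf (values are illustrative only)
shared_buffers = 8192     # in 8 kB pages, so roughly 64 MB of buffer cache
sort_mem = 4096           # per-sort working memory, in kB
```

Raising shared_buffers beyond the kernel's shared-memory limit will stop
the postmaster from starting, so the OS limit usually has to be raised
first.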

Lastly - how complicated are your indexes? If you have indexed every field
on that table, then that could be an obvious issue. How many fields in the
table have foreign key constraints?
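A quick way to check what is actually indexed (the table name is a
placeholder; "\d mytable" in psql shows much the same information):

```sql
-- List the indexes defined on a table via the pg_indexes system view
SELECT indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'mytable';
```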

Let me know how you go,

Jason

On Wed, 13 Aug 2003 11:22 pm, krystoffff wrote:
Thanks for your answers

I vacuumed the database just before a test, and it didn't change
anything ... Sorry not to have mentioned it.

Any other suggestions ?


Nov 11 '05 #5
postmaster (through postgresql.conf)? If you haven't done either of these,
then I wouldn't be surprised if the DB was slow - I think the default
install of most Linux distros leaves you with 32 MB of shared memory, which
makes your 700 MB of RAM useless. As far as 60 fields are concerned, I
doubt that would be a problem, although I've never used a table with more
than 20 - anybody out there know if the number of fields on a table can
create performance issues?

I'm having performance issues also: high load averages on all
postmaster processes. I performed two tests:

1) Lowered the number of fields/columns from 273 to 60. No effect.

2) Changed from adding a new record/row once per second to once every
other second. The load average dropped 30-40%.

I plan on trying to raise the shared memory from 32 MB to 256 MB today.
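For anyone following along, on Linux that usually means raising the
kernel's shared-memory ceiling before restarting the postmaster; the
path below is the common one, but check your distro. A quick sanity
check of the arithmetic:

```shell
# 256 MB expressed in bytes -- the value you would write (as root) into
# /proc/sys/kernel/shmmax, e.g.:
#   echo 268435456 > /proc/sys/kernel/shmmax
echo $((256 * 1024 * 1024))
```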

--Chris

Nov 11 '05 #6


