Bytes | Software Development & Data Engineering Community
Slow query

I have a query that uses the same view six times, and it appears that the
database engine recalculates the view on each use. The result is very poor
performance: the same query takes 2 seconds on MS SQL Server but more than
10 seconds on PostgreSQL. Is there a way to improve performance other than
merging the view's underlying tables into a single table? I believe that
caching the view's result would resolve this issue.

Yonatan Goraly
This is the query:

select h.*,
CASE WHEN f1.ID=-1 THEN '' ELSE f1.NAME ||
CASE WHEN f2.ID=-1 THEN '' ELSE ' > ' || f2.NAME ||
CASE WHEN f3.ID=-1 THEN '' ELSE ' > ' || f3.NAME ||
CASE WHEN f4.ID=-1 THEN '' ELSE ' > ' || f4.NAME ||
CASE WHEN f5.ID=-1 THEN '' ELSE ' > ' || f5.NAME ||
CASE WHEN f6.ID=-1 THEN '' ELSE ' > ' || f6.NAME END END END
END END END as PATH
from COMN_ATTR_HIERARCH h
join ALL_FOLDERS_VIEW_NAME f1 on h.FOLDER_ID_1=f1.ID
join ALL_FOLDERS_VIEW_NAME f2 on h.FOLDER_ID_2=f2.ID
join ALL_FOLDERS_VIEW_NAME f3 on h.FOLDER_ID_3=f3.ID
join ALL_FOLDERS_VIEW_NAME f4 on h.FOLDER_ID_4=f4.ID
join ALL_FOLDERS_VIEW_NAME f5 on h.FOLDER_ID_5=f5.ID
join ALL_FOLDERS_VIEW_NAME f6 on h.FOLDER_ID_6=f6.ID;
------------------------------------------------------------------
QUERY PLAN
Nested Loop  (cost=421.29..426.60 rows=1 width=511)
  Join Filter: ("outer".folder_id_6 = "inner".id)
  ->  Nested Loop  (cost=400.22..403.10 rows=1 width=447)
        Join Filter: ("outer".folder_id_5 = "inner".id)
        ->  Merge Join  (cost=379.16..379.35 rows=1 width=383)
              Merge Cond: ("outer".folder_id_4 = "inner".id)
              ->  Sort  (cost=355.34..355.36 rows=7 width=319)
                    Sort Key: h.folder_id_4
                    ->  Merge Join  (cost=354.74..355.23 rows=7 width=319)
                          Merge Cond: ("outer".folder_id_3 = "inner".id)
                          ->  Sort  (cost=330.92..331.05 rows=53 width=255)
                                Sort Key: h.folder_id_3
                                ->  Merge Join  (cost=326.71..329.41 rows=53 width=255)
                                      Merge Cond: ("outer".folder_id_2 = "inner".id)
                                      ->  Sort  (cost=302.89..303.84 rows=380 width=191)
                                            Sort Key: h.folder_id_2
                                            ->  Merge Join  (cost=268.08..286.62 rows=380 width=191)
                                                  Merge Cond: ("outer".folder_id_1 = "inner".id)
                                                  ->  Sort  (cost=244.26..251.09 rows=2732 width=127)
                                                        Sort Key: h.folder_id_1
                                                        ->  Seq Scan on comn_attr_hierarch h  (cost=0.00..88.32 rows=2732 width=127)
                                                  ->  Sort  (cost=23.82..23.89 rows=28 width=36)
                                                        Sort Key: f1.id
                                                        ->  Subquery Scan f1  (cost=21.07..23.15 rows=28 width=36)
                                                              ->  Unique  (cost=21.07..23.15 rows=28 width=36)
                                                                    ->  Sort  (cost=21.07..21.76 rows=278 width=36)
                                                                          Sort Key: id, name
                                                                          ->  Append  (cost=0.00..9.78 rows=278 width=36)
                                                                                ->  Subquery Scan "*SELECT* 1"  (cost=0.00..3.15 rows=115 width=26)
                                                                                      ->  Seq Scan on ent_folder  (cost=0.00..3.15 rows=115 width=26)
                                                                                ->  Subquery Scan "*SELECT* 2"  (cost=0.00..1.25 rows=25 width=25)
                                                                                      ->  Seq Scan on ent_mkt_segment  (cost=0.00..1.25 rows=25 width=25)
                                                                                ->  Subquery Scan "*SELECT* 3"  (cost=0.00..3.26 rows=126 width=24)
                                                                                      ->  Seq Scan on ent_company  (cost=0.00..3.26 rows=126 width=24)
                                                                                ->  Subquery Scan "*SELECT* 4"  (cost=0.00..1.11 rows=11 width=25)
                                                                                      ->  Seq Scan on ent_product  (cost=0.00..1.11 rows=11 width=25)
                                                                                ->  Subquery Scan "*SELECT* 5"  (cost=0.00..1.01 rows=1 width=36)
                                                                                      ->  Seq Scan on ent_environment_type  (cost=0.00..1.01 rows=1 width=36)
                                      ->  Sort  (cost=23.82..23.89 rows=28 width=36)
                                            Sort Key: f2.id
                                            ->  Subquery Scan f2  (cost=21.07..23.15 rows=28 width=36)
                                                  [identical Unique -> Sort -> Append subplan over the five ent_* tables, as under f1]
                          ->  Sort  (cost=23.82..23.89 rows=28 width=36)
                                Sort Key: f3.id
                                ->  Subquery Scan f3  (cost=21.07..23.15 rows=28 width=36)
                                      [identical Unique -> Sort -> Append subplan over the five ent_* tables, as under f1]
              ->  Sort  (cost=23.82..23.89 rows=28 width=36)
                    Sort Key: f4.id
                    ->  Subquery Scan f4  (cost=21.07..23.15 rows=28 width=36)
                          [identical Unique -> Sort -> Append subplan over the five ent_* tables, as under f1]
        ->  Subquery Scan f5  (cost=21.07..23.15 rows=28 width=36)
              [identical Unique -> Sort -> Append subplan over the five ent_* tables, as under f1]
  ->  Subquery Scan f6  (cost=21.07..23.15 rows=28 width=36)
        [identical Unique -> Sort -> Append subplan over the five ent_* tables, as under f1]
------------------------------------------------------------------
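The figures above are only the planner's cost estimates. On PostgreSQL 7.2 and later, EXPLAIN ANALYZE runs the query and reports actual times and row counts for each plan node, which makes it easier to see where the time is going; a cut-down version of the query for illustration:

```sql
EXPLAIN ANALYZE
SELECT h.*
FROM COMN_ATTR_HIERARCH h
JOIN ALL_FOLDERS_VIEW_NAME f1 ON h.FOLDER_ID_1 = f1.ID;
```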
This is the view:

create view ALL_FOLDERS_VIEW_NAME as
select ID, NAME
from ENT_FOLDER
union
select ID, NAME
from ENT_MKT_SEGMENT
union
select ID, NAME
from ENT_COMPANY
union
select ID, NAME
from ENT_PRODUCT
union
select ID, NAME
from ENT_ENVIRONMENT_TYPE;
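One way to get the caching the question asks about is to materialize the view's rows into a temporary table once per session and join against that instead of the view. A sketch (the table and index names here are illustrative, not part of the original schema):

```sql
-- Compute the UNION once and cache the result for this session
CREATE TEMP TABLE all_folders_cache AS
    SELECT ID, NAME FROM ALL_FOLDERS_VIEW_NAME;

-- Index the join column so the six lookups stay cheap,
-- and give the planner statistics for the new table
CREATE INDEX all_folders_cache_id_idx ON all_folders_cache (ID);
ANALYZE all_folders_cache;

-- Then reference all_folders_cache instead of ALL_FOLDERS_VIEW_NAME
-- in the six joins of the original query.
```

Separately, if the IDs in the five ENT_* tables can never collide, defining the view with UNION ALL instead of UNION would remove the Sort/Unique step the plan shows being repeated for each of f1 through f6.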


Nov 12 '05 #1
On Mon, Oct 27, 2003 at 12:08:31AM +0200, Yonatan Goraly wrote:
I have a query that uses the same view 6 times. It seems that the
database engine is calculating the view each time.
The result is very poor performance. The same query takes 2 sec with MS
SQL, and more than 10 sec with PostgreSQL.
Is there a method to improve the performance besides merging the
components of the view into one table?


Say, have you ANALYZEd your tables?
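For reference, ANALYZE with no arguments refreshes the planner statistics for every table in the current database; it can also be pointed at a single table:

```sql
ANALYZE;                      -- all tables in the current database
ANALYZE comn_attr_hierarch;   -- just the hierarchy table
```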

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Gather the flower that today blooms cheerful and proud. Who knows if another will bloom tomorrow?"


Nov 12 '05 #2
Yes.

Alvaro Herrera wrote:
On Mon, Oct 27, 2003 at 12:08:31AM +0200, Yonatan Goraly wrote:

I have a query that uses the same view 6 times. It seems that the
database engine is calculating the view each time.
The result is very poor performance. The same query takes 2 sec with MS
SQL, and more than 10 sec with PostgreSQL.
Is there a method to improve the performance besides merging the
components of the view into one table?


Say, have you ANALYZEd your tables ?


Nov 12 '05 #3

This thread has been closed and replies have been disabled. Please start a new discussion.
