Performance problem on RH7.1

Hi All,
I have a problem with the performance of the production environment.
I have two db servers: one on my laptop (2 GHz, 1 GB RAM, WinXP, Cygwin,
Postgres 7.3.4) and one on a production server (2 GHz, 1 GB RAM, Ultra SCSI,
RH7.1, Postgres 7.3.2).

I load the same dump and run the same query on both computers. The
difference is substantial: the query takes 5 times longer on the
production server than on the laptop.

What can be the reason? Could anybody suggest something?
Thank you in advance.

Best regards,
-- Csaba Együd
Kernel parameters on the Linux server:
--------------------------------------
[root@db kernel]# pwd
/proc/sys/kernel
[root@db kernel]# cat shmall shmmax
134217728
134217728
[root@db kernel]#
The query:
----------
explain analyze select
    id, artnum, oldartnum, name, munitid, getupid, vtsz, vat, description,
    getupquantity, minstock,
    (select count(*) from t_prices
      where t_prices.productid = t_products.id) as pcount,
    round(get_stock(id,1)::numeric,2) as stockm,
    round(get_stock_getup(id,1)::numeric,2) as stockg,
    (select abbrev from t_munits where id = munitid) as munit,
    (select get_order_getup(id)) as deliverygetup,
    (select (select deliverydate from t_orders where id = orderid) as deliverydate
       from t_orderdetails
      where productid = t_products.id
        and not (select delivered from t_orders where id = orderid)
      limit 1) as deliverydate,
    (select abbrev from t_getups where id = getupid) as getup
from t_products
order by artnum;

QUERY PLAN on my laptop:
------------------------
 Sort  (cost=70.17..72.38 rows=885 width=184) (actual time=7264.00..7264.00 rows=885 loops=1)
   Sort Key: artnum
   ->  Seq Scan on t_products  (cost=0.00..26.85 rows=885 width=184) (actual time=21.00..7259.00 rows=885 loops=1)
         SubPlan
           ->  Aggregate  (cost=28.62..28.62 rows=1 width=0) (actual time=0.12..0.12 rows=1 loops=885)
                 ->  Index Scan using t_prices_productid on t_prices  (cost=0.00..28.60 rows=8 width=0) (actual time=0.05..0.10 rows=2 loops=885)
                       Index Cond: (productid = $0)
           ->  Seq Scan on t_munits  (cost=0.00..1.06 rows=1 width=32) (actual time=0.02..0.02 rows=1 loops=885)
                 Filter: (id = $1)
           ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=1.13..1.13 rows=1 loops=885)
           ->  Limit  (cost=0.00..149.06 rows=1 width=4) (actual time=0.09..0.09 rows=0 loops=885)
                 ->  Seq Scan on t_orderdetails  (cost=0.00..149.06 rows=1 width=4) (actual time=0.08..0.08 rows=0 loops=885)
                       Filter: ((productid = $0) AND (NOT (subplan)))
                       SubPlan
                         ->  Seq Scan on t_orders  (cost=0.00..1.27 rows=1 width=14) (actual time=0.00..0.00 rows=1 loops=107)
                               Filter: (id = $2)
                         ->  Seq Scan on t_orders  (cost=0.00..1.27 rows=1 width=1) (actual time=0.02..0.03 rows=1 loops=107)
                               Filter: (id = $2)
           ->  Seq Scan on t_getups  (cost=0.00..1.16 rows=1 width=32) (actual time=0.01..0.02 rows=1 loops=885)
                 Filter: (id = $3)
 Total runtime: 7265.00 msec

QUERY PLAN on the production server:
------------------------------------
 Sort  (cost=70.17..72.38 rows=885 width=121) (actual time=36729.92..36730.18 rows=885 loops=1)
   Sort Key: artnum
   ->  Seq Scan on t_products  (cost=0.00..26.85 rows=885 width=121) (actual time=45.16..36724.73 rows=885 loops=1)
         SubPlan
           ->  Aggregate  (cost=9.06..9.06 rows=1 width=0) (actual time=0.15..0.15 rows=1 loops=885)
                 ->  Index Scan using t_prices_productid on t_prices  (cost=0.00..9.05 rows=2 width=0) (actual time=0.12..0.14 rows=2 loops=885)
                       Index Cond: (productid = $0)
           ->  Seq Scan on t_munits  (cost=0.00..1.06 rows=1 width=5) (actual time=0.04..0.04 rows=1 loops=885)
                 Filter: (id = $1)
           ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.80..0.80 rows=1 loops=885)
           ->  Limit  (cost=0.00..149.06 rows=1 width=4) (actual time=0.08..0.08 rows=0 loops=885)
                 ->  Seq Scan on t_orderdetails  (cost=0.00..149.06 rows=1 width=4) (actual time=0.07..0.08 rows=0 loops=885)
                       Filter: ((productid = $0) AND (NOT (subplan)))
                       SubPlan
                         ->  Seq Scan on t_orders  (cost=0.00..1.27 rows=1 width=14) (actual time=0.01..0.02 rows=1 loops=107)
                               Filter: (id = $2)
                         ->  Seq Scan on t_orders  (cost=0.00..1.27 rows=1 width=1) (actual time=0.01..0.02 rows=1 loops=107)
                               Filter: (id = $2)
           ->  Seq Scan on t_getups  (cost=0.00..1.16 rows=1 width=11) (actual time=0.03..0.04 rows=1 loops=885)
                 Filter: (id = $3)
 Total runtime: 36730.67 msec

Nov 23 '05 #1
Együd Csaba <cs*****@vnet.hu> writes:
> I have a problem with the performance of the production environment.
> I have two db servers: one on my laptop (2 GHz, 1 GB RAM, WinXP, Cygwin,
> Postgres 7.3.4) and one on a production server (2 GHz, 1 GB RAM, Ultra SCSI,
> RH7.1, Postgres 7.3.2).


Are you using the same postgresql.conf settings on both?

regards, tom lane


Nov 23 '05 #2
Hi Tom,
Thank you for your reply.

No, I do not. On the production server I have higher values for sort_mem
(32768) and shared_buffers (2048).
The other settings are the same.
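For reference, both settings can be checked from psql (the unit notes are
my reading, not from the config file itself):

    SHOW sort_mem;         -- per-sort memory in kB; 32768 = 32 MB
    SHOW shared_buffers;   -- 8 kB pages; 2048 pages = 16 MB of buffer cache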

bye,
-- Csaba Együd



Nov 23 '05 #3
On Sat, 2004-06-26 at 04:16, Együd Csaba wrote:
> Hi All,
> I have a problem with the performance of the production environment.
> I have two db servers: one on my laptop (2 GHz, 1 GB RAM, WinXP, Cygwin,
> Postgres 7.3.4) and one on a production server (2 GHz, 1 GB RAM, Ultra SCSI,
> RH7.1, Postgres 7.3.2).
>
> I load the same dump and run the same query on both computers. The
> difference is substantial: the query takes 5 times longer on the
> production server than on the laptop.
>
> What can be the reason? Could anybody suggest something?
> Thank you in advance.
>
> QUERY PLAN on my laptop:
> ------------------------
>  Sort  (cost=70.17..72.38 rows=885 width=184) (actual time=7264.00..7264.00 rows=885 loops=1)
>    Sort Key: artnum
>    ->  Seq Scan on t_products  (cost=0.00..26.85 rows=885 width=184) (actual time=21.00..7259.00 rows=885 loops=1)
>
> QUERY PLAN on the production server:
> ------------------------------------
>  Sort  (cost=70.17..72.38 rows=885 width=121) (actual time=36729.92..36730.18 rows=885 loops=1)
>    Sort Key: artnum
>    ->  Seq Scan on t_products  (cost=0.00..26.85 rows=885 width=121) (actual time=45.16..36724.73 rows=885 loops=1)


This is the only real difference between the two, the time it's taking
to seq scan that table. Have you done a vacuum full on it lately? If
the table is the same size on the disk, but is taking 5 times longer on
the production server, then something on that machine is broken.
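For reference, a minimal sketch of the maintenance commands in question
(VERBOSE is optional, but it reports how many pages the table occupies):

    VACUUM FULL VERBOSE t_products;   -- compact the table, reclaim dead space
    ANALYZE t_products;               -- refresh the planner statistics

If the page counts differ wildly between the two machines, table bloat is
the likely culprit.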

Nov 23 '05 #4
On Sat, Jun 26, 2004 at 12:16:17PM +0200, Együd Csaba wrote:
> I have a problem with the performance of the production environment.
> I have two db servers: one on my laptop (2 GHz, 1 GB RAM, WinXP, Cygwin,
> Postgres 7.3.4) and one on a production server (2 GHz, 1 GB RAM, Ultra SCSI,
> RH7.1, Postgres 7.3.2).
>
> I load the same dump and run the same query on both computers. The
> difference is substantial: the query takes 5 times longer on the
> production server than on the laptop.


Are both databases properly vacuumed? Did you try a VACUUM FULL? I'm
wondering why it takes a lot of time seqscanning the t_products table in
the production server compared to the laptop.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Hoy es el primer día del resto de mi vida"

Nov 23 '05 #5
Hi,
yes, it is vacuumed regularly, once a day. And a vacuum full is done once a
week.

The reason for the slow seq scan is the two stored procedures in the field
list (get_stock and get_stock_getup). These take 13-20 ms every time they
are executed. Multiplied by the number of rows, that gives 11-18 sec.

It is strange that the laptop is substantially faster than the server. The
get_stock* functions execute 2-3 times faster there. This is one reason, but
I think it isn't enough; there must be something more to it. Next time I'll
try to run fsck on the data partition. Maybe it will show something wrong.

Thank you all.
Best regards,
-- Csaba Együd


Nov 23 '05 #6
Együd Csaba <cs*****@vnet.hu> writes:
> It is strange that the laptop is substantially faster than the server. The
> get_stock* functions execute 2-3 times faster there.


So what do those stored procedures do exactly?

What it smells like to me is a bad plan for a query executed in one of
the stored procedures, but it's hard to theorize with no data.

regards, tom lane


Nov 23 '05 #7
Hi,
here is one of the stored procedures (the other is almost the same - it
queries quantity instead of getup). I ran EXPLAIN ANALYZE on the queries
called from the stored procedure.
Thanks.

bye,
-- cs.

*********************************************************************
alumiltmp=# explain analyze select
round(get_stock_getup(234,1,'2004.06.28')::numeric,2);
NOTICE:  select date,time from t_stockchanges where stockid='1' and
productid='234' and date<='2004.06.28' and changeid=1 order by time desc
limit 1;
NOTICE:  select dir, sum(getup) as getup from (select getup, (select dir
from t_changes where id = changeid) as dir from t_stockchanges where
productid='234' and stockid='1' and date>='2004.06.01 ' and
date<='2004.06.28' order by time) as foo group by dir
                                QUERY PLAN
------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=13.97..13.97 rows=1 loops=1)
 Total runtime: 13.99 msec
(2 rows)

*********************************************************************

alumiltmp=# EXPLAIN ANALYZE select date,time from t_stockchanges where
stockid='1' and productid='234' and date<='2004.06.28' and changeid=1
order by time desc limit 1;
                                QUERY PLAN
------------------------------------------------------------------------------
 Limit  (cost=28.84..28.84 rows=1 width=46) (actual time=9.10..9.10 rows=1 loops=1)
   ->  Sort  (cost=28.84..28.86 rows=7 width=46) (actual time=9.10..9.10 rows=2 loops=1)
         Sort Key: "time"
         ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
               (cost=0.00..28.74 rows=7 width=46) (actual time=0.14..9.03 rows=6 loops=1)
               Index Cond: ((date <= '2004.06.28'::bpchar) AND (stockid = 1)
                 AND (productid = 234) AND (changeid = 1))
 Total runtime: 9.17 msec
(6 rows)

*********************************************************************

alumiltmp=# EXPLAIN ANALYZE select dir, sum(getup) as getup from (select
getup, (select dir from t_changes where id = changeid) as dir from
t_stockchanges where productid='234' and stockid='1' and
date>='2004.06.01 ' and date<='2004.06.28' order by time) as foo
group by dir;
                                QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=6.92..6.93 rows=1 width=38) (actual time=1.63..1.65 rows=2 loops=1)
   ->  Group  (cost=6.92..6.92 rows=1 width=38) (actual time=1.62..1.63 rows=7 loops=1)
         ->  Sort  (cost=6.92..6.92 rows=1 width=38) (actual time=1.61..1.62 rows=7 loops=1)
               Sort Key: dir
               ->  Subquery Scan foo  (cost=6.90..6.91 rows=1 width=38) (actual time=1.55..1.56 rows=7 loops=1)
                     ->  Sort  (cost=6.90..6.91 rows=1 width=38) (actual time=1.55..1.55 rows=7 loops=1)
                           Sort Key: "time"
                           ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
                                 (cost=0.00..6.89 rows=1 width=38) (actual time=0.07..1.52 rows=7 loops=1)
                                 Index Cond: ((date >= '2004.06.01 '::bpchar) AND (date <= '2004.06.28'::bpchar)
                                   AND (stockid = 1) AND (productid = 234))
                                 SubPlan
                                   ->  Seq Scan on t_changes  (cost=0.00..1.16 rows=1 width=5)
                                         (actual time=0.01..0.01 rows=1 loops=7)
                                         Filter: (id = $0)
 Total runtime: 1.78 msec
(13 rows)
*********************************************************************

create or replace function "get_stock_getup" (int, int, text) returns
numeric as'
declare
    ProductID alias for $1;
    StockID alias for $2;
    ADate alias for $3;

    OpenTime text;
    q text;
    R record;
    retval numeric;
begin
    OpenTime := '''';
    -- Find the last opening-stock entry of the product in the given stock.
    -- If there is none, process the data from the beginning of the stock.
    q := ''select date,time from t_stockchanges where '' ||
         ''stockid='' || quote_literal(StockID) || '' and '' ||
         ''productid='' || quote_literal(ProductID) || '' and '' ||
         ''date<='' || quote_literal(ADate) || '' and '' ||
         ''changeid=1 order by time desc limit 1;'';
    -- raise notice ''%'',q;
    for R in execute q loop
        OpenTime := R.date;
    end loop;
    -- raise notice ''%'', OpenTime;

    -- If OpenTime is null, i.e. there was no opening entry, we get back
    -- all the records.
    retval := 0;
    q := ''select dir, sum(getup) as getup from (select getup, (select dir
         from t_changes where id = changeid) as dir '' ||
         ''from t_stockchanges where productid='' || quote_literal(ProductID) || '' and '' ||
         ''stockid='' || quote_literal(StockID) || '' and '' ||
         ''date>='' || quote_literal(OpenTime) || '' and date<='' || quote_literal(ADate) ||
         '' order by time) as foo group by dir'';
    -- raise notice ''%'',q;

    for R in execute q loop
        if R.dir=''+'' then
            retval := retval + R.getup;
        end if;
        if R.dir=''-'' then
            retval := retval - R.getup;
        end if;
    end loop;

    return retval;
end;
' LANGUAGE 'plpgsql';



Nov 23 '05 #8
Együd Csaba <cs*****@vnet.hu> writes:
> here is one of the stored procedures (the other is almost the same - it
> queries quantity instead of getup). I ran EXPLAIN ANALYZE on the queries
> called from the stored procedure.

The major time sink is clearly here:

   ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
         (cost=0.00..28.74 rows=7 width=46) (actual time=0.14..9.03 rows=6 loops=1)
         Index Cond: ((date <= '2004.06.28'::bpchar) AND (stockid = 1)
           AND (productid = 234) AND (changeid = 1))


and I think the problem is you've not chosen the index very well. Using
date as the most significant index column is simply the wrong way to do
this query --- the thing is going to start at the beginning of time and
scan forward over *all* index entries until it reaches a date greater
than the cutoff. What you want is date as the least significant index
column, so that you don't have to scan entries for irrelevant stocks at
all. Also you should probably put time into the index (in fact, why do
you have separate date and time fields at all?). What you really want
here is an index on (stockid, productid, changeid, date, time) and to
get a backwards indexscan with no sort step. It'd have to look like

where stockid='1' and productid='234' and changeid=1 and date<='2004.06.28'
order by stockid desc, productid desc, changeid desc, date desc, time desc
limit 1
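
For concreteness, the index itself would be declared along these lines
(the index name here is just illustrative):

    create index t_stockchanges_spcdt
        on t_stockchanges (stockid, productid, changeid, date, time);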

I'd also suggest dropping the EXECUTE approach, as this is costing you
a re-plan on every call without buying much of anything.

A larger issue is whether you shouldn't forego the stored procedures
entirely and convert the whole problem into a join. The way you are
doing things now is essentially a forced nested-loop join between the
table traversed by the outer query and the table examined by the stored
procedures. Nested-loop is often the least efficient way to do a join.
But that could get pretty messy notationally, and I'm not sure how much
win there would be.
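
To illustrate the idea on a small piece of the original query: the scalar
subselect on t_munits could become an outer join instead (a sketch only,
using the columns shown earlier):

    select p.id, p.artnum, m.abbrev as munit
      from t_products p
      left join t_munits m on m.id = p.munitid
     order by p.artnum;

The stock computations would be the messy part, since each get_stock* call
hides an aggregate over t_stockchanges.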

regards, tom lane


Nov 23 '05 #9
> The major time sink is clearly here:
>
>    ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
>          (cost=0.00..28.74 rows=7 width=46) (actual time=0.14..9.03 rows=6 loops=1)
>          Index Cond: ((date <= '2004.06.28'::bpchar) AND (stockid = 1)
>            AND (productid = 234) AND (changeid = 1))

Yes, it must be there.

> and I think the problem is you've not chosen the index very well. Using
> date as the most significant index column is simply the wrong way to do
> this query

You are right. I hadn't thought about this yet, and to tell the truth this
index is a "left there" index from the early development times. I haven't
reviewed it since I made it.

> --- the thing is going to start at the beginning of time and
> scan forward over *all* index entries until it reaches a date greater
> than the cutoff. What you want is date as the least significant index
> column, so that you don't have to scan entries for irrelevant stocks at
> all. Also you should probably put time into the index (in fact, why do
> you have separate date and time fields at all?). What you really want
> here is an index on (stockid, productid, changeid, date, time) and to
> get a backwards indexscan with no sort step. It'd have to look like
>
> where stockid='1' and productid='234' and changeid=1 and date<='2004.06.28'
> order by stockid desc, productid desc, changeid desc, date desc, time desc
> limit 1

It is a good idea and I will do it this way.

> I'd also suggest dropping the EXECUTE approach, as this is costing you
> a re-plan on every call without buying much of anything.

Do you mean I should use PERFORM instead? Or what else?
Do you mean the "for R in execute" statements? How else can I run a
dynamic query?

> A larger issue is whether you shouldn't forego the stored procedures
> entirely and convert the whole problem into a join. The way you are
> doing things now is essentially a forced nested-loop join between the
> table traversed by the outer query and the table examined by the stored
> procedures. Nested-loop is often the least efficient way to do a join.
> But that could get pretty messy notationally, and I'm not sure how much
> win there would be.

I use stored procedures because it is a clearer and simpler way than always
writing big, complex queries with a lot of joins, etc. I know it has its
price as well.
On the other hand, you have lit up something in my mind, so I will think
about it seriously. I wish I had some time to do so...

Bye,
-- Csaba Együd


Nov 23 '05 #10
Együd Csaba <cs*****@vnet.hu> writes:
>> I'd also suggest dropping the EXECUTE approach, as this is costing you
>> a re-plan on every call without buying much of anything.
> Do you mean I should use PERFORM instead? Or what else?
> Do you mean the "for R in execute" statements? How else can I run a
> dynamic query?


No, I mean the most straightforward way:

for R in select ... where stockid = $1 and ...

This lets plpgsql cache the plan for the SELECT.
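
Applied to your get_stock_getup, the first lookup might look something like
this (a sketch, untested; the positional parameters $1..$3 avoid any clash
between the StockID/ProductID aliases and the same-named columns):

    -- $1 = ProductID, $2 = StockID, $3 = ADate
    for R in select date, time from t_stockchanges
              where stockid = $2 and productid = $1
                and date <= $3 and changeid = 1
              order by time desc limit 1
    loop
        OpenTime := R.date;
    end loop;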

regards, tom lane


Nov 23 '05 #11
Hi Tom,
I made the modifications you suggested to t_stockchanges_fullindex and the
result tells everything:

---------
explain analyze select date,time from t_stockchanges where stockid='1' and
productid='234' and date<='2004.06.29' and changeid=1 order by stockid,
productid, changeid, date, time desc limit 1;
---------
                                QUERY PLAN
 Limit  (cost=30.28..30.28 rows=1 width=58) (actual time=0.19..0.19 rows=1 loops=1)
   ->  Sort  (cost=30.28..30.30 rows=7 width=58) (actual time=0.18..0.18 rows=2 loops=1)
         Sort Key: stockid, productid, changeid, date, "time"
         ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
               (cost=0.00..30.18 rows=7 width=58) (actual time=0.04..0.08 rows=6 loops=1)
               Index Cond: ((stockid = 1) AND (productid = 234) AND (changeid = 1)
                 AND (date <= '2004.06.29'::bpchar))
 Total runtime: 0.25 msec
(Compared to 9.17 msec - 37 times faster!)
----------

Thank you very much, Tom! It was very kind of you!

Best regards,
-- Csaba Együd


Nov 23 '05 #12
Együd Csaba <cs*****@vnet.hu> writes:
>  Limit  (cost=30.28..30.28 rows=1 width=58) (actual time=0.19..0.19 rows=1 loops=1)
>    ->  Sort  (cost=30.28..30.30 rows=7 width=58) (actual time=0.18..0.18 rows=2 loops=1)
>          Sort Key: stockid, productid, changeid, date, "time"
>          ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
>                (cost=0.00..30.18 rows=7 width=58) (actual time=0.04..0.08 rows=6 loops=1)
>                Index Cond: ((stockid = 1) AND (productid = 234) AND (changeid = 1)
>                  AND (date <= '2004.06.29'::bpchar))
>  Total runtime: 0.25 msec
> (Compared to 9.17 msec - 37 times faster!)


Good, but you're not there yet --- the Sort step shouldn't be there at
all. You've still got some inconsistency between the ORDER BY and the
index. Check my example again.
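
That is, with every sort key descending, so that a backward scan of the
index satisfies the ordering directly (a sketch of the corrected query):

    select date, time from t_stockchanges
     where stockid='1' and productid='234' and changeid=1
       and date<='2004.06.29'
     order by stockid desc, productid desc, changeid desc, date desc, time desc
     limit 1;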

regards, tom lane


Nov 23 '05 #13
Hi Tom,
> Good, but you're not there yet --- the Sort step shouldn't be there at
> all. You've still got some inconsistency between the ORDER BY and the
> index. Check my example again.

yes, yes, I missed that, sorry. Don't mention the performance; I couldn't
see anything but the result. :)

In general I'd like to draw some conclusions. What kind of rules should I
keep in mind when I want to choose an appropriate index key? I ask because
I'm trying to optimize another query of mine and I'm facing some more
problems.

I have this query (note that the planner uses t_stockchanges_fullindex
instead of t_stockchanges_fullindex4, which is exactly what I would need):
========================
explain analyze select getup, (select dir from t_changes where id=changeid) as dir
from t_stockchanges where stockid='1' and productid='428' and
date>='2004.06.01' and date<='2004.06.29'
order by stockid, productid, date;
                                QUERY PLAN
 Sort  (cost=7.17..7.17 rows=1 width=46) (actual time=3.00..3.00 rows=5 loops=1)
   Sort Key: stockid, productid, date
   ->  Index Scan using t_stockchanges_fullindex on t_stockchanges
         (cost=0.00..7.16 rows=1 width=46) (actual time=1.00..3.00 rows=5 loops=1)
         Index Cond: ((date >= '2004.06.01'::bpchar) AND (date <= '2004.06.29'::bpchar)
           AND (stockid = 1) AND (productid = 428))
         SubPlan
           ->  Seq Scan on t_changes  (cost=0.00..1.16 rows=1 width=5)
                 (actual time=0.00..0.00 rows=1 loops=5)
                 Filter: (id = $0)
 Total runtime: 3.00 msec
========================

And these indexes:

========================
CREATE INDEX t_stockchanges_fullindex4
ON t_stockchanges
USING btree
(stockid, productid, date);

DROP INDEX t_stockchanges_fullindex3;
CREATE INDEX t_stockchanges_fullindex3
ON t_stockchanges
USING btree
(stockid, productid, changeid, date, time);

DROP INDEX t_stockchanges_fullindex;
CREATE INDEX t_stockchanges_fullindex
ON t_stockchanges
USING btree
(date, stockid, productid, changeid);
========================

If I delete the index t_stockchanges_fullindex, I get the following (better)
result:

========================
DROP INDEX t_stockchanges_fullindex;

explain analyze select getup, (select dir from t_changes where id=changeid) as dir
from t_stockchanges where stockid='1' and productid='428' and
date>='2004.06.01' and date<='2004.06.29'
order by stockid, productid, date;
                                QUERY PLAN
 Index Scan using t_stockchanges_fullindex4 on t_stockchanges
   (cost=0.00..7.33 rows=1 width=46) (actual time=0.00..0.00 rows=5 loops=1)
   Index Cond: ((stockid = 1) AND (productid = 428) AND (date >= '2004.06.01'::bpchar)
     AND (date <= '2004.06.29'::bpchar))
   SubPlan
     ->  Seq Scan on t_changes  (cost=0.00..1.16 rows=1 width=5)
           (actual time=0.00..0.00 rows=1 loops=5)
           Filter: (id = $0)
 Total runtime: 0.00 msec
========================
After recreating t_stockchanges_fullindex I get the first result again - so
it is not a matter of the creation order of similar indexes or anything like
that.
Is there any explicit way to make the server use an index of my choice? I
thought (from your examples) that it could be done by giving the "where" and
"order by" fields in the correct order, but now I seem to be making
mistakes.

Thank you for your patience!

Best regards,
-- Csaba


Nov 23 '05 #14
Együd Csaba <cs*****@vnet.hu> writes:
> In general I'd like to draw some conclusions. What kind of rules should I
> keep in mind when I want to choose an appropriate index key?
Generally you want '=' conditions on the leftmost index keys; any
inequality or range constraint should be on the rightmost keys. You can
see this by thinking about the range of index entries that the scan will
have to pass over.

Unfortunately I think the planner's cost model for indexscans is too
crude to recognize this fact (something else for the TODO list...).
It understands about index size and index selectivity, but given two
indexes on the same columns in different orders, I don't think it really
has the tools to make the right choice --- the cost estimates are going
to come out the same.
> Is there any explicit way to make the server use an index of my choice?


No, but in most cases choosing an ORDER BY clause that matches the index
order (ascending or descending depending on where you want the scan to
start) is a sufficiently heavy thumb on the scales. To meet the ORDER
BY when using the "wrong" index, the planner will have to add a Sort
step, and that is usually enough to push the estimated cost above the
cost of using the "right" index.

regards, tom lane


Nov 23 '05 #15
Hi,
> Generally you want '=' conditions on the leftmost index keys; any
> inequality or range constraint should be on the rightmost keys. You can
> see this by thinking about the range of index entries that the scan will
> have to pass over.

I see - just like in your earlier example, where you reduced the number of
index entries to pass over by putting stockid, productid and changeid
first.

Thank you very much.

Good luck,
-- Csaba Együd


Nov 23 '05 #16
