
Compound keys and foreign constraints

I am using PostgreSQL 7.4.1 on RedHat 7.2.

The query I am executing is something like (I replaced all the return values
with 'count'):

db=> explain select count(*) from messages m join (select * from
message_recipients r join addresses a on a.Address_Key=r.Recipient where
a.Address='l******@edi.cma-cgm.com') as foo on
(m.Message_Key=foo.Message_Key AND (m.Message_Date >= '29-MAR-04') AND
(m.Message_Date <= '31-MAR-04 23:59:59'::timestamp without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=75565.60..75565.60 rows=1 width=0)
   ->  Nested Loop  (cost=0.00..75565.60 rows=1 width=0)
         ->  Nested Loop  (cost=0.00..75380.70 rows=61 width=11)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..6.00 rows=1 width=11)
                     Index Cond: ((address)::text = 'l******@edi.cma-cgm.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..75135.99 rows=19097 width=21)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using messages_pkey on messages m  (cost=0.00..3.02 rows=1 width=11)
               Index Cond: (m.message_key = "outer".message_key)
               Filter: ((message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (message_date <= '2004-03-31 23:59:59'::timestamp without time zone))
(10 rows)

The problem is that the date filter only selects a couple of thousand
messages out of 350,000 - and that 350,000 is only about 10% of what the
table will grow to. Both message_key and message_date are indexed.

So, I created a compound index on (message_key, message_date) in the
messages table. I couldn't get it to use the new index no way, no how,
ever, even with a very simple query on exact match on message_key and date.
So I dropped the primary key on (message_key). Now it is using the new
compound key:
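(For reference, the compound index would have been created with something
like the sketch below - the name messages_i_id_date is taken from the plan
that follows; the exact DDL is assumed:)

db=> create index messages_i_id_date on messages (message_key, message_date);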

db=> explain select count(*) from messages m join (select * from
message_recipients r join addresses a on a.Address_Key=r.Recipient where
a.Address='l******@edi.cma-cgm.com') as foo on
(m.Message_Key=foo.Message_Key AND (m.Message_Date >= '29-MAR-04') AND
(m.Message_Date <= '31-MAR-04 23:59:59'::timestamp without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=73745.29..73745.29 rows=1 width=0)
   ->  Nested Loop  (cost=0.00..73745.26 rows=11 width=0)
         ->  Nested Loop  (cost=0.00..72011.44 rows=328 width=11)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..6.01 rows=1 width=11)
                     Index Cond: ((address)::text = 'l******@edi.cma-cgm.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..71776.72 rows=18297 width=21)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using messages_i_id_date on messages m  (cost=0.00..5.27 rows=1 width=11)
               Index Cond: ((m.message_key = "outer".message_key) AND (m.message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (m.message_date <= '2004-03-31 23:59:59'::timestamp without time zone))
(9 rows)

And performance is much better. But dropping the original primary key
forced me to drop my foreign key constraints:

NOTICE: constraint $1 on table restored_messages depends on index messages_pkey
NOTICE: constraint $1 on table message_recipients depends on index messages_pkey
ERROR: cannot drop constraint messages_pkey on table messages because other objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.

db=> alter table messages drop constraint messages_pkey cascade;

Herein lies my problem... I cannot reestablish the foreign key constraints:

db=> alter table restored_messages add constraint "$1" foreign key
(message_key) references messages(message_key);
ERROR: there is no unique constraint matching given keys for referenced
table "messages"

It should be able to use the new compound index, as message_key is the top
index key. There is no date in the subordinate tables to include in the
foreign key reference - it must be on message_key only. If I have an index
on only message_key in messages, then the compound key never gets used and
my search performance takes a nose dive. How do I get there from here?

Wes

Nov 23 '05 #1
On Thu, 01 Apr 2004 10:53:40 -0600, <we****@syntegra.com> wrote:

> db=> explain select

EXPLAIN ANALYSE please ...

> ERROR: there is no unique constraint matching given keys for referenced
> table "messages"
> It should be able to use the new compound index, as message_key is the top
> index key.


Don't confuse the *concept* of unique constraints with the
*implementation detail* of unique btree indices. Per SQL standard you
need a unique constraint on the target column to make sure that only one
row matches any referencing value.

The target column being a part of a non-unique index (or even a unique
index) is not sufficient.
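In concrete terms - a sketch using the table names from this thread (the
constraint name messages_key_uniq is made up):

-- a unique constraint on message_key is what the foreign keys need;
-- it creates its own unique btree index under the covers
ALTER TABLE messages ADD CONSTRAINT messages_key_uniq UNIQUE (message_key);
ALTER TABLE restored_messages ADD CONSTRAINT "$1"
    FOREIGN KEY (message_key) REFERENCES messages (message_key);
-- the compound index can coexist with it for the date-range searches,
-- though whether the planner will pick it is a separate problem
CREATE INDEX messages_i_id_date ON messages (message_key, message_date);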

Servus
Manfred


Nov 23 '05 #2
On 4/1/04 4:19 PM, "Manfred Koizar" <mk*****@aon.at> wrote:

> EXPLAIN ANALYSE please ...

There are about 60 million rows in message_recipients and 20 million in
messages.

db=> explain analyze select count(*) from messages m join (select * from
db(> message_recipients r join addresses a on a.Address_Key=r.Recipient
where
db(> a.Address='l******@edi.cma-cgm.com') as foo on
db-> (m.Message_Key=foo.Message_Key AND (m.Message_Date >= '29-MAR-04') AND
db(> (m.Message_Date <= '31-MAR-04 23:59:59'::timestamp without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=242661.13..242661.13 rows=1 width=0) (actual time=353986.195..353986.196 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..242661.11 rows=7 width=0) (actual time=5054.582..353946.808 rows=8812 loops=1)
         ->  Nested Loop  (cost=0.00..242054.91 rows=200 width=11) (actual time=5024.098..36143.805 rows=312741 loops=1)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..6.00 rows=2 width=11) (actual time=74.493..75.240 rows=1 loops=1)
                     Index Cond: ((address)::text = 'l******@edi.cma-cgm.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..120641.68 rows=30622 width=21) (actual time=4949.587..35301.377 rows=312741 loops=1)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using messages_i_messageid on messages m  (cost=0.00..3.02 rows=1 width=11) (actual time=1.013..1.013 rows=0 loops=312741)
               Index Cond: (m.message_key = "outer".message_key)
               Filter: ((message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (message_date <= '2004-03-31 23:59:59'::timestamp without time zone))
 Total runtime: 353993.858 ms
(11 rows)

db=> drop index messages_i_messageid;
DROP INDEX
db=> explain analyze select count(*) from messages m join (select * from
db(> message_recipients r join addresses a on a.Address_Key=r.Recipient
where
db(> a.Address='l******@edi.cma-cgm.com') as foo on
db-> (m.Message_Key=foo.Message_Key AND (m.Message_Date >= '29-MAR-04') AND
db(> (m.Message_Date <= '31-MAR-04 23:59:59'::timestamp without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=243112.13..243112.13 rows=1 width=0) (actual time=93444.106..93444.106 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..243112.11 rows=7 width=0) (actual time=4806.221..93429.171 rows=8812 loops=1)
         ->  Nested Loop  (cost=0.00..242054.91 rows=200 width=11) (actual time=4726.583..19111.257 rows=312741 loops=1)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..6.00 rows=2 width=11) (actual time=40.610..40.616 rows=1 loops=1)
                     Index Cond: ((address)::text = 'l******@edi.cma-cgm.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..120641.68 rows=30622 width=21) (actual time=4685.957..18336.831 rows=312741 loops=1)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using messages_pkey on messages m  (cost=0.00..5.27 rows=1 width=11) (actual time=0.235..0.235 rows=0 loops=312741)
               Index Cond: ((m.message_key = "outer".message_key) AND (m.message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (m.message_date <= '2004-03-31 23:59:59'::timestamp without time zone))
 Total runtime: 93444.638 ms
(10 rows)

> Don't confuse the *concept* of unique constraints with the
> *implementation detail* of unique btree indices. Per SQL standard you
> need a unique constraint on the target column to make sure that only one
> row matches any referencing value.
>
> The target column being a part of a non-unique index (or even a unique
> index) is not sufficient.


In defining the compound key (message_key, message_date), only the
combination is guaranteed unique, not the top member. Duh... Ok... So
that gets back to the original problem - if I define a unique index on
message_key, even if not declared "primary", all queries use the slow filter
method instead of using my compound key. How do I coax it to use the
compound key - preferably without having to modify the application?

Wes

Nov 23 '05 #3
On Thu, 01 Apr 2004 16:45:23 -0600, <we****@SYNTEGRA.COM> wrote:

> db=> explain analyze select count(*) from messages m join (select * from
> db(> message_recipients r join addresses a on a.Address_Key=r.Recipient where
> db(> a.Address='l******@edi.cma-cgm.com') as foo on
> db-> (m.Message_Key=foo.Message_Key AND (m.Message_Date >= '29-MAR-04') AND
> db(> (m.Message_Date <= '31-MAR-04 23:59:59'::timestamp without time zone));


I wonder whether we should try to talk the planner into using a hash or
merge join, but I fear I don't know enough yet.

Is there an index on message_recipients(Message_Key)?

How many rows satisfy

SELECT * FROM messages WHERE Message_Date BETWEEN ... AND ... ?

And what are your settings for random_page_cost, effective_cache_size,
and sort_mem?
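
(The current values can be read back from a psql session with:)

SHOW random_page_cost;
SHOW effective_cache_size;
SHOW sort_mem;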

Servus
Manfred


Nov 23 '05 #4
On 4/1/04 6:00 PM, "Manfred Koizar" <mk*****@aon.at> wrote:

> And what are your settings for random_page_cost, effective_cache_size,
> and sort_mem?

I didn't read close enough last time. random_page_cost, cpu_tuple_cost,
cpu_index_tuple_cost, and cpu_operator_cost are all at default.
effective_cache_size is 50000, and sort_mem is 8192. shared_buffers=16384.

I've tried setting:

random_page_cost       1 - 20
cpu_tuple_cost         1 - .001
cpu_index_tuple_cost   1 - .00001
sort_mem               1000 - 65535
effective_cache_size   1000 - 100,000

Nothing gets it to use the (message_key, message_date) index if there is a
(message_key) index defined. The only thing that changed the plan at all
was when I changed random_page_cost to greater than 9 (see below).

Other than that, I am still in the catch 22 - index (message_key) is
required for the foreign key constraints, but index(message_key,
message_date) will never be used if index (message_key) is defined.

Is this a bug in the planner that can be fixed? It sure would be helpful if
I could specify a planner hint "use index xxx";
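
(There is no index hint in PostgreSQL; about the closest workaround is to
keep the cost experiments scoped to one session, e.g. this sketch:)

db=> set random_page_cost = 0.5;  -- affects only the current session
db=> explain analyze select ... ; -- re-check which index gets chosen
db=> reset random_page_cost;      -- back to the postgresql.conf value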

---

I just did some more testing. At random_page_cost=1, the trivial case picks
the compound index "messages_pkey", but the real case still does the
'filter' with the (message_key) index.

However, if I set random_page_cost to less than 1 (e.g. 0.5) then I can get
it to use the compound index.

Setting random_page_cost down from 4 to .5 seems like it wouldn't be a good
idea. However, at this point it seems to be the only solution.

Wes
db=> set random_page_cost=10;

db=> explain select count(*) from messages m, message_recipients r,
addresses a WHERE r.Message_Key=m.Message_Key AND
a.Address='j*******@testdomain.com' AND a.Address_Key=r.Recipient AND (
(m.Message_Date >= '29-MAR-04') AND (m.Message_Date <=
TO_TIMESTAMP('31-MAR-04 23:59:59', 'dd-mon-yy HH24:MI:SS')::timestamp
without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=595569.79..595569.79 rows=1 width=0)
   ->  Nested Loop  (cost=564647.77..595569.78 rows=2 width=0)
         Join Filter: ("outer".address_key = "inner".recipient)
         ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..11.97 rows=2 width=11)
               Index Cond: ((address)::text = 'j*******@testdomain.com'::text)
         ->  Materialize  (cost=564647.77..572920.00 rows=574623 width=10)
               ->  Nested Loop  (cost=0.00..562121.77 rows=574623 width=10)
                     ->  Index Scan using messages_i_mdate on messages m  (cost=0.00..123060.87 rows=100789 width=11)
                           Index Cond: ((message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (message_date <= (to_timestamp('31-MAR-04 23:59:59'::text, 'dd-mon-yy HH24:MI:SS'::text))::timestamp without time zone))
                     ->  Index Scan using message_recipients_i_message on message_recipients r  (cost=0.00..3.70 rows=52 width=21)
                           Index Cond: (r.message_key = "outer".message_key)

db=> set random_page_cost=1;
SET
Time: 0.342 ms
db=> explain select count(*) from messages where message_key=12345 and
(message_date = '2004-03-29 00:00:00'::timestamp without time zone);
                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=3.02..3.02 rows=1 width=0)
   ->  Index Scan using messages_pkey on messages  (cost=0.00..3.02 rows=1 width=0)
         Index Cond: ((message_key = 12345::numeric) AND (message_date = '2004-03-29 00:00:00'::timestamp without time zone))
(3 rows)

db=> explain analyze select count(*) from messages m, message_recipients r,
addresses a WHERE r.Message_Key=m.Message_Key AND
a.Address='j*******@testdomain.com' AND a.Address_Key=r.Recipient AND (
(m.Message_Date >= '29-MAR-04') AND (m.Message_Date <=
TO_TIMESTAMP('31-MAR-04 23:59:59', 'dd-mon-yy HH24:MI:SS')::timestamp
without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=62514.26..62514.26 rows=1 width=0) (actual time=336976.694..336976.694 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..62514.25 rows=2 width=0) (actual time=119.178..336959.210 rows=8812 loops=1)
         ->  Nested Loop  (cost=0.00..61907.05 rows=200 width=11) (actual time=83.232..32412.459 rows=312741 loops=1)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..3.01 rows=2 width=11) (actual time=0.074..0.517 rows=1 loops=1)
                     Index Cond: ((address)::text = 'j*******@testdomain.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..30569.25 rows=30622 width=21) (actual time=83.146..31609.149 rows=312741 loops=1)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using message_i_messagekey on messages m  (cost=0.00..3.02 rows=1 width=11) (actual time=0.971..0.971 rows=0 loops=312741)
               Index Cond: ("outer".message_key = m.message_key)
               Filter: ((message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (message_date <= (to_timestamp('31-MAR-04 23:59:59'::text, 'dd-mon-yy HH24:MI:SS'::text))::timestamp without time zone))
 Total runtime: 336978.528 ms
(11 rows)

Time: 337041.081 ms

db=> set random_page_cost=.5;
SET
Time: 3.626 ms
db=> explain analyze select count(*) from messages m, message_recipients r,
addresses a WHERE r.Message_Key=m.Message_Key AND
a.Address='j*******@testdomain.com' AND a.Address_Key=r.Recipient AND (
(m.Message_Date >= '29-MAR-04') AND (m.Message_Date <=
TO_TIMESTAMP('31-MAR-04 23:59:59', 'dd-mon-yy HH24:MI:SS')::timestamp
without time zone));

                                   QUERY PLAN
-------------------------------------------------------------------------------
 Aggregate  (cost=32416.62..32416.62 rows=1 width=0) (actual time=99493.809..99493.810 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..32416.61 rows=2 width=0) (actual time=4948.562..99470.992 rows=8812 loops=1)
         ->  Nested Loop  (cost=0.00..31882.41 rows=200 width=11) (actual time=4852.103..20184.508 rows=312741 loops=1)
               ->  Index Scan using addresses_i_address on addresses a  (cost=0.00..2.52 rows=2 width=11) (actual time=32.822..32.828 rows=1 loops=1)
                     Index Cond: ((address)::text = 'j*******@testdomain.com'::text)
               ->  Index Scan using message_recipients_i_recipient on message_recipients r  (cost=0.00..15557.18 rows=30622 width=21) (actual time=4819.266..19381.757 rows=312741 loops=1)
                     Index Cond: ("outer".address_key = r.recipient)
         ->  Index Scan using messages_pkey on messages m  (cost=0.00..2.66 rows=1 width=11) (actual time=0.239..0.239 rows=0 loops=312741)
               Index Cond: (("outer".message_key = m.message_key) AND (m.message_date >= '2004-03-29 00:00:00'::timestamp without time zone) AND (m.message_date <= (to_timestamp('31-MAR-04 23:59:59'::text, 'dd-mon-yy HH24:MI:SS'::text))::timestamp without time zone))
 Total runtime: 99493.941 ms
(10 rows)

Time: 99523.290 ms

Nov 23 '05 #5
On Fri, Apr 02, 2004 at 11:08:21AM -0600, we****@syntegra.com wrote:

> db=> explain analyze select count(*) from messages m, message_recipients r,
> addresses a WHERE r.Message_Key=m.Message_Key AND
> a.Address='j*******@testdomain.com' AND a.Address_Key=r.Recipient AND (
> (m.Message_Date >= '29-MAR-04') AND (m.Message_Date <=
> TO_TIMESTAMP('31-MAR-04 23:59:59', 'dd-mon-yy HH24:MI:SS')::timestamp
> without time zone));

> QUERY PLAN
> ------------------------------------------------------------------ [...]
>   ->  Index Scan using message_recipients_i_recipient on message_recipients r
>       (cost=0.00..30569.25 rows=30622 width=21)
>       (actual time=83.146..31609.149 rows=312741 loops=1)


This estimate is off by an order of magnitude. Maybe you want to
increase the statistic target for this column ...
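
(A sketch of doing that - the target value 100 is just an example:)

ALTER TABLE message_recipients ALTER COLUMN recipient SET STATISTICS 100;
ANALYZE message_recipients;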

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"The Postgresql hackers have what I call a "NASA space shot" mentality.
Quite refreshing in a world of "weekend drag racer" developers."
(Scott Marlowe)


Nov 23 '05 #6
On 4/2/04 12:33 PM, "Alvaro Herrera" <al******@dcc.uchile.cl> wrote:

>>   ->  Index Scan using message_recipients_i_recipient on message_recipients r
>>       (cost=0.00..30569.25 rows=30622 width=21)
>>       (actual time=83.146..31609.149 rows=312741 loops=1)
>
> This estimate is off by an order of magnitude. Maybe you want to
> increase the statistic target for this column ...


Please explain. The only thing I can find about this is:

"For scalar data types, attstattarget is both the target number of 'most
common values' to collect, and the target number of histogram bins to
create."

Why would this affect the row count estimations?

With 10 being the default, what would an appropriate value be?

I changed it to 100 for both message_key and recipient (in the
message_recipients table). That used the same plan and made the discrepancy
worse: 13,000 estimated vs. 312,000 actual:

   ->  Index Scan using message_recipients_i_recipient on message_recipients r
       (cost=0.00..13324.75 rows=13249 width=21)
       (actual time=4751.701..34373.808 rows=312741 loops=1)


And, it still only uses the compound index if I set random_page_cost to < 1.

Analyze is run regularly by auto-vacuum (multiple times a day - I need to
reduce this).

If I read this right, before changing the statistic target:
db=# select relname, relfilenode from pg_class where relname like
'message_recip%';

relname | relfilenode
--------------------------------+-------------
message_recipients_i_recipient | 17234
message_recipients_i_message | 17235
message_recipients | 17162
(3 rows)
db=> select attname, attrelid, attstattarget from pg_attribute where
attname='recipient';
attname | attrelid | attstattarget
-----------+----------+---------------
recipient | 17162 | -1
recipient | 17234 | 0
db=# select staattnum, stadistinct from pg_statistic where starelid=17162;
staattnum | stadistinct
-----------+-------------
1 | 1.11264e+06
2 | 3598
3 | 2
(3 rows)

db=# select attname, attstattarget, attnum from pg_attribute where
attrelid=17162;
attname | attstattarget | attnum
----------------+---------------+--------
tableoid | 0 | -7
cmax | 0 | -6
xmax | 0 | -5
cmin | 0 | -4
xmin | 0 | -3
oid | 0 | -2
ctid | 0 | -1
message_key | -1 | 1
recipient | -1 | 2
recipient_type | -1 | 3
There are 20+ million (unique) message_key records, so stadistinct for
column 1 is off by a factor of 20. There are just under 1.2 million unique
recipients (60 million total records), so column 2 (recipient) is off by a
factor of over 300.
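
(The actual distinct counts can be cross-checked directly - a slow full
scan on tables this size, but unambiguous:)

db=# select count(distinct message_key) from message_recipients;
db=# select count(distinct recipient) from message_recipients;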

In the messages table, message_key (which is unique) has stadistinct = -1
db=# alter table message_recipients alter column recipient set statistics
100;
db=# alter table message_recipients alter column message_key set statistics
100;
db=# analyze;

db=# select distinct relname, attname, attnum, attstattarget, stadistinct
from pg_class c, pg_attribute a, pg_statistic s where
c.relfilenode=a.attrelid and c.relfilenode=starelid and a.attnum=s.staattnum
and relname='message_recipients' order by attnum;

relname | attname | attnum | attstattarget | stadistinct
--------------------+----------------+--------+---------------+-------------
message_recipients | message_key | 1 | 100 | 2.19256e+06
message_recipients | recipient | 2 | 100 | 8672
message_recipients | recipient_type | 3 | -1 | 2

Stadistinct is about twice what it was before, but is still way off the
number of distinct values for that column. Manually setting stadistinct
closer to the number of unique values makes the discrepancy still larger.

Wes

Nov 23 '05 #7
Wes,

it's extremely hard for me to understand what's going on here. My
thoughts are running in circles and every time I think I can almost grasp
the problem it slips through my fingers again. Maybe I'm just getting
old, or it's the weather?

It looks like you managed to write a query with several closely
interwoven problems contributing to its poor performance.

..) There is a known problem in the optimizer, that it does not account
properly for cache hits in a nested loop with inner index scan. This
has been discussed before, but nobody has sent a patch so far.

..) Each access to messages in the inner scan of the nested loop cannot
return more than one row, because message_key is unique. It may return
no row, however, when the message identified by the key doesn't fall into
the given date range, and it does this 97% of the time. Using the compound
index is cheaper, because if the index tuple shows that the date doesn't
match, there's no need to fetch the heap tuple. Unfortunately the
planner doesn't believe that this is possible:

    /* Don't believe estimates less than 1... */
    if (tuples_fetched < 1.0)
        tuples_fetched = 1.0;

..) Due to issues with multi-column index correlation the optimizer
tends to prefer indexes with fewer columns.

This leads me to one more desperate recommendation:

CREATE INDEX use_me ON messages(message_date, message_key);
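
(Whether the planner is willing to touch it can be probed with a plain
date-range query - a sketch, using the range from the earlier posts:)

EXPLAIN SELECT count(*) FROM messages
WHERE message_date >= '2004-03-29'
  AND message_date <= '2004-03-31 23:59:59';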

..) Selectivity:
-> Nested Loop (cost=0.00..61907.05 rows=200 width=11)
(actual time=83.232..32412.459 rows=312741 loops=1)
-> Index Scan using addresses_i_address on addresses a
(cost=0.00..3.01 rows=2 width=11)
(actual time=0.074..0.517 rows=1 loops=1)
Index Cond: ((address)::text ='...'::text)
-> Index Scan using message_recipients_i_recipient on message_recipients r
(cost=0.00..30569.25 rows=30622 width=21)
(actual time=83.146..31609.149 rows=312741 loops=1)
Index Cond: ("outer".address_key = r.recipient)
Alvaro has already commented on this 30K estimated vs. 300K actual rows
discrepancy. I'm not sure however that this is curable by increasing
statistics target, because the value to be looked up is not known at
planning time.

..) Join selectivity: Looking at the above plan snippet again, if we
assume that addresses and recipients are joined using a foreign key
relationship, it seems obvious that on the nested loop level there
cannot be fewer rows expected than on the index scan levels.

I have not checked the code, but there might be some room for
improvement in the planner. Doesn't look like low hanging fruit, though
:-(

..) I'm not sure whether a nested loop is the best join strategy between
messages and recipients. With enough memory a hash or merge join might
be the way to go. But joining tables with 700K and 300K rows is not
cheap and the low estimated row count makes a nested loop look very
attractive. And you cannot simply disable nested loops because it is
really the best method to join addresses and recipients. Maybe you
should run a few tests without the addresses table, comparing
r.recipient to a constant.
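Something like this sketch, say (the recipient value is a made-up
Address_Key):

EXPLAIN ANALYSE SELECT count(*)
FROM messages m JOIN message_recipients r ON r.Message_Key = m.Message_Key
WHERE r.Recipient = 12345  -- hypothetical Address_Key constant
  AND m.Message_Date >= '2004-03-29'
  AND m.Message_Date <= '2004-03-31 23:59:59';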

> Effective_cache_size is 50000, and sort_mem is 8192. Shared_buffers=16384.

shared_buffers looks ok. Set effective_cache_size to 80% of your
physical memory (or even more if it helps). Try to get a different join
method by gradually increasing sort_mem, for educational purposes even
to unreasonably high values if necessary.
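
(For instance - deliberately oversized, session-local, experiment only;
sort_mem is in kilobytes:)

SET sort_mem = 262144;        -- 256 MB per sort operation
EXPLAIN ANALYSE SELECT ... ;  -- see whether the join method changes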

..) There are some strange looking details in your prior messages.

You said this one was chosen by default
-> Nested Loop (cost=0.00..75565.60 rows=1 width=0)

and you had to delete an index to get
-> Nested Loop (cost=0.00..73745.26 rows=11 width=0)

The last one has lower cost. Why wasn't it chosen from the beginning?
Did you VACUUM and/or ANALYSE between test runs?
-> Index Scan using a_i_address on a (cost=0.00..6.00 rows=2 width=11)
(actual time=74.493..75.240 rows=1 loops=1)

75ms is quite slow for a 1 row index scan. Is your machine very busy?

-> Index Scan using a_i_address on a (cost=0.00..6.00 rows=2 width=11)
(actual time=40.610..40.616 rows=1 loops=1)
Still slow.
On Fri, 02 Apr 2004 11:08:21 -0600, <we****@syntegra.com> wrote:

> db=> explain select [...]


Please always send EXPLAIN ANALYSE output. Only excuse is if a query is
already running for a whole day. Then you may abort it and do a plain
EXPLAIN instead :-)

Under certain circumstances EXPLAIN ANALYSE SELECT count(*) might not
run the same plan as the original query, because it adulterates the
estimated row widths which influence the decision what can be kept in
memory.
I'm willing to investigate a bit more if you send me something to play
with. I'm especially interested in statistics data (output might be
more readable with \x):

SELECT * FROM pg_stats WHERE tablename='...';
SELECT relname, relpages, reltuples FROM pg_class WHERE relname='...';

... and some query plans: EXPLAIN ANALYSE ...

SELECT * FROM messages WHERE date BETWEEN ...

SELECT * FROM message_recipients WHERE recipient='...' -- edi's address

Send it off-list if you feel it is too long for the mailing list. We
can always post a summary later if we find something interesting.

Servus
Manfred


Nov 23 '05 #8
On Mon, 05 Apr 2004 11:55:17 -0500, Wes Palmer
<We*************@syntegra.com> wrote:

>> match, there's no need to fetch the heap tuple. Unfortunately the
>> planner doesn't believe that this is possible:
>>
>>     /* Don't believe estimates less than 1... */
>>     if (tuples_fetched < 1.0)
>>         tuples_fetched = 1.0;
>
> Unless you are fetching a majority of the rows, wouldn't it always be
> cheaper to consult the index instead of reading the full record?


In many (if not most) cases yes; always no. Think of a table having
100 rows per page. To select 1% of the rows you might have to access
every row of the table -- at least theoretically in the worst case, but
you get the point. It is the optimizer's job to find out which one of a
possibly large number of query plans is the best. Unfortunately the
optimizer is not perfect (yet :-)). This code snippet above might be a
spot where some improvement is possible. If this really contributes to
your case, you have to live with it for now.
>>  ->  Nested Loop  (cost=0.00..75565.60 rows=1 width=0)
>>  ->  Nested Loop  (cost=0.00..73745.26 rows=11 width=0)
>
> In the above example, the first case is where both the compound and the
> single-column indexes existed. I ran the test, deleted the single-column
> index, then ran the test again.
Yes, this is what I understood. What I don't understand is why the
optimizer didn't choose the second plan in the first run.

> I did not run vacuum or analyze in between.

Auto-vacuum?
>> more readable with \x):
>
> See enclosed tar file. f1 is the first couple of commands. f1a is with
> random_page_cost=4 and f1b is with random_page_cost=.8


SELECT * FROM pg_stats WHERE tablename='messages' ?

What were the other settings (sort_mem, effective_cache_size)?
> You can see that I get the same plan (filter vs. index) even keeping the
> addresses table out of the picture.

Now that you join only two tables you could experiment with forcing
other join methods (SET enable_nestloop, enable_mergejoin,
enable_hashjoin).
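
(E.g., a sketch - these switches are session-local and meant for
diagnosis only:)

SET enable_nestloop = off;    -- make merge/hash joins look attractive
EXPLAIN ANALYSE SELECT ... ;  -- compare the resulting plan and runtime
RESET enable_nestloop;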
> It occurs to me that de-normalizing it a bit and putting a duplicate
> message_date in the message_recipients may have been a good thing to do.
> The result set could have been obtained quicker. But, I was trying to keep
> everything normalized as much as possible and wanted to keep the
> message_recipients as small as possible because it would be growing the
> fastest. Trying to put that in now would be a bit of a challenge, since I'd
> have to update 70 million records based on the value in 20 million records.


Another de-normalization idea:

ALTER TABLE messages ADD COLUMN mkey2 numeric(12,0);
UPDATE messages SET mkey2 = message_key;

Do this in batches and run VACUUM between the UPDATEs.
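(One batch might look like this sketch; the range bounds are made up:)

UPDATE messages SET mkey2 = message_key
WHERE message_key BETWEEN 1 AND 1000000 AND mkey2 IS NULL;
VACUUM messages;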

DROP INDEX messages_i_id_mdate;
CREATE INDEX ... ON messages(mkey2, message_date);

SELECT ...
FROM a INNER JOIN r ON (...)
INNER JOIN m ON (r.message_key = m.mkey2)
WHERE ...

I guess your hack leads to better speed, but mine is limited to only one
table which might be considered cleaner.

Servus
Manfred


Nov 23 '05 #9
