
log_duration and \timing times repeatably much higher than "Total runtime" from explain analyze


This query is odd: it seems to be taking over a second according to my
log_duration logs and psql's \timing numbers, but EXPLAIN ANALYZE says it
runs in about a third of a second.

What would cause this? Is it some kind of postgresql.conf misconfiguration?
The same query runs fine on a different machine.
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=15.12..15.13 rows=1 width=2315) (actual time=315.15..315.15 rows=2 loops=1)
   ->  Sort  (cost=15.12..15.13 rows=1 width=2315) (actual time=315.14..315.14 rows=2 loops=1)
         Sort Key: x_title_xform, baz_xform, xfrm_en((qux_name)::text)
         ->  Subquery Scan x  (cost=0.00..15.11 rows=1 width=2315) (actual time=314.20..314.99 rows=2 loops=1)
               ->  Limit  (cost=0.00..15.11 rows=1 width=2315) (actual time=283.95..284.64 rows=2 loops=1)
                     ->  Nested Loop  (cost=0.00..15.11 rows=1 width=2315) (actual time=283.95..284.64 rows=2 loops=1)
                           ->  Nested Loop  (cost=0.00..11.92 rows=1 width=1569) (actual time=0.16..0.22 rows=2 loops=1)
                                 ->  Index Scan using foo_pkey on foo  (cost=0.00..3.20 rows=1 width=1561) (actual time=0.07..0.07 rows=1 loops=1)
                                       Index Cond: (foo_id = 815)
                                 ->  Index Scan using idx_bar_foo_loc on bar_foo  (cost=0.00..8.64 rows=7 width=8) (actual time=0.06..0.09 rows=2 loops=1)
                                       Index Cond: ("outer".foo_id = bar_foo.foo_id)
                           ->  Index Scan using bar_pkey on bar  (cost=0.00..3.18 rows=1 width=746) (actual time=0.07..0.07 rows=1 loops=2)
                                 Index Cond: ("outer".bar_id = bar.bar_id)
                                 Filter: (bar_type_code <> 'S'::bpchar)
                           SubPlan
                             ->  Aggregate  (cost=6.05..6.05 rows=1 width=59) (actual time=30.26..30.27 rows=1 loops=2)
                                   ->  Subquery Scan x  (cost=6.05..6.05 rows=1 width=59) (actual time=30.25..30.25 rows=1 loops=2)
                                         ->  Sort  (cost=6.05..6.05 rows=1 width=59) (actual time=30.24..30.24 rows=1 loops=2)
                                               Sort Key: xfrm_en(qux.short_name)
                                               ->  Nested Loop  (cost=0.00..6.04 rows=1 width=59) (actual time=30.14..30.16 rows=1 loops=2)
                                                     ->  Index Scan using idx_bar_qux_bar on bar_qux  (cost=0.00..2.85 rows=1 width=4) (actual time=0.03..0.03 rows=1 loops=2)
                                                           Index Cond: (bar_id = $0)
                                                     ->  Index Scan using qux_pkey on qux  (cost=0.00..3.05 rows=1 width=55) (actual time=0.02..0.03 rows=1 loops=2)
                                                           Index Cond: ("outer".qux_id = qux.qux_id)
                             ->  Aggregate  (cost=2.85..2.85 rows=1 width=0) (actual time=0.03..0.03 rows=1 loops=2)
                                   ->  Index Scan using idx_bar_qux_bar on bar_qux  (cost=0.00..2.85 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=2)
                                         Index Cond: (bar_id = $0)
                             ->  Limit  (cost=0.00..5.80 rows=1 width=80) (actual time=0.05..0.06 rows=1 loops=2)
                                   ->  Nested Loop  (cost=0.00..6.04 rows=1 width=80) (actual time=0.05..0.05 rows=1 loops=2)
                                         ->  Index Scan using idx_bar_qux_bar on bar_qux  (cost=0.00..2.85 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=2)
                                               Index Cond: (bar_id = $0)
                                         ->  Index Scan using qux_pkey on qux  (cost=0.00..3.05 rows=1 width=76) (actual time=0.02..0.02 rows=1 loops=2)
                                               Index Cond: ("outer".qux_id = qux.qux_id)
                             ->  Limit  (cost=0.00..2.74 rows=1 width=4) (actual time=0.02..0.02 rows=1 loops=2)
                                   ->  Index Scan using idx_bar_qux_bar on bar_qux  (cost=0.00..2.85 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=2)
                                         Index Cond: (bar_id = $0)
Total runtime: 316.19 msec
(37 rows)

Time: 1333.72 ms
--
greg
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #1
10 Replies



I know \timing counts the time to transfer the data to the client, and I
think log_duration might include that transfer time too. It does seem
like a long time to transfer data, though.
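For what it's worth, the three numbers being compared here are measured in
different places. A rough sketch of where each one comes from (the query is
just a stand-in from the mangled schema in this thread):

```sql
-- 1. psql's \timing: measured on the client, so it includes planning,
--    execution, network transfer, and client-side result rendering.
\timing
SELECT * FROM foo WHERE foo_id = 815;

-- 2. log_duration: measured in the backend; it covers parse + plan +
--    execute (enabled in postgresql.conf with  log_duration = true).

-- 3. EXPLAIN ANALYZE's "Total runtime": measured around the executor
--    only, so parse/rewrite/plan time is excluded.
EXPLAIN ANALYZE SELECT * FROM foo WHERE foo_id = 815;
```

So \timing >= log_duration >= "Total runtime" is expected; the question is
why the gap here is a full second.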

---------------------------------------------------------------------------

Greg Stark wrote:

This query is odd, it seems to be taking over a second according to my
log_duration logs and according to psql's \timing numbers. However explain
analyze says it's running in about a third of a second.

What would cause this? Is it some kind of postgresql.conf configuration
failure? I have the same query running fine on a different machine.
[ EXPLAIN ANALYZE plan snipped; see the original post ]
Total runtime: 316.19 msec
(37 rows)

Time: 1333.72 ms


--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073


Nov 12 '05 #2


Bruce Momjian <pg***@candle.pha.pa.us> writes:
I know \timing counts the time to transfer the data to the client, and I
think log_duration also might have that timing added too. It does seem
like a long time to transfer data, though.


a) Even when executing this query normally there's very little data,
about 1-20 records.

b) This is on an EXPLAIN ANALYZE, which only returns the 37 rows of the
plan. I can't imagine that takes any real time.

This is on a machine that's otherwise pretty much idle. It's a dual-processor
PII-900 and it used to perform fine. What's really strange is that some other
queries perform fine, but this one and a few others reliably take this long
and behave this way under EXPLAIN ANALYZE. It's as if there's something
specific about this query that triggers the delay.
Total runtime: 316.19 msec
(37 rows)

Time: 1333.72 ms


--
greg

Nov 12 '05 #3

Planning time?

On Fri, Oct 10, 2003 at 07:56:39PM -0400, Greg Stark wrote:

[ quoted message snipped ]


--
Martijn van Oosterhout <kl*****@svana.org> http://svana.org/kleptog/
"All that is needed for the forces of evil to triumph is for enough good
men to do nothing." - Edmund Burke
"The penalty good people pay for not being interested in politics is to be
governed by people worse than themselves." - Plato



Nov 12 '05 #4

Martijn van Oosterhout wrote:
Planning time?
Uh, I thought the number reported by EXPLAIN ANALYZE includes parse/plan
times, but now that you mention it, if it is timing things inside the
executor, it might be just executor time.

Looking at explain.c, it is only timing the executor part in
ExplainOnePlan(). The planner() call is outside that timed section, so the
extra time must be parse/plan, though that seems like a lot of time to
spend in that area.


Nov 12 '05 #5

On Sat, Oct 11, 2003 at 11:22:36AM -0400, Bruce Momjian wrote:

Looking at explain.c, it is only timing the executor part in
ExplainOnePlan(). The planner() call is outside that loop, so it must
be parse/plan, though that seems like a lot of time spent in that area.


Could it be because of extremely large statistics settings for the
tables involved?

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Limítate a mirar... y algun día veras"


Nov 12 '05 #6

Alvaro Herrera wrote:
Could it be because of extremely large statistics settings for the
tables involved?


Which large statistics? Does he have more than the default number of
statistics buckets?


Nov 12 '05 #7

On Sat, Oct 11, 2003 at 12:02:42PM -0400, Bruce Momjian wrote:
Which large statistics? Does he have more than the default number of
statistics buckets?


I dunno, that's why I'm asking :-) Just an idea.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"There is evil in the world. There are dark, awful things. Occasionally, we get
a glimpse of them. But there are dark corners; horrors almost impossible to
imagine... even in our worst nightmares." (Van Helsing, Dracula A.D. 1972)


Nov 12 '05 #8

Greg Stark <gs*****@mit.edu> writes:
What's really strange is that some other
queries perform fine, but this one and a few others reliably take this long
and behave this way under explain analyze. It's as if there's something
specific about this query that triggers the delay.


You haven't shown us the query (naughty naughty) but I'm wondering about
extremely-expensive-to-evaluate immutable functions. An immutable
function applied to constant arguments will be evaluated in the planner.
The difference between total time and run time has to be essentially all
parse/rewrite/plan time, and the query doesn't look complex enough to be
needing a full second to plan without some, um, outside help.
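A minimal sketch of the effect Tom describes, on a modern PostgreSQL (the
function name and body are made up for the example): because the function is
declared IMMUTABLE and its argument is a constant, the planner evaluates it
while constant-folding, so even a plain EXPLAIN pays the cost, while the
executor-only runtime figure stays small.

```sql
-- Hypothetical, deliberately expensive immutable function (example only)
CREATE FUNCTION slow_const(int) RETURNS bigint AS
  'SELECT count(*) FROM generate_series(1, 50000000)'
LANGUAGE sql IMMUTABLE;

-- The planner folds slow_const(1) to a constant at plan time, so this
-- plain EXPLAIN takes seconds even though nothing is executed:
EXPLAIN SELECT slow_const(1);
```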

regards, tom lane


Nov 12 '05 #9


Tom Lane <tg*@sss.pgh.pa.us> writes:
Greg Stark <gs*****@mit.edu> writes:
What's really strange is that some other
queries perform fine, but this one and a few others reliably take this long
and behave this way under explain analyze. It's as if there's something
specific about this query that triggers the delay.
You haven't shown us the query (naughty naughty)


Oops, I included the (mangled) plan but forgot the actual query.
I've included it below (mangled too, unfortunately; I just substituted
foo/bar/baz/quux for the entity names. The client insists.)
but I'm wondering about extremely-expensive-to-evaluate immutable functions.
I don't see any functions applied to constants anywhere. There are some
functions, but they're applied to columns. And they should be fairly fast:
lang_en just expands to (($1)[1]) and xfrm_en just expands to
lower(lang_en($1)), until I implement the strxfrm stuff sometime.
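[From that description, the helpers presumably look something like the
following. This is a reconstruction for illustration only; the real
definitions and their volatility flags are not shown in the thread.]

```sql
-- lang_en: pick the English (first) element of an array of translations
CREATE FUNCTION lang_en(text[]) RETURNS text AS
  'SELECT ($1)[1]'
LANGUAGE sql;

-- xfrm_en: a lower-cased stand-in sort key until the strxfrm version exists
CREATE FUNCTION xfrm_en(text[]) RETURNS text AS
  'SELECT lower(lang_en($1))'
LANGUAGE sql;
```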

An immutable function applied to constant arguments will be evaluated in the
planner. The difference between total time and run time has to be
essentially all parse/rewrite/plan time, and the query doesn't look complex
enough to be needing a full second to plan without some, um, outside help.


It seems you guys are right: a simple EXPLAIN is taking nearly a full second
on this box. Explaining the same query on another machine with a more or less
identical database takes 50 ms. Hell, an "explain analyze" takes less time on
the other box than the "explain" does on this one.

The configurations of the two machines aren't identical. The slow one is 7.3.2
and the other 7.3.3, but they both performed similarly not long ago. There's
no big performance problem with 7.3.2, is there? This is only development, so
I'm not worried about reliability/integrity bugs.

I suspect some configuration change to this box relative to the other may have
caused this, but I can't think of anything that could possibly cause this kind
of slowdown. I've done a pg_dump -s and compared schemas; I'm checking the
statistics and guc variables now.

I've run "vacuum full analyze" on both machines.

explain
SELECT *,
       xfrm_en(quux_name) as quux_xform
FROM (
    SELECT
        lang_en(title) as x_title,
        xfrm_en(title) as x_title_xform,
        lang_en(foo.description) as x_description,
        lang_en(fine_print) as x_fine_print,
        foo_start::timestamp::abstime::int4 as foo_start_time,
        foo_end::timestamp::abstime::int4 as foo_end_time,
        lang_en(foo_p_text) as x_foo_p_text,
        lang_en(subtitle) as x_subtitle,
        (SELECT concat_agg(quux_name)
           FROM (
             SELECT lang_en(quux.short_name) AS quux_name
               FROM foo_quux
               JOIN quux USING (quux_id)
              WHERE foo_quux.foo_id = foo.foo_id
              ORDER BY xfrm_en(quux.short_name)
           ) AS x
        ) as quux_name,
        (SELECT count(*)
           FROM foo_quux
          WHERE foo_quux.foo_id = foo.foo_id
        ) as quux_count,
        (SELECT lang_en(quux.small_logo) AS quux_logo
           FROM foo_quux
           JOIN quux USING (quux_id)
          WHERE foo_quux.foo_id = foo.foo_id
          LIMIT 1
        ) as quux_logo,
        (SELECT quux_id
           FROM foo_quux
          WHERE foo_quux.foo_id = foo.foo_id
          LIMIT 1
        ) as quux_id,
        lang_en(baz.short_name) as bar_name,
        xfrm_en(baz.short_name) as bar_xform,
        lang_en(baz.small_logo) as bar_logo,
        baz_id,
        *
    FROM baz
    JOIN foo_baz USING (baz_id)
    JOIN foo USING (foo_id)
    WHERE baz_id = 815
      AND foo_type_code <> 'S'
      AND (0=0 OR exists (SELECT * FROM foo_dept
                           WHERE foo_id = foo.foo_id
                             AND 0 IN (dept_id, level_1_id, level_2_id)))
    LIMIT 20000
) as x
ORDER BY x_title_xform asc, bar_xform asc, quux_xform asc
OFFSET 0
LIMIT 25

db-> ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=15.07..15.07 rows=1 width=2315)
   ->  Sort  (cost=15.07..15.07 rows=1 width=2315)
         Sort Key: x_title_xform, bar_xform, xfrm_en((quux_name)::text)
         ->  Subquery Scan x  (cost=0.00..15.06 rows=1 width=2315)
               ->  Limit  (cost=0.00..15.06 rows=1 width=2315)
                     ->  Nested Loop  (cost=0.00..15.06 rows=1 width=2315)
                           ->  Nested Loop  (cost=0.00..11.87 rows=1 width=1542)
                                 ->  Index Scan using baz_pkey on baz  (cost=0.00..3.20 rows=1 width=1534)
                                       Index Cond: (baz_id = 815)
                                 ->  Index Scan using idx_foo_baz_loc on foo_baz  (cost=0.00..8.58 rows=7 width=8)
                                       Index Cond: ("outer".baz_id = foo_baz.baz_id)
                           ->  Index Scan using foo_pkey on foo  (cost=0.00..3.18 rows=1 width=773)
                                 Index Cond: ("outer".foo_id = foo.foo_id)
                                 Filter: (foo_type_code <> 'S'::bpchar)
                           SubPlan
                             ->  Aggregate  (cost=6.05..6.05 rows=1 width=59)
                                   ->  Subquery Scan x  (cost=6.05..6.05 rows=1 width=59)
                                         ->  Sort  (cost=6.05..6.05 rows=1 width=59)
                                               Sort Key: xfrm_en(quux.short_name)
                                               ->  Nested Loop  (cost=0.00..6.04 rows=1 width=59)
                                                     ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                                           Index Cond: (foo_id = $0)
                                                     ->  Index Scan using quux_pkey on quux  (cost=0.00..3.05 rows=1 width=55)
                                                           Index Cond: ("outer".quux_id = quux.quux_id)
                             ->  Aggregate  (cost=2.85..2.85 rows=1 width=0)
                                   ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=0)
                                         Index Cond: (foo_id = $0)
                             ->  Limit  (cost=0.00..5.80 rows=1 width=80)
                                   ->  Nested Loop  (cost=0.00..6.04 rows=1 width=80)
                                         ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                               Index Cond: (foo_id = $0)
                                         ->  Index Scan using quux_pkey on quux  (cost=0.00..3.05 rows=1 width=76)
                                               Index Cond: ("outer".quux_id = quux.quux_id)
                             ->  Limit  (cost=0.00..2.74 rows=1 width=4)
                                   ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                         Index Cond: (foo_id = $0)
(36 rows)

Time: 996.83 ms
On the machine that's working properly, the same plan is produced with similar costs, but in 50 ms:

 Limit  (cost=15.56..15.56 rows=1 width=3274)
   ->  Sort  (cost=15.56..15.56 rows=1 width=3274)
         Sort Key: x_title_xform, bar_xform, xfrm_en((quux_name)::text)
         ->  Subquery Scan x  (cost=0.00..15.55 rows=1 width=3274)
               ->  Limit  (cost=0.00..15.55 rows=1 width=3274)
                     ->  Nested Loop  (cost=0.00..15.55 rows=1 width=3274)
                           ->  Nested Loop  (cost=0.00..12.33 rows=1 width=2486)
                                 ->  Index Scan using baz_pkey on baz  (cost=0.00..3.17 rows=1 width=2478)
                                       Index Cond: (baz_id = 815)
                                 ->  Index Scan using idx_foo_baz_loc on foo_baz  (cost=0.00..9.07 rows=7 width=8)
                                       Index Cond: ("outer".baz_id = foo_baz.baz_id)
                           ->  Index Scan using foo_pkey on foo  (cost=0.00..3.20 rows=1 width=788)
                                 Index Cond: ("outer".foo_id = foo.foo_id)
                                 Filter: (foo_type_code <> 'S'::bpchar)
                           SubPlan
                             ->  Aggregate  (cost=6.06..6.06 rows=1 width=76)
                                   ->  Subquery Scan x  (cost=6.05..6.06 rows=1 width=76)
                                         ->  Sort  (cost=6.05..6.06 rows=1 width=76)
                                               Sort Key: xfrm_en(quux.short_name)
                                               ->  Nested Loop  (cost=0.00..6.04 rows=1 width=76)
                                                     ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                                           Index Cond: (foo_id = $0)
                                                     ->  Index Scan using quux_pkey on quux  (cost=0.00..3.06 rows=1 width=72)
                                                           Index Cond: ("outer".quux_id = quux.quux_id)
                             ->  Aggregate  (cost=2.85..2.85 rows=1 width=0)
                                   ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=0)
                                         Index Cond: (foo_id = $0)
                             ->  Limit  (cost=0.00..5.84 rows=1 width=80)
                                   ->  Nested Loop  (cost=0.00..6.04 rows=1 width=80)
                                         ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                               Index Cond: (foo_id = $0)
                                         ->  Index Scan using quux_pkey on quux  (cost=0.00..3.06 rows=1 width=76)
                                               Index Cond: ("outer".quux_id = quux.quux_id)
                             ->  Limit  (cost=0.00..2.74 rows=1 width=4)
                                   ->  Index Scan using idx_foo_quux_foo on foo_quux  (cost=0.00..2.85 rows=1 width=4)
                                         Index Cond: (foo_id = $0)
(36 rows)

Time: 50.33 ms

There do seem to be some suspiciously long lines in pg_statistic for pg_index and pg_attrdef:

16390 | 10 | 0 | 14 | 6 | 1 | 2 | 3 | 0 | 98 | 664 | 664 | 0 | {0.965753} | | {0.985817} | | {""}
| {"({ EXPR :typeOid 16 :opType op :oper { OPER :opno 1054 :opid 1048 :opresulttype 16 :opretset false } :args ({ VAR :varno 1 :varattno 2 :vartype 1042 :vartypmod 5 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST :consttype 1042 :constlen -1 :constbyval false :constisnull false :constvalue 5 [ 5 0 0 0 80 ] })})","({ EXPR :typeOid 16 :opType op :oper { OPER :opno 1054 :opid 1048 :opresulttype 16 :opretset false } :args ({ VAR :varno 1 :varattno 2 :vartype 1042 :vartypmod 5 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST :consttype 1042 :constlen -1 :constbyval false :constisnull false :constvalue 5 [ 5 0 0 0 83 ] })})","({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 :opretset false } :args ({ VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1
:varlevelsup 0 :varnoold 1 :varoattno 2} { CONST :consttype 23 :constlen 4 :constbyval true :constisnull false :constvalue 4 [ 1 0 0 0 ] })})","({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 :opretset false } :args ({ VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST :consttype 23 :constlen 4 :constbyval true :constisnull false :constvalue 4 [ 2 0 0 0 ] })})","({ EXPR :typeOid 16 :opType op :oper { OPER :opno 96 :opid 65 :opresulttype 16 :opretset false } :args ({ VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 2} { CONST :consttype 23 :constlen 4 :constbyval true :constisnull false :constvalue 4 [ 3 0 0 0 ] })})"}

| |
16384 | 3 | 0 | 230 | -0.735294 | 1 | 2 | 3 | 0 | 98 | 664 | 664 | 0 | {0.205882,0.117647} | | {0.304813} | | {"{ EXPR :typeOid 1184 :opType func :oper { FUNC :funcid 1299 :funcresulttype 1184 :funcretset false :funcformat 0 } :args <>}","{ CONST :consttype 23 :constlen 4 :constbyval true :constisnull false :constvalue 4 [ 0 0 0 0 ] }"}
| {"{ CONST :consttype 16 :constlen 1 :constbyval true :constisnull false :constvalue 1 [ 0 0 0 0 ] }","{ CONST :consttype 705 :constlen -1 :constbyval false :constisnull false :constvalue 5 [ 5 0 0 0 32 ] }","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 15 [ 15 0 0 0 100 101 112 116 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 16 [ 16 0 0 0 115 116 111 114 101 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FU
NC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 18 [ 18 0 0 0 112 97 114 116 110 101 114 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 23 [ 23 0 0 0 112 97 114 116 110 101 114 95 117 115 101 114 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 29 [ 29 0 0 0 112 117 98 108 105 99 46 120 108 97 116 101 95 120 108 97 116 101 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype
25 :constlen -1 :constbyval false :constisnull false :constvalue 33 [ 33 0 0 0 112 117 98 108 105 99 46 99 111 110 116 97 99 116 95 99 111 110 116 97 99 116 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 34 [ 34 0 0 0 112 117 98 108 105 99 46 106 111 98 115 95 112 111 115 116 105 110 103 95 106 111 98 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args ({ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 35 [ 35 0 0 0 112 117 98 108 105 99 46 115 97 108 101 115 95 97 108 101 114 116 95 97 108 101 114 116 95 105 100 95 115 101 113 ] })}","{ EXPR :typeOid 20 :opType func :oper { FUNC :funcid 1574 :funcresulttype 20 :funcretset false :funcformat 0 } :args (
{ CONST :consttype 25 :constlen -1 :constbyval false :constisnull false :constvalue 44 [ 44 0 0 0 112 117 98 108 105 99 46 115 116 111 114 101 95 108 111 99 97 116 105 111 110 95 112 104 111 110 101 95 112 104 111 110 101 95 105 100 95 115 101 113 ] })}"} | |

but they're present on the machine that's working properly too. None of the
statistics columns for my schema's tables have more than 10 buckets or
anywhere near as much data in them as these records.

--
greg

Nov 12 '05 #10


Alvaro Herrera <al******@dcc.uchile.cl> writes:
> Looking at explain.c, it is only timing the executor part in
> ExplainOnePlan(). The planner() call is outside that loop, so it must
> be parse/plan, though that seems like a lot of time spent in that area.
As I posted separately, that seems to be exactly where the time is being spent.
It takes nearly a second simply for a plain old "explain" with no analyze.
Could it be because of extremely large statistics settings for the
tables involved?


Which large statistics? Does he more than the default number of
statistics buckets?


I dunno, that's why I'm asking :-) Just an idea.


This sounds like a good theory to me, but I can't find anything like that.
Perhaps some large statistics array is being toasted and compressed?

What would I check for? Just looking in pg_statistic doesn't turn up anything
that stands out. The strangest lines to my eyes are the ones with the verbose
plan stuff in them, but there are similar lines on my working machine.

Is there any way to see what the values are set with
ALTER TABLE ALTER column SET STATISTICS
?

Perhaps it should be listed in the \d output for the table?
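[There is no dedicated command for this, but the per-column target is stored
in pg_attribute.attstattarget (-1 meaning "use the default_statistics_target"),
so it can be inspected directly; 'foo' below is a placeholder table name:]

```sql
-- Show the per-column statistics target for each user column of a table
SELECT attname, attstattarget
  FROM pg_attribute
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'foo')
   AND attnum > 0;
```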

--
greg

Nov 12 '05 #11
