
process hang during drop schema

I have a process that hangs while doing a "DROP SCHEMA ... CASCADE". This happens in a slonik command, which runs fine if I run it directly from the command line but hangs when run from inside my process. I'm pretty sure I'm doing something silly and that it's not a Slony-specific thing, and I'm hoping somebody can point me to a server debugging technique I can use to track this down. I'm using PostgreSQL 7.4.5.
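For reference, the statement the slonik script ends up issuing boils down to something like the following (just a sketch; the schema name is taken from the NOTICE messages below and is presumably the Slony replication schema for this cluster):

    DROP SCHEMA _tzreplic CASCADE;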

My server, run with "-d 5", stops logging at this point:

[...]
NOTICE: drop cascades to cast from _tzreplic.xxid to xid
NOTICE: drop cascades to cast from xid to _tzreplic.xxid
NOTICE: drop cascades to function _tzreplic.xxidout(_tzreplic.xxid)
NOTICE: drop cascades to function _tzreplic.xxidin(cstring)
DEBUG: CommitTransactionCommand

There's nothing after that (until I log in with psql or something).

This is what things look like in pg_stat_activity and pg_locks (pid 19472 is the current
psql session, and 19467 is the hung process):

policy=# select * from pg_stat_activity;
datid | datname | procpid | usesysid | usename | current_query | query_start
-------+---------+---------+----------+-------------+---------------+-------------
17142 | policy | 19143 | 101 | tazz:pool | |
17142 | policy | 19467 | 102 | tazz:dbmgr | |
17142 | policy | 19472 | 103 | tazz:engmon | |
(3 rows)

policy=# select * from pg_locks order by pid,relation;
relation | database | transaction | pid | mode | granted
----------+----------+-------------+-------+---------------------+---------
20367 | 17142 | | 19467 | AccessExclusiveLock | t
20369 | 17142 | | 19467 | AccessExclusiveLock | t
20371 | 17142 | | 19467 | AccessExclusiveLock | t
20372 | 17142 | | 19467 | AccessExclusiveLock | t
20374 | 17142 | | 19467 | AccessExclusiveLock | t
20376 | 17142 | | 19467 | AccessExclusiveLock | t
20378 | 17142 | | 19467 | AccessExclusiveLock | t
20379 | 17142 | | 19467 | AccessExclusiveLock | t
20385 | 17142 | | 19467 | AccessExclusiveLock | t
20387 | 17142 | | 19467 | AccessExclusiveLock | t
20389 | 17142 | | 19467 | AccessExclusiveLock | t
20390 | 17142 | | 19467 | AccessExclusiveLock | t
20400 | 17142 | | 19467 | AccessExclusiveLock | t
20402 | 17142 | | 19467 | AccessExclusiveLock | t
20404 | 17142 | | 19467 | AccessExclusiveLock | t
20405 | 17142 | | 19467 | AccessExclusiveLock | t
20407 | 17142 | | 19467 | AccessExclusiveLock | t
20413 | 17142 | | 19467 | AccessExclusiveLock | t
20415 | 17142 | | 19467 | AccessExclusiveLock | t
20421 | 17142 | | 19467 | AccessExclusiveLock | t
20423 | 17142 | | 19467 | AccessExclusiveLock | t
20425 | 17142 | | 19467 | AccessExclusiveLock | t
20426 | 17142 | | 19467 | AccessExclusiveLock | t
20428 | 17142 | | 19467 | AccessExclusiveLock | t
20434 | 17142 | | 19467 | AccessExclusiveLock | t
20436 | 17142 | | 19467 | AccessExclusiveLock | t
20438 | 17142 | | 19467 | AccessExclusiveLock | t
20439 | 17142 | | 19467 | AccessExclusiveLock | t
20449 | 17142 | | 19467 | AccessExclusiveLock | t
20451 | 17142 | | 19467 | AccessExclusiveLock | t
20461 | 17142 | | 19467 | AccessExclusiveLock | t
20463 | 17142 | | 19467 | AccessExclusiveLock | t
20473 | 17142 | | 19467 | AccessExclusiveLock | t
20475 | 17142 | | 19467 | AccessExclusiveLock | t
20477 | 17142 | | 19467 | AccessExclusiveLock | t
20478 | 17142 | | 19467 | AccessExclusiveLock | t
20480 | 17142 | | 19467 | AccessExclusiveLock | t
20483 | 17142 | | 19467 | AccessExclusiveLock | t
20484 | 17142 | | 19467 | AccessExclusiveLock | t
20485 | 17142 | | 19467 | AccessExclusiveLock | t
20487 | 17142 | | 19467 | AccessExclusiveLock | t
20489 | 17142 | | 19467 | AccessExclusiveLock | t
20492 | 17142 | | 19467 | AccessExclusiveLock | t
20494 | 17142 | | 19467 | AccessExclusiveLock | t
20496 | 17142 | | 19467 | AccessExclusiveLock | t
20497 | 17142 | | 19467 | AccessExclusiveLock | t
20498 | 17142 | | 19467 | AccessExclusiveLock | t
20500 | 17142 | | 19467 | AccessExclusiveLock | t
20502 | 17142 | | 19467 | AccessExclusiveLock | t
20503 | 17142 | | 19467 | AccessExclusiveLock | t
20504 | 17142 | | 19467 | AccessExclusiveLock | t
20506 | 17142 | | 19467 | AccessExclusiveLock | t
20508 | 17142 | | 19467 | AccessExclusiveLock | t
20510 | 17142 | | 19467 | AccessExclusiveLock | t
20512 | 17142 | | 19467 | AccessExclusiveLock | t
20514 | 17142 | | 19467 | AccessExclusiveLock | t
| | 1301 | 19467 | ExclusiveLock | t
16759 | 17142 | | 19472 | AccessShareLock | t
| | 1304 | 19472 | ExclusiveLock | t
(59 rows)

To my untrained eye, it doesn't look as though there is any lock contention here, but I haven't dealt with Postgres locking before, so....
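One quick sanity check (a sketch, using only the columns already shown above) is to look for any lock request that has not been granted:

    select relation, transaction, pid, mode from pg_locks where not granted;

Every row above shows granted = t, so I'd expect that to return nothing.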

Is there another table I should be looking at, or another debug switch I should be setting?

TIA for any suggestions.

- DAP
----------------------------------------------------------------------------------
David Parker Tazz Networks (401) 709-5130



"David Parker" <dp*****@tazznetworks.com> writes:
> To my untrained eye, it doesn't look as though there is any lock
> contention here,


Me either; whatever that process is doing, it doesn't seem to be waiting
for a lock. Is it accumulating CPU time?

One way to get some info is to attach to the backend process with gdb
and get a stack trace:
    gdb /path/to/postgres
    attach PID
    bt
    quit

regards, tom lane
