Bytes IT Community

select for update & lock contention


I think I'm seeing table-level lock contention in the following function
when I have many different concurrent callers, each with mutually distinct
values for $1. Is there a way to reimplement this function using
select-for-update (or equivalent) in order to get a row-level lock (and
thus less contention) while maintaining the function interface? The docs
seem to suggest so, but it's not clear how to return the SETOF queued_item
and also use select-for-update to get the row-level locks. TIA.

CREATE OR REPLACE FUNCTION getqueuedupdates (character)
RETURNS SETOF queued_item AS '
DECLARE
    rows record;
BEGIN
    FOR rows IN SELECT * FROM queued_item WHERE subscriber = $1 LOOP
        RETURN NEXT rows;
        DELETE FROM queued_item WHERE key = rows.key;
    END LOOP;
    RETURN;
END;'
LANGUAGE plpgsql;

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 23 '05 #1
4 Replies


This is on 7.3.4/7.3.6. Thx.



Nov 23 '05 #2



I should also mention what leads me to suspect lock contention. First, the
table is frequently vacuum analyzed, so I'm reasonably confident it's not a
planner stats issue. Second, the table usually contains a small number of
rows (tens to a couple hundred), so I reason it's unlikely that a planner
issue would slow it down much. Third, I have put "RAISE NOTICE"
statements before and after each statement in the function, and can see
stalls of several seconds in the server log within the deleting loop.

So, I tried to get a less conflicting lock by using SELECT FOR UPDATE as
follows with the "PERFORM" line (syntax corrections welcome; contextual
examples of how to do this were not plentiful on Google or in the docs)...

CREATE OR REPLACE FUNCTION getqueuedupdates (character)
RETURNS SETOF queued_item AS '
DECLARE
    rows record;
BEGIN
    -- obtain row-level locks...
    PERFORM * FROM queued_item
        WHERE subscriber = $1 FOR UPDATE OF queued_item;
    FOR rows IN SELECT * FROM queued_item WHERE subscriber = $1 LOOP
        RETURN NEXT rows;
        DELETE FROM queued_item WHERE key = rows.key;
    END LOOP;
    RETURN;
END;'
LANGUAGE plpgsql;
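
An alternative sketch, assuming the server accepts FOR UPDATE in a plpgsql
FOR-loop query, folds the row locking into the loop itself and drops the
separate PERFORM pass:

-- Sketch only: each matching row is row-locked as it is fetched,
-- then deleted by key; concurrent callers with distinct subscriber
-- values should touch disjoint row sets.
CREATE OR REPLACE FUNCTION getqueuedupdates (character)
RETURNS SETOF queued_item AS '
DECLARE
    rows record;
BEGIN
    FOR rows IN SELECT * FROM queued_item
                WHERE subscriber = $1 FOR UPDATE LOOP
        RETURN NEXT rows;
        DELETE FROM queued_item WHERE key = rows.key;
    END LOOP;
    RETURN;
END;'
LANGUAGE plpgsql;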
Then I watched the locks with the following command, which I think basically
shows which backends are locking which tables in which modes from which SQL
statements:

while test 1; do
    psql -c "select now(), d.datname||':'||r.relname as table,
            l.transaction as xact, l.pid, l.mode, l.granted,
            pg_stat_get_backend_activity(S.backendid) AS sql
        from pg_locks l, pg_class r, pg_database d,
            (SELECT pg_stat_get_backend_idset() AS backendid) AS S
        where l.relation = r.oid and l.database = d.oid
            and d.datname = 'testdb' and r.relname = 'queued_item'
            and pg_stat_get_backend_pid(S.backendid) = l.pid
            and d.oid = pg_stat_get_backend_dbid(S.backendid)"
    sleep 1
done

And with that command above, I notice several things leading to other
questions:

1) I can now see the RowShareLock mode from the SELECT FOR UPDATE alongside
the RowExclusiveLock mode from the deletes. (Despite the names, both of
these are table-level lock modes; the row-level locks themselves are
recorded on the rows, not in pg_locks.)

2) The contention appears to continue. I am now wondering if my assumption
that RowShareLock would reduce contention over RowExclusiveLock is sound in
such a case where you have multiple writers with each deleting a distinct
set of rows (one set per subscriber)...? What else could explain the
delays I see in the delete loop?
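
A quick two-session check can confirm whether row locks on distinct
subscribers actually block each other (a sketch; the subscriber values
'subA' and 'subB' are placeholders, not from the thread):

-- Session 1:
BEGIN;
SELECT * FROM queued_item WHERE subscriber = 'subA' FOR UPDATE;

-- Session 2, while session 1's transaction is still open:
BEGIN;
SELECT * FROM queued_item WHERE subscriber = 'subB' FOR UPDATE;
-- If this returns immediately, row locks on disjoint subscriber
-- sets do not conflict, and the delays must come from elsewhere.
COMMIT;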

3) This table is populated by INSERT triggers on other tables; each of
those inserts results in N triggered INSERTs into queued_item, one insert
for each of N subscribers. Given that I also see the RowExclusiveLock mode
from those inserts, I'm also wondering if those inserts are momentarily
blocking the delete statements in the function above. The 7.3.4 docs
(http://www.postgresql.org/docs/7.3/s...LOCKING-TABLES)
seem to suggest RowExclusiveLock will not interfere with RowShareLock, but
the modes seen above and the modes in the docs leave room for doubt.

One last thought: this table does at times have a fairly high volume of
rows being inserted and then quickly deleted (as much as 100
inserts/deletes per second). So the volume of change is large, but the
number of rows present remains in flux within a range of maybe 0-1000.
Vacuum/analyze is done via autovacuum maybe every 5-20 minutes, so I guess
the planner could be out of touch with actual index distributions.
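
At ~100 inserts and deletes per second with a vacuum only every 5-20
minutes, tens of thousands of dead row versions can accumulate between
runs, which on its own can slow scans of a nominally tiny table. One way
to gauge this (a sketch; the exact verbose output varies by version):

-- VACUUM VERBOSE reports how many removable (dead) row versions it
-- found; a count that dwarfs the ~0-1000 live rows points at bloat
-- rather than lock contention.
VACUUM VERBOSE ANALYZE queued_item;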

Ideas?

TIA.

Nov 23 '05 #3

"Ed L." <pg***@bluepolka.net> writes:
I think I'm seeing table-level lock contention in the following function


I think you're barking up the wrong tree entirely. There's nothing in
that function that would acquire a conflicting table lock.

I'm wondering about foreign key lock contention, myself. Look to what
the DELETE must do.

regards, tom lane
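
One way to check for such constraints, sketched here against the
pg_constraint catalog (present as of 7.3; this query is illustrative and
not from the original thread):

-- Sketch: list foreign-key constraints that either reference
-- queued_item (their checks fire on DELETE from queued_item) or
-- belong to queued_item itself (their checks fire on INSERT into it).
SELECT conname,
       conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE contype = 'f'
  AND (conrelid = 'queued_item'::regclass
       OR confrelid = 'queued_item'::regclass);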


Nov 23 '05 #4

We've dropped all foreign key constraints on the queued_item table and
moved the delete out of the loop as follows...

DECLARE
    rows record;
BEGIN
    PERFORM * FROM queued_item
        WHERE subscriber = $1 FOR UPDATE OF queued_item;
    RAISE NOTICE 'getupdates(%): going to call select', $1;
    FOR rows IN SELECT * FROM queued_item WHERE subscriber = $1 LOOP
        RAISE NOTICE 'getupdates(%): in select loop, returning %', $1, rows.key;
        RETURN NEXT rows;
    END LOOP;
    RAISE NOTICE 'getupdates(%): going to call delete', $1;
    DELETE FROM queued_item WHERE subscriber = $1;
    RAISE NOTICE 'getupdates(%): done calling delete', $1;
    RETURN;
END;
So the delete seems a non-factor. The delay is now occurring inside the
loop, sometimes for 4-8 seconds. During this delay, it is possible that
other triggers are inserting into the queued_item table. Other ideas as to
what is going on?
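
For what it's worth to later readers: from PostgreSQL 8.2 on (not the
7.3.x servers discussed here), DELETE ... RETURNING should let this whole
function collapse into a single statement, with no loop and no separate
locking pass. A sketch, untested:

-- Sketch (requires PostgreSQL 8.2+): delete and return the
-- subscriber's queued items atomically in one statement.
CREATE OR REPLACE FUNCTION getqueuedupdates (character)
RETURNS SETOF queued_item AS $$
    DELETE FROM queued_item WHERE subscriber = $1 RETURNING *;
$$ LANGUAGE sql;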

TIA.

Nov 23 '05 #5
