
How to avoid Lock escalation

I receive the following in the db2diag.log file many times per day:
2007-03-05-14.55.24.836553+060 E12415C457 LEVEL: Warning
PID : 2785 TID : 1 PROC : db2agent
(dbname)
INSTANCE: db2inst1 NODE : 000 DB : dbname
APPHDL : 0-946 APPID: *LOCAL.db2inst1.070305135434
FUNCTION: DB2 UDB, data management, sqldEscalateLocks, probe:3
MESSAGE : ADM5502W The escalation of "20451" locks on table "schema.tab5" to lock intent "S" was successful.
I'm aware that this is a warning message, but I want to get rid of it and I don't understand how.
I have increased LOCKLIST (to 2000) and MAXLOCKS (to 40), but I'm still getting the warning message in db2diag.log. I have identified the query that is causing this lock escalation; it looks like this:
SELECT A.col1, C.col2, E.col3, A.col4, A.col5, A.col6, A.col7, D.col8,
D.col9, A.col10,
COALESCE(MAX(col11),0.), A.col12, A.col13
FROM schema.tab1 A
JOIN schema.tab2 C ON A.col14 = C.col14
JOIN schema.tab3 D ON A.col15 = D.col15
JOIN schema.tab4 E ON A.col16 = E.col16
LEFT JOIN schema.tab5 B ON A.col1 = B.col1
AND B.col17 = '01'
AND B.col18 ='xxxx'
WHERE A.col18 = 'xxxx'
AND A.col16 = 'yy'
AND A.col14 = 'zz'
AND A.col15 = 'ww'
AND A.col13 >= 'vv'
GROUP BY A.col7, A.col12, A.col4, A.col1, C.col2, E.col3, A.col5,
A.col6, D.col8, D.col9, A.col10, A.col13
ORDER BY A.col7, A.col12, A.col4;

The tables involved in the query contain this many rows:
schema.tab1 12601 rows
schema.tab2 8 rows
schema.tab3 4 rows
schema.tab4 22 rows
schema.tab5 69578 rows

According to explain there is a table scan on tab5, so I decided to create an index on the columns in the left join, but I still get lock escalation... any ideas on this?
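For reference, the index I tried was along these lines (a sketch only: the index name is made up, and whether these are the right columns depends on the data):

```shell
# Sketch: index name and column choice are illustrative, not the exact DDL used.
# Cover the columns the LEFT JOIN probes on tab5 (col1, col17, col18).
db2 "CREATE INDEX schema.tab5_ix1 ON schema.tab5 (col1, col17, col18)"

# Refresh statistics so the optimizer can actually consider the new index.
db2 "RUNSTATS ON TABLE schema.tab5 AND INDEXES ALL"
```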
/roger

Mar 5 '07 #1
9 Replies


On Mar 5, 5:10 pm, kaval...@hotmail.com wrote:
[...]
According to explain is it a table scan on tab5 so I decided to create
an index on the columns in the left join but still lock
escalation....any ideas on this ?
/roger
I have heard that lock escalations might occur if the package cache is too small. I think DB2 "borrows" from the locklist if it runs out of package cache. I don't know if that is true, though; perhaps someone else can confirm or reject this.

/Lennart

Mar 5 '07 #2

I don't know how many clients work on the database simultaneously, but I think 2000 4 KB pages for the locklist is a bit small.

Try increasing it to about 6000 or 8000 and see if the problem disappears.

Also check what the isolation level of the statement is.
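On DB2 LUW that change would look roughly like this (a sketch: "dbname" and the exact values are placeholders; LOCKLIST is counted in 4 KB pages, MAXLOCKS is a percentage of the locklist one application may hold before escalation):

```shell
# Sketch: database name and values are placeholders, not a recommendation.
db2 update db cfg for dbname using LOCKLIST 8000
db2 update db cfg for dbname using MAXLOCKS 60

# Check whether escalations are still being counted after the change.
db2 get snapshot for database on dbname | grep -i "escalation"
```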

regards,
dotyet

On Mar 5, 10:10 am, kaval...@hotmail.com wrote:
...

Mar 5 '07 #3

ka******@hotmail.com wrote:
...
Have you run the DB2 Configuration Advisor? It will give you a good base
configuration from which you can start fine-tuning the system. As part of
that, you will also get some suggestions for the configuration of the lock
list.

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 5 '07 #4

OK, thanks for the answers so far... I can recreate the lock escalation by running the above query, i.e. one connection to the database and isolation level RS; with RR or CS there is no lock escalation...
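If RS is not actually required here, DB2 also allows the isolation level to be overridden for a single statement with an isolation clause at the end of the query; a sketch (the real query is longer, names abbreviated):

```shell
# Sketch: WITH CS overrides the isolation level for this statement only.
db2 "SELECT A.col1, A.col13
     FROM schema.tab1 A
     LEFT JOIN schema.tab5 B ON A.col1 = B.col1
     ORDER BY A.col1
     WITH CS"
```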
/roger

Mar 6 '07 #5

ka******@hotmail.com wrote:
... isolation level RS, with RR or CS no lock escalation...
RR probably locks the whole table right from the start. That may not be a
good idea because it impacts concurrent access to the table.

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 6 '07 #6

Lock escalation is not a bad thing...
It can give your system better performance, because DB2 stops (I hope so) maintaining row-level locks.
Row-level locks are more expensive than table locks, so it can degrade performance if you increase the locklist too much...

The best way would be to lock the whole table in your SQL or use isolation level UR (uncommitted read), which uses very few locks. Because there is no "transaction stability", UR can read rows that are deleted/inserted/updated but not committed. So be sure to use this with caution...
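As a sketch, a statement-level UR read would look like this (query abbreviated; remember UR may see uncommitted changes):

```shell
# Sketch: read-only query at uncommitted read; results may include
# uncommitted rows, so only use this where dirty reads are acceptable.
db2 "SELECT A.col1, B.col11
     FROM schema.tab1 A
     LEFT JOIN schema.tab5 B ON A.col1 = B.col1
     WITH UR"
```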
On Mar 5, 4:10 pm, kaval...@hotmail.com wrote:
...
Mar 6 '07 #7

stefan.albert wrote:
...
The best way would be to lock the whole table in your SQL
A table lock prevents all other applications from accessing that table. So one application could indeed run faster due to less locking overhead, but overall the system could slow down significantly. In summary: it really depends on the specific environment whether table locks are acceptable.

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 6 '07 #8

Well, if the situation requires a table-level lock upfront, then using row-level locks that escalate to a table-level lock is not a good way to get there. In that scenario, use the LOCK TABLE SQL statement to indicate your choice to the engine. I am pretty sure that locking the table upfront will not cause row-level lock escalations leading to a table-level lock.
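In a CLP session that could be sketched like this (the `+c` option turns off autocommit for the command, so the table lock survives until the explicit COMMIT; the table name is taken from the thread):

```shell
# Sketch: take the share lock explicitly, do the work, then release it.
db2 +c "LOCK TABLE schema.tab5 IN SHARE MODE"
db2 +c "SELECT COUNT(*) FROM schema.tab5"
db2 "COMMIT"
```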

regards,
dotyet
On Mar 6, 4:58 am, "stefan.albert" <stefan.alb...@spb.de> wrote:
...

Mar 6 '07 #9

The LOCK TABLE statement gives you a strict table lock, which does not allow row-level locking; you would never get row locks from any connection while your connection held a strict table lock. Either you get the lock, which stops all others from getting row locks (apart from maybe share locks on the row, but I doubt even that), or you do not, because others got there first with either an intent lock on the table or a more restrictive lock on the table (i.e. an EXCLUSIVE or Z lock).
Regards, Pierre.

--
Pierre Saint-Jacques
SES Consultants Inc.
514-737-4515
"dotyet" <do****@yahoo.com> wrote in message news:
11**********************@64g2000cwx.googlegroups.com...
...

Mar 7 '07 #10

This discussion thread is closed
