How to avoid Lock escalation

I receive the following in the db2diag.log file many times per day:
2007-03-05-14.55.24.836553+060 E12415C457 LEVEL: Warning
PID : 2785 TID : 1 PROC : db2agent (dbname)
INSTANCE: db2inst1 NODE : 000 DB : dbname
APPHDL : 0-946 APPID: *LOCAL.db2inst1.070305135434
FUNCTION: DB2 UDB, data management, sqldEscalateLocks, probe:3
MESSAGE : ADM5502W The escalation of "20451" locks on table
          "schema.tab5" to lock intent "S" was successful.
I'm aware that this is only a warning message, but I want to get rid of
it and I don't understand how. I have increased LOCKLIST (to 2000) and
MAXLOCKS (to 40), but I'm still getting the warning in db2diag.log.
I have identified the query which is causing this lock escalation, and
it looks like this:
SELECT A.col1, C.col2, E.col3, A.col4, A.col5, A.col6, A.col7, D.col8,
D.col9, A.col10,
COALESCE(MAX(col11),0.), A.col12, A.col13
FROM schema.tab1 A
JOIN schema.tab2 C ON A.col14 = C.col14
JOIN schema.tab3 D ON A.col15 = D.col15
JOIN schema.tab4 E ON A.col16 = E.col16
LEFT JOIN schema.tab5 B ON A.col1 = B.col1
AND B.col17 = '01'
AND B.col18 ='xxxx'
WHERE A.col18 = 'xxxx'
AND A.col16 = 'yy'
AND A.col14 = 'zz'
AND A.col15 = 'ww'
AND A.col13 >= 'vv'
GROUP BY A.col7, A.col12, A.col4, A.col1, C.col2, E.col3, A.col5,
A.col6, D.col8, D.col9, A.col10, A.col13
ORDER BY A.col7, A.col12, A.col4;

The tables involved in the query contain this many rows:
schema.tab1 12601 rows
schema.tab2 8 rows
schema.tab3 4 rows
schema.tab4 22 rows
schema.tab5 69578 rows

According to explain it is a table scan on tab5, so I created an index
on the columns in the left join (sketched below), but I still get lock
escalation. Any ideas on this?
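The index was roughly like this (the index name and the exact column order are only an example):

-- index on the tab5 columns used in the left join
-- (index name and column order are just an example)
CREATE INDEX schema.tab5_lj_ix ON schema.tab5 (col1, col18, col17);
-- refresh statistics so the optimizer can consider the new index
RUNSTATS ON TABLE schema.tab5 AND INDEXES ALL;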
/roger

Mar 5 '07 #1
On Mar 5, 5:10 pm, kaval...@hotmail.com wrote:
[...]
I have heard that lock escalations can occur if the PACKAGE_CACHE is too
small. I think DB2 "borrows" from the LOCKLIST if it runs out of
PACKAGE_CACHE. I don't know whether that is true, though; perhaps someone
else can confirm or reject it.

/Lennart

Mar 5 '07 #2
I don't know how many clients are working on the database
simultaneously, but I think 2000 4KB pages for locklist is a bit
small.

Try increasing it to about 6000 or 8000 and see if the problem
disappears.

Also check what isolation level the statement runs under.
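From the CLP, something like this (the numbers are only a starting point, MAXLOCKS 60 is just an example, and the isolation check only covers dynamic SQL):

-- raise the lock list (4KB pages) and, if needed, MAXLOCKS
-- (depending on the DB2 version this may need a database reactivation)
UPDATE DB CFG FOR dbname USING LOCKLIST 8000 MAXLOCKS 60;
-- isolation level in effect for dynamic SQL on this connection
-- (a blank value means the package default is used)
VALUES CURRENT ISOLATION;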

regards,
dotyet

On Mar 5, 10:10 am, kaval...@hotmail.com wrote:
[...]

Mar 5 '07 #3
ka******@hotmail.com wrote:
[...]
Have you run the DB2 Configuration Advisor? It will give you a good base
configuration from which you can start fine-tuning the system. As part of
that, you will also get some suggestions for the configuration of the lock
list.
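From the command line that would be the AUTOCONFIGURE command; for example, to see the recommendations without applying anything (the USING values are only illustrative):

-- connect first, then ask the advisor for recommendations only
CONNECT TO dbname;
AUTOCONFIGURE USING workload_type mixed num_local_apps 10 APPLY NONE;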

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 5 '07 #4
OK, thanks for the answers so far. I can recreate the lock escalation by
running the above query with a single connection to the database and
isolation level RS; with RR or CS there is no lock escalation.
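If CS is acceptable for this particular query, the isolation level can also be set on the statement itself with the isolation clause; a minimal sketch against tab1:

-- statement-level isolation clause, appended after the ORDER BY
SELECT A.col1, A.col13
FROM schema.tab1 A
WHERE A.col18 = 'xxxx'
ORDER BY A.col1
WITH CS;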
/roger

Mar 6 '07 #5
ka******@hotmail.com wrote:
OK, thanks for the answers so far. I can recreate the lock escalation by
running the above query with a single connection to the database and
isolation level RS; with RR or CS there is no lock escalation.
RR probably locks the whole table right from the start. That may not be a
good idea because it impacts concurrent access to the table.

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 6 '07 #6
Lock escalation is not a bad thing...
It can give your system better performance, because DB2 stops
maintaining row-level locks (I hope so).
Row-level locks are more expensive than table locks, so increasing the
locklist too much can also degrade performance...

The best way would be to lock the whole table in your SQL or use
isolation level UR (uncommitted read), which takes very few locks.
Because there is no "transaction stability", UR can read rows that have
been deleted/inserted/updated but not committed. So be sure to use this
with caution...
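At the statement level that would look roughly like this (dirty reads are possible, as noted above):

-- UR takes almost no row locks, but may return uncommitted data
SELECT B.col1, B.col17, B.col18
FROM schema.tab5 B
WHERE B.col18 = 'xxxx'
FOR READ ONLY WITH UR;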
On Mar 5, 4:10 pm, kaval...@hotmail.com wrote:
[...]
Mar 6 '07 #7
stefan.albert wrote:
[...]
The best way would be to lock the whole table in your SQL
A table lock prevents all others from accessing that table. So one
application could indeed run faster due to less locking overhead, but
overall the system could slow down significantly. In summary: it really
depends on the specific environment whether table locks are acceptable.

--
Knut Stolze
DB2 z/OS Admin Enablement
IBM Germany
Mar 6 '07 #8
Well, if the situation requires a table-level lock upfront, then using
row-level locks that escalate to a table-level lock is not a good
approach. In that scenario, use the LOCK TABLE SQL statement to indicate
your choice to the engine. I am pretty sure that exclusively locking the
table upfront will not cause row-level lock escalations leading to a
table-level lock.
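For a read-only statement like the one in this thread, that would be something along these lines (the lock is held until the unit of work ends):

-- one table-level lock up front instead of thousands of row locks
LOCK TABLE schema.tab5 IN SHARE MODE;
-- ... run the SELECT in the same unit of work ...
COMMIT;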

regards,
dotyet
On Mar 6, 4:58 am, "stefan.albert" <stefan.alb...@spb.de> wrote:
[...]

Mar 6 '07 #9
The LOCK TABLE statement gives you a strict table lock, which does not
allow row-level locking; you would never get row locks from any
connection while your connection holds a strict table lock. You would
either get the lock, which would stop any others from getting row locks
(apart from maybe share locks on rows, and I doubt even that), or you
would not get it, because others that got there first hold either an
intent lock on the table or a more constraining lock on the table
(i.e. an EXCLUSIVE or Z lock).
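You can see which lock object types and modes a connection actually holds with a lock snapshot (the LOCK monitor switch has to be on), e.g.:

-- enable lock monitoring for this session, then dump the held locks
UPDATE MONITOR SWITCHES USING LOCK ON;
GET SNAPSHOT FOR LOCKS ON dbname;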
Regards, Pierre.

--
Pierre Saint-Jacques
SES Consultants Inc.
514-737-4515
"dotyet" <do****@yahoo.coma écrit dans le message de news:
11**********************@64g2000cwx.googlegroups.c om...
[...]

Mar 7 '07 #10
