Lock Escalation

I have a DB2 9 installation, and I think our application isn't handling
locking optimally.

I have MAXLOCKS (96) and LOCKLIST (104832) set up to be auto-tuned; the
values in parentheses are the auto-tuned values after running my
application for a while.
I notice messages in db2diag.log that these values are being increased
to prevent lock escalation.

How do I determine the SQL that is causing these high escalations?
Would it be better from a performance standpoint to explicitly lock
the table in question instead?
On a machine with 4 GB of memory, is there a rule of thumb for how much
memory should be allocated for locking (LOCKLIST and MAXLOCKS)?
If I set LOCKLIST and MAXLOCKS to lower values and monitor db2diag.log,
will I see the offending SQL that is causing the lock escalation?

Thanks.


Dec 10 '07 #1
18 3147
"dunleav1" <jm********@comcast.netwrote in message
news:a0**********************************@e25g2000 prg.googlegroups.com...
>I have db2 9 installation and I think our application isn't handling
locking optimally.

I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.

How do I determine the sql that is causing these high escalations?
Would it be better from a performance standpoint to specifically lock
the table in question instead?
On a machine with 4G of memory - is there a rule of thumb of about how
much memory should be allocated for locking? - locklist and maxlocks?
If I set locklist and maxlocks to a lower value and monitor
db2diag.log will I get the offending sql that is causing the lock
escalation?

Thanks.
That is a huge amount of locklist. Do you have a data warehouse application,
or an OLTP application (with lots of updates, inserts, and deletes)? If you
have almost all selects, you may NOT need to avoid lock escalation, since
multiple share locks on a table coexist fine; you might even use the UR
isolation level.

Dec 10 '07 #2
On Dec 10, 8:27 pm, dunleav1 <jmdunle...@comcast.net> wrote:
I have db2 9 installation and I think our application isn't handling
locking optimally.
What isolation level is your application using?
I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.

How do I determine the sql that is causing these high escalations?
get snapshot for applications on <db>

and

get snapshot for locks on <db>

will give you a start
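
For example, a minimal sequence (a sketch only; MYDB is a placeholder
database name, and turning the lock and statement monitor switches on
first gives more detail):

    db2 update monitor switches using lock on statement on
    db2 get snapshot for locks on MYDB
    db2 get snapshot for applications on MYDB

In the application snapshot, the "Lock escalations" and "Exclusive lock
escalations" counters show which connections are doing the escalating.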

/Lennart

Dec 11 '07 #3
On Dec 10, 5:48 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message

news:a0**********************************@e25g2000prg.googlegroups.com...
I have db2 9 installation and I think our application isn't handling
locking optimally.
I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.
How do I determine the sql that is causing these high escalations?
Would it be better from a performance standpoint to specifically lock
the table in question instead?
On a machine with 4G of memory - is there a rule of thumb of about how
much memory should be allocated for locking? - locklist and maxlocks?
If I set locklist and maxlocks to a lower value and monitor
db2diag.log will I get the offending sql that is causing the lock
escalation?
Thanks.

That is a huge amount of locklist. Do you have a data warehouse application,
or an OLTP application (with lots of updates, inserts, and deletes)? If you
have almost all selects, you may NOT want to avoid lock escalation since
multiple share locks on table coexist fine, or maybe even use UR isolation
level.
It's both. This application is an OLTP application, but it also moves data
into a reporting schema.
Basically, I move transactions into a star-schema-based reporting schema.

The isolation level would be whatever DB2 chooses when using the DB2
JDBC driver.
Dec 11 '07 #4
On Dec 11, 12:30 am, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 10, 8:27 pm, dunleav1 <jmdunle...@comcast.net> wrote:
I have db2 9 installation and I think our application isn't handling
locking optimally.

What isolation level is your application using?
The isolation level would be whatever DB2 chooses when using the DB2
jdbc driver.
>
I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.
How do I determine the sql that is causing these high escalations?

get snapshot for applications on <db>

and

get snapshot for locks on <db>

will give you a start

/Lennart
I have a process that runs approximately 500 transactions for my test.
I'll have to watch the db2diag.log file for an alert and then map it back
to the SQL. That's kind of a bother. db2diag.log already knows the table
that it is escalating on, so I should be able to log the offending
statement, shouldn't I?
Dec 11 '07 #5
I'm pretty sure it's REPEATABLE_READ by default.
Dec 11 '07 #6
"dunleav1" <jm********@comcast.netwrote in message
news:853ff41e-5d23-4030-8d3e-
It's both. This application is a oltp application but it moves data
into a reporting schema.
Basically, I move transactions into a star schema based reporting
schema.

The isolation level would be whatever DB2 chooses when using the DB2
jdbc driver.
On any select statement, you can alter the default isolation level using the
WITH UR (or CS, RR, RS) clause. Using WITH UR "might" be a good idea when
you are pulling data from the tables to load the data warehouse, especially
if it is historical data and you know it is not being changed at the exact
moment you pull it.
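
A sketch of the clause (schema, table, and column names are placeholders):

    SELECT val1, val2, val3
      FROM myschema.transactions
     WHERE load_date < CURRENT DATE
    WITH UR

The clause overrides the isolation level for that single statement only;
the rest of the unit of work keeps its default.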
Dec 12 '07 #7
"dunleav1" <jm********@comcast.netwrote in message
news:f5**********************************@i29g2000 prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS. If you are using RR, try changing it to CS or UR
using the WITH xx clause.
Dec 12 '07 #8
On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message

news:f5**********************************@i29g2000prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.

The DB2 default is CS by default. If you are using RR, try changing it to CS
or UR using the WITH XX clause.
CS is not one of the isolation levels listed in the JDBC standard?
Dec 12 '07 #9
On Dec 11, 12:57 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 12:30 am, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 10, 8:27 pm, dunleav1 <jmdunle...@comcast.net> wrote:
I have db2 9 installation and I think our application isn't handling
locking optimally.
What isolation level is your application using?

The isolation level would be whatever DB2 chooses when using the DB2
jdbc driver.


I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.
How do I determine the sql that is causing these high escalations?
get snapshot for applications on <db>
and
get snapshot for locks on <db>
will give you a start
/Lennart

I have a process that runs approximately 500 transactions for my test.
I'll have to watch the db2diag.log file for an alert than map it back
to db2diag.log file. That's kind of a bother. db2diag already knows
the table that it is escalating on, I should be able to log the
offending statement shouldn't I?
I figured out the offending statements by syncing a SQL "trace" with
the db2diag.log file.
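One way to capture such a statement trace (sketched here with placeholder
names and paths; the output directory has to exist and be writable by the
instance owner) is a statement event monitor:

    db2 "CREATE EVENT MONITOR stmt_mon FOR STATEMENTS WRITE TO FILE '/tmp/stmt_mon'"
    db2 "SET EVENT MONITOR stmt_mon STATE 1"
    (run the workload, then format the captured output)
    db2evmon -path /tmp/stmt_mon > stmt_mon.txt

The formatted output contains each statement's text and timestamps, which
can be lined up with the escalation messages in db2diag.log.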
I have 7 insert statements that do a join between the insert table (T1)
and a second table (T2) and look like this:
(i.e.) INSERT INTO T1 (val1, val2, val3)
         SELECT val1, val2, val3 FROM T2
         WHERE NOT EXISTS (SELECT 1 FROM T1 WHERE T1.val1 = T2.val1)
           AND val2 = 1 AND val3 > 10000000

The lock escalation happens on the table being inserted into if I set
MAXLOCKS and LOCKLIST to permanently lower values, e.g. 50 and 20000.
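(For reference, a sketch of pinning such values with the CLP; MYDB is a
placeholder database name and the grep assumes a Unix-like shell:

    db2 update db cfg for MYDB using LOCKLIST 20000 MAXLOCKS 50
    db2 get db cfg for MYDB | grep -iE "locklist|maxlocks"

Giving the parameters fixed values like this also switches off their
automatic self-tuning.)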
Now T1 may be selected from or inserted into by different processes, so I
want the default isolation level of RR from JDBC.

Will I see any significant performance gain by issuing an explicit LOCK
TABLE prior to the insert statement?
Or is it better to just let DB2 automatically escalate to a table lock?
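
For what it's worth, the explicit lock itself is a one-liner (T1 as above),
issued in the same unit of work as the inserts:

    LOCK TABLE T1 IN EXCLUSIVE MODE

The table lock is held until the next COMMIT or ROLLBACK, so it avoids the
per-row locking overhead but also blocks other readers and writers of T1
for the whole unit of work.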
Dec 12 '07 #10
On Dec 12, 12:00 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 12:57 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 12:30 am, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 10, 8:27 pm, dunleav1 <jmdunle...@comcast.net> wrote:
I have db2 9 installation and I think our application isn't handling
locking optimally.
What isolation level is your application using?
The isolation level would be whatever DB2 chooses when using the DB2
jdbc driver.
I have Maxlocks(96) and locklist (104832) setup to be auto tuned. The
value of these parameters in the parenthesis are the auto tuned values
after running my application for a while.
I notice values in db2diag.log that these values are being increased
to prevent lock escalation.
How do I determine the sql that is causing these high escalations?
get snapshot for applications on <db>
and
get snapshot for locks on <db>
will give you a start
/Lennart
I have a process that runs approximately 500 transactions for my test.
I'll have to watch the db2diag.log file for an alert than map it back
to db2diag.log file. That's kind of a bother. db2diag already knows
the table that it is escalating on, I should be able to log the
offending statement shouldn't I?

I figured out the offending statements by syncing a sql "trace" and
the db2diag.log file.
I have 7 insert statements that do a a join between the insert table
(T1) and a second table (T2) that look like this:
(ie) insert into T1 (val1,val2,val3) (select val1,val2,val3) from T2
where not exists (select 1 from T2 where T2.val1 = T1.val1) and val2 =
1 and val3 >10000000)

The lock escalation happens on the table being inserted into if I set
maxlocks and locklist to a permanant lower value. (ie) 50, 20000 for
example.
Now T1 may be selected or inserted into from different processes so I
want the default isolation level of RR from jdbc.

Will I see any significant performance gain by issuing a specific lock
table prior to the insert statement?
Or is it better to just let DB2 automatically escalate the sql to a
table lock?
The 7 inserts take 43 seconds, 35 seconds, 369 seconds, 8 seconds,
29 seconds, 29 seconds, and 29 seconds.
Based on these numbers I'm really wondering just how much of a performance
hit a lock escalation really is, because it doesn't look like much to
worry about.
Dec 12 '07 #11
On Dec 12, 4:33 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message
news:f5**********************************@i29g2000prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS by default. If you are using RR, try changing it to CS
or UR using the WITH XX clause.

CS is not one of the isolation levels listed in the JDBC standard?
This must be the one thing that causes the most confusion between DB2
DBAs and Java developers.
I think JDBC uses the names as defined in ISO:

DB2                JDBC
Repeatable Read  - Serializable
Read Stability   - Repeatable Read
Cursor Stability - Read Committed
Uncommitted Read - Read Uncommitted
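
So, from JDBC, asking for DB2's Cursor Stability means requesting the
ISO/JDBC name for it (a sketch, assuming an already opened
java.sql.Connection named conn):

    // JDBC TRANSACTION_READ_COMMITTED corresponds to DB2 Cursor Stability
    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

Per the JDBC javadoc, the call should be made while no transaction is
active on the connection.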

/Lennart
Dec 12 '07 #12
On Dec 12, 12:28 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 12, 4:33 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message
>news:f5**********************************@i29g2000prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS by default. If you are using RR, try changing it to CS
or UR using the WITH XX clause.
CS is not one of the isolation levels listed in the JDBC standard?

This must must be the one thing that causes most confusion between DB2
DBA's and java developers.
I think JDBC uses the names as defined in ISO

DB2 JDBC
Repeateble Read - Serilizable
Read Stability - Repeatable Read
Cursor Stability - Read Committed
Uncommitted Read - Read Uncommitted

/Lennart
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.

Dec 12 '07 #13
On Dec 12, 9:46 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 12, 12:28 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 12, 4:33 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message
news:f5**********************************@i29g2000prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS by default. If you are using RR, try changing it to CS
or UR using the WITH XX clause.
CS is not one of the isolation levels listed in the JDBC standard?
This must must be the one thing that causes most confusion between DB2
DBA's and java developers.
I think JDBC uses the names as defined in ISO
DB2 JDBC
Repeateble Read - Serilizable
Read Stability - Repeatable Read
Cursor Stability - Read Committed
Uncommitted Read - Read Uncommitted
/Lennart

The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.
My point was merely that "Repeatable Read" means different things to
different people. Anyhow, Repeatable Read (JDBC) keeps locks longer
than Read Committed. What happens with the lock escalations if you
lower the isolation level to Read Committed?

If you have identified the query that causes the escalations, you can
try db2advis to see if there are indexes that might help.
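
For example, with the statements collected into a file (MYDB and the file
name are placeholders):

    db2advis -d MYDB -i workload.sql -t 5

where workload.sql just contains the SQL statements terminated by
semicolons, and -t caps the advisor run at five minutes.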

/Lennart
Dec 12 '07 #14
On Dec 12, 5:59 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 12, 9:46 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 12, 12:28 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
On Dec 12, 4:33 pm, dunleav1 <jmdunle...@comcast.net> wrote:
On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
"dunleav1" <jmdunle...@comcast.net> wrote in message
>news:f5**********************************@i29g2000prf.googlegroups.com...
I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS by default. If you are using RR, try changing it to CS
or UR using the WITH XX clause.
CS is not one of the isolation levels listed in the JDBC standard?
This must must be the one thing that causes most confusion between DB2
DBA's and java developers.
I think JDBC uses the names as defined in ISO
DB2 JDBC
Repeateble Read - Serilizable
Read Stability - Repeatable Read
Cursor Stability - Read Committed
Uncommitted Read - Read Uncommitted
/Lennart
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.

My point was merely that "Repeateble Read" means different things for
different people. Anyhow, Repeatable Read (JDBC) keep locks longer
than Read Committed. What happens with the lock escalations if you
lower the isolation level to Read Committed?

If you have identified the query that causes the escalations, you can
try db2advis to see if there are indexes that might help.

/Lennart
Yes, the sql is already optimized. I used db2advis to confirm.
Dec 13 '07 #15
dunleav1 wrote:
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.
I would say that this is probably another stupid default setting in JDBC.

The other two I know of are auto-commit being turned on and,
consequently, all cursors being implicitly defined as holdable. Any
serious JDBC application should turn these off right after it starts,
IMHO.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Dec 14 '07 #16
On Dec 14, 3:50 am, Knut Stolze <sto...@de.ibm.com> wrote:
dunleav1 wrote:
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.

I would say that this is probably another stupid default setting in JDBC.

The other two I know of are auto-commit being turned on and,
consequentially, all cursors being implicitly defined as holdable. Any
serious JDBC application should turn this off right after it is started,
IMHO.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Knut can you elaborate on what you think is stupid and why?
Dec 17 '07 #17
dunleav1 wrote:
On Dec 14, 3:50 am, Knut Stolze <sto...@de.ibm.com> wrote:
>dunleav1 wrote:
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.

I would say that this is probably another stupid default setting in JDBC.

The other two I know of are auto-commit being turned on and,
consequentially, all cursors being implicitly defined as holdable. Any
serious JDBC application should turn this off right after it is started,
IMHO.

Knut can you elaborate on what you think is stupid and why?
The repeatable read isolation level allows less concurrency than read
committed (or uncommitted read). Thus, the default picks an isolation level
that has an adverse effect on performance.

Another, similar issue is auto-commit being the default for JDBC
connections. First, any transactional properties are discarded right away:
you cannot roll back changes. Likewise, because of auto-commit, it was
necessary to open cursors as holdable by default. Otherwise, the implicit
commit after the OPEN CURSOR (issued through the JDBC method) would close
the cursor immediately, so you couldn't use it. Making all cursors holdable
fits in with auto-commit, but holdable cursors allocate resources on the
database server. If you don't need auto-commit, you can lift those resource
requirements on the server and get better performance.

Granted, those settings are good for novice programmers who just want their
Hello World program to work. But for all practical purposes, you have to
change those defaults, IMHO.
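
In code the changes are small -- a sketch only, with the JDBC URL,
credentials, and chosen isolation level as placeholders to adapt:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class Db2ConnSetup {
        public static void main(String[] args) throws SQLException {
            Connection conn = DriverManager.getConnection(
                    "jdbc:db2://dbhost:50000/MYDB", "user", "password");
            conn.setAutoCommit(false);  // take explicit control of commit/rollback
            conn.setTransactionIsolation(
                    Connection.TRANSACTION_READ_COMMITTED);  // DB2 Cursor Stability
            try {
                // ... application work on conn goes here ...
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            } finally {
                conn.close();
            }
        }
    }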

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Dec 17 '07 #18
On Dec 17, 4:20 pm, Knut Stolze <sto...@de.ibm.com> wrote:
dunleav1 wrote:
On Dec 14, 3:50 am, Knut Stolze <sto...@de.ibm.com> wrote:
dunleav1 wrote:
The IBM DB2 universal jdbc driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.
I would say that this is probably another stupid default setting in JDBC.
The other two I know of are auto-commit being turned on and,
consequentially, all cursors being implicitly defined as holdable. Any
serious JDBC application should turn this off right after it is started,
IMHO.
Knut can you elaborate on what you think is stupid and why?

Repeatable read isolation level allows less concurrency than committed read
(or uncommitted read). Thus, an isolation level is chosen per default which
has an adverse effect on performance.

Another, similar issue is auto-commit being the default for JDBC
connections. First, any transactional properties are discarded with this
right away. You cannot rollback changes. Likewise, due to the
auto-commit, it was necessary to open cursors per default as holdable
cursors. Otherwise, the OPEN CURSOR (as JDBC method) would cause the
cursor to be closed at the end of the OPEN already. So you couldn't use
the cursor. Making all cursors holdable fits in with the auto-commit. But
again, holdable cursors allocate some resources on the database server. If
you don't need auto-commit, you can lift the resource requirements on the
server and get better performance.

Granted, those settings are good for novice programmers who just want to
their Hello World program to work. But for all practical purposes, you
have to change those defaults, IMHO.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Thanks Knut, I'll have to run some tests with our Java programmers.
Our application supports DB2, Oracle, and SQL Server. DB2 has the worst
performance of the three.
I'll start experimenting with the isolation level to get those numbers
comparable (or better).
Dec 19 '07 #19
