Bytes IT Community

Lock Escalation

I have a DB2 9 installation and I think our application isn't handling
locking optimally.

I have MAXLOCKS (96) and LOCKLIST (104832) set up to be auto-tuned; the
values in parentheses are the auto-tuned values after running my
application for a while.
I notice entries in db2diag.log showing that these values are being
increased to prevent lock escalation.

How do I determine the SQL that is causing these high escalations?
Would it be better, from a performance standpoint, to specifically lock
the table in question instead?
On a machine with 4 GB of memory, is there a rule of thumb for how much
memory should be allocated for locking (LOCKLIST and MAXLOCKS)?
If I set LOCKLIST and MAXLOCKS to lower values and monitor db2diag.log,
will I get the offending SQL that is causing the lock escalation?

Thanks.


Dec 10 '07 #1
18 Replies


"dunleav1" <jm********@comcast.net> wrote in message
news:a0**********************************@e25g2000prg.googlegroups.com...
>I have a DB2 9 installation and I think our application isn't handling
>locking optimally. [...]
That is a huge locklist. Do you have a data warehouse application, or an
OLTP application (with lots of updates, inserts, and deletes)? If the
workload is almost all selects, you may NOT want to avoid lock
escalation, since multiple share locks on a table coexist fine; you
might even use the UR isolation level.

Dec 10 '07 #2

On Dec 10, 8:27 pm, dunleav1 <jmdunle...@comcast.net> wrote:
>I have a DB2 9 installation and I think our application isn't handling
>locking optimally.
What isolation level is your application using?
>I have MAXLOCKS (96) and LOCKLIST (104832) set up to be auto-tuned. [...]
>How do I determine the SQL that is causing these high escalations?
get snapshot for applications on <db>

and

get snapshot for locks on <db>

will give you a start
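In the meantime, a rough way to spot the events is to scan db2diag.log for escalation messages. A minimal sketch; the sample text and the exact message wording (here, ADM5502W) are assumptions that vary by DB2 level and platform, and in practice you would feed it the real log file instead of the built-in sample:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class EscalationScan {
    // Print every log line that mentions an escalation and count them.
    static int scan(BufferedReader in) throws IOException {
        int hits = 0;
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            if (line.toLowerCase().contains("escalation")) {
                System.out.println(line);
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // Made-up two-line excerpt standing in for a real db2diag.log;
        // replace with new FileReader("/path/to/db2diag.log") in practice.
        String sample =
            "2007-12-10-14.22.31 LEVEL: Warning\n"
          + "ADM5502W  The escalation of \"1000\" locks on table \"JM.T1\" was successful.\n";
        try (BufferedReader in = new BufferedReader(new StringReader(sample))) {
            System.out.println(scan(in) + " escalation line(s)");
        }
    }
}
```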

/Lennart

Dec 11 '07 #3

On Dec 10, 5:48 pm, "Mark A" <nob...@nowhere.com> wrote:
>That is a huge amount of locklist. Do you have a data warehouse
>application, or an OLTP application (with lots of updates, inserts, and
>deletes)? [...]
It's both. This application is an OLTP application, but it moves data
into a reporting schema. Basically, I move transactions into a
star-schema-based reporting schema.

The isolation level would be whatever DB2 chooses when using the DB2
JDBC driver.
Dec 11 '07 #4

On Dec 11, 12:30 am, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
>What isolation level is your application using?
The isolation level would be whatever DB2 chooses when using the DB2
JDBC driver.
>get snapshot for applications on <db>
>and
>get snapshot for locks on <db>
>will give you a start
I have a process that runs approximately 500 transactions for my test.
I'll have to watch the db2diag.log file for an alert and then map it
back to the SQL. That's kind of a bother. db2diag.log already knows the
table that it is escalating on, so I should be able to log the
offending statement, shouldn't I?
Dec 11 '07 #5

I'm pretty sure it's REPEATABLE_READ by default.
Dec 11 '07 #6

"dunleav1" <jm********@comcast.net> wrote in message
news:853ff41e-5d23-4030-8d3e-
>It's both. This application is an OLTP application but it moves data
>into a reporting schema. [...]
On any select statement, you can override the default isolation level
using the WITH UR (or CS, RR, RS) clause. Using WITH UR "might" be a
good idea when you are pulling data from the tables to load the data
warehouse, especially if it is historical data and you know it is not
being changed at the exact moment you pull it.
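As a concrete JDBC sketch of that suggestion; the table and column names are hypothetical, and the point is only that the WITH UR clause overrides the isolation level for this one statement, whatever the connection default is:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class WithUrExtract {
    // Statement-level isolation override: WITH UR applies only to this
    // query, regardless of the connection's default isolation level.
    static final String EXTRACT_SQL =
        "SELECT val1, val2, val3 FROM T2 WHERE val2 = 1 WITH UR";

    static void extract(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(EXTRACT_SQL);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // load the row into the warehouse staging area;
                // UR means uncommitted changes may be visible
            }
        }
    }

    public static void main(String[] args) {
        // No live database here; just show the statement being prepared.
        System.out.println(EXTRACT_SQL);
    }
}
```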
Dec 12 '07 #7

"dunleav1" <jm********@comcast.net> wrote in message
news:f5**********************************@i29g2000prf.googlegroups.com...
>I'm pretty sure it's REPEATABLE_READ by default.
The DB2 default is CS. If you are using RR, try changing it to CS or UR
using the WITH XX clause.
Dec 12 '07 #8

On Dec 11, 7:53 pm, "Mark A" <nob...@nowhere.com> wrote:
>The DB2 default is CS. If you are using RR, try changing it to CS or UR
>using the WITH XX clause.
CS is not one of the isolation levels listed in the JDBC standard?
Dec 12 '07 #9

On Dec 11, 12:57 pm, dunleav1 <jmdunle...@comcast.net> wrote:
>I'll have to watch the db2diag.log file for an alert and then map it
>back to the SQL. That's kind of a bother. [...]
I figured out the offending statements by syncing a SQL "trace" and the
db2diag.log file.
I have 7 insert statements that do a join between the insert table (T1)
and a second table (T2) and look like this:

INSERT INTO T1 (val1, val2, val3)
  SELECT val1, val2, val3 FROM T2
  WHERE NOT EXISTS (SELECT 1 FROM T1 WHERE T1.val1 = T2.val1)
    AND val2 = 1 AND val3 > 10000000

The lock escalation happens on the table being inserted into if I set
maxlocks and locklist to a permanent lower value (e.g. 50 and 20000).
Now T1 may be selected from or inserted into by different processes, so
I want the default isolation level of RR from JDBC.

Will I see any significant performance gain by issuing a specific LOCK
TABLE prior to the insert statement?
Or is it better to just let DB2 automatically escalate to a table lock?
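For comparison, taking the table lock yourself is a one-statement change. A hedged sketch (T1 comes from the thread; the method and everything around it are hypothetical): the LOCK TABLE is held until COMMIT, so auto-commit must be off for the lock to cover the insert.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ExplicitTableLock {
    static final String LOCK_SQL = "LOCK TABLE T1 IN EXCLUSIVE MODE";

    // Take one table lock up front instead of letting DB2 escalate
    // thousands of row locks mid-insert. COMMIT releases the lock.
    static void loadBatch(Connection conn, String insertSql) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.execute(LOCK_SQL);
            st.executeUpdate(insertSql);
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        // No live database here; just show the lock statement.
        System.out.println(LOCK_SQL);
    }
}
```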
Dec 12 '07 #10

On Dec 12, 12:00 pm, dunleav1 <jmdunle...@comcast.net> wrote:
>Will I see any significant performance gain by issuing a specific lock
>table prior to the insert statement?
>Or is it better to just let DB2 automatically escalate to a table lock?
The 7 inserts take 43 seconds, 35 seconds, 369 seconds, 8 seconds, 29
seconds, 29 seconds, and 29 seconds.
Based on these numbers I'm really wondering just how much of a
performance hit a lock escalation really is, because it doesn't look
like much to worry about.
Dec 12 '07 #11

On Dec 12, 4:33 pm, dunleav1 <jmdunle...@comcast.net> wrote:
>CS is not one of the isolation levels listed in the JDBC standard?
This must be the one thing that causes the most confusion between DB2
DBAs and Java developers.
I think JDBC uses the names as defined in ISO:

DB2                JDBC
Repeatable Read  - Serializable
Read Stability   - Repeatable Read
Cursor Stability - Read Committed
Uncommitted Read - Read Uncommitted

/Lennart
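That mapping can be made concrete against the constants in java.sql.Connection; a small sketch (the DB2 names follow the table in this thread):

```java
import java.sql.Connection;

public class IsolationMapping {
    // DB2 name for each standard JDBC isolation constant.
    static String db2Name(int jdbcLevel) {
        switch (jdbcLevel) {
            case Connection.TRANSACTION_SERIALIZABLE:     return "Repeatable Read (RR)";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "Read Stability (RS)";
            case Connection.TRANSACTION_READ_COMMITTED:   return "Cursor Stability (CS)";
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "Uncommitted Read (UR)";
            default: throw new IllegalArgumentException("unknown level: " + jdbcLevel);
        }
    }

    public static void main(String[] args) {
        // The DB2 default (CS) corresponds to JDBC's READ_COMMITTED.
        System.out.println(db2Name(Connection.TRANSACTION_READ_COMMITTED));
    }
}
```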
Dec 12 '07 #12

On Dec 12, 12:28 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
>I think JDBC uses the names as defined in ISO [...]
The IBM DB2 Universal JDBC driver returns "repeatable read (version:
SQL08024)" as the default isolation level when getTransactionIsolation
is called and the isolation level isn't specifically set.

Dec 12 '07 #13

On Dec 12, 9:46 pm, dunleav1 <jmdunle...@comcast.net> wrote:
>The IBM DB2 Universal JDBC driver returns "repeatable read (version:
>SQL08024)" as the default isolation level when getTransactionIsolation
>is called and the isolation level isn't specifically set.
My point was merely that "Repeatable Read" means different things to
different people. Anyhow, Repeatable Read (JDBC) keeps locks longer
than Read Committed. What happens with the lock escalations if you
lower the isolation level to Read Committed?

If you have identified the query that causes the escalations, you can
try db2advis to see if there are indexes that might help.

/Lennart
Dec 12 '07 #14

On Dec 12, 5:59 pm, Lennart <Erik.Lennart.Jons...@gmail.com> wrote:
>If you have identified the query that causes the escalations, you can
>try db2advis to see if there are indexes that might help.
Yes, the SQL is already optimized. I used db2advis to confirm.
Dec 13 '07 #15

dunleav1 wrote:
>The IBM DB2 Universal JDBC driver returns "repeatable read (version:
>SQL08024)" as the default isolation level when getTransactionIsolation
>is called and the isolation level isn't specifically set.
I would say that this is probably another stupid default setting in
JDBC.

The other two I know of are auto-commit being turned on and,
consequently, all cursors being implicitly defined as holdable. Any
serious JDBC application should turn these off right after it starts,
IMHO.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Dec 14 '07 #16

On Dec 14, 3:50 am, Knut Stolze <sto...@de.ibm.com> wrote:
>I would say that this is probably another stupid default setting in
>JDBC. [...] Any serious JDBC application should turn this off right
>after it is started, IMHO.
Knut, can you elaborate on what you think is stupid and why?
Dec 17 '07 #17

dunleav1 wrote:
>Knut, can you elaborate on what you think is stupid and why?
Repeatable read isolation allows less concurrency than read committed
(or uncommitted read). Thus, the default isolation level is one that
has an adverse effect on performance.

Another, similar issue is auto-commit being the default for JDBC
connections. First, any transactional properties are discarded right
away: you cannot roll back changes. Likewise, due to the auto-commit,
it was necessary to open cursors as holdable by default. Otherwise, the
OPEN CURSOR (as a JDBC method) would cause the cursor to be closed at
the end of the OPEN already, so you couldn't use the cursor. Making all
cursors holdable fits in with the auto-commit. But again, holdable
cursors allocate resources on the database server. If you don't need
auto-commit, you can lift those resource requirements on the server and
get better performance.

Granted, those settings are good for novice programmers who just want
their Hello World program to work. But for all practical purposes, you
have to change those defaults, IMHO.
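Overriding all three defaults is a few lines in java.sql; a sketch under the assumption that the driver honors the standard JDBC calls (driver behavior varies, so verify against your DB2 level):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcDefaults {
    // Flip the three defaults discussed here: real units of work,
    // CS (read committed) instead of RR, and non-holdable cursors.
    static void configure(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); // CS in DB2 terms
        conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT); // drop holdable-cursor overhead
    }

    public static void main(String[] args) {
        // No live connection here; just show the constant values involved.
        System.out.println(Connection.TRANSACTION_READ_COMMITTED + " "
                + ResultSet.CLOSE_CURSORS_AT_COMMIT);
    }
}
```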

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Dec 17 '07 #18

On Dec 17, 4:20 pm, Knut Stolze <sto...@de.ibm.com> wrote:
>Granted, those settings are good for novice programmers who just want
>their Hello World program to work. But for all practical purposes, you
>have to change those defaults, IMHO.
Thanks Knut, I'll have to run some tests with our Java programmers.
Our application supports DB2, Oracle, and SQL Server; DB2 has the worst
performance of the three.
I'll start experimenting with the isolation level to get those numbers
comparable (or better).
Dec 19 '07 #19
