
db2_evaluncommitted

Hi,

We have the following scenario:

1) Table A is updated by joining it with a global temporary table B.
2) There is a one-to-one correspondence between A and B, as the primary
   keys of both tables are the same.
3) The maximum number of rows in B is 8192.

My questions:

1) When the update is done with db2_evaluncommitted set to YES, the
   maximum number of locks acquired should be 8192, right? If not, why?
   I am seeing locks on the order of 50,000 - 100,000 and am not sure
   why, even after setting db2_evaluncommitted to YES.

2) Does the db2_evaluncommitted setting also hold for global temporary
   tables?

3) Is there any other property I can tune/change to reduce the number
   of locks?

Thanks in advance for feedback/help.
Thanks,
Sumanth
Apr 3 '06 #1
7 Replies


DB2_EVALUNCOMMITTED allows a reading cursor to look at a row that has been
updated by another application but not yet committed, to determine whether
it meets the reading cursor's predicates. This shouldn't have any
significant effect on the number of locks. Any row that meets the
predicates during a scan will be locked. Without more information about
the tables, the available indexes, and the update statement, there's no
way to even guess how many locks would be taken.

Did you see a difference in the number of locks without it set to YES?
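In case it helps, the registry variable is set at the instance level and
only takes effect after the instance is recycled; a rough sketch:

    db2set DB2_EVALUNCOMMITTED=YES
    db2stop
    db2start
    db2set -all        (lists the registry settings currently in effect)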
Philip Sherman


Apr 3 '06 #2

Ian
Sumanth wrote:
> 1) When an update is done with db2_evaluncommitted set to YES then
> the max number of locks attained should be equivalent to 8192
> right? if not why?
>
> I am seeing locks in the order of 50000 - 100,000 and not sure why
> I see that after setting db2_evaluncommitted to YES.

DB2_EVALUNCOMMITTED has nothing to do with the number of locks that an
application will take when performing updates. It controls whether DB2
can "read through" locks held by other connections (i.e. uncommitted
transactions) to determine whether the locked rows match the predicates
before trying to take out the row lock.

As far as the number of locks you see, is this at the application level
or at the database level? Have you also accounted for other locks
caused by the update (index keys, etc.)?

> 2) Also does db2_evaluncommitted property hold good for global
> temporary tables?

DB2_EVALUNCOMMITTED doesn't make sense on GTTs, because only one
connection can see the table, so there are no locking issues.

> 3) Any other property that I can tune/change to reduce the number of
> locks?

Reduce LOCKLIST or MAXLOCKS; use a LOCK TABLE statement prior to the
update; or ALTER TABLE ... LOCKSIZE TABLE.
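For reference, the table-level options look roughly like this (just a
sketch; A stands in for your real table name):

    LOCK TABLE A IN EXCLUSIVE MODE    -- held until the unit of work ends
    ALTER TABLE A LOCKSIZE TABLE      -- use table locks instead of row locks for A

Note that a table lock will serialize your concurrent updaters, so this
trades concurrency for fewer locks.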

Apr 3 '06 #3

The locks are at the database level. Almost all of the locks I see are of
object type Row and are X locks. There are around 4 users updating table A
at around the same time; the data sets they touch are mutually exclusive,
which is why we are using DB2_EVALUNCOMMITTED. We have already modified
the locklist and maxlocks parameters.

The reason I want better control over the number of locks is to prevent
lock escalations, which could lead to deadlocks.

My reasoning is based on the fact that each update should not require more
than 10,000 locks, so I have set maxlocks and locklist accordingly (the
update touches only 8192 rows or fewer), but I still see deadlocks. And in
the deadlock logs I see close to 50,000 locks for the update statement.

Not sure how to get a handle on the locks to prevent escalation.
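One thing I could probably do is watch the lock counts and escalations
while the update runs, with something along these lines (SAMPLE is just a
placeholder for our database name):

    db2 get snapshot for database on SAMPLE | grep -i lock
    db2 get snapshot for locks on SAMPLE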

Thanks,
Sumanth

"Ian" <ia*****@mobileaudio.com> wrote in message
news:44**********@newsfeed.slurp.net...
Sumanth wrote:
Hi,

We have the following scenario:

1) update table A by joining with a global temporary table B
2) There is a one to one correspondence between A and B as the pk's of
both tables are the same
3) The max number of rows in B is 8192

My questions:

1) When an update is done with db2_evaluncommitted set to YES then the
max number of locks attained should be equivalent to 8192
right? if not why?

I am seeing locks in the order of 50000 - 100,000 and not sure why I
see that after setting db2_evaluncommitted to YES.


DB2_EVALUNCOMMITTED has nothing to do with the number of locks that an
application will take when performing updates. It controls whether DB2
can "read through" locks held by other connections (i.e. uncommitted
transactions) to determine if the locked rows will match before trying
to take out the row lock.

As far as the number of locks you see, is this at the application level,
or at the database level? Have you also accounted for other locks
caused by the update (index keys, etc)?

2) Also does db2_evaluncommitted property hold good for global temporary
table?


DB2_EVALUNCOMMITTED doesn't make sense on GTTs, because only 1
connection can see the table, therefore there are no locking issues.
3) Any other property that I can tune/change to reduce the number of
locks?


Reduce LOCKLIST or MAXLOCKS; use 'LOCK TABLE' statement prior to update;
or ALTER TABLE LOCKSIZE TABLE.

Apr 3 '06 #4


There was no difference between YES and NO.

The table we are updating has close to 3 million rows; the update
statement updates only 8192 rows, i.e. the predicate matches only 8192
rows, yet we see the locks exceeding 50,000 for this update.

Query information: table A and table B have the same columns. Table A has
an index on the PK, and table B is a global temporary table.

    UPDATE A
       SET A.c1 = (SELECT B.c1 FROM B WHERE A.pk = B.pk),
           A.c2 = (SELECT B.c2 FROM B WHERE A.pk = B.pk)
     WHERE A.pk IN (SELECT pk FROM B)

B has 8192 rows; A has ~3 million rows.

Isolation level: cursor stability.

My assumption is that when table A is scanned, rows whose PKs are not in B
will not be locked, so we should end up with 8192 row locks.

Any thoughts, based on what I am trying to do above, as to why I see such
a large number of locks?

Thanks,
Sumanth
"Phil Sherman" <ps******@ameritech.net> wrote in message
news:PA******************@newssvr30.news.prodigy.c om...
Db2_evaluncommitted allows a reading cursor to look at an updated row (by
another application) that has not yet committed to determine if it meets
the reading cursor's predicates. This shouldn't have any significant
effect on the number of locks. Any row that meets the predicates during a
scan will be locked. Without more information about the tables, available
indexes and the update statement, there's no way to even guess how many
locks would be taken.

Did you see a difference in the number of locks without it set to YES?
Philip Sherman

Sumanth wrote:
Hi,

We have the following scenario:

1) update table A by joining with a global temporary
table B
2) There is a one to one correspondence between A and
B as the pk's of both tables are the same
3) The max number of rows in B is 8192

My questions:

1) When an update is done with db2_evaluncommitted set
to YES then the max number of locks
attained should be equivalent to 8192 right? if
not why?
I am seeing locks in the order of 50000 - 100,000
and not sure why I see that after setting
db2_evaluncommitted to YES.

2) Also does db2_evaluncommitted property hold good
for global temporary table?

3) Any other property that I can tune/change to
reduce the number of locks?

Thanks in advance for feedback/help.
Thanks,
Sumanth

Apr 3 '06 #5

Did you try a MERGE instead of the UPDATE?

    MERGE INTO a USING b ON a.pk = b.pk
    WHEN MATCHED THEN
        UPDATE SET a.c1 = b.c1, a.c2 = b.c2

This may generate a lot less database activity. You'll need to try it and
see what happens. Make sure that you have statistics on the two tables
that reflect the row counts you gave.
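It would also be worth capturing the access plan for the original UPDATE,
to see whether the index on A.pk is actually being used or whether the
optimizer is scanning A. A rough sketch, assuming the explain tables have
been created and using SAMPLE as a placeholder database name:

    SET CURRENT EXPLAIN MODE EXPLAIN;
    -- the statement is compiled and explained, but not executed
    UPDATE A
       SET A.c1 = (SELECT B.c1 FROM B WHERE A.pk = B.pk),
           A.c2 = (SELECT B.c2 FROM B WHERE A.pk = B.pk)
     WHERE A.pk IN (SELECT pk FROM B);
    SET CURRENT EXPLAIN MODE NO;

Then format the plan with:

    db2exfmt -d SAMPLE -1 -o update_plan.out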

Philip Sherman


Apr 3 '06 #6

I did not know of the MERGE statement; I will give it a try.
RUNSTATS was run on the transactional table A. It was not run on B, as B
is a temporary table.

Thanks,
Sumanth

"Phil Sherman" <ps******@ameritech.net> wrote in message
news:Hw******************@newssvr30.news.prodigy.c om...
Did you try a MERGE instead of an update?
MERGE INTO a USING b ON a.pk = b.pk
WHEN MATCHED THEN
UPDATE SET a.c1 = b.c1, a.c2 = a.c2

This may gnerate a lot less database activity. You'll need to try it and
see what happens. Make sure that you have statistics on the two tables
that reflect the row counts you gave.

Philip Sherman

Sumanth wrote:
There was no difference between YES and NO.

The table we are updating has close to 3 million rows, the update
statement would update only 8192 rows
i.e the predicate would match only 8192 rows but we see the locks
exceeding 50,000 for this update.

Query information:
table A and table B both have the same columns. table A has an index on
the PK and table B is
a global temporary table.

update table A set A.c1 = (Select B.c1 from table B where A.pk = B.pk),
A.c2 = (Select B.c2 from table B where A.pk = B.PK)
where A.pk in (select pk from B)

B has 8192 rows
A has ~3 million rows.

isolation level: cursor stability

My assumption is when table A is scanned, all rows whose pks are not in B
will not be locked..so we should have 8192 row locks

Any thoughts based on what I am trying to do above as to why I see a
great number of locks

Thanks,
Sumanth
"Phil Sherman" <ps******@ameritech.net> wrote in message
news:PA******************@newssvr30.news.prodigy.c om...
Db2_evaluncommitted allows a reading cursor to look at an updated row (by
another application) that has not yet committed to determine if it meets
the reading cursor's predicates. This shouldn't have any significant
effect on the number of locks. Any row that meets the predicates during a
scan will be locked. Without more information about the tables, available
indexes and the update statement, there's no way to even guess how many
locks would be taken.

Did you see a difference in the number of locks without it set to YES?
Philip Sherman

Sumanth wrote:

Hi,

We have the following scenario:

1) update table A by joining with a global temporary
table B
2) There is a one to one correspondence between A
and B as the pk's of both tables are the same
3) The max number of rows in B is 8192

My questions:

1) When an update is done with db2_evaluncommitted
set to YES then the max number of locks
attained should be equivalent to 8192 right? if
not why?
I am seeing locks in the order of 50000 - 100,000
and not sure why I see that after setting
db2_evaluncommitted to YES.

2) Also does db2_evaluncommitted property hold good
for global temporary table?

3) Any other property that I can tune/change to
reduce the number of locks?

Thanks in advance for feedback/help.
Thanks,
Sumanth



Apr 3 '06 #7

Ian


OK, so if you have 4 apps doing updates concurrently, seeing 50,000
locks in the database doesn't seem too far out of whack. Again, this
depends a lot on what indexes you have defined on your "A" table.
You should do RUNSTATS on the GTT - this is important and will
help the optimizer make better choices.
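Declared temporary tables live in the SESSION schema and are only visible
to the session that declared them, so the RUNSTATS has to be issued from
that same connection. Roughly (assuming your DB2 level allows RUNSTATS on
declared temporary tables, and with MYSCHEMA standing in for the real
schema of table A):

    RUNSTATS ON TABLE SESSION.B ON ALL COLUMNS
    RUNSTATS ON TABLE MYSCHEMA.A WITH DISTRIBUTION AND INDEXES ALL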

Apr 3 '06 #8
