
Table Locks and Preventing Deadlock

I'm exporting data from many Lotus Notes databases to a DB2 database
using LotusScript. The LotusScript agents commit after EACH
update/insert. Nevertheless, I keep getting transaction rollbacks on
deadlock or timeout. When I get a snapshot for locks, I see that one
of the export agents is getting a table lock rather than a row lock.
I'm guessing this is what causes the timeout. Questions: how do I
determine whether it's a deadlock or a timeout? If it's because of the
table locks, what can I do to prevent them?
LOCKLIST = 64000
DLCHKTIME = 15000
MAXLOCKS = 20
LOCKTIMEOUT = 45
Thanks.
Randy
Nov 12 '05 #1
11 Replies


Ian
EoRaptor wrote:
<snip>


You need to look at the reason code from the error:

SQL0911N, reason code 2 = deadlock
SQL0911N, reason code 68 = lock timeout
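If you want to classify these programmatically, one option is to parse the reason code token out of the message text (a Python sketch; the message shown is typical SQL0911N output, and the parser is illustrative, not an official API):

```python
import re

def classify_sql0911(message: str) -> str:
    """Classify an SQL0911N message as a deadlock or a lock timeout.

    SQL0911N reports the reason code as a quoted token, e.g.:
        ... Reason code "2".   -> deadlock
        ... Reason code "68".  -> lock timeout
    """
    m = re.search(r'[Rr]eason code "(\d+)"', message)
    if not m:
        return "unknown"
    return {"2": "deadlock", "68": "lock timeout"}.get(m.group(1), "other")

msg = ('SQL0911N  The current transaction has been rolled back because of '
       'a deadlock or timeout.  Reason code "2".  SQLSTATE=40001')
print(classify_sql0911(msg))  # -> deadlock
```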
There are a number of parameters that you may need to consider:

LOCKTIMEOUT -- specifies the time lock waits will last before SQL0911N
LOCKLIST / MAXLOCKS -- controls the number of locks an application can
take before causing lock escalation
Good luck,
Nov 12 '05 #2

"EoRaptor" <rc*******@patmedia.net> wrote in message
news:fa**************************@posting.google.c om...
<snip>


I am not sure this will help, but try increasing maxlocks to 60. This means
that lock escalation will only occur if the agent is using more than 60% of
the locklist.

However, locks are usually released at each commit, so I don't know why you
are getting lock escalation unless the LotusScript agent specifically asked
for a table lock in the program. You should be able to track lock
escalations in the snapshot if that is occurring.

Doing a commit after each update/insert is not very efficient, so I would
consider exporting the data from Lotus into ASCII/DEL format and using
the DB2 IMPORT command with the INSERT_UPDATE option (adds rows of
imported data to the target table, or updates existing rows of the target
table with matching primary keys) and a COMMITCOUNT of 100. This should
cut the elapsed time at least in half while still providing reasonable
concurrency with other applications.
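The DEL format that IMPORT consumes is just delimited ASCII: comma-separated fields with character data in double quotes. A rough sketch of writing such a file (the column layout is made up for illustration):

```python
import csv

# Hypothetical rows exported from a Notes database: (doc_id, replica_id, name)
records = [
    ("DOC001", "REP01", "Alice"),
    ("DOC002", "REP01", "Bob, Jr."),
]

# QUOTE_ALL mirrors DEL's default: comma-separated, character data in ""
with open("contacts.del", "w", newline="") as f:
    csv.writer(f, quoting=csv.QUOTE_ALL).writerows(records)
```

The resulting file could then be loaded with something along the lines of `db2 import from contacts.del of del commitcount 100 insert_update into <table>` (syntax from memory -- check the IMPORT command reference for your version).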
Nov 12 '05 #3

"Mark A" <ma@switchboard.net> wrote in message news:<qu******************@news.uswest.net>...
<snip>


Thanks. I hesitate to increase MAXLOCKS because I have 4 Lotus Notes
agents running at the same time -- each agent exporting one LN database
at a time. Still, I'll give it a try -- I can always change it back. Also,
it does look like a lock escalation issue: when I look at the snapshot,
I see that one of the agents has gotten a table IX lock. Still, I don't
know why that isn't released after the commit.

As for commit frequency and ASCII import, this is an ongoing process.
Each LN database updates the DB2 tables once per day. The Notes databases
are our production application -- workflow for immigration
applications -- so I'm using agents to automate the process. FWIW,
before I started having these rollbacks, the agents were processing
about 150 Lotus Notes documents per second -- perfectly acceptable.

Again, thanks.
Nov 12 '05 #4

Ian <ia*****@mobileaudio.com> wrote in message news:<40**********@corp.newsgroups.com>...
<snip>


Ah ha! So they are all deadlocks. Using snapshot for locks, I found
that one agent is getting its locks escalated -- an IX table lock. My
LOCKLIST is 96000 and MAXLOCKS is 20. I'll try some fiddling to see
what happens.
Thanks.
Randy
Nov 12 '05 #5

"EoRaptor" <rc*******@patmedia.net> wrote in message
news:fa**************************@posting.google.c om...
<snip>


I don't understand why you would be hesitant to increase MAXLOCKS. The worst
that will happen is that you will prevent escalation to table locks. If you
are running 4 agents at a time against the same table, you don't want
escalation. The only time you might want escalation is when you have a single
job running against the table and you need it to run faster by not having
to obtain (and maybe release) row locks for each row.

150 rows per second is pitiful performance, but if your volume is low, then
it doesn't matter.
Nov 12 '05 #6

"Mark A" <ma@switchboard.net> wrote in message news:<Hf****************@news.uswest.net>...
<snip>


Is there a way to just turn off the bloody lock escalation? Right
now, LOCKLIST is at 128000 and MAXLOCKS is at 60. Nevertheless, if I
run two or more agents (four are supposed to run), one of them ends up
getting a table lock -- at least that's what seems to be happening.
All agents are exactly the same, and nowhere do I ask for a table lock.
Each agent has a maximum of 6 connections at any one time, but it is a
certainty that NO agent will require any record written to, or held
by, any other agent (the combined primary key consists of the Lotus
Notes document ID and the Notes database replica ID, and each agent
works wholly within one database and processes one document at a
time).
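To illustrate the no-overlap reasoning with made-up IDs (a Python sketch):

```python
# Hypothetical document IDs per Notes database (replica); the same doc ID
# can recur across replicas, which is why the replica ID is in the key.
replicas = {
    "REP01": ["DOC001", "DOC002"],
    "REP02": ["DOC001", "DOC003"],
}

def agent_keys(replica_id):
    """Composite primary keys touched by the agent that owns this replica."""
    return {(doc_id, replica_id) for doc_id in replicas[replica_id]}

# Agents own different replicas, so their key sets can never intersect.
print(agent_keys("REP01") & agent_keys("REP02"))  # -> set()
```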
As far as performance goes, the bottleneck is on the Lotus Notes side:
exporting the data to a text file isn't really any faster than the 150
records/sec direct export to DB2.
Thanks for your thoughts.
Randy
Nov 12 '05 #7

rc*******@patmedia.net (EoRaptor) wrote:
The LotusScript agents commit after EACH update/insert.


As Mark was saying, the lock escalation seems to make little sense -
are your insert/update statements doing row-at-a-time processing, or
set-based (e.g. INSERT SELECT)? If the latter, how many rows is your
largest insert or update statement processing?
Jeremy Rickard
Nov 12 '05 #8

rc*******@patmedia.net (EoRaptor) wrote in message news:<fa**************************@posting.google. com>...
<snip>


Your later posts show that you are getting deadlocks.

Why not attack the root cause of the deadlocks?

How are you doing the updates?
With a cursor?
With a searched update? (Check your indexes.)
What are your db2level and operating system? If v7.x, consider DB2_RR_TO_RS=YES.

Are your inserts just plain INSERT ... VALUES?
Or are they INSERT INTO ... SELECT ... FROM ... WHERE?

Are you committing after each insert, *and* after each update?
Nov 12 '05 #9

Ian
EoRaptor wrote:
<snip>


Unfortunately you can't disable lock escalation entirely, without
resorting to locking the entire table for each transaction (this
would negatively affect concurrency :-)

If you have increased locklist (from 64000 to 128000) and maxlocks
(from 20 to 60), and are still having a problem, are you sure that
the changes have gone into effect? Keep in mind that the database
may need to be deactivated/reactivated for any changes to take effect
(I don't think you mentioned if you're on V7.x or V8.1).

Are you _sure_ your app is actually committing after each insert
or update? What is the scope of the update statements? If each
update is touching a HUGE number of rows (with a 500 MB locklist
and 60% available per application, you could lock well over
8,000,000 rows), maybe something in your app is not right.
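That estimate comes from a quick calculation (a rough sketch in Python; the ~36 bytes per additional row lock is the commonly cited figure for 32-bit DB2 of this era, so check your own version's documentation):

```python
def max_row_locks(locklist_pages, maxlocks_pct, page_bytes=4096,
                  bytes_per_lock=36):
    """Rough upper bound on the row locks one application can hold
    before MAXLOCKS triggers escalation to a table lock."""
    return int(locklist_pages * page_bytes * maxlocks_pct / 100
               / bytes_per_lock)

# LOCKLIST = 128000 4 KB pages (500 MB), MAXLOCKS = 60
print(max_row_locks(128000, 60))  # -> 8738133, i.e. well over 8,000,000
```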

You should check the db2diag.log for more information. It will contain
information about lock escalations (like how many row locks were held
when escalation was triggered) as well as deadlocks. This may give you
a clue about an error in the app doing the migration. Trapping the SQL
will also help, as will running the app single-threaded and monitoring
closely to see what's going on.
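A crude way to pull those entries out of db2diag.log is a keyword scan (a Python sketch; the message IDs and wording below are illustrative only, since the exact diagnostic text varies by DB2 version):

```python
# Sample db2diag.log-style entries; the ADM message IDs and wording are
# made up for illustration -- the real text varies by DB2 version.
sample_log = """\
2004-03-01-10.15.22 Instance:db2inst1 Node:000
ADM5502W  The escalation of "60000" locks on table "APP.CONTACTS" ...
2004-03-01-10.16.40 Instance:db2inst1 Node:000
ADM5501I  A deadlock was detected ...
"""

def scan_diag(text):
    """Return diag-log lines that mention lock escalation or deadlocks."""
    keywords = ("escalation", "deadlock")
    return [line for line in text.splitlines()
            if any(k in line.lower() for k in keywords)]

for line in scan_diag(sample_log):
    print(line)
```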

Nov 12 '05 #10

Ian <ia*****@mobileaudio.com> wrote in message news:<40**********@corp.newsgroups.com>...
<snip>

Good questions. Wish I was certain of the answers. I'm using the
LotusScript LSX:LC classes and connectors. Of course, since DB2 and
Lotus come from the same shop, you'd think they'd play well
together...

I am processing single rows. The LotusScript agent code gets a Lotus
Notes document and processes it to insert/update one row into one table.
The code, however, uses the LSX:LC fieldlist and connection classes, so
I don't know what they are REALLY doing under the hood. I do have a
primary key, and the agent explicitly identifies those two columns as
keys. As for actually committing, again, I'm not issuing a direct SQL
statement. I use the connection class's "Action" method, specify
commit, and assume it does so. I guess I should be able to monitor
this some way in DB2, but I've only figured out (so far) how to get
snapshots, and I've yet to catch anything "in the act." Is there a way
to watch what's happening in real time?

I do have to say, however, that I took a look at the snapshot for
database and found 0 lock escalation events. What I can't figure out is
whether that info is cumulative or instantaneous, i.e. 0 because I
didn't catch one then, or cumulative and I really haven't had any. If
the latter, then I'm really at a loss.

Within any one agent, tables are populated serially so that I don't get
foreign key conflicts. Inserting one row at a time into my contact
table shouldn't create deadlocks. However, I only get the problem when
I run multiple agents. I don't see how this can create deadlocks,
though, because each agent works wholly within its own database and the
primary key includes the Notes database replica ID. So, no agent should
be waiting for a commit from another agent, because no agent is
processing the same Notes database as any other agent. I'm beginning to
think that I just wrote bad code :-<

Thanks.
Randy
Nov 12 '05 #11

"EoRaptor" <rc*******@patmedia.net> wrote in message
news:fa**************************@posting.google.c om...
<snip>
Of course since DB2 and Lotus come from the same shop, you'd think
they'd play well together...
Same shop? Same corporation, yes. Same development lab, no. Lotus remains
pretty much intact as a separate subsidiary located in the Boston area.
<snip>


I am not certain what is happening, but if the LotusScript is explicitly
locking the entire table (an SQL LOCK TABLE statement), then a deadlock
could occur.

I would ask Lotus support about your problem.
Nov 12 '05 #12
