Bytes | Developer Community
Appropriate Use of READ UNCOMMITTED?

I haven't used the READ UNCOMMITTED transaction isolation level
before, and I was wondering if this would be an appropriate use:

I have an ID table containing ID numbers that are randomly generated
and need to be unique. There is a stored procedure that potentially
generates thousands of these IDs in one execution and inserts them
into the ID table and various other tables. The basic idea is as
follows:

Begin Transaction
While not all IDs generated {
    GenID:
    @NewID = GenerateID()
    If @NewID exists in ID table
        GOTO GenID

    Insert into ID table
    Insert into various other tables
}
Commit Transaction
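For concreteness, the loop above might look roughly like this in T-SQL. This is a sketch only: the table and column names (ids, id) and the NEWID()-based generator are assumptions, since the post does not show the real GenerateID() or schema.

```sql
-- Sketch, not the poster's actual code: "ids", "id", @NumKeysNeeded and
-- the abs(checksum(newid())) generator are all assumed names.
BEGIN TRANSACTION

DECLARE @NewID int, @i int
SET @i = 0
WHILE @i < @NumKeysNeeded
BEGIN
    -- Retry until a value not already in the ID table comes up
    SELECT @NewID = abs(checksum(newid()))
    WHILE EXISTS (SELECT * FROM ids WHERE id = @NewID)
        SELECT @NewID = abs(checksum(newid()))

    INSERT ids (id) VALUES (@NewID)
    -- ... inserts into the various other tables ...
    SET @i = @i + 1
END

COMMIT TRANSACTION
```

It is the EXISTS check in this loop that blocks when another, still-uncommitted transaction has inserted the same value.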

The problem occurs when the stored procedure is being run by more than
one process concurrently. The check to see whether @NewID exists in
the ID table will block, waiting for the transaction in the other
process to commit.

Would this be an appropriate place to use the READ UNCOMMITTED
isolation level to allow different executions of the stored procedure
to see what the others are writing into the ID table before the
transactions finish? I only really care that the IDs generated are
unique; they're not in sequence or anything like that. Has anyone had
experience with anything similar?
Jul 20 '05 #1
Hi

If you read uncommitted, you have to be sure that there are no
consequences for the reading process if the writing transaction rolls
back and the (phantom) data it read was never committed. It is not clear
from your description whether the same key could be generated twice if a
process rolls back.

As both your processes will also be writing simultaneously, they may
well block each other regardless of the reads, so you may want to look
at shortening the transactions.

John

Jul 20 '05 #2
Pham Nguyen (sh*********@yahoo.com) writes:
> The problem occurs when the stored procedure is being run by more than
> one process concurrently. The check to see whether @NewID exists in
> the ID table will block, waiting for the transaction in the other
> process to commit.

It would only block if you generate a duplicate. That is assuming the
id column is indexed, so you don't have to scan the table each time.

A better approach may be to push the key generation out of the
transaction. That presumes that your business requirements can accept
that a key may exist without any rows attached to it.

In fact, I have a procedure which generates a key for a set of work tables,
and that procedure barfs if it's called from within a transaction to
avoid contention problems.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #3
Erland Sommarskog <es****@sommarskog.se> wrote in message news:<Xn*********************@127.0.0.1>...

> It would only block if you generate a duplicate. That is assuming the
> id column is indexed, so you don't have to scan the table each time.

That's true.

> A better approach may be to push the key generation out of the
> transaction. That presumes that your business requirements can accept
> that a key may exist without any rows attached to it.

That was how the procedure was originally written; the new keys were
put into a temporary table as they were generated, and then got copied
over inside of a transaction. The problem we encountered was that the
stored procedure generates potentially thousands of keys in one
execution. With two processes running the stored procedure
concurrently, we saw a lot of duplicate keys, and had to roll back.

> In fact, I have a procedure which generates a key for a set of work
> tables, and that procedure barfs if it's called from within a
> transaction to avoid contention problems.

Jul 20 '05 #4
Pham Nguyen (sh*********@yahoo.com) writes:
> With two processes running the stored procedure concurrently, we saw
> a lot of duplicate keys, and had to roll back.


OK, so the keys have to be written to a table to be persisted. And this
may require a transaction, but the transaction should be committed here.

Here is a procedure that we use:
CREATE PROCEDURE ak_get_aidkey_sp @aidkey int OUTPUT AS

DECLARE @err int

-- Check transaction.
IF @@trancount > 0
BEGIN
    RAISERROR('Internal error: to avoid contention issues, this procedure
must not be called from a transaction in progress.', 16, 1)
    RETURN 55555
END

-- Aidkeys is supposed to be emptied once a day, so the below is likely
-- to generate a unique key at the first shot.
WHILE 1 = 1
BEGIN
    SELECT @aidkey = -1 * abs(checksum(newid()))

    BEGIN TRANSACTION

    IF NOT EXISTS (SELECT * FROM aidkeys (SERIALIZABLE)
                   WHERE aidkey = @aidkey)
    BEGIN
        INSERT aidkeys (aidkey) VALUES (@aidkey)
        SELECT @err = @@error
        IF @err <> 0
        BEGIN
            -- Don't leave the transaction open on the error path.
            ROLLBACK TRANSACTION
            RETURN @err
        END
        BREAK
    END

    COMMIT TRANSACTION
END

COMMIT TRANSACTION
As you see, there is a transaction, but a very short one. Since you
generate many keys, you might need to modify the routine. Particularly,
if you generate 1000 keys in one go, the probability for at least one
collision increases.
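That collision point can be quantified with the standard birthday approximation. A minimal sketch, assuming the key space is the roughly 2^31 distinct values that abs(checksum(newid())) can produce:

```python
import math

def collision_prob(n_keys, key_space=2**31):
    """Birthday approximation: probability that n_keys random draws
    from a space of key_space values contain at least one duplicate."""
    return 1.0 - math.exp(-n_keys * (n_keys - 1) / (2.0 * key_space))

# A single batch of 1,000 keys almost never collides with itself...
print(f"{collision_prob(1_000):.6f}")    # ~0.000233
# ...but once ~100,000 keys have accumulated, a duplicate is very likely.
print(f"{collision_prob(100_000):.4f}")  # ~0.9025
```

So the per-batch retry cost is tiny, but the table must be purged (or the check must stay) as keys accumulate.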

A more brutal solution is to replace your current key column with a
uniqueidentifier and then use newid(). Then you can forget all about
collisions.
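If switching the column type were possible, the uniqueidentifier approach would look something like this (table and column names assumed, not taken from the thread):

```sql
-- Sketch: a GUID key column needs no collision check at all.
CREATE TABLE ids (id uniqueidentifier NOT NULL PRIMARY KEY
                  DEFAULT newid())

-- Each insert gets a fresh, effectively collision-free value.
INSERT ids DEFAULT VALUES
```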

Jul 20 '05 #5
Thanks - I have a few questions, though. What happens if the process
that is calling this procedure to generate keys errors out? Wouldn't
we want to be able to roll back the keys that have been generated?

Also, I'm not sure why the key table is emptied out every day, if the
keys have to be unique across days.

Is using READ UNCOMMITTED to scan the key table while generating keys
really bad? What sorts of problems can crop up?

Jul 20 '05 #6
Pham Nguyen (sh*********@yahoo.com) writes:
> Thanks - I have a few questions, though. What happens if the process
> that is calling this procedure to generate keys errors out? Wouldn't
> we want to be able to roll back the keys that have been generated?

That depends on your application. For our usage, this is perfectly
acceptable. If you want to roll back keys because the transaction bailed
out, you will have to face a contention problem, since you cannot
commit until the keys have been used.

> Also, I'm not sure why the key table is emptied out every day, if the
> keys have to be unique across days.

Sorry, I forgot that our purpose is a bit special. We have a couple of
so-called aid tables. They are permanent temp tables, so to speak. That
is, they hold transient data during some sort of process. They are not
temp tables because of performance problems, or because temp tables are
unsuitable for the process for some other reason.

Our system has a night job, which can assume that when it runs, nothing
else runs in the database. One section in this night job empties all
aid tables (in case there is some data left behind), as well as the
aidkeys table.

Obviously, if your keys are generated for a permanent purpose, you need
to maintain the table with the keys.

> Is using READ UNCOMMITTED to scan the key table while generating keys
> really bad? What sorts of problems can crop up?

Well, one problem is that two processes can get the same key value.
That is, they both attempt the same key value, both find that it's not
in use, both try to insert, and only one will survive.

Have you considered uniqueidentifier? That is probably the easy way out.

Jul 20 '05 #7
Erland Sommarskog <es****@sommarskog.se> wrote in message news:<Xn**********************@127.0.0.1>...

> Well, one problem is that two processes can get the same key value.
> That is, they both attempt the same key value, both find that it's not
> in use, both try to insert, and only one will survive.

Is this prevented from happening with a higher isolation level?

> Have you considered uniqueidentifier? That is probably the easy way out.

Unfortunately, this is an existing application that's being modified,
and the keys are already being used in other systems out there that we
don't have control over.

We may have to just live with the contention problems. The process in
question isn't real-time (it's part of a file upload process that
dumps data into our database), so we may be able to get away with it.
Jul 20 '05 #8
Pham Nguyen (sh*********@yahoo.com) writes:
> Is this prevented from happening with a higher isolation level?

Yes, although for the point where you check whether a certain key value
is available, the default READ COMMITTED won't do. You need SERIALIZABLE
to hold a lock on the value which does not yet exist. Note that you
don't need SERIALIZABLE for the entire transaction, only for the query
where you check whether a key is available.

(Depending on how these keys are assigned, it's possible that lower
levels will do, but as long as I don't know any details, I will have
to assume serializable.)
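Concretely, the level can be raised for just that one query with a table hint rather than a session-wide SET TRANSACTION ISOLATION LEVEL. A sketch with assumed table and column names; the added UPDLOCK is my suggestion rather than something from the thread, so that two concurrent checkers conflict up front instead of deadlocking on shared range locks:

```sql
BEGIN TRANSACTION

-- SERIALIZABLE holds a key-range lock covering the not-yet-existing
-- value until commit; UPDLOCK makes the check take an update lock so a
-- second checker waits instead of deadlocking.
IF NOT EXISTS (SELECT * FROM ids WITH (SERIALIZABLE, UPDLOCK)
               WHERE id = @NewID)
    INSERT ids (id) VALUES (@NewID)

COMMIT TRANSACTION
```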

Jul 20 '05 #9
