Bytes | Software Development & Data Engineering Community

READ UNCOMMITTED data

1. In this topic
http://groups.google.com/group/comp....21516252b65e7c,
someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED at the beginning of a number of stored procedures and then
SET TRANSACTION ISOLATION LEVEL READ COMMITTED at the end to minimize
the disruption to the application."
My question is: do you really need to set READ COMMITTED at the end of
the stored procedure? What scope does that command affect?

2. Could someone give a real-world example where I should never read
uncommitted data? I'm having trouble understanding when I should and
when I should not use it.

Apr 23 '07 #1


"Igor" <je*******@gmail.com> wrote in message
news:11**********************@b58g2000hsg.googlegroups.com...
> 1. In this topic
> http://groups.google.com/group/comp....21516252b65e7c,
> someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
> UNCOMMITTED at the beginning of a number of stored procedures and, then
> SET TRANSACTION ISOLATION LEVEL READ COMMITTED at the end to minimize
> the disruption to the application."
> My question is, do you really need to set READ COMMITTED at the end of
> stored procedure? What scope does that command affect?

I believe the READ COMMITTED is pointless there.
> 2. Could someone write some real world example where i should never
> read uncommitted data... i'm having trouble understanding when i
> should and when i should not use it.

It depends. If you don't mind showing possibly inaccurate information
faster, then READ UNCOMMITTED may be for you.

In some cases, this is fine. HOWEVER, in many others it is not just
questionable, it's downright wrong.

For example, if you're writing, say, a banking application, the user
would NEVER want to see "wrong data" (for example, partially posted
transactions, phantom rows, etc.).

--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
Apr 23 '07 #2
Igor (je*******@gmail.com) writes:
> someone wrote: "I've implemented SET TRANSACTION ISOLATION LEVEL READ
> UNCOMMITTED at the beginning of a number of stored procedures and, then
> SET TRANSACTION ISOLATION LEVEL READ COMMITTED at the end to minimize
> the disruption to the application."
> My question is, do you really need to set READ COMMITTED at the end of
> stored procedure? What scope does that command affect?
No, you don't need it. A SET command only affects the current scope, and
any scope it invokes. When the scope terminates, the SET option reverts
to its previous setting. (With one exception, SET CONTEXT_INFO.)
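
A quick way to see this for yourself (a minimal sketch; the procedure
name is made up for illustration):

CREATE PROCEDURE dbo.demo_isolation AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- statements here run at READ UNCOMMITTED
    DBCC USEROPTIONS  -- inspect the session settings inside the scope
END
GO
EXEC dbo.demo_isolation
DBCC USEROPTIONS  -- after the procedure returns, the level has reverted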
> 2. Could someone write some real world example where i should never
> read uncommitted data... i'm having trouble understanding when i
> should and when i should not use it.
The basic rule is: if you don't understand the consequences of it, don't
use it.

Then there are cases where it is even more wrong than in others. Say
that you write a query which returns the balance on a number of
accounts, and the result of this query is to be used to reconcile the
books. If you read with NOLOCK, you may come in the middle of a
transaction that moves money from one account to another. You could then
display the same amount in two accounts - or find it missing from both.

NOLOCK is more acceptable if you need to read large amounts of historic
data from a table that also has active insertion of current data, and
there is no requirement for complete accuracy. In that case, the reading
query could take table locks which would prevent the insertions from
taking place; NOLOCK avoids this.
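
As a sketch of that kind of usage (the table and columns are invented
for illustration):

-- Reporting over a large, actively-inserted log table. NOLOCK keeps the
-- report from taking shared locks that could escalate and block the
-- inserters, at the price of possibly reading uncommitted rows.
SELECT order_date, COUNT(*) AS orders
FROM   dbo.order_log WITH (NOLOCK)
WHERE  order_date < '20070101'
GROUP  BY order_date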

Never use NOLOCK or READ UNCOMMITTED as a matter of routine!

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Apr 23 '07 #3
Thanks for your replies.

I currently use READ UNCOMMITTED only for data that is not important,
like getting lists of referrers or today's top users. But I'm having
problems with my sp for user login: in combination with some other
stored procedures I get deadlocks, and I don't really know of a way to
resolve those deadlocks other than using NOLOCK on the users and
users_online tables.

So generally, for the kind of situation where you have to update and
calculate very important data, like some money transfers, I should use
REPEATABLE READ or SERIALIZABLE.
One more question: why would one use SERIALIZABLE over REPEATABLE READ?
What are the benefits of SERIALIZABLE?

Apr 24 '07 #4
Igor (je*******@gmail.com) writes:
> I currently use READ UNCOMMITTED only for data that is not important,
> like getting lists of referrers or today's top users. But i'm having
> problems with my sp for user login, in combination with some other
> stored procedures i get deadlocks, don't really know of a way to
> resolve those deadlocks other than using NOLOCK on users and
> users_online tables.
Often, but not always, deadlocks can be avoided with proper indexes and
proper access order.

If the deadlock is between user-interface activities and background tasks,
the command SET DEADLOCK_PRIORITY LOW can be an option, to avoid users
getting deadlock errors slapped in their face.
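
For instance, a background job can declare itself the preferred deadlock
victim (a sketch; the table is hypothetical):

SET DEADLOCK_PRIORITY LOW
BEGIN TRANSACTION
    -- If this session deadlocks with an interactive user, SQL Server
    -- will pick this session as the victim, not the user's.
    UPDATE dbo.daily_stats SET hits = hits + 1 WHERE stat_id = 1
COMMIT TRANSACTION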

There are also some "classic" mistakes you can make if you use the
HOLDLOCK hint.

If you are on SQL 2005, snapshot isolation, which comes in two flavours,
may be an alternative to READ UNCOMMITTED. With snapshot isolation, you
read consistent committed data - but that data may be stale.
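
Enabling and using it might look like this (the database and table names
are placeholders):

-- One flavour: snapshot isolation proper, opted into per transaction.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON

SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
    -- Reads see a consistent view as of the start of the transaction,
    -- without blocking writers - but the data may be slightly stale.
    SELECT balance FROM dbo.accounts WHERE account_id = 42
COMMIT TRANSACTION

-- The other flavour: READ_COMMITTED_SNAPSHOT changes the behaviour of
-- plain READ COMMITTED for the whole database, statement by statement.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON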
> So generally for that kind of situation where you have to update and
> calculate very important data like some money transfers, i should put
> REPEATABLE READ or SERIALIZABLE.
> One more question, why would one use SERIALIZABLE over REPEATABLE
> READ, what are the benefits of SERIALIZABLE?
I would rather ask the question the other way around: in most situations
where READ COMMITTED is not enough, SERIALIZABLE is what you need. That
is, you cannot accept that the rows you have read change, and you cannot
accept that new rows creep in. But if you have read the balance on an
account and don't want it to change before you update it, REPEATABLE
READ is enough.

But SERIALIZABLE is not good for throughput, and it increases the risk
of deadlocks.

You should also be aware of the UPDLOCK hint. UPDLOCK takes an update
lock, which does not block ordinary readers. But only one process can
hold an update lock on a resource, so if two processes try to get an
UPDLOCK, one will be blocked at that point. So, going back to the update
of the account balance, UPDLOCK is what you should use, not REPEATABLE
READ. If you use REPEATABLE READ only, you get deadlocks.

You can combine UPDLOCK with SERIALIZABLE, but it seems that SQL Server
sometimes takes the range locks in a different order, leading to
deadlocks anyway.
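
Applied to the account-balance example, the pattern might look like this
(the table and columns are invented):

DECLARE @balance money

BEGIN TRANSACTION
    -- UPDLOCK: ordinary readers are not blocked, but no second process
    -- can take an update lock on this row, so two withdrawals cannot
    -- both read the old balance.
    SELECT @balance = balance
    FROM   dbo.accounts WITH (UPDLOCK)
    WHERE  account_id = 42

    IF @balance >= 100
        UPDATE dbo.accounts
        SET    balance = balance - 100
        WHERE  account_id = 42
COMMIT TRANSACTION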

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Apr 24 '07 #5
On Apr 24, 11:56 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
> [full reply quoted above; snipped]
Thank you, you have been a great help, as always.

Can you give me an example of how "deadlocks can be avoided with proper
indexes, and also proper access order", or point me to any websites that
explain this, if it's not too much trouble?

Unfortunately my hosting is still on MSSQL 2000.

Apr 26 '07 #6
Igor (je*******@gmail.com) writes:
> Can you give me an example for this "deadlocks can be avoided with
> proper indexes, and also proper access order." or if you have any
> websites that explain this if it's not to much trouble.
What I had in mind when I talked about indexing is that if your indexing
is not the best, you can get table scans, and this increases the risk of
deadlocks for at least two reasons: 1) more rows have to be locked (and
thus it is more likely that two processes want to lock the same resource
at the same time), and 2) transactions run longer (so locks are held for
a longer time).

Here is a very simple example where you get a deadlock because of access
order. First run:

CREATE TABLE tbl1 (a int NOT NULL)
CREATE TABLE tbl2 (a int NOT NULL)
go
INSERT tbl1 (a) VALUES(1)
INSERT tbl2 (a) VALUES(1)
go

Then run in one window:

BEGIN TRANSACTION
UPDATE tbl1 SET a = 123
WAITFOR DELAY '00:00:05'
UPDATE tbl2 SET a = 123
COMMIT TRANSACTION

And in another:

BEGIN TRANSACTION
UPDATE tbl2 SET a = 123
WAITFOR DELAY '00:00:05'
UPDATE tbl1 SET a = 123
COMMIT TRANSACTION

If you want to read about troubleshooting deadlocks, there is a section
in Books Online. It is far from comprehensive, but I doubt that any text
on deadlocking is. Some deadlocks are very easy to avoid, whereas others
can only be remedied with a serious redesign of the application.

In any basic text, you will find the advice of always accessing tables
in the same order. But in a complex application, this is far from a
trivial matter.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Apr 26 '07 #7
On Apr 26, 11:31 pm, Erland Sommarskog <esq...@sommarskog.se> wrote:
> [full reply quoted above; snipped]
I finally solved the deadlock that was occurring in the user login sp.
I had something like this:

begin tran
if exists (select * from users_online where user_id=@id)
    update users_online set data=... where user_id=@id
else
    insert into users_online (user_id, data) values (@id, ...)
commit

so I put an xlock on the select statement, and now I don't get any
deadlocks:

begin tran
if exists (select * from users_online with (xlock) where user_id=@id)
    update users_online set data=... where user_id=@id
else
    insert into users_online (user_id, data) values (@id, ...)
commit
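
An alternative often suggested for this kind of upsert is UPDLOCK
together with HOLDLOCK, which serialises the existence check without
taking an exclusive lock up front; a sketch, assuming the same shape as
above with a hypothetical @data parameter:

begin tran
-- updlock keeps a second session from passing the existence check at
-- the same time; holdlock (serializable) also locks the range, so a
-- row cannot be inserted between the check and the insert.
if exists (select * from users_online with (updlock, holdlock)
           where user_id=@id)
    update users_online set data=@data where user_id=@id
else
    insert into users_online (user_id, data) values (@id, @data)
commit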

May 5 '07 #8
