Bytes IT Community

Updating a large table: set @@rowcount versus explicit transactions

When I need to perform an update against a multi-million row table I
typically use SET ROWCOUNT to limit the batch size and reduce locking.
(Note: @@ROWCOUNT is the function that returns the number of rows
affected; SET ROWCOUNT is the statement that caps it.)

e.g.

set rowcount 1000
while exists (select * from myTable where col2 is null)
update myTable
set col2 = col1 + 'blahblah'
where col2 is null

However, my boss' script does something like this. I think it works OK
but it seems overly complicated to me. Any thoughts?

while exists (....)
begin tran

insert into #table
select ...

update myTable
set ...
from myTable join #table ...

-- @numberOfRows is a counter variable, tracking the number of rows
-- updated since the last commit

if @numberOfRows > 1000
begin
commit
begin tran
end

end
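For reference, a runnable sketch of that batching-with-explicit-transactions pattern might look like the following. Table and column names are taken from the first example above; since the temp-table SELECT was elided in the paraphrase, this sketch uses the direct UPDATE instead. The counter reset and the final COMMIT are my assumptions — the paraphrase omits them, but without a final COMMIT the last partial batch would be left in an open transaction.

```sql
SET ROWCOUNT 1000

DECLARE @numberOfRows int
SET @numberOfRows = 0

BEGIN TRAN

WHILE EXISTS (SELECT * FROM myTable WHERE col2 IS NULL)
BEGIN
    UPDATE myTable
    SET col2 = col1 + 'blahblah'
    WHERE col2 IS NULL

    -- accumulate rows touched since the last commit
    SET @numberOfRows = @numberOfRows + @@ROWCOUNT

    IF @numberOfRows > 1000
    BEGIN
        COMMIT
        BEGIN TRAN
        SET @numberOfRows = 0   -- assumed: reset the counter per batch
    END
END

COMMIT            -- assumed: flush the final partial batch

SET ROWCOUNT 0    -- restore the default so later statements are unaffected
```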

Jul 23 '05 #1
2 Replies


I would prefer:

SET ROWCOUNT 1000

WHILE 1=1
BEGIN
UPDATE myTable
SET col2 = col1 + 'blahblah'
WHERE col2 IS NULL
IF @@ROWCOUNT = 0 BREAK
END

Although a batch size of 1000 seems a bit small for a multi-million row table.

I don't see any advantages of the temp table version.

--
David Portas
SQL Server MVP
--
Jul 23 '05 #2

Cool. That's even more efficient. Thanks David.

Jul 23 '05 #3
