Peter CCH (pe************@gmail.com) writes:
Since it joins both of the tables first and only then updates the values,
if the table has 500,000 records, will the SQL statement above take a
very long time? (Assume it updates all 500,000 records.)
Updating 500,000 rows is usually not a snap. Exactly how long it takes
depends on your hardware, but also on the definition of the column. If
the column is fixed-length, all updates can be done in place. But if the
column is a varchar or varbinary column, many rows will grow out of their
current pages, so there will be a lot of page splits and data moved
around. Again, exactly how massive this effect will be depends on the
data. If the average length is three characters, you may be in luck and
most pages will have space to spare. If the average length is 100 bytes,
this is less likely.
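
To make the discussion concrete, here is a hypothetical statement of the
kind the question seems to describe; the actual statement is not quoted
here, and the table and column names (Orders, OrderStaging, Comment,
OrderID) are made up for illustration:

   -- Hypothetical joined UPDATE; names are invented for illustration.
   UPDATE O
   SET    O.Comment = S.Comment    -- varchar target, so rows may grow
   FROM   Orders O
   JOIN   OrderStaging S ON S.OrderID = O.OrderID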
Another thing that matters is whether the data and log files have the
space. Updating half a million rows will take a toll on the log file.
Exactly how big that toll is depends on the width of the table: if the
average row length is 10 bytes, you need a lot less log than if the
average row length is 360 bytes. If the log does not have the space, it
will have to autogrow, and autogrow does not come for free. If you are
running with simple recovery, you can hold down the log explosion by
doing the update in batches of 50,000 or so, as in the sketch below.
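
A minimal sketch of such a batched loop, reusing the made-up tables from
above and assuming OrderStaging.Comment is never NULL (otherwise the loop
would not terminate):

   SET ROWCOUNT 50000               -- cap each UPDATE at 50,000 rows
   DECLARE @rows int
   SET @rows = 1
   WHILE @rows > 0
   BEGIN
      UPDATE O
      SET    O.Comment = S.Comment
      FROM   Orders O
      JOIN   OrderStaging S ON S.OrderID = O.OrderID
      WHERE  O.Comment <> S.Comment -- skip rows already updated
         OR  O.Comment IS NULL
      SET @rows = @@rowcount
   END
   SET ROWCOUNT 0                   -- always restore the default

SET ROWCOUNT works on SQL 2000; on SQL 2005 you would use UPDATE TOP
(50000) instead. With simple recovery, the log space from one batch can
be reused after a checkpoint, so the log does not have to hold the whole
half-million-row update at once.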
But in short, the only way to get an answer to a question like this one
is to benchmark.
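
For the benchmark itself, one simple way to time a candidate statement,
preferably against a restored copy of the database:

   DECLARE @start datetime
   SET @start = getdate()
   -- ... the UPDATE under test goes here ...
   SELECT datediff(ms, @start, getdate()) AS elapsed_ms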
--
Erland Sommarskog, SQL Server MVP,
es****@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp