Hi,
We're currently building a metadata-driven data warehouse in SQL
Server 2000. We're investigating how to handle tables that receive an
enormous number of updates and inserts, and how to use checkpoints
(under the simple recovery model) and BACKUP LOG (under the full
recovery model) to keep the transaction log under control.
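Concretely, the two statements I mean are something like this (the
database name and backup path are just placeholders):

CHECKPOINT   -- simple recovery model: flushes dirty pages, and the
             -- inactive part of the log becomes reusable

BACKUP LOG MyDWH TO DISK = 'D:\Backup\MyDWH_log.trn'
             -- full recovery model: log space is only freed
             -- by backing up the log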
On several websites people mention the transaction log filling up
because it cannot grow fast enough to keep up with the updates.
Therefore we want to create a script that flushes the dirty pages to
disk. It's not quite clear to me how this works. The questions we
have are:
* How does the process of updating, inserting and deleting work in SQL
Server 2000 with respect to the log cache, log file, buffer cache,
commit, checkpoint, etc.? What happens when?
* As far as I can see now, I'm thinking of processing the data in
chunks of 1,000 records with a CHECKPOINT after each query. SQL Server
runs in autocommit mode by default, so each statement commits on its
own and no explicit COMMIT is needed. Something like the first sketch
after this list?
* How do I create chunks of 1,000 records automatically without adding
an identity column or something like that? Is there something like
SELECT NEXT 1000? (See the second sketch below.)
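
For the second point, here is a minimal sketch of what I have in mind.
The table dbo.Staging, the Processed flag and SomeColumn are just
assumptions to make the example self-contained:

SET ROWCOUNT 1000            -- limit every following statement to 1000 rows

WHILE 1 = 1
BEGIN
    -- each UPDATE is its own autocommit transaction
    UPDATE dbo.Staging
    SET    SomeColumn = 'new value',
           Processed  = 1
    WHERE  Processed = 0

    IF @@ROWCOUNT = 0 BREAK  -- nothing left to update

    CHECKPOINT               -- flush dirty pages; under the simple
                             -- recovery model the inactive part of
                             -- the log can now be reused
END

SET ROWCOUNT 0               -- back to the default (no limit)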
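
And for the third point: as far as I know SQL Server 2000 has no
SELECT NEXT 1000, but SELECT TOP 1000 ... ORDER BY on an existing
unique key gives the same effect, without an identity column. Again,
dbo.Staging, KeyCol and SomeColumn are assumptions; note that TOP in
SQL Server 2000 only accepts a literal, so the chunk size is
hard-coded:

CREATE TABLE #batch (KeyCol int PRIMARY KEY)

DECLARE @last_key int        -- highest key handled so far (NULL = none yet)

WHILE 1 = 1
BEGIN
    TRUNCATE TABLE #batch

    -- grab the keys of the "next 1000" rows
    INSERT #batch (KeyCol)
    SELECT TOP 1000 KeyCol
    FROM   dbo.Staging
    WHERE  @last_key IS NULL OR KeyCol > @last_key
    ORDER  BY KeyCol

    IF @@ROWCOUNT = 0 BREAK  -- no more rows: done

    -- process exactly this chunk, e.g. an UPDATE joined to #batch
    UPDATE s
    SET    s.SomeColumn = 'new value'
    FROM   dbo.Staging AS s
           JOIN #batch  AS b ON b.KeyCol = s.KeyCol

    SELECT @last_key = MAX(KeyCol) FROM #batch

    CHECKPOINT               -- flush after each chunk, as above
END

DROP TABLE #batch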
Greetz,
Hennie