
Migrating large amounts of data from SQL Server 7.0 to 2000

I'm in the process of migrating a lot of data (millions of rows, 4GB+
of data) from an older SQL Server 7.0 database to a new SQL Server
2000 machine.

Time is not of the essence; my main concern during the migration is
that when I copy in the new data, the new database isn't paralyzed by
the amount of bulk copying being done. For this reason, I'm splitting
the data into one-month chunks (the data's all timestamped and goes
back about 3 years), exporting as CSV, compressing the files, and then
importing them on the target server. The reason I'm using CSV is
because we may want to also copy this data to other non-SQL Server
systems later, and CSV is pretty universal. I'm also copying in this
format because the target server is remotely hosted and is not
accessible by any method except FTP and Remote Desktop -- no
database-to-database copying allowed for security reasons.
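
For what it's worth, the export side of each chunk looks roughly like
this -- the table, column, and server names are placeholders, not our
real schema:

bcp "SELECT * FROM MyDB.dbo.MyBigTable WHERE EventDate >= '20040101' AND EventDate < '20040201'" queryout jan2004.csv -c -t, -S OLDSERVER -T

(-c writes plain character data, -t, makes the fields comma-separated,
and -T uses a trusted connection. One caveat with plain comma-delimited
output: it breaks if the data itself contains commas.)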

My questions:

1) Given all of this, what would be the least intrusive way to copy
over all this data? The target server has to remain running and be
relatively uninterrupted. One of the issues that goes hand-in-hand
with this is indexes: should I copy over all the data first and then
create indexes, or allow SQL Server to rebuild indexes as I go? (A
rough sketch of the "load first, index later" option follows question
2 below.)

2) Another option is to make a SQL Server backup of the database from
the old server, upload it, mount it, and then copy over the data. I'm
worried that this would slow operations down to a crawl, though, which
is why I'm taking the piecemeal approach.
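
To make question 1 concrete, the "load first, index later" option I'm
picturing looks something like this; again, the table, index, and file
names are placeholders:

-- Drop the nonclustered index so the bulk loads don't have to maintain it
DROP INDEX MyBigTable.IX_MyBigTable_EventDate
GO
-- Load each monthly CSV chunk (repeat once per file)
BULK INSERT MyDB.dbo.MyBigTable
   FROM 'D:\import\jan2004.csv'
   WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)
GO
-- Recreate the index once, after all the chunks are in
CREATE INDEX IX_MyBigTable_EventDate ON MyBigTable (EventDate)
GO

(TABLOCK speeds up the load but locks the whole table for the duration
of each batch, which cuts against my "don't paralyze the server" goal.)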

Comments, suggestions, raw fish?
Jul 20 '05 #1
2 Replies


se****@thegline.com (Serdar Yegulalp) wrote in message news:<84**************************@posting.google.com>...
[original post quoted in full; snipped]

I never had any trouble using DTS for this, but if you are not allowed
to use it, dumping the old database and restoring the data into a new
database on the 2000 machine is fine. Whether transactions are slowed
down depends on the server's performance; on decent hardware there
should be no trouble at all. The advantage is that it really saves
time. If you are going to use a dump file, remember to check for
orphaned user names...
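
For example -- the database name, logical file names, paths, and user
name below are all placeholders:

RESTORE DATABASE MyDB
   FROM DISK = 'D:\upload\MyDB.bak'
   WITH MOVE 'MyDB_Data' TO 'D:\MSSQL\Data\MyDB.mdf',
        MOVE 'MyDB_Log' TO 'D:\MSSQL\Data\MyDB_log.ldf'
GO
-- List database users whose server logins are missing or mismatched
EXEC sp_change_users_login 'Report'
GO
-- Re-link one user to the login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'someuser'
GO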

Steve
Jul 20 '05 #2

Serdar Yegulalp (se****@thegline.com) writes:
> I'm in the process of migrating a lot of data (millions of rows, 4GB+
> of data) from an older SQL Server 7.0 database to a new SQL Server
> 2000 machine.
From this, it sounds as if you are simply upgrading. In that case, the
simplest approach would be either to back up the database on SQL 7.0
and restore it on SQL 2000, or to use sp_detach_db and sp_attach_db.
So I assume from your questions that you are merging the SQL 7.0 data
into an existing SQL 2000 database.
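
If it really is just an upgrade, detach/attach comes down to two calls;
the database name and file paths here are placeholders:

-- On the SQL 7.0 machine: detach, then copy the files over
EXEC sp_detach_db 'MyDB'
-- On the SQL 2000 machine, after copying MyDB.mdf and MyDB_log.ldf:
EXEC sp_attach_db 'MyDB',
     'D:\MSSQL\Data\MyDB.mdf',
     'D:\MSSQL\Data\MyDB_log.ldf'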
> Time is not of the essence; my main concern during the migration is
> that when I copy in the new data, the new database isn't paralyzed by
> the amount of bulk copying being done. For this reason, I'm splitting
> the data into one-month chunks (the data's all timestamped and goes
> back about 3 years), exporting as CSV, compressing the files, and then
> importing them on the target server. The reason I'm using CSV is
> because we may want to also copy this data to other non-SQL Server
> systems later, and CSV is pretty universal. I'm also copying in this
> format because the target server is remotely hosted and is not
> accessible by any method except FTP and Remote Desktop -- no
> database-to-database copying allowed for security reasons.
Chopping the data into many files seems like a difficult path to take.
You get more administration, and you run the risk of somehow losing a
file in transport somewhere. If you want to bulk load in pieces, you
can still do that, since bcp has options to copy only some of the rows
in the host file.
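
For instance, the -F (first row) and -L (last row) switches load one
slice of a single host file at a time; table, file, and server names
here are placeholders:

bcp MyDB.dbo.MyBigTable in alldata.csv -c -t, -F 1 -L 500000 -S TARGETSERVER -T
bcp MyDB.dbo.MyBigTable in alldata.csv -c -t, -F 500001 -L 1000000 -S TARGETSERVER -T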

The main advantage of extracting the data to files is that you don't
have to copy indexes, metadata, and all the rest over the wire. Then
again, if the connection is reliable and you upload a backup of the
database before you go home, it should be on the target server by the
morning after.
> 1) Given all of this, what would be the least intrusive way to copy
> over all this data? The target server has to remain running and be
> relatively uninterrupted. One of the issues that goes hand-in-hand
> with this is indexes: should I copy over all the data first and then
> create indexes, or allow SQL Server to rebuild indexes as I go?
>
> 2) Another option is to make a SQL Server backup of the database from
> the old server, upload it, mount it, and then copy over the data. I'm
> worried that this would slow operations down to a crawl, though, which
> is why I'm taking the piecemeal approach.


There is some information missing here. Do you load the data into new
tables or existing ones? If you load into existing tables, it depends
on how those tables are accessed. The main problem could be that users
are locked out of the tables while they are being loaded. Consistency
is another matter: what about constraints?
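
If constraints are in play, one option is to disable checking during
the load and re-validate afterwards, along these lines (MyBigTable is
a placeholder):

-- Stop checking FOREIGN KEY and CHECK constraints during the load
ALTER TABLE MyBigTable NOCHECK CONSTRAINT ALL
-- ... bulk load here ...
-- WITH CHECK makes SQL Server re-validate the existing rows as well
ALTER TABLE MyBigTable WITH CHECK CHECK CONSTRAINT ALL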

In the end, if it is critical that the impact be as low as possible,
the best approach is to take a backup of the target database and
benchmark the various techniques against it.
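
That is, something like this around each trial run (paths are
placeholders):

BACKUP DATABASE TargetDB TO DISK = 'D:\backup\TargetDB_baseline.bak'
-- ... try one loading technique and measure its impact ...
RESTORE DATABASE TargetDB FROM DISK = 'D:\backup\TargetDB_baseline.bak' WITH REPLACE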

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #3
