Creating a DB by restore - problems

Hi:

I have a file that I have been told is a SQL Server backup from a
server somewhere. The file is about 200 MB in size.

I am trying to create the database on my local server using RESTORE. I
created the backup device, associated it with a backup name etc.,
copied the file into the backup dir.

When I run the RESTORE command, Query Analyzer tells me the database
needs 31 GB of space and the RESTORE aborts. I've tried this several
times, get the same result every time.

Anybody ever seen anything like this? Is there another way to create a
DB in a server using a backup file?

I am running SqlServer 2000, Developer edition on a machine running
Windows Server 2003 OS.

I would appreciate any help/suggestions.

Waz
Jul 23 '05 #1
A restored database is the same size as the original database so you'll need
that amount of free space for the restore. You can determine the amount of
space required with RESTORE FILELISTONLY. For example:

RESTORE FILELISTONLY
FROM DISK='C:\MyBackupFile.bak'

Note that the backup file may be considerably smaller since unused data
pages are not backed up.
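Once you have the logical file names from FILELISTONLY, the full restore can relocate the files with the MOVE option. A sketch (the logical names MyDb_Data/MyDb_Log and the target paths are hypothetical - use the names FILELISTONLY actually reports):

```sql
-- Logical names come from the LogicalName column of RESTORE FILELISTONLY
RESTORE DATABASE MyDb
FROM DISK = 'C:\MyBackupFile.bak'
WITH MOVE 'MyDb_Data' TO 'C:\MSSQL\Data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'C:\MSSQL\Data\MyDb.ldf',
     RECOVERY
```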

--
Hope this helps.

Dan Guzman
SQL Server MVP


Jul 23 '05 #2
On Wed, 25 May 2005 23:55:55 GMT, "Dan Guzman"
<gu******@nospam-online.sbcglobal.net> wrote:
A restored database is the same size as the original database so you'll need
that amount of free space for the restore. You can determine the amount of
space required with RESTORE FILELISTONLY. For example:

RESTORE FILELISTONLY
FROM DISK='C:\MyBackupFile.bak'

Note that the backup file may be considerably smaller since unused data
pages are not backed up.


Dan:

Thanks for your response. RESTORE FILELISTONLY says the data file
needs 1.2 GB but the log file needs 31 GB. How can I restore the
data without the log, or can I?

Thanks, Waz
Jul 23 '05 #3

<Eg********@Last.com> wrote in message
news:mb********************************@4ax.com...
Hi:

I have a file that I have been told is a SQL Server backup from a
server somewhere. The file is about 200MB in size

I am trying to create the database on my local server using RESTORE. I
created the backup device, associated it with a backup name etc.,
copied the file into the backup dir.

When I run the RESTORE command, Query Analyzer tells me the database
needs 31 GB of space and the RESTORE aborts. I've tried this several
times, get the same result every time.

Anybody ever seen anything like this? Is there another way to create a
DB in a server using a backup file?
Sounds like they may have had a huge database that only contained 200 MB of
data.

Other than having 31 GB free, not sure what you can do.

Can you post the results of a RESTORE FILEHEADERSONLY command?


I am running SqlServer 2000, Developer edition on a machine running
Windows Server 2003 OS.

I would appreciate any help/suggestions.

Waz

Jul 23 '05 #4
On Thu, 26 May 2005 01:21:58 GMT, "Greg D. Moore \(Strider\)"
<mo****************@greenms.com> wrote:

Sounds like they may have had a huge database that only contained 200 MB of
data.

Other than having 31 GB free, not sure what you can do.

Can you post the results of a RESTORE FILEHEADERSONLY command?

BackupName:             Scheduled xx Test backup
BackupDescription:      Scheduled xx Test Backup
BackupType:             1
ExpirationDate:         NULL
Compressed:             0
Position:               1
DeviceType:             102
UserName:               TESTSERVER\Administrator
ServerName:             TESTSERVER
DatabaseName:           xx Test
DatabaseVersion:        539
DatabaseCreationDate:   2003-08-06 17:38:53.000
BackupSize:             206685184
FirstLsn:               3597000023310200001
LastLsn:                3597000023311100001
CheckpointLsn:          3597000023310200003
DifferentialBaseLsn:    3596000007489300003
BackupStartDate:        2004-01-12 18:55:39.000
BackupFinishDate:       2004-01-12 18:56:14.000
SortOrder:              52
CodePage:               228
UnicodeLocaleId:        1033
UnicodeComparisonStyle: 196609
CompatibilityLevel:     80
SoftwareVendorId:       4608
SoftwareVersionMajor:   8
SoftwareVersionMinor:   0
SoftwareVersionBuild:   194
MachineName:            TESTSERVER
Flags:                  0
BindingId:              {007ED5AD-104E-452D-xxxx-512A2B3C700A}
RecoveryForkId:         {43356D09-0597-4AD4-xxxx-6D38CAD81F5D}
Collation:              SQL_Latin1_General_CP1_CI_AS

(1 row(s) affected)

Waz


Jul 23 '05 #5

<Eg********@Last.com> wrote in message
news:tj********************************@4ax.com...
On Thu, 26 May 2005 01:21:58 GMT, "Greg D. Moore \(Strider\)"
<mo****************@greenms.com> wrote:

Sounds like they may have had a huge database that only contained 200 MB
of data.
Other than having 31 GB free, not sure what you can do.

Can you post the results of a RESTORE FILEHEADERSONLY command?


Thanks. Unfortunately I meant FILELISTONLY, as Dan had correctly said.

And no, unfortunately I don't know of any way to restore a database w/o the
log file.


Jul 23 '05 #6
Hi:

I still need help with this. I tried the following command in Query
Analyzer:

RESTORE DATABASE x
FROM y
WITH NORECOVERY,
MOVE 'x_dat' TO 'C:\***\mssql\data\x_dat1.mdf';

I make no reference to the log file 'x_log' which is also present in
the backup file, as demonstrated by the results of RESTORE
FILELISTONLY.

It seems content with the data, but then it complains about the
logfile. Of course if I specify a location for the log, it complains
that there is insufficient space on the disk (which is true, it wants
31 GB).

Is there a way to restore only the data portion? Or is there another
way to create a DB from a backup file, when the DB doesn't already
exist on my server?

I very much appreciate anyone's help with this.

Eggs


Jul 23 '05 #7
(Eg********@Last.com) writes:
I still need help with this. I tried the following command in Query
Analyzer:

RESTORE DATABASE x
FROM y
WITH NORECOVERY,
MOVE 'x_dat' TO 'C:\***\mssql\data\x_dat1.mdf';

I make no reference to the log file 'x_log' which is also present in
the backup file, as demonstrated by the results of RESTORE
FILELISTONLY.

It seems content with the data, but then it complains about the
logfile. Of course if I specify a location for the log, it complains
that there is insufficient space on the disk (which is true, it wants
31 GB).

Is there a way to restore only the data portion? Or is there another
way to create a DB from a backup file, when the DB doesn't already
exist on my server?

I very much appreciate anyone's help with this.


By far the easiest solution in this case is to add a new disk to the
machine. Look at the price of a 40 GB disk and compare it with what you
cost your employer/client per hour. You don't have much time to look
for a shortcut solution before your work is more expensive than the disk.

If this happens on a machine which requires special, expensive disks
of some sort, just find another computer where you can restore the backup,
shrink the log, and then move the database to where you want it.
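On the bigger machine, the shrink step might look like this (MyDb and the logical log file name are hypothetical; on SQL 2000 you may need to discard the log contents first before the shrink can reclaim the space):

```sql
USE MyDb
-- SQL 2000 only: throw away the log contents (this breaks the log
-- backup chain, so only do it when you don't need log backups)
BACKUP LOG MyDb WITH TRUNCATE_ONLY
-- Shrink the log file down to about 100 MB;
-- the logical name can be found in sysfiles
DBCC SHRINKFILE ('MyDb_Log', 100)
```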

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #8
On Thu, 26 May 2005 22:24:20 +0000 (UTC), Erland Sommarskog
<es****@sommarskog.se> wrote:

By far the easiest solution in this case is to add a new disk to the
machine. Look at the price of a 40 GB disk and compare it with what you
cost your employer/client per hour. You don't have much time to look
for a shortcut solution before your work is more expensive than the disk.

If this happens on a machine which requires special, expensive disks
of some sort, just find another computer where you can restore the backup,
shrink the log, and then move the database to where you want it.


Thanks very much for the reality check. I used my USB external drive
and was able to do the restore in a straightforward manner. The
initial stumbling point was that I couldn't see the USB drive when I
installed it under Windows 2003 Server, then I realized W2003Svr
requires you to manually assign drive letters, so I did.

Can you (or anyone) suggest general guidelines for improving the
performance and/or efficiency of stored procedures under SqlServer
2000? Again, I very much appreciate any guidance.

Eggs

Jul 23 '05 #9
(Eg********@Last.com) writes:
Can you (or anyone) suggest general guidelines for improving the
performance and/or efficiency of stored procedures under SqlServer
2000? Again, I very much appreciate any guidance.


The question is a bit open-ended. But here are some general points
from the top of my head:

o Try as much as possible to avoid iterative solutions, and use set-
based solutions.

o When using temp tables, create them as the first executable statement
in the procedure. (DECLARE @local is not an executable statement.) If
you create temp tables in the middle of it all, you will get a
recompile.

o Share your favours between temp tables and table variables. Sometimes
one is right, and sometimes the other. My general suggestion is that
you start with a temp table, but if you find that you get performance
problems because of recompiles, switch to a table variable. (Keep in mind
that those recompiles can just as well be life-savers!)

o And while it's sometimes a good idea to keep a temp table/table
variable for storage of intermediate results, it can also sometimes
be more efficient with one big query from hell that does it all in
one statement.

o Don't do this:
CREATE PROCEDURE some_sp ... @startdate = NULL, .... AS
IF @startdate IS NULL
SELECT @startdate = convert(char(8), getdate(), 112)
since SQL Server sniffs the parameter value, it will build the query
plan assuming that @startdate is NULL. It's better to copy the parameter
to a local variable, about whose value SQL Server makes no assumption
at all. Even more efficient is to move the processing to an inner
procedure once all defaults have been filled in.
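A sketch of the local-variable workaround (table and column names are made up for illustration):

```sql
CREATE PROCEDURE some_sp @startdate char(8) = NULL AS
-- Copy the parameter to a local variable; the optimizer makes no
-- assumption about a variable's value, so the plan is not built
-- for the sniffed value @startdate = NULL.
DECLARE @startdate_loc char(8)
SELECT @startdate_loc = coalesce(@startdate,
                                 convert(char(8), getdate(), 112))
SELECT * FROM Orders WHERE OrderDate >= @startdate_loc
```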

o Microsoft recommends that you always use two-part notation, for
instance "SELECT ... FROM dbo.tbl" and not "SELECT ... FROM tbl",
and claims this is more efficient. I claim that if the procedure
is owned by dbo, there should be no difference, and if there is,
that's a bug. One of these days, I will have to benchmark it.
Anyway, since MS recommends it, I thought I should mention it.

If you have particular issues you want to discuss, you are welcome.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #10
On Fri, 27 May 2005 20:46:09 +0000 (UTC), Erland Sommarskog
<es****@sommarskog.se> wrote:
(Eg********@Last.com) writes:
Can you (or anyone) suggest general guidelines for improving the
performance and/or efficiency of stored procedures under SqlServer
2000? Again, I very much appreciate any guidance.


The question is a bit open-ended. But here are some general points
from the top of my head:

o Try as much as possible to avoid iterative solution, and use set-
based solutions.

Please elaborate a little on what exactly you mean by set-based solutions vs iterative.
Also what do you think about cursors?


<snip>

Thanks very much.

EA
Jul 23 '05 #11
(Eg********@Last.com) writes:
o Try as much as possible to avoid iterative solution, and use set-
based solutions.

>>Please elaborate a little on what exactly you mean by set-based solutions vs iterative.
>>Also what do you think about cursors?


A cursor is an iterative solution. Some people hear that cursors are
evil, so they go home and replace the cursor with a WHILE loop where
they do SELECT MIN from a table or some such. That's typically even
worse.

In a set-based solution you work on all data in one statement. I steal an
example from another thread, where a poster had a trigger like this:

CREATE TRIGGER [AI_CATEGORY] ON [dbo].[CATEGORY]
FOR INSERT
AS
DECLARE @TEMP_ID AS INT,
@COUNTER AS INT,
@P_ID AS INT

SELECT @TEMP_ID = CATEGORY_ID,
@COUNTER = 1,
@P_ID = PARENT_CATEGORY_ID
FROM INSERTED
IF @P_ID IS NOT NULL
BEGIN
WHILE @TEMP_ID IS NOT NULL
BEGIN
SELECT @TEMP_ID = @P_ID,
@COUNTER = @COUNTER + 1
FROM INSERTED
END
END
UPDATE CATEGORY
SET DEPTH = @COUNTER

This is an example of an iterative solution. Here is my rewrite of this
into a set-based solution:

CREATE TRIGGER category_tri ON CATEGORY FOR INSERT AS
UPDATE c
SET DEPTH = coalesce(p.DEPTH, 0) + 1
FROM CATEGORY c
JOIN inserted i ON c.CATEGORY_ID = i.CATEGORY_ID
LEFT JOIN CATEGORY p ON i.PARENT_CATEGORY_ID = p.CATEGORY_ID

In this case, we cannot compare performance, as the iterative trigger
was incorrect, but it illustrates the two different approaches.

There are situations where iterative solutions are required, or at least
can be justified. But in many situations, there are magnitudes of
performance to gain by using a set-based solution.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #12
>> Can you (or anyone) suggest general guidelines for improving the performance and/or efficiency of stored procedures under SqlServer 2000? <<

I have some general guidelines in SQL PROGRAMMING STYLE (chapters 8, 9
and 10). Chapter 8 is a review of the principles of software
engineering and points out that they still apply to T-SQL, PL/SQL,
SQL/PSM, Informix/4GL, and all the other proprietary 4GLs. Frankly, I
think that nobody is teaching SE any more, judging from the code I see.
Chapter 9 is "Heuristics" and 10 is "Thinking in SQL".

In the case of T-SQL in particular:

1) One old and still good heuristic was not to write over 50 lines in a
proc. T-SQL is a simple one-pass compiler and was not designed to be
an application development language. Do not stress it.

2) The lower the McCabe number of the proc, the better it will run.
That means try to write a chain of pure SQL statements without WHILEs
or IF-THEN flow control. Again, STYLE has examples of this. You can do
a lot of if-then logic in a CASE expression.
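For example, a cascade of IF-THEN updates (hypothetical table and columns) can often collapse into a single statement:

```sql
-- One UPDATE with a CASE expression instead of three IF branches
UPDATE Orders
SET discount = CASE WHEN qty >= 100 THEN 0.10
                    WHEN qty >= 50  THEN 0.05
                    ELSE 0.00
               END
```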

3) Think in sets and not sequences of process steps. This is vague and
hard to teach. Remember learning recursion? You just have to bang
your head against it until you understand it.

4) Avoid materializing physical storage like the proprietary temp
tables and table variables. Use derived tables and CTEs instead, so the
optimizer can see everything. Most of the time, materializing physical
storage is the result of violating (3); they hold the results of a step
and pass it along to the next step in sequence, like we used to do with
mag tapes in the 1950's.
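A derived table keeps the intermediate result inside one statement, where the optimizer can see it (table and column names invented for the sketch):

```sql
-- Aggregate in a derived table instead of a temp table
SELECT c.name, t.total
FROM Customers c
JOIN (SELECT custid, total = SUM(amount)
      FROM Orders
      GROUP BY custid) t ON t.custid = c.custid
```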

5) Test code for exceptions. Hey, sometimes the heuristics are
wrong. Parameter sniffing and recompiling are particular to T-SQL and
can help or hurt in production environments.

6) If your procedure has to clean up data, then the DDL is probably
missing constraints. Think of the schema as a whole, not as procedures
here and data there. That is how we designed file systems.

Jul 23 '05 #13
--CELKO-- (jc*******@earthlink.net) writes:
1) One old and still good heuristic was not to write over 50 lines in a
proc. T-SQL is a simple one-pass compiler and was not designed to be
an application development language. Do not stress it.
Ehum, a single SQL statement can easily exceed 50 lines - particularly if
one is to go by the advice and not use temp tables.

For the record, the longest stored procedure we have is some 3000 lines
of code. This procedure had a predecessor, which was shorter for the
simple reason that it called plenty of subprocedures. Those subprocedures
are now incorporated in the big one. Why?

Because the original procedure accepted scalar input in terms of
variables. Variables are easy to pass around as parameters. The new
version instead reads its input from a table, and makes extensive
use of table variables - there are 43 of them. Tables are difficult to
pass as parameters.
4) Avoid materializing physical storage like the proprietary temp
tables and table variables. Use derived tables and CTEs instead, so the
optimizer can see everything.


Sometimes this is a good idea. Sometimes it's better to store
intermediate data in a temp table/table variable. This is particularly
true if you need to repeat the same derived table in the query. The
optimizer computes it for each occurrence. The same applies to CTEs
in SQL 2005. But it can also be good to use a temp table for intermediate
storage, since a temp table has statistics, and this can help the
optimizer.

Speaking of proprietary issues, here's another thing. Avoid the ANSI
way:

UPDATE tbl
SET col = (SELECT SUM(col2) FROM tbl2 WHERE tbl2.keycol = tbl.keycol)

Instead use the proprietary MSSQL way:

UPDATE t
SET col = d.sum2
FROM tbl t
JOIN (SELECT keycol, sum2 = SUM(col2)
      FROM tbl2
      GROUP BY keycol) d ON t.keycol = d.keycol

My experience is that this gives better performance.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #14
