Bytes | Software Development & Data Engineering Community

Database Tripled In Size!!

Yikes! My database, which had been consistently 1 gig for a long time, went
from being 1 gig to 3 gigs overnight! Looking at the nightly backups, the
database increased on average about 5-15 MB per day, and was 1.06 GB on the
Thursday night backup. Then, with the Friday night backup, it was 2.95 GB,
and has stayed that way since!

I did a Shrink on the database, but that didn't help the situation.

The only thing I could think it might relate to is the following. I
previously (about a week ago) changed a couple of tables' DateModified field
from smalldatetime to datetime. (I posted about this under a separate thread
here.) For some reason I was getting occasional errors which I believe might
have been related to the new data type. So I decided to change the data
types back to smalldatetime.

I made the table changes Thursday night, right before the backup, but after
the database optimizations. The backup Thursday night still shows the small
database size. But the backup Friday night has the large size.

So this might not be related to the table changes at all. But I know for a
fact that there isn't 3x the data in the database. Somehow the database is
bloated to 3x its size, and it's not from actual data.

Any ideas about what I can do?

Thanks!

Neil
Dec 24 '07 #1
On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <no****@nospam.net> wrote:

Check the Files page of Database Properties. It will show you how much the
db should grow once it fills up: 1 MB by default, but you might have
changed it.

3GB is still very small, and hard disk space is cheap.

-Tom.
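[The same information can be read with T-SQL on the SQL 7/2000-era catalog. A sketch; the status-bit test for percent growth is my reading of the sysfiles documentation, not something from this thread:]

```sql
-- Show each file's size and autogrow setting for the current database.
-- size is in 8 KB pages; growth is pages or a percentage depending on status.
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth,
       CASE WHEN status & 0x100000 <> 0
            THEN 'percent' ELSE '8 KB pages' END AS growth_unit
FROM sysfiles
```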
Dec 24 '07 #2
"Tom van Stiphout" <no*************@cox.net> wrote in message
news:su********************************@4ax.com...
On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <no****@nospam.net> wrote:

Check Database Properties Files Page. It will show you how much the
db should grow once it fills up. 1MB by default, but you might have
changed it..

3GB is still very small, and hard disk space is cheap.

-Tom.
In File Properties (which I've never touched), it's set to Automatically
grow file by 10%, with unrestricted filegrowth.

And I realized that 3 GB isn't that large. Still, the fact remains that it
couldn't have tripled in size overnight through data entry. So there has to
be something else going on there. If there's bloating, then it would be good
to get rid of it. But, as noted, Shrink Database doesn't bring it back down.
So I don't know what's going on.
Dec 24 '07 #3


Here's one thought I had: is it possible to look at a breakdown of the
objects in the database, and how much space each one is taking up? Perhaps
that would help to determine what's going on here.
Dec 24 '07 #4
Neil (no****@nospam.net) writes:
>Also, I ran the query you gave, and all the tables appear to be the
>sizes they should be. At least none seemed very large, large enough to
>account for 2 GB.
In that case sp_spaceused for the database should report a lot of free
space.
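[For reference, both forms of the call; the table name is one of the tables that comes up later in this thread:]

```sql
-- Whole-database totals: size, unallocated, reserved/data/index_size/unused.
EXEC sp_spaceused

-- Space used by a single table.
EXEC sp_spaceused 'CustomerMerges'
```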
>I appreciate you saying not to worry about it. But, still, how could a
>database that has been steady at 1 GB just all of a sudden go from 1 GB to 3
>GB in one fell swoop, for no apparent reason? And, if it did do that (and
>never did anything like that before), wouldn't that mean that performance
>would suffer, if there's 2 GB worth of garbage in there somehow?
Do you run a regular maintenance job on the database that defragments
indexes? It might be that when you changed those columns to datetime from
smalldatetime, the tables had to be rebuilt entirely in new space. That
is, all the tables were moved, and to get space to move them, the database
exploded.
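[On SQL 7/2000, such a nightly index defragmentation typically comes down to something like this per table. A sketch; the table name and fill factor are illustrative:]

```sql
-- Rebuild all indexes on the table with a 90% fill factor.
-- An empty index name ('') means every index on the table.
DBCC DBREINDEX ('CustomerMerges', '', 90)
```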

It is not likely that this will cause any performance problems. Trying
to shrink the database may on the other hand, as shrinking leads to
fragmentation. To truly shrink it, you would have to rebuild from
scripts and reload. Definitely not worth it.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 24 '07 #5

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
>>Also, I ran the query you gave, and all the tables appear to be the
>>sizes they should be. At least none seemed very large, large enough to
>>account for 2 GB.

>In that case sp_spaceused for the database should report a lot of free
>space.
It shows about half a gig of unused space. Here's the printout:

database_size      unallocated space
------------------ ------------------
3355.75 MB         -2571.25 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
6069248 KB         2477728 KB         3066760 KB         524760 KB

>>I appreciate you saying not to worry about it. But, still, how could a
>>database that has been steady at 1 GB just all of a sudden go from 1 GB to 3
>>GB in one fell swoop, for no apparent reason? And, if it did do that (and
>>never did anything like that before), wouldn't that mean that performance
>>would suffer, if there's 2 GB worth of garbage in there somehow?

>Do you run a regular maintenance job on the database that defragments
>indexes?
The maintenance job that is run nightly performs the following:

Optimizations tab:

Reorganize data and index pages
(change free space per page percentage to 10%)

Remove unused space from database files
(shrink database when it grows beyond 50 MB)
(amount of free space to remain after shrink: 10% of the data space)

Integrity tab:

Check database integrity
(include indexes)
(attempt to repair any minor problems)

Thanks!

Neil


Dec 24 '07 #6
Neil (no****@nospam.net) writes:
>It shows about half a gig of unused space. Here's the printout:
>
>database_size      unallocated space
>------------------ ------------------
>3355.75 MB         -2571.25 MB
>
>reserved           data               index_size         unused
>------------------ ------------------ ------------------ ------------------
>6069248 KB         2477728 KB         3066760 KB         524760 KB
The negative number for unallocated space is spooky. Run it again,
but now like this:

sp_spaceused NULL, true

That will make sure the values in sysindexes are updated.

By the way, are you still on SQL 7? I seem to recall that you talked
about moving on to SQL 2005, but did that materialise?
>The maintenance job that is run nightly performs the following:
>
>Optimizations tab:
>
>Reorganize data and index pages
>(change free space per page percentage to 10%)
That's OK.
>Remove unused space from database files
>(shrink database when it grows beyond 50 MB)
>(amount of free space to remain after shrink: 10% of the data space)
But remove this one. Shrinking the database is not a good thing to do
on a regular basis.
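[For the rare one-off case, say after permanently deleting most of a database's data, the manual equivalent of that maintenance option is roughly the following. A sketch; the database name is illustrative, and 10 is the target percent of free space to leave:]

```sql
-- One-off shrink only: scheduled shrinking undoes the nightly reindex
-- and fragments the very indexes it just rebuilt.
DBCC SHRINKDATABASE (mydb, 10)
```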
Dec 24 '07 #7

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn*********************@127.0.0.1...
Neil (no****@nospam.net) writes:
>>It shows about half a gig of unused space. Here's the printout:
>>
>>database_size      unallocated space
>>------------------ ------------------
>>3355.75 MB         -2571.25 MB
>>
>>reserved           data               index_size         unused
>>------------------ ------------------ ------------------ ------------------
>>6069248 KB         2477728 KB         3066760 KB         524760 KB

>The negative number for unallocated space is spooky. Run it again,
>but now like this:
>
>    sp_spaceused NULL, true
>
>That will make sure the values in sysindexes are updated.
OK, here it is:

database_size      unallocated space
------------------ ------------------
3355.75 MB         338.75 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
3089408 KB         2480768 KB         166896 KB          441744 KB
>By the way, are you still on SQL 7? I seem to recall that you talked
>up moving on to SQL 2005, but did that materialise?
It's still upcoming. Hopefully in the next month or two. You think that
might make a difference with this situation?
>>The maintenance job that is run nightly performs the following:
>>
>>Optimizations tab:
>>
>>Reorganize data and index pages
>>(change free space per page percentage to 10%)

>That's OK.

>>Remove unused space from database files
>>(shrink database when it grows beyond 50 MB)
>>(amount of free space to remain after shrink: 10% of the data space)

>But remove this one. Shrinking the database is not a good thing to do
>on regular terms.
OK, removed it. Is that something I should do periodically?

Thanks!

Neil
Dec 25 '07 #8
OK, Erland, here's another strange one.

I took the last backup from before the database ballooned, and I downloaded
it to my development machine, so as to be able to look at the old vs. new
data, to see if I missed anything.

After restoring the backup, the database size was 1.5 GB, though it was
previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB
was used. So I performed a Shrink Database on the old database.

The Shrink Database took a very long time. When it was done, I was told,
"The database has been shrunk to a size of 2 GB"!!! So it grew instead of
shrinking!

Looking at the files, the MDF is 1.26 GB, and the LDF is 0.75 GB, for a
total of 2 GB.

So the database that had been 1 GB grew to 1.5 after backup, and then 2.0
after shrinking.

This is the backup that was performed right after I made the table changes.
Remember that I said that I made the changes *after* the optimizations that
night, but before the backup. The backup file itself is 1.1 GB. It was the
following night's backup that grew to 3 GB.

So, apparently, making the changes, in and of themselves, didn't balloon the
database. But sometime after that -- probably the following night's
optimizations, which included a Shrink Database, caused it to balloon to 3
GB.

A significant difference between my test and the actual database, though, is
that when the old database grew to 2 GB, 0.75 GB of that was from the log
file. With the current 3 GB database, though, the log file is less than a
MB. The 3 GB is all in the data file.

Weird.

Neil
Dec 25 '07 #9
Neil (no****@nospam.net) writes:
>I took the last backup from before the database ballooned, and I downloaded
>it to my development machine, so as to be able to look at the old vs. new
>data, to see if I missed anything.
Wait, didn't I tell you to celebrate Christmas and relax?!?!?
>After restoring the backup, the database size was 1.5 GB, though it was
>previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB
>was used. So I performed a Shrink Database on the old database.
>
>The Shrink Database took a very long time. When it was done, I was told,
>"The database has been shrunk to a size of 2 GB"!!! So it grew instead of
>shrinking!
Haven't I told you to stop shrinking databases! There are very few
situations where shrinking a database is a good idea. But there are many
where it is an outright bad idea.
>So, apparently, making the changes, in and of themselves, didn't balloon
>the database. But sometime after that -- probably the following night's
>optimizations, which included a Shrink Database, caused it to balloon to
>3 GB.
Probably it was not until the reindexing that SQL Server allocated full
space for the new columns.

Judging from the new numbers from sp_spaceused you posted, there is
plenty of free space in the database. Let it stay that way, it's
not going to do you any harm. Although the amount of unused space
is possibly a little worrying, since that indicates quite an amount
of fragmentation. But if you stop shrinking your database, the next
reindexing job should take care of that. (Shrinking introduces
fragmentation, another reason it's bad.)
>By the way, are you still on SQL 7? I seem to recall that you talked
up moving on to SQL 2005, but did that materialise?

>It's still upcoming. Hopefully in the next month or two. You think that
>might make a difference with this situation?
The database is not going to shrink if you upgrade to SQL 2005 if
that is what you think. Let me put it this way: you have recently
become the proud owner of a 3GB database, congratulations!

Dec 25 '07 #10

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
>>I took the last backup from before the database ballooned, and I
>>downloaded it to my development machine, so as to be able to look at the
>>old vs. new data, to see if I missed anything.

>Wait, didn't I tell you to celebrate Christmas and relax?!?!?
What makes you think I'm not relaxing?.... :-)

>>So, apparently, making the changes, in and of themselves, didn't balloon
>>the database. But sometime after that -- probably the following night's
>>optimizations, which included a Shrink Database, caused it to balloon to
>>3 GB.

>Probably it was not until the reindexing that SQL Server allocated full
>space for the new columns.
>
>Judging from the new numbers from sp_spaceused you posted, there is
>plenty of free space in the database. Let it stay that way, it's
>not going to do you any harm. Although the amount of unused space
>is possibly a little worrying, since that indicates quite an amount
>of fragmentation. But if you stop shrinking your database, the next
>reindexing job should take care of that. (Shrinking introduces
>fragmentation, another reason it's bad.)
I went to run another sp_spaceused, to see what it looked like, since I
turned shrinking off a couple of days ago. But then I saw that shrinking ran
last night. Couldn't figure that out, since I had turned it off. Turns out I
had _two_ shrink jobs running each night! -- one with the regular
optimizations, and one as a standalone job! Oy! So I turned the second one
off, and we'll see what happens.

BTW, why do they even have this shrinking thing if it's not needed? I mean,
I don't doubt you when you say it does more harm than good; but why is it
there in the first place if it just causes fragmentation?

>>By the way, are you still on SQL 7? I seem to recall that you talked
>>up moving on to SQL 2005, but did that materialise?

>>It's still upcoming. Hopefully in the next month or two. You think that
>>might make a difference with this situation?

>The database is not going to shrink if you upgrade to SQL 2005 if
>that is what you think.
I meant either that: a) perhaps SQL 2005 might have some superior tools for
dealing with this; and/or b) when the database is converted to SQL 2005 the
objects might be rewritten in such a way that it would get rid of the
bloating. But I guess no to (a) or (b).

>Let me put it this way: you have recently
>become the proud owner of a 3GB database, congratulations!
Yay me! :-) OK, that's fine, I can live with that, as long as it's not
problematic re. performance. But there's something I'm still not
understanding. Forgive me for being dense, but here goes.

The second sp_spaceused showed:

database_size      unallocated space
------------------ ------------------
3320.06 MB         297.91 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
3094688 KB         2483672 KB         169712 KB          441304 KB
The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and
when added to the index_size and unused, accounts for the additional 2 GB).
Where is this 1.4 GB coming from? I mean, there's only at most 1 GB of
actual data in the database. So what is the other 1.4 GB? Again, forgive me
for being dense here if you've already explained it.

Thanks!

Neil
Dec 26 '07 #11
Neil (no****@nospam.net) writes:
>BTW, why do they even have this shrinking thing if it's not needed? I
>mean, I don't doubt you when you say it does more harm than good; but
>why is it there in the first place if it just causes fragmentation?
Good question. Well, for the shrinking facility as such there are of
course situations where it's useful. You have just erased five years of
data, and you expect the business to be a lot calmer for the next year,
so your database is now oversized. Or you took a copy of the production
database to get a development database to play with, but you only want
a fraction of the data, so you delete most and then shrink. This
becomes even more critical if you create one database per developer.

Now, as for why they put it as an option for maintenance jobs that's a
really good question. Unfortunately, there are plenty of things in
the tools that violate best practices for the server.
>I meant either that: a) perhaps SQL 2005 might have some superior tools
>for dealing with this; and/or b) when the database is converted to SQL
>2005 the objects might be rewritten in such a way that it would get rid
>of the bloating. But I guess no to (a) or (b).
The tools are about the same, but the behaviour may be a little different.
>The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and
>when added to the index_size and unused, accounts for the additional 2
>GB). Where is this 1.4 GB coming from? I mean, there's only at most 1 GB
>of actual data in the database. So what is the other 1.4 GB? Again,
>forgive me for being dense here if you've already explained it.
The figure of 2.4 GB comes from used pages. If only 100 bytes are actually
used on a page, that still counts as 8192 bytes. That is, this is another
sign of fragmentation.

You could run DBCC SHOWCONTIG on some big tables to have a look at
fragmentation. But if you reindex every night, and don't shrink anything,
the situation should improve.
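[On SQL 7 the command takes the table's object id (SQL 2000 also accepts a name). A sketch; the table name is one from later in this thread:]

```sql
-- Report fragmentation for one table.
DECLARE @id int
SELECT @id = object_id('CustomerMerges')
DBCC SHOWCONTIG (@id)
```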

By the way, does this query return any rows:

SELECT * FROM sysindexes WHERE indid = 0

Or, put another way: do you have tables without a clustered index?

Dec 26 '07 #12
>You could run DBCC SHOWCONTIG on some big tables to have a look at
>fragmentation. But if you reindex every night, and don't shrink anything,
>the situation should improve.
I ran it on the five largest tables in the db (based on the bytes used, not
the pages). The Extent Scan Fragmentation for those five tables was: 86%,
70%, 97%, 98%, 74%.

Another thing that might be a factor here is that I noticed that disk space
is very low on the drive that contains the database. I have been keeping 4
weeks worth of backups for this database. When the backup ballooned to 3 gb
instead of 1 gb, it started eating up more drive space. Last night's backup
(after, I think, 5 of these 3 gb backups were on the drive) wouldn't go
through, as there was only 2 gb free on the drive. I deleted some backups,
and changed the storage to 1 week, instead of 4, and the backup went
through. Still, I wonder how much the limited disk space is affecting the
current situation.

>By the way, does this query return any rows:
>
>SELECT * FROM sysindexes WHERE indid = 0
>
>Or, put another way: do you have tables without a clustered index?
Yes, apparently so. The query returned 51 rows.

Thanks,

Neil
Dec 27 '07 #13
Neil (no****@nospam.net) writes:
>>You could run DBCC SHOWCONTIG on some big tables to have a look at
>>fragmentation. But if you reindex every night, and don't shrink anything,
>>the situation should improve.

>I ran it on the five largest tables in the db (based on the bytes used,
>not the pages). The Extent Scan Fragmentation for those five tables was:
>86%, 70%, 97%, 98%, 74%.
That's quite high. What values do you have for Scan Density and Avg. Page
Density (full)? Or post the full output for one of these tables.
>Another thing that might be a factor here is that I noticed that disk
>space is very low on the drive that contains the database. I have been
>keeping 4 weeks worth of backups for this database. When the backup
>ballooned to 3 gb instead of 1 gb, it started eating up more drive
>space. Last night's backup (after, I think, 5 of these 3 gb backups were
>on the drive) wouldn't go through, as there was only 2 gb free on the
>drive. I deleted some backups, and changed the storage to 1 week,
>instead of 4, and the backup went through. Still, I wonder how much the
>limited disk space is affecting the current situation.
The limited disk space is not going to affect your database, but
obviously the expanded database will affect the available disk
space. Then again, you are not storing the backups on the same drive
are you? (Well, if you get them on tape or some other media as well,
I guess it's OK.)
>>Or put in another way: do you have tables without a clustered index?

>Yes, apparently so. The query returned 51 rows.
And that included your big tables?

I think we're on to something here. Your "optimization job" is in vain;
you cannot defragment a heap (a heap is a table without a clustered index).

Adding clustered indexes on all tables takes a little thought to
make the best choice for the index. Then again, if you only have
one index on a table, that is probably the best choice.

But you could also just add a clustered index on any column and
then drop it. That will defragment the table.
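A sketch of that trick on one of the heaps above; the column and index names are illustrative:

```sql
-- Building the clustered index rewrites the heap's pages compactly;
-- dropping it afterwards leaves the data defragmented.
CREATE CLUSTERED INDEX ix_tmp_defrag ON CustActivity (CustID)
DROP INDEX CustActivity.ix_tmp_defrag
```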

While I know a few who disagree, I think it's best practice to
have a clustered index on all tables.
Dec 27 '07 #14

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
>>>You could run DBCC SHOWCONTIG on some big tables to have a look at
>>>fragmentation. But if you reindex every night, and don't shrink anything,
>>>the situation should improve.

>>I ran it on the five largest tables in the db (based on the bytes used,
>>not the pages). The Extent Scan Fragmentation for those five tables was:
>>86%, 70%, 97%, 98%, 74%.

>That's quite high. What values do you have for Scan Density and Avg. Page
>Density (full)? Or post the full output for one of these tables.
Here's the full output. Will reply to the rest later (am out the door...).
Thanks!

DBCC SHOWCONTIG scanning 'CustomerMerges' table...
Table: 'CustomerMerges' (709733731); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 725
- Extents Scanned..............................: 96
- Extent Switches..............................: 95
- Avg. Pages per Extent........................: 7.6
- Scan Density [Best Count:Actual Count].......: 94.79% [91:96]
- Extent Scan Fragmentation ...................: 86.46%
- Avg. Bytes Free per Page.....................: 404.7
- Avg. Page Density (full).....................: 95.00%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFilesProcessed' table...
Table: 'ImageFilesProcessed' (1992550332); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1695
- Extents Scanned..............................: 214
- Extent Switches..............................: 213
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 99.07% [212:214]
- Extent Scan Fragmentation ...................: 70.09%
- Avg. Bytes Free per Page.....................: 375.4
- Avg. Page Density (full).....................: 95.36%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustActivity' table...
Table: 'CustActivity' (1006730739); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 270
- Extents Scanned..............................: 38
- Extent Switches..............................: 37
- Avg. Pages per Extent........................: 7.1
- Scan Density [Best Count:Actual Count].......: 89.47% [34:38]
- Extent Scan Fragmentation ...................: 97.37%
- Avg. Bytes Free per Page.....................: 392.2
- Avg. Page Density (full).....................: 95.15%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustomerEvents' table...
Table: 'CustomerEvents' (1029734871); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 320
- Extents Scanned..............................: 46
- Extent Switches..............................: 45
- Avg. Pages per Extent........................: 7.0
- Scan Density [Best Count:Actual Count].......: 86.96% [40:46]
- Extent Scan Fragmentation ...................: 97.83%
- Avg. Bytes Free per Page.....................: 376.9
- Avg. Page Density (full).....................: 95.34%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFileErrors' table...
Table: 'ImageFileErrors' (69067482); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1936
- Extents Scanned..............................: 243
- Extent Switches..............................: 242
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.59% [242:243]
- Extent Scan Fragmentation ...................: 74.49%
- Avg. Bytes Free per Page.....................: 344.1
- Avg. Page Density (full).....................: 95.75%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Dec 27 '07 #15
Neil (no****@nospam.net) writes:
>Here's the full output. Will reply to the rest later (am out the door...).
OK, the numbers look good, aside from the extent scan fragmentation, that is.
At least it's not that your pages are half-full or something like that.
Dec 28 '07 #16
Neil (no****@nospam.net) writes:
>"PRIMARY KEY constraints create clustered indexes automatically if no
>clustered index already exists on the table and a nonclustered index is
>not specified when you create the PRIMARY KEY constraint."
>
>I think I assumed that the clustered index would automatically be
>created on the pk index.
That's correct. If you say

   CREATE TABLE alfons(a int NOT NULL,
                       b int NOT NULL,
                       c int NOT NULL,
                       CONSTRAINT pk_alfons PRIMARY KEY(a, b, c))

The primary will indeed be clustered. You need to include NONCLUSTERED
to have it nonclustered.
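For contrast, a minimal sketch of the nonclustered variant (same made-up table as above):

```sql
-- The PK is explicitly NONCLUSTERED, so the table stays a heap
-- unless a clustered index is created separately.
CREATE TABLE alfons(a int NOT NULL,
                    b int NOT NULL,
                    c int NOT NULL,
                    CONSTRAINT pk_alfons PRIMARY KEY NONCLUSTERED (a, b, c))
```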

Possibly if you created the tables and keys through some table designer,
you may have gotten non-clustered PKs.
For the five large tables, the before->after on Reserved was as follows:

CustomerMerges: 226661->959
ImageFilesProcessed: 72333->1727
CustActivity: 38034->303
CustomerEvents: 28018->367
ImageFileErrors: 24391->2046

So, some very dramatic changes there!
Indeed, but...
I then ran the "sp_spaceused NULL, true" command again. But the results
were pretty much the same as they had been previously. I then ran the
optimization job and then reran the "sp_spaceused NULL, true" command,
and the results were still pretty much the same.
One would have expected the numbers here to fall as well.

And there is another thing that is fishy as well. The "Pages Scanned"
in the old output was nowhere close to the numbers for Reserved. Now,
Reserved also includes non-clustered indexes, but I can't see how a
non-clustered index could be that badly fragmented when you run
defragmentation regularly.

So I am a little uncertain that you really gained that much. Maybe
something is flaky with SQL 7. But if you really gained that much space
by adding clustering indexes - then I would say that you really have a
case for shrinking. But that would be if sp_spaceused reported over
1GB of space unallocated.

I was wondering if there's a way to have the query: SELECT * FROM
sysindexes WHERE indid = 0 list the actual table names instead of the
IDs of the tables without clustered indexes?
object_name(id). But the name of the "index" is the table name for
indid = 0.
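Putting that together, a quick sketch of the lookup (assumes the SQL 7/2000-era sysindexes table):

```sql
-- List heap tables (no clustered index) by name rather than by id.
SELECT object_name(id) AS TableName
FROM   sysindexes
WHERE  indid = 0
ORDER  BY TableName
```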
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 28 '07 #17
Possibly if you created the tables and keys through some table designer,
you may have gotten non-clustered PKs.
I believe that's what happened. When I originally converted the old MDB file
to SQL Server, DTS created clustered PKs, as well as timestamp fields. The
tables I created since have been created through an Access ADP file, which
apparently doesn't create clustered PKs (nor timestamp fields).
>I then ran the "sp_spaceused NULL, true" command again. But the results
were pretty much the same as they had been previously. I then ran the
optimization job and then reran the "sp_spaceused NULL, true" command,
and the results were still pretty much the same.

One would have expected the numbers here to fall as well.
There were still 37 tables that didn't have clustered indexes. Most of them
were tiny lookup tables. Nevertheless, I added clustered indexes to all
tables, except for seven that were created by SQL Server that had no indexes
at all. These were:

sysfiles1
sysindexkeys
sysforeignkeys
sysmembers
sysprotects
sysfiles
Trace_References

After creating all the clustered indexes, I noticed that the tables taking up
the most space were the ones I was expecting to (it had surprised me that the
previous "top five" were at the top of the list). So things now look much more
like what I would have expected, in terms of which tables take up the most
space.

Nevertheless, running "sp_spaceused NULL, true" still didn't reveal any
changes. Also ran optimization and reran it again, but still no change.

So this gets back to my original question. If there is about 1 GB of data in
the db, and the database is showing 2.5 GB, then where's that other 1.5 GB
coming from? Fragmentation seems to have been reduced greatly. So what is
that other 1.5 GB that is being reported? This is very strange.

And there is another thing that is fishy as well. The "Pages Scanned"
in the old output was nowhere close to the numbers for Reserved. Now,
Reserved also includes non-clustered indexes, but I can't see how a
non-clustered index could be that badly fragmented when you run
defragmentation regularly.

So I am a little uncertain that you really gained that much. Maybe
something is flaky with SQL 7. But if you really gained that much space
by adding clustering indexes - then I would say that you really have a
case for shrinking. But that would be if sp_spaceused reported over
1GB of space unallocated.
Yeah, there was still only about 300 MB unallocated. But I ran the shrink
database anyway. No change with sp_spaceused.

So, I don't know. This is very strange -- especially the way it just happened
overnight like that.

Thanks,

Neil
Dec 29 '07 #18
Neil (no****@nospam.net) writes:
So this gets back to my original question. If there is about 1 GB of
data in the db; and the database is showing 2.5 GB of data; then where's
that other 1.5 GB of data coming from? Fragmentation seems to have been
reduced greatly. So what is that other 1.5 GB that is being reported?
This is very strange.
I'm afraid that by now I have exhausted all ideas. With the drastic
reduction of CustomerMerges, one would expect to see a big increase in
unallocated space. 226661 pages is 1.8 GB. I suspect that the number
226661 is bogus somehow.

The only thing I can suggest is that you create a new database from scripts,
and then load it with data from the current database. It may not be worth
the hassle though.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 29 '07 #19

"Roy Harvey (SQL Server MVP)" <ro********@snet.netwrote in message
news:ij********************************@4ax.com...
Since I see mention in an earlier part of the thread that you are
running on SQL Server 7, here are a couple of system stored procedures
I used in 7 to see how space was being used. Add them to the master
database and you can use them from any database like other system
procs. Note the double underscore in the names differentiates them
from the system procs supplied by Microsoft.

They are really just hacks of sp_spaceused. sp__spaceall shows all
tables together, not one at a time. sp__spaceavail shows the total
space but breaks it down into data and log, which is the only way it
means anything.

Before running them always execute DBCC UPDATEUSAGE(0).
Hi, Roy. Thanks for these! I installed them in Master, and sp__spaceall ran
fine. However sp__spaceavail gave the error:

Server: Msg 208, Level 16, State 1, Procedure sp__spaceavail, Line 18
Invalid object name 'master..sysusages'.

Thanks,

Neil
Dec 29 '07 #20
On 29.12.2007 10:53, Erland Sommarskog wrote:
Neil (no****@nospam.net) writes:
>So this gets back to my original question. If there is about 1 GB of
data in the db; and the database is showing 2.5 GB of data; then where's
that other 1.5 GB of data coming from? Fragmentation seems to have been
reduced greatly. So what is that other 1.5 GB that is being reported?
This is very strange.

I'm afraid that by now I have exhausted all ideas. With the drastic
reduction of CustomerMerges, one would expect to see a big increase in
unallocated space. 226661 pages is 1.8 GB. I suspect that the number
226661 is bogus somehow.

The only thing I can suggest is that you create a new database from scripts,
and the load it with data from the current database. It may not be worth
the hassle though.
But if Neil switches SQL Server versions during the process (he is still
on 7, with two newer versions available) it might still be worthwhile
(proper licenses being available, of course). Or would you rather disregard
that option for compatibility reasons?

Kind regards

robert
Dec 29 '07 #21
Robert Klemme (sh*********@googlemail.com) writes:
But if Neil switches SQL Server versions during the process (he is still
on 7 with two newer versions) it might still be worthwhile (proper
licenses available of course). Or would you rather disregard that
option for compatibility reasons?
There is some point in doing the migration by rebuild and scripting,
as there are some funky behaviours with migrated databases. However, these
are not very serious. (The only one I recall is that
sys.dm_os_buffer_descriptors may give bogus values.) So it is still
questionable whether it's worth the hassle.
Funny enough, though, when we have upgraded our customers from SQL 2000 to
SQL 2005 we have taken this path at most sites. But we have our own special
reason: we wanted to move away from ANSI_NULLS off, and the only supported
way to do this is to recreate all tables, as there is an ANSI_NULLS flag
setting per table. The only way to change it directly is to update a status
bit in sysobjects, which is not recommended. (But we still did this at some
sites where the database size precluded a reload.)
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 29 '07 #22
Neil (no****@nospam.net) writes:
Hi, Roy. Thanks for these! I installed them in Master, and sp__spaceall
ran fine. However sp__spaceavail gave the error:

Server: Msg 208, Level 16, State 1, Procedure sp__spaceavail, Line 18
Invalid object name 'master..sysusages'.
Ah, then it's even older than SQL 7. sysusages existed in SQL 6.5 and
older versions.


--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 29 '07 #23
On Sat, 29 Dec 2007 17:04:33 GMT, "Neil" <no****@nospam.netwrote:
>Hi, Roy. Thanks for these! I installed them in Master, and sp__spaceall ran
fine. However sp__spaceavail gave the error:

Server: Msg 208, Level 16, State 1, Procedure sp__spaceavail, Line 18
Invalid object name 'master..sysusages'.
Sorry about that, I pulled it from my SQL 7.0 collection of tricks,
which apparently got mixed up with something else. I don't have 7.0
(or 6.5) running to fix it. 8-(

Roy Harvey
Beacon Falls, CT
Dec 29 '07 #24
OK, I looked at the data a bit that has been changed. The database backup
the night of 12/20 was 1.2 GB. The database backup the night of 12/21 was
3.3 GB. So whatever happened had to have happened on 12/21.

Nevertheless, looking at ALL changes from 12/21 through the current
database, roughly a week's worth of changes -- all records that have been
added or modified in that table since 12/21, the entire amount of data in all
of those records is about 1.5 MB -- about 1,000 times short of what it needs
to be.

So there's no way that data added or modified caused this change. For some
reason this table went from 1/2 GB to 2 GB overnight, with hardly any
changes to the data. This is just very strange.

Neil
Dec 29 '07 #25
Neil (no****@nospam.net) writes:
OK, I looked at the data a bit that has been changed. The database backup
the night of 12/20 was 1.2 GB. The database backup the night of 12/21 was
3.3 GB. So whatever happened had to have happened on 12/21.

Nevertheless, looking at ALL changes from 12/21 through the current
database, roughly a week's worth of changes -- all records that have
been added or modified in that table since 12/21, the entire amount of data
in all of those records is about 1.5 MB -- about 1,000 times short of
what it needs to be.

So there's no way that data added or modified caused this change. For some
reason this table went from 1/2 GB to 2 GB overnight, with hardly any
changes to the data. This is just very strange.
And

SELECT SUM(datalength(textcol1)) + SUM(datalength(textcol2)) + ...
FROM Description_Docs

returns what?

I don't know if you can get fragmentation in text columns, but I guess
you can. But it would hardly happen out of the blue. Unless, hum, you
passed some magic threshold? Nah...

Anyway, you could try running DBCC CHECKTABLE on the table, or DBCC CHECKDB
on the database to see if there is any corruption.

You could also test the effects of DBCC CLEANTABLE.

Yet an idea is to copy the table to another database, and see how
much size it takes up there.
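A minimal sketch of that last test; "ScratchDb" is a made-up name for the empty database, and the source database name here is assumed:

```sql
-- Copy the suspect table (data only, no indexes or constraints)
-- into a scratch database, then compare the space it reports there.
SELECT * INTO ScratchDb.dbo.Descriptions_Docs
FROM   SourceDb.dbo.Descriptions_Docs

EXEC ScratchDb..sp_spaceused 'Descriptions_Docs', true
```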
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 29 '07 #26

"Roy Harvey (SQL Server MVP)" <ro********@snet.netwrote in message
news:sr********************************@4ax.com...
On Sat, 29 Dec 2007 17:04:33 GMT, "Neil" <no****@nospam.netwrote:
>>Hi, Roy. Thanks for these! I installed them in Master, and sp__spaceall
ran
fine. However sp__spaceavail gave the error:

Server: Msg 208, Level 16, State 1, Procedure sp__spaceavail, Line 18
Invalid object name 'master..sysusages'.

Sorry about that, I pulled it from by SQL 7.0 collection of tricks,
which apparently got mixed up with something else. I don't have 7.0
(or 6.5) running to fix it. 8-(

Roy Harvey
Beacon Falls, CT
That's cool. I think the other one was the one I needed anyway. Per my other
posts here, it seems to have pinpointed the problem. So, thanks! As to why
it happened, that's another mystery. But at least it's clear now where the
problem is. So thanks again.

Neil
Dec 30 '07 #27
Neil (no****@nospam.net) writes:
OK, getting the sum of all text column lengths resulted in: 374,457,113 --
which approximates the half GB size before it ballooned.

I created a new database and transferred the 2 GB Descriptions_Docs
table to that db. In the new db, according to Roy's sp__spaceall, the
size of the table is 420 MB. That seems about what it should be, given
the above count, as well as its size on 12/21, before the ballooning.
OK, so there is obviously a lot of air in that table. No idea where it
came from.
I ran DBCC CHECKTABLE, and it returned:

DBCC results for 'Descriptions_Docs'.
There are 66570 rows in 1171 pages for object 'Descriptions_Docs'.
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Apparently that does not include the text column.
I looked in BOL for CLEANTABLE, but it wasn't there. So I'm thinking it's
not in SQL 7?
It seems so. I will have to admit that I only looked in Books Online
for SQL 2000 and SQL 2005.
In any case, given the above, there clearly seems to be a problem with
the table, where it's showing 2 GB worth of data, but there's only in
actuality half a GB. Very strange.
I don't know how entwined this table is with other tables, but there
certainly is a case for

o Rename it.
o Create it again.
o Move the data to the new table.
o Recreate triggers, indexes and constraints.
o Move referencing constraints.
o Drop the old table.
o Shrink the database.
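A rough sketch of those steps in T-SQL; the column list is abbreviated, the database name is assumed, and the real script would also recreate the triggers, indexes, and constraints:

```sql
-- 1. Move the bloated table aside.
EXEC sp_rename 'Descriptions_Docs', 'Descriptions_Docs_old'

-- 2. Recreate the table from your schema script (columns abbreviated).
CREATE TABLE Descriptions_Docs(DocID int NOT NULL PRIMARY KEY /* , ... */)

-- 3. Copy the rows across.
INSERT INTO Descriptions_Docs(DocID /* , ... */)
SELECT DocID /* , ... */ FROM Descriptions_Docs_old

-- 6. Drop the old copy and reclaim the space.
DROP TABLE Descriptions_Docs_old
DBCC SHRINKDATABASE(SourceDb)
```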

I would suggest that you first do this in a test environment to see that
it works well. And, oh, I would also suggest that you download SQL Compare
from Red Gate (www.red-gate.com) to compare the schema of the database
after the change with a reference copy. SQL Compare has a price tag, but
there is a preview period you could benefit from. Caveat: I don't know if
they support SQL 7.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 30 '07 #28
OK, so there is obviously a lot of air in that table. No idea where it
came from.
Yeah, it seems that we'll never get to the bottom of that.
o Rename it.
o Create it again.
o Move the data to the new table.
o Recreate triggers, indexes and constraints.
o Move referencing constraints.
o Drop the old table.
o Shrink the database.
I think that's a good idea. What about the idea of just exporting it to a
blank db, like I did today, with the triggers, indexes, and constraints in
place, and then reimporting it with those? Then only the referencing
constraints would have to be recreated.
I would suggest that you first do this in a test environment to see that
it works well. And, oh, I would also suggest that you download SQL Compare
from Red Gate (www.red-gate.com) to compare the schema of the database
after the change with a reference copy. SQL Compare has a price tag, but
there is a preview period you could benefit from. Caveat: I don't know if
they support SQL 7.
I'll look into that. Thanks!

Neil
Dec 31 '07 #29
Neil (no****@nospam.net) writes:
I think that's a good idea. What about the idea of just exporting it to a
blank db, like I did today, with the triggers, indexes, and constraints in
place, and then reimporting it with those? Then only the referencing
constraints would have to be recreated.
Whichever way you fancy, as long as you don't wreck anything.

But if you do it that way, you don't even have to drop the referencing
constraints, you only need to disable them:

ALTER TABLE tbl NOCHECK CONSTRAINT fk_this_that

to enable them, this is the command:

ALTER TABLE tbl WITH CHECK CHECK CONSTRAINT fk_this_that

A somewhat bizarre syntax, but without the WITH CHECK part, SQL Server
will just enable the constraints without checking them.

And whichever path you go, use SQL Compare as I suggested to verify that
you have all in place!

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Dec 31 '07 #30

"Erland Sommarskog" <es****@sommarskog.sewrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
>I think that's a good idea. What about the idea of just exporting it to a
blank db, like I did today, with the triggers, indexes, and constraints
in
place, and then reimporting it with those? Then only the referencing
constraints would have to be recreated.

Whichever way you fancy, as long as you don't wreck anything.

But if you do it that way, you don't even have to drop the referencing
constraints, you only need to disable them:

ALTER TABLE tbl NOCHECK CONSTRAINT fk_this_that

to enable them, this is the command:

ALTER TABLE tbl WITH CHECK CHECK CONSTRAINT fk_this_that

A somewhat bizarre syntax, but without the WITH CHECK part, SQL Server
will just enable the constraints without checking them.

And whichever path you go, use SQL Compare as I suggested to verify that
you have all in place!
Well, all's well that ends well. I transferred the data to a new table,
etc., etc., and did a shrink on the database. Database went down to 1.2 GB.
Been running for a couple of days, and everything seems fine.

So, thanks so much for your help! Strange that this happened in the first
place; but at least it was able to be dealt with. And thanks for the
exchange. Was beneficial on many levels.

Thanks again, and have a great new year!

Neil
Jan 3 '08 #31
