Bytes | Developer Community

quicker way to create indexes

Hi,

I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. The insert is fine, runs in 2 minutes. The problem is that
when I create the indexes on the table, it is taking 15-20 minutes.
There is one clustered index and 11 non-clustered. This is a lookup
table that takes many different parameters, so it really needs the
indexes for the user interface to run efficiently. However, the
database owners aren't keen on a job taking 20 minutes to run every
night.

Any ideas?

Jul 18 '06 #1
shelleybobelly (sh************@yahoo.com) writes:
I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. [...] when I create the indexes on the table, it is taking
15-20 minutes. There is one clustered index and 11 non-clustered. [...]
Without knowing much about the data, it's difficult to tell. But I find it
difficult to believe that data changes that much in a lookup table. Then
again, I would not expect a look-up table to have three million rows.
Anyway, rather than dropping and recreating, maybe it's more effective
to load into a staging table, and then update changed rows, insert new
ones, and delete old ones.
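That staging approach can be sketched in T-SQL roughly as below. The table and column names (dbo.Lookup, dbo.Lookup_Staging, FileID, Col1, Col2) are made up for illustration; substitute the real lookup table and its key columns:

```sql
-- Assumes dbo.Lookup is the live table and dbo.Lookup_Staging has just
-- been bulk-loaded with tonight's full extract. FileID stands in for
-- the real key column(s).
BEGIN TRANSACTION;

-- Update rows whose non-key columns changed.
UPDATE L
SET    L.Col1 = S.Col1,
       L.Col2 = S.Col2
FROM   dbo.Lookup AS L
JOIN   dbo.Lookup_Staging AS S ON S.FileID = L.FileID
WHERE  L.Col1 <> S.Col1 OR L.Col2 <> S.Col2;

-- Insert rows that are new tonight.
INSERT dbo.Lookup (FileID, Col1, Col2)
SELECT S.FileID, S.Col1, S.Col2
FROM   dbo.Lookup_Staging AS S
WHERE  NOT EXISTS (SELECT * FROM dbo.Lookup AS L
                   WHERE L.FileID = S.FileID);

-- Delete rows that disappeared from the source.
DELETE L
FROM   dbo.Lookup AS L
WHERE  NOT EXISTS (SELECT * FROM dbo.Lookup_Staging AS S
                   WHERE S.FileID = L.FileID);

COMMIT TRANSACTION;
```

Since only a small fraction of three million rows changes each night, the indexes stay in place and absorb the changes incrementally instead of being rebuilt from scratch.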
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Jul 18 '06 #2
Right, I agree. If I run it the other way (just update the big lookup
table from the separate tables where there were changes), it runs much
quicker, only deleting/adding a thousand or so records a day. The
owners of this DB are doing things differently, but I guess I'll just
have to 'educate' them on this one. Then I'll create another job to
re-index this table about once a week or so to keep it from
fragmenting.
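A weekly maintenance job along those lines might look like this (SQL Server 2000 syntax; the table, database, and index names are illustrative):

```sql
-- Rebuild all indexes on the table with the default fill factor.
-- The table is offline to writers while this runs.
DBCC DBREINDEX ('dbo.Lookup', '', 0);

-- Alternatively, defragment a single index online if the table must
-- stay available (arguments: database, table, index):
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.Lookup', 'IX_Lookup_FileID');
```

DBCC DBREINDEX gives a complete rebuild; DBCC INDEXDEFRAG is gentler but only compacts and reorders existing pages.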

BTW, this is for a Legal department that has to look up files that can
be up to 20 years old, but they don't have all the information. That's
why the 3 million records in a lookup table. It is a subset of a HUGE
archive table that contains 255 columns, too.

Thanks Erland, keep up your postings.
Jul 18 '06 #3
Not starting over every night would certainly be the best approach,
but anyway....

One thing you might try is to leave the clustered index on the table,
and order the data by the clustering key before loading. It is
possible that the longer time to load with the index in place will be
more than offset by not having to create the clustered index afterward.
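A sketch of that loading pattern, with illustrative names (FileID is assumed to be the clustering key, and IX_Lookup_Name stands in for each of the 11 nonclustered indexes):

```sql
-- Keep the clustered index; drop only the nonclustered ones before
-- the load (repeat for the other nonclustered indexes).
DROP INDEX dbo.Lookup.IX_Lookup_Name;

-- Load in clustering-key order so rows arrive in page order and the
-- clustered index never has to be built after the fact.
INSERT dbo.Lookup (FileID, Col1, Col2)
SELECT FileID, Col1, Col2
FROM   dbo.ArchiveSource
ORDER BY FileID;          -- matches the clustered key

-- Recreate the nonclustered indexes afterwards.
CREATE INDEX IX_Lookup_Name ON dbo.Lookup (Col1);
```

The insert itself will be somewhat slower with the clustered index in place, but sorted input keeps page splits to a minimum.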

Roy Harvey
Beacon Falls, CT

On 18 Jul 2006 13:58:55 -0700, "shelleybobelly"
<sh************@yahoo.com> wrote:
>Hi,
>
>I have a new job. It needs to drop and re-create (by insert) a table
>every night. [...] The problem is that when I create the indexes on
>the table, it is taking 15-20 minutes. [...]
Jul 19 '06 #4

"shelleybobelly" <sh************@yahoo.comwrote in message
news:11**********************@h48g2000cwc.googlegr oups.com...
Right, I agree. If I run it the other way (just update the big lookup
table from the separate tables where there were changes), it runs much
quicker, only deleting/adding a thousand or so records a day. [...]
Two suggestions:

1) Move the indexes to an NDF file on a separate set of physical disks.
This may help if you're disk I/O bound at all.

2) You may want to consider using full-text indexing for some of this.
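Suggestion 1 can be sketched as follows; the database name, file path, size, and index name are all illustrative:

```sql
-- Create a secondary filegroup backed by an NDF file on separate
-- physical disks, then place nonclustered indexes on it.
ALTER DATABASE MyDatabase ADD FILEGROUP IndexFG;

ALTER DATABASE MyDatabase
ADD FILE (NAME = MyDatabase_Idx,
          FILENAME = 'E:\SQLData\MyDatabase_Idx.ndf',
          SIZE = 500MB)
TO FILEGROUP IndexFG;

-- Indexes created with an ON clause land in that filegroup, so index
-- writes go to different spindles than the table's data pages.
CREATE INDEX IX_Lookup_Name ON dbo.Lookup (Col1) ON IndexFG;
```

This only pays off if the bottleneck really is disk I/O; if the build is CPU- or memory-bound it will change little.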

Jul 19 '06 #5
On 18 Jul 2006 13:58:55 -0700, "shelleybobelly"
<sh************@yahoo.com> wrote:
>the database owners aren't keen on a job taking 20 minutes to run
>every night.
Is it automated? Are there users doing regular work at night? If
the answers are "yes" and "no" respectively, then why do they care
how long it runs?
Jul 21 '06 #6

This discussion thread is closed.