Hi,
I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. The insert is fine, runs in 2 minutes. The problem is that
when I create the indexes on the table, it is taking 15-20 minutes.
There is one clustered index and 11 non-clustered. This is a lookup
table that takes many different parameters, so it really needs the
indexes for the user interface to run efficiently. However, the
database owners aren't keen on a job taking 20 minutes to run every
night.
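For reference, the nightly job is roughly shaped like this (all object and column names below are invented for illustration; the real table has many more columns):

-- drop and reload the lookup table from the big archive table
DROP TABLE dbo.FileLookup;

SELECT FileId, ClientName, MatterNumber       -- plus the other lookup columns
INTO   dbo.FileLookup
FROM   dbo.ArchiveMaster;

-- the slow part: one clustered and 11 non-clustered indexes
CREATE CLUSTERED INDEX IX_FileLookup_FileId ON dbo.FileLookup (FileId);
CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName ON dbo.FileLookup (ClientName);
-- ...ten more non-clustered indexes...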
Any ideas?
shelleybobelly (sh************@yahoo.com) writes:
I have a new job. It needs to drop and re-create (by insert) a table
every night. The table contains approximately 3,000,000 (and growing)
records. The insert is fine, runs in 2 minutes. The problem is that
when I create the indexes on the table, it is taking 15-20 minutes.
There is one clustered index and 11 non-clustered. This is a lookup
table that takes many different parameters, so it really needs the
indexes for the user interface to run efficiently. However, the
database owners aren't keen on a job taking 20 minutes to run every
night.
Without knowing much about the data, it's difficult to tell. But I find it
difficult to believe that data changes that much in a lookup table. Then
again, I would not expect a look-up table to have three million rows.
Anyway, rather than dropping and recreating, maybe it's more effective
to load into a staging table, and then update changed rows, insert new
ones, and delete old ones.
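A minimal sketch of that incremental approach, assuming a single-column key; the table and column names are made up for illustration:

-- 1. Load tonight's snapshot into a staging table.
TRUNCATE TABLE dbo.FileLookup_Staging;
INSERT INTO dbo.FileLookup_Staging (FileId, ClientName, MatterNumber)
SELECT FileId, ClientName, MatterNumber
FROM   dbo.ArchiveMaster;

-- 2. Update rows that changed (NULLable columns would need ISNULL handling).
UPDATE fl
SET    fl.ClientName   = s.ClientName,
       fl.MatterNumber = s.MatterNumber
FROM   dbo.FileLookup AS fl
JOIN   dbo.FileLookup_Staging AS s ON s.FileId = fl.FileId
WHERE  fl.ClientName <> s.ClientName OR fl.MatterNumber <> s.MatterNumber;

-- 3. Insert rows that are new.
INSERT INTO dbo.FileLookup (FileId, ClientName, MatterNumber)
SELECT s.FileId, s.ClientName, s.MatterNumber
FROM   dbo.FileLookup_Staging AS s
WHERE  NOT EXISTS (SELECT * FROM dbo.FileLookup AS fl WHERE fl.FileId = s.FileId);

-- 4. Delete rows that no longer exist in the source.
DELETE fl
FROM   dbo.FileLookup AS fl
WHERE  NOT EXISTS (SELECT * FROM dbo.FileLookup_Staging AS s WHERE s.FileId = fl.FileId);

The indexes stay in place the whole time, so only the few thousand touched rows pay any index-maintenance cost.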
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se
Books Online for SQL Server 2005 at http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at http://www.microsoft.com/sql/prodinf...ons/books.mspx
Right, I agree. If I run it the other way (just update the big lookup
table from the separate tables where there were changes), it runs much
quicker, only deleting/adding a thousand or so records a day. The
owners of this DB are doing things differently, but I guess I'll just
have to 'educate' them on this one. Then I'll create another job to
re-index this table about once a week or so to keep it from
fragmenting.
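A possible shape for that weekly re-index job (the table name is an assumption):

-- SQL Server 2005 syntax: rebuild every index on the table
ALTER INDEX ALL ON dbo.FileLookup REBUILD;

-- SQL Server 2000 equivalent:
-- DBCC DBREINDEX ('dbo.FileLookup');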
BTW, this is for a Legal department that has to look up files that can
be up to 20 years old, but they don't have all the information. That's
why there are 3 million records in the lookup table. It is a subset of a HUGE
archive table that contains 255 columns, too.
Thanks Erland, keep up your postings.
Not starting over every night would certainly be the best approach,
but anyway....
One thing you might try is to leave the clustered index on the table,
and order the data by the clustering key before loading. It is
possible that the longer load time with the index in place will be
more than offset by not having to build the clustered index afterwards.
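A sketch of that idea, assuming FileId is the clustering key (all names are illustrative; SQL Server 2005 DROP INDEX syntax shown):

-- Keep the clustered index; drop only the non-clustered ones before the load.
DROP INDEX IX_FileLookup_ClientName ON dbo.FileLookup;
-- ...drop the remaining non-clustered indexes...

TRUNCATE TABLE dbo.FileLookup;

-- Load in clustering-key order so the pages fill sequentially.
INSERT INTO dbo.FileLookup (FileId, ClientName, MatterNumber)
SELECT FileId, ClientName, MatterNumber
FROM   dbo.ArchiveMaster
ORDER BY FileId;

-- Re-create the non-clustered indexes afterwards.
CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName ON dbo.FileLookup (ClientName);
-- ...and the other ten...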
Roy Harvey
Beacon Falls, CT
"shelleybobelly" <sh************@yahoo.comwrote in message
news:11**********************@h48g2000cwc.googlegr oups.com...
Two suggestions:
1) Move the indexes to an NDF file on a separate set of physical disks.
This may help if you're disk I/O bound at all (a rough sketch follows below).
2) You may want to consider using full-text indexing for some of this.
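A hedged sketch of suggestion 1; the database name, file path, size, and filegroup name are all assumptions:

-- Add a filegroup backed by an .ndf file on separate physical disks.
ALTER DATABASE LegalArchive ADD FILEGROUP IndexFG;

ALTER DATABASE LegalArchive
ADD FILE (
    NAME = LegalArchive_Indexes,
    FILENAME = 'E:\SQLIndexes\LegalArchive_Indexes.ndf',
    SIZE = 2048MB
) TO FILEGROUP IndexFG;

-- Create (or re-create) the non-clustered indexes on the new filegroup.
CREATE NONCLUSTERED INDEX IX_FileLookup_ClientName
    ON dbo.FileLookup (ClientName)
    ON IndexFG;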
On 18 Jul 2006 13:58:55 -0700, "shelleybobelly"
<sh************@yahoo.com> wrote:
>the database owners aren't keen on a job taking 20 minutes to run every night.
Is it automated? Are there users doing regular work at night? If
the answers are "yes" and "no" respectively, then why do they care
how long it runs?