
Performance-related Issues on a Restored DB

We observed an interesting phenomenon that is driving us crazy. We have
an archive process that takes rows off of a primary database, then
inserts them into a history database. After loading the exported rows
into the history database, we observed abysmal performance (queries
increasing from 2 minutes to 2 hours). No amount of runstats or
explaining/tuning queries yielded anything significant.

In an effort to study the problem on a different server so we wouldn't
kill the primary server with these long-running queries, we took a
backup and restored it onto the new server. It was a similarly classed
machine. Much to our surprise, the problem didn't exist on the new
server. We looked at all the differences in the configurations and
never found anything significant.

For grins, we decided to restore the same database back to itself on
the primary server, and again, we were surprised to see our
problem go away.

We just completed a new archive cycle and the problem has resurfaced. I
have two questions:
1. Does a restore do an implicit reorg of the tables? (I had always
thought that it just copied the data pages verbatim and reloaded them
in the same order.)
2. Why does the load create such chaos? Doesn't it allocate data on
contiguous new pages appended to the end? (We don't delete anything on
target DB, so there shouldn't be any "holes" in the data pages...)
Any enlightenment on this issue would be greatly appreciated.
Platform = UDB 7.2 FP 12 on AIX 5.2

Thanks,
Evan

Nov 12 '05 #1
As to point 1)
No, there is no reorg during a restore. DB2 puts the tables back into the
proper tablespaces (containers) and the pages of each table back into the
extents where they belong. There is no attempt to reorg, reclaim space, or
restructure any of your data.

As to point 2)
Do you collect stats (STATISTICS YES) when you do the loads?
LOAD will allocate contiguous pages of an extent in DMS, or in SMS if you
have set db2empfa. In SMS without empfa (your db cfg has a parameter that
indicates whether it is set for the db), you'll get whatever "contiguity"
the file system decides to give you.
If it is really as you describe, set db2empfa if the tablespaces are SMS,
alter the target tables to APPEND ON, then load with STATISTICS YES.
Test and hopefully the problem disappears. If not, we'd need a little more
info on the sequence of events as well as some parameter settings.
HTH, Pierre.
PS: If the applications run from bound packages, a rebind after the stats
may not hurt!

--
Pierre Saint-Jacques - Reply to: sescons at attglobal dot net
IBM DB2 Certified Solutions Expert - Administration
SES Consultants Inc.
Nov 12 '05 #2


