Bytes | Developer Community

Should system temporary tablespaces generally be on automatic storage?

Hello,

I have recently run a rather large data import where the imported data is
pumped through some updatable views equipped with INSTEAD OF triggers. For
various reasons, the exact same data were imported into the exact same
table/view/trigger structures (but with slightly different tablespace
configurations, see below). The involved hardware was different (one
server running on x86_64, the other on PPC), but with comparable CPU,
memory and I/O assignments.

The import was performing extremely badly on one of the servers: even
though it was consuming 5 CPU cores, it ran for more than a week. On the
other server, the same import ran in less than a day. I couldn't make any
sense of it until I saw a difference:
On the fast server, the "Buffer pool temporary data logical reads" /
"Buffer pool data logical reads" ratio was 10. On the slow server, the
same ratio was 30.
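(For reference: on DB2 9.x, a ratio like this can be computed from the SYSIBMADM snapshot administrative views, assuming they are enabled. This is only a sketch, one row per buffer pool:)

```sql
-- Sketch: temp-to-data logical read ratio per buffer pool,
-- from the SYSIBMADM administrative views (DB2 9.1 and later).
SELECT BP_NAME,
       POOL_TEMP_DATA_L_READS,
       POOL_DATA_L_READS,
       CASE WHEN POOL_DATA_L_READS > 0
            THEN DEC(POOL_TEMP_DATA_L_READS * 1.0 / POOL_DATA_L_READS, 10, 1)
       END AS TEMP_TO_DATA_RATIO
FROM SYSIBMADM.SNAPBP;
```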

This led me to compare the temporary tablespaces on the two servers. I had
added 8KB system temporary tablespaces to both databases, but it turned
out that I had done it differently: the fast server was using automatic
storage, while the slow server was using non-automatic, database managed
storage (DMS). The slow database's temporary tablespace was rather large
(25GB), so I doubt that the size of the tablespace was contributing to the
problem. Actually, when I looked at the actual size of the temporary
tablespace on the fast server, the involved files didn't seem to take up
more than a gigabyte.

I interrupted the import on the slow system, and re-created the 8KB system
temporary tablespace, this time using automatic storage. Now, a subsequent
import ran as fast as on the fast system.
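In case it helps others: the two variants correspond roughly to DDL like the following (tablespace, buffer pool, and file names are made up for illustration; the second variant assumes the database is enabled for automatic storage, and an 8K buffer pool must already exist):

```sql
-- DMS variant (as on the slow server); path and size are illustrative:
CREATE SYSTEM TEMPORARY TABLESPACE TEMPSYS8K
  PAGESIZE 8 K
  MANAGED BY DATABASE
  USING (FILE '/db2/containers/tempsys8k.dbf' 25 G)
  BUFFERPOOL BP8K;

-- Automatic storage variant (as on the fast server):
CREATE SYSTEM TEMPORARY TABLESPACE TEMPSYS8K
  PAGESIZE 8 K
  MANAGED BY AUTOMATIC STORAGE
  BUFFERPOOL BP8K;
```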

How can this be? I always thought that automatic storage would be a bit
_slower_ than DMS tablespaces.

Is there any merit to concluding that system temporary storage should
generally reside on automatic storage?

--
Troels
Nov 20 '08 #1
6 Replies
Hi!

From my talks with DB2 guys I can say that everything should be
automatic storage on SMS, not DMS.

Best regards,
Kovi

--
http://kovica.blogspot.com
Nov 20 '08 #2
Ian
Troels Arvin wrote:
>On the fast server, the "Buffer pool temporary data logical reads" /
>"Buffer pool data logical reads" ratio was 10. On the slow server, the
>same ratio was 30.
That is definitely interesting, but what about physical data
reads/writes? How do those compare? If you compare only logical data
reads, you're looking only at data coming from the buffer pool, and the
storage configuration doesn't affect this.

I would also dig into *why* you have 3 times more temporary data
reads. If you have the same access path on both servers, something
is going wrong. Are there lots more sort overflows on the slow
server? (These would translate into reads/writes on temp
tablespaces).
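One way to check this (a sketch, assuming DB2 9.x with the SYSIBMADM snapshot views enabled):

```sql
-- Sketch: fraction of sorts that overflowed to temporary tablespaces.
SELECT TOTAL_SORTS,
       SORT_OVERFLOWS,
       DEC(SORT_OVERFLOWS * 100.0 / NULLIF(TOTAL_SORTS, 0), 5, 2)
         AS PCT_SORTS_OVERFLOWED
FROM SYSIBMADM.SNAPDB;
```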
>The fast server was using automatic storage, while the slow server was
>using non-automatic, database managed storage. The slow database's
>temporary tablespace was rather large (25GB).
Have you compared the average I/O service times for the tablespaces on
both systems? What did you find?

How many containers did you have when the temp tablespace was DMS?
compared with how many storage paths you had in the database?
>I interrupted the import on the slow system, and re-created the 8KB
>system temporary tablespace, this time using automatic storage. Now, a
>subsequent import ran as fast as on the fast system.
>
>Is there any merit to concluding that system temporary storage should
>generally reside on automatic storage?
Automatic storage is implemented as DMS File (for regular/long
tablespaces) and as SMS (for user or system temporary tablespaces).
Automatic storage just controls *where* the containers get placed.

So the question should probably be, 'should a temporary tablespace
be SMS or DMS?'. But, in this case, I'd bet there is something
else going on.

Nov 20 '08 #3
Gregor Kovač wrote:
>From my talks with DB2 guys I can say that everything should be
>automatic storage on SMS, not DMS.
This is certainly sweet music to my DBA ears. But it sort of breaks
"conventional wisdom", which seems to state that DMS tablespaces give
optimal performance.

Does someone (IBM?) have performance data from actual, thorough testing
across different workloads/platforms/configurations, so that the choice
doesn't have to rely on ad-hoc whispers and differing versions of
conventional wisdom?

--
Troels
Nov 20 '08 #4
Ian wrote:
>>On the fast server, there was a "Buffer pool temporary data logical
>>reads" / "Buffer pool data logical reads" ratio of 10. On the slow
>>server, the same ratio was 30.
>
>That is definitely interesting, but what about physical data
>reads/writes?
Those were roughly equal. The only numbers which stood out were the
differences in buffer pool temporary data.

On the slow server, I had tried to get clues from the Health Center, from
memory distributions, from reading the db2diag log, etc. Even stracing
db2sysc didn't give a clue.
>I would also dig into *why* you have 3 times more temporary data reads.
So would I. But: 3 times more temporary data reads on the slow server
didn't explain why the slow server was _much_, _much_ slower than the
fast one.
>If you have the same access path on both servers
I didn't compare access plans. I hope to get a chance to do that at some
point.
>Have you compared the average I/O service times for the tablespaces on
>both systems? What did you find?
No. How would you do that?
>How many containers did you have when the temp tablespace was DMS?
Only one container. But that container was on a SAN with many involved
spindles. And as far as I remember, iostat reported very light I/O
(but CPU usage was heavy).

By the way: the slow server ran on PPC. Comparing CPU speed using bzip2
and compress (outputting to /dev/null) indicated that each core on the
PPC server was somewhat slower than each core on the x86_64 server (the
PPC server is also somewhat older). But the database on the PPC server
had access to more CPU cores (which it used), so I guess that somehow
makes for an even playing field.

I hope to get a chance to perform more controlled comparisons later, so I
can be less vague in the descriptions.

--
Troels
Nov 20 '08 #5
"Troels Arvin" <tr****@arvin.dk> wrote in message
news:gg**********@news.net.uni-c.dk...
>This is certainly sweet music to my DBA ears. But it sort of breaks
>"conventional wisdom", which seems to state that DMS tablespaces give
>optimal performance.
>
>Does someone (IBM?) have performance data from actual, thorough testing
>across different workloads/platforms/configurations, so that the choice
>doesn't have to rely on ad-hoc whispers and differing versions of
>conventional wisdom?
You may be misinterpreting the poster, or the poster may be incorrect.
Perhaps he meant that all SMS tablespaces should be automatic storage.

In version 9.5 (maybe 9.1?), if you create a new database, the SYSCATSPACE
and USERSPACE1 tablespaces are DMS with automatic storage by default, so I
doubt that IBM thinks SMS is the preference for all storage. In version
8.2, these were SMS by default.

Obviously, when you create a system temporary tablespace, SMS is
preferred, because DMS spends extra time trying to optimize the physical
placement of tables within the tablespace. That extra effort is fine for
a table that is created once and used many times. But a work table in a
system temporary tablespace only lasts for the duration of the SQL
statement, so it is not worthwhile to have DB2 spend extra time finding
its optimal physical placement.
Nov 21 '08 #6
Ian
Troels Arvin wrote:
Ian wrote:
>>I would also dig into *why* you have 3 times more temporary data reads.
>
>So would I. But: 3 times more temporary data reads on the slow server
>didn't explain why the slow server was _much_, _much_ slower than the
>fast one.
Also: what were the TRANSFERRATE / OVERHEAD settings for the tablespace
before and after the switch? These have some effect on the optimizer,
so perhaps this was causing it to choose a different plan.
>>If you have the same access path on both servers
>
>I didn't compare access plans. I hope to get a chance to do that at some
>point.
This is probably one of the most important things to look at.
>>Have you compared the average I/O service times for the tablespaces on
>>both systems? What did you find?
>
>No. How would you do that?
avg I/O read time =
total buffer pool read time / (physical data+index+temp reads)

avg I/O write time =
total buffer pool write time / (physical data+index writes)
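On DB2 9.x, these averages can be computed per tablespace from the snapshot views, roughly like this (a sketch following the formulas above; the monitor reports times in milliseconds):

```sql
-- Sketch: average physical I/O times per tablespace (milliseconds),
-- using the SYSIBMADM administrative views (DB2 9.1 and later).
SELECT TBSP_NAME,
       CASE WHEN POOL_DATA_P_READS + POOL_INDEX_P_READS
                 + POOL_TEMP_DATA_P_READS > 0
            THEN POOL_READ_TIME
                 / (POOL_DATA_P_READS + POOL_INDEX_P_READS
                    + POOL_TEMP_DATA_P_READS)
       END AS AVG_READ_MS,
       CASE WHEN POOL_DATA_WRITES + POOL_INDEX_WRITES > 0
            THEN POOL_WRITE_TIME
                 / (POOL_DATA_WRITES + POOL_INDEX_WRITES)
       END AS AVG_WRITE_MS
FROM SYSIBMADM.SNAPTBSP;
```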
>>How many containers did you have when the temp tablespace was DMS?
>
>Only one container. But that container was on a SAN with many involved
>spindles. And as far as I remember, iostat reported very light I/O
>(but CPU usage was heavy).
Still, something doesn't make sense: If the app is primarily CPU bound,
I/O configuration shouldn't make a big difference. But as you said,
switching from a single container DMS tablespace to automatic storage
(which is SMS; and you didn't say how many storage paths your database
had) resolved the issue. So I would again come back to the explain
plan.
Ian

Nov 21 '08 #7
