
GLOBAL TEMPORARY table - serious join performance problem

Scenario:

1) Create a GLOBAL TEMPORARY table and populate it with
one (1) row.

2) Join that table to another with about 1 million rows.
The join condition selects a few hundred rows.

Performance: 4 seconds, the system is doing a full-table
scan of the second table, and the Explain Plan output
indicates that Oracle thinks the first table has 4000
rows.

Now replace the GLOBAL TEMPORARY table with a real table
and repeat exactly the same query. This runs in 94 milliseconds
and the Explain Plan shows the correct number of rows for
the driving table and an index scan on the second table, as
I would have expected.

Can anyone suggest a solution that will make the GLOBAL TEMPORARY
implementation as fast as the real table version?
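For reference, a minimal sketch of the setup (all names here are illustrative, not my actual schema):

```sql
-- Hypothetical reproduction of the scenario.
-- The one-row driving table:
CREATE GLOBAL TEMPORARY TABLE gtt_driver (
    id NUMBER PRIMARY KEY
) ON COMMIT PRESERVE ROWS;

INSERT INTO gtt_driver VALUES (42);

-- big_table has ~1 million rows and an index on id.
-- The join should return a few hundred rows, but the plan
-- full-scans big_table instead of using the index:
SELECT b.*
  FROM gtt_driver g
  JOIN big_table b ON b.id = g.id;
```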

BTW, why are there two sets of parallel groups under both
comp.database.oracle and comp.databases.oracle?

Jim Garrison
jh*@athensgroup.com
Jul 19 '05 #1
5 Replies


Jim Garrison wrote:


BTW, why are there two sets of parallel groups under both
comp.database.oracle and comp.databases.oracle?


comp.database.oracle was created a couple of years ago by accident. Some
ISPs allowed users to create newsgroups on demand, and some people still
insist on using that group.

comp.databases.oracle itself has been voted out, in favor of the
comp.databases.oracle.* hierarchy, so those of us answering can more
easily sort the questions by topic.
Jul 19 '05 #2

Jim Garrison wrote:
Scenario:

1) Create a GLOBAL TEMPORARY table and populate it with
one (1) row.

2) Join that table to another with about 1 million rows.
The join condition selects a few hundred rows.

[snip]

I found references to one solution, which is to set the
table statistics (numrows, specifically) manually using
DBMS_STATS.SET_TABLE_STATS. While this works, it appears
that table statistics are shared among all instances.
That is, even though every session gets its own copy of
the data, there's only one copy of the stats. Two
sessions with greatly differing rowcounts will step
on each other's stats. Oh well.... I guess I'm
going to have to go back to real tables.
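For anyone searching later, the workaround I tried looks roughly like this (table name hypothetical); note the caveat above that the stats are shared across all sessions using the GTT:

```sql
-- Manually tell the optimizer the GTT holds one row.
-- WARNING: these stats are shared by every session using the GTT,
-- so sessions with very different row counts will clobber each other.
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'GTT_DRIVER',  -- hypothetical table name
    numrows => 1,
    numblks => 1
  );
END;
/
```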

GTTs don't seem to be fully baked yet, at least in 9i.
Can anyone confirm that GTTs work better in 10g?

Jim Garrison
jh*@athensgroup.com
Jul 19 '05 #3


Read about dynamic sampling.

Use a level where all tables without
statistics are sampled at run time, and
your problem will go away.
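A sketch of what that looks like in practice (table names are placeholders; check the level semantics against your version's docs):

```sql
-- Session level: level 2 samples all unanalyzed tables at parse time.
ALTER SESSION SET optimizer_dynamic_sampling = 2;

-- Or per statement, via a hint on the temporary table's alias:
SELECT /*+ dynamic_sampling(g 2) */ b.*
  FROM gtt_driver g
  JOIN big_table b ON b.id = g.id;
```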
--
Regards

Jonathan Lewis

http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ

http://www.jlcomp.demon.co.uk/seminar.html
Optimising Oracle Seminar - schedule updated Sept 19th

"Jim Garrison" <jh*@athensgroup.com> wrote in message
news:Gf********************@giganews.com...
[snip]

Jul 19 '05 #4

Jonathan Lewis wrote:
Read about dynamic sampling.

Use a level where all tables without
statistics are sampled at run time, and
your problem will go away.


That does the trick.

I added /*+ dynamic_sampling(gtt 1) */ and the query now runs
FASTER than the real-table version (about 15% faster), and the
execution plan is sensible once again.

One note that should be in the docs: if the table has an alias
in the statement, then only the alias can be used in the hint.
The full table name is silently ignored in that case.
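To illustrate the alias point with a hypothetical statement:

```sql
-- The table has alias "g", so the hint must name the alias:
SELECT /*+ dynamic_sampling(g 1) */ *
  FROM gtt_driver g, big_table b
 WHERE b.id = g.id;

-- This form is silently ignored, because the hint names the
-- table rather than its alias:
SELECT /*+ dynamic_sampling(gtt_driver 1) */ *
  FROM gtt_driver g, big_table b
 WHERE b.id = g.id;
```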

Thanks for your assistance.
Jul 19 '05 #5

"Jonathan Lewis" <jo******@jlcomp.demon.co.uk> wrote in message news:<ci**********@titan.btinternet.com>...
Read about dynamic sampling.

Use a level where all tables without
statistics are sampled at run time, and
your problem will go away.
[snip]


I may be way off base here...

Did you collect stats on the 'real table'? If so, then the CBO knows
there is only 1 row which would give you the 'ideal plan' using the
index scan on table 2.

With the temporary table, assuming you have stats on the second table
and none on the temporary table, Oracle is using the CBO. The
optimizer has to guess how many rows are in the temporary table, and
the only information it has to go on is the table definition
(row size) and your block size. It may therefore assume a worst case
and come up with a default estimate of the row count.
SQL> create global temporary table foobar(col1 number) on commit preserve rows;
SQL> insert into foobar values(1);
SQL> commit;

SQL> create table t1(col1 number);
SQL> insert into t1 select object_id from user_objects;
3 rows created
SQL> set autotrace on
SQL> select * from t1
2 , foobar
3 where foobar.col1 = t1.col1
4 /

no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=14 Card=8168 Bytes=147024)

1 0 HASH JOIN (Cost=14 Card=8168 Bytes=147024)
2 1 TABLE ACCESS (FULL) OF 'T1' (Cost=2 Card=3 Bytes=15)
3 1 TABLE ACCESS (FULL) OF 'FOOBAR' (Cost=11 Card=8168 Bytes=106184)

Note the cardinality for temp table foobar. My block size is 8k.

Jeff W.
en**************@earthlink.net
Jul 19 '05 #6
