"Jonathan Lewis" <jo******@jlcomp.demon.co.uk> wrote in message news:<ci**********@titan.btinternet.com>...
Read about dynamic sampling.
Use a level where all tables without
statistics are sampled at run time, and
your problem will go away.
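For example (a sketch only; the exact level you want depends on your Oracle version and workload, and 9i or later is assumed), dynamic sampling can be enabled session-wide with a parameter, or per statement with a hint naming the table alias:

```sql
-- Session-wide: level 2 samples all unanalyzed tables at hard parse
alter session set optimizer_dynamic_sampling = 2;

-- Or per statement, hinting just the temporary table
-- (gtt here is a placeholder alias for your global temporary table)
select /*+ dynamic_sampling(gtt 2) */ *
from   gtt, big_table
where  big_table.id = gtt.id;
```

With sampling in effect the optimizer peeks at the table's actual blocks at parse time instead of falling back on block-size-based defaults.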
--
Regards
Jonathan Lewis
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/seminar.html
Optimising Oracle Seminar - schedule updated Sept 19th
"Jim Garrison" <jh*@athensgroup.com> wrote in message
news:Gf********************@giganews.com...
Scenario:
1) Create a GLOBAL TEMPORARY table and populate it with
one (1) row.
2) Join that table to another with about 1 million rows.
The join condition selects a few hundred rows.
Performance: 4 seconds. The system is doing a full-table
scan of the second table, and the Explain Plan output
indicates that Oracle thinks the first table has 4000
rows.
Now replace the GLOBAL TEMPORARY table with a real table
and repeat exactly the same query. This runs in 94 milliseconds
and the Explain Plan shows the correct number of rows for
the driving table and an index scan on the second table, as
I would have expected.
Can anyone suggest a solution that will make the GLOBAL TEMPORARY
implementation as fast as the real table version?
BTW, why are there two sets of parallel groups under both
comp.database.oracle and comp.databases.oracle?
Jim Garrison
jh*@athensgroup.com
I may be way off base here...
Did you collect stats on the 'real table'? If so, the CBO knows
there is only 1 row, which would give you the 'ideal plan' using
the index scan on table 2.
With the temporary table, assuming you have stats on the second table
but not on the temporary table, Oracle is still using the CBO. The
optimizer has to guess how many rows are in the temporary
table. The only information it has to go on is the table definition
(row size) and your block size. The optimizer may assume the worst
case and come up with some default number of possible rows in the table.
SQL> create global temporary table foobar(col1 number)
  2  on commit preserve rows;
SQL> insert into foobar values(1);
SQL> commit;
SQL> create table t1(col1 number);
SQL> insert into t1 select object_id from user_objects;
3 rows created
SQL> set autotrace on
SQL> select * from t1
2 , foobar
3 where foobar.col1 = t1.col1
4 /
no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=14 Card=8168 Bytes=1
47024)
1 0 HASH JOIN (Cost=14 Card=8168 Bytes=147024)
2 1 TABLE ACCESS (FULL) OF 'T1' (Cost=2 Card=3 Bytes=15)
3 1 TABLE ACCESS (FULL) OF 'FOOBAR' (Cost=11 Card=8168
Bytes=106184)
Note the cardinality (Card=8168) the optimizer guessed for temp table
foobar, versus its actual single row. My block size is 8k; the default
guess scales with block size.
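To tie this back to the dynamic sampling suggestion (a sketch, assuming 9i or later; your costs and cardinalities will differ), hinting the same query makes the optimizer sample foobar at parse time, so the Card figure reflects the real row count rather than the 8168 default:

```sql
-- The dynamic_sampling hint takes the table alias and a sampling level
select /*+ dynamic_sampling(foobar 2) */ *
from   t1, foobar
where  foobar.col1 = t1.col1;
```

With an accurate cardinality of 1 for the driving table, the optimizer is then in a position to choose a nested-loops/index plan against the large table when a suitable index exists.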
Jeff W.
en**************@earthlink.net