Please suggest a better alternative to this query to reduce its execution time:

SELECT DISTINCT claim_number,
                status,
                date_received,
                pay_amount
  FROM (SELECT c.claim_number, c.status, c.date_received, p.pay_amount
          FROM claims.ardis_active_groups aag,
               claims.claim c,
               claims.payment p
         WHERE aag.group_id = c.group_id
           AND c.claim_seq = p.claim_seq
           AND c.status IN ('O', 'C')
        UNION
        SELECT c.claim_number, c.status, c.date_received, p.pay_amount
          FROM claims.claim c, claims.payment p, claims.service_provider sp
         WHERE c.claim_seq = p.claim_seq
           AND c.service_provider_seq = sp.service_provider_seq
           AND c.status IN ('O', 'C')
        UNION
        SELECT c.claim_number, c.status, c.date_received, p.pay_amount
          FROM claims.claim c,
               claims.payment p,
               claims.claim_profile cp,
               gman.mem_evd_address@ardis mea
         WHERE c.claim_seq = p.claim_seq
           AND c.member_claim_profile_seq = cp.claim_profile_seq
           AND cp.ardis_member_seq = mea.mem_evd_seq
           AND c.status IN ('O', 'C'));
Number of rows in each table:
claim_profile: 10,254,590
mem_evd_address: 6,343,539
claim: 5,121,867
payment: 3,051,578
service_provider: 773,468
aag: 4,750
How about posting the table structures and the relations between them?
alter table claims.claim drop primary key cascade;

drop table claims.claim cascade constraints;

create table claims.claim
(
  claim_form_seq                 number(18),
  claim_number                   varchar2(40 byte),
  correspondence_required        varchar2(1 byte),
  status                         varchar2(3 byte),
  total_charge                   number(9,2),
  group_id                       varchar2(10 byte),
  claim_seq                      number(18) constraint sys_c001506 not null,
  special_handling_rule_seq      number(18),
  member_claim_profile_seq       number(18),
  patient_claim_profile_seq      number(18),
  aob                            varchar2(1 byte),
  payee                          varchar2(1 byte),
  group_name                     varchar2(80 byte),
  policy_number                  varchar2(20 byte),
  claim_qualify_message          varchar2(10 byte),
  plan_type                      varchar2(4 byte),
  last_updated_user              varchar2(40 byte),
  last_updated_date              date,
  create_date                    date,
  create_user                    varchar2(40 byte),
  accident_indicator             varchar2(1 byte),
  xray                           varchar2(1 byte),
  user_comments                  clob,
  date_received                  date,
  work_comp                      varchar2(1 byte),
  provider_seq                   number(18),
  service_provider_seq           number(18),
  generation                     number(18),
  closed_date                    date,
  recent_delay_notice_sent_ts    date,
  pe_number                      varchar2(5 byte),
  correspondence_resend          varchar2(1 byte),
  claim_form_provdr_zip_override varchar2(3 byte)
)
lob (user_comments) store as (
  tablespace claims_5m
  enable storage in row
  chunk 8192
  retention
  nocache
  logging
  index (
    tablespace claims_5m
    storage (initial 5m next 5m minextents 1 maxextents unlimited
             pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  )
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
)
tablespace claims_100m
pctused 40
pctfree 10
initrans 1
maxtrans 255
storage (initial 160k next 100m minextents 1 maxextents unlimited
         pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
logging
nocompress
nocache
noparallel
nomonitoring;

create unique index claims.claims_claimnum_idx on claims.claim (claim_number)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create index claims.claims_groupid_idx on claims.claim (group_id)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

alter index claims.claims_groupid_idx monitoring usage;

create unique index claims.claims_idx_4 on claims.claim (patient_claim_profile_seq)
  logging
  tablespace claims_5m_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create unique index claims.claim_idx_3 on claims.claim (member_claim_profile_seq)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create index claims.claim_idx_5 on claims.claim (claim_form_seq)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create index claims.claim_idx_6 on claims.claim (last_updated_date, last_updated_user, status)
  logging
  tablespace claims_5m
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create index claims.claim_idx_7 on claims.claim (service_provider_seq)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create index claims.claim_prov_seq_idx1 on claims.claim (provider_seq)
  logging
  tablespace claims_256k
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 256k next 256k minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

create unique index claims.sys_c001507 on claims.claim (claim_seq)
  logging
  tablespace claims_5m_bk32_idx
  pctfree 10 initrans 2 maxtrans 255
  storage (initial 5m next 5m minextents 1 maxextents unlimited
           pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)
  noparallel;

alter table claims.claim add (
  constraint ck_accident_indicator check (accident_indicator in ('y', 'n')),
  constraint ck_xray check (xray in ('y', 'n')),
  constraint sys_c002004 check ("claim_number" is not null),
  constraint sys_c002325 check ("generation" is not null),
  constraint sys_c001507 primary key (claim_seq)
    using index
    tablespace claims_5m_bk32_idx
    pctfree 10 initrans 2 maxtrans 255
    storage (initial 5m next 5m minextents 1 maxextents unlimited
             pctincrease 0 freelists 1 freelist groups 1),
  constraint claim_number_unique unique (claim_number)
    using index claims.claims_claimnum_idx);

alter table claims.claim add (
  constraint fk_claim_1 foreign key (provider_seq)
    references claims.provider (provider_seq),
  constraint fk_claim_2 foreign key (service_provider_seq)
    references claims.service_provider (service_provider_seq),
  constraint fk_claim_member_cps foreign key (member_claim_profile_seq)
    references claims.claim_profile (claim_profile_seq),
  constraint fk_claim_patient_cps foreign key (patient_claim_profile_seq)
    references claims.claim_profile (claim_profile_seq));

audit alter on claims.claim by access whenever successful;
audit alter on claims.claim by access whenever not successful;
audit audit on claims.claim by access whenever successful;
audit audit on claims.claim by access whenever not successful;
audit delete on claims.claim by session whenever not successful;
audit grant on claims.claim by access whenever successful;
audit grant on claims.claim by access whenever not successful;
audit index on claims.claim by access whenever successful;
audit index on claims.claim by access whenever not successful;
audit insert on claims.claim by session whenever not successful;
audit select on claims.claim by session whenever not successful;
audit update on claims.claim by session whenever not successful;

grant delete, insert, select, update on claims.claim to claims_role;
grant delete, insert, select, update on claims.claim to claims_user;
grant select on claims.claim to crystal_role;
grant select on claims.claim to db_link;
grant select on claims.claim to grins_view_only;
grant delete, insert, select, update on claims.claim to ops$gman;
rski replied:
Remove the DISTINCT clause; you don't need it, because UNION already removes duplicates.
After that, check whether that is enough for you. If you want additional tuning, post the execution plan here.
Do you execute many insert/update/delete statements on these tables?
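To illustrate rski's point, here is a sketch of the same query with the redundant outer layer removed: since UNION already deduplicates its result, both the SELECT DISTINCT wrapper and the inline view around the UNION can be dropped without changing the result.

```sql
-- Sketch: UNION deduplicates on its own, so the DISTINCT wrapper
-- and the inline view are unnecessary; the branches run directly.
SELECT c.claim_number, c.status, c.date_received, p.pay_amount
  FROM claims.ardis_active_groups aag, claims.claim c, claims.payment p
 WHERE aag.group_id = c.group_id
   AND c.claim_seq = p.claim_seq
   AND c.status IN ('O', 'C')
UNION
SELECT c.claim_number, c.status, c.date_received, p.pay_amount
  FROM claims.claim c, claims.payment p, claims.service_provider sp
 WHERE c.claim_seq = p.claim_seq
   AND c.service_provider_seq = sp.service_provider_seq
   AND c.status IN ('O', 'C')
UNION
SELECT c.claim_number, c.status, c.date_received, p.pay_amount
  FROM claims.claim c, claims.payment p, claims.claim_profile cp,
       gman.mem_evd_address@ardis mea
 WHERE c.claim_seq = p.claim_seq
   AND c.member_claim_profile_seq = cp.claim_profile_seq
   AND cp.ardis_member_seq = mea.mem_evd_seq
   AND c.status IN ('O', 'C');
```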
I've already removed the DISTINCT, but the query is still running very slowly.
Explain plan for the query:
SELECT STATEMENT FIRST_ROWS Cost: 3.14101209224186944E23 Cardinality: 1.01672310708368992E27 IO Cost: 3.14101209224186944E23
31 SORT UNIQUE NOSORT Cost: 3.14101209224186944E23 Cardinality: 1.01672310708368992E27 IO Cost: 3.14101209224186944E23
30 VIEW CLAIMS. Cost: 3.14101209224186944E23 Cardinality: 1.01672310708368992E27
29 SORT UNIQUE Cost: 3.14101209224186944E23 Cardinality: 1.01672310708368992E27 IO Cost: 927,161,786,876,889
28 UNION-ALL
8 FILTER (:Z='O' OR :Z='C') AND :Z=:Z AND :Z=:Z
7 MERGE JOIN CARTESIAN Cost: 927,156,181,244,344 Cardinality: 2.98664713188915008E18 IO Cost: 927,156,181,244,344
4 MERGE JOIN CARTESIAN Cost: 333,639,604 Cardinality: 583,116,885,286 IO Cost: 333,639,604
1 REMOTE SERIAL_FROM_REMOTE ARDIS.JP.CORP SELECT 0 FROM "GMAN"."DENTAL_ACTIVE_GROUPS_VIEW" "DENTAL_ACTIVE_GROUPS_VIEW" Cost: 1,702 Cardinality: 191,087
3 BUFFER SORT Cost: 333,637,902 Cardinality: 3,051,578 IO Cost: 333,637,902
2 INDEX FAST FULL SCAN CLAIMS.PK_PAYMENT [Analyzed] Cost: 1,746 Cardinality: 3,051,578 IO Cost: 1,746
6 BUFFER SORT Cost: 927,156,181,242,598 Cardinality: 5,121,867 IO Cost: 927,156,181,242,598
5 INDEX FAST FULL SCAN CLAIMS.SYS_C001507 [Analyzed] Cost: 1,590 Cardinality: 5,121,867 IO Cost: 1,590
16 FILTER (:Z='O' OR :Z='C') AND :Z=:Z AND :Z=:Z
15 MERGE JOIN CARTESIAN Cost: 3,752,875,063,156,960 Cardinality: 1.20891320906605008E19 IO Cost: 3,752,875,063,156,960
12 MERGE JOIN CARTESIAN Cost: 1,350,475,598 Cardinality: 2,360,297,932,504 IO Cost: 1,350,475,598
9 INDEX FAST FULL SCAN CLAIMS.PK_SERVICE_PROVIDER [Analyzed] Cost: 470 Cardinality: 773,468 IO Cost: 470
11 BUFFER SORT Cost: 1,350,475,128 Cardinality: 3,051,578 IO Cost: 1,350,475,128
10 INDEX FAST FULL SCAN CLAIMS.PK_PAYMENT [Analyzed] Cost: 1,746 Cardinality: 3,051,578 IO Cost: 1,746
14 BUFFER SORT Cost: 3,752,875,063,155,210 Cardinality: 5,121,867 IO Cost: 3,752,875,063,155,210
13 INDEX FAST FULL SCAN CLAIMS.SYS_C001507 [Analyzed] Cost: 1,590 Cardinality: 5,121,867 IO Cost: 1,590
27 FILTER (:Z='O' OR :Z='C') AND :Z=:Z AND :Z=:Z AND :Z=:Z
26 MERGE JOIN CARTESIAN Cost: 3.14101204544137984E23 Cardinality: 1.01672309200790992E27 IO Cost: 3.14101204544137984E23
23 MERGE JOIN CARTESIAN Cost: 30,778,919,522,550,298 Cardinality: 9.91480977794249088E19 IO Cost: 30,778,919,522,550,298
20 MERGE JOIN CARTESIAN Cost: 11,075,828,532 Cardinality: 19,357,804,054,542 IO Cost: 11,075,828,532
17 REMOTE SERIAL_FROM_REMOTE ARDIS.JP.CORP SELECT 0 FROM "GMAN"."MEM_EVD_ADDRESS" "MEA" Cost: 9,438 Cardinality: 6,343,539
19 BUFFER SORT Cost: 11,075,819,094 Cardinality: 3,051,578 IO Cost: 11,075,819,094
18 INDEX FAST FULL SCAN CLAIMS.PK_PAYMENT [Analyzed] Cost: 1,746 Cardinality: 3,051,578 IO Cost: 1,746
22 BUFFER SORT Cost: 30,778,919,522,548,602 Cardinality: 5,121,867 IO Cost: 30,778,919,522,548,602
21 INDEX FAST FULL SCAN CLAIMS.SYS_C001507 [Analyzed] Cost: 1,590 Cardinality: 5,121,867 IO Cost: 1,590
25 BUFFER SORT Cost: 3.14101204544137984E23 Cardinality: 10,254,590 IO Cost: 3.14101204544137984E23
24 INDEX FAST FULL SCAN CLAIMS.PK_CLAIM_PROFILE [Analyzed] Cost: 3,168 Cardinality: 10,254,590 IO Cost: 3,168
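Note that every branch of this plan is built from MERGE JOIN CARTESIAN steps with the filters applied only afterwards, which is why the cost estimates explode. One direction worth testing (a sketch, not a drop-in fix) is to scan claim joined to payment once and express the three branch conditions as EXISTS tests, since all three UNION branches return the same claim/payment columns and differ only in the table used to qualify the claim. A SELECT DISTINCT takes over the UNION's deduplication; whether this actually runs faster depends on current statistics and on how the optimizer handles the remote mem_evd_address@ardis table, so compare the plans before adopting it:

```sql
-- Sketch: one pass over claim/payment; the three UNION branches
-- become OR'ed EXISTS filters. DISTINCT replaces UNION's dedup.
SELECT DISTINCT c.claim_number, c.status, c.date_received, p.pay_amount
  FROM claims.claim c
  JOIN claims.payment p ON p.claim_seq = c.claim_seq
 WHERE c.status IN ('O', 'C')
   AND (   EXISTS (SELECT 1 FROM claims.ardis_active_groups aag
                    WHERE aag.group_id = c.group_id)
        OR EXISTS (SELECT 1 FROM claims.service_provider sp
                    WHERE sp.service_provider_seq = c.service_provider_seq)
        OR EXISTS (SELECT 1 FROM claims.claim_profile cp
                     JOIN gman.mem_evd_address@ardis mea
                       ON mea.mem_evd_seq = cp.ardis_member_seq
                    WHERE cp.claim_profile_seq = c.member_claim_profile_seq));
```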
Can anybody tell me how to partition the tables for my query, since the data is very large?
Please let me know how I can improve the performance of my query.
Do you think partitioning would work, since there are millions of rows in each table?
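On the partitioning question: a range partition on date_received is one common layout for a large claims table, but partitioning only helps when queries prune on the partition key, and the query in this thread filters on status rather than dates, so measure before committing to it. A purely hypothetical sketch follows; the partition names and date bounds are invented, and the real table has many more columns and storage clauses:

```sql
-- Hypothetical sketch only: names, bounds, and the reduced column
-- list are illustrative, not taken from the actual schema.
CREATE TABLE claims.claim_part (
  claim_seq     NUMBER(18) NOT NULL,
  claim_number  VARCHAR2(40 BYTE),
  status        VARCHAR2(3 BYTE),
  date_received DATE
)
PARTITION BY RANGE (date_received) (
  PARTITION claim_2006 VALUES LESS THAN (DATE '2007-01-01'),
  PARTITION claim_2007 VALUES LESS THAN (DATE '2008-01-01'),
  PARTITION claim_max  VALUES LESS THAN (MAXVALUE)
);
```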