Bytes IT Community

How can we improve query execution on a bulky database?

P: 5
Hi Friends,

I am working with PostgreSQL 8.3. I have one table, A, which contains a number of attributes, and we have already put indexes on the Id and liveId columns. The table also contains an insertedDate column, and one more very important thing is that this table receives roughly 30 hits every second.

When we fetch records from this table it takes too much time. Yesterday I put an index on the insertedDate column, and for the first 4-5 hours it worked great, but once the table size grew it started taking too much time again, even more than before. So I dropped that index, but I still want to improve the fetching speed.
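(For reference, the index in question is just a plain single-column index, created with something like the statement below; the exact index name is only an example.)

create index tableA_insertedDate on tableA (insertedDate);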

If I create the index again, how should I do it? I am new to database handling, so please help me.

Please give me some guidance. Thanks in advance.
Waiting for your reply...
Feb 26 '10 #1
10 Replies


Expert 100+
P: 700
What query do you use to fetch data from the table?
Feb 26 '10 #2

RedSon
Expert 5K+
P: 5,000
Does PostgreSQL have a query analysis tool?
Feb 26 '10 #3

P: 5
Hi Rski,
Thanks for the reply.

I am using a subquery; the query is:

select * from tableA
where liveid in (select id from tableB
                 where vehicleid in (select id from tableC
                                     where registrationNumber = 'MH4R2423'))
  and inserteddate > '2010-02-20'
  and inserteddate < '2010-02-21';

When I put an index on inserteddate it worked quite well for 4-5 hours, but as the table size increased it started taking too much time again, so I dropped that index.

Dear RedSon, I am new to Postgres, and I only know that to check a query's performance you can run EXPLAIN ANALYZE on it.

I am not a database guy, I am a developer, so I don't know very much about this. I am still trying to fix this issue, but if you know anything, please help me.
I really appreciate your help.
Feb 27 '10 #4

Expert 100+
P: 700
Can you show the explain plan for that query?
Feb 27 '10 #5

P: 5
The explain plan for my query is as follows:

explain analyze
select * from tableA
where liveid in (select id from tableB
                 where vehicleid in (select id from tableC
                                     where registrationNumber like '%0306'))
  and insertedDate > '2010-02-25'
  and insertedDate < '2010-02-26';
QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Hash IN Join  (cost=37054.99..279253.69 rows=119 width=140) (actual time=13374.289..90126.653 rows=965 loops=1)
   Hash Cond: (tableA.liveid = tableB.id)
   ->  Bitmap Heap Scan on tableA  (cost=1998.45..243537.42 rows=87806 width=140) (actual time=11841.439..87984.638 rows=255173 loops=1)
         Recheck Cond: ((insertedDate > '2010-02-25 00:00:00+05:30'::timestamp with time zone) AND (insertedDate < '2010-02-26 00:00:00+05:30'::timestamp with time zone))
         ->  Bitmap Index Scan on tableA_insertedDate  (cost=0.00..1976.50 rows=87806 width=0) (actual time=11825.504..11825.504 rows=255180 loops=1)
               Index Cond: ((insertedDate > '2010-02-25 00:00:00+05:30'::timestamp with time zone) AND (insertedDate < '2010-02-26 00:00:00+05:30'::timestamp with time zone))
   ->  Hash  (cost=35009.64..35009.64 rows=3752 width=8) (actual time=1.070..1.070 rows=1 loops=1)
         ->  Nested Loop  (cost=190.88..35009.64 rows=3752 width=8) (actual time=1.054..1.063 rows=1 loops=1)
               ->  HashAggregate  (cost=27.23..27.24 rows=1 width=8) (actual time=0.269..0.271 rows=1 loops=1)
                     ->  Seq Scan on tableC  (cost=0.00..27.23 rows=1 width=8) (actual time=0.019..0.253 rows=1 loops=1)
                           Filter: (registrationNumber ~~ '%0360'::text)
               ->  Bitmap Heap Scan on tableB  (cost=163.65..34809.35 rows=13844 width=16) (actual time=0.771..0.773 rows=1 loops=1)
                     Recheck Cond: (tableB.vehicleid = tableC.id)
                     ->  Bitmap Index Scan on livestats_driver_vehicle_index  (cost=0.00..160.19 rows=13844 width=0) (actual time=0.147..0.147 rows=386 loops=1)
                           Index Cond: (tableB.vehicleid = tableC.id)
 Total runtime: 90143.018 ms
(16 rows)
Feb 27 '10 #6

Expert 100+
P: 700
Did you try replacing the 'IN' clauses with simple joins?
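For example, a rough rewrite with explicit joins might look like this (just a sketch based on the tables and columns in your query; you may need a DISTINCT if tableB or tableC can return more than one matching row):

select a.*
from tableA a
join tableB b on b.id = a.liveid
join tableC c on c.id = b.vehicleid
where c.registrationNumber = 'MH4R2423'
  and a.inserteddate > '2010-02-20'
  and a.inserteddate < '2010-02-21';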
Feb 27 '10 #7

P: 5
Joins take even more time... I already tried that, but it was not useful either...
Feb 27 '10 #8

Expert 100+
P: 700
OK, let's think.
Don't you think that you could replace

inserteddate > '2010-02-20' and inserteddate < '2010-02-21'

with

to_char(inserteddate, 'YYYY-MM-DD') = '2010-02-20'

After that it would be good to replace the index you have on inserteddate with a function-based index on to_char(inserteddate, 'YYYY-MM-DD').
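A rough sketch of what such a function-based (expression) index could look like is below. This is only an assumption, not something tested against your schema: PostgreSQL requires the indexed expression to be IMMUTABLE, and to_char() on a timestamp is not, so with a timestamp with time zone column (as your plan suggests) one common workaround is an immutable wrapper function. The time zone name below is a guess based on the +05:30 offset in your plan.

-- immutable wrapper so the expression can be indexed (time zone is assumed)
create or replace function inserted_day(ts timestamptz) returns text as $$
  select to_char(ts at time zone 'Asia/Kolkata', 'YYYY-MM-DD');
$$ language sql immutable;

create index tableA_inserted_day_idx on tableA (inserted_day(insertedDate));

-- the query filter must then use the same expression so the planner can use the index:
-- ... and inserted_day(insertedDate) = '2010-02-25'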
Feb 27 '10 #9

P: 5
Thanks for the reply...
Buddy, I did the same thing, but in a different way.
I added a new varchar column to the table and then filled it with the value of the inserteddate column without the time portion.
Then I put an index on that column and made some modifications in the Java code as well. But when I checked the performance on my local box it was almost the same, not much of a difference...
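Roughly, that approach looks like this (a sketch; the column name insertedday and the backfill step are assumptions):

alter table tableA add column insertedday varchar(10);
update tableA set insertedday = to_char(inserteddate, 'YYYY-MM-DD');
create index tableA_insertedday_idx on tableA (insertedday);
-- queries then filter on the new column instead of inserteddate:
-- ... where insertedday = '2010-02-25'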
But I am still searching for the right way to do this...
Feb 28 '10 #10

Expert 100+
P: 700
Can you describe the relations between the tables? I mean, can you describe the ERD for that database (or show the table definitions: do they contain any foreign keys, primary keys, ...)?
Feb 28 '10 #11
