Hi,
I am facing a problem querying a large table with millions of rows: the query hangs partway through when fetching all of them.
Millions of rows isn't the problem in itself. If the query hangs partway through, there may be a (dead)lock, or it is simply taking a very long time because of a) missing or unusable indexes or b) a too-small bufferpool. And queries that take too long often end up holding locks themselves.
a) Try to create an index on the columns used in the WHERE clauses, join conditions, etc.
b) If there is an index, do a RUNSTATS. Maybe even a REORG is necessary.
c) Increase the size of the bufferpool.
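As a rough sketch of a), b) and c) in DB2 syntax (table, schema, column and bufferpool names here are just placeholders, and the bufferpool size is only an example value, so adapt them to your system):

```sql
-- a) Index on the columns that appear in WHERE clauses / join conditions
CREATE INDEX myschema.idx_mytab_custno ON myschema.mytab (custno);

-- b) Refresh optimizer statistics for the table and its indexes;
--    a REORG may help if the data is badly clustered
RUNSTATS ON TABLE myschema.mytab AND INDEXES ALL;
REORG TABLE myschema.mytab;

-- c) Enlarge the bufferpool (size is in pages)
ALTER BUFFERPOOL IBMDEFAULTBP SIZE 50000;
```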
Can anybody suggest a query for the following:
Querying the database for the next 100 records each time (that means I need to set up a cursor) until the end of the table is reached (take 1 million rows, e.g.).
Use "select ... from ... where ... fetch first 100 rows only". Unfortunately there's no "skip first x rows", so you have to add something like "where prim_key_row_id > pagecounter * 100".
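A sketch of one such page, assuming a hypothetical table mytab with a dense numeric primary key prim_key_row_id (the ORDER BY matters, because without it FETCH FIRST returns an arbitrary 100 rows):

```sql
-- Page number "pagecounter" starts at 0; each execution fetches the next 100 rows
SELECT prim_key_row_id, some_col
FROM myschema.mytab
WHERE prim_key_row_id > :pagecounter * 100
ORDER BY prim_key_row_id
FETCH FIRST 100 ROWS ONLY;
```

If the key has gaps, a variation is to remember the largest prim_key_row_id of the previous page and use "WHERE prim_key_row_id > :last_seen_id" instead of multiplying a page counter.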
Regards,
Bernd