Adi Schwarz wrote:
select *
from mytable
order by some_field
limit zero_based_offset, record_count;
Of course that works, but the point is that this query is executed once
for every page of the result set -> the server fetches all rows, sorts them
and then takes only the rows it needs (normally a small minority of them) -
and it does that for every single page, with (almost) the same query each
time. I would say it saves database server load if this is only done once.
Depends on how your code is written, whether you use persistent
connections, etc. Bear in mind that if the user is only likely to want
to see 10 records, it's maybe a little wasteful to fetch 2000 that
they'll never see.
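For example, a per-page fetch might look something like this - just a
sketch, where the mysql_* calls and the table/column names (mytable,
some_field) are assumptions for illustration, not your actual code:

<?php
// Fetch only the page the user asked for; nothing else crosses the wire.
$page     = isset($_GET['page']) ? max(0, (int) $_GET['page']) : 0;
$per_page = 10;
$offset   = $page * $per_page;

$sql = sprintf("SELECT * FROM mytable ORDER BY some_field LIMIT %d, %d",
               $offset, $per_page);

$result = mysql_query($sql);            // assumes an open connection
while ($row = mysql_fetch_assoc($result)) {
    // render only the rows the user actually asked to see
}
?>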
If you're talking about caching the actual data returned, then look at
using an application-level cache. I personally go the roll-your-own
route, but PEAR has a Pear::Cache class (or similar) that does this.
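A roll-your-own version can be as small as serialising the result rows to a
file keyed on the query - something like this sketch, where the cache path,
TTL and mysql_* calls are all illustrative:

<?php
// Minimal roll-your-own result cache: serialise the rows to a file keyed
// on the query text. Cache directory and lifetime are made-up values.
function cached_query($sql, $ttl = 300)
{
    $file = '/tmp/qcache_' . md5($sql);

    // Fresh copy on disk? Return it without touching the database.
    if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }

    // Otherwise run the query once and write the rows out for next time.
    $rows   = array();
    $result = mysql_query($sql);        // assumes an open connection
    while ($row = mysql_fetch_assoc($result)) {
        $rows[] = $row;
    }
    file_put_contents($file, serialize($rows));

    return $rows;
}
?>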
And just how large would the entire result be? How much do you want to pull
across the network from the DB server, and how much do you want to
serialize/deserialize to/from disk?
Better still, why not just look at caching the page output (depending on
your application), so you avoid executing most of your PHP and database code
at all?
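With output buffering you can write the finished page to a file and serve
that on later hits - again only a sketch, with the cache path and lifetime
made up:

<?php
// Whole-page output cache using output buffering.
$cache_file = '/tmp/page_' . md5($_SERVER['REQUEST_URI']);
$lifetime   = 60;

if (file_exists($cache_file) && (time() - filemtime($cache_file)) < $lifetime) {
    readfile($cache_file);   // serve the stored page, skip PHP/DB work entirely
    exit;
}

ob_start();
// ... generate the page as normal: queries, templating, etc. ...
$html = ob_get_contents();
ob_end_flush();                          // send the page to the browser
file_put_contents($cache_file, $html);   // and keep a copy for later hits
?>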