
Controlling maximal on-disk size of a table

I have a table that is essentially a log: new entries stream in continually, and from time to time I throw old entries away to keep the table from growing.

I understand that in addition to issuing

    DELETE FROM log WHERE date < xxx

I also need to issue

    VACUUM log;

so that new entries will reuse the space of deleted entries.

Now I want to reserve a certain amount of disk storage to hold the log table. So I first let the table (plus its index, its TOAST table, and the TOAST index) grow until it is about to reach the maximum size. From then on, a daemon continually deletes old entries and vacuums the table, so that on-disk usage stays more or less constant; at least, that is the idea.
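A minimal sketch of the trimming cycle such a daemon might run, assuming the table and column names log and date from above (the 30-day retention window is made up for illustration):

    -- Throw away entries older than the retention window ...
    DELETE FROM log WHERE date < now() - interval '30 days';
    -- ... and mark their space as reusable. A plain VACUUM does not
    -- shrink the files on disk; it only makes the freed space available
    -- for new rows, which is exactly what is wanted here.
    VACUUM log;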

Of course I would like to keep as much history as possible given the available space. The log may sometimes be quiet and sometimes quite busy, and the size of the text entries may vary quite a bit.

Now to make a good guess about when to issue the next delete, I need to estimate how much of the on-disk usage is accounted for by deleted entries.

I can of course count the number of entries, estimate the bytes needed for storage by averaging the length of the text column and adding the size of the fixed columns, and compare that to the on-disk size to conclude how much space is still available. As for the index, I assume it has a fixed size per row.
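As a sketch of that approach (the column name message is a made-up stand-in for the text column):

    -- Estimate the live data volume from row count and average text length.
    SELECT count(*) AS live_rows,
           avg(length(message)) AS avg_text_len
    FROM log;

    -- On-disk size of the table as of the last VACUUM or ANALYZE;
    -- relpages counts blocks of (by default) 8 kB.
    SELECT relpages * 8192 AS approx_bytes_on_disk
    FROM pg_class
    WHERE relname = 'log';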

But these queries are expensive, because the log may easily contain millions of entries with an on-disk size in the range of a few GB, and they must be repeated quite often to prevent sudden bursts of new entries from overflowing the log.

Is there a better way to get at the current "free space" inside of a table/index?

Nov 23 '05 #1
1 Reply


What you're looking for sounds like an extended (destructive) version of what autovacuum does.

So you might try looking at the statistics tables the way autovacuum does. I don't know the details of how it does that, but it seems that this way you could incrementally maintain approximate information about what happens to the table.
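For instance, assuming row-level statistics collection is enabled, the collector's per-table counters can be sampled cheaply; the delete counter since the last VACUUM roughly tracks how many dead rows have accumulated (a sketch, not autovacuum's actual logic):

    -- Cheap, incremental activity counters from the statistics collector.
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
    WHERE relname = 'log';

The contrib module pgstattuple can also report dead-tuple and free-space figures for a table directly, though it scans the whole table to do so, which may be too expensive to run often here.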

Good luck,

d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049

On 16. nov 2004, at 13:21, Nils Rennebarth wrote:
> [snip]


Nov 23 '05 #2
