
7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massively over time

Hello,

PostgreSQL 7.3.4 on Debian, and the Red Hat-packaged 7.3.4-8 on RHEL AS3 -
same issue, so I can pretty much rule out that Red Hat's packaging is playing
tricks on me. Tested on two different PCs, too (one Debian, one RHEL).

While running
UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
several thousand times, the return times degrade (roughly linearly).
The relation banner currently has *seven* rows, so it doesn't matter
(and I checked :>) whether counterhalf is indexed or not.

A subsequent VACUUM brings the return times back to 'start' - but I cannot
run VACUUM every other minute, can I (?). And it vacuums exactly as many
tuples as I updated.. sure thing:
INFO: Removed 5000 tuples in 95 pages.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Pages 95: Changed 1, Empty 0; Tup 7: Vac 5000, Keep 0, UnUsed 3.
Total CPU 0.01s/0.03u sec elapsed 0.04 sec.

What I can't explain is the query statistics output.
In the beginning:
DEBUG: StartTransactionCommand
LOG: query: UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
DEBUG: ProcessQuery
DEBUG: CommitTransactionCommand
LOG: QUERY STATISTICS
! system usage stats:
! 0.001110 elapsed 0.000000 user 0.000000 system sec
! [0.940000 user 0.080000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 0/0 [437/192] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks: 0 read, 0 written, buffer hit rate = 100.00%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 0 read, 0 written

After 5000 updates:
DEBUG: StartTransactionCommand
LOG: query: UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
DEBUG: ProcessQuery
DEBUG: CommitTransactionCommand
LOG: QUERY STATISTICS
! system usage stats:
! 0.002503 elapsed 0.000000 user 0.000000 system sec
! [8.400000 user 0.740000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 0/0 [711/192] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks: 0 read, 0 written, buffer hit rate = 100.00%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 0 read, 0 written

I checked all 5000 log entries, and (obviously?) it never touches the
filesystem. What I stumble over is that the per-query 'elapsed' time stays
low, while the cumulative user/sys times grow linearly (which matches the
wall-clock degradation).

The effect is the same (just in different ranges) with a default or "tuned"
postgresql.conf, on both the Debian and the RHEL machine.

I don't know where to go from here. I read the whole changelog/history from
7.3.4 up to 7.4.2 and only found 'auto vacuum' - which might be a deal, yet
it needs permanent statistics collection (really?) and would thus eat
response time on the other hand.
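
If I read the contrib README correctly, that would mean turning on something
like the following in postgresql.conf (my assumption - I am not sure how much
overhead the row-level stats collection really adds):

# assumed prerequisites for pg_autovacuum (contrib): it watches row-level statistics
stats_start_collector = true
stats_row_level = true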

And just for the record, I tried this on MySQL 4.0.18, where the return time
is generally faster (don't care), but it also doesn't degrade even over 50,000
updates (do care about that >:).

The next step is profiling postgres to see where it loses the time, but
maybe someone can already point me at something.

Any pointer is appreciated: a link to an archived mail (the archive search is
quite slow, too? :) ), a pointer to some "hidden" doc I might have missed, or
a different SQL approach to counting banner views in pgsql.
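
One alternative I have been toying with myself (just a sketch - the
banner_hits table and its columns are made up) is an append-only hit log that
gets rolled up into the counter now and then, so the hot path never updates
the same row over and over:

-- hypothetical append-only log; INSERTs don't pile dead versions onto one hot row
CREATE TABLE banner_hits (bannerid integer NOT NULL, hit_at timestamp DEFAULT now());

-- record one banner view (instead of UPDATE ... counterhalf = counterhalf + 1)
INSERT INTO banner_hits (bannerid) VALUES (50);

-- periodically roll the log up into the counter and clear it
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- so the count and the delete see the same rows
UPDATE banner SET counterhalf = counterhalf
       + (SELECT count(*) FROM banner_hits WHERE bannerid = 50)
 WHERE bannerid = 50;
DELETE FROM banner_hits WHERE bannerid = 50;
COMMIT;
-- (the deleted log rows still need an occasional VACUUM, but only at rollup time)

Does something like that make sense, or is there a smarter idiom for this in pgsql?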

Thanks for any consideration,
--
Philipp Buehler, aka fips | <double-p>

cvs -d /dev/myself commit -m "it's my life" dont/you/forget

Philipp Buehler <pb********@mlsub.buehler.net> writes:
> While running
> UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> several thousand times, the return times degrade (roughly linearly).

You need to vacuum occasionally ...

> A subsequent VACUUM brings the return times back to 'start' - but I cannot
> run VACUUM every other minute, can I (?).

Sure you can.

regards, tom lane

On Wed, Apr 21, 2004 at 19:52:15 +0200,
Philipp Buehler <pb********@mlsub.buehler.net> wrote:
> While running
> UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> several thousand times, the return times degrade (roughly linearly).

This is to be expected. Postgres uses MVCC, and every time you do an update
a new row version is created.

> A subsequent VACUUM brings the return times back to 'start' - but I cannot
> run VACUUM every other minute, can I (?). And it vacuums exactly as many
> tuples as I updated.. sure thing:

Why not? You only have to vacuum this one table. Vacuuming it once a minute
should be doable.
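
For a table this size it does not need to be anything heavier than (using the
table name from your post, scheduled from cron or wherever you like):

VACUUM banner;

A plain VACUUM of a seven-row table with a few thousand dead row versions
finishes in a few hundredths of a second, as your own VACUUM output already
shows, so running it once a minute should not hurt.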

On 21/04/2004, Tom Lane <tg*@sss.pgh.pa.us> wrote to Philipp Buehler:
> > While running
> > UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> > several thousand times, the return times degrade (roughly linearly).
>
> You need to vacuum occasionally ...
>
> > A subsequent VACUUM brings the return times back to 'start' - but I cannot
> > run VACUUM every other minute, can I (?).
>
> Sure you can.

Yes, it's probably bearable. Just that I am now sure it's a systematic
thing I have to deal with, and not some fubar.

Thanks also for the other hints/URLs I got (pg_autovacuum in contrib, etc.).

ciao
--
Philipp Buehler, aka fips | <double-p>

cvs -d /dev/myself commit -m "it's my life" dont/you/forget

I hope I understand your question...

All the old tuples that were current before your updates are still in the
heap. The executor has to do the equivalent of 'where
tuple_visible_to_current_transaction' on every tuple in the heap. The more
updates you do, the more tuples have to be visited on subsequent update
runs.

This is why vacuum exists, and it's the price we pay for the otherwise
excellent transactional model in PG.
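
You can watch it happen with VACUUM VERBOSE (a rough illustration, using the
table and values from your post):

-- after a batch of updates to the same row, every scan still steps over the old versions
UPDATE banner SET counterhalf = counterhalf + 1 WHERE BannerID = 50;  -- repeated a few thousand times
VACUUM VERBOSE banner;  -- reports roughly that many removable row versions, as in your log

After the VACUUM the table is back down to a handful of live tuples and the
per-update time drops back to where it started.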

HTH :-)
Glen Parker
-----Original Message-----
From: pg*****************@postgresql.org
[mailto:pg*****************@postgresql.org]On Behalf Of Philipp Buehler
Sent: Wednesday, April 21, 2004 10:52 AM
To: pg***********@postgresql.org
Subject: [GENERAL] 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massively
over time

While running
UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
several thousand times, the return times degrade (roughly linearly).
The relation banner currently has *seven* rows, so it doesn't matter
(and I checked :>) whether counterhalf is indexed or not.

A subsequent VACUUM brings the return times back to 'start' - but I cannot
run VACUUM every other minute, can I (?). And it vacuums exactly as many
tuples as I updated.. sure thing:
INFO: Removed 5000 tuples in 95 pages.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Pages 95: Changed 1, Empty 0; Tup 7: Vac 5000, Keep 0, UnUsed 3.
Total CPU 0.01s/0.03u sec elapsed 0.04 sec.

< big snip >
