7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massively over time

Hello,

PostgreSQL 7.3.4 on Debian and the Red Hat-packaged 7.3.4-8 on RHEL AS3 show
the same issue, so I can pretty much rule out that Red Hat is playing tricks on me.
Tested on two different PCs, too (one Debian, one RHEL).

While running
UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
several thousand times, the return times degrade (roughly linearly).
The relation banner currently has *seven* rows, so it doesn't matter
(and I checked :>) whether counterhalf is indexed or not.
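
For reference, a minimal reproduction sketch; the table definition below is an
assumption, not taken from the post:

-- assumed minimal schema for the seven-row table described above
CREATE TABLE banner (bannerid integer PRIMARY KEY, counterhalf integer NOT NULL DEFAULT 0);
INSERT INTO banner (bannerid, counterhalf) VALUES (50, 0);
-- issue the statement below a few thousand times from the client, timing each call;
-- the per-statement latency creeps up until the table is vacuumed
UPDATE banner SET counterhalf = counterhalf + 1 WHERE bannerid = 50;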

A subsequent VACUUM brings the return times back to where they started - but
I can't run VACUUM every other minute, can I? And it vacuums exactly as many
tuples as I updated.. sure thing:
INFO: Removed 5000 tuples in 95 pages.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Pages 95: Changed 1, Empty 0; Tup 7: Vac 5000, Keep 0, UnUsed 3.
Total CPU 0.01s/0.03u sec elapsed 0.04 sec.

What I can't explain is the query statistics output:
'In the beginning':
DEBUG: StartTransactionCommand
LOG: query: UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
DEBUG: ProcessQuery
DEBUG: CommitTransactionCommand
LOG: QUERY STATISTICS
! system usage stats:
! 0.001110 elapsed 0.000000 user 0.000000 system sec
! [0.940000 user 0.080000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 0/0 [437/192] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks: 0 read, 0 written, buffer hit rate = 100.00%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 0 read, 0 written

After 5000 updates:
DEBUG: StartTransactionCommand
LOG: query: UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
DEBUG: ProcessQuery
DEBUG: CommitTransactionCommand
LOG: QUERY STATISTICS
! system usage stats:
! 0.002503 elapsed 0.000000 user 0.000000 system sec
! [8.400000 user 0.740000 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 0/0 [711/192] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 0/0 [0/0] voluntary/involuntary context switches
! buffer usage stats:
! Shared blocks: 0 read, 0 written, buffer hit rate = 100.00%
! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
! Direct blocks: 0 read, 0 written

I checked all 5000 log entries, and (obviously?) it never touches the filesystem.
What puzzles me is that the per-query 'elapsed' time stays small, while the
user/sys totals grow linearly (which matches the wall-clock degradation I see).

The effect is the same (only the ranges differ) with a default or "tuned"
postgresql.conf, and on both the Debian and the RHEL machine.

I don't know where to go from here. I read the whole changelog/history from
7.3.4 up to 7.4.2 and only found 'auto vacuum' - which might be a deal, but it
needs the statistics collector running permanently (really?) and would thus eat
response time on the other hand.

And just for the record, I tried this on MySQL 4.0.18, where the return time is
generally faster (don't care), but it also doesn't degrade even over 50,000
updates (do care here >:).

The next step is profiling postgres to see where it loses the time, but maybe
someone can already point me at something.

Any pointer is appreciated: a link to an archived mail (search on the archives
is quite slow, too? :) ), a pointer to some "hidden" doc I might have missed, or
a different SQL approach to counting banner views in pgsql.
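
(One option along those lines, sketched with a hypothetical banner_hits table:
make the hot path an INSERT into an append-only log and aggregate from it, so
no single counter row collects dead versions.)

-- hypothetical append-only hit log; the hot path never UPDATEs a row
CREATE TABLE banner_hits (bannerid integer NOT NULL, hit_time timestamp DEFAULT now());
INSERT INTO banner_hits (bannerid) VALUES (50);          -- once per banner view
SELECT count(*) FROM banner_hits WHERE bannerid = 50;    -- current view count
-- the log can periodically be summed into banner.counterhalf, then deleted
-- and vacuumed in a maintenance window, keeping both tables small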

Thanks for any consideration,
--
Philipp Buehler, aka fips | <double-p>

cvs -d /dev/myself commit -m "it's my life" dont/you/forget


Nov 23 '05 #1
Philipp Buehler <pb********@mlsub.buehler.net> writes:
> While running
> UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> several thousand times, the return times degrade (roughly linearly).

You need to vacuum occasionally ...

> A subsequent VACUUM brings the return times back to where they started - but
> I can't run VACUUM every other minute, can I?

Sure you can.

regards, tom lane


Nov 23 '05 #2
On Wed, Apr 21, 2004 at 19:52:15 +0200,
Philipp Buehler <pb********@mlsub.buehler.net> wrote:

> While running
> UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> several thousand times, the return times degrade (roughly linearly).

This is to be expected. Postgres uses MVCC, and every time you do an update a
new row version is created.

> A subsequent VACUUM brings the return times back to where they started - but
> I can't run VACUUM every other minute, can I? And it vacuums exactly as many
> tuples as I updated.. sure thing:

Why not? You only have to vacuum this one table. Vacuuming it once a minute
should be doable.
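
A sketch of what that could look like (the database name in the comment is
hypothetical):

-- vacuuming only the hot table is cheap enough to run very frequently
VACUUM banner;
-- e.g. from cron, once a minute (database name is hypothetical):
-- * * * * *  psql -d yourdb -c 'VACUUM banner;'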


Nov 23 '05 #4
On 21/04/2004, Tom Lane <tg*@sss.pgh.pa.us> wrote to Philipp Buehler:

> > While running
> > UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
> > several thousand times, the return times degrade (roughly linearly).
>
> You need to vacuum occasionally ...
>
> > A subsequent VACUUM brings the return times back to where they started - but
> > I can't run VACUUM every other minute, can I?
>
> Sure you can.

Yes, it's probably bearable. At least I am sure now that it's a systematic
thing I have to deal with, and not some fubar.

Thanks also for the other hints/URLs I got (pg_autovacuum in contrib, etc.).
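
(For reference, pg_autovacuum from contrib relies on the row-level statistics
collector; a minimal sketch of the postgresql.conf entries involved, assuming
the 7.3/7.4 parameter names:)

# statistics collector settings pg_autovacuum needs to see per-table activity
stats_start_collector = true
stats_row_level = true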

ciao
--
Philipp Buehler, aka fips | <double-p>

cvs -d /dev/myself commit -m "it's my life" dont/you/forget


Nov 23 '05 #6
I hope I understand your question...

All the old tuples that were current before your updates are still in the
heap. The executor has to do the equivalent of 'WHERE
tuple_visible_to_current_transaction' on every tuple in the heap. The more
updates you do, the more tuples have to be visited on subsequent update
runs.

This is why vacuum exists, and it's the price we pay for the otherwise
excellent transactional model in PG.
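
A sketch of how to watch this happen (relpages/reltuples in pg_class are only
refreshed by VACUUM/ANALYZE):

-- heap bookkeeping for the hot table; after thousands of updates the page
-- count keeps growing even though only seven rows are ever live
SELECT relname, relpages, reltuples FROM pg_class WHERE relname = 'banner';
-- VACUUM VERBOSE banner; then reports the dead row versions it reclaims,
-- matching the 'Removed 5000 tuples in 95 pages' output quoted above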

HTH :-)
Glen Parker
-----Original Message-----
From: pg*****************@postgresql.org
[mailto:pg*****************@postgresql.org]On Behalf Of Philipp Buehler
Sent: Wednesday, April 21, 2004 10:52 AM
To: pg***********@postgresql.org
Subject: [GENERAL] 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massively over time

While running
UPDATE banner SET counterhalf=counterhalf+1 WHERE BannerID=50
several thousand times, the return times degrade (roughly linearly).
The relation banner currently has *seven* rows, so it doesn't matter
(and I checked :>) whether counterhalf is indexed or not.

A subsequent VACUUM brings the return times back to where they started - but
I can't run VACUUM every other minute, can I? And it vacuums exactly as many
tuples as I updated.. sure thing:
INFO: Removed 5000 tuples in 95 pages.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Pages 95: Changed 1, Empty 0; Tup 7: Vac 5000, Keep 0, UnUsed 3.
Total CPU 0.01s/0.03u sec elapsed 0.04 sec.

< big snip >


Nov 23 '05 #8