Unresponsive vacuum process


I have a 7.2.1 backend running VACUUM which appears to be blocking all other
processes. I have issued SIGTERM and SIGINT directly to that backend and
also killed the client process, but the VACUUM continues chewing up CPU and
blocking others. I know we need an upgrade; does anyone know how I can get
this VACUUM backend killed without taking down all the blocked/pending
UPDATEs and INSERTs?

Nov 23 '05 #1
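For anyone hitting the same situation, here is a minimal sketch of the usual shell sequence for finding and signalling a single backend; the pid 12345 is a placeholder, and ps flags/output vary by platform:

    # Find the backend running the VACUUM (exact ps output varies by OS).
    ps auxww | grep postgres | grep -i vacuum

    # Ask that backend to cancel its current query (same effect as a
    # client-issued query cancel).
    kill -INT 12345

    # If the cancel is ignored, terminate just that backend; this does
    # not touch the postmaster or other sessions.
    kill -TERM 12345

    # Never send SIGKILL (kill -9) to a single backend: the postmaster
    # treats it as a backend crash and resets every other connection.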
"Ed L." <pg***@bluepolka.net> writes:
> I have a 7.2.1 backend running VACUUM which appears to be blocking all other
> processes. I have issued SIGTERM and SIGINT directly to that backend and
> also killed the client process, but the VACUUM continues chewing up CPU and
> blocking others.


Hmph. AFAICS 7.2 does contain CHECK_FOR_INTERRUPTS calls within all the
major VACUUM loops, so it should respond to SIGINT in a reasonably
timely fashion. I'd think it was blocked on someone else's lock if it
weren't that you say it's still consuming CPU. Can you attach to the
troublesome backend with gdb and get a stack trace to show where it is?

regards, tom lane


Nov 23 '05 #2
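The gdb session Tom is asking for looks roughly like this; 12345 again stands for the stuck backend's pid, and on some platforms you may need to give gdb the path to the postgres executable explicitly:

    # Attach to the running backend (it is paused while gdb is attached).
    gdb -p 12345

    # At the (gdb) prompt: print the stack, then detach so the backend
    # resumes running.
    (gdb) bt
    (gdb) detach
    (gdb) quit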
On Wednesday May 19 2004 5:49, Tom Lane wrote:
"Ed L." <pg***@bluepolka.net> writes:
I have a 7.2.1 backend running VACUUM which appears to be blocking all
other processes. I have issued SIGTERM and SIGINT directly to that
backend and also killed the client process, but the VACUUM continues
chewing up CPU and blocking others.


Hmph. AFAICS 7.2 does contain CHECK_FOR_INTERRUPT calls within all the
major VACUUM loops, so it should respond to SIGINT in a reasonably
timely fashion. I'd think it was blocked on someone else's lock if it
weren't that you say it's still consuming CPU. Can you attach to the
troublesome backend with gdb and get a stack trace to show where it is?


Had to kill it, couldn't wait. Looks like we'll punt this one and just
upgrade the cluster, but if not, I'll try to capture. Thanks.

Nov 23 '05 #3
On 5/19/04 6:49 PM, "Tom Lane" <tg*@sss.pgh.pa.us> wrote:
> > I have a 7.2.1 backend running VACUUM which appears to be blocking all other
> > processes. I have issued SIGTERM and SIGINT directly to that backend and
> > also killed the client process, but the VACUUM continues chewing up CPU and
> > blocking others.
>
> Hmph. AFAICS 7.2 does contain CHECK_FOR_INTERRUPTS calls within all the
> major VACUUM loops, so it should respond to SIGINT in a reasonably
> timely fashion. I'd think it was blocked on someone else's lock if it
> weren't that you say it's still consuming CPU. Can you attach to the
> troublesome backend with gdb and get a stack trace to show where it is?


I've had 7.4.1 take up to an hour and a half to abort a vacuum after a
SIGINT.

Wes

Nov 23 '05 #4
<we****@syntegra.com> writes:
> On 5/19/04 6:49 PM, "Tom Lane" <tg*@sss.pgh.pa.us> wrote:
> > ... I'd think it was blocked on someone else's lock if it
> > weren't that you say it's still consuming CPU. Can you attach to the
> > troublesome backend with gdb and get a stack trace to show where it is?
>
> I've had 7.4.1 take up to an hour and a half to abort a vacuum after a
> SIGINT.


The request for information still stands. I can't do a thing with
undetailed anecdotes.

regards, tom lane


Nov 23 '05 #5
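A closing note for readers with the same symptom on newer releases: Tom's "blocked on someone else's lock" hypothesis is easy to test from 7.3 onward, where the pg_locks view exists (7.2 does not have it). A sketch, with mydb as a placeholder database name:

    # List lock requests that have not been granted; a backend stuck
    # waiting on a lock shows up here with granted = f.
    psql -d mydb -c "SELECT pid, relation, mode, granted FROM pg_locks WHERE NOT granted;"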
