
open of /usr/lib/pgsql/data/pg_clog/0F3E failed

Hi:

I'm running a large database on PostgreSQL 7.1.3. Twenty days ago the
database failed with an alarming and not very descriptive error:

pg_exec() query failed: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while
processing the request.

I lost some data and had to recreate a table without ever learning the
reason for the failure. Now I get this error:

FATAL 2: open of /usr/lib/pgsql/data/pg_clog/0F3E failed: No existe el
fichero o el directorio [Spanish: "No such file or directory"]
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

When I VACUUM the affected table I get:

FATAL 2: open of /usr/lib/pgsql/data/pg_clog/0F5C failed: No existe el
fichero o el directorio [Spanish: "No such file or directory"]
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: NOTICE:
Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
Failed.

I guess I have once again lost some data in one table.

Can someone help me find the origin of these errors? How can I avoid them?

Thanks in advance.


Nov 23 '05 #1
6 Replies


ruben <ru*****@superguai.com> writes:
I lost some data and had to recreate a table without ever learning the
reason for the failure. Now I get this error:

FATAL 2: open of /usr/lib/pgsql/data/pg_clog/0F3E failed: No existe el
fichero o el directorio [Spanish: "No such file or directory"]


This looks like one of the common symptoms of corrupted data
(specifically, you have a trashed transaction number in some row header,
which causes the code to try to look up the transaction status in a
portion of clog that doesn't exist yet).
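
For a sense of why that file name is suspicious, you can decode it. The
figures below assume the usual clog layout (2 status bits per transaction,
so 4 transactions per byte, 8 kB pages, 32 pages per segment file); the
file name is just the segment number in hex:

    # 4 statuses/byte * 8192 bytes/page * 32 pages/segment = 1,048,576
    # transactions per clog segment file.
    printf '%d\n' 0x0F3E          # segment number: 3902
    echo $(( 3902 * 1048576 ))    # first XID in that segment: 4091543552

A transaction counter of roughly 4.09 billion sits near the top of the
32-bit XID space, far beyond anything this installation could plausibly
have reached, which is why a garbage value in a row header is the likely
culprit.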

The most likely bet is that you are having intermittent hardware
problems causing data corruption. I'd suggest testing the hardware as
best you can. memtest86 and badblocks are a couple of widely-used test
programs for RAM and disk respectively.
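
As a concrete starting point, a disk check might look like the sketch
below; the device name is only an example, and memtest86 is booted from
its own media rather than run from the live system:

    # Read-only surface scan (-s: show progress, -v: verbose). Substitute
    # your actual data disk for /dev/hda.
    badblocks -sv /dev/hda
    # A read-write pass (-w) is more thorough but DESTROYS all data on the
    # device, so use it only on a disk you can restore afterwards.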

As for trying to recover your data, you can look in the mailing list
archives for prior discussions of coping with data corruption. The
particular rows or pages that are corrupt are probably unrecoverable,
but you can zero them out and at least make the rest of the table
readable. First you need to stabilize the hardware, though.
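
One stopgap that comes up in those archive threads for this exact symptom
is to fabricate the missing clog segment filled with zeroes, so that the
status lookup stops crashing the backend. This is a sketch, not a fix:
stop the postmaster and take a file-level backup of the data directory
first, and note that the 256 kB segment size assumes the default 8 kB
block size:

    # Create the segment the error message names (repeat for 0F5C and any
    # others that turn up). Zeroed status bits make the bogus transactions
    # look uncommitted, so the corrupt rows disappear from view instead of
    # killing the server; whatever was in those rows is still lost.
    dd if=/dev/zero of=/usr/lib/pgsql/data/pg_clog/0F3E bs=256k count=1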

regards, tom lane


Nov 23 '05 #2

Hi:

Lately I have run into some problems with postmaster fatal errors,
corrupted data and indexes, lost tuples, etc. I was advised to check the
hardware, and I'll do so.

Today, one of the processes that runs daily took 4 hours when it normally
takes about 5 minutes. After a VACUUM ANALYZE of the affected tables it
took just as long to finish; then I recreated (dropped and created) the
index of the affected table, and the process was fast again. My question
is: isn't it enough to run a VACUUM to optimize a table and its indexes?
Is it advisable to recreate indexes from time to time?
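
For concreteness, the rebuild described above might look like this sketch
(database, table, index, and column names are placeholders):

    psql -d mydb -c 'VACUUM ANALYZE big_table;'
    # On releases before 7.4, VACUUM does not shrink a bloated index, so
    # the index is rebuilt by hand:
    psql -d mydb -c 'DROP INDEX big_table_col_idx;'
    psql -d mydb -c 'CREATE INDEX big_table_col_idx ON big_table (col);'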

Thanks in advance.


Nov 23 '05 #3

In article <41**************@superguai.com>,
ruben <ru*****@superguai.com> writes:
Today, one of the processes that runs daily took 4 hours when it normally
takes about 5 minutes. After a VACUUM ANALYZE of the affected tables it
took just as long to finish; then I recreated (dropped and created) the
index of the affected table, and the process was fast again. My question
is: isn't it enough to run a VACUUM to optimize a table and its indexes?
Is it advisable to recreate indexes from time to time?


This was necessary in PostgreSQL up to 7.3.x, but 7.4.x is supposed to
fix that. What version are you running?

Nov 23 '05 #4

Thanks Harald, I'm running PostgreSQL 7.1.3.


Nov 23 '05 #5

Unfortunately, the administrative overhead in 7.1.3 is noticeably higher
than it is in 7.4. The overhead should be lowered even more in 8.0 with
the integration of the autovacuum daemon into the backend process.
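
In the meantime, with no daemon to do it, routine vacuuming is typically
driven from cron. A minimal example, assuming the vacuumdb wrapper shipped
with PostgreSQL is on the PATH and the database is named mydb:

    # crontab entry: VACUUM ANALYZE the whole database nightly at 03:00
    0 3 * * * vacuumdb --analyze mydb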


Nov 23 '05 #6

Scott Marlowe wrote:
Unfortunately, the administrative overhead in 7.1.3 is noticeably higher
than it is in 7.4. The overhead should be lowered even more in 8.0 with
the integration of the autovacuum daemon into the backend process.


Autovacuum may be integrated in version 8.1 rather than version 8.0.
In any case, pg_autovacuum alone is not enough; I suggest performing a
VACUUM FULL and a REINDEX at least once a day on heavily updated tables,
along the lines of the sketch below.
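
One way to script that suggestion (database and table names are
placeholders); note that VACUUM FULL takes an exclusive lock on the table,
so it belongs in a quiet maintenance window:

    psql -d mydb -c 'VACUUM FULL hot_table;'
    psql -d mydb -c 'REINDEX TABLE hot_table;'
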
Regards
Gaetano Mendola
Nov 23 '05 #7
