
Vacuum Error

I am running 7.2 and when doing a vacuum I am getting the following
error....

ERROR: Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index

Where do I start to fix this?

--
Dave Smith
CANdata Systems Ltd
416-493-9020

Nov 12 '05 #1


Quoting Dave Smith <da********@candata.com>:
> I am running 7.2 and when doing a vacuum I am getting the following
> error....
>
> ERROR: Cannot insert a duplicate key into unique index
> pg_statistic_relid_att_index
>
> Where do I start to fix this?


I'm not 100% sure this will work, but I would try rebuilding the index:

REINDEX INDEX pg_statistic_relid_att_index;

You have to be the postgres superuser to do this (and since pg_statistic
is a system catalog, the REINDEX reference page for your release may
tell you to run it from a standalone backend; check it first).

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---------------------------------------------------------
Martín Marqués | Programmer, DBA
Centro de Telemática | Administrator
Universidad Nacional del Litoral
---------------------------------------------------------


Nov 12 '05 #2

Dave Smith <da********@candata.com> writes:
> I am running 7.2 and when doing a vacuum I am getting the following
> error.... ERROR: Cannot insert a duplicate key into unique index
> pg_statistic_relid_att_index


Hmm, if it were a slightly newer version I'd be interested in how you
got into this state, but since it's 7.2 I'll write it off as an old
bug. The easiest way out, seeing that pg_statistic is all derived data,
is just:

    DELETE FROM pg_statistic;
    (possibly VACUUM FULL pg_statistic here)
    re-ANALYZE everything

You should try the nearby suggestion to REINDEX first, but I'm betting
that that doesn't help.
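
For reference, spelled out in full, the recovery sequence Tom describes
would look something like this, run as the postgres superuser (ANALYZE
with no table name re-analyzes every table in the current database):

    -- Throw away the derived statistics rows, duplicates included:
    DELETE FROM pg_statistic;

    -- Reclaim the space in the catalog:
    VACUUM FULL pg_statistic;

    -- Rebuild statistics for the whole database:
    ANALYZE;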

regards, tom lane


Nov 12 '05 #3

On Mon, 12 Jan 2004, Tom Lane wrote:
> The easiest way out, seeing that pg_statistic is all derived data,
> is just:
>     DELETE FROM pg_statistic;
>     (possibly VACUUM FULL pg_statistic here)
>     re-ANALYZE everything
> You should try the nearby suggestion to REINDEX first, but I'm betting
> that that doesn't help.


Didn't this happen with parallel vacuum / analyzes running in the olden
days?

Nov 12 '05 #4

I believe this error usually comes about due to OID wraparound.

I have experienced this error many times, but not once since I changed
all table definitions to WITHOUT OIDS.

The fix Tom suggested is only temporary. You will need to back up your
database and reload it. The long-term solution is to change every table
to WITHOUT OIDS, that is, if your application is not using OIDs.

The way I did it was to edit the output of pg_dump, making a global
change so that every CREATE TABLE statement includes WITHOUT OIDS. If
any of your functions or client-side code use temp tables, they also
need changing.
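
For example, each table definition in the edited dump ends up in this
form (the table here is hypothetical, just to show the clause):

    CREATE TABLE invoices (
        invoice_id  integer PRIMARY KEY,
        total       numeric(10,2)
    ) WITHOUT OIDS;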

The longer you leave it the worse it gets.

Good luck

Thanks

Andrew Bartley




Nov 12 '05 #5

"scott.marlowe" <sc***********@ihs.com> writes:
Dave Smith <da********@candata.com> writes:
I am running 7.2 and when doing a vacuum I am getting the following
error....
ERROR: Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index

Didn't this happen with parallel vacuum / analyzes running in the olden
days?


I don't think it could happen in 7.2, because that release took
ExclusiveLock on pg_statistic while inserting rows.

Later releases use a weaker lock (to permit concurrent ANALYZE of
different tables) and are subject to "concurrent update" errors if
you try to run two ANALYZEs of the same table at the same time.
But it doesn't sound like Dave is doing that, anyway.
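
In other words, hitting the later-release failure would take two
overlapping sessions doing something like this (table name is
hypothetical):

    -- session 1:
    ANALYZE mytable;

    -- session 2, started before session 1 finishes:
    ANALYZE mytable;
    -- one of the two can fail with a "concurrent update" error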

regards, tom lane


Nov 12 '05 #6

"Andrew Bartley" <ab******@evolvosystems.com> writes:
I believe this error usually comes about due to OID wrapping.


No; an OID collision would have occurred when you tried to create a
table. If two tables are present in pg_class then they have different
OIDs, and shouldn't have any conflicts in pg_statistic.

Since my last message I've thought of a possibly plausible explanation:
the bt_moveright bug that was isolated just a few months ago. If two
backends were analyzing some tables (not necessarily the same table) at
the same time, and one of them caused the first root-page split in
pg_statistic_relid_att_index, it would be possible for the other one to
miss spotting an existing row for the OID/attnum it wanted to insert.
I think this could lead to two valid entries for the same OID/attnum in
pg_statistic, and consequently a persistent error on every subsequent
attempt to analyze that table.
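
A quick way to check for that end state is to look for duplicated
OID/attnum pairs directly, with a query along these lines (a sketch
against the pg_statistic catalog columns):

    SELECT starelid, staattnum, count(*)
    FROM pg_statistic
    GROUP BY starelid, staattnum
    HAVING count(*) > 1;

Any row returned is a table/column pair carrying more than one
statistics entry.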

This seems moderately plausible because pg_statistic_relid_att_index
would fit in a single btree page up till about 300-400 entries, which
is about right for a moderate-sized database (I see 299 pg_statistic
entries in the current regression database, for example). First split
could easily happen in a database that had been running for awhile.

The relevant CVS log entry is

2003-07-29 18:18 tgl

* src/backend/access/nbtree/: nbtsearch.c (REL7_3_STABLE),
nbtsearch.c (REL7_2_STABLE), nbtsearch.c: Fix longstanding error in
_bt_search(): should moveright at top of loop not bottom.
Otherwise we fail to moveright when the root page was split while
we were "in flight" to it. This is not a significant problem when
the root is above the leaf level, but if the root was also a leaf
(ie, a single-page index just got split) we may return the wrong
leaf page to the caller, resulting in failure to find a key that is
in fact present. Bug has existed at least since 7.1, probably
forever.

(Note that although the patch was committed into 7.2 series, there has
been no 7.2 release since then. You could pull REL7_2_STABLE tip if you
wanted to build a 7.2-series server with this fix in place.)

regards, tom lane


Nov 12 '05 #7

On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
> "Andrew Bartley" <ab******@evolvosystems.com> writes:
>> I believe this error usually comes about due to OID wraparound.
>
> No; an OID collision would have occurred when you tried to create a
> table. If two tables are present in pg_class then they have different
> OIDs, and shouldn't have any conflicts in pg_statistic.


How would that OID collision manifest? Do you think the error message
might look similar?


Nov 12 '05 #8

Kragen Sitaker <kr**********@airwave.com> writes:
> On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
>> No; an OID collision would have occurred when you tried to create a
>> table. If two tables are present in pg_class then they have different
>> OIDs, and shouldn't have any conflicts in pg_statistic.
>
> How would that OID collision manifest? Do you think the error message
> might look similar?


Similar, but referring to pg_class_oid_index.
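
That is, putting Tom's answer together with the original report, the
failing CREATE TABLE would presumably produce something like this
(reconstructed for illustration, not a captured message):

    ERROR: Cannot insert a duplicate key into unique index
    pg_class_oid_index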

regards, tom lane


Nov 22 '05 #9
