Bytes | Software Development & Data Engineering Community

Vacuum Error

I am running 7.2 and when doing a vacuum I am getting the following
error....

ERROR: Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index
Where do I start to fix this?

--
Dave Smith
CANdata Systems Ltd
416-493-9020
---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 12 '05 #1
Quoting Dave Smith <da********@candata.com>:
I am running 7.2 and when doing a vacuum I am getting the following
error....

ERROR: Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index
Where do I start to fix this?


I'm not 100% sure this will work, but I would try rebuilding the index:

REINDEX INDEX pg_statistic_relid_att_index;

You have to be the postgres superuser to do this.

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---------------------------------------------------------
Martín Marqués | Programador, DBA
Centro de Telemática | Administrador
Universidad Nacional del Litoral
---------------------------------------------------------


Nov 12 '05 #2
Dave Smith <da********@candata.com> writes:
I am running 7.2 and when doing a vacuum I am getting the following
error.... ERROR: Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index


Hmm, if it were a slightly newer version I'd be interested in how you
got into this state, but since it's 7.2 I'll write it off as an old
bug. The easiest way out, seeing that pg_statistic is all derived data,
is just
DELETE FROM pg_statistic;
(possibly VACUUM FULL pg_statistic here)
re-ANALYZE everything
You should try the nearby suggestion to REINDEX first, but I'm betting
that that doesn't help.
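
Spelled out as commands, that recovery path would be (a sketch; it must be run as the postgres superuser, since pg_statistic is a system catalog):

```sql
DELETE FROM pg_statistic;   -- safe: pg_statistic holds only derived statistics
VACUUM FULL pg_statistic;   -- reclaim the space left by the deleted rows
ANALYZE;                    -- regenerate statistics for every table
```

Until the re-ANALYZE completes, the planner falls back to default selectivity estimates, so query plans may be poor in the interim.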

regards, tom lane


Nov 12 '05 #3
On Mon, 12 Jan 2004, Tom Lane wrote:
The easiest way out, seeing that pg_statistic is all derived data, is
just DELETE FROM pg_statistic; (possibly VACUUM FULL pg_statistic here)
re-ANALYZE everything. You should try the nearby suggestion to REINDEX
first, but I'm betting that that doesn't help.


Didn't this happen with parallel vacuum / analyzes running in the olden
days?

Nov 12 '05 #4
I believe this error usually comes about due to OID wrapping.

I have experienced this error many times, but not once since I changed all
table definitions to "without oids".

The fix Tom has suggested is only temporary. You will need to back up
your database and reload it. The long-term solution is to change every
table to "without oids", assuming your application is not using oids.

The way I did it was to edit the output of pg_dump, making a global change
so that every CREATE TABLE statement includes "without oids". If any of
your functions or client-side code use temp tables, they also need
changing.
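
After editing, each table definition in the dump would look something like this (table and column names here are hypothetical):

```sql
CREATE TABLE invoices (
    invoice_id  integer NOT NULL,
    total       numeric(10,2)
) WITHOUT OIDS;
```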

The longer you leave it, the worse it gets.

Good luck

Thanks

Andrew Bartley

-----Original Message-----
From: pg*****************@postgresql.org
[mailto:pg*****************@postgresql.org]On Behalf Of Tom Lane
Sent: Tuesday, 13 January 2004 9:31 AM
To: Dave Smith
Cc: pg***********@postgresql.org
Subject: Re: [GENERAL] Vacuum Error

Nov 12 '05 #5
"scott.marlowe" <sc***********@ihs.com> writes:
Didn't this happen with parallel vacuum / analyzes running in the olden
days?


I don't think it could happen in 7.2, because that release took
ExclusiveLock on pg_statistic while inserting rows.

Later releases use a weaker lock (to permit concurrent ANALYZE of
different tables) and are subject to "concurrent update" errors if
you try to run two ANALYZEs of the same table at the same time.
But it doesn't sound like Dave is doing that, anyway.

regards, tom lane


Nov 12 '05 #6
"Andrew Bartley" <ab******@evolvosystems.com> writes:
I believe this error usually comes about due to OID wrapping.


No; an OID collision would have occurred when you tried to create a
table. If two tables are present in pg_class then they have different
OIDs, and shouldn't have any conflicts in pg_statistic.

Since my last message I've thought of a possibly plausible explanation:
the bt_moveright bug that was isolated just a few months ago. If two
backends were analyzing some tables (not necessarily the same table) at
the same time, and one of them caused the first root-page split in
pg_statistic_relid_att_index, it would be possible for the other one to
miss spotting an existing row for the OID/attnum it wanted to insert.
I think this could lead to two valid entries for the same OID/attnum in
pg_statistic, and consequently a persistent error on every subsequent
attempt to analyze that table.
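
If that is what happened, the damage should be directly visible: a query along these lines (a sketch against the pg_statistic catalog) would list any OID/attnum pairs that have more than one row:

```sql
SELECT starelid, staattnum, count(*)
  FROM pg_statistic
 GROUP BY starelid, staattnum
HAVING count(*) > 1;
```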

This seems moderately plausible because pg_statistic_relid_att_index
would fit in a single btree page up till about 300-400 entries, which
is about right for a moderate-sized database (I see 299 pg_statistic
entries in the current regression database, for example). First split
could easily happen in a database that had been running for awhile.
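
To gauge how close a given installation is to that first split, one could count the statistics rows and look at the index's size in pages (a sketch; relpages is only updated by VACUUM and ANALYZE, so it may lag the true size):

```sql
SELECT count(*) FROM pg_statistic;

SELECT relname, relpages
  FROM pg_class
 WHERE relname = 'pg_statistic_relid_att_index';
```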

The relevant CVS log entry is

2003-07-29 18:18 tgl

* src/backend/access/nbtree/: nbtsearch.c (REL7_3_STABLE),
nbtsearch.c (REL7_2_STABLE), nbtsearch.c: Fix longstanding error in
_bt_search(): should moveright at top of loop not bottom.
Otherwise we fail to moveright when the root page was split while
we were "in flight" to it. This is not a significant problem when
the root is above the leaf level, but if the root was also a leaf
(ie, a single-page index just got split) we may return the wrong
leaf page to the caller, resulting in failure to find a key that is
in fact present. Bug has existed at least since 7.1, probably
forever.

(Note that although the patch was committed into 7.2 series, there has
been no 7.2 release since then. You could pull REL7_2_STABLE tip if you
wanted to build a 7.2-series server with this fix in place.)

regards, tom lane


Nov 12 '05 #7
On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
"Andrew Bartley" <ab******@evolvosystems.com> writes:
I believe this error usually comes about due to OID wrapping.


No; an OID collision would have occurred when you tried to create a
table. If two tables are present in pg_class then they have different
OIDs, and shouldn't have any conflicts in pg_statistic.


How would that OID collision manifest? Do you think the error message
might look similar?


Nov 12 '05 #8
Kragen Sitaker <kr**********@airwave.com> writes:
On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
No; an OID collision would have occurred when you tried to create a
table. If two tables are present in pg_class then they have different
OIDs, and shouldn't have any conflicts in pg_statistic.
How would that OID collision manifest? Do you think the error message
might look similar?


Similar, but referring to pg_class_oid_index.

regards, tom lane


Nov 22 '05 #9
