Bytes | Software Development & Data Engineering Community
Duplicate oid and primary key values


I have a table in a PG 7.4.1 database with 380 duplicate rows,
including duplicate oid and primary key values. Looking through our
backups, the duplicates did not exist before Friday, 02/06/2004. I'm
assuming neither pg_dumpall nor restoring from a pg_dumpall file will
eliminate such duplicates. We upgraded from 7.3.4 to 7.4.1 on
02/02/2004.

What can cause these duplicates?

The server has had several system crashes over the past few days and weeks.

Below is my session with the DB showing an example of the duplicates,
the table structure, and trigger functions.

cos=> select oid, recordnumber from client where recordnumber = 10970;
oid | recordnumber
---------+--------------
2427408 | 10970
(1 row)

cos=> select oid, recordnumber from client where recordnumber < 10971
and recordnumber > 10969;
oid | recordnumber
---------+--------------
2427408 | 10970
2427408 | 10970
(2 rows)

cos=> \d client
Table "public.client"
Column | Type |
Modifiers
----------------------+-------------------------+------------------------------------------------------------------
recordnumber | integer | not null default
nextval('public.client_recordnumber_seq'::text)
recordnumber_display | integer | not null
access | text |
add1 | character varying(255) |
add2 | character varying(255) |
add_id | integer |
age | character varying(255) |
akas | character varying(255) |
besttime | character varying(255) |
birthdate | character varying(255) |
city | character varying(255) |
country | character varying(255) |
creation_date | date | not null default now()
creation_time | time without time zone | not null default now()
custom1 | character varying(255) |
custom10 | character varying(255) |
custom2 | character varying(255) |
custom3 | character varying(255) |
custom4 | character varying(255) |
custom5 | character varying(255) |
custom6 | character varying(255) |
custom7 | character varying(255) |
custom8 | character varying(255) |
custom9 | character varying(255) |
disability | character varying(255) |
edit_date | date | not null default now()
edit_time | time without time zone | not null default now()
edit_id | integer |
education | character varying(255) |
email | character varying(255) |
employer | character varying(255) |
ethnicity | character varying(1) |
extra1 | character varying(255) |
extra8 | character varying(255) |
first | character varying(255) |
gender | character varying(255) |
incomelevel | character varying(255) |
incomenotes | character varying(255) |
insurance | character varying(255) |
last | character varying(255) |
location | character varying(255) |
maritalstatus | character varying(255) |
nochildren | character varying(255) |
otherphone | character varying(255) |
own_id | integer |
phhome | character varying(255) |
phwork | character varying(255) |
prefcontact | character varying(255) |
primarylang | character varying(255) |
referredby | character varying(255) |
restrictorg_id | integer |
serverid | character(5) | not null
ssno | character varying(255) |
state | character varying(255) |
status | integer |
title | character varying(255) |
transportation | character varying(255) |
zip | character varying(255) |
extra2 | character varying(255) |
temp_extra8 | character varying(255) |
extra10 | character varying(255) |
authorize | character varying(1000) |
Indexes:
"client_pkey" primary key, btree (recordnumber)
"idx_client_recordnum_display" unique, btree
(upper_concat(serverid, recordnumber_display))
"idx_client_first" btree ("first")
"idx_client_last" btree ("last")
"idx_client_restrictorg_id" btree (restrictorg_id)
"idx_client_serverid" btree (serverid)
"idx_client_status" btree (status)
Foreign-key constraints:
"$1" FOREIGN KEY (restrictorg_id) REFERENCES
agency_dbs(record_id) ON UPDATE CASCADE ON DELETE SET NULL
Triggers:
tgr_client_edit_date BEFORE UPDATE ON client FOR EACH ROW EXECUTE
PROCEDURE fnc_edit_date()
tgr_client_edit_time BEFORE UPDATE ON client FOR EACH ROW EXECUTE
PROCEDURE fnc_edit_time()
tgr_client_recordnumber_display BEFORE INSERT ON client FOR EACH
ROW EXECUTE PROCEDURE fnc_recordnumber_display()

cos=> \connect - postgres
You are now connected as new user "postgres".
cos=# select prosrc from pg_proc where proname = 'fnc_recordnumber_display';
prosrc
---------------------------------------------------------------------------------------
DECLARE
BEGIN
new.recordnumber_display = new.recordnumber;
RETURN new;
END;
(1 row)

cos=# select prosrc from pg_proc where proname = 'fnc_edit_date';
prosrc
----------------------------------------------------
BEGIN
new.edit_date := 'now';
RETURN new;
END;
(1 row)

cos=# select prosrc from pg_proc where proname = 'fnc_edit_time';
prosrc
----------------------------------------------------
BEGIN
new.edit_time := 'now';
RETURN new;
END;
(1 row)
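For reference, a query along these lines (an untested sketch against the table above) should enumerate all 380 duplicated keys, not just this one example:

```sql
-- Sketch: list every oid/primary-key pair that appears more than once.
SELECT oid, recordnumber, count(*)
FROM client
GROUP BY oid, recordnumber
HAVING count(*) > 1;
```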
--

Jeff Bohmer
VisionLink, Inc.
_________________________________
303.402.0170
www.visionlink.org
_________________________________
People. Tools. Change. Community.

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 22 '05 #1
On Tuesday 10 February 2004 17:10, Jeff Bohmer wrote:
> I have a table in a PG 7.4.1 database with 380 duplicate rows,
> including duplicate oid and primary key values. Looking through our
> backups, the duplicates did not exist before Friday, 02/06/2004. I'm
> assuming neither pg_dumpall nor restoring from a pg_dumpall file will
> eliminate such duplicates. We upgraded from 7.3.4 to 7.4.1 on
> 02/02/2004.
>
> What can cause these duplicates?
>
> The server has had several system crashes over the past few days and weeks.

Hardware related? Or is it not clear yet?

> Below is my session with the DB showing an example of the duplicates,
> the table structure, and trigger functions.
>
> cos=> select oid, recordnumber from client where recordnumber = 10970;
>  oid     | recordnumber
> ---------+--------------
>  2427408 | 10970
> (1 row)
>
> cos=> select oid, recordnumber from client where recordnumber < 10971
> and recordnumber > 10969;
>  oid     | recordnumber
> ---------+--------------
>  2427408 | 10970
>  2427408 | 10970
> (2 rows)


In the absence of Tom or some other more knowledgeable source, try these out.

SELECT xmin, cmin, xmax, cmax, ctid, oid, * FROM ... to see if these are two
versions of the same row. Perhaps stick an EXPLAIN ANALYSE on the front of
those queries and see if one is using an index and the other not.
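Spelled out against the table in question (column list abbreviated for readability), that check might look like:

```sql
-- Inspect the MVCC system columns. Differing xmin or ctid values would
-- mean the two rows are separate on-disk tuples, not one row returned twice.
SELECT xmin, cmin, xmax, cmax, ctid, oid, recordnumber
FROM client
WHERE recordnumber = 10970;
```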

It might be a corrupted INDEX, in which case REINDEX should fix it.
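If the index turns out to be the culprit, the rebuild would be along these lines (rebuilding either the one suspect index or every index on the table):

```sql
-- Rebuild just the primary key index:
REINDEX INDEX client_pkey;
-- Or rebuild all of the table's indexes:
REINDEX TABLE client;
```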
PS - you probably want now() or CURRENT_DATE etc. in the trigger functions
rather than 'now'. In PL/pgSQL the 'now' literal is cast to a timestamp when
the expression is first planned, so a long-lived backend can keep reusing
that stale value.
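For example, fnc_edit_date could be rewritten along these lines (a sketch using 7.4's quoted function bodies):

```sql
CREATE OR REPLACE FUNCTION fnc_edit_date() RETURNS trigger AS '
BEGIN
    -- now() is evaluated on every invocation, unlike the cached ''now'' literal.
    new.edit_date := now();
    RETURN new;
END;
' LANGUAGE plpgsql;
```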
--
Richard Huxton
Archonet Ltd


Nov 22 '05 #2
On Tue, 10 Feb 2004, Jeff Bohmer wrote:

> I have a table in a PG 7.4.1 database with 380 duplicate rows,
> including duplicate oid and primary key values. Looking through our
> backups, the duplicates did not exist before Friday, 02/06/2004. I'm
> assuming neither pg_dumpall nor restoring from a pg_dumpall file will
> eliminate such duplicates. We upgraded from 7.3.4 to 7.4.1 on
> 02/02/2004.
>
> What can cause these duplicates?
>
> The server has had several system crashes over the past few days and weeks.


Check your hardware. Bad memory, a bad CPU, or a bad hard drive can cause
these problems. PostgreSQL, like most databases, expects the hardware to
operate without errors.

Nov 22 '05 #3
