
Database design question: ugliness or referential integrity?

Hi all:

Let's say I'm designing a database (Postgres 7.3) with a list of all
email accounts on a certain server:

CREATE TABLE emails (
    clienteid INT4,
    direccion VARCHAR(512) PRIMARY KEY,
    login VARCHAR(128) NOT NULL,
    password VARCHAR(128),
    dominio VARCHAR(256)
);
The PHBs want a log of when each email account was added, which
technician added it, when it was deleted, when we had to reset its
password, etc.:

CREATE TABLE emails_log (
    direccion VARCHAR(512) REFERENCES emails,
    fecha DATE,
    autor VARCHAR(32),
    texto VARCHAR(1024)
);

"texto" would be a free form text field explaining what has been done.
Now, let's suppose that an email account is deleted, and six months
later another user requests it and we add it again. Do we want to keep
an audit trail for the old "version" of that account? The PHBs say yes.
Which means that we can't use the email address as primary key. Fine, we
add an "ID" column to the "emails" table and make it the primary key,
and point the foreign key in "emails_log" to that column. But now we
have two options, and here is my question:

-In "emails", the "direccion" column needs to be unique... but only for
the active email addresses (there can be 5, 10, or 20 dead addresses
called "lu***@domain2.com", but only one alive at the moment). We could
add an "active" boolean column to "emails", and write a custom
constraint to check this condition, but I find it ugly (and I saw
similar objections when another user came up with a similar problem some
time ago)...
-...Or we could create a table called "dead_emails", and add to it the
email addresses that we delete (using an ON DELETE trigger, perhaps).
Basically, store the deleted email accounts in another table... but then
we lose the referential integrity check in "emails_log".
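
(For concreteness, a minimal sketch of what that first option could look
like. In Postgres, one way to write the "custom constraint" is a partial
unique index; the index name here is made up:)

ALTER TABLE emails ADD COLUMN active BOOLEAN;
UPDATE emails SET active = true;
-- uniqueness is enforced only for rows where "active" is true,
-- so any number of dead duplicates can coexist with one live row
CREATE UNIQUE INDEX emails_direccion_live
    ON emails (direccion) WHERE active;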

The question is: what would you do? (I don't really like the idea of
creating yet another "dead_emails_log" table pointing to "dead_emails";
I find it almost as ugly as the first one).

Paulo Jan.
DDnet.


Nov 12 '05 #1
10 Replies


If I were doing this, I'd make a table:

CREATE TABLE email_event_log (
    email_address VARCHAR(512),
    event VARCHAR(16) CHECK (event IN ('ADD', 'DELETE', 'PASSWORD')),
    who_did_it VARCHAR(32),
    datestamp TIMESTAMP DEFAULT now()
);

Then you can have events logged when they happen: ADD, DELETE, PASSWORD,
etc. The CHECK constraint makes sure only legal events land in the event
column, for consistency, and you are good to go. Keeps the PHBs happy!

Easy to generate reports and find all that has happened on a given email
address, etc.
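
(For example, to pull everything that has happened to one address, using
the column names sketched above and a made-up address:)

SELECT datestamp, event, who_did_it
FROM email_event_log
WHERE email_address = 'someone@example.com'
ORDER BY datestamp;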

Scott


Nov 12 '05 #2

In my further discussion with Andrew offline, we came up with a joint
suggestion to have PostgreSQL do automatic auditing. This would be VERY
NICE, imho. Any input?

Scott wrote:
It seems like it would be nice if you could flip a toggle on a
table and have it automatically build audit entries in another table.

Andrew replied:
Yeah - that would be a great feature - automatic auditing...
Maybe you should post that to someone (whoever it would be?) at
PostgreSQL - sure, there would be major performance-hit problems (maybe
doing it at field/column level rather than table level would be better),
but it would be a boon for many...


Nov 12 '05 #3

On Wed, 29 Oct 2003, Scott Chapman wrote:
In my further discussion with Andrew offline, we came up with a joint
suggestion to have PostgreSQL do automatic auditing. This would be VERY
NICE, imho. Any input?

I like the idea. It would be kinda nice to do:

create table test (name text, id serial primary key)
with audit
(id keyid, 10 cycle, fifo|stop);

and have an auditing table with a historical view of the table up to 10
deep per key, and have it either fifo them so the ones older than 10
disappear or stop inserts into the parent when the history gets too deep.

I'd guess the proof of concept could be done in plpgsql, with the "with
audit" part programmed as a before trigger.
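
(A minimal proof-of-concept sketch of that before-trigger idea. All names
are made up, the cycle/fifo bookkeeping is left out, and it assumes the
plpgsql language is already installed in the database:)

CREATE TABLE test (name TEXT, id SERIAL PRIMARY KEY);

CREATE TABLE test_audit (
    id INT4,
    name TEXT,
    op VARCHAR(6),
    stamp TIMESTAMP DEFAULT now()
);

CREATE FUNCTION test_audit_fn() RETURNS TRIGGER AS '
BEGIN
    -- record the row as it looked, plus which operation touched it
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO test_audit (id, name, op)
            VALUES (OLD.id, OLD.name, TG_OP);
        RETURN OLD;
    END IF;
    INSERT INTO test_audit (id, name, op)
        VALUES (NEW.id, NEW.name, TG_OP);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER test_audit_trg
    BEFORE INSERT OR UPDATE OR DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE test_audit_fn();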


Nov 12 '05 #4

On Thursday 30 October 2003 06:38, scott.marlowe wrote:
I like the idea. It would be kinda nice to do:

create table test (name text, id serial primary key)
with audit
(id keyid, 10 cycle, fifo|stop);

and have an auditing table with a historical view of the table up to 10
deep per key, and have it either fifo them so the ones older than 10
disappear or stop inserts into the parent when the history gets too deep.


I wouldn't limit it to 10 layers deep; that should all be user-configurable.
Some implementations would need full history audits, etc. My skill with
triggers and plpgsql is not up to this task currently, but this is a
suggestion for the PostgreSQL developers.

Cordially,
Scott

Nov 12 '05 #5

On Thu, 30 Oct 2003, Scott Chapman wrote:
I wouldn't limit it to 10 layers deep; that should all be user-configurable.
Some implementations would need full history audits, etc. My skill with
triggers and plpgsql is not up to this task currently, but this is a
suggestion for the PostgreSQL developers.


No, I wouldn't either; that's what the cycle var is for. I.e.

create table test (name text, id serial primary key)
with audit
(id keyid, 0 cycle)

would set it to infinite, while:

create table test (name text, id serial primary key)
with audit
(1000 cycle, stop)

would automagically pick the pk but stop after 1,000 versions of the same
row were stored...

I might play with some of this as a plpgsql function if I get a few free
minutes.
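
(In the trigger sketch above, the "stop" variant could be an extra check
at the top of the function body. This is hypothetical, reusing the
made-up names from earlier; the doubled quotes are there because the
check sits inside the quoted function definition:)

    -- inside test_audit_fn(), before inserting the audit row:
    IF (SELECT count(*) FROM test_audit WHERE id = NEW.id) >= 1000 THEN
        RAISE EXCEPTION ''too many audit rows for key %'', NEW.id;
    END IF;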

Nov 12 '05 #6

> I wouldn't limit it to 10 layers deep; that should all be user-configurable.
> Some implementations would need full history audits, etc.

There are a few implementations to be found on gborg (?), using
C or plpgsql, respectively. Another one (which I wrote for
GnuMed) is in the GnuMed CVS below:

http://savannah.gnu.org/cgi-bin/view...gnumed/gnumed/

Ask for details if interested.

> My skill with triggers and plpgsql is not up to this task currently,
> but this is a suggestion for the PostgreSQL developers.

I should hope the developers spend their time on less trivial tasks
(as long as the auditing isn't mucking with the MVCC properties, that
is). A big thanks to them for making PostgreSQL what it is.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346


Nov 12 '05 #7

> There are a few implementations to be found on gborg (?), using
> C or plpgsql, respectively. Another one (which I wrote for
> GnuMed) is in the GnuMed CVS below:

http://savannah.gnu.org/cgi-bin/view...gnumed/gnumed/

But none of them implements the counter with either STOP or FIFO, so
that's something someone might want to play around with.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346


Nov 12 '05 #8



Is there an easy way (similar to COPY) to import fixed-width text files
directly into Postgres tables?

COPY is fine for files with delimited fields, but I have fixed-format text
files to import into tables.

Thanks,

Brent Wood

Nov 12 '05 #9

Brent Wood <b.****@niwa.co.nz> writes:
Is there an easy way (similar to COPY) to import fixed-width text files
directly into Postgres tables?

COPY is fine for files with delimited fields, but I have fixed-format text
files to import into tables.


There's no built-in method; you'll need to write a script of some sort
to import the data or convert it to delimited format.
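
(One way to do that conversion inside Postgres itself, for what it's
worth. A sketch with made-up file, table, and column names; it assumes
the data contains no tabs or backslashes, which COPY treats specially:)

CREATE TEMP TABLE rawlines (line TEXT);

-- a fixed-width line has no delimiter, so it lands whole in the one
-- column (server-side path; plain COPY FROM a file needs superuser)
COPY rawlines FROM '/tmp/fixed.dat';

-- carve the fields out by position
INSERT INTO target (field_a, field_b)
SELECT trim(substr(line, 1, 10)),
       trim(substr(line, 11, 5))
FROM rawlines;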

-Doug


Nov 12 '05 #10

In article <20**************************@storm.niwa.co.nz>,
Brent Wood <b.****@niwa.co.nz> writes:
Is there an easy way (similar to COPY) to import fixed-width text files
directly into Postgres tables?

COPY is fine for files with delimited fields, but I have fixed-format text
files to import into tables.


How about inserting the necessary delimiters with sed and piping sed's
output into "COPY FROM stdin"?

Nov 12 '05 #11
