Bytes | Developer Community
Trigger on Postgres for table synchronization

Dear my friends...

I am using SuSE Linux 9.1 with PostgreSQL. I am a beginner with
Postgres; I usually use MySQL.

I have 3 tables : appointment, appointment0 and
appointment1.

the fields of table "appointment" are as follows:
noapp* (int4): ID number of the appointment (PK)
custid (int4): Customer ID
salesid (int4): Sales ID
date (date): Date of appointment
time (time): Time of appointment
todo (char(150)): What's to do with them
done (char(1)): whether done (N/Y)
warned (char(1)): whether warned with a prompt
timestamp (timestamp): timestamp of the record

"appointment0" and "appointment1" have exactly the same field names
as "appointment".

But...
1. the rows of "appointment0" and "appointment1" are subsets of
"appointment"
2. "appointment0" holds the rows of "appointment" whose "done" field
is "Y"
3. "appointment1" holds the rows of "appointment" whose "done" field
is "N"

I want that when my program inserts, updates, or deletes a record in
"appointment", Postgres synchronizes the corresponding tables
(appointment0, appointment1, or both).

Is it possible to implement this strategy with a trigger? But how?
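[Editor's note] For what it's worth, a trigger-based version might look roughly like the sketch below. This is untested and assumes a reasonably modern PostgreSQL with PL/pgSQL available; the table and column names are taken from the post ("timestamp" is quoted because it collides with a type name):

```sql
-- Sketch only: keep appointment0/appointment1 in sync with appointment.
CREATE OR REPLACE FUNCTION sync_appointment() RETURNS trigger AS $$
BEGIN
    -- Drop any old copy of the row from both side tables.
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        DELETE FROM appointment0 WHERE noapp = OLD.noapp;
        DELETE FROM appointment1 WHERE noapp = OLD.noapp;
    END IF;

    -- Re-insert the current version into the matching side table.
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        IF NEW.done = 'Y' THEN
            INSERT INTO appointment0 VALUES (NEW.*);
        ELSE
            INSERT INTO appointment1 VALUES (NEW.*);
        END IF;
        RETURN NEW;
    END IF;

    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER appointment_sync
    AFTER INSERT OR UPDATE OR DELETE ON appointment
    FOR EACH ROW EXECUTE PROCEDURE sync_appointment();
```

As the replies point out, when the side tables are pure subsets of "appointment", views are usually the simpler answer.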

Where can I find good documentation about Postgres triggers,
especially PL/pgSQL?

Would anybody be so kind as to tell me the steps and the trigger
commands needed to implement this? Please....

Thank you very much in advance.


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 23 '05 #1
> I want that when my program inserts, updates, or deletes a record in
> "appointment", Postgres synchronizes the corresponding tables
> (appointment0, appointment1, or both).


Is there a reason you aren't doing this with views?
--
Mike Nolan


Nov 23 '05 #2
Try a view defined like:

CREATE VIEW appointment0 AS SELECT * FROM appointment WHERE done='Y';
CREATE VIEW appointment1 AS SELECT * FROM appointment WHERE done='N';

Then appointment0 and appointment1 are not real tables, but "virtual
tables". You can still do:

SELECT * FROM appointment0;
or
SELECT * FROM appointment1;

Now if you insert a record into appointment or update a record in
appointment it will automatically appear in the output of appointment0
or appointment1 depending on the value of "done". You never have to
insert into appointment0 or appointment1.
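[Editor's note] A quick illustration of that behavior, with hypothetical values (column names from the original post; "timestamp" quoted because it is a type name):

```sql
-- Rows inserted into the base table show up in the matching view.
INSERT INTO appointment (noapp, custid, salesid, date, time, todo,
                         done, warned, "timestamp")
VALUES (1, 10, 20, '2004-07-27', '09:00', 'Call the customer',
        'N', 'N', now());

SELECT noapp, done FROM appointment1;  -- the new row is visible here

UPDATE appointment SET done = 'Y' WHERE noapp = 1;
-- The row now shows up in appointment0 and disappears from appointment1.
```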

Regards,
Jeff Davis

On Tue, 2004-07-27 at 04:58, Prabu Subroto wrote:
> [original post quoted in full; snipped]

Nov 23 '05 #3

I'm a PostgreSQL newcomer, so correct me if I'm wrong... I also want
to ask another question.

I would have done this with a view too, because it's very simple to do
in PostgreSQL. You can also add rules (or triggers?) so that an insert
attempt on appointment0 or appointment1 (which would normally fail) is
rewritten as an insert into appointment with the 'done' value set
accordingly.

Now, I've been facing a related problem with tracking user sessions for a
web app. I want to use a table to store user sessions, both active
sessions and expired sessions for archiving. I also wanted it to look like
two different tables. I could have created one table with two views
(online and archived), or two tables.

In the end I went with two tables, because the online session table is
read and updated very often, so it had better be small and fit in the
cache, while the archive table will probably be huge and rarely used.
To keep better locality of reference I used two tables, and I created
functions to create a session, to update a session (pushing its
timeout a bit into the future), and to close a session. These
functions detect timed-out sessions in the "online" table and move
them to the "archive" table. I also have a cleanup function, called
from a cron job, which moves expired sessions to the archive table.
Advantages of this approach :
- There can be only one session for a given user in the "online" table,
which makes finding the session fast (userid = primary key).
- The online table has only one index for faster updating, this is the
primary key on userid.
Drawbacks :
- Much more complex than a view based approach.

Question: how huge is huge, i.e. how many records do I need in the
archive to make the two-table approach worth it? It is much more
complex.

On Tue, 27 Jul 2004 10:12:13 -0700, Jeff Davis <jd**********@empires.org>
wrote:
> [quoted: CREATE VIEW suggestion snipped]



Nov 23 '05 #4
On Wed, 2004-07-28 at 03:57, Pierre-Frédéric Caillaud wrote:
> [quoted: views plus rules so that inserts on appointment0/appointment1
> are rewritten as inserts into appointment]

That's correct. A rule is what you're looking for in order to insert
into a view.
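[Editor's note] A minimal sketch of such rules, assuming the views from the earlier CREATE VIEW example (untested; each rule forces "done" to the value its view selects on, so callers need not supply it):

```sql
-- Sketch: rewrite inserts on the views as inserts on the base table.
CREATE RULE appointment0_insert AS ON INSERT TO appointment0
DO INSTEAD
    INSERT INTO appointment (noapp, custid, salesid, date, time, todo,
                             done, warned, "timestamp")
    VALUES (NEW.noapp, NEW.custid, NEW.salesid, NEW.date, NEW.time,
            NEW.todo, 'Y', NEW.warned, NEW."timestamp");

CREATE RULE appointment1_insert AS ON INSERT TO appointment1
DO INSTEAD
    INSERT INTO appointment (noapp, custid, salesid, date, time, todo,
                             done, warned, "timestamp")
    VALUES (NEW.noapp, NEW.custid, NEW.salesid, NEW.date, NEW.time,
            NEW.todo, 'N', NEW.warned, NEW."timestamp");
```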
> [quoted: the two-table session-tracking design, and the question of
> how large the archive must be to justify it]


First off, you're on the right track. It's logically one table, but it
may be better to store it separately. Note that you will still have
locality of reference even if it's one big table, since recently added
records will be close to the end of the table and old records will be
at the beginning (this isn't guaranteed, but it's true in general).
Two tables will help with caching, however, since you should be able
to keep the small table in the cache. Caching is the main benefit:
over time, most of the records in the big archive table can be safely
ignored under normal operation (until you want to look at the archive)
and will never pollute the cache.

Here's what I'd use to determine whether the session table is big
enough to warrant two tables:
(1) Create an index on the session id field that you select on. I
assume here that you do a simple select from the session table like
"SELECT * FROM session WHERE session_id=1234567890".
(2) "VACUUM ANALYZE session" to make sure the planner has up-to-date
and accurate information on which it can base its plan.
(3) Explain your query like "EXPLAIN ANALYZE SELECT * FROM session
WHERE session_id=1234567890".
(4) If it does a sequential scan, the table is small enough to read in
its entirety in a few disk reads. If it does an index scan, there are
enough records to warrant several reads from different parts of the
disk instead of reading the whole table: one or more reads for the
index and then one read for the page containing the record. So, in
short, a seq scan means you have a small table and nothing much to
gain by separating the tables; an index scan means the table is big,
and you may have something to gain by separating it into two tables.
Here you have basically used the query planner to tell you whether the
table is too big or not.

Now, there are a couple of other considerations that you might have to
answer for yourself:
(1) how often do you move expired sessions to the session_archive table?
(2) how often do you vacuum the session table?
(3) do you want an all_sessions view like "CREATE VIEW all_sessions AS
SELECT * FROM session UNION SELECT * FROM session_archive"?

Keep in mind the extra cpu and disk activity from constantly moving the
records to archive, and constantly vacuuming. How often you do those
things probably requires some real-world testing.

Regards,
Jeff Davis

Nov 23 '05 #5

Thanks for your advice!

More stuff below...
> [quoted: my earlier two-table session description, snipped]
> it may be better to have it stored separately. Note that you will
> still have locality of reference even if it's one big table, since
> recently added records will be close to the end of the table.
Yes, new items are appended at the end, so it should be okay. I could use
a partial index (unique index on online sessions only) to find data fast
in that table...
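[Editor's note] For reference, a partial unique index along those lines might look like this (hypothetical single-table layout; note that a partial-index predicate must be immutable, so it has to test a stored flag rather than something like expires > now()):

```sql
-- Hypothetical single-table variant with a partial unique index that
-- covers only the online sessions.
CREATE TABLE sessions (
    userid  integer   NOT NULL,
    expires timestamp NOT NULL,
    online  boolean   NOT NULL DEFAULT true
);

-- At most one *online* session per user; archived rows are not indexed.
CREATE UNIQUE INDEX sessions_online_uq ON sessions (userid) WHERE online;
```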
> (3) Explain your query like "EXPLAIN ANALYZE SELECT * FROM session
> WHERE session_id=1234567890".
> (4) If it does a sequential scan, that means the table is small enough
> to get the entire table in few disk reads. If it does an index scan.....
OK, this would mean "huge" is pretty small (like a few hundred
records). I intend to have a lot more of them (in the archive), so
it'll be indexes for everybody.
> Now, there are a couple of other considerations that you might have
> to answer for yourself.
> (1) how often do you move expired sessions to the session_archive table?
- This is a web app, so there is no "close session" event; a session
ends simply by reaching a timeout. The timeout is stored as a
session-expiry timestamp, so online sessions have timestamp > now().
I set the timeout to 30 minutes.
- When a page is requested, I have to prolong the timeout. This could
lead to a lot of updates.

Thus I have only one function which creates/updates session data:
- it takes a user ID and session info (like IP address etc).
- it looks in the table to see if there's a session (SELECT on a unique
index)
- if there's a non-expired session
- and it won't expire in the next 10 minutes, we do nothing.
- if it will expire in the next 10 minutes, we spend time UPDATEing it
to now()+'30m'
- if there's an expired session
- insert the record into the archive, delete it from this table
- if there's an expired session or no session at all
- INSERT a new row

Thus most of the time this function does a SELECT and then exits.
Moving sessions between tables is a small overhead, as it happens only
when a session expires rather than on every page view.
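[Editor's note] The steps above might be sketched roughly like this (untested; the table layout, column names, and the inet parameter are hypothetical):

```sql
-- Sketch of the create-or-refresh session logic described above.
-- Assumes: session(userid int PRIMARY KEY, expires timestamp, ip inet)
-- and session_archive with the same columns.
CREATE OR REPLACE FUNCTION touch_session(p_userid integer, p_ip inet)
RETURNS void AS $$
DECLARE
    s session%ROWTYPE;
BEGIN
    SELECT * INTO s FROM session WHERE userid = p_userid;

    IF FOUND AND s.expires > now() THEN
        -- Live session: UPDATE only if it expires within 10 minutes,
        -- so most page views cost a single SELECT.
        IF s.expires < now() + interval '10 minutes' THEN
            UPDATE session SET expires = now() + interval '30 minutes'
             WHERE userid = p_userid;
        END IF;
        RETURN;
    END IF;

    IF FOUND THEN
        -- Expired session: move it to the archive, then drop it.
        INSERT INTO session_archive VALUES (s.*);
        DELETE FROM session WHERE userid = p_userid;
    END IF;

    -- No live session left: start a fresh one.
    INSERT INTO session (userid, expires, ip)
    VALUES (p_userid, now() + interval '30 minutes', p_ip);
END;
$$ LANGUAGE plpgsql;
```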
> (2) how often do you vacuum the session table?
The archive, only when I'll DELETE very old records from it.
The online sessions table, probably often, but it'll be very small.

This is another advantage of using two tables.
> (3) do you want an all_sessions view like "CREATE VIEW all_sessions AS
> SELECT * FROM session UNION SELECT * FROM session_archive"?
Done.
> Keep in mind the extra cpu and disk activity from constantly moving
> the records to archive, and constantly vacuuming. How often you do
> those things probably requires some real-world testing.


As I said, the constantly vacuumed table will be quite small.

All in all, I'm extremely satisfied with PostgreSQL and keep
discovering very nice stuff in this program. I was on MySQL before.
My god, never again!



Nov 23 '05 #6

This discussion thread is closed

Replies have been disabled for this discussion.
