
Trigger on Postgres for table synchronization

Dear my friends...

I am using SuSE Linux 9.1 and Postgres. I am a
beginner with Postgres; usually I use MySQL.

I have 3 tables: appointment, appointment0 and
appointment1.

The fields of table "appointment" are as follows:
noapp (int4): ID number of the appointment (PK)
custid (int4): customer ID
salesid (int4): sales ID
date (date): date of the appointment
time (time): time of the appointment
todo (char(150)): what to do with them
done (char(1)): whether done (N/Y)
warned (char(1)): whether warned with a prompt
timestamp (timestamp): timestamp of the record

"appointment0" and "appointment1" have exactly the
same field names as what "appointment" has.

But...
1. the population of "appointment0" and "appointment1"
are the subset of "appointment"
2. what the "appointment0" has are the members of
"appointment" whose "Y" as the value of fieldname
"done".
3. and what "appointmnet1" has are the members of
"appointment" whose "N" as the value of fieldname
"done".

Whenever my program inserts, updates, or deletes a
record in "appointment", I want Postgres to
synchronize the corresponding tables
(appointment0 or appointment1, or both).

Is it possible to implement this strategy with a
trigger? If so, how?

Where can I find good documentation about Postgres
triggers, especially PL/pgSQL?

Would anybody be so kind as to tell me the steps and
the commands needed to implement this strategy with
triggers?

Please....

Thank you very much in advance.



Nov 23 '05 #1
> Whenever my program inserts, updates, or deletes a
> record in "appointment", I want Postgres to
> synchronize the corresponding tables
> (appointment0 or appointment1, or both).


Is there a reason you aren't doing this with views?
--
Mike Nolan


Nov 23 '05 #2
Try a view defined like:

CREATE VIEW appointment0 AS SELECT * FROM appointment WHERE done='Y';
CREATE VIEW appointment1 AS SELECT * FROM appointment WHERE done='N';

Then appointment0 and appointment1 are not real tables, but "virtual
tables". You can still do:

SELECT * FROM appointment0;
or
SELECT * FROM appointment1;

Now if you insert a record into appointment, or update one, it will
automatically appear in the output of appointment0 or appointment1,
depending on the value of "done". You never have to insert into
appointment0 or appointment1 directly.

Regards,
Jeff Davis

On Tue, 2004-07-27 at 04:58, Prabu Subroto wrote:
> [...]


Nov 23 '05 #3

I'm a PostgreSQL newcomer, so correct me if I'm wrong... I also want to
ask another question.

I would have done this with a view, too, because it's very simple to do
in PostgreSQL. You can also add some rules (or triggers?) so that an
insert attempt into appointment0 or appointment1 (which would normally fail)
would be rewritten as an insert into appointment with the 'done' value set
accordingly.

Now, I've been facing a related problem with tracking user sessions for a
web app. I want to use a table to store user sessions, both active
sessions and expired sessions for archiving. I also wanted it to look like
two different tables. I could have created one table with two views
(online and archived), or two tables.

In the end I went with two tables because the online session table is
read and updated very often, so it better be small and fit in the cache,
while the archive table will probably be huge and not used often. So to
keep better locality of reference I used two tables, and I created
functions to create sessions, update a session to push its timeout value a
bit in the future, and close a session. These functions detect timed-out
sessions in the "online" table and move them to the "archive" table. I
also have a cleanup function which moves expired sessions to the archive
table and which will be called by a cron job.
Advantages of this approach:
- There can be only one session for a given user in the "online" table,
which makes finding the session fast (userid = primary key).
- The online table has only one index for faster updating: the
primary key on userid.
Drawbacks:
- Much more complex than a view-based approach.

Question: how huge is huge, i.e. how many records do I need to have in
the archive to make the two-table approach worth it? It is much more
complex.




Nov 23 '05 #4
On Wed, 2004-07-28 at 03:57, Pierre-Frédéric Caillaud wrote:
> I'm a PostgreSQL newcomer, so correct me if I'm wrong... I also want to
> ask another question.
>
> I would have done this with a view, too, because it's very simple to do
> in PostgreSQL. You can also add some rules (or triggers?) so that an
> insert attempt into appointment0 or appointment1 (which would normally fail)
> would be rewritten as an insert into appointment with the 'done' value set
> accordingly.

That's correct. A rule is what you're looking for in order to insert
into a view.
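
For example, a minimal sketch of such a rewrite rule (untested, and
assuming the view definitions given earlier; the rule for appointment1
would be the same with 'N'):

CREATE RULE appointment0_insert AS
    ON INSERT TO appointment0
    DO INSTEAD
    INSERT INTO appointment
        (noapp, custid, salesid, "date", "time", todo, done, warned, "timestamp")
    VALUES
        (NEW.noapp, NEW.custid, NEW.salesid, NEW."date", NEW."time",
         NEW.todo, 'Y', NEW.warned, NEW."timestamp");

Similar ON UPDATE and ON DELETE rules would be needed if you also want
to update or delete through the views.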

> [...]
>
> Question: how huge is huge, i.e. how many records do I need to have in
> the archive to make the two-table approach worth it? It is much more
> complex.


First off, you're on the right track. It's logically one table, but it
may be better to have it stored separately. Note that you will still
have locality of reference even if it's one big table, since recently
added records will be close to the end of the table, and old records
will be at the beginning (this isn't guaranteed, but it's true in
general). Two tables will help with caching, however, since you should
be able to keep the small table in the cache. Caching is the main
benefit, since over time, most of the records in the big archive table
can be safely ignored under normal operation (until you want to look at
the archive) and will never pollute the cache.

Here's what I'd use to determine whether the session table is big enough
to warrant two tables:
(1) Create an index on the session id field that you select on. I assume
here that you do a simple select from the session table like "SELECT *
FROM session WHERE session_id=1234567890".
(2) "VACUUM ANALYZE session" the table to make sure the planner has
up-to-date and accurate information on which it can base its plan.
(3) Explain your query like "EXPLAIN ANALYZE SELECT * FROM session WHERE
session_id=1234567890".
(4) If it does a sequential scan, that means the table is small enough
to get the entire table in few disk reads. If it does an index scan,
that means there are enough records to warrant several reads from
different parts of the disk to avoid reading the entire table: one or
more reads for the index and then one read for the page containing the
record. So, in short, seq scan means you have a small table and nothing
much to gain by separating the tables. An index scan means the table is
big, and you may have something to gain by separating it into two
tables. Here you basically used the query planner to tell you whether
it's too big or not.
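
Concretely, steps (1) through (3) boil down to something like this (the
index name here is just made up):

CREATE INDEX session_id_idx ON session (session_id);
VACUUM ANALYZE session;
EXPLAIN ANALYZE SELECT * FROM session WHERE session_id = 1234567890;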

Now, there are a couple other considerations that you might have to
answer for yourself.
(1) how often do you move expired sessions to the session_archive table?
(2) how often do you vacuum the session table?
(3) do you want an all_sessions view like "CREATE VIEW all_sessions AS
SELECT * FROM session UNION SELECT * FROM session_archive"?

Keep in mind the extra cpu and disk activity from constantly moving the
records to archive, and constantly vacuuming. How often you do those
things probably requires some real-world testing.

Regards,
Jeff Davis

Nov 23 '05 #5

Thanks for your advice!

More stuff below...

> [...] it may be better to have it stored separately. Note that you will
> still have locality of reference even if it's one big table, since
> recently added records will be close to the end of the table [...]
Yes, new items are appended at the end, so it should be okay. I could use
a partial index (unique index on online sessions only) to find data fast
in that table...
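
Something like this, perhaps (just a sketch; it assumes a hypothetical
boolean "archived" column, since a partial index predicate must be
immutable and so can't compare the expiry timestamp to now()):

CREATE UNIQUE INDEX online_sessions_idx
    ON session (userid)
    WHERE NOT archived;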

> (3) Explain your query like "EXPLAIN ANALYZE SELECT * FROM session WHERE
> session_id=1234567890".
> (4) If it does a sequential scan, that means the table is small enough
> to get the entire table in few disk reads. If it does an index scan...
OK, this would mean "huge" is pretty small (like a few hundred records);
I intend to have a lot more than that in the archive, so it'll be
indexes for everybody.

> Now, there are a couple other considerations that you might have to
> answer for yourself.
> (1) how often do you move expired sessions to the session_archive table?
- This is a web app, so there is no "close session" event; a session
simply ends when it reaches its timeout. The timeout is stored as a
session expiry timestamp, so online sessions have timestamp > now(). I
set the timeout to 30 minutes.
- When a page is requested, I have to prolong the timeout. This could
lead to a lot of updates.

Thus I have only one function which creates/updates session data:
- it takes a user ID and session info (like IP address etc).
- it looks in the table to see if there's a session (SELECT on a unique
index)
- if there's a non-expired session:
  - and it won't expire in the next 10 minutes, we do nothing.
  - if it will expire in the next 10 minutes, we spend time UPDATEing it
    to now()+'30m'
- if there's an expired session:
  - insert the record into the archive, delete it from this table
- if there's an expired session or no session at all:
  - INSERT a new row

Thus most of the time this function does a SELECT and then exits. Moving
sessions between tables is a small overhead, as it happens only on
logout rather than on every page view.
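
In PL/pgSQL it would look roughly like this (an untested sketch; table
and column names are made up, and it uses named parameters and dollar
quoting, which are newer than 7.4):

CREATE OR REPLACE FUNCTION touch_session(p_userid integer, p_ip text)
RETURNS void AS $$
DECLARE
    s session%ROWTYPE;
BEGIN
    SELECT INTO s * FROM session WHERE userid = p_userid;

    IF FOUND AND s.expires > now() THEN
        -- live session: prolong it only if it expires within 10 minutes
        IF s.expires < now() + interval '10 minutes' THEN
            UPDATE session
               SET expires = now() + interval '30 minutes'
             WHERE userid = p_userid;
        END IF;
        RETURN;
    END IF;

    IF FOUND THEN
        -- expired session: copy it to the archive, then remove it
        INSERT INTO session_archive (userid, ip, expires)
            VALUES (s.userid, s.ip, s.expires);
        DELETE FROM session WHERE userid = p_userid;
    END IF;

    -- no live session: create a fresh one
    INSERT INTO session (userid, ip, expires)
        VALUES (p_userid, p_ip, now() + interval '30 minutes');
END;
$$ LANGUAGE plpgsql;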

> (2) how often do you vacuum the session table?
The archive: only when I DELETE very old records from it.
The online sessions table: probably often, but it'll be very small.

This is another advantage of using two tables.

> (3) do you want an all_sessions view like "CREATE VIEW all_sessions AS
> SELECT * FROM session UNION SELECT * FROM session_archive"?
Done.

> Keep in mind the extra cpu and disk activity from constantly moving the
> records to archive, and constantly vacuuming. How often you do those
> things probably requires some real-world testing.


As I said, the constantly vacuumed table will be quite small.

All in all, I'm extremely satisfied with PostgreSQL
and keep discovering very nice stuff in this program.
I was on MySQL before; my god, never again!



Nov 23 '05 #6
