
Simulating sequences

Greetings,

I'm having a big performance problem and I would like to ask you what
the reason might be, but first I need to explain how it happens.

Let's suppose I can't use sequences (it seems impossible, but my boss
doesn't like database-specific features like this one).

To simulate a sequence I created a table called cnfg_key_generation,
where each tuple holds the information for one of my tables (table name,
current sequential number). Let's check:

CREATE TABLE cnfg_key_generation (
    department integer NOT NULL,
    table_name varchar(20) NOT NULL,
    current_key integer NOT NULL,
    CONSTRAINT XPKcnfg_key_generation PRIMARY KEY (department, table_name)
);
For example, for a table called 'my_test' I would have the following
values:

department = 1
table_name = 'my_test'
current_key = 1432

Every time I want a new key to use in my_test's primary key, I just
increment the current_key value. For this job, I've created a simple
stored procedure called key_generation:
CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
    new_key_value integer;
BEGIN
    new_key_value := 0;

    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;

    SELECT current_key INTO new_key_value
    FROM cnfg_key_generation
    WHERE department = the_department AND table_name = the_table_name;

    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
        VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;

        UPDATE cnfg_key_generation
        SET current_key = new_key_value
        WHERE department = the_department AND
              table_name = the_table_name;
    END IF;

    RETURN new_key_value;
END;
'
LANGUAGE 'plpgsql';
Data insertion is then done the following way:

INSERT INTO my_test VALUES (key_generation(1, 'my_test'), ...other fields...);

Ok, this works, but it has a huge performance problem. I read in the pg
docs that every time I make an UPDATE, a new tuple is created and the old
one is marked as invalid. For a tuple that holds sequential key
generation, every time a new key is generated a new tuple version is
created inside cnfg_key_generation. That means that after 2 million key
generations for the same table, performance will be completely degraded,
because there will be 2 million old versions of the same tuple. For
instance, I have a table called 'cham_chamada' that currently holds 1.5
million tuples. The response time of a key_generation execution for this
table is more than 5 seconds. In the same scenario, if I execute
key_generation for a table that has just a few values (current_key = 5,
for example), the response time is just a few milliseconds (30 to 50 ms).
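A rough way I can see the bloat (relpages/reltuples are only as fresh as
the last VACUUM/ANALYZE, so treat this as an approximation):

-- one live row per (department, table_name), yet after heavy key
-- generation the table occupies a huge number of pages of dead versions
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'cnfg_key_generation';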

I tried to fix this problem with a VACUUM and it was completely
ineffective: after execution the problem was still there. Later, after
executing every kind of vacuum I knew (with and without ANALYZE,
specifically on that table, vacuumdb from the shell, ...), I gave up and
tried a COPY TO/DROP/CREATE/COPY FROM. This time, the performance problem
was fixed.

What can I do to solve this problem without rebuilding the table? Is this
a performance degradation in PostgreSQL that can't be solved? If I have a
huge table with millions of rows being updated constantly, what can I do
to keep good performance if vacuum isn't 100% effective?

Do PostgreSQL sequences deal with these kinds of performance questions?
Is it clean, fast and secure to use sequences? Maybe I can still change
my boss's mind about this subject if I get good arguments here to justify
the use of sequences.

Am I doing some stupid thing?

Best regards,

-----------------------------------------------------------------
José Vilson de Mello de Farias
Software Engineer

Dígitro Tecnologia Ltda - www.digitro.com.br
APC - Customer Oriented Applications
E-mail: vi***********@digitro.com.br
Tel.: +55 48 281 7158
ICQ 11866179

Nov 11 '05 #1
I would collect the way that 5-10 of the top databases handle unique IDs (sequences) for columns and compare them in a small paper. Show your boss that sequences are fairly standard and he should come around.
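For instance, the table-based counter collapses to a one-liner in most
engines. A minimal sketch (the non-PostgreSQL syntax is from memory, so
double-check it before putting it in the paper):

-- PostgreSQL
CREATE SEQUENCE my_test_seq;
SELECT nextval('my_test_seq');

-- Oracle
CREATE SEQUENCE my_test_seq;
SELECT my_test_seq.NEXTVAL FROM dual;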

Vilson farias wrote:
Greetings,

I'm having a big performance problem and I would like to ask you what
the reason might be, but first I need to explain how it happens.

Let's suppose I can't use sequences (it seems impossible, but my boss
doesn't like database-specific features like this one). [...] Maybe I can
still change my boss's mind about this subject if I get good arguments
here to justify the use of sequences.

Nov 11 '05 #2
Vilson farias wrote:
Greetings,

I'm having a big performance problem and I would like to ask you what
the reason might be, but first I need to explain how it happens.

Let's suppose I can't use sequences (it seems impossible, but my boss
doesn't like database-specific features like this one).
I don't see how using PL/pgSQL is any better with respect to
database-specific features, but let's set that aside for a moment.

Your function not only misses the required FOR UPDATE when reading the
(possibly existing) current value, it also contains a general race
condition. Multiple concurrent transactions could try inserting the new
key, and all but one would error out with a duplicate key error.
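For illustration only (this still doesn't fix the race on the first
insert), the read would at least have to lock the row it is about to
update, something like:

SELECT current_key INTO new_key_value
FROM cnfg_key_generation
WHERE department = the_department AND table_name = the_table_name
FOR UPDATE;  -- hold a row lock until the transaction ends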

Sequence values are int8 and are by default safe against integer rollover.

Sequences do not roll back and therefore don't need to wait for
concurrent transactions to finish. Your table-based replacement is a
major concurrency bottleneck. As soon as a transaction has inserted into
a table, it blocks every other transaction from inserting into that
table until it either commits or rolls back.

Your VACUUM theory is only partially correct. A frequent VACUUM will
prevent the key table from growing, but you'd have to run it very often,
since the excess number of obsolete index entries pointing to dead tuples
also degrades your performance. Additionally, if there is a very low
number of keys (sequences) in that table, an ANALYZE run might cause the
planner to go for a sequential scan and ignore the index on the table, at
which point your function will actually cause "two" sequential scans over
all live and dead tuples of all sequences per call.

Sequences are specially designed to overcome all these issues.
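A minimal sketch of the built-in approach (sequence and column names are
just examples):

CREATE SEQUENCE my_test_seq;

CREATE TABLE my_test (
    id integer DEFAULT nextval('my_test_seq') PRIMARY KEY
    -- ...other fields...
);

-- each INSERT gets a unique id with no table lock and no dead tuples:
INSERT INTO my_test DEFAULT VALUES;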

If you cannot convince your boss to use sequences, he is a good example
of why people who have difficulty understanding technical issues should
not assume leadership positions in IT projects.
Jan

--
#=======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#=================================================== Ja******@Yahoo.com #

Nov 11 '05 #3
On Mon, Aug 18, 2003 at 11:27:14 -0300,
Vilson farias <vi***********@digitro.com.br> wrote:

I tried to fix this problem with a VACUUM and it was completely
ineffective: after execution the problem was still there. Later, after
executing every kind of vacuum I knew (with and without ANALYZE,
specifically on that table, vacuumdb from the shell, ...), I gave up and
tried a COPY TO/DROP/CREATE/COPY FROM. This time, the performance problem
was fixed.
Did you try VACUUM FULL?

If you were doing just a normal vacuum and waited until there were over
a million tuples in the table, your FSM setting probably wasn't high
enough to let you recover the space.
What can I do to solve this problem without rebuilding the table? Is this
a performance degradation in PostgreSQL that can't be solved? If I have a
huge table with millions of rows being updated constantly, what can I do
to keep good performance if vacuum isn't 100% effective?
You want to vacuum the table a lot more often. I remember a post (it
should be in the archives) where someone calculated how many updates you
could do before the dead tuples took up more than one block. The
suggestion was that that is the point where you want to vacuum the table.
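Concretely, that might look like this (the exact frequency is something
you'd have to tune for your update rate):

-- one-time recovery of the already bloated key table:
VACUUM FULL VERBOSE cnfg_key_generation;

-- then keep it small with a frequent plain vacuum, e.g. from cron:
VACUUM ANALYZE cnfg_key_generation;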
Do PostgreSQL sequences deal with these kinds of performance questions?
Is it clean, fast and secure to use sequences? Maybe I can still change
my boss's mind about this subject if I get good arguments here to justify
the use of sequences.


Besides solving the dead tuple problem, using sequences also avoids
contention, since no locks have to be held for the duration of a
transaction.

Nov 11 '05 #4
Hi Vilson,

Vilson farias wrote:
Do PostgreSQL sequences deal with these kinds of performance questions?
Is it clean, fast and secure to use sequences? Maybe I can still change
my boss's mind about this subject if I get good arguments here to justify
the use of sequences.


Yes, exactly. It's clean, fast and secure.
The trick is that sequences live outside of transactions
and nextval() is never rolled back, so you don't have
to lock and you don't have to worry about duplicate
keys.
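You can see that behaviour directly (the sequence name here is made up):

CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;
SELECT nextval('demo_seq');  -- returns 2; the rollback did not give 1 back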

It's not really possible to avoid database-specific
code if you want great performance at the same time.

Fortunately PostgreSQL is very close to the SQL spec,
so you aren't walking so far on the dark side
if you adopt Postgres-style SQL.

Regards
Tino Wildenhain

Nov 11 '05 #5
> I'm having a big performance problem and I would like to ask you what
> the reason might be, but first I need to explain how it happens.
>
> Let's suppose I can't use sequences (it seems impossible, but my boss
> doesn't like database-specific features like this one).
I can't help you with the details of the performance problem, but I did
have a similar situation, in that I had to maintain sequences "manually"
rather than use the PostgreSQL serial data type. The advice I got here
was to "update first, then select". Two important points I learned from
the gurus in this forum were:

1) since in my case I was manipulating my simulated sequence inside a
trigger, there is an implicit transaction around the trigger associated
with the insert or update statement that fires the trigger;

2) an update statement locks the record until the transaction commits.

With those items in mind, your function could become:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
BEGIN
    IF NOT EXISTS (SELECT 1 FROM cnfg_key_generation
                   WHERE department = the_department AND table_name = the_table_name) THEN
        INSERT INTO cnfg_key_generation VALUES (the_department, the_table_name, 0);
    END IF;

    UPDATE cnfg_key_generation
    SET current_key = 1 + current_key
    WHERE department = the_department AND
          table_name = the_table_name;

    RETURN (SELECT current_key
            FROM cnfg_key_generation
            WHERE department = the_department AND table_name = the_table_name);
END;
' LANGUAGE 'plpgsql';

~Berend Tober



Nov 11 '05 #6
On Mon, 18 Aug 2003, Vilson farias wrote:
[...]

> I tried to fix this problem with a VACUUM and it was completely ineffective.
From that state, you might want to try a REINDEX (in case you're running
into index growth problems) and look at the output of VACUUM VERBOSE.
Also, you might want to see what kind of plans are being generated for
the queries inside the function (rather than what would be generated on
the command line with values inserted). I believe Tom Lane sent an
example of doing this to one of the mailing lists within the last couple
of months; you may be able to find it in the archives.

One other thing is that you should probably be vacuuming this table a lot
more frequently than you appear to be. Depending on the frequency of
calls to this function, you may also want to raise your free space map
settings (you can get some guidelines later based on the results of
VACUUM VERBOSE).
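Concretely, that would be something like the following (both commands are
standard, though REINDEX takes an exclusive lock on the table):

REINDEX TABLE cnfg_key_generation;
VACUUM VERBOSE ANALYZE cnfg_key_generation;
-- the "removed N tuples in N pages" lines of the verbose output give a
-- feel for how large max_fsm_pages in postgresql.conf needs to be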
Do PostgreSQL sequences deal with these kinds of performance questions?
Is it clean, fast and secure to use sequences? Maybe I can still change
my boss's mind about this subject if I get good arguments here to justify
the use of sequences.


Generally, yes. They also don't force you down to having only one
transaction at a time attempting to get a value, but they don't actually
guarantee sequential values, merely unique ones (within their range).

Nov 11 '05 #7
First I would like to thank everybody who has helped me so far.

Mr. Wieck, you are right, the stored procedure I've been using has a lot
of problems related to concurrency control. You gave me valuable
information and it's going to help me make good arguments against this
technique.
Your VACUUM theory is only partially correct. A frequent VACUUM will
prevent the key table from growing, but you'd have to run it very often,
since the excess number of obsolete index entries pointing to dead tuples
also degrades your performance. Additionally, if there is a very low
number of keys (sequences) in that table, an ANALYZE run might cause the
planner to go for a sequential scan and ignore the index on the table, at
which point your function will actually cause "two" sequential scans over
all live and dead tuples of all sequences per call.
About vacuum: when does ANALYZE make the planner go for a sequential scan
instead of an index scan? How can I prevent the planner from making this
mistake? I have some tables with this problem whenever a query runs
against their primary keys. For these tables, at creation time every plan
is an "Index Scan" in EXPLAIN, but after the first vacuum is run they
become "Seq Scan" forever. They are tables with little data, but there is
enough data to slow down the whole application when a sequential scan is
used. Where can I find more information about preventing these wrong
sequential scans? This is one of the harder-to-understand items about
PostgreSQL. Are there some tech docs available that you guys would like
to suggest?
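For reference, this is how I've been comparing the plans (the column name
is a stand-in for the real primary key, and enable_seqscan is only
flipped as a test, never in production):

EXPLAIN ANALYZE SELECT * FROM cham_chamada WHERE id = 42;

SET enable_seqscan = off;  -- force the planner away from sequential scans
EXPLAIN ANALYZE SELECT * FROM cham_chamada WHERE id = 42;
SET enable_seqscan = on;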
If you cannot convince your boss to use sequences, he is a good example
of why people who have difficulty understanding technical issues should
not assume leadership positions in IT projects.


That's my point of view. But sometimes it is very hard to fight against
people who have no arguments, only influence as a weapon.

Best Regards,
Vilson


Nov 11 '05 #8
wouldn't a better approach be ADDING a record that is one higher, and
then doing a SELECT MAX()?

The different triggers could then delete the old records.


In my case that would not apply, because what I had was a need to keep a
"sequence" counter for each employee, so I added a column
("expense_report_seq") to the employee table:
CREATE TABLE paid.employee (
    employee_pk serial,
    person_pk int4 NOT NULL,
    employee_identifier varchar(24),
    hire_date date,
    termination_date date,
    health_insurance_code_pk int4,
    performance_review_date date,
    emergency_contact_pk int4,
    labor_category_pk int4,
    expense_report_seq int4 DEFAULT 0);

The incremented value of the expense_report_seq column is then inserted
into the expense_pk column for a new row in the expense table, thus
keeping a separate sequence for each employee:

CREATE TABLE paid.expense (
    project_pk int4 NOT NULL,
    organization_pk int4 NOT NULL,
    employee_pk int4 NOT NULL,
    expense_pk int4 NOT NULL,
    expense_report_date date DEFAULT now() NOT NULL,
    expense_date date DEFAULT now() NOT NULL,
    CONSTRAINT expense_pkey PRIMARY KEY (project_pk, organization_pk,
        employee_pk, expense_pk),
    CONSTRAINT expense_fkey FOREIGN KEY (employee_pk) REFERENCES employee
        (employee_pk)
) WITHOUT OIDS;
Then there is the trigger:

CREATE TRIGGER expense_bit BEFORE INSERT ON paid.expense FOR EACH ROW
EXECUTE PROCEDURE expense_bit();

where

CREATE FUNCTION paid.expense_bit() RETURNS trigger AS '
BEGIN
    SELECT INTO NEW.expense_pk expense_report_next(NEW.employee_pk);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql' VOLATILE;

where

CREATE FUNCTION paid.expense_report_next(int4) RETURNS int4 AS '
DECLARE
    l_employee_pk ALIAS FOR $1;
BEGIN
    UPDATE employee
    SET expense_report_seq = (expense_report_seq + 1)
    WHERE employee_pk = l_employee_pk;

    RETURN (SELECT expense_report_seq FROM employee
            WHERE employee_pk = l_employee_pk);
END;' LANGUAGE 'plpgsql' VOLATILE;
Seems to work o.k., but this is not a large database with gazillions of
transactions.
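Usage is then just a plain insert; the trigger fills in expense_pk (the
values here are made up):

INSERT INTO paid.expense (project_pk, organization_pk, employee_pk)
VALUES (1, 1, 42);
-- expense_pk is set by the expense_bit trigger to that
-- employee's next counter value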

~Berend Tober



Nov 11 '05 #9
So has the final implementation become:

A/ A column value per employee
B/ A trigger to implement the incrementing of the value
C/ A row/record lock to enforce atomicity

bt****@seaworthysys.com wrote:
On Monday, Aug 18, 2003, at 09:01 US/Pacific, <bt****@seaworthysys.com>
wrote:

With those items in mind, your function could become:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
BEGIN
    IF NOT EXISTS (SELECT 1 FROM cnfg_key_generation
                   WHERE department = the_department AND table_name = the_table_name) THEN
        INSERT INTO cnfg_key_generation VALUES (the_department, the_table_name, 0);
    END IF;


I would get the insert out of there, too. If the row doesn't exist, throw
an exception. I don't believe sequences should automatically create
themselves (tables and columns don't).
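Something like this sketch in place of the INSERT branch (the quotes are
doubled because it sits inside the quoted function body):

IF NOT EXISTS (SELECT 1 FROM cnfg_key_generation
               WHERE department = the_department AND table_name = the_table_name) THEN
    RAISE EXCEPTION ''no key counter registered for table %'', the_table_name;
END IF;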

I agree. In my own case I need a sequence for each employee, and the
simulated sequence is defined as a column in the employee table, so I'm
guaranteed to have a place to do the incrementing when the need arises.
Also, I used a "DEFAULT 0" clause on the column definition for the
sequence value, rather than explicitly inserting a zero. I left the
insert statement in place for compatibility with the original inquirer's
definition.

~Berend Tober



Nov 11 '05 #10
