
guaranteeing that a sequence never skips

Am I correct to assume that SERIAL does not guarantee that a sequence
won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?

Sometimes a business requirement is that a serial sequence never skips,
e.g. when generating invoice/ticket/formal letter numbers. Would an
INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
or must I install a trigger too to do additional checking?

--
dave

Nov 23 '05 #1
3 Replies


On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
>
> Sometimes a business requirement is that a serial sequence never skips,
> e.g. when generating invoice/ticket/formal letter numbers. Would an
> INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> or must I install a trigger too to do additional checking?


You will have to lock the whole table and your parallel performance will
be poor.
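
For illustration, a rough sketch of what that looks like in SQL. The invoice table and column names here are made up; the point is that the exclusive lock serializes all writers:

BEGIN;
-- Hypothetical table/column names, for illustration only.
-- Block all other writers until this transaction commits or rolls back.
LOCK TABLE invoice IN EXCLUSIVE MODE;
-- MAX()+1 is now safe because no other insert can run concurrently.
INSERT INTO invoice (id, customer, amount)
SELECT COALESCE(MAX(id), 0) + 1, 'ACME', 100.00
FROM invoice;
COMMIT;

Plain SELECTs can still read the table while the lock is held, but every other insert waits in line, so throughput is limited to one invoice at a time.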

Nov 23 '05 #2


On Sunday 03 October 2004 10:21 am, Scott Marlowe wrote:
> On Sun, 2004-10-03 at 08:58, David Garamond wrote:
> > Am I correct to assume that SERIAL does not guarantee that a sequence
> > won't skip (e.g. one successful INSERT gets 32 and the next might be 34)?
> >
> > Sometimes a business requirement is that a serial sequence never skips,
> > e.g. when generating invoice/ticket/formal letter numbers. Would an
> > INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1 FROM t, ...) suffice,
> > or must I install a trigger too to do additional checking?
>
> You will have to lock the whole table and your parallel performance will
> be poor.


There was a thread about this a while back. I'm using a separate counter table
and stored procs that increment the value of the counter, similar to the nextval
used for sequences. My "nextval" locks the counter row in question using
"... FOR UPDATE". Since I generate the record that needs the sequential number
inside the same stored proc, everything happens in one transaction: if I have to
roll back, the counter value in the counter table rolls back too. You just have
to make sure the routine that generates whatever you need doesn't take long,
because parallel uses of the same counter will block until your proc commits or
rolls back.
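
A minimal sketch of that scheme, with invented table and function names (the post above doesn't include code, so this is just an illustration of the SELECT ... FOR UPDATE idea):

-- Hypothetical names; one row per gap-free counter.
CREATE TABLE counters (
    name  text PRIMARY KEY,
    value bigint NOT NULL DEFAULT 0
);

-- "nextval"-style function: FOR UPDATE locks the counter row, so
-- concurrent callers block until this transaction commits or rolls back,
-- and a rollback takes the counter value back with it.
CREATE OR REPLACE FUNCTION gapless_nextval(p_name text) RETURNS bigint AS $$
DECLARE
    v bigint;
BEGIN
    SELECT value + 1 INTO v
      FROM counters
     WHERE name = p_name
       FOR UPDATE;
    UPDATE counters SET value = v WHERE name = p_name;
    RETURN v;
END;
$$ LANGUAGE plpgsql;

-- Used inside the same transaction as the insert, e.g.:
-- INSERT INTO invoice (id, ...) VALUES (gapless_nextval('invoice'), ...);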

UC

--
Open Source Solutions 4U, LLC 2570 Fleetwood Drive
Phone: +1 650 872 2425 San Bruno, CA 94066
Cell: +1 650 302 2405 United States
Fax: +1 650 872 2417

Nov 23 '05 #3

A long time ago, in a galaxy far, far away, li***@zara.6.isreserved.com (David Garamond) wrote:
> Am I correct to assume that SERIAL does not guarantee that a sequence
> won't skip (e.g. one successful INSERT gets 32 and the next might be
> 34)?
What is guaranteed is that sequence values will not be repeated
(assuming you don't do a setval() :-)).

If value caching is turned on, then each connection may grab them in
groups of (say) 100, so that one insert, on one (not-too-busy)
connection might add in 5399, and an insert on another connection,
that has been much busier, might add in 6522, and those values differ
quite a bit :-).
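
As an illustration (the sequence name below is made up), caching is set per sequence, and even without caching a rolled-back insert still burns a value:

-- Hypothetical sequence name. Each backend pre-allocates 100 values;
-- values it grabs but never uses become permanent gaps.
CREATE SEQUENCE invoice_seq CACHE 100;

-- Even with the default CACHE 1, a rollback leaves a hole:
BEGIN;
SELECT nextval('invoice_seq');   -- suppose this returns 33
ROLLBACK;                        -- 33 is never reissued; the next call returns 34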
> Sometimes a business requirement is that a serial sequence never
> skips, e.g. when generating invoice/ticket/formal letter
> numbers. Would an INSERT INTO t (id, ...) VALUES (SELECT MAX(col)+1
> FROM t, ...) suffice, or must I install a trigger too to do
> additional checking?


This is a troublesome scenario...

1. Your requirement makes it MUCH harder to deal with concurrent
updates efficiently.

That "SELECT MAX()" destroys the efficiency achieved by the use of
sequences.

2. It may be difficult to avoid deadlocks of some sort.

Suppose several inserts take place more or less simultaneously. In
that case, they might all get the same value of SELECT MAX(), and only
one of them could therefore succeed. The others would get
"clotheslined" by the UNIQUE constraint, like a hapless fugitive that
runs into a tree branch, and you'll see transactions failing due to
concurrency. Not a good thing.

Another possibility would be to have _two_ fields, one, call it C1,
using a sequence, and the other, C2, which gets populated later.

Periodically, a process goes through and calculates CURR=SELECT
MAX(C2), and then walks through all of the records populated with
values in C1. For each non-null C1, it assigns C2 based on the value
of CURR, and then empties C1.

That means that there is a period of time during which the "ultimate"
sequence value, C2, is not yet populated, which might or might not be
a problem for your application.
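
A sketch of that periodic renumbering step in PL/pgSQL, with invented table and column names (C1 is the gap-prone serial, C2 the final gap-free number):

-- Hypothetical names; run by a single periodic job so two
-- renumbering passes never race each other.
CREATE OR REPLACE FUNCTION renumber_invoices() RETURNS void AS $$
DECLARE
    curr bigint;
    r    record;
BEGIN
    -- CURR = SELECT MAX(C2)
    SELECT COALESCE(MAX(c2), 0) INTO curr FROM invoice;
    -- Walk the rows that only have C1 set, in C1 order.
    FOR r IN SELECT c1 FROM invoice WHERE c1 IS NOT NULL ORDER BY c1 LOOP
        curr := curr + 1;
        -- Assign the next gap-free value, then empty C1.
        UPDATE invoice SET c2 = curr, c1 = NULL WHERE c1 = r.c1;
    END LOOP;
END;
$$ LANGUAGE plpgsql;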
--
(format nil "~S@~S" "cbbrowne" "linuxfinances.info")
http://linuxfinances.info/info/linuxxian.html
Life's a duck, and then you sigh.
Nov 23 '05 #4
