Bytes IT Community

Numeric type problems

Hi All,

I hope this is the correct mailing list for this question, but neither
postgresql.org nor Google could help me out on this subject.
I did find one discouraging topic in the mailing list archives
(http://archives.postgresql.org/pgsql.../msg00032.php), but
since it was quite old I'm posting my question anyway.

I'm writing a generic database layer that should support a fixed number
of generic numeric types across several databases. At this point it
supports MySQL just fine, but I'm having some trouble finding the right
implementation details for PostgreSQL. Please take a moment to look at
the following table; the field descriptions pretty much speak for
themselves, I think.

Field descr.      MySQL              PostgreSQL
================  =================  ==================================
DB_FIELD_INT8     TINYINT            SMALLINT (too big, but best match)
DB_FIELD_INT16    SMALLINT           SMALLINT
DB_FIELD_INT32    INT                INT
DB_FIELD_INT64    BIGINT             BIGINT
DB_FIELD_UINT8    TINYINT UNSIGNED   <not supported natively, is it?>
DB_FIELD_UINT16   SMALLINT UNSIGNED  <not supported natively, is it?>
DB_FIELD_UINT32   INT UNSIGNED       <not supported natively, is it?>
DB_FIELD_UINT64   BIGINT UNSIGNED    <not supported natively, is it?>
DB_FIELD_FLOAT    FLOAT              REAL
DB_FIELD_DOUBLE   DOUBLE             DOUBLE PRECISION

My problem is obviously the unsigned values, which I really need to be
able to represent properly. I know I can just use the next bigger signed
type and put a constraint on it, but that only works for UINT8, UINT16
and UINT32 (there is no 128-bit signed integer type, right?): I really
need proper 64-bit unsigned integer support.
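For the UINT8/UINT16/UINT32 cases, the wider-signed-type-plus-constraint approach can be sketched as follows. This is purely an illustration using Python's bundled SQLite (a stand-in, not PostgreSQL); in PostgreSQL the same idea is expressed with a CHECK constraint or a domain on SMALLINT/INT/BIGINT:

```python
import sqlite3

# Sketch: emulate a UINT16 column by storing it in a wider signed
# integer column guarded by a range CHECK constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (u16 INTEGER CHECK (u16 >= 0 AND u16 <= 65535))")

conn.execute("INSERT INTO t VALUES (65535)")      # max UINT16: accepted
try:
    conn.execute("INSERT INTO t VALUES (70000)")  # out of range: rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(conn.execute("SELECT u16 FROM t").fetchall())  # [(65535,)]
```

The same constraint in PostgreSQL would be a domain over INT (for UINT16), so every column of that logical type enforces the range automatically.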

I *could* use a BIGINT to represent 64-bit unsigned values, and just
cast the binary data to an unsigned long long (or unsigned __int64 on
win32), but this would leave me with the problem that I couldn't safely
let SQL do comparisons on the value, right?
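The bit-reinterpretation idea can be made concrete with a small sketch (in Python for self-containment; in C it is just a cast to unsigned long long). It shows that the round-trip is lossless, and also exactly why letting SQL compare the stored signed values would be unsafe:

```python
import struct

def uint64_to_bigint(u):
    # Reinterpret the 8 bytes of an unsigned 64-bit value as signed
    # (the moral equivalent of the C cast).
    return struct.unpack("<q", struct.pack("<Q", u))[0]

def bigint_to_uint64(s):
    return struct.unpack("<Q", struct.pack("<q", s))[0]

# The round-trip is lossless, even for the maximum value:
assert bigint_to_uint64(uint64_to_bigint(2**64 - 1)) == 2**64 - 1

# ...but SQL-side ordering breaks: every value >= 2**63 maps to a
# negative signed value, so it sorts *before* small values.
big, small = 2**63, 5
assert big > small
assert uint64_to_bigint(big) < uint64_to_bigint(small)  # order inverted
```

So indexes, ORDER BY, and range predicates on such a column would all see the wrong order for the upper half of the unsigned range, which confirms the concern above.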

Is there any solution? I've seen someone suggesting elsewhere that one
should use the OID type, but others said that one shouldn't. I'm pretty
desperate. PostgreSQL would really be my database of choice for our
current project, but I'm afraid we can't use it if I can't get this right...

Thanks in advance for any help!

Bye,
Marc

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 23 '05 #1


Use a numeric type if you need more range.

template1=# create domain BIGINT_UNSIGNED numeric(20,0) check (value >=
0 and value < '18446744073709551616'::numeric(20,0));
CREATE DOMAIN
template1=# create table foobar (i BIGINT_UNSIGNED);
CREATE TABLE
template1=# insert into foobar (i) values (-1); --too small
ERROR: value for domain bigint_unsigned violates check constraint "$1"
template1=# insert into foobar (i) values (0); -- works
INSERT 17159 1
template1=# insert into foobar (i) values (pow(2::numeric, 64::numeric)
- 1); --works
INSERT 17160 1
template1=# insert into foobar (i) values (pow(2::numeric,
64::numeric)); --too large
ERROR: value for domain bigint_unsigned violates check constraint "$1"
template1=# select * from foobar;
i
----------------------
0
18446744073709551615
(2 rows)

Paul Tillotson

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 23 '05 #2


This is a very interesting option. My biggest concern is performance:
the project will require tables with millions of tuples. How does the
performance of such user-created types compare to that of native types?
Or are they 'built' using the same structure?

Thanks again!

--
Bye,
Marc 'Foddex' Oude Kotte

-=-=-=-=-=-=-=-=-=-=-=-=-
Need a programmer?
Go to http://www.foddex.net


Nov 23 '05 #4


First, every type in postgres is user-defined, in the sense that its
binary structure and the arithmetic and comparison operations you can
perform on it are defined by a set of native C functions that are
present in the database executable or loaded as shared libraries.
Because of postgres's extensible type system, all types share a small
performance penalty, but you can make new ones that are just as efficient.

http://www.postgresql.org/docs/7.4/s...reatetype.html
http://www.postgresql.org/docs/7.4/s...teopclass.html

If you need a high-performance unsigned 64-bit integer, you should make
your own type, using the existing bigint type as a template; it should
be just as efficient as the "builtin" bigint type. This is probably
premature optimization, though.

Also note that if you're trying to make a type that will merely hold a
MySQL BIGINT UNSIGNED, and you want low overhead, then numeric(20)
without the check constraint will do nicely. Since MySQL itself doesn't
check whether the values you are inserting are negative or too big*,
presumably that responsibility doesn't fall on you either. If you are
trying to make something that is bug-for-bug compatible with MySQL,
then you'd better start working on the user-defined type.

Personally, I am curious to know what sort of application you are
writing that requires storing numbers

- larger than 2 ** 63 (otherwise you would just use signed bigint),
- but less than 2 ** 64 (as far as I know you can't enforce this in
MySQL anyway, although you can in postgres),
- with exact precision (otherwise you would use floating point),
- but without any requirement to check the validity of input (since
MySQL won't do this for you),
- and without any requirement to be able to do math "in the database"
and get a valid answer (why don't you store it as a string?).

Paul Tillotson

*As evidenced:

mysql> create table foobar (i bigint unsigned);
Query OK, 0 rows affected (0.00 sec)

mysql> insert into foobar values (-3);
Query OK, 1 row affected (0.00 sec)

mysql> insert into foobar values (1000000000 * 1000000000000);
Query OK, 1 row affected (0.00 sec)

mysql> insert into foobar values (1000000000000000000000);
Query OK, 1 row affected (0.00 sec)

mysql> select * from foobar;
+----------------------+
| i |
+----------------------+
| 18446744073709551613 |
| 3875820019684212736 |
| 18446744073709551615 |
+----------------------+
3 rows in set (0.00 sec)

mysql> update foobar set i = -i;
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3 Changed: 3 Warnings: 0

mysql> select * from foobar;
+----------------------+
| i |
+----------------------+
| 3 |
| 14570924054025338880 |
| 1 |
+----------------------+
3 rows in set (0.00 sec)
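For what it's worth, the session above is consistent with the column wrapping modulo 2**64 for overflowing expressions, while the oversized decimal literal appears to be clamped to the column maximum instead. A quick arithmetic check (plain Python, reproducing the numbers shown):

```python
# Reproduce the MySQL BIGINT UNSIGNED results arithmetically.
M = 2**64

assert (-3) % M == 18446744073709551613             # insert of -3: wraps
assert (10**9 * 10**12) % M == 3875820019684212736  # overflowing product: wraps
assert min(10**21, M - 1) == 18446744073709551615   # oversized literal: clamped

# The 'update foobar set i = -i' rows follow the same modular rule:
assert (-(M - 3)) % M == 3
assert (-3875820019684212736) % M == 14570924054025338880
assert (-(M - 1)) % M == 1
```

This is exactly the kind of silent wraparound a CHECK-constrained numeric domain in PostgreSQL would turn into an error instead.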


---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org

Nov 23 '05 #6


This discussion thread is closed

Replies have been disabled for this discussion.