Bytes | Developer Community
The length of CHAR() and CLOB columns in UTF8 character set

Hi,

I was wondering if anyone could advise me on this.

Right now I am setting up a DB2 UDB V8.2.3 database with UTF8
character set, which will work with a J2EE application running on
WebSphere Application Server.

I have two questions:
1. How many characters, such as Chinese or Japanese, can a CHAR(128) or
CLOB(4000) column hold? From the DB2 documentation, it looks like
CHAR(128) means 128 bytes, not 128 characters. Since double-byte
characters may take up to 3 bytes in UTF8, a CHAR(128) can only store
about 42 characters (one third of 128). Is there an easier way to tell
how many characters a CHAR or CLOB column can take?

Interestingly, I happened to see that in Oracle the environment
variable NLS_LENGTH_SEMANTICS = 'CHAR' can guarantee that CHAR(128)
holds 128 characters at the database level. Oracle also supports a
modifier, such as CHAR(128 CHAR), to specify this at the column level.
Please correct me if I am wrong, but wouldn't it be nice if DB2
provided the same feature?

2. From a performance point of view, should I set the DB2 character set
to UTF8 or UTF16? Since Java uses only UTF16 internally, does that mean
the Java program or JDBC has to do a conversion for each character?

Any comments are highly appreciated!
Jason Zhang

Dec 5 '06 #1
Jason,
Your understanding is correct. In DB2, CHAR(128) is 128 bytes, which
for Chinese characters comes to a little over 40 characters (128 / 3
is about 42). If you store English data in that column, it will hold
128 characters, since they take 1 byte each. And I agree, it would be
nice if there were an option to specify characters rather than bytes.
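The byte arithmetic here is easy to check. A quick Python sketch (Python is used only to illustrate the encoding math, not any DB2 API; typical Chinese characters in the Basic Multilingual Plane take 3 bytes in UTF-8):

```python
# Rough capacity of a byte-counted column, assuming every character in
# the text encodes to the same number of UTF-8 bytes (true for typical
# BMP Chinese characters, which take 3 bytes each).
def utf8_capacity(byte_limit, sample_char):
    bytes_per_char = len(sample_char.encode("utf-8"))
    return byte_limit // bytes_per_char

print(utf8_capacity(128, "中"))  # CHAR(128) with Chinese text -> 42
print(utf8_capacity(128, "a"))   # CHAR(128) with ASCII text   -> 128
```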
One solution, to hold enough characters, is to make the column VARCHAR
and large enough for the number of characters you need (e.g.
VARCHAR(384) in this case). But the application would have to ensure
that no more than 128 characters are entered, if that is what you want.
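A minimal Python sketch of that application-side check (the function name and limits are invented for the example; checking bytes as well as characters is the safer choice, since rare supplementary-plane characters take 4 bytes in UTF-8, not 3):

```python
def fits_column(text, max_chars=128, max_bytes=384):
    # Enforce both limits: the logical 128-character rule and the
    # physical VARCHAR(384) byte limit. Supplementary-plane characters
    # take 4 UTF-8 bytes, so the byte count can exceed 3x the chars.
    return len(text) <= max_chars and len(text.encode("utf-8")) <= max_bytes

print(fits_column("中" * 128))  # True: 128 chars, 384 bytes
print(fits_column("中" * 129))  # False: one character too many
```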
As for UTF8 vs. UTF16: it is true that there would be some conversion
for Java, but I personally would not worry about it too much. If you
have Java programs accessing non-Unicode tables today, you are already
getting that conversion. Note that in UTF16 most Chinese characters are
2 bytes, as are all English characters. So, space-wise, it depends on
which languages you plan to store mostly.

Joe
Jason wrote:
[original question quoted in full; snipped]
Dec 10 '06 #2
Joe Geller wrote:
One solution, to hold enough characters, is to make the column VARCHAR
and large enough for the number of characters you need (e.g.
VARCHAR(384) in this case). But the application would have to ensure
that no more than 128 characters are entered, if that is what you want.
You could write a UDF and use it in a constraint on the table/column in
question. Then you have the integrity ensured at the database level.
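In Python terms, purely as a model (a real DB2 UDF would be written in SQL or an external language DB2 supports; this toy `insert` just stands in for the table with the check attached):

```python
def char_length_ok(text, limit=128):
    # The predicate such a UDF would evaluate: count characters,
    # not bytes, against the limit.
    return len(text) <= limit

# Toy stand-in for a table whose constraint calls the UDF.
rows = []
def insert(text):
    if not char_length_ok(text):
        raise ValueError("check constraint violated: > 128 characters")
    rows.append(text)

insert("中" * 128)      # accepted: exactly 128 characters
try:
    insert("中" * 129)  # rejected at the "database" level
except ValueError as e:
    print(e)
```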

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Dec 11 '06 #3

This discussion thread is closed

Replies have been disabled for this discussion.
