Hi,
I was wondering if anyone could advise me on this.
Right now I am setting up a DB2 UDB V8.2.3 database with the UTF-8
character set, which will serve a J2EE application running on
WebSphere Application Server.
I have two questions:
1. How many characters, such as Chinese or Japanese, can a CHAR(128) or
CLOB(4000) column hold? From the DB2 documentation, it looks like
CHAR(128) means 128 bytes, not 128 characters. Since some
characters, such as Chinese, take up to 3 bytes in UTF-8, a CHAR(128)
may only be guaranteed to hold about 42 (128/3) characters. Is there an
easier way to tell how many characters a CHAR or CLOB column can take?
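For example, here is a quick check I did from the Java side of how many
bytes different characters occupy in UTF-8 (the class name here is just
for illustration):

```java
import java.io.UnsupportedEncodingException;

public class Utf8Lengths {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String ascii = "A";        // an ASCII letter
        String chinese = "\u4E2D"; // the Chinese character U+4E2D

        // ASCII characters are 1 byte in UTF-8...
        System.out.println(ascii.getBytes("UTF-8").length);   // prints 1
        // ...but a Chinese character is 3 bytes, so a CHAR(128)
        // column defined in bytes can hold far fewer of them.
        System.out.println(chinese.getBytes("UTF-8").length); // prints 3
    }
}
```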
And interestingly, I happened to see that in Oracle the initialization
parameter NLS_LENGTH_SEMANTICS = 'CHAR' can guarantee that CHAR(128)
holds 128 characters at the database level. Oracle also supports a
per-column modifier, such as CHAR(128 CHAR). Please correct me if I am
wrong, but I was just wondering: wouldn't it be nice if DB2 provided
the same feature?
2. From a performance point of view, should I set the DB2 character
set to UTF-8 or UTF-16? Since Java uses only UTF-16 internally, does
that mean the Java program or JDBC has to do a conversion for each
character?
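Just to illustrate my concern: a Java String is a sequence of UTF-16
code units, so the same character has different byte lengths depending
on which encoding goes over the wire or into the database (the class
name is just for illustration):

```java
import java.io.UnsupportedEncodingException;

public class EncodingCompare {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String chinese = "\u4E2D"; // the Chinese character U+4E2D

        // In Java the string is one UTF-16 code unit (2 bytes) long...
        System.out.println(chinese.length());                    // prints 1
        System.out.println(chinese.getBytes("UTF-16BE").length); // prints 2
        // ...but it becomes 3 bytes once encoded as UTF-8 for the database,
        // so every character would need re-encoding somewhere along the way.
        System.out.println(chinese.getBytes("UTF-8").length);    // prints 3
    }
}
```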
Any comments are highly appreciated!
Jason Zhang