Bytes | Developer Community

Troubles inserting to a new unicode table....

Hello, I am running DB2 UDB 8.2 on AIX 5.3. I am running some tests
on converting several tables in an existing database to Unicode. The
database itself will not be converted to Unicode, just this set of tables.

I have successfully set this up and run some tests using Japanese
characters. The tests seemed to work fine. During our testing we found
that we needed to install some AIX UTF-8 filesets for the web-based
user interface to work properly. The following was installed on
the machine:

bos.loc.com.utf Common Locale Support - UTF-8
bos.loc.utf.EN_US Base System Locale UTF Code
bos.loc.utf.JA_JP Base System Locale UTF Code

After the install we continued our testing to find the Japanese
characters were no longer being inserted into the tables correctly.
The problem seems to occur when the INSERT statement is executed using

(col1,col2,charcol3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,charcol14,col15,col16,col17)
(100,NULL,'π^┴»π^┴^█π^┴^*π^┬^╧','2007-07-30-','2007-07-30-',
0,0,2,0,NULL,NULL,NULL,998,'SOMETHING,π^┴^╧π^┬^╔π^┴^┘',0,NULL,0,NULL)

The two charcols that contain what looks like junk hold the un-emulated
Japanese strings (so ignore the junk and pretend it is real!). Those
strings are part of the statement string that gets executed.


But when you look in the table, the Japanese characters appear as blanks
(the columns are not NULL, yet nothing displays). In the second string
you can see the "SOMETHING," but not the rest of the string that
contains the Japanese characters. As stated above, this worked before
the UTF filesets were installed.

I can also insert this record from the command line (just as you see
it above) and it works there; it just no longer works from the application.

Does anyone know what I may be missing? It seems there is now an issue
with code-page conversion or translation during the execute.

Your help is appreciated.

Jul 31 '07 #1
1 Reply
I may have found the reason, but I am still not sure. I read that you
need to connect to the database using the CLI function SQLConnectW()
when you are using Unicode. Of course, this raises more questions and
concerns, especially since this is not a Unicode database and only a
few tables are set as Unicode.

Can anyone confirm this?

Next, if this is correct, how do you create the userid/password/schema
values to pass to SQLConnectW(), since it expects wide-character
parameters? Is there a CLI function I am missing that can convert the
ANSI strings to Unicode strings?

I just want to make sure this is all correct because it seems like a
management nightmare to keep track of when to connect as ANSI and when
to connect as Unicode.

Last, the current code is set up to connect to the database using EXEC
SQL CONNECT. Is there an option to add to this that says to connect as
Unicode (instead of using the SQLConnectW() API)? I am mostly curious
whether you can bypass all the handle calls needed to use SQLConnectW().
Any possible help is appreciated. It seems sort of messy to do it this
way, but we do not want to convert the entire database to Unicode when
it is not necessary, since the majority of the database would not need
to be Unicode.

Aug 3 '07 #2
