Bytes | Developer Community
Oracle Text: Indexing UTF8 or UTF16

Hello

I am trying to build a system where I can full-text index documents containing
UTF-8 or UTF-16 data using Oracle Text. I am doing the filtering in a
third-party component outside the database, so I don't need filtering in
Oracle, only indexing.
If I put file references to the filtered files in the database and index
these (using FILE_DATASTORE), everything works fine. But I would rather put the
filtered data in the database and index it from there (using the
PROCEDURE_FILTER). This gives me some problems when the data is actually
Unicode data.
The interface for the procedure in the PROCEDURE_FILTER does not allow the
data to be output as NCLOB or NVARCHAR2, only CLOB or VARCHAR2. Indexing
the data directly in the table (using e.g. a NULL_FILTER or CHARSET_FILTER)
has the same limitation: if I try to index a column of type NCLOB or
NVARCHAR2, the index creation gives me an error telling me that it is an
invalid column type.
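As a minimal sketch of the failure mode described above (table and index names here are hypothetical), this contrasts an NCLOB column, which the CONTEXT index type rejects, with a CLOB column, which it accepts:

```sql
-- Hypothetical table with a national-character-set column
CREATE TABLE docs (
  id   NUMBER PRIMARY KEY,
  body NCLOB
);

-- This fails with an "invalid column type" style error, because
-- CONTEXT indexes do not support NCLOB/NVARCHAR2 columns:
CREATE INDEX docs_idx ON docs(body)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- Whereas a CLOB column (in a database whose character set can
-- represent the data, e.g. AL32UTF8) indexes without error:
ALTER TABLE docs ADD (body_clob CLOB);
CREATE INDEX docs_clob_idx ON docs(body_clob)
  INDEXTYPE IS CTXSYS.CONTEXT;
```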

I have tried creating a database with the UTF8 character set, expecting
that a CLOB column could then contain the UTF-8 data, and that the
indexing would recognize the Unicode characters in the data. This gives
no errors, but none of the Unicode strings in the data end up in the
index; only the English strings (or ASCII, i.e. strings whose characters
all fit in one byte) are in the index afterwards.
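One documented route for indexing data whose encoding differs from the database character set is to store the raw bytes in a BLOB and let a CHARSET_FILTER convert them before indexing. A sketch of that setup (preference, table, and column names are hypothetical, and the exact PARAMETERS syntax should be checked against your Oracle version):

```sql
-- Raw filtered bytes plus a per-row character-set tag
CREATE TABLE docs_raw (
  id      NUMBER PRIMARY KEY,
  charset VARCHAR2(30),   -- e.g. 'UTF8'
  body    BLOB
);

-- CHARSET_FILTER converts from the named character set into the
-- database character set before the lexer sees the text
BEGIN
  ctx_ddl.create_preference('my_cs_filter', 'CHARSET_FILTER');
  ctx_ddl.set_attribute('my_cs_filter', 'charset', 'UTF8');
END;
/

-- "charset column" lets the per-row tag override the preference
CREATE INDEX docs_raw_idx ON docs_raw(body)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('filter my_cs_filter charset column charset');
```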

Is it not possible to index data directly in a column (using either
CHARSET_FILTER, NULL_FILTER or PROCEDURE_FILTER) when that data is in
UTF-8 or UTF-16 format?
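On the symptom of only ASCII tokens appearing: even when the data is stored correctly, the default lexer configuration can drop non-ASCII tokens. A sketch of switching to a Unicode-aware lexer (WORLD_LEXER, available from Oracle 10g onwards; names hypothetical), plus a query against the index's token table to verify what was actually indexed:

```sql
BEGIN
  ctx_ddl.create_preference('my_lexer', 'WORLD_LEXER');
END;
/

CREATE INDEX docs_clob_idx ON docs(body_clob)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('lexer my_lexer');

-- Inspect which tokens actually made it into the index;
-- the $I table follows the DR$<index_name>$I naming convention
SELECT token_text FROM dr$docs_clob_idx$i WHERE ROWNUM <= 20;
```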
Thanks in advance for any comments.

/David
Jun 27 '08 #1
Server Applications wrote:
[...]

This newsgroup is dead - repost in cdo.server

--
Regards,
Frank van Bortel
Jun 27 '08 #2
