Determine File Encoding

Hi there,

Can anyone point out any really obvious flaws in the methodology below
to determine the likely encoding of a file, please? I know the number
of encodings handled is small, but that is only because the set of
possibilities I need to work with is small.

private string determineFileEncoding(FileStream strm)
{
    long originalSize = strm.Length;
    StreamReader rdr = new StreamReader(strm);

    strm.Position = 0;
    System.Text.UTF8Encoding unic = new System.Text.UTF8Encoding();
    byte[] inputFile = unic.GetBytes(rdr.ReadToEnd());
    if (inputFile.Length == originalSize)
    {
        return "UTF8";
    }

    strm.Position = 0;
    System.Text.UnicodeEncoding unic2 = new System.Text.UnicodeEncoding();
    byte[] inputFile2 = unic2.GetBytes(rdr.ReadToEnd());
    if (inputFile2.Length == originalSize)
    {
        return "Unicode";
    }

    strm.Position = 0;
    System.Text.UTF7Encoding unic3 = new System.Text.UTF7Encoding();
    byte[] inputFile3 = unic3.GetBytes(rdr.ReadToEnd());
    if (inputFile3.Length == originalSize)
    {
        return "UTF7";
    }

    strm.Position = 0;
    System.Text.ASCIIEncoding unic4 = new System.Text.ASCIIEncoding();
    byte[] inputFile4 = unic4.GetBytes(rdr.ReadToEnd());
    if (inputFile4.Length == originalSize)
    {
        return "Ascii";
    }

    return "Not known";
}


Thanks in advance
Marc.
Nov 17 '05 #1
Why read the entire file to determine the encoding? Can't you tell from the
indicator bytes at the beginning?

Forgive me if I don't know much about encoding, but your algorithm appears
wildly inefficient on its face.
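
For illustration, a check of those indicator bytes might look roughly
like this; the helper name and the subset of encodings handled are
assumptions for this sketch, not something confirmed in this thread:

// Hypothetical helper: peek at the first few bytes and match known BOMs.
private static string DetectEncodingFromBom(System.IO.Stream strm)
{
    byte[] bom = new byte[4];
    strm.Position = 0;
    int read = strm.Read(bom, 0, bom.Length);
    strm.Position = 0;

    if (read >= 3 && bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF)
        return "UTF8";                 // EF BB BF
    if (read >= 2 && bom[0] == 0xFE && bom[1] == 0xFF)
        return "Unicode (big-endian)"; // FE FF = UTF-16 big-endian
    if (read >= 2 && bom[0] == 0xFF && bom[1] == 0xFE)
        return "Unicode";              // FF FE = UTF-16 little-endian
    if (read >= 3 && bom[0] == 0x2B && bom[1] == 0x2F && bom[2] == 0x76)
        return "UTF7";                 // "+/v"
    return "Not known";                // no BOM: ASCII, BOM-less UTF-8, or an ANSI code page
}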

--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
--
"Marc Jennings" <Ma**********@community.nospam> wrote in message
news:ch********************************@4ax.com...
**snip**

Nov 17 '05 #2
I have to forgive you for not knowing too much about encoding. I know
even less. I agree that the algorithm *is* wildly inefficient, but
the fact is that I have not got a clue. :-) Such are the joys of
learning from Google.

On Wed, 1 Jun 2005 06:27:22 -0700, "Nick Malik [Microsoft]"
<ni*******@hotmail.nospam.com> wrote:
**snip**


Nov 17 '05 #3
KH
Check out the StreamReader constructors that take a bool argument to
determine the encoding from the byte order mark. Also check out the
Encoding.GetPreamble() method.
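
As a rough sketch of both ideas (the file name and output details here
are only illustrative):

using System;
using System.IO;
using System.Text;

class EncodingProbe
{
    static void Main()
    {
        // true = detectEncodingFromByteOrderMarks: the reader sniffs a BOM if one is present.
        using (StreamReader rdr = new StreamReader("input.txt", Encoding.Default, true))
        {
            rdr.Peek(); // the reader only inspects the stream once it is asked to read
            Console.WriteLine("Detected: " + rdr.CurrentEncoding.EncodingName);
        }

        // GetPreamble() returns the BOM bytes an encoding would write, e.g. EF BB BF for UTF-8.
        byte[] preamble = new UTF8Encoding(true).GetPreamble();
        Console.WriteLine(BitConverter.ToString(preamble)); // "EF-BB-BF"
    }
}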
"Marc Jennings" wrote:
**snip**


Nov 17 '05 #4
Marc Jennings wrote:
**snip**


The most obvious flaw would be that generally speaking this is
impossible to achieve ;-)

The second flaw is that your code is just plain wrong. You're using a
UTF-8 StreamReader regardless of the actual encoding. This object will
be able to read UTF-8 and ASCII, but UTF-16 will break for sure.

The third flaw is that you assume "the number of types of encoding is
small". I'd say
http://msdn.microsoft.com/library/de.../en-us/intl/un
icode_81rn.asp is not really a short list, although many of these
encodings are not likely to be found in your typical American or
Western European PC environment.

Cheers,
--
http://www.joergjooss.de
mailto:ne********@joergjooss.de
Nov 17 '05 #5
KH wrote:
Check out the StreamReader constructors that take a bool argument to
determine the encoding from the byte order mark. Also check out the
Encoding.GetPreamble() method.


That works only for certain UTFs and maybe some rather obscure stuff,
but today's popular 8-bit encodings like ISO-8859-x or Windows-125x
don't use preambles or BOMs.

Cheers,
--
http://www.joergjooss.de
mailto:ne********@joergjooss.de
Nov 17 '05 #6
On Wed, 01 Jun 2005 11:59:09 -0700, "Joerg Jooss"
<ne********@joergjooss.de> wrote:

**snip**


Agreed in the general case, but perhaps I should have made my
situation a little clearer. The files that I need to deal with will
only be one of a very small subset of all the possible encodings out
there.

At least now I know my thinking is more flawed than I thought it
was....
Nov 17 '05 #7
There is no way to determine the encoding of a file unless you know
exactly what text to expect in it, or there are marker bytes in the
file, or a special file extension.
But you can try a statistical approach. If the bytes at even positions
are mostly bigger than the bytes at odd positions (or was it the other
way around?), you have Unicode. If there are no null characters and no
characters below ASCII #32 except \r and \n, you certainly have ASCII
encoding.
In all other cases you may have UTF-8.
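
As a very rough sketch (the method name and the 90% threshold are made
up for illustration), that heuristic could look like:

private static string GuessEncoding(byte[] data)
{
    int zeroOddBytes = 0;      // zero bytes at odd offsets hint at UTF-16LE Latin text
    bool sevenBitOnly = true;  // stays true while we only see 7-bit chars plus \r, \n, \t

    for (int i = 0; i < data.Length; i++)
    {
        byte b = data[i];
        if ((i % 2) == 1 && b == 0)
            zeroOddBytes++;
        if (b >= 0x80 || (b < 0x20 && b != 0x0D && b != 0x0A && b != 0x09))
            sevenBitOnly = false;
    }

    if (data.Length >= 2 && zeroOddBytes > 0.9 * (data.Length / 2))
        return "Unicode";      // looks like UTF-16LE
    if (sevenBitOnly)
        return "Ascii";
    return "UTF8";             // fallback, as suggested above
}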

"Marc Jennings" <Ma**********@community.nospam> schrieb im Newsbeitrag
news:56********************************@4ax.com...
**snip**

Nov 17 '05 #8
Marc Jennings wrote:
**snip**


The best approach is to have some kind of "protocol" that allows you to
transport metadata like the character encoding. If this is not possible
(as in the case of plain files), let the user decide by allowing him or
her to select and switch between all supported encodings.

Cheers,
--
http://www.joergjooss.de
mailto:ne********@joergjooss.de
Nov 17 '05 #9
Hello Marc,

If you open a file using StreamReader, it will set its CurrentEncoding
to the file's encoding and convert the bytes to the correct
characters.
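
For what it's worth, a small sketch of that behaviour (the file name is
just an example); note that CurrentEncoding starts out as UTF-8 and only
changes after the reader has actually read something and found a BOM:

using System;
using System.IO;

class Probe
{
    static void Main()
    {
        using (StreamReader rdr = new StreamReader("input.txt"))
        {
            rdr.ReadLine(); // CurrentEncoding is only updated after a read
            Console.WriteLine(rdr.CurrentEncoding.WebName); // e.g. "utf-8" or "utf-16"
        }
    }
}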

"Joerg Jooss" wrote:
**snip**

Nov 17 '05 #10
Roby Eisenbraun Martins
<Ro*******************@discussions.microsoft.com> wrote:
If you open a file using StreamReader, it will set its CurrentEncoding
to the file's encoding and convert the bytes to the correct
characters.


Only if you're lucky. It won't be able to guess correctly between
different ANSI character sets, for instance.

It's definitely best to take the guesswork out, either by explicitly
stating the encoding, making sure there *is* only one encoding, or
allowing the user to override any guesswork which has been performed.
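
For example, stating the encoding explicitly is as simple as passing it
to the reader (the code page below is an arbitrary example):

using (System.IO.StreamReader rdr = new System.IO.StreamReader("input.txt",
           System.Text.Encoding.GetEncoding("windows-1252")))
{
    string text = rdr.ReadToEnd();
}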

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #11

This thread has been closed and replies have been disabled. Please start a new discussion.

