Bytes IT Community

Identifying extended ASCII subset

Hi,

I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.
Opening the file in Windows' Notepad or in DOS, all accented letters
and symbols come out wrong.
Any idea how to identify the subset used?
Is there some text editor which can cycle easily through all known
subsets, or, even better, cycle subsets automatically until it finds a
given test string with some accents and symbols?
If someone knows a solution which involves VB, C++, XML or whatever,
please don't hesitate to share it with me.

TIA,
K

Nov 7 '05 #1
13 Replies


kr********@matt.es wrote:
Hi,

I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.
Opening the file in Windows' Notepad or in DOS, all accented letters
and symbols come out wrong.
Any idea how to identify the subset used?
Is there some text editor which can cycle easily through all known
subsets, or, even better, cycle subsets automatically until it finds a
given test string with some accents and symbols?

If you expect a computer to do this for you, you're probably dreaming. Since the actual character codes don't change, only the visual representations, someone has to look at the result to make a judgement.

If you have OCR code that will work on a memory bitmap, you could conceivably draw out the characters using a given code page and try to OCR the result, but even then I don't see any way to tell one 'close' result from another.
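Jim's point that the character codes stay fixed while only the rendering changes can be sketched in a few lines of Python (a hypothetical illustration, not anything posted in the thread):

```python
# One and the same byte, 0xC7 (decimal 199), interpreted under several
# single-byte code pages: the data never changes, only our reading of it.
data = b"\xc7"

for codepage in ("latin-1", "cp1252", "cp437", "cp850"):
    print(f"{codepage:8s} -> {data.decode(codepage)!r}")
# latin-1 and cp1252 both show 'Ç'; cp437 shows a box-drawing
# character; cp850 shows 'Ã' -- a human has to judge which is right.
```

Every decoding succeeds, which is exactly why no program can mechanically pick the "correct" one without some notion of what the text should look like.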

What is it you need to do to the text, that requires you to know what the codes represent?

--

Jim Mack
MicroDexterity Inc
www.microdexterity.com

Nov 7 '05 #2

On Mon, 07 Nov 2005 05:08:37 -0800, kristofvdw wrote:
I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.
Files contain bytes. Bytes are numerical values. There are no ASCII sets
or extended ASCII sets, as far as files are concerned. It's all in _our_ minds.
To make your program understand and tell one set from another, you
basically need to *teach* it the same "algorithm" _you_ are using to
differentiate those sets.
[...]


And avoid cross-posting to too many newsgroups at once. It makes your post
that much more irrelevant in many newsgroups.

V
Nov 7 '05 #3

In article <11********************@o13g2000cwo.googlegroups.com>,
<kr********@matt.es> wrote:
I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.
Opening the file in Windows' Notepad or in DOS, all accented letters
and symbols come out wrong.
Any idea how to identify the subset used?


You can get Mozilla's character set guesser:

http://www.mozilla.org/projects/intl/chardet.html

There's a Java version too:

http://jchardet.sourceforge.net/
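The core idea behind those detectors can be sketched with nothing but the standard library (a crude heuristic of my own, not Mozilla's actual algorithm; the candidate list is an assumption):

```python
def guess_encoding(data: bytes,
                   candidates=("utf-8", "cp1252", "cp850", "cp437")):
    """Return the candidate encoding whose decoding looks most text-like.

    A toy stand-in for what chardet does with statistical models:
    strict decoders that reject the bytes are eliminated outright,
    and the survivors are ranked by a crude printability score.
    """
    best, best_score = None, -1.0
    for enc in candidates:
        try:
            text = data.decode(enc)
        except UnicodeDecodeError:
            continue  # this encoding cannot represent these bytes at all
        score = sum(ch.isprintable() or ch.isspace() for ch in text)
        score /= max(len(text), 1)
        if score > best_score:
            best, best_score = enc, score
    return best
```

Because UTF-8 is strict, a successful UTF-8 decode is strong evidence by itself; the single-byte code pages never fail, so they can only be separated statistically, which is exactly why the real detectors use letter-frequency models.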

-- Richard
Nov 7 '05 #4

kr********@matt.es wrote:
Hi,

I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.
Opening the file in Windows' Notepad or in DOS, all accented letters
and symbols come out wrong.
Any idea how to identify the subset used?
Is there some text editor which can cycle easily through all known
subsets, or, even better, cycle subsets automatically until it finds a
given test string with some accents and symbols?
If someone knows a solution which involves VB, C++, XML or whatever,
please don't hesitate to share it with me.


Open the file in a hexadecimal editor, pick out some of the characters,
and use the Unicode charts (www.unicode.org) to identify which
encoding they belong to.

Or just ask whoever created it.
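If no hex editor is handy, the dump Peter describes is easy to improvise (a quick sketch, not a replacement for a real editor):

```python
def hexdump(data: bytes, width: int = 16):
    """Return a classic offset / hex / ASCII dump, one string per row."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        # show printable ASCII as-is; everything else as '.'
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}}  {text}")
    return lines

for line in hexdump("Muñoz".encode("cp1252")):
    print(line)
```

The suspicious bytes show up as dots in the right-hand column; their hex values are what you then look up in the code page charts.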

///Peter

Nov 7 '05 #5

Mmm, you're right there; automating would be quite difficult and would
probably even take longer than browsing the sets manually... do you know
of any tool to do so?

The data are our clients' records, obtained through legacy software. Now
I'm putting the data into an Oracle DB, but it's impossible to get
information on which encoding the program uses. Lots of names and
addresses have accents in them, which we can't afford to lose.

Nov 8 '05 #6

Thanks for the suggestion, I'll look into that.
Unfortunately, the universal_charset_detector isn't built yet, and
doesn't support rare sets, so I don't have much hope...

Nov 8 '05 #7

kr********@matt.es wrote:
Mmm, you're right there; automating would be quite difficult and would
probably even take longer than browsing the sets manually... do you know
of any tool to do so?

The data are our clients' records, obtained through legacy software. Now
I'm putting the data into an Oracle DB, but it's impossible to get
information on which encoding the program uses. Lots of names and
addresses have accents in them, which we can't afford to lose.


Do you know for sure that there is more than one character-set encoding in use? And what would you change these to, once you knew what they represented?

Is this something you have to do just once, or is there a continuing need? For a one-time use, manually cycling through your choices may not be that painful.

If this is truly an 'extended ASCII' file, which might be a legacy DOS file, you could try an OEM character set. There are several OEM code pages, but CP 437 is the most common. Just using an OEM font (like MS Terminal or FoxPrint) will reveal whether this is the case. If it is, then calling the OemToCharBuff API will do the translation into the current code page.
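The OEM/ANSI mismatch Jim describes is easy to see in a sketch (Python's codecs standing in for the Win32 OemToCharBuff call; the byte value is a hypothetical example):

```python
# 0x82 is 'é' in the DOS/OEM code page 437, but a low quotation mark
# in the Windows ANSI code page 1252 -- reading a legacy DOS file with
# the wrong assumption mangles every accented letter.
oem_byte = b"\x82"

print(oem_byte.decode("cp437"))   # the DOS interpretation: é
print(oem_byte.decode("cp1252"))  # the Windows interpretation: ‚
```

If the file renders sensibly under an OEM font, a one-line `data.decode("cp437")` pass is the Python equivalent of the OemToCharBuff translation.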

--
Jim
Nov 8 '05 #8

Apparently, the problem is worse than expected.
As Peter suggested, I took a look at the hex codes.
I discovered that some apparently extended characters had been mapped to
basic ASCII codes!
For example, a name with "Ç" (code 199/hex C7) got exported as "G"
(code 71/hex 47).
So, when exporting from an apparent extended ASCII set, it uses a basic
ASCII set, folding extended codes down by 128 (for the example:
199-128=71).
What a moron, the programmer who managed to achieve this!

Thanks all for your contributions, I now have to search for the
original programmer and kill him...
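In code, what the legacy export appears to do is the classic seven-bit fold (a reconstruction from the single observed example, not the actual export routine):

```python
def strip_high_bit(data: bytes) -> bytes:
    """Fold every byte into the 7-bit ASCII range, as the export seems to."""
    return bytes(b & 0x7F for b in data)

# 0xC7 ('Ç' in Latin-1, code 199) comes out as 0x47 ('G', code 71):
print(strip_high_bit(b"\xc7"))
```

The fold is lossy: 'Ç' and 'G' collapse to the same byte, so the accents cannot be recovered from the exported file alone; only a re-export from the source system can restore them.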

Nov 8 '05 #9

On Tue, 8 Nov 2005, Jim Mack wrote, seen in comp.text.xml:
If this is truly an 'extended ASCII' file, which might be a legacy
DOS file, you could try an OEM character set. There are several OEM
code pages, but CP 437 is the most common.


In the USA, perhaps; but CP850 is the DOS code page for a multinational
situation, at least in basically Latin-1 usage, and has been for
quite some time.

[f'ups proposed]
Nov 8 '05 #10

On 7 Nov 2005 05:08:37 -0800, kr********@matt.es wrote:
Hi,

I have to process a given text file, but I haven't got a clue which
extended ASCII set it is using.


The .es in your address is interesting.

How much do you know about where this 'legacy' data came from?

Was it Windows, was it DOS... or maybe something mainframe-ish?

What is the 'context' - for example, a Turkish directory printed in
Spain?
Nov 8 '05 #11

I suspect the original is from an IBM mainframe in EBCDIC, but we only
get a flat text file export.
Additionally, we have a tough time getting through to the original
programmers, so we have to work with what they provide us...
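If the mainframe theory is right, the export may have passed through an EBCDIC-to-ASCII step somewhere. Python ships EBCDIC codecs, so the hypothesis is cheap to test (cp037 is the common US/Canada variant; which variant the mainframe actually used is an assumption):

```python
# "Hello" as it would be stored on an IBM mainframe using EBCDIC CP037
ebcdic_bytes = b"\xc8\x85\x93\x93\x96"

print(ebcdic_bytes.decode("cp037"))    # decodes to "Hello"
print(ebcdic_bytes.decode("latin-1"))  # mojibake if read as extended ASCII
```

If decoding a sample of the raw file with cp037 or cp500 suddenly produces readable names, the "flat text file exportation" was never converted at all.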

Nov 8 '05 #12

On 8 Nov 2005 05:14:52 -0800, kr********@matt.es wrote:
I suspect the original is from an IBM mainframe in EBCDIC, but we only
get a flat text file export.
Additionally, we have a tough time getting through to the original
programmers, so we have to work with what they provide us...


The original programmers will just mislead you

- you need to look into 'inferential logic'

Like re-inventing the rules that make sense of the mess

BTW - this sounds like a classic case of data transfer sabotage

Nov 8 '05 #13

In article <If******************************@comcast.com>,
Jim Mack <jm***@mdxi.nospam.com> wrote:
If you expect a computer to do this for you, you're probably dreaming.
Since the actual character codes don't change, only the visual
representations, someone has to look at the result to make a judgement.


It's not that bad. By comparing the frequencies of individual
characters, and pairs and triples and so on, against those found in
known documents, it should be possible to achieve good enough accuracy
for many purposes.

If the data is really random, not even a human will be able to
answer the question.
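Richard's frequency idea can be sketched for this thread's Spanish-flavoured data (a toy unigram version of my own; real detectors use larger n-gram models, and the letter set here is an assumption):

```python
# Letters we would expect among the non-ASCII bytes of Spanish names
# and addresses; a decoding that turns the high bytes into box-drawing
# symbols instead will score poorly.
PLAUSIBLE = set("áéíóúñüçÁÉÍÓÚÑÜÇ")

def plausibility(data: bytes, encoding: str) -> float:
    """Fraction of non-ASCII characters that look like Spanish letters."""
    try:
        text = data.decode(encoding)
    except UnicodeDecodeError:
        return -1.0
    high = [ch for ch in text if ord(ch) > 127]
    if not high:
        return 0.0
    return sum(ch in PLAUSIBLE for ch in high) / len(high)

sample = b"Mu\xf1oz, Se\xf1or"
print(plausibility(sample, "cp1252"))  # every high byte decodes to 'ñ'
print(plausibility(sample, "cp437"))   # 0xF1 is '±' in CP437
```

Comparing such scores across candidate code pages automates exactly the judgement call Jim said a human must make; with enough text it is usually decisive.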

-- Richard
Nov 8 '05 #14
