Bytes IT Community

csv module and unicode, when or workaround?

hi,
to convert Excel files via CSV to XML (or whatever) I frequently use the
csv module, which is really nice for quick scripts. The problem is, of
course, non-ASCII characters like German umlauts, the euro currency
symbol, etc.

The docs say the current csv module cannot handle Unicode. Is there any
workaround, or is Unicode support planned for the near future? In most
cases support for characters in iso-8859-1(5) would be OK for my
purposes, but of course full Unicode support would be great...

Obviously I am not a Python pro; I did not even find the .py source for
the module, so it seems to be a C-based module? Is this also the
reason for the Unicode unawareness?

thanks
chris
Jul 18 '05 #1
6 Replies



Chris> the current csv module cannot handle unicode the docs say, is
Chris> there any workaround or is unicode support planned for the near
Chris> future?

True, it can't.

Chris> obviously I am not a python pro, i did not even find the py
Chris> source for the module, it seemed to me it is a C based
Chris> module?. is this also the reason for the unicode unawareness?

Look in Modules/_csv.c and Lib/csv.py. The C-ness of the underlying module
is the main issue as far as I understand. If you have some C+Unicode-fu
(this goes for anyone reading this, not just Chris), feel free to try
writing a patch. Also, check out the csv mailing list:

http://orca.mojam.com/mailman/listinfo/csv

Skip
Jul 18 '05 #2


Chris> the current csv module cannot handle unicode the docs say, is
Chris> there any workaround or is unicode support planned for the near
Chris> future?

Skip> True, it can't.

Hmmm... I think the following should be a reasonable workaround in most
situations:

#!/usr/bin/env python

import csv

class UnicodeReader:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
        self.reader = csv.reader(f, dialect=dialect, **kwds)
        self.encoding = encoding

    def next(self):
        row = self.reader.next()
        return [unicode(s, self.encoding) for s in row]

    def __iter__(self):
        return self

class UnicodeWriter:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
        self.writer = csv.writer(f, dialect=dialect, **kwds)
        self.encoding = encoding

    def writerow(self, row):
        self.writer.writerow([s.encode(self.encoding) for s in row])

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

if __name__ == "__main__":
    try:
        oldurow = [u'\u65E5\u672C\u8A9E',
                   u'Hi Mom -\u263a-!',
                   u'A\u2262\u0391.']
        writer = UnicodeWriter(open("uni.csv", "wb"))
        writer.writerow(oldurow)
        del writer

        reader = UnicodeReader(open("uni.csv", "rb"))
        newurow = reader.next()
        print "trivial test", newurow == oldurow and "passed" or "failed"
    finally:
        import os
        os.unlink("uni.csv")

If people don't find any egregious flaws with the concept I'll at least add
it as an example to the csv module docs. Maybe they would even work as
additions to the csv.py module, assuming the API is palatable.
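For comparison, a minimal sketch of the same round trip as it looks on
modern Python 3, where the csv module reads and writes text directly, so
the wrappers above are no longer needed (the encoding is handled by the
file object, or here by an in-memory StringIO):

```python
import csv
import io

# Python 3: csv operates on text, not encoded bytes; no wrapper classes
# are required to round-trip non-ASCII data.
oldurow = ['\u65E5\u672C\u8A9E', 'Hi Mom -\u263a-!', 'A\u2262\u0391.']

buf = io.StringIO(newline='')
csv.writer(buf).writerow(oldurow)

buf.seek(0)
newurow = next(csv.reader(buf))
print("trivial test", "passed" if newurow == oldurow else "failed")
```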

Skip
Jul 18 '05 #3

Chris wrote:
the current csv module cannot handle unicode the docs say, is there any
workaround or is unicode support planned for the near future? in most
cases support for characters in iso-8859-1(5) would be ok for my
purposes but of course full unicode support would be great...


It doesn't support unicode, but you should not have problems
importing/exporting encoded strings.

I have imported utf-8 encoded strings with no trouble. But might I just
have been lucky that they were inside the latin-1 range?
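A quick check suggests it is not luck: in UTF-8, every byte of a
multi-byte sequence has the high bit set, so the ASCII characters csv
cares about (comma, quote, tab, CR, LF) can never appear inside the
encoding of another character. A sketch of that check (modern Python 3
syntax):

```python
# Verify that UTF-8 encodings of non-ASCII characters contain no bytes
# that csv treats as special, so byte-oriented csv code splits them safely.
special = set(b',"\t\r\n')
for ch in ['\u00fc', '\u20ac', '\u00a5', '\u65e5', '\u0418']:  # ü € ¥ 日 И
    encoded = ch.encode('utf-8')
    assert all(b >= 0x80 for b in encoded)   # every byte is non-ASCII
    assert not special & set(encoded)        # no csv delimiter bytes inside
print("no csv-special bytes found")
```

This property holds for any character outside latin-1 as well, which is
why utf-8 (unlike, say, Big5) is safe to feed through a byte-oriented
csv parser.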

--

hilsen/regards Max M, Denmark

http://www.mxm.dk/
IT's Mad Science
Jul 18 '05 #4


Chris wrote:
hi,
to convert excel files via csv to xml or whatever I frequently use the csv module which is really nice for quick scripts. problem are of course non ascii characters like german umlauts, EURO currency symbol etc.
The umlauted characters should not be a problem; they're all in the
first 256 code points. What makes you say they are a problem "of
course"?
the current csv module cannot handle unicode the docs say, is there any workaround or is unicode support planned for the near future? in most cases support for characters in iso-8859-1(5) would be ok for my
purposes but of course full unicode support would be great...


Here's a perambulation through some of the alternatives:

A. If you save the file from Excel as "Unicode text", you can pretty
much DIY:
>>> buff = file('csvtest.txt', 'rb').read()
>>> lines = buff.decode('utf16').split(u'\r\n')
>>> lines
[u'M\xfcller\t"\u20ac1234,56"', u'M\xf6ller\t"\u20ac9876,54"',
 u'Kawasaki\t\xa53456.78', u'']
>>> for line in lines:
...     print line.split(u'\t')
...
[u'M\xfcller', u'"\u20ac1234,56"']
[u'M\xf6ller', u'"\u20ac9876,54"']
[u'Kawasaki', u'\xa53456.78']
[u'']

All you have to do is handle (1) Excel's unnecessary quoting of the
comma in the money amounts [see the first two lines above; what it quotes
is probably locale-dependent], (2) the doubling of any embedded quotes [no
example given], and (3) the empty "line" introduced by split().

Problem (3) is easy: if lines and not lines[-1]: del lines[-1]

Hmmm ... by the time you finish this (and generalise it) you will have
done the Unicode extension to the csv module ...
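Those generalisation steps might look roughly like this (a sketch in
modern Python 3 syntax; parse_unicode_text is a hypothetical helper, and
it only handles the simple quoting shown above, not tabs or newlines
embedded inside fields):

```python
def parse_unicode_text(path):
    """Parse an Excel "Unicode text" (UTF-16, tab-delimited) export.

    Sketch only: drops the trailing empty "line" from split() and undoes
    Excel's simple quoting; it does not handle delimiters inside fields.
    """
    with open(path, 'rb') as f:
        text = f.read().decode('utf-16')    # also strips the BOM
    lines = text.split('\r\n')
    if lines and not lines[-1]:             # problem (3): drop empty tail
        del lines[-1]
    rows = []
    for line in lines:
        fields = []
        for field in line.split('\t'):
            if field.startswith('"') and field.endswith('"'):
                # problems (1) and (2): strip quotes, un-double embedded ones
                field = field[1:-1].replace('""', '"')
            fields.append(field)
        rows.append(fields)
    return rows
```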

Alternative B: you can do ODBC access to Excel spreadsheets; hmmm ...
yuk ... no better than CSV i.e. you get the data in your current code
page, not in Unicode:

[('M\xfcller', '\x801234,56'), ('M\xf6ller', '\x809876,54'),
('Kawasaki', '\xa53456.78')]

Alternative C: why not save your file as local-code-page .csv, use the
csv module, and DIY decode:
>>> rdr = csv.reader(file('csvtest.csv', 'rb'))
>>> for row in rdr:
...     print row
...     urow = [x.decode('cp1252') for x in row]
...     print urow
...
['Name', 'Amount']
[u'Name', u'Amount']
['M\xfcller', '\x801234,56']
[u'M\xfcller', u'\u20ac1234,56']
['M\xf6ller', '\x809876,54']
[u'M\xf6ller', u'\u20ac9876,54']
['Kawasaki', '\xa53456.78']
[u'Kawasaki', u'\xa53456.78']

Looks good to me, including the euro sign.
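Alternative C condenses to a few self-contained lines (modern Python 3
shown, where the decode moves into the text layer rather than a
per-field loop; the sample bytes below simulate a cp1252 .csv as Excel
would save it, with 0x80 being the euro sign in cp1252):

```python
import csv
import io

# A cp1252-encoded .csv as Excel might write it.
raw_bytes = b'Name,Amount\r\nM\xfcller,"\x801234,56"\r\n'

# Decode once, then let csv split the text.
f = io.StringIO(raw_bytes.decode('cp1252'), newline='')
rows = list(csv.reader(f))
print(rows)   # second row: ['Müller', '€1234,56']
```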

HTH,

John

Jul 18 '05 #5

hi,
thanks for all the replies; I'll try to see if I can at least get the
work done.

I guess my problem was mainly the rather mind-flexing (at least for me)
encoding/decoding of strings...

But I guess it would be really helpful to put the UnicodeReader/Writer
in the docs.

thanks a lot
chris
Jul 18 '05 #6


Chris wrote:
hi,
thanks for all replies, I try if I can at least get the work done.

I guess my problem mainly was the rather mindflexing (at least for me) coding/decoding of strings...

But I guess it would be really helpful to put the UnicodeReader/Writer in the docs


UNFORTUNATELY the solution of saving the Excel .XLS to a .CSV doesn't
work if you have Unicode characters that are not in your Windows
code-page. Nor would it work in a CJK environment if the file was saved
in an MBCS encoding (e.g. Big5). A work-around appears possible, with
some more effort:

I have extended the previous sample XLS; there is now a last line with
IVANOV in Cyrillic letters [pardon my spelling etc etc if necessary].
My code-page is cp1252, which sure don't grok Russki :-)

I've saved it as CSV [no complaint from Excel] and as "Unicode text".
>>> buffc = file('csvtest2.csv', 'rb').read()
>>> buffc
'Name,Amount\r\nM\xfcller,"\x801234,56"\r\nM\xf6ller,"\x809876,54"\r\nKawasaki,\xa53456.78\r\n??????,"?5678,90"\r\n'

>>> buffu16 = file('csvtest2.txt', 'rb').read()
>>> buffu16
'\xff\xfeN\x00a\x00m\x00e\x00\t\x00A\x00m\x00o\x00u\x00n\x00t\x00\r\x00\n\x00
[snip] \x18\x04\x12\x04\x10\x04\x1d\x04\x1e\x04\x12\x04\t\x00"\x00 \x045\x006\x007\x008\x00,\x009\x000\x00"\x00\r\x00\n\x00'
>>> buffu = buffu16.decode('utf16')
>>> buffu
u'Name\tAmount\r\nM\xfcller\t"\u20ac1234,56"\r\nM\xf6ller\t"\u20ac9876,54"\r\nKawasaki\t\xa53456.78\r\n\u0418\u0412\u0410\u041d\u041e\u0412\t"\u04205678,90"\r\n'

Aside: this has removed the BOM. I understood (possibly incorrectly)
from a recent thread that Python codecs left the BOM in there, but hey
I'm not complaining :-)

As expected, this looks OK. The extra step required in the work-around
is to convert the utf16 file to utf8 and feed that to the csv reader.
Why utf8? (1) Every Unicode character can be represented, not just the
ones that are in your code-page. (2) ASCII characters can't appear as
part of the representation of any other character -- i.e. the ones that
are significant to csv (tab, comma, quote, \r, \n) can't cause errors by
showing up inside another character, e.g. a CJK character.
>>> buffu8 = buffu.encode('utf8')
>>> buffu8
'Name\tAmount\r\nM\xc3\xbcller\t"\xe2\x82\xac1234,56"\r\nM\xc3\xb6ller\t"\xe2\x82\xac9876,54"\r\nKawasaki\t\xc2\xa53456.78\r\n\xd0\x98\xd0\x92\xd0\x90\xd0\x9d\xd0\x9e\xd0\x92\t"\xd0\xa05678,90"\r\n'
>>> x = file('csvtest2.u8', 'wb')
>>> x.write(buffu8)
>>> x.close()
>>> import csv
>>> rdr = csv.reader(file('csvtest2.u8', 'rb'), delimiter='\t')
>>> for row in rdr:
...     print row
...     print [x.decode('utf8') for x in row]
...
['Name', 'Amount']
[u'Name', u'Amount']
['M\xc3\xbcller', '\xe2\x82\xac1234,56']
[u'M\xfcller', u'\u20ac1234,56']
['M\xc3\xb6ller', '\xe2\x82\xac9876,54']
[u'M\xf6ller', u'\u20ac9876,54']
['Kawasaki', '\xc2\xa53456.78']
[u'Kawasaki', u'\xa53456.78']
['\xd0\x98\xd0\x92\xd0\x90\xd0\x9d\xd0\x9e\xd0\x92', '\xd0\xa05678,90']
[u'\u0418\u0412\u0410\u041d\u041e\u0412', u'\u04205678,90']
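The whole work-around condenses to a few lines on modern Python 3 (a
sketch; read_excel_unicode_text is a hypothetical helper, and the UTF-8
intermediate file becomes unnecessary there, since csv can take the
decoded text directly):

```python
import csv
import io

def read_excel_unicode_text(path):
    # Read Excel's UTF-16 "Unicode text" export and split it with csv.
    # On Python 2 you would re-encode to UTF-8 first, as in the session
    # above; Python 3's csv accepts the decoded text directly.
    with open(path, 'rb') as f:
        text = f.read().decode('utf-16')   # strips the BOM too
    return list(csv.reader(io.StringIO(text, newline=''), delimiter='\t'))
```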


Howzat?

Cheers,
John

Jul 18 '05 #7
