Bytes IT Community

encode() question

s1 = "hello"
s2 = s1.encode("utf-8")

s1 = "an accented 'e': \xc3\xa9"
s2 = s1.encode("utf-8")

The last line produces the error:

---
Traceback (most recent call last):
File "test1.py", line 6, in ?
s2 = s1.encode("utf-8")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
17: ordinal not in range(128)
---

The error is a "decode" error, and as far as I can tell, decoding
happens when you convert a regular string to a unicode string. So, is
there an implicit conversion taking place from s1 to a unicode string
before encode() is called? By what mechanism?

Jul 31 '07 #1
6 Replies


On Tue, 31 Jul 2007 13:53:11 -0300, 7stud <bb**********@yahoo.com>
wrote:
> s1 = "an accented 'e': \xc3\xa9"
> s2 = s1.encode("utf-8")
> [...]
> The error is a "decode" error, and as far as I can tell, decoding
> happens when you convert a regular string to a unicode string. So, is
> there an implicit conversion taking place from s1 to a unicode string
> before encode() is called? By what mechanism?
Converting from unicode characters into a string of bytes is the "encode"
operation: unicode.encode() -> str
Converting from a string of bytes to unicode characters is the "decode"
operation: str.decode() -> unicode
str.decode and unicode.encode should NOT exist, or at least issue a
warning (IMHO).
When you try to do str.encode, as the encode operation requires a unicode
source, the string is first decoded using the default encoding - and that
fails.
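In Python 3 the two directions are explicit, which makes the failure above easy to reproduce step by step. A minimal sketch (assuming Python 3, where the old byte-oriented str is written b"..."):

```python
# The byte string from the original post, as Python 2's str was.
raw = b"an accented 'e': \xc3\xa9"

# Step 1: what Python 2 did implicitly before str.encode could run -
# decode the bytes with the default 'ascii' codec. It fails on 0xc3.
try:
    raw.decode("ascii")
except UnicodeDecodeError as e:
    print(e)  # the same 'ascii' codec error as the traceback above

# Step 2: decoding with the real codec first makes encode() work.
text = raw.decode("utf-8")          # text string with the accented e
utf8_again = text.encode("utf-8")   # back to the same bytes
```

The implicit ascii decode in step 1 is exactly the hidden conversion the original poster was asking about.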

--
Gabriel Genellina

Jul 31 '07 #2

On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
wrote:
> str.decode and unicode.encode should NOT exist, or at least issue a
> warning (IMHO).
Yes, that sounds like a good idea.

Jul 31 '07 #3

On Tue, 31 Jul 2007 10:45:26 -0700, 7stud wrote:
On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
wrote:
>> str.decode and unicode.encode should NOT exist, or at least issue a
>> warning (IMHO).
>
> Yes, that sounds like a good idea.
It sounds like a horrible idea, as those are the ones that are really
needed. One could argue about `str.encode` and `unicode.decode`. But there
are at least uses for `str.encode` like 'string-escape', 'hex', 'bz2',
'base64' etc.
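Those byte-to-byte codecs still exist; a sketch of reaching them through the codecs module (assuming Python 3, where they are no longer available as str.encode arguments):

```python
import codecs

# In Python 2 these were reachable as str.encode('base64') etc.;
# in Python 3 the same byte-to-byte codecs survive via codecs.encode.
data = b"hello"
b64 = codecs.encode(data, "base64")  # base64-encoded bytes
hx = codecs.encode(data, "hex")      # hex-encoded bytes

# Both round-trip through codecs.decode.
assert codecs.decode(b64, "base64") == data
assert codecs.decode(hx, "hex") == data
```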

Ciao,
Marc 'BlackJack' Rintsch
Jul 31 '07 #4

>>> str.decode and unicode.encode should NOT exist, or at least issue a
>>> warning (IMHO).
>>
>> Yes, that sounds like a good idea.
>
> It sounds like a horrible idea as those are the ones that are really
> needed.

Correct.

> One could argue about `str.encode` and `unicode.decode`. But there are
> at least uses for `str.encode` like 'string-escape', 'hex', 'bz2',
> 'base64' etc.

Indeed, in Py3k, those will be gone.
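A sketch of the Py3k behaviour Martin describes, where the str/bytes split leaves only one direction per type (assuming Python 3):

```python
# In Python 3 there is only bytes.decode and str.encode:
s = "an accented 'e': \xe9"   # a text string
b = s.encode("utf-8")          # text -> bytes

assert b.decode("utf-8") == s  # bytes -> text round-trips

# The problematic methods from the discussion are simply gone:
assert not hasattr(b, "encode")  # bytes cannot be encoded again
assert not hasattr(s, "decode")  # str cannot be decoded
```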

Regards,
Martin
Jul 31 '07 #5

On Tue, 31 Jul 2007 16:41:48 -0300, Marc 'BlackJack' Rintsch
<bj****@gmx.net> wrote:
> On Tue, 31 Jul 2007 10:45:26 -0700, 7stud wrote:
>> On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
>> wrote:
>>> str.decode and unicode.encode should NOT exist, or at least issue a
>>> warning (IMHO).
>>
>> Yes, that sounds like a good idea.
>
> It sounds like a horrible idea as those are the ones that are really
> needed.
Ouch! Caffeine levels below critical threshold, I think.

--
Gabriel Genellina

Aug 2 '07 #6

On Tue, Jul 31, 2007 at 09:53:11AM -0700, 7stud wrote:
> s1 = "an accented 'e': \xc3\xa9"
> s2 = s1.encode("utf-8")
> [...]
> The error is a "decode" error, and as far as I can tell, decoding
> happens when you convert a regular string to a unicode string. So, is
> there an implicit conversion taking place from s1 to a unicode string
> before encode() is called? By what mechanism?
Yep. You are trying to encode a string. The problem is that strings are
already encoded, so it generally makes no sense to call .encode() on
them.

.encode()ing a string can be handy if you want to convert its encoding.
In such a case, though, Python will first convert the string to Unicode.
To do that, it has to know how the string is encoded. Unless you tell it
otherwise, Python assumes the string is encoded in ASCII. You had a byte
in there that was out of ASCII's range... thus, the error. Python was
trying to decode the string, assumed it was ASCII, but that didn't work.
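The "convert its encoding" case can be done with the two explicit steps this reply describes: decode with the real source encoding, then encode to the target (a Python 3 sketch; latin-1 as the target is just an example):

```python
# UTF-8 bytes for the accented e, as in the original post.
utf8_bytes = b"an accented 'e': \xc3\xa9"

# Decode with the encoding the bytes are actually in,
# then encode to the desired target encoding.
latin1_bytes = utf8_bytes.decode("utf-8").encode("latin-1")
```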

This is all very confusing; I'd highly recommend reading this bit about
Unicode. It started me down the difficult road of actually understanding
what is going on here.

http://www.joelonsoftware.com/articles/Unicode.html
--
It's another Baseline Boulder Morning.
Aug 6 '07 #7

This discussion thread is closed
