encode() question

s1 = "hello"
s2 = s1.encode("utf-8")

s1 = "an accented 'e': \xc3\xa9"
s2 = s1.encode("utf-8")

The last line produces the error:

---
Traceback (most recent call last):
File "test1.py", line 6, in ?
s2 = s1.encode("utf-8")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
17: ordinal not in range(128)
---

The error is a "decode" error, and as far as I can tell, decoding
happens when you convert a regular string to a unicode string. So, is
there an implicit conversion taking place from s1 to a unicode string
before encode() is called? By what mechanism?
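(For reference, those two bytes are the UTF-8 encoding of the character é. A quick check in Python 3 notation, where the old byte-oriented str is spelled bytes:)

```python
# "\xc3\xa9" is the two-byte UTF-8 encoding of U+00E9 ('é')
assert "\u00e9".encode("utf-8") == b"\xc3\xa9"
print(b"an accented 'e': \xc3\xa9".decode("utf-8"))
```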

Jul 31 '07 #1
On Tue, 31 Jul 2007 13:53:11 -0300, 7stud <bb**********@yahoo.com>
wrote:
> s1 = "hello"
> s2 = s1.encode("utf-8")
>
> s1 = "an accented 'e': \xc3\xa9"
> s2 = s1.encode("utf-8")
>
> The last line produces the error:
>
> ---
> Traceback (most recent call last):
> File "test1.py", line 6, in ?
> s2 = s1.encode("utf-8")
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
> 17: ordinal not in range(128)
> ---
>
> The error is a "decode" error, and as far as I can tell, decoding
> happens when you convert a regular string to a unicode string. So, is
> there an implicit conversion taking place from s1 to a unicode string
> before encode() is called? By what mechanism?
Converting from unicode characters into a string of bytes is the "encode"
operation: unicode.encode() -> str
Converting from a string of bytes to unicode characters is the "decode"
operation: str.decode() -> unicode
str.decode and unicode.encode should NOT exist, or at least issue a
warning (IMHO).
When you call str.encode, the encode operation requires a unicode
source, so the string is first decoded using the default encoding
(ascii), and that implicit decode step is what fails.
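Put another way, in Python 3 (where str holds unicode text and the old str is spelled bytes), only the two sensible directions exist; a minimal sketch:

```python
text = "h\u00e9llo"            # unicode text (Python 3 str)
data = text.encode("utf-8")    # encode: unicode -> bytes
back = data.decode("utf-8")    # decode: bytes -> unicode
assert back == text

# The mixed-up directions were removed entirely:
assert not hasattr(data, "encode")  # bytes has no .encode()
assert not hasattr(text, "decode")  # str has no .decode()
```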

--
Gabriel Genellina

Jul 31 '07 #2
On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
wrote:
> str.decode and unicode.encode should NOT exist, or at least issue a
> warning (IMHO).

Yes, that sounds like a good idea.

Jul 31 '07 #3
On Tue, 31 Jul 2007 10:45:26 -0700, 7stud wrote:
> On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
> wrote:
>> str.decode and unicode.encode should NOT exist, or at least issue a
>> warning (IMHO).
>
> Yes, that sounds like a good idea.
It sounds like a horrible idea, as those are the ones that are really
needed. One could argue about `str.encode` and `unicode.decode`. But there
are at least uses for `str.encode` like 'string_escape', 'hex', 'bz2',
'base64', etc.
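For what it's worth, in Python 3 those bytes-to-bytes transforms survive via the codecs module rather than as str.encode methods; a small sketch:

```python
import codecs

# hex and base64 are bytes-to-bytes codecs, reached via codecs.encode()
assert codecs.encode(b"abc", "hex_codec") == b"616263"
assert codecs.decode(b"616263", "hex_codec") == b"abc"

# the base64 codec appends a trailing newline
assert codecs.encode(b"abc", "base64_codec") == b"YWJj\n"

# bz2 round-trips as well
assert codecs.decode(codecs.encode(b"abc", "bz2_codec"), "bz2_codec") == b"abc"
```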

Ciao,
Marc 'BlackJack' Rintsch
Jul 31 '07 #4
>>> str.decode and unicode.encode should NOT exist, or at least issue a
>>> warning (IMHO).
>>
>> Yes, that sounds like a good idea.
>
> It sounds like a horrible idea as those are the ones that are really
> needed.

Correct.

> One could argue about `str.encode` and `unicode.decode`. But there are at
> least uses for `str.encode` like 'string_escape', 'hex', 'bz2', 'base64',
> etc.

Indeed, in Py3k, those will be gone.

Regards,
Martin
Jul 31 '07 #5
On Tue, 31 Jul 2007 16:41:48 -0300, Marc 'BlackJack' Rintsch
<bj****@gmx.net> wrote:

> On Tue, 31 Jul 2007 10:45:26 -0700, 7stud wrote:
>> On Jul 31, 11:18 am, "Gabriel Genellina" <gagsl-...@yahoo.com.ar>
>> wrote:
>>> str.decode and unicode.encode should NOT exist, or at least issue a
>>> warning (IMHO).
>>
>> Yes, that sounds like a good idea.
>
> It sounds like a horrible idea as those are the ones that are really
> needed.
Ouch! caffeine levels below critical threshold, I think.

--
Gabriel Genellina

Aug 2 '07 #6
On Tue, Jul 31, 2007 at 09:53:11AM -0700, 7stud wrote:
> s1 = "hello"
> s2 = s1.encode("utf-8")
>
> s1 = "an accented 'e': \xc3\xa9"
> s2 = s1.encode("utf-8")
>
> The last line produces the error:
>
> ---
> Traceback (most recent call last):
> File "test1.py", line 6, in ?
> s2 = s1.encode("utf-8")
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
> 17: ordinal not in range(128)
> ---
>
> The error is a "decode" error, and as far as I can tell, decoding
> happens when you convert a regular string to a unicode string. So, is
> there an implicit conversion taking place from s1 to a unicode string
> before encode() is called? By what mechanism?
Yep. You are trying to encode a string. The problem is that strings are
already encoded, so it generally makes no sense to call .encode() on
them.

Calling .encode() on a string can be handy if you want to convert its encoding.
In such a case, though, Python will first convert the string to Unicode.
To do that, it has to know how the string is encoded. Unless you tell it
otherwise, Python assumes the string is encoded in ascii. You had a byte
in there that was out of ascii's range...thus, the error. Python was
trying to decode the string, assumed it was ascii, but that didn't work.
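The fix is to name the real encoding yourself before re-encoding. A sketch in Python 3 notation (where the old Python 2 str is spelled bytes), reproducing both the error and the cure:

```python
raw = b"an accented 'e': \xc3\xa9"  # UTF-8 bytes, i.e. the old Python 2 str

# Decoding with the wrong codec reproduces the original error:
try:
    raw.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xc3 in position 17 ...

# Naming the real encoding works:
text = raw.decode("utf-8")
assert text.encode("utf-8") == raw
```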

This is all very confusing; I'd highly recommend reading this bit about
Unicode. It started me down the difficult road of actually understanding
what is going on here.

http://www.joelonsoftware.com/articles/Unicode.html
--
It's another Baseline Boulder Morning.
Aug 6 '07 #7