(beating a dead horse)
Is it too ridiculous to suggest that it'd be nice
if the unicode object were to remember the
encoding of the string it was decoded from?
So that it's feasible to calculate the number
of bytes that make up the unicode code points.
# U+270C
# 11100010 10011100 10001100
buf = "\xE2\x9C\x8C"
u = buf.decode('UTF-8')
# ... later ...
u.bytes()  # -> 3
(goes through each code point and calculates
the number of bytes that make up the character
according to the encoding)
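In modern Python 3 terms (where str is the unicode type), the proposal could be sketched as a str subclass that remembers its source encoding. The class name EncodedStr and its bytes() method are made-up names for illustration, not part of any real API:

```python
# Hypothetical sketch of the proposal: a str subclass that remembers
# the encoding it was decoded from, so it can report a byte count.
class EncodedStr(str):
    def __new__(cls, raw, encoding):
        self = super().__new__(cls, raw.decode(encoding))
        self.encoding = encoding
        return self

    def bytes(self):
        # Re-encode with the remembered encoding and count the bytes.
        return len(self.encode(self.encoding))

u = EncodedStr(b"\xE2\x9C\x8C", "UTF-8")  # U+270C
print(u.bytes())  # -> 3
```

This only answers the question for the one encoding the object happens to remember, which is exactly the weakness the replies below point out.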
willie <wi****@jamots.com> wrote:
Is it too ridiculous to suggest that it'd be nice
if the unicode object were to remember the
encoding of the string it was decoded from?
So that it's feasible to calculate the number
of bytes that make up the unicode code points.
So what sort of output do you expect from this:
>>> a = '\xc9'.decode('latin1')
>>> b = '\xc3\x89'.decode('utf8')
>>> print (a+b).bytes()
???
And if you say that's an unfair question because you expected all the byte
strings to be using the same encoding then there's no point storing it on
every unicode object; you might as well store it once globally.
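The trap can be shown concretely in Python 3 syntax: both decodes below yield the same one-character string U+00C9, yet they came from different byte counts, so no single remembered encoding can answer for the concatenation.

```python
# Two byte strings that decode to the same character, U+00C9 (E acute),
# from different encodings with different source byte lengths.
a = b"\xc9".decode("latin-1")    # 1 source byte
b = b"\xc3\x89".decode("utf-8")  # 2 source bytes

assert a == b == "\u00c9"

# For a+b there is no single "original" byte count: it depends
# entirely on which encoding you re-encode with.
print(len((a + b).encode("latin-1")))  # -> 2
print(len((a + b).encode("utf-8")))    # -> 4
```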
willie <wi****@jamots.com> writes:
# U+270C
# 11100010 10011100 10001100
buf = "\xE2\x9C\x8C"
u = buf.decode('UTF-8')
# ... later ...
u.bytes()  # -> 3
(goes through each code point and calculates
the number of bytes that make up the character
according to the encoding)
Duncan Booth explains why that doesn't work. But I don't see any big
problem with a byte count function that lets you specify an encoding:
u = buf.decode('UTF-8')
# ... later ...
u.bytes('UTF-8')  # -> 3
u.bytes('UCS-4')  # -> 4
That avoids creating a new encoded string in memory, and for some
encodings, avoids having to scan the unicode string to add up the
lengths.
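The "no scan needed" case is the fixed-width one: in UCS-4 every code point is 4 bytes, so the answer is just 4 * len(u). A hedged sketch of such a byte-count helper (the function name and the width table are assumptions for illustration):

```python
# Byte-count helper that special-cases fixed-width encodings.
# For a fixed-width encoding the count is width * len(u), no scan;
# otherwise fall back to encoding per character and summing lengths,
# without keeping the encoded string around.
FIXED_WIDTH = {"ucs-4": 4, "utf-32-le": 4, "utf-32-be": 4}

def byte_count(u, encoding):
    width = FIXED_WIDTH.get(encoding.lower())
    if width is not None:
        return width * len(u)
    return sum(len(c.encode(encoding)) for c in u)

u = b"\xE2\x9C\x8C".decode("utf-8")  # U+270C
print(byte_count(u, "utf-8"))  # -> 3
print(byte_count(u, "ucs-4"))  # -> 4
```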
Paul Rubin wrote:
Duncan Booth explains why that doesn't work. But I don't see any big
problem with a byte count function that lets you specify an encoding:
u = buf.decode('UTF-8')
# ... later ...
u.bytes('UTF-8')  # -> 3
u.bytes('UCS-4')  # -> 4
That avoids creating a new encoded string in memory, and for some
encodings, avoids having to scan the unicode string to add up the
lengths.
It requires a fairly large change to code and API for a relatively
uncommon problem. How often do you need to know how many bytes an
encoded Unicode string takes up without needing the encoded string itself?
Leif K-Brooks <eu*****@ecritters.biz> writes:
It requires a fairly large change to code and API for a relatively
uncommon problem. How often do you need to know how many bytes an
encoded Unicode string takes up without needing the encoded string
itself?
Shrug. I don't see a real large change--the code would just check for
an optional arg and process accordingly. I don't know if the issue
comes up often enough to be worth making such accommodations for. I do
know that we had an extensive newsgroup thread about it, from which
this discussion came, but I haven't paid that much attention.
willie wrote:
(beating a dead horse)
Is it too ridiculous to suggest that it'd be nice
if the unicode object were to remember the
encoding of the string it was decoded from?
Where it's been is irrelevant. Where it's going to is what matters.
So that it's feasible to calculate the number
of bytes that make up the unicode code points.
# U+270C
# 11100010 10011100 10001100
buf = "\xE2\x9C\x8C"
u = buf.decode('UTF-8')
# ... later ...
u.bytes()  # -> 3
(goes through each code point and calculates
the number of bytes that make up the character
according to the encoding)
Suppose the unicode object was decoded using some encoding other than
the one that's going to be used to store the info in the database:
>>> sg = '\xc9\xb5\xb9\xcf'
>>> len(sg)
4
>>> u = sg.decode('gb2312')
later:
u.bytes()  # -> 4
but
>>> len(u.encode('utf8'))
6
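The same mismatch in Python 3 syntax: the four gb2312 bytes decode to two CJK characters, and a remembered gb2312 byte count of 4 says nothing about the 6 bytes those characters need in UTF-8.

```python
# Four gb2312 bytes decode to two CJK characters...
sg = b"\xc9\xb5\xb9\xcf"
u = sg.decode("gb2312")
print(len(sg))  # -> 4
print(len(u))   # -> 2

# ...but the same two characters need six bytes in UTF-8,
# so "the" byte count depends on the target encoding.
print(len(u.encode("utf-8")))  # -> 6
```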
and by the way, what about the memory overhead of storing the name of
the encoding (in the above case 7 (6 + overhead))?
What would u"abcdef".bytes() produce? An exception?
HTH,
John
Paul Rubin wrote:
Leif K-Brooks <eu*****@ecritters.biz> writes:
It requires a fairly large change to code and API for a relatively
uncommon problem. How often do you need to know how many bytes an
encoded Unicode string takes up without needing the encoded string
itself?
Shrug. I don't see a real large change--the code would just check for
an optional arg and process accordingly. I don't know if the issue
comes up often enough to be worth making such accommodations for. I do
know that we had an extensive newsgroup thread about it, from which
this discussion came, but I haven't paid that much attention.
Actually, what Willie was concerned about was some cockamamie DBMS
which required to be fed Unicode, which it encoded as UTF-8, but
silently truncated if it was more than the n in varchar(n) ... or
something like that.
So all he needs is a boolean result: u.willitfit(encoding, width)
This can of course be optimised with simple early-loop-exit tests:
if n_bytes_so_far + n_remaining_uchars > width: return False
elif n_bytes_so_far + n_remaining_uchars * M <= width: return True
# where M is the maximum #bytes per Unicode char for the encoding
that's being used.
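A self-contained sketch of such a predicate, following the early-exit tests above (the function name matches the post; the max-bytes-per-char table is an assumption, and the fallback does an exact per-character count):

```python
# will_it_fit: does u, encoded with `encoding`, take at most `width` bytes?
# Assumed worst-case bytes per character for a few encodings.
MAX_BYTES_PER_CHAR = {"utf-8": 4, "utf-16-le": 4, "latin-1": 1}

def will_it_fit(u, encoding, width):
    n_bytes_so_far = 0
    m = MAX_BYTES_PER_CHAR.get(encoding, 4)
    for i, c in enumerate(u):
        n_remaining = len(u) - i
        if n_bytes_so_far + n_remaining > width:
            return False  # even at 1 byte/char it can't fit
        if n_bytes_so_far + n_remaining * m <= width:
            return True   # even the worst case fits
        n_bytes_so_far += len(c.encode(encoding))
    return n_bytes_so_far <= width

# U+270C is 3 bytes in UTF-8, so ten of them need exactly 30 bytes.
print(will_it_fit("\u270c" * 10, "utf-8", 30))  # -> True
print(will_it_fit("\u270c" * 10, "utf-8", 29))  # -> False
```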
Tell you what, why don't you and Willie get together and write a PEP?
Cheers,
John
"John Machin" <sj******@lexicon.net> writes:
Actually, what Willie was concerned about was some cockamamie DBMS
which required to be fed Unicode, which it encoded as UTF-8,
Yeah, I remember that.
Tell you what, why don't you and Willie get together and write a PEP?
If enough people care about the problem, I'd say just submit a code
patch. I haven't needed it myself, but I haven't (so far) had to deal
with unicode that often. It's a reasonably logical thing to want.
Imagine if the normal length(string) function required copying the
string around.
Paul Rubin wrote:
"John Machin" <sj******@lexicon.net> writes:
Actually, what Willie was concerned about was some cockamamie DBMS
which required to be fed Unicode, which it encoded as UTF-8,
Yeah, I remember that.
Tell you what, why don't you and Willie get together and write a PEP?
If enough people care about the problem, I'd say just submit a code
patch. I haven't needed it myself, but I haven't (so far) had to deal
with unicode that often. It's a reasonably logical thing to want.
Imagine if the normal length(string) function required copying the
string around.
Almost as bad: just imagine a language that had a normal strlen(string)
function that required mucking all the way through the string until you
hit some cockamamie in-band can't-happen-elsewhere sentinel.
Cheers,
John
On Mon, 25 Sep 2006 00:45:29 -0700, Paul Rubin wrote:
willie <wi****@jamots.com> writes:
# U+270C
# 11100010 10011100 10001100
buf = "\xE2\x9C\x8C"
u = buf.decode('UTF-8')
# ... later ...
u.bytes()  # -> 3
(goes through each code point and calculates the number of bytes
that make up the character according to the encoding)
Duncan Booth explains why that doesn't work. But I don't see any big
problem with a byte count function that lets you specify an encoding:
u = buf.decode('UTF-8')
# ... later ...
u.bytes('UTF-8')  # -> 3
u.bytes('UCS-4')  # -> 4
That avoids creating a new encoded string in memory, and for some
encodings, avoids having to scan the unicode string to add up the
lengths.
Unless I'm misunderstanding something, your bytes code would have to
perform exactly the same algorithmic calculations as converting the
encoded string in the first place, except it doesn't need to store the
newly encoded string, merely the number of bytes of each character.
Here is a bit of pseudo-code that might do what you want:
def bytes(unistring, encoding):
    length = 0
    for c in unistring:
        length += len(c.encode(encoding))
    return length
At the cost of some speed, you can avoid storing the entire encoded string
in memory, which might be what you want if you are dealing with truly
enormous unicode strings.
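As a sanity check, the per-character sum agrees with encoding the whole string at once, at least for stateless encodings like UTF-8 (stateful codecs such as ISO-2022 would not decompose this way). A quick sketch:

```python
def bytes_len(unistring, encoding):
    # Sum per-character encoded lengths instead of materialising
    # the whole encoded string in memory.
    return sum(len(c.encode(encoding)) for c in unistring)

# Mix of 1-, 2- and 3-byte UTF-8 characters.
s = "caf\xe9 \u270c \u4e2d\u6587"
assert bytes_len(s, "utf-8") == len(s.encode("utf-8"))
print(bytes_len(s, "utf-8"))  # -> 16
```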
Alternatively, instead of calling encode() on each character, you can
write a function (presumably in C for speed) that does the exact same
thing as encode, but without storing the encoded characters, merely adding
their lengths. Now you have code duplication, which is usually a bad idea.
If for no other reason, some poor schmuck has to maintain them both! (And
I bet it won't be Willie, for all his enthusiasm for the idea.)
This whole question seems to me like an awful example of premature
optimization. Your computer has probably got well in excess of 100MB, and
you're worried about duplicating a few hundred or thousand (or even
hundred thousand) bytes for a few milliseconds (just long enough to grab
the length)?
--
Steven D'Aprano