
Unicode/ascii encoding nightmare

I'm getting really annoyed with Python with regard to
Unicode/ASCII encoding problems.

The string below is the encoding of the Norwegian word "fødselsdag".
>>> s = 'f\xc3\x83\xc2\xb8dselsdag'
I stored the string as "fødselsdag", but somewhere in my code it got
translated into the mess above and I cannot get the original string
back. It cannot be printed in the console or written to a plain text file.
I've tried to convert it using
>>> s.encode('iso-8859-1')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)
>>> s.encode('utf-8')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)

And nothing helps. I cannot remember having these problems in earlier
versions of Python, and it's really annoying; even if it's somehow my own
fault, handling normal characters like this shouldn't cause this much
hassle. Searching Google for "codec can't decode byte" and
UnicodeDecodeError etc. produces a bunch of hits, so it's obvious I'm
not alone.

Any hints?

Nov 6 '06 #1
Thomas W wrote:
> The string below is the encoding of the Norwegian word "fødselsdag".
> s = 'f\xc3\x83\xc2\xb8dselsdag'
I'm not sure which encoding method you used to get the string above.
Here's the result of my playing with the string in IDLE:
>>> u1 = u'fødselsdag'
>>> u1
u'f\xf8dselsdag'
>>> s1 = u1.encode('utf-8')
>>> s1
'f\xc3\xb8dselsdag'
>>> u2 = s1.decode('utf-8')
>>> u2
u'f\xf8dselsdag'
>>> print u2
fødselsdag
Nov 6 '06 #2
Thomas W wrote:
I'm getting really annoyed with python in regards to
unicode/ascii-encoding problems.

The string below is the encoding of the Norwegian word "fødselsdag".
>>>s = 'f\xc3\x83\xc2\xb8dselsdag'

I stored the string as "fødselsdag" but somewhere in my code it got
translated into the mess above and I cannot get the original string
back. It cannot be printed in the console or written to a plain text file.
I've tried to convert it using
>>>s.encode('iso-8859-1')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)
>>>s.encode('utf-8')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)

And nothing helps. I cannot remember having these problems in earlier
versions of python and it's really annoying, even if it's my own fault
somehow, handling of normal characters like this shouldn't cause this
much hassle. Searching google for "codec can't decode byte" and
UnicodeDecodeError etc. produces a bunch of hits so it's obvious I'm
not alone.
You would want .decode() (which converts a byte string into a Unicode string),
not .encode() (which converts a Unicode string into a byte string). You get
UnicodeDecodeErrors even though you are trying to .encode() because whenever
Python is expecting a Unicode string but gets a byte string, it tries to decode
the byte string as 7-bit ASCII. If that fails, then it raises a UnicodeDecodeError.
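For instance, a minimal Python 2 sketch, assuming the bytes really are UTF-8:

>>> s = 'f\xc3\xb8dselsdag'        # UTF-8 bytes for the Norwegian word
>>> u = s.decode('utf-8')          # byte string -> Unicode string
>>> u
u'f\xf8dselsdag'
>>> u.encode('iso-8859-1')         # Unicode string -> Latin-1 bytes, e.g. for console output
'f\xf8dselsdag'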

However, I don't know of an encoding that takes u"fødselsdag" to
'f\xc3\x83\xc2\xb8dselsdag'.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Nov 6 '06 #3
Thomas W wrote:
I'm getting really annoyed with python in regards to
unicode/ascii-encoding problems.

The string below is the encoding of the Norwegian word "fødselsdag".
>s = 'f\xc3\x83\xc2\xb8dselsdag'
There is no such thing as "*the* encoding" of any given string.
>
I stored the string as "fødselsdag" but somewhere in my code it got
translated into the mess above and I cannot get the original string
back.
Somewhere in your code??? Can't you track through your code to see
where it is being changed? Failing that, can't you show us your code so
that we can help you?

I have guessed *what* you got, but *how* you got it boggles the mind:

The effect is the same as (decode from latin1 to Unicode, encode as
utf8) *TWICE*. That's how you change one byte in the original to *FOUR*
bytes in the "mess":

>>> orig = 'f\xf8dselsdag'
>>> orig.decode('latin1').encode('utf8')
'f\xc3\xb8dselsdag'
>>> orig.decode('latin1').encode('utf8').decode('latin1').encode('utf8')
'f\xc3\x83\xc2\xb8dselsdag'
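If that is what happened, the damage is reversible. A sketch that undoes each
round trip in turn (assuming the original text really was latin1-representable):

>>> mess = 'f\xc3\x83\xc2\xb8dselsdag'
>>> step1 = mess.decode('utf8').encode('latin1')    # undo the second round trip
>>> step1
'f\xc3\xb8dselsdag'
>>> step1.decode('utf8')                            # undo the first: back to the Unicode original
u'f\xf8dselsdag'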
It cannot be printed in the console or written to a plain text file.
Incorrect. *Any* string can be printed on the console or written to a
file. What you mean is that when you look at the output, it is not what
you want.
I've tried to convert it using
>s.encode('iso-8859-1')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)
encode is really meant for unicode objects. If applied to a str object,
the str object is first converted to unicode using the default codec
(typically ascii).

s.encode('iso-8859-1') is effectively
s.decode('ascii').encode('iso-8859-1'), and s.decode('ascii') fails for
the (obvious(?)) reason given.
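A quick demonstration of that (a sketch; the short strings are just examples):

>>> 'abc'.encode('iso-8859-1')          # pure ASCII: the implicit decode succeeds
'abc'
>>> 'f\xc3\xb8'.encode('iso-8859-1')    # non-ASCII byte: the implicit .decode('ascii') fails
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)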
>
>s.encode('utf-8')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)
Same story as for 'iso-8859-1'
>
And nothing helps. I cannot remember having these problems in earlier
versions of python
I would be very surprised if you couldn't reproduce your problem on any
2.n version of Python.
and it's really annoying, even if it's my own fault
somehow, handling of normal characters like this shouldn't cause this
much hassle. Searching google for "codec can't decode byte" and
UnicodeDecodeError etc. produces a bunch of hits so it's obvious I'm
not alone.

Any hints?
1. Read the Unicode howto: http://www.amk.ca/python/howto/unicode
2. Read the Python documentation on .decode() and .encode() carefully.
3. Show us your code so that we can help you avoid the double
conversion to utf8. Tell us what IDE you are using.
4. Tell us what you are trying to achieve. Note that if all you are
trying to do is read and write text in Norwegian (or any other language
that's representable in iso-8859-1 aka latin1), then you don't have to
do anything special at all in your code-- this is the good old "legacy"
way of doing things universally in vogue before Unicode was invented!

HTH,
John

Nov 6 '06 #4

Robert Kern wrote:
However, I don't know of an encoding that takes u"fødselsdag" to
'f\xc3\x83\xc2\xb8dselsdag'.
There isn't one.

C3 and C2 hint at UTF-8.
The fact that C3 and C2 are both present, plus the fact that one
non-ASCII byte has morphoploded into 4 bytes indicate a double whammy.

Cheers,
John

Nov 6 '06 #5
John Machin wrote:
The fact that C3 and C2 are both present, plus the fact that one
non-ASCII byte has morphoploded into 4 bytes indicate a double whammy.
Indeed...
>>x = u"fødselsdag"
x.encode('utf-8').decode('iso-8859-1').encode('utf-8')
'f\xc3\x83\xc2\xb8dselsdag'

Andrea
Nov 6 '06 #6
Thomas W wrote:
I'm getting really annoyed with python in regards to
unicode/ascii-encoding problems.

The string below is the encoding of the Norwegian word "fødselsdag".
>>>s = 'f\xc3\x83\xc2\xb8dselsdag'
Which encoding is this?
I stored the string as "fødselsdag" but somewhere in my code it got
You stored it where?
translated into the mess above and I cannot get the original string
back. It cannot be printed in the console or written to a plain text file.
I've tried to convert it using
>>>s.encode('iso-8859-1')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)
Note that "encode" on a string object is often an indication for an error.
The encoding direction (for "normal" encodings, not special things like
the "zlib" codec) is as follows:

encode: from Unicode
decode: to Unicode

(the encode method of strings first DEcodes the string with the default
encoding, which is normally ascii, then ENcodes it with the given encoding)
>>>s.encode('utf-8')
Traceback (most recent call last):
File "<interactive input>", line 1, in ?
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1:
ordinal not in range(128)

And nothing helps. I cannot remember having these problems in earlier
versions of python and it's really annoying, even if it's my own fault
somehow, handling of normal characters like this shouldn't cause this
much hassle. Searching google for "codec can't decode byte" and
UnicodeDecodeError etc. produces a bunch of hits so it's obvious I'm
not alone.
Unicode causes many problems if not used properly. If you want to use Unicode
strings, use them everywhere in your Python application, decode input as early
as possible, and encode output only before writing it to a file or another
program.
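A small sketch of that pattern (Python 2; the file names here are only illustrative):

import codecs

# decode at the input boundary: read Latin-1 text into Unicode objects
infile = codecs.open('names_latin1.txt', 'r', encoding='iso-8859-1')
names = [line.strip() for line in infile]     # a list of unicode strings
infile.close()

# ... work with unicode objects everywhere in between ...

# encode at the output boundary: write the same data out as UTF-8
outfile = codecs.open('names_utf8.txt', 'w', encoding='utf-8')
for name in names:
    outfile.write(name + u'\n')
outfile.close()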

Georg
Nov 6 '06 #7
Ok, I've cleaned up my code a bit and it seems as if I've
encoded/decoded myself into a corner ;-). My understanding of unicode
has room for improvement, that's for sure. I got some pointers, and the
initial code cleanup seems to have removed some of the strange results I
got, which several of you also pointed out.

Anyway, thanks for all your replies. I think I can get this thing up
and running with a bit more code tinkering. And I'll read up on some
unicode-docs as well. :-) Thanks again.

Thomas

Nov 6 '06 #8

Thomas W wrote:
Ok, I've cleaned up my code a bit and it seems as if I've
encoded/decoded myself into a corner ;-). My understanding of unicode
has room for improvement, that's for sure. I got some pointers, and the
initial code cleanup seems to have removed some of the strange results I
got, which several of you also pointed out.

Anyway, thanks for all your replies. I think I can get this thing up
and running with a bit more code tinkering. And I'll read up on some
unicode-docs as well. :-) Thanks again.
I strongly suggest that you read the docs *FIRST*, and don't "tinker"
at all.

HTH,
John

Nov 6 '06 #9

Andrea Griffini wrote:
John Machin wrote:
The fact that C3 and C2 are both present, plus the fact that one
non-ASCII byte has morphoploded into 4 bytes indicate a double whammy.

Indeed...
>>x = u"fødselsdag"
>>x.encode('utf-8').decode('iso-8859-1').encode('utf-8')
'f\xc3\x83\xc2\xb8dselsdag'
Indeed yourself. Have you ever considered reading posts in
chronological order, or reading all posts in a thread? It might help
you avoid writing posts with non-zero information content.

Cheers,
John

Nov 6 '06 #10
John Machin wrote:
Indeed yourself. Have you ever considered reading posts in
chronological order, or reading all posts in a thread?
That presumes that messages arrive in chronological order and transmissions are
instantaneous. Neither are true.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco

Nov 6 '06 #11
At Monday 6/11/2006 20:34, Robert Kern wrote:
>John Machin wrote:
Indeed yourself. Have you ever considered reading posts in
chronological order, or reading all posts in a thread?

That presumes that messages arrive in chronological order and
transmissions are
instantaneous. Neither are true.
Sometimes I even got the replies *before* the original post comes.
--
Gabriel Genellina
Softlab SRL

Nov 6 '06 #12

Gabriel Genellina wrote:
At Monday 6/11/2006 20:34, Robert Kern wrote:
John Machin wrote:
Indeed yourself. Have you ever considered reading posts in
chronological order, or reading all posts in a thread?
That presumes that messages arrive in chronological order and
transmissions are
instantaneous. Neither are true.

Sometimes I even got the replies *before* the original post comes.
What is in question is the likelihood that message B can appear before
message A, when both emanate from the same source, and B was sent about
7 minutes after A.

Nov 6 '06 #13
In article <11*********************@k70g2000cwa.googlegroups.com>,
John Machin <sj******@lexicon.net> wrote:
>
>Thomas W wrote:
>>Ok, I've cleaned up my code a bit and it seems as if I've
>>encoded/decoded myself into a corner ;-). My understanding of unicode
>>has room for improvement, that's for sure. I got some pointers, and the
>>initial code cleanup seems to have removed some of the strange results I
>>got, which several of you also pointed out.
>>
>>Anyway, thanks for all your replies. I think I can get this thing up
>>and running with a bit more code tinkering. And I'll read up on some
>>unicode-docs as well. :-) Thanks again.
>
>I strongly suggest that you read the docs *FIRST*, and don't "tinker"
>at all.

Does
<URL: http://www.unixreview.com/documents/...1d/ur0610d.htm >
help?
Nov 7 '06 #14

Cameron Laird wrote:
In article <11*********************@k70g2000cwa.googlegroups.com>,
John Machin <sj******@lexicon.net> wrote:

Thomas W wrote:
Ok, I've cleaned up my code a bit and it seems as if I've
encoded/decoded myself into a corner ;-). My understanding of unicode
has room for improvement, that's for sure. I got some pointers, and the
initial code cleanup seems to have removed some of the strange results I
got, which several of you also pointed out.

Anyway, thanks for all your replies. I think I can get this thing up
and running with a bit more code tinkering. And I'll read up on some
unicode-docs as well. :-) Thanks again.
I strongly suggest that you read the docs *FIRST*, and don't "tinker"
at all.
.
.
.
Does
<URL: http://www.unixreview.com/documents/...1d/ur0610d.htm >
help?
Hi Cameron,
Yes.
Sorry for short reply -- gotta run to bookshop :-)
Cheers,
John

Nov 7 '06 #15
"John Machin" <sj******@lexicon.netwrote:

8<---------------------------------------
I strongly suggest that you read the docs *FIRST*, and don't "tinker"
at all.

HTH,
John
This is *good* advice - it's unlikely to be followed though, as the OP is prolly
just like most of us - you unpack the stuff out of the box and start assembling
it, and only towards the end, when it won't fit together, do you read the manual
to see where you went wrong...

- Hendrik

Nov 7 '06 #16
John Machin wrote:
Indeed yourself.
What does the above mean?
Have you ever considered reading posts in
chronological order, or reading all posts in a thread?
I do not think people read posts in chronological order;
it simply doesn't make sense. I also don't think many
read threads completely, but only until the issue is
clear or boredom kicks in.

Your nice "double whammy" post was enough to clarify
what happened to the OP, I just wanted to make a bit
more explicit what you meant; my poor english also
made me understand that you were just "suspecting"
such an error, so I verified and posted the result.

That your "suspect" was a sarcastic remark could be
clear only when reading the timewise "former" reply
that however happened to be lower in the thread tree
in my newsreader; fact that pushed it into the "not
worth reading" area.
It might help
you avoid writing posts with non-zero information content.
Why should I *avoid* writing posts with *non-zero*
information content? A double whammy on negation, or is it
still my poor English kicking in? :-)

Suppose you hadn't posted the double whammy message,
and suppose someone else had posted it seven minutes later
than your other post. I suppose that in this case
the message would be zero-content noise (and not
the precious pearl of wisdom it is because it
comes from you).
Cheers,
John
Andrea
Nov 7 '06 #17
Thomas W wrote:
Ok, I've cleaned up my code abit and it seems as if I've
encoded/decoded myself into a corner ;-).
Yes, you may encounter situations where you have some string, you
"decode" it (i.e. convert it to Unicode) using one character encoding,
but then you later "encode" it (i.e. convert it back to a plain string)
using a different character encoding. This isn't a problem on its own,
but if you then take that plain string and attempt to convert it to
Unicode again, using the same input encoding as before, you'll be
misinterpreting the contents of the string.

This "round tripping" of character data is typical of Web applications:
you emit a Web page in one encoding, the fields in the forms are
represented in that encoding, and upon form submission you receive this
data. If you then process the form data using a different encoding,
you're misinterpreting what you previously emitted, and when you emit
this data again, you compound the error.
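As a sketch of how the error compounds (the encodings are only an illustration,
but they do reproduce the string from the original post):

page_text = u'f\xf8dselsdag'                 # the Unicode form of the OP's word
emitted = page_text.encode('utf-8')          # page sent out as UTF-8
received = emitted.decode('iso-8859-1')      # form data wrongly decoded as Latin-1
re_emitted = received.encode('utf-8')        # emitted again as UTF-8
print repr(re_emitted)                       # 'f\xc3\x83\xc2\xb8dselsdag'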
My understanding of unicode has room for improvement, that's for sure. I got some pointers
and initial code-cleanup seems to have removed some of the strange results I got, which
several of you also pointed out.
Converting to Unicode for processing is a "best practice" that you seem
to have adopted, but it's vital that you use character encodings
consistently. One trick, that can be used to mitigate situations where
you have less control over the encoding of data given to you, is to
attempt to convert to Unicode using an encoding that is "conservative"
with regard to acceptable combinations of byte sequences, such as
UTF-8; if such a conversion fails, it's quite possible that another
encoding applies, such as ISO-8859-1, and you can try that. Since
ISO-8859-1 is a "liberal" encoding, in the sense that any byte value
or combination of byte values is acceptable, it should only be used as
a last resort.
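A sketch of that fallback (the helper name here is made up for illustration):

def to_unicode(raw):
    # UTF-8 is strict about which byte sequences it accepts, so try it first
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        # ISO-8859-1 accepts any byte values, so keep it as the last resort
        return raw.decode('iso-8859-1')

print repr(to_unicode('f\xc3\xb8dselsdag'))   # valid UTF-8 -> u'f\xf8dselsdag'
print repr(to_unicode('f\xf8dselsdag'))       # not UTF-8, falls back -> u'f\xf8dselsdag'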

However, it's best to have a high level of control over character
encodings rather than using tricks to avoid considering representation
issues carefully.

Paul

Nov 7 '06 #18
On Tue, 2006-11-07 at 08:10 +0200, Hendrik van Rooyen wrote:
"John Machin" <sj******@lexicon.netwrote:

8<---------------------------------------
I strongly suggest that you read the docs *FIRST*, and don't "tinker"
at all.
This is *good* advice - it's unlikely to be followed though, as the OP is prolly
just like most of us - you unpack the stuff out of the box and start assembling
it, and only towards the end, when it won't fit together, do you read the manual
to see where you went wrong...
I fall right into this camp(fire). I'm always amazed and awed at people
who actually read the docs *thoroughly* before starting. I know some
people do but frankly, unless it's a step-by-step tutorial, I rarely
read the docs beyond getting a basic understanding of what something
does before I start "tinkering".

I've always been a firm believer in the Chinese proverb:

I hear and I forget
I see and I remember
I do and I understand

Of course, I usually just skip straight to the third step and try to
work backwards as needed. This usually works pretty well but when it
doesn't it fails horribly. Unfortunately (for me), working from step
one rarely works at all, so that's the boat I'm stuck in.

I've always been a bit miffed at the RTFM crowd (and somewhat jealous, I
admit). I *do* RTFM, but as often as not the fine manual confuses as
much as clarifies. I'm not convinced this is the result of poor
documentation so much as that I personally have a different mental
approach to problem-solving than the others who find documentation
universally enlightening. I also suspect that I'm not alone in my
approach and that the RTFM crowd is more than a little close-minded
about how others might think about and approach solving problems and
understanding concepts.

Also, much documentation (including the Python docs) tends to be
reference-manual style. This is great if you *already* understand the
problem and just need details, but does about as much for
*understanding* as a dictionary does for learning a language. When I'm
perusing the Python reference manual, I usually find that 10 lines of
example code are worth 1000 lines of function descriptions and
cross-references.

Just my $0.02.

Regards,
Cliff
Nov 7 '06 #19
On Mon, 2006-11-06 at 15:47 -0800, John Machin wrote:
Gabriel Genellina wrote:
At Monday 6/11/2006 20:34, Robert Kern wrote:
>John Machin wrote:
Indeed yourself. Have you ever considered reading posts in
chronological order, or reading all posts in a thread?
>
>That presumes that messages arrive in chronological order and
>transmissions are
>instantaneous. Neither are true.
Sometimes I even got the replies *before* the original post comes.

What is in question is the likelihood that message B can appear before
message A, when both emanate from the same source, and B was sent about
7 minutes after A.
Usenet, email, usenet/email gateways, internet in general... all in
all, pretty likely. I've often seen replies to my posts long before my
own post shows up. In fact, I've seen posts not show up for several
hours.

Regards,
Cliff

Nov 8 '06 #20
