Bytes | Software Development & Data Engineering Community
My Big Dict.

Greetings,

(do excuse the possibly comical subject text)

I need advice on how I can convert a text db into a dict. Here is an
example of what I need done.

some example data lines in the text db go as follows:

CODE1!DATA1 DATA2, DATA3
CODE2!DATA1, DATA2 DATA3

As you can see, the lines are dynamic and the data are not alike; they
differ in permission values (but that's obvious in any similar situation)

Any idea on how I can convert 20,000+ lines of the above into the following
protocol for use in my code?:

TXTDB = {'CODE1': 'DATA1 DATA2, DATA3', 'CODE2': 'DATA1, DATA2 DATA3'}

I was thinking of using AWK or something similar, but I just wanted to
check with the list for any faster/more efficient hacks in Python for
such a task.

Thanks.

-- Xavier.

oderint dum mutuant

Jul 18 '05 #1
Hello,

On Wed, 2 Jul 2003 00:13:26 -0400, Xavier wrote:
> I need advice on how I can convert a text db into a dict. Here is an
> example of what I need done.
<snip>
> Any idea on how I can convert 20,000+ lines of the above into the
> following protocol for use in my code?:
>
> TXTDB = {'CODE1': 'DATA1 DATA2, DATA3', 'CODE2': 'DATA1, DATA2 DATA3'}
If your data is in a string you can use a regular expression to parse
each line, then the findall method returns a list of tuples containing
the key and the value of each item. Finally the dict class can turn this
list into a dict. For example:

data_re = re.compile(r"^(\w+)!(.*)", re.MULTILINE)

bigdict = dict(data_re.findall(data))

On my computer the second line takes between 7 and 8 seconds to parse
100,000 lines.

Try this:

------------------------------
import re
import time

N = 100000

print "Initialisation..."
data = "".join(["CODE%d!DATA%d_1, DATA%d_2, DATA%d_3\n"%(i,i,i,i)
                for i in range(N)])

data_re = re.compile(r"^(\w+)!(.*)", re.MULTILINE)

print "Parsing..."
start = time.time()
bigdict = dict(data_re.findall(data))
stop = time.time()

print "%s items parsed in %s seconds"%(len(bigdict), stop-start)
------------------------------


--

(o_ Christophe Delord __o
//\ http://christophe.delord.free.fr/ _`\<,_
V_/_ mailto:ch***************@free.fr (_)/ (_)
Jul 18 '05 #2
>>>>> "Russell" == Russell Reagan <rr*****@attbi.com> writes:

drs> f = open('somefile.txt')
drs> d = {}
drs> l = f.readlines()
drs> for i in l:
drs>     a,b = i.split('!')
drs>     d[a] = b.strip()
I would make one minor modification of this. If the file were *really
long*, you could run into troubles trying to hold it in memory. I
find the following a little cleaner (with python 2.2), and doesn't
require putting the whole file in memory. A file instance is an
iterator (http://www.python.org/doc/2.2.1/whatsnew/node4.html) which
will call readline as needed:

d = {}
for line in file('sometext.dat'):
    key,val = line.split('!')
    d[key] = val.strip()

Or if you are not worried about putting it in memory, you can use list
comprehensions for speed

d = dict([ line.split('!') for line in file('somefile.text')])

Russell> I have just started learning Python, and I have never
Russell> used dictionaries in Python, and despite the fact that
Russell> you used mostly non-descriptive variable names, I can
Russell> still read your code perfectly and know exactly what it
Russell> does. I think I could use dictionaries now, just from
Russell> looking at your code snippet. Python rules :-)

Truly.

JDH


Jul 18 '05 #3
> "Christophe Delord" <ch***************@free.fr> wrote in message
news:20030702073735.40293ba2.ch***************@free.fr...
Hello,

On Wed, 2 Jul 2003 00:13:26 -0400, Xavier wrote:
<snip>

If your data is in a string you can use a regular expression to parse
each line, then the findall method returns a list of tuples containing
the key and the value of each item. Finally the dict class can turn this
list into a dict. For example:


and you can kill a fly with a sledgehammer. why not

f = open('somefile.txt')
d = {}
l = f.readlines()
for i in l:
    a,b = i.split('!')
    d[a] = b.strip()

or am i missing something obvious? (b/t/w the above parsed 20000+ lines on
a celeron 500 in less than a second.)


Your code looks good Christophe. Just two little things to be aware of:
1) if you use split like this, then each line must contain one and only one
'!', which means (in particular) that empty lines will bomb, and also data
must not contain any '!' or else you'll get an exception such as
"ValueError: unpack list of wrong size". If your data may contain '!',
then consider slicing up each line in a different way.
2) if your file is really huge, then you may want to fill up your dictionary
as you're reading the file, instead of reading everything in a list and then
building your dictionary (hence using up twice the memory).
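For what it's worth, a minimal sketch that addresses both caveats (hedged: `somefile.txt` is an assumed file name, and this uses the optional maxsplit argument of split rather than 2.2-era idioms), filling the dict while reading:

```python
def lines_to_dict(lines):
    # Caveat 1: skip lines without '!' so empty lines don't bomb,
    # and use maxsplit=1 so any extra '!' stays inside the value.
    # Caveat 2: build the dict one line at a time instead of
    # reading the whole file into a list first.
    d = {}
    for line in lines:
        if '!' not in line:
            continue
        key, val = line.split('!', 1)
        d[key] = val.strip()
    return d

# usage sketch: pass the file object itself, which iterates line by line
# with open('somefile.txt') as f:
#     TXTDB = lines_to_dict(f)
```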

But apart from these details, I agree with Christophe that this is the way
to go.

Aurélien
Jul 18 '05 #4
"Aurélien Géron" <ag****@HOHOHOHOvideotron.ca> wrote in message news:<bd***********@biggoron.nerim.net>...
"drs" wrote...
"Christophe Delord" <ch***************@free.fr> wrote in message
news:20030702073735.40293ba2.ch***************@fre e.fr...
Hello,

On Wed, 2 Jul 2003 00:13:26 -0400, Xavier wrote:
<snip>
> I need advice on how I can convert a text db into a dict. Here is an
> example of what I need done.
>
> some example data lines in the text db goes as follows:
>
> CODE1!DATA1 DATA2, DATA3
> CODE2!DATA1, DATA2 DATA3
<snip>
> Any idea on how I can convert 20,000+ lines of the above into the
> following protocol for use in my code?:
>
> TXTDB = {'CODE1': 'DATA1 DATA2, DATA3', 'CODE2': 'DATA1, DATA2 DATA3'}
>

If your data is in a string you can use a regular expression to parse
each line, then the findall method returns a list of tuples containing
the key and the value of each item. Finally the dict class can turn this
list into a dict. For example:
<example snipped>
and you can kill a fly with a sledgehammer. why not

f = open('somefile.txt')
d = {}
l = f.readlines()
for i in l:
a,b = i.split('!')
d[a] = b.strip()
<snip>
Your code looks good Christophe. Just two little things to be aware of:
I think I'm right in saying Christophe's approach was using the 're'
module, which has been snipped, whereas the approach was the above
using split was by "drs".
1) if you use split like this, then each line must contain one and only one
'!', which means (in particular) that empy lines will bomb, and also data
must not contain any '!' or else you'll get an exception such as
"ValueError: unpack list of wrong size". If your data may contain '!',
then consider slicing up each line in a different way.
If this is a problem, use a combination of count and index methods to
find the first, and use slices. For example, if you don't mind
two-lined list comps:

d=dict([(l[:l.index('!')],l[l.index('!')+1:-1])
        for l in file('test.txt') if l.count('!')])
2) if your file is really huge, then you may want to fill up your dictionary
as you're reading the file, instead of reading everything in a list and then
building your dictionary (hence using up twice the memory).
Agreed.

The above list comprehension has the disadvantages that it finds how
many '!' characters for every line, and it reads the whole file in at
once. Assuming there are going to be more data lines than not, this is
much faster:

d={}
for l in file("test.txt"):
    try: i=l.index('!')
    except ValueError: continue
    d[l[:i]]=l[i+1:]

It's often much faster to ask forgiveness than permission. I measure
it about twice as fast as the 're' method, and about four times as
fast as the list comp above.
HTH,
Paul

But apart from these details, I agree with Christophe that this is the way
to go.

Aurélien

Jul 18 '05 #5
Paul Simmonds wrote:
....

I'm not trying to intrude on this thread, but I was just
struck by the list comprehension below, so this is
about readability.
If this is a problem, use a combination of count and index methods to
find the first, and use slices. For example, if you don't mind
two-lined list comps:

d=dict([(l[:l.index('!')],l[l.index('!')+1:-1])
        for l in file('test.txt') if l.count('!')])
With every respect, this looks pretty much like another
P-language. The pure existence of list comprehensions
does not force you to use them everywhere :-)

....

compared to this:
....
d={}
for l in file("test.txt"):
    try: i=l.index('!')
    except ValueError: continue
    d[l[:i]]=l[i+1:]


which is both faster in this case and easier to read.

About speed: I'm not sure with the current Python
version, but it might be worth trying to go without
the exception:

d={}
for l in file("test.txt"):
    i=l.find('!')
    if i >= 0:
        d[l[:i]]=l[i+1:]

and then you might even consider splitting on the first
"!", but I didn't do any timings:

d={}
for l in file("test.txt"):
    try:
        key, value = l.split("!", 1)
    except ValueError: continue
    d[key] = value
cheers -- chris

--
Christian Tismer :^) <mailto:ti****@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/
Jul 18 '05 #8
Christian Tismer <ti****@tismer.com> wrote in message news:<ma**********************************@python.org>...
Paul Simmonds wrote:
...
I'm not trying to intrude this thread, but was just
struck by the list comprehension below, so this is
about readability.

<snipped>

d=dict([(l[:l.index('!')],l[l.index('!')+1:-1])
        for l in file('test.txt') if l.count('!')])


With every respect, this looks pretty much like another
P-language. The pure existance of list comprehensions
does not try to force you to use it everywhere :-)


Quite right. I think that mutation came from the fact that I was
thinking in C all day. Still, I don't even write C like that...it
should be put to sleep ASAP.

<snip>
d={}
for l in file("test.txt"):
    try: i=l.index('!')
    except ValueError: continue
    d[l[:i]]=l[i+1:]


About speed: I'm not sure with the current Python
version, but it might be worth trying to go without
the exception:

d={}
for l in file("test.txt"):
    i=l.find('!')
    if i >= 0:
        d[l[:i]]=l[i+1:]

and then you might even consider to split on the first
"!", but I didn't do any timings:

d={}
for l in file("test.txt"):
    try:
        key, value = l.split("!", 1)
    except ValueError: continue
    d[key] = value

Just when you think you know a language, an optional argument you've
never used pops up to make your life easier. Thanks for pointing that
out.

I've done some timings on the functions above, here are the results:

Python 2.2.1, 200000-line file (all data lines)
try/except with split: 3.08s
if with slicing: 2.32s
try/except with slicing: 2.34s

So slicing seems quicker than split, and using if instead of
try/except appears to speed it up a little more. I don't know how much
faster the current version of the interpreter would be, but I doubt
the ranking would change much.
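For anyone re-running this comparison today, a rough sketch using the timeit module (the data set and absolute numbers are assumptions; a current interpreter will give different figures, so only the ranking is interesting):

```python
import timeit

def with_index(lines):
    # try/except with slicing
    d = {}
    for l in lines:
        try:
            i = l.index('!')
        except ValueError:
            continue
        d[l[:i]] = l[i + 1:]
    return d

def with_split(lines):
    # try/except with split (maxsplit=1)
    d = {}
    for l in lines:
        try:
            key, value = l.split('!', 1)
        except ValueError:
            continue
        d[key] = value
    return d

# 200000 all-data lines, mirroring the test above
data = ["key%d!some silly value %d\n" % (i, i) for i in range(200000)]

for fn in (with_index, with_split):
    t = timeit.timeit(lambda: fn(data), number=1)
    print("%s: %.3f seconds" % (fn.__name__, t))
```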

Paul
Jul 18 '05 #10
Paul Simmonds wrote:

[some alternative implementations]
I've done some timings on the functions above, here are the results:

Python2.2.1, 200000 line file(all data lines)
try/except with split: 3.08s
if with slicing: 2.32s
try/except with slicing: 2.34s

So slicing seems quicker than split, and using if instead of
try/except appears to speed it up a little more. I don't know how much
faster the current version of the interpreter would be, but I doubt
the ranking would change much.


Interesting. I doubt that split() itself is slow; instead
I believe that the pure fact that you are calling a method
instead of using a syntactic construct makes things slower,
since method lookup is not so cheap. Unfortunately, split()
cannot be cached in a local variable, since it is obtained
as a new method of the line, all the time. On the other hand,
the same holds for the find method...

Well, I wrote a test program and figured out that the test
results were very dependent on the order of calling the
functions! This means the results are not independent,
probably due to memory usage.
Here some results on Win32, testing repeatedly...

D:\slpdev\src\2.2\src\PCbuild>python -i \python22\py\testlines.py
>>> test()
function test_index for 200000 lines took 1.064 seconds.
function test_find  for 200000 lines took 1.402 seconds.
function test_split for 200000 lines took 1.560 seconds.
>>> test()
function test_index for 200000 lines took 1.395 seconds.
function test_find  for 200000 lines took 1.502 seconds.
function test_split for 200000 lines took 1.888 seconds.
>>> test()
function test_index for 200000 lines took 1.416 seconds.
function test_find  for 200000 lines took 1.655 seconds.
function test_split for 200000 lines took 1.755 seconds.


For that reason, I added a command line mode for testing
single functions, with these results:

D:\slpdev\src\2.2\src\PCbuild>python \python22\py\testlines.py index
function test_index for 200000 lines took 1.056 seconds.

D:\slpdev\src\2.2\src\PCbuild>python \python22\py\testlines.py find
function test_find for 200000 lines took 1.092 seconds.

D:\slpdev\src\2.2\src\PCbuild>python \python22\py\testlines.py split
function test_split for 200000 lines took 1.255 seconds.

The results look much more reasonable; the index thing still
seems to be optimum.

Then I added another test, using an unbound str.index function,
which was again a bit faster.
Finally, I moved the try..except clause out of the game, by
using an explicit, restartable iterator, see the attached program.

D:\slpdev\src\2.2\src\PCbuild>python \python22\py\testlines.py index3
function test_index3 for 200000 lines took 0.997 seconds.

As a side result, split seems to be unnecessarily slow.

cheers - chris
--
Christian Tismer :^) <mailto:ti****@tismer.com>
Mission Impossible 5oftware : Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a : *Starship* http://starship.python.net/
14109 Berlin : PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34 home +49 30 802 86 56 pager +49 173 24 18 776
PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04
whom do you want to sponsor today? http://www.stackless.com/
import sys, time

def test_index(data):
    d={}
    for l in data:
        try: i=l.index('!')
        except ValueError: continue
        d[l[:i]]=l[i+1:]
    return d

def test_find(data):
    d={}
    for l in data:
        i=l.find('!')
        if i >= 0:
            d[l[:i]]=l[i+1:]
    return d

def test_split(data):
    d={}
    for l in data:
        try:
            key, value = l.split("!", 1)
        except ValueError: continue
        d[key] = value
    return d

def test_index2(data):
    d={}
    idx = str.index
    for l in data:
        try: i=idx(l, '!')
        except ValueError: continue
        d[l[:i]]=l[i+1:]
    return d

def test_index3(data):
    d={}
    idx = str.index
    it = iter(data)
    while 1:
        try:
            for l in it:
                i=idx(l, '!')
                d[l[:i]]=l[i+1:]
            else:
                return d
        except ValueError: continue

def make_data(n=200000):
    return [ "this is some silly key %d!and that some silly value" % i
             for i in xrange(n) ]

def test(funcnames, n=200000):
    if sys.platform == "win32":
        default_timer = time.clock
    else:
        default_timer = time.time

    data = make_data(n)
    for name in funcnames.split():
        fname = "test_"+name
        f = globals()[fname]
        t = default_timer()
        f(data)
        t = default_timer() - t
        print "function %-10s for %d lines took %0.3f seconds." % (fname, n, t)

if __name__ == "__main__":
    funcnames = "index find split index2 index3"
    if len(sys.argv) > 1:
        funcnames = " ".join(sys.argv[1:])
    test(funcnames)

Jul 18 '05 #12
