Re: dict generator question

Simon Mullis wrote:
Hi,

Let's say I have an arbitrary list of minor software versions of an
imaginary software product:

l = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

I'd like to create a dict with major_version : count.

(So, in this case:

dict_of_counts = { "1.1" : "1",
"1.2" : "2",
"1.3" : "2" }
[...]
data = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

from itertools import groupby

datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s: s[:3]))
print datadict
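For comparison, the same counts can be built with a plain dict and no itertools; a sketch in modern Python (the `s[:3]` slice, as in the post above, assumes single-digit major-version components like the example data):

```python
data = ["1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

counts = {}
for s in data:
    major = s[:3]  # "1.1.1.1" -> "1.1" (assumes single-digit components)
    counts[major] = counts.get(major, 0) + 1

print(counts)  # {'1.1': 1, '1.2': 2, '1.3': 2}
```

Unlike groupby, this counting loop does not care about the order of the input.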


Sep 18 '08 #1
On Sep 18, 11:43 am, Gerard flanagan <grflana...@gmail.com> wrote:
Simon Mullis wrote:
Hi,
Let's say I have an arbitrary list of minor software versions of an
imaginary software product:
l = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]
I'd like to create a dict with major_version : count.
(So, in this case:
dict_of_counts = { "1.1" : "1",
"1.2" : "2",
"1.3" : "2" }

[...]
data = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

from itertools import groupby

datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s: s[:3]))
print datadict
Note that this works correctly only if the versions are already sorted
by major version.
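A quick sketch (modern Python) showing the failure mode: groupby only groups *consecutive* equal keys, so an unsorted list yields the same key in several runs and the dict keeps only the last run's count:

```python
from itertools import groupby

unsorted_data = ["1.2.2.2", "1.1.1.1", "1.2.2.3"]

# "1.2" appears in two non-adjacent runs of length 1 each; the dict
# constructor keeps only the last run's count.
counts = dict((k, len(list(g))) for k, g in groupby(unsorted_data, lambda s: s[:3]))
print(counts)  # {'1.2': 1, '1.1': 1} -- not the expected {'1.1': 1, '1.2': 2}

# Sorting first restores the expected result.
fixed = dict((k, len(list(g))) for k, g in groupby(sorted(unsorted_data), lambda s: s[:3]))
print(fixed)  # {'1.1': 1, '1.2': 2}
```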

George
Sep 18 '08 #2
George Sakkis wrote:
On Sep 18, 11:43 am, Gerard flanagan <grflana...@gmail.com> wrote:
>Simon Mullis wrote:
>>Hi,
Let's say I have an arbitrary list of minor software versions of an
imaginary software product:
l = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]
I'd like to create a dict with major_version : count.
(So, in this case:
dict_of_counts = { "1.1" : "1",
"1.2" : "2",
"1.3" : "2" }
[...]
data = [ "1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

from itertools import groupby

datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s: s[:3]))
print datadict

Note that this works correctly only if the versions are already sorted
by major version.
Yes, I should have mentioned it. Here's a fuller example below. There are
maybe better ways of sorting version numbers, but this is what I do.
data = [ "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.1.1.1", "1.3.14.5",
"1.3.21.6" ]

from itertools import groupby
import re

RXBUILDSORT = re.compile(r'\d+|[a-zA-Z]')

def versionsort(s):
    key = []
    for part in RXBUILDSORT.findall(s.lower()):
        try:
            key.append(int(part))
        except ValueError:
            key.append(ord(part))
    return tuple(key)

data.sort(key=versionsort)
print data

datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s: s[:3]))
print datadict
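The numeric sort key matters once build numbers reach two digits, as in the data above; a small self-contained sketch contrasting plain string sorting with versionsort:

```python
import re

RXBUILDSORT = re.compile(r'\d+|[a-zA-Z]')

def versionsort(s):
    # Split into numeric runs and single letters; compare numbers as ints.
    key = []
    for part in RXBUILDSORT.findall(s.lower()):
        try:
            key.append(int(part))
        except ValueError:
            key.append(ord(part))
    return tuple(key)

data = ["1.3.14.5", "1.3.4.5"]
print(sorted(data))                   # ['1.3.14.5', '1.3.4.5'] -- '1' < '4' character-wise
print(sorted(data, key=versionsort))  # ['1.3.4.5', '1.3.14.5'] -- 4 < 14 numerically
```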


Sep 18 '08 #3
Gerard flanagan:
data.sort()
datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s:
'.'.join(s.split('.',2)[:2])))
That code may run correctly, but it's quite unreadable, while good
Python programmers value high readability. So the right thing to do is
to split that line into parts, giving meaningful names, and maybe even
add comments.

len(list(g)) looks like a good job for my little leniter() function
(or better, just an extension to the semantics of len) that some time
ago some people here judged as useless, while I use it often in both
Python and D ;-)

Bye,
bearophile
Sep 19 '08 #4
On Sep 19, 2:01 pm, bearophileH...@lycos.com wrote:
Gerard flanagan:
data.sort()
datadict = \
dict((k, len(list(g))) for k,g in groupby(data, lambda s:
     '.'.join(s.split('.',2)[:2])))

That code may run correctly, but it's quite unreadable, while good
Python programmers value high readability. So the right thing to do is
to split that line into parts, giving meaningful names, and maybe even
add comments.

len(list(g)) looks like a good job for my little leniter() function
(or better, just an extension to the semantics of len) that some time
ago some people here judged as useless, while I use it often in both
Python and D ;-)
Extending len() to support iterables sounds like a good idea, except
that it could be misleading when:

len(file(path))

returns the number of lines and /not/ the length in bytes as you might
first think! :-)

Anyway, here's another possible implementation using bags (multisets):

def major_version(version_string):
    "convert '1.2.3.2' to '1.2'"
    return '.'.join(version_string.split('.')[:2])

versions = ["1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]

bag_of_versions = bag(major_version(x) for x in versions)
dict_of_counts = dict(bag_of_versions.items())

Here's my implementation of the bag class in Python (sorry about the
length):

class bag(object):
    def __init__(self, iterable=None):
        self._counts = {}
        if isinstance(iterable, dict):
            for x, n in iterable.items():
                if not isinstance(n, int):
                    raise TypeError()
                if n < 0:
                    raise ValueError()
                self._counts[x] = n
        elif iterable:
            for x in iterable:
                try:
                    self._counts[x] += 1
                except KeyError:
                    self._counts[x] = 1
    def __and__(self, other):
        new_counts = {}
        for x, n in other._counts.items():
            try:
                new_counts[x] = min(self._counts[x], n)
            except KeyError:
                pass
        result = bag()
        result._counts = new_counts
        return result
    def __iand__(self, other):  # was missing the 'other' parameter
        new_counts = {}
        for x, n in other._counts.items():
            try:
                new_counts[x] = min(self._counts[x], n)
            except KeyError:
                pass
        self._counts = new_counts
        return self  # in-place operators must return self
    def __or__(self, other):
        new_counts = self._counts.copy()
        for x, n in other._counts.items():
            try:
                new_counts[x] = max(new_counts[x], n)
            except KeyError:
                new_counts[x] = n
        result = bag()
        result._counts = new_counts
        return result
    def __ior__(self, other):  # was missing the 'other' parameter
        for x, n in other._counts.items():
            try:
                self._counts[x] = max(self._counts[x], n)
            except KeyError:
                self._counts[x] = n
        return self
    def __len__(self):
        return sum(self._counts.values())
    def tolist(self):
        # renamed from __list__: list() uses __iter__, not __list__
        result = []
        for x, n in self._counts.items():
            result.extend([x] * n)
        return result
    def __repr__(self):
        return "bag([%s])" % ", ".join(
            ", ".join([repr(x)] * n) for x, n in self._counts.items())
    def __iter__(self):
        for x, n in self._counts.items():
            for i in range(n):
                yield x
    def keys(self):
        return self._counts.keys()
    def values(self):
        return self._counts.values()
    def items(self):
        return self._counts.items()
    def __contains__(self, x):
        return x in self._counts
    def add(self, x):
        try:
            self._counts[x] += 1
        except KeyError:
            self._counts[x] = 1
    def __add__(self, other):
        # (an earlier duplicate definition of __add__, which mutated
        # self and returned None, has been dropped)
        new_counts = self._counts.copy()
        for x, n in other.items():
            try:
                new_counts[x] += n
            except KeyError:
                new_counts[x] = n
        result = bag()
        result._counts = new_counts
        return result
    def __sub__(self, other):
        new_counts = self._counts.copy()
        for x, n in other.items():
            try:
                new_counts[x] -= n
                if new_counts[x] < 1:
                    del new_counts[x]
            except KeyError:
                pass
        result = bag()
        result._counts = new_counts
        return result
    def __iadd__(self, other):
        for x, n in other.items():
            try:
                self._counts[x] += n
            except KeyError:
                self._counts[x] = n
        return self  # in-place operators must return self
    def __isub__(self, other):
        for x, n in other.items():
            try:
                self._counts[x] -= n
                if self._counts[x] < 1:
                    del self._counts[x]
            except KeyError:
                pass
        return self
    def clear(self):
        self._counts = {}
    def count(self, x):
        return self._counts.get(x, 0)
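As an aside for later readers: collections.Counter (added in Python 2.7/3.1, i.e. after this thread) provides both the counting and the multiset operators that this bag class hand-rolls; a sketch:

```python
from collections import Counter

def major_version(version_string):
    "convert '1.2.3.2' to '1.2'"
    return '.'.join(version_string.split('.')[:2])

versions = ["1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]
dict_of_counts = dict(Counter(major_version(v) for v in versions))
print(dict_of_counts)  # {'1.1': 1, '1.2': 2, '1.3': 2}

# Counter also supports the bag-style set operations:
a, b = Counter("aab"), Counter("abb")
print(a & b)  # intersection: min of each count -> one 'a', one 'b'
print(a | b)  # union: max of each count -> two of each
print(a + b)  # sum: counts added -> three of each
```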

Sep 20 '08 #5
On Fri, 19 Sep 2008 17:00:56 -0700, MRAB wrote:
Extending len() to support iterables sounds like a good idea, except
that it could be misleading when:

len(file(path))

returns the number of lines and /not/ the length in bytes as you might
first think!
Extending len() to support iterables sounds like a good idea, except that
it's not.

Here are two iterables:
def yes(): # like the Unix yes command
    while True:
        yield "y"

def rand(total):
    "Return random numbers up to a given total."
    from random import random
    tot = 0.0
    while tot < total:
        x = random()
        yield x
        tot += x
What should len(yes()) and len(rand(100)) return?
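Running the second generator shows part of the problem: counting consumes it, and since each random() draw is below 1.0, at least 101 values are needed to reach 100.0 (a sketch, with the import hoisted to module level):

```python
from random import random

def rand(total):
    "Yield random numbers until their sum reaches the given total."
    tot = 0.0
    while tot < total:
        x = random()
        yield x
        tot += x

g = rand(100.0)
n = sum(1 for _ in g)     # counting consumes the generator
print(n)                  # some integer >= 101, different on every run
print(sum(1 for _ in g))  # 0 -- the generator is already exhausted
```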

--
Steven
Sep 20 '08 #6
MRAB:
except that it could be misleading when:
len(file(path))
returns the number of lines and /not/ the length in bytes as you might
first think! :-)
Well, file(...) returns an iterable of lines, so its len is the number
of lines :-)
I think I am able to always remember this fact.

Anyway, here's another possible implementation using bags (multisets):
This function looks safer/faster:

def major_version(version_string):
    "convert '1.2.3.2' to '1.2'"
    return '.'.join(version_string.strip().split('.', 2)[:2])

Another version:

import re
from collections import defaultdict

patt = re.compile(r"^(\d+\.\d+)")

dict_of_counts = defaultdict(int)
for ver in versions:
    dict_of_counts[patt.match(ver).group(1)] += 1

print dict_of_counts
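A self-contained check of the regex approach (versions taken from the original post; modern print() used):

```python
import re
from collections import defaultdict

versions = ["1.1.1.1", "1.2.2.2", "1.2.2.3", "1.3.1.2", "1.3.4.5"]
patt = re.compile(r"^(\d+\.\d+)")  # capture the leading major.minor pair

dict_of_counts = defaultdict(int)
for ver in versions:
    dict_of_counts[patt.match(ver).group(1)] += 1

print(dict(dict_of_counts))  # {'1.1': 1, '1.2': 2, '1.3': 2}
```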

Bye,
bearophile
Sep 20 '08 #7
On Fri, Sep 19, 2008 at 9:51 PM, Steven D'Aprano
<st***@remove-this-cybersource.com.au> wrote:
Extending len() to support iterables sounds like a good idea, except that
it's not.

Here are two iterables:
def yes(): # like the Unix yes command
    while True:
        yield "y"

def rand(total):
    "Return random numbers up to a given total."
    from random import random
    tot = 0.0
    while tot < total:
        x = random()
        yield x
        tot += x
What should len(yes()) and len(rand(100)) return?
Clearly, len(yes()) would never return, and len(rand(100)) would
return a random integer not less than 101.

-Miles
Sep 22 '08 #8
Steven D'Aprano:
>Extending len() to support iterables sounds like a good idea, except that it's not.<
The Python language has lately shifted toward more and more use of lazy
iterables (see range, lazy by default in Python 3, etc.), so they are
now quite common. Extending len() to make it act like leniter() too is
a way to adapt a basic Python construct to the changes in the other
parts of the language.

In languages like Haskell you can count how many items a lazy sequence
has. But those sequences are generally immutable, so they can be
accessed many times, so len(iterable) doesn't exhaust them like in
Python. So in Python it's less useful.
This is a common situation where I care only about the len of the g
group:
[leniter(g) for h,g in groupby(iterable)]

There are other situations where I may be interested only in how many
items there are:
leniter(ifilter(predicate, iterable))
leniter(el for el in iterable if predicate(el))

For my usage I have written a version of the itertools module in D (a
lot of work, but the result is quite useful and flexible, even if I
miss the generator/iterator syntax a lot), and later I have written a
len() able to count the length of lazy iterables too (if the given
variable has a length attribute/property then it returns that value),
and I have found that it's useful often enough (almost as often as
string.xsplit()). But in Python there is less need for a len() that
counts lazy iterables too because you can use the following syntax
that isn't bad (and isn't available in D):

[sum(1 for x in g) for h,g in groupby(iterable)]
sum(1 for x in ifilter(predicate, iterable))
sum(1 for el in iterable if predicate(el))
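The three counting idioms can be checked directly; a sketch in modern Python, where the builtin filter plays the role of Python 2's itertools.ifilter:

```python
from itertools import groupby

iterable = [1, 1, 2, 2, 2, 3]
predicate = lambda x: x % 2 == 0

# count each consecutive group
print([sum(1 for x in g) for h, g in groupby(iterable)])  # [2, 3, 1]
# count the elements that satisfy the predicate, two ways
print(sum(1 for x in filter(predicate, iterable)))        # 3
print(sum(1 for el in iterable if predicate(el)))         # 3
```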

So you and Python designers may choose to not extend the semantics of
len() for various good reasons, but you will have a hard time
convincing me it's a useless capability :-)

Bye,
bearophile
Sep 22 '08 #9
On Mon, 22 Sep 2008 04:21:12 -0700, bearophileHUGS wrote:
Steven D'Aprano:
>>Extending len() to support iterables sounds like a good idea, except
that it's not.<

Python language lately has shifted toward more and more usage of lazy
iterables (see range lazy by default, etc). So they are now quite
common. So extending len() to make it act like leniter() too is a way to
adapt a basic Python construct to the changes of the other parts of the
language.
I'm sorry, I don't recognise leniter(). Did I miss something?

In languages like Haskell you can count how many items a lazy sequence
has. But those sequences are generally immutable, so they can be
accessed many times, so len(iterable) doesn't exhaust them like in
Python. So in Python it's less useful.
In Python, xrange() is a lazy sequence that isn't exhausted, but that's a
special case: it actually has a __len__ method, and presumably the length
is calculated from the xrange arguments, not by generating all the items
and counting them. How would you count the number of items in a generic
lazy sequence without actually generating the items first?

This is a common situation where I can only care of the len of the g
group:
[leniter(g) for h,g in groupby(iterable)]

There are other situations where I may be interested only in how many
items there are:
leniter(ifilter(predicate, iterable))
leniter(el for el in iterable if predicate(el))

For my usage I have written a version of the itertools module in D (a
lot of work, but the result is quite useful and flexible, even if I miss
the generator/iterator syntax a lot), and later I have written a len()
able to count the length of lazy iterables too (if the given variable
has a length attribute/property then it returns that value),
I'm not saying that no iterables can accurately predict how many items
they will produce. If they can, then len() should support iterables with
a __len__ attribute. But in general there's no way of predicting how many
items the iterable will produce without iterating over it, and len()
shouldn't do that.

and I have
found that it's useful often enough (almost as the string.xsplit()). But
in Python there is less need for a len() that counts lazy iterables too
because you can use the following syntax that isn't bad (and isn't
available in D):

[sum(1 for x in g) for h,g in groupby(iterable)]
sum(1 for x in ifilter(predicate, iterable))
sum(1 for el in iterable if predicate(el))
I think the idiom sum(1 for item in iterable) is, in general, a mistake.
For starters, it doesn't work for arbitrary iterables, only sequences
(lazy or otherwise) and your choice of variable name may fool people into
thinking they can pass a use-once iterator to your code and have it work.

Secondly, it's not clear what sum(1 for item in iterable) does without
reading over it carefully. Since you're generating the entire length
anyway, len(list(iterable)) is more readable and almost as efficient for
most practical cases.

As things stand now, list(iterable) is a "dangerous" operation, as it may
consume arbitrarily huge resources. But len() isn't[1], because len()
doesn't operate on arbitrary iterables. This is a good thing.

So you and Python designers may choose to not extend the semantics of
len() for various good reasons, but you will have a hard time convincing
me it's a useless capability :-)
I didn't say that knowing the length of iterators up front was useless.
Sometimes it may be useful, but it is rarely (never?) essential.

[1] len(x) may call x.__len__() which might do anything. But the expected
semantics of __len__ is that it returns an int, and does so quickly with
minimal effort. Methods that do something else are an abuse of __len__
and should be treated as a bug.

--
Steven
Sep 22 '08 #10
Steven D'Aprano:
>I'm sorry, I don't recognise leniter(). Did I miss something?<
I have removed the docstring/doctests:

def leniter(iterator):
    if hasattr(iterator, "__len__"):
        return len(iterator)
    nelements = 0
    for _ in iterator:
        nelements += 1
    return nelements
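For reference, here is how the function behaves (definition repeated so the sketch is self-contained): it defers to __len__ when available, and otherwise counts by iteration, exhausting the iterator:

```python
def leniter(iterator):
    # Defer to len() when the object supports it (lists, dicts, ...).
    if hasattr(iterator, "__len__"):
        return len(iterator)
    nelements = 0
    for _ in iterator:
        nelements += 1
    return nelements

print(leniter([10, 20, 30]))  # 3 (list: uses __len__, not consumed)
g = (x * x for x in range(5))
print(leniter(g))             # 5 (generator: counted by iteration)
print(leniter(g))             # 0 (the generator is now exhausted)
```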

>it doesn't work for arbitrary iterables, only sequences (lazy or otherwise)<
I don't understand well.

>Since you're generating the entire length anyway, len(list(iterable)) is more readable and almost as efficient for most practical cases.<
I don't agree: len(list()) creates an actual list, with a lot of GC
activity.

>But the expected semantics of __len__ is that it is expected to return an int, and do it quickly with minimal effort. Methods that do something else are an abuse of __len__ and should be treated as a bug.<
I see. In the past I have read similar positions in discussions
regarding the API of data structures in D, so this may be right, and
this fault may be enough to kill my proposal. But I'll keep using
leniter().

Bye,
bearophile
Sep 23 '08 #11
