Please explain collections.defaultdict(lambda: 1)

I'm reading http://norvig.com/spell-correct.html

and do not understand the expression listed in the subject which is
part of this function:

import collections

def train(features):
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model
Per http://docs.python.org/lib/defaultdict-examples.html

It seems that there is a default factory which initializes each missing key to
1. So by the end of train(), every entry in the model dictionary will
have a value >= 1.

But why wouldn't he set the value to zero and then increment it each
time a "feature" (actually a word) is encountered? It seems that each
model value would be 1 more than it should be.
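
So concretely, with a made-up word list (just to check my understanding), using the train() above:

model = train(['the', 'the', 'cat'])
print(model['the'])  # 3 -- seen twice, but the count started at 1
print(model['cat'])  # 2 -- seen once, but the count started at 1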

Nov 6 '07 #1
On Nov 6, 8:54 am, "metaperl.com" <metap...@gmail.com> wrote:
But why wouldn't he set the value to zero and then increment it each
time a "feature" (actually a word) is encountered? It seems that each
model value would be 1 more than it should be.
The explanation is a little further down on that same page, in the
discussion of "novel" words and avoiding the probability of them being
0 just because they have not yet been seen in the training text.
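
In other words, a word that never appeared in the training text still comes back with a count of 1, so its estimated frequency is never zero. A small sketch (the training sentence and the made-up word are just for illustration):

import collections

def train(features):
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

model = train('the cat sat on the mat'.split())
print(model['xyzzy'])  # 1 -- never seen, but treated as if seen once
# note: the lookup itself also inserts 'xyzzy' into model, a side effect of defaultdict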

-- Paul

Nov 6 '07 #2
"metaperl.com" <me******@gmail.comwrote:
But why wouldn't he set the value to zero and then increment it each
time a "feature" (actually a word) is encountered? It seems that each
model value would be 1 more than it should be.
The author explains his reasoning in the article: he wants to treat novel
words (i.e. those which did not appear in the training corpus) as having
been seen once.
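
Concretely, here is the difference between the zero-based counting you suggest and the lambda: 1 version (a small sketch; only the latter appears in the article):

import collections

corpus = 'the cat sat on the mat'.split()

# zero-based counting: a word never seen keeps a count of 0
zero_based = collections.defaultdict(int)
for w in corpus:
    zero_based[w] += 1

# the article's version: every word starts as if it had been seen once
seen_once = collections.defaultdict(lambda: 1)
for w in corpus:
    seen_once[w] += 1

print(zero_based['xylophone'])  # 0 -- a novel word would get probability 0
print(seen_once['xylophone'])   # 1 -- a novel word is treated as seen once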

Nov 6 '07 #3
