Bytes | Software Development & Data Engineering Community

Initializing the number of slots in a dictionary

Is there some way to tell a dictionary object that I am going to load 1M
objects into it and have it pre-allocate enough slots to hold all of the
entries? Thus avoiding many thousand memory allocations.

Jon Smirl
jo******@gmail.com

Aug 6 '06 #1
Jon Smirl wrote:
Is there some way to tell a dictionary object that I am going to load 1M
objects into it and have it pre-allocate enough slots to hold all of the
entries?
Not according to the manual.

Not according to the source [as at 2.4.3]. In any case, if there were a
back-door undocumented arg for the dict constructor, somebody would
have read the source and spread the news.
Thus avoiding many thousand memory allocations.
What gives you the impression that "many thousand memory allocations"
are involved? [My impression is that the dict would be resized about 11
times on its trip from size 8 to size 2M, and each resize would involve
one allocation.]
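The resize behaviour described here can be observed directly in a later CPython (sys.getsizeof appeared in 2.6, after this thread); this sketch counts how often the dict's allocated size changes on the way to 1M keys:

```python
import sys

# Count how many times the dict's allocation changes while inserting
# 1M keys; each jump in getsizeof() corresponds to one resize.
d = {}
last = sys.getsizeof(d)
resizes = 0
for i in range(1000000):
    d[i] = None
    size = sys.getsizeof(d)
    if size != last:
        resizes += 1
        last = size
print(resizes)  # a small number, not "many thousand"
```

On a current CPython this prints a number in the low-to-mid teens, matching the estimate above rather than "many thousand".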

Do you have an application with a performance problem? If so, what
makes you think inserting 1M items into a Python dict is contributing
to the problem?

Have you read the thread "Large Dictionaries" which started on/about
2006-05-15?

In general, what is the background to your question?

Cheers,
John

Aug 6 '06 #2
On Sun, 06 Aug 2006 15:33:30 -0700, John Machin wrote:
Jon Smirl wrote:
>Is there some way to tell a dictionary object that I am going to load 1M
objects into it and have it pre-allocate enough slots to hold all of
the entries?

Not according to the manual.

Not according to the source [as at 2.4.3]. In any case, if there were a
back-door undocumented arg for the dict constructor, somebody would have
read the source and spread the news.
>Thus avoiding many thousand memory allocations.

What gives you the impression that "many thousand memory allocations" are
involved? [My impression is that the dict would be resized about 11 times
on its trip from size 8 to size 2M, and each resize would involve one
allocation.]

Do you have an application with a performance problem? If so, what makes
you think inserting 1M items into a Python dict is contributing to the
problem?
I know in advance how many items will be added to the dictionary. Most
dictionary implementations I have previously worked with are more
efficient if they know ahead of time how big to make their tables.

In this case I only need to do a presence test on the key; there is no
actual data associated with the key. The table is used for detecting
duplicate entries. Is there a more efficient way to do this test than
sticking an empty string into a dict? The keys are sha1.digest().
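As a sketch of the scheme just described (check_and_record is a hypothetical helper name, and the 2006-era sha module is spelled hashlib in current Pythons):

```python
import hashlib

# A dict whose keys are 20-byte sha1 digests; the value is irrelevant,
# only key presence matters for duplicate detection.
seen = {}

def check_and_record(data):
    digest = hashlib.sha1(data).digest()
    duplicate = digest in seen
    seen[digest] = None  # placeholder value; only the key is used
    return duplicate

print(check_and_record(b"revision 1"))  # False - first time seen
print(check_and_record(b"revision 1"))  # True - duplicate
```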
>
Have you read the thread "Large Dictionaries" which started on/about
2006-05-15?
I'll look at this.
In general, what is the background to your question?
I am in the process of modifying cvs2svn to do cvs2git. The purpose of
this is to convert Mozilla CVS (4GB) into git format. Currently a
conversion takes 5 hours but the code isn't finished yet.

Other programs that attempt to process the Mozilla CVS repository take
several days to run, and none of them can successfully convert
the Mozilla repository without encountering errors. cvs2svn can successfully
convert the repository to Subversion, but it takes about a week. It is
obvious that a lot of effort has been put into teaching cvs2svn how to
deal with broken CVS repositories.

I do practice on smaller repositories, but some of the problems I am
dealing with only occur when processing the full dataset (it contains
weird CVS errors). In general the conversion process is completely IO
bound. Since I am rerunning the conversion over and over, anything simple
that speeds it up is helpful. I already have 4GB RAM; it would take around
30GB to get everything in memory.

Dictionaries are not a big problem for me, but there are many in use and
they have millions of items in them. But first I have to get the code for
converting to git working, then I can worry about how long it runs. Two or
three days is ok, a month is not ok.

Jon Smirl
jo******@gmail.com

Aug 7 '06 #3
....

[Jon Smirl]
I know in advance how many items will be added to the dictionary. Most
dictionary implementations I have previously worked with are more
efficient if they know ahead of time how big to make their tables.
Richard Jones spent considerable time investigating whether
"pre-sizing" lists and dicts in CPython could help, at the "Need For
Speed" sprint earlier this year. He didn't find a win worth getting;
e.g., read the section "List pre-allocation" at:

http://wiki.python.org/moin/NeedForSpeed/Failures

Trying it for dicts was also part of what he did, but I don't think
specifics about that were recorded on the Wiki. I was at the sprint,
and hoots of triumph from Richard's direction were conspicuous by
absence during his dict time ;-)
In this case I only need to do a presence test on the key; there is no
actual data associated with the key. The table is used for detecting
duplicate entries. Is there a more efficient way to do this test than sticking
an empty string into a dict? The keys are sha1.digest().
It's probably more common to set the value to None or 1, but it
doesn't really matter in reality. In theory, using 1 or an empty
string relies on the implementation accident that CPython stores those
uniquely, while it's guaranteed that None is a singleton object.

BTW, is there a reason to use SHA instead of MD5? I ask because the
latter is 4 bytes shorter, and you apparently have a /lot/ of keys.
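For reference, the size difference mentioned here is easy to confirm with hashlib (the modern spelling of the md5/sha modules):

```python
import hashlib

# md5 digests are 16 bytes, sha1 digests are 20 - a saving of 4 bytes
# per key, which adds up across millions of keys.
print(hashlib.md5().digest_size)   # 16
print(hashlib.sha1().digest_size)  # 20
```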

If you're using a current version of Python, you can save some memory
by using a builtin set object instead. The CPython implementation of
sets is very much like its implementation of dicts, but doesn't
consume any memory for the (non-existent) associated values. You also
get a number of operations useful on sets (like intersection and
union).
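A minimal sketch of the set-based variant (is_duplicate is a hypothetical helper name):

```python
import hashlib

# A set of sha1 digests gives the same presence test as the dict,
# without storing dummy values alongside the keys.
seen = set()

def is_duplicate(data):
    digest = hashlib.sha1(data).digest()
    if digest in seen:
        return True
    seen.add(digest)
    return False

print(is_duplicate(b"revision 1"))  # False - first time seen
print(is_duplicate(b"revision 1"))  # True - duplicate
```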
...
Since I am rerunning the conversion over and over anything simple that speeds
it up is helpful.

I already have 4GB RAM, it would take around 30GB to get everything in memory.

Dictionaries are not a big problem for me, but there are many in use and
they have millions of items in them.
A peculiar suggestion: if you don't need to release system resources
cleanly at the end of a run, try doing:

import os
os._exit(0)

at the end. If you have dicts with millions of entries swapped to
disk, it /can/ consume major time just to page them all in again to
decrement the key & value refcounts if you let "clean shutdown" code
determine they've become trash. Bailing ungracefully can skip all
that work (but also skips other work, like letting the platform C I/O
library close still-open files gracefully).
Aug 7 '06 #4
On Mon, 07 Aug 2006 00:33:33 -0400, Tim Peters wrote:
...

[Jon Smirl]
>I know in advance how many items will be added to the dictionary. Most
dictionary implementations I have previously worked with are more
efficient if they know ahead of time how big to make their tables.

Richard Jones spent considerable time investigating whether "pre-sizing"
lists and dicts in CPython could help, at the "Need For Speed" sprint
earlier this year. He didn't find a win worth getting; e.g., read the
section "List pre-allocation" at:

http://wiki.python.org/moin/NeedForSpeed/Failures

Trying it for dicts was also part of what he did, but I don't think
specifics about that were recorded on the Wiki. I was at the sprint, and
hoots of triumph from Richard's direction were conspicuous by absence
during his dict time ;-)
>In this case I only need to do a presence test on the key; there is no
actual data associated with the key. The table is used for detecting
duplicate entries. Is there a more efficient way to do this test than
sticking an empty string into a dict? The keys are sha1.digest().

It's probably more common to set the value to None or 1, but it doesn't
really matter in reality. In theory, using 1 or an empty string relies on
the implementation accident that CPython stores those uniquely, while it's
guaranteed that None is a singleton object.

BTW, is there a reason to use SHA instead of MD5? I ask because the
latter is 4 bytes shorter, and you apparently have a /lot/ of keys.
http://git.or.cz/index.html
git is the source code control tool used by the Linux kernel. It is
optimized for distributed development. git is what the kernel developers
wrote to replace Bitkeeper after the Bitkeeper license change.

git identifies its change sets with sha1 hashes.

Distributed SCM is very important when working on large projects. With a
distributed SCM nobody needs commit privs - everyone has their own
complete copy of the repository. To get a change into a distribution you
need to convince someone up the hierarchy to pull your changes into
their repository. In the Linux world Linus makes the distributions. There
are about 10 people in the next circle and about 100 in the circle after
that. You need to convince one of them to accept your changes, but that's
not very hard if your code makes sense.

Distributed also means that you can pull changes from anyone else into
your repository. So if you want to run Reiser4 and it isn't in the main
Linus tree yet, just pull a copy from the namesys git tree into your local
tree and everything will get merged.

Python is using Subversion which is way better than CVS, but Subversion is
still organized around a central repository with commit privs. If that
repository disappears in an earthquake Python may be in trouble.

If you give git a try you will also notice that it is way faster than
subversion.
If you're using a current version of Python, you can save some memory by
using a builtin set object instead. The CPython implementation of sets is
very much like its implementation of dicts, but doesn't consume any memory
for the (non-existent) associated values. You also get a number of
operations useful on sets (like intersection and union).
Sets may be a good option, I'll adjust the code for the next run.
>...
Since I am rerunning the conversion over and over anything simple that
speeds it up is helpful.

I already have 4GB RAM, it would take around 30GB to get everything in
memory.

Dictionaries are not a big problem for me, but there are many in use and
they have millions of items in them.

A peculiar suggestion: if you don't need to release system resources
cleanly at the end of a run, try doing:

import os
os._exit(0)

at the end. If you have dicts with millions of entries swapped to disk,
it /can/ consume major time just to page them all in again to decrement
the key & value refcounts if you let "clean shutdown" code determine
they've become trash. Bailing ungracefully can skip all that work (but
also skips other work, like letting the platform C I/O library close
still-open files gracefully).
Jon Smirl
jo******@gmail.com

Aug 7 '06 #5

Jon Smirl wrote:
On Sun, 06 Aug 2006 15:33:30 -0700, John Machin wrote:
[snip..]

Do you have an application with a performance problem? If so, what makes
you think inserting 1M items into a Python dict is contributing to the
problem?

I know in advance how many items will be added to the dictionary. Most
dictionary implementations I have previously worked with are more
efficient if they know ahead of time how big to make their tables.

In this case I only need to do a presence test on the key; there is no
actual data associated with the key. The table is used for detecting
duplicate entries. Is there a more efficient way to do this test than sticking
an empty string into a dict? The keys are sha1.digest().
Possibly a naive question - but would using sets be more efficient ?

They are generally used for the sort of job you are describing (testing
for duplicates rather than storing data associated with a key).

We did some limited performance tests at Resolver Systems and found use
of sets and dictionaries to be almost exactly the same speed - but
there could be memory advantages for you. Performance is also likely to
be different for vast data sets, but I understand the hashing algorithm
to be very similar for both...
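A rough way to check the memory point under a current CPython (the sizes reported are for the containers only, not the key objects, which both share):

```python
import sys

# Compare the container overhead of a set of N keys against a dict
# mapping the same keys to None; the dict also allocates a value slot
# per entry, so it comes out larger.
keys = [str(i).encode() for i in range(100000)]
as_set = set(keys)
as_dict = dict.fromkeys(keys)
print(sys.getsizeof(as_set), sys.getsizeof(as_dict))
```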

Fuzzyman
http://www.voidspace.org.uk/python/index.shtml

Aug 7 '06 #6
