Bytes | Developer Community
Reducing memory overhead for dictionaries by removing precomputed hash

Forgive me if this has already been discussed, but it seems to me that
one could cut the memory usage of dictionaries by a third by removing
the precomputed hash from each bucket (each bucket stores a hash, a key,
and a value, so dropping the hash removes one of three fields).

Since dictionary keys must be hashable (and in practice effectively
immutable), one could move the precomputed hash into the key objects
themselves.

* Strings are probably the most popular keys for dictionaries and they
already cache the hash (hmm that almost rhymes).
* Numbers are trivial to hash.
* For tuples, one could add a member to cache the hash.
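As a rough sketch of the idea of caching a hash on the key object (the way CPython's str objects memoize their hash), here is a hypothetical wrapper class; `CachedHashKey` and its fields are illustrative names, not anything from CPython:

```python
# Hypothetical sketch: memoize the hash inside the key object itself,
# similar to how CPython's str caches its hash after the first computation.
class CachedHashKey:
    """Wraps an immutable value and caches its hash on first use."""
    __slots__ = ("value", "_hash")

    def __init__(self, value):
        self.value = value
        self._hash = None  # computed lazily, like str's cached hash

    def __hash__(self):
        if self._hash is None:
            self._hash = hash(self.value)
        return self._hash

    def __eq__(self, other):
        return isinstance(other, CachedHashKey) and self.value == other.value


d = {CachedHashKey((1, 2, 3)): "payload"}
k = CachedHashKey((1, 2, 3))
print(d[k])  # payload -- the lookup reuses k's cached hash after first call
```

With the hash living on the key, the dictionary's buckets would only need the key and value pointers, which is where the claimed savings would come from.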

So why store the hash? I imagine it makes rebuilding (resizing) the
dictionary a bit quicker: since the existing table is known to contain
no duplicate keys, reinsertion can skip the equality comparisons. I
haven't looked at the code to see whether it actually does that, though.
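A toy sketch of that resize argument (not CPython's actual code, which lives in Objects/dictobject.c): with the hash stored per slot, rehashing into a bigger table never calls `__hash__` or `__eq__`, because every entry is known to be unique.

```python
# Toy open-addressing resize, assuming slots of the form (hash, key, value).
# Linear probing is used here for simplicity; CPython probes differently.
def resize(old_slots, new_size):
    """Rehash every live entry from old_slots into a table of new_size
    (a power of two), reusing each entry's stored hash."""
    new_slots = [None] * new_size
    mask = new_size - 1
    for entry in old_slots:
        if entry is None:
            continue  # empty slot, nothing to move
        h, key, value = entry
        i = h & mask                     # reuse stored hash: no hash(key) call
        while new_slots[i] is not None:  # keys are unique, so no __eq__ needed
            i = (i + 1) & mask           # simple linear probe for this sketch
        new_slots[i] = (h, key, value)
    return new_slots


old = [None, (hash("a"), "a", 1), None, (hash("b"), "b", 2)]
new = resize(old, 8)
```

Without the stored hash, the loop would have to call `hash(key)` once per entry during every resize, which is exactly the cost the precomputed field avoids.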

Collision chains are possibly also a bit quicker to traverse. On the
other hand, I think Python masks the hash rather than taking it modulo
a prime, so hashes that share their low bits will collide, which might
make collisions more common. That could perhaps be fixed with a
congruential hash, but that's all black magic to me.
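A small demo of the masking point: CPython indexes with `hash & mask` (table size a power of two), so hashes that differ only in their high bits start in the same slot. CPython copes by mixing the high bits into the probe sequence via a "perturb" value; the function below is a simplified version of that scheme, not the exact implementation.

```python
# With an 8-slot table, only the low 3 bits of the hash pick the slot.
size = 8
mask = size - 1
hashes = [0, 8, 16, 24]            # identical low bits, different high bits
slots = [h & mask for h in hashes]
print(slots)                       # [0, 0, 0, 0] -- all collide at slot 0


def probe_sequence(h, mask, n=5):
    """First n probe indices, using a simplified form of CPython's
    perturb scheme: i = (5*i + 1 + perturb) & mask; perturb >>= 5.
    The shifted-in high bits eventually separate colliding keys."""
    perturb = h
    i = h & mask
    seq = [i]
    for _ in range(n - 1):
        perturb >>= 5
        i = (5 * i + 1 + perturb) & mask
        seq.append(i)
    return seq
```

Because `perturb` feeds the hash's high bits into later probes, two keys that collide on the first slot can still diverge quickly, which is why the mask-instead-of-prime choice is cheaper than it first looks.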


Apr 18 '06 #1

