Bytes IT Community

Is shelve/dbm supposed to be this inefficient?

I am using shelve to store some data since it is probably the best solution
to my "data formats, number of columns, etc can change at any time"
problem. However, I seem to be dealing with bloat.

My original data is 33MB. When each row is converted to a Python list and
inserted into a shelve DB, it balloons to 69MB. There is some additional
data in there, namely a list of all the keys containing data (as opposed to
the keys that contain version/file/config information), BUT if I copy all
the data over to a dict and dump that dict to a file using cPickle, the
file is only 49MB. I'm using pickle protocol 2 in both cases.

Is this expected? Is there really that much overhead to using shelve and dbm
files? Are there any similar solutions that are more space efficient? I'd
use straight pickle.dump, but loading requires pulling the entire thing
into memory, and I don't want to have to do that every time.
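For reference, here's a self-contained sketch of the comparison I'm describing, with toy data standing in for the real 33MB set (Python 3 syntax shown; cPickle is the Python 2 equivalent of pickle):

```python
import os
import pickle
import shelve
import tempfile

# Hypothetical sample data: timestamp-keyed rows of mixed values,
# a stand-in for the real data set.
data = {str(ts): [ts, ts * 0.5, "sample"] for ts in range(10000)}

tmpdir = tempfile.mkdtemp()

# Plain pickle.dump: one monolithic file, but loading it back
# pulls the entire dict into memory at once.
pickle_path = os.path.join(tmpdir, "data.pickle")
with open(pickle_path, "wb") as f:
    pickle.dump(data, f, protocol=2)

# shelve: per-key access without loading everything, at the cost
# of whatever storage overhead the underlying dbm imposes.
shelf_path = os.path.join(tmpdir, "data.shelf")
with shelve.open(shelf_path, protocol=2) as shelf:
    for key, row in data.items():
        shelf[key] = row

pickle_size = os.path.getsize(pickle_path)
# The dbm backend may split the shelf across several files
# (e.g. .dat/.dir/.bak), so sum everything it created.
shelf_size = sum(
    os.path.getsize(os.path.join(tmpdir, name))
    for name in os.listdir(tmpdir)
    if name.startswith("data.shelf")
)
print(pickle_size, shelf_size)
```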

[Note, for those who might suggest a standard DB: yes, I'd like to use a
regular DB, but I'm in a domain where the number of data points in a sample
may change at any time, so a timestamp-keyed dict is arguably the best
solution; hence my use of shelve.]

Thanks for any pointers.

j

--
Joshua Kugler
Lead System Admin -- Senior Programmer
http://www.eeinternet.com
PGP Key: http://pgp.mit.edu/ *ID 0xDB26D7CE

Aug 1 '07 #1


On Wed, 01 Aug 2007 15:47:21 -0800, Joshua J. Kugler wrote:
> My original data is 33MB. When each row is converted to python lists, and
> inserted into a shelve DB, it balloons to 69MB. Now, there is some
> additional data in there namely a list of all the keys containing data (vs.
> the keys that contain version/file/config information), BUT if I copy all
> the data over to a dict and dump the dict to a file using cPickle, that
> file is only 49MB. I'm using pickle protocol 2 in both cases.
>
> Is this expected? Is there really that much overhead to using shelve and dbm
> files? Are there any similar solutions that are more space efficient? I'd
> use straight pickle.dump, but loading requires pulling the entire thing
> into memory, and I don't want to have to do that every time.
You did not say how many records you store. If the underlying DB used by
`shelve` keeps its records in a hash table, some of that "bloat" may be
expected: hash tables reserve empty buckets so lookups stay fast. It's a
space vs. speed trade-off then.
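On the "more space efficient" part of the question: one stdlib option worth sketching is sqlite3, storing each row as a pickled blob keyed by timestamp. This is just an illustrative sketch, not an existing API; `PickleStore` and its one-table schema are invented names. You keep per-key access without loading everything into memory, and the schema never needs to know how many columns a sample has:

```python
import pickle
import sqlite3

class PickleStore:
    """A minimal dict-like store: keys map to pickled rows in sqlite3."""

    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, value BLOB)"
        )

    def __setitem__(self, key, row):
        # Pickle the row so its length/types can vary freely per sample.
        blob = pickle.dumps(row, protocol=2)
        self.conn.execute(
            "REPLACE INTO store (key, value) VALUES (?, ?)",
            (key, sqlite3.Binary(blob)),
        )

    def __getitem__(self, key):
        cur = self.conn.execute("SELECT value FROM store WHERE key = ?", (key,))
        hit = cur.fetchone()
        if hit is None:
            raise KeyError(key)
        return pickle.loads(hit[0])

    def close(self):
        self.conn.commit()
        self.conn.close()

# Usage sketch with a made-up timestamp key and variable-width row.
store = PickleStore(":memory:")
store["2007-08-01T15:47:21"] = [1.0, 2.0, "extra column"]
print(store["2007-08-01T15:47:21"])
```

Whether this ends up smaller than a dbm file depends on the data and the dbm backend, so it would need measuring against the real 33MB set.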

Ciao,
Marc 'BlackJack' Rintsch
Aug 2 '07 #2
