Bytes | Developer Community

Py+SQLite or other (big output) ?

Hi !

I want to process a lot of data with Python, and I want to store it in a
database. In the previous version of my code I created a simple thing that
deletes the old results, recreates the database, and fills it up.

But that is not too flexible, because when a power supply or hardware
problem occurs, all of the processed items are lost.

Then I was thinking about a solution that can continue the work. This is
like a diff or directory-sync problem.
Every time I need to compare the existing data with the inputs and get a
result about the database: either it is finished, or I need to drop/add
some elements.
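A resumable run of this kind could be sketched roughly like the following; the table name, column names, and the "processing" step are all placeholders I made up, not anything from the original post:

```python
import sqlite3

def sync_inputs(db_path, inputs):
    """Process only the inputs not already stored, and drop stale rows.

    `inputs` maps an item key to its raw data.  The `.upper()` call is a
    stand-in for whatever expensive processing produces the stored result.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS results (key TEXT PRIMARY KEY, value TEXT)"
    )
    stored = {row[0] for row in conn.execute("SELECT key FROM results")}
    wanted = set(inputs)

    # Drop results whose input no longer exists (input amount shrank).
    for key in stored - wanted:
        conn.execute("DELETE FROM results WHERE key = ?", (key,))

    # Add results that are still missing; after a crash, a rerun picks up
    # here and redoes only the uncommitted items.
    for key in wanted - stored:
        conn.execute(
            "INSERT INTO results VALUES (?, ?)", (key, inputs[key].upper())
        )
    conn.commit()
    conn.close()
```

Rerunning this function after a crash is the whole recovery story: the committed rows survive, and only the difference is recomputed.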

That is not too hard to code, but I see that very large files are
very vulnerable.
Example: the older version of my code used zip to export files... When
I processed many files, I hit the physical limit of zip (4 GB), and
all of the results were destroyed in the crash...
Or when the database file gets into an inconsistent state, the only way
to get a result is to drop the db and recreate it.

So I was thinking about splitting the data into smaller sections and
using those. If any of them is destroyed or damaged, I only need to
recreate that one.
But this solution has problems too.
1.) I need a header file to "join" them logically. When this file is
damaged, all of the data must be dropped.
2.) The sync operation is harder, because some files are no longer needed
(when the input data amount is less than before), or some records are not
needed; or I need to add some records to some of the files.
3.) I need a global db to store global values.

If I use this concept, I pay with harder coding and many, many chances for bugs.

So I want to use one database file - but I want to protect it.
How can I do it with SQLite?
I see these solutions:
- 1. Use transactions.
- 2. Create a full copy of the database after every bigger transaction.
- 3. A shadow file???
- 4. A mirror database (this is problematic to sync).
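For what it's worth, solution 1 with Python's sqlite3 module looks like this (the table and values are made-up examples): the `with conn:` block opens a transaction that commits on success and rolls the whole batch back if any statement raises, so a half-finished batch never reaches the file.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (key TEXT PRIMARY KEY, value TEXT)")

try:
    with conn:  # commits on success, rolls back the batch on exception
        conn.execute("INSERT INTO results VALUES ('item1', 'data1')")
        conn.execute("INSERT INTO results VALUES ('item2', 'data2')")
        conn.execute("INSERT INTO results VALUES ('item1', 'oops')")  # PK clash
except sqlite3.IntegrityError:
    pass  # nothing from this batch was written

count = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
# count == 0: the first two inserts were rolled back with the failed one
```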

Transactions are a very good thing, but they do not protect the database
file from corruption.
The copy operation is better, but it decreases the processing speed a lot,
because the result db grows fast, and copying gigabytes of data is slow
and not too good.
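On the copy idea: newer versions of Python's sqlite3 module (3.7+) expose SQLite's online backup API, which takes a consistent snapshot page by page without closing the source database. A sketch, with placeholder paths:

```python
import sqlite3

def snapshot(src_path, dst_path):
    """Copy a live SQLite database to a backup file as one consistent snapshot."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    src.backup(dst)  # consistent even if src is being written elsewhere
    dst.close()
    src.close()
```

This does not make the copy free, but it avoids the need to stop processing while the backup runs.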

Does SQLite have any solution to this problem?

Or do you have any solution to this problem? A hash-DB? Pickled elements?

Thanks for the help:
dd

Apr 3 '06 #1
DurumDara wrote:
I want to process a lot of data with Python, and want to store it in a database. ... So I want to use one database file - but I want to protect it.
How can I do it with SQLite?
I see these solutions:
- 1. Use transactions.
- 2. Create a full copy of the database after every bigger transaction.
- 3. A shadow file???
- 4. A mirror database (this is problematic to sync).

Transactions are a very good thing, but they do not protect the database
file from corruption.
The copy operation is better, but it decreases the processing speed a lot,
because the result db grows fast, and copying gigabytes of data is slow
and not too good.


With these requirements (data recovery, sizes of several gigabytes,
transaction safety, etc.) you might consider something "heavier" than
SQLite.

Of course, there is more work to administer something like DB2, Oracle
or PostgreSQL than SQLite, but at least the code is as easy as for
SQLite, and they are built to provide very robust storage of large
amounts of data in a transaction safe way, with ample possibilities
to spread out data across disks etc.

Also, with e.g. Oracle, you can define the maximum sizes of the database
files so that the disks never get full. If the allotted files are all
full, there won't be any crash. You will just get an error from the
last INSERT, and stay in your transaction. If you catch this error
and alert the user, more disk space can be made available for the
database, the erring INSERT repeated, and then you just go on with the
rest. I think you could do the same in recent PostgreSQL versions
by using savepoints. (PostgreSQL requires a rollback after an SQL
error; savepoints enable you to roll back less than a full
transaction.)
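The savepoint pattern described above can be sketched as follows; modern SQLite (3.6.8 and later) also supports SAVEPOINT, so the same idea works with Python's sqlite3 module. The table, savepoint name, and the deliberately failing INSERT are made-up examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction manually
conn.execute("CREATE TABLE results (key TEXT PRIMARY KEY, value TEXT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO results VALUES ('a', '1')")
conn.execute("SAVEPOINT before_risky")
try:
    conn.execute("INSERT INTO results VALUES ('a', 'dup')")  # violates the PK
except sqlite3.IntegrityError:
    # Undo only the failed statement; the rest of the transaction survives.
    conn.execute("ROLLBACK TO SAVEPOINT before_risky")
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
# count == 1: the first insert was committed despite the later error
```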
Apr 11 '06 #2
