simultaneous multiple requests to very simple database

I have an application where I need a very simple database, effectively a
very large dictionary. The very large dictionary must be accessed from
multiple processes simultaneously. I need to be able to lock records
within the very large dictionary when records are written to. The estimated
number of records will be in the ballpark of 50,000 to 100,000 in this
early phase and 10 times that in the future. Each record will run about
100 to 150 bytes.

Speed is not a huge concern, although I must complete processing in less
than 90 seconds. The longer the delay, however, the greater the number of
processes that must be running in parallel to keep the throughput up.
It's the usual trade-off we have all come to know and love.

It is not necessary for the dictionary to persist beyond the life of the
parent process, although I have another project coming up in which that
would be a good idea.

At this point, I know there will be some kind souls suggesting various
SQL solutions. While I appreciate the idea, unfortunately I do not have
time to puzzle out yet another component. Someday I will figure it out,
because I really like what I see with SQLite, but unfortunately today
is not that day (unless those kind souls will give me their work, home and cell
phone numbers so I can call when I am stuck ;-).

So the solutions that come to mind are either some form of dictionary in shared
memory with a locking semaphore scoreboard, or a multithreaded process
containing a single database (a native Python dictionary, Metakit, gdbm??)
that all of my processes speak to using XML-RPC, which leaves me
with the question of how to make a multithreaded server using the stock xmlrpc modules.
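
A minimal sketch of what that second approach could look like: one process owns the dictionary plus per-record locks and exports get/put over XML-RPC from a threaded server. The module names below are the current Python 3 ones (xmlrpc.server, socketserver); in the Python 2.x of this thread they were SimpleXMLRPCServer and SocketServer. The method names, port, and locking scheme are illustrative assumptions, not a settled design.

    import threading
    from socketserver import ThreadingMixIn
    from xmlrpc.server import SimpleXMLRPCServer

    class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
        """SimpleXMLRPCServer that handles each request in its own thread."""
        pass

    class DictServer:
        """Owns the big dictionary plus one lock per record."""
        def __init__(self):
            self._data = {}
            self._locks = {}                 # per-record locks
            self._guard = threading.Lock()   # protects the two dicts above

        def _lock_for(self, key):
            with self._guard:
                return self._locks.setdefault(key, threading.Lock())

        def get(self, key):
            return self._data.get(key)

        def put(self, key, value):
            with self._lock_for(key):        # record-level write lock
                self._data[key] = value
            return True

    if __name__ == '__main__':
        # port 8765 is an illustrative choice
        server = ThreadedXMLRPCServer(('localhost', 8765), allow_none=True)
        server.register_instance(DictServer())
        server.serve_forever()

Each worker process would then just call get and put through xmlrpc.client.ServerProxy.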

So feedback and pointers to information would be most welcome. I'm
still exploring the idea, so I am open to any and all suggestions (except
maybe SQL :-).

---eric

Jul 18 '05 #1
"Eric S. Johansson" <es*@harvee.org> wrote in message
news:ma**************************************@python.org...
<snip>
At this point, I know there will be some kind souls suggesting various
SQL solutions. While I appreciate the idea, unfortunately I do not have
time to puzzle out yet another component. Someday I will figure it out,
because I really like what I see with SQLite, but unfortunately today
is not that day (unless those kind souls will give me their work, home and cell
phone numbers so I can call when I am stuck ;-).

<snip>

Forgive me if this reply sounds a bit glib, but I do mean it without malice.

Do you seriously expect to write your own (database) solution, and that this
will save you time and effort over learning an existing (SQL) solution?

Because -
If you are seeking to "save time" on "puzzles", you are certainly going
about it the wrong way.

Best of luck
Thomas Bartkus
Jul 18 '05 #2
Thomas Bartkus wrote:
"Eric S. Johansson" <es*@harvee.org> wrote in message
news:ma**************************************@python.org...
<snip>
At this point, I know there will be some kind souls suggesting various
SQL solutions. While I appreciate the idea, unfortunately I do not have
time to puzzle out yet another component. Someday I will figure it out,
because I really like what I see with SQLite, but unfortunately today
is not that day (unless those kind souls will give me their work, home and cell
phone numbers so I can call when I am stuck ;-).
<snip>

Forgive me if this reply sounds a bit glib, but I do mean it without malice.


Understood, and taken in that spirit.
Do you seriously expect to write your own (database) solution, and that this
will save you time and effort over learning an existing (SQL) solution?

Because -
If you are seeking to "save time" on "puzzles", you are certainly going
about it the wrong way.


One thing I learned a long time ago was to respect the nagging voice in
the back of my head that says "there is something wrong". Right now,
with databases, that voice is not nagging but screaming. So I made my
query to try to prove that intuition wrong. So far, that has not happened.

When I look at databases, I see a bunch of very good solutions that are
either overly complex or heavyweight on one hand, or very nice and
simple but unable to deal with concurrency on the other: two sets of
point solutions that try to stretch themselves, and their developers, to fit
other application contexts.

99.9 percent of what I do (and I suspect this could be true for others)
could be satisfied by a slightly enhanced super-dictionary with record-level
locking. But the database world does not fit this model; it has
a great deal more complication than what is frequently necessary.

If I ever find the time, I will try to build such a beast, probably
around Metakit. The only reason for reluctance is that I spent too
many hours tracking down concurrency problems at the OS level way too
many years ago, and so I do not create multithreaded applications lightly.

So, in conclusion, my only reason for querying was to see if I was
missing a solution. So far, I have not found any worth using, because
they add orders of magnitude more complexity than a simple dbm with file
locking. Obviously, the simple solution has horrible performance, but
right now I need simplicity of implementation.
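
For reference, the "simple dbm with file locking" baseline mentioned above might look roughly like this (a minimal sketch, assuming a POSIX system where fcntl.flock is available; the file names and the whole-database lock granularity are illustrative assumptions):

    import dbm
    import fcntl
    from contextlib import contextmanager

    LOCK_PATH = 'bigdict.lock'   # hypothetical sidecar lock file
    DB_PATH = 'bigdict.db'       # hypothetical dbm file

    @contextmanager
    def locked_db():
        """Take an exclusive whole-database lock, then open the dbm file."""
        with open(LOCK_PATH, 'w') as lockfile:
            fcntl.flock(lockfile, fcntl.LOCK_EX)   # blocks until the lock is free
            try:
                with dbm.open(DB_PATH, 'c') as db:
                    yield db
            finally:
                fcntl.flock(lockfile, fcntl.LOCK_UN)

    # usage from any of the cooperating processes
    with locked_db() as db:
        db[b'message-id-1234'] = b'about 100-150 bytes of record data'

The single whole-file lock is what makes this approach slow under contention, which is the performance problem mentioned above.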

thanks for your commentary.

---eric

Jul 18 '05 #3
On Tue, 18 Jan 2005 17:33:26 -0500, Eric S. Johansson wrote:
When I look at databases, I see a bunch of very good solutions that are
either overly complex or heavyweight on one hand, or very nice and simple
but unable to deal with concurrency on the other: two sets of point
solutions that try to stretch themselves, and their developers, to fit other
application contexts.


Have you considered SQLite/pySQLite?

--
Ricardo

Jul 18 '05 #4
On Tue, 18 Jan 2005 17:33:26 -0500, Eric S. Johansson <es*@harvee.org> wrote:
So, in conclusion, my only reason for querying was to see if I was
missing a solution. So far, I have not found any worth using, because
they add orders of magnitude more complexity than a simple dbm with file
locking. Obviously, the simple solution has horrible performance, but
right now I need simplicity of implementation.

thanks for your commentary.


Maybe you can just get the best of both worlds.

Have a look at SQLObject. You can ignore the fact that underneath the
SQLObject layer there's a Postgres (or MySQL, or whatever) database, and get
OO-based persistence.

SQLObject is crippled in that there are degrees of freedom that SQL
gives you which SQLObject takes away or makes hard to use, but what you're
trying to do, and what most people actually do with databases, can
easily be wrapped in a simple, Pythonic layer.

It even has a .createTable() function for those times when you don't
even want to log into the database.
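
A rough illustration of what that could look like with SQLObject (the class, column names, and in-memory SQLite connection URI below are assumptions chosen for the dictionary-like use case, not anything from the thread):

    from sqlobject import SQLObject, StringCol, connectionForURI, sqlhub

    # hypothetical table standing in for the "very large dictionary"
    class Record(SQLObject):
        key = StringCol(alternateID=True)   # unique lookup key, gives Record.byKey()
        value = StringCol()

    sqlhub.processConnection = connectionForURI('sqlite:/:memory:')
    Record.createTable(ifNotExists=True)

    Record(key='message-id-1234', value='about 100-150 bytes of record data')
    print(Record.byKey('message-id-1234').value)

Concurrency handling then belongs to the underlying database rather than to hand-rolled locking.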

Regards,
Stephen Thorne.
Jul 18 '05 #5
Ricardo Bugalho wrote:
On Tue, 18 Jan 2005 17:33:26 -0500, Eric S. Johansson wrote:

When I look at databases, I see a bunch of very good solutions that are
either overly complex or heavyweight on one hand, or very nice and simple
but unable to deal with concurrency on the other: two sets of point
solutions that try to stretch themselves, and their developers, to fit other
application contexts.

Have you considered SQLite/pySQLite?


Yep, and apparently it won't work:

http://www.sqlite.org/faq.html#q7

If I had record-level locking, the code would follow a very common pattern like this:

if record present:
    lock record
    modify record
    release lock
else:
    create record atomically (actual method TBD)

If I read their opinion correctly, the SQLite folks are wrong in assuming that
only big applications need massive concurrency. Small applications can need
significant to massive concurrency for very tiny windows on very little
data.
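
Within a single multithreaded server process, that pattern could be expressed roughly like this (a sketch only; it assumes an in-process dict of threading.Lock objects, one of the designs floated earlier in the thread, and the function name is illustrative):

    import threading

    data = {}                       # the "very large dictionary"
    record_locks = {}               # one threading.Lock per record
    table_lock = threading.Lock()   # guards record and lock creation

    def update_record(key, modify):
        """Apply modify(old_value) -> new_value under a record-level lock."""
        with table_lock:
            lock = record_locks.setdefault(key, threading.Lock())
            if key not in data:
                # "create record atomically": record and its lock appear together
                data[key] = modify(None)
                return
        with lock:
            # record already exists: lock it, modify it, release the lock
            data[key] = modify(data[key])

    # example: counting occurrences of a key
    update_record('message-id-1234', lambda old: (old or 0) + 1)

Across separate processes the same shape applies, but the locks have to live somewhere every process can reach: a lock server, SysV semaphores, or file locks.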

But I do appreciate the pointer.

Jul 18 '05 #6
"Eric S. Johansson" <es*@harvee.org> wrote in message
news:ma**************************************@python.org...
<snip>
99.9 percent of what I do (and I suspect this could be true for others)
could be satisfied by a slightly enhanced super-dictionary with record-level
locking.
BUT - did you not mention:
"The estimated number of records will be in the ballpark of 50,000 to 100,000 in this early phase and 10 times that in the future. Each record will run about 100 to 150 bytes."
And: "The very large dictionary must be accessed from multiple processes simultaneously."
And: "I need to be able to lock records within the very large dictionary when records are written to."
And: "... although I must complete processing in less than 90 seconds."


And - the hole in the bottom of the hull -
all of the above using "a slightly enhanced super dictionary".

*Super* dictionary??? *Slightly* enhanced???
Have you attempted any feasibility tests? Are you running a Cray?

There are many database systems available, and Python (probably) has free
bindings to every one of them. Whichever one you might choose, it would add
simplicity, not complexity, to what you are attempting. The problems you
mention are precisely those that databases are meant to solve. The only
tough (impossible?) requirement you have is that you don't want to use one.

When you write that "super dictionary", be sure to post code!
I could use one of those myself.
Thomas Bartkus
Jul 18 '05 #7
Thomas Bartkus wrote:
When you write that "super dictionary", be sure to post code!
I could use one of those myself.


Hmmm, it looks like you have just flung down the gauntlet of "put up or
quityerwhinging". I need to get the crude implementation done first, but
I think I can do it if I can find a good XML-RPC multithreading framework.
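
On the client side, each worker process would only need something like the following (assuming a threaded dictionary server along the lines of the sketch in the first post; the address, port, and method names are illustrative assumptions):

    from xmlrpc.client import ServerProxy

    # hypothetical address of the shared dictionary server process
    server = ServerProxy('http://localhost:8765', allow_none=True)

    server.put('message-id-1234', 'about 100-150 bytes of record data')
    print(server.get('message-id-1234'))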

---eric

Jul 18 '05 #8
On Tue, 18 Jan 2005 11:26:46 -0500, Eric S. Johansson wrote:
So the solutions that come to mind are either some form of dictionary in shared
memory with a locking semaphore scoreboard, or a multithreaded process
containing a single database (a native Python dictionary, Metakit, gdbm??)
that all of my processes speak to using XML-RPC, which leaves me
with the question of how to make a multithreaded server using the stock
xmlrpc modules.


Another solution might be to store the records as files in a directory,
and use file locking to control access to the files (careful over NFS!).
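
A sketch of that file-per-record idea, assuming a POSIX filesystem and fcntl.flock (the directory layout and helper name are illustrative):

    import fcntl
    import os
    from contextlib import contextmanager

    RECORD_DIR = 'records'   # hypothetical directory, one file per record

    @contextmanager
    def locked_record(key):
        """Yield the record's file, holding an exclusive lock for the duration."""
        os.makedirs(RECORD_DIR, exist_ok=True)
        path = os.path.join(RECORD_DIR, key)
        fd = os.open(path, os.O_RDWR | os.O_CREAT)   # create the record if missing
        with os.fdopen(fd, 'r+b') as f:
            fcntl.flock(f, fcntl.LOCK_EX)   # blocks writers of this record only
            try:
                yield f
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    # read-modify-write one record without blocking writers of other records
    with locked_record('message-id-1234') as f:
        old = f.read()
        f.seek(0)
        f.truncate()
        f.write(old + b' updated')

Locking is per record here, at the cost of one file (and one open/lock pair of syscalls) per update.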

You might also consider Berkeley DB, which is a simple database to add to
an application (and which I believe supports locks), but I must admit I'm
not a fan of the library.

I assume that the bottleneck is processing the records, otherwise this all
seems a bit academic.

Jeremy

Jul 18 '05 #9

Just learned of this today, so I don't know enough details to judge
its suitability for you:

Durus
http://www.mems-exchange.org/software/durus/

It does not do locking, but alleges to be compact and easy to
understand, so perhaps you could modify it to meet your needs,
or find some other way to handle that requirement.

-Tom

--

To respond by email, replace "somewhere" with "astro" in the
return address.
Jul 18 '05 #10
phr
I agree with you, there's a crying need for something like that and
there's no single "one obvious way to do it" answer.

Have you looked at bsddb? See also www.sleepycat.com.
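
For reference, the bsddb module that shipped with Python 2.x made the basic case a few lines (a sketch from memory; in Python 3 the module is no longer in the standard library, and the third-party bsddb3/berkeleydb packages or plain dbm are the nearest equivalents; the file path is hypothetical):

    import bsddb   # Python 2.x standard-library binding to Berkeley DB

    db = bsddb.hashopen('/tmp/bigdict.db', 'c')   # 'c' = create if missing
    db['message-id-1234'] = 'about 100-150 bytes of record data'
    print db['message-id-1234']
    db.close()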
Jul 18 '05 #11
