
PEP 249 - DB API question

Hi all,

As development goes on for a server project, it turns out that I am
using MySQLdb and DB interactions quite heavily. One question keeps
bothering me: why don't we have a timeout for queries in PEP 249
(DB API)?

Is it really safe to wait for a query to finish? That is, does the
call always return, even if the DB server goes down?

Also, from my point of view, it would be a useful feature. We could
run long/non-critical DB queries with a timeout and slow/critical
ones without one. That would give us at least some ability to
prioritize queries by their criticality. And I don't see much effort
in implementing this: one would only have to change the socket logic
in the relevant DB API's source code, right?

What do you think?

Thanks
Nov 4 '08 #1
5 Replies


On Wed, Nov 5, 2008 at 3:52 AM, k3xji <su****@gmail.com> wrote:
> As development goes on for a server project, it turns out that I am
> using MySQLdb and DB interactions quite heavily. One question keeps
> bothering me: why don't we have a timeout for queries in PEP 249
> (DB API)?

Because not all database engines support this?

> Is it really safe to wait for a query to finish? That is, does the
> call always return, even if the DB server goes down?

Try using the non-blocking features (these may be RDBMS-specific).

> Also, from my point of view, it would be a useful feature. We could
> run long/non-critical DB queries with a timeout and slow/critical
> ones without one. That would give us at least some ability to
> prioritize queries by their criticality. And I don't see much effort
> in implementing this: one would only have to change the socket logic
> in the relevant DB API's source code, right?
Patches are welcome. A suggestion:

Try spawning a new process to run your query
in. Use the multiprocessing library. Your main
application can then just poll the db/query processes
to see whether they (a) are finished and (b) have a result.

Your application server can also (c) kill long-running
queries that are "deemed" to be taking "too long"
and may never finish (e.g. Cartesian joins).
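
For illustration, a minimal sketch of that multiprocessing approach.
The connection parameters, the query, and the query_with_timeout()
helper are illustrative assumptions, and error handling is omitted:

import multiprocessing

def run_query(sql, result_queue):
    # Each worker opens its own connection: DB connections generally
    # cannot be shared across process boundaries.
    import MySQLdb
    conn = MySQLdb.connect(host="localhost", user="app",
                           passwd="secret", db="test")
    try:
        cur = conn.cursor()
        cur.execute(sql)
        result_queue.put(cur.fetchall())
    finally:
        conn.close()

def query_with_timeout(sql, timeout):
    result_queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=run_query,
                                     args=(sql, result_queue))
    worker.start()
    worker.join(timeout)          # wait up to `timeout` seconds
    if worker.is_alive():
        worker.terminate()        # "deemed" too long: kill the process
        worker.join()
        raise RuntimeError("query timed out after %s seconds" % timeout)
    # Note: for very large result sets you would want to drain the
    # queue before joining, to avoid blocking the child on the pipe.
    return result_queue.get()

if __name__ == "__main__":
    print(query_with_timeout("SELECT SLEEP(2)", timeout=10))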

--JamesMills

--
--
-- "Problems are solved by method"
Nov 4 '08 #2

On 2008-11-04 18:52, k3xji wrote:
> Hi all,
>
> As development goes on for a server project, it turns out that I am
> using MySQLdb and DB interactions quite heavily. One question keeps
> bothering me: why don't we have a timeout for queries in PEP 249
> (DB API)?
>
> Is it really safe to wait for a query to finish? That is, does the
> call always return, even if the DB server goes down?
>
> Also, from my point of view, it would be a useful feature. We could
> run long/non-critical DB queries with a timeout and slow/critical
> ones without one. That would give us at least some ability to
> prioritize queries by their criticality. And I don't see much effort
> in implementing this: one would only have to change the socket logic
> in the relevant DB API's source code, right?
>
> What do you think?
This would be a question for the Python DB-SIG mailing list.

Timeouts and the handling of them are generally very database-specific.
It is difficult to provide a reliable way of configuring them, and it
may very well not even be within the scope of a database API (e.g.
because the timeout has to be configured in the database server using
some config file).

I'd suggest you check whether MySQL provides a way to set timeouts
and then just use that for your project.
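
For example, MySQLdb does expose a few MySQL-side knobs. A hedged
sketch follows (connection parameters are illustrative); note that
connect_timeout bounds only the connection attempt, and the session
variables cover idle or stalled network I/O rather than statement
execution, so none of these is a true per-query execution limit:

import MySQLdb

# Illustrative connection parameters.
conn = MySQLdb.connect(
    host="localhost",
    user="app",
    passwd="secret",
    db="test",
    connect_timeout=5,   # seconds allowed for establishing the connection
)

cur = conn.cursor()
# Session-level timeouts enforced by the MySQL server itself.
cur.execute("SET SESSION wait_timeout = 30")
cur.execute("SET SESSION net_read_timeout = 30")
cur.execute("SET SESSION net_write_timeout = 30")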

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Nov 04 2008)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

:::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,MacOSX for free ! ::::
eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
Nov 4 '08 #3

> Try spawning a new process to run your query
> in. Use the multiprocessing library. Your main
> application can then just poll the db/query processes
> to see whether they (a) are finished and (b) have a result.
>
> Your application server can also (c) kill long-running
> queries that are "deemed" to be taking "too long"
> and may never finish (e.g. Cartesian joins).
Just thinking out loud...

A more backward-compatible way to do that would be to have a pool
of threads running the queries, with the main thread polling to see
whether the child threads are taking too long to complete. However,
wouldn't that be a nightmare from a performance point of view? You
have a good reason to suggest multiprocessing, right? But at least I
can implement my critical queries with this kind of design, as there
are not that many of them.

Good idea, thanks...
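
A rough sketch of that thread-based variant (connection parameters
and the query are illustrative assumptions); the main caveat, noted
in the comments, is that a timed-out thread cannot actually be killed:

import threading
import MySQLdb

def run_query(sql):
    # Illustrative helper: each thread uses its own connection.
    conn = MySQLdb.connect(host="localhost", user="app",
                           passwd="secret", db="test")
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()

class QueryThread(threading.Thread):
    def __init__(self, sql):
        threading.Thread.__init__(self)
        self.sql = sql
        self.result = None
        self.daemon = True   # don't block interpreter exit on a hung query

    def run(self):
        self.result = run_query(self.sql)

worker = QueryThread("SELECT SLEEP(2)")
worker.start()
worker.join(10.0)            # wait at most 10 seconds
if worker.is_alive():
    # Unlike a process, the thread cannot be terminated from outside;
    # it is simply abandoned. That is the main drawback of this variant.
    print("query still running, giving up on it")
else:
    print(worker.result)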
Nov 4 '08 #4

On Wed, Nov 5, 2008 at 6:13 AM, k3xji <su****@gmail.com> wrote:
>> Try spawning a new process to run your query
>> in. Use the multiprocessing library. Your main
>> application can then just poll the db/query processes
>> to see whether they (a) are finished and (b) have a result.
>>
>> Your application server can also (c) kill long-running
>> queries that are "deemed" to be taking "too long"
>> and may never finish (e.g. Cartesian joins).
>
> Just thinking out loud...
>
> A more backward-compatible way to do that would be to have a pool
> of threads running the queries, with the main thread polling to see
> whether the child threads are taking too long to complete. However,
> wouldn't that be a nightmare from a performance point of view? You
> have a good reason to suggest multiprocessing, right? But at least I
> can implement my critical queries with this kind of design, as there
> are not that many of them.
I hate threads :) To be perfectly honest, I would
use processes for performance reasons, and if it
were me, I would use my new shiny circuits [1]
library to trigger events when the queries are done.

--JamesMills

[1] http://trac.softcircuit.com.au/circuits/

--
--
-- "Problems are solved by method"
Nov 5 '08 #5

P: n/a
In message <ma**************************************@python.org>, James
Mills wrote:

> Try spawning a new process to run your query in.
One approach might be to have two processes: the worker process and the
watcher process. The worker does the work, of course. Before performing any
call that may hang, the worker sends a message to the watcher: "if you
don't hear back from me in x seconds, kill me". It then does the call.
After the call, it sends another message to the watcher: "OK, I'm back,
cancel the timeout".
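
A minimal sketch of that worker/watcher protocol, using a
multiprocessing.Queue as the message channel. The arm/disarm message
values, the sleep() standing in for the call that may hang, and the
use of POSIX signals are all illustrative assumptions:

import os
import signal
import time
import multiprocessing
try:
    import Queue as queue   # Python 2
except ImportError:
    import queue            # Python 3

def watcher(channel, worker_pid):
    while True:
        msg = channel.get()
        if msg is None:                   # worker is done for good
            return
        timeout = msg                     # arm: "kill me in `timeout` seconds"
        try:
            channel.get(timeout=timeout)  # wait for the disarm message
        except queue.Empty:
            os.kill(worker_pid, signal.SIGKILL)   # POSIX signal assumed
            return

def worker(channel):
    channel.put(5)            # arm: "if you don't hear back in 5s, kill me"
    time.sleep(1)             # stand-in for the call that may hang
    channel.put("disarm")     # back: cancel the timeout
    channel.put(None)         # finished, watcher can exit

if __name__ == "__main__":
    channel = multiprocessing.Queue()
    w = multiprocessing.Process(target=worker, args=(channel,))
    w.start()
    watch = multiprocessing.Process(target=watcher, args=(channel, w.pid))
    watch.start()
    w.join()
    watch.join()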
Nov 5 '08 #6
