
Strange problem: MySQL and python logging using two separate cursors

Hi,

Explaining this problem is quite tricky, but please bear with me.

I've got an application which uses MySQL (InnoDB) for two things:

1. Store and retrieve application data (including viewing the application log)
2. Store the application log produced by the python logging module

The connection and cursor for the two tasks are completely separate, and they
connect, query, execute and commit on their own. But they do use the same SQL
database.

Task 1 typically consists of a lot of SQL queries.

For task 2 I'm using the logging_test14.py example from the logging module as
a template. (Basically the only thing I've added is an escape_string() call,
so that any SQL queries from task 1 which get logged by task 2 are properly
escaped before being written to the database.)
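A minimal sketch of such a DB log handler (illustrated here with sqlite3 so
the snippet is self-contained; the table name and columns are made up). Note
that DB-API parameter binding does the escaping itself, which is why no
manual escape_string() call is strictly necessary:

```python
import logging
import sqlite3

class SQLHandler(logging.Handler):
    """Minimal sketch: write each log record to a database table."""
    def __init__(self, conn):
        logging.Handler.__init__(self)
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS log (created REAL, level TEXT, msg TEXT)")
        self.conn.commit()

    def emit(self, record):
        # Parameter binding escapes the message for us, so even log messages
        # that themselves contain SQL (or quotes) are stored safely.
        self.conn.execute("INSERT INTO log VALUES (?, ?, ?)",
                          (record.created, record.levelname, self.format(record)))
        self.conn.commit()   # commit per entry so other connections can see it

conn = sqlite3.connect(":memory:")
logger = logging.getLogger("app")
logger.propagate = False
logger.addHandler(SQLHandler(conn))
logger.warning("SELECT * FROM users WHERE name = 'O''Brien'")
rows = conn.execute("SELECT level, msg FROM log").fetchall()
```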

From a MySQL shell I can view the logging updates as they are committed, i.e.
the log keeps growing while the application is in use.

But if I try to do the same from inside the application, the cursor in task 1
will only ever return between 50 and 60 log entries, even though more exist
(I can still watch them grow from the MySQL shell). If I re-run the same
query, the same 50-60 log entries are returned. No error, no message -
nothing.

To recap: task 2 writes all log messages to the database, and task 1 reads
these same log messages based on user input and presents them in a GUI.

I don't know if this is explained well enough, but it's tricky to describe
such weird behaviour.

The only clue I have so far is that the cursor in task 1 seems to be unable
to "register" any new entries in the log table produced by task 2 as soon as
task 1 performs an SQL query of some kind.

I'm contemplating using the same cursor for tasks 1 and 2, but I think keeping
them separate is a better design - if only it worked :)

Any input on this "nutcracker"?

Thanks,
Frank
Jan 9 '08 #1
2 Replies


On Wednesday 09 January 2008 18:52:02 Dennis Lee Bieber wrote:
> On Wed, 9 Jan 2008 10:11:09 +0100, Frank Aune <Fr********@broadpark.no>
> declaimed the following in comp.lang.python:
>> The only clue I have so far is that the cursor in task 1 seems to be
>> unable to "register" any new entries in the log table produced by task 2
>> as soon as task 1 performs an SQL query of some kind.
> How often do you issue a commit?
I experience this behaviour in task 1 even if the SELECT query only reads out
data and no commit is needed.

Do I really need to perform commits on a handler even though it only reads
data? From a MySQL shell I can see the changes from the other handler without
any commits, but as far as I can see that shouldn't be the case if the above
were true.
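This "frozen view" can be reproduced outside MySQL as well. A minimal,
self-contained sketch using sqlite3 in WAL mode (standing in for InnoDB's
REPEATABLE READ; the file path is just a temp file): a reader with an open
transaction keeps seeing its original snapshot until it commits or rolls
back, exactly like the stuck task 1 cursor.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.db")

writer = sqlite3.connect(path, isolation_level=None)   # autocommit mode
writer.execute("PRAGMA journal_mode=WAL")              # enable snapshot reads
writer.execute("CREATE TABLE log (msg TEXT)")
writer.execute("INSERT INTO log VALUES ('entry 1')")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")                                # open a read transaction
count_before = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]

writer.execute("INSERT INTO log VALUES ('entry 2')")   # committed immediately

# The reader's still-open transaction keeps its original snapshot:
stale = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]

reader.execute("COMMIT")                               # end transaction, drop snapshot
fresh = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

Here `stale` still reports one row even though a second row has been
committed; only after the reader ends its transaction does `fresh` see both.
So yes: issuing conn.commit() (or rollback()) on the reading connection is
what starts a fresh snapshot, even though no data was written. The MySQL
shell sees changes because it runs in autocommit mode by default.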

Thanks,
Frank

Jan 11 '08 #2

On Jan 9, 2008 11:52 AM, Dennis Lee Bieber <wl*****@ix.netcom.com> wrote:
> On Wed, 9 Jan 2008 10:11:09 +0100, Frank Aune <Fr********@broadpark.no>
> declaimed the following in comp.lang.python:
>> The only clue I have so far is that the cursor in task 1 seems to be unable
>> to "register" any new entries in the log table produced by task 2 as soon as
>> task 1 performs an SQL query of some kind.
> How often do you issue a commit? For some DB-API adapters (I forget
> which database -- think it was SQLite) a select query does not complete
> until the last data has been fetched from it -- meaning the transaction
> (the DB-API spec is that auto-commit is OFF) is still open and "other
> transaction changes" will not be seen. {I do find it perplexing that
> transactions are started by cursor actions, but committed by the
> connection!}
>> I'm contemplating using the same cursor for tasks 1 and 2, but I think
>> keeping them separate is a better design - if only it worked :)
> I'd probably suggest using a separate connection and cursor -- with
> liberal usage of conn.commit() to ensure that transaction "views" are
> flushed/refreshed.
The MySQL API doesn't have a concept of a cursor, only connections. If
you want truly separate cursors with MySQL, you need to use individual
connections.
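A minimal sketch of that point (again with sqlite3 standing in, since the
behaviour is common to DB-API adapters whose cursors share the connection's
transaction): two cursors on one connection see each other's uncommitted
work, while a second connection is genuinely isolated.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

conn_a = sqlite3.connect(path)                 # implicit transactions (DB-API default)
conn_a.execute("CREATE TABLE log (msg TEXT)")
conn_a.commit()
conn_b = sqlite3.connect(path)

conn_a.execute("INSERT INTO log VALUES ('hello')")   # not yet committed

# Two cursors on the SAME connection share its open transaction:
same_conn = conn_a.cursor().execute("SELECT COUNT(*) FROM log").fetchone()[0]

# A cursor on a SEPARATE connection is truly isolated:
other_conn = conn_b.execute("SELECT COUNT(*) FROM log").fetchone()[0]

conn_a.commit()                                # now visible everywhere
after_commit = conn_b.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

The same-connection cursor sees the uncommitted row; the other connection
sees nothing until conn_a commits. That is why two cursors on one MySQLdb
connection cannot give the log viewer and the log writer independent views.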
Jan 11 '08 #3
