On Dec 31, 2003, at 14:16, Chris Ochs wrote:
Now what our application does is create the queries as it runs; then, instead of inserting them into the database, it writes them all out to a single file at the end of the transaction. This is a huge performance boost. We then use a separate daemon to run the disk queue once every second and do all the inserts. If for some reason the main application can't write to disk, it will revert to inserting them directly.
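A minimal sketch of that scheme in Python, assuming one statement per line in the queue file (the post shows no code, so QUEUE_FILE, queue_insert, and run_queue_once are illustrative names, and db_conn stands for any DB-API connection):

    import os
    import time

    QUEUE_FILE = "/var/spool/app/insert-queue.sql"  # hypothetical path

    def queue_insert(sql, db_conn):
        """Append one statement to the on-disk queue; if the file
        can't be written, revert to inserting directly."""
        try:
            with open(QUEUE_FILE, "a") as f:  # "a" appends (O_APPEND)
                f.write(sql.rstrip("\n") + "\n")
        except OSError:
            cur = db_conn.cursor()
            cur.execute(sql)
            db_conn.commit()
            cur.close()

    def run_queue_once(db_conn):
        """Daemon side: claim the queue file, replay every statement
        in one transaction, then discard the file."""
        work = QUEUE_FILE + ".work"
        try:
            os.rename(QUEUE_FILE, work)  # atomic on one filesystem
        except FileNotFoundError:
            return  # nothing was queued this interval
        cur = db_conn.cursor()
        with open(work) as f:
            for line in f:
                cur.execute(line)
        db_conn.commit()
        cur.close()
        os.remove(work)

    def run_forever(db_conn, interval=1.0):
        """The separate daemon: run the disk queue once a second."""
        while True:
            run_queue_once(db_conn)
            time.sleep(interval)

The rename-then-replay step keeps the daemon from racing with writers that are still appending to the live file.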
Is this a crazy way to handle this? No matter what I have tried, opening and writing a single line to a file on disk is way faster than any database I have used. I even tried using BerkeleyDB as the queue instead of the disk, but that wasn't a whole lot faster than using the cached database handles (our application runs under mod_perl).
In my application, I've built a "TransactionPipeline" class that queues up transactions for asynchronous storage. It made an incredible difference in transaction processing speed in the places where the transaction isn't critical: atomicity still matters, but the main use is transactions that record the state of a device, for example.
Conceptually, it's somewhat similar to yours. The thread that runs the queue is triggered by the addition of a new transaction via a normal notification mechanism, so it's basically idle unless something is happening; if a lot is going on, the queue just builds up until the incoming rate drops below the rate at which we can actually process it.
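A rough Python sketch of that shape (the real TransactionPipeline interface isn't shown in the post, so submit and the execute callable are my names); queue.Queue's blocking get() plays the part of the notification mechanism, so the worker sleeps until an item arrives and the queue simply grows under load:

    import queue
    import threading

    class TransactionPipeline:
        """Queues transactions for asynchronous storage; a single
        worker thread is woken whenever a new one is added."""

        def __init__(self, execute):
            self._execute = execute      # runs one transaction atomically
            self._queue = queue.Queue()  # unbounded: builds up under load
            worker = threading.Thread(target=self._run, daemon=True)
            worker.start()

        def submit(self, txn):
            # Returns immediately; putting an item notifies the
            # worker through the queue's internal condition variable.
            self._queue.put(txn)

        def _run(self):
            while True:
                txn = self._queue.get()  # idle (blocked) until work arrives
                try:
                    self._execute(txn)
                finally:
                    self._queue.task_done()

A producer just calls pipeline.submit(txn) and moves on; only work that can tolerate deferred durability (like the device-state records above) should go through it.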
It's been the only thing that's kept our application running (against MS SQL Server, until we can throw that away in favor of Postgres).
--
SPY My girlfriend asked me which one I like better.
pub 1024/3CAE01D5 1994/11/03 Dustin Sallings <du****@spy.net>
| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her. ____________