Marc,
Your assumption falls short because there might be more than one
non-read operation being performed.
For example, you might have a general audit/log table with the time the
audit took place and so on, and then specific detail tables related to
that general table.
    What it comes down to is that unless it is a single operation against a
single resource, you most definitely want a new transaction. Even with file
systems becoming transactional (Transactional NTFS in Vista), it's more
important than ever to take this into account.
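    For instance, something along these lines (just a sketch; the AuditLog
table, its columns, and the helper name are all invented for illustration):

using System;
using System.Data.SqlClient;
using System.Transactions;

static class AuditLogger
{
    public static void Write(string connectionString, string message)
    {
        // RequiresNew suspends any ambient transaction and starts a
        // fresh one, so the audit insert commits (or rolls back) on its
        // own, independent of the caller's work.
        using (TransactionScope scope =
            new TransactionScope(TransactionScopeOption.RequiresNew))
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO AuditLog (LoggedAt, Detail) VALUES (@at, @detail)",
                conn);
            cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@detail", message);
            cmd.ExecuteNonQuery();
            scope.Complete(); // commit the audit transaction
        }
    }
}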
I agree that a queue operation might be a good choice here, but even
then, depending on the type of queue, you still have to take transactions
into account.
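    With MSMQ, say, a send to a transactional queue can enlist in the
ambient transaction, so the message only goes out if the scope commits
(again just a sketch; the queue path is invented, and the queue is assumed
to have been created as transactional):

using System.Messaging;
using System.Transactions;

class QueuedAuditExample
{
    static void DoWork()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // ... main database work happens here ...

            using (MessageQueue queue =
                new MessageQueue(@".\private$\auditlog"))
            {
                // Automatic: enlist the send in the ambient
                // System.Transactions transaction; the message is only
                // delivered if the scope commits.
                queue.Send("audit entry",
                    MessageQueueTransactionType.Automatic);
            }

            scope.Complete();
        }
    }
}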
--
- Nicholas Paldino [.NET/C# MVP]
-
mv*@spam.guard.caspershouse.com
"Marc Gravell" <ma**********@gmail.comwrote in message
news:11**********************@n59g2000hsh.googlegr oups.com...
>I considered that, but guessed (albeit without any evidence) that
typical logging would be doing a very simple single insert (hence no
huge atomicity requirement) without querying any data first (so no
isolation-leevl range-lock subtleties) nor lock escalation, so *for
this specific scenario* suppress might be OK.
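A sketch of that suppress approach, for the record (the Log table is
invented, and connectionString/message are assumed to be in scope):

using System;
using System.Data.SqlClient;
using System.Transactions;

class SuppressedLogger
{
    public static void Write(string connectionString, string message)
    {
        // Suppress runs the insert outside any ambient transaction, so
        // the log write neither holds up nor rolls back with the main
        // work.
        using (TransactionScope scope =
            new TransactionScope(TransactionScopeOption.Suppress))
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Log (LoggedAt, Message) VALUES (@at, @msg)",
                conn);
            cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@msg", message);
            cmd.ExecuteNonQuery();
            scope.Complete();
        }
    }
}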
> Of course, another approach is to use a queue for logging, cleared
> down on a (not too irregular) basis, so that a: the main transactional
> code isn't impeded by logging IO, and b: when logging *does* happen it
> can happen in a batch (or even SqlBulkCopy depending on throughput)
> rather than individual statements.
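And the clear-down side of that might look something like this (just a
sketch; the LogBuffer class and the Log table are invented), buffering
entries in memory and flushing them in one SqlBulkCopy call:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class LogBuffer
{
    private readonly List<string> pending = new List<string>();
    private readonly object sync = new object();

    public void Add(string message)
    {
        // Cheap: no IO on the hot path, just an in-memory append.
        lock (sync) { pending.Add(message); }
    }

    public void Flush(string connectionString)
    {
        List<string> batch;
        lock (sync)
        {
            batch = new List<string>(pending);
            pending.Clear();
        }
        if (batch.Count == 0) return;

        // Shape a DataTable to match the Log table's columns.
        DataTable table = new DataTable();
        table.Columns.Add("LoggedAt", typeof(DateTime));
        table.Columns.Add("Message", typeof(string));
        foreach (string msg in batch)
        {
            table.Rows.Add(DateTime.UtcNow, msg);
        }

        using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "Log";
            bulk.WriteToServer(table); // one round trip for the batch
        }
    }
}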
> Marc