"Ryan Liu" <ad********@online.sh.cn> wrote in message
news:eE**************@TK2MSFTNGP03.phx.gbl...
> Thanks, Laurent and Peter!
> So that's why I want to close the stream in the finally of main() and also
> in the static destructor, and hope that will help when the application
> crashes. (So when it crashes, even finally might not be executed?)
It depends on why it crashes. If the error is in your own code, or in .NET
code, you have an excellent chance of having the opportunity to clean up in
the finally clause of your main() function. If the entire computer faults
(video driver problem, for example) or the process is terminated abnormally
by external means ("End Process", for example) then no.
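For readers following along, the cleanup pattern described above might look something like this in C# (a minimal sketch; the file name `app.log` and the log contents are illustrative, not from the thread):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Open the log once at startup.
        StreamWriter log = new StreamWriter("app.log", append: true);
        try
        {
            log.WriteLine("started {0:o}", DateTime.UtcNow);
            // ... the application's real work goes here ...
        }
        finally
        {
            // Runs on normal exit and on most managed exceptions, but not
            // if the machine faults or the process is killed externally.
            log.Dispose();
        }
    }
}
```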
However, one presumes that on a system where you want a continuously running
process for which a log file is very important, you are minimizing your
exposure to buggy third-party code. Hopefully the scenarios in which your
application doesn't get a chance to clean up are even more rare than the
scenarios in which your application itself might crash, and presumably those
latter scenarios are themselves exceedingly rare.
If not, your time would be better spent figuring out how to ensure that they
*are* exceedingly rare, rather than worrying about the log file. :)
> And for the same reason I open the file stream in shared read/write mode,
> so even if the stream is not properly closed, it can still be read and
> written next time.
I have seen Windows fail to clean up an opened file correctly. However,
never after a simple application crash, and infrequently in any case (I've
seen it happen maybe three times since I first started using Windows NT,
almost 15 years ago). How an application opens the file doesn't have
anything to do with access to the file after that application crashes.
Normally, Windows will clean things up for you and the file will be returned
to a normal, closed state ready for access by other processes.
So, no...using a shared file access mode has no bearing on access to the
file after the application crashes.
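For completeness, opening a log with a shared access mode in .NET looks roughly like this (a sketch; the `SharedLog` class name and the path parameter are my own, not from the thread):

```csharp
using System.IO;
using System.Text;

static class SharedLog
{
    // FileShare.ReadWrite lets other handles on the same file read and
    // write concurrently while this one is open. As noted above, this
    // governs concurrent access only -- it has no bearing on access to
    // the file after the process has crashed and Windows has cleaned up.
    public static StreamWriter Open(string path)
    {
        var stream = new FileStream(path, FileMode.Append, FileAccess.Write,
                                    FileShare.ReadWrite);
        return new StreamWriter(stream, Encoding.UTF8);
    }
}
```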
> And is it really costly to open a file and append at the end? I am afraid
> it will read the whole file (which could be big) and then write. If it can
> locate and "jump" right to the end of the file without reading it, it
> should not be that costly. I just don't know the underlying implementation.
Seeking to a particular position in the file is relatively fast. The size
of the file shouldn't affect performance much, if at all. It's just that
those operations (open, seek, and close) aren't free and relative to
whatever work you could be doing on the computer they may even be fairly
expensive. If you do them every few seconds or even less frequently, I
would guess that they would not be a noticeable problem. But doing them
several times a second or more often could easily take i/o and CPU time away
from more important work.
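To illustrate the point about seeking: FileMode.Append positions the stream at the end of the file without reading the existing contents, so the cost of an append does not grow with file size (a sketch; the file name is illustrative):

```csharp
using System.IO;

class AppendDemo
{
    static void Main()
    {
        // FileMode.Append opens the file and seeks straight to the end;
        // the existing contents are never read, so this costs roughly
        // the same whether the file is 1 KB or 1 GB.
        using (var stream = new FileStream("big.log", FileMode.Append,
                                           FileAccess.Write, FileShare.Read))
        using (var writer = new StreamWriter(stream))
        {
            writer.WriteLine("appended without reading the rest of the file");
        }
    }
}
```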
If it's something that really matters in your situation, the best thing to
do is to measure it. Measure your performance (throughput, whatever is
relevant in your situation) without logging, and measure logging with and
without keeping the file open. This will give you some real data as to what
the best compromise will be for you.
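One simple way to take such a measurement is a Stopwatch comparison of keep-open versus open-per-write (a sketch under my own assumptions; N and the file names are arbitrary, and the actual numbers will vary from machine to machine):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class LogBenchmark
{
    const int N = 1000; // arbitrary iteration count for illustration

    static void Main()
    {
        File.Delete("keep.log");    // start from a clean slate
        File.Delete("reopen.log");  // (File.Delete ignores missing files)

        // Strategy 1: open once, write N entries, close once.
        var timer = Stopwatch.StartNew();
        using (var w = new StreamWriter("keep.log", append: true))
        {
            for (int i = 0; i < N; i++) w.WriteLine("entry {0}", i);
        }
        Console.WriteLine("kept open:      {0} ms", timer.ElapsedMilliseconds);

        // Strategy 2: open, append one entry, and close, N times over.
        timer.Restart();
        for (int i = 0; i < N; i++)
        {
            using (var w = new StreamWriter("reopen.log", append: true))
            {
                w.WriteLine("entry {0}", i);
            }
        }
        Console.WriteLine("open per write: {0} ms", timer.ElapsedMilliseconds);
    }
}
```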
Keeping in mind, of course, that over the years as hardware performance
changes -- CPUs get faster or more numerous, disk i/o gets faster, etc. --
the exact ratios you've measured will change, possibly affecting the
analysis as well. The measurement really only applies to the computer
configuration on which you've done it, but it should be instructive
regardless.
If it's not important enough to do some basic measurements, it's probably
not important enough to worry much at all about the performance of different
implementations. :)
Pete