Hi Amy,
A couple of questions first, if I may:
1. Are these files always in the same encoding (ASCII, UTF-8, UTF-16,
etc.)? It makes a difference, as different encodings need to be handled in
different ways. For example, ASCII is always 1 byte per character, UTF-8
uses 1 to 4 bytes per character, and UTF-16 uses 2 (or sometimes 4) bytes
per character, so those take a bit more care to read.
2. What is the expected (and projected) maximum size of any of these files?
You say that your app is processing 500+ at a time, but not how large any
one file can be. If the files are small enough, you can read each file all
at once, rather than using ReadLine, and then simply search the resulting
string for "\r\n\r\n".
--
HTH,
Kevin Spencer
Microsoft MVP
Professional Numbskull
Hard work is a medication for which
there is no placebo.
<am**@paxemail.com> wrote in message
news:11**********************@j33g2000cwa.googlegroups.com...
I have an application that reads data from text files up to the first
occurrence of a double CRLF. I have this working using StreamReader
and ReadLine.
However, I am hitting 500+ files at a time and this is a bit slow.
I was thinking of testing this using FileStream or something similar,
but I am not sure how to detect the occurrence of CRLF CRLF.
Thoughts?
Amy