
Memory Management extremely poor in C# when manipulating strings

I have a fairly simple C# program that just needs to open up a fixed-width
file, convert each record to tab-delimited and append a field to the end of
it.

The input files are between 300 MB and 600 MB. I've tried every memory
conservation trick I know in my conversion program, and a bunch I picked up
from reading some of the MSDN C# blogs, but still my program ends up using
hundreds and hundreds of megs of RAM. It is also taking excessively long to
process the files (between 10 and 25 minutes). Also, with each successive
file I process in the same program, performance goes way down, so that by the
3rd file the program comes to a complete halt and never completes.

I ended up rewriting the process in Perl, which takes only a couple of
minutes and never really gets above a 40 MB footprint.

What gives?

I'm noticing this very poor memory handling in all my programs that need to
do any kind of intensive string processing.

I have a 2nd program that just implements the LZW decompression algorithm
(pretty much copied straight out of the manuals). It works great on files
less than 100 KB, but if I try to run it on a file that's just 4.5 MB
compressed, it runs up to a 200+ MB footprint and then starts throwing Out of
Memory exceptions.

I was wondering if somebody could look at what I've got and see if I'm
missing something important? I'm an old-school C programmer, so I may be
doing something that is bad practice here.

Would appreciate any help anybody can give.

Regards,

Seg
Nov 17 '05 #1
Segfahlt <Se******@discussions.microsoft.com> wrote:
> I have a fairly simple C# program that just needs to open up a fixed-width
> file, convert each record to tab-delimited and append a field to the end of
> it. [...] My program ends up using hundreds and hundreds of megs of RAM.
> [...] I ended up rewriting the process in Perl, which takes only a couple
> of minutes and never really gets above a 40 MB footprint. What gives?

It's very hard to say without seeing any of your code. It sounds like
you don't actually need to load the whole file into memory at any time,
so the memory usage should be relatively small (aside from the overhead
for the framework itself).
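
Something along these lines ought to keep the footprint small and roughly
constant, because each record is written out as soon as it's built. (This is
just a sketch - the column widths, the file paths and the appended field are
made up, and there's no error handling.)

using System;
using System.IO;
using System.Text;

class FixedWidthToTab
{
    // Hypothetical record layout - replace with your real column widths.
    static readonly int[] Widths = { 10, 20, 8, 12 };

    static void Convert(string inputPath, string outputPath, string extraField)
    {
        using (StreamReader reader = new StreamReader(inputPath))
        using (StreamWriter writer = new StreamWriter(outputPath))
        {
            StringBuilder line = new StringBuilder(256);
            string record;
            while ((record = reader.ReadLine()) != null)
            {
                line.Length = 0;                    // reuse one buffer for every record
                int pos = 0;
                foreach (int width in Widths)
                {
                    if (pos > 0)
                        line.Append('\t');
                    line.Append(record.Substring(pos, width).Trim());
                    pos += width;
                }
                line.Append('\t').Append(extraField);
                writer.WriteLine(line.ToString());  // written out immediately, never accumulated
            }
        }
    }

    static void Main(string[] args)
    {
        Convert(args[0], args[1], "extra-field-value");
    }
}

The two things that matter are reading with ReadLine rather than pulling the
whole file in, and reusing one StringBuilder instead of concatenating strings
inside the loop.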

> I have a 2nd program that just implements the LZW decompression algorithm
> [...] It works great on files less than 100 KB, but if I try to run it on
> a file that's just 4.5 MB compressed, it runs up to a 200+ MB footprint
> and then starts throwing Out of Memory exceptions. [...]

Could you post a short but complete program which demonstrates the
problem?

See http://www.pobox.com/~skeet/csharp/complete.html for details of
what I mean by that.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #2

"Segfahlt" <Se******@discussions.microsoft.com> wrote in message
news:0D**********************************@microsof t.com...
[snip - original question quoted in full]

It's really hard to answer such a broad question without a clear description
of the algorithm used or seeing any code, so I'll have to guess:
1. You read the whole input file into memory.
2. You store each modified record in a string array or some other collection,
and only write them to the output file when you're done with the input file.
3. Both 1 and 2.
...
Anyway, you seem to be holding too many strings in memory before writing them
to the output file.
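
To make the point concrete, here is a rough illustration of the two patterns
(the per-record conversion below is invented, it's not your code - only the
shape of the loop matters). The first shape grows with the file; the second
stays flat no matter how big the input is.

using System;
using System.Collections;
using System.IO;

class StreamingDemo
{
    // Stand-in for whatever per-record conversion you actually do (hypothetical).
    static string ConvertRecord(string record)
    {
        return record.Replace("  ", "\t");
    }

    static void Main(string[] args)
    {
        using (StreamReader reader = new StreamReader(args[0]))
        using (StreamWriter writer = new StreamWriter(args[1]))
        {
            // Pattern 1 (my guess at what you are doing): collect every converted
            // record, write them all at the end. Memory grows in step with the file.
            //
            //   ArrayList records = new ArrayList();
            //   string s;
            //   while ((s = reader.ReadLine()) != null)
            //       records.Add(ConvertRecord(s));
            //   foreach (string r in records)
            //       writer.WriteLine(r);

            // Pattern 2: write each record as soon as it is converted.
            // Memory stays flat no matter how big the input is.
            string line;
            while ((line = reader.ReadLine()) != null)
                writer.WriteLine(ConvertRecord(line));
        }
    }
}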

Willy.
Nov 17 '05 #3
