I have a fairly simple C# program that just needs to open a fixed-width
file, convert each record to tab-delimited, and append a field to the end of
it.
The input files are between 300 MB and 600 MB. I've tried every memory
conservation trick I know in my conversion program, and a bunch I picked up
from reading some of the MSDN C# blogs, but my program still ends up using
hundreds and hundreds of megabytes of RAM. It is also taking excessively long
to process the files (between 10 and 25 minutes). Also, with each successive
file I process in the same program, performance goes way down, so that by the
3rd file the program grinds to a complete halt and never finishes.
I ended up rewriting the process in Perl, which takes only a couple of
minutes and never really gets above a 40 MB footprint.
What gives?
I'm noticing this very poor memory handling in all my programs that need to
do any kind of intensive string processing.
I have a second program that just implements the LZW decompression
algorithm (pretty much copied straight out of the manuals). It works great on
files smaller than 100 KB, but if I try to run it on a file that's just 4.5 MB
compressed, its footprint climbs past 200 MB and it starts throwing
OutOfMemory exceptions.
I was wondering if somebody could look at what I've got and see if I'm
missing something important? I'm an old-school C programmer, so I may be
doing something that is bad form in C#.
Would appreciate any help anybody can give.
Regards,
Seg
Segfahlt <Se******@discussions.microsoft.com> wrote:
> I have a fairly simple C# program that just needs to open up a fixed-width
> file, convert each record to tab-delimited and append a field to the end
> of it.
<snip>
> What gives?
It's very hard to say without seeing any of your code. It sounds like
you don't actually need to load the whole file into memory at any time,
so the memory usage should be relatively small (aside from the overhead
for the framework itself).
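For what it's worth, here is a minimal sketch of that streaming approach: read one record, write one record, accumulate nothing. The field widths and the appended field are made up for illustration; substitute your real record layout.

```csharp
using System;
using System.IO;

class FixedWidthConverter
{
    // Hypothetical field widths -- replace with the real record layout.
    static readonly int[] Widths = { 10, 20, 5 };

    public static void Convert(string inPath, string outPath, string extraField)
    {
        using (StreamReader reader = new StreamReader(inPath))
        using (StreamWriter writer = new StreamWriter(outPath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Slice the fixed-width record into trimmed fields.
                string[] fields = new string[Widths.Length];
                int pos = 0;
                for (int i = 0; i < Widths.Length; i++)
                {
                    fields[i] = line.Substring(pos, Widths[i]).Trim();
                    pos += Widths[i];
                }
                // One write per record; no per-file accumulation in memory.
                writer.Write(string.Join("\t", fields));
                writer.Write('\t');
                writer.WriteLine(extraField);
            }
        }
    }
}
```

With this shape, memory use is bounded by the longest single record, not by the file size, so a 600 MB input should process in a flat, small footprint.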
> I'm noticing this very poor memory handling in all my programs that need
> to do any kind of intensive string processing.
> I have a 2nd program that just implements the LZW decompression
> algorithm (pretty much copied straight out of the manuals).
<snip>
> I was wondering if somebody could look at what I've got down and see if
> I'm missing something important?
Could you post a short but complete program which demonstrates the
problem?
See http://www.pobox.com/~skeet/csharp/complete.html for details of
what I mean by that.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
"Segfahlt" <Se******@discussions.microsoft.com> wrote in message
news:0D**********************************@microsoft.com...
> I have a fairly simple C# program that just needs to open up a fixed-width
> file, convert each record to tab-delimited and append a field to the end
> of it.
<snip>
> Would appreciate any help anybody can give.
> Regards,
> Seg
It's really hard to answer such a broad question without a clear description
of the algorithm used or without seeing any code, so I'll have to guess:
1. You read the whole input file into memory.
2. You store each modified record into an array of strings or into a
StringCollection, and only write it out when done with the input file.
3. Both 1 and 2.
...
Either way, you seem to be holding too many strings in memory before writing
to the output file.
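One more guess along the same lines: if the records (or the LZW output) are being glued together with repeated string concatenation, that alone can explain the footprint. Strings are immutable in .NET, so each `+=` allocates a fresh copy of everything so far, which is quadratic work and leaves the heap full of dead intermediates for the GC to chase. A small illustration of the pattern and the StringBuilder fix (this is a guess at the code shape, not the actual program):

```csharp
using System;
using System.Text;

class ConcatDemo
{
    // Anti-pattern: O(n^2) time, one dead intermediate string per iteration.
    public static string BuildSlow(string[] records)
    {
        string result = "";
        foreach (string r in records)
            result += r + "\t";   // allocates a brand-new string every time
        return result;
    }

    // Fix: StringBuilder grows its buffer geometrically -- roughly O(n) total.
    public static string BuildFast(string[] records)
    {
        StringBuilder sb = new StringBuilder();
        foreach (string r in records)
            sb.Append(r).Append('\t');
        return sb.ToString();
    }
}
```

Better still, for the conversion job, is not building the whole output in memory at all, but writing each converted record straight to a StreamWriter as it is produced.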
Willy.