Cor,
I was watching for changes in memory by stepping through code and using the
GC.GetTotalMemory() method. How accurate it is, I dunno. I was just
stepping through each variable and recording the delta; that gave me my
estimated variable size.
I recorded it into an Excel spreadsheet and watched as I created my loop. I
don't know how often the GC is called in debug, but I'm assuming not very
often (because memory did keep growing).
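Roughly what I'm doing, as a sketch (variable names are made up; passing True to GC.GetTotalMemory forces a collection before reading, which should make the delta less noisy than just stepping in the debugger):

```vbnet
' Sketch of how I'm estimating a variable's size.
' GC.GetTotalMemory(True) forces a collection first, so the delta
' is more stable than a raw reading while stepping.
Dim before As Long = GC.GetTotalMemory(True)

Dim buffer(99999) As Byte   ' the allocation I'm trying to measure

Dim after As Long = GC.GetTotalMemory(True)
Console.WriteLine("Estimated size: " & (after - before) & " bytes")

GC.KeepAlive(buffer)        ' keep it alive until after the second reading
```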
Now I know according to a couple articles I read yesterday that the GC is
only called when required, i.e. when memory is low or another .net app
starts requesting resources.
So as I record my findings, I start to have some really basic questions
about what I'm doing.
First, type-safe event handlers using delegates...
I call

    AddHandler myClassName.MyEvent, New MyEventHandler(AddressOf onMyEventHandlerMethod)

and afterwards I remove it with

    RemoveHandler myClassName.MyEvent, New MyEventHandler(AddressOf onMyEventHandlerMethod)

Is this wrong? Should I not be creating a new delegate instance on the
remove; should I just use the AddressOf operator?
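As I understand it, both forms should end up removing the same handler, because delegate equality in .NET compares the target object and method rather than the delegate instance itself. A sketch (using the same names as my code above):

```vbnet
' Form 1: explicit delegate instance, mirroring the AddHandler call.
' A new instance still matches, since delegates compare target + method.
RemoveHandler myClassName.MyEvent, New MyEventHandler(AddressOf onMyEventHandlerMethod)

' Form 2: letting VB create the delegate from AddressOf directly.
RemoveHandler myClassName.MyEvent, AddressOf onMyEventHandlerMethod
```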
Second, declaring EventArgs. Should I:
a) declare a local variable and then raise, or
b) create the instance within the RaiseEvent call?
i.e.
a)

    Dim e As MyEventArgs
    e = New MyEventArgs(myArg1, myArg2)
    RaiseEvent MyEvent(Me, e)
    e = Nothing

or

b)

    RaiseEvent MyEvent(Me, New MyEventArgs(myArg1, myArg2))
Also, I noticed that entering a Try actually takes up 8K of memory. Is this
because it uses the API function GetLastWindowsError? Should I reduce the
number of Try/Catches I use? I remember reading something, by Gunnerson I
believe, about this type of error checking having a large overhead.
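To check whether it's the Try block itself or something inside it, I suppose I could measure an empty one in isolation (a sketch, using the same GC.GetTotalMemory approach as before):

```vbnet
' Sketch: measure an empty Try/Catch to see whether the block itself
' allocates anything, or whether the 8K came from code inside it.
Dim before As Long = GC.GetTotalMemory(True)

Try
    ' intentionally empty
Catch ex As Exception
    ' never reached; nothing is thrown here
End Try

Dim after As Long = GC.GetTotalMemory(True)
Console.WriteLine("Empty Try/Catch delta: " & (after - before) & " bytes")
```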
I do have a nearly 50 MB dataset (required; it would be a lot bigger if not
filtered), and I did call GC.Collect once, which reduced it some, but I
don't know if anything was promoted to Gen1 or Gen2 in the meantime.
I didn't check that.
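For next time: GC.GetGeneration reports which generation an object currently lives in, so I could have checked the promotion directly. A sketch (the probe object stands in for my real dataset):

```vbnet
' Sketch: check which generation an object is in, before and after a collect.
Dim probe As New Object   ' stand-in for my real dataset

Console.WriteLine("Generation now: " & GC.GetGeneration(probe))
GC.Collect()
Console.WriteLine("After one collect: " & GC.GetGeneration(probe))
```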
Weird thing: I saw Application.DoEvents calls take up 8K of memory; I
figured that was because of a thread swap or something.
So, here is my question. As this thing goes on, memory just keeps
getting consumed, up to about a gig (well, higher; my commit charge rises to
2.5 gig overall), and I'm destroying variables, disposing of classes, and
clearing datasets, yet memory never seems to be reclaimed. I tried calling
the GC often and was monitoring that with a memory profiler by SciTech.
That is firing just fine, and it reclaims a "little" memory, but maybe only
a meg at best. So this thing keeps going to a gig.
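For reference, this is roughly the cleanup sequence I've been trying (myDataSet is a placeholder for my real dataset). The double Collect with WaitForPendingFinalizers in between gives finalizable objects a chance to run their finalizers and then be reclaimed on the second pass:

```vbnet
' Sketch of the cleanup sequence (myDataSet is a placeholder).
myDataSet.Clear()
myDataSet.Dispose()
myDataSet = Nothing

GC.Collect()
GC.WaitForPendingFinalizers()  ' let finalizers run before the second pass
GC.Collect()

Console.WriteLine("Total after cleanup: " & GC.GetTotalMemory(False) & " bytes")
```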
So my last option as I see it is to set my MaxProcessSize (I think that's
what the method is called) to something like 200 MB, but I'm worried about
this affecting performance. I just know the machines this will run on don't
have the horsepower mine does, and certainly can't spare a gig. I'm
continuing to try to streamline it more, but I don't know what else I can do...
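If the property I'm half-remembering is actually Process.MaxWorkingSet (I'm not certain that's the one I read about), the cap would look something like this sketch. Note it limits the working set, not total committed memory, so pages over the cap get trimmed to the pagefile, which could hurt performance if they're touched again:

```vbnet
Imports System.Diagnostics

' Sketch, assuming the property I mean is Process.MaxWorkingSet.
Module CapWorkingSet
    Sub Main()
        Dim p As Process = Process.GetCurrentProcess()
        p.MaxWorkingSet = New IntPtr(200 * 1024 * 1024)  ' roughly 200 MB
        Console.WriteLine("Max working set: " & p.MaxWorkingSet.ToInt64())
    End Sub
End Module
```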
Advice would be appreciated.
-CJ
"Cor Ligthert" <no**********@planet.nl> wrote in message
news:ui*************@TK2MSFTNGP10.phx.gbl...
Hi CJ,
Finding the length of a variable is interesting but gives no real
information. Usually a single (processor) operation needed to process that
value uses much more memory than the value itself.
(I have not looked at the machine code from a program in a very long time;
however, I assume it is normal that processing a 16-bit value will take
more memory on a 32-bit processor system than a 32-bit value will.)
Just my thought
Cor