Bytes IT Community

will the memory allocated by malloc get released when program exits?

Hi,

Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.

Is it correct?

Nov 15 '05 #1
29 Replies


MJ
Hi
Yes, it is correct: when the program ends, all the memory will get
released...
Mayur

Nov 15 '05 #2

In article <11*********************@g14g2000cwa.googlegroups.com>,
MJ <ma********@gmail.com> wrote:
Yes, it is correct: when the program ends, all the memory will get
released...


It would be better to quote context.
Neither the C standard nor the POSIX standard explicitly define
exit() [or POSIX _exit()] as freeing memory. Nor does the C standard
define abort() as freeing memory.
In practice, I cannot think of any multi-process operating system
that does -not- free malloc()'d memory when the process ends.
I dunno -- MVS maybe? ;-)

I seem to recall (perhaps incorrectly) that it is not certain that
memory will be freed at process end when one is running in an
embedded environment -- if your toaster-control program somehow
exits [e.g., toaster on fire?] then what meta-process is there to
free the memory?
The original poster asked about malloc()'d memory, not about
"all the memory". The phrase you used, "all the memory", could
be construed to include shared memory segments, mmap()'d segments
and other non-malloc()'d forms -- forms that lie outside the C standard
but which exist on numerous systems. Shared memory segments in particular
are not necessarily released when the creating process terminates.
--
Look out, there are llamas!
Nov 15 '05 #3

REH

"MJ" <ma********@gmail.com> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com...
Hi
Yes, it is correct: when the program ends, all the memory will get
released...
Mayur


That's not necessarily true. Some versions of Windows do, others do not.
Unix does (at least the versions I have used do). VxWorks does not.

REH
Nov 15 '05 #4

ke*****@gmail.com wrote:
Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.

Is it correct?


It's platform-dependent. Not every OS does it.
Nov 15 '05 #5

<ke*****@gmail.com> wrote

Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.

Is it correct?

A decent operating system will do this. If the OS doesn't, the user deserves
to have to reboot, for buying such a trashy system.

You should free all memory in normal operation, to make it easier to detect
leaks. However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.
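The advice above might look like this in practice (a minimal sketch; the function name and message are illustrative, not from the thread):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a heap-allocated copy of src; the caller must free it. */
char *dup_string(const char *src)
{
    char *p = malloc(strlen(src) + 1);
    if (p == NULL) {
        /* Fatal error: just report and exit. On typical hosted
         * systems the OS reclaims the heap at process exit. */
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    strcpy(p, src);
    return p;
}
```

On the normal path the caller pairs each dup_string() with a free(), which keeps leak detectors quiet; the error path simply exits.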
Nov 15 '05 #6

In article <11**********************@g49g2000cwa.googlegroups.com>,
ke*****@gmail.com wrote:
Hi,

Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.


Someone will probably give you some answer, but the other question is:
Should you rely on malloc'ed memory being released when the program
exits?

First of all, you can't just keep on allocating memory. Your program
will either run out of memory at some point, or it will get slower and
slower if you allocate more and more memory and don't call free () when
it is not used anymore.

But you should also consider that what is one program today might be a
tiny component of a program next year. So if you relied on the operating
system releasing memory when the program exits, you now have a function
that doesn't release its memory. And if you call it repeatedly, you run
into trouble.

It is best to always match allocation and deallocation of memory.
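As a sketch of that discipline, here is a routine that could start life as part of a standalone program but frees everything it allocates, so it can later be called repeatedly from a larger program without leaking (the function and its purpose are hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/* Counts whitespace-separated words in `text`, using a heap scratch
 * buffer that is released before returning. Returns 0 on allocation
 * failure or empty input. */
size_t count_words(const char *text)
{
    char *scratch = malloc(strlen(text) + 1);
    if (scratch == NULL)
        return 0;
    strcpy(scratch, text);

    size_t n = 0;
    for (char *tok = strtok(scratch, " \t\n"); tok != NULL;
         tok = strtok(NULL, " \t\n"))
        n++;

    free(scratch);    /* released here, not left for process exit */
    return n;
}
```

Because the allocation and deallocation are matched inside the function, calling it a million times from a long-running program costs no more memory than calling it once.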
Nov 15 '05 #7

Malcolm wrote:
<ke*****@gmail.com> wrote

Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.

Is it correct?
A decent operating system will do this. If the OS doesn't, the user deserves
to have to reboot, for buying such a trashy system.


Many RTOSs do not free memory on program exit. For these systems, it's
a design tradeoff, not a quality issue.

You should free all memory in normal operation, to make it easier to detect
leaks. However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.


Please don't. Always free all memory you allocate.
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #8

In article <11*********************@g44g2000cwa.googlegroups.com>,
Mark F. Haigh <mf*****@sbcglobal.net> wrote:
Malcolm wrote:
You should free all memory in normal operation, to make it easier to detect
leaks. However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.

Please don't. Always free all memory you allocate.


Ah? Even if you caught a SIGSEGV?
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
Nov 15 '05 #9


"Mark F. Haigh" <mf*****@sbcglobal.net> wrote
However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.


Please don't. Always free all memory you allocate.

Unfortunately this advice, though well-meaning, is impractical. Errors occur
mid-processing, often leaving structures in a corrupted or half-built state.
So freeing everything can be complicated. Then it is even more difficult to
test the freeing code, and it might itself generate errors.

So unless freeing all memory in all circumstances is an absolute
requirement, which is unusual, it is much better to simply exit with an
error message if it becomes necessary to abort.
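For what it's worth, freeing a partially built structure need not be intractable; the goto-cleanup idiom is one common way to keep error paths short and testable (a sketch with hypothetical names):

```c
#include <stdlib.h>

/* Hypothetical two-buffer structure, used only for illustration. */
struct pair {
    char *a;
    char *b;
};

/* Allocates both buffers or neither; on any failure, frees whatever
 * was built so far and returns NULL, never a half-built object. */
struct pair *pair_create(size_t n)
{
    struct pair *p = malloc(sizeof *p);
    if (p == NULL)
        goto fail;
    p->a = malloc(n);
    if (p->a == NULL)
        goto fail_p;
    p->b = malloc(n);
    if (p->b == NULL)
        goto fail_a;
    return p;

fail_a:
    free(p->a);
fail_p:
    free(p);
fail:
    return NULL;
}

void pair_destroy(struct pair *p)
{
    if (p != NULL) {
        free(p->a);
        free(p->b);
        free(p);
    }
}
```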
Nov 15 '05 #10

Malcolm wrote:
"Mark F. Haigh" <mf*****@sbcglobal.net> wrote
However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.


Please don't. Always free all memory you allocate.

Unfortunately this advice, though well-meaning, is impractical. Errors occur
mid processing, often leaving structures in a corrupted or half-build state.
So freeing everything can be complicated. Then it is even more difficult to
test the freeing code, and it might itself generate errors.

So unless freeing all memory in all circumstances is an absolute
requirement, which is unusual, it is much better to simply exit with an
error message if it becomes necessary to abort.


You only need to free at program termination in systems with no hardware
memory manager.
Nov 15 '05 #11

In article <1e************@main.anatron.com.au>,
Russell Shaw <rjshawN_o@s_pam.netspace.net.au> wrote:
You only need to free at program termination in systems with no hardware
memory manager.


That is not accurate.

The presence or absence of a hardware memory manager has nothing
to do with whether there is some kind of executive kernel
which is keeping track of memory allocation.

I have programmed on systems that had hardware memory management
but which did not use it for tracking memory allocation,
and I have programmed on systems that had no hardware memory
management but which -did- track memory allocation in the kernel.

In Unix, memory allocation is based on the brk() and
sbrk() *system calls* -- and the operating system can use
any mechanism it wishes for tracking memory allocation.
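On systems that still expose them, the break can be observed directly. A sketch, assuming a POSIX-like platform where sbrk() is available (it is a legacy interface, and modern allocators often use mmap() for large requests, in which case the break does not move at all):

```c
#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <unistd.h>

/* Returns how far the program break moved across a malloc() of
 * `request` bytes. Zero is entirely possible: the allocator may
 * satisfy the request from existing arena space or via mmap()
 * rather than brk()/sbrk(). */
long break_growth(size_t request)
{
    void *before = sbrk(0);      /* current end of the data segment */
    void *p = malloc(request);
    void *after = sbrk(0);
    free(p);
    return (char *)after - (char *)before;
}
```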
--
The rule of thumb for speed is:

1. If it doesn't work then speed doesn't matter. -- Christian Bau
Nov 15 '05 #12

On Mon, 04 Jul 2005 13:27:06 -0700, Mark F. Haigh wrote:
Malcolm wrote:
<ke*****@gmail.com> wrote
>
> Will the memory allocated by malloc get released when program exits?
>
> I guess it will since when the program exits, the OS will free all the
> memory (global, stack, heap) used by this process.
>
> Is it correct?

A decent operating system will do this. If the OS doesn't, the user deserves
to have to reboot, for buying such a trashy system.


Many RTOSs do not free memory on program exit.


Really? I can understand this for low level embedded systems, where
termination of the program is a pretty much terminal fault anyway. But
memory reclamation doesn't generate obvious issues for a real-time system,
it can be given a low priority. A RTOS that doesn't reclaim memory
properly risks running out of resources i.e. not being able to guarantee
what a program needs to run. That strikes me as a rather significant flaw
in a RTOS.
For these systems, it's
a design tradeoff, not a quality issue.
For a RTOS it is a fundamental quality issue.
Lawrence

Nov 15 '05 #13

On Mon, 04 Jul 2005 12:07:40 -0500, Sensei wrote:
ke*****@gmail.com wrote:
Will the memory allocated by malloc get released when program exits?

I guess it will since when the program exits, the OS will free all the
memory (global, stack, heap) used by this process.

Is it correct?


It's platform-dependent. Not every OS does it.


But if you choose to limit yourself to OSs that do, you won't be limiting
yourself very much in practice. How many OSs can you name that don't do it?

As far as C is concerned you get no more guarantees about the releasing of
freed memory, automatic variable etc. than you do about unfreed memory.

Lawrence

Nov 15 '05 #14

On Mon, 04 Jul 2005 16:41:23 +0000, REH wrote:

"MJ" <ma********@gmail.com> wrote in message
news:11*********************@g14g2000cwa.googlegroups.com...
Hi
Yes, it is correct: when the program ends, all the memory will get
released...
Mayur


That's not necessarily true. Some versions of Windows do, others do not.


What versions of Windows don't? Even DOS implementations managed to get
this right. Win16 certainly had the mechanisms to do this properly (IIRC
it was called LocalAlloc).

Lawrence
Nov 15 '05 #15


Of course I will free the memory in my code. It's just an interview
question.

So the answer is that it depends on the OS, right? Some systems
will free the memory allocated by malloc when the program exits, while
others will not.

Correct?

Nov 15 '05 #16

Lawrence Kirby wrote:
But if you choose to limit yourself to OSs that do, you won't be limiting
yourself very much in practice. How many OSs can you name that don't do it?

As far as C is concerned you get no more guarantees about the releasing of
freed memory, automatic variable etc. than you do about unfreed memory.


Not only that. OK, many OSs do reclaim memory from processes that have
ended, in one way or another, but don't rely on that. Moreover,
allocating memory just for fun and not deallocating it makes your
program huge and slower for no reason, so deallocate the memory you
don't use anymore.

malloc() and free() are siblings, use them always and with care :)
Nov 15 '05 #17

ke*****@gmail.com wrote:

Of course I will free the memory in my code. It's just an interview
question.


Not necessarily. Especially if you are using a malloc/free package
that has O(N*N) performance when there are many items to free.
This is quite common - the thing that is not so common is programs
that end with large numbers of items allocated and then free them
all.

You can demonstrate the phenomenon easily with the test program for
my hashlib system, which has provision for avoiding the final
frees. I forget the number where the effect became obvious, it
might have been 200,000 or maybe 2,000,000 items. It shows up as a
huge delay between telling the program to exit and getting a prompt
back. I found the O(N*N) effect on DJGPP, LCC_WIN32, and VC6.

The effect caused me to track down the culprit, and develop the
nmalloc package for DJGPP. nmalloc is O(1) for a single free, and
thus O(N) for a large count of items to free. Hashlib is portable,
nmalloc is not (and cannot be). You can see (and get) the packages
at:

<http://cbfalconer.home.att.net/download/>

For those interested, the free delays have to do with stitching
together freed memory with other free blocks. The troublesome
systems have to search all blocks for possible adjacencies, which
is an O(N) operation, thus making free an O(N) operation, and
freeing of all N blocks an O(N*N) operation. nmalloc keeps track
of physically adjacent blocks so that the adjacency search consists
of examining two pointers, and is thus O(1).
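The effect described above can be observed with a timing sketch along these lines (block count and size are arbitrary; on an allocator where a single free is O(N), the timed loop dominates the run):

```c
#include <stdlib.h>
#include <time.h>

/* Allocates nblocks small blocks, then times freeing them all.
 * Returns the CPU seconds spent in the free loop, or -1.0 on
 * allocation failure. */
double time_free_all(size_t nblocks)
{
    void **blk = malloc(nblocks * sizeof *blk);
    if (blk == NULL)
        return -1.0;
    for (size_t i = 0; i < nblocks; i++)
        blk[i] = malloc(16);

    clock_t t0 = clock();
    for (size_t i = 0; i < nblocks; i++)
        free(blk[i]);
    clock_t t1 = clock();

    free(blk);
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}
```

Plotting the result against nblocks distinguishes an O(N) total from the O(N*N) behaviour described in the post.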

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
Nov 15 '05 #18

In article <da**********@news.doit.wisc.edu>, Sensei <se******@tin.it> wrote:
Moreover,
allocating memory just for fun and not deallocating it makes your
program huge and slower for no reason,


In most modern Unix OS, allocating large amounts of memory is usually
sub-linear time in the number of "pages" of memory to be allocated --
just long enough to allocate the memory virtually from zero-initialized
automatic store. The task of finding a physical page to go along with
the virtual memory is often deferred until the first write to the page.

Thus, as usual, any time one is talking about what is "slower" or not,
one should qualify the claim or state it conditionally. Yes, there are
operating systems and hardware in which allocating large amounts
of memory makes your program slow, but there are others in which it
barely makes a difference.

On some systems, how the memory is allocated matters far more than the
total storage used. For example, 100,000 malloc() calls of 8 bytes each
is often noticeably slower than 8 allocations of 1,000,000 bytes each.
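That comparison can be sketched like this (the counts come from the post; the timing helper itself is illustrative):

```c
#include <stdlib.h>
#include <time.h>

/* Times `count` allocations of `size` bytes each, then frees them.
 * Returns CPU seconds spent allocating, or -1.0 on failure. */
double time_allocs(size_t count, size_t size)
{
    void **p = malloc(count * sizeof *p);
    if (p == NULL)
        return -1.0;

    clock_t t0 = clock();
    for (size_t i = 0; i < count; i++)
        p[i] = malloc(size);
    clock_t t1 = clock();

    for (size_t i = 0; i < count; i++)
        free(p[i]);
    free(p);
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

/* The comparison from the post would be, e.g.:
 *   time_allocs(100000, 8)  vs.  time_allocs(8, 1000000)
 */
```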
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
Nov 15 '05 #19

Walter Roberson wrote:
In article <11*********************@g44g2000cwa.googlegroups.com>,
Mark F. Haigh <mf*****@sbcglobal.net> wrote:
Malcolm wrote:

You should free all memory in normal operation, to make it easier to detect
leaks. However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.

Please don't. Always free all memory you allocate.


Ah? Even if you caught a SIGSEGV?

Let's examine for a moment how one can come across a SIGSEGV:

1. Through undefined behavior leading to "an invalid access
to storage" (C99 7.14#3), although such conditions are not
required to be detected (C99 7.14#4).

2. As a result of calling raise (or other POSIX or system-
specific functions, e.g. kill).

If your program goes the way of #1, then internal C library memory
allocation structures may be corrupted or inconsistent. This is often
seen when overflowing a malloc-ed buffer or freeing a non-allocated or
invalid buffer location (including double frees). It should be obvious
why one should not allocate or free memory within a SIGSEGV handler.

The C standard is clear enough regarding what is allowed from a SIGSEGV
handler:

7.14 [...]
[#5] If the signal occurs other than as the result of
calling the abort or raise function, the behavior is
undefined if the signal handler refers to any object with
static storage duration other than by assigning a value to
an object declared as volatile sig_atomic_t, or the signal
handler calls any function in the standard library other
than the abort function or the signal function with the
first argument equal to the signal number corresponding to
the signal that caused the invocation of the handler.
[...]

But it's the undefined behavior that's the underlying problem. Some
implementations may be able to report certain undefined behavior via
SIGSEGV, and some may not. The best way to avoid having to worry about
it is to write correct, robust programs that do not invoke undefined
behavior.

Users of POSIX-compliant systems may be able to differentiate #1 from
#2 via the si_code member of siginfo_t.
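A handler that stays within those limits might look like this (a sketch; note it relies on POSIX semantics for returning from the handler of a raise()-generated SIGSEGV, which a strict reading of C99 7.14.1.1 leaves undefined):

```c
#include <signal.h>

/* The one flag type C guarantees is safe to write from a handler. */
static volatile sig_atomic_t got_segv = 0;

static void on_segv(int sig)
{
    got_segv = 1;           /* allowed: assign to volatile sig_atomic_t */
    signal(sig, on_segv);   /* allowed: signal() with this signal number */
    /* Not allowed here: free(), printf(), or touching other statics. */
}

/* Installs the handler and raises SIGSEGV deliberately (case #2
 * above); returns the flag value observed afterwards. */
int probe_segv_handler(void)
{
    got_segv = 0;
    signal(SIGSEGV, on_segv);
    raise(SIGSEGV);
    return (int)got_segv;
}
```

For a SIGSEGV arising from undefined behavior (case #1), none of this applies: the handler may never even run, and cleanup from inside it is exactly what the post warns against.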
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #20

Malcolm wrote:
"Mark F. Haigh" <mf*****@sbcglobal.net> wrote
However if the program encounters an error, don't hesitate to just
exit(EXIT_FAILURE) if program logic makes this the natural thing to do.
Please don't. Always free all memory you allocate.

Unfortunately this advice, though well-meaning, is impractical. Errors occur
mid-processing, often leaving structures in a corrupted or half-built state.
So freeing everything can be complicated. Then it is even more difficult to
test the freeing code, and it might itself generate errors.


If a user types incorrect input into a program, should it immediately
abort? Not in my book. If a file contains unexpected input, should
the program immediately abort? Again, no. Both are "errors", and may
occur mid-processing.

In cases like this, the program should free its allocated memory and
perform an orderly shutdown.

So unless freeing all memory in all circumstances is an absolute
requirement, which is unusual, it is much better to simply exit with an
error message if it becomes necessary to abort.


It's simply good practice to clean up your own messes. First off,
memory is *not* guaranteed to be freed on program exit. Pretending
like it is doesn't make it so. Secondly, as Christian Bau mentioned,
programs have a way of becoming libraries in yet bigger programs, and
aborting from a library is rarely considered appropriate.

As far as handling programmer error (UB) goes (via signals), anywhere
from dumping stack traces to rebooting may be appropriate. It depends
on the circumstance and the platform.
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #21

Lawrence Kirby wrote:
On Mon, 04 Jul 2005 13:27:06 -0700, Mark F. Haigh wrote:
Malcolm wrote:
<ke*****@gmail.com> wrote
>
> Will the memory allocated by malloc get released when program exits?
>
> I guess it will since when the program exits, the OS will free all the
> memory (global, stack, heap) used by this process.
>
> Is it correct?
>
A decent operating system will do this. If the OS doesn't, the user deserves
to have to reboot, for buying such a trashy system.
Many RTOSs do not free memory on program exit.


Really? I can understand this for low level embedded systems, where
termination of the program is a pretty much terminal fault anyway. But
memory reclamation doesn't generate obvious issues for a real-time system,
it can be given a low priority. A RTOS that doesn't reclaim memory
properly risks running out of resources i.e. not being able to guarantee
what a program needs to run. That strikes me as a rather significant flaw
in a RTOS.


RTOSs aren't always built to guarantee proper resource management;
they're built to guarantee bounded response times for certain events or
operations. As long as programs free the resources they allocate
before termination, there are generally no big resource management
problems besides memory fragmentation issues (which are worked around
in several ways).

An example of a significant flaw in a RTOS would be failing to service
a timer interrupt within a specified time bound that causes too much
radiation to be released into a cancer patient's body.

Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues and is an increase in footprint and overall
system complexity.

As a footnote, using a low-priority task to reclaim memory may never
actually run in a highly loaded system, where you're most likely to
need the extra memory!
For these systems, it's
a design tradeoff, not a quality issue.


For a RTOS it is a fundamental quality issue.


No, it's a design tradeoff. Memory that is malloc-ed may be still in
use by different parts of the system. In many circumstances, the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.

When people write clean, conformant code that frees what it allocates,
it can be easy to port a program to a RTOS. When that's not done, it
can be a real PITA.
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #22

CBFalconer <cb********@yahoo.com> wrote:
ke*****@gmail.com wrote:

Of course I will free the memory in my code. It's just an interview
question.
Not necessarily. Especially if you are using a malloc/free package
that has O(N*N) performance when there are many items to free.


(I understand that N is the number of `free's called at a time.)
I have noticed that glibc `free' runs slower in the presence of
many allocations, though I don't know the exact reason.
But why should `free' behave _that_ badly? Is it really related
to not handling adjacent blocks properly (I think you suggested
that in the later part, which I already snipped)? Handling them
seems so basic that every implementation should do it.
This is quite common - the thing that is not so common is programs
that end with large numbers of items allocated and then free them
all.

You can demonstrate the phenomenom easily with the test program for
my hashlib system, which has provision for avoiding the final
frees. I forget the number where the effect became obvious, it
might have been 200,000 or maybe 2,000,000 items. It shows up as a
huge delay between telling the program to exit and getting a prompt
back. I found the O(N*N) effect on DJGPP, LCC_WIN32, and VC6.


I have noticed similar delay in my program (where I allocate large
amounts of memory, but for speed I cache and reuse already allocated
chunks, so probably it's not that huge in number of blocks).
I haven't gone that far to investigate it yet. I have assumed
that the delay is probably related to returning the resources
to the system by a terminated process, although in some cases
it took astonishingly long (process real time was significantly
larger than user time for small test programs - say - by a factor
of 1.5). And it seemed to behave differently on different
computers and OSs (different glibc versions, kernels?).

I don't see any reason why an implementation should do anything
to the heap after exit()ing (even if you have unfreed blocks), except
when it has to free its own resources (close files, etc.).

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 15 '05 #23

On Tue, 05 Jul 2005 17:02:16 -0700, Mark F. Haigh wrote:
Lawrence Kirby wrote:
On Mon, 04 Jul 2005 13:27:06 -0700, Mark F. Haigh wrote:
> Malcolm wrote:
>> <ke*****@gmail.com> wrote
>> >
>> > Will the memory allocated by malloc get released when program exits?
>> >
>> > I guess it will since when the program exits, the OS will free all the
>> > memory (global, stack, heap) used by this process.
>> >
>> > Is it correct?
>> >
>> A decent operating system will do this. If the OS doesn't, the user deserves
>> to have to reboot, for buying such a trashy system.
>
> Many RTOSs do not free memory on program exit.
Really? I can understand this for low level embedded systems, where
termination of the program is a pretty much terminal fault anyway. But
memory reclamation doesn't generate obvious issues for a real-time system,
it can be given a low priority. A RTOS that doesn't reclaim memory
properly risks running out of resources i.e. not being able to guarantee
what a program needs to run. That strikes me as a rather significant flaw
in a RTOS.


RTOSs aren't always built to guarantee proper resource management;
they're built to guarantee bounded response times for certain events or
operations. As long as programs free the resources they allocate
before termination, there are generally no big resource management
problems besides memory fragmentation issues (which are worked around
in several ways).

An example of a significant flaw in a RTOS would be failing to service
a timer interrupt within a specified time bound that causes too much
radiation to be released into a cancer patient's body.


That would certainly be bad, being a little late in turning off the
radiation is not good. It could be worse though. Failing completely at
that point due to running out of resources such as memory could be
literally fatal. Of course any sane system would have secondary safeguards
and checks against resources getting low, but in terms of ensuring that an
action is performed successfully within a certain time ensuring necessary
resources is just as vital, perhaps even more so, than scheduling.
Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues and is an increase in footprint and overall
system complexity.
Not really: it is typically not difficult for the system to reclaim memory
for a terminated application and it can take away potentially much larger
complexity from the application. It also makes the overall system MUCH
easier to validate for long term stability.
As a footnote, using a low-priority task to reclaim memory may never
actually run in a highly loaded system, where you're most likely to
need the extra memory!
In which case you've failed from a RT point of view anyway. Even if the
application has to release the resources it still needs to spend the CPU
cycles to do it and if they are all in use then something has been
starved. RT principles can be applied to resource reclamation by the OS
just as much as anything else. If the RT system has insufficient CPU
resources to meet the RT requirements of its components then it has failed.
It makes no difference to this where memory freeing occurs, it still has
to happen somewhere.
> For these systems, it's
> a design tradeoff, not a quality issue.


For a RTOS it is a fundamental quality issue.


No, it's a design tradeoff. Memory that is malloc-ed may be still in
use by different parts of the system. In many circumstances, the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.


I agree that shared memory is different. While malloc() could be used to
allocate shared memory in an unprotected environment it makes more sense
to have a separate mechanism so that the OS can tell the difference. Good
design should keep shared objects small in number and preferably size.
They should not be treated like normal "local" objects.
When people write clean, conformant code that frees what it allocates,
it can be easy to port a program to a RTOS. When that's not done, it
can be a real PITA.


But the flaw that highlights is in the design of the RTOS. I can
understand this sort of thing for really small systems like embedded
systems on 8 bit processors where you have to cut corners, but there is
nothing in RT technology itself that warrants this.

Lawrence

Nov 15 '05 #24

On Tue, 05 Jul 2005 15:54:07 -0700, Mark F. Haigh wrote:
... Let's examine for a moment how one can come across a SIGSEGV:

1. Through undefined behavior leading to "an invalid access
to storage" (C99 7.14#3), although such conditions are not
required to be detected (C99 7.14#4).

2. As a result of calling raise (or other POSIX or system-
specific functions, e.g. kill).
....
Users of POSIX-compliant systems may be able to differentiate #1 from
#2 via the si_code member of siginfo_t.


What's the point? If something causes a SIGSEGV using raise or POSIX kill
then presumably it wants the code to act as if a segmentation fault (or
whatever on the system causes it) had occurred. If it wanted it to act in
some other way it would make more sense to use a different signal.

Lawrence
Nov 15 '05 #25

On Tue, 05 Jul 2005 09:06:08 -0500, Sensei wrote:
Lawrence Kirby wrote:
But if you choose to limit yourself to OSs that do, you won't be limiting
yourself very much in practice. How many OSs can you name that don't do it?

As far as C is concerned you get no more guarantees about the releasing of
freed memory, automatic variable etc. than you do about unfreed memory.
Not only that. OK, many OSs do reclaim memory from processes that have
ended, in one way or another, but don't rely on that.


You HAVE to rely on it for collection of freed memory, program code,
automatic variables, static variables etc.
Moreover,
allocating memory just for fun and not deallocating it makes your
program huge and slower for no reason, so deallocate the memory you
don't use anymore.
For the most part, yes. A tighter wording is to free memory before
it becomes inaccessible to the program, i.e. don't create memory
leaks. But that doesn't cover everything. Say you have a program whose
purpose is to maintain a large datastructure in memory, some in-memory
database or take for example a document in a text editor. As you edit that
document, the program allocates and possibly frees memory as you add and
delete parts of the document. You write the file out, fine, it is still
also in memory. Then you quit the text editor. The question is whether it
is worth the program going through the whole document datastructure in
memory and freeing it when the OS will reclaim the memory whether you do
or not. For some applications this freeing process could take a
SIGNIFICANT time.

Note that there is no memory leak here, the allocated memory is accessible
by the program up until the point it terminates.

Lawrence


malloc() and free() are siblings, use them always and with care :)


Nov 15 '05 #26

CBFalconer <cb********@yahoo.com> writes:
ke*****@gmail.com wrote:

Of course I will free the memory in my code. It's just an interview
question.


Not necessarily. Especially if you are using a malloc/free package
that has O(N*N) performance when there are many items to free.


Also, with virtual memory, freeing every item may force blocks to
be brought in from disk, which is very slow.
--
Ben Pfaff
email: bl*@cs.stanford.edu
web: http://benpfaff.org
Nov 15 '05 #27

"S.Tobias" wrote:
CBFalconer <cb********@yahoo.com> wrote:

.... snip ...

You can demonstrate the phenomenon easily with the test program for
my hashlib system, which has provision for avoiding the final
frees. I forget the number where the effect became obvious, it
might have been 200,000 or maybe 2,000,000 items. It shows up as a
huge delay between telling the program to exit and getting a prompt
back. I found the O(N*N) effect on DJGPP, LCC_WIN32, and VC6.


I have noticed similar delay in my program (where I allocate large
amounts of memory, but for speed I cache and reuse already allocated
chunks, so probably it's not that huge in number of blocks).
I haven't gone that far to investigate it yet. I have assumed
that the delay is probably related to returning the resources
to the system by a terminated process, although in some cases
it took astonishingly long (process real time was significantly
larger that user time for small test programs - say - by a factor
of 1.5). And it seemed to behave differently on different
computers and OSs (different version glibc, kernel?).

I don't see any reason why an implementation should do anything
to the heap after exit()ing (even if you have unfreed blocks), except
when it has to free it's own resources (close files, etc...).


It is not the amount of allocated memory that counts here, but the
count of allocated blocks. The program post-exit code (in the OS)
doesn't have to discriminate between allocated and free blocks or
anything else, it just takes back the one big mess that the malloc
(and other) system has been divvying up during the run, and can be
very fast. You can follow most of it in my nmalloc code to which I
referred earlier.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
Nov 15 '05 #28

P: n/a
Lawrence Kirby wrote:
On Tue, 05 Jul 2005 15:54:07 -0700, Mark F. Haigh wrote:
...

Let's examine for a moment how one can come across a SIGSEGV:

1. Through undefined behavior leading to "an invalid access
to storage" (C99 7.14#3), although such conditions are not
required to be detected (C99 7.14#4).

2. As a result of calling raise (or other POSIX or system-
specific functions, e.g. kill).


...
Users of POSIX-compliant systems may be able to differentiate #1 from
#2 via the si_code member of siginfo_t.


What's the point? If something causes a SIGSEGV using raise or POSIX kill
then presumably it wants the code to act as if a segmentation fault (or
whatever on the system causes it) had occurred. If it wanted it to act in
some other way it would make more sense to use a different signal.


No point, really. Since the standard specifies different behaviors
depending on the source of the signal, I thought it may be useful to
point out how to differentiate the two on POSIX systems. YMMV.
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #29

P: n/a
Lawrence Kirby wrote:

<snip>

An example of a significant flaw in a RTOS would be failing to service
a timer interrupt within a specified time bound that causes too much
radiation to be released into a cancer patient's body.
That would certainly be bad; being a little late in turning off the
radiation is not good. It could be worse though. Failing completely at
that point due to running out of resources such as memory could be
literally fatal. Of course any sane system would have secondary safeguards
and checks against resources getting low, but when it comes to ensuring
that an action is performed successfully within a certain time, securing
the necessary resources is just as vital as scheduling, perhaps even more so.


Memory is nearly universally pre-allocated for critical paths.
Resource allocation errors are forced to initialization time. Stacks
are pre-allocated as well, and their sizes are based on worst case
scenario analysis multiplied by a large safety factor.
Note that RTOS code to handle programmer laziness or error is more code
to audit for RT issues and is an increase in footprint and overall
system complexity.
Not really; it is typically not difficult for the system to reclaim memory
for a terminated application, and it can take away potentially much larger
complexity from the application. It also makes the overall system MUCH
easier to validate for long-term stability.


Maybe in an ideal world. In reality, it is sometimes difficult to
reclaim memory in a safe way, as memory is shared amongst other
programs, ISRs, coprocessors, and other hardware. The NASA Mars probe
made it there and managed to function using such a design (VxWorks).
As a footnote, using a low-priority task to reclaim memory may never
actually run in a highly loaded system, where you're most likely to
need the extra memory!
In which case you've failed from a RT point of view anyway. Even if the
application has to release the resources it still needs to spend the CPU
cycles to do it and if they are all in use then something has been
starved. RT principles can be applied to resource reclamation by the OS
just as much as anything else. If the RT system has insufficient CPU
resources to meet the RT requirements of its components then it has failed.
It makes no difference to this where memory freeing occurs, it still has
to happen somewhere.


Usually malloc and free just supply chunks to special purpose
allocators that meet the necessary execution time bounds. You'd be
surprised how often people try to shoehorn things into low priority
tasks only to grapple with the sometimes bizarre effects of high loads
on such a design.
> For these systems, it's
> a design tradeoff, not a quality issue.

For a RTOS it is a fundamental quality issue.
No, it's a design tradeoff. Memory that is malloc-ed may be still in
use by different parts of the system. In many circumstances, the OS
does not know who is using what, and keeps itself out of the way for
safety's sake.


I agree that shared memory is different. While malloc() could be used to
allocate shared memory in an unprotected environment, it makes more sense
to have a separate mechanism so that the OS can tell the difference. Good
design should keep shared objects small in number and preferably in size.
They should not be treated like normal "local" objects.


Many people do not agree with you and would argue it's not the business
of a RTOS to do such bookkeeping. Applications can do that for
themselves.
When people write clean, conformant code that frees what it allocates,
it can be easy to port a program to a RTOS. When that's not done, it
can be a real PITA.


But the flaw that highlights is in the design of the RTOS. I can
understand this sort of thing for really small systems like embedded
systems on 8 bit processors where you have to cut corners, but there is
nothing in RT technology itself that warrants this.


It's only a flaw if your allocations do not match your deallocations.
If you can manage to do that, there's no real problem. Your disdain
for the aesthetics of such a system doesn't mean other people don't
find them useful.
Mark F. Haigh
mf*****@sbcglobal.net

Nov 15 '05 #30

This discussion thread is closed

Replies have been disabled for this discussion.