Bytes IT Community

When to check the return value of malloc

Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?

Rgds,
MJ

Jan 18 '08 #1
173 Replies


Marty James wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
No, always check.
OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.
No, always check.
So somewhere in between these extremes, there must be a point where
you stop ignoring malloc's return value, and start checking it.
No, always check.
Where do people draw this line?
Always check.
I guess it depends on the likely
system the program will be deployed on, but are there any good
rule-of-thumbs?
Sure, always check.


Brian
Jan 18 '08 #2

I *always* check malloc.
Why draw a line? Optimisation?
It's far too specific a reason to even talk about "drawing a line".
Do you draw other lines, like "at which point does a memory leak
really become bad?" or "when is a segmentation fault really that annoying
for the user?"

Sorry for the sarcasm, but I think the primary role of the computer
programmer is to be in control.
Jan 18 '08 #3

On Fri, 18 Jan 2008 15:21:46 -0600, Marty James wrote
(in article <sl*******************@nospam.invalid>):
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
Why?
OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.
Why can one fail, but not the other? I think you are making claims
based upon your perception of the statistical likelihood of a malloc
failure for a given size.
So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.
Or you just check it all the time and not pretend that you know more
about the malloc internal implementation than you actually do.
Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
I'd be very leery of any code where the line was drawn anywhere other
than "in all cases we check malloc return values".
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 18 '08 #4

Marty James wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
Ever heard of the final straw? Any call to malloc can fail.

--
Ian Collins.
Jan 18 '08 #5

On Jan 18, 1:21 pm, Marty James <m...@nospam.com> wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
Okay, that's our base case. So what is the inductive hypothesis?
It is this: if we don't have to worry about checking some N-byte
malloc, surely we don't have to care about a malloc of N + 1 bytes.
What's one byte? It often takes three just to make alignment.

So, by induction, we don't ever have to check the return value of
malloc.
Jan 18 '08 #6

Marty James wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
"Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?

--
Er*********@sun.com
Jan 18 '08 #7

On Jan 18, 2:35 pm, Paul Hsieh <websn...@gmail.com> wrote:
check the return because it may fail for *any* reason. About the only
time you could possibly get away with not checking is if you have to
know an awful lot about your system and then only accept a finite
fixed amount of malloc()s right when the system starts.
Or you know that your system overcommits virtual memory, so that
malloc keeps ``working'' even though your total allocations have
already exceeded core + swap.
Jan 18 '08 #8

Paul Hsieh <we******@gmail.com> writes:
[...]
The standards specified malloc() basically requires that you always
check the return because it may fail for *any* reason.
[...]
>
There is also absolutely no real benefit to *not* checking for the
success or failure of malloc().
[...]

I agree completely.

There is one "unreal" benefit: simplicity. The tricky part is
figuring out what the heck to do if malloc() fails.

There's an old saying: Never check for an error condition you don't
know how to handle.

But if you can't figure out what to do, you can always just terminate
the program. It's not necessarily the best thing you can do, but it's
the second simplest, and it's almost certainly better than the
simplest (ignoring the error).

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jan 18 '08 #9

jacob navia wrote:
Keith Thompson wrote:
>Paul Hsieh <we******@gmail.com> writes:
[...]
>>The standards specified malloc() basically requires that you always
check the return because it may fail for *any* reason.
[...]
>>There is also absolutely no real benefit to *not* checking for the
success or failure of malloc().
[...]

I agree completely.

There is one "unreal" benefit: simplicity. The tricky part is
figuring out what the heck to do if malloc() fails.

There's an old saying: Never check for an error condition you don't
know how to handle.

But if you can't figure out what to do, you can always just terminate
the program. It's not necessarily the best thing you can do, but it's
the second simplest, and it's almost certainly better than the
simplest (ignoring the error).

No, in many situations you should return an error.
How does that contradict what Keith said?

--
Ian Collins.
Jan 18 '08 #10

On Fri, 18 Jan 2008 16:38:24 -0600, Eric Sosman wrote
(in article <1200695900.259600@news1nwk>):
Marty James wrote:
>Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

"Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?
This thread was useful, now I know I never have to buy extra memory
again.

--
Randy Howard (2reply remove FOOBAR)

Jan 18 '08 #11

Marty James wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
Er, no...
OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.
Mhm.
So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.
The point at which your programme moves from being a trivial proof of
concept or quick hack, into being a real programme with some form of
worth to you, your customers or your colleagues.

IOW, _always_ check malloc in real programmes.

--
Mark McIntyre

CLC FAQ <http://c-faq.com/>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Jan 18 '08 #12

Marty James wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
That's complete nonsense which will probably get you fired some day,
as many others have already explained.
Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
The only time I ever fail to check the return value from malloc() is
if I change my mind about using the return value before I get around
to checking it:

a = malloc(sizeof(*a));
b = malloc(sizeof(*b)*num_b);
c = malloc(offsetof(c,flex_array)+num_c*sizeof(c->flex_array[0]);

if(a==NULL || b==NULL || c==NULL)
{
// error handling
}
else
{
// code using a, b, and c
}
free(a);
free(b);
free(c);

As you can see, the above code never bothers checking whether c is
null, if it turns out that either a or b is null.
Jan 18 '08 #13

jameskuy...@verizon.net wrote:
....
c = malloc(offsetof(c,flex_array)+num_c*sizeof(c->flex_array[0]);
That obviously wasn't actually compiled:

c = malloc(offsetof(flex_struct, flex_array) + num_c*sizeof(c->flex_array[0]));

with appropriate definitions implied for c and flex_struct.
Jan 19 '08 #14

In article <sl*******************@nospam.invalid>,
Marty James <ma**@nospam.com> wrote:
>Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
You're obviously a troll, but let's try to find some situations where
it would be ok to ignore malloc()'s return value. Then you can decide
whether you're in one of them.

Obviously if it doesn't matter that your program occasionally fails in
an obscure way or produces the wrong answer, then it's ok.

If your program will always be run on a system where dereferencing the
null pointer (and relevant offsets from it) produces an error, and you
don't mind getting "segmentation fault" instead of a more useful error
message, then it's ok.

And in these two cases, since it's true that small allocations will
fail less often than large ones, you can indeed weigh the
inconvenience against the likelihood of failure for a given size.

Also, if your program doesn't use so much memory that it could run out
of address space, and will always be run on a system where malloc()
never returns NULL for memory exhaustion (e.g. systems with overcommit
turned on), checking won't make any difference.

-- Richard
--
:wq
Jan 19 '08 #15

On Jan 18, 1:21 pm, Marty James <m...@nospam.com> wrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
The good rule of thumb is always check.

True story about an OS/2 application for fielding telephone messages:
While I was working as a subcontractor at a large corporation nearby
some 18 years ago or so, there was an OS/2 application that did
zillions of 16 byte allocations and frees. No big "gimmie a
megabyte!" demands at all. The machine had 4 MB {IIRC}, which at the
time was *cough* a lot of memory. Now, according to the statistics I
examined, the packet creation and packet deletion was at a sort of
homeostasis that should have never seen a failure. However, the
memory allocator for OS/2 had a bad habit of fragmenting memory in
some very bizarre ways if there were lots and lots of very rapid
malloc()/free() pairs. The end result is that those 16 byte
allocation requests would eventually start to fail. Since there was
no check for malloc() success in the code [I DIDN'T WRITE IT!] it
crashed heinously. My first stab at fixing it (after putting in
diagnostics to at least describe the problem before exiting) was to call
a non-standard "CompactMemory()" routine at regular intervals. At
first, it seemed to fix the problem but it led to other problems.
The system had to be near real time, and the compact memory call would
freeze it up momentarily and also, it would eventually start to fail
again anyway (even though it would run much longer before failure).
My final solution was to allocate most of available RAM into a single
block and then I wrote my own sub-allocator. It was actually very
simple because all the packets were the same size, and so I just
tagged them as free or in use with an array of bits.

So, even if you do not allocate any big memory hunks and even if you
know that all of the allocations should succeed, you should still
check them. Because things don't always behave the way that you know
that they should.

IMO-YMMV.
Jan 19 '08 #16

Marty James wrote:
>
I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a
filename or something, there's no point putting in a check on
the return value of malloc.
Oh? Try the following:

#include <stdio.h>
#include <stdlib.h>

#define SZ 40

int main(void) {
unsigned long count;
void *ptr;

count = 0;
while (ptr = malloc(SZ)) count++;
printf("Failed after %lu tries\n", count);
return 0;
}

(It may take a while - just increase SZ)
--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.

--
Posted via a free Usenet account from http://www.teranews.com

Jan 19 '08 #17

Randy Howard wrote:
Eric Sosman wrote
>Marty James wrote:
>>I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a
filename or something, there's no point putting in a check on
the return value of malloc.

"Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?

This thread was useful, now I know I never have to buy extra
memory again.
PROVIDED you malloc it in 20 byte chunks. Since the standard
specifies that freed memory be made available again, you must be
perfectly safe in allocating 4k by:

for (i = 0; i < 20; i++) a[i] = malloc(20);
for (i = 0; i < 20; i++) free(a[i]);
ptr = malloc(4000);

with suitable declarations for a, i, ptr, and all the needed
#includes. Learning is wunnerful.

--
[mail]: Chuck F (cbfalconer at maineline dot net)

Jan 19 '08 #18

On Jan 18, 7:39 pm, CBFalconer <cbfalco...@yahoo.com> wrote:
Randy Howard wrote:
Eric Sosman wrote
Marty James wrote:
>I was reflecting recently on malloc.
>Obviously, for tiny allocations like 20 bytes to strcpy a
filename or something, there's no point putting in a check on
the return value of malloc.
     "Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?
This thread was useful, now I know I never have to buy extra
memory again.

PROVIDED you malloc it in 20 byte chunks. Since the standard
specifies that freed memory be made available again, you must be
perfectly safe in allocating 4k by:

   for (i = 0; i < 20; i++) a[i] = malloc(20);
   for (i = 0; i < 20; i++) free(a[i]);
   ptr = malloc(4000);

with suitable declarations for a, i, ptr, and all the needed
#includes. Learning is wunnerful.
You might be perfectly safe to allocate (say) 64K according to the ISO
C Standard. But the other program that is running and has consumed
all but 19 free bytes before your program executes the first malloc()
doesn't know that.
Jan 19 '08 #19

On Jan 18, 9:41 pm, user923005 <dcor...@connx.com> wrote:
On Jan 18, 7:39 pm, CBFalconer <cbfalco...@yahoo.com> wrote:

Randy Howard wrote:
Eric Sosman wrote
>Marty James wrote:
>>I was reflecting recently on malloc.
>>Obviously, for tiny allocations like 20 bytes to strcpy a
>>filename or something, there's no point putting in a check on
>>the return value of malloc.
>     "Obviously," you can allocate an infinite amount of
>memory as long as you get it in 20-byte chunks? Did you
>used to work for Enron or something?
This thread was useful, now I know I never have to buy extra
memory again.
PROVIDED you malloc it in 20 byte chunks. Since the standard
specifies that freed memory be made available again, you must be
perfectly safe in allocating 4k by:
   for (i = 0; i < 20; i++) a[i] = malloc(20);
   for (i = 0; i < 20; i++) free(a[i]);
   ptr = malloc(4000);
with suitable declarations for a, i, ptr, and all the needed
#includes. Learning is wunnerful.

You might be perfectly safe to allocate (say) 64K according to the ISO
C Standard. But the other program that is running and has consumed
all but 19 free bytes before your program executes the first malloc()
doesn't know that.
Sorry, sarcasm detector switch was broken off, and laying on the
floor.
Jan 19 '08 #20

On Fri, 18 Jan 2008 23:41:49 -0600, user923005 wrote
(in article
<a1**********************************@z17g2000hsg. googlegroups.com>):
On Jan 18, 7:39 pm, CBFalconer <cbfalco...@yahoo.com> wrote:
>Randy Howard wrote:
>>Eric Sosman wrote
>>>Marty James wrote:
>>>>I was reflecting recently on malloc.
>>>>Obviously, for tiny allocations like 20 bytes to strcpy a
filename or something, there's no point putting in a check on
the return value of malloc.
>>>     "Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?
>>This thread was useful, now I know I never have to buy extra
memory again.

PROVIDED you malloc it in 20 byte chunks. Since the standard
specifies that freed memory be made available again, you must be
perfectly safe in allocating 4k by:

   for (i = 0; i < 20; i++) a[i] = malloc(20);
   for (i = 0; i < 20; i++) free(a[i]);
   ptr = malloc(4000);

with suitable declarations for a, i, ptr, and all the needed
#includes. Learning is wunnerful.

You might be perfectly safe to allocate (say) 64K according to the ISO
C Standard. But the other program that is running and has consumed
all but 19 free bytes before your program executes the first malloc()
doesn't know that.
Both of you need to get a sense of humor. :)

--
Randy Howard (2reply remove FOOBAR)

Jan 19 '08 #21


"Marty James" <ma**@nospam.com> wrote in message
news:sl*******************@nospam.invalid...
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
If checking is that much trouble then create a wrapper function around
malloc() that will always return a valid result. (The wrapper will check for
NULL results of malloc() and abort or run other exception-handling code.)
Then call that instead of malloc(). This way you dispense with checking every time.

Use this when allocation failure is (a) unimportant or (b) very important
(that the program cannot proceed). (Seems paradoxical I know.)

To take some other action when allocation fails, call malloc() in the
regular way.

Bart

Jan 19 '08 #22


"Marty James" <ma**@nospam.com> wrote in message
Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
Yes.
Imagine you've got 2GB installed and are allocating 20 bytes. The system is
stressed and programs crash or terminate for lack of memory once a day. Any
more than that, and no-one would tolerate it.
So the chance of the crash being caused by your allocation is 1/100,000,000,
or once every several hundred thousand years. The chance of the computer
breaking during this period is so much higher that there is in this case no
point checking the malloc().

It is elementary. When you control the quality of a part, and there are
costs - here increased complexity of code, thus maintenance costs - the
quality should be high enough that your part is unlikely to be the point of
failure, but no higher.

As others have pointed out, if your main allocation is in a loop, the
probabilities have to be adjusted accordingly.

You might be interested in xmalloc(), on my website, which gets round this
problem of too much error-handling code which will never be executed. For
all I have said, there is also a case for making programs which are correct.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Jan 19 '08 #23

Keith Thompson wrote:
There's an old saying: Never check for an error condition you don't
know how to handle.

But if you can't figure out what to do, you can always just terminate
the program. It's not necessarily the best thing you can do, but it's
the second simplest, and it's almost certainly better than the
simplest (ignoring the error).
Well, I don't usually check the result of a call such as
fprintf(stderr, "Can't frobnicate %s: %s\n", frob, strerror(errno)),
because I don't know what I should do if it failed, but I don't think that
just terminating the program would be a good idea (unless I were going to
terminate it right after the fprintf regardless of its success, that is).
--
Army1987 (Replace "NOSPAM" with "email")
Jan 19 '08 #24

Malcolm McLean wrote, On 19/01/08 10:47:
>
"Marty James" <ma**@nospam.com> wrote in message
>Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
Yes.
Imagine you've got 2GB installed and are allocating 20 bytes. The system
is stressed and programs crash or terminate for lack of memory once a
day. Any more than that, and no-one would tolerate it.
So the chance of the crash being caused by your allocation is
1/100,000,000, or once every several hundred thousand years. The chance
of the computer breaking during this period is so much higher that there
is in this case no point checking the malloc().
This is incredibly bad advice. It has also been pointed out to Malcolm
in the past that it is incredibly bad advice.

I run out of memory on my company notebook with 2GB of RAM. I know
people run out of memory on servers with far more than 2GB of RAM. In
fact, I don't think I've had a month when some piece of SW has not
reported being out of memory and provided a recovery mechanism.

<snip>
>You might be interested in xmalloc(), on my website, which gets round
this problem of too much error-handling code which will never be
executed. For all I have said, there is also a case for making programs
which are correct.
I would suggest looking very carefully at any malloc wrapper before
using it. You need to decide whether aborting on failure is appropriate
or some recovery strategy.
--
Flash Gordon
Jan 19 '08 #25

Malcolm McLean wrote:
You might be interested in xmalloc(), on my website[1], which gets round
this problem of too much error-handling code which will never be executed.
I took a look at it. Apart from being too complicated for most programs
(yes, some will actually be able to use the additional complexity), it has
one IMHO grave bug: it uses 'int' for the allocation size. Use size_t,
which is the correct type, or do you check if the result of 'strlen()' can
be safely converted to an 'int' before calling your 'xmalloc()'?

- sorry, no chocolate for you -

Uli

[1] http://www.personal.leeds.ac.uk/~bgy...c/xmalloc.html

Jan 19 '08 #26


"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
This is incredibly bad advice. It has also been pointed out to Malcolm in
the past that it is incredibly bad advice.
Advice is a dangerous gift, even from the wise to the wise.
>
I run out of memory on my company notebook with 2GB of RAM. I know people
run out of memory on servers with far more than 2GB of RAM. In fact, I
don't think I've had a month when some piece of SW has not reported being
out of memory and provided a recovery mechanism.
Yes, but not very often on a single allocation of 20 bytes. That will
happen once in every 100 000 000 months on such a machine.
>
>You might be interested in xmalloc(), on my website, which gets round
this problem of too much error-handling code which will never be
executed. For all I have said, there is also a case for making programs
which are correct.

I would suggest looking very carefully at any malloc wrapper before using
it. You need to decide whether aborting on failure is appropriate or some
recovery strategy.
xmalloc() never returns null, so there is no need to provide error-checking.
Basically the idea is to get clutter which will never be executed out of
functions, so that normal logic flow stands out more clearly.
Obviously you cannot guarantee an infinite supply of memory. So by default
xmalloc() aborts with an error message. That's because there is not much
else you can do within the constraints of ANSI C.
However the failure handler can be reset, and I have one for X which
requests that the user terminate some other application. Again, that isn't
appropriate for everything - a server, for instance, could release memory
from an emergency store, and then put itself into "critical please attend to
me mode".

Jan 19 '08 #27


"Ulrich Eckhardt" <do******@knuut.de> wrote in message
Malcolm McLean wrote:
>You might be interested in xmalloc(), on my website[1], which gets round
this problem of too much error-handling code which will never be
executed.

I took a look at it. Apart from being too complicated for most programs
(yes, some will actually be able to use the additional complexity), it has
one IMHO grave bug: it uses 'int' for the allocation size. Use size_t,
which is the correct type, or do you check if the result of 'strlen()' can
be safely converted to an 'int' before calling your 'xmalloc()'?

- sorry, no chocolate for you -
If the amount of memory won't fit into an int you probably shouldn't be
calling the function. It is for small amounts of memory, and is compatible
with the original C specification of malloc().

Also, using a signed value means that we can pick up many bugs. If garbage
is passed to malloc() then 50% of the time the argument will be negative.
Assuming randomness, if the function is called more than once or twice, you
practically have a guarantee of catching the bug.

I can't do anything about the size_t returned from strlen. It is most
unlikely that you'll have strings longer than the range of an int, and a
lot of people complain about the use of high-bit types - that's the
objection, and it is a legitimate one, made most often to the campaign
for 64-bit ints.


Jan 19 '08 #28

user923005 wrote:
CBFalconer <cbfalco...@yahoo.com> wrote:
>Randy Howard wrote:
>>Eric Sosman wrote
Marty James wrote:
>>>>I was reflecting recently on malloc.
>>>>Obviously, for tiny allocations like 20 bytes to strcpy a
filename or something, there's no point putting in a check on
the return value of malloc.
>>> "Obviously," you can allocate an infinite amount of
memory as long as you get it in 20-byte chunks? Did you
used to work for Enron or something?
>>This thread was useful, now I know I never have to buy extra
memory again.

PROVIDED you malloc it in 20 byte chunks. Since the standard
specifies that freed memory be made available again, you must be
perfectly safe in allocating 4k by:

for (i = 0; i < 20; i++) a[i] = malloc(20);
for (i = 0; i < 20; i++) free(a[i]);
ptr = malloc(4000);

with suitable declarations for a, i, ptr, and all the needed
#includes. Learning is wunnerful.

You might be perfectly safe to allocate (say) 64K according to
the ISO C Standard. But the other program that is running and
has consumed all but 19 free bytes before your program executes
the first malloc() doesn't know that.
No, you haven't been following. The OP postulated that an allocation
of 20 bytes was safe, and needed no checking. He wanted to know the
level that needed checking. I was pointing out that his original
assumption obviated the need for ANY checking of malloc.

--
[mail]: Chuck F (cbfalconer at maineline dot net)

Jan 19 '08 #29

Malcolm McLean wrote, On 19/01/08 13:44:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>This is incredibly bad advice. It has also been pointed out to Malcolm
in the past that it is incredibly bad advice.
Advice is a dangerous gift, even from the wise to the wise.
Irrelevant. Your advice was bad for anyone, whether the recipient is
wise or not.
>I run out of memory on my company notebook with 2GB of RAM. I know
people run out of memory on servers with far more than 2GB of RAM. In
fact, I don't think I've had a month when some piece of SW has not
reported being out of memory and provided a recovery mechanism.
Yes, but not very often on a single allocation of 20 bytes. That will
happen once in every 100 000 000 months on such a machine.
If my machine is out of memory then an allocation request for ONE byte
will fail, that it is other programs using up all of the memory is
irrelevant. I run out of memory on my machine regularly as stated.
Fortunately even MS seems to have a better understanding of this than you.
>>You might be interested in xmalloc(), on my website, which gets
round this problem of too much error-handling code which will never
be executed. For all I have said, there is also a case for making
programs which are correct.

I would suggest looking very carefully at any malloc wrapper before
using it. You need to decide whether aborting on failure is
appropriate or some recovery strategy.
xmalloc() never returns null, so there is no need to provide
<snip>

So are you claiming that the OP should not look carefully at the malloc
wrapper and deciding on the appropriate strategy for his/her
application? If not I don't see the point of that message.
--
Flash Gordon
Jan 19 '08 #30

P: n/a
Malcolm McLean wrote:
"Ulrich Eckhardt" <do******@knuut.dewrote in message
>Malcolm McLean wrote:
>>You might be interested in xmalloc(), on my website[1], which gets
round this problem of too much error-handling code which will never be
executed.

I took a look at it. Apart from being too complicated for most programs
(yes, some will actually be able to use the additional complexity), it
has one IMHO grave bug: it uses 'int' for the allocation size. Use
size_t, which is the correct type, or do you check if the result of
'strlen()' can be safely converted to an 'int' before calling your
'xmalloc()'?

- sorry, no chocolate for you -
If the amount of memory won't fit into an int you probably shouldn't be
calling the function.
Why not? I would agree if I limited myself to ILP32 systems, but those are
more and more phased out. Even there, several OSs support 3GiB of virtual
address space for userspace, which doesn't fit into said int type.

Also, using int is way harder to make work correctly. Assume you have some
user input that tells you how many records they want created. You then
simply write

unsigned user_input = get_user_input();
int s = user_input * sizeof(record_t);
if(s<0)
size_too_large();

...and you have lost. Newer GCCs will actually be able to see here that two
unsigned values are multiplied, which will create another unsigned value,
so the check whether the result is less than zero is simply omitted! The
simple rationale for this behaviour is that an overflow already causes
undefined behaviour, so all bets are off.
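The wrap-around hazard described above can be avoided entirely by staying in unsigned arithmetic. A minimal sketch (the name alloc_array is illustrative, not a standard function):

```c
#include <stddef.h>
#include <stdlib.h>

/* Allocate count elements of elem_size bytes each, refusing requests
   whose byte total would wrap around size_t. All arithmetic here is
   unsigned, so there is no undefined signed overflow for the compiler
   to optimise away. */
void *alloc_array(size_t count, size_t elem_size)
{
    if (elem_size != 0 && count > (size_t)-1 / elem_size)
        return NULL;                /* count * elem_size would wrap */
    return malloc(count * elem_size);
}
```

The division-based check is done before the multiplication, so the product is computed only when it is known to fit.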

It is for small amounts of memory, and is compatible
with the original C specification of malloc().
IIRC, malloc() uses the following:

void* malloc(size_t);

...which is _not_ compatible.
Also, using a signed value means that we can pick up many bugs.
You created a whole system to catch and even handle malloc errors, including
a user-definable callback. Why not make the largest single allocation a
customisable value too? Then, you could well use size_t for correctness but
catch nonsense values (which depend on the application context) at the same
time.
If garbage
is passed to malloc() then 50% of the time the argument will be negative.
Assuming randomness, if the function is called more than once or twice,
you practically have a guarantee of catching the bug.
Using the above approach, you can forbid e.g. anything beyond 100KiB, which
makes sense in many contexts and will catch way more errors! Seriously, a
customisable check for valid allocation sizes lets you have the cake _AND_
eat it. ;)

Other than that, I'm afraid you won't get around size_t. Everything from
strlen() over malloc() to sizeof work on size_t. Compiling code that mixes
size_t with int is barely possible without proper conversions or lots of
warnings. I would chose neither of those.

Uli

Jan 19 '08 #31

P: n/a
Eric Sosman wrote:
Ulrich Eckhardt wrote:
> void* xmalloc(size_t s) {
void* res = malloc(s);
if(!res) {
fprintf( stderr, "malloc() failed\n");
exit(EXIT_FAILURE);
}
return res;
}

I do this, too, on occasion. Two things to note, though:

1) The strategy of having a low-level routine decide the
fate of the entire program seldom works except for smallish
programs.
Yes, true. This code is definitely not general purpose library code! In fact
a library should never invoke exit() on its own but always signal the error
to the calling code.
2) The test should be `if (!res && s)'.
Hmm, maybe. Technically you are right, as the behaviour would better match
that of malloc(). Practically though, I can't remember a case where I
actually used a zero allocation size for anything sensible. To be honest, I
don't even know what the use of a zero-size allocation would be.
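For what it's worth, the Standard leaves malloc(0) implementation-defined: it may return NULL or a unique pointer that can be passed to free(). A tiny probe (purely illustrative):

```c
#include <stdlib.h>

/* Returns 1 if this implementation's malloc(0) yields NULL, 0 if it
   yields a unique freeable pointer; both behaviours are conforming. */
int malloc_zero_returns_null(void)
{
    void *p = malloc(0);
    int is_null = (p == NULL);
    free(p);            /* free(NULL) is a no-op, so always safe */
    return is_null;
}
```

This is exactly why a wrapper that treats NULL as failure needs the `&& s` qualification: on some platforms a successful zero-byte request looks like a failure.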

Uli
Jan 19 '08 #32

P: n/a

"Ulrich Eckhardt" <do******@knuut.dewrote in message
Malcolm McLean wrote:
unsigned user_input = get_user_input();
int s = user_input * sizeof(record_t);
if(s<0)
size_too_large();
I'd want

int user_input = get_user_input();
/* if I was sensible, sanity check here, but this is a sloppy program */
int s = user_input * sizeof(record_t);
/* at this point we've got an overflow, so compiler issues "arithmetic
overflow" and terminates. */

Unfortunately we don't get this yet. Almost no compilers will handle overflow
nicely. However if the result is negative, which is a sporting chance, at
least we can pick it up in the call to xmalloc(). If we are in a loop with
essentially random values being typed in by the user, the chance of a negative
becomes statistically certain.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #33

P: n/a

"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
1) The strategy of having a low-level routine decide the
fate of the entire program seldom works except for smallish
programs. Once the program gets sophisticated enough to be
maintaining a significant amount of state, it may well want
to do things like writing the internal state to a file before
exiting -- but if xmalloc() or xfopen() or something has
unilaterally decided to exit on its own initiative, Too Bad.
If you are calling malloc() then usually it means that the state is currently
in a corrupt or partially constructed situation. So you have to handle that
on the dump. Which means that the whole program has to be written in a very
careful, sophisticated manner.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #34

P: n/a
Malcolm McLean wrote, On 19/01/08 15:22:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>Malcolm McLean wrote, On 19/01/08 13:51:
<snip>
Remember, this will almost never happen - once in a trillion years of
operation for the example you described, under rather different
assumptions, just enough for a few users to experience it occasionally.
IT HAPPENS REGULARLY!

Just because you don't stress your machine doesn't mean others don't.
For those of us that make machine work THEY RUN OUT OF MEMORY. When
there is NO memory left, that mean there is NO memory left. That HAPPENS
REGULARLY and means that REGULARLY malloc FAILS.

Can you understand that now? I REGULARLY DO NOT HAVE THE FREE MEMORY TO
ALLOCATE ONE BYTE.

At least in this thread you have stopped recommending simply ignoring
out of memory conditions, which was where I started pointing out how bad
your advice was.
>It means again that your function is NOT a general purpose malloc
wrapper and not suitable for use by a lot of people.
No. It is suitable for when custom error-handling code adds more cost
<snip>

No, it is not suitable for anyone that does not want YOUR arbitrarily
introduced restrictions.
--
Flash Gordon
Jan 19 '08 #35

P: n/a
On 18 Jan, 21:21, Marty James <m...@nospam.comwrote:
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.
As others have said, I'd recommend always testing.

But - if the 20 bytes is only needed temporarily, and will be finished
with before you next need 20 bytes - why not just allocate one buffer
at the beginning of the program and use that each time? That way you
would only need one malloc - or, for such a small size, use an array.
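The one-buffer idea can be sketched like this (the scratch buffer and the FILENAME_MAX bound are illustrative choices):

```c
#include <stdio.h>    /* FILENAME_MAX */
#include <string.h>

/* One static buffer reused for every short-lived filename copy,
   avoiding repeated tiny mallocs. Not reentrant, by design. */
static char scratch[FILENAME_MAX];

const char *copy_filename(const char *src)
{
    if (strlen(src) >= sizeof scratch)
        return NULL;            /* caller must handle long names */
    strcpy(scratch, src);
    return scratch;
}
```

The trade-off is that each call overwrites the previous result, so this only works when the copy really is finished with before the next call.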

Hope this helps.
Paul.
Jan 19 '08 #36

P: n/a
Malcolm McLean wrote:
>
"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
> 1) The strategy of having a low-level routine decide the
fate of the entire program seldom works except for smallish
programs. Once the program gets sophisticated enough to be
maintaining a significant amount of state, it may well want
to do things like writing the internal state to a file before
exiting -- but if xmalloc() or xfopen() or something has
unilaterally decided to exit on its own initiative, Too Bad.
If you are calling malloc() then usually it means that the state is
currently in a corrupt or partially constructed situation. So you have
to handle that on the dump. Which means that the whole program has to be
written in a very careful, sophisticated manner.
Writing carefully is a Good Thing, certainly. But I take
issue with the notion that doing so is "sophisticated;" going
further back, I reject the "corrupt" piece. Observe that the
memory that wasn't allocated is not yet linked into any of the
program's data structures (how could it be?), so the state
dumper or state repairer or whatever will not encounter it. If
you're extremely sloppy there might be an uninitialized pointer
somewhere, but that's the fault of the programmer and not of the
malloc() failure. More typically there's just a NULL somewhere
in the data structure, and the state massager knows enough (ought
to know enough) not to follow NULLs.

--
Eric Sosman
es*****@ieee-dot-org.invalid
Jan 19 '08 #37

P: n/a

"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
Malcolm McLean wrote:
>>
"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
>> 1) The strategy of having a low-level routine decide the
fate of the entire program seldom works except for smallish
programs. Once the program gets sophisticated enough to be
maintaining a significant amount of state, it may well want
to do things like writing the internal state to a file before
exiting -- but if xmalloc() or xfopen() or something has
unilaterally decided to exit on its own initiative, Too Bad.
If you are calling malloc() then usually it means that the state is
currently in a corrupt or partially constructed situation. So you have to
handle that on the dump. Which means that the whole program has to be
written in a very careful, sophisticated manner.

Writing carefully is a Good Thing, certainly. But I take
issue with the notion that doing so is "sophisticated;" going
further back, I reject the "corrupt" piece. Observe that the
memory that wasn't allocated is not yet linked into any of the
program's data structures (how could it be?), so the state
dumper or state repairer or whatever will not encounter it. If
you're extremely sloppy there might be an uninitialized pointer
somewhere, but that's the fault of the programmer and not of the
malloc() failure. More typically there's just a NULL somewhere
in the data structure, and the state massager knows enough (ought
to know enough) not to follow NULLs.
Say we've got this, not too contrived situation

typedef struct
{
int Nemployees;
char **names;
float *salaries;
} Workforce;

when constructing a Workforce, we set up the names array correctly, and
malloc() fails on the salaries.
We've now got to be pretty careful. Nemployees is needed to destroy the
names correctly, but salaries is null.

It can be negotiated successfully, but remember how hard this code is to
test. We've got to test for several patterns of allocations failing and the
recovery code saving the state properly and destroying the object properly.
It is also very easy to forget to initialise the salaries member to null -
you're initialising it on the malloc, after all. However should a name fail
to allocate, you're left with a garbage pointer.
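One defensive shape for the constructor above (a sketch; workforce_create is an illustrative name, and calloc() is used so names starts out all-NULL, letting the error path free() everything unconditionally):

```c
#include <stdlib.h>

typedef struct
{
    int Nemployees;
    char **names;
    float *salaries;
} Workforce;

/* Every pointer is either valid or NULL before any test, so the
   error path can call free() on all of them; free(NULL) is a no-op. */
Workforce *workforce_create(int n)
{
    Workforce *wf;
    if (n <= 0)
        return NULL;
    wf = malloc(sizeof *wf);
    if (wf == NULL)
        return NULL;
    wf->Nemployees = n;
    wf->names = calloc((size_t)n, sizeof *wf->names);   /* all NULL */
    wf->salaries = malloc((size_t)n * sizeof *wf->salaries);
    if (wf->names == NULL || wf->salaries == NULL) {
        free(wf->names);
        free(wf->salaries);
        free(wf);
        return NULL;
    }
    return wf;
}
```

Because the names slots begin as NULLs, a later kill function can loop over all Nemployees entries and free() each one blindly, even if name allocation stopped partway.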

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #38

P: n/a
On Sat, 19 Jan 2008 09:32:03 -0600, Malcolm McLean wrote
(in article <F8*********************@bt.com>):
>
"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
>1) The strategy of having a low-level routine decide the
fate of the entire program seldom works except for smallish
programs. Once the program gets sophisticated enough to be
maintaining a significant amount of state, it may well want
to do things like writing the internal state to a file before
exiting -- but if xmalloc() or xfopen() or something has
unilaterally decided to exit on its own initiative, Too Bad.
If you are caling malloc() then usually it means that the state is currently
in a corrupt or partially constructed situation. So you have to handle that
on the dump. Which means that the whole program has to be written in a very
careful, sophisticated manner.
BTW, the xmalloc() you referenced from your website earlier uses int
instead of size_t for the size. Is that intentional? If so, why?
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 19 '08 #39

P: n/a
On Sat, 19 Jan 2008 07:51:56 -0600, Malcolm McLean wrote
(in article <r6******************************@bt.com>):
>
"Ulrich Eckhardt" <do******@knuut.dewrote in message
>Malcolm McLean wrote:
>>You might be interested in xmalloc(), on my website[1], which gets round
this problem of too much error-handling code which will never be
executed.

I took a look at it. Apart from being too complicated for most programs
(yes, some will actually be able to use the additional complexity), it has
one IMHO grave bug: it uses 'int' for the allocation size. Use size_t,
which is the correct type, or do you check if the result of 'strlen()' can
be safely converted to an 'int' before calling your 'xmalloc()'?

- sorry, no chocolate for you -
If the amount of memory won't fit into an int you probably shouldn't be
calling the function. It is for small amounts of memory, and is compatible
with the original C specification of malloc().
I asked a similar question before I read this article. Feel free to
ignore the other one, I'd cancel it, but that never works anyway.

I completely disagree with the above, btw.
Also, using a signed value means that we can pick up many bugs. If garbage
is passed to malloc() then 50% of the time the argument will be negative.
No, if a very large (and legal) request is passed, you'll think it's
garbage when it is not. That's not a helper function, that's a
disaster.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 19 '08 #40

P: n/a
"Malcolm McLean" <re*******@btinternet.comwrites:
[...]
Imagine you've got 2GB installed and are allocating 20 bytes. The
system is stressed and programs crash or terminate for lack of memory
once a day. Any more than that, and no-one would tolerate it.
So the chance of the crash being caused by your allocation is 1/100,000,000,
or once every several hundred thousand years. The chance of the
computer breaking during this period is so so much higher, there is in
this case no point checking the malloc().

It is elementary. When you control the quality of a part, and there
are costs - here increased complexity of code, thus maintenance costs
- the quality should be high enough that your part is unlikely to be
the point of failure, but no higher.
[...]

How difficult is it to compute the probability that a single call to
malloc() will fail? How difficult is it to check the result? What
are the "maintenance costs" of ensuring that your probability
estimates remain correct as the program evolves? Compare and
contrast.

Just check the result of every single call to malloc(). If you can't
think of a more sensible way to handle failure, just abort the
program. Use a wrapper if you like; call it "malloc_or_die()". There
is no excuse for not checking the result of malloc(), except *maybe*
in a quick throwaway program.

(Please note that I'm not generally advocating aborting the program on
a malloc() failure; I'm just saying that it's better than ignoring
it.)
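A minimal malloc_or_die() along the lines suggested (the diagnostic text is illustrative; whether to abort at all is the caller's policy decision, as the parenthetical above notes):

```c
#include <stdio.h>
#include <stdlib.h>

/* Check every allocation; on failure, print a diagnostic and abort.
   Crude, but strictly better than dereferencing a NULL return. */
void *malloc_or_die(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && n != 0) {
        fprintf(stderr, "fatal: malloc(%zu) failed\n", n);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

The `n != 0` qualification allows for implementations where malloc(0) legitimately returns NULL.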

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jan 19 '08 #41

P: n/a

"Randy Howard" <ra*********@FOOverizonBAR.netwrote
BTW, they xmalloc() you referenced from your website earlier uses int
instead of size_t for the size. Is that intentional? If so, why?
Intentional. To emphasise that the function is intended for small
allocations that cannot fail, for some value of "cannot".
If you expect the allocation to fail, you shouldn't be calling xmalloc(),
because its strategy is to repeatedly nag until memory becomes available
(assuming you don't terminate). If the memory simply isn't available, this
isn't very sensible.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #42

P: n/a

"Keith Thompson" <ks***@mib.orgwrote in message
"Malcolm McLean" <re*******@btinternet.comwrites:

How difficult is it to compute the probability that a single call to
malloc() will fail? How difficult is it to check the result? What
are the "maintenance costs" of ensuring that your probability
estimates remain correct as the program evolves? Compare and
contrast.

Just check the result of every single call to malloc(). If you can't
think of a more sensible way to handle failure, just abort the
program. Use a wrapper if you like; call it "malloc_or_die()". There
is no excuse for not checking the result of malloc(), except *maybe*
in a quick throwaway program.

(Please note that I'm not generally advocating aborting the program on
a malloc() failure; I'm just saying that it's better than ignoring
it.)
Typically something between 33% and 50% of code will be there to handle
malloc() failures, in constructor-like C functions. If malloc() fails they
destroy the partially-constructed object and return null to caller.
Typically caller has to simply terminate, or destroy itself and return null
to a higher level, which terminates.

So xmalloc() is a worthwhile simplifier. But it is not a good strategy for
dealing with out of memory conditions. It's simply the best strategy that I
can think of that can be implemented in conforming C. When the
error-handling code is called only once in a trillion years or so of
operation, obviously we are not too bothered about ideal operation. Howevewr
you cannot be expected to do this calaculation all the time. The heuristic
is "is this aloocation likely to be large enough to have a realistic chance
of failure?" If the answer is yes, call malloc(), check the result, and
treat null as one o fthe normal execution paths of the program. Otherwise
call xmalloc(), and just tolerate the sub-optimal processing.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #43

P: n/a
On Sat, 19 Jan 2008 15:11:37 -0600, Malcolm McLean wrote
(in article <j_******************************@bt.com>):
>
"Randy Howard" <ra*********@FOOverizonBAR.netwrote
>BTW, the xmalloc() you referenced from your website earlier uses int
instead of size_t for the size. Is that intentional? If so, why?
Intentional. To emphasise that thje function is intended for small
allocations that cannot fail, for some value of "cannot".
I never assume that a given malloc can not fail.
If you expect the allocation to fail, you shouldn't be calling xmalloc(),
Your version, I agree completely.
because its strategy is to repeatedly nag until memory becomes available
(assuming you don't terminate). If the memory simply isn't avialable, this
isn't very sensible.
It's also a strategy that will rarely provide useful results.

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 19 '08 #44

P: n/a

"Randy Howard" <ra*********@FOOverizonBAR.netwrote in message
>because its strategy is to repeatedly nag until memory becomes available
(assuming you don't terminate). If the memory simply isn't available,
this
isn't very sensible.

It's also a strategy that will rarely provide useful results.
I don't think that's true. The system fails to deliver 20 bytes. A window
pops up on screen saying "program Fred needs more memory to continue, please
shut down something else". User shuts down his Internet Explorer, which is
in the background doing nothing much, and program Fred continues.

I know that modern OSes tend to warn the user long before this happens, and
Linux has a dangerous policy of overcommitment. But the basic idea is
reasonable.

Then xmalloc() is by default a "malloc or die". Which has a kind of utility.
If you want "malloc or save and die", you just pass in the save function.

If you've got better semantics for xmalloc, then by all means share them. I
don't think it's an ideal solution at all. It's the least bad solution to
the problem of trivial requests failing I've been able to devise so far.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #45

P: n/a
Malcolm McLean wrote:
>
Typically something between 33% to 50% of code will be there to handle
malloc() failures, in constructor-like C functions.
Please cite the research that measured this surprising figure.

--
Eric Sosman
es*****@ieee-dot-org.invalid
Jan 19 '08 #46

P: n/a

"Eric Sosman" <es*****@ieee-dot-org.invalidwrote in message
Malcolm McLean wrote:
>>
Typically something between 33% and 50% of code will be there to handle
malloc() failures, in constructor-like C functions.

Please cite the research that measured this surprising figure.
Just check my website.
Virtually all the files are written in "object style". That is to say, there
is a function with the same name as the public structure, though in lower
case, and a corresponding kill function. There are also opaque method
functions which manipulate the structures.

Count the number of lines that would not be needed if malloc() could be
guaranteed never to fail. The answer is about 33%-50%, within the
constructor functions. You've got to check the allocation; mostly I reduce
complexity by adding a goto which then calls the kill function. But even so
nulls need to be put in so that the kill function doesn't try to pass
garbage pointers to free().
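The goto-to-cleanup style described reads roughly like this (a sketch; Thing and its members are stand-ins, not code from the website):

```c
#include <stdlib.h>
#include <string.h>

typedef struct
{
    char *name;
    double *values;
} Thing;

/* Constructor in the style described: set every pointer to NULL up
   front, so a single cleanup label can free() whatever was actually
   allocated without ever seeing a garbage pointer. */
Thing *thing_create(const char *name, size_t nvalues)
{
    Thing *t = malloc(sizeof *t);
    if (t == NULL)
        return NULL;
    t->name = NULL;             /* so cleanup never sees garbage */
    t->values = NULL;

    t->name = malloc(strlen(name) + 1);
    if (t->name == NULL)
        goto error;
    strcpy(t->name, name);

    t->values = malloc(nvalues * sizeof *t->values);
    if (t->values == NULL)
        goto error;
    return t;

error:
    free(t->name);              /* free(NULL) is harmless */
    free(t->values);
    free(t);
    return NULL;
}
```

Counting lines in a constructor like this gives a feel for the figure claimed: the NULL initialisations, the two checks, and the cleanup label are all there purely to handle allocation failure.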

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jan 19 '08 #47

P: n/a

"Marty James" <ma**@nospam.comschreef in bericht
news:sl*******************@nospam.invalid...
Howdy,

I was reflecting recently on malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename or
something, there's no point putting in a check on the return value of
malloc.

OTOH, if you're allocating a gigabyte for a large array, this might
fail, so you should definitely check for a NULL return.

So somewhere in between these extremes, there must be a point where you
stop ignoring malloc's return value, and start checking it.

Where do people draw this line? I guess it depends on the likely system
the program will be deployed on, but are there any good rule-of-thumbs?
When NOT to check for NULL

1. You hate your current job
2. You just heard somebody close to you died
3. Your girl or boyfriend just broke up with you
4. You're drunk or otherwise intoxicated
5. You are currently getting a blowjob
6. It's almost time to go home and you don't want to miss the next episode of
As the world turns

Jan 19 '08 #48

P: n/a
On Sat, 19 Jan 2008 15:33:52 -0600, Malcolm McLean wrote
(in article <Dd******************************@bt.com>):
>
"Randy Howard" <ra*********@FOOverizonBAR.netwrote in message
>>because its strategy is to repeatedly nag until memory becomes available
(assuming you don't terminate). If the memory simply isn't avialable,
this
isn't very sensible.

It's also a strategy that will rarely provide useful results.
I don't think that's true. The system fails to deliver 20 bytes. A window
pops up on screen saying "program Fred needs more memory to continue, please
shut down something else". User shuts down his Internet Explorer, which is
in the background doing nothing much, and program Fred continues.
If you can't malloc 20 bytes, odds are the message may never even
appear. :)
I know that modern OSes tend to warn the user long before this happens, and
Linux has a dangerous policy of overcommitment. But the basic idea is
reasonable.
That policy is configurable.

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Jan 19 '08 #49

P: n/a
CBFalconer wrote:
Eric Sosman wrote:
> 2) The test should be `if (!res && s)'.

That is more easily handled by:

void *xmalloc(size_t s) {
void *res;

if (!s) s++;
Why?

--
Ian Collins.
Jan 20 '08 #50

173 Replies

This discussion thread is closed

Replies have been disabled for this discussion.