Bytes | Developer Community

A solution for the allocation failures problem

1:
It is not possible to check EVERY malloc result within complex software.

2:
The reasonable solution (use a garbage collector) is not possible for
whatever reasons.

3:
A solution like the one proposed by Mr McLean (aborting) is not
possible for software quality reasons. The program must decide
if it is possible to just abort() or not.

Solution:

1) At program start, allocate a big buffer that is not used
elsewhere in the program. This big buffer will be freed when
a memory exhaustion situation arises, to give enough memory
to the error reporting routines to close files, or otherwise
do housekeeping chores.

2) xmalloc()

static void *BigUnusedBuffer;
static int (*mallocfailedHandler)(size_t);

void *xmalloc(size_t nbytes)
{
    void *r;
restart:
    r = malloc(nbytes);
    if (r)
        return r;
    // Memory exhaustion situation.
    // Release some memory to the malloc/free system.
    if (BigUnusedBuffer) {
        free(BigUnusedBuffer);
        BigUnusedBuffer = NULL;
    }
    if (mallocfailedHandler == NULL) {
        // The handler has not been set. This means
        // this application does not care about this
        // situation. We exit.
        fprintf(stderr,
                "Allocation failure of %zu bytes\n",
                nbytes);
        fprintf(stderr, "Program exit\n");
        exit(EXIT_FAILURE);
    }
    // The malloc handler has been set. Call it.
    if (mallocfailedHandler(nbytes))
        goto restart;
    // The handler failed to solve the problem.
    // Exit without any messages.
    exit(EXIT_FAILURE);
}

4:
Using the above solution the application can abort if needed, or
make a long jump to a recovery point, where the program can continue.

The recovery handler is supposed to free memory and reallocate the
BigUnusedBuffer, which has been set to NULL.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Jan 29 '08

"CBFalconer" <cb********@yahoo.com> wrote in message
Due to my natural intransigence, when machines 'demand' I tend to
ignore them. This has been known to lead to program failure. :-)
If the user says "it is better that this program should terminate than be
given 64 bytes more of memory" then that is his rightful province.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Feb 1 '08 #51

"Paul Hsieh" <we******@gmail.com> wrote in message
The development cost for *using* Bstrlib is about 15 minutes to
download, install, try a few examples, and basically understand it.
It did take me a while to develop it, but that's only because people
before just didn't quite have the insight to see the importance of
implementing Bstrlib the way it is. And of course, its development
time invested is amortized for all future projects I make that use
Bstrlib.
That's the development cost for using Bstrlib in a hobby environment.

In a corporate environment things aren't usually so easy. Whilst I wouldn't
discourage bstrlib - I even recommended it for non-trivial string processing
applications a thread back - the costs of deciding to accept it into the
corporate codebase are much higher than that.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Feb 1 '08 #52

"Bartc" <bc@freeuk.com> wrote in message
Those who want to dutifully check every one of thousands of allocations
in-place and have code that can unwind all the way back to start of
execution (is that even possible?) are to be commended, but that seems the
wrong approach to me.
I think usually the wrong approach. Sometimes development costs and time
taken to test do not matter, for instance if the software is to control a
spaceship. I got a rather silly reply from Kelsey to this one. Clearly if
costs are beginning to dominate overall project costs then you'll have to
keep programming costs down, but beyond that level, NASA would rather have
perfect software than cheap software, or software delivered now.

Secondly, if the program is trivial, then unwinding the stack will also be
trivial. malloc wrappers and/or exceptions don't gain you much here.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Feb 1 '08 #53
"Malcolm McLean" <re*******@btinternet.com> writes:
[...]
xmalloc() takes an integer as an argument. Whilst C lays no
requirements on the size of an integer, it can accept negative
arguments and thus detect a bugged program.
malloc() also takes an integer as an argument.

Please learn the difference between "int" and "integer".

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Feb 2 '08 #54
On Thu, 31 Jan 2008 19:13:43 -0500, CBFalconer wrote:
Malcolm McLean wrote:
>"Keith Thompson" <ks***@mib.org> wrote in message
>>1. Ignore it and keep going (dangerous).
>>2. Use an allocator that immediately aborts the program on failure.
>>3. Check for failure on each allocation. On failure:
>>   3a. Immediately abort the program (equivalent to 2).
>>   3b. Clean up and abort the program.
>>   3c. Clean up and continue processing (this can be difficult, but
>>       it's important if you want the program to be robust).
>>4. If the language supports exceptions (C doesn't), catch the error
>>   at whatever level is appropriate, not necessarily immediately
>>   after the attempted allocation. Perform appropriate cleanup and
>>   recovery in the exception handler.
>>5. Demand user intervention to supply the required memory.

Due to my natural intransigence, when machines 'demand' I tend to ignore
them. This has been known to lead to program failure. :-)
There's also the issue of "what user?"

If the app is a DB server and it cannot allocate resources for, say, a
select, does it require that the client side pop up an error saying
"Please log onto the server machine and free up resources"? How can it
require this? Or maybe it does so on the server itself, popping up a
dialog box - which nobody looks at, because it's a server, quite possibly
headless.

'Course, that'd be even more special. 200 users on a DB server, all with
allocated resources which worked, and user 201 comes along, does a large
select, and the DB server pukes and dies, taking out 200 users' data sets
instead of simply detecting the error and reporting it to the single user.

Yeah, crash and burn on allocation failure is just such a good strategy.

Feb 3 '08 #55
Kelsey Bjarnason said:
On Wed, 06 Feb 2008 09:19:54 +0000, Herbert Rosenau wrote:
>On Tue, 29 Jan 2008 11:47:27 UTC, jacob navia <ja***@nospam.com> wrote:
>>1:
It is not possible to check EVERY malloc result within complex
software.

This is a lie!

No kidding. One wonders if they feel the same way about, oh, opening
files, or establishing network connections - is it impossible to check
every call to fopen, every call to connect? If not, why is memory
allocation so magically difficult to check?
The claim that it is not possible to check every malloc result within
complex software reminded me of a project in the mid-1990s where I was
fortunate enough to be in at the start, so I was able to have my say in
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level 4"
(this is the highest warning level in Visual Studio 1.5, which we were
using for development before porting up to the mainframe for testing).

One of the others said, in a wonderfully broad Chicago accent, "Level FOUR?
That's IMPOSSIBLE!"

This came as news to me, since I had routinely been getting clean compiles
at level 4 for some years.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 6 '08 #56
[snips]

On Wed, 06 Feb 2008 11:55:30 +0000, Richard Heathfield wrote:
Kelsey Bjarnason said:
>No kidding. One wonders if they feel the same way about, oh, opening
files, or establishing network connections - is it impossible to check
every call to fopen, every call to connect? If not, why is memory
allocation so magically difficult to check?

The claim that it is not possible to check every malloc result within
complex software reminded me of a project in the mid-1990s where I was
fortunate enough to be in at the start, so I was able to have my say in
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level
4" (this is the highest warning level in Visual Studio 1.5, which we
were using for development before porting up to the mainframe for
testing).

One of the others said, in a wonderfully broad Chicago accent, "Level
FOUR? That's IMPOSSIBLE!"

This came as news to me, since I had routinely been getting clean
compiles at level 4 for some years.

Indeed. While I'll grant that some of MS's headers - particularly MFC
headers - do or did tend not to compile clean, that aside if there's a
warning, I want to know _why_. Maybe - *maybe* - it's okay and there's
nothing you can do about it (it's happened once or twice) but in general,
crank the sucker and code clean.

I've largely given up arguing the point as pertains to allocation
failure, though. I don't know if it's that some people *want* to write
bad code, or simply can't be bothered to write good code, but either way
the result is the same and they seem to have an almost religious
adherence to their view, with about as much basis for holding it as most
religious views seem to have.

Feb 6 '08 #57
Kelsey Bjarnason said:

<snip>
I've largely given up arguing the point as pertains to allocation
failure, though. I don't know if it's that some people *want* to write
bad code, or simply can't be bothered to write good code, but either way
the result is the same and they seem to have an almost religious
adherence to their view, with about as much basis for holding it as most
religious views seem to have.
I once took part in a long and highly detailed technical argument in
alt.comp.lang.learn.c-c++, on the subject of strncpy, in its role as the
"safe"haha version of strcpy. My opponent was courteous and well-informed
- a delightful combination. The argument (and it really was an *argument*,
not a row or a fight) raged, very politely, for several days.

The eventual outcome of the debate was that my opponent ***changed his
mind***.

Shock! Horror!

You'd think it would be happening all the time, wouldn't you? Well, often,
anyway. But IME it is quite rare for someone to change an entrenched
opinion as a result of reasoning about the subject. Nevertheless, it is
always something to hope for.

Of course, for such an argument to occur, it is essential that each person
taking part respects his opponent(s). I agree with you that, when things
have descended to the "checking malloc is dumb, you smell of bat guano,
and so does your warthog" level, it is time to stop.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 6 '08 #58
Richard Heathfield wrote:
[...]
setting up project coding standards. Just about the first thing I
suggested was: "let's ensure that all the code clean-compiles at level 4"
(this is the highest warning level in Visual Studio 1.5, which we were
using for development before porting up to the mainframe for testing).

One of the others said, in a wonderfully broad Chicago accent, "Level FOUR?
That's IMPOSSIBLE!"
"It's not impossible. I used to bullseye womp rats in my T-16 back
home, they're not much bigger than two meters."

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody        | www.hvcomputer.com | #include              |
| kenbrody/at\spamcop.net | www.fptech.com     | <std_disclaimer.h>    |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Feb 6 '08 #59
On 6 Feb 2008 at 15:23, Richard Heathfield wrote:
The eventual outcome of the debate was that my opponent ***changed his
mind***.

Shock! Horror!

You'd think it would be happening all the time, wouldn't you? Well,
often, anyway.
You certainly would - I mean, given that Heathfield is Mr Perfect, right
about every detail, how could people possibly persist in disagreeing
with him?

I wonder how many other hapless newbies have been browbeaten by
Heathfield into "changing their minds" and falling into step with his
singular worldview.
But IME it is quite rare for someone to change an entrenched opinion
as a result of reasoning about the subject. Nevertheless, it is always
something to hope for.
Look, not a trace of irony here...
Of course, for such an argument to occur, it is essential that each
person taking part respects his opponent(s). I agree with you that,
when things have descended to the "checking malloc is dumb, you smell
of bat guano, and so does your warthog" level, it is time to stop.
....nor here. Unbelievable.

Feb 6 '08 #60
Kelsey Bjarnason wrote:
>
On Thu, 07 Feb 2008 20:31:05 +0000, Herbert Rosenau wrote:
In practice it does not - because immediately after free() has done its
work the scheduler gives another thread the CPU, and that thread has
nothing better to do than to eat the released memory for its own job.

Scheduler or no, it seems to me than an implementation which does this
may in fact be non-conforming.
Eh? Please give a few hints as to your thought process here. I'm
not following (finding) your logic...

--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto
Feb 7 '08 #61
Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catch them?

For those not familiar with C++ something like:
void some_function(void) try
{
/* ... */
}

catch(bad_alloc)
{
/* Error handling code here */
}

void somefunc(void)
{
try
{
int *p= malloc(sizeof(*p));
}

catch(bad_alloc)
{
/* Error handling code here */
}

/* ... function code continues... */
}
My complete proposal is to adopt the namespace mechanism and the
exception handling mechanism of C++, in a way they will be a subset of
C++ equivalent functionalities (that is only for functions, struct
definitions and objects, and variables) - so there can be compatibility
between C and C++ interfaces, making it easier for the implementers to
provide both C and C++ compilers, and making the already existing C++
mechanisms available to C easily and quickly.

Feb 8 '08 #62
Ioannis Vranos wrote:
Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catch them?
<snip>

You may be already aware but I think the "lcc-win32" compiler implements
exceptions similar to C++. The author also claims to have implemented
operator overloading as in C++.

Feb 8 '08 #63
santosh wrote:
Ioannis Vranos wrote:
>Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catch them?

<snip>

You may be already aware but I think the "lcc-win32" compiler implements
exceptions similar to C++. The author also claims to have implemented
operator overloading as in C++.
The author appears to turn purple at the very mention of C++! His
operator overloading implementation is nothing like C++.

--
Ian Collins.
Feb 8 '08 #64
On Jan 30, 1:31 am, "cr88192" <cr88...@hotmail.com> wrote:
I propose implementing something combining both signal handling and
unwinding features.

sza=(1<<31)-1;
1 << 31 causes undefined behaviour (if int is 32-bit).

On a normal 2's complement system, one would expect
1 << 31 to generate INT_MIN -- and then you cause
further undefined behaviour by subtracting 1 from that value.

Perhaps you were looking for INT_MAX, or SIZE_MAX ?
Feb 8 '08 #65
Ian Collins wrote:
santosh wrote:
>Ioannis Vranos wrote:
>>Well, it may sound somehow unusual, but since C and C++ are
something like siblings, why not just adopt a subset of the C++
exception mechanism? Like throwing specific structs and catch them?

<snip>

You may be already aware but I think the "lcc-win32" compiler
implements exceptions similar to C++. The author also claims to have
implemented operator overloading as in C++.
The author appears to turn purple at the very mention of C++! His
operator overloading implementation is nothing like C++.
Okay, I'll take your word for this. I have used lcc-win32 but only as an
ISO C compiler.

Feb 8 '08 #66
Ioannis Vranos said:
Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism?
If you want C++, you know where to find it.

The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check that our allocation requests succeeded and deal with them if they
didn't. This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 8 '08 #67
Ioannis Vranos wrote:
Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism? Like throwing specific structs and catch them?

For those not familiar with C++ something like:
void some_function(void) try
{
/* ... */
}

catch(bad_alloc)
{
/* Error handling code here */
}

void somefunc(void)
{
try
{
int *p= malloc(sizeof(*p));
}

catch(bad_alloc)
{
/* Error handling code here */
}

/* ... function code continues... */
}
The lcc-win C compiler adopts a try/catch mechanism, copied from
the proposal of Microsoft Corp's MSVC. This feature allows
for easier programming. If you want to stay within standard C,
you can use the solution that I proposed at the start of this
thread.
>
My complete proposal is to adopt the namespace mechanism and the
exception handling mechanism of C++, in a way they will be a subset of
C++ equivalent functionalities (that is only for functions, struct
definitions and objects, and variables) - so there can be compatibility
between C and C++ interfaces, making it easier for the implementers to
provide both C and C++ compilers, and making the already existing C++
mechanisms available to C easily and quickly.
That would be really nice but with the current mindset in official
C circles it is a tough fight to do any changes to the language.

They look frozen.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #68
Richard Heathfield wrote:
Ioannis Vranos said:
>Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism?

If you want C++, you know where to find it.

The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check that our allocation requests succeeded and deal with them if they
didn't.
This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.

When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for LESS bugs. Obviously this solution is not
meant for geniuses like you and the other people here
that boast of their infallible powers when programming.

You never (of course) make any mistake when writing the
hundreds of lines of recovery code. My solution is meant for
people that do make mistakes (like me).
This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.
This is just rubbish. Nobody "struggles with if". But it is
a proved fact that the more lines of code you have to write
the more the possibility of making mistakes.

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #69
jacob navia wrote:
Richard Heathfield wrote:
>Ioannis Vranos said:
>>Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism?

If you want C++, you know where to find it.

The solution to the "allocation failures problem" is, believe it or
not, *not* to lock our code into an untrusted proprietary solution but
rather to check that our allocation requests succeeded and deal with
them if they didn't.

This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.

When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for LESS bugs.
That's not as easy as it sounds in C. C++ has the built in mechanisms
to write exception safe code. Such code is hard to write in C (the same
issues as multiple points of return).

Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.

--
Ian Collins.
Feb 8 '08 #70
[Piggybacking]

Ian Collins said:
jacob navia wrote:
>Richard Heathfield wrote:
<snip>
>>The solution to the "allocation failures problem" is, believe it or
not, *not* to lock our code into an untrusted proprietary solution but
rather to check that our allocation requests succeeded and deal with
them if they didn't.

This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.
I have demonstrated to many people's satisfaction that there is no point in
my arguing with Jacob Navia, so I will not do so here. Such arguments
swiftly become acrimonious, generating far more heat than light. Suffice
it to say that I do not share his pessimism about the inability of good C
programmers to write good C code. What he finds impossible may not be
impossible for others.
>When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for LESS bugs.

That's not as easy as it sounds in C.
Nor has the case been proved (that it simplifies the code and makes for
fewer bugs). If someone whose opinion I respect would care to support that
case, it might make for a reasonable and informative discussion.
Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?
Write a bug report. :-)
You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.
Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't hear
anyone arguing that it's impossible to deal with fopen failures.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 8 '08 #71
Ian Collins wrote:
That's not as easy as it sounds in C. C++ has the built in mechanisms
to write exception safe code. Such code is hard to write in C (the same
issues as multiple points of return).

Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?
There are many possibilities:

1) You modify malloc so that it keeps in memory the last allocations.
If there is a failure, it will jump to a recovery point where the
last n allocations are undone.
This can be coupled with calls to mark()/release() like in Pascal,
where you can free() the last n allocations.
2) You just USE A GARBAGE COLLECTOR like the one lcc-win provides!
This solution is the best, since it frees() the mind from this
hyper tedious ACCOUNTING of each memory piece and leaves room for
the interesting stuff of programming.

Yes, I know. There are some programmers like those called here
"good C programmers" that like writing thousands of times
if (result == NULL) {
// cleanup code.
}

I am not one of those.
You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.
If you program with good tools and use a bit of creativity you do not
have to.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #72
Ian Collins wrote:
Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.
Here's how I've dealt with such a situation once:

------------------------
void *my_malloc(size_t size, int *err)
{
void *buf;
if (*err) {
++*err;
return NULL;
}
buf = malloc(size);
*err = (buf == NULL);
return buf;
}

int main(void)
{
char *a, *b, *c;
int err = 0;

a = my_malloc(100, &err);
b = my_malloc(200, &err);
c = my_malloc(300, &err);
return err;
}
--------------------

That wasn't for actual malloc()s, but I've used this for demonstration.
The idea is that you can lump subsequent calls to error-prone functions
together and check once for an error condition at the end.

I recently implemented this scheme in a program that bombarded a
measurement system with requests over a network. If one of those calls
fails it doesn't make any sense (in fact in this case it could be
harmful to expensive equipment) to keep soldering on, so the program
simply falls through the rest of the calls, checks for an error
condition and then deals with it. The increment of *err inside the
called function even allows localization of the error.

Not a universal recipe, but in this case it worked for me.

robert
Feb 8 '08 #73
Robert Latest wrote:

I recently implemented this scheme in a program that bombarded a
measurement system with requests over a network. If one of those calls
fails it doesn't make any sense (in fact in this case it could be
harmful to expensive equipment) to keep soldering on,
I can see that excess solder could be a problem.

--
"Ashes are burning the way." - Renaissance, /Ashes Are Burning/

Hewlett-Packard Limited registered no: 690597 England
registered office: Cain Road, Bracknell, Berks RG12 1HN

Feb 8 '08 #74
jacob navia wrote:
Ian Collins wrote:
>That's not as easy as it sounds in C. C++ has the built in mechanisms
to write exception safe code. Such code is hard to write in C (the same
issues as multiple points of return).

Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

There are many possibilities:

1) You modify malloc so that it keeps in memory the last allocations.
If there is a failure, it will jump to a recovery point where the
last n allocations are undone.
This can be coupled with calls to mark()/release() like in Pascal,
where you can free() the last n allocations.
2) You just USE A GARBAGE COLLECTOR like the one lcc-win provides!
This solution is the best, since it frees() the mind from this
hyper tedious ACCOUNTING of each memory piece and leaves room for
the interesting stuff of programming.
That's fine for memory, but if another resource was claimed between the
first two mallocs, you would have to use multiple try blocks.

There's no getting away from the need for exception safe programming
once you add exceptions.

--
Ian Collins.
Feb 8 '08 #75
jacob navia wrote:
>
>Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

There are many possibilities:
[...]
2) You just USE A GARBAGE COLLECTOR like the one lcc-win provides!
Sounds like some interesting garbage collector. If the first call above
ate up the last 100 available bytes, how is a GC going to do anything
about subsequent calls?

The fact remains that the memory management of modern OSes sacrifices a
certain amount of reliability for speed. Non-critical apps should be
aware of this fact. If your software is part of a commercial airliner's
avionics, don't use malloc() and free() at all, let alone a GC.
Statically hog all necessary RAM at start-up, and write
your own memory manager around it.

(I don't care much for military aircraft and space travel. For all I
care their firmware may be written with lcc-win)
Yes, I know. There are some programmers like those called here
"good C programmers" that like writing thousands of times
if (result == NULL) {
// cleanup code.
}
I wonder where you see all those people against extensions. Nobody here
writes exclusively Standard C without non-standard libraries. If you
want a GC, get one and link against it. Just don't come to CLC if it
doesn't work as expected. As with any third-party library.

robert
Feb 8 '08 #76
Robert Latest wrote:
jacob navia wrote:
>>Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?
There are many possibilities:

[...]
>2) You just USE A GARBAGE COLLECTOR like the one lcc-win provides!

Sounds like some interesting garbage collector. If the first call above
ate up the last 100 available bytes, how is a GC going to do anything
about subsequent calls?
No, you just jump to the recovery point, then you call

gc()

and all the intermediate allocations are automatically freed.
No need for costly recovery measures.
The fact remains that the memory management of modern OSes sacrifices a
certain amount of reliability for speed. Non-critical apps should be
aware of this fact. If your software is part of a commercial airliner's
avionics, don't use malloc() and free() at all, let alone a GC.
I wouldn't even use malloc()!

malloc CAN'T be reliably used in an avionics setting. It CAN
fail, and that is not an option when landing an aircraft, for
instance.
Statically hog all necessary RAM at start-up, and write
your own memory manager around it.
Or use static buffers
(I don't care much for military aircraft and space travel. For all I
care their firmware may be written with lcc-win)
> Yes, I know. There are some programmers like those called here
"good C programmers" that like writing thousands of times
if (result == NULL) {
// cleanup code.
}

I wonder where you see all those people against extensions. Nobody here
writes exclusively Standard C without non-standard libraries. If you
want a GC, get one and link against it. Just don't come to CLC if it
doesn't work as expected. As with any third-party library.

robert
exactly

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #77
Ian Collins wrote:
Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.
OK, here's one idea of my own. It's not a complete solution of course.

* Allocate a big chunk of memory at the start of execution (failure here
is not critical)
* Use that as a private heap area for some allocations
* For all allocations in the application which are known to be small and
well-managed, use this private heap.
* The size of this heap is chosen to fit comfortably all the allocations
which are likely to be active at one time.
* If this private heap overflows, while it's easy to keep expanding it, it's
better to treat this as an error in the program which uses it.

This heap allocator is not expected to fail. Therefore error returns don't
need to be checked; it will either return a valid pointer or show an error
message. (This is not that much different from expecting array indices to be
in range.)

A lot of small allocations can then make use of this instead:

char* p0 = heap(100);
char* p1 = heap(200);
char* p2 = heap(300);
....
freeheap(p0); etc..

Otherwise, if you need to use malloc() for a trivial allocation like a few
dozen bytes, then you are exposing the program to the risk of major failure;
you *must* check this allocation succeeded, and if not then you may have a
decidedly non-trivial task of cleaning up and recovering:

char *filespec;

filespec=malloc(strlen(file)+strlen(ext)+1);
if (filespec==NULL) /* big headache of what to do about this */
....
strcpy(filespec,file);
strcat(filespec,ext);
status=checkfileexists(filespec);

free(filespec);
return status;

The fact that malloc() failed on this tiny allocation means there is a
major problem; it's not just a matter of being unable to complete this
file operation (or whatever). Rather than just failing this operation, it
should signal the failure to some part of the program better able to
handle it.

(And, for this function that returns 1: the file exists, 0:doesn't exist;
should a malloc() failure just return 0? That would be misleading. Perhaps
introduce 2:don't know?!)

Solution: use heap()/freeheap() instead.
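A minimal sketch of such a heap()/freeheap() pair, assuming fixed-size blocks carved from one up-front allocation (the block and pool sizes are invented; a real implementation would handle variable sizes and fragmentation more carefully):

```c
#include <stdio.h>
#include <stdlib.h>

#define POOL_BLOCKS 1024
#define BLOCK_SIZE  512   /* big enough for the "small, well-managed" cases */

static unsigned char *pool = NULL;
static void *free_list = NULL;

/* Carve the preallocated chunk into a free list of fixed-size blocks.
   Failure here, at the start of execution, is not critical: we can
   simply refuse to start. */
static void heap_init(void)
{
    pool = malloc((size_t)POOL_BLOCKS * BLOCK_SIZE);
    if (pool == NULL) {
        fprintf(stderr, "cannot reserve private heap\n");
        exit(EXIT_FAILURE);
    }
    for (int i = 0; i < POOL_BLOCKS; i++) {
        void *block = pool + (size_t)i * BLOCK_SIZE;
        *(void **)block = free_list;   /* link block onto the free list */
        free_list = block;
    }
}

/* Never returns NULL: exhaustion is treated as a program error. */
void *heap(size_t nbytes)
{
    if (pool == NULL)
        heap_init();
    if (nbytes > BLOCK_SIZE || free_list == NULL) {
        fprintf(stderr, "private heap misuse or overflow\n");
        exit(EXIT_FAILURE);
    }
    void *r = free_list;
    free_list = *(void **)r;
    return r;
}

void freeheap(void *p)
{
    if (p != NULL) {
        *(void **)p = free_list;   /* push the block back on the list */
        free_list = p;
    }
}
```

Because the pool is reserved once at start-up, the error-return check disappears from every call site, exactly as argued above.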

(The real scenario comes from a program which is an interpreter for another
language; memory allocation/deallocation is implicit, in fact the programmer
can't even intervene:

return checkfileexists(file+ext);

The programmer may not even be aware that he narrowly missed catastrophic
failure of his application!)

--
Bart

Feb 8 '08 #78
Ian Collins wrote:
>
That's not as easy as it sounds in C. C++ has the built in mechanisms
to write exception safe code. Such code is hard to write in C (the same
issues as multiple points of return).

Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.

You can retrieve information about what failed from the exception itself.

Feb 8 '08 #79
Richard Heathfield wrote:
>
Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't hear
anyone arguing that it's impossible to deal with fopen failures.
Exceptions can be used for all kinds of resource allocations.

Feb 8 '08 #80
Ioannis Vranos wrote:
Ian Collins wrote:
>>
That's not as easy as it sounds in C. C++ has the built in mechanisms
to write exception safe code. Such code is hard to write in C (the same
issues as multiple points of return).

Consider

char* p0 = malloc(100);
char* p1 = malloc(200);
char* p2 = malloc(300);

What do you do if the second malloc throws?

You'd end up with as much if not more code for a series of try/catch
blocks as you would for a series of if() blocks.


You can retrieve information about what failed from the exception itself.

More specifically, you may apply try-catch to each of the malloc
statements, to the whole block of malloc statements, to the entire
function body, or inside a caller function.

You do not need to return failure values from called functions to
callers; propagation happens automatically if you do not catch the
exception.
Feb 8 '08 #81
Ioannis Vranos said:
Richard Heathfield wrote:
>>
Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't
hear anyone arguing that it's impossible to deal with fopen failures.

Exceptions can be used for all kinds of resource allocations.
Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.

This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 8 '08 #82
Richard Heathfield wrote:
Ioannis Vranos said:
>Exceptions can be used for all kinds of resource allocations.

Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.
In your C maybe, who knows.

In modern C, several compilers provide this facility, for instance
MSVC. Compilers for embedded systems sometimes do this, and when
I ported lcc-win to a DSP, try/catch was a requirement, even though
the software did not support floating point.
This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).
It has been explained to you a thousand times that this is not the
case, but you think that repeating the same stupid story will make
it true.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #83
jacob navia wrote:
>
Richard Heathfield wrote:
The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check that our allocation requests succeeded and deal with them if they
didn't.

This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.
This is purely a statement of opinion. I've worked with large
real-time systems where allocation error detection and
transparent error recovery has been an absolute requirement. A
statement that it isn't possible is no more than a declaration
that the programmer is either unwilling to put forth the effort
to meet requirements, or a declaration of personal/team
incompetence.
When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for LESS bugs. Obviously this solution is not
meant for geniuses like you and the other people here
that boast of their infallible powers when programming.
I would suggest that you discover the genius of 10 - 12 ordinary
(but competent) programmers providing peer review - and eagerly
looking for any weakness in your code. It is very much _not_ a
matter of individual genius or infallible powers!
You never (of course) make any mistake when writing the
hundreds of lines of recovery code. My solution is meant for
people that do make mistakes (like me).
I understand that you are attempting to provide a means by which
semi- or incompetent programmers can avoid both the effort
required to learn and the effort required to produce high quality
code.
This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.

This is just rubbish. Nobody "struggles with if". But it is
a proven fact that the more lines of code you have to write,
the greater the possibility of making mistakes.
*This* is correct. You have just made the best of cases for peer
review, the exercise of diligence, and the need for test
strategies that detect and diagnose errors so that they can be
corrected before release to the first user.

--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto
Feb 8 '08 #84
Ioannis Vranos wrote:

<snip>
But the bottom line is I think C99 is dead. But that's off topic.
Why is discussion of C99 status and future off-topic? We have had such
threads before and apart from annoyance at some of jacob's comments, I
don't remember anyone saying that they were OT.

Feb 8 '08 #85
Ioannis Vranos wrote:
Ioannis Vranos wrote:
<snip>
>I think C99 has got the wrong path since I first saw it, and I think
I predicted in clc that it would not be fully implemented except
perhaps in GCC. However even GCC hasn't implemented it completely,
which means it's a total failure. It reminds me of the Pascal case,
where two standards exist, and Pascal is more or less dead.


Actually there may be a couple of compilers that support C99 fully.
Yes. Comeau C++ and IBM's VisualAge claim to fully implement C99.

<snip>

Feb 8 '08 #86
Ioannis Vranos wrote:
Wouldn't it be useful if C got the namespace mechanism from C++? I think
it would be useful for C programmers. I think the same thing applies
from C++ exceptions.
The one fundamental question that you proposalists are continuously
failing to answer is: If you want features that an otherwise very C-like
language X offers, why don't you simply switch to X instead of bugging C
users about the deficiencies of the language of their choice?
I am using C++ now,
Oh. You already did. Well, good for you.

robert
Feb 8 '08 #87
Morris Dovey wrote:
jacob navia wrote:
>This is not possible for each allocation at each step without
introducing too many bugs, as the proponents of the other side
have repeatedly pointed out.

This is purely a statement of opinion. I've worked with large
real-time systems where allocation error detection and
transparent error recovery has been an absolute requirement.
Those applications can't use any solution I have proposed.
I have told this countless times, and here I go again:

"There are applications where a GC with a catch/throw mechanism
for memory management is just not usable for security/real time/
or other requirements"
Happy?
A
statement that it isn't possible is no more than a declaration
that the programmer is either unwilling to put forth the effort
to meet requirements, or a declaration of personal/team
incompetence.
Of course I am unwilling to put the effort. That's what I am talking about.

Why should I put that effort when BETTER solutions exist?

Do you go running to your job?

Or you take a public transportation system or you drive?

What?

You DRIVE!

You are just someone that is unwilling to put the effort needed to
get to the job!

I always RUN to my job, I arrive late, and it takes almost the whole
work day but I AM WILLING TO DO THE EFFORT you see?

>When an allocation fails it is better to code an exception
that jumps to a recovery point. This simplifies the code
and makes for LESS bugs. Obviously this solution is not
meant for geniuses like you and the other people here
that boast of their infallible powers when programming.

I would suggest that you discover the genius of 10 - 12 ordinary
(but competent) programmers providing peer review - and eagerly
looking for any weakness in your code. It is very much _not_ a
matter of individual genius or infallible powers!
I think that peer review is a useful thing. But, as with all efforts,
it is VERY expensive (10-12 programmers doing peer review!)

And instead of reviewing the APPLICATION code, they review the
memory management code that is sprawled across the whole
application in hundreds of lines, instead of having it concentrated
in a single place.

Why burden all applications with all that code? There is NO NEED
to do that that way.
>You never (of course) make any mistake when writing the
hundreds of lines of recovery code. My solution is meant for
people that do make mistakes (like me).

I understand that you are attempting to provide a means by which
semi- or incompetent programmers can avoid both the effort
required to learn and the effort required to produce high quality
code.
Yes, by driving instead of running to the job, an otherwise not
very sportive person can arrive at the job in time. With LESS
effort than the guy running that is always late :-)
>This is not rocket science. Those who struggle with if() will
>>struggle even more with try/catch.
This is just rubbish. Nobody "struggles with if". But it is
a proved fact that the more lines of code you have to write
the more the possibility of making mistakes.

*This* is correct. You have just made the best of cases for peer
review, the exercise of diligence, and the need for test
strategies that detect and diagnose errors so that they can be
corrected before release to the first user.
*This* is correct. You have just made the best case for not
wasting your time and your fellow programmers' time on useless
code, and for reviewing THE APPLICATION and not the memory management.

When I do an application I want to write THE APPLICATION and
not the memory management part for the NTH time!

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Feb 8 '08 #88
jacob navia wrote:

<snip>
When I do an application I want to write THE APPLICATION and
not the memory management part for the NTH time!
You do realise that even with a try/catch statement code for responding
to memory failures must still be written, don't you? I'm not even sure
if it would be that much easier than the traditional method.

You do have a point though if one were to use a GC.

Feb 8 '08 #89
santosh wrote:
jacob navia wrote:

<snip>
>When I do an application I want to write THE APPLICATION and
not the memory management part for the NTH time!

You do realise that even with a try/catch statement code for responding
to memory failures must still be written, don't you? I'm not even sure
if it would be all that easier that the traditional method.

You do have point though if one were to use a GC.

How can you do this easily under current C?
void somefunc(void) try
{
int *p= malloc(100*sizeof(*p));

/* ... */
}

catch(bad_alloc)
{
static no_of_failures= 0;

no_of_failures++;

if(no_of_failures== 10)
return EXIT_FAILURE;

somefunc();
}
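In current C, the closest analogue to the try/catch sketch above is setjmp/longjmp. Here is a rough sketch of the same retry-on-failure pattern; xmalloc, run_with_recovery, and the retry policy are invented for illustration, not taken from the thread:

```c
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf alloc_recovery;     /* the "recovery point" */
static int recovery_armed = 0;

/* xmalloc: on failure, jump to the recovery point instead of
   returning NULL to the caller. */
void *xmalloc(size_t nbytes)
{
    void *p = malloc(nbytes);
    if (p == NULL && recovery_armed)
        longjmp(alloc_recovery, 1);
    return p;
}

/* Run f(); each time an allocation inside it fails, retry, up to
   max_failures attempts. Returns 0 on success, -1 if we give up.
   This plays the role of the catch block above. */
int run_with_recovery(void (*f)(void), int max_failures)
{
    volatile int failures = 0;     /* volatile: modified after setjmp */
    recovery_armed = 1;
    while (setjmp(alloc_recovery)) {   /* re-entered on every failure */
        if (++failures >= max_failures) {
            recovery_armed = 0;
            return -1;
        }
    }
    f();                               /* first attempt, and each retry */
    recovery_armed = 0;
    return 0;
}

/* Demonstration task: fails twice, then succeeds. */
static int demo_calls = 0;
static void demo_task(void)
{
    demo_calls++;
    if (demo_calls < 3)
        (void)xmalloc((size_t)-1);  /* absurd request: guaranteed to fail */
}
```

Note that, unlike real exceptions, longjmp unwinds nothing: allocations made before the failure inside f() leak unless f() tracks them itself, which is exactly the cleanup problem Ian Collins raises.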
Feb 8 '08 #90
Minor code correction:

Ioannis Vranos wrote:
santosh wrote:
>jacob navia wrote:

<snip>
>>When I do an application I want to write THE APPLICATION and
not the memory management part for the NTH time!

You do realise that even with a try/catch statement code for responding
to memory failures must still be written, don't you? I'm not even sure
if it would be all that easier that the traditional method.

You do have point though if one were to use a GC.


How can you do this easily under current C?
void somefunc(void) try
{
int *p= malloc(100*sizeof(*p));

/* ... */
}

catch(bad_alloc)
{
== static int no_of_failures= 0;
>
no_of_failures++;

if(no_of_failures== 10)
return EXIT_FAILURE;

somefunc();
}
Feb 8 '08 #91
On Fri, 08 Feb 2008 06:42:26 +0000, Richard Heathfield
<rj*@see.sig.invalidwrote:
>Ioannis Vranos said:
>Well, it may sound somehow unusual, but since C and C++ are something
like siblings, why not just adopt a subset of the C++ exception
mechanism?

If you want C++, you know where to find it.

The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check that our allocation requests succeeded and deal with them if they
didn't. This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.
Writing "check for errors and writing response code" for each
call to malloc is, of course, a viable strategy. However there
are issues to be considered. Three such are (a) the
unreliability of "fixup" code, (b) the non-uniformity of error
response, and (c) code littering.

(A) Unreliability: The code in these "failure to allocate"
clauses isn't easy to test properly, and, in the absence of an
allocator wrapper, isn't easy to test at all. This may not
matter if the only action is to write a message to stderr and
call exit, but it definitely is an issue for more elaborate
responses such as request size retries and alternate algorithms.

In my view, reliable programs (and if we are not interested in
reliable programs why bother to test at all) are developed and
maintained with test harnesses that test the alternatives.

(B) Non-uniformity: Since each "failure to allocate" test clause
is individually written there is no guaranteed uniformity of
response to error conditions.

When I develop software in C my preference is to include an error
management module tailored to the program. I find that using a
"good" error management module means that it is easier to get
meaningful responses when errors happen. Obviously this is not a
universal solution.

(C) Code littering: These little "failure to allocate" tests
litter the code, i.e., they break into the readable flow of
action of functions with tangential tests. Granted, this is not
a major issue; nonetheless such littering makes code harder to
read to some degree and requires more code writing.

All of that said, what is one to do? My suggestion is to use a
wrapper for malloc for almost all storage allocation requests.
The "failure to allocate" test and the subsequent error response
(e.g., write an error message and call exit) is in one place
rather than being scattered as multiple copies throughout the
code.

But what, the skeptic says, if we want to do something special
such as retrying with a smaller size or using an alternative, less
memory-intensive algorithm? One answer is simple and obvious;
don't use the wrapper in these special cases. A better answer is
to have a second entry into the wrapper package that does return
0 on a failure to allocate, the point being that the wrapper can
have a back door to force failure, a feature very useful in a
test harness. We all test our code in test harnesses, don't we?
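One sketch of such a wrapper pair, including the back door for a test harness (all names here are invented; the post does not prescribe an interface):

```c
#include <stdio.h>
#include <stdlib.h>

static unsigned long fail_after = (unsigned long)-1; /* back-door setting */
static unsigned long alloc_count = 0;

/* Test-harness back door: force every allocation after the first n
   to fail, so the "failure to allocate" paths can actually be tested. */
void force_alloc_failure_after(unsigned long n)
{
    fail_after = n;
    alloc_count = 0;
}

/* Second entry: returns NULL on (real or forced) failure, for the
   special cases that retry or switch algorithms. */
void *try_alloc(size_t nbytes)
{
    if (alloc_count++ >= fail_after)
        return NULL;                 /* forced failure for testing */
    return malloc(nbytes);
}

/* Main entry: the failure test and the response live in this one
   place instead of being scattered throughout the code. */
void *must_alloc(size_t nbytes)
{
    void *p = try_alloc(nbytes);
    if (p == NULL) {
        fprintf(stderr, "allocation of %lu bytes failed\n",
                (unsigned long)nbytes);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

A real error management module would log and checkpoint before exiting; the one-place structure is the point.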

Some have argued that a wrapper that terminates the program when
there is an error is a "crash and burn" strategy that can lead to
a large loss of results, i.e., losing files, data, or
calculations. This is pretty much a strawman. Programs can
crash at any time for reasons beyond the control of the program;
if preserving results is critical, there are well-known
strategies such as checkpointing and journaling. More than that,
using wrappers and an error management package is not a "crash
and burn" strategy; it is a "black box" strategy.


Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
Feb 8 '08 #92
On Fri, 8 Feb 2008 02:33:34 -0600, Richard Heathfield wrote
(in article <45*********************@bt.com>):
Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't hear
anyone arguing that it's impossible to deal with fopen failures.
Just give them time. It won't be long before someone argues that
checking return values from fopen() is impossible to do correctly, so
don't even try.
--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Feb 8 '08 #93

"Richard Harter" <cr*@tiac.netwrote in message
news:47****************@news.sbtc.net...
On Fri, 08 Feb 2008 06:42:26 +0000, Richard Heathfield
<rj*@see.sig.invalidwrote:
>>The solution to the "allocation failures problem" is, believe it or not,
*not* to lock our code into an untrusted proprietary solution but rather
to check that our allocation requests succeeded and deal with them if they
didn't. This is not rocket science. Those who struggle with if() will
struggle even more with try/catch.

Writing "check for errors and writing response code" for each
call to malloc is, of course, a viable strategy. However there
are issues to be considered. Three such are (a) the
unreliability of "fixup" code, (b) the non-uniformity of error
response, and (c) code littering.
(C) Code littering: These little "failure to allocate" tests
litter the code, i.e., they break into the readable flow of
action of functions with tangential tests. Granted, this is not
a major issue; none-the-less such littering makes code harder to
read to some degree and requires more code writing.
It's difficult to put this point across to those who think programmers must
be superhuman beings who cannot make mistakes or be distracted by untidy or
overscrupulous code.

The strategy to strew low-level malloc() calls all over the place complete
with complex error-handling seems totally wrong.

I presented a little part-solution elsewhere in this thread -- offload
trivial memory allocations to a heap manager using a preallocated block so
that it cannot ever fail, except for program error (indicating a memory leak
that can then be fixed).

The alternative, to use malloc() everywhere, would I think *increase* the
chance of failure, by leaving the program open to serious malfunction due to
memory problems, and may do so for a trivial use of malloc() in a place
ill-suited to dealing with such a problem.

--
Bart

Feb 8 '08 #94
On Fri, 8 Feb 2008 09:04:16 -0600, jacob navia wrote
(in article <fo**********@aioe.org>):
I think that peer review is a useful thing. But, as with all efforts,
it is VERY expensive (10-12 programmers doing peer review!)
Quite the opposite. It's very cost-effective. You just need to
contrast that cost with what it takes to fix, repair and support
hundreds, thousands or millions of customers in the field that are
suddenly experiencing bugs and demanding fixes, and perhaps even
threatening legal action.

--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Feb 8 '08 #95
On Feb 8, 7:34 am, Richard Heathfield <r...@see.sig.invalidwrote:
Ioannis Vranos said:
Richard Heathfield wrote:
Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way of
requesting a resource, very similar in that respect to fopen. I don't
hear anyone arguing that it's impossible to deal with fopen failures.
Exceptions can be used for all kinds of resource allocations.

Well, we don't actually have exceptions in C, so no, they can't. And we're
not likely to get them, either.

This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).
And so I went to look for best practices, got some C code from
http://www.cpax.org.uk/prg/portable/...x.php#download

It handles malloc() failure pretty well: the function which calls
malloc() returns NULL, and its caller happily uses some default
value instead of what it's supposed to do. But it does "handle"
malloc() failure.

Sure, from such a toy program you shouldn't expect much intelligence
(though it could fail instead of saying "all is well"), but we
are talking about "easy to type" here, aren't we? Or do I miss
something, and is it really not about handling errors but about
typing an "if(mp != NULL)"?

A bonus question: what happens when malloc() fails inside fgetline()
(my hypothesis: nothing, we pretend we hit EOF. But we don't "crash
and burn", which is Good)

Yevgen
Feb 8 '08 #96
A bit more fixed:
Ioannis Vranos wrote:
santosh wrote:
>jacob navia wrote:

<snip>
>>When I do an application I want to write THE APPLICATION and
not the memory management part for the NTH time!

You do realise that even with a try/catch statement code for responding
to memory failures must still be written, don't you? I'm not even sure
if it would be all that easier that the traditional method.

You do have point though if one were to use a GC.


How can you do this easily under current C?
void somefunc(void) try
{
int *p= malloc(100*sizeof(*p));

/* ... */
}

catch(bad_alloc)
{
== static int no_of_failures= 0;
>
no_of_failures++;

if(no_of_failures== 10)
== exit(EXIT_FAILURE);
>
somefunc();
}
Feb 8 '08 #97
ym******@gmail.com said:
On Feb 8, 7:34 am, Richard Heathfield <r...@see.sig.invalidwrote:
>Ioannis Vranos said:
Richard Heathfield wrote:
>Right. It is certainly the case that nobody writes perfect code, but
there's nothing particularly special about malloc - it's just a way
of requesting a resource, very similar in that respect to fopen. I
don't hear anyone arguing that it's impossible to deal with fopen
failures.
Exceptions can be used for all kinds of resource allocations.

Well, we don't actually have exceptions in C, so no, they can't. And
we're not likely to get them, either.

This whole thread seems to be based on the fallacy that if(mp != NULL)
is harder to type than if(fp != NULL).

And so I went to look for best practices, got some C code from
http://www.cpax.org.uk/prg/portable/...x.php#download

It handles malloc() failure pretty well: the function which calls
malloc() returns NULL, and its caller happily uses some default
value instead of what it's supposed to do. But it does "handle"
malloc() failure.
That could be done better, I agree. But to crash and burn would be worse,
not better.
>
Sure, from such a toy program you shouldn't expect much intelligence
(though it could fail instead of saying "all is well"), but we
are talking here "easy to type", aren't we? Or do I miss something,
and it's really not about handling errors but about typing an
"if(mp != NULL)"?

A bonus question: what happens when malloc() fails inside fgetline()
(my hypothesis: nothing, we pretend we hit EOF. But we don't "crash
and burn", which is Good)
Your hypothesis is incorrect; fgetline returns a non-zero error code for an
allocation failure (it happens to be -3, which I really ought to wrap in a
#define). A generic routine such as fgetline can't reasonably decide what
to do on allocation failure, so it has only two options - crash and burn,
or tell the caller. Crash-and-burn would be stupid, so we tell the caller.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 8 '08 #98
Richard Heathfield <rj*@see.sig.invalidwrote:
This whole thread seems to be based on the fallacy that if(mp != NULL) is
harder to type than if(fp != NULL).
No, it's not.

It's based on the fact that in most applications, dynamic memory is
allocated (and freed) a lot.
Feb 8 '08 #99
Ed Jensen said:
Richard Heathfield <rj*@see.sig.invalidwrote:
>This whole thread seems to be based on the fallacy that if(mp != NULL)
is harder to type than if(fp != NULL).

No, it's not.

It's based on the fact that in most applications, dynamic memory is
allocated (and freed) a lot.
In most applications, information comes in via streams and goes out via
streams, a lot (otherwise where are you getting your data and where are
you putting the results?). But nobody is bothering to propose a "solution
for the read/write failures problem" (and rightly so, because there isn't
such a problem). The fact that malloc is called a lot is not an excuse for
calling it badly.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Feb 8 '08 #100
