Bytes IT Community

Determine calling function


Hello,

I have implemented a small library with a function and a datatype to
manage temporary storage, and hand out correctly cast storage. The
function to get a double pointer is, for instance:

double *work_get_double(work_type *work, size_t size);
Now, if the work area is not sufficiently large, the function fails,
with a call to abort. In the case of failure, I would *very much*
like to know the name of the function calling work_get_double(), so if
function foo() calls work_get_double(), and the request can not be
satisfied, I would like work_get_double() to fail with something like:

fprintf(stderr, "Sorry, call from function %s could not be satisfied. Aborting\n", CALLER_NAME);
abort();

Where CALLER_NAME, in this case, would resolve to 'foo'. Is something
like this possible (without actually passing in the function name manually)?
Best Regards

Joakim Hove
--
Joakim Hove
hove AT ntnu.no /
Tlf: +47 (55 5)8 27 13 / Stabburveien 18
Fax: +47 (55 5)8 94 40 / N-5231 Paradis
http://www.ift.uib.no/~hove/ / 55 91 28 18 / 92 68 57 04
Nov 15 '05 #1
21 Replies


You need to explicitly pass the caller name to the function as a
separate arg!
To get a fn. name at runtime you can use the macro __FUNC__
(implementation-dependent)

- Ravi
Joakim Hove wrote:
[original question snipped]

Nov 15 '05 #2


Hello,
You need to explicitly pass the caller name to the function as a
separate arg!
OK - I was afraid of that.
To get a fn. name at runtime you can use the macro __FUNC__
(implementation-dependent)


OK - that is at least better than fully manually typing in the
function names. On linux/gcc-3.2.3 the identifier __FUNCTION__ is
defined.

Thanks for your help.
Joakim :-)
Nov 15 '05 #3

Joakim Hove wrote:
Hello,

You need to explicitly pass the caller name to the function as a
separate arg!

OK - I was afraid of that.

To get a fn. name at runtime you can use the macro __FUNC__
(implementation-dependent)

OK - that is at least better than fully manually typing in the
function names. On linux/gcc-3.2.3 the identifier __FUNCTION__ is
defined.


In C99, __func__ is standard.
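[A minimal illustration of the C99 `__func__` identifier; the function names here are invented for the example:]

```c
#include <stdio.h>
#include <string.h>

/* __func__ is a predefined identifier in C99: inside any function
   body it behaves like a static const char array holding that
   function's name, so it may safely be returned to the caller. */
const char *whoami(void)
{
    return __func__;    /* evaluates to "whoami" here */
}

void report(void)
{
    /* prints "in function report" */
    fprintf(stderr, "in function %s\n", __func__);
}
```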
Nov 15 '05 #4

ajm
Of course you can "overload" work_get_double by defining a macro that
inserts the __func__ argument on your behalf, and encourage users to use
the macro, e.g.,

double *work_get_double_impl(work_type *work, size_t size, const char *caller)
{
    ... your implementation as it currently is ...
}

and then define

#define work_get_double(a,b) work_get_double_impl((a),(b),__func__)

so that when users call work_get_double(a,b) the caller name is passed
"automatically".

The usual function-like macro caveats apply, but the above is not uncommon.

hth,
ajm.
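[Put together, a compilable sketch of this wrapper-macro pattern might look as follows; the work_type layout, buffer size, and foo() are invented stand-ins for the poster's actual library:]

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for the poster's work area type. */
typedef struct {
    double buffer[64];
    size_t capacity;        /* number of doubles available */
} work_type;

/* The real implementation takes the caller's name as an extra argument. */
double *work_get_double_impl(work_type *work, size_t size,
                             const char *caller)
{
    if (size > work->capacity) {
        fprintf(stderr,
                "Sorry, call from function %s could not be satisfied. "
                "Aborting\n", caller);
        abort();
    }
    return work->buffer;
}

/* Callers use the macro, which splices in __func__ automatically. */
#define work_get_double(w, s) work_get_double_impl((w), (s), __func__)

void foo(work_type *w)
{
    double *d = work_get_double(w, 16);  /* caller name "foo" is passed */
    d[0] = 3.14;
}
```

On failure the diagnostic names the macro's call site, which is what the original question asked for.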

Nov 15 '05 #5

On 14-10-2005, Joakim Hove <ho**@ntnu.no> wrote:
[original question snipped]


A common solution is to use a macro:
#define WORK_GET_DOUBLE(work, size) work_get_double(work, size, __func__)

Marc Boyer
Nov 15 '05 #6


Hello,
A common solution is to use a macro:
#define WORK_GET_DOUBLE(work, size) work_get_double(work, size, __func__)


thanks to both ajm and Marc for the macro based solution; I see that it
can/will work, but I am somewhat (maybe undeservedly so?) wary of
macros.

Joakim

Nov 15 '05 #7

On 15-10-2005, Joakim Hove <ho**@ntnu.no> wrote:
A common solution is to use a macro:
#define WORK_GET_DOUBLE(work, size) work_get_double(work, size, __func__)


thanks to both ajm and Marc for the macro based solution; I see that it
can/will work, but I am somewhat (maybe undeservedly so?) wary of
macros.


You can. Macros have a lot of (well?) known drawbacks, but
sometimes they are the best (or least bad) compromise.

Marc Boyer
Nov 15 '05 #8


Joakim Hove wrote:
Now, if the work area is not sufficiently large, the function fails,
with a call to abort. In the case of failure, I would *very much*
like to know the name of the function calling work_get_double(), ...


Since you plan to *abort* anyway then, assuming abort() doesn't do
any special cleanup or printing, the minimal keystroke approach
which *I* often use is to core-dump.
The function names on the call stack are usually the first thing
printed by a debugger, e.g. in response to "where" when running
gdb. How to coredump? On most systems the simplest approach
would be to *ignore* memory allocation failure and just attempt
to use the (presumably null) pointer!

Of course one would never do things this way in *delivered* code,
but good delivered code probably shouldn't suffer from allocation
failures, at least the kind that lead to abort. I'm guessing
that you're *developing* code that isn't *yet* at the perfection
level you seek.

I'll bet 15-to-1 this message draws very hot flames, but frankly
I find the mantra "Always check malloc()'s return code" to be
very much over-dogmatic in many contexts. (For one thing, an OS
like Linux allows *so much* memory to be malloc()'ed that it will
thrash painfully long before malloc() actually "fails".)

The dogma seems particularly silly in cases like yours where
referencing the invalid pointer achieves *precisely* what you
want: a core dump, with visible call stack, etc.

Detractors will point out that a similar effect can be achieved
without violating the dogma. I reply: Yes, but spend the extra
coding minutes checking for a different error that *might
actually occur*, or where special diagnostic prints might be useful.

Donning my asbestos suit...
James D. Allen

Nov 15 '05 #9

Joakim Hove <ho**@ntnu.no> writes:
[original question snipped]


OT: Run a debugger like gdb when the program is aborted, and inspect the
call stack.

/Niklas Norrthon
Nov 15 '05 #10

"James Dow Allen" <jd*********@yahoo.com> writes:
[...]
I'll bet 15-to-1 this message draws very hot flames, but frankly
I find the mantra "Always check malloc()'s return code" to be
very much over-dogmatic in many contexts. (For one thing, an OS
like Linux allows *so much* memory to be malloc()'ed that it will
thrash painfully long before malloc() actually "fails".)

The dogma seems particularly silly in cases like yours where
referencing the invalid pointer achieves *precisely* what you
want: a core dump, with visible call stack, etc.
Maybe. Remember that dereferencing an invalid pointer invokes
undefined behavior. Checking the result of malloc() catches the error
immediately; the first attempt to dereference the pointer might not
immediately follow the call to malloc(). Also, if the first thing you
do is assign a value to a member of an allocated structure, you're not
necessarily dereferencing a null pointer (address 0, or whatever NULL
happens to be). I don't know (or care) how much memory starting at
address 0 (or whatever NULL happens to be) is protected on a given
system.
Detractors will point out that a similar effect can be achieved
without violating the dogma. I reply: Yes, but spend the extra
coding minutes checking for a different error that *might
actually occur*, or where special diagnostic prints might be useful.


That would make sense if you had an exclusive choice between checking
for malloc() failures and checking for other errors. There's no good
reason not to do both.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #11


In article <ln************@nuthaus.mib.org>, Keith Thompson <ks***@mib.org> writes:
"James Dow Allen" <jd*********@yahoo.com> writes:
I haven't seen James' original post for some reason, so I'll reply
to Keith's.
I'll bet 15-to-1 this message draws very hot flames, but frankly
I find the mantra "Always check malloc()'s return code" to be
very much over-dogmatic in many contexts. (For one thing, an OS
like Linux allows *so much* memory to be malloc()'ed that it will
thrash painfully long before malloc() actually "fails".)
[OT] Not if process limits are set properly. In a well-administered
system, or even one where the user has a modicum of sense, the
process will be running with a reasonable data-size limit, and a malloc
for an unreasonable amount - e.g. because of an arithmetic error -
will fail.
The dogma seems particularly silly in cases like yours where
referencing the invalid pointer achieves *precisely* what you
want: a core dump, with visible call stack, etc.


Maybe. Remember that dereferencing an invalid pointer invokes
undefined behavior.


Indeed.
Detractors will point out that a similar effect can be achieved
without violating the dogma. I reply: Yes, but spend the extra
coding minutes checking for a different error that *might
actually occur*, or where special diagnostic prints might be useful.


"Rather than fastening your seatbelt, use that time to check for
flat tires."
That would make sense if you had an exclusive choice between checking
for malloc() failures and checking for other errors. There's no good
reason not to do both.


There's laziness. If popularity defines goodness, this is one of
the best reasons of all.

I'm puzzled by these attempts to rationalize omitting error checking.
If people don't want to write explicit error checks, surely C is the
wrong language for them to use. There are plenty of exception-
throwing languages available.

--
Michael Wojcik mi************@microfocus.com

This record comes with a coupon that wins you a trip around the world.
-- Pizzicato Five
Nov 15 '05 #12

Joakim Hove wrote:
[original question snipped]

Though not possible within the realms of standard C, there
should be an implementation-specific way of getting this information.
You should check your compiler/OS support. Typically an OS-specific call
can dump the whole call stack (a list of 32-bit pointers into the text
segment). Using a symbol table, you can then display the actual
function names in human-readable form. This way you don't have to
modify your function's callers. The main issue is getting support
for such a call-trace function (which can look into the current
stack frames and grab the PC for each frame). All of this is outside
the scope of clc, though.

Karthik



Nov 15 '05 #13

As expected, my posting elicited flames. (Also as expected, the
flames generated much heat, but little light.)

First, detractors willfully misconstrued my message. I wasn't
advocating that programmers ignore errors, or deliberately
refrain, as a general rule, from inspecting malloc()'s return
value for NULL (though I did mention, correctly, that that was
the *simplest* way to solve OP's problem). Rather I was trying
to encourage programmers to escape from following dogma mindlessly.

(Obviously checking malloc()'s return is benign compared with
much thoughtless adherence to dogma. I once saw a project
with page after page of purposeless defines like
#define SIMU_TOOLNUM_CODE QMIS_TOOLNUM_CODE
#define SIMU_TOOLTYPE_CODE QMIS_TOOLTYPE_CODE
#define SIMU_MACHTYPE_CODE QMIS_MACHTYPE_CODE
BTW, the punchline on this project, believe it or not, was
that the programming company asked for an additional $1 million
when the customer wanted to port the project to a C compiler from
the Itsy Bitsy Mechanisms Corp. which couldn't handle long
macro names!)

Certainly the idea of "Checking for errors" sounds logical,
but do you really test for zero before *every* division?
Or, for an absurder example, since fprintf() can fail, do you
always check its return code? That would be the reductio
ad absurdum of an insistence on checking malloc(), especially
given the frequent contexts where malloc() *won't* fail, or
where, failing, a core dump would be as good a diagnostic as any.

Let's see ... perhaps you malloc-checking pedants also want
to write:
if (fprintf(stderr, "Greetings galaxy\n") < 0) {
    fprintf(stderr, "fprintf() failed\n");
}

Or should that be something more like

if ((cnt = fprintf(stderr, "Greetings galaxy\n")) < 0) {
    while (fprintf(stderr, "fprintf() failed\n") < 0) {
        fprintf(special_err, "fprintf() failed again\n");
    }
} else if (cnt != strlen("Greetings galaxy\n")) {
    while (fprintf(stderr, "Unexpected strlen mismatch\n") < 0) {
        fprintf(special_err, "fprintf() unexpectedly failed\n");
    }
}
Hey folks! Have a chuckle before you click on Flame-Reply :-)
Michael Wojcik wrote:
In article <ln************@nuthaus.mib.org>, Keith Thompson <ks***@mib.org> writes:
"James Dow Allen" <jd*********@yahoo.com> writes:
The dogma seems particularly silly in cases like [OP's] where
referencing the invalid pointer achieves *precisely* what you
want: a core dump, with visible call stack, etc.
Maybe. Remember that dereferencing an invalid pointer invokes
undefined behavior.
Well, thanks, Keith, for foregoing a sarcasm like "... the compiler
is allowed to generate code that will trash your hard disk, or send
all your passwords to Tasmania via carrier pigeon."
(FWIW, if the dereference-then-trash-disk option were in
widespread use I daresay everyone in this NG, including
those who religiously test malloc()'s return, would have
lost several disks by now.)

Now I realize some of you write code for computers that
run the Chattanooga Time Sharing System, on hardware which
uses Dolly Parton's phone number as the bit pattern for
NULL pointers, and which fry your power supply whenever
you dereference Miss Parton. Even worse, some of you
write code to run under MS Windows.

But I, and many other programmers of good taste, have the
luxury that 90% of our code will *never* run on systems
other than Unix. And, UIAM, *every* version of Unix that
uses hardware memory management will dump core whenever
an *application* writes to *(NULL).

Someone is thinking: "A good programmer should be able to write non-Unix
applications or operating systems, and even to write
device drivers for brain-dead OS'es."
Been there, done that, probably before many c.l.c denizens
were born. A good surgeon should be able to do
appendectomies without anesthetic, but only as a last
resort. My life is organized well enough that I shan't
have to resort to non-Unix OS'es.

Keith wrote:
... assign a value to a member of an allocated structure, you're not
necessarily dereferencing a null pointer (address 0, or whatever NULL
happens to be). I don't know (or care) how much memory starting at
address 0 (or whatever NULL happens to be) is protected on a given
system.


You don't care but I'll tell you anyway :-)
*Every* virtual memory Unix I can recall will
dump-core on writing to *any* address from (0) to (0 + X).
(X is *over Two Billion* in a typical environment.)
[OT] Not if process limits are set properly. In a well-administered
system, or even one where the user has a modicum of sense, the
process will be running with a reasonable data-size limit ...


Is this comment directed against visitations by a runaway
malloc() (i.e., the case where the programmer neglects to free()
unused memory, or to limit runaway table growth)? I hope
it doesn't sound like bragging, but it's very rare that my
programs grow memory uncontrolledly. I'll give Michael the
benefit of the doubt and assume he's suggesting some other
purpose for setrlimit().

Throttling memory allocation is sometimes appropriate.
An example would be the hash table design I discussed in a
recent post in comp.programming:
http://groups.google.com/group/comp....a0b7c25680d2d1
where caching heuristics
change when a soft memory limit is reached.
(BTW, I *did* win the programming contest where this
caching heuristic came into use:
http://www.recmath.org/contest/Prime.../standings.php )

Obviously a *hard* setrlimit() cannot be employed for throttling
except in a very simple program -- in the contest-winning example,
the cache may continue to grow after the *soft* limit is
reached, just at a lower rate. In fact, the proper approach
to memory throttling will often be the simplest: rely strictly
on *soft* limits (either as provided by Linux setrlimit()
or by simply tracking memory allocation) with any "need" for
a *hard* limit obviated by careful high-level design.
(During the contest I sometimes stopped one of two caching
processes in order to run Acrobat in another window without
thrashing: setrlimit() would have had no value there.)

Throttling caching parameters with a *soft* setrlimit() would
be a valid example where checking malloc()'s return code is
absolutely necessary, though I don't think that is what Michael
was suggesting. There are many cases where checking malloc()
is right, and of course it's never wrong except in Quixotic
examples like the *simplest* solution for OP's problem.
I never intended to suggest otherwise. (Even *I* almost
always check malloc()'s return, if only by using the cover
mustmalloc(), but I do it primarily because it "feels good" --
like proper arrangement of white space -- rather than based
on any superstition that it has an important effect.)

My real point was, *Don't Be So Dogmatic*. The poster who
sarcastically suggested that tires should be checked *and*
seatbelts fastened isn't *wrong* of course, but must not live
in the real world, because programs like IE_Explorer are
riddled with *unchecked* errors. A better analogy would
have been the driver so engrossed in polishing the door
handle that he forgets to check the tires or oil.

Not all code is delivered. Not all code will be ported to bizarre
hardware or operating systems. Often the *simplest* approach to
debugging is to let the machine core-dump: this gives more
information than you'll ever get with printf().

And, speaking of printf ... since "repetition is the soul
of Usenet", be aware that the dogma
"malloc() might fail, so its return code must be checked"
leads to the reductio ad absurdum that every printf()
should be checked for error!

Still keeping my asbestos suit on :-)
James D. Allen

Nov 15 '05 #14


In article <11*********************@z14g2000cwz.googlegroups.com>, "James Dow Allen" <jd*********@yahoo.com> writes:
As expected, my posting elicited flames.
As far as I can tell, there were two responses to your post. If you
thought either of those were flames, I fear you're far too sensitive.
They weren't even particularly harsh.

I think you lost your bet, and I'd like my 15*0, please.
(Also as expected, the flames generated much heat, but little light.)
So you say.
First, detractors willfully misconstrued my message.
Ah, descending to ad hominem already. If indeed Keith and I (the
only two "detractors") "misconstrued" your argument, what evidence
do you have that it was willful?
I wasn't
advocating that programmers ignore errors, or deliberately
refrain, as a general rule, from inspecting malloc()'s return
value for NULL
You wrote:

"I find the mantra "Always check malloc()'s return code" to be
very much over-dogmatic in many contexts."

There's a fine line between "in many contexts" and "as a general
rule".
(though I did mention, correctly, that that was
the *simplest* way to solve OP's problem).
How do you know it's correct? I don't see anything posted by the
OP which indicates he's using a system where dereferencing a null
pointer is guaranteed to produce diagnostics including the name of
the calling function.
Rather I was trying
to encourage programmers to escape from following dogma mindlessly.
No doubt anyone doing so is grateful. However, I think it should be
clear that neither Keith nor I are advocating "following dogma
mindlessly", since we both provided arguments for our positions.
(And, incidentally, that makes them not dogma, by definition.)
Certainly the idea of "Checking for errors" sounds logical,
but do you really test for zero before *every* division?
No. What does this have to do with checking the result of malloc?
Or, for an absurder example, since fprintf() can fail, do you
always check its return code?
Yes, in production code, when the program can usefully act on the
result - so pretty much everywhere except in the logging mechanism
of last resort. (And that generally doesn't use fprintf.)
That would be the reductio
ad absurdem of an insistence on checking malloc(),
A vapid argument. Checking the result of malloc does not inevitably
generalize to checking the result of every function call.
especially
given the frequent contexts where malloc() *won't* fail, or
where, failing, a core-dump would be as good a diagnostic as any.
Since the production of a core dump is the result of undefined
behavior, relying on it makes code nonportable. Why write non-
portable code in a case where portable code is so easily achieved?
Let's see ... perhaps you malloc-checking pedants also want
to write:
And now the march of the strawmen.
Michael Wojcik wrote:
In article <ln************@nuthaus.mib.org>, Keith Thompson <ks***@mib.org> writes:
"James Dow Allen" <jd*********@yahoo.com> writes:
I'll also note, in passing, that putting part of your reply before
the attributions is rather annoying.
> The dogma seems particularly silly in cases like [OP's] where
> referencing the invalid pointer achieves *precisely* what you
> want: a core dump, with visible call stack, etc.

Maybe. Remember that dereferencing an invalid pointer invokes
undefined behavior.
But I, and many other programmers of good taste, have the
luxury that 90% of our code will *never* run on systems
other than Unix.
I see. You didn't get the flames you wanted the first time around,
so now you're tossing in more flame bait. Cute.
[OT] Not if process limits are set properly. In a well-administered
system, or even one where the user has a modicum of sense, the
process will be running with a reasonable data-size limit ...


Is this comment directed against visitations by a runaway
malloc() (i.e., the case where the programmer neglects to free()
unused memory, or to limit runaway table growth)? I hope
it doesn't sound like bragging, but it's very rare that my
programs grow memory uncontrolledly.


It doesn't sound like bragging; it sounds like you write code for
a very restricted and safe environment. That's nice for you; some
of us have to deal with more hostile conditions.
I'll give Michael the
benefit of the doubt and assumes he's suggesting some other
purpose for setrlimit().
Unprivileged processes cannot raise their own hard limits, and
those hard limits can be reduced by the parent process, or by
the current process before it begins executing your code. Maybe
you've heard of users?

[Snip off-topic ramblings regarding setrlimit.]
Throttling memory allocation is sometimes appropriate.
The only case I can think of where it's inappropriate is when a
single "process" is the only program running on the system.
My real point was, *Don't Be So Dogmatic*.


You've yet to demonstrate that anyone is being dogmatic.

--
Michael Wojcik mi************@microfocus.com

Dude, it helps to be smart if you're gonna be mean. -- Darby Conley
Nov 15 '05 #15

James Dow Allen wrote:
Joakim Hove wrote:
Now, if the work area is not sufficiently large, the function fails,
with a call to abort. In the case of failure, I would *very much*
like to know the name of the function calling work_get_double(), ...
Since you plan to *abort* anyway then, assuming abort() doesn't do
any special cleanup or printing, the minimal keystroke approach
which *I* often use is to core-dump.
The function names on the call stack are usually the first thing
printed by a debugger, e.g. in response to "where" when running
gdb. How to coredump? On most systems the simplest approach
would be to *ignore* memory allocation failure and just attempt
to use the (presumably null) pointer!

Of course one would never do things this way in *delivered* code,
but good delivered code probably shouldn't suffer from allocation
failures, at least the kind that lead to abort. I'm guessing
that you're *developing* code that isn't *yet* at the perfection
level you seek.


Very little code is at "a perfection level". Real programs,
even pretty good ones, can and do suffer from memory leaks.
Why? It is quite rare that resources are available to get
100% code coverage of a big program and to verify a lack
of memory leaks under all possible conditions. Some
error cases (that leak memory) get missed. Users do
"interesting" things, etc.

I'll bet 15-to-1 this message draws very hot flames, but frankly
I find the mantra "Always check malloc()'s return code" to be
very much over-dogmatic in many contexts. (For one thing, an OS
like Linux allows *so much* memory to be malloc()'ed that it will
thrash painfully long before malloc() actually "fails".)

The dogma seems particularly silly in cases like yours where
referencing the invalid pointer achieves *precisely* what you
want: a core dump, with visible call stack, etc.
For something easy to detect (lack of memory), you don't want
a core dump. If the program is interacting with a user, you
want a nice message (if possible) on the way out. Or if not,
you want a clear diagnostic in whatever logging mechanism your
program uses and a clean quit. A core is not called for,
even assuming that the user is running in an environment and
on a system that allows them to be dumped. Which you can't
assume. Counting on how undefined behavior works on your
system is a poor substitute for good portable coding practices.

Detractors will point out that a similar effect can be achieved
without violating the dogma. I reply: Yes, but spend the extra
coding minutes checking for a different error that *might
actually occur*, or where special diagnostic prints might be useful.
Baloney. If your program wants a monolithic way to deal with
*alloc failures, write wrappers for them that handle failures
and exit. Or abort on the spot, if you want that behavior.
Writing such a wrapper takes minutes, and using it takes no more
time than using the *alloc functions.
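[Such a wrapper is often called xmalloc; the following is a sketch of the idea, not anyone's actual code from this thread:]

```c
#include <stdio.h>
#include <stdlib.h>

/* A malloc that never returns NULL to the caller: on failure it
   prints a diagnostic and exits, so callers need no per-call
   checks. An abort() here instead of exit() would give a core
   dump on systems that produce one. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```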

Donning my asbestos suit...
James D. Allen


I think you are espousing poor coding practices. If that is a flame,
well, make the most of it. Perhaps your approach is appropriate for
whatever environment you program in/product(s) you work on. IMHO it
is not generally applicable.

-David

Nov 15 '05 #16

"James Dow Allen" <jd*********@yahoo.com> writes:
As expected, my posting elicited flames. (Also as expected, the
flames generated much heat, but little light.)
What flames? A couple of us disagreed with you, that's all. Nobody
even told you to engage your brain.
First, detractors willfully misconstrued my message. I wasn't
advocating that programmers ignore errors, or deliberately
refrain, as a general rule, from inspecting malloc()'s return
value for NULL (though I did mention, correctly, that that was
the *simplest* way to solve OP's problem). Rather I was trying
to encourage programmers to escape from following dogma mindlessly.
I most certainly did not willfully misconstrue your message.

And you know what? Sometimes following dogma mindlessly isn't such a
bad thing; it can be just another term for developing good habits.
You should understand the reasoning behind the good habits, but once
the habit is developed you don't necessarily have to think about the
underlying reason every single time.

[snip]
Let's see ... perhaps you malloc-checking pedants also want
to write:
if (fprintf(stderr, "Greetings galaxy\n") < 0) {
    fprintf(stderr, "fprintf() failed\n");
}

Or should that be something more like

if ((cnt = fprintf(stderr, "Greetings galaxy\n")) < 0) {
    while (fprintf(stderr, "fprintf() failed\n") < 0) {
        fprintf(special_err, "fprintf() failed again\n");
    }
} else if (cnt != strlen("Greetings galaxy\n")) {
    while (fprintf(stderr, "Unexpected strlen mismatch\n") < 0) {
        fprintf(special_err, "fprintf() unexpectedly failed\n");
    }
}
Hey folks! Have a chuckle before you click on Flame-Reply :-)
Ok. (*chuckle*)

The (deliberately absurd and, ok, mildly amusing) code above is not a
good example of what we're discussing. If fprintf(stderr, ...) fails
once, responding to the failure by trying again corresponds closely to
a classic definition of insanity. If we were advocating responding to
a malloc() failure by trying another malloc(), it would be a good
example.

And what should a critical program do if it's unable to write to its
log file because the disk is full? The answer depends on the
application, but it can't do anything sensible unless it checks the
result of the fprintf() call it uses to write the log file.

[...]
(FWIW, if the dereference-then-trash-disk option were in
widespread use I daresay everyone in this NG, including
those who religiously test malloc()'s return, would have
lost several disks by now.)
I lost a hard drive just a couple of weeks ago. I have no idea why.
It was likely a hardware failure, but I can't exclude the possibility
that it was the result of undefined behavior in some C code running
somewhere on the system.
Now I realize some of you write code for computers that
run the Chattanooga Time Sharing System, on hardware which
uses Dolly Parton's phone number as the bit pattern for
NULL pointers, and which fry your power supply whenever
you dereference Miss Parton. Even worse, some of you
write code to run under MS Windows.
No, but I'm not going to write code that assumes it's *not* running on
such a system without a good reason. (And yes, sometimes there are
good reasons for writing non-portable code.)
But I, and many other programmers of good taste, have the
luxury that 90% of our code will *never* run on systems
other than Unix. And, UIAM, *every* version of Unix that
uses hardware memory management will dump core whenever
an *application* writes to *(NULL).
That may be true. So if you don't bother checking the result of
malloc() you'll get a core dump when you try to write to the allocated
memory, not at the actual point of failure. If you're always careful
to write to the memory immediately after allocating it, *and* if all
your other assumptions are valid, you might get away with that.

[...]
Keith wrote:
> ... assign a value to a member of an allocated structure, you're not
> necessarily dereferencing a null pointer (address 0, or whatever NULL
> happens to be). I don't know (or care) how much memory starting at
> address 0 (or whatever NULL happens to be) is protected on a given
> system.


You don't care but I'll tell you anyway :-)
*Every* virtual memory Unix I can recall will
dump-core on writing to *any* address from (0) to (0 + X).
(X is *over Two Billion* in a typical environment.)


This turns out to be untrue.

You're asserting that, in a typical environment, writing to any
address from 0 to 0x7fffffff or so will cause a core dump.

Here's a program I just wrote on a Sun Blade 100 running Solaris 9,
compiled with gcc 4.0.2:

#include <stdio.h>
#include <stdlib.h>

char global[10000];

struct big_struct {
    char data[0x22000];
};

int main(void)
{
    struct big_struct *ptr = malloc(sizeof *ptr);
    printf("global range is %p .. %p\n",
           (void*)global,
           (void*)(global + sizeof global));
    printf("ptr = %p\n", (void*)ptr);
    if (ptr != NULL) {
        printf("malloc() succeeded, let's pretend it failed\n");
        printf("ptr = NULL;\n");
        ptr = NULL;
    }
    ptr->data[0x21fff] = 'x';
    printf("ptr->data[0x21fff] = '%c'\n", ptr->data[0x21fff]);
    return 0;
}

The numbers were chosen by trial and error by running earlier versions
of the program that just declared a global variable and printed its
starting and ending addresses.

Here's the output:

global range is 20990 .. 230a0
ptr = 230a8
malloc() succeeded, let's pretend it failed
ptr = NULL;
ptr->data[0x21fff] = 'x'

The size of the structure is just 136 kilobytes. If I tried to
malloc() such a structure, the malloc failed, and I then tried to
write to the last byte of the structure, I wouldn't have gotten a core
dump; I would have quietly clobbered some global variable. It's not
inconceivable that this could eventually result in a hard drive being
trashed.

Your whole idea that checking the result of malloc() isn't always
necessary is based on a series of assumptions. If one of those
assumptions is wrong, you're going to get undefined behavior, with all
the potentially nasty consequences that implies.

On the other hand, if you don't make such assumptions, your program is
more likely to either work properly, or to fail gracefully if it runs
out of resources.

[snip]

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 15 '05 #17

Keith Thompson wrote:
"James Dow Allen" <jd*********@yahoo.com> writes:

<snip>
(FWIW, if the dereference-then-trash-disk option were in
widespread use I daresay everyone in this NG, including
those who religiously test malloc()'s return, would have
lost several disks by now.)

I lost a hard drive just a couple of weeks ago. I have no idea why.
It was likely a hardware failure, but I can't exclude the possibility
that it was the result of undefined behavior in some C code running
somewhere on the system.

"Why C is not my favorite programming language", by Niklaus Wirth...

S.
Nov 15 '05 #18

In article <11*********************@z14g2000cwz.googlegroups.com>,
James Dow Allen <jd*********@yahoo.com> wrote:
But I, and many other programmers of good taste, have the
luxury that 90% of our code will *never* run on systems
other than Unix. And, UIAM, *every* version of Unix that
uses hardware memory management will dump core whenever
an *application* writes to *(NULL).


Some of the SGI workstations have memory mapped graphics starting
at virtual address 4096. When the operating system on those was
switched from IRIX 5 (32 bit OS) to IRIX 6.2 (runs 32 and 64 bit),
the OS page size was changed from 4K to 16K. The lowest OS page
that encompassed the memory mapped graphics then included virtual
address 0, and since that page had to be writable in order to
do memory mapped I/O, virtual address 0 ended up being writable.

To block this would have required testing most pointers before
using them, which would have slowed programs unduly, so
the systems were left with writable memory at address 0 and
nearby.
--
All is vanity. -- Ecclesiastes
Nov 15 '05 #19

Keith Thompson wrote:
"James Dow Allen" <jd*********@yahoo.com> writes:
As expected, my posting elicited flames. (Also as expected, the
flames generated much heat, but little light.)


What flames? A couple of us disagreed with you, that's all. Nobody
even told you to engage your brain.

I killfiled this twit a long while back, I suggest others do the same.


Brian
Nov 15 '05 #20

On 20 Oct 2005 03:39:26 -0700, "James Dow Allen"
<jd*********@yahoo.com> wrote:
<snip>
Certainly the idea of "Checking for errors" sounds logical,
but do you really test for zero before *every* division?
Or, for an absurder example, since fprintf() can fail, do you
always check its return code? That would be the reductio
ad absurdum of an insistence on checking malloc(), especially
given the frequent contexts where malloc() *won't* fail, or
where, failing, a core-dump would be as good a diagnostic as any.
<snip silly example; concur with Keith's analysis (no relation AFAIK)>

But on the serious underlying point, there is an important difference.
I/O errors fail cleanly: there is no UB, and they set a sticky error
flag on the stream. It is reasonable and safe to do multiple *printf
calls without checking (as long as you don't rewind or clearerr in
between) and then make a single ferror() test, or, for an fopen'ed
output file, just check the fclose().

<snip> But I, and many other programmers of good taste, have the
luxury that 90% of our code will *never* run on systems
other than Unix. And, UIAM, *every* version of Unix that
uses hardware memory management will dump core whenever
an *application* writes to *(NULL).

The first "real" (by reasonable standards) Unix, PDP-11, certainly did
not. For "normal" (single-space) mode it trashed the 407 header, which
was completely ignored. For split-I&D it trashed some data, almost
certainly basic runtime stuff (which linked first). There was (IIRC in
7ed) an option for single-space with protected code (IIRC 410) that
would trap, but with <8 * 8KB segments usually at the cost of wasting
a major amount of very dear address space.

At least some of these historically important systems are available
for hobby use, and you can actually get -11 hardware on what by now
must be a tertiary market. (Aside: what comes after that? quaternary?
quadratic?) No one would choose such a system for new development
today; and even if somehow you had to run on -11 hw or architecture
(maybe spacecraft or something) you wouldn't resurrect early Unix: it
was brilliant in its day but that day was 30 years ago. Nevertheless
it does exist and is Unix.

- David.Thompson1 at worldnet.att.net
Nov 15 '05 #21

Dave Thompson wrote:
<snip>
At least some of these historically important systems are available
for hobby use, and you can actually get -11 hardware on what by now
must be a tertiary market. (Aside: what comes after that? quaternary?
quadratic?)


Flea?

S.
Nov 15 '05 #22
