
Test the validity of each argument in a function

I have a function foo, shown below. Is it a good idea to test each
argument against my assumption? I think it is safer. However, I
notice that people usually don't test the validity of arguments.

For example,
#include <stdio.h>
#include <stdlib.h>

#define MAX_SIZE 100000

int foo(double *x, unsigned ling sz, int option)
{
    int val;

    if (x == NULL) { /* test against NULL */
        printf("foo: x cannot be NULL\n");
        exit(EXIT_FAILURE);
    }

    if (sz > MAX_SIZE) { /* test against too large sz */
        printf("foo: sz=%lu too large\n", sz);
        exit(EXIT_FAILURE);
    }

    if (option != 0 && option != 1) { /* test against invalid option */
        printf("foo: option=%d invalid\n", option);
        exit(EXIT_FAILURE);
    }

    /* do main things */

    return val;
}
Jun 27 '08 #1
4 Replies


is*********@gmail.com said:
>I have a function foo, shown below. Is it a good idea to test each
>argument against my assumption?
Yes, it's a good idea. The action you should take depends on whose fault it
is. If the argument violates the assumption, either the program is wrong
(because it made an incorrect assumption) or the data value is wrong
(because it doesn't meet the criteria for well-formed data).

For example, if you know for sure that your function can't ever be passed
NULL in a pointer argument because the program is designed to prevent
that, and if it *is* passed NULL, then that's a program bug. If, on the
other hand, you're processing a file containing an age field, and the
value stored in it is negative, then clearly that's a data error.

Validate program assumptions with an assertion.

Validate data criteria by handling the error as best you can - typically by
returning an error code to the calling function.
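
A minimal sketch of that split, using the age-field example above (the
function name and error code here are illustrative, not a fixed design):

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical example: parse an age field read from a file.
       A NULL pointer here would be a program bug (assert); a
       negative or absurd age is a data error (error code). */
    int parse_age(const char *field, int *age_out)
    {
        long age;

        assert(field != NULL);    /* program assumption */
        assert(age_out != NULL);  /* program assumption */

        age = strtol(field, NULL, 10);
        if (age < 0 || age > 150)
            return -1;            /* data error: let the caller decide */

        *age_out = (int)age;
        return 0;                 /* success */
    }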

>I think it is safer. However, I
>notice that people usually don't test the validity of arguments.
Yes, people are lazy like that. Sometimes they don't even use const!
Nevertheless, defensive programming is wise. Just because many people
aren't wise, that doesn't mean you shouldn't be.
>For example,
>#define MAX_SIZE 100000
>int foo(double *x, unsigned ling sz, int option)
>{
>    int val;
>
>    if (x == NULL) { /* test against NULL */
>        printf("foo: x cannot be NULL\n");
>        exit(EXIT_FAILURE);
If x can't be NULL because it's *impossible* for x to be NULL, then:

    assert(x != NULL);

If x can't be NULL because this would mean the program's input is
incorrect, then return an error code instead, and let your caller worry
about how to handle it.
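
For instance, here is a sketch of foo reworked to report failure to its
caller (the status codes and the out-parameter are one illustrative
choice among several, and <assert.h> is assumed to be included):

    #include <assert.h>

    #define MAX_SIZE 100000

    #define FOO_OK          0
    #define FOO_BAD_SIZE   (-1)
    #define FOO_BAD_OPTION (-2)

    /* The computed value moves to *result so that the return
       value is free to carry a status code. */
    int foo(double *x, unsigned long sz, int option, int *result)
    {
        assert(x != NULL);       /* the design guarantees this */
        assert(result != NULL);

        if (sz > MAX_SIZE)
            return FOO_BAD_SIZE;      /* data error: caller decides */
        if (option != 0 && option != 1)
            return FOO_BAD_OPTION;

        /* do main things, storing the answer in *result */

        return FOO_OK;
    }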

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Jun 27 '08 #2

is*********@gmail.com wrote:
>I have a function foo, shown below. Is it a good idea to test each
>argument against my assumption? I think it is safer. However, I
>notice that people usually don't test the validity of arguments.
The answer depends on how much trust you place in the calling
code. If you're implementing a public library function, then a
check might make sense. If you're writing the calling code
yourself and you trust yourself to not pass invalid values, then
don't waste resources checking what you've already validated.

Your personal programming philosophy also plays a role in arriving
at an answer. My own philosophy is to detect invalid values at input
time, so that they can't "pollute" the correctness of the processing,
and to check elsewhere only when an otherwise valid value might
produce undesirable results (division by zero, the tangent of an
angle approaching infinity, etc.).
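
A small sketch of that division of labour (both function names are
illustrative): data is validated once at the input boundary, and the
interior only guards the one hazard that valid values can still
trigger.

    #include <stdio.h>

    /* Boundary: malformed input is rejected as it enters the
       program and never "pollutes" later processing. */
    static int read_value(FILE *in, double *out)
    {
        if (fscanf(in, "%lf", out) != 1)
            return -1;
        return 0;
    }

    /* Interior: the values are trusted to be well-formed, but an
       otherwise valid pair can still divide by zero, so only that
       hazard is checked here. */
    static int ratio(double num, double den, double *out)
    {
        if (den == 0.0)
            return -1;
        *out = num / den;
        return 0;
    }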

--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto/
Jun 27 '08 #3

On Sat, 12 Apr 2008 07:13:25 -0700 (PDT), is*********@gmail.com
wrote:
>I have a function foo, shown below. Is it a good idea to test each
>argument against my assumption? I think it is safer. However, I
>notice that people usually don't test the validity of arguments.
>
>For example,
>#define MAX_SIZE 100000
>int foo(double *x, unsigned ling sz, int option)
nit: ling should be long.
>{
>    int val;
>
>    if (x == NULL) { /* test against NULL */
>        printf("foo: x cannot be NULL\n");
>        exit(EXIT_FAILURE);
>    }
>
>    if (sz > MAX_SIZE) { /* test against too large sz */
>        printf("foo: sz=%lu too large\n", sz);
>        exit(EXIT_FAILURE);
>    }
>
>    if (option != 0 && option != 1) { /* test against invalid option */
>        printf("foo: option=%d invalid\n", option);
>        exit(EXIT_FAILURE);
>    }
>
>    /* do main things */
>
>    return val;
>}
There are three separate issues here: (A) should you check
arguments, (B) how should you check them, and (C) what should you
do about it if there is an error. Obviously there is quite a bit
of room for variations in style. However here are some
suggestions.

(A) Should you test the validity of the arguments? In general,
the answer is yes. If you do not, an unexpected faulty argument
will produce a mystery bug. These kinds of bugs can be doubly
hard to find because (1) you "know" the argument is okay, and (2)
the invalid argument violates the implicit assumptions in the
code.

That said, there are situations where it is reasonable to omit
checks. This can happen when the function is an internal
function within a controlled scope where callers guarantee the
validity of the arguments.

An alternative to checking arguments is to design the function so
that every argument value is acceptable - where "acceptable" may
merely mean reporting an error back to the caller.

(B) How should you test them? In my opinion, the obvious way to
do it is also the worst way to do it if one is programming on any
scale. The obvious way has the form

    if (some_condition) { some_action }

where some_condition is a failure condition, and some_action
typically consists of printing an error message and exiting with
EXIT_FAILURE.

The first thing that is wrong with this kind of code is that it
almost never gets tested adequately. (You, dear reader, always
test your code thoroughly but the TOG, the other guy, doesn't.)

The second thing (minor but a source of problems) is that the
condition is backwards; that is, what one should be doing is what
assert does - specify what should be true of the arguments.

Assert is the obvious (and useful) choice for code in a test mode
or for code that is never going to see the light of day outside
of your personal environment. In serious code, however, assert
is inadequate. My choice is to roll my own assertion macro that
is coupled with an error handler.

(C) What should you do about it if there is an error? There are
a number of things wrong with writing your own action code for
each test: (1) In the nature of things the action code is likely
to be untested; (2) Often the function is the wrong place to
decide what to do; (3) It pre-empts having a coordinated error
management policy.

My policy for programs of any size is to write an error handler
as a program-wide utility. The error handler takes care of
writing error reports to an error log file. An error report
includes information about the specific fault and system state
information. The response it takes to the error depends on
options passed to it. Thus, the default action might simply be
to write an error report and exit. However, one could have the
option to pass it a callback function that takes corrective
action so the program can continue. If one creates a my_assert
macro, another option might be to have the function return with
an error indicator to the calling function. And so on.
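
A bare-bones sketch of such a scheme (every name here, and the
handler's behaviour, is illustrative rather than a fixed design):

    #include <stdio.h>
    #include <stdlib.h>

    /* What the handler should do once the report is logged. */
    enum err_action { ERR_EXIT, ERR_RETURN };

    /* Program-wide handler: appends a report to the log file,
       then either exits or hands back an error indicator. A
       fuller version would also log system state and accept a
       corrective callback. */
    static int error_handler(const char *file, int line,
                             const char *expr, enum err_action act)
    {
        FILE *log = fopen("error.log", "a");
        if (log != NULL) {
            fprintf(log, "%s:%d: check failed: %s\n",
                    file, line, expr);
            fclose(log);
        }
        if (act == ERR_EXIT)
            exit(EXIT_FAILURE);
        return -1;            /* error indicator for the caller */
    }

    /* my_assert-style macro: states what should be true of the
       arguments, and on failure returns an error indicator from
       the enclosing (int-returning) function instead of aborting. */
    #define MY_ASSERT(expr)                                        \
        do {                                                       \
            if (!(expr))                                           \
                return error_handler(__FILE__, __LINE__, #expr,    \
                                     ERR_RETURN);                  \
        } while (0)

A function can then write MY_ASSERT(x != NULL); the failure is logged
centrally, and the policy - exit, return an error, or invoke a
callback - is decided in one place instead of at every call site.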

Your specific strategy might be quite different from mine.
However the main point I am making is that you should have a
coherent error management strategy that is robust and works for
you.
Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
Jun 27 '08 #4


<is*********@gmail.com> wrote in message news:
>I have a function foo, shown below. Is it a good idea to test each
>argument against my assumption? I think it is safer. However, I
>notice that people usually don't test the validity of arguments.
>
>For example,
>#define MAX_SIZE 100000
>int foo(double *x, unsigned ling sz, int option)
>{
>    int val;
>
>    if (x == NULL) { /* test against NULL */
>        printf("foo: x cannot be NULL\n");
>        exit(EXIT_FAILURE);
>    }
>
>    if (sz > MAX_SIZE) { /* test against too large sz */
>        printf("foo: sz=%lu too large\n", sz);
>        exit(EXIT_FAILURE);
>    }
>
>    if (option != 0 && option != 1) { /* test against invalid option */
>        printf("foo: option=%d invalid\n", option);
>        exit(EXIT_FAILURE);
>    }
>
>    /* do main things */
>
>    return val;
>}

The function will become unreadable if you do that.

Use

    assert(x != NULL);
    assert(sz <= MAX_SIZE);
    assert(option == 0 || option == 1);

instead.
There are legitimate arguments against the semantics of assert, but they are
arguments against the C standard and the conventions now in force.

Almost always you want to take drastic action against errors. No results are
better than wrong results. The only exception I can think of is video games,
where it does sometimes make sense to suppress errors in the hope that the
game goes on and the program recovers.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Jun 27 '08 #5
