> In the short time I have spent reading this newsgroup, I have seen this
> sort of declaration a few times:
> int
> func (string, number, structure)
> char* string
> int number
> struct some_struct structure
(There should be semicolons after each of the three declarations.)
> Now, I am vaguely familiar with it; I did not read up on it because I
> have read in an apparently misinformed book that such declarations were
> very old and that no one used them any more. Can someone please explain
> why they are still using this style and why the book said it was
> obsolete?
There is no good reason to use the old style in new code today.
> Is what I have written any different than:
> int
> func (char* string, int number, struct some_struct structure) ?
Yes, there is a difference: when you use the modern "prototype"
syntax, the compiler does automatic type conversions on each argument
the same way as it does in an ordinary assignment (=) expression.
With the old syntax, the types have to match or the behavior is
undefined. So these lines:
answer = func("hello", 3.1414, strucked);
answer = func("hello", 3, strucked);
are equivalent if func() was defined using a prototype, because the
double constant 3.1414 is safely converted to int. (If func() was
defined in a separate file, of course, you also need a prototyped
declaration in scope when you call it.)
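To make that concrete, here is a minimal sketch of the prototype case.
The body of func() and the contents of struct some_struct are my own
invention, since neither appears above; only the names are taken from
your example.

#include <stdio.h>

struct some_struct { int x; };   /* hypothetical; never shown above */

/* Prototype-style definition: the parameter types are part of the
   function's type, so each caller's arguments are converted as if
   by assignment. */
int
func(char *string, int number, struct some_struct structure)
{
    printf("%s %d %d\n", string, number, structure.x);
    return number;
}

int
main(void)
{
    struct some_struct strucked = { 42 };
    int answer;

    answer = func("hello", 3.1414, strucked);  /* 3.1414 quietly converted to int 3 */
    answer = func("hello", 3, strucked);       /* same effect */
    return answer;
}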
If you used the old syntax, only the second form would work safely.
The first would cause undefined behavior -- in practice, what's
likely to happen is at least that some bits from the floating-point
value will be misinterpreted as an integer, and other arguments may
be misread as well. Similar issues arise if you pass a long int
argument (3L) on a system where int and long int have different sizes.
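For comparison, here is what the corrected old-style definition looks
like; again this is only a sketch, with an invented body:

/* Old-style ("K&R") definition of the same function, with the missing
   semicolons added.  The parameter types are not part of the function's
   type, so the compiler cannot convert the caller's arguments for you. */
int
func(string, number, structure)
char *string;
int number;
struct some_struct structure;
{
    return number;
}

/* With this definition:
      func("hello", 3, strucked);        OK: int where int is expected
      func("hello", 3.1414, strucked);   undefined behavior: a double is
                                         passed where int is expected    */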
There are some other subtle differences relating to certain argument
types such as "float" and "short".
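For example (again just a sketch, with made-up names): in a call to a
function without a prototype in scope, each argument gets the "default
argument promotions" -- float becomes double, char and short become
int -- so the only prototype that is compatible with an old-style
definition declaring a plain float parameter is one that says double:

void g(double x);   /* compatible declaration for the old-style g below */

void g(x)
float x;            /* the caller actually passes a double; it is
                       converted back to float on entry */
{
}

/* A prototype "void g(float x);" would NOT be compatible with this
   definition.  With the modern syntax, "void g(float x)" really does
   receive a float, and no promotion is involved. */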
--
Mark Brader "If you design for compatibility with a
Toronto donkey cart, what you get is a donkey cart."
ms*@vex.net -- ?, quoted by Henry Spencer
My text in this article is in the public domain.