In article <2008052119135016807-ydaffaud@hotmailcom>
Yannick <yd******@hotmail.com> wrote:
>I am not quite sure to understand what is really going on when a
>function defined in one translation unit calls a function defined in a
>different translation unit without knowing its prototype.
Depending on what the correct prototype is, this *can* even be
guaranteed to work. It may well work despite a lack of (Standard)
guarantee, however (as others have described else-thread).
>Let's say for instance :
[snippage; vertical space editing]
>void foo(float a, int b) { printf("%f (0x%x) %d (0x%x)\n", a, &a, b, &b); }
...
>However, if I change the foo function definition to foo(char a, char b) ...
>the address of a and b should be *(ebp + 4) and *(ebp + 5).
Given that you mention "ebp", we can guess (without any absolute
guarantee of being right, especially someday in the future when
some other manufacturer uses that name for something entirely
different) that you refer to the x86 architecture, and one of
the common C compilers for that architecture.
>This is not the case, it appears that b is located sizeof(int)
>deeper in the stack than a so the function displays the correct values.
Most x86 compilers, most of the time, under mostly-default conditions
-- and all these "most"s are in fact important -- "just happen"
(for a number of historical reasons) to pass "float" parameters as
"double"s on the hardware-provided stack addressed via %esp (I use
this wording to distinguish it from the FPU stack). They pass
"char" parameters only after widening them to "int", again on the
hardware-provided stack addressed via %esp.
Most of these compilers (under said conditions) do this whether or
not there is a prototype present at the site of the call, and hence
produce runtime code for foo() that assumes that the "float"
parameter was widened to "double", and that the "char" parameters
were widened to "int".
C compilers do not have to do this, and if they choose not to be
compatible with "most" x86 C compilers, they *can* use some sort
of "better" parameter-passing mechanism(s). Many C compilers can
even be told to use these "better" mechanisms even while maintaining
some compatibility, either on a function-by-function basis (using
#pragma or __attribute__ or __fastcall or __other_magic_word), or
perhaps on a more global basis (by something like -mregparm=N).
The "better" conventions *can* be the *default*: since Standard C
says that the C programmer, not the implementation, is in the wrong
for failing to provide proper prototypes, Standard C allows the
code to fail in the absence of proper prototypes. It is merely
the (strong) draw of compatibility (and/or convenience) that keeps
C compiler vendors using the "worse" calling conventions that make
poorly-written, non-Standard code "work" anyway.
("Better" is in quotes since "better"-ness is necessarily somewhat
an "eye of the beholder" thing. The "best" way to do something is
a lot like the "best" flavor of ice cream.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: gmail (figure it out)
http://web.torek.net/torek/index.html