iamchiawei...@gmail.com wrote:
Hello all:
Can I think of %d, %u, %c as decoders? That is, %d uses 2's complement
to decode whatever is stored in the variable, %u uses a straightforward
binary-to-decimal decoder, and %c uses an ASCII decoder.
The question is: how do %d and %u know how many bytes to decode?
Consider

int i[2] = {1, 2};
int *p = &i[0];

If you wanna print the value of *p to the screen using %d or %u, how
do they know how many bytes to decode?
Good question. When the compiler sees the "printf" function call, it's
not supposed to know that the name "printf" is special. So it has no
choice but to just assume it's some regular old function. Well,
that's not exactly right; it USED to be that way, but somewhere along
the line a prototype for printf() snuck in, and the compiler knows it's
a function with a variable number of arguments after the first one.
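That prototype, as declared in <stdio.h>, looks roughly like this; the
"..." is the part that tells the compiler "anything goes from here on":

int printf(const char *format, ...);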
So the poor old C compiler does what it does with any function's
arguments: it marshals and converts them as is standard. Basically your
chars, shorts, and ints all get passed as ints. Longs get passed as
longs. Floats get promoted to double.
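Concretely, this little sketch passes a char, a short, and a float, and
by the time printf() sees them they are two ints and a double:

#include <stdio.h>

int main(void)
{
    char  c = 'A';
    short s = 7;
    float f = 1.5f;

    /* c and s travel as int, f travels as double; printf() never
       actually receives a char, a short, or a float here */
    printf("%d %d %f\n", c, s, f);
    return 0;
}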
The printf function "knows" all this, so when it sees "%d" it knows to
peek into the argument list for an int-sized thingy, for "%f" it knows
to peek for a double-sized thingy, and so on.
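To make that peeking concrete, here's a toy sketch using the standard
<stdarg.h> machinery; tiny_print is a made-up name and only understands
%d and %f, but the va_arg() calls show the "grab an int-sized thingy,
grab a double-sized thingy" idea:

#include <stdarg.h>
#include <stdio.h>

/* made-up name; handles only %d and %f, no error checking */
static void tiny_print(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt != '\0'; fmt++) {
        if (*fmt == '%' && fmt[1] == 'd') {
            printf("%d", va_arg(ap, int));      /* peek for an int */
            fmt++;
        } else if (*fmt == '%' && fmt[1] == 'f') {
            printf("%f", va_arg(ap, double));   /* peek for a double */
            fmt++;
        } else {
            putchar(*fmt);
        }
    }
    va_end(ap);
}

int main(void)
{
    tiny_print("i=%d x=%f\n", 42, 3.14);
    return 0;
}

Note that tiny_print has no way to check whether the caller really
passed an int where the format says %d; it just trusts the format
string, exactly like the real printf().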
Now in MOST languages the file I/O commands ARE known to the compiler,
so it's a bit better able to match up format specs with actual
parameters. Not perfect, but better.
One could argue all day whether this is a good thing or not (it's both,
IMHO).
In C, the compiler has to have blind faith that you have carefully and
exactly matched up % format specs with corresponding variables. If
you're just a bit off, printf() will be grabbing for ints where the
parameter list has longs, or worse, generating tons of garbage output
and maybe a seg-fault or two. That's just the way things are in
C-land; it's a rite of passage.
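For instance, something like this sketch is the classic trap; the
behaviour is undefined, so what you actually get depends on the
platform and the mood of the stack:

#include <stdio.h>

int main(void)
{
    long big = 123456789L;

    /* %d tells printf() to grab an int, but a long was passed.
       On some platforms it happens to work; on others you get
       garbage or a crash. Either way it's undefined behaviour. */
    printf("%d\n", big);
    return 0;
}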