Bytes IT Community

%d,%u,%c

Hello all:

Can I think of %d, %u, %c as decoders? That is, %d uses two's
complement to decode whatever is stored in the variable, %u uses a
straightforward binary-to-decimal decoder, and %c uses an ASCII
decoder.

The question is: how do %d and %u know how many bytes to decode?
Consider

  int i[2] = {1, 2};
  int *p = &i[0];

If you want to print the value of *p to the screen using %d or %u,
how do they know how many bytes to decode?
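For reference, here is the snippet fleshed out into something compilable (a sketch; `first_element` is just an illustrative helper name, not anything standard):

```c
#include <stdio.h>

/* The snippet from the question, wrapped in a helper.  *p has type
   int, so %d is the matching conversion specifier: printf will pull
   exactly one int's worth of value from its argument list. */
int first_element(void)
{
    int i[2] = {1, 2};
    int *p = &i[0];
    return *p;
}
```

Calling `printf("%d\n", first_element());` then prints 1.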

Oct 2 '06 #1
5 Replies


iamchiawei...@gmail.com wrote:
> Can I think of %d, %u, %c as decoders? That is, %d uses two's
> complement to decode whatever is stored in the variable, %u uses a
> straightforward binary-to-decimal decoder, and %c uses an ASCII
> decoder.
>
> The question is: how do %d and %u know how many bytes to decode?
> Consider
>
>   int i[2] = {1, 2};
>   int *p = &i[0];
>
> If you want to print the value of *p to the screen using %d or %u,
> how do they know how many bytes to decode?
Good question. When the compiler sees the "printf" function call, it's
not supposed to know that the name "printf" is special, so it has no
choice but to assume it's some regular old function. Well, that's not
exactly right: it USED to be that way, but somewhere along the line a
prototype for printf() snuck in, and the compiler knows it's a
function with a variable number of arguments after the first one.

So the poor old C compiler does what it does with any function's
arguments: marshals and converts them as is standard. Basically, your
chars and shorts, and ints get passed as chars and ints and ints.
Longs get passed as longs. Floats get promoted to double.
The printf function "knows" all this, so when it sees "%d" it knows to
peek into the argument list for an int-sized thingy; for "%f" it knows
to peek for a double-sized thingy, and so on.
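Those promotions can be sketched directly (`format_promoted` is an illustrative name; it writes into a buffer so the result can be inspected, and the 65 assumes an ASCII execution character set):

```c
#include <stdio.h>

/* Default argument promotions in a variadic call: char and short
   arrive as int, float arrives as double.  The specifiers must
   match the *promoted* types, so %d covers c and s, and %f covers f. */
int format_promoted(char *out, size_t cap)
{
    char  c = 'A';    /* passed as int    */
    short s = 42;     /* passed as int    */
    float f = 1.5f;   /* passed as double */

    return snprintf(out, cap, "%d %d %f", c, s, f);
}
```

On an ASCII system this produces "65 42 1.500000".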

Now in MOST languages the file I/O commands ARE known to the compiler,
so it's a bit better able to match up format specs with actual
parameters. Not perfect, but better.
One could argue all day whether this is a good thing or not (it's
both, IMHO).

In C, the compiler has to have blind faith that you have carefully and
exactly matched up % format specs with corresponding variables. If
you're just a bit off, printf() will be grabbing for ints where the
parameter list has longs, or worse, generating tons of garbage output
and maybe a seg-fault or two. That's just the way things are in
C-land, it's a rite of passage.
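To make that failure mode concrete, here is a sketch of the correct pairing, with the mismatched version left in a comment since it is undefined behavior (`format_long` is an illustrative name):

```c
#include <stdio.h>

/* %ld matches a long argument.  Using %d here instead would tell
   printf to fetch an int while a long sits in the argument list:
   undefined behavior that may appear to "work" on one ABI and print
   garbage or crash on another.
       snprintf(out, cap, "%d", value);    <-- wrong, do not do this */
int format_long(char *out, size_t cap, long value)
{
    return snprintf(out, cap, "%ld", value);
}
```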

Oct 2 '06 #2

In article <11**********************@k70g2000cwa.googlegroups.com>,
<ia***********@gmail.com> wrote:
> Can I think of %d, %u, %c as decoders? That is, %d uses two's
> complement to decode whatever is stored in the variable, %u uses a
> straightforward binary-to-decimal decoder, and %c uses an ASCII
> decoder.
Hmmmm.

You have not taken into account the possibility that integral
values might be stored in one's complement or sign-magnitude
form (both still permitted in C99), or, in C89, any other
representation, such as ternary.

Similarly, in C89 %u is not necessarily a -binary- to decimal conversion.

In both C89 and C99, %c is never an ASCII converter: it is just
a byte-level writer. In some implementations, the basic execution
environment might -happen- to use ASCII for the minimal character
set, but that isn't certain -- for example, it could be using
EBCDIC. Then there is the possibility that the character set is
ISO 8859-1 or otherwise, and that there might be non-ASCII characters
available. But with very few exceptions, %c on output does not
convert the char values at all; it merely stores whatever bit pattern
is already there, whether those bits happen to represent ASCII,
something else completely, or aren't even defined printable
characters for the locale. The exceptions have to do with some
of the \ sequences: \a \n \b \t have to convert into the closest
the implementation can come to the actions defined for those
particular sequences.
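A small sketch of that byte-level behavior (`format_byte` is an illustrative name; the 'A' result assumes an ASCII execution character set):

```c
#include <stdio.h>

/* %c does no character-set conversion: it emits the value of its
   (promoted) int argument as a single byte.  65 comes out as 'A'
   only because this machine happens to use ASCII; under EBCDIC the
   same value would be a different character. */
int format_byte(char *out, size_t cap, int value)
{
    return snprintf(out, cap, "%c", value);
}
```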

> The question is: how do %d and %u know how many bytes to decode?
> Consider
>
>   int i[2] = {1, 2};
>   int *p = &i[0];
>
> If you want to print the value of *p to the screen using %d or %u,
> how do they know how many bytes to decode?
The knowledge is built into the implementation.
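One way to see where that built-in knowledge lives is a toy formatter that handles only %d (a sketch; `tiny_format` is an illustrative name, not how any real printf is implemented):

```c
#include <stdarg.h>
#include <stdio.h>

/* The '%d' in the format string is what tells the function to pull
   exactly one int with va_arg -- that is the only "how many bytes"
   knowledge there is, and it is baked into the implementation. */
void tiny_format(char *out, size_t cap, const char *fmt, ...)
{
    va_list ap;
    size_t pos = 0;

    va_start(ap, fmt);
    while (*fmt && pos + 12 < cap) {   /* leave room for digits + NUL */
        if (fmt[0] == '%' && fmt[1] == 'd') {
            pos += (size_t)snprintf(out + pos, cap - pos,
                                    "%d", va_arg(ap, int));
            fmt += 2;
        } else {
            out[pos++] = *fmt++;
        }
    }
    va_end(ap);
    out[pos] = '\0';
}
```

Calling `tiny_format(buf, sizeof buf, "value: %d", 42)` leaves "value: 42" in buf.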

--
All is vanity. -- Ecclesiastes
Oct 2 '06 #3

In article <11**********************@i42g2000cwa.googlegroups.com>,
"Ancient_Hacker" <gr**@comcast.net> wrote:
> In C, the compiler has to have blind faith that you have carefully
> and exactly matched up % format specs with corresponding variables.
Well, it doesn't _have_ to - a compiler is allowed to complain about
anything it feels like - but it does still have to compile the
program (and the program has to execute successfully, _provided_ the
call to printf is never reached).
Oct 2 '06 #4

Ancient_Hacker wrote:
>
.... snipadoodle ...
> So the poor old C compiler does what it does with any function's
> arguments: marshals and converts them as is standard. Basically,
> your chars and shorts, and ints get passed as chars and ints and
> ints. Longs get passed as longs. Floats get promoted to double.
> The printf function "knows" all this, so when it sees "%d" it
> knows to peek into the argument list for an int-sized thingy; for
> "%f" it knows to peek for a double-sized thingy, and so on.
Correction: chars also get passed as ints. At least in C.
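That correction can be shown directly with stdarg: a char passed through `...` must be retrieved as an int (a sketch; `first_vararg_as_int` is an illustrative name):

```c
#include <stdarg.h>

/* A char argument undergoes the default promotion to int, so the
   callee must read it back with va_arg(ap, int); va_arg(ap, char)
   would be undefined behavior. */
int first_vararg_as_int(int count, ...)
{
    va_list ap;
    int v;

    va_start(ap, count);
    v = va_arg(ap, int);   /* the promoted char arrives as an int */
    va_end(ap);
    return v;
}
```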

--
Some informative links:
<news:news.announce.newusers>
<http://www.geocities.com/nnqweb/>
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/>

Oct 2 '06 #5

ia***********@gmail.com wrote:
> Can I think of %d, %u, %c as decoders? That is, %d uses two's
> complement to decode whatever is stored in the variable, %u uses a
> straightforward binary-to-decimal decoder, and %c uses an ASCII
> decoder.
>
> The question is: how do %d and %u know how many bytes to decode?
> Consider
>
>   int i[2] = {1, 2};
>   int *p = &i[0];
>
> If you want to print the value of *p to the screen using %d or %u,
> how do they know how many bytes to decode?
Not decoders; they are conversion specifiers. The *printf() functions
convert binary values to text in specified ways so that we can
'print' them to the screen or to a file in a formatted way.

%d says: expect an int; print it in decimal, with a minus sign if
negative.

%u says: expect an unsigned int; print it in decimal. It is never
negative.

%c says: expect a single character; print that character.
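The three specifiers side by side, as a sketch (`format_all` is an illustrative name; it writes into a buffer so the result can be inspected):

```c
#include <stdio.h>

/* %d: signed decimal.  %u: unsigned decimal.  %c: one character
   (the char argument is promoted to int on the way in anyway). */
int format_all(char *out, size_t cap)
{
    int      d = -7;
    unsigned u = 7u;
    int      c = 'x';

    return snprintf(out, cap, "%d %u %c", d, u, c);
}
```

This produces "-7 7 x".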

This is neither Brain Science nor Rocket Surgery. You need to read more.
Maybe K&R2 for a good start.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Oct 4 '06 #6
