Bytes | Software Development & Data Engineering Community

print binary representation

Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
Mar 24 '07 #1
Carramba wrote:
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.

Mar 24 '07 #2
On Sat, 24 Mar 2007 08:55:36 +0100, Carramba <us**@example.net> wrote:
>Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
The C Standards do not define a conversion specifier for printf() to
output in binary. The only portable way to do this is to roll your
own. Here's a start:

printf("%s\n", int_to_binary_string(my_int));

Make sure when you implement int_to_binary_string() that it works on
most desktop targets, where sizeof(int) * CHAR_BIT is 32, as well as on
many embedded targets, where sizeof(int) * CHAR_BIT is 16.

Best regards
--
jay
Mar 24 '07 #3
Carramba <us**@example.net> wrote:
>How can I output value of char or int in binary form with printf(); ?
http://c-faq.com/misc/base2.html

http://c-faq.com/misc/hexio.html

HTH,
-Beej

Mar 24 '07 #4

"Carramba" <us**@example.net> wrote in message
news:46*********************@news.luth.se...
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
#include <limits.h>
/*
convert machine number to human-readable binary string.
Returns: pointer to static string overwritten with each call.
*/
char *itob(int x)
{
    static char buff[sizeof(int) * CHAR_BIT + 1];
    int i;
    int j = sizeof(int) * CHAR_BIT - 1;

    buff[j] = 0;
    for (i = 0; i < sizeof(int) * CHAR_BIT; i++)
    {
        if (x & (1 << i))
            buff[j] = '1';
        else
            buff[j] = '0';
        j--;
    }
    return buff;
}

Call

int x = 100;
printf("%s", itob(x));

You might want something more elaborate to cut leading zeroes or handle
negative numbers.

Mar 24 '07 #5
Harald van Dijk wrote:
Carramba wrote:
>Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.
thanx, maybe you have some suggestion or link for further reading on how
to do it?
Mar 24 '07 #6
thanx ! have few questions about this code :)
Malcolm McLean wrote:
>
"Carramba" <us**@example.net> wrote in message
news:46*********************@news.luth.se...
>Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
#include <limits.h>
/*
convert machine number to human-readable binary string.
Returns: pointer to static string overwritten with each call.
*/
char *itob(int x)
{
static char buff[sizeof(int) * CHAR_BIT + 1];
why sizeof(int) * CHAR_BIT + 1 ? what does it mean?
int i;
int j = sizeof(int) * CHAR_BIT - 1;
why sizeof(int) * CHAR_BIT - 1 ? what does it mean?
>
buff[j] = 0;
for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))
buff[j] = '1';
else
buff[j] = '0';
j--;
}
return buff;
}

Call

int x = 100;
printf("%s", itob(x));

You might want something more elaborate to cut leading zeroes or handle
negative numbers.
Mar 24 '07 #7
Carramba wrote:
Harald van Dijk wrote:
Carramba wrote:
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.
thanx, maybe you have so suggestion or link for further reading on how
to do it?
Others have given code already, but here's mine anyway:

#include <limits.h>
#include <stdio.h>

void print_char_binary(char val)
{
    char mask;

    if (CHAR_MIN < 0)
    {
        if (val < 0
            || val == 0 && val & CHAR_MAX)
            putchar('1');
        else
            putchar('0');
    }

    for (mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)
        if (val & mask)
            putchar('1');
        else
            putchar('0');
}

void print_int_binary(int val)
{
    int mask;

    if (val < 0
        || val == 0 && val & INT_MAX)
        putchar('1');
    else
        putchar('0');

    for (mask = (INT_MAX >> 1) + 1; mask != 0; mask >>= 1)
        if (val & mask)
            putchar('1');
        else
            putchar('0');
}


Mar 24 '07 #8
Carramba wrote:
>
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
/* BEGIN output from new.c */

1 = 00000001
2 = 00000010
3 = 00000011
4 = 00000100
5 = 00000101
6 = 00000110
7 = 00000111
8 = 00001000
9 = 00001001
10 = 00001010
11 = 00001011
12 = 00001100
13 = 00001101
14 = 00001110
15 = 00001111
16 = 00010000
17 = 00010001
18 = 00010010
19 = 00010011
20 = 00010100

/* END output from new.c */

/* BEGIN new.c */

#include <limits.h>
#include <stdio.h>

#define STRING "%2d = %s\n"
#define E_TYPE char
#define P_TYPE int
#define INITIAL 1
#define FINAL 20
#define INC(E) (++(E))

typedef E_TYPE e_type;
typedef P_TYPE p_type;

void bitstr(char *str, const void *obj, size_t n);

int main(void)
{
    e_type e;
    char ebits[CHAR_BIT * sizeof e + 1];

    puts("\n/* BEGIN output from new.c */\n");
    e = INITIAL;
    bitstr(ebits, &e, sizeof e);
    printf(STRING, (p_type)e, ebits);
    while (FINAL > e) {
        INC(e);
        bitstr(ebits, &e, sizeof e);
        printf(STRING, (p_type)e, ebits);
    }
    puts("\n/* END output from new.c */");
    return 0;
}

void bitstr(char *str, const void *obj, size_t n)
{
    unsigned mask;
    const unsigned char *byte = obj;

    while (n-- != 0) {
        mask = ((unsigned char)-1 >> 1) + 1;
        do {
            *str++ = (char)(mask & byte[n] ? '1' : '0');
            mask >>= 1;
        } while (mask != 0);
    }
    *str = '\0';
}

/* END new.c */
--
pete
Mar 24 '07 #9
"Carramba" <us**@example.net> wrote in message
news:46*********************@news.luth.se...
thanx ! have few questions about this code :)
Malcolm McLean wrote:
>>
"Carramba" <us**@example.net> wrote in message
news:46*********************@news.luth.se...
>>Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance
#include <limits.h>
/*
convert machine number to human-readable binary string.
Returns: pointer to static string overwritten with each call.
*/
char *itob(int x)
{
static char buff[sizeof(int) * CHAR_BIT + 1];
why sizeof(int) * CHAR_BIT + 1 ? what does it mean?
If you want to put an int's binary representation into a string you need
that much space.
On many implementations sizeof(int) is 4 and CHAR_BIT is 8, so you'd need an
array of 33 chars (including the terminating null byte).

I'd use sizeof(x) instead of sizeof(int), that way you can easily change the
function to work on e.g. long long
> int i;
int j = sizeof(int) * CHAR_BIT - 1;
why sizeof(int) * CHAR_BIT - 1 ? what does it mean?
Array indices run from 0 to n-1, and the last slot is reserved for the
terminating null byte, so the digit index goes from 0 to 31 (assuming the
same sizes as above)
> buff[j] = 0;
for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))
buff[j] = '1';
else
buff[j] = '0';
j--;
}
return buff;
}

Call

int x = 100;
printf("%s", itob(x));

You might want something more elaborate to cut leading zeroes or handle
negative numbers
Bye, Jojo.
Mar 24 '07 #10
Malcolm McLean wrote:
for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))
There are some problems with that shift expression.

(1 << sizeof(int) * CHAR_BIT - 1) is undefined.

--
pete
Mar 24 '07 #11

"pete" <pf*****@mindspring.com> wrote in message
Malcolm McLean wrote:
> for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))

There are some problems with that shift expression.

(1 << sizeof(int) * CHAR_BIT - 1) is undefined.
The function should take an unsigned int. However I didn't want to add that
complication for the OP. It should work OK on almost every platform.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 24 '07 #12
Malcolm McLean wrote:
>
"pete" <pf*****@mindspring.com> wrote in message
(1 << sizeof(int) * CHAR_BIT - 1) is undefined.
The function should take an unsigned int.
That makes no difference.
The evaluation of (1 << sizeof(int) * CHAR_BIT - 1)
in a program is always undefined,
and prevents a program from being a "correct program".
(1 << sizeof(int) * CHAR_BIT - 1) can't be a positive value.
However I didn't want to add that
complication for the OP.
That expression would be perfect to use as
an example of how not to write code.
It should work OK on almost every platform.
(1u << sizeof(int) * CHAR_BIT - 1) is defined.

Your initial value of j is also wrong:

int j = sizeof(int) * CHAR_BIT - 1;

buff[j] = 0;
for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))
buff[j] = '1';
else
buff[j] = '0';

As you can see in your code above,
the first side effect of the for loop,
is to overwrite the null terminator.
/* BEGIN new.c */

#include <stdio.h>
#include <limits.h>

char *itob(unsigned x);

int main(void)
{
    printf("%s\n", itob(100));
    return 0;
}

char *itob(unsigned x)
{
    unsigned i;
    unsigned j;
    static char buff[sizeof x * CHAR_BIT + 1];

    j = sizeof x * CHAR_BIT;
    buff[j--] = '\0';
    for (i = 0; i < sizeof x * CHAR_BIT; i++) {
        if (x & (1u << i)) {
            buff[j--] = '1';
        } else {
            buff[j--] = '0';
        }
        if ((1u << i) == UINT_MAX / 2 + 1) {
            break;
        }
    }
    while (i++ < sizeof x * CHAR_BIT) {
        buff[j--] = '0';
    }
    return buff;
}

/* END new.c */
--
pete
Mar 24 '07 #13
"pete" <pf******@mindspring.com> wrote in message
Malcolm McLean wrote:
>>
"pete" <pf*****@mindspring.com> wrote in message
(1 << sizeof(int) * CHAR_BIT - 1) is undefined.
The function should take an unsigned int.

That makes no difference.
The evaluation of (1 << sizeof(int) * CHAR_BIT - 1)
in a program is always undefined,
and prevents a program from being a "correct program".
(1 << sizeof(int) * CHAR_BIT - 1) can't be a positive value.
>However I didn't want to add that
complication for the OP.

That expression would be perfect to use as
an example of how not to write code.
>It should work OK on almost every platform.

(1u << sizeof(int) * CHAR_BIT - 1) is defined.

Your initial value of j is also wrong:

int j = sizeof(int) * CHAR_BIT - 1;

buff[j] = 0;
for(i=0;i<sizeof(int) * CHAR_BIT; i++)
{
if(x & (1 << i))
buff[j] = '1';
else
buff[j] = '0';

As you can see in your code above,
the first side effect of the for loop,
is to overwrite the null terminator.
/* BEGIN new.c */

#include <stdio.h>
#include <limits.h>

char *itob(unsigned x);

int main(void)
{
printf("%s\n", itob(100));
return 0;
}

char *itob(unsigned x)
{
unsigned i;
unsigned j;
static char buff[sizeof x * CHAR_BIT + 1];

j = sizeof x * CHAR_BIT;
buff[j--] = '\0';
for (i = 0; i < sizeof x * CHAR_BIT; i++) {
if (x & (1u << i)) {
buff[j--] = '1';
} else {
buff[j--] = '0';
}
if ((1u << i) == UINT_MAX / 2 + 1) {
break;
}
}
while (i++ < sizeof x * CHAR_BIT) {
buff[j--] = '0';
}
return buff;
}

/* END new.c */
unsigned integers aren't allowed padding bits so you don't need all that
complication.
A pathological platform might break on the expression 1 << int bits - 1,
agreed. To be strictly correct we need to do the calculations in unsigned
integers, but I've explained why I didn't do that.
The off-by-one error in writing the nul was a slip. Of course I didn't
realise because the static array was zero-initialised anyway. So well
spotted.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 24 '07 #14
Malcolm McLean wrote:
unsigned integers aren't allowed padding bits
Wrong again.
unsigned char isn't allowed padding bits.
UINT_MAX is allowed to be as low as INT_MAX,
and you can't achieve that without padding bits
in the unsigned int type.

--
pete
Mar 24 '07 #15
Carramba wrote:
Harald van Dijk wrote:
>Carramba wrote:
>>Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance

There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.
thanx, maybe you have so suggestion or link for further reading on how
to do it?
Think about it!

void bits(unsigned char b, int n) {
    for (--n; n >= 0; --n)
        putchar((b & 1 << n) ? '1' : '0');
    putchar(' ');
}

Now if you call it..

bits(195, 8);

...you'll get '11000011 ' on the stdout stream.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Mar 24 '07 #16
On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
>Carramba wrote:
>Harald van Dijk wrote:
Carramba wrote:
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance

There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.
thanx, maybe you have so suggestion or link for further reading on how
to do it?

Others have given code already, but here's mine anyway:

#include <limits.h>
#include <stdio.h>

void print_char_binary(char val)
{
char mask;

if(CHAR_MIN < 0)
{
if(val < 0
|| val == 0 && val & CHAR_MAX)
When will the expression following the && evaluate to 1? Is it
something to do with ones complement or signed magnitude
representations?
putchar('1');
else
putchar('0');
}

for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)
Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
9, could SCHAR_MAX and CHAR_MAX be 173?)
if(val & mask)
putchar('1');
else
putchar('0');
}

void print_int_binary(int val)
{
int mask;

if(val < 0
|| val == 0 && val & INT_MAX)
putchar('1');
else
putchar('0');

for(mask = (INT_MAX >> 1) + 1; mask != 0; mask >>= 1)
There does not appear to be a similar requirement for INT_MAX either.
if(val & mask)
putchar('1');
else
putchar('0');
}

Remove del for email
Mar 24 '07 #17
Barry Schwarz wrote:
On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
Carramba wrote:
Harald van Dijk wrote:
Carramba wrote:
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance

There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.

thanx, maybe you have so suggestion or link for further reading on how
to do it?
Others have given code already, but here's mine anyway:

#include <limits.h>
#include <stdio.h>

void print_char_binary(char val)
{
char mask;

if(CHAR_MIN < 0)
{
if(val < 0
|| val == 0 && val & CHAR_MAX)

When will the expression following the && evaluate to 1? Is it
something to do with ones complement or signed magnitude
representations?
It accounts for ones' complement, where all bits 1 is a possible
representation of 0.

It does not account for sign and magnitude, where all value bits 0 and
sign bit 1 is a representation of 0. This will be printed as all bits
zero, which is a different representation of the same value.
putchar('1');
else
putchar('0');
}

for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)

Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
9, could SCHAR_MAX and CHAR_MAX be 173?)
[ And a similar comment for INT_MAX snipped ]

The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)

Mar 24 '07 #18
"Harald van Dijk" <tr*****@gmail.com> wrote in message
>
The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)
That's typical committee thinking. No engineer is going to devise a new
method of representing integers for the fun of it, but because there is some
technical advantage or requirement. At which point the standard becomes a
dead letter. If the super-whizzy-fibby machine needs Fibonacci
representation for its quantum coherence modulator unit, the either C can't
be used on such a machine or the rule will change. So it is a completely
pointless regulation.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 24 '07 #19
On 24 Mar 2007 10:51:58 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
>Barry Schwarz wrote:
>On 24 Mar 2007 04:42:24 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
>Carramba wrote:
Harald van Dijk wrote:
Carramba wrote:
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance

There is no standard format specifier for binary form. You will have
to do the conversion manually, testing each bit from highest to
lowest, printing '0' if it's not set, and '1' if it is.

thanx, maybe you have so suggestion or link for further reading on how
to do it?

Others have given code already, but here's mine anyway:

#include <limits.h>
#include <stdio.h>

void print_char_binary(char val)
{
char mask;

if(CHAR_MIN < 0)
{
if(val < 0
|| val == 0 && val & CHAR_MAX)

When will the expression following the && evaluate to 1? Is it
something to do with ones complement or signed magnitude
representations?

It accounts for ones' complement, where all bits 1 is a possible
representation of 0.

It does not account for sign and magnitude, where all value bits 0 and
sign bit 1 is a representation of 0. This will be printed as all bits
zero, which is a different representation of the same value.
putchar('1');
else
putchar('0');
}

for(mask = (CHAR_MAX >> 1) + 1; mask != 0; mask >>= 1)

Is it a requirement that (CHAR_MAX>>1)+1 be a power of 2? It is a
requirement for UCHAR_MAX but what if char is signed? (If CHAR_BIT is
9, could SCHAR_MAX and CHAR_MAX be 173?)

[ And a similar comment for INT_MAX snipped ]

The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)
n1124 says that UCHAR_MAX must be equal to 2^CHAR_BIT-1 which I
mentioned in my question. For SCHAR_MAX, there is no such
requirement. It is required to be at least (minimum value) 127 which
is 2^7-1 but for larger values of CHAR_BIT there is no additional
restriction. Again, if CHAR_BIT is 9, could SCHAR_MAX and CHAR_MAX be
173?
Remove del for email
Mar 24 '07 #20
Barry Schwarz wrote:
On 24 Mar 2007 10:51:58 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
wrote:
The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)

n1124 says that UCHAR_MAX must be equal to 2^CHAR_BIT-1 which I
mentioned in my question. For SCHAR_MAX, there is no such
requirement. It is required to be at least (minimum value) 127 which
is 2^7-1 but for larger values of CHAR_BIT there is no additional
restriction. Again, if CHAR_BIT is 9, could SCHAR_MAX and CHAR_MAX be
173?
Again, no. The only allowed representation systems for signed integers
are those three. If SCHAR_MAX is 173 (or if INT_MAX is 99999), then
the representation system cannot be one of those three, so the
implementation would violate 6.2.6.2p2. The fact that there is an
explicit statement that UCHAR_MAX must equal 2^CHAR_BIT - 1 doesn't
seem relevant to me, because unsigned char is already a special case
(as the only integer type that may not contain trap representations or
padding bits).

Mar 24 '07 #21
Carramba <us**@example.net> wrote:
# Hi!
#
# How can I output value of char or int in binary form with printf(); ?

You might sprintf with %x and then just map the hex digits
to four character strings.

--
SM Ryan http://www.rawbw.com/~wyrmwif/
Quit killing people. That's high profile.
Mar 24 '07 #22
On 24 Mar 2007 14:39:23 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
>Barry Schwarz wrote:
>On 24 Mar 2007 10:51:58 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
>The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)

n1124 says that UCHAR_MAX must be equal to 2^CHAR_BIT-1 which I
mentioned in my question. For SCHAR_MAX, there is no such
requirement. It is required to be at least (minimum value) 127 which
is 2^7-1 but for larger values of CHAR_BIT there is no additional
restriction. Again, if CHAR_BIT is 9, could SCHAR_MAX and CHAR_MAX be
173?

Again, no. The only allowed representation systems for signed integers
are those three. If SCHAR_MAX is 173 (or if INT_MAX is 99999), then
the representation system cannot be one of those three, so the
implementation would violate 6.2.6.2p2. The fact that there is an
explicit statement that UCHAR_MAX must equal 2^CHAR_BIT - 1 doesn't
seem relevant to me, because unsigned char is already a special case
(as the only integer type that may not contain trap representations or
padding bits).
I can find no requirement that every possible bit combination must be
a valid value. The fact that a signed 9-bit char can support a value
larger than 173 in any of the three allowed representations doesn't
mean it has to.
Remove del for email
Mar 24 '07 #23
Carramba wrote:
>
How can I output value of char or int in binary form with printf(); ?
#include <stdio.h>
#include <stdlib.h>

static void binprt(long i) {
    if (i / 2) binprt(i / 2);
    putchar('0' + i % 2);
} /* binprt */

/* ----------------- */

int main(int argc, char* *argv) {
    long x;

    if ((argc != 2) || (1 != sscanf(argv[1], "%ld", &x))) {
        fprintf(stderr, "Usage: binprt value\n");
        exit(EXIT_FAILURE);
    }
    binprt(x);
    putchar('\n');
    return 0;
}

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Mar 24 '07 #24
Barry Schwarz wrote:
I can find no requirement that every possible bit combination must be
a valid value. The fact that a signed 9-bit char can support a value
larger than 173 in any of the three allowed representations doesn't
mean it has to.
There are not three allowable representations for positive values
of signed types.

N869
6.2.6.2 Integer types

[#2] For signed integer types, the bits of the object
representation shall be divided into three groups: value
bits, padding bits, and the sign bit. There need not be any
padding bits; there shall be exactly one sign bit. Each bit
that is a value bit shall have the same value as the same
bit in the object representation of the corresponding
unsigned type (if there are M value bits in the signed type
and N in the unsigned type, then M<=N). If the sign bit is
zero, it shall not affect the resulting value.

The only description of value bits,
is in the description of the representation of unsigned types,
so, value bits in the signed type
should behave the same way for positive values.

--
pete
Mar 25 '07 #25
Barry Schwarz wrote:
On 24 Mar 2007 14:39:23 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
Barry Schwarz wrote:
On 24 Mar 2007 10:51:58 -0700, "Harald van Dijk" <tr*****@gmail.com>
wrote:
The only allowed representation systems for signed integer types are
two's complement, ones' complement, and sign and magnitude. All three
have the maximum value as a power of two minus one. (IIRC, this is new
in C99, but it was added because there were no other systems even
though C90 allowed it.)

n1124 says that UCHAR_MAX must be equal to 2^CHAR_BIT-1 which I
mentioned in my question. For SCHAR_MAX, there is no such
requirement. It is required to be at least (minimum value) 127 which
is 2^7-1 but for larger values of CHAR_BIT there is no additional
restriction. Again, if CHAR_BIT is 9, could SCHAR_MAX and CHAR_MAX be
173?
Again, no. The only allowed representation systems for signed integers
are those three. If SCHAR_MAX is 173 (or if INT_MAX is 99999), then
the representation system cannot be one of those three, so the
implementation would violate 6.2.6.2p2. The fact that there is an
explicit statement that UCHAR_MAX must equal 2^CHAR_BIT - 1 doesn't
seem relevant to me, because unsigned char is already a special case
(as the only integer type that may not contain trap representations or
padding bits).

I can find no requirement that every possible bit combination must be
a valid value. The fact that a signed 9-bit char can support a value
larger than 173 in any of the three allowed representations doesn't
mean it has to.
Hmm, okay, there is explicit permission for what would otherwise be
negative zero to be a trap representation, but perhaps that is
redundant too.

The rationale does say that "any result of bitwise manipulation
produces an integer result which can be printed by printf", but since
that is already incorrect for other reasons, it may not apply to 99999
| 16384 either (which would overflow if INT_MAX is 99999).

Mar 25 '07 #26
The following example is from the Users' Reference to B by Ken Thompson:

/* The following function will print a non-negative number, n, to
the base b, where 2<=b<=10, This routine uses the fact that
in the ASCII character set, the digits 0 to 9 have sequential
code values. */

printn(n,b) {
    extrn putchar;
    auto a;

    if(a=n/b) /* assignment, not test for equality */
        printn(a, b); /* recursive */
    putchar(n%b + '0');
}

Simply adding "void" before "printn", replacing "extrn putchar" with
"#include <stdio.h>" outside the function, and replacing "auto" with "int"
translates it into C.

"Carramba" <us**@example.net> wrote in message
news:46*********************@news.luth.se...
Hi!

How can I output value of char or int in binary form with printf(); ?

thanx in advance

Mar 29 '07 #27
