
order of bit fields

Say I have this:

struct {
    unsigned char bit7: 1;
    unsigned char bit6: 1;
    unsigned char bit5: 1;
    unsigned char bit4: 1;
    unsigned char bit3: 1;
    unsigned char bit2: 1;
    unsigned char bit1: 1;
    unsigned char bit0: 1;
};

Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
bit? Is this guaranteed by the standard, or is it implementation dependent?
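
Here's the quick check I've been using (I named the struct Bits just so
I can declare one; obviously this only shows what one particular
compiler happens to do):

#include <cstdio>
#include <cstring>

struct Bits {
    unsigned char bit7: 1;
    unsigned char bit6: 1;
    unsigned char bit5: 1;
    unsigned char bit4: 1;
    unsigned char bit3: 1;
    unsigned char bit2: 1;
    unsigned char bit1: 1;
    unsigned char bit0: 1;
};

int main() {
    Bits b = {};
    b.bit0 = 1;  // if bit0 really is the 2^0 bit, the byte should read 0x01

    unsigned char raw = 0;
    std::memcpy(&raw, &b, 1);  // inspect the object representation directly
    std::printf("raw byte = 0x%02x\n", static_cast<unsigned>(raw));
}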
Nov 1 '05 #1
6 Replies


P: n/a
"Martin Vorbrodt" <mv*******@gmail.com> wrote in message
news:oZ*****************@twister.nyc.rr.com...
> Say I have this:
>
> struct {
>     unsigned char bit7: 1;
>     unsigned char bit6: 1;
>     unsigned char bit5: 1;
>     unsigned char bit4: 1;
>     unsigned char bit3: 1;
>     unsigned char bit2: 1;
>     unsigned char bit1: 1;
>     unsigned char bit0: 1;
> };
>
> Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
> bit? Is this guaranteed by the standard, or is it implementation dependent?


The order of the bits and the amount of padding (and possibly other
details) are implementation dependent.

Ali

Nov 1 '05 #2

On Tue, 01 Nov 2005 19:02:44 GMT, "Martin Vorbrodt"
<mv*******@gmail.com> wrote in comp.lang.c++:
> Say I have this:
>
> struct {
>     unsigned char bit7: 1;
>     unsigned char bit6: 1;
>     unsigned char bit5: 1;
>     unsigned char bit4: 1;
>     unsigned char bit3: 1;
>     unsigned char bit2: 1;
>     unsigned char bit1: 1;
>     unsigned char bit0: 1;
> };
>
> Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
> bit? Is this guaranteed by the standard, or is it implementation dependent?


No, you can't. And you can't assume that the compiler will only use
an 8-bit char (assuming CHAR_BIT is 8 on your platform) to store it.
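
If you want to see what your implementation actually chose, a quick
check like this will tell you (the output is implementation-specific,
of course; the struct name Bits is just for illustration):

#include <climits>
#include <cstdio>

struct Bits {
    unsigned char bit7: 1;
    unsigned char bit6: 1;
    unsigned char bit5: 1;
    unsigned char bit4: 1;
    unsigned char bit3: 1;
    unsigned char bit2: 1;
    unsigned char bit1: 1;
    unsigned char bit0: 1;
};

int main() {
    // Nothing in the standard requires sizeof(Bits) == 1, even when
    // CHAR_BIT is 8; the allocation unit is implementation-defined.
    std::printf("CHAR_BIT = %d, sizeof(Bits) = %u\n",
                CHAR_BIT, static_cast<unsigned>(sizeof(Bits)));
}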

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 2 '05 #3

Martin Vorbrodt wrote:
> Say I have this:
>
> struct {
>     unsigned char bit7: 1;
>     unsigned char bit6: 1;
>     unsigned char bit5: 1;
>     unsigned char bit4: 1;
>     unsigned char bit3: 1;
>     unsigned char bit2: 1;
>     unsigned char bit1: 1;
>     unsigned char bit0: 1;
> };
>
> Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
> bit? Is this guaranteed by the standard, or is it implementation dependent?


The order of the bits and the size of an allocated bitfield are not
just implementation-dependent - they are implementation-defined.

Therefore, although the standard mandates no particular bit order or
allocation size of a bitfield, every C++ compiler must nonetheless
document the bit order and the allocation size of a bitfield compiled
with that compiler.
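
If you need a bit layout that is stable across compilers, the usual
advice is to avoid bitfields for the external format entirely and pack
the bits yourself with shifts and masks. A minimal sketch:

#include <cstdio>

// Define the positions explicitly instead of relying on the
// compiler's bitfield layout.
const unsigned char BIT0 = 1u << 0;  // 2^0
const unsigned char BIT7 = 1u << 7;  // 2^7

int main() {
    unsigned char flags = 0;
    flags |= BIT7;                                // set bit 7
    flags &= static_cast<unsigned char>(~BIT0);   // clear bit 0
    bool bit7_set = (flags & BIT7) != 0;          // test bit 7
    std::printf("flags = 0x%02x, bit7 = %d\n",
                static_cast<unsigned>(flags), bit7_set);
}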

Greg

Nov 2 '05 #4

"Greg" <gr****@pacbell.net> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
> Martin Vorbrodt wrote:
>> Say I have this:
>>
>> struct {
>>     unsigned char bit7: 1;
>>     unsigned char bit6: 1;
>>     unsigned char bit5: 1;
>>     unsigned char bit4: 1;
>>     unsigned char bit3: 1;
>>     unsigned char bit2: 1;
>>     unsigned char bit1: 1;
>>     unsigned char bit0: 1;
>> };
>>
>> Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
>> bit? Is this guaranteed by the standard, or is it implementation
>> dependent?
>
> The order of the bits and the size of an allocated bitfield are not
> just implementation-dependent - they are implementation-defined.
>
> Therefore, although the standard mandates no particular bit order or
> allocation size of a bitfield, every C++ compiler must nonetheless
> document the bit order and the allocation size of a bitfield compiled
> with that compiler.
>
> Greg


Do you know of two different compilers with different bit-field order? I'm
asking because so far I've tested gcc and msvc++ and they seem to be
consistent: bits go from least significant to most significant, and when
I use unsigned char for bit fields of <= 8 bits, it allocates them at a
byte boundary. Do you know a compiler I could test with that has a
radically different behaviour?

Nov 2 '05 #5

P: n/a
Martin Vorbrodt wrote:
> "Greg" <gr****@pacbell.net> wrote in message
>
> Do you know of two different compilers with different bit-field order? I'm
> asking because so far I've tested gcc and msvc++ and they seem to be
> consistent: bits go from least significant to most significant, and when
> I use unsigned char for bit fields of <= 8 bits, it allocates them at a
> byte boundary. Do you know a compiler I could test with that has a
> radically different behaviour?

Well, I've used quite a few over the years and never seen one that
didn't do this.

Ian
Nov 3 '05 #6


Martin Vorbrodt wrote:
> "Greg" <gr****@pacbell.net> wrote in message
> news:11**********************@g44g2000cwa.googlegroups.com...
>> Martin Vorbrodt wrote:
>>> Say I have this:
>>>
>>> struct {
>>>     unsigned char bit7: 1;
>>>     unsigned char bit6: 1;
>>>     unsigned char bit5: 1;
>>>     unsigned char bit4: 1;
>>>     unsigned char bit3: 1;
>>>     unsigned char bit2: 1;
>>>     unsigned char bit1: 1;
>>>     unsigned char bit0: 1;
>>> };
>>>
>>> Can I assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
>>> bit? Is this guaranteed by the standard, or is it implementation
>>> dependent?
>>
>> The order of the bits and the size of an allocated bitfield are not
>> just implementation-dependent - they are implementation-defined.
>>
>> Therefore, although the standard mandates no particular bit order or
>> allocation size of a bitfield, every C++ compiler must nonetheless
>> document the bit order and the allocation size of a bitfield compiled
>> with that compiler.
>>
>> Greg
>
> Do you know of two different compilers with different bit-field order? I'm
> asking because so far I've tested gcc and msvc++ and they seem to be
> consistent: bits go from least significant to most significant, and when
> I use unsigned char for bit fields of <= 8 bits, it allocates them at a
> byte boundary. Do you know a compiler I could test with that has a
> radically different behaviour?


The bit order tends to correlate with the endianness of the target
processor architecture: big-endian compilers tend to lay out the bits
in the reverse of the order used by a little-endian compiler.
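
A quick way to see which convention your compiler picked is to set the
first declared field and look at the resulting byte. The struct and
field names here are mine, and the commented results are typical, not
guaranteed:

#include <cstdio>
#include <cstring>

struct Probe {
    unsigned char first: 1;  // declared first
    unsigned char mid:   6;
    unsigned char last:  1;  // declared last
};

int main() {
    Probe p = {};
    p.first = 1;

    unsigned char raw = 0;
    std::memcpy(&raw, &p, 1);

    // Typical (but implementation-defined) results:
    //   little-endian targets: first field in the low bit  -> 0x01
    //   big-endian targets:    first field in the high bit -> 0x80
    std::printf("first declared field landed at 0x%02x\n",
                static_cast<unsigned>(raw));
}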

Some compilers allow the user to override the default bitfield order.
For example, Metrowerks CodeWarrior (and more recently, gcc) support a
#pragma reverse_bitfields directive that reverses the order of the
bits in a bitfield from the order that would otherwise apply.

Greg

Nov 3 '05 #7

This discussion thread is closed.