Martin Vorbrodt wrote:
"Greg" <gr****@pacbell.net> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
Martin Vorbrodt wrote: if i have this:
struct {
unsigned char bit7: 1;
unsigned char bit6: 1;
unsigned char bit5: 1;
unsigned char bit4: 1;
unsigned char bit3: 1;
unsigned char bit2: 1;
unsigned char bit1: 1;
unsigned char bit0: 1;
};
can i assume that bit0 is the lowest (2^0) and bit7 is the highest (2^7)
bit? is this guaranteed by the standard or is it implementation
dependent?
The order of the bits and the size of an allocated bitfield are not
just implementation-dependent - they are implementation-defined.
Therefore, although the standard mandates no particular bit order or
allocation size of a bitfield, every C++ compiler must nonetheless
document the bit order and the allocation size of a bitfield compiled
with that compiler.
Greg
do you know of two different compilers with different bit field order? i'm
asking because so far i've tested gcc and msvc++ and they seem to be
consistent: bits go from least significant to most significant, and when
i use unsigned char for bitfields <= 8 bits, it allocates them at a byte
boundary. do you know a compiler i could test with that has a radically
different behaviour?
The bit order tends to correlate with the endianness of the target
processor architecture. Big-endian compilers tend to lay out the bits
in an order that is the reverse of the order used by a little-endian
compiler.
Some compilers allow the user to override the default bitfield order.
For example, Metrowerks CodeWarrior (and more recently, gcc) support a
#pragma reverse_bitfields directive that will reverse the order of the
bits in a bitfield from the order that would otherwise have applied.
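A hedged sketch of how such a directive might be used (my addition: the pragma spelling is the CodeWarrior-style one named above, and its exact argument syntax and availability are assumptions to check against your compiler's documentation; the `__MWERKS__` guard lets other compilers skip it, since unknown pragmas may only warn rather than error):

```cpp
// Assumption: CodeWarrior-style pragma taking on/off, guarded so the
// struct still compiles unchanged on compilers without the pragma.
#ifdef __MWERKS__
#pragma reverse_bitfields on
#endif

struct Flags {
    unsigned char bit7 : 1;
    unsigned char bit6 : 1;
    unsigned char bit5 : 1;
    unsigned char bit4 : 1;
    unsigned char bit3 : 1;
    unsigned char bit2 : 1;
    unsigned char bit1 : 1;
    unsigned char bit0 : 1;
};

#ifdef __MWERKS__
#pragma reverse_bitfields off
#endif
```

Under the pragma, the compiler would assign the declared members to bit positions in the reverse of its default order; without it, `Flags` behaves exactly like the struct in the original question.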
Greg