
std::bitset, standard and endianness

Hi.

q1: Is std::bitset<N> part of the standard, or is it a compiler extension?

q2: Is std::bitset::to_string() part of the standard?

q3: My documentation says this about std::bitset::to_string():

"...each character is 1 if the corresponding bit is set, and 0 if it is
not. In general, character position i corresponds to bit position N - 1 -
i..."

On my machine, the most significant bits end up at the lowest positions
in the resulting string:

unsigned int i = 12536;
std::bitset<16> bs = i;
std::string str = bs.to_string();

which gives "0011000011111000" for str

=> 0011 0000 1111 1000 == 0x30F8 == 12536

If I understand my docs correctly, on a machine with a different endianness
the same code will produce a different string. What does the standard say
about it: will the string output always have the most significant bits at
the lowest string positions, or is it like my docs say?

TIA
Jul 23 '05 #1
5 Replies


SpOiLeR wrote:
> q1: Is std::bitset<N> part of the standard, or is it a compiler extension?

Standard.

> q2: Is std::bitset::to_string() part of the standard?

Yes.
> q3: My documentation says this about std::bitset::to_string():
>
> "...each character is 1 if the corresponding bit is set, and 0 if it is
> not. In general, character position i corresponds to bit position N - 1 -
> i..."
>
> On my machine, the most significant bits end up at the lowest positions
> in the resulting string:
>
> unsigned int i = 12536;
> std::bitset<16> bs = i;
> std::string str = bs.to_string();
>
> which gives "0011000011111000" for str
>
> => 0011 0000 1111 1000 == 0x30F8 == 12536
>
> If I understand my docs correctly, on a machine with a different endianness
> the same code will produce a different string.

Your understanding is wrong.
> What does the standard say about it: will the string output always have
> the most significant bits at the lowest string positions, or is it like
> my docs say?


The Standard does not concern itself with endianness. So, yes, the most
significant bit will be at the beginning of the resulting string.
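
For example, a minimal sketch of that guarantee (using the value from your
post; any conforming compiler should give the same result on big- and
little-endian machines):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    unsigned int i = 12536;               // 0x30F8
    std::bitset<16> bs = i;
    // to_string() maps character position 0 to bit position 15 (the MSB),
    // so the most significant bit always comes first in the string:
    std::cout << bs.to_string() << '\n';  // "0011000011111000" on any platform
}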

V
Jul 23 '05 #2

SpOiLeR wrote:
> ...
> q1: Is std::bitset<N> part of the standard, or is it a compiler extension?

It is a part of the standard library.

> q2: Is std::bitset::to_string() part of the standard?

Yes.
> q3: My documentation says this about std::bitset::to_string():
>
> "...each character is 1 if the corresponding bit is set, and 0 if it is
> not. In general, character position i corresponds to bit position N - 1 -
> i..."
>
> On my machine, the most significant bits end up at the lowest positions
> in the resulting string:
>
> unsigned int i = 12536;
> std::bitset<16> bs = i;
> std::string str = bs.to_string();
>
> which gives "0011000011111000" for str
>
> => 0011 0000 1111 1000 == 0x30F8 == 12536
>
> If I understand my docs correctly, on a machine with a different endianness
> the same code will produce a different string.
No. The resulting string will contain the binary representation of the
number 12536 (decimal), possibly with extra leading zeros. In binary,
12536 is "0011000011111000". It doesn't depend on the endianness of the
hardware platform. There's nothing in the specification of 'std::bitset<>'
that depends on the endianness of the hardware platform in any way.
> What does the standard say about it: will the string output always have
> the most significant bits at the lowest string positions,

Yes.

> or is it like my docs say?


That's actually exactly what your docs say. You just have to interpret
them at a higher (logical) level.
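
For example, a small sketch of that reading (using the value from your
post): with N == 16, character position i corresponds to bit position
15 - i, so str[0] holds bit 15, the most significant bit, however the
hardware stores its bytes:

#include <bitset>
#include <cassert>
#include <string>

int main()
{
    std::bitset<16> bs(12536);          // 0x30F8
    std::string str = bs.to_string();   // "0011000011111000"

    // Character position i corresponds to bit position N - 1 - i,
    // i.e. str[0] is bit 15 (the MSB) and str[15] is bit 0.
    for (unsigned i = 0; i < 16; ++i)
        assert((str[i] == '1') == bs.test(15 - i));
}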

--
Best regards,
Andrey Tarasevich
Jul 23 '05 #3

On Mon, 14 Mar 2005 16:09:05 -0800, Andrey Tarasevich wrote:
> There's nothing in the
> specification of 'std::bitset<>' that depends on the endianness of the
> hardware platform in any way.


So, if I write something like this:

std::bitset<16> b16(12345); // 12345 can fit in 16 bits
std::bitset<8> b8_higher, b8_lower;

unsigned int i;
for (i = 0; i < 8; i++)  b8_lower[i] = b16[i];
for (i = 8; i < 16; i++) b8_higher[i - 8] = b16[i];

unsigned char low, high;

// Casts OK because standard guarantees that char contains at least 8 bits
low  = static_cast<unsigned char>(b8_lower.to_ulong());
high = static_cast<unsigned char>(b8_higher.to_ulong());

Is it guaranteed that low will contain the less significant bits of the
number contained in b16, and high will contain the more significant bits
of that number?
Jul 23 '05 #4

In article <1e******************************@40tude.net>,
SpOiLeR <request@no_spam.org> wrote:
> On Mon, 14 Mar 2005 16:09:05 -0800, Andrey Tarasevich wrote:
> > There's nothing in the
> > specification of 'std::bitset<>' that depends on the endianness of the
> > hardware platform in any way.
>
> So, if I write something like this:
>
> std::bitset<16> b16(12345); // 12345 can fit in 16 bits
> std::bitset<8> b8_higher, b8_lower;
>
> unsigned int i;
> for (i = 0; i < 8; i++)  b8_lower[i] = b16[i];
> for (i = 8; i < 16; i++) b8_higher[i - 8] = b16[i];
>
> unsigned char low, high;
>
> // Casts OK because standard guarantees that char contains at least 8 bits
> low  = static_cast<unsigned char>(b8_lower.to_ulong());
> high = static_cast<unsigned char>(b8_higher.to_ulong());
>
> Is it guaranteed that low will contain the less significant bits of the
> number contained in b16, and high will contain the more significant bits
> of that number?


Yes. The standard mandates that the bits will appear to be stored in
little endian order as observed by the indexing operator:

23.3.5p3:
When converting between an object of class bitset<N> and a value of some
integral type, bit position pos corresponds to the bit value 1 << pos.
However, just in case we caught you napping, to_string is guaranteed to
present the bits in big endian order: ;-)

23.3.5.2p19 (describing to_string):
Character position N - 1 corresponds to bit position zero. Subsequent
decreasing character positions correspond to increasing bit positions.


#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::bitset<16> bs(0x1234);
    for (unsigned i = 0; i < 16; ++i)
        std::cout << (bs[i] ? 1 : 0);
    std::cout << '\n';
    std::string srep = bs.to_string();
    std::cout << srep << '\n';
}

0010110001001000
0001001000110100

Fortunately bitsets constructed from strings are consistent with
to_string and interpret the string in big endian order.

std::bitset<16> bs2(srep);
for (unsigned i = 0; i < 16; ++i)
    std::cout << (bs2[i] ? 1 : 0);
std::cout << '\n';

0010110001001000
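
And a round trip through to_ulong() (a sketch continuing the same fragment,
reusing bs2 from above) gives back the original value, since bit position
pos corresponds to the bit value 1 << pos:

std::cout << std::hex << bs2.to_ulong() << '\n';

1234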

-Howard
Jul 23 '05 #5

On Wed, 16 Mar 2005 02:04:46 GMT, Howard Hinnant wrote:
> In article <1e******************************@40tude.net>,
> SpOiLeR <request@no_spam.org> wrote:
> > On Mon, 14 Mar 2005 16:09:05 -0800, Andrey Tarasevich wrote:
> > > There's nothing in the
> > > specification of 'std::bitset<>' that depends on the endianness of the
> > > hardware platform in any way.
> >
> > So, if I write something like this:
> >
> > std::bitset<16> b16(12345); // 12345 can fit in 16 bits
> > std::bitset<8> b8_higher, b8_lower;
> >
> > unsigned int i;
> > for (i = 0; i < 8; i++)  b8_lower[i] = b16[i];
> > for (i = 8; i < 16; i++) b8_higher[i - 8] = b16[i];
> >
> > unsigned char low, high;
> >
> > // Casts OK because standard guarantees that char contains at least 8 bits
> > low  = static_cast<unsigned char>(b8_lower.to_ulong());
> > high = static_cast<unsigned char>(b8_higher.to_ulong());
> >
> > Is it guaranteed that low will contain the less significant bits of the
> > number contained in b16, and high will contain the more significant bits
> > of that number?
>
> Yes. The standard mandates that the bits will appear to be stored in
> little endian order as observed by the indexing operator:
>
> 23.3.5p3:
> When converting between an object of class bitset<N> and a value of some
> integral type, bit position pos corresponds to the bit value 1 << pos.


Excellent!
> However, just in case we caught you napping, to_string is guaranteed to
> present the bits in big endian order: ;-)
>
> 23.3.5.2p19 (describing to_string):
> Character position N - 1 corresponds to bit position zero. Subsequent
> decreasing character positions correspond to increasing bit positions.
>
> #include <bitset>
> #include <iostream>
> #include <string>
>
> int main()
> {
>     std::bitset<16> bs(0x1234);
>     for (unsigned i = 0; i < 16; ++i)
>         std::cout << (bs[i] ? 1 : 0);
>     std::cout << '\n';
>     std::string srep = bs.to_string();
>     std::cout << srep << '\n';
> }
>
> 0010110001001000
> 0001001000110100
>
> Fortunately bitsets constructed from strings are consistent with
> to_string and interpret the string in big endian order.
>
> std::bitset<16> bs2(srep);
> for (unsigned i = 0; i < 16; ++i)
>     std::cout << (bs2[i] ? 1 : 0);
> std::cout << '\n';
>
> 0010110001001000
>
> -Howard


Well, considering all this, std::bitset is one really nice object :)!!!
Thanks everybody for your help...
Jul 23 '05 #6
