
Different bit storing

I have heard that the way an int is stored is different from the way a
short is (for example, the same could apply to a 16-bit int and a
long)...

But how are they differently stored?

Is the following correct?

short a = 16:            0000000000010000
int b = 16 (32-bit int): 00000000000000000000000000010000
In front of the 16 bits of a, are there all 1's?

Plus, I've read about the various ways a signed data type can be stored...
What is the most commonly used one? If I saw 32 bits, how would I know
whether it was a signed int or an unsigned int? Would I be able to tell?

Different bit storing seems confusing...
Jul 22 '05 #1


Chris Mantoulidis wrote:
> I have heard that the way an int is stored is different from the way a
> short is (for example, the same could apply to a 16-bit int and a
> long)...
>
> But how are they differently stored?
>
> Is the following correct?
>
> short a = 16:            0000000000010000
> int b = 16 (32-bit int): 00000000000000000000000000010000
> In front of the 16 bits of a, are there all 1's?
Usually, there are zeros. Unless, of course, you're on a 16-bit (or
smaller) machine; then the bits before &a might vary from one run of your
program to the next, or there may not be anything before &a at all.
> Plus, I've read about the various ways a signed data type can be stored...
> What is the most commonly used one?
Two's complement.
> If I saw 32 bits, how would I know whether it was a signed int or an
> unsigned int? Would I be able to tell?
There's no way to know just from the 32 bits.
> Different bit storing seems confusing...


Yep, it's tricky. Usually, you don't have to think about it much.
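
To make the "can't tell from the bits" point concrete, here's a small
sketch of my own (not from the thread), assuming a 32-bit int and two's
complement representation:

#include <iostream>

int main()
{
    unsigned int u = 0xFFFFFFF0u;   // bit pattern 1111...11110000
    int s = static_cast<int>(u);    // -16 on a two's complement machine
                                    // (implementation-defined if the value
                                    // doesn't fit in int, pre-C++20)

    std::cout << s << '\n';         // prints -16
    std::cout << u << '\n';         // prints 4294967280
    return 0;
}

The same 32 bits print as -16 or 4294967280 depending only on the declared
type of the variable.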

Jul 22 '05 #2

"Chris Mantoulidis" <cm****@yahoo.com> wrote in message
news:a8**************************@posting.google.com...
> I have heard that the way an int is stored is different from the way a
> short is (for example, the same could apply to a 16-bit int and a
> long)...
>
> But how are they differently stored?
>
> Is the following correct?
>
> short a = 16:            0000000000010000
> int b = 16 (32-bit int): 00000000000000000000000000010000
> In front of the 16 bits of a, are there all 1's?


Doesn't it depend on the endianness of the system?

On a little-endian machine like mine, the low-order bytes are stored first,
so for me 16 (020 octal) is stored as:

"\020\000" // short (16 bits for me)
"\020\000\000\000" // int (32 bits for me)

The high (zero) bytes are actually stored after the low-order part of the number.
If I reinterpret_cast an int* to a short*, I get the same value if the int
value fits into the size of a short.

On a big-endian machine the high-order bytes are stored first, so 16 might
be stored as

"\000\020" // short
"\000\000\000\020" // int

Here, reinterpret_cast'ing an int* to a short* might give 0 instead of 16.
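
If you want to check which case applies on your own machine, here's a quick
sketch of mine (not from the original post; it assumes a 32-bit int and a
16-bit short) that looks at the int's bytes through an unsigned char*:

#include <iostream>

int main()
{
    int n = 16;  // 020 octal
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&n);

    // Bytes in memory order: little-endian prints "10 0 0 0" (hex),
    // big-endian prints "0 0 0 10".
    for (unsigned i = 0; i < sizeof n; ++i)
        std::cout << std::hex << static_cast<unsigned>(p[i]) << ' ';
    std::cout << '\n';

    // The narrowing trick from above (same aliasing caveats apply):
    // little-endian prints 16, big-endian would print 0.
    std::cout << std::dec << *reinterpret_cast<short*>(&n) << '\n';
    return 0;
}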

Regardless of the order in which the bytes are stored, left-shifting (<<)
operates on the value, as if the bits were stored high-bit first. So if I
keep left-shifting short(1) I get something like:

"\001\000"
"\002\000"
"\004\000"
"\010\000"
"\020\000"
"\040\000"
"\100\000"
"\200\000"
"\400\000"
"\000\001"
"\000\002"

etc.
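
To reproduce that table on your own machine, here's a sketch of mine (not
from the original post; it assumes a 16-bit short) that shifts a 1 and
prints the two bytes in memory order using the same octal notation:

#include <cstdio>

int main()
{
    unsigned short x = 1;
    for (int i = 0; i < 12; ++i) {
        const unsigned char* p = reinterpret_cast<const unsigned char*>(&x);
        // Print the two bytes in memory order, octal, like "\001\000".
        std::printf("\"\\%03o\\%03o\"\n", p[0], p[1]);
        x <<= 1;
    }
    return 0;
}

On a little-endian machine the low byte comes first, so the carry into the
second byte shows up exactly as in the list above.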

I think you only need to worry when processing binary files from a machine
of opposite endianness.

HTH
--
KCS
Jul 22 '05 #3
