
What's the output?

int main()
{
    int k;
    union jatin {
        int i : 5;
        char j : 2;
    };

    union jatin rajpal;
    k = sizeof(rajpal);
    printf("%d", k);
    return 0;
}

And what would be the output if I used a struct instead of a union in
the above example? Could anyone explain union behaviour to me?

Mar 1 '07 #1
18 Replies


ra**********@yahoo.co.in wrote:
int main()
{
int k;
union jatin{
int i :5;
char j :2;
};

union jatin rajpal;
k= sizeof(rajpal);
printf("%d",k);
return 0;
}
What did you see and what did you expect?
& what would be the output if instead of union in the above example if
i'll use struct? Could anyon can explain me union behavior.
What is your understanding? Your text book should explain this early on.

--
Ian Collins.
Mar 1 '07 #2

On Feb 28, 5:34 pm, rajpal_ja...@yahoo.co.in wrote:
int main()
{
int k;
union jatin{
int i :5;
char j :2;

};

union jatin rajpal;
k= sizeof(rajpal);
printf("%d",k);
return 0;

}
It will be the size of the largest object in the collection.
& what would be the output if instead of union in the above example if
i'll use struct?
It will be the size of the sum of all struct members, plus padding.
Could anyon can explain me union behavior.
A good C book should do nicely. Have you read K&R2?

Mar 1 '07 #3

In article <11**********************@p10g2000cwp.googlegroups.com>,
<ra**********@yahoo.co.in> wrote:
>int main()
{
int k;
union jatin{
int i :5;
char j :2;
};
>union jatin rajpal;
k= sizeof(rajpal);
printf("%d",k);
return 0;
}
You haven't included <stdio.h>, so you could get any output from
the printf due to the lack of the prototype for the printf.
>& what would be the output if instead of union in the above example if
i'll use struct? Could anyon can explain me union behavior.
In all the cases, on the compiler I was testing with, I get 4 as
the output -- which is sizeof(int) on that particular machine.

I also get a warning that char is a non-standard type for a bitfield.
The C99 standard permits bitfield types other than int and unsigned int,
as system extensions, but does not define their behaviour.

In the union case, the compiler is allocating enough space
for a full int. There is no requirement that the compiler allocate
the smallest possible space that would hold the defined bitfield sizes.
The way the relevant clauses are written, a compiler would be
conformant if it always used exactly the same size for any storage
unit that contained a bitfield.

The non-standard char bitfield does not take any more space than
an int bitfield on the particular compiler I am using, so there
there was no need to allocate anything bigger than an int for the
overall storage. But since char bitfields are non-standard, the
standard would have no complaint if a compiler decided that
it needed to allocate 742 bytes for every unit that contained
a char bitfield -- non-standard behaviour can be as unusual as
the compiler writer wants.
In the struct case, you again run into the problem that char
bitfields are non-standard, so you again could get pretty
much any answer. If the compiler chooses to treat them like int
bitfields, then you again encounter the behaviour that a
compiler is allowed to allocate a complete word to hold an
aggregate of bitfields that together fit within the limits of
a word. I am deliberately using "word" non-specifically here
rather than "int", as the compiler is not restricted to
multiples of "int". A compiler is allowed to pack down to
the smallest integral type if it wants, and it is allowed to
use a complete integral type if it wants.
Basically, if you are looking for some kind of promises in
the standards that bitfields will only be a certain size and no
bigger, or that the aggregate size will be as small as possible,
then you will not find those promises.

The only promise is that if you are using int or signed int bitfields,
and the next bitfield would fit within the same allocation unit that was
already started, then it will be put in the same allocation unit. But
if the next bitfield would not fit in the same allocation unit, then it
is up to the compiler as to whether it spans the bitfield, part in each
of the two storage units, or if it instead leaves off filling the first
storage unit and starts a new storage unit for the second bitfield.

If you are expecting portability in the fine details of how
bitfields are handled, then you should stop expecting that. There isn't
even any promise about whether bitfields start filling from
the "beginning" of the storage allocation, or start filling from
the "end" of the storage allocation. In your example, i could
end up stored before or after j in the storage unit, and the
next compiler release on the same system could switch it to
the other way. bitfields are NOT any kind of portable bit-level
storage specification.
--
There are some ideas so wrong that only a very intelligent person
could believe in them. -- George Orwell
Mar 1 '07 #4

On Mar 1, 6:52 am, "user923005" <dcor...@connx.com> wrote:
On Feb 28, 5:34 pm, rajpal_ja...@yahoo.co.in wrote:
int main()
{
int k;
union jatin{
int i :5;
char j :2;
};
union jatin rajpal;
k= sizeof(rajpal);
printf("%d",k);
return 0;
}

It will be the size of the largest object in the collection.
& what would be the output if instead of union in the above example if
i'll use struct?

It will be the size of the sum of all struct members, plus padding.
Could anyon can explain me union behavior.

A good C book should do nicely. Have you read K&R2?
Hi Ian and user923005

Thanks for your response.

If my understanding is correct:
When I'll use struct I'll get 8 coz in structure I'm declaring the int
and char in bitwise manner int will store 5 bit and char will store 2
bit i.e. total 7 bit.
when i'll printf the size of this struct then atleast i'll get output
of 1word i.e. 8bit. this behaviour is ok to me.

However when i'll use union the output is 4.which is not clear to
me.It should be the size of 1word. ie it should also give us the
output 8.

Mar 1 '07 #5

jatin wrote:
On Mar 1, 6:52 am, "user923005" <dcor...@connx.com> wrote:
>On Feb 28, 5:34 pm, rajpal_ja...@yahoo.co.in wrote:
int main()
{
int k;
union jatin{
int i :5;
char j :2;
};
union jatin rajpal;
k= sizeof(rajpal);
printf("%d",k);
return 0;
}

It will be the size of the largest object in the collection.
& what would be the output if instead of union in the above example if
i'll use struct?

It will be the size of the sum of all struct members, plus padding.
Could anyon can explain me union behavior.

A good C book should do nicely. Have you read K&R2?

Hi Ian and user923005

Thanks for your response.

If my understanding is correct:
When I'll use struct I'll get 8 coz
"because". "coz" is short for "cousin".
in structure I'm declaring the int
and char in bitwise manner int will store 5 bit and char will store 2
bit i.e. total 7 bit.
Plus whatever padding the compiler feels is appropriate.
when i'll printf the size of this struct then atleast i'll get output
of 1word
One /byte/. C sizes are counted in abstract units called "bytes"
or "chars", which have at least 8 -- but maybe more -- bits.
i.e. 8bit. this behaviour is ok to me.

However when i'll use union the output is 4.which is not clear to
me.
Four bytes. Your compiler -- and it will not be alone -- appears
to round union sizes to 4 bytes. It's probably a natural size on
your machine.
It should be the size of 1word. ie it should also give us the
output 8.
No "should" about it.

--
Chris "electric hedgehog" Dollin
"Never ask that question!" Ambassador Kosh, /Babylon 5/

Mar 1 '07 #6

On Mar 1, 6:40 pm, Chris Dollin <chris.dol...@hp.com> wrote:
[snip — full post quoted above]
>It should be the size of 1word. ie it should also give us the
>output 8.

No "should" about it.
If we assume that the padding is 0 in both cases, then the output in
both cases would be 8, right?

Mar 1 '07 #7

On Mar 1, 7:09 am, rober...@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
[snip — full explanation of bit-field storage rules, quoted above]
bitfields are NOT any kind of portable bit-level
storage specification.
You've given me superb explanations, very close to my understanding
level.
But I still have some doubts.

Ignore padding or compiler dependencies. Just tell me from exam point
of view.

Union behavior: Do you think that o/p 4 is correct.
As per my understanding it should be equal to one word i.e 8

Struct behavior: it should be 8
Keeping in mind I'm answering based on only theory as exam point of
view.

Mar 1 '07 #8

In article <11*********************@k78g2000cwa.googlegroups.com>,
jatin <ra**********@yahoo.co.in> wrote:
>Ignore padding or compiler dependencies. Just tell me from exam point
of view.
Any exam that ignored compiler dependencies would be a poor
examination.

>Union behavior: Do you think that o/p 4 is correct.
As per my understanding it should be equal to one word i.e 8
Whether a word is 1 byte or 2 bytes or 4 bytes or 8 bytes or
324933 bytes is compiler dependent. Your question cannot be
answered without compiler dependencies.

>Struct bahavior: it should be 8
Keeping in mind I'm answering based on only theory as exam point of
view.
The *theory* is that the standards permit the compilers to do nearly
whatever they like with bitfields, and therefore questions about
bitfield behaviour can only be answered with respect to a -specific-
compiler (with target and version information given), or else
answered as "The standard doesn't say; any answer from 1 up is valid
here."

The compiler I used for my testing, the one that returned 4 in each
of the cases, was doing everything correctly as far as the standards
are concerned.

The hard rules about bitfields boil down to the following:

1) if a bitfield size of 0 is encountered in a struct, the compiler
must leave off filling any partly filled current storage unit and
move to the next storage unit;

2) otherwise, if the next bitfield fits completely within the
current storage unit, it must be placed in that storage unit;

3) if a bitfield would have to cross whatever storage unit size
the compiler is using, the behaviour is up to the compiler: it can
split the fields across the storage units, or it can move on to the
next storage unit leaving an empty space

4) Nearly everything else, including questions about what size of storage
unit is used, is up to the compiler: if you cannot answer a
question about bitfields by examining rules #1, #2, or #3, then
the standard probably doesn't define the answer. (There are
obscurities about signed vs unsigned bitfields in the standards.)

I did make a minor mistake in my previous posting: I forgot that
C99 allows boolean bitfields. C90 doesn't know anything about boolean.
If you reference the above, you will see that answering "4" or "8"
as being "the" right size is impossible without knowing the compiler
and compiler version and compiler options.
One thing that I *can* say is that if the size of the union comes
out as 4, then the size of the struct will also come out as 4,
because of the rule about being -required- to pack into the same storage
unit if there is still room.
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
Mar 1 '07 #9

On Mar 1, 11:08 pm, rober...@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
[snip — bit-field rules, quoted in full above]
One thing that I *can* say is that if the size of the union comes
out as 4, then the size of the struct will also come out as 4.
This question belongs to ANSI C.

Mar 1 '07 #10

jatin wrote:
On Mar 1, 6:40 pm, Chris Dollin <chris.dol...@hp.com> wrote:
>>jatin wrote:
>>>It should be the size of 1word. ie it should also give us the
output 8.

No "should" about it.
*Please don't quote signatures*
>

If we assume that the padding is 0 for both the cases than the output
both the case would be 8 right!
No, it would not. It will be sizeof the largest union member, which is
an int in your example.

--
Ian Collins.
Mar 1 '07 #11

In article <11**********************@30g2000cwc.googlegroups.com>,
jatin <ra**********@yahoo.co.in> wrote:
>On Mar 1, 11:08 pm, rober...@ibd.nrc-cnrc.gc.ca (Walter Roberson)
wrote:
>The hard rules about bitfields boil down to the following:
>4) Nearly everything else, including questions about what size of storage
unit is used, is up to the compiler:
>This qus belongs to ANSI C
Which "ANSI C" ? X3.159-1989 (before ISO adoption)?
The 1990 ISO version which is the same except with some sections
renumbered?
The 1990 ANSI version which is the same as the 1990 ISO version?

The above three versions together are usually referred to as C89.

The 1994 update that includes some technical clarifications?

The 1999 joint ANSI and ISO version that defines additional
language elements? That one is referred to as C99

The technical amendment that came after that whose date and name
I never recall?
But it doesn't really matter, unless you want to start discussing
the exact behaviour of _Bool bitfields (which didn't exist in C89).
The rules are fundamentally the same for all of the versions. That
is, for *all* of the versions, the C standard defines only minimal
rules about bitfields, and saying specifically "4" or "8" for -any-
ANSI C version is wrong because the answer will be compiler dependent.
The standards SAY that it is compiler dependent. The standards
do not WANT to be more specific: they consider it to be none of
their business exactly what the compiler does with bitfields.
--
"law -- it's a commodity"
-- Andrew Ryan (The Globe and Mail, 2005/11/26)
Mar 1 '07 #12

Ian Collins <ia******@hotmail.com> writes:
jatin wrote:
>On Mar 1, 6:40 pm, Chris Dollin <chris.dol...@hp.com> wrote:
>>>jatin wrote:

It should be the size of 1word. ie it should also give us the
output 8.

No "should" about it.
*Please don't quote signatures*
>>

If we assume that the padding is 0 for both the cases than the output
both the case would be 8 right!
No, it would not. It will be sizeof the largest union member, which is
an int in you example.
The union in question was:

union jatin {
int i: 5;
char j: 2;
};

C99 6.7.2.1p9:

A bit-field is interpreted as a signed or unsigned integer type
consisting of the specified number of bits.

So the member "i" is interpreted as a 5-bit integer type. Given, for
example, CHAR_BIT == 8 and sizeof(int) == 4, a compiler could
reasonably have sizeof(union jatin) == 1 (with 3 bits of padding).

It could also reasonably reject the declaration, because the standard
doesn't support bit fields of types other than int, signed int,
unsigned int, and (C99 only) _Bool.

Some compilers use the declared types of bit fields to affect the size
of the enclosing structure or union, but the standard doesn't require
this.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Mar 1 '07 #13

Keith Thompson wrote:
Ian Collins <ia******@hotmail.com> writes:
>>jatin wrote:
>>>
If we assume that the padding is 0 for both the cases than the output
both the case would be 8 right!

No, it would not. It will be sizeof the largest union member, which is
an int in you example.


The union in question was:

union jatin {
int i: 5;
char j: 2;
};

C99 6.7.2.1p9:

A bit-field is interpreted as a signed or unsigned integer type
consisting of the specified number of bits.

So the member "i" is interpreted as a 5-bit integer type. Given, for
example, CHAR_BIT == 8 and sizeof(int) == 4, a compiler could
reasonably have sizeof(union jatin) == 1 (with 3 bits of padding).

It could also reasonably reject the declaration, because the standard
doesn't support bit fields of types other than int, signed int,
unsigned int, and (C99 only) _Bool.

Some compilers use the declared types of bit fields to affect the size
of the enclosing structure or union, but the standard doesn't require
this.
But they couldn't represent an int bit field in anything other than an
int. So sizeof the example union will always be sizeof(int).

--
Ian Collins.
Mar 1 '07 #14

Ian Collins <ia******@hotmail.com> writes:
Keith Thompson wrote:
>Ian Collins <ia******@hotmail.com> writes:
>>>jatin wrote:

If we assume that the padding is 0 for both the cases than the output
both the case would be 8 right!

No, it would not. It will be sizeof the largest union member, which is
an int in you example.

The union in question was:

union jatin {
int i: 5;
char j: 2;
};

C99 6.7.2.1p9:

A bit-field is interpreted as a signed or unsigned integer type
consisting of the specified number of bits.

So the member "i" is interpreted as a 5-bit integer type. Given, for
example, CHAR_BIT == 8 and sizeof(int) == 4, a compiler could
reasonably have sizeof(union jatin) == 1 (with 3 bits of padding).

It could also reasonably reject the declaration, because the standard
doesn't support bit fields of types other than int, signed int,
unsigned int, and (C99 only) _Bool.

Some compilers use the declared types of bit fields to affect the size
of the enclosing structure or union, but the standard doesn't require
this.
But they couldn't represent an int bit field in anything other than an
int. So sizeof the example union will always be sizeof(int).
One of us (at least) is missing something here. Why couldn't an int
bit field be represented in anything other than an int?

Let's consider a simpler example. As above, assume CHAR_BIT == 8,
sizeof(int) == 4.

struct foo {
unsigned int x0: 4;
unsigned int x1: 4;
};
struct foo obj;

Each of x0 and x1 is treated as a 4-bit unsigned integer type. I'm
saying that a compiler *could* allocate both x0 and x1 within a single
byte, and make sizeof(struct foo) == 1 and sizeof obj == 1. The
struct's members total only 8 bits; I see no requirement for the
struct itself to be any bigger than 8 bits. There is no 32-bit
unsigned int object, and no need to allocate space for one.

Is my hypothetical implementation non-conforming? If so, why?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Mar 1 '07 #15

Keith Thompson wrote:
Ian Collins <ia******@hotmail.com> writes:
>>
But they couldn't represent an int bit field in anything other than an
int. So sizeof the example union will always be sizeof(int).

One of us (at least) is missing something here. Why couldn't an int
bit field be represented in anything other than an int?
Without trawling through the standard, the only riposte I can offer is
that it would be counter-intuitive - at least to me.
Let's consider a simpler example. As above, assume CHAR_BIT == 8,
sizeof(int) == 4.

struct foo {
unsigned int x0: 4;
unsigned int x1: 4;
};
struct foo obj;

Each of x0 and x1 is treated as a 4-bit unsigned integer type. I'm
saying that a compiler *could* allocate both x0 and x1 within a single
byte, and make sizeof(struct foo) == 1 and sizeof obj == 1. The
struct's members total only 8 bits; I see no requirement for the
struct itself to be any bigger than 8 bits. There is no 32-bit
unsigned int object, and no need to allocate space for one.

Is my hypothetical implementation non-conforming? If so, why?
Good question!

--
Ian Collins.
Mar 1 '07 #16

In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>Let's consider a simpler example. As above, assume CHAR_BIT == 8,
sizeof(int) == 4.
struct foo {
unsigned int x0: 4;
unsigned int x1: 4;
};
struct foo obj;
>Each of x0 and x1 is treated as a 4-bit unsigned integer type. I'm
saying that a compiler *could* allocate both x0 and x1 within a single
byte, and make sizeof(struct foo) == 1 and sizeof obj == 1.
My reading of C89 is that that is explicitly possible:

3.5.2.1 Structure and Union Specifiers
[...]

An implementation may allocate any addressable storage unit
large enough to hold a bit-field. [...] The alignment of the
addressable storage unit is unspecified.
In C89, I do not even see a maximum limit on bitfield sizes.
We know that in C89 that the nominal type for a bitfield must be
int or a signed or unsigned qualification of int, but that falls short
of a promise that you could have a bitfield as wide as a normal int
would be in that implementation.
--
Okay, buzzwords only. Two syllables, tops. -- Laurie Anderson
Mar 1 '07 #17

Ian Collins <ia******@hotmail.com> writes:
Keith Thompson wrote:
>Ian Collins <ia******@hotmail.com> writes:
>>>
But they couldn't represent an int bit field in anything other than an
int. So sizeof the example union will always be sizeof(int).

One of us (at least) is missing something here. Why couldn't an int
bit field be represented in anything other than an int?
Without trawling through the standard, the only riposte I can offer is
that it would be counter-intuitive - at least to me.
Fascinating. So your intuition differs considerably from mine. I
know that some compilers work in accordance with your intuition, but
I've never quite understood why.

So given:
>Let's consider a simpler example. As above, assume CHAR_BIT == 8,
sizeof(int) == 4.

struct foo {
unsigned int x0: 4;
unsigned int x1: 4;
};
struct foo obj;
[...]

*my* intuition tells me that x0 and x1 aren't 32-bit unsigned ints,
they're 4-bit thingies, and you only need 8 bits to hold two of them.
The fact that the syntax requires using the name of a 32-bit type to
specify these 4-bit thingies is a bit problematic, but I tend to gloss
over that.

You seem to be saying that the "unsigned int" in the declaration
*doesn't* affect the size of x0 or x1, but *does* affect the size of
the structure that contains them.

Following my (limited) understanding of your intuition, I would think
that since x1 is at an offset of 4 bits from the start of the
structure, the fact that it's an unsigned int would require the whole
structure to be at least 36 bits (likely padded to 40 or 64). I don't
think you actually draw that conclusion, but I'm curious why not.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Mar 2 '07 #18

Keith Thompson wrote:
>
You seem to be saying that the "unsigned int" in the declaration
*doesn't* affect the size of x0 or x1, but *does* affect the size of
the structure that contains them.
I have never used an implementation where this isn't the case.
Following my (limited) understanding of your intuition, I would think
that since x1 is at an offset of 4 bits from the start of the
structure, the fact that it's an unsigned int would require the whole
structure to be at least 36 bits (likely padded to 40 or 64). I don't
think you actually draw that conclusion, but I'm curious why not.
That's going too far.

I'm just used to using compilers where

#include <stdio.h>
#include <stdint.h>

struct A { uint8_t b : 4; };
struct B { uint16_t b : 4; };
struct C { uint32_t b : 4; };

int main(void) {
    printf("%zu %zu %zu\n", sizeof(struct A), sizeof(struct B),
           sizeof(struct C));
    return 0;
}

outputs 1 2 4.

--
Ian Collins.
Mar 2 '07 #19
