
What's the output

    int main()
    {
        int k;
        union jatin {
            int i :5;
            char j :2;
        };
        union jatin rajpal;
        k = sizeof(rajpal);
        printf("%d", k);
        return 0;
    }

And what would be the output if I use a struct instead of the union in the above example? Could anyone explain union behaviour to me?

Mar 1 '07 #1
18 Replies

ra**********@yahoo.co.in wrote:
> int main() { int k; union jatin { int i :5; char j :2; };
> union jatin rajpal; k = sizeof(rajpal); printf("%d", k); return 0; }

What did you see and what did you expect?

> And what would be the output if I use a struct instead of the union?
> Could anyone explain union behaviour to me?

What is your understanding? Your text book should explain this early on.

-- Ian Collins.

Mar 1 '07 #2

On Feb 28, 5:34 pm, rajpal_ja...@yahoo.co.in wrote:
> int main() { int k; union jatin { int i :5; char j :2; };
> union jatin rajpal; k = sizeof(rajpal); printf("%d", k); return 0; }

It will be the size of the largest object in the collection.

> And what would be the output if I use a struct instead of the union?

It will be the sum of the sizes of all struct members, plus padding.

> Could anyone explain union behaviour to me?

A good C book should do nicely. Have you read K&R2?

Mar 1 '07 #3

In article <11**********************@p10g2000cwp.googlegroups.com>, ra**********@yahoo.co.in wrote:
> int main() { int k; union jatin { int i :5; char j :2; };
> union jatin rajpal; k = sizeof(rajpal); printf("%d", k); return 0; }

You haven't included <stdio.h>, so you could get any output from the printf due to the lack of a prototype for printf.

> And what would be the output if I use a struct instead of the union?
> Could anyone explain union behaviour to me?

In all the cases, on the compiler I was testing with, I get 4 as the output -- which is sizeof(int) on that particular machine. I also get a warning that char is a non-standard type for a bitfield. The C99 standard permits bitfield types other than int and unsigned int, as system extensions, but does not define their behaviour.

In the union case, the compiler is allocating enough space for a full int. There is no requirement that the compiler allocate the smallest possible space that would hold the defined bitfield sizes. The way the relevant clauses are written, a compiler would be conformant if it always used exactly the same size for any storage unit that contained a bitfield. The non-standard char bitfield does not take any more space than an int bitfield on the particular compiler I am using, so there was no need to allocate anything bigger than an int for the overall storage. But since char bitfields are non-standard, the standard would have no complaint if a compiler decided that it needed to allocate 742 bytes for every unit that contained a char bitfield -- non-standard behaviour can be as unusual as the compiler writer wants.

In the struct case, you again run into the problem that char bitfields are non-standard, so you again could get pretty much any answer. If the compiler chooses to treat them like int bitfields, then you again encounter the behaviour that a compiler is allowed to allocate a complete word to hold an aggregate of bitfields that together fit within the limits of a word.

I am deliberately using "word" non-specifically here rather than "int", as the compiler is not restricted to multiples of "int". A compiler is allowed to pack down to the smallest integral type if it wants, and it is allowed to use a complete integral type if it wants.

Basically, if you are looking for some kind of promise in the standards that bitfields will only be a certain size and no bigger, or that the aggregate size will be as small as possible, then you will not find it. The only promise is that if you are using int or unsigned int bitfields, and the next bitfield would fit within the same allocation unit that was already started, then it will be put in the same allocation unit. But if the next bitfield would not fit in the same allocation unit, then it is up to the compiler whether it spans the bitfield, part in each of the two storage units, or instead leaves off filling the first storage unit and starts a new storage unit for the second bitfield.

If you are expecting portability in the fine details of how bitfields are handled, then you should stop expecting that. There isn't even any promise about whether bitfields start filling from the "beginning" of the storage allocation, or from the "end". In your example, i could end up stored before or after j in the storage unit, and the next compiler release on the same system could switch it to the other way. Bitfields are NOT any kind of portable bit-level storage specification.

-- There are some ideas so wrong that only a very intelligent person could believe in them. -- George Orwell

Mar 1 '07 #4


jatin wrote:
> On Mar 1, 6:52 am, "user923005" wrote:
> [earlier quotes snipped]
> Hi Ian and user923005
> Thanks for your response. If my understanding is correct:
> When I'll use struct I'll get 8 coz

"because". "coz" is short for "cousin".

> in structure I'm declaring the int and char in bitwise manner
> int will store 5 bit and char will store 2 bit i.e. total 7 bit.

Plus whatever padding the compiler feels is appropriate.

> when i'll printf the size of this struct then atleast i'll get
> output of 1 word i.e. 8 bit. this behaviour is ok to me.

One /byte/. C sizes are counted in abstract units called "bytes" or "chars", which have at least 8 -- but maybe more -- bits.

> However when i'll use union the output is 4. which is not clear to me.

Four bytes. Your compiler -- and it will not be alone -- appears to round union sizes to 4 bytes. It's probably a natural size on your machine.

> It should be the size of 1 word. ie it should also give us the output 8.

No "should" about it.

-- Chris "electric hedgehog" Dollin "Never ask that question!" Ambassador Kosh, /Babylon 5/

Mar 1 '07 #6


On Mar 1, 7:09 am, rober...@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
> [Walter's full explanation of union and struct bitfield sizing,
> quoted verbatim -- snipped]

You've given me superb explanations, very close to my understanding level. But I still have some doubts. Ignore padding or compiler dependencies; just tell me from an exam point of view.

Union behaviour: do you think that the output 4 is correct? As per my understanding it should be equal to one word, i.e. 8.

Struct behaviour: it should be 8.

Keep in mind I'm answering based only on theory, from an exam point of view.

Mar 1 '07 #8


jatin wrote:
> On Mar 1, 6:40 pm, Chris Dollin wrote:
>> jatin wrote:
>>> It should be the size of 1 word. ie it should also give us the output 8.
>> No "should" about it.

*Please don't quote signatures*

> If we assume that the padding is 0 for both the cases then the output
> in both cases would be 8, right!

No, it would not. It will be sizeof the largest union member, which is an int in your example.

-- Ian Collins.

Mar 1 '07 #11

In article <11**********************@30g2000cwc.googlegroups.com>, jatin wrote:
> On Mar 1, 11:08 pm, rober...@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
>> The hard rules about bitfields boil down to the following:
>> 4) Nearly everything else, including questions about what size of
>> storage unit is used, is up to the compiler.
> This question belongs to ANSI C

Which "ANSI C"? X3.159-1989 (before ISO adoption)? The 1990 ISO version, which is the same except with some sections renumbered? The 1990 ANSI version, which is the same as the 1990 ISO version? The above three versions together are usually referred to as C89. The 1994 update that includes some technical clarifications? The 1999 joint ANSI and ISO version that defines additional language elements? That one is referred to as C99. The technical amendment that came after that, whose date and name I never recall?

But it doesn't really matter, unless you want to start discussing the exact behaviour of _Bool bitfields (which didn't exist in C89). The rules are fundamentally the same for all of the versions. That is, for *all* of the versions, the C standard defines only minimal rules about bitfields, and saying specifically "4" or "8" for -any- ANSI C version is wrong, because the answer will be compiler dependent. The standards SAY that it is compiler dependent. The standards do not WANT to be more specific: they consider it to be none of their business exactly what the compiler does with bitfields.

-- "law -- it's a commodity" -- Andrew Ryan (The Globe and Mail, 2005/11/26)

Mar 1 '07 #12

Ian Collins wrote:
> jatin wrote:
>> If we assume that the padding is 0 for both the cases then the output
>> in both cases would be 8, right!
> No, it would not. It will be sizeof the largest union member, which
> is an int in your example.

The union in question was:

    union jatin {
        int i: 5;
        char j: 2;
    };

C99 6.7.2.1p9:

    A bit-field is interpreted as a signed or unsigned integer type
    consisting of the specified number of bits.

So the member "i" is interpreted as a 5-bit integer type. Given, for example, CHAR_BIT == 8 and sizeof(int) == 4, a compiler could reasonably have sizeof(union jatin) == 1 (with 3 bits of padding). It could also reasonably reject the declaration, because the standard doesn't support bit fields of types other than int, signed int, unsigned int, and (C99 only) _Bool.

Some compilers use the declared types of bit fields to affect the size of the enclosing structure or union, but the standard doesn't require this.

-- Keith Thompson (The_Other_Keith) ks***@mib.org San Diego Supercomputer Center <* "We must do something. This is something. Therefore, we must do this." -- Antony Jay and Jonathan Lynn, "Yes Minister"

Mar 1 '07 #13

Keith Thompson wrote:
> So the member "i" is interpreted as a 5-bit integer type. Given, for
> example, CHAR_BIT == 8 and sizeof(int) == 4, a compiler could
> reasonably have sizeof(union jatin) == 1 (with 3 bits of padding).
> Some compilers use the declared types of bit fields to affect the size
> of the enclosing structure or union, but the standard doesn't require
> this.

But they couldn't represent an int bit field in anything other than an int. So sizeof the example union will always be sizeof(int).

-- Ian Collins.

Mar 1 '07 #14

Ian Collins wrote:
> But they couldn't represent an int bit field in anything other than
> an int. So sizeof the example union will always be sizeof(int).

One of us (at least) is missing something here. Why couldn't an int bit field be represented in anything other than an int?

Let's consider a simpler example. As above, assume CHAR_BIT == 8, sizeof(int) == 4.

    struct foo {
        unsigned int x0: 4;
        unsigned int x1: 4;
    };
    struct foo obj;

Each of x0 and x1 is treated as a 4-bit unsigned integer type. I'm saying that a compiler *could* allocate both x0 and x1 within a single byte, and make sizeof(struct foo) == 1 and sizeof obj == 1. The struct's members total only 8 bits; I see no requirement for the struct itself to be any bigger than 8 bits. There is no 32-bit unsigned int object, and no need to allocate space for one.

Is my hypothetical implementation non-conforming? If so, why?

-- Keith Thompson (The_Other_Keith) ks***@mib.org San Diego Supercomputer Center <* "We must do something. This is something. Therefore, we must do this." -- Antony Jay and Jonathan Lynn, "Yes Minister"

Mar 1 '07 #15

Keith Thompson wrote:
> One of us (at least) is missing something here. Why couldn't an int
> bit field be represented in anything other than an int?

Without trawling through the standard, the only riposte I can offer is that it would be counter-intuitive -- at least to me.

> Let's consider a simpler example. As above, assume CHAR_BIT == 8,
> sizeof(int) == 4.
> [struct foo with two 4-bit fields, snipped]
> Is my hypothetical implementation non-conforming? If so, why?

Good question!

-- Ian Collins.

Mar 1 '07 #16

Keith Thompson wrote:
> Let's consider a simpler example. As above, assume CHAR_BIT == 8,
> sizeof(int) == 4.
>
>     struct foo {
>         unsigned int x0: 4;
>         unsigned int x1: 4;
>     };
>     struct foo obj;
>
> Each of x0 and x1 is treated as a 4-bit unsigned integer type. I'm
> saying that a compiler *could* allocate both x0 and x1 within a single
> byte, and make sizeof(struct foo) == 1 and sizeof obj == 1.

My reading of C89 is that that is explicitly possible:

    3.5.2.1 Structure and Union Specifiers
    [...] An implementation may allocate any addressable storage unit
    large enough to hold a bit-field. [...] The alignment of the
    addressable storage unit is unspecified.

In C89, I do not even see a maximum limit on bitfield sizes. We know that in C89 the nominal type for a bitfield must be int or a signed or unsigned qualification of int, but that falls short of a promise that you could have a bitfield as wide as a normal int would be in that implementation.

-- Okay, buzzwords only. Two syllables, tops. -- Laurie Anderson

Mar 1 '07 #17

Ian Collins wrote:
> Keith Thompson wrote:
>> One of us (at least) is missing something here. Why couldn't an int
>> bit field be represented in anything other than an int?
> Without trawling through the standard, the only riposte I can offer
> is that it would be counter-intuitive -- at least to me.

Fascinating. So your intuition differs considerably from mine. I know that some compilers work in accordance with your intuition, but I've never quite understood why. So given:

    struct foo {
        unsigned int x0: 4;
        unsigned int x1: 4;
    };
    struct foo obj;

*my* intuition tells me that x0 and x1 aren't 32-bit unsigned ints, they're 4-bit thingies, and you only need 8 bits to hold two of them. The fact that the syntax requires using the name of a 32-bit type to specify these 4-bit thingies is a bit problematic, but I tend to gloss over that.

You seem to be saying that the "unsigned int" in the declaration *doesn't* affect the size of x0 or x1, but *does* affect the size of the structure that contains them. Following my (limited) understanding of your intuition, I would think that since x1 is at an offset of 4 bits from the start of the structure, the fact that it's an unsigned int would require the whole structure to be at least 36 bits (likely padded to 40 or 64). I don't think you actually draw that conclusion, but I'm curious why not.

-- Keith Thompson (The_Other_Keith) ks***@mib.org San Diego Supercomputer Center <* "We must do something. This is something. Therefore, we must do this." -- Antony Jay and Jonathan Lynn, "Yes Minister"

Mar 2 '07 #18

Keith Thompson wrote:
> You seem to be saying that the "unsigned int" in the declaration
> *doesn't* affect the size of x0 or x1, but *does* affect the size of
> the structure that contains them.

I have never used an implementation where this isn't the case.

> Following my (limited) understanding of your intuition, I would think
> that since x1 is at an offset of 4 bits from the start of the
> structure, the fact that it's an unsigned int would require the whole
> structure to be at least 36 bits (likely padded to 40 or 64). I don't
> think you actually draw that conclusion, but I'm curious why not.

That's going too far. I'm just used to using compilers where

    #include <stdio.h>
    #include <stdint.h>

    struct A { uint8_t  b : 4; };
    struct B { uint16_t b : 4; };
    struct C { uint32_t b : 4; };

    int main(void)
    {
        printf("%d %d %d\n",
               (int)sizeof(struct A),
               (int)sizeof(struct B),
               (int)sizeof(struct C));
        return 0;
    }

outputs 1 2 4.

-- Ian Collins.

Mar 2 '07 #19