
# size of a byte and int?

Hi All,

Is it true that the size of a byte is not necessarily 8 bits?
What does the standard say? If that's true, then what will the size of an int be,
i.e. what should sizeof(int) return?

On my machine, a TMS320C5402 DSP (with a 16-bit word size),
sizeof(char) == sizeof(int): both return one. So it holds true.

But my interpretation is: the sizes of int, float, etc. are specified in terms
of bytes, not bits, as a standard, i.e. int -> 2 bytes, char -> 1 byte, etc.
The actual size of these then depends on the number of bits in a byte, which is
implementation-defined. So if a machine defines 1 byte = 8 bits, then
sizeof(char) = 1 byte = 8 bits and sizeof(int) = 2 bytes = 16 bits; but on
another machine with 1 byte = 16 bits, sizeof(char) = 1 byte = 16 bits and
sizeof(int) = 2 bytes = 32 bits. In no case can it be the same for both char
and int.

-Neo

Nov 14 '05 #1
Neo wrote:

Hi All,

Is that true that size of a byte not necessarily 8-bit?

Width is measured in bits.
Size is measured in bytes.

--
pete
Nov 14 '05 #2

"pete" <pf*****@mindspring.com> wrote in message
news:41***********@mindspring.com...
Neo wrote:

Hi All,

Is that true that size of a byte not necessarily 8-bit?

Width is measured in bits.
Size is measured in bytes.

--
pete

Okay! What are int and char (data types in C) measured in?
-Neo
Nov 14 '05 #3
Neo wrote:

Is that true that size of a byte not necessarily 8-bit?
Yes.
What the std. says?
It says that a byte is CHAR_BIT bits in width, where CHAR_BIT >= 8.
If that true, then what will the size of an int,
i mean what sizeof(int) should return?
It will be whatever it is on a given implementation.
On my machine sizeof(char) == sizeof(int).
TMS320C5402 DSP (with 16-bit word size).
both returns one. So, it holds true.

But my interpretation is :
Size of int,float etc is specified in terms of bytes, not bits...
No. Integers and floats are specified in terms of value limits
and precision. The minimum range that a signed int must be able
to represent is -32767..32767. Mathematics dictate that this
requires at least 16 bits (including a sign bit.) Thus, an
int will require _at least_ as many bytes as is required to
store 16-bits.

Google for N869 and read the last public draft of the C99 standard.
--
Peter

Nov 14 '05 #4
Neo wrote:

"pete" <pf*****@mindspring.com> wrote in message
news:41***********@mindspring.com...
Neo wrote:

Hi All,

Is that true that size of a byte not necessarily 8-bit?

Width is measured in bits.
Size is measured in bytes.

--
pete

Okay! What are int and char (data types in C) measured in?

Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

A char comprises CHAR_BIT bits. CHAR_BIT is defined in <limits.h>
and its value can vary from one implementation to the next, but
it can be no lower than 8.

An int is at least 16 bits wide. Therefore, sizeof(int) must be
at least (16 + CHAR_BIT - 1) / CHAR_BIT bytes (the integer division
rounds the 16 bits up to a whole number of bytes).

int must be able to represent all values in the range -32767 to +32767.
Nov 14 '05 #5
>Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #6
On Tue, 25 Jan 2005 13:11:04 +0530, "Neo"
<ti***************@yahoo.com> wrote:
Hi All,

Is that true that size of a byte not necessarily 8-bit?
I think common sense is that a byte is nowadays 8 bit.

What the std. says? If that true, then what will the size of an int,
Don't mix byte with char ! I don't think there is a std defining the
width of a byte.

i mean what sizeof(int) should return?

On my machine sizeof(char) == sizeof(int).
TMS320C5402 DSP (with 16-bit word size).

^^^
That's it, they speak of words avoiding the term byte.

A reason, to define __u8,__u16,__u32 etc. (or the like) depending on
the cpu and/or compiler.
--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #7
42Bastian Schick wrote:
Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

No, it can't. By _definition_, sizeof(char) is 1. This is stated in the
C standard.

--
Paul Black mailto:pa********@oxsemi.com
Oxford Semiconductor Ltd http://www.oxsemi.com
25 Milton Park, Abingdon, Tel: +44 (0) 1235 824 909
Oxfordshire. OX14 4SH Fax: +44 (0) 1235 821 141
Nov 14 '05 #8
42Bastian Schick wrote:
"Neo"
Hi All,

Is that true that size of a byte not necessarily 8-bit?
I think common sense is that a byte is nowadays 8 bit.

Sense, common to what? An 8-bit entity is called an 'octet'
explicitly by many standards and protocols, precisely to avoid
confusion.
What the std. says? If that true, then what will the size of
an int,

Don't mix byte with char ! I don't think there is a std
defining the width of a byte.

The standard does define that a byte has an implementation width
which is big enough to hold the representation of a character
from the basic character set. The standard also states that the
size of all three character types is 1 byte.
i mean what sizeof(int) should return?

On my machine sizeof(char) == sizeof(int).
TMS320C5402 DSP (with 16-bit word size).

^^^
That's it, they speak of words avoiding the term byte.

Who is 'they'? A C programmer will still speak of bytes.
A reason, to define __u8,__u16,__u32 etc. (or the like)
depending on the cpu and/or compiler.

Programs targeting hosted implementations will generally
have little need for such types. Indeed, implementations
on certain architectures may not be able to represent such
precise width types, outside of inefficient emulation.

C99 introduced the intN_t types to cater for programs which
do rely on precise width twos complement integer types,
however programs which make use of them are not strictly
conforming.

--
Peter

Nov 14 '05 #9

"Peter Nilsson" <ai***@acay.com.au> wrote in message
42Bastian Schick wrote:
[-snip-]
C99 introduced the intN_t types to cater for programs which
do rely on precise width twos complement integer types,
however programs which make use of them are not strictly
conforming.

--
Peter

Why not conforming? Then why does the std. define these?
-Neo
Nov 14 '05 #10
Neo wrote:
"Peter Nilsson" <ai***@acay.com.au> wrote in message
C99 introduced the intN_t types to cater for programs which
do rely on precise width twos complement integer types,
however programs which make use of them are not strictly
conforming.

--
Peter

Why not conforming?

(They are not _strictly_ conforming)

Because a platform is not required to provide them.
Just in case it cannot support them.
Then why does the std. define these?

Probably to make the interface to this functionality
uniform across the subset of platforms that can support
them. This is not a bad thing since code that really
needs this is unlikely to ever have much use on a
platform which cannot provide this.

--
Thomas.

Nov 14 '05 #11
On Tue, 25 Jan 2005 09:25:50 +0000, 42Bastian Schick wrote:
Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

No, sizeof(char) is 1 even on DSP's that cannot access smaller than 16-bit
data. In the case of the TMS320F24x, sizeof(char) = sizeof(int), with
both being 16-bit. Makes the chip a real pain.

Nov 14 '05 #12
[F'up2 cut down --- should have been done by OP!]

In comp.arch.embedded Neo <ti***************@yahoo.com> wrote:
Is that true that size of a byte not necessarily 8-bit?
What the std. says? If that true, then what will the size of an int,
i mean what sizeof(int) should return?

You cross-posted this question to two newsgroups, one of which (c.a.e)
it is seriously off-topic in. Please don't do that, or if you do, at
least explain why, and set a Followup-To. As is, your posting
silently assumes a context that only applies to half your audience,
causing all kinds of needless confusion.

In the context of comp.arch.embedded, your question doesn't make much
sense at all. In the context of comp.lang.c, the answers to the above
are: Yes. The same. Implementation-defined. (In that order).

Your confusion seems to come from the fact that you don't realize that
sizeof(int), too, is implementation-defined (within limitations set up
by the standard on the range of values an int must at least be able to
hold). In theory, an implementation could have, say

CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3
sizeof(long) == 5
sizeof(float) == 7
sizeof(double) == 9

just for the perverse fun of it.

--
Hans-Bernhard Broeker (br*****@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Nov 14 '05 #13

"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:35*************@news.dfncis.de...
[F'up2 cut down --- should have been done by OP!]

In comp.arch.embedded Neo <ti***************@yahoo.com> wrote:
Is that true that size of a byte not necessarily 8-bit?
What the std. says? If that true, then what will the size of an int,
i mean what sizeof(int) should return?
You cross-posted this question to two newsgroups, one of which (c.a.e)
it is seriously off-topic in. Please don't do that, or if you do, at
least explain why, and set a Followup-To. As is, your posting
silently assumes a context that only applies to half your audience,
causing all kinds of needless confusion.

In the context of comp.arch.embedded, your question doesn't make much
sense at all. In the context of comp.lang.c, the answers to the above
are: Yes. The same. Implementation-defined. (In that order).

Your confusion seems to come from the fact that you don't realize that
sizeof(int), too, is implementation-defined (within limitations set up
by the standard on the range of values an int must at least be able to
hold). In theory, an implementation could have, say

CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3

Shouldn't sizeof(int) be 2 here?
As per the post by infobart :

sizeof(int) is at least 16 bits wide. Therefore, it must be
at least (16 + CHAR_BIT - 1) / CHAR_BIT bytes in size (ignoring
any remainder).

int must be able to represent all values in the range -32767 to +32767.

2 bytes here, consisting of 26 bits, can represent all these values, so it
should be 2. Why 3 then?

-Neo
sizeof(long) == 5
sizeof(float) == 7
sizeof(double) == 9

just for the perverse fun of it.

--
Hans-Bernhard Broeker (br*****@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.

Nov 14 '05 #14
David wrote:
On Tue, 25 Jan 2005 09:25:50 +0000, 42Bastian Schick wrote:

Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

No, sizeof(char) is 1 even on DSP's that cannot access smaller than 16-bit
data. In the case of the TMS320F24x, sizeof(char) = sizeof(int), with
both being 16-bit. Makes the chip a real pain.

Mmm... yes, but, beware:
sizeof(char) == 1
sizeof('c') == 1 // in C++
sizeof('c') == sizeof(int) /* in C */
Nov 14 '05 #15
Neo wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:35*************@news.dfncis.de...
CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3

Shouldn't sizeof(int) be 2 here?
As per the post by infobart :

sizeof(int) is at least 16 bits wide. Therefore, it must be
at least (16 + CHAR_BIT - 1) / CHAR_BIT bytes in size (ignoring
                                                       ^^^^^^^^
any remainder).

int must be able to represent all values in the range -32767 to +32767.

2 bytes here consisting of 26 bits can represent all these values so, it
should be 2 why 3 then?

Because 3 bytes is also capable of representing all those values.
The standard defines a set of allowable sizes. That is all sizes
that can represent at least the above range of values.

Also (though this is not normative, just recommended practice)
int is intended to be the natural size for a given platform,
that means that for example a 32 bit machine will often have
sizeof(int) == 4 even though CHAR_BIT == 8.

IOW, the standard imposes a lower limit and does require that
the actual values are as close as possible to these limits.

--
Thomas.
Nov 14 '05 #16

Neo wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:35*************@news.dfncis.de...
[F'up2 cut down --- should have been done by OP!]

In comp.arch.embedded Neo <ti***************@yahoo.com> wrote:

Is that true that size of a byte not necessarily 8-bit?
What the std. says? If that true, then what will the size of an int,
i mean what sizeof(int) should return?
You cross-posted this question to two newsgroups, one of which (c.a.e)
it is seriously off-topic in. Please don't do that, or if you do, at
least explain why, and set a Followup-To. As is, your posting
silently assumes a context that only applies to half your audience,
causing all kinds of needless confusion.

In the context of comp.arch.embedded, your question doesn't make much
sense at all. In the context of comp.lang.c, the answers to the above
are: Yes. The same. Implementation-defined. (In that order).

Your confusion seems to come from the fact that you don't realize that
sizeof(int), too, is implementation-defined (within limitations set up
by the standard on the range of values an int must at least be able to
hold). In theory, an implementation could have, say

CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3

Shouldn't sizeof(int) be 2 here?

No. It could be.
As per the post by infobart :
(I do not yet have that post, so a better reference is needed.)

sizeof(int) is at least 16 bits wide. Therefore, it must be
            ^^^^^^^^^^^^^^^^^^^^^
at least (16 + CHAR_BIT - 1) / CHAR_BIT bytes in size (ignoring
any remainder).

int must be able to represent all values in the range -32767 to +32767.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2 bytes here consisting of 26 bits can represent all these values so, it
should be 2 why 3 then?

Parse correctly: _at_least_ versus _minimum_multiple_larger_than_
We only have
sizeof(int)*CHAR_BIT >= 16.

This does not disallow sizeof(int)*CHAR_BIT = 39.

In some cases, you may have a 64 bit machine and then you want to be
able to use the 64 bits, so an implementation might give you 64 bit
longs, 32 bit ints, 16 bit shorts, and 8 bit chars which makes much
more sense than the minimum requirements which leave you without a
64 bit integer data type (if we do not have C99's long long).

The limits are only the minimum requirements to enable you to write
portable programs.

sizeof(long) == 5
sizeof(float) == 7
sizeof(double) == 9

just for the perverse fun of it.

:-)
Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.

Nov 14 '05 #17

Thomas Stegen wrote:
Neo wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in
message news:35*************@news.dfncis.de...
CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3

Shouldn't sizeof(int) be 2 here?
As per the post by infobart :

sizeof(int) is at least 16 bits wide. Therefore, it must be
at least (16 + CHAR_BIT - 1) / CHAR_BIT bytes in size (ignoring

^^^^^^^^
any remainder).

int must be able to represent all values in the range -32767 to
+32767.

2 bytes here consisting of 26 bits can represent all these values so,
it should be 2 why 3 then?

Because 3 bytes is also capable of representing all those values.
The standard defines a set of allowable sizes. That is all sizes
that can represent at least the above range of values.

Also (though this is not normative, just recommended practice)
int is intended to be the natural size for a given platform,
that means that for example a 32 bit machine will often have
sizeof(int) == 4 even though CHAR_BIT == 8.

IOW, the standard imposes a lower limit and does require that

not
the actual values are as close as possible to these limits.

--
E-Mail: Mine is a gmx dot de address.

Nov 14 '05 #18
Michael Mair wrote:

Thomas Stegen wrote:
IOW, the standard imposes a lower limit and does require that

not
the actual values are as close as possible to these limits.

Oops, yeah, thanks.

--
Thomas.
Nov 14 '05 #19
On 2005-01-25, 42Bastian Schick <ba*******@yahoo.com> wrote:
Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

Nope. sizeof (char) is 1. It doesn't matter if the char has
8, 16, or 32 bits.

--
Grant Edwards grante Yow! does your DRESSING
at ROOM have enough ASPARAGUS?
visi.com
Nov 14 '05 #20
On 2005-01-25, 42Bastian Schick <ba*******@yahoo.com> wrote:
On Tue, 25 Jan 2005 13:11:04 +0530, "Neo"
<ti***************@yahoo.com> wrote:
Hi All,

Is that true that size of a byte not necessarily 8-bit?
I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.
That's it, they speak of words avoiding the term byte.

The OP is asking about C. In C, 'byte' has a very specific
definition.

--
Grant Edwards grante Yow! Yow! I like my new
at DENTIST...
visi.com
Nov 14 '05 #21
On 2005-01-25, Hans-Bernhard Broeker <br*****@physik.rwth-aachen.de> wrote:
Is that true that size of a byte not necessarily 8-bit? What
the std. says? If that true, then what will the size of an
int, i mean what sizeof(int) should return?
You cross-posted this question to two newsgroups, one of which (c.a.e)
it is seriously off-topic in.

I disagree. I imagine that these days it's mostly in embedded
work where C programmers run across bytes that aren't 8-bits
wide.
In the context of comp.arch.embedded, your question doesn't
make much sense at all.

Yes it does. The only current architectures I'm aware of that
have non-8-bit bytes are used in embedded systems.

--
Grant Edwards grante Yow! Is this TERMINAL fun?
at
visi.com
Nov 14 '05 #22
ba*******@yahoo.com (42Bastian Schick) wrote in
news:41****************@news.individual.de:
Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

No, but maybe you're thinking of CHAR_BITS?

--
- Mark ->
--
Nov 14 '05 #23
ba*******@yahoo.com (42Bastian Schick) wrote in
news:41****************@news.individual.de:
On Tue, 25 Jan 2005 13:11:04 +0530, "Neo"
<ti***************@yahoo.com> wrote:
Hi All,

Is that true that size of a byte not necessarily 8-bit?

I think common sense is that a byte is nowadays 8 bit.

What the std. says? If that true, then what will the size of an int,

Don't mix byte with char ! I don't think there is a std defining the
width of a byte.

i mean what sizeof(int) should return?

On my machine sizeof(char) == sizeof(int).
TMS320C5402 DSP (with 16-bit word size).

^^^
That's it, they speak of words avoiding the term byte.

A reason, to define __u8,__u16,__u32 etc. (or the like) depending on
the cpu and/or compiler.

Leading underscores are for the implementation, not application programs.

--
- Mark ->
--
Nov 14 '05 #24
"Mark A. Odell" wrote:

ba*******@yahoo.com (42Bastian Schick) wrote in
news:41****************@news.individual.de:
Bytes or bits, depending on whether you want size or width.

sizeof(char) is 1, by definition.

Could be 2 on some DSPs (IIRC TI's).

No, but maybe you're thinking of CHAR_BITS?

No, but maybe he's thinking of CHAR_BIT.
Nov 14 '05 #25
Hans-Bernhard Broeker <br*****@physik.rwth-aachen.de> writes:
[...]
Your confusion seems to come from the fact that you don't realize that
sizeof(int), too, is implementation-defined (within limitations set up
by the standard on the range of values an int must at least be able to
hold). In theory, an implementation could have, say

CHAR_BITS == 13
sizeof(short) == 2
sizeof(int) == 3
sizeof(long) == 5
sizeof(float) == 7
sizeof(double) == 9

just for the perverse fun of it.

Quibble: it's CHAR_BIT, not CHAR_BITS.

Given CHAR_BIT == 13, the minimum allowable sizes are:

sizeof(short) >= 2 /* 26 bits */
sizeof(int) >= 2 /* 26 bits */
sizeof(long) >= 3 /* 39 bits */

short and int must have at least 16 bits that aren't padding bits;
long must have at least 32 non-padding bits. int must be at least as
wide as short, and long must be at least as wide as int, but given
padding bits width and size aren't necessarily directly related. An
even more perverse implementation could have:

sizeof(short) == 3
sizeof(int) == 2

I can't imagine any good reason to do this, but the standard allows
it.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #26
On 25 Jan 2005 15:38:47 GMT, Grant Edwards <gr****@visi.com> wrote:
Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

To up this, think PDP-10. 36 bits, unless using very special pointers to packed
structures of five 7-bit ASCII, with a spare bit in the word. But that doesn't
fit too well with C.

Jon
Nov 14 '05 #27

David wrote:
... In the case of the TMS320F24x, sizeof(char) = sizeof(int), with
both being 16-bit. Makes the chip a real pain.

The only inconvenience I can think of is dealing with "packed"
character arrays.
T

Nov 14 '05 #28
<jk*****@easystreet.com> wrote:
<gr****@visi.com> wrote:
Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

To up this, think PDP-10. 36 bits, unless using very special pointers to packed
structures of five 7-bit ASCII, with a spare bit in the word. But that doesn't
fit too well with C.

( Or 4 9-bit characters. Doesn't fit the "all bytes are flat and 8
bits" theory either...)

As others pointed out, many DSPs can access memory only one word at a
time, and the word width is not necessarily a multiple of 8 bits.

Plus:

* CDC-6600 mainframes: 60 bit words. You could pack 10 6-bit bytes in
each word, but they were not directly addressable. (6 bit was enough
for a capital-letters-only reduced character set. FORTRAN and Pascal
did not need more.)

* Early HP-1000 mini-computers, 16 bit words, only word addressable.
(I believe the same applies to early Nova minis)

* PDP-8 minis (and Intersil 6100 copies) - 12 bit words, only word addressable.

I once ported the original Small-C compiler to an HP-2113 mini. (May
not be the right model number.) This one was an "advanced model": it
supported byte addressing. (No stack pointer, of course.)
To keep it compatible with existing software, it used the original
instruction set with 15 bit addresses for a 32 Kword address space,
while a few new instructions took 16 bit addresses for byte operations.
In other words, the word at address N contained the two bytes at
addresses 2N and 2N+1.
A first attempt to make the compiler use byte addressing was soon
abandoned: every address had to be shifted right by one
bit when accessing words, or left untouched for byte access. At the end
I made the chars 16 bit wide and used the word addresses consistently
across the system.
Was there ever a C compiler for CDC-6600/7??? machines?

Roberto Waltman.

Nov 14 '05 #29
On Tue, 25 Jan 2005 21:26:27 GMT, Jonathan Kirwan
<jk*****@easystreet.com> wrote:
On 25 Jan 2005 15:38:47 GMT, Grant Edwards <gr****@visi.com> wrote:
Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

To up this, think PDP-10. 36 bits, unless using very special pointers to packed
structures of five 7-bit ASCII, with a spare bit in the word. But that doesn't
fit too well with C.

Wouldn't work. A byte (in C) must be at least 8 bits wide.

I used to work on a Univac 1100/80, which also had a 36-bit word.
Each word held six 6-bit (FIELDATA) characters, or four 9-bit
(quarter-word) ASCII characters. The latter would work for C, though
pointer arithmetic might be, um, painful.

FWIW, the Univac never had a C compiler when I was working with it.
FORTRAN 77, Pascal, PL/I, COBOL, SNOBOL, Lisp, GPSS, Prolog, and many
others I'm forgetting now, but no C.

Regards,

-=Dave
--
Change is inevitable, progress is not.
Nov 14 '05 #30
On Tue, 25 Jan 2005 17:04:41 -0500, Roberto Waltman <us****@rwaltman.net> wrote:
I once ported the original Small-C compiler to an HP-2113 mini. (May
not be the right model number.) This one was an "advanced model", it
(No stack pointer, of course)

I worked on a timeshared BASIC for the HP-2114 and HP-2116. I don't recall a
2113, though. My experience was with the 2114, 2116, and 21MX. Subroutines
were handled by poking out the first word with the return address and using a
jump indirect through it to return to the caller. No stacks, as you say.

Jon
Nov 14 '05 #31
On Tue, 25 Jan 2005 16:08:07 +0530, "Neo"
<ti***************@yahoo.com> wrote in comp.arch.embedded:

"Peter Nilsson" <ai***@acay.com.au> wrote in message
42Bastian Schick wrote:

[-snip-]
C99 introduced the intN_t types to cater for programs which
do rely on precise width twos complement integer types,
however programs which make use of them are not strictly
conforming.

--
Peter

Why not conforming? Then why does the std. define these?
-Neo

An example is the TMS320C5402 you are using, or the TMS320F2812 that
is one of the processors I am working with these days. They can't
provide an exact width int8_t or uint8_t type, because the processor
does not provide types with exactly that many bits.

But it can support all of the least n bits types, and fastest n bits
types.

Here is a <stdint.h> header that I wrote for use with the TI 2812 and
Code Composer Studio, it will probably work with your 5402. Try
copying it into the compiler include directory, where headers like
<stdio.h>, <string.h>, and so on, are.

And as another poster suggested, do a Google search for the N869 draft
of the C standard and read up on the integer types and this header.

Copy the text between the lines into an editor and save as stdint.h.
===================================================
/*************************************************************************************/
/** \file stdint.h
* type definitions and macros for the subset of C99's stdint.h types
* that Code Composer Studio for the TMS320C2812 DSP supports
*
* \date 15-Aug-2003
*
* \author Jack Klein
*
* \b Notes: \n
* Note: not all C99 integer types are supported on this
* platform, in particular, no 8 bit types at all, and no 64
* bit types
*

*************************************************************************************/

#ifndef H_STDINT
#define H_STDINT

/* 7.18.1.1 */
/* exact-width signed integer types */
typedef signed int int16_t;
typedef signed long int32_t;

/* exact-width unsigned integer types */
typedef unsigned int uint16_t;
typedef unsigned long uint32_t;

/* 7.18.1.2 */
/* smallest type of at least n bits */
/* minimum-width signed integer types */
typedef signed int int_least8_t;
typedef signed int int_least16_t;
typedef signed long int_least32_t;

/* minimum-width unsigned integer types */
typedef unsigned int uint_least8_t;
typedef unsigned int uint_least16_t;
typedef unsigned long uint_least32_t;

/* 7.18.1.3 */

/* fastest minimum-width signed integer types */
typedef signed int int_fast8_t;
typedef signed int int_fast16_t;
typedef signed long int_fast32_t;

/* fastest minimum-width unsigned integer types */
typedef unsigned int uint_fast8_t;
typedef unsigned int uint_fast16_t;
typedef unsigned long uint_fast32_t;

/* 7.18.1.4 integer types capable of holding object pointers */
typedef signed long intptr_t;
typedef unsigned long uintptr_t;

/* 7.18.1.5 greatest-width integer types */
typedef signed long intmax_t;
typedef unsigned long uintmax_t;
#if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS)

/* 7.18.2.1 */

/* maximum values of exact-width signed integer types */
#define INT16_MAX 32767
#define INT32_MAX 2147483647L

/* minimum values of exact-width signed integer types */
#define INT16_MIN (-INT16_MAX-1)
#define INT32_MIN (-INT32_MAX-1) /* -2147483648 is unsigned */

/* maximum values of exact-width unsigned integer types */
#define UINT16_MAX 65535U
#define UINT32_MAX 4294967295UL

/* 7.18.2.2 */

/* maximum values of minimum-width signed integer types */
#define INT_LEAST8_MAX 32767
#define INT_LEAST16_MAX 32767
#define INT_LEAST32_MAX 2147483647L

/* minimum values of minimum-width signed integer types */
#define INT_LEAST8_MIN (-INT_LEAST8_MAX-1)
#define INT_LEAST16_MIN (-INT_LEAST16_MAX-1)
#define INT_LEAST32_MIN (-INT_LEAST32_MAX-1)

/* maximum values of minimum-width unsigned integer types */
#define UINT_LEAST8_MAX 65535U
#define UINT_LEAST16_MAX 65535U
#define UINT_LEAST32_MAX 4294967295UL

/* 7.18.2.3 */

/* maximum values of fastest minimum-width signed integer types */
#define INT_FAST8_MAX 32767
#define INT_FAST16_MAX 32767
#define INT_FAST32_MAX 2147483647L

/* minimum values of fastest minimum-width signed integer types */
#define INT_FAST8_MIN (-INT_FAST8_MAX-1)
#define INT_FAST16_MIN (-INT_FAST16_MAX-1)
#define INT_FAST32_MIN (-INT_FAST32_MAX-1)

/* maximum values of fastest minimum-width unsigned integer types */
#define UINT_FAST8_MAX 65535U
#define UINT_FAST16_MAX 65535U
#define UINT_FAST32_MAX 4294967295UL

/* 7.18.2.4 */

/* maximum value of pointer-holding signed integer type */
#define INTPTR_MAX 2147483647L

/* minimum value of pointer-holding signed integer type */
#define INTPTR_MIN (-INTPTR_MAX-1)

/* maximum value of pointer-holding unsigned integer type */
#define UINTPTR_MAX 4294967295UL

/* 7.18.2.5 */

/* maximum value of greatest-width signed integer type */
#define INTMAX_MAX 2147483647L

/* minimum value of greatest-width signed integer type */
#define INTMAX_MIN (-INTMAX_MAX-1)

/* maximum value of greatest-width unsigned integer type */
#define UINTMAX_MAX 4294967295UL

/* 7.18.3 */

/* limits of ptrdiff_t */
#define PTRDIFF_MAX 2147483647L
#define PTRDIFF_MIN (-PTRDIFF_MAX-1)

/* limits of sig_atomic_t */
#define SIG_ATOMIC_MAX 2147483647L
#define SIG_ATOMIC_MIN (-SIG_ATOMIC_MAX-1)

#endif /* __STDC_LIMIT_MACROS */

#if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS)

/* 7.18.4.1 macros for minimum-width integer constants */
#define INT16_C(x) (x)
#define INT32_C(x) (x ## l)

#define UINT16_C(x) (x ## u)
#define UINT32_C(x) (x ## ul)

/* 7.18.4.2 macros for greatest-width integer constants */
#define INTMAX_C(x) (x ## l)
#define UINTMAX_C(x) (x ## ul)

#endif /* __STDC_CONSTANT_MACROS */

#endif /* H_STDINT */

/* end of stdint.h */
===================================================

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #32
On 25 Jan 2005 13:45:58 GMT, Hans-Bernhard Broeker
<br*****@physik.rwth-aachen.de> wrote in comp.arch.embedded:
[F'up2 cut down --- should have been done by OP!]

In comp.arch.embedded Neo <ti***************@yahoo.com> wrote:
Is that true that size of a byte not necessarily 8-bit?
What the std. says? If that true, then what will the size of an int,
i mean what sizeof(int) should return?

You cross-posted this question to two newsgroups, one of which (c.a.e)
it is seriously off-topic in. Please don't do that, or if you do, at
least explain why, and set a Followup-To. As is, your posting
silently assumes a context that only applies to half your audience,
causing all kinds of needless confusion.

I disagree that this is not relevant to comp.arch.embedded. Many, in
fact most, C programmers are totally unaware of what the C standard
allows in terms of integer and floating point types. It is most
commonly in embedded only platforms, like 16-bit, 24-bit (Mot 56K),
and 32-bit DSPs that this is even a concern these days.

Yes, as others elsewhere in the thread point out, there were other,
and sometimes stranger, architectures in days gone by, but in practice
anyone who hasn't come across such a system by now will almost
certainly only ever come across this issue in the future if they
program a DSP or oddball SOC, and that is only going to happen in
embedded systems.

I think that letting a little light shine in on embedded programmers
is topical here.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #33
Jack Klein <ja*******@spamcop.net> writes:
[...]
An example is the TMS320C5402 you are using, or the TMS320F2812 that
is one of the processors I am working with these days. They can't
provide an exact width int8_t or uint8_t type, because the processor
does not provide types with exactly that many bits.

Strictly speaking, a C compiler for the TMS320C5402 or TMS320F2812
could provide exact width int8_t and uint8_t types (and make
CHAR_BIT==8). It would have to generate extra code to extract and
store 8-bit quantities within 16-bit words, and it would need to
implement a special pointer format that indicates which octet within a
word it points to, as well as the address of the word itself. (This
would be similar to the way Cray's C compiler implements CHAR_BIT==8
on 64-bit vector machines.)

This would almost certainly not be worth the effort -- which I suppose
is close enough to "can't" for practical purposes.

A related, and actually realistic, point is that C99, which requires
64-bit integers, can be implemented on hardware that doesn't support
64-bit types. The compiler just has to do some extra work to compose
64-bit operations from the available 32-bit operations.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #34
toby wrote:
David wrote:
... In the case of the TMS320F24x, sizeof(char) = sizeof(int),
with both being 16-bit. Makes the chip a real pain.

The only inconvenience I can think of is dealing with "packed"
character arrays.

The only nuisance, in C, is that it is not so easy to guarantee
that EOF cannot appear in a char stream. Recall that the char
reading routines return an int, which may contain the (positive)
value of an unsigned char, or some negative value, defined as EOF.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
Nov 14 '05 #35
On 25 Jan 2005 15:38:47 GMT, Grant Edwards <gr****@visi.com> wrote:
On 2005-01-25, 42Bastian Schick <ba*******@yahoo.com> wrote:
On Tue, 25 Jan 2005 13:11:04 +0530, "Neo"
<ti***************@yahoo.com> wrote:
Hi All,

Is that true that size of a byte not necessarily 8-bit?
I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

Maybe, or even sure, but if you read MBytes, KBytes etc., do you
honestly think in 32-bit bytes?

That's just it: they speak of words, avoiding the term byte.

The OP is asking about C. In C, 'byte' has a very specific
definition.

Didn't know that C defines the term byte.
--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #36
A reason, to define __u8,__u16,__u32 etc. (or the like) depending on
the cpu and/or compiler.

Leading underscores are for the implementation, not application programs.

Eh, what do you mean by implementation ?

And what is an application in this regard ?

Is a TCP/IP stack an application in this sense ?

I am confused !

--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #37
42Bastian Schick wrote:
On 25 Jan 2005 15:38:47 GMT, Grant Edwards <gr****@visi.com> wrote:

On 2005-01-25, 42Bastian Schick <ba*******@yahoo.com> wrote:
On Tue, 25 Jan 2005 13:11:04 +0530, "Neo"
<ti***************@yahoo.com> wrote:
Hi All,

Is that true that size of a byte not necessarily 8-bit?

I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

Maybe, or even sure, but if you read MBytes, KBytes etc., do you
honestly think in 32-bit bytes?
That's just it: they speak of words, avoiding the term byte.

The OP is asking about C. In C, 'byte' has a very specific
definition.

Didn't know that C defines the term byte.

http://en.wikipedia.org/wiki/Byte :-)

--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #38
ba*******@yahoo.com (42Bastian Schick) wrote:
On 25 Jan 2005 15:38:47 GMT, Grant Edwards <gr****@visi.com> wrote:
On 2005-01-25, 42Bastian Schick <ba*******@yahoo.com> wrote:
I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

Maybe, or even sure, but if you read MBytes, KBytes etc., do you
honestly think in 32-bit bytes?

When I read megabytes et cetera, I do so in a hard drive spec, not in a
C programming context. When programming, I don't use megabytes, I use
precise numbers, and precise sizes.
(Unhinted crossposts are evil.)

Richard
Nov 14 '05 #39
42Bastian Schick wrote:
A reason, to define __u8,__u16,__u32 etc.
(or the like) depending on
the cpu and/or compiler.

Leading underscores are for the implementation,
not application programs.

Eh, what do you mean by implementation ?

A C implementation is whatever it takes to translate and execute
a C program.
If you have a compiler or a cross compiler,
then that would be part of your implementation.

--
pete
Nov 14 '05 #40
On Wed, 26 Jan 2005 08:18:14 GMT, pete <pf*****@mindspring.com> wrote:
42Bastian Schick wrote:
>> A reason, to define __u8,__u16,__u32 etc.
>> (or the like) depending on
>> the cpu and/or compiler.
>
>Leading underscores are for the implementation,
>not application programs.

Eh, what do you mean by implementation ?

A C implementation is whatever it takes to translate and execute
a C program.
If you have a compiler or a cross compiler,
then that would be part of your implementation.

Means, only the compiler or what comes with it should define types

(Actually, IIRC I've seen it in some Linux source and found it
appealing.)
--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #41
42Bastian Schick wrote:
On Wed, 26 Jan 2005 08:18:14 GMT, pete <pf*****@mindspring.com> wrote:

42Bastian Schick wrote:
>A reason, to define __u8,__u16,__u32 etc.
>(or the like) depending on
>the cpu and/or compiler.

Leading underscores are for the implementation,
not application programs.

Eh, what do you mean by implementation ?
A C implementation is whatever it takes to translate and execute
a C program.
If you have a compiler or a cross compiler,
then that would be part of your implementation.

Means, only the compiler or what comes with it should define types

The compiler, the standard library, maybe the OS.
(Actually, IIRC I've seen it in some Linux source and found it
appealing.)

This is not a matter of style; if you use leading underscores, you
risk nasty consequences. Imagine it were the other way round: the
implementation would use i, j, k, count, num and others, so you could
not declare variables or functions with those names and be sure
that everything works as intended, as the implementation might give you
macros which work on these global variables...

Just have a look at the standard headers; you will probably find
many underscores. If you inadvertently #undef'ed or #defined something
which is needed there, then you might run into subtle trouble.
Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.

Nov 14 '05 #42
On 2005-01-26, 42Bastian Schick <ba*******@yahoo.com> wrote:
I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

Maybe, or even sure, but if you read MBytes, KBytes etc., do you
honestly think in 32-bit bytes?

That's just it: they speak of words, avoiding the term byte.

The OP is asking about C. In C, 'byte' has a very specific
definition.

Didn't know that C defines the term byte.

I hope you don't write C programs.

--
Grant Edwards grante Yow! Could I have a drug
at overdose?
visi.com
Nov 14 '05 #43
Grant Edwards wrote:

On 2005-01-26, 42Bastian Schick <ba*******@yahoo.com> wrote:
I think common sense is that a byte is nowadays 8 bit.

Common sense is often wrong. As yours seems to be in this
case. I've worked on architectures where a byte (a-la "C") was
32 bits.

Maybe, or even sure, but if you read MBytes, KBytes etc., do you
honestly think in 32-bit bytes?

That's just it: they speak of words, avoiding the term byte.

The OP is asking about C. In C, 'byte' has a very specific
definition.

Didn't know that C defines the term byte.

I hope you don't write C programs.

C defines a number of words like "object" and "string", which
have slightly different meanings in other programming contexts.

--
pete
Nov 14 '05 #44
In article <41***************@news.individual.de>
42Bastian Schick <ba*******@yahoo.com> wrote:
Means, only the compiler or what comes with it should define types
Means "the implementor" -- the guy writing your compiler and
surrounding code.
(Actually, IIRC I've seen it in some Linux source and found it
appealing.)

Suppose I, as your implementor, put this in <stdio.h> (contrary to
the standard's requirement that I do not do this):

#define i 42
#define tmp "ha ha"

Now you, the user, write your program, including a function:

#include <stdio.h>
...
void f(...) {
int i;
char *tmp;
...

When you go to compile this, the compiler "sees" the top of your
function f() written as:

int 42;
char *"ha ha";

and gives you a bunch of strange syntax error messages.

I, the implementor, keep out of your way by making sure I do this:

/* flags for __sflag field */
#define __SRD 1
#define __SWR 2
[etc]

You, the user, keep out of my way by not using names like "__SRD".
For the most part, all the names starting with "_" are mine, and
all the rest are yours. If I define "i" or "tmp", it is my fault
for breaking your code. If you define _ZOG, it is your fault for
breaking my code.

Now, what if you are neither the implementor, nor the end user?
What if you are the guy writing a library for doing graphics, or
playing chess, or whatever? What names do *you* get? (The Standard
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #45
Chris Torek wrote:
42Bastian Schick <ba*******@yahoo.com> wrote:
Means, only the compiler or what comes with it should define

Means "the implementor" -- the guy writing your compiler and
surrounding code.
(Actually, IIRC I've seen it in some Linux source and found it
appealing.)

Suppose I, as your implementor, put this in <stdio.h> (contrary
to the standard's requirement that I do not do this):

#define i 42
#define tmp "ha ha"

Now you, the user, write your program, including a function:

#include <stdio.h>
...
void f(...) {
int i;
char *tmp;
...

When you go to compile this, the compiler "sees" the top of your
function f() written as:

int 42;
char *"ha ha";

and gives you a bunch of strange syntax error messages.

I, the implementor, keep out of your way by making sure I do this:

/* flags for __sflag field */
#define __SRD 1
#define __SWR 2
[etc]

You, the user, keep out of my way by not using names like "__SRD".
For the most part, all the names starting with "_" are mine, and
all the rest are yours. If I define "i" or "tmp", it is my fault
for breaking your code. If you define _ZOG, it is your fault for
breaking my code.

Now, what if you are neither the implementor, nor the end user?
What if you are the guy writing a library for doing graphics, or
playing chess, or whatever? What names do *you* get? (The

Which is all nice and clear for those not yet aware. As far as the
last paragraph is concerned, common practice is to add a
(hopefully) unique prefix to all names in that library. Some use
prefix, underscore, and then the normal name, as in "hsh_find". I
leave out the underscore. With luck it all works.

Nov 14 '05 #46
yeah, I think I get to know them, too.
Thank you.

Nov 14 '05 #47
On 26 Jan 2005 19:25:59 -0800, "Zilla" <bi******@gmail.com> wrote:
yeah, I think I get to know them, too.
Thank you.

???? Know what ?

--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #48
>>
Didn't know that C defines the term byte.

I hope you don't write C programs.

I do, but prefer assembler :-)
--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)
Nov 14 '05 #49
>
Suppose I, as your implementor, put this in <stdio.h> (contrary to
the standard's requirement that I do not do this):

#define i 42
#define tmp "ha ha"

Now you, the user, write your program, including a function:

#include <stdio.h>
...
void f(...) {
int i;
char *tmp;
...

When you go to compile this, the compiler "sees" the top of your
function f() written as:

int 42;
char *"ha ha";

and gives you a bunch of strange syntax error messages.

I, the implementor, keep out of your way by making sure I do this:

/* flags for __sflag field */
#define __SRD 1
#define __SWR 2
[etc]

You, the user, keep out of my way by not using names like "__SRD".
For the most part, all the names starting with "_" are mine, and
all the rest are yours. If I define "i" or "tmp", it is my fault
for breaking your code. If you define _ZOG, it is your fault for
breaking my code.

Now, that's clear. Thanks.

--
42Bastian
Do not email to ba*******@yahoo.com, it's a spam-only account :-)