Bytes IT Community

Integer sizes

The most common sizes of integer types seem to be:

8 bits - signed char
16 bits - short
16 or 32 bits - int
32 or 64 bits - long

Question #1:

Does anyone know how common sizes other than these are? I know that the
standard says the number of bits in a "byte" (char) is >= 8, but I have not
come across values other than these yet ...

Question #2:

Is anyone aware of a package that has some portable definitions of things
like Int8, Int16 and Int32 (assuming they are possible for the platform)?
If not, I think you could do this using the definitions in <limits.h>
(<climits> ?). I am aware that the preprocessor doesn't understand the
sizeof operator, so you can't do it that way.

Thanks,

David F
Jul 22 '05 #1
14 Replies


On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
<no****@nospam.nospam.nospam> wrote in comp.lang.c++:
> The most common sizes of integer types seem to be:
>
> 8 bits - signed char
> 16 bits - short
> 16 or 32 bits - int
> 32 or 64 bits - long
>
> Question #1:
>
> Does anyone know how common sizes other than these are? I know that the
> standard says the number of bits in a "byte" (char) is >= 8, but I have not
> come across values other than these yet ...
Generally not for ordinary processors, from 8-bit on up to 64-bit or
so, but this is quite common for DSPs (Digital Signal Processors).
Right now I'm coding for a Texas Instruments DSP that doesn't access
memory in anything smaller than 16 bit words. CHAR_BIT is 16 on that
platform.

There are some DSPs that only deal with 32 bit quantities, so the
character types, shorts, ints, and longs are all 32 bits.

There were some early members of the Motorola 56K DSP family with
24-bit word size. IIRC, char, short and int were all 24 bits, and
long was 48 bits.
> Question #2:
>
> Is anyone aware of a package that has some portable definitions of things
> like Int8, Int16 and Int32 (assuming they are possible for the platform)?
> If not, I think you could do this using the definitions in <limits.h>
> (<climits> ?). I am aware that the preprocessor doesn't understand the
> sizeof operator, so you can't do it that way.
>
> Thanks,
>
> David F


The current 1999 version of the C standard includes a header named
<stdint.h> that contains typedefs for a variety of integer types. The
required types include: int_least#_t, uint_least#_t, int_fast#_t
and uint_fast#_t, where # includes at least 8, 16, 32, and 64.

The (u)int_least#_t are typedefs for the integer types occupying the
least amount of memory containing at least that many bits.

The (u)int_fast#_t are typedefs for the integer types containing at
least that many bits that are the fastest for the processor to use, if
there is more than one type with at least that number of bits.

The types that are optional under some circumstances under C99 are:

(u)int8_t, (u)int16_t, (u)int32_t, and (u)int64_t.

These are optional because some implementations might not actually
have hardware types with exactly those widths. If a platform actually
has integer types with exactly those bit widths, if those types have
no padding bits, and if the signed versions use 2's complement
representation, then the implementation must provide the typedefs.

But such a header must be tailored to the compiler; that is, the typedefs
must be set to the appropriate native types. Of course, you only do that
once for a particular compiler, then you can include and use the
header from then on.

I would suggest using these types, as they will be portable across C
compilers and almost certainly included in the next version of the C++
standard. Since almost all C++ compilers also include a C compiler,
many C++ compilers come with this header already. With the possible
exception of the 64-bit types, which correspond to C's signed and
unsigned "long long", C++ implementations should be able to use them as well.

Here's a sample of a <stdint.h> header that should work with just
about every C and C++ compiler for the Intel x86 processors, older 16
bit compilers as well as newer 32 bit ones (with the possible
exception of the 64 bit types).

#ifndef STDINT_H_INCLUDED
#define STDINT_H_INCLUDED
typedef signed char int8_t;
typedef unsigned char uint8_t;
typedef signed short int16_t;
typedef unsigned short uint16_t;
typedef signed long int32_t;
typedef unsigned long uint32_t;
typedef signed long long int64_t;
typedef unsigned long long uint64_t;

typedef signed char int_least8_t;
typedef unsigned char uint_least8_t;
typedef signed short int_least16_t;
typedef unsigned short uint_least16_t;
typedef signed long int_least32_t;
typedef unsigned long uint_least32_t;
typedef signed long long int_least64_t;
typedef unsigned long long uint_least64_t;

typedef signed char int_fast8_t;
typedef unsigned char uint_fast8_t;
typedef signed short int_fast16_t;
typedef unsigned short uint_fast16_t;
typedef signed long int_fast32_t;
typedef unsigned long uint_fast32_t;
typedef signed long long int_fast64_t;
typedef unsigned long long uint_fast64_t;

#define INT8_MAX 127
#define INT16_MAX 32767
#define INT32_MAX 2147483647
#define INT64_MAX 9223372036854775807LL

#define INT8_MIN (-128)
#define INT16_MIN (-INT16_MAX - 1)
#define INT32_MIN (-INT32_MAX - 1)
#define INT64_MIN (-INT64_MAX - 1)

#define UINT8_MAX 255
#define UINT16_MAX 65535
#define UINT32_MAX 4294967295UL
#define UINT64_MAX 18446744073709551615ULL

#define INT_LEAST8_MAX 127
#define INT_LEAST16_MAX 32767
#define INT_LEAST32_MAX 2147483647
#define INT_LEAST64_MAX 9223372036854775807LL

#define INT_LEAST8_MIN (-128)
#define INT_LEAST16_MIN (-INT_LEAST16_MAX - 1)
#define INT_LEAST32_MIN (-INT_LEAST32_MAX - 1)
#define INT_LEAST64_MIN (-INT_LEAST64_MAX - 1)

#define UINT_LEAST8_MAX 255
#define UINT_LEAST16_MAX 65535
#define UINT_LEAST32_MAX 4294967295UL
#define UINT_LEAST64_MAX 18446744073709551615ULL

#define INT_FAST8_MAX 127
#define INT_FAST16_MAX 32767
#define INT_FAST32_MAX 2147483647
#define INT_FAST64_MAX 9223372036854775807LL

#define INT_FAST8_MIN (-128)
#define INT_FAST16_MIN (-INT_FAST16_MAX - 1)
#define INT_FAST32_MIN (-INT_FAST32_MAX - 1)
#define INT_FAST64_MIN (-INT_FAST64_MAX - 1)

#define UINT_FAST8_MAX 255
#define UINT_FAST16_MAX 65535
#define UINT_FAST32_MAX 4294967295UL
#define UINT_FAST64_MAX 18446744073709551615ULL
#endif

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq
Jul 22 '05 #2

On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
<no****@nospam.nospam.nospam> wrote:
> The most common sizes of integer types seem to be:
>
> 8 bits - signed char
> 16 bits - short
> 16 or 32 bits - int
> 32 or 64 bits - long
>
> Question #1:
>
> Does anyone know how common sizes other than these are? I know that the
> standard says the number of bits in a "byte" (char) is >= 8, but I have not
> come across values other than these yet ...
>
> Question #2:
>
> Is anyone aware of a package that has some portable definitions of things
> like Int8, Int16 and Int32 (assuming they are possible for the platform)?
> If not, I think you could do this using the definitions in <limits.h>
> (<climits> ?). I am aware that the preprocessor doesn't understand the
> sizeof operator, so you can't do it that way.


I work with a C compiler for which, by default, int is 8 bits.

Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*. Most compilers come with headers, I believe,
where they define numeric types more intelligently, like

__INT08
__INT16
...
__UINT08
__UINT16

and so on. I haven't used them myself, possibly because of the extra
typing (keyboard typing, I mean), and to play along with existing code.
There must be, though I can't tell you for sure, standard-conforming
and comprehensible numeric types.

I sure hope something is done about this, and real soon, to help us
rid ourselves of "int" and all that.

Yours.
dan

Jul 22 '05 #3

"Jack Klein" <ja*******@spamcop.net> wrote:
>> Question #2:
>>
>> Is anyone aware of a package that has some portable definitions of
>> things like Int8, Int16 and Int32 (assuming they are possible for the
>> platform)? If not, I think you could do this using the definitions in
>> <limits.h> (<climits> ?). I am aware that the preprocessor doesn't
>> understand the sizeof operator, so you can't do it that way.
>
> The current 1999 version of the C standard includes a header named
> <stdint.h> that contains typedefs for a variety of integer types. The
> required types include: int_least#_t, uint_least#_t, int_fast#_t
> and uint_fast#_t, where # includes at least 8, 16, 32, and 64.


Thank you!

I hadn't heard of <stdint.h>. Any other new & useful headers come with C99?

> I would suggest using these types, as they will be portable across C
> compilers and almost certainly included in the next version of the C++
> standard.

Sounds good ... does anyone know about proposed merges between C99 and
C++ and / or when this might happen?

Just for interest, here is an attempt at a portable header file for defining
(U)Int8, (U)Int16 and (U)Int32 (indentation got destroyed) ...

David F

--- code ---

#ifndef INTEGER_H_INCLUDED
#define INTEGER_H_INCLUDED

// Defines the following types:
//
// Int8, UInt8, Int16, UInt16, Int32, UInt32
//
// If a fundamental type is not available, this header should still compile,
// but there will be an error if the type is actually used

#include <limits.h>

#if defined(INT8_S_TYPE) || defined(INT8_U_TYPE) || \
    defined(INT16_TYPE) || defined(INT32_TYPE)
#error "Temporary macro name already in use"
#endif

// find 8 bit integer type

#if (UCHAR_MAX == 0xFF)
#define INT8_S_TYPE signed char
#define INT8_U_TYPE unsigned char
#endif

#if !defined(INT8_S_TYPE) && (USHRT_MAX == 0xFF)
#define INT8_S_TYPE short
#define INT8_U_TYPE unsigned short
#endif

// find 16 bit integer type

#if (USHRT_MAX == 0xFFFF)
#define INT16_TYPE short
#endif

#if !defined(INT16_TYPE) && (UINT_MAX == 0xFFFF)
#define INT16_TYPE int
#endif

// find 32 bit integer type

// try to avoid comparing to an illegal value if there is no 32 bit type
#if (ULONG_MAX > 0xFFFF)
#if (ULONG_MAX > 0xFFFFFF)
#if (USHRT_MAX == 0xFFFFFFFF)
#define INT32_TYPE short
#endif
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE int
#endif
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE long
#endif
#endif
#endif // ULONG_MAX > 0xFFFF

// create Int8 typedef

#ifdef INT8_S_TYPE
typedef INT8_S_TYPE Int8;
typedef INT8_U_TYPE UInt8;
#else
struct Int8Undefined;
typedef Int8Undefined Int8;
typedef Int8Undefined UInt8;
#endif // INT8_S_TYPE

// create Int16 typedef

#ifdef INT16_TYPE
typedef INT16_TYPE Int16;
typedef unsigned INT16_TYPE UInt16;
#else
struct Int16Undefined;
typedef Int16Undefined Int16;
typedef Int16Undefined UInt16;
#endif // INT16_TYPE

// create Int32 typedef

#ifdef INT32_TYPE
typedef INT32_TYPE Int32;
typedef unsigned INT32_TYPE UInt32;
#else
struct Int32Undefined;
typedef Int32Undefined Int32;
typedef Int32Undefined UInt32;
#endif // INT32_TYPE

// undefine temporary macros

#undef INT8_S_TYPE
#undef INT8_U_TYPE
#undef INT16_TYPE
#undef INT32_TYPE

#endif // INTEGER_H_INCLUDED

--- end code ---
Jul 22 '05 #4

Good post, thank you!

Jul 22 '05 #5

Correction to previous post:

--- code ---
// find 32 bit integer type
...
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE int
#endif
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF) // ---> change UINT_MAX to ULONG_MAX
#define INT32_TYPE long
#endif

--- end code ---

David F
Jul 22 '05 #6

> I work with a C compiler for which, by default, int is 8 bits.

IIRC according to the standard an int should be at least 16 bits. It seems
that your C compiler isn't very standard compliant.
> Personally, I HATE all this business of int, long, long long, double,
> double double, and all this incomprehensible crap. *And Non-Portable
> Crap, I might add*. Most compilers come with headers, I believe,
> where they define numeric types more intelligently, like


Yes, it is a pain when writing cross-platform code. Unfortunately, any
constructs a compiler may offer to explicitly specify the size are
non-standard. I would welcome standardization in this area.

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl
Jul 22 '05 #7

Dan W. wrote:
> On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
> <no****@nospam.nospam.nospam> wrote:
>> The most common sizes of integer types seem to be:
>>
>> 8 bits - signed char
>> 16 bits - short
>> 16 or 32 bits - int
>> 32 or 64 bits - long
>>
>> Question #1:
>>
>> Does anyone know how common sizes other than these are? I know that
>> the standard says the number of bits in a "byte" (char) is >= 8, but
>> I have not come across values other than these yet ...
>>
>> Question #2:
>>
>> Is anyone aware of a package that has some portable definitions of
>> things like Int8, Int16 and Int32 (assuming they are possible for the
>> platform)? If not, I think you could do this using the definitions in
>> <limits.h> (<climits> ?). I am aware that the preprocessor doesn't
>> understand the sizeof operator, so you can't do it that way.
>
> I work with a C compiler for which, by default, int is 8 bits.

Then that compiler is not ISO C compliant. If it were, int would be at
least 16 bits.

> Personally, I HATE all this business of int, long, long long, double,
> double double, and all this incomprehensible crap. *And Non-Portable
> Crap, I might add*.

Why do you think that this is non-portable? For some situations in
low-level programming, it might be good to have types of specific
sizes, but most of the time, there is no need for them.

> Most compilers come with headers, I believe,
> where they define numeric types more intelligently, like
>
> __INT08
> __INT16
> ...
> __UINT08
> __UINT16
>
> and so on.

What if there is no 8 bit type? And why would you need a type of
_exactly_ 16 bits instead of one of at least 16 bits?

> I haven't used them myself, possibly because of the extra
> typing (keyboard typing I mean), and to play along with existing code.
> There must be, though I can't tell you for sure, standard conforming
> and comprehensible numeric types.

There are.

> I sure hope something is done about this, and real soon, to help us
> rid ourselves of "int" and all that.

As someone already mentioned, in C, you have <stdint.h>, which is a lot
more useful than what you describe above, because it adds the
possibility to get speed or size optimized types, which are way more
useful than fixed size types.

Jul 22 '05 #8

On Thu, 4 Dec 2003 10:33:18 +0100, "Peter van Merkerk"
<me*****@deadspam.com> wrote:
>> I work with a C compiler for which, by default, int is 8 bits.
>
> IIRC according to the standard an int should be at least 16 bits. It seems
> that your C compiler isn't very standard compliant.


The compiler is compliant, I believe, perhaps because it allows you
to change the meaning of int to 16 bits via a switch. It is a compiler
for PIC microcontrollers, which have 8-bit registers and instructions
only. I guess most people don't aim to port PIC programs to POSIX or
WinXP .. :-)
>> Personally, I HATE all this business of int, long, long long, double,
>> double double, and all this incomprehensible crap. *And Non-Portable
>> Crap, I might add*. Most compilers come with headers, I believe,
>> where they define numeric types more intelligently, like
>
> Yes, it is a pain when writing cross-platform code. Unfortunately, any
> constructs a compiler may offer to explicitly specify the size are
> non-standard. I would welcome standardization in this area.


I'd like to see something really drastic done about this whole issue,
maybe something like announcing that keywords such as int, long,
short, etc. are to be deprecated and eventually removed from the
language, to encourage change. But for this to happen, something else
must come first: Units.

There's a company that came up with a template library called SIUnits,
which allows you to define classes representing distances, areas,
volumes, weights, densities, pressures, electron densities,
currencies, temperatures, and what not; expressed in various systems
of units. Their system is a bit too complex for my taste, including
physics models for six areas of application, including scientific,
relativistic and non-relativistic...

I'd be happy with a system that just makes sure you don't add
millimeters to inches and divide by Celsius to get dollars, if you
know what I mean. Then, for every project, one might have a units.h
header dealing with numeric representations and units typedefs, and
then use REAL quantities for the rest of the program, and NEVER say
int or float or uchar or extra-double.

Cheers!
Jul 22 '05 #9

>> I work with a C compiler for which, by default, int is 8 bits.

> Then that compiler is not ISO C compliant. If it were, int would be at
> least 16 bits.
It's a compiler for 8-bit microcontrollers, and it allows changing int
to 16-bits via a switch. It's advertised as compliant.
>> Personally, I HATE all this business of int, long, long long, double,
>> double double, and all this incomprehensible crap. *And Non-Portable
>> Crap, I might add*.
>
> Why do you think that this is non-portable? For some situations in
> low-level programming, it might be good to have types of specific
> sizes, but most of the time, there is no need for them.

(snip)

> What if there is no 8 bit type? And why would you need a type of
> _exactly_ 16 bits instead of one of at least 16 bits?


Low level programming situations are all too common. I was getting
into the Eiffel programming language at one time, which only had one
type, INTEGER, which was 32 bits. Their argument was: "how does it hurt
you to use 32 bits where you'd use 8?" Well, imagine you create a
class or struct for RGBcolor that ends up taking 16 bytes instead of
four, and then try to manipulate 1024 x 1024 images... I ended up
coming back to C++ just for that reason: not enough low-level control.
> As someone already mentioned, in C, you have <stdint.h>, which is a lot
> more useful than what you describe above, because it adds the
> possibility to get speed or size optimized types, which are way more
> useful than fixed size types.


Yes, I saved his post to a text file on my desktop.

I like to have control, and know what's going on under the hood. If
8-bit ints are not available and I use them, I'd prefer a compiler
warning, or even a run-time exception, rather than an automatic
'upgrade' to 16-bits.

Cheers!
Jul 22 '05 #10

>>> I work with a C compiler for which, by default, int is 8 bits.
>>
>> IIRC according to the standard an int should be at least 16 bits. It seems
>> that your C compiler isn't very standard compliant.
>
> The compiler is compliant, I believe, perhaps because it allows you
> to change the meaning of int to 16 bits via a switch.


In that case it is only compliant when you use that switch.
> It is a compiler
> for PIC microcontrollers, which have 8-bit registers and instructions
> only.
That explains something; the PIC processor family has a kinda weird
instruction set. OTOH with the cc65 compiler for the 6502 processor (from
the good old days: a true 8-bit processor with no support for 16-bit
operations other than having a carry flag) the int size is 16 bits. But
because 16-bit operations are slow on this processor, the compiler
documentation recommends against using ints and recommends using chars
instead whenever you can.
> I guess most people don't aim to port PIC programs to POSIX or
> WinXP .. :-)
I guess the PIC compiler doesn't come with a windowing library either :-)
In some cases the target platform is so different from any other platform
that cross-platform compatibility is a non-issue anyway.
>>> Personally, I HATE all this business of int, long, long long, double,
>>> double double, and all this incomprehensible crap. *And Non-Portable
>>> Crap, I might add*. Most compilers come with headers, I believe,
>>> where they define numeric types more intelligently, like
>>
>> Yes, it is a pain when writing cross-platform code. Unfortunately, any
>> constructs a compiler may offer to explicitly specify the size are
>> non-standard. I would welcome standardization in this area.
>
> I'd like to see something really drastic done about this whole issue,
> maybe something like announcing that keywords such as int, long,
> short, etc. are to be deprecated and eventually removed from the
> language, to encourage change.


Don't hold your breath, I don't see that happening in my lifetime (I do
plan to be around for another couple of decades).
> But for this to happen, something else
> must come first: Units.
>
> There's a company that came up with a template library called SIUnits,
> which allows you to define classes representing distances, areas,
> volumes, weights, densities, pressures, electron densities,
> currencies, temperatures, and what not; expressed in various systems
> of units. Their system is a bit too complex for my taste, including
> physics models for six areas of application, including scientific,
> relativistic and non-relativistic...
>
> I'd be happy with a system that just makes sure you don't add
> millimeters to inches and divide by Celsius to get dollars, if you
> know what I mean.


I understand what you mean. You can do that easily in C++; just make
classes for millimeters, inches, Celsius and dollars and a bit of operator
overloading magic. Having done that, if you add millimeters to inches, you
can either choose to get a compile error, or apply the correct conversion
automagically.

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl
Jul 22 '05 #11

On Thu, 04 Dec 2003 01:23:36 -0500, Dan W. <da**@raytron-controls.com>
wrote in comp.lang.c++:

[snip]
> I work with a C compiler for which, by default, int is 8 bits.
The people who produced it claim that it is a C compiler, but they are
lying, it is not. It is a different language that imitates part of C.
> Personally, I HATE all this business of int, long, long long, double,
> double double, and all this incomprehensible crap. *And Non-Portable
> Crap, I might add*. Most compilers come with headers, I believe,
> where they define numeric types more intelligently, like
>
> __INT08
> __INT16
> ...
> __UINT08
> __UINT16


Yes, and other compilers come with libraries that define them as U8,
S16, and so on, or BYTE and UWORD, and so on, and dozens of similar,
but not identical, varieties.

That's the whole point of using the C standard <stdint.h>. They are
probably not the prettiest names for exact width types ever invented,
but they are standard now in C and will be standard in C++.

So you won't have to edit the compiler-specific type names to a
different set of compiler-specific type names each time you port your
code to a different compiler or processor.

Any set of standard names that clearly express what they define is
better than dozens of non-standard names made up independently by each
compiler and library writer.

Jul 22 '05 #12

On Thu, 04 Dec 2003 16:10:07 +1100, David Fisher wrote:
> The most common sizes of integer types seem to be:
>
> 8 bits - signed char
> 16 bits - short
> 16 or 32 bits - int
> 32 or 64 bits - long
>
> Question #1:
>
> Does anyone know how common sizes other than these are? I know that the
> standard says the number of bits in a "byte" (char) is >= 8, but I have not
> come across values other than these yet ...


Apparently a lot of embedded systems use char, short, int and long of 32
bits. Some older machines (PDP?) used 9 bit chars (and 36 bit ints, IIRC.)

Simplest solution: use the size guaranteed to have _at least_ as many bits
as you need, but don't assume it'll _only_ have that many.
Jul 22 '05 #13

On Thu, 4 Dec 2003 14:51:46 +0100, "Peter van Merkerk"
<me*****@deadspam.com> wrote:

> I understand what you mean. You can do that easily in C++; just make
> classes for millimeters, inches, Celsius and dollars and a bit of operator
> overloading magic. Having done that, if you add millimeters to inches, you
> can either choose to get a compile error, or apply the correct conversion
> automagically.


Nothing is as easy as it first sounds in this business; luckily some
people at Boost are taking a crack at it:

http://lists.boost.org/MailArchives/boost/msg29353.php

Here's a link to some attempt at a formal specification:

http://www.servocomm.freeserve.co.uk...ical_quantity/

Cheers!
Jul 22 '05 #14

"David Fisher" <no****@nospam.nospam.nospam> wrote in message news:<3n******************@nasal.pacific.net.au>.. .
> The most common sizes of integer types seem to be:
>
> 8 bits - signed char
> 16 bits - short
> 16 or 32 bits - int
> 32 or 64 bits - long
>
> Question #1:
>
> Does anyone know how common sizes other than these are? I know that the
> standard says the number of bits in a "byte" (char) is >= 8, but I have not
> come across values other than these yet ...


Well, my current CPU has char=16 bits, short=16 bits, int=16 bits,
long=32 bits (thank goodness).

As for why I'd want to use octets - well, a factor of 2 expansion in
ROM size ain't nice, and also would make the peripherals (sound,
graphics, etc.) behave funny. So the data has to be preprocessed to
join it up into 16 bit chunks.
Jul 22 '05 #15
