Bytes | Software Development & Data Engineering Community

Integer sizes

The most common sizes of integer types seem to be:

8 bits - signed char
16 bits - short
16 or 32 bits - int
32 or 64 bits - long

Question #1:

Does anyone know how common sizes other than these are ? I know that the
standard says that number of bits in a "byte" (char) is >= 8, but I have not
come across values other than these yet ...

Question #2:

Is anyone aware of a package that has some portable definitions of things
like Int8, Int16 and Int32 (assuming they are possible for the platform) ?
If not, I think you could do this using the definitions in <limits.h>
(<climits> ?). I am aware that the preprocessor doesn't understand the
sizeof operator, so you can't do it that way.

Thanks,

David F
Jul 22 '05 #1
14 replies · 13033 views
On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
<no****@nospam.nospam.nospam> wrote in comp.lang.c++:
The most common sizes of integer types seem to be:

8 bits - signed char
16 bits - short
16 or 32 bits - int
32 or 64 bits - long

Question #1:

Does anyone know how common sizes other than these are ? I know that the
standard says that number of bits in a "byte" (char) is >= 8, but I have not
come across values other than these yet ...
Generally not for ordinary processors, from 8-bit on up to 64-bit or
so, but this is quite common for DSPs (Digital Signal Processors).
Right now I'm coding for a Texas Instruments DSP that doesn't access
memory in anything smaller than 16 bit words. CHAR_BIT is 16 on that
platform.

There are some DSPs that only deal with 32 bit quantities, so the
character types, shorts, ints, and longs are all 32 bits.

There were some early members of the Motorola 56K DSP family with
24-bit word size. IIRC, char, short and int were all 24 bits, and
long was 48 bits.
Question #2:

Is anyone aware of a package that has some portable definitions of things
like Int8, Int16 and Int32 (assuming they are possible for the platform) ?
If not, I think you could do this using the definitions in <limits.h>
(<climits> ?). I am aware that the preprocessor doesn't understand the
sizeof operator, so you can't do it that way.

Thanks,

David F


The current 1999 version of the C standard includes a header named
<stdint.h> that contains typedefs for a variety of integer types. The
required types include: int_least#_t, uint_least#_t, int_fast#_t
and uint_fast#_t, where # includes at least 8, 16, 32, and 64.

The (u)int_least#_t are typedefs for the integer types occupying the
least amount of memory containing at least that many bits.

The (u)int_fast#_t are typedefs for the integer types containing at
least that many bits that are the fastest for the processor to use, if
there is more than one type with at least that number of bits.

The types that are optional under some circumstances under C99 are:

(u)int8_t, (u)int16_t, (u)int32_t, and (u)int64_t.

These are optional because some implementations might not actually
have hardware types with exactly those widths. If a platform actually
has integer types with exactly those bit widths, if those types have
no padding bits, and if the signed versions use 2's complement
representation, then the implementation must provide the typedefs.

But such a header must be tailored to the compiler; that is, the
typedefs must be set to the appropriate native types. Of course you
only do that once for a particular compiler, then you can include and
use the header forever.

I would suggest using these types, as they will be portable across C
compilers and almost certainly included in the next version of the C++
standard. Since almost all C++ compilers also include a C compiler,
many C++ compilers come with this header already, and C++
implementations should be able to use these types as well, with the
possible exception of the 64-bit ones, which map to the C signed and
unsigned "long long".

Here's a sample of a <stdint.h> header that should work with just
about every C and C++ compiler for the Intel x86 processors, older 16
bit compilers as well as newer 32 bit ones (with the possible
exception of the 64 bit types).

#ifndef STDINT_H_INCLUDED
#define STDINT_H_INCLUDED
typedef signed char int8_t;
typedef unsigned char uint8_t;
typedef signed short int16_t;
typedef unsigned short uint16_t;
typedef signed long int32_t;
typedef unsigned long uint32_t;
typedef signed long long int64_t;
typedef unsigned long long uint64_t;

typedef signed char int_least8_t;
typedef unsigned char uint_least8_t;
typedef signed short int_least16_t;
typedef unsigned short uint_least16_t;
typedef signed long int_least32_t;
typedef unsigned long uint_least32_t;
typedef signed long long int_least64_t;
typedef unsigned long long uint_least64_t;

typedef signed char int_fast8_t;
typedef unsigned char uint_fast8_t;
typedef signed short int_fast16_t;
typedef unsigned short uint_fast16_t;
typedef signed long int_fast32_t;
typedef unsigned long uint_fast32_t;
typedef signed long long int_fast64_t;
typedef unsigned long long uint_fast64_t;

#define INT8_MAX 127
#define INT16_MAX 32767
#define INT32_MAX 2147483647
#define INT64_MAX 9223372036854775807LL

#define INT8_MIN -128
#define INT16_MIN (-INT16_MAX - 1)
#define INT32_MIN (-INT32_MAX - 1)
#define INT64_MIN (-INT64_MAX - 1)

#define UINT8_MAX 255
#define UINT16_MAX 65535
#define UINT32_MAX 4294967295
#define UINT64_MAX 18446744073709551615ULL

#define INT_LEAST8_MAX 127
#define INT_LEAST16_MAX 32767
#define INT_LEAST32_MAX 2147483647
#define INT_LEAST64_MAX 9223372036854775807LL

#define INT_LEAST8_MIN -128
#define INT_LEAST16_MIN (-INT_LEAST16_MAX - 1)
#define INT_LEAST32_MIN (-INT_LEAST32_MAX - 1)
#define INT_LEAST64_MIN (-INT_LEAST64_MAX - 1)

#define UINT_LEAST8_MAX 255
#define UINT_LEAST16_MAX 65535
#define UINT_LEAST32_MAX 4294967295
#define UINT_LEAST64_MAX 18446744073709551615ULL

#define INT_FAST8_MAX 127
#define INT_FAST16_MAX 32767
#define INT_FAST32_MAX 2147483647
#define INT_FAST64_MAX 9223372036854775807LL

#define INT_FAST8_MIN (-128)
#define INT_FAST16_MIN (-INT_FAST16_MAX - 1)
#define INT_FAST32_MIN (-INT_FAST32_MAX - 1)
#define INT_FAST64_MIN (-INT_FAST64_MAX - 1)

#define UINT_FAST8_MAX 255
#define UINT_FAST16_MAX 65535
#define UINT_FAST32_MAX 4294967295
#define UINT_FAST64_MAX 18446744073709551615ULL
#endif

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq
Jul 22 '05 #2
On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
<no****@nospam.nospam.nospam> wrote:
The most common sizes of integer types seem to be:

8 bits - signed char
16 bits - short
16 or 32 bits - int
32 or 64 bits - long

Question #1:

Does anyone know how common sizes other than these are ? I know that the
standard says that number of bits in a "byte" (char) is >= 8, but I have not
come across values other than these yet ...

Question #2:

Is anyone aware of a package that has some portable definitions of things
like Int8, Int16 and Int32 (assuming they are possible for the platform) ?
If not, I think you could do this using the definitions in <limits.h>
(<climits> ?). I am aware that the preprocessor doesn't understand the
sizeof operator, so you can't do it that way.


I work with a C compiler for which, by default, int is 8 bits.

Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*. Most compilers come with headers, I believe,
where they define numeric types more intelligently, like

__INT08
__INT16
...
__UINT08
__UINT16

and so on. I haven't used them myself, possibly because of the extra
typing (keyboard typing I mean), and to play along with existing code.
There must be, though I can't tell you for sure, standard conforming
and comprehensible numeric types.

I sure hope something is done about this, and real soon, to help us
rid ourselves of "int" and all that.

Yours.
dan

Jul 22 '05 #3
"Jack Klein" <ja*******@spamcop.net> wrote:
Question #2:

Is anyone aware of a package that has some portable definitions of things
like Int8, Int16 and Int32 (assuming they are possible for the platform) ?
If not, I think you could do this using the definitions in <limits.h>
(<climits> ?). I am aware that the preprocessor doesn't understand the
sizeof operator, so you can't do it that way.
The current 1999 version of the C standard includes a header named
<stdint.h> that contains typedefs for a variety of integer types. The
required types include: int_least#_t, uint_least#_t, int_fast#_t
and uint_fast#_t, where # includes at least 8, 16, 32, and 64.


Thank you !

I hadn't heard of <stdint.h>. Any other new & useful headers come with C99 ?
I would suggest using these types, as they will be portable across C
compilers and almost certainly included in the next version of the C++
standard.


Sounds good ... does anyone know about proposed merges between C99 and
C++ and / or when this might happen ?

Just for interest, here is an attempt at a portable header file for defining
(U)Int8, (U)Int16 and (U)Int32 (indentation got destroyed) ...

David F

--- code ---

#ifndef INTEGER_H_INCLUDED
#define INTEGER_H_INCLUDED

// Defines the following types:
//
// Int8, UInt8, Int16, UInt16, Int32, UInt32
//
// If a fundamental type is not available, this header should still compile,
// but there will be an error if the type is actually used

#include <limits.h>

#if defined(INT8_S_TYPE) || defined(INT8_U_TYPE) || \
    defined(INT16_TYPE) || defined(INT32_TYPE)
#error "Temporary macro name already in use"
#endif

// find 8 bit integer type

#if (UCHAR_MAX == 0xFF)
#define INT8_S_TYPE signed char
#define INT8_U_TYPE unsigned char
#endif

#if !defined(INT8_S_TYPE) && (USHRT_MAX == 0xFF)
#define INT8_S_TYPE short
#define INT8_U_TYPE unsigned short
#endif

// find 16 bit integer type

#if (USHRT_MAX == 0xFFFF)
#define INT16_TYPE short
#endif

#if !defined(INT16_TYPE) && (UINT_MAX == 0xFFFF)
#define INT16_TYPE int
#endif

// find 32 bit integer type

// try to avoid comparing to an illegal value if there is no 32 bit type
#if (ULONG_MAX > 0xFFFF)
#if (ULONG_MAX > 0xFFFFFF)
#if (USHRT_MAX == 0xFFFFFFFF)
#define INT32_TYPE short
#endif
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE int
#endif
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE long
#endif
#endif
#endif // ULONG_MAX > 0xFFFF

// create Int8 typedef

#ifdef INT8_S_TYPE
typedef INT8_S_TYPE Int8;
typedef INT8_U_TYPE UInt8;
#else
struct Int8Undefined;
typedef Int8Undefined Int8;
typedef Int8Undefined UInt8;
#endif // INT8_S_TYPE

// create Int16 typedef

#ifdef INT16_TYPE
typedef INT16_TYPE Int16;
typedef unsigned INT16_TYPE UInt16;
#else
struct Int16Undefined;
typedef Int16Undefined Int16;
typedef Int16Undefined UInt16;
#endif // INT16_TYPE

// create Int32 typedef

#ifdef INT32_TYPE
typedef INT32_TYPE Int32;
typedef unsigned INT32_TYPE UInt32;
#else
struct Int32Undefined;
typedef Int32Undefined Int32;
typedef Int32Undefined UInt32;
#endif // INT32_TYPE

// undefine temporary macros

#undef INT8_S_TYPE
#undef INT8_U_TYPE
#undef INT16_TYPE
#undef INT32_TYPE

#endif // INTEGER_H_INCLUDED

--- end code ---
Jul 22 '05 #4
Good post, thank you!

Jul 22 '05 #5
Correction to previous post:

--- code ---

// find 32 bit integer type
...
#if !defined(INT32_TYPE) && (UINT_MAX == 0xFFFFFFFF)
#define INT32_TYPE int
#endif
#if !defined(INT32_TYPE) && (ULONG_MAX == 0xFFFFFFFF) // changed UINT_MAX to ULONG_MAX
#define INT32_TYPE long
#endif

--- end code ---

David F
Jul 22 '05 #6
> I work with a C compiler for which, by default, int is 8 bits.

IIRC according to the standard an int should be at least 16 bits. It seems
that your C compiler isn't very standard compliant.
Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*. Most compilers come with headers, I believe,
where they define numeric types more intelligently, like


Yes, it is a pain when writing cross-platform code. Unfortunately, any
constructs a compiler may offer to explicitly specify the size are
non-standard. I would welcome standardization in this area.

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl
Jul 22 '05 #7
Dan W. wrote:
On Thu, 4 Dec 2003 16:10:07 +1100, "David Fisher"
<no****@nospam.nospam.nospam> wrote:
The most common sizes of integer types seem to be:

8 bits - signed char
16 bits - short
16 or 32 bits - int
32 or 64 bits - long

Question #1:

Does anyone know how common sizes other than these are ? I know that
the standard says that number of bits in a "byte" (char) is >= 8, but
I have not come across values other than these yet ...

Question #2:

Is anyone aware of a package that has some portable definitions of
things like Int8, Int16 and Int32 (assuming they are possible for the
platform) ? If not, I think you could do this using the definitions in
<limits.h> (<climits> ?). I am aware that the preprocessor doesn't
understand the sizeof operator, so you can't do it that way.
I work with a C compiler for which, by default, int is 8 bits.


Then that compiler is not ISO C compliant. If it were, int would be at
least 16 bits.
Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*.
Why do you think that this is non-portable? For some situations in
low-level programming, it might be good to have types of specific
sizes, but most of the time, there is no need for them.
Most compilers come with headers, I believe,
where they define numeric types more intelligently, like

__INT08
__INT16
...
__UINT08
__UINT16

and so on.
What if there is no 8 bit type? And why would you need a type of
_exactly_ 16 bits instead of one of at least 16 bits?
I haven't used them myself, possibly because of the extra
typing (keyboard typing I mean), and to play along with existing code.
There must be, though I can't tell you for sure, standard conforming
and comprehensible numeric types.
There are.
I sure hope something is done about this, and real soon, to help us
rid ourselves of "int" and all that.


As someone already mentioned, in C, you have <stdint.h>, which is a lot
more useful than what you describe above, because it adds the
possibility to get speed or size optimized types, which are way more
useful than fixed size types.

Jul 22 '05 #8
On Thu, 4 Dec 2003 10:33:18 +0100, "Peter van Merkerk"
<me*****@deadspam.com> wrote:
I work with a C compiler for which, by default, int is 8 bits.
IIRC according to the standard an int should be at least 16 bits. It seems
that your C compiler isn't very standard compliant.


The compiler is compliant, I believe, perhaps because it allows you
to change the meaning of int to 16 bits via a switch. It is a compiler
for PIC microcontrollers, which have 8-bit registers and instructions
only. I guess most people don't aim to port PIC programs to Posix or
WinXP .. :-)
Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*. Most compilers come with headers, I believe,
where they define numeric types more intelligently, like


Yes, it is a pain when writing cross-platform code. Unfortunately, any
constructs a compiler may offer to explicitly specify the size are
non-standard. I would welcome standardization in this area.


I'd like to see something really drastic done about this whole issue,
maybe something like announcing that keywords such as int, long,
short, etc. are to be deprecated and eventually removed from the
language, to encourage change. But for this to happen, something else
must come first: Units.

There's a company that came up with a template library called SIUnits,
which allows you to define classes representing distances, areas,
volumes, weights, densities, pressures, electron densities,
currencies, temperatures, and what not; expressed in various systems
of units. Their system is a bit too complex for my taste, including
physics models for six areas of application, including scientific,
relativistic and non-relativistic...

I'd be happy with a system that just makes sure you don't add
millimeters to inches and divide by celsius to get dollars, if you
know what I mean. Then, for every project, one might have a units.h
header dealing with numeric representations and units typedefs, and
then use REAL quantities for the rest of the program, and NEVER say
int or float or uchar or extra-double.

Cheers!
Jul 22 '05 #9
>> I work with a C compiler for which, by default, int is 8 bits.

Then that compiler is not ISO C compliant. If it were, int would be at
least 16 bits.
It's a compiler for 8-bit microcontrollers, and it allows changing int
to 16-bits via a switch. It's advertised as compliant.
Personally, I HATE all this business of int, long, long long, double,
double double, and all this incomprehensible crap. *And Non-Portable
Crap, I might add*.


Why do you think that this is non-portable? For some situations in
low-level programming, it might be good to have types of specific
sizes, but most of the time, there is no need for them.
(snip)
What if there is no 8 bit type? And why would you need a type of
_exactly_ 16 bits instead of one of at least 16bits?


Low level programming situations are all too common. I was getting
into the Eiffel programming language at one time, which only had one
type INTEGER which was 32-bits. Their argument was: "how does it hurt
you to use 32-bits where you'd use 8?" Well, imagine you create a
class or struct for RGBcolor that ends up taking 16 bytes instead of
four, and then try to manipulate 1024 x 1024 images... I ended up
coming back to C++ just for that reason: not enough low-level.
As someone already mentioned, in C, you have <stdint.h>, which is a lot
more useful than what you describe above, because it adds the
possibility to get speed or size optimized types, which are way more
useful than fixed size types.


Yes, I saved his post to a text file on my desktop.

I like to have control, and know what's going on under the hood. If
8-bit ints are not available and I use them, I'd prefer a compiler
warning, or even a run-time exception, rather than an automatic
'upgrade' to 16-bits.

Cheers!
Jul 22 '05 #10
