
Assumptions About the Size of Integer Types

How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?

For instance, even the well-intentioned, portable C++ (and C) code I've
seen assumes an int will be larger than, say, 4 bits. I assume this is a
safe assumption, since the size of an int is guaranteed to be at least as
big as the size of a char, and if the size of a char is 4 bits, your
character set has only 16 characters, which isn't enough to express all
of C++'s keywords and symbols. Reasonable?

What else can we assume? Is it safe to assume an int will be at least 7
bits? 8 bits?

How does this work in the real world? Do programmers just write C++ for
their target architecture(s), and add support for differing architectures
later, as needed? For instance, what if they originally write assuming
32-bit ints, and for whatever reason later need to port to an
architecture with 28-bit ints. When they compile for the new
architecture and the code breaks, do they have to go through and check
every little int? That sounds like a huge PITA!

I'm not necessarily talking about hard-coded bitfields and things like
that, but arithmetic overflow, and things of that sort.

What are the odds of "weird" architectures like that needing support? Is
it even worth worrying about if one has to ask?

I know about using the preprocessor to selectively typedef int16, int32,
etc., although such a trick wouldn't work for our 28-bit example, above.
What's troubling me is more on a "moral" level. "They" say you're not
"supposed" to make assumptions (not guaranteed by the language standard)
about the size of integers, but I have to wonder, how many programmers in
the real world, who write working, even portable C++ programs, truly
follow that advice 100%. Does anyone? Is it really possible?
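
Something like this, say -- the platform-test macro here is invented,
but it shows the shape of the trick:

    /* Hypothetical: PLATFORM_16BIT_INT stands in for whatever test
       identifies a target whose int is only 16 bits. */
    #if defined(PLATFORM_16BIT_INT)
    typedef long  int32;   /* int is too small here, so use long */
    #else
    typedef int   int32;   /* int is 32 bits on this target */
    #endif
    typedef short int16;   /* short is at least 16 bits everywhere */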
Jul 22 '05 #1
8 Replies


On Sun, 20 Jun 2004 06:57:44 GMT in comp.lang.c++, Lash Rambo
<lr****@obmarl.com> wrote,
For instance, even the well-intentioned, portable C++ (and C) code I've
seen assumes an int will be larger than, say, 4 bits. I assume this is a
safe assumption, since the size of an int is guaranteed to be at least as
big as the size of a char, and if the size of a char is 4 bits,


C++ adopts the minimum guarantees of the 1989 C standard.
That includes:

char at least 8 bits
int at least 16 bits
long at least 32 bits

The most common mistake is using int and assuming it is 32 bits.

Use compile-time checks to ensure the provided types are as big as you
need:
http://groups.google.com/gr*********....earthlink.net
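
For instance, a minimal sketch of the idea (the macro name here is
mine, not necessarily what the linked post uses):

    #include <climits>

    /* Fails to compile when the condition is false, because an array
       type may not have a negative size.  One use per scope; a fuller
       version would splice __LINE__ into the typedef name. */
    #define COMPILE_TIME_CHECK(cond) \
        typedef char compile_time_check_[(cond) ? 1 : -1]

    COMPILE_TIME_CHECK(INT_MAX >= 2147483647L);  /* demand a 32-bit int */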

Jul 22 '05 #2

"Lash Rambo" <lr****@obmarl.com> wrote in message
news:Xn*****************************@68.12.19.6...
How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?


Why do you care about the size of types? The answer to that question may
determine the type you use.

For example, if you want an index for an in-memory data structure, you can
use ptrdiff_t if you want a signed type, or size_t if you want an unsigned
type. If you want an appropriate index for a standard-library container,
use that container's size_type or difference_type member. And so on.
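
For example, a minimal sketch:

    #include <vector>

    long sum(const std::vector<long>& v)
    {
        long total = 0;
        // size_type is whatever unsigned type this container actually
        // uses for its sizes -- no assumption about int's width needed.
        for (std::vector<long>::size_type i = 0; i != v.size(); ++i)
            total += v[i];
        return total;
    }

Likewise, std::size_t and std::ptrdiff_t (from <cstddef>) are the
unsigned and signed types sized for indexing ordinary in-memory
objects such as raw arrays.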
Jul 22 '05 #3

On Sun, 20 Jun 2004 06:57:44 GMT, Lash Rambo <lr****@obmarl.com> wrote
in comp.lang.c++:
How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?

<snip>


The 1999 upgrade to the ISO C standard tackled this issue, and the
solution will almost certainly be adopted in the next major upgrade of
the C++ standard.

A header named <stdint.h> in C (presumably the preferred name in C++
will be <cstdint>) provides definitions for all the standard and
extended integer types an implementation provides. With the exception
of the "long long" integer types (minimum 64 bits) and the typedefs
with "64" in their names, a fully conforming (to the C standard)
version of this header can be put together for every standard
conforming C++ compiler. And many of today's C++ compilers provide
64-bit integer types as an extension, either using "long long" or
__int64 for their name.

A Google search for "stdint.h" will turn up many resources, almost
certainly including an open source version that will work correctly
with your compiler as-is or with only minimal editing.
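
A rough sketch of what using the header looks like (the exact-width
names exist only where the hardware can supply them; the least- and
fast-width variants are always present):

    #include <stdint.h>   /* presumably <cstdint> in a future C++ */

    int32_t       a;  /* exactly 32 bits; absent on, say, a 28-bit machine */
    int_least32_t b;  /* smallest type of at least 32 bits; always defined */
    int_fast16_t  c;  /* "fastest" type of at least 16 bits; always defined */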

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Jul 22 '05 #4

Lash Rambo wrote:

How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?

<snip>

With the exception of data transfer between disparate applications and/or
hardware, what is the need to know the size of an integer type?
Jul 22 '05 #5

Lash Rambo wrote:

How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?

I've written tons of code, very little of it reliant on the exact sizes
of data types. The few exceptions were platform-specific in nature,
hence an assumption was not a problem.


Brian Rodenborn
Jul 22 '05 #6

On Sun, 20 Jun 2004 21:52:00 -0700 in comp.lang.c++, Julie
<ju***@nospam.com> wrote,
Lash Rambo wrote:

How does one go about programming in C++ without making assumptions about
the size of types? Is it really possible?

<snip>

With the exception of data transfer between disparate applications and/or
hardware, what is the need to know the size of an integer type?


If I know that my program must handle values up to e.g. 500000, then I
need to know if int is always sufficient to handle that or if I should
use long instead.
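
Concretely, a minimal sketch relying only on the standard's minimum
ranges:

    #include <climits>

    /* INT_MAX is only guaranteed to be >= 32767, so int may not hold
       500000.  LONG_MAX is guaranteed >= 2147483647, so long always can. */
    long big = 500000L;

    #if INT_MAX >= 500000
    /* On this particular implementation a plain int would also do. */
    #endif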

Jul 22 '05 #7

"David Harmon" <so****@netcom.com.invalid> wrote in message
news:40****************@news.west.earthlink.net...
With the exception of data transfer between disparate applications and/or
hardware, what is the need to know the size of an integer type?
If I know that my program must handle values up to e.g. 500000, then I
need to know if int is always sufficient to handle that or if I should
use long instead.


But the answer to that question is already established: int is not
sufficient to handle values up to 500000 on all implementations, but long
is.

What you don't know is what is the *shortest* type that will contain values
up to 500000 on a *particular* implementation. So the question might be:
Why do you want to know the answer to that question if you are trying to
write portable code?
Jul 22 '05 #8

On Mon, 21 Jun 2004 18:57:55 GMT in comp.lang.c++, "Andrew Koenig"
<ar*@acm.org> wrote,
"David Harmon" <so****@netcom.com.invalid> wrote in message
news:40****************@news.west.earthlink.net...
If I know that my program must handle values up to e.g. 500000, then I
need to know if int is always sufficient to handle that or if I should
use long instead.
But the answer to that question is already established: int is not
sufficient to handle values up to 500000 on all implementations, but long
is.


Which is implied, if not completely spelled out, in my first response in
this thread. But that is exactly what the original poster didn't know
and was asking: whether int is portably guaranteed to be larger than,
e.g., 8 bits. So as far as the original poster is concerned, it wasn't
established until somebody answered the question.
What you don't know is what is the *shortest* type that will contain values
up to 500000 on a *particular* implementation. So the question might be:
Why do you want to know the answer to that question if you are trying to
write portable code?


But that is a different question and rather immaterial here.
Nobody asked about a particular implementation, although Julie almost
hinted at it.

Jul 22 '05 #9
