Bytes IT Community

Sizes of integers

Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, if I want to make a portable program which uses,
say, an int type for a counter, and one system uses 16-bit for ints, and
some other system uses 32-bits, then, in order to have my program to run
the same way on all systems I need to use the smallest value, right?
In this case, I could only count up to 16-bits, since otherwise it would
overflow on the system that uses 16-bit ints.

So I don't understand why to have different sizes. If you want your
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?

I think stdint.h solves that, since then you know which size you have on
your type, but that's in C99.

/Michael
May 10 '06 #1
14 Replies


Michael Brennan <br************@gmail.com> writes:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, if I want to make a portable program which uses,
say, an int type for a counter, and one system uses 16-bit for ints, and
some other system uses 32-bits, then, in order to have my program to run
the same way on all systems I need to use the smallest value, right?
In this case, I could only count up to 16-bits, since otherwise it would
overflow on the system that uses 16-bit ints.

So I don't understand why to have different sizes. If you want your
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?

I think stdint.h solves that, since then you know which size you have on
your type, but that's in C99.

The portability, and your decision, doesn't come from the size of a
particular platform's integers, but from the values you wish those
integer variables to hold.

If your counter will only hold integers that may be held in 16-bit
integers, then use int16_t.

If your counter can *ever* overflow a 16-bit integer, then you require
a longer, say 32-bit, integer on all platforms - including any platforms
that offer only 16-bit native integers.

Don't casually use 'int' if there's ever a danger of overflow on any platform.

--
Chris.
May 10 '06 #2

Chris McDonald wrote:
Michael Brennan <br************@gmail.com> writes:

Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, if I want to make a portable program which uses,
say, an int type for a counter, and one system uses 16-bit for ints, and
some other system uses 32-bits, then, in order to have my program to run
the same way on all systems I need to use the smallest value, right?
In this case, I could only count up to 16-bits, since otherwise it would
overflow on the system that uses 16-bit ints.


So I don't understand why to have different sizes. If you want your
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?


I think stdint.h solves that, since then you know which size you have on
your type, but thats in C99.


The portability, and your decision, doesn't come from the size of a
particular platform's integers, but from the values you wish those
integer variables to hold.

If your counter will only hold integers that may be held in 16-bit
integers, then use int16_t.

If you are targeting C99, this is probably a good place to use
int_fast16_t and the like. Even for C90, it may be worth defining your
own versions of these types.

--
Ian Collins.
May 10 '06 #3

In article <Ev*******************@newsb.telia.net>,
Michael Brennan <br************@gmail.com> wrote:
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.

So I don't understand why to have different sizes. If you want your
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?


Three reasons:

1) Back then, there were systems that didn't use multiples of 8 bits
as their native sizes. There was a time when it looked like 36 bit
words were going to win out.

2) Performance. 32 bit arithmetic had to be synthesized on earlier
systems.

3) According to some of the DSP and embedded systems people
in the newsgroup, there are a bunch of systems these days which
only offer a very limited number of storage sizes (e.g., only 32 bit).
For the kinds of applications those processors are intended for,
the other sizes are not used often enough to make it worth taking up
the die space for them. The less die space, the faster you can clock
the device...
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
May 10 '06 #4

Michael Brennan said:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.

Just write your code so that it doesn't matter how big the types are, saving
only that they meet the minimum specs given in the Standard. The whole
intn_t thing is a move in completely the wrong direction. It's a move
towards the computer domain. Programming should stay in the problem domain.
In the real world, numbers aren't limited to the range -32767 to +32767 or
-(2^31 - 1) to +(2^31 - 1) - they can be yay big. Well, that's what we should aim
for with computers, too.
I think that makes it harder to plan which type to use for your program.


Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
May 11 '06 #5

Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.


Unless your long int is only 16 bits wide?

--
Chris.
May 11 '06 #6

Chris McDonald said:
Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.


Unless your long int is only 16 bits wide?


If it is, you are not using C. In C, long int is guaranteed to be at least
32 bits wide (and can be wider).

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
May 11 '06 #7

Chris McDonald schrieb:
Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.


Unless your long int is only 16 bits wide?


Then the implementation is no longer conforming to the C standard. The
minimum/maximum value a long must be able to hold is
-2147483647/+2147483647.
May 11 '06 #8

Marc Thrun <Te********@gmx.de> writes:
Chris McDonald schrieb:
Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.
Unless your long int is only 16 bits wide?

Then the implementation is no longer conforming to the C standard. The
minimum/maximum value a long must be able to hold is
-2147483647/+2147483647.


Thanks; I didn't know that.

--
Chris.
May 11 '06 #9

Walter Roberson wrote:
In article <Ev*******************@newsb.telia.net>,
Michael Brennan <br************@gmail.com> wrote:
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.

So I don't understand why to have different sizes. If you want your
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?


Three reasons:

1) Back then, there were systems that didn't use multiples of 8 bits
as their native sizes. There was a time when it looked like 36 bit
words were going to win out.

2) Performance. 32 bit arithmetic had to be synthesized on earlier
systems.

3) According to some of the DSP and embedded systems people
in the newsgroup, there are a bunch of systems these days which
only offer a very limited number of storage sizes (e.g., only 32 bit).
For the kinds of applications those processors are intended for,
the other sizes are not used often enough to make it worth taking up
the die space for them. The less die space, the faster you can clock
the device...


4) The last time I looked in the DSP world there were processors around
which used 24/48 bit words. Although a multiple of 8, not a power of 2 ;-)

5) There are now 64 bit processors and types could now be implemented as
char 8 bits
short 16 bits
int 32 bits
long 64 bits
long long (C99) 128 bits
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro:
http://clc-wiki.net/wiki/Intro_to_clc
May 11 '06 #10

"Chris McDonald" <ch***@csse.uwa.edu.au> wrote
Then the implementation is no longer conforming to the C standard. The
minimum/maximum value a long must be able to hold is
-2147483647/+2147483647.


Thanks; I didn't know that.

Unfortunately I've had embedded C compilers where an int was 8 bits, and a
long 16 bits. Not conforming, but I couldn't exactly send it back to the
factory and demand a fixed one.
--
www.personal.leeds.ac.uk/~bgy1mm

May 11 '06 #11

"Richard Heathfield" <in*****@invalid.invalid> wrote
Michael Brennan said:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.

What would you count in a 256-bit int?
Just write your code so that it doesn't matter how big the types are,
saving
only that they meet the minimum specs given in the Standard. The whole
intn_t thing is a move in completely the wrong direction. It's a move
towards the computer domain. Programming should stay in the problem
domain.
In the real world, numbers aren't limited to the range -32767 to +32767 or
-(2^31 - 1) to +(2^31 - 1) - they can be yay big. Well, that's what we should aim
for with computers, too.

My Basic interpreter (see website) has two data types, numbers and strings.
That's fine for most programming, provided you don't care about efficiency.
If you want to work out interest payments for a million bank customers,
there's no problem even on a 300 computer. If you want to run a 3d shooter,
then my Basic won't be fast enough.

However some numbers are naturally integers. So it is nice to mark them.

Once you start going down that path, however, natural data types multiply.
Dates, colours, angles, proportions, complex numbers, points, error codes,
all need their own types. There is an argument for allowing this, but it
does put a burden on the user.

When you add efficiency considerations into the mix, the user's burden
increases even more. For instance I used to be always rewriting graphics
routines to take floats instead of doubles, or fixed point instead of float,
depending on the particular platform I was using.
I think that makes it harder to plan which type to use for your program.


Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.

If you are writing a payroll program, it is conceivable that the program
will have to run on a 16-bit machine. It is also conceivable that the
customer will have more than 32767 employees on his payroll. However it is
not possible that a customer with over thirty thousand employees will want
to run his payroll on a 16 bit machine. So it is quite ok to use ints to
index into the employee list.
--
www.personal.leeds.ac.uk/~bgy1mm
May 11 '06 #12

"Malcolm" <re*******@btinternet.com> wrote in message
news:Le********************@bt.com...
"Richard Heathfield" <in*****@invalid.invalid> wrote
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.


If you are writing a payroll program, it is conceivable that the program
will have to run on a 16-bit machine. It is also conceivable that the
customer will have more than 32767 employees on his payroll. However
it is not possible that a customer with over thirty thousand employees
will want to run his payroll on a 16 bit machine. So it is quite ok to use
ints to index into the employee list.


Well, there's no guarantee that just because the machine happens to be
"32-bit" or "64-bit" that the C implementation uses anything larger than a
16-bit int. Lots of folks ran into that with MS compilers on 386s in the
late DOS/early Windows years.

Amusing anecdote: I worked at a startup which was purchased; for tax
reasons, our stock options were paid out as a payroll bonus. The payroll
system used long ints to store/manipulate the number of cents for each item.
The founder/CEO was to be paid roughly $170 million -- and every time they
tried to do a payroll run, the system crashed because that overflowed a long
int. We didn't get paid for weeks while the vendor scrambled to
recode/recompile the application using long long ints.

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin
*** Posted via a free Usenet account from http://www.teranews.com ***
May 11 '06 #13

Malcolm said:
"Richard Heathfield" <in*****@invalid.invalid> wrote
Michael Brennan said:
Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?


Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.

What would you count in a 256-bit int?


The obvious uses that spring immediately to mind are Diffie-Hellman exchange
and RSA, although it has to be said that 256 bits probably wouldn't be
enough, at least not without some faffing around. Just not quite so much
faffing around as we currently have to do, that's all.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
May 12 '06 #14

in comp.lang.c i read:

[long must be at least 32 bits, including sign]
Unfortunately I've had embedded C compilers where an int was 8 bits, and a
long 16 bits. Not conforming, but I couldn't exactly send it back to the
factory and demand a fixed one.


the key being that it isn't a c compiler, it is a looks-like-c compiler,
which are indeed quite common for small (embedded) devices. something
similar is common even for hosted implementations of "larger" devices,
e.g., gcc is not a c compiler by default (it is a gnu-c compiler).

--
a signature
May 12 '06 #15
