Hi,
I wonder if there is any good reason to let different systems have
different sizes of short, int and long?
I think that makes it harder to plan which type to use for your program.
For example, suppose I want to write a portable program that uses,
say, an int for a counter. If one system uses 16-bit ints and
some other system uses 32-bit ints, then, in order to have my program run
the same way on all systems, I need to assume the smallest size, right?
In that case, I could only count up to what fits in 16 bits, since anything
larger would overflow on the system that uses 16-bit ints.
So I don't understand why the sizes differ. If we want our
programs to be portable and run the same on all systems, how do we do
that? By only using the minimum guaranteed size of the integer types?
I think stdint.h solves that, since then you know which size your
type has, but that's C99.
/Michael
Michael Brennan <br************@gmail.com> writes:
[...] in order to have my program to run the same way on all systems I need to use the smallest value, right? [...] By only using the minimum guaranteed size of the integer types?
Portability, and your decision, comes not from the size of a
particular platform's integers, but from the values you wish those
integer variables to hold.
If your counter will only ever hold values that fit in a 16-bit
integer, then use int16_t.
If your counter can *ever* overflow a 16-bit integer, then you require
a longer, say 32-bit, integer on all platforms - including any platforms
that offer only 16-bit native integers.
Don't casually use int if there's ever a danger of overflow on any platform.
--
Chris.
Chris McDonald wrote: Michael Brennan <br************@gmail.com> writes:
[...]
If your counter will only hold integers that may be held in 16-bit integers, then use int16_t.
If you are targeting C99, this is probably a good place to use
int_fast16_t and the like. Even for C90, it may be worth defining your
own versions of these types.
--
Ian Collins.
In article <Ev*******************@newsb.telia.net>,
Michael Brennan <br************@gmail.com> wrote: I wonder if there is any good reason to let different systems have different sizes of short, int and long? [...] If you want your programs to be portable and run the same on all systems, how do we do that? By only using the minimum guaranteed size of the integer types?
Three reasons:
1) Back then, there were systems that didn't use multiples of 8 bits
as their native sizes. There was a time when it looked like 36 bit
words were going to win out.
2) Performance. 32 bit arithmetic had to be synthesized on earlier
systems.
3) According to some of the DSP and embedded systems people
in the newsgroup, there are a bunch of systems these days which
only offer a very limited number of storage sizes (e.g., only 32 bit).
For the kinds of applications those processors are intended for,
the other sizes are not used often enough to make it worth taking up
the die space for them. The less die space, the faster you can clock
the device...
--
I was very young in those days, but I was also rather dim.
-- Christopher Priest
Michael Brennan said: Hi, I wonder if there is any good reason to let different systems have different sizes of short, int and long?
Yes. I can see no good reason to insist that all systems have the same
sizes. Surely that would stifle innovation. Wouldn't it be grand if ints
had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits
wide, AND have 256 bit ints.
Just write your code so that it doesn't matter how big the types are, saving
only that they meet the minimum specs given in the Standard. The whole
intn_t thing is a move in completely the wrong direction. It's a move
towards the computer domain. Programming should stay in the problem domain.
In the real world, numbers aren't limited to the range -32767 to +32767 or
-(2^31 - 1) to +(2^31 - 1) - they can be yay big. Well, that's what we should aim
for with computers, too.
I think that makes it harder to plan which type to use for your program.
Use int unless you have a good reason to use something different. If your
number will exceed 32000-odd, then you have a good reason to use a long
int.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Richard Heathfield <in*****@invalid.invalid> writes: Use int unless you have a good reason to use something different. If your number will exceed 32000-odd, then you have a good reason to use a long int.
Unless your long int is only 16 bits wide?
--
Chris.
Chris McDonald said: Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your number will exceed 32000-odd, then you have a good reason to use a long int.
Unless your long int is only 16 bits wide?
If it is, you are not using C. In C, long int is guaranteed to be at least
32 bits wide (and can be wider).
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Chris McDonald schrieb: Richard Heathfield <in*****@invalid.invalid> writes:
Use int unless you have a good reason to use something different. If your number will exceed 32000-odd, then you have a good reason to use a long int.
Unless your long int is only 16 bits wide?
Then the implementation is no longer conforming to the C standard. The
minimum/maximum value a long must be able to hold is
-2147483647/+2147483647.
Marc Thrun <Te********@gmx.de> writes:
[...] The minimum/maximum value a long must be able to hold is -2147483647/+2147483647.
Thanks; I didn't know that.
--
Chris.
Walter Roberson wrote:
[...] Three reasons: 1) Back then, there were systems that didn't use multiples of 8 bits as their native sizes. [...] 2) Performance. [...] 3) [...] there are a bunch of systems these days which only offer a very limited number of storage sizes (e.g., only 32 bit). [...]
4) The last time I looked in the DSP world there were processors around
which used 24/48 bit words - a multiple of 8, although not a power of 2 ;-)
5) There are now 64 bit processors and types could now be implemented as
char 8 bits
short 16 bits
int 32 bits
long 64 bits
long long (C99) 128 bits
--
Flash Gordon, living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro: http://clc-wiki.net/wiki/Intro_to_clc
"Chris McDonald" <ch***@csse.uwa.edu.au> wrote Then the implementation is no longer conforming to the C standard. The minimum/maximum value a long must be able to hold is -2147483647/+2147483647.
Thanks; I didn't know that.
Unfortunately I've had embedded C compilers where an int was 8 bits, and a
long 16 bits. Not conforming, but I couldn't exactly send it back to the
factory and demand a fixed one.
-- www.personal.leeds.ac.uk/~bgy1mm
"Richard Heathfield" <in*****@invalid.invalid> wrote Michael Brennan said:
Hi, I wonder if there is any good reason to let different systems have different sizes of short, int and long? Yes. I can see no good reason to insist that all systems have the same sizes. Surely that would stifle innovation. Wouldn't it be grand if ints had 128 bits? Or 256? Well, you can't insist that all ints are 16 bits wide, AND have 256 bit ints.
What would you count in a 256-bit int? Just write your code so that it doesn't matter how big the types are, saving only that they meet the minimum specs given in the Standard. The whole intn_t thing is a move in completely the wrong direction. It's a move towards the computer domain. Programming should stay in the problem domain. In the real world, numbers aren't limited to the range -32767 to +32767 or -2^31-1 to +2^32-1 - they can be yay big. Well, that's what we should aim for with computers, too.
My Basic interpreter (see website) has two data types, numbers and strings.
That's fine for most programming, provided you don't care about efficiency.
If you want to work out interest payments for a million bank customers,
there's no problem even on a £300 computer. If you want to run a 3d shooter,
then my Basic won't be fast enough.
However some numbers are naturally integers. So it is nice to mark them.
Once you start going down that path, however, natural data types multiply.
Dates, colours, angles, proportions, complex numbers, points, error codes,
all need their own types. There is an argument for allowing this, but it
does put a burden on the user.
When you add efficiency considerations into the mix, the user's burden
increases even more. For instance I used to be always rewriting graphics
routines to take floats instead of doubles, or fixed point instead of float,
depending on the particular platform I was using.
I think that makes it harder to plan which type to use for your program.
Use int unless you have a good reason to use something different. If your number will exceed 32000-odd, then you have a good reason to use a long int.
If you are writing a payroll program, it is conceivable that the program
will have to run on a 16-bit machine. It is also conceivable that the
customer will have more than 32767 employees on his payroll. However it is
not possible that a customer with over thirty thousand employees will want
to run his payroll on a 16 bit machine. So it is quite ok to use ints to
index into the employee list.
-- www.personal.leeds.ac.uk/~bgy1mm
"Malcolm" <re*******@btinternet.com> wrote in message
news:Le********************@bt.com... "Richard Heathfield" <in*****@invalid.invalid> wrote Use int unless you have a good reason to use something different. If your number will exceed 32000-odd, then you have a good reason to use a long int.
If you are writing a payroll program, it is conceivable that the program will have to run on a 16-bit machine. It is also conceivable that the customer will have more than 32767 employees on his payroll. However it is not possible that a customer with over thirty thousand employees will want to run his payroll on a 16 bit machine. So it is quite ok to use ints to index into the employee list.
Well, there's no guarantee that just because the machine happens to be
"32-bit" or "64-bit" that the C implementation uses anything larger than a
16-bit int. Lots of folks ran into that with MS compilers on 386s in the
late DOS/early Windows years.
Amusing anecdote: I worked at a startup which was purchased; for tax
reasons, our stock options were paid out as a payroll bonus. The payroll
system used long ints to store/manipulate the number of cents for each item.
The founder/CEO was to be paid roughly $170 million -- and every time they
tried to do a payroll run, the system crashed because that overflowed a long
int. We didn't get paid for weeks while the vendor scrambled to
recode/recompile the application using long long ints.
S
--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin
*** Posted via a free Usenet account from http://www.teranews.com ***
Malcolm said:
[...] What would you count in a 256-bit int?
The obvious uses that spring immediately to mind are Diffie-Hellman exchange
and RSA, although it has to be said that 256 bits probably wouldn't be
enough, at least not without some faffing around. Just not quite so much
faffing around as we currently have to do, that's all.
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
in comp.lang.c i read:
[long must be at least 32 bits, including sign] Unfortunately I've had embedded C compilers where an int was 8 bits, and a long 16 bits. Not conforming [...]
the key being that it isn't a c compiler, it is a looks-like-c compiler,
which are indeed quite common for small (embedded) devices. something
similar is common even for hosted implementations of "larger" devices,
e.g., gcc is not a c compiler by default (it is a gnu-c compiler).
--
a signature