Bytes IT Community

code portability

My question is more generic, but it involves what I consider ANSI standard C
and portability.

I happen to be a system admin for multiple platforms, and as such a lot of
the applications that my users request are a part of the OpenSource
community. Many if not most of those applications strongly require the
presence of the GNU compiling suite to work properly. My assumption is that
this is due to the author(s) creating the applications with the GNU suite.
Many of the tools requested/required are GNU replacements for make,
configure, the loader, and lastly the C compiler itself. Where I'm going
with this is: has the OpenSource community as a whole committed itself to,
at the very least, encouraging its contributing members to conform to ANSI
standards of programming?

My concern is that, as an admin, I am sometimes compelled to port these
applications to multiple platforms running the same OS, and as the user
community becomes more and more insistent on OpenSource applications, will
gotchas appear due to a lack of portability in the coding? I fully realize
that independent developers may or may not conform to standards, but again,
is it at least encouraged?

11.32 of the FAQ seemed to at least outline the crux of what I am asking.
If I loaded up my home machine to the gills with all the open source compiler
applications (gcc, imake, autoconf, etc.), would the applications that I
compile, link, and load conform?
Aug 1 '06 #1
239 Replies



Eigenvector wrote:
My question is more generic, but it involves what I consider ANSI standard C
and portability.

I happen to be a system admin for multiple platforms and as such a lot of
the applications that my users request are a part of the OpenSource
community. Many if not most of those applications strongly require the
presence of the GNU compiling suite to work properly. My assumption is that
this is due to the author/s creating the applications with the GNU suite.
Many of the tools requested/required are GNU replacements for make,
configure, the loader, and lastly the C compiler itself. Where I'm going
with this is, has the OpenSource community as a whole committed itself to at
the very least encouraging its contributing members to conform to ANSI
standards of programming?
GCC attempts to conform to C90 and C99, and in many regards it's there.
If you disable GNU extensions [or just plain don't use them, because if
you're not writing a kernel you probably don't need them] you're pretty
much set.
My concern is that as an admin I am sometimes compelled to port these
applications to multiple platforms running the same OS and as the user
community becomes more and more insistent on OpenSource applications will
gotchas appear due to a lack of portability in the coding? I fully realize that
independent developers may or may not conform to standards, but again is it
at least encouraged?
The C library [glibc] and other libs [pthreads, sockets, etc.] follow
various UNIX, POSIX and ANSI standards. There are minor gotchas (for
instance, pthreads is part of glibc on Linux but not on other
UNIXes), but for the most part source compatibility is there.
11.32 of the FAQ seemed to at least outline the crux of what I am asking.
If I loaded up my home machine to the gills with all the open source compiler
applications (gcc, imake, autoconf, etc.), would the applications that I
compile, link, and load conform?
Read the man pages. Make sure the functions you use conform to some
standard and are not specific to your OS (e.g. not Linux only).

You'd be surprised how much of the standard libraries are API
consistent across UNIX, BSD and Linux.

Tom

Aug 1 '06 #2


"Tom St Denis" <to********@gmail.com> wrote in message
news:11**********************@m73g2000cwd.googlegroups.com...
>
Eigenvector wrote:
>My question is more generic, but it involves what I consider ANSI
standard C
and portability.

I happen to be a system admin for multiple platforms and as such a lot of
the applications that my users request are a part of the OpenSource
community. Many if not most of those applications strongly require the
presence of the GNU compiling suite to work properly. My assumption is
that
this is due to the author/s creating the applications with the GNU suite.
Many of the tools requested/required are GNU replacements for make,
configure, the loader, and lastly the C compiler itself. Where I'm going
with this is, has the OpenSource community as a whole committed itself to
at
the very least encouraging its contributing members to conform to ANSI
standards of programming?

GCC attempts to conform to C90 and C99, and in many regards it's there.
If you disable GNU extensions [or just plain don't use them, because if
you're not writing a kernel you probably don't need them] you're pretty
much set.
What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact, I have on occasion
attempted to compile applications - such as gcc - using my native Intel or
xlC compilers, without luck. Again, this isn't a question on how to compile
GCC, but rather: in your experience, does the OpenSource community try to
conform to ANSI standards?
>
>My concern is that as an admin I am sometimes compelled to port these
applications to multiple platforms running the same OS and as the user
community becomes more and more insistent on OpenSource applications will
gotchas appear due to a lack of portability in the coding? I fully realize
that
independent developers may or may not conform to standards, but again is
it
at least encouraged?

The C library [glibc] and other libs [pthreads, sockets, etc] follow
various UNIX, POSIX and ANSI standards. There are minor gotchas (for
instance, pthreads is part of the glibc in Linux but not in other
UNIXes) but for the most part source compatibility is there.
>11.32 of the FAQ seemed to at least outline the crux of what I am asking.
If I loaded up my home machine to the gills with all the open source compiler
applications (gcc, imake, autoconf, etc.), would the applications that I
compile, link, and load conform?

Read the man pages. Make sure the functions you use conform to some
standard and are not specific to your OS (e.g. not Linux only).

You'd be surprised how much of the standard libraries are API
consistent across UNIX, BSD and Linux.

Tom

Aug 1 '06 #3

Eigenvector said:

<snip>
Where I'm going
with this is, has the OpenSource community as a whole committed itself to
at the very least encouraging its contributing members to conform to ANSI
standards of programming?
I doubt it very much, unfortunately.
My concern is that as an admin I am sometimes compelled to port these
applications to multiple platforms running the same OS and as the user
community becomes more and more insistent on OpenSource applications will
gotchas appear due to a lack of portability in coding?
It's entirely possible that they will. Unfortunately, open source code is
not, generally speaking, known for its robustness or portability. In fact,
I can think of only one kind of source base that is less robust and less
portable than open source code - and that's closed source code.
I fully realize
that independent developers may or may not conform to standards, but again
is it at least encouraged?
We do our bit, here in comp.lang.c, but I doubt whether it's enough to make
more than a small difference.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #4

Eigenvector wrote:
What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact, I have on occasion
attempted to compile applications - such as gcc - using my native Intel or
xlC compilers, without luck. Again, this isn't a question on how to compile
GCC, but rather: in your experience, does the OpenSource community try to
conform to ANSI standards?
GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).

Generally, if you don't use any special features of GNU make, you should
be "make portable". My makefiles are used on Windows, Linux, BSD, AIX,
HP-UX, MacOS, etc., even though they split between proprietary
make, imake, gmake, and so on.

Aim for C89 compliance and you've pretty much got things nailed. Keep in
mind that many compilers supported various C99 features even back then.
E.g., long long has been supported for a while, and ALL the unix C
compilers I've seen support it [even the ones released in the mid-90s].

Avoid VLAs, // comments and the newer header files if you want the
widest possible reach.
Tom

Aug 1 '06 #5

Tom St Denis wrote:
Avoid VLAs, // comments and the newer header files if you want the
widest possible reach.
Tom
You should also avoid long long. This did not exist in C89.

To add 64 bit quantities you should develop and use
add64bits(a,b);
sub64bits(a,b);
mult64bits(a,b);

etc., ad nauseam.

You should also avoid long double. This did not exist in C89.

To add two long doubles you should develop and use a package
with extended floating point precision.

addlongdouble(a,b);
sublongdouble(a,b);

ad nauseam.
You should not have any array that goes beyond 32K. You should not assume
an int is bigger than 16 bits, because that is the minimum required
by C89, if I remember correctly.

This is very easy to do: make a linked list of 32K chunks and you can
have (GASP!) arrays of maybe a MB... You should not write
int array[65000];

int m = array[45765];

but
ArrayList *array = newintArrayList(65000);
int m = getintItem(array, 45765U);

For doubles you write

ArrayList *array = newdoubleArrayList(65000);
double d = getdoubleItem(array, 45765U);

If you want indexes bigger than 65535 you should develop a 32 bit
integer package.

And after you have developed all that you can (GASP!!!) develop your
application if you have any time left.
Obviously you should buy the new Intel biprocessor or the AMD Quad
processor to handle all that overhead but who cares? Your program
will be portable to the embedded processor in the WC...
:-)
Aug 1 '06 #6

jacob navia <ja***@jacob.remcomp.fr> wrote:
You should also avoid long double. This did not exist in C89.
Oh, look, isn't it just *darling*? I'd ask if we could keep it, but
frankly I'd prefer if it kept it and its rampant lack of knowledge about
C to itself.

Richard
Aug 1 '06 #7

Richard Bos wrote:
jacob navia <ja***@jacob.remcomp.fr> wrote:

>>You should also avoid long double. This did not exist in C89.


Oh, look, isn't it just *darling*? I'd ask if we could keep it, but
frankly I'd prefer if it kept it and its rampant lack of knowledge about
C to itself.

Richard
Nothing to the other points *darling* ?

Aug 1 '06 #8


jacob navia wrote:
Tom St Denis wrote:
Avoid VLAs, // comments and the newer header files if you want the
widest possible reach.
Tom

You should also avoid long long. This did not exist in C89.
Except it is valid "C" and many [most] C compilers do in fact support
it. In fact the only compiler I know of in productive use on 32/64 bit
boxes that doesn't is MSVC and there is a simple workaround (it has a
64-bit type, just not called long long).

So your "advice" is ignorant ranting lunatic bullshit.
To add 64 bit quantities you should develop and use
add64bits(a,b);
sub64bits(a,b);
mult64bits(a,b);
Or ... not use crappy toy compilers.

All UNIX CC's I've seen support it. GCC supports it [since well before
C99], MSVC even supports it (indirectly).

Your comment is not well founded.
You should also avoid long double. This did not exist in C89.
You should avoid floats entirely if you want portable code. Many
modern platforms don't have FPUs at all.

So unless your code really needs it you shouldn't use them.
You should not have any array that goes beyond 32K. The size
of an integer should not be bigger than 16 bits, because that
is the minimum required by C89 if I remember correctly.
12345676UL

That's valid C89. Shut the fuck up.

Array indices only have to be of integer type. Chances are, if your
platform only supports 32K or 64K arrays you don't have that much
memory to work with anyways. So it's moot.

No amount of unportable coding will give you 1MB of RAM on an 8051
(typically 8051s don't use much, if any, bank switching in the field....)
And after you have developed all that you can (GASP!!!) develop your
application if you have any time left.
Bullshit. It's totally valid to do

char *p = malloc(100000L * sizeof(*p));

then do p[45000] = 3;

If and only if p != NULL.

A platform may not support such requests (if segments were 32KB,
for instance) and reject the malloc request. It's true that the
compiler doesn't have to support large objects, but that's an OPTIONAL
limitation, only imposed when the PHYSICAL limitations of the platform
make it troublesome to work around.
Obviously you should buy the new Intel biprocessor or the AMD Quad
processor to handle all that overhead but who cares? Your program
will be portable to the embedded processor in the WC...
For someone who develops for windows you seem to have an odd view of
"overhead" and waste....

Tom

Aug 1 '06 #9

Tom St Denis wrote:
jacob navia wrote:
>>Tom St Denis wrote:
>>>Avoid VLAs, // comments and the newer header files if you want the
widest possible reach.
Tom

You should also avoid long long. This did not exist in C89.


Except it is valid "C" and many [most] C compilers do in fact support
it. In fact the only compiler I know of in productive use on 32/64 bit
boxes that doesn't is MSVC and there is a simple workaround (it has a
64-bit type, just not called long long).
See?

Not portable then.
>
So your "advice" is ignorant ranting lunatic bullshit.
WOW what nice arguments
>
>>To add 64 bit quantities you should develop and use
add64bits(a,b);
sub64bits(a,b);
mult64bits(a,b);


Or ... not use crappy toy compilers.
Exactly. Use C99.
All UNIX CC's I've seen support it. GCC supports it [since well before
C99], MSVC even supports it (indirectly).

Your comment is not well founded.

>>You should also avoid long double. This did not exist in C89.


You should avoid floats entirely if you want portable code. Many
modern platforms don't have FPUs at all.

So unless your code really needs it you shouldn't use them.

>>You should not have any array that goes beyond 32K. The size
of an integer should not be bigger than 16 bits, because that
is the minimum required by C89 if I remember correctly.


12345676UL

That's valid C89. Shut the fuck up.

This is not portable to 16-bit machines. Per the standard (C99) the minimum
integer size is only 16 bits. I would be surprised if C89 were
different.
Array indices only have to be of integer type.
Yes, and the minimum required int size is 16 bits.
Aug 1 '06 #10

Richard Bos wrote:
jacob navia <ja***@jacob.remcomp.fr> wrote:

>>You should also avoid long double. This did not exist in C89.


Oh, look, isn't it just *darling*? I'd ask if we could keep it, but
frankly I'd prefer if it kept it and its rampant lack of knowledge about
C to itself.

Richard
I stand corrected. Apparently C89 did have long double.

Aug 1 '06 #11

jacob navia wrote:
Except it is valid "C" and many [most] C compilers do in fact support
it. In fact the only compiler I know of in productive use on 32/64 bit
boxes that doesn't is MSVC and there is a simple workaround (it has a
64-bit type, just not called long long).

See?

Not portable then.
Um, OK, but it's trivial to work around. You can still use

ulong64 a, b, c;

a += b;
c *= b;
b ^= a;

After you find a typedef for ulong64. On real compilers it's "unsigned
long long" and on MSVC it's "unsigned __int64" [iirc].

That's a stupidly quick fix. I support it in LibTomCrypt since day-1
and haven't looked back since.
So your "advice" is ignorant ranting lunatic bullshit.

WOW what nice arguments
Well, given that my software is used on pretty much every 32- and 64-bit C
platform you can think of, I'd say I'm abundantly qualified to talk
about getting code to work portably.

Your rants [which I think are just a joe-job against the real Jacob]
are pure troll ignorant trash.
Or ... not use crappy toy compilers.

Exactly. Use C99.
Um, no, read what I'm actually writing. Real compilers supported it
before C99.
12345676UL

That's valid C89. Shut the fuck up.

This is not portable to 16 bit machines. Per standard (C99) the minimum
integer size is 16 bit only. I would be surprised that C89 was
different.
Um,

unsigned long x = 1234567UL;

That's valid C89 code. Any compiler that doesn't support that is not a
C89 compiler and is not worth discussing [hint: BYTE Small-C from 1985
is not a C89 compiler!]
Array indices only have to be of integer type.

Yes, and the minimum required int size is 16 bits.
Yes, so if your malloc of 100000 bytes fails you can exit gracefully.
I agree that static or globals of huge size are a bad idea (in general
they're a bad idea anyways).

That said, you generally don't assume that a program that uses huge
arrays will port directly to an 8-bit host anyway.

By your logic, C is a bad language because Apache2 won't build on my
8051.

"Portable" code has limits. But between tiny alarm clock programs and
vast server applications is an entire spectrum of applications that
work in a variety of environments. For instance, Info-Zip was being
used on 16-, 32- and 64-bit platforms (of various endiannesses) at the
same time.

Your rants are ignorant and shameful. My guess is you haven't spent 5
minutes trying to write portable code and you're just upset that you
think you're moot since you've missed the boat.

How about you be a team player for a change?

Tom

Aug 1 '06 #12

On Mon, 31 Jul 2006 20:52:16 -0700, "Eigenvector"
<m4********@yahoo.com> wrote:
Where I'm going
with this is, has the OpenSource community as a whole committed itself to at
the very least encouraging its contributing members to conform to ANSI
standards of programming?
No, because there's no such entity as "the OpenSource community as a
whole." Individual projects may have guidelines.

--
Al Balmer
Sun City, AZ
Aug 1 '06 #13

Tom St Denis wrote:
[...]
Aim for C89 compliance and you pretty much got things nailed. Keep in
mind that many compilers supported various C99 features even back then.
E.g., long long has been supported for a while and ALL unix C
compilers I've seen support it [even the ones released in the mid 90s].
[...]

SCO OpenServer 5 doesn't have "long long" support, and the compiler
says it's dated "18Feb03".

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody        | www.hvcomputer.com | #include              |
| kenbrody/at\spamcop.net | www.fptech.com     | <std_disclaimer.h>    |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>

Aug 1 '06 #14


Kenneth Brody wrote:
Tom St Denis wrote:
[...]
Aim for C89 compliance and you pretty much got things nailed. Keep in
mind that many compilers supported various C99 features even back then.
E.g., long long has been supported for a while and ALL unix C
compilers I've seen support it [even the ones released in the mid 90s].
[...]

SCO OpenServer 5 doesn't have "long long" support, and the compiler
says it's dated "18Feb03".
Who the fuck uses SCO? When I say UNIX I mean IRIX, Solaris, HP-UX,
AIX, etc...

SCO is the OS of choice for the anti-christ.

Tom

Aug 1 '06 #15

In article <11**********************@i42g2000cwa.googlegroups.com>,
Tom St Denis <to********@gmail.com> wrote:
>Who the fuck uses SCO? When I say UNIX I mean IRIX, Solaris, HP-UX,
AIX, etc...
The OpenGroup certified UNIXes are [specific versions of]

AIX (IBM)
IRIX (SGI)
NCR UNIX (NCR)
Solaris (Sun, Fujitsu)
Tru64 (HP)
UnixWare (Caldera, SCO)
UX/4800 (NEC)
Regardless of what one thinks of SCO, they -are- one of the few
certified UNIXes.
--
"It is important to remember that when it comes to law, computers
never make copies, only human beings make copies. Computers are given
commands, not permission. Only people can be given permission."
-- Brad Templeton
Aug 1 '06 #16

jacob navia wrote:
You should also avoid long double. This did not exist in C89.
Jacob Navia once again proves he knows nothing. From the C89 ANSI
standard, even before the ISO C90 standard:

3.1.2.5 Types
[...]
There are three floating types, designated as float, double, and
long double. The set of values of the type float is a subset of the
set of values of the type double; the set of values of the type
double is a subset of the set of values of the type long double.

When Jacob is not pretending that his compiler's non-standard features
define the world, he pretends that C does not include what it has
included for 17 years. Is he ignorant or dishonest?
Aug 1 '06 #17

P: n/a
Martin Ambuhl said:

<snip>
>
When Jacob is not pretending that his compiler's non-standard features
define the world, he pretends that C does not include what it has
included for 17 years. Is he ignorant or dishonest?
Hanlon's Razor applies.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #18

Walter Roberson wrote:
In article <11**********************@i42g2000cwa.googlegroups.com>,
Tom St Denis <to********@gmail.com> wrote:
>Who the fuck uses SCO? When I say UNIX I mean IRIX, Solaris, HP-UX,
AIX, etc...

The OpenGroup certified UNIXes are [specific versions of]

AIX (IBM)
IRIX (SGI)
NCR UNIX (NCR)
Solaris (Sun, Fujitsu)
Tru64 (HP)
UnixWare (Caldera, SCO)
UX/4800 (NEC)
Regardless of what one thinks of SCO, they -are- one of the few
certified UNIXes.
Also it *is* still used. I even have a SCO box in the office for doing
critical fixes to an old version of our software. Fortunately I've
persuaded the company not to support SCO for the current version.
--
Flash Gordon
Still sigless on this computer.
Aug 1 '06 #19

Flash Gordon wrote:
Walter Roberson wrote:
>In article <11**********************@i42g2000cwa.googlegroups.com>,
Tom St Denis <to********@gmail.com> wrote:
>>Who the fuck uses SCO? When I say UNIX I mean IRIX, Solaris, HP-UX,
AIX, etc...

The OpenGroup certified UNIXes are [specific versions of]

AIX (IBM)
IRIX (SGI)
NCR UNIX (NCR)
Solaris (Sun, Fujitsu)
Tru64 (HP)
UnixWare (Caldera, SCO)
UX/4800 (NEC)

Regardless of what one thinks of SCO, they -are- one of the few
certified UNIXes.

Also it *is* still used. I even have a SCO box in the office for doing
critical fixes to an old version of our software. Fortunately I've
persuaded the company not to support SCO for the current version.
[So off-topic that I set follow-ups.]

You too, huh? I actually had to rescue my OpenServer CDs from the
"coaster heap" in order to pull together a system as a good-faith move
for a customer.

It's true: people still use this stuff.
Aug 1 '06 #20


"Richard Heathfield" <in*****@invalid.invalid> wrote in message
news:jK******************************@bt.com...
Eigenvector said:

<snip>
>Where I'm going
with this is, has the OpenSource community as a whole committed itself to
at the very least encouraging its contributing members to conform to ANSI
standards of programming?

I doubt it very much, unfortunately.
>My concern is that as an admin I am sometimes compelled to port these
applications to multiple platforms running the same OS and as the user
community becomes more and more insistent on OpenSource applications will
gotchas appear due to a lack of portability in coding?

It's entirely possible that they will. Unfortunately, open source code is
not, generally speaking, known for its robustness or portability. In fact,
I can think of only one kind of source base that is less robust and less
portable than open source code - and that's closed source code.
>I fully realize
that independent developers may or may not conform to standards, but
again
is it at least encouraged?

We do our bit, here in comp.lang.c, but I doubt whether it's enough to
make
more than a small difference.

--
Richard Heathfield
Thank you.
Aug 1 '06 #21


"Tom St Denis" <to********@gmail.com> wrote in message
news:11*********************@m79g2000cwm.googlegroups.com...
Eigenvector wrote:
>What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact, I have on occasion
attempted to compile applications - such as gcc - using my native Intel or
xlC compilers, without luck. Again, this isn't a question on how to compile
GCC, but rather: in your experience, does the OpenSource community try to
conform to ANSI standards?

GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).
Why do you say that the Intel C compiler isn't standard conforming?

Generally if you don't use any special features of GNU make you should
be "make portable". My makefiles are used in windows, linux, bsd, AIX,
HP-UX, MacOS, etc, etc, etc even though they split between proprietary
make, imake, gmake, etc, etc, etc.

Aim for C89 compliance and you pretty much got things nailed. Keep in
mind that many compilers supported various C99 features even back then.
E.g., long long has been supported for a while and ALL unix C
compilers I've seen support it [even the ones released in the mid 90s].

Avoid VLAs, // comments and the newer header files if you want the
widest possible reach.
Tom

Aug 1 '06 #22

Tom St Denis wrote:
Eigenvector wrote:
>>What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact, I have on occasion
attempted to compile applications - such as gcc - using my native Intel or
xlC compilers, without luck. Again, this isn't a question on how to compile
GCC, but rather: in your experience, does the OpenSource community try to
conform to ANSI standards?


GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).
You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99. It is the best compiler for the
intel architecture producing always the best possible code. It is
compatible with gnu's gcc under linux, and with microsoft msvc
under windows.

That compiler is a GREAT piece of software.
Aug 2 '06 #23

jacob navia <ja***@jacob.remcomp.fr> writes:
Tom St Denis wrote:
[snip]
>GCC is hardly your average OSS project (and Intel C is hardly
standard conforming).

You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99. It is the best compiler for the
intel architecture producing always the best possible code. It is
compatible with gnu's gcc under linux, and with microsoft msvc
under windows.
"producing always the best possible code"?

Fortunately, the Intel web page you cite doesn't make that absurd
claim. I understand that it does generate good code, but I seriously
doubt that it's "always the best possible".

I'll note in passing that that web page contains the following
statement:

C89 is currently the de facto standard for C applications; however
use of C99 is increasing.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 2 '06 #24

Eigenvector wrote:
[...] I fully realize that
independent developers may or may not conform to standards, but again is it
at least encouraged?
Not really. By its very nature C encourages non-portable programming.
In general, I try to write code portably, but the only thing keeping me
honest is actually compiling my stuff with multiple compilers to see
what happens.

To be completely portable, you also have to support multiple bit sizes
for int, be wary of the arbitrary evaluation order of function arguments
and expression operands, limit external names to 6 unique characters,
treat char as either signed or unsigned, and allow for non-2's-complement
integers. The problem is that it's hard to find compilers or platforms
that exercise variation in those things.

The easy way to make C code "portable" is to lower your standards for
portability. I.e., just port your code to major compilers (MSVC, gcc
and some Unix cc -- personally, I add Borland C, Turbo C and Watcom C
when I want to really push it), and hope that's good enough.

In general writing any non-trivial amount of code in C for which you
have good certainty about portability is really quite time consuming.
The "open source community" tends to favor "make it work" over "make it
portable".

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Aug 2 '06 #25

Keith Thompson wrote:
jacob navia <ja***@jacob.remcomp.fr> writes:
>>Tom St Denis wrote:

[snip]
>>>GCC is hardly your average OSS project (and Intel C is hardly
standard conforming).

You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99. It is the best compiler for the
intel architecture producing always the best possible code. It is
compatible with gnu's gcc under linux, and with microsoft msvc
under windows.


"producing always the best possible code"?

Fortunately, the Intel web page you cite doesn't make that absurd
claim. I understand that it does generate good code, but I seriously
doubt that it's "always the best possible".

I have *yet* to see any compiler that beats intel's. Sorry.
Aug 2 '06 #26

jacob navia wrote:
Fortunately, the Intel web page you cite doesn't make that absurd
claim. I understand that it does generate good code, but I seriously
doubt that it's "always the best possible".

I have *yet* to see any compiler that beats intel's. Sorry.
GCC 3.4.6 beat Intel v8 in LibTomMath benchmarks. Therefore, GCC is
better than Intel C.

See, that's the problem with singular benchmarks: they don't mean much.
Intel C is a decent compiler, but it allows you to use C++'isms in C
code where you shouldn't be able to. It also lets you use ridiculously
old syntax. And finally, its inline asm is not 100% compatible with
GCC's asm.

Intel C is OT for this group though, so if you really want to have a
discussion about it email me in private.

Tom

Aug 2 '06 #27

On Wed, 02 Aug 2006 09:49:08 +0200, jacob navia
<ja***@jacob.remcomp.fr> wrote:
>Tom St Denis wrote:
>Eigenvector wrote:
>>>What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact, I have on occasion
attempted to compile applications - such as gcc - using my native Intel or
xlC compilers, without luck. Again, this isn't a question on how to compile
GCC, but rather: in your experience, does the OpenSource community try to
conform to ANSI standards?


GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).

You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99.
It doesn't say that at all. Did you actually read the article?
It is the best compiler for the
intel architecture producing always the best possible code. It is
compatible with gnu's gcc under linux, and with microsoft msvc
under windows.

That compiler is a GREAT piece of software.
--
Al Balmer
Sun City, AZ
Aug 2 '06 #28

jacob navia <ja***@jacob.remcomp.fr> writes:
Keith Thompson wrote:
>jacob navia <ja***@jacob.remcomp.fr> writes:
>>>Tom St Denis wrote:
[snip]
>>>>GCC is hardly your average OSS project (and Intel C is hardly
standard conforming).

You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99. It is the best compiler for the
intel architecture producing always the best possible code. It is
compatible with gnu's gcc under linux, and with microsoft msvc
under windows.
"producing always the best possible code"?
Fortunately, the Intel web page you cite doesn't make that absurd
claim. I understand that it does generate good code, but I seriously
doubt that it's "always the best possible".

I have *yet* to see any compiler that beats intel's. Sorry.
That may well be true, but it's not my point.

You claimed that Intel's compiler produces "always the best possible
code". This is a claim of absolute perfection, and it is well beyond
the current state of the art.

There is a tool that generates absolutely optimal code for a
specified operation on a particular CPU by doing an exhaustive search
of all possible instruction sequences (google "superoptimizer"). It
works only on very short sequences, and it takes a very long time, far
longer than would be practical in a compiler. (It's been used to find
canned sequences that can be emitted by rote by an optimizing
compiler.) Your claim, if taken literally, implies that the Intel
compiler does this kind of optimization for all inputs. Even if it's
the best compiler in the world, I don't believe it's the best compiler
possible.

If you want to be taken seriously, either don't make such far-reaching
claims, or back them up.

I'm not surprised that you've chosen to ignore the following from my
previous article.

| I'll note in passing that that web page contains the following
| statement:
|
| C89 is currently the de facto standard for C applications; however
| use of C99 is increasing.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 2 '06 #29

P: n/a
Al Balmer <al******@att.netwrites:
On Wed, 02 Aug 2006 09:49:08 +0200, jacob navia
<ja***@jacob.remcomp.frwrote:
[...]
>>You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99.

It doesn't say that at all. Did you actually read the article?
The table of compilation modes says:

-[no-]c99 C99 conformance and feature support
-std=c89 C89 conformance and feature support

I presume it conforms well to C89. The wording implies that it
conforms equally well to C99. My only reasons to doubt that are that
full C99 conformance is relatively rare, and I'd expect Intel to make
a bigger deal about it.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 2 '06 #30

P: n/a
Al Balmer wrote:
On Wed, 02 Aug 2006 09:49:08 +0200, jacob navia
<ja***@jacob.remcomp.frwrote:

>>Tom St Denis a écrit :
>>>Eigenvector wrote:
What about the programmers who submit to the archives? That is mainly where
I see massive gcc and imake requirements. In fact I have on occasion
attempted to compile applications - such as gcc using my native Intel or xlC
compilers without luck. Again this isn't a question on how to compile GCC,
but rather is the experience that the OpenSource community tries to conform
to ANSI standards?
GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).

You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99.


It doesn't say that at all. Did you actually read the article?
You can't read?

In that page the option
-std=c99
is mentioned with explanation:

" C99 conformance and feature support"

in the first table of that page.

And you say:
>
It doesn't say that at all. Did you actually read the article?
There is no blinder person than the one who doesn't want to see!

jacob
Aug 2 '06 #31

P: n/a
On Wed, 02 Aug 2006 21:00:26 +0200, jacob navia
<ja***@jacob.remcomp.frwrote:
>Al Balmer wrote:
>On Wed, 02 Aug 2006 09:49:08 +0200, jacob navia
<ja***@jacob.remcomp.frwrote:

>>>Tom St Denis a écrit :

Eigenvector wrote:
>What about the programmers who submit to the archives? That is mainly where
>I see massive gcc and imake requirements. In fact I have on occasion
>attempted to compile applications - such as gcc using my native Intel or xlC
>compilers without luck. Again this isn't a question on how to compile GCC,
>but rather is the experience that the OpenSource community tries to conform
>to ANSI standards?
GCC is hardly your average OSS project (and Intel C is hardly standard
conforming).
You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99.


It doesn't say that at all. Did you actually read the article?

You can't read?

In that page the option
-std=c99
is mentioned with explanation:

" C99 conformance and feature support"
And that's what you're basing your conclusion on? An entry in a
options table? The name of one of the C++ options is "_strict_ansi",
but in the text above, they tell you that the C++ compiler has only "a
high degree of conformance." IOW, it's not "strict ansi."

As one who claims to be a compiler vendor, you should know that
standards compliance should be supplied as a separate statement, along
with the required statement of implementation dependent behavior.

It's entirely possible that Intel is fully C99 compliant (though you'd
think they would say so in their ads), but you can't tell from the
page you gave us.
>
in the first table of that page.

And you say:

It doesn't say that at all. Did you actually read the article?

There is no blinder person than the one who doesn't want to see!

jacob
--
Al Balmer
Sun City, AZ
Aug 2 '06 #32

P: n/a
On Wed, 02 Aug 2006 18:58:00 GMT, Keith Thompson <ks***@mib.org>
wrote:
>Al Balmer <al******@att.netwrites:
>On Wed, 02 Aug 2006 09:49:08 +0200, jacob navia
<ja***@jacob.remcomp.frwrote:
[...]
>>>You are just speaking nonsense. As specified here:

http://www.intel.com/cd/ids/develope...372.htm?page=4

the intel compiler complies with C99.

It doesn't say that at all. Did you actually read the article?

The table of compilation modes says:

-[no-]c99 C99 conformance and feature support
-std=c89 C89 conformance and feature support

I presume it conforms well to C89.
An unjustified presumption, imo. The text does claim good conformance
to the C++ standard, but really says nothing about either C89 or C99
conformance.
The wording implies that it
conforms equally well to C99.
Or equally badly, if you mean the wording in the table of compilation
modes, which probably isn't legally binding :-)
My only reasons to doubt that are that
full C99 conformance is relatively rare, and I'd expect Intel to make
a bigger deal about it.
Fourth return from Google:
http://www.intel.com/support/perform.../cs-015003.htm

<quote>
C Standard Conformance
The Intel® C++ Compilers provide some conformance to the ANSI/ISO
standard for C language compilation (ISO/IEC 9899:1999).

For more information on C conformance, refer to the User's Guide.
</quote>

I expect the user's guide will have a detailed listing of
non-conforming items, as well as the list of implementation-dependent
items.

--
Al Balmer
Sun City, AZ
Aug 2 '06 #33

P: n/a
Al Balmer <al******@att.netwrites:
On Wed, 02 Aug 2006 18:58:00 GMT, Keith Thompson <ks***@mib.org>
wrote:
[...]
>>The table of compilation modes says:

-[no-]c99 C99 conformance and feature support
-std=c89 C89 conformance and feature support

I presume it conforms well to C89.

An unjustified presumption, imo. The text does claim good conformance
to the C++ standard, but really says nothing about either C89 or C99
conformance.
Perhaps. My presumption is based largely on the fact that C89
conformance is much easier to achieve than C99 or C++ conformance.
There are plenty of compilers that conform well to the C89/C90
standard; Intel's compiler would have trouble competing if it had
significant gaps.

This is, of course, guesswork based on circumstantial evidence, not
guaranteed to be worth more than you paid for it.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 2 '06 #34

P: n/a

<we******@gmail.comwrote in message
news:11**********************@m79g2000cwm.googlegroups.com...
Eigenvector wrote:
>[...] I fully realize that
independent developers may or may not conform to standards, but again is
it
at least encouraged?

Not really. By its very nature C encourages non-portable programming.
In general, I try to write code portably, but the only thing keeping me
honest is actually compiling my stuff with multiple compilers to see
what happens.
Yes. There is a tension between efficiency and portability. In Java they
resolved it by compromising efficiency; in C we have to be careful to make
our portable code genuinely portable, which is why the topic is so often
discussed.
There is also the problem of "good enough" portability, for instance
assuming ASCII and two's complement integers.
--
www.personal.leeds.ac.uk/~bgy1mm
freeware games to download.

Aug 3 '06 #35

P: n/a
Websnarf posted:
Not really. By its very nature C encourages non-portable programming.

Heavily subjective statement, and I disagree vehemently.
In general, I try to write code portably, but the only thing keeping me
honest is actually compiling my stuff with multiple compilers to see
what happens.

Without trying to sound condescending: You should have a better knowledge
of the restrictions (and freedoms) which the C Standard imposes upon
implementations. Things like:

(1) Null pointers need not be all bits zero.
(2) All bits zero must be a valid zero value for an integer type.
(3) All union members have the same address.
(4) There may be padding between struct elements.

I NEVER have to rely on different compiler tests to tell me if my code is
portable -- I just look for the relevant excerpt from the Standard.
To be completely portable, you also have to support multiple bit sizes
for int

Easily sorted with:

(1) sizeof
(2) IMAX_BITS written by Hallvard B Furuseth
>, be wary of arbitrary call order of parameters or expression
operands,

Don't rely on order of evaluation. The language has sequence points for a
reason.
use char as either signed
or unsigned,

Only use "plain char" to store a character -- don't use it for numbers or
arithmetic.
and allow for non-2s complement integers.

When would one need to rely on having 2's complement? It would only be
relevant if you're doing bit-fiddling. In any case, you can easily make
allowances for it:

#if (-1 & 3 == ...
#define TWOS_COMPLEMENT...
#else
#define ONES_COMPLEMENT...

And... if all else fails, and your program simply MUST run on a two's
complement system, then:

#ifndef TWOS_COMPLEMENT
#error ...
The problem is that it's hard to find a compiler or platforms that
supports variation in those things.

But it's easy to find the relevant paragraphs in the Standard.
The easy way to make C code "portable" is to lower your standards for
portability. I.e., just port your code to major compilers (MSVC, gcc
and some Unix cc -- personally, I add Borland C, Turbo C and Watcom C
when I want to really push it), and hope that's good enough.

Better yet... just write 100% portable Standard-compliant code.
In general writing any non-trivial amount of code in C for which you
have good certainty about portability is really quite time consuming.

Not if you're familiar with writing portable code.
The "open source community" tends to favor "make it work" over "make it
portable".

If the Open Source Community jumped off a cliff, would you ju...

--

Frederick Gotham
Aug 3 '06 #36

P: n/a
"Malcolm" <re*******@btinternet.comwrites:
<we******@gmail.comwrote in message
news:11**********************@m79g2000cwm.googlegroups.com...
>Eigenvector wrote:
>>[...] I fully realize that
independent developers may or may not conform to standards, but again is
it
at least encouraged?

Not really. By its very nature C encourages non-portable programming.
In general, I try to write code portably, but the only thing keeping me
honest is actually compiling my stuff with multiple compilers to see
what happens.
Yes. There is a tension between efficiency and portability. In Java they
resolved it by compromising efficiency; in C we have to be careful to make
our portable code genuinely portable, which is why the topic is so often
discussed.
There is also the problem of "good enough" portability, for instance
assuming ASCII and two's complement integers.
I rarely find it useful to assume ASCII. It's usually just as easy to
write code that depends only on the guarantees in the standard, and
will just work regardless of the character set. It would be
marginally more convenient to be able to assume that the character
codes for the letters are contiguous, but that's easy enough to work
around.

As for two's complement, I typically don't care about that either.
Numbers are numbers. If I need to do bit-twiddling, I use unsigned.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 3 '06 #37

P: n/a
Keith Thompson said:
"Malcolm" <re*******@btinternet.comwrites:
<snip>
>There is also the problem of "good enough" portability, for instance
assuming ASCII and two's complement integers.

I rarely find it useful to assume ASCII. It's usually just as easy to
write code that depends only on the guarantees in the standard, and
will just work regardless of the character set. It would be
marginally more convenient to be able to assume that the character
codes for the letters are contiguous, but that's easy enough to work
around.
I suppose that this attitude is more natural for those of us who have
written code that has to work on real live non-ASCII systems, simply
because we're so used to not being /able/ to assume ASCII that it never
occurs to us to rely on ASCII even when we might get away with it.
As for two's complement, I typically don't care about that either.
Numbers are numbers. If I need to do bit-twiddling, I use unsigned.
Indeed. And, on a related note, I find it very difficult to understand this
fascination with integers that have a particular number of bits. If I need
8 bits, I'll use char (or a flavour thereof). If I need 9 to 16 bits, I'll
use int (or unsigned). If I need 17 to 32 bits, I'll use long (or unsigned
long). And if I need more than 32 bits, I'll use a bit array. I see
absolutely no need for int_leastthis, int_fastthat, and int_exacttheother.

The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 3 '06 #38

P: n/a
Richard Heathfield <in*****@invalid.invalidwrites:
Keith Thompson said:
>"Malcolm" <re*******@btinternet.comwrites:

<snip>
>>There is also the problem of "good enough" portability, for instance
assuming ASCII and two's complement integers.

I rarely find it useful to assume ASCII. It's usually just as easy to
write code that depends only on the guarantees in the standard, and
will just work regardless of the character set. It would be
marginally more convenient to be able to assume that the character
codes for the letters are contiguous, but that's easy enough to work
around.

I suppose that this attitude is more natural for those of us who have
written code that has to work on real live non-ASCII systems, simply
because we're so used to not being /able/ to assume ASCII that it never
occurs to us to rely on ASCII even when we might get away with it.
Perhaps, but I've never really programmed for a non-ASCII system.

It just wouldn't occur to me to write code that depends on the
assumption that 'A' == 65. If I want 'A', I write 'A'.
>As for two's complement, I typically don't care about that either.
Numbers are numbers. If I need to do bit-twiddling, I use unsigned.

Indeed. And, on a related note, I find it very difficult to understand this
fascination with integers that have a particular number of bits. If I need
8 bits, I'll use char (or a flavour thereof). If I need 9 to 16 bits, I'll
use int (or unsigned). If I need 17 to 32 bits, I'll use long (or unsigned
long). And if I need more than 32 bits, I'll use a bit array. I see
absolutely no need for int_leastthis, int_fastthat, and int_exacttheother.
But there are times when you need some exact number of bits,
particularly when you're using an externally imposed data format.
(But then whoever is imposing the data format on you should have
provided a header that declares the appropriate types.)

Something that might be more useful would be a way to ask for an
integer type with (at least) a specified *range*. If I'm using a type
to hold numbers rather than bags of bits, I care what numbers I can
store in it, not how many bits it uses to store them.
The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.
That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

My objection to C's integer type system is that the names are
arbitrary: "char", "short", "int", "long", "long long", "ginormous
long". I'd like to see a system where the type names follow a regular
pattern, and if you want to have a dozen distinct types the names are
clear and obvious. I have a few ideas, but since this will never
happen in any language called "C" I won't go into any more detail.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 3 '06 #39

P: n/a
Keith Thompson said:
Richard Heathfield <in*****@invalid.invalidwrites:
<snip>
>
>The introduction of long long int was, in my continued opinion, a
mistake. All the ISO guys had to do was - nothing at all! Any
implementation that wanted to support 64-bit integers could simply have
made long int rather longer than before - such a system would have
continued to be fully conforming to C90. And if it broke code, well, so
what? Any code that wrongly assumes long int is precisely 32 bits is
already broken, and needs fixing.

That's true, but 64 bits is the effective limit for this.
Nope.
The following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types,
Indeed. So is this:

char 8 bits
short 64 bits
int 64 bits
long 64 bits

and at least one implementation does it like that.
but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).
Er, and?
My objection to C's integer type system is that the names are
arbitrary: "char", "short", "int", "long", "long long", "ginormous
long". I'd like to see a system where the type names follow a regular
pattern,
I think the C99 guys missed a trick by not reserving "very" for this
purpose. If they must spec extra types, at least make them scalable:

very long - at least 64 bits
very very long - at least 128 bits
very very very long - at least 256 bits
etc

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 4 '06 #40

P: n/a
Richard Heathfield <in*****@invalid.invalidwrites:
Keith Thompson said:
>Richard Heathfield <in*****@invalid.invalidwrites:
<snip>
>>
>>The introduction of long long int was, in my continued opinion, a
mistake. All the ISO guys had to do was - nothing at all! Any
implementation that wanted to support 64-bit integers could simply have
made long int rather longer than before - such a system would have
continued to be fully conforming to C90. And if it broke code, well, so
what? Any code that wrongly assumes long int is precisely 32 bits is
already broken, and needs fixing.

That's true, but 64 bits is the effective limit for this.

Nope.
>The following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types,

Indeed. So is this:

char 8 bits
short 64 bits
int 64 bits
long 64 bits

and at least one implementation does it like that.
>but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

Er, and?
Er, and I've worked on such a system, and it caused some real
problems. There was a 16-bit or 32-bit field in an externally imposed
data format. The relevant system header declared it as a bit field.
Code that assumed you could take the address of that field broke.

The C compiler could have created 16-bit and 32-bit integer types,
using the same convoluted mechanism it used to handle 8-bit types --
but if these types had been called "short" and "int", then a lot of
code would have used them and become unreasonably slow as a result.

In C99, I suppose the system could have defined them as extended
integer types, guaranteeing that they wouldn't be used unless you
explicitly asked for them. Then int_fast32_t would be 64 bits, but
int32_t would be 32 bits (and slow).

That's a problem with the design of <stdint.h>; the naming scheme
assumes that exact-width types are more important than types with *at
least* a specified size or range.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 4 '06 #41

P: n/a
Keith Thompson said:
Richard Heathfield <in*****@invalid.invalidwrites:
>>
>>but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

Er, and?

Er, and I've worked on such a system, and it caused some real
problems. There was a 16-bit or 32-bit field in an externally imposed
data format. The relevant system header declared it as a bit field.
Code that assumed you could take the address of that field broke.
Bit-fields are broken anyway. I'd read it a byte at a time, and copy the
bits over "by hand", so to speak.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 4 '06 #42

P: n/a
Richard Heathfield <in*****@invalid.invalidwrites:
Keith Thompson said:
>Richard Heathfield <in*****@invalid.invalidwrites:
>>>but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

Er, and?

Er, and I've worked on such a system, and it caused some real
problems. There was a 16-bit or 32-bit field in an externally imposed
data format. The relevant system header declared it as a bit field.
Code that assumed you could take the address of that field broke.

Bit-fields are broken anyway. I'd read it a byte at a time, and copy the
bits over "by hand", so to speak.
Yes, that's another (equally inconvenient) approach. But in any case
I didn't have the option of modifying the system header files.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 4 '06 #43

P: n/a
On 2006-08-03, Keith Thompson <ks***@mib.orgwrote:
Richard Heathfield <in*****@invalid.invalidwrites:
>The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.

That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).
1) This isn't really a problem; you can use a 32-bit variable to store
16-bit values; if you really need 16 bits you might need some debug
macros to artificially constrain the range.
2) If you've got a 128-bit processor, IMHO, you shouldn't be insisting
on using 8-bit types. That just sounds inefficient. [OT]

--
Andrew Poelstra <http://www.wpsoftware.net/projects>
To reach me by email, use `apoelstra' at the above domain.
"Do BOTH ends of the cable need to be plugged in?" -Anon.
Aug 4 '06 #44

P: n/a
Richard Heathfield <in*****@invalid.invalidwrote:
>
The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.
The problem was not breaking code but breaking binary compatibility.
Lots of vendors wanted to add 64 bit support to their existing 32 bit
platforms (for large file support, if nothing else), but they couldn't
very well change the size of long on the existing platforms without
breaking *everything*. The expedient, if not elegant, solution was long
long. Most of the committee didn't really like the idea, but it was
already a common extension and promised to become more so, so we figured
we'd better standardize it (particularly the promotion rules) so that
people had a chance of using it portably.

-Larry Jones

Fortunately, that was our plan from the start. -- Calvin
Aug 4 '06 #45

P: n/a
Keith Thompson <ks***@mib.orgwrote:
>
My objection to C's integer type system is that the names are
arbitrary: "char", "short", "int", "long", "long long", "ginormous
long". I'd like to see a system where the type names follow a regular
pattern, and if you want to have a dozen distinct types the names are
clear and obvious.
Well, one wag did propose adding the "very" keyword to C so that,
instead of long long we could have very long (and very very long, and
very very very long, etc.), not to mention very short (and very very
short, etc.). You could even have very const....

-Larry Jones

I hope Mom and Dad didn't rent out my room. -- Calvin
Aug 4 '06 #46

P: n/a
Keith Thompson wrote:
>
My objection to C's integer type system is that the names are
arbitrary: "char", "short", "int", "long", "long long", "ginormous
long". I'd like to see a system where the type names follow a regular
pattern, and if you want to have a dozen distinct types the names are
clear and obvious. I have a few ideas, but since this will never
happen in any language called "C" I won't go into any more detail.
Isn't that why we now have (u)int32_t and friends? I tend to use int or
unsigned if I don't care about the size and one of the exact size type
if I do.

--
Ian Collins.
Aug 4 '06 #47

P: n/a
Andrew Poelstra wrote:
On 2006-08-03, Keith Thompson <ks***@mib.orgwrote:
>>Richard Heathfield <in*****@invalid.invalidwrites:
>>>The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.

That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).


1) This isn't really a problem; you can use a 32-bit variable to store
16-bit values; if you really need 16 bits you might need some debug
macros to artificially constrain the range.
Just beware of overflows!
2) If you've got a 128-bit processor, IMHO, you shouldn't be insisting
on using 8-bit types. That just sounds inefficient. [OT]
Unless your (possibly externally imposed) data happens to be 8 bit.

--
Ian Collins.
Aug 4 '06 #48

P: n/a

Andrew Poelstra wrote:

2) If you've got a 128-bit processor, IMHO, you shouldn't be insisting
on using 8-bit types. That just sounds inefficient. [OT]
Kind of an ironic statement in a thread about portability. :)

--
Bill Pursell

Aug 4 '06 #49

P: n/a
Andrew Poelstra <ap*******@false.sitewrites:
On 2006-08-03, Keith Thompson <ks***@mib.orgwrote:
>Richard Heathfield <in*****@invalid.invalidwrites:
>>The introduction of long long int was, in my continued opinion, a mistake.
All the ISO guys had to do was - nothing at all! Any implementation that
wanted to support 64-bit integers could simply have made long int rather
longer than before - such a system would have continued to be fully
conforming to C90. And if it broke code, well, so what? Any code that
wrongly assumes long int is precisely 32 bits is already broken, and needs
fixing.

That's true, but 64 bits is the effective limit for this. The
following:
char 8 bits
short 16 bits
int 32 bits
long 64 bits
is a reasonable set of types, but if you go beyond that to 128 bits,
you're going to have to leave gaps (for example, there might not be
any 16-bit integer type).

1) This isn't really a problem; you can use a 32-bit variable to store
16-bit values; if you really need 16 bits you might need some debug
macros to artificially constrain the range.
Yes, this was really a problem. The issue wasn't just the range, it
was the representation of an externally imposed data format.
2) If you've got a 128-bit processor, IMHO, you shouldn't be insisting
on using 8-bit types. That just sounds inefficient. [OT]
Unless, of course, you're doing string processing.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 4 '06 #50
