
The illusion of "portability"

In this group there is a bunch of people who call themselves 'regulars'
and who insist on something called "portability".

Portability for them means the least common denominator.

Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffee machine to the 64-bit
processor in your desktop.

Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress made
in C since 1989.

Note that, objectively speaking, there is not a single useful
program in C that can be ported to all machines that run the
language.

Not even the classic

int main(void) { printf("hello\n");}

Why?

For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
    int r = printf("hello\n");
    if (r < 0) {
        // what do we do here ???
    }
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.

Since printf returns a negative value for an I/O error OR for a
format error in the format string, there is no portable way to
discriminate between those two possibilities either.
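
About the most a strictly portable program can do is notice that the
call failed and bail out. A minimal sketch (the error message and the
exit strategy are just one possible choice, nothing the standard
mandates):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int r = printf("hello\n");
    if (r < 0) {
        /* we know *something* went wrong, but not what */
        fputs("error writing to stdout\n", stderr);
        return EXIT_FAILURE;
    }
    return 0;
}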

Obviously, network I/O, GUIs, threads, and many other things essential
for modern programming are completely beyond the scope of "standard C",
and any use of them instantly makes your program non-portable.

This means that effectively 100% of real C software is not portable to
all machines, and that "portability" is at best a goal to keep in
mind by building abstraction layers, but no more.

This is taken to ridiculous heights with the polemic against C99, by
some of those same 'regulars'.

They first start yelling about "Standard C", and then... they do not
mean standard C but some other obsolete standard. All that, in the name
of "portability".

Who cares about portability if the cost is higher than "usability"
and ease of programming?
jacob

Jul 31 '06 #1
93 Replies


jacob navia said:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".
The term "regulars" is a common one to describe those who frequent a forum,
whether it be an IRC channel, a newsgroup, or whatever. You yourself are a
"regular" in comp.lang.c, whether you realise it or not.
Portability for them means the least common denominator.
Wouldn't it be better to ask "them" (whoever "they" are) what they mean by
portability, than to assume it? It is quite likely that the word is used in
slightly different ways by various people; it's not a very portable word.

<snip>
Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
There isn't all that much progress in C. What did C99 give us? Mixed
declarations? Sugar. // comments? More sugar. VLAs? More sugar. Compound
literals - sure, they might come in handy one day. A colossal math library?
Hardly anyone needs it, and those who do are probably using something like
Matlab anyway.

The real progress has been in the development of third-party libraries, many
of which are at least a bit cross-platform.
Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
So what should I do with all mine? Delete them? Dream on.
Not even the classic

int main(void) { printf("hello\n");}

Why?
Because the behaviour is undefined. Sheesh.
Obviously, network i/o, GUIs, threads, and many other stuff essential
for modern programming is completely beyond the scope of "standard C"
and any usage makes instantly your program non portable.
Well, it certainly makes your program /less/ portable. I use wrappers around
sockets so that I at least have portability across the Win32/Linux divide.
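The shape of such a wrapper is roughly this (a minimal sketch assuming
Winsock on one side and BSD sockets on the other; the net_* names are
purely illustrative):

#ifdef _WIN32
#include <winsock2.h>
typedef SOCKET net_socket;
static int net_startup(void) { WSADATA w; return WSAStartup(MAKEWORD(2,2), &w); }
static void net_close(net_socket s) { closesocket(s); }
#else
#include <sys/socket.h>
#include <unistd.h>
typedef int net_socket;
static int net_startup(void) { return 0; } /* nothing to do on POSIX */
static void net_close(net_socket s) { close(s); }
#endif

The rest of the program talks only to the net_* layer, so the
platform-specific part stays in one place.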
This means that effectively 100% of real C software is not portable to
all machines and that "portability" is at best a goal to keep in
mind by trying to build abstraction layers, but no more.
Portability is itself an abstraction, and a rather fuzzy one at that. There
is a lot of grey in between "portable" and "non-portable".
This is taken to ridiculous heights with the polemic against C99, by
some of those same 'regulars'.
There isn't any polemic against C99. You have misunderstood. The position of
at least some of the clc regs is that C99 will be just fine - when it
arrives. But it hasn't yet arrived in sufficient volumes to make the switch
from C90 worthwhile.
They first start yelling about "Standard C", and then... they do not
mean standard C but some other obsolete standard. All that, in the name
of "portability".
Well, I'd rather conform to an obsolete standard that is supported by just
about all current C compilers than to a standard that is not.
Who cares about portability if the cost is higher than "usability"
and easy of programming?
It's a trade-off, obviously. Some people will value portability more than
others. Those who don't value it very highly will wander off to newsgroups
dealing with their compiler, OS, or whatever. Those who do value it highly
tend to stick around here. Those who value it highly but who must also use
implementation-specific tricks on occasion can get the best of both worlds
- high quality platform-independent advice here, and platform-specific
advice in a platform-specific group. Sounds sensible to me.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Jul 31 '06 #2

jacob navia posted:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".

Agreed.

Portability for them means the least common denominator.

"Portability" means "code which must compile successfully with every
compiler, and behave appropriately on all platforms".

Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffe machine to the 64 bit
processor in your desktop.

I'd aim for such.

Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.

Present(verb) arguments in support of this statement -- I would like to
debate this with you.

Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.

If you're talking about GUI programs, then perhaps yes.

But the actual core algorithmic code can be kept fully portable. I've
written programs where all the core code is fully portable.

Not even the classic

int main(void) { printf("hello\n");}

Why?

For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
int r=printf("hello\n");
if (r < 0) {
// what do we do here ???
}
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.

Nor would I care what happened. If a system can't reliably print a few
miserable characters to the screen, then I wouldn't waste electricity by
plugging it in.

Since printf returns a negative value for an i/o error OR for a
format error in the format string there is no portable way to
discriminate between those two possibilitiess either.

It's the programmer's responsibility to ensure that the format string isn't
corrupt. (Nonetheless, my own compiler warns of such an error.)

Obviously, network i/o, GUIs, threads, and many other stuff essential
for modern programming is completely beyond the scope of "standard C"
and any usage makes instantly your program non portable.

Only the part of the program which deals with GUI, threads, etc.

The underlying algorithms can be (and should be where possible) fully
portable.

This means that effectively 100% of real C software is not portable to
all machines and that "portability" is at best a goal to keep in
mind by trying to build abstraction layers, but no more.

Again, you're only talking about system calls.

Many times, I have written a program in fully portable code (albeit in
C++), and then progressed to write a platform-specific interface.

The core of my code remains fully portable.

Lately, I've begun to use GUI packages which one can use to compile
programs for systems such as Windows, Linux, Mac OS.

--

Frederick Gotham
Jul 31 '06 #3

Richard Heathfield a écrit :
There isn't all that much progress in C. What did C99 give us?
True, C99 didn't really advance the language that much, but it has some
good points. Anyway, if we are going to stick to standard C, let's agree
that standard C is Standard C as defined by the standards committee.
Mixed declarations? Sugar.
// comments? More sugar. VLAs? More sugar.

And what do you have against sugar?
You drink your coffee without it?

I mean, C is just syntactic sugar for assembly language. Why
not program in assembly then?

Mixed declarations are progress in the sense that they put the
declaration nearer the usage of the variable, which makes reading
the code much easier, especially in big functions.

True, big functions are surely not a bright idea, but they happen :-)

I accept that this is not a revolution, or really big progress in C,
but it is a small step, which is not bad.

VLAs are a more substantial step, since they allow you to allocate
precisely the memory the program needs without having to over-allocate
or risk under-allocating arrays.

Under C89 you have to either:
1) allocate memory with malloc
2) Decide a maximum size and declare a local array of that size.

Neither solution is especially good. The first one implies using malloc
with all its associated hassle, and the second risks not allocating enough
memory. C99 allows you to precisely allocate what you need and no more.
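
A minimal sketch of the two approaches side by side (the function
names are purely illustrative):

#include <stdlib.h>

/* C89: heap allocation, with the usual failure check and free */
void work_c89(int n)
{
    int *tab = malloc(n * sizeof *tab);
    if (tab == NULL)
        return;               /* handle the failure somehow */
    /* ... use tab[0] .. tab[n-1] ... */
    free(tab);
}

/* C99: a VLA sized exactly to n, released automatically on return */
void work_c99(int n)
{
    int tab[n];
    /* ... use tab[0] .. tab[n-1] ... */
}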
Compound literals - sure, they might come in handy one day.
They do come handy, but again, they are not such a huge progress.
A colossal math library?
Hardly anyone needs it, and those who do are probably using something like
Matlab anyway.
Maybe, maybe not; I have no data concerning this. In any case it
promotes portability (yes, I am not that stupid! :-) since it
defines a common interface for many math functions. Besides, the control
you get over the abstracted FPU is very fine-grained.

You can portably set the rounding mode, for instance, and many other
things. In this sense the math library is quite a big step forward in C99.
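
For example, the rounding-mode control lives in <fenv.h>. A minimal
C99 sketch (FE_DOWNWARD is only defined on implementations that
actually support that mode):

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON   /* we are going to touch the FP environment */

int main(void)
{
    volatile double x = 1.0, y = 3.0;

    if (fesetround(FE_DOWNWARD) != 0)     /* returns 0 on success */
        fputs("could not set rounding mode\n", stderr);
    printf("x/y rounded toward minus infinity: %.20f\n", x / y);
    return 0;
}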
The real progress has been in the development of third-party libraries, many
of which are at least a bit cross-platform.
That is progress too, but (I do not know why) we never discuss them
in this group.

Maybe what frustrates me about all this talk of "stay with C89, C99 is
not portable" is that it has taken me years of effort to implement C99
(and not all of it yet), and that not even in this group, where we should
promote standard C, is C99 accepted for what it is: the current standard.

I mean, each one of us has a picture of what C "should be". But if we
are going to get into *some* kind of consensus it must be the published
standard of the language, whether we like it or not.

For instance, I find the fact that main() returns zero even if the
programmer doesn't specify it an abomination. But I implemented it
because it is the standard, even if I do not like it at all.
Jul 31 '06 #4

On 2006-07-31, jacob navia wrote:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".

Portability for them means the least common denominator.
Portability means the highest possible return for your effort.
Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffe machine to the 64 bit
processor in your desktop.

Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
There's surprisingly little that makes programming C99 better than C89.
Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
That's strange. I have programs that I first wrote 20 years ago on
the Amiga, that I have compiled and run successfully, without any
changes, on MS-DOS, SunOS 4, FreeBSD, NetBSD, BSDi, and GNU/Linux.

I expect they would compile and execute successfully on any
standard C implementation.

--
Chris F.A. Johnson, author | <http://cfaj.freeshell.org>
Shell Scripting Recipes: | My code in this post, if any,
A Problem-Solution Approach | is released under the
2005, Apress | GNU General Public Licence
Jul 31 '06 #5

Frederick Gotham a écrit :
>>Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.

Present(verb) arguments in support of this statement -- I would like to
debate this with you.

1) VLAs allow you to precisely allocate the memory the program needs
instead of using malloc (with all its associated problems) or having
to decide a maximum size for your local array, allocating too much
for most cases.

int fn(int n)
{
    int tab[n];
}

allocates JUST what you need AT EACH CALL.

2) The math library is improved BIG time.
2A) You can portably set the rounding mode, for instance, which you
could never do in C89 without using some compiler-specific
stuff.
2B) Many new math functions allow you to reduce the amount of
compiler-dependent stuff in your code.

2C) The generic math feature allows you to change the precision
used by your program easily.
3) Mixing declarations and code allows you to declare variables
near their usage, making code more readable.

These are some points. There are others.

Jul 31 '06 #6

Chris F.A. Johnson a écrit :
>>Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.


That's strange. I have programs that I first wrote 20 years ago on
the Amiga, that I have compiled and run successfully, without any
changes, on MS-DOS, SunOS 4, FreeBSD, NetBSD, BSDi, and GNU/Linux.

I expect they would compile and execute successfully on any
standard C implementation.
The Amiga system is not an embedded system, and it is in many ways very
similar to other command-line environments.

I am not telling you that portable programs do not exist or that
it is not worthwhile trying to attain some degree of independence
from the underlying system. I am telling you that (like everything)
portability has some associated COST!
Jul 31 '06 #7


jacob navia wrote:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".

Portability for them means the least common denominator.
It primarily means that conforming compilers have been implemented on a
wide variety of hardware and OS combinations, so that conforming code
on one platform can be expected to behave the same on any other
platform.

Secondarily, it means structuring your code so that it supports
multiple platforms concurrently with minimal effort, which I've had to
do on numerous occasions (the most painful being classic MacOS,
Solaris, and Windows 3.1).

And as far as supporting the "least common denominator", it's not my
fault that Microsoft went out of its way to make it nigh impossible to
code for Windows and *anything else* by "extending" C to such a
ridiculous degree. Nor is it my fault that the bulk of the lossage was
on the Windows side.
>
Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffe machine to the 64 bit
processor in your desktop.
What "old and broken" compilers are you referring to, jacob?
Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
Really? How so?
Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
I beg to differ; I've written them. It *is* possible to write useful,
conforming apps. Not everything needs to run through a GUI.
Not even the classic

int main(void) { printf("hello\n");}

Why?

For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
int r=printf("hello\n");
if (r < 0) {
// what do we do here ???
}
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.
That's only sort of true; the return value is EOF if an error occurs,
otherwise the value is not EOF. So rewrite the above as

int main(void)
{
    int r = printf("hello\n");
    if (r == EOF)
    {
        /* handle error */
    }

    return 0;
}
Since printf returns a negative value for an i/o error OR for a
format error in the format string there is no portable way to
discriminate between those two possibilitiess either.

Obviously, network i/o, GUIs, threads, and many other stuff essential
for modern programming is completely beyond the scope of "standard C"
and any usage makes instantly your program non portable.
Which is why you wrap those sections.

Abstraction is a Good Thing, anyway.
This means that effectively 100% of real C software is not portable to
all machines and that "portability" is at best a goal to keep in
mind by trying to build abstraction layers, but no more.

This is taken to ridiculous heights with the polemic against C99, by
some of those same 'regulars'.

They first start yelling about "Standard C", and then... they do not
mean standard C but some other obsolete standard. All that, in the name
of "portability".

Who cares about portability if the cost is higher than "usability"
and easy of programming?
Talk to me when you've had to support Linux, MacOS, Windows, and MPE
concurrently.

Jul 31 '06 #8

jacob navia <ja***@jacob.remcomp.fr> writes:
Frederick Gotham a écrit :
>>>Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
Or you pay for a few C99-specific features by losing a significant
degree of portability.

As many of us have been saying for a very long time, it's a tradeoff,
and different users will make different decisions about that tradeoff.
Pretending that it isn't, or that the choice is obvious, is
disingenuous.
>Present(verb) arguments in support of this statement -- I would like
to debate this with you.

1) VLAs allow you to precisely allocate the memory the program needs
instead of using malloc (with all its associated problems) or having
to decide a maximum size for your local array, allocating too much
for most cases.

int fn(int n)
{
int tab[n];
}

allocates JUST what you need AT EACH CALL.
Yes. One cost is that there is no mechanism for handling allocation
failures. If I use malloc(), I can check whether it returned a null
pointer, and perhaps do something to handle the error (I can at least
shut down the program cleanly). With a VLA, if the allocation fails,
I get undefined behavior. In most environments, I'd expect this to
abort the program with an error message (not giving me a chance to do
any final cleanup) -- but the C99 standard allows arbitrarily bad
behavior.
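
One common mitigation is to bound the size before declaring the VLA
and refuse (or fall back to malloc) for anything larger. A minimal
sketch, with an arbitrary cap of my own choosing:

#define TAB_MAX 4096   /* illustrative limit, not something from the standard */

int fn(int n)
{
    if (n < 1 || n > TAB_MAX)
        return -1;     /* refuse: a failed VLA allocation is undefined behavior */

    int tab[n];        /* the worst case is now known and bounded */
    /* ... use tab ... */
    return 0;
}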
2) The math library is improved BIG time.
2A) You can portably set the rounding mode for instance, what you
could never do in C89 without using some compiler specific
stuff.
2B) Many new math functions allow you to reduce the number of
compiler dependent stuff in your code.

2C) The generic math feature allows you to change the precision
used by your program easily.
I don't do much math programming, so I don't know how useful that is.

As a practical matter, this depends on a C99-conforming runtime
library, which is often a separate issue from the conformance of the
compiler.
3) Mixing declarations and code allows you to declare variables
near the usage of it, making code more readable.
I agree that it's a nice feature, but it's easy to work around it when
it's missing. (Some people prefer not to mix declarations and
statements, thinking that keeping them separate improves program
structure; I don't necessarily agree, but I can see the point.)

Incidentally, the word "code" is ambiguous; it often refers to
everything in a C source files, not just statements. "Mixing
declarations and statements" is clearer.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 31 '06 #9

jacob navia <ja***@jacob.remcomp.fr> writes:
Chris F.A. Johnson a écrit :
>>>Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
That's strange. I have programs that I first wrote 20 years ago on
the Amiga, that I have compiled and run successfully, without any
changes, on MS-DOS, SunOS 4, FreeBSD, NetBSD, BSDi, and GNU/Linux.
I expect they would compile and execute successfully on any
standard C implementation.

The Amiga system is not an embedded system, and it is many ways very
similar to other command line environments.

I am not telling you that portable programs do not exists or that
it is not worthwhile trying to attain some degree of independence
from the underlying system. I am telling you that (as everything)
portability has some associated COST!
And if you had bothered to mention that to begin with, we probably
wouldn't be having this argument.

There is a tradeoff. Ignoring either side of that tradeoff is
foolish.

Yes, portability has a cost, in that you can't use C99 features.

Conversely, using C99 features has a cost, in that you lose a
significant degree of portability.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 31 '06 #10

jacob navia posted:
>>>Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
>Present(verb) arguments in support of this statement -- I would like to
debate this with you.
1) VLAs allow you to precisely allocate the memory the program needs
instead of using malloc (with all its associated problems) or having
to decide a maximum size for your local array, allocating too much
for most cases.

int fn(int n)
{
int tab[n];
}

allocates JUST what you need AT EACH CALL.

C99 added a few new features to C.

C++ added a boat load of new features to C.

Even C++ doesn't have VLA's, because it finds efficiency to be more
valuable.

You can do the following in C++:

unsigned const len = 5;
int array[len];

But you *can't* do the following:

unsigned len = GetValueAtCompileTime();
int array[len];

The length of an array must be a compile-time constant.

Arrays whose length is known at compile time are far more efficient to work
with. Therefore, in C++, they decreed that one should be explicit about
dynamic memory allocation:

unsigned len = GetValueAtRuntime();

int *p = new int[len];

delete[] p;

Or the C equivalent:

int *p = malloc(len * sizeof *p);
free(p);

I simply just don't like VLA's, and will never use them.

2) The math library is improved BIG time.
2A) You can portably set the rounding mode for instance, what you
could never do in C89 without using some compiler specific
stuff.
2B) Many new math functions allow you to reduce the number of
compiler dependent stuff in your code.
2C) The generic math feature allows you to change the precision
used by your program easily.

I haven't written maths-intensive programs, so I'm not qualified to comment
on this.

3) Mixing declarations and code allows you to declare variables
near the usage of it, making code more readable.

Yes, I like to define variables right where I need them.

NB: I wasn't arguing about the advantages of using C99 over using C89, but
rather any perceived disadvantages (efficiency wise) in writing strictly
portable code.

--

Frederick Gotham
Jul 31 '06 #11

Frederick Gotham <fg*******@SPAM.com> writes:
jacob navia posted:
[...]
>Not even the classic

int main(void) { printf("hello\n");}

Why?

For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
int r=printf("hello\n");
if (r < 0) {
// what do we do here ???
}
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.


Nor would I care what happened. If a system can't reliably print a few
miserable characters to the screen, then I wouldn't waste electricity by
plugging it in.
[...]

Several points.

What screen?

Both programs above invoke undefined behavior, because they both call
printf() with no prototype in scope. The fix is to add
"#include <stdio.h>" to the top of each.

printf() *can* fail. For example, what if the program's stdout is
redirected (in some system-specific manner) to a disk file, and the
file system has run out of space? If you check the result of printf()
for errors, there are several things you can do. You can try to print
an error message to stderr, which may succeed even if printing to
stdout has failed. Or you can just abort the program with
"exit(EXIT_FAILURE);".

Or, as most programs do, you can ignore the error and blindly continue
running. (I admit this is what I usually do myself.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 31 '06 #12

Keith Thompson posted:
Or, as most programs do, you can ignore the error and blindly continue
running. (I admit this is what I usually do myself.)

I do this myself in quite a few places.

For instance, if I were allocating upwards of a megabyte of memory, I would
probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobyte... I probably won't bother most
of the time. (The system will already have ground to a halt by that stage if
it has memory allocation problems.)

--

Frederick Gotham
Jul 31 '06 #13

John Bode a écrit :
>

Talk to me when you've had to support Linux, MacOS, Windows, and MPE
concurrently.
lcc-win32 has customers under linux, windows and many embedded systems
without any OS (or, to be more precise, with an OS that is part
of the compiled program)

Jul 31 '06 #14

Keith Thompson <ks***@mib.org> writes:
printf() *can* fail. For example, what if the program's stdout is
redirected (in some system-specific manner) to a disk file, and the
file system has run out of space? If you check the result of printf()
for errors, there are several things you can do. You can try to print
an error message to stderr, which may succeed even if printing to
stdout has failed. Or you can just abort the program with
"exit(EXIT_FAILURE);".
One reasonable option may be to flush stdout before exiting the
program, then call ferror to check whether there was an error.
If there was, terminate the program with an error (after
attempting to report it to stderr).

Some of my programs do this, but only the ones that I care about
a lot.
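
A minimal sketch of that exit-time check (the wording of the message
is of course arbitrary):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("hello\n");
    /* ... the rest of the program's output ... */

    if (fflush(stdout) == EOF || ferror(stdout)) {
        fputs("error writing to stdout\n", stderr);
        return EXIT_FAILURE;
    }
    return 0;
}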
--
"This is a wonderful answer.
It's off-topic, it's incorrect, and it doesn't answer the question."
--Richard Heathfield
Jul 31 '06 #15


jacob navia wrote:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".

Portability for them means the least common denominator.
Portability means never having to say you're sorry. Portability means
conforming to a formally accepted written standard which defines how
things ought to behave.
Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffe machine to the 64 bit
processor in your desktop.
Do you really overlook the value of being able to do that?
Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
Portable can be portable to C89, to C99 or to an implementation
document. The degree of portability achieved will depend upon how well
accepted and debugged the standard in use was. If I write to the C99
standard, I am writing portable code. It is portable to C99.
Note that there is objectively speaking not a single useful
program in C that can be ported to all machines that run the
language.
How about a useful program in C that runs on 100 million machines of
various architectures and is maintained over a 20 year period. Does
that sound useful to you?
Not even the classic

int main(void) { printf("hello\n");}

Why?
Other than the lack of a prototype for printf(), I don't see anything
terribly wrong with it. The big problem with failure of printf() is
that we are at a bit of a loss as to how to report the problem n'est ce
pas?
For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
int r=printf("hello\n");
if (r < 0) {
// what do we do here ???
}
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.
You know that the printf() failed. You may not know why and perror()
may or may not be useful. How would you go about repairing this
defect?
>
Since printf returns a negative value for an i/o error OR for a
format error in the format string there is no portable way to
discriminate between those two possibilitiess either.
For a format error, it is possible to know because you can check your
format.
Obviously, network i/o, GUIs, threads, and many other stuff essential
for modern programming is completely beyond the scope of "standard C"
and any usage makes instantly your program non portable.
It makes the part of the program that uses network I/O, a GUI, threads
or other essential tasks non-portable. Here, we will generally resort
to another standard. We can use TCP/IP for network programming. We
can use POSIX threads for threading. For the GUI, we may have to use a
proprietary standard like wxWidgets or an operating system specific API
like the Windows API. In each of these cases we are still doing
standards based computing, but we are not using ANSI/ISO standards for
the parts not covered by the more fundamental level.
This means that effectively 100% of real C software is not portable to
all machines and that "portability" is at best a goal to keep in
mind by trying to build abstraction layers, but no more.
Did you know that 99.997% of all statistics are made up?
I work on a software system with hundreds of thousands of lines of
code.
It runs on Solaris, AIX, Windows, Linux, MVS, OpenVMS (and many others)
against dozens of database systems. Do you imagine that such a thing
would be remotely feasible without paying detailed attention to
standards?
This is taken to ridiculous heights with the polemic against C99, by
some of those same 'regulars'.
I agree that C99 is a favorite whipping boy for no reason that I can
glean. There are not a lot of adopters, but many of the features are very
desirable. VLAs (in particular) are worth their weight in gold.
They first start yelling about "Standard C", and then... they do not
mean standard C but some other obsolete standard. All that, in the name
of "portability".
C99 is standard C. Other standards must be prefaced by the name of the
standard (IMO-YMMV). That's because C99 legally replaces the previous
C standard.
Who cares about portability if the cost is higher than "usability"
Nobody does.
and easy of programming?
Adhering to standards makes programming much, much easier. I
programmed in C before there was any formal standard approved. It was
really awful, and every implementation was so different that you had to
completely rewrite things constantly to go from one compiler vendor to
the next, even on the same physical architecture.
>
jacob
I don't understand why anyone would complain against standards.
Programming is practically impossible without them.

Jul 31 '06 #16

Keith Thompson a écrit :
jacob navia <ja***@jacob.remcomp.fr> writes:
>>Frederick Gotham a écrit :
>>>>Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.


Or you pay for a few C99-specific features by losing a significant
degree of portability.
Well, is this group about STANDARD C or not?

If we do not agree about what standard C is, we can use the standard.

But if we do not even agree on what standard C is, there can't be
any kind of consensus in this group, you see?

The talk about "Standard C" then, is just hollow words!!!

Jul 31 '06 #17

Frederick Gotham <fg*******@SPAM.com> writes:
For instance, if I were allocating upwards of a megabyte of memory, I would
probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobytes... I probably won't bother most
of the time. (The system will already have ground to a halt by that stage if
it has memory allocation problems.)
Why not use a wrapper function that will always do the right
thing?
--
"The fact that there is a holy war doesn't mean that one of the sides
doesn't suck - usually both do..."
--Alexander Viro
Jul 31 '06 #18

dc*****@connx.com a écrit :
>

C99 is standard C. Other standards must be prefaced by the name of the
standard (IMO-YMMV). That's because C99 legally replaces the previous
C standard.
This is the most important thing I wanted with my previous message.

That we establish a consensus here about what standard C means.

And it can't mean anything other than the *current* C standard.

I have been working for years on a C99 implementation, and I wanted
that at least in this group, which is supposed to be centered around
standard C, we establish that C99 *is* the standard, even if we do
not like this or that feature.

Jul 31 '06 #19

"Frederick Gotham" <fg*******@SPAM.comwrote in message
news:oP*******************@news.indigo.ie...
[snip]
Arrays whose length is known at compile time are far more efficient to work
with.
I doubt this statement.

On stack based machines, it's nothing more than a subtraction. Whether
the value is passed in or known at compile time makes no difference.

[snip]

Jul 31 '06 #20

dc*****@connx.com a écrit :
"Frederick Gotham" <fg*******@SPAM.comwrote in message
news:oP*******************@news.indigo.ie...
[snip]
>>Arrays whose length is known at compile time are far more efficient to work
with.


I doubt this statement.

On stack based machines, it's nothing more than a subtraction. Whether
the value is passed in or known at compile time makes no difference.

[snip]
I implemented this by making the array a pointer that gets its value
automagically when the function starts, by subtracting from the stack
pointer. Essentially

int fn(int n)
{
    int tab[n];
    ....
}

becomes

int fn(int n)
{
    int *tab = alloca(n*sizeof(int));
}

The access is done like any other int *...
Jul 31 '06 #21

jacob navia wrote:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".

Portability for them means the least common denominator.
No, it means making wise design choices.
Write your code so that it will compile in all old and broken
compilers, preferably in such a fashion that it can be moved with no
effort from the embedded system in the coffe machine to the 64 bit
processor in your desktop.
Why? I target C99 with my code. To me "portable" means C99. I
consider it a hack if I have to support something outside of it [e.g.
Visual C lack of long long for instance].
Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
C99 programs are portable.

<snip nonsense>

Even with all the latest doodahs of C99 you still are not assured of
having a TTY or a console, a file system, or even a heap, etc...

Big deal?

No one expects a 3D video game to work on an 8051.

On the other hand, one would expect some non-interface type routine to
work anywhere.

My own math lib [LibTomMath] has been built on things as low as an 8086
with TurboC all the way up through the various 64-bit servers and
consoles. Without sacrificing too much speed or ANY functionality.

Similarly my crypto code is used pretty much anywhere without hacking
for this compiler, that compiler, etc.

Tom

Jul 31 '06 #22

"John Bode" <jo*******@my-deja.comwrites:
jacob navia wrote:
[...]
>Not even the classic

int main(void) { printf("hello\n");}

Why?

For instance, if we take that program above and we want to
know if our printf did write something to stdout, we have to write
int main(void) {
int r=printf("hello\n");
if (r < 0) {
// what do we do here ???
}
}

The error code returned by printf is nowhere specified. There is no
portable way for this program to know what happened.

That's only sort of true; the return value is EOF if an error occurs,
otherwise the value is not EOF. So rewrite the above as

int main(void)
{
int r = printf("hello\n");
if (r == EOF)
{
/* handle error */
}

return 0;
}
[...]

That's incorrect. C99 7.19.6.3p3:

The printf function returns the number of characters transmitted,
or a negative value if an output or encoding error occurred.

The "if (r < 0)" test is correct.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 31 '06 #23


jacob navia wrote:
>
I am not telling you that portable programs do not exists or that
it is not worthwhile trying to attain some degree of independence
from the underlying system. I am telling you that (as everything)
portability has some associated COST!
Why are you telling us something that's blatantly obvious, and that we
all know?

Since we're stating the obvious, the question to be asked on each
occasion is whether the COST of portability exceeds the COST of
non-portability. In my experience, for the sort of work I do, it's
always been better to steer hard towards the portable end of the range.

Aug 1 '06 #24

On 2006-07-31, Ben Pfaff <bl*@cs.stanford.edu> wrote:
Frederick Gotham <fg*******@SPAM.com> writes:
>For instance, if I were allocating upwards of a megabyte of memory, I would
probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobytes... I probably won't bother most
of the time. (The system will already have ground to a halt by that stage if
it has memory allocation problems.)

Why not use a wrapper function that will always do the right
thing?
What would such a wrapper do? I've written a few that do things like
attempt to settle for less memory, return memory from a pre-allocated
buffer (if malloc() succeeded, I'd take a little extra while the getting
was good), or in one case I informed the user and gave him the option to
either kill other memory-intensive programs or simply die.

However, the second of those options requires an equivalent wrapper for
free() because if I'm manually hacking memory around it's far too easy
to end up with UB. The others aren't acceptable in certain situations.

More importantly, no matter what you do, the function calling the
wrapper needs to do essentially the same stuff as a function calling
malloc() directly in the case of a critical memory error.

--
Andrew Poelstra <website down>
To reach my email, use <email also down>
New server ETA: 42
Aug 1 '06 #25

Frederick Gotham <fg*******@SPAM.com> writes:
jacob navia posted:
>>>>Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.
>>Present(verb) arguments in support of this statement -- I would like to
debate this with you.
>1) VLAs allow you to precisely allocate the memory the program needs
instead of using malloc (with all its associated problems) or having
to decide a maximum size for your local array, allocating too much
for most cases.

int fn(int n)
{
int tab[n];
}

allocates JUST what you need AT EACH CALL.


C99 added a few new features to C.
Yes.
C++ added a boat load of new features to C.
Yes and no. C++ has a boat load of features that aren't in C, but it
didn't add them to C; it added them to C++. Yes, I'm being
ridiculously picky about wording, but it is an important distinction.
C++ did not attempt to *replace* C. C99, in a very real sense, did.
Even C++ doesn't have VLA's, because it finds efficiency to be more
valuable.
<OT>
I don't believe that's the reason. The C++98 standard is based on the
C90 standard, which doesn't/didn't have VLAs, and C++ didn't add them.
I would be completely unsurprised if a future C++ standard adopted
VLAs from C99.

On the other hand, the C++ standard library provides other features
that can be used in place of VLAs.
</OT>

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #26

jacob navia <ja***@jacob.remcomp.fr> writes:
dc*****@connx.com a écrit :
>"Frederick Gotham" <fg*******@SPAM.comwrote in message
news:oP*******************@news.indigo.ie...
[snip]
>>>Arrays whose length is known at compile time are far more efficient to work
with.
I doubt this statement.
On stack based machines, it's nothing more than a subtraction.
Whether
the value is passed in or known at compile time makes no difference.
[snip]

I implemented this by making the array a pointer, that
gets its value automagically when the function starts by making
a subtraction from the stack pointer. Essentially

int fn(int n)
{
int tab[n];
...
}

becomes

int fn(int n)
{
int *tab = alloca(n*sizeof(int));
}

The access is done like any other int *...
If that's literally true, then sizeof tab will yield an incorrect
result.
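
A quick illustration, using malloc in place of the compiler's internal
alloca, since the effect on sizeof is the same:

#include <stdlib.h>

void fn(int n)
{
    int vla[n];                          /* a true C99 VLA            */
    int *tab = malloc(n * sizeof *tab);  /* the pointer-based rewrite */

    size_t a = sizeof vla;   /* evaluated at run time: n * sizeof(int) */
    size_t b = sizeof tab;   /* merely sizeof(int *), whatever n is    */

    (void)a; (void)b;
    free(tab);
}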

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #27

Frederick Gotham <fg*******@SPAM.com> writes:
Keith Thompson posted:
>Or, as most programs do, you can ignore the error and blindly continue
running. (I admit this is what I usually do myself.)

I do this myself in quite a few places.

For instance, if I were allocating upwards of a megabyte of memory, I would
probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobytes... I probably won't bother most
of the time. (The system will already have ground to a halt by that stage if
it has memory allocation problems.)
In my opinion, that's an extremely unwise approach.

In many cases, the fact that your program is running out of memory
will have no effect on the system as a whole; many systems place
limits on how much memory a single program (<OT>process</OT>) may
allocate, and those limits are typically much smaller than the total
amount of memory available to the system. This is particularly true
for multi-user systems.

You should always check the result of every call to malloc(). If you
don't want to do the check explicitly, write a wrapper.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #28

Ben Pfaff <bl*@cs.stanford.edu> writes:
Keith Thompson <ks***@mib.org> writes:
>printf() *can* fail. For example, what if the program's stdout is
redirected (in some system-specific manner) to a disk file, and the
file system has run out of space? If you check the result of printf()
for errors, there are several things you can do. You can try to print
an error message to stderr, which may succeed even if printing to
stdout has failed. Or you can just abort the program with
"exit(EXIT_FAILURE);".

One reasonable option may be to flush stdout before exiting the
program, then call ferror to check whether there was an error.
If there was, terminate the program with an error (after
attempting to report it to stderr).

Some of my programs do this, but only the ones that I care about
a lot.
Yes, that's probably better than silently ignoring the error.

One possible drawback is that it doesn't catch the error until the
program is just about to terminate.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #29

On Tue, 01 Aug 2006 01:10:25 +0200, jacob navia
<ja***@jacob.remcomp.fr> wrote:
>dc*****@connx.com a écrit :
>>

C99 is standard C. Other standards must be prefaced by the name of the
standard (IMO-YMMV). That's because C99 legally replaces the previous
C standard.

This is the most important thing I wanted with my previous message.

That we establish a consensus here about what standard C means.
There's no consensus needed about a matter of fact which nobody
contests. However, we can understand that C99 is the current C
standard while also recognizing that it is not universally implemented
and that maximum portability often means that we must forgo some or
all of the new features it introduced. (That's a rhetorical "we" and
does not necessarily include you.)

We (most of us) also understand that portability is a worthwhile goal,
often more important to us and our work than being able to use all the
latest features of the language.
>And it can't mean anything else as the *current* C standard.

I have been working for years in a C99 implementation. and I wanted
that at least in this group, that is supposed to be centered around
standard C we establish that C99 *is* the standard weven if we do
not like this or that feature.
Why do you think this needs to be "established"? It's self-evident and
no one is contesting it.

In this thread, what has been contested is much of the ridiculous
rhetoric you indulged in in your initial post. You now seem to have
changed the subject.

--
Al Balmer
Sun City, AZ
Aug 1 '06 #30

jacob navia <ja***@jacob.remcomp.fr> writes:
dc*****@connx.com a écrit :
>C99 is standard C. Other standards must be prefaced by the name of
the standard (IMO-YMMV). That's because C99 legally replaces the
previous C standard.

This is the most important thing I wanted with my previous message.

That we establish a consensus here about what standard C means.

And it can't mean anything else as the *current* C standard.
First, I disagree with the use of the term "legally". ISO is not a
governmental body, and the C standard does not have the force of law.
Nobody is going to arrest a user or an implementer for failing to
conform to it.

Standards do not exist in a vacuum. The purpose of a language
standard is to provide a contract (I don't necessarily mean that in
any legal sense) between users and implementers. If implementers
provide implementations that conform to the standard, and if users
write code that conforms to the standard, then that code will work
correctly with those implementations.

It is a fact (and, IMHO, an unfortunate one) that the C99 standard has
not been as successful as the C90 standard. There are reasons for
this; I won't repeat them here. But the result is that, in the real
world, I can write code that conforms to the C99 standard, but not be
able to get it to work properly on some platforms that I care about.
If I instead write code that conforms to both the C90 and C99
standards, I have a much better chance of getting it to work portably.
I have been working for years in a C99 implementation. and I wanted
that at least in this group, that is supposed to be centered around
standard C we establish that C99 *is* the standard weven if we do
not like this or that feature.
This isn't about whether we like or dislike any particular features of
C99. The issue is that those features *are not as widely available*
as the features defined by the C90 standard.

Your approach seems to be just to ignore this fact, and encourage
users to write C99-dependent code without worrying about portability.
Most of the rest of us, on the other hand, prefer to let people know
what the tradeoffs are.

There's plenty of discussion of C99 here. I regularly post quotations
from the standard; when I do, they're usually from the C99 standard or
from the n1124 draft. If someone posts code that mixes declarations
and statements, we don't say that that's illegal in C; rather we say,
truthfully, that it's legal in C99 but illegal in C90, and *explain
the tradeoffs*.

Back in the early 1990s, I'm sure you would have found plenty of
advice in this newsgroup (or its ancestor, net.lang.c; I don't
remember when the transition was) about programming in K&R C, and
writing code that's legal in both K&R C and ANSI C. The 1990 ISO C
standard had been released, and it was the *official* definition of
the language, but real-world programmers still had to deal with the
fact that it wasn't yet universally supported.

We don't stop talking about an old standard when a new one comes out;
we stop talking about an old standard when it becomes irrelevant. The
C90 standard is still very relevant.

(comp.std.c has an even stronger emphasis on C99, since any future
standards or technical corrigenda will be based on C99, not C90.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #31

jacob navia <ja***@jacob.remcomp.fr> writes:
Keith Thompson a écrit :
>jacob navia <ja***@jacob.remcomp.fr> writes:
>>>Frederick Gotham a écrit :

>Sure, you can do that. But as you know, there is no free lunch.
>You pay for that "portability" by missing all the progress done
>since 1989 in C.
Or you pay for a few C99-specific features by losing a significant
degree of portability.

Well but this group is about STANDARD C or not?

If we do not agree about what standard C is, we can use the standard.

But if we do not even agree what standard C is ther can't be
any kind of consensus in this group you see?

The talk about "Standard C" then, is just hollow words!!!
We talk about Standard C (C99) all the time. We're doing so right now.

We also talk about the previous standard, and occasionally about the
de facto standard before that (K&R1, Appendix A).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #32

Andrew Poelstra <fa**********@wp.net> writes:
On 2006-07-31, Ben Pfaff <bl*@cs.stanford.edu> wrote:
>Frederick Gotham <fg*******@SPAM.com> writes:
>>For instance, if I were allocating upwards of a megabyte of
memory, I would probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobytes... I probably won't
bother most of the time. (The system will already have ground to a
halt by that stage if it has memory allocation problems.)

Why not use a wrapper function that will always do the right
thing?

What would such a wrapper do?
In the simplest case, it could abort the program with an error message
if malloc() fails. This isn't ideal, but it's certainly better than
ignoring an allocation failure.
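
Something along these lines, for instance (the name xmalloc is just
the conventional one for such wrappers):

#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}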

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #33

jacob navia wrote:
Keith Thompson a écrit :
>jacob navia <ja***@jacob.remcomp.fr> writes:
>>Frederick Gotham a écrit :

Sure, you can do that. But as you know, there is no free lunch.
You pay for that "portability" by missing all the progress done
since 1989 in C.


Or you pay for a few C99-specific features by losing a significant
degree of portability.

Well but this group is about STANDARD C or not?

If we do not agree about what standard C is, we can use the standard.

But if we do not even agree what standard C is ther can't be
any kind of consensus in this group you see?

The talk about "Standard C" then, is just hollow words!!!
As you know very well we discuss C99, C95, C90 and even pre-ANSI C when
appropriate. Why do you object so strongly to people not being told when
something is a C99 feature and so not portable to such common
implementations as MS VC++?

Personally I would *far* rather use C99, but I have to support MS VC++
and at least two older versions of gcc and glibc, plus the C library on
another OS that does not support C99 as yet. Oh, and I could be asked at
any time about supporting some other OS for which there might not be a
C99 compiler around. Such situations are very common.

So if you want to discus something about C99 go ahead. If you want to
tell people that in C99 they can do something go ahead. However, don't
claim that everyone can use C99 or make false claims about
implementations supporting C99 when they don't.
--
Flash Gordon
Still sigless on this computer.
Aug 1 '06 #34

Flash Gordon <sp**@flash-gordon.me.uk> writes:
[...]
As you know very well we discuss C99, C95, C90 and even pre-ANSI C
when appropriate. Why do you object so strongly to people not being
told when something is a C99 feature and so not portable to such
common implementations as MS VC++?
I think you meant to drop the first "not" in that last sentence;
"people not being told" should be "people being told".

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #35


Keith Thompson wrote:
jacob navia <ja***@jacob.remcomp.fr> writes:
dc*****@connx.com a écrit :
C99 is standard C. Other standards must be prefaced by the name of
the standard (IMO-YMMV). That's because C99 legally replaces the
previous C standard.
This is the most important thing I wanted with my previous message.

That we establish a consensus here about what standard C means.

And it can't mean anything else as the *current* C standard.

First, I disagree with the use of the term "legally". ISO is not a
governmental body, and the C standard does not have the force of law.
Nobody is going to arrest a user or an implementer for failing to
conform to it.
ANSI is connected to the U.S. government and (along with NIST) is used
to establish standards in the United States. While standard adoption
is 'voluntary' there can still be legal ramifications. For instance,
if I stamp the head of my bolt with a certain number of diamond shapes
on it, that is a claim of adherence to a standard. If the claim is
false, the manufacturer could get sued, possibly go to jail, etc.
Sometimes, formal standards do get adopted with more or less legal
weight, which would vary from country to country and standard to
standard. I can easily imagine legal problems for a C compiler vendor
who claimed in their advertising that their compiler adhered to the
ANSI/ISO C standard but, in fact, failed badly on many measures.

Here is an interesting quote that I found:
"Standards, unlike many other technical papers and reports, are
quasi-legal documents. Standards are used as evidence, either to
substantiate or refute points, in courts of law. Standards also become
legal documents if adopted by various governments or regulatory
agencies. When this happens, the content and decisions in a standard
carry more weight, and the process by which they are developed falls
under much more scrutiny, making ANSI accreditation especially
valuable."

I am certainly in wholehearted agreement with the main thrust of your
post, but wanted to point out a nuance of potential legal ramification,
though the documents themselves do not embody law.
[snip]

Aug 1 '06 #36

In article <44*********************@news.orange.fr> jacob navia <ja***@jacob.remcomp.fr> writes:
....
I implemented this by making the array a pointer that gets its value
automagically when the function starts, by subtracting from the stack
pointer. Essentially:
Yup, standard since about 1960 when they were introduced. But:
int fn(int n)
{
    int *tab = alloca(n*sizeof(int));
}

The access is done like any other int *...
So the access is inherently less efficient (and that is what the discussion
was about).
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Aug 1 '06 #37

In article <11**********************@m73g2000cwd.googlegroups.com> dc*****@connx.com writes:
"Frederick Gotham" <fg*******@SPAM.com> wrote in message
news:oP*******************@news.indigo.ie...
Arrays whose length is known at compile time are far more efficient to work
with.

I doubt this statement.

On stack based machines, it's nothing more than a subtraction. Whether
the value is passed in or known at compile time makes no difference.
If you allocate runtime-length arrays on the stack, you need indirection
if you have to allocate more than one such array at a level (especially
if there are multi-dimensional arrays involved). When Algol-60 introduced
runtime-length arrays, reports were written and there were long discussions
about how to implement them. It was not for nothing that Wirth removed them
in Pascal.
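
To make that scenario concrete, a minimal C99 sketch of "more than one such array at a level" (assuming n and m are at least 1); the comments describe a typical implementation strategy, which the standard does not mandate:

void f(int n, int m)
{
    int    a[n];        /* first runtime-sized array in this block */
    double b[m];        /* second one: the implementation must remember
                           where each array starts, typically through
                           hidden pointers, so accesses involve an extra
                           indirection compared with fixed-size arrays */
    double c[n][m];     /* runtime-sized in both dimensions: the row
                           stride m is also only known at run time */

    a[0] = 0;
    b[0] = 0.0;
    c[0][0] = 0.0;
}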
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Aug 1 '06 #38

dc*****@connx.com writes:
Keith Thompson wrote:
>jacob navia <ja***@jacob.remcomp.fr> writes:
dc*****@connx.com wrote:
C99 is standard C. Other standards must be prefaced by the name of
the standard (IMO-YMMV). That's because C99 legally replaces the
previous C standard.

This is the most important thing I wanted with my previous message.

That we establish a consensus here about what standard C means.

And it can't mean anything else as the *current* C standard.

First, I disagree with the use of the term "legally". ISO is not a
governmental body, and the C standard does not have the force of law.
Nobody is going to arrest a user or an implementer for failing to
conform to it.

ANSI is connected to the U.S. government and (along with NIST) is used
to establish standards in the United States. While standard adoption
is 'voluntary' there can still be legal ramifications. For instance,
if I stamp the head of my bolt with a certain number of diamond shapes
on it, that is a claim of adherence to a standard. If the claim is
false, the manufacturer could get sued, possibly go to jail, etc.
Sometimes, formal standards do get adopted with more or less legal
weight, which would vary from country to country and standard to
standard. I can easily imagine legal problems for a C compiler vendor
who claimed in their advertising that their compiler adhered to the
ANSI/ISO C standard but in fact, failed badly on many measures.

Here is an interesting quote that I found:
"Standards, unlike many other technical papers and reports, are
quasi-legal documents. Standards are used as evidence, either to
substantiate or refute points, in courts of law. Standards also become
legal documents if adopted by various governments or regulatory
agencies. When this happens, the content and decisions in a standard
carry more weight, and the process by which they are developed falls
under much more scrutiny, making ANSI accreditation especially
valuable."

I am certainly in wholehearted agreement with the main thrust of your
post, but wanted to point out a nuance of potential legal ramification,
though the documents themselves do not embody law.
[snip]
Ok, that's a good point.

It seems to me (I'm hardly an expert) that this is partly a specific
case of the more general principle that if you falsely claim
conformance to something, you've committed fraud. For example, if I
sell widgets while claiming that each widget conforms to Ralph's
Pretty Good Widget Standard, I can be sued by my customers if in fact
I've deliberately violated clause 4.2.

On the other hand, that's not all there is to it; an ANSI or ISO
standard undoubtedly has some quasi-legal standing beyond that enjoyed
by any of Ralph's Pretty Good Standards.

Whether the "quasi" or the "legal" part is more significant depends on
what you're talking about.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #39

In article <44*********************@news.orange.fr> jacob navia <ja***@jacob.remcomp.fr> writes:
John Bode wrote:
Talk to me when you've had to support Linux, MacOS, Windows, and MPE
concurrently.

lcc-win32 has customers under linux, windows and many embedded systems
without any OS (or, to be more precise, with an OS that is part
of the compiled program)
Where can I find lcc-win32 for linux?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Aug 1 '06 #40

jacob navia wrote:
In this group there is a bunch of people that call themselves
'regulars' that insist in something called "portability".

Please stop trolling the newsgroup.

Brian

Aug 1 '06 #41

jacob navia wrote:
In this group there is a bunch of people that call themselves 'regulars'
that insist in something called "portability".
Funny how all the "portability is a myth" people write
all their code on x86.

Aug 1 '06 #42

Ben Pfaff wrote (re. error checking of malloc):
>
Why not use a wrapper function that will always do the right
thing?
That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))
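
For contrast, a minimal sketch of the kind of checking wrapper Ben Pfaff was suggesting; the name xmalloc and the exit-on-failure policy are illustrative choices only, and later replies argue about that policy:

#include <stdio.h>
#include <stdlib.h>

/* Allocate memory or terminate: callers never see a NULL return. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size != 0) {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}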

Aug 1 '06 #43

On Mon, 31 Jul 2006 15:55:23 -0700, Ben Pfaff <bl*@cs.stanford.edu>
wrote:
>Keith Thompson <ks***@mib.org> writes:
>printf() *can* fail. For example, what if the program's stdout is
redirected (in some system-specific manner) to a disk file, and the
file system has run out of space? If you check the result of printf()
for errors, there are several things you can do. You can try to print
an error message to stderr, which may succeed even if printing to
stdout has failed. Or you can just abort the program with
"exit(EXIT_FAILURE);".

One reasonable option may be to flush stdout before exiting the
program, then call ferror to check whether there was an error.
If there was, terminate the program with an error (after
attempting to report it to stderr).
Did you have a test case for this code? If so, did you ever achieve
decision coverage in the case where ferror() gave an error? That would
be remarkable if you did, and I would like to hear how you did it. The
only way I know how to do it is with preprocessor macros, and even
then it's not the greatest.
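
A minimal sketch of the exit-time check being described; actually forcing the failure branch in a test (say, by redirecting stdout to an already-full device) is exactly the hard part asked about above:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("hello\n");

    /* Push out any buffered output, then ask whether any write failed. */
    if (fflush(stdout) == EOF || ferror(stdout)) {
        fprintf(stderr, "error writing to stdout\n");
        return EXIT_FAILURE;
    }
    return 0;
}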

Best regards
--
jay
Aug 1 '06 #44

jacob navia said:
Richard Heathfield wrote:
>There isn't all that much progress in C. What did C99 give us?

True, C99 didn't really advance the language that much, but it has some
good points.
Nothing I'd bother to include in a letter home, but one or two tiny good
points. Nothing there worth breaking your program's portability over.
Anyway, if we are going to stick to standard C, let's agree
that standard C is Standard C as defined by the standards committee.
K&R C is the original C language, and it's topical here in clc.
C90 is the current de facto standard, and is topical here in clc.
C99 is the current de jure standard, and is topical here in clc.

We discuss them all here. We didn't abandon K&R C just because of C90, and
we're not going to abandon C90 just because of C99.
>Mixed declarations? Sugar.
// comments? More sugar. VLAs? More sugar.

And what do you have against sugar?
You drink your coffee without it?
My point, as I'm sure you know, is that these are trivial changes. If I'm
going to risk the portability of my code, I want a HUGE
functionality/convenience payoff in return. C++ just about gives me that,
which is why I sometimes do use C++ - but C99 certainly does not.
I mean, C is just syntactic sugar for assembly language. Why
not program in assembly then?
For which assembly language is C syntactic sugar? There's more to computers
than "the computer sitting on Mr Navia's desk".
Mixed declarations are progress in the sense that they put the
declaration nearer the use of the variable, which makes reading
the code much easier, especially in big functions.
Fine. When C99 is as widespread as C90 currently is, I'll be glad to take
advantage of them.
True, big functions are surely not a bright idea, but they happen :-)

I accept that this is not a revolution, or really big progress in C,
but it is a small step, which is not bad.
It's not bad provided it works. It's foolish to adopt a feature just because
you can, if having adopted that feature you then find that your code won't
compile on some of your target platforms.
VLAs are a more substantial step, since they allow you to allocate
precisely the memory the program needs without having to over-allocate or
risk under-allocating arrays.
I see nothing they give that can't be got from a quick malloc.
>
Under C89 you have to either:
1) allocate memory with malloc
Works for me.
2) Decide a maximum size and declare a local array of that size.
Not so good.
>
Neither solution is especially good. The first one implies using malloc,
with all the associated hassle,
What hassle? You ask for memory, you get a pointer to it or a NULL, end of
problem. And you get to find out whether it worked, rather than have your
program crash out on you if N is way too large for any reason.
and the second risks not allocating enough
memory. C99 allows you to allocate precisely what you need and no more.
So does malloc.
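
A minimal sketch of the difference being argued over, assuming n is at least 1; the first function needs C99, the second is plain C90, and the names and the return -1 convention are illustrative only:

#include <stdlib.h>

void use_vla(size_t n)
{
    double buf[n];      /* C99 VLA: if n is far too large, there is no
                           portable way to detect or recover from failure */
    buf[0] = 0.0;
}

int use_malloc(size_t n)
{
    double *buf = malloc(n * sizeof *buf);
    if (buf == NULL)
        return -1;      /* failure is visible; the caller decides what to do */
    buf[0] = 0.0;
    free(buf);
    return 0;
}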

Compound literals - sure, they might come in handy one day.

They do come in handy, but again, they are not such huge progress.
For once, we agree.
>A colossal math library?
Hardly anyone needs it, and those who do are probably using something
like Matlab anyway.

Maybe, maybe not, I have no data concerning this. In any case it
promotes portability (yes, I am not that stupid!
How stupid are you, exactly?
:-) since it
defines a common interface for many math functions. Besides, the control
you get over the abstracted FPU is very finely tuned.
<shrug> Since hardly anyone needs it, who cares?
You can portably set the rounding mode, for instance, and many other
things. In this sense the math library is quite a big step in C99.
Only for people for whom it's useful, which is almost nobody.
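
For reference, the C99 facility referred to here is <fenv.h>; a minimal sketch, noting that the FE_* rounding-mode macros exist only where the implementation supports those modes and that not every compiler honours the FENV_ACCESS pragma:

#include <fenv.h>
#include <stdio.h>

/* Tell the translator we intend to change the floating-point environment. */
#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile double x = 1.0, y = 3.0;   /* volatile: keep the divisions from
                                           being folded at compile time */

    fesetround(FE_DOWNWARD);
    printf("rounded toward -infinity: %.20f\n", x / y);

    fesetround(FE_UPWARD);
    printf("rounded toward +infinity: %.20f\n", x / y);

    fesetround(FE_TONEAREST);           /* restore the default mode */
    return 0;
}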
>The real progress has been in the development of third-party libraries,
many of which are at least a bit cross-platform.

That is progress too, but (I do not know why) we never discuss them
in this group.
Because There Are Other Newsgroups For Discussing Them. If you learn nothing
else from this reply, learn this, at least: comp.lang.c is not a dumping
ground for anything Jacob Navia finds interesting. It is a newsgroup for
the discussion of the C language. Other stuff is also very interesting.
Other stuff is also very useful. Other stuff is also great fun. But other
stuff is discussed in Other Newsgroups. In this newsgroup, we discuss C. In
other newsgroups, we discuss those other things.

Maybe what frustrates me about all this talk of "Stay with C89, C99
is not portable" is that it has taken me years of effort to implement
C99 (and not all of it)
Not all of it. So you don't even have a conforming compiler, and yet you're
suggesting we move to C99? Dream on.
and that not even in this group, where we should
promote standard C, is C99 accepted as what it is: the current standard.
It's the current de jure standard. It will never become the de facto
standard until it meets the same needs that C90 currently meets.
I mean, each one of us has a picture of what C "should be". But if we
are going to reach *some* kind of consensus, it must be the published
standard of the language, whether we like it or not.
But it isn't a consensus. If it were a consensus, it would be widely
implemented. And it isn't, so it isn't.
For instance, I find it an abomination that main() returns zero even if
the programmer doesn't specify it. But I implemented that because it is
the standard, even though I do not like it at all.
Fine, but bright people will still explicitly return 0 from main - not just
because omitting it is indeed an abomination, but also to keep their
programs portable to all C90 implementations as well as all C99
implementations.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #45

Old Wolf said:
Ben Pfaff wrote (re. error checking of malloc):
>>
Why not use a wrapper function that will always do the right
thing?

That reminds me of the course where I learned C. The tutor advised:

#define MALLOC(type, n) ((type *)malloc((n) * sizeof(type)))
How pointless.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #46

Ben Pfaff said:
Frederick Gotham <fg*******@SPAM.com> writes:
>For instance, if I were allocating upwards of a megabyte of memory, I
would probably take precautions:

int *const p = malloc(8388608 / CHAR_BIT
+ !!(8388608 % CHAR_BIT));

if(!p) SOS();

But if I'm allocating less than a kilobyte... I probably won't bother
most of the time. (The system will already have ground to a halt by that
stage if it has memory allocation problems.)

Why not use a wrapper function that will always do the right
thing?
Because "the right thing" depends on the situation. Simply aborting the
program is a student's way out. Unfortunately, the only worse "solution"
than aborting the program on malloc failure - i.e. not bothering to test at
all - is also the most common "solution", it seems.
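
A minimal sketch of the alternative being argued for, with the result checked and failure reported back to the caller; the function name and error convention are illustrative only:

#include <stdlib.h>
#include <string.h>

/* Returns a freshly allocated copy of s, or NULL if memory is not
   available; the caller chooses how to recover (retry, degrade, report). */
char *dup_string(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;
}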

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #47

Keith Thompson wrote:
Flash Gordon <sp**@flash-gordon.me.uk> writes:
[...]
>As you know very well we discuss C99, C95, C90 and even pre-ANSI C
when appropriate. Why do you object so strongly to people not being
told when something is a C99 feature and so not portable to such
common implementations as MS VC++?

I think you meant to drop the first "not" in that last sentence;
"people not being told" should be "people being told".
I'll write a program to write out 1000 times, "Do not post to comp.lang.c
at almost 2AM."

#include <stdio.h>

int main(void)
{
int i;

for (i=0; i<1000; i++)
puts("Do not post to comp.lang.c at almost 2AM.");

return 0;
}
--
Flash Gordon
Still sigless on this computer.
Aug 1 '06 #48

In article <4r********************@bt.com>
Richard Heathfield <in*****@invalid.invalid> wrote:
>... What did C99 give us? Mixed declarations? Sugar. // comments?
More sugar.
True, although sometimes a little syntactic sugar helps with
understanding.
>VLAs? More sugar.
Here I have to disagree: when you need VLAs, you really need them.
In particular, they allow you to write matrix algebra of the sort
that Fortran has had for years:

void some_mat_op(int m, int n, double mat[m][n]) {
...
}

In C89 you have to sneak around the standard to do this at all
(assuming you are using actual "array of array"s rather than
"array of pointers to vectors" or "vector of pointers to vector
of pointers", anyway).
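
A minimal sketch of that Fortran-style usage, assuming a C99 compiler and positive bounds; the scaling operation is just a placeholder:

/* The bounds m and n are ordinary parameters, declared before the matrix
   so that they are in scope for its declaration. */
void scale_mat(int m, int n, double mat[m][n], double factor)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            mat[i][j] *= factor;
}

void demo(int rows, int cols)
{
    double a[rows][cols];   /* the caller's matrix can itself be a VLA */

    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            a[i][j] = 1.0;
    scale_mat(rows, cols, a, 2.0);
}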
>Compound literals - sure, they might come in handy one day.
They missed a bet with them though: a compound literal outside a
function has static duration, but any compound literal within a
function has automatic duration, with no way to give it static
duration. Hence:

int *f(void) {
static int a[] = { 1, 2, 3, 42 };
return a;
}

is OK, but you cannot get rid of the name:

int *f(void) {
return (int []){ 1, 2, 3, 42 };
}

is not, because the anonymous array object vanishes. (You can,
however, make the array "const", provided the function returns
"const int *".) Clearly C99 should have included:

return (static int []){ 1, 2, 3, 42 };

(and the corresponding version with const). :-)

(The syntax for compound literals is rather ugly, although I am
not sure how one could specify them unambiguously otherwise.)
>A colossal math library? Hardly anyone needs it ...
Although again, those who want to write their Fortran code in C
will find it handy. :-)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Aug 1 '06 #49

>>A colossal math library? Hardly anyone needs it ...

Although again, those who want to write their Fortran code in C
will find it handy. :-)
A "good" programmer can write FORTRAN code in any language ;)
Aug 1 '06 #50
