Bytes | Software Development & Data Engineering Community

Character arrays

Hi

I observed something while coding the other day:

if I declare a character array as char s[0] and try to use it as any
other character array, it works perfectly fine most of the time. It
holds strings of any length. I guess what is happening here is that
this array initially holds only '\0' and hence is of length 1.

But sometimes, when I tried to write some functions and do some
manipulations on such character arrays, the behaviour is erratic: they
sometimes result in a segmentation fault when I try to store certain
values.

I am not able to understand why these arrays are behaving like
this. Can you explain how these arrays grow and what could be
happening?

Thanks
Anitha
Nov 14 '05 #1
On 6 Nov 2004 07:02:49 -0800
a.******@gmail.com (Anitha) wrote:
I observed something while coding the other day:

if I declare a character array as char s[0], and try to use it as any
other character array..it works perfectly fine most of the times. It
holds strings of any length. I guess what is happening here is that
this array initially holds only '\0' and hence is of length 1.
No, it means you are doing something that is not allowed by the standard,
AFAIK. You are declaring an array of length zero. If the compiler allows
this as an extension, it probably does so in order to permit another
trick.
But sometimes, when I tried to write some functions and do some
manipulations on such character arrays, the behavior is erratic..they
result in segmentation fault sometimes,when I try to store certain
values.

I am not able to understand why these arrays are behaving like
this..can you explain how these arrays grow and what could be
happening?


Patient: It hurts when I do this..
Dr: Well, don't do that then.

The array isn't growing. What is happening is that you are overwriting
other data and sometimes not seeing any effect.

If anything "works sometimes" then you are almost always doing something
you should not be doing. If the failure is something like segmentation
faults then you are *definitely* doing something you should not be
doing. If it appears to work but you cannot understand why it works then
it is almost certain to fail at the worst possible time, such as when
your boss is demonstrating it to a VIP.
--
Flash Gordon
Sometimes I think shooting would be far too good for some people.
Although my email address says spam, it is real and I read it.
Nov 14 '05 #2

"Anitha" <a.******@gmail.com> wrote

I observed something while coding the other day:

if I declare a character array as char s[0], and try to use it as any
other character array..it works perfectly fine most of the times. It
holds strings of any length. I guess what is happening here is that
this array initially holds only '\0' and hence is of length 1.
The array would be zero bytes in length. However it is highly likely that
the data adjacent to it is zero, hence the illusion of an empty string.

char empty[1] = {'\0'};

is a perfectly unexceptional way of declaring an empty string in C.
But sometimes, when I tried to write some functions and do some
manipulations on such character arrays, the behavior is erratic..they
result in segmentation fault sometimes,when I try to store certain
values.
Because what you are doing is illegal.
I am not able to understand why these arrays are behaving like
this..can you explain how these arrays grow and what could be
happening?

You need to look up pointers. C is not Java, where arrays can be copied from
one to the other and can be null. A C array reserves an area of memory, and
the address cannot be changed. Pointers are used when you want a variable to
be one array sometimes and another array at other times.
Nov 14 '05 #3
Malcolm wrote:

"Anitha" <a.******@gmail.com> wrote

I observed something while coding the other day:

if I declare a character array as char s[0],
There are no zero size objects in standard C.
I'm guessing you are using an extension.
fine most of the times.
When it works,
it's like parking your car in somebody else's space
when you don't get caught.
It holds strings of any length.


If you try to index into memory that is not allocated for your program,
then your code is no longer governed by the rules of C.
Anything can happen. It can work the way you want it to, or not.

--
pete
Nov 14 '05 #4

"Anitha" <a.******@gmail.com> wrote in message
if I declare a character array as char s[0], and try to use it as any
other character array..it works perfectly fine most of the times. It
holds strings of any length. I guess what is happening here is that
this array initially holds only '\0' and hence is of length 1.
The array only "holds strings of any length" by accident. I don't
know how an array of zero elements is supposed to behave and I don't
know how it actually does behave on your system. The behavior
that you experience is almost certainly due to your overwriting
variables that follow it.

But sometimes, when I tried to write some functions and do some
manipulations on such character arrays, the behavior is erratic..they
result in segmentation fault sometimes,when I try to store certain
values.

I am not able to understand why these arrays are behaving like
this..can you explain how these arrays grow and what could be
happening?


These arrays do not grow at all. Unless you use dynamic allocation, an
array's size is fixed for its entire lifetime. You should not be writing
outside the array's bounds at all.

Richard [in PE12]
Nov 14 '05 #5
Thanks for all your messages.
I was just fiddling with my code and wanted to know why I could declare
char str[0]... Anyway, I've learnt that I shouldn't do what I am not
supposed to do !! :)
But the compiler's indifference to such a declaration is surprising.
I think it is due to the fact that there is no array bounds checking in
C...

Anitha Adusumilli
Nov 14 '05 #6

"Anitha" <a.******@gmail.com> wrote in message
news:9b*************************@posting.google.com...
Thanks for all your messages.
I was just fiddling with my code and I wanted to know the reason why I
could declare char str[0]...Anyways, learnt that I shouldnt do what I
am not supposed to do !! :)
But, compiler's indifference to such a declaration is surprising..
I think it is due to the fact that there is no array bound checking in
C...


It goes a lot deeper than that. C will allow you to do lots of
things that can bugger things up. It's that sort of language.
If you want all the featherbedding of a modern language, read no
further: C is not for you.

Richard [in PE12]
Nov 14 '05 #7
a.******@gmail.com (Anitha) wrote:
I was just fiddling with my code and I wanted to know the reason why I
could declare char str[0]...Anyways, learnt that I shouldnt do what I
am not supposed to do !! :)
But, compiler's indifference to such a declaration is surprising..
I think it is due to the fact that there is no array bound checking in
C...


No, I don't think so. This _is_ an error in C; with the warning level
turned up to "useful", your compiler really should have warned you. I
think it's more likely to go unnoticed because you use a compiler which
comes with a whole metric shitload of embrace-and-extend bags-on-the-
side, and the warning level set to "catatonic" by default. At a guess,
either M$VC or gcc.

Richard
Nov 14 '05 #8
Hello Richard,

Richard Bos wrote:
a.******@gmail.com (Anitha) wrote:

I was just fiddling with my code and I wanted to know the reason why I
could declare char str[0]...Anyways, learnt that I shouldnt do what I
am not supposed to do !! :)
But, compiler's indifference to such a declaration is surprising..
I think it is due to the fact that there is no array bound checking in
C...

No, I don't think so. This _is_ an error in C; with the warning level
turned up to "useful", your compiler really should have warned you. I
think it's more likely to go unnoticed because you use a compiler which
comes with a whole metric shitload of embrace-and-extend bags-on-the-
side, and the warning level set to "catatonic" by default. At a guess,
either M$VC or gcc.


Yep, I do not understand why they do not use the "useful" warning
level by default -- users are more likely to go looking for a "switch
naughty warnings off"/"catatonic" mode than the other way round.
And for the huge projects where you cannot do without collecting
warnings, you usually have makefiles anyway.

For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or
"gcc -Wall -ansi -pedantic" from my students but they still did not
get it into their heads that this should be the usual way to call it
-- it has too many parameters...
Next time I'll do an introduction to the shell, let them have
something along the lines of a build script, maybe
____~/bin/build____
#!/bin/sh
# usage: build name [extra gcc options] -- compiles name.c to name
src="$1"; shift
gcc -Wall -std=c99 -pedantic -o "$src" "$src.c" "$@"
___________________

and hope that they get it then with only having to type
"build helloworld" or "build factor -lm". But I fully expect to be
disappointed... ;-)
Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.

Nov 14 '05 #9
On Tue, 09 Nov 2004 15:23:06 +0100, Michael Mair
<Mi**********@invalid.invalid> wrote:
For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or
I use -W as well, it catches a few more warning cases (comparing
unsigned values as < or <= zero, a < b < c (one it actually caught the
other day, I'd deleted too many characters), unused arguments). They
aren't essential for ANSI but they can show up errors (or things which
need implementing, like using the function arguments).
"gcc -Wall -ansi -pedantic" from my students but they still did not
get it into their heads that this should be the usual way to call it
-- it has too many parameters...
That's what make is for...
Next time I'll do an introduction to the shell, let them have
something along the lines of a build script, maybe
____~/bin/build____
#!/bin/sh
gcc -Wall -std=c99 -pedantic -o $* "${1}.c"


Introduce them to make?
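(For what it's worth, GNU make's built-in rules already know how to compile "foo" from "foo.c"; a makefile that matches the build script above only has to supply the flags. The -lm default here is my own choice:)

```make
# With GNU make's built-in %: %.c rule, setting the variables is
# enough: "make helloworld" compiles helloworld.c with these flags.
CFLAGS = -Wall -std=c99 -pedantic
LDLIBS = -lm
```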

Chris C
Nov 14 '05 #10
> > Next time I'll do an introduction to the shell, let them have
something along the lines of a build script, maybe
____~/bin/build____
#!/bin/sh
gcc -Wall -std=c99 -pedantic -o $* "${1}.c"


Introduce them to make?


It would be really good if students were all introduced to GCC, Make and of
course the all important source control (CVS , RCS or whatever your
favourite is) .
--------------
Jason Cooper
Nov 14 '05 #11
In <sl******************@ccserver.keris.net> Chris Croughton <ch***@keristor.net> writes:
On Tue, 09 Nov 2004 15:23:06 +0100, Michael Mair
<Mi**********@invalid.invalid> wrote:
For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or


I use -W as well, it catches a few more warning cases (comparing
unsigned values as < or <= zero, a < b < c (one it actually caught the
other day, I'd deleted too many characters), unused arguments). They
aren't essential for ANSI but they can show up errors (or things which
need implementing, like using the function arguments).


-W complains far too often about perfectly good code, which is why its
warnings haven't been included in -Wall in the first place. It's fine
for people who understand the underlying issues and agree with the right
coding style needed for avoiding them, but I wouldn't recommend it to
any novice (who's prone to use casts in order to shut up the -W messages).

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #12
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:
For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or
Why recommend your students to use gcc in a non-conforming mode?
"gcc -Wall -ansi -pedantic" from my students but they still did not
Without -O, -Wall is partially crippled, because gcc doesn't perform
enough data flow analysis to even attempt to detect the usage of
uninitialised variables:

fangorn:~/tmp 538> cat test.c
#include <stdio.h>

int main(void)
{
    char *s;
    puts(s);
    return 0;
}
fangorn:~/tmp 539> gcc -Wall test.c
fangorn:~/tmp 540> gcc -Wall -O test.c
test.c: In function `main':
test.c:5: warning: `s' might be used uninitialized in this function

On x86 hardware, you also want -ffloat-store if standard conformance is
your goal (it doesn't fix all the floating point conformance related
problems of gcc, but it helps a lot). OTOH, if execution speed is more
important than getting the exact precision specified by the standard
(without -ffloat-store you get more than that), you don't want
-ffloat-store.
get it into their heads that this should be the usual way to call it
-- it has too many parameters...


This is what shell aliases are for:

alias c90 gcc -Wall -O -ansi -pedantic
alias brokenc99 gcc -Wall -O -std=c99 -pedantic

does the right thing for csh and friends.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #13
In <cn**********@sun-cc204.lut.ac.uk> "J.L.Cooper" <A@A.COM> writes:
> Next time I'll do an introduction to the shell, let them have
> something along the lines of a build script, maybe
> ____~/bin/build____
> #!/bin/sh
> gcc -Wall -std=c99 -pedantic -o $* "${1}.c"


Introduce them to make?


It would be really good if students were all introduced to GCC, Make and of
course the all important source control (CVS , RCS or whatever your
favourite is) .


But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #14
But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.


To me it seems more logical to introduce the students to them at the start
of their course. After all RCS/CVS can be used to source code control any
files being worked on and Make can be used to do a lot more than compile C
programs.

In fact I use RCS and Make a lot more than I use C; even my Thesis is
stored in RCS and has a makefile. After all, C should not be the first
module that a student undertakes; there are some which are more important
(like Logic, Computer Architecture and Assembly Language; of course this
is only my opinion and I am sure other people's will differ).
Nov 14 '05 #15
On Fri, 12 Nov 2004 16:04:40 UTC, Da*****@cern.ch (Dan Pop) wrote:
In <cn**********@sun-cc204.lut.ac.uk> "J.L.Cooper" <A@A.COM> writes:
> Next time I'll do an introduction to the shell, let them have
> something along the lines of a build script, maybe
> ____~/bin/build____
> #!/bin/sh
> gcc -Wall -std=c99 -pedantic -o $* "${1}.c"

Introduce them to make?


It would be really good if students were all introduced to GCC, Make and of
course the all important source control (CVS , RCS or whatever your
favourite is) .


But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.


On the day you tell the students how to use header files, you should
start to introduce them to the minimal basics of make too.

This will give you more time to talk about the traps C holds, to explain
in more detail how to get a standard-compliant program written, and so
on.

Spending ten minutes on make and on the importance of using the compiler
flags in the right manner increases the students' knowledge of C rather
than costing time, because you get more time to speak about the language
itself.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de the home of german eComStation

Nov 14 '05 #16
On Fri, 12 Nov 2004 16:36:48 -0000, J.L.Cooper
<A@A.COM> wrote:
But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.
To me it seems more logical to introduce the students to them at the start
of their course. After all RCS/CVS can be used to source code control any
files being worked on and Make can be used to do a lot more than compile C
programs.


Indeed, and make can be used with any language. Or text, for that
matter, especially if they are using a text build tool like TeX.
In fact I use RCS and Make a lot more than I use C even my Thesis is stored
in RCS and has a make file. After all C should not be the first module that
a student undertakes, there some which are more important (like Logic,
Computer Architecture and Assembly Language, of course this only my opinion
and I am sure other peoples will differ).


I agree, I use RCS for text files as well. C is not a good first
language (or the best for many purposes; I use AWK and C++ as much); the
principles of programming should be learnt first and then a number of
languages presented with their good and bad qualities. The environment,
however, will stay useful...

Chris C
Nov 14 '05 #17

On Fri, 12 Nov 2004, Dan Pop wrote:

In <cn**********@sun-cc204.lut.ac.uk> "J.L.Cooper" <A@A.COM> writes:
It would be really good if students were all introduced to GCC, Make and
of course the all important source control (CVS , RCS or whatever your
favourite is).


But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.


That's assuming that the C newbie is also a newbie to programming and to
*nix as well. Which might be a good assumption in general, I suppose.
But I basically learned CVS this semester, in the same class for which
I learned OCaml... so I have no doubt that a student in a C-newbies course
who already knew the basics of programming in a language like Pascal or
Java could deal with learning both C and the basic Unix tools at the same
time.

my $.02,
-Arthur
Nov 14 '05 #18
Dan Pop wrote:
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:

For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or
Why recommend your students to use gcc in a non-conforming mode?


Because we cannot afford the Comeau compiler plus Dinkumware libraries.
Because I hope that in the long run this is a better course than
teaching only C89 and doing C99 as an add-on at the end, if at all; and
many useful constructs can be used already.
If every C course did this, then there would be enough weight to get
full conformance not only in gcc but in most and eventually all
major compilers.
However, I fear that in the long run we will get C89 plus C99
standard library and nothing more.

"gcc -Wall -ansi -pedantic" from my students but they still did not


Without -O, -Wall is partially crippled, because gcc doesn't perform
enough data flow analysis to even attempt to detect the usage of
uninitialised variables:

[snip: example uninitialized pointer]

Thank you! This is a really useful suggestion. I will have to read up on
this in the compiler manual to be sure what exactly is affected, but will
do so before the next course :-)

On x86 hardware, you also want -ffloat-store if standard conformance is
your goal (it doesn't fix all the floating point conformance related
problems of gcc, but it helps a lot). OTOH, if execution speed is more
important than getting the exact precision specified by the standard
(without -ffloat-store you get more than that), you don't want
-ffloat-store.


Yep, I know. I included into my homework exercises the calculation of
*_EPSILON (* in {FLT,DBL,LDBL}) and address this issue there for
DBL_EPSILON.

get it into their heads that this should be the usual way to call it
-- it has too many parameters...

This is what shell aliases are for:

alias c90 gcc -Wall -O -ansi -pedantic
alias brokenc99 gcc -Wall -O -std=c99 -pedantic

does the right thing for csh and friends.


I know -- as I mentioned, I do a little introduction to the shell.
There, aliases are, of course, also included. The build script
additionally addresses the usual problem with the -o option, which
in turn gives me the opportunity to say something about paths (the
difference between entering "test" and "./test").

Thank you very much for your answer!
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #19
Chris Croughton wrote:
On Tue, 09 Nov 2004 15:23:06 +0100, Michael Mair
For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or


I use -W as well, it catches a few more warning cases (comparing
unsigned values as < or <= zero, a < b < c (one it actually caught the
other day, I'd deleted too many characters), unused arguments). They
aren't essential for ANSI but they can show up errors (or things which
need implementing, like using the function arguments).


For -W, you need to know too much. This is an option I tell
them about but usually the introduction to splint helps them
more.

"gcc -Wall -ansi -pedantic" from my students but they still did not
get it into their heads that this should be the usual way to call it
-- it has too many parameters...


That's what make is for...


Definitely! But I prefer teaching them the language to a point where
they can appreciate make before introducing them to it.

Next time I'll do an introduction to the shell, let them have
something along the lines of a build script, maybe
____~/bin/build____
#!/bin/sh
gcc -Wall -std=c99 -pedantic -o $* "${1}.c"


Introduce them to make?


Already happens. ar, make, gprof, cvs and others come in towards the end
of my course, when they have learned to write complex code which
justifies it. In the long run, the tools are more important than the
language used, but most of them do not even know "that computers can
have command lines" when they enter the course.
Make is definitely too much for absolute beginners. They first have
to really understand (by "menial labour" with an actual language at
"actual" problems) why they need it.
Aliases (as Dan Pop suggested) or this script is quite enough at the
start.
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #20
Arthur J. O'Dwyer wrote:

On Fri, 12 Nov 2004, Dan Pop wrote:

In <cn**********@sun-cc204.lut.ac.uk> "J.L.Cooper" <A@A.COM> writes:
It would be really good if students were all introduced to GCC, Make and
of course the all important source control (CVS , RCS or whatever your
favourite is).
But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.


I completely agree. As there are no other courses, I included the tools
at the end.
That's assuming that the C newbie is also a newbie to programming and
to *nix as well. Which might be a good assumption in general, I suppose.
But I basically learned CVS this semester, in the same class for which
I learned OCaml... so I have no doubt that a student in a C-newbies course
who already knew the basics of programming in a language like Pascal or
Java could deal with learning both C and the basic Unix tools at the same
time.


Well, in my last course less than ten percent knew anything about
programming or *nix and another ten percent told me that they had
never worked with computers before...
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #21
J.L.Cooper wrote:
But only *after* graduating their C courses. None of these tools, except
gcc, of course, is needed by the C newbie, who is already overwhelmed by
the C language issues. No point in making his life even harder by
exposing him to tools like make and CVS, for which he has no need, yet,
anyway.
To me it seems more logical to introduce the students to them at the start
of their course. After all RCS/CVS can be used to source code control any
files being worked on and Make can be used to do a lot more than compile C
programs.


Of course. But IMO it is more motivating to see the necessity for using
it. As I have only finite time to get them started on the environment,
C, tools, and algorithmic optimisation, I will do so in this order as it
enables me to treat all topics.
In fact I use RCS and Make a lot more than I use C even my Thesis is stored
in RCS and has a make file. After all C should not be the first module that
a student undertakes, there some which are more important (like Logic,
Computer Architecture and Assembly Language, of course this only my opinion
and I am sure other peoples will differ).


As there are no other courses, I fill the gap as well as I can.
C is IMO an excellent starting language, as it is at once high level
and relatively close to the machine -- and it is not as "big" as other
languages with comparable scope.
From C to Assembly is one step, and using C from, say, Matlab, another.

Apart from that: if you have once _experienced_ how much relief comes
along with using make or version control, then you are much more likely
to start new projects using them. Thus, my students have to "suffer"
through the time without them in order to get the revelation when they
are introduced to make. The suffering probably is still not steep enough
(honestly, how bad can it get in a sensible course?) to drive the lesson
home, but hopefully it makes them remember soon enough to (re)organise
projects accordingly.
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #22
a.******@gmail.com (Anitha) wrote in message news:<9b*************************@posting.google.com>...
Hi

I observed something while coding the other day:

if I declare a character array as char s[0], and try to use it as any
other character array..it works perfectly fine most of the times. It
holds strings of any length. I guess what is happening here is that
this array initially holds only '\0' and hence is of length 1.
AFAIK it will not proceed well: your compiler must (or at least should)
give an error as soon as it starts to compile the above statement.
You are declaring char s[0] of zero length, and this is not possible.
If you really want to get your concept clear, then go through the
book K&R and solve problem 2.5; you will be clear about all the
functionality.
Or write a program which reads only 20 characters. Don't use any library
function except printf, and you will get all your doubts cleared.

But sometimes, when I tried to write some functions and do some
manipulations on such character arrays, the behavior is erratic..they
result in segmentation fault sometimes,when I try to store certain
values.

I am not able to understand why these arrays are behaving like
this..can you explain how these arrays grow and what could be
happening?

Thanks
Anitha

Nov 14 '05 #23
On Sat, 13 Nov 2004 01:19:18 +0100, Michael Mair
<Mi**********@invalid.invalid> wrote:
Chris Croughton wrote:
I use -W as well, it catches a few more warning cases (comparing
unsigned values as < or <= zero, a < b < c (one it actually caught the
other day, I'd deleted too many characters), unused arguments). They
aren't essential for ANSI but they can show up errors (or things which
need implementing, like using the function arguments).
For -W, you need to know too much. This is an option I tell
them about but usually the introduction to splint helps them
more.


True, -W is a bit over the top (but lint is usually far more so; I know
a lot of people who, when introduced to lint, "shoot the messenger" and
either don't use it or turn off everything they can). The one I do turn
off is the signed/unsigned comparisons, that gets really annoying
(especially with some versions of gcc which complain about constants, I
really do not want to have to write x > 0U).
Definitely! But I prefer teaching them the language to a point where
they can appreciate make before introducing them to it.
I use make almost more for other transformations than for compiling.
Introduce them to make?


Already happens. ar, make, gprof, cvs and other come in towards the end
of my course. When they have learned to write complex code which
justifies it. In the long run, the tools are more important than the
language used, but most of them do not even know "that computers can
have command lines" when they enter the course.


Ah, to me that's a given. Computers had nothing but command lines when
I started -- well, OK, unless you count the front panel switches and the
button which loaded the bootstrap from the card reader. I suppose the
lights on the panel could be called a GUI, they were sort of graphical
and it was a user interface of sorts. And the oscilloscope <g>...
Make is definitely too much for absolute beginners. They first have
to really understand (by "menial labour" with an actual language at
"actual" problems) why they need it.
Hmm. I tend to write the makefile before typing in the start of the
program...
Aliases (as Dan Pop suggested) or this script is quite enough at the
start.


Pity gcc doesn't take an environment variable for the default options,
so it could be set up in the login scripts. There was something very
elegant about just typing "cc x.c". But I suppose an alias is about as
close...

Chris C
Nov 14 '05 #24
Chris Croughton wrote:
Michael Mair wrote
Chris Croughton wrote:
I use -W as well, it catches a few more warning cases (comparing
unsigned values as < or <= zero, a < b < c (one it actually caught the
other day, I'd deleted too many characters), unused arguments). They
aren't essential for ANSI but they can show up errors (or things which
need implementing, like using the function arguments).
For -W, you need to know too much. This is an option I tell
them about but usually the introduction to splint helps them
more.


True, -W is a bit over the top (but lint is usually far more so, I know
a lot of people who when introduced to lint "shoot the messenger" and
either don't use it of turn off everything they can).


*g* I could not get some of my colleagues to use splint because
even the -weak option gave them too much to digest when tried on an
average file.
However, splint -weak gives you only the "reasonable" warnings and
really helps finding the most obvious mistakes.
If you then go to the default, and then -checks, you usually learn
a good bit about your code and bad coding habits.

As soon as my students feel too confident, I give them the task
to correct some program which will give no warnings up to splint
-checks but crash nonetheless and have a hidden bug at about
every fourth line... :-)

The one I do turn
off is the signed/unsigned comparisons, that gets really annoying
(especially with some versions of gcc which complain about constants,
I really do not want to have to write x > 0U).


Ah yes, I am always "happy" about that, too ;-)
Nonetheless, I switch this off only after having a look at the
respective lines...
Cheers,
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #25
On Sat, 13 Nov 2004 23:19:36 +0100, Michael Mair
<Mi**********@invalid.invalid> wrote:
*g* I could not get some of my colleagues to use splint because
even the -weak option gave them too much to digest when tried on an
average file.
I can believe that, I've had to tell it (OK, lclint but they are from
the same base) a number of things to ignore. Like its insistence that
%X wants an unsigned int (almost all of the time I'm using %X is in
%debug code, where I want the hex value, I don't give a monkey's
%whether it's signed or not).
However, splint -weak gives you only the "reasonable" warnings and
really helps finding the most obvious mistakes.
If you then go to the default, and then -checks, you usually learn
a good bit about your code and bad coding habits.
Things like comparing an int with a character constant? When the int
has to be that to contain EOF? That's just silly (in fact I can't
recall any case of comparing an int with a character constant which was
an error).
As soon as my students feel too confident, I give them the task
to correct some program which will give no warnings up to splint
-checks but crash nonetheless and have a hidden bug at about
every fourth line... :-)


Oh yes, those are the interesting ones. I have long believed that any
program which compiles first time with no errors or warnings is
extremely suspect. This is not just superstition ("there must be
/something/ wrong with it!"), it has a basis in the fact that if a program
has had errors and has had to be corrected then the author will at least
have looked at (at least part of) the code critically and will likely
have noticed other bugs that the compiler didn't see.

One I was given at a job interview was of the form:

#include <stdio.h>

int main(void)
{
int i = 0;
++i; /* comment *
++i; * comment *
++i; * comment *
++i; * comment *
++i; * comment */
printf("%d\n", i);
return 0;
}

What value does it print out? No, not 5, although a lot of even
experienced programmers will give that answer. The "block comment" on
the right is visually confusing, the brain tends to ignore it. Of
course, any modern editor (including vim) with highlighting will show
the error immediately, so give it to them on paper. It's perfectly
valid ANSI C, even lclint on its most picky setting won't object, it
just happens to not give the answer the programmer probably expects...

(Actually, the above is a simplified version. The one I was given
included a loop, which did indeed have a bug in it (off-by-one error)
quite apart from the comment block, which confuses things more...)
> The one I do turn
> off is the signed/unsigned comparisons, that gets really annoying
> (especially with some versions of gcc which complain about constants,
> I really do not want to have to write x > 0U).


Ah yes, I am always "happy" about that, too ;-)
Nonetheless, I switch this off only after having a look at the
respective lines...


I've found some compilers where it can't be turned off (one of the ARM
ones in particular) and it's really annoying...

Chris C
Nov 14 '05 #26
Chris Croughton wrote:
Michael Mair wrote:
*g* I could not get some of my colleagues to use splint because
even the -weak option gave them too much to digest when tried on an
average file.
I can believe that, I've had to tell it (OK, lclint but they are from
the same base) a number of things to ignore. Like its insistence that
%X wants an unsigned int (almost all of the time I'm using %X is in
%debug code, where I want the hex value, I don't give a monkey's
%whether it's signed or not).


Yep, this can be rather annoying. And eventually, changing one's own
programming style just to please *lint is not really what one wants
to do ;-)

However, splint -weak gives you only the "reasonable" warnings and
really helps finding the most obvious mistakes.
If you then go to the default, and then -checks, you usually learn
a good bit about your code and bad coding habits.


Things like comparing an int with a character constant? When the int
has to be that to contain EOF? That's just silly (in fact I can't
recall any case of comparing an int with a character constant which was
an error).


Umh, character constants (you are talking of things like 'a', are
you?) are of type int, aren't they?

As soon as my students feel too confident, I give them the task
to correct some program which will give no warnings up to splint
-checks but crash nonetheless and have a hidden bug at about
every fourth line... :-)


Oh yes, those are the interesting ones. I have long believed that any
program which compiles first time with no errors or warnings is
extremely suspect. This is not just superstition ("there must be
/something/ wrong with it!"), it has a basis in the fact that if a program
has had errors and has had to be corrected then the author will at least
have looked at (at least part of) the code critically and will likely
have noticed other bugs that the compiler didn't see.


Well, I only once in my life wrote a longish module and had no
compiler errors or warnings. And the most evil test data worked.
Then, I asked a colleague to have a look at it. Gave me really
the creeps when he told me that everything looked fine... ;-)

One I was given at a job interview was of the form:

#include <stdio.h>

int main(void)
{
int i = 0;
++i; /* comment *
++i; * comment *
++i; * comment *
++i; * comment *
++i; * comment */
printf("%d\n", i);
return 0;
}

What value does it print out? No, not 5, although a lot of even
experienced programmers will give that answer. The "block comment" on
the right is visually confusing, the brain tends to ignore it. Of
course, any modern editor (including vim) with highlighting will show
the error immediately, so give it to them on paper. It's perfectly
valid ANSI C, even lclint on its most picky setting won't object, it
just happens to not give the answer the programmer probably expects...

(Actually, the above is a simplified version. The one I was given
included a loop, which did indeed have a bug in it (off-by-one error)
quite apart from the comment block, which confuses things more...)


Nice example; with a little bit more around the crucial part, it can be
really misleading. However, if someone is really experienced (s)he has
his/her own policy about comments which will lead to quick discovery of
what is happening; given the pressure at an interview it can fail to
work but no one with some sanity left will accept block comments or
"line" comments in many consecutive lines behind the code. Nobody will
read the important comments. IMO, thoroughly miscommented code with
useless comments at every line is worse than one comment line per
function.

> The one I do turn
> off is the signed/unsigned comparisons, that gets really annoying
> (especially with some versions of gcc which complain about constants,
> I really do not want to have to write x > 0U).


Ah yes, I am always "happy" about that, too ;-)
Nonetheless, I switch this off only after having a look at the
respective lines...


I've found some compilers where it can't be turned off (one of the ARM
ones in particular) and it's really annoying...


Oh yes, I fondly remember having to move to another compiler
which warned me that the return statements at the end of some
two hundred functions could not be reached when they were only
in there to make sure that we catch the error if someone does
something stupid when changing code. That could not be switched off
either... We ended up filtering the warnings to find the "real"
issues as we still used the other compiler on other machines.
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #27
On Sun, 14 Nov 2004 15:11:26 +0100, Michael Mair
<Mi**********@invalid.invalid> wrote:
Chris Croughton wrote:
Michael Mair wrote:
*g* I could not get some of my colleagues to use splint because
even the -weak option gave them too much to digest when tried on an
average file.
I can believe that, I've had to tell it (OK, lclint but they are from
the same base) a number of things to ignore. Like its insistence that
%X wants an unsigned int (almost all of the time I'm using %X is in
%debug code, where I want the hex value, I don't give a monkey's
%whether it's signed or not).
Heh, your indenter decided that % was a comment indent function. I
haven't seen that one before...
Yep, this can be rather annoying. And eventually, changing one's own
programming style just to please *lint is not really what one wants
to do ;-)
Indeed. The cry "I write C, dammit, not lint!" has been heard...
However, splint -weak gives you only the "reasonable" warnings and
really helps finding the most obvious mistakes.
If you then go to the default, and then -checks, you usually learn
a good bit about your code and bad coding habits.


Things like comparing an int with a character constant? When the int
has to be that to contain EOF? That's just silly (in fact I can't
recall any case of comparing an int with a character constant which was
an error).


Umh, character constants (you are talking of things like 'a', are
you?) are of type int, aren't they?


Yup, and yup. In fact the output even tells you that using the lclint
flag to suppress it is likely to be safe because character constants
have type int!

/tmp/ttt.c:12:9: Operands of > have incompatible types (int, char): i > 'a'
A character constant is used as an int. Use +charintliteral to allow
character constants to be used as ints. (This is safe since the actual
type of a char constant is int.)

If there is really any "house style" violated by saying

int ch;
ch = getchar();
if (ch == EOF)
...
if (ch > 'a")
...

then I'd like to know, so I can avoid the place like the plague...

(Whether one should use getchar(), of course, is a different container
of piscine entities...)
Well, I only once in my life wrote a longish module and had no
compiler errors or warnings. And the most evil test data worked.
Then, I asked a colleague to have a look at it. Gave me really
the creeps when he told me that everything looked fine... ;-)
Definitely spooky! I did have one the other day, where I started
'writing' it in my head overnight and typed it in the next day, and it
not only compiled but also worked first time. However, once I started
adding the rest of the code it failed to compile spectacularly, so that
made up for it <g>...
Nice example; with a little bit more around the crucial part, it can be
really misleading. However, if someone is really experienced (s)he has
his/her own policy about comments which will lead to quick discovery of
what is happening; given the pressure at an interview it can fail to
work but no one with some sanity left will accept block comments or
"line" comments in many consecutive lines behind the code. Nobody will
read the important comments.
Yes, true. The problem comes when that person tries to strip them out
mentally, human parsers make mistakes because they look for patterns
("Ignore anything to the right of the semicolon") which aren't always
correct. Even more when switching languages (I'm used to the C++ style
// line comments, which C99 allows as do almost all modern C compilers,
so I read it as "oh, a block of line comments").
IMO, thoroughly miscommented code with
useless comments at every line is worse than one comment line per
function.
Oh definitely, I've lost count of the times I've used (and sometimes
written) a 'noft' program for various languages to remove all of the
comments, because I couldn't trust them. Or occasionally because they
were in some language which I almost understood but not enough to be
certain that I was reading them correctly (I read a certain amount of
French, German and Dutch, enough to get confused with words which look
like English ones but aren't). Badly commented code is worse than no
comments at all because it misleads whereas lack of comments means that
the maintainer has to actually read what the code does instead of what
the creator thought it was doing.
Oh yes, I fondly remember having to move to another compiler
which warned me that the return statements at the end of some
two hundred functions could not be reached when they were only
in there to make sure that we catch the error if someone does
something stupid when changing code. That could not be switched off
either... We ended up filtering the warnings to find the "real"
issues as we still used the other compiler on other machines.


BTDT as well. I got a bonus once for writing (in my own time) a
compiler warning filter, it improved productivity immensely. In those
days we had the option of either all warnings or none...

Chris C
Nov 14 '05 #28
Chris Croughton <ch***@keristor.net> wrote:

True, -W is a bit over the top (but lint is usually far more so, I know
a lot of people who when introduced to lint "shoot the messenger" and
either don't use it or turn off everything they can). The one I do turn
off is the signed/unsigned comparisons, that gets really annoying
(especially with some versions of gcc which complain about constants, I
really do not want to have to write x > 0U).


I turn this on whenever I can find it. Signed-unsigned
comparisons are the cause of some of the most hard-to-find
bugs. I wish there was a switch to make the C rules more
sensible :) (I would prefer that (-1 < 0x1) were TRUE).
Nov 14 '05 #29
Old Wolf wrote:
(I would prefer that (-1 < 0x1) were TRUE).


(-1 < 0x1) is true.

(-1 < 0x1u) is false.

--
pete
Nov 14 '05 #30
Chris Croughton wrote:

On Mon, 15 Nov 2004 03:05:48 GMT, pete
<pf*****@mindspring.com> wrote:
Old Wolf wrote:
(I would prefer that (-1 < 0x1) were TRUE).


(-1 < 0x1) is true.

(-1 < 0x1u) is false.


I think that what "Old Wolf" meant is what I would prefer, that any
signed type with a negative value would compare less than any unsigned
value, which would make sense.


As Old Wolf does, I like the signed/unsigned mismatch warning also.

--
pete
Nov 14 '05 #31
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
/tmp/ttt.c:12:9: Operands of > have incompatible types (int, char): i > 'a'
A character constant is used as an int. Use +charintliteral to allow
character constants to be used as ints. (This is safe since the actual
type of a char constant is int.)

If there is really any "house style" violated by saying

int ch;
ch = getchar();
if (ch == EOF)
...
if (ch > 'a")
...

then I'd like to know, so I can avoid the place like the plague...


no house style violation, merely a syntax error on 'a" ;-)

Since you read some amount of French, I will give you some examples of character
constant oddities :

if (ch > 'a') ... is quite meaningless if you are trying to produce portable
code (EBCDIC issues)

if (ch == 'é') ... stands a good chance of never matching anything read with
getchar()

if (ch == 'ÿ') ... will erroneously match EOF if chars are signed by default.

sizeof('a') == sizeof(int) comes as a surprise to some !

sizeof(L'a') != sizeof(int) is even more surprising (on systems where
wchar_t is a short and short != int, eg: windows)

Chqrlie.
Nov 14 '05 #32
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
On Mon, 15 Nov 2004 03:05:48 GMT, pete
<pf*****@mindspring.com> wrote:
Old Wolf wrote:
(I would prefer that (-1 < 0x1) were TRUE).


(-1 < 0x1) is true.

(-1 < 0x1u) is false.


I think that what "Old Wolf" meant is what I would prefer, that any
signed type with a negative value would compare less than any unsigned
value, which would make sense.


Even more surprising:

usually caught with a warning:
(-1U < 1) is false

not even a signed/unsigned comparison:
(sizeof(char) - sizeof(int) > 0) is true

Chqrlie.
Nov 14 '05 #33
"Charlie Gordon" <ne**@chqrlie.org> wrote:
if (ch == 'ÿ') ... will erroneously match EOF if chars are signed by default.


Not necessarily. It's true on most systems, since the majority uses
extended versions of ASCII in which 'ÿ' is 255, and usually EOF is -1,
CHAR_BIT is 8, and integer overflow simply wraps around. None of this is
guaranteed by the Standard, however; it's just rare to find a desktop
machine on which it isn't true.

Richard
Nov 14 '05 #34
On Tue, 16 Nov 2004 11:06:46 +0100, Charlie Gordon
<ne**@chqrlie.org> wrote:
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
/tmp/ttt.c:12:9: Operands of > have incompatible types (int, char): i > 'a'
A character constant is used as an int. Use +charintliteral to allow
character constants to be used as ints. (This is safe since the actual
type of a char constant is int.)

If there is really any "house style" violated by saying

int ch;
ch = getchar();
if (ch == EOF)
...
if (ch > 'a")
...

then I'd like to know, so I can avoid the place like the plague...
no house style violation, merely a syntax error on 'a" ;-)


Fair cop <g>. But it still applies to

if (ch == 'a')
Since you read some amount of French, I will give you some examples of character
constant oddities :

if (ch > 'a') ... is quite meaningless if you are trying to produce portable
code (EBCDIC issues)
But whether ch is int or char (or long, short, etc.) is irrelevant to
that. You might as well give a warning if any character constant is
used anywhere.

And note that lclint gives the same warning if the code is

if (ch >= '0' && ch <= '9')

which is valid whatever the character set used. Section 5.2.1(3) says:

In both the source and execution basic character sets, the value of
each character after 0 in the above list shall be one greater than the
value of the previous.

(where "the above list" was the 10 decimal digits 0 1 2 3 4 5 6 7 8 9).
That even holds in EBCDIC with 8-bit signed char.
if (ch == 'é') ... stands a good chance of never matching anything read with
getchar()
True, getchar (getc, fgetc) returns an unsigned value (except for EOF).
But since the value for 'é' needs to be determined with regard to the
current locale, which is only determinable at runtime, I would regard
any use of a character constant which is not 7-bit clean as an error.
if (ch == 'ÿ') ... will erroneously match EOF if chars are signed by default.
True, but again irrelevant. If you're dealing with non-ASCII characters
you need to do special processing anyway (in fact all of your examples
are assuming the LATIN-1 or equivalent character sets, none of it will
make any sense at all in a Cyrillic character set for instance). In any
real application which has to take account of such characters the
program will have to look at the locale in use at runtime to determine
the correct values anyway.

In none of those cases does having a warning about comparing an int with
a character constant do any more than having a general warning about
comparing anything with a character constant, or indeed a warning if you
use a character constant at all (note that the character set used for
preprocessing might also not be the one used at runtime -- indeed, the
one used at runtime may not be fixed, especially with non-ASCII
characters).
sizeof('a') == sizeof(int) comes as a surprise to some !
But not to me, since I pointed out that the warning is meaningless
because a character constant is an int!
sizeof(L'a') != sizeof(int) is even more surprising (on systems where
wchar_t is a short and short != int, eg: windows)
wchar_t is an oddity anyway, I never use it. If I am doing operations
with multi-byte character sets I use UCS-4 internally (in an int32_t,
since UCS-4 is a 31-bit type) and convert to and from UTF-8 (or a
specific 8-bit character set) externally. But if you have to do that,
there shouldn't be any character or string literals used at all.
Chqrlie.


In which character set is that? <g>

Chris C
Nov 14 '05 #35
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
On Tue, 16 Nov 2004 11:06:46 +0100, Charlie Gordon
<ne**@chqrlie.org> wrote:
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
/tmp/ttt.c:12:9: Operands of > have incompatible types (int, char): i > 'a'
A character constant is used as an int. Use +charintliteral to allow
character constants to be used as ints. (This is safe since the actual
type of a char constant is int.)

If there is really any "house style" violated by saying

int ch;
ch = getchar();
if (ch == EOF)
...
if (ch > 'a")
...

then I'd like to know, so I can avoid the place like the plague...
no house style violation, merely a syntax error on 'a" ;-)


Fair cop <g>. But it still applies to

if (ch == 'a')


I agree with you about the excessive warning in the example mentioned, I was
merely pointing a few problems with character constants that you are aware of,
but will surprise the majority of C programmers.
if (ch > 'a') ... is quite meaningless if you are trying to produce portable code (EBCDIC issues)


But whether ch is int or char (or long, short, etc.) is irrelevant to
that. You might as well give a warning if any character constant is
used anywhere.


No : the use of the > operator implies assumptions about the character set that
may well be false.
other uses of 'a' are not concerned.
And note that lclint gives the same warning if the code is

if (ch >= '0' && ch <= '9')
which is valid whatever the character set used. Section 5.2.1(3) says:
...


lclint is not smart enough ;-)
if (ch == 'é') ... stands a good chance of never matching anything read with getchar()


True, getchar (getc, fgetc) returns an unsigned value (except for EOF).
But since the value for 'é' needs to be determined with regard to the
current locale, which is only determinable at runtime, I would regard
any use of a character constant which is not 7-bit clean as an error.


getc() and fgetc() return an int, and treat the data stream as a sequence of
unsigned chars, which is inconsistent with the char type being signed by
default, and its consequences in terms of the value of character constants with
the high bit on.
I agree with you about the extra issues related to using 8 bit characters in
strings and character constants,

I could have written :

if (ch == '\351')... // may never match anything
and
if (ch == '\377') // may erroneously match EOF

Chqrlie.

PS: the 'q' in there is a French joke.

Nov 14 '05 #36
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:
Dan Pop wrote:
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:

For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or
Why recommend your students to use gcc in a non-conforming mode?


Because we cannot afford the Comeau compiler plus Dinkumware libraries.


You don't need them, either.
Because I hope that in the long run this is the better course than
teaching only C89 and doing C99 as add-on at the end, if at all,
For the time being, there is no point in teaching C99 at all: it is far
from clear that it will ever become an industry standard; for all we know
now, 5 years after its adoption, it may remain a committee pipe dream
forever.
and many useful constructs can be used already.
Not portably, which makes them far less useful.
If every C course did this, then there would be enough weight to get
full conformance not only in gcc but in most and eventually all
major compilers.
Wishful thinking. No one really needs the _Bool nonsense and most of
the big time number crunching is not done in C.
However, I fear that in the long run we will get C89 plus C99
standard library and nothing more.


More likely, C89 plus some (small) parts of the C99 language and some
(small) parts of C99 standard library.

I can see long long and inline becoming mainstream extensions to C89
and snprintf and the revised freopen as mainstream extensions to the C89
library. I also hope for VLAs, but I'm not holding my breath.

Note that gcc's support for both inline and VLAs is not conforming to the
C99 specification. That's why I wouldn't recommend gcc -std=c99 as the
proper teaching aid for a C99 course.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #37


Dan Pop wrote:
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:
Dan Pop wrote:
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:

For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or

Why recommend your students to use gcc in a non-conforming mode?
Because we cannot afford the Comeau compiler plus Dinkumware libraries.


You don't need them, either.


Assuming we want to use C99:
Which compiler/library combination conforming to the C99
standard would you suggest, then?

Because I hope that in the long run this is the better course than
teaching only C89 and doing C99 as add-on at the end, if at all,


For the time being, there is no point in teaching C99 at all: it is far
from clear that it will ever become an industry standard; for all we know
now, 5 years after its adoption, it may remain a committee pipe dream
forever.


I fear so, too. I still hope that it's only a chicken-and-egg problem: if one of
the widely used compilers gave us C99, it would start being used
and demanded in other compilers too.
As the gcc people still claim to aim for C99, I keep my hopes up.

and many useful constructs can be used already.


Not portably, which makes them far less useful.


Yep. So I teach both, hoping for the best... :-/

If every C course did this, then there would be enough weight to get
full conformance not only in gcc but in most and eventually all
major compilers.


Wishful thinking.


Granted :-)
No one really needs the _Bool nonsense and most of
the big time number crunching is not done in C.
Granted. But _Bool is really not my reason for using C99; I am
not sure what the latter part of your sentence refers to.

However, I fear that in the long run we will get C89 plus C99
standard library and nothing more.


More likely, C89 plus some (small) parts of the C99 language and some
(small) parts of C99 standard library.

I can see long long and inline becoming mainstream extensions to C89
and snprintf and the revised freopen as mainstream extensions to the C89
library. I also hope for VLAs, but I'm not holding my breath.


Yes. I also would like to have the types from <stdint.h> and designated
initializers.

Note that gcc's support for both inline and VLAs is not conforming to the
C99 specification. That's why I wouldn't recommend gcc -std=c99 as the
proper teaching aid for a C99 course.


I am aware of both; apart from flexible array members, most people
will not notice any difference with respect to the VLAs, but the
"extern inline" issue is rather nasty. I am not sure which is
actually the "better" solution.
The complex type support would have been a nice toy but is not
really necessary.

I find it more worrying that things like typeof have been kept
from us and that certain things are (still) so weak that they are
next to useless (e.g. volatile or bit fields).
Apart from that and even though I am aware that this is a
_very_ controversial issue, I would have liked to see in addition
to the standard library sort of extended libraries covering
stuff which should not be demanded for portable applications but
is more or less the object of reinventing the wheel for most
C programmers. These should really be kept apart from C as
such but could ease the way for "compatibility" between compilers
on mainstream systems. Could also be an additional standard.
*sigh* Now bring the flames...
Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.

Nov 14 '05 #38
pete <pf*****@mindspring.com> wrote:
Old Wolf wrote:
(I would prefer that (-1 < 0x1) were TRUE).


(-1 < 0x1) is true.

(-1 < 0x1u) is false.


Right. I thought that 0x.. constants were unsigned, I must
have been confusing that with some other situation
(although I can't think what).
Nov 14 '05 #39
Old Wolf wrote:
pete <pf*****@mindspring.com> wrote:
Old Wolf wrote:

(I would prefer that (-1 < 0x1) were TRUE).


(-1 < 0x1) is true.

(-1 < 0x1u) is false.


Right. I thought that 0x.. constants were unsigned, I must
have been confusing that with some other situation
(although I can't think what).


Probably you thought of the printf/scanf format %x which
expects unsigned integers.

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #40
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:


Dan Pop wrote:
In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:
Dan Pop wrote:

In <2v*************@uni-berlin.de> Michael Mair <Mi**********@invalid.invalid> writes:

>For my last C course I demanded "gcc -Wall -std=c99 -pedantic" or

Why recommend your students to use gcc in a non-conforming mode?

Because we cannot afford the Comeau compiler plus Dinkumware libraries.
You don't need them, either.


Assuming we want to use C99:
Which compiler/library combination conforming to the C99
standard would you suggest, then?


Any that claims C99 conformance. I haven't used any, so I can't have any
preferences.

Also note that gcc -std=c99 doesn't solve the C99 library conformance
issue by magic.
Because I hope that in the long run this is the better course than
teaching only C89 and doing C99 as add-on at the end, if at all,


For the time being, there is no point in teaching C99 at all: it is far
from clear that it will ever become an industry standard; for all we know
now, 5 years after its adoption, it may remain a committee pipe dream
forever.


I fear so, too. I still hope that it's only a chicken-and-egg problem: if one of
the widely used compilers gave us C99, it would start being used
and demanded in other compilers too.
As the gcc people still claim to aim for C99, I keep my hopes up.


I haven't noticed gcc making any progress since 2001. There is precious
little evidence that things are going to change any time soon and this
is not going to solve the library problem, anyway.
and many useful constructs can be used already.


Not portably, which makes them far less useful.


Yep. So I teach both, hoping for the best... :-/


IMHO, the resources spent on teaching C99 would be better used on
clarifying the darker aspects of C89. If C99 ever catches on, the
transition would be trivial for any experienced C89 programmer. I already
went through the transition from K&R C (the language I've actually learned
from K&R1) to ANSI C and it was a piece of cake. C89 to C99 should be
even easier.
No one really needs the _Bool nonsense and most of
the big time number crunching is not done in C.


Granted. But _Bool is really not my reason for using C99; I am
not sure what the latter part of your sentence refers to.


The bulk of the new features in C99 (language and library) address
numerical analysis applications.
I can see long long and inline becoming mainstream extensions to C89
and snprintf and the revised freopen as mainstream extensions to the C89
library. I also hope for VLAs, but I'm not holding my breath.


Yes. I also would like to have the types from <stdint.h>


To me, they look like a cure that is worse than the disease.

You can have (most of them, anyway) now, there are publicly available
implementations for C89 (e.g. http://www.lysator.liu.se/c/q8/index.html).
and designated initializers.
They look cute, but I've never had any real need for them. I'd rather
have compound literals instead, but I can happily live without them, too.
I find it more worrying that things like typeof have been kept
from us
Since no committee member had GNU C on his agenda, there was no real
attempt to give a serious look at the *many* very useful extensions
provided by GNU C, although they're nicely documented in a special
section of the gcc documentation. Apart from being good ideas, there
was also plenty of existing practice, as gcc is, by far, one of the most
popular implementations.
and that certain things are (still) so weak that they are
next to useless (e.g. volatile or bit fields).
volatile has its uses even in portable code (signal handlers) and bit
fields have never been meant to be used in portable code. Whether we
like to admit it here or not, C is still the language of choice for
low level (close to the hardware) programming where most of the things
cannot be done portably, anyway. So, the fact that your bit fields will
stop working when using a compiler for a different embedded control CPU
is not going to be much of a problem.
Apart from that, and even though I am aware that this is a
_very_ controversial issue, I would have liked to see, in addition
to the standard library, a set of extended libraries covering
functionality that should not be demanded of portable applications
but that most C programmers end up reinventing. These should be
kept apart from C as such, but could ease the way for
"compatibility" between compilers on mainstream systems. They
could also form an additional standard.


The best you can find in this direction is POSIX and BSD socket support.
The latter seems to be "universally" available these days and the POSIX
curses library is supported on almost any platform where it makes sense.

There are also widely supported GUI libraries, but they are largely
ignored because they don't provide as much functionality/convenience as
the natively supported ones.

It would be nice to have all these things covered by platform neutral
standards, but the lack of such standards doesn't make the available tools
any less useful.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #41
"Michael Mair" <Mi**********@invalid.invalid> wrote in message
news:30*************@uni-berlin.de...
Old Wolf wrote:
pete <pf*****@mindspring.com> wrote:
Old Wolf wrote:
(I would prefer that (-1 < 0x1) were TRUE).

(-1 < 0x1) is true.

(-1 < 0x1u) is false.


Right. I thought that 0x.. constants were unsigned, I must
have been confusing that with some other situation
(although I can't think what).


Probably you thought of the printf/scanf format %x which
expects unsigned integers.


Or maybe that some of them are and some ain't:

0x1 is signed or unsigned depending on context
0x80000000 is unsigned on architectures with 32-bit ints,
and so is 0xffffffff.

Chqrlie.
Nov 14 '05 #42
In <cn**********@reader1.imaginet.fr> "Charlie Gordon" <ne**@chqrlie.org> writes:
0x1 is signed or unsigned depending on context
Could you point out a context where 0x1 doesn't have the type signed int?
0x80000000 is unsigned on architectures with 32 bit ints.
and so is 0xffffffff.


OTOH, the type signed int can represent the value of 0x1 on *any*
architecture.

The actual rules are a monument of inconsistency, so it's better to use
all the suffixes needed to get the desired type and make your intentions
perfectly clear to anyone who might be reading your code.

5 The type of an integer constant is the first of the corresponding
list in which its value can be represented.

Suffix        | Decimal Constant       | Octal or Hexadecimal Constant
--------------+------------------------+------------------------------
none          | int                    | int
              | long int               | unsigned int
              | long long int          | long int
              |                        | unsigned long int
              |                        | long long int
              |                        | unsigned long long int
--------------+------------------------+------------------------------
u or U        | unsigned int           | unsigned int
              | unsigned long int      | unsigned long int
              | unsigned long long int | unsigned long long int
--------------+------------------------+------------------------------
l or L        | long int               | long int
              | long long int          | unsigned long int
              |                        | long long int
              |                        | unsigned long long int
--------------+------------------------+------------------------------
Both u or U   | unsigned long int      | unsigned long int
and l or L    | unsigned long long int | unsigned long long int
--------------+------------------------+------------------------------
ll or LL      | long long int          | long long int
              |                        | unsigned long long int
--------------+------------------------+------------------------------
Both u or U   | unsigned long long int | unsigned long long int
and ll or LL  |                        |

If an integer constant cannot be represented by any type in
its list, it may have an extended integer type, if the extended
integer type can represent its value. If all of the types in the
list for the constant are signed, the extended integer type shall
be signed. If all of the types in the list for the constant are
unsigned, the extended integer type shall be unsigned. If the
list contains both signed and unsigned types, the extended
integer type may be signed or unsigned.

Note that this is inconsistent with the C89 rules, too: in C89 an
unsuffixed decimal constant gets the type unsigned long if its value
cannot be represented by long, while in C99 unsigned types are not an
option for unsuffixed decimal constants at all.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #43
Charlie Gordon wrote:
0x1 is signed or unsigned depending on context


No. 0x1 is an expression of type int in any context.

--
pete
Nov 14 '05 #44
