
Code Review: strncpy

Hi,

Is the following strncpy implementation
conforming to C99?

char *
strncpy ( char *s, const char *t, size_t n )
{
    char *p = s;
    size_t i = 0;

    if ( !s )
        return s;

    while ( t && i < n )
    {
        *s++ = *t++;
        i++;
    }

    if ( !t )
    {
        do
            *s++ = '\0';
        while ( i++ < n );
    }

    return p;
}
Thanks
--
Vijay Kumar R Zanvar
Nov 14 '05 #1


nrk
Vijay Kumar R Zanvar wrote:
Hi,

Is the following strncpy implementation
according to C99?

char *
strncpy ( char *s, const char *t, size_t n )
{
char *p = s;
size_t i = 0;

if ( !s )
return s;

while ( t && i < n )
ITYM:
while ( *t && i < n )
{
*s++ = *t++;
i++;
}

if ( !t )
ITYM:
if ( *t == 0 )
{
do
*s++ = '\0';
while ( i++ < n );
Ouch!! What happens if *t == 0 and i == n?

-nrk.
}

return p;
}
Thanks
--
Vijay Kumar R Zanvar


Nov 14 '05 #2
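(An aside on nrk's boundary question: the standalone sketch below is not code from the thread; it just counts how many padding writes each loop form performs once i has already reached n. Because do..while tests its condition after the body, it always performs one write, which in the code above would store a '\0' one byte past the end of the destination buffer.)

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t i, n = 4, writes;

    /* do..while runs the body before testing, so even with i == n it
     * performs one write -- one byte past the caller's n-byte region. */
    i = 4; writes = 0;
    do { writes++; } while ( i++ < n );
    printf("do..while padding writes: %zu\n", writes);   /* prints 1 */

    /* a plain while loop tests first and performs no write here */
    i = 4; writes = 0;
    while ( i++ < n ) { writes++; }
    printf("while padding writes:     %zu\n", writes);   /* prints 0 */

    return 0;
}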

"Vijay Kumar R Zanvar" <vi*****@hotpop.com> wrote in message
news:bu************@ID-203837.news.uni-berlin.de...
Hi,

Is the following strncpy implementation
according to C99?

char *
strncpy ( char *s, const char *t, size_t n )
strncpy ( char * restrict s, const char * restrict t, size_t n )
{
char *p = s;
You never change p. I would declare it as char * const p;
size_t i = 0;

if ( !s )
return s;
The standard does not mandate this check. But if you do it already, why not
check for !t as well?

while ( t && i < n )
while ( *t && i < n )
{
*s++ = *t++;
i++;
}

if ( !t )
if ( !*t )
{
do
*s++ = '\0';
while ( i++ < n );
}

return p;
}


Apart from missing asterisks and restrict keywords, I think it should be OK.
I would also replace i++ < n in the loops with n-- and get rid of an extra
variable, but that's me, not you ;-)

Peter
Nov 14 '05 #3
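(For reference, the declaration Peter's correction matches is the one C99 itself gives for strncpy in <string.h>, 7.21.2.4:)

#include <stddef.h>   /* size_t */

/* C99 7.21.2.4: the prototype of strncpy as declared in <string.h> */
char *strncpy(char * restrict s1, const char * restrict s2, size_t n);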

Thanks a lot nrk and Peter....
Small mistakes but big effects!

/* modified */

char *
strncpy ( char * restrict s, const char * restrict t, size_t n )
{
    char * const p = s;

    if ( !s || !t )
        return s;

    while ( *t && n )
    {
        *s++ = *t++;
        n--;
    }

    if ( !*t && n )
    {
        do
            *s++ = '\0';
        while ( n-- );
    }

    return p;
}

Nov 14 '05 #4

nrk
Vijay Kumar R Zanvar wrote:
Thanks a lot nrk and Peter....
Small mistakes but big affects!

/* modified */

char *
strncpy ( char * restrict s, const char * restrict t, size_t n )
{
char * const p = s;

if ( !s || !t )
return s;

while ( *t && n )
{
*s++ = *t++;
n--;
}

if ( !*t && n )
Not exactly a problem, but just if ( n ) suffices.
{
do
*s++ = '\0';
while ( n-- );
Ouch again!! Think back to how the Post-decrement operator works. You can
fix this by either making this a while loop instead of do..while (I prefer
this solution) or replacing n-- with --n.

-nrk.
}

return p;
}


Nov 14 '05 #5
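(A standalone trace of the off-by-one nrk describes, counting writes instead of touching a real buffer; illustration only, not thread code. Suppose exactly one byte of padding is still owed, i.e. n == 1.)

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t n, writes;

    n = 1; writes = 0;
    do { writes++; } while ( n-- );     /* post-decrement, tested after body */
    printf("do..while (n--): %zu writes\n", writes);   /* 2 -- one too many  */

    n = 1; writes = 0;
    do { writes++; } while ( --n );     /* pre-decrement fix                 */
    printf("do..while (--n): %zu writes\n", writes);   /* 1 -- correct       */

    n = 1; writes = 0;
    while ( n-- ) { writes++; }         /* nrk's preferred while-loop form   */
    printf("while (n--):     %zu writes\n", writes);   /* 1 -- correct       */

    return 0;
}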

"nrk" <ra*********@devnull.verizon.net> wrote in message
news:PN*****************@nwrddc01.gnilink.net...
Vijay Kumar R Zanvar wrote:
if ( !*t && n )
Not exactly a problem, but just if ( n ) suffices.
{
do
*s++ = '\0';
while ( n-- );


Ouch again!! Think back to how the Post-decrement operator works. You

can fix this by either making this a while loop instead of do..while (I prefer
this solution) or replacing n-- with --n.
}


Which, together with your first comment, makes the if completely
unnecessary:

while (n--)
*s++ = 0;

Peter
Nov 14 '05 #6
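(Pulling the thread's corrections together, one possible resulting implementation is sketched below. It is renamed my_strncpy so it does not collide with the library's reserved name, and the null-pointer check is an extension beyond what the standard requires, as Peter pointed out earlier; the real strncpy simply has undefined behaviour for null arguments.)

#include <stdio.h>
#include <stddef.h>

char *my_strncpy(char * restrict s, const char * restrict t, size_t n)
{
    char * const p = s;

    if ( !s || !t )        /* not required by the standard; kept from the thread */
        return s;

    while ( n && *t )      /* copy at most n characters from t */
    {
        *s++ = *t++;
        n--;
    }

    while ( n-- )          /* pad any remainder with null characters */
        *s++ = '\0';

    return p;
}

int main(void)
{
    char buf[8];

    my_strncpy(buf, "hi", sizeof buf);   /* copies "hi", pads with six '\0's */
    printf("%s\n", buf);
    return 0;
}

As with the real strncpy, the result is not null-terminated when the source is at least n characters long.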

> while (n--)
*s++ = 0;


*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.
Nov 14 '05 #7

nrk
Old Wolf wrote:
while (n--)
*s++ = 0;


*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.


And what type does '\0' have?

In this particular case, the result is not implementation-defined since the
null character is guaranteed to be 0.

-nrk.
Nov 14 '05 #8

In article <84**************************@posting.google.com >
Old Wolf <ol*****@inspire.net.nz> writes:
while (n--)
*s++ = 0;


*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.


To some extent, yes. Alas, '\0' *also* has type (signed) int,
with the same value as 0. So this change changes nothing at all!
Luckily, any ordinary int with value 0 must convert to the char
with value 0 (because it is guaranteed to be in range -- CHAR_MIN
is no greater than 0 and CHAR_MAX is at least 127).

I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string terminator
character" rather than "the small integer whose value is 0". (These
are the same thing of course -- but if you were to rewrite all the
code to use, say, counted-length strings, they would suddenly become
different. That is, the source language has no way to distinguish
these two semantics, but if we choose to translate the code to some
other form or language, we might need a different translation. On
the principle "say *what* you want to happen, rather than *how*
you want it to happen", I thus prefer the '\0' form -- I think it
better reflects the "what". Others might reasonably disagree.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #9

Old Wolf <ol*****@inspire.net.nz> wrote:
while (n--)
*s++ = 0;
*s++ = '\0';
0 has type "signed int", and the conversion from signed int
to char is implementation-defined.


Actually, character literals also have type 'int', so these two
forms are exactly equivalent. The "conversion" is perfectly well
defined if the int value is small enough to fit in the char.

--
Alex Monjushko (mo*******@hotmail.com)
Nov 14 '05 #10
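(A quick standalone check of the point Alex and Chris are making -- in C, unlike C++, a character constant has type int:)

#include <stdio.h>

int main(void)
{
    /* In C a character constant such as '\0' has type int, so the first
     * two sizes match; sizeof(char) is 1 by definition. */
    printf("sizeof '\\0' = %zu, sizeof(int) = %zu, sizeof(char) = %zu\n",
           sizeof '\0', sizeof (int), sizeof (char));
    return 0;
}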

In article <84**************************@posting.google.com > ol*****@inspire.net.nz (Old Wolf) writes:
while (n--)
*s++ = 0;


*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.


Eh? Anyhow, the type of '\0' is also signed int, so what is the
improvement?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Nov 14 '05 #11

"Chris Torek" <no****@torek.net> wrote in message
news:bu********@enews3.newsguy.com...
In article <84**************************@posting.google.com >
Old Wolf <ol*****@inspire.net.nz> writes:
while (n--)
*s++ = 0;
*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.

[snip]
I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string
terminator character" rather than "the small integer whose value is
0".


Personally, I'd say the fact that the variable being assigned is a
dereferenced char * is a pretty good indication of which is meant.

Alex
Nov 14 '05 #12

Alex wrote:

"Chris Torek" <no****@torek.net> wrote in message
news:bu********@enews3.newsguy.com...
In article <84**************************@posting.google.com >
Old Wolf <ol*****@inspire.net.nz> writes:
> while (n--)
> *s++ = 0;

*s++ = '\0';

0 has type "signed int", and the conversion from signed int
to char is implementation-defined.

[snip]
I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string
terminator character" rather than "the small integer whose value is
0".


Personally, I'd say the fact that the variable being assigned is a
dereferenced char * is a pretty good indication of which is meant.


Looking at the code that you quoted, '\0' suggests that
the code is about strings. You didn't quote enough code
to show that s is of type pointer to char.

--
pete
Nov 14 '05 #13

> >> *s++ = 0;
0 has type "signed int", and the conversion from signed int
to char is implementation-defined.
To some extent, yes. Alas, '\0' *also* has type (signed) int,
with the same value as 0. So this change changes nothing at all!


Aha. My thanks to the poster the other day who noted that the best
way to get an answer is to make some assertion and wait for people
to jump on you :) I had previously made enquiries as to why people
bothered with '\0' instead of the easier-to-type 0 but gotten no
answer.
I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string terminator
character" rather than "the small integer whose value is 0".


So this is just an idiom that you are supposed to have picked up
while learning the language (like using upper-case characters
for macro names vs. function names)?

Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).

How did this idiom originate historically?
Nov 14 '05 #14

nrk
Old Wolf wrote:
>> *s++ = 0;
>0 has type "signed int", and the conversion from signed int
>to char is implementation-defined.
To some extent, yes. Alas, '\0' *also* has type (signed) int,
with the same value as 0. So this change changes nothing at all!


Aha. My thanks to the poster the other day who noted that the best
way to get an answer is to make some assertion and wait for people
to jump on you :) I had previously made enquiries as to why people
bothered with '\0' instead of the easier-to-type 0 but gotten no
answer.
I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string terminator
character" rather than "the small integer whose value is 0".


So this is just an idiom that you are supposed to have picked up
while learning the language (like using upper-case characters
for macro names vs. function names)?

Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).


Macro names that begin with E followed by a digit or an uppercase letter are
reserved by the implementation (for future <errno.h> values).

Personally, I prefer shorter forms that don't compromise readability and
therefore tend to use 0 instead of '\0'. YMMV, of course.

-nrk.
How did this idiom originate historically?


--
Remove devnull for email
Nov 14 '05 #15


On Thu, 15 Jan 2004, Old Wolf wrote:
> *s++ = 0;
0 has type "signed int", and the conversion from signed int
to char is implementation-defined.
To some extent, yes. Alas, '\0' *also* has type (signed) int,
with the same value as 0. So this change changes nothing at all!


Aha. My thanks to the poster the other day who noted that the best
way to get an answer is to make some assertion and wait for people
to jump on you :) I had previously made enquiries as to why people
bothered with '\0' instead of the easier-to-type 0 but gotten no
answer.


I think Chris Torek's answer in this thread (roughly, "because it's
supposed to be a character, so make it look like one") is the best
rationale.

I still (slightly) prefer '\0' in this context, for no reason other
than to convey to a human reader that you mean "the string terminator
character" rather than "the small integer whose value is 0".


So this is just an idiom that you are supposed to have picked up
while learning the language (like using upper-case characters
for macro names vs. function names)?


Basically. It's an idiom that you're supposed to encounter *earlier*
in your language-learning career than the fact that chars are just
small integers anyway; thus it's supposed to make *more* sense to use
a character when you mean a character, and zero when you mean zero.
You see? :)
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).
Personally, I think that's silly in the extreme. It doesn't help
readability any, since it's just substituting a programmer-specific
idiom for a language-wide idiom, and it makes the code longer. It
also requires either that you make a new header to #include this
#definition in every program you write, or that you duplicate the
code in every translation unit.
Pedantically, it invokes undefined behavior by trying to re#define
an identifier reserved to the implementation, should the implementation
ever find the need to signal to you that it's encountered an Error
having something to do with ND_OF_STRING. :)

[My first objection is a little hypocritical, perhaps, as many of
my own programs use #define steq(x,y) (!strcmp(x,y)) to simplify
the argument parsing code: another programmer-specific idiom substituted
for a perfectly good language-wide idiom. But in my defense, I'm making
the code shorter and less error-prone, not longer and murkier.]
How did this idiom originate historically?


By the need to be able to include embedded nulls in string literals.
All the string escape codes (\n,\r,\a,\0,\b,...) are legitimate escape
codes for character literals, too. As for why the language designers
picked \ to be the escape character in literals, I couldn't say.
"Historical reasons" of some sort, no doubt.

-Arthur
Nov 14 '05 #16
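(For what it's worth, a small hypothetical fragment showing the kind of argument-parsing use Arthur has in mind for his steq macro; the flag strings here are made up purely for illustration.)

#include <string.h>

#define steq(x, y) (!strcmp((x), (y)))   /* the shorthand mentioned above */

/* hypothetical option check using the macro */
static int is_verbose_flag(const char *arg)
{
    return steq(arg, "-v") || steq(arg, "--verbose");
}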

On 15 Jan 2004 13:55:44 -0800, ol*****@inspire.net.nz (Old Wolf)
wrote:
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).

How did this idiom originate historically?


Historically? Dunno. But it's the standard escape sequence for
designating a character by its value in octal.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #17
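(Expanding slightly on Al's point, since it is easy to forget that '\0' is just the one-digit case of the octal escape syntax; the non-zero values in this standalone sketch assume an ASCII execution character set.)

#include <stdio.h>

int main(void)
{
    printf("'\\0'   has value %d\n", '\0');     /* 0 on every implementation */
    printf("'\\101' has value %d\n", '\101');   /* 65, i.e. 'A' in ASCII     */
    printf("'\\12'  has value %d\n", '\12');    /* 10, '\n' in ASCII         */
    return 0;
}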

In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).


That's what is called obfuscation. A sure sign of a wannabe-programmer.
Nov 14 '05 #18

"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).


That's what is called obfuscation. I sure sign of a wannabe-programmer.


So, what do you think of people who use NULL?

These symbolic constants are equally useful and/or useless (depending on
your point of view), in the context of a language where an integer constant
zero serves a number of roles!

--
Peter
Nov 14 '05 #19

On Fri, 16 Jan 2004 23:05:22 +1100, "Peter Nilsson"
<ai***@acay.com.au> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
> Personally I have this defined:
> #define END_OF_STRING '\0'
> which makes for greatly readable code (I only eschew it in throwaway
> programs, or when it would make my line length exceed 80 chars).
That's what is called obfuscation. I sure sign of a wannabe-programmer.


So, what do you think of people who use NULL?


The same as I think of people who use "while" or "switch". They're
using standard C. "END_OF_STRING" does not appear in the standard.
These symbolic constants are equally useful and/or useless (depending on
your point of view), in the context of a language where an integer constant
zero serves a number of roles!


Which is why I prefer the (standard) '\0' form to emphasize that I
intend one of those roles, that of the end of string marker. In fact,
if I saw code using END_OF_STRING my assumption would be that the
programmer was using some other character to mean end of string, and
I'd have to go find the definition. That's (mild) obfuscation.

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 14 '05 #20

In <40******@news.rivernet.com.au> "Peter Nilsson" <ai***@acay.com.au> writes:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
> Personally I have this defined:
> #define END_OF_STRING '\0'
> which makes for greatly readable code (I only eschew it in throwaway
> programs, or when it would make my line length exceed 80 chars).
That's what is called obfuscation. I sure sign of a wannabe-programmer.


So, what do you think of people who use NULL?


It depends on whether they are using it correctly or not ;-)
These symbolic constants are equally useful and/or useless (depending on
your point of view), in the context of a language where an integer constant
zero serves a number of roles!


The purpose of NULL is to avoid using 0 in a context where an integer
doesn't *naturally* belong.

END_OF_STRING serves no such purpose: 0 or '\0' are perfectly natural
representations of the null character and there is exactly one reason
for having a null character inside a string.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #21

Christian Bau wrote:
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
Personally I have this defined:
#define END_OF_STRING '\0'
which makes for greatly readable code (I only eschew it in throwaway
programs, or when it would make my line length exceed 80 chars).


That's what is called obfuscation. I sure sign of a wannabe-programmer.


How do you feel about

#define EOS '\0'

? Is that, too, a sure sign of a wannabe-programmer?

Before you answer, I should point out that that line is quoted from source
code written by Doug Gwyn for the book "Software Solutions in C". For my
part, I consider Mr Gwyn to be a real programmer, not a wannabe. (Minor
nit: EOS obviously invades implementation namespace, and I'm pretty sure Mr
Gwyn knows that.)

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 14 '05 #22

On Fri, 16 Jan 2004 22:53:54 +0000 (UTC), in comp.lang.c , Richard
Heathfield <do******@address.co.uk.invalid> wrote:

How do you feel about

#define EOS '\0'

? Is that, too, a sure sign of a wannabe-programmer?
Very very close to a sure sign. In fact, it's a million-to-one chance.
Before you answer, I should point out that that line is quoted from source
code written by Doug Gwyn for the book "Software Solutions in C".
For my part, I consider Mr Gwyn to be a real programmer, not a wannabe.


Me too. But he's not immune from adding wannabe-programmer style
obfuscatory gratuity to his code merely because he's a real
programmer.

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
----== Posted via Newsfeed.Com - Unlimited-Uncensored-Secure Usenet News==----
http://www.newsfeed.com The #1 Newsgroup Service in the World! >100,000 Newsgroups
---= 19 East/West-Coast Specialized Servers - Total Privacy via Encryption =---
Nov 14 '05 #23

Mark McIntyre wrote:

On Fri, 16 Jan 2004 22:53:54 +0000 (UTC), in comp.lang.c , Richard
Heathfield <do******@address.co.uk.invalid> wrote:

How do you feel about

#define EOS '\0'

? Is that, too, a sure sign of a wannabe-programmer?


Very very close to a sure sign. In fact, its a million-to-one chance.
Before you answer, I should point out that that line is quoted from source
code written by Doug Gwyn for the book "Software Solutions in C".
For my part, I consider Mr Gwyn to be a real programmer, not a wannabe.


Me too. But he's not immune from adding wannabe-programmer style
obfuscatory gratuity to his code merely because he's a real
programmer.


I'm with Mark on this one.
I don't know if he had a good reason for writing bad style,
but that's bad style.

--
pete
Nov 14 '05 #24

Richard Heathfield wrote:
.... snip ...
How do you feel about

#define EOS '\0'

? Is that, too, a sure sign of a wannabe-programmer?

Before you answer, I should point out that that line is quoted
from source code written by Doug Gwyn for the book "Software
Solutions in C". For my part, I consider Mr Gwyn to be a real
programmer, not a wannabe. (Minor nit: EOS obviously invades
implementation namespace, and I'm pretty sure Mr Gwyn knows that.)


That has been around for a loooong time. At least 25 years. I
don't remember where I first saw it, or even in what language.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #25

pete wrote:
Mark McIntyre wrote:

On Fri, 16 Jan 2004 22:53:54 +0000 (UTC), in comp.lang.c , Richard
Heathfield <do******@address.co.uk.invalid> wrote:
>
>How do you feel about
>
>#define EOS '\0'
>
>? Is that, too, a sure sign of a wannabe-programmer?


Very very close to a sure sign. In fact, its a million-to-one chance.
>Before you answer, I should point out that that line is quoted from
>source code written by Doug Gwyn for the book "Software Solutions in C".
>For my part, I consider Mr Gwyn to be a real programmer, not a wannabe.


Me too. But he's not immune from adding wannabe-programmer style
obfuscatory gratuity to his code merely because he's a real
programmer.


I'm with Mark on this one.
I don't know if he had a good reason for writing bad style,
but that's bad style.


I agree, but there's a big difference between saying "that's bad style" and
"that's a SURE SIGN of a wannabe-programmer" (my caps).

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 14 '05 #26

On Sat, 17 Jan 2004 08:06:23 +0000 (UTC), in comp.lang.c , Richard
Heathfield <do******@address.co.uk.invalid> wrote:
I agree, but there's a big difference between saying "that's bad style" and
"that's a SURE SIGN of a wannabe-programmer" (my caps).


Well, you know what they say about all generalisations.... :-)

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
----== Posted via Newsfeed.Com - Unlimited-Uncensored-Secure Usenet News==----
http://www.newsfeed.com The #1 Newsgroup Service in the World! >100,000 Newsgroups
---= 19 East/West-Coast Specialized Servers - Total Privacy via Encryption =---
Nov 14 '05 #27

"Dan Pop" <Da*****@cern.ch> wrote in message
news:bu**********@sunnews.cern.ch...
In <40******@news.rivernet.com.au> "Peter Nilsson" <ai***@acay.com.au>

writes:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
> Personally I have this defined:
> #define END_OF_STRING '\0'
> which makes for greatly readable code (I only eschew it in throwaway
> programs, or when it would make my line length exceed 80 chars).

That's what is called obfuscation. I sure sign of a wannabe-programmer.


So, what do you think of people who use NULL?


It depends on whether they are using it correctly or not ;-)
These symbolic constants are equally useful and/or useless (depending on
your point of view), in the context of a language where an integer constant
zero serves a number of roles!


The purpose of NULL is to avoid using 0 in a context where an integer
doesn't *naturally* belong.


The standards formally differentiate integer and pointer types. The choice
of null pointer constants (for compatibility with pre-existing C) _makes_ 0
belong *naturally*.

Macros like EOS and END_OF_STRING are as unnecessary as NULL is an
anachronism.

--
Peter
Nov 14 '05 #28

"Alan Balmer" <al******@att.net> wrote in message
news:4l********************************@4ax.com...
On Fri, 16 Jan 2004 23:05:22 +1100, "Peter Nilsson"
<ai***@acay.com.au> wrote:
"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
news:ch*********************************@slb-newsm1.svr.pol.co.uk...
In article <84**************************@posting.google.com >,
ol*****@inspire.net.nz (Old Wolf) wrote:
> Personally I have this defined:
> #define END_OF_STRING '\0'
> which makes for greatly readable code (I only eschew it in throwaway
> programs, or when it would make my line length exceed 80 chars).

That's what is called obfuscation. I sure sign of a wannabe-programmer.
So, what do you think of people who use NULL?


The same as I think of people who use "while" or "switch". They're
using standard C. "END_OF_STRING" does not appear in the standard.


Your argument (so far) could also be used by advocates wanting the macro (or
one like it) put _into_ the standard.
These symbolic constants are equally useful and/or useless (depending on
your point of view), in the context of a language where an integer constant
zero serves a number of roles!


Which is why I prefer the (standard) '\0' form to emphasize that I
intend one of those roles, that of the end of string marker.


Why do you feel you need to emphasise it? Do you feel C programmers should
use NULL? If so, why?
In fact,
if I saw code using END_OF_STRING my assumption would be that the
programmer was using some other character to mean end of string, and
I'd have to go find the definition. That's (mild) obfuscation.


I agree. But I think Christian Bau's labelling of Old Wolf detracted from
his comment.

--
Peter
Nov 14 '05 #29

In <40******@news.rivernet.com.au> "Peter Nilsson" <ai***@acay.com.au> writes:
"Dan Pop" <Da*****@cern.ch> wrote in message
news:bu**********@sunnews.cern.ch...
In <40******@news.rivernet.com.au> "Peter Nilsson" <ai***@acay.com.au> writes:
>"Christian Bau" <ch***********@cbau.freeserve.co.uk> wrote in message
>news:ch*********************************@slb-newsm1.svr.pol.co.uk...
>> In article <84**************************@posting.google.com >,
>> ol*****@inspire.net.nz (Old Wolf) wrote:
>> > Personally I have this defined:
>> > #define END_OF_STRING '\0'
>> > which makes for greatly readable code (I only eschew it in throwaway
>> > programs, or when it would make my line length exceed 80 chars).
>>
>> That's what is called obfuscation. I sure sign of a wannabe-programmer.
>
>So, what do you think of people who use NULL?


It depends on whether they are using it correctly or not ;-)
>These symbolic constants are equally useful and/or useless (depending on
>your point of view), in the context of a language where an integer constant
>zero serves a number of roles!


The purpose of NULL is to avoid using 0 in a context where an integer
doesn't *naturally* belong.


The standards formally differentiate integer and pointer types. The choice
of null pointer constants (for compatibility with pre-existing C) _makes_ 0
belong *naturally*.


Nope, it belongs there by *pure* convention. In the absence of this
convention, a plain 0, having type int, would have no place in a pointer
context.
Macros like EOS and END_OF_STRING are as unnecessary as NULL is an
anachronism.


The anachronism is making 0 a null pointer constant. The purpose of NULL
is to hide this anachronism from C code. If the only null pointer
constant were (void *)0, one could argue that the *only* purpose of NULL
was to save a few keystrokes, since (void *)0 naturally belongs to a
pointer context.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #30
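(A tiny standalone sketch of the distinction Dan is drawing: 0, NULL and (void *)0 are all null pointer constants in C, so the three initialisations below are equivalent; the disagreement is only over which form reads most naturally in a pointer context.)

#include <stddef.h>   /* NULL */

int main(void)
{
    char *a = 0;            /* plain integer constant zero          */
    char *b = NULL;         /* the standard macro                   */
    char *c = (void *)0;    /* an explicitly pointer-typed null     */

    return (a == b && b == c) ? 0 : 1;   /* always returns 0 */
}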

CBFalconer wrote:

Richard Heathfield wrote:

... snip ...

How do you feel about

#define EOS '\0'

? Is that, too, a sure sign of a wannabe-programmer?

Before you answer, I should point out that that line is quoted
from source code written by Doug Gwyn for the book "Software
Solutions in C". For my part, I consider Mr Gwyn to be a real
programmer, not a wannabe. (Minor nit: EOS obviously invades
implementation namespace, and I'm pretty sure Mr Gwyn knows that.)


That has been around for a loooong time. At least 25 years. I
don't remember where I first saw it, or even in what language.


Every once in a while I see macros which make me think
"Pascal writers, forced to write C at gunpoint"

Is EOS like one of those ?

--
pete
Nov 14 '05 #31
