Bytes IT Community

Doubt regarding strcpy()

Hi

I am using strcpy() in my code to copy one string to another.
I am using static char arrays.

The first time through, it executes correctly, but the second time
control reaches it, a SEGMENTATION FAULT occurs.

Please tell me in what cases this problem can occur when using
strcpy().

Thanks

Aug 2 '06 #1
38 Replies


edu.mvk wrote:
Hi

I am using strcpy() in my code to copy one string to another.
I am using static char arrays.

The first time through, it executes correctly, but the second time
control reaches it, a SEGMENTATION FAULT occurs.

Please tell me in what cases this problem can occur when using
strcpy().
It's impossible to enumerate all the cases. You're passing invalid
arguments to strcpy().

Most likely it's one of the following:

1. The source char* doesn't point to a null-terminated
sequence of chars.

2. The destination char* doesn't point to the beginning
of an array of chars big enough to hold the copied string.
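For illustration, here is a sketch of a guarded copy that avoids both failure modes (the helper name and the fixed buffer size are mine, not from the original question):

```cpp
#include <cstring>

// Hypothetical helper: copy src into a fixed 16-byte buffer, but only
// after checking that it fits. Plain strcpy performs no such check.
bool copy_if_fits(char (&dst)[16], const char* src)
{
    // strlen itself requires src to be null terminated (case 1).
    if (std::strlen(src) >= sizeof dst)
        return false;               // would overflow dst (case 2)
    std::strcpy(dst, src);
    return true;
}
```

If either precondition is violated, no check inside strcpy will save you; the guard has to live in the calling code.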
Aug 2 '06 #2

edu.mvk wrote:
I am using strcpy() in my code to copy one string to another.
I am using static char arrays.
You should try to use the tools C++ provides instead of old and
dangerous C facilities.
Aug 2 '06 #3

edu.mvk wrote:
Hi

I am using strcpy() in my code to copy one string to another.
I am using static char arrays.

The first time through, it executes correctly, but the second time
control reaches it, a SEGMENTATION FAULT occurs.

Please tell me in what cases this problem can occur when using
strcpy().
No. Show us your code and we will tell you specifically where you went
wrong. See the newsgroup FAQ list:

<http://www.parashift.com/c++-faq-lite/how-to-post.htm>
Otherwise, get a textbook on C and read up on strcpy. When you know how
it works, then you can pretty easily figure out what doesn't work.

Brian

Aug 2 '06 #4

edu.mvk wrote:
Hi

I am using strcpy() in my code to copy one string to another.
I am using static char arrays.

The first time through, it executes correctly, but the second time
control reaches it, a SEGMENTATION FAULT occurs.

Please tell me in what cases this problem can occur when using
strcpy().
Make sure you don't try to overwrite literals with this function.
Could your 'dest' argument be pointing at a string literal rather
than at an array?
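A short sketch of the distinction (the function name is mine): an array initialized from a literal owns a modifiable copy, while the literal itself lives in read-only storage:

```cpp
#include <cstring>

// Returns the buffer after overwriting it, to show the copy is writable.
const char* overwrite_array_copy()
{
    static char writable[] = "hello";  // array: a modifiable copy of the literal
    std::strcpy(writable, "world");    // fine: 6 bytes, fits exactly

    // const char* literal = "hello";  // pointer into the literal itself
    // std::strcpy(const_cast<char*>(literal), "world");  // undefined behavior,
    //                                                    // typically a segfault
    return writable;
}
```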

HTH,
- J.
Aug 2 '06 #5

On Wed, 02 Aug 2006 17:34:44 +0200, loufoque
<lo******@remove.gmail.com> wrote in comp.lang.c++:
edu.mvk wrote:
I am using strcpy() in my code to copy one string to another.
I am using static char arrays.

You should try to use the tools C++ provides instead of old and
dangerous C facilities.
You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Aug 3 '06 #6

Jack Klein wrote:
On Wed, 02 Aug 2006 17:34:44 +0200, loufoque
<lo******@remove.gmail.com> wrote in comp.lang.c++:
edu.mvk wrote:
I am using strcpy() in my code to copy one string to another.
I am using static char arrays.
You should try to use the tools C++ provides instead of old and
dangerous C facilities.

You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.
The only correct way to use strcpy() is not to.

Aug 3 '06 #7

Jack Klein skrev:
On Wed, 02 Aug 2006 17:34:44 +0200, loufoque
<lo******@remove.gmail.com> wrote in comp.lang.c++:
edu.mvk wrote:
I am using strcpy() in my code to copy one string to another.
I am using static char arrays.
You should try to use the tools C++ provides instead of old and
dangerous C facilities.

You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.
Is that really your opinion? I thought strcpy (and its cousin in the
std namespace) was there primarily for reasons of portability. In fact,
at the company where I work, we are phasing strcpy out (our codebase is
large and some of it is very old), our goal being not to have any
unsafe C-style calls in our code at all.
What, in your opinion, is the strcpy function good for in new C++ code?

/Peter

Aug 3 '06 #8

Noah Roberts posted:
>You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.

The only correct way to use strcpy() is not to.

You are a C++ programmer who has a heavy bias toward using C++ features.

I am a C++ programmer who has a heavy bias toward using C features.

To state that one simply shouldn't use "strcpy" is your own opinion -- a
flawed one, in my opinion.

strcpy is an efficient algorithm for copying a null-terminated string. If you
want an efficient algorithm for copying a null-terminated string, then you
would be wise to choose "strcpy".

However, if you like to opt for the inefficient route, then you could always
use "std::string".

--

Frederick Gotham
Aug 3 '06 #9

peter koch posted:
What, in your opinion, is the strcpy function good for in new C++ code?

For copying a null terminated string.

#include <cstring>  // for std::strlen, std::strcpy

class MyClass {

char *p;

public:

MyClass() : p(0) {}

~MyClass() { delete [] p; }

void Set(char const *const parg)
{
std::size_t const len = std::strlen(parg) + 1;

delete [] p;

p = new char[len];

std::strcpy(p, parg);
}
};

You could also consider using memcpy.

--

Frederick Gotham
Aug 3 '06 #10

Frederick Gotham skrev:
peter koch posted:
What, in your opinion, is the strcpy function good for in new C++ code?


For copying a null terminated string.

#include <cstring>  // for std::strlen, std::strcpy

class MyClass {

char *p;

public:

MyClass() : p(0) {}

~MyClass() { delete [] p; }

void Set(char const *const parg)
{
std::size_t const len = std::strlen(parg) + 1;

delete [] p;

p = new char[len];

std::strcpy(p, parg);
}
};
Right. But that is highly specialized code - actually, you are inventing
your own string class here. A better solution would be to make p a
std::string.
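For comparison, a sketch of the same class with p as a std::string (the Get() accessor is my addition, for illustration only):

```cpp
#include <string>

// No manual new/delete, no strcpy: allocation, copying, and
// destruction are all handled by std::string itself.
class MyClass
{
    std::string p;
public:
    void Set(char const* parg) { p = parg; }
    const std::string& Get() const { return p; }
};
```

Note that, unlike the char* version, this class is also safe to copy and assign without any extra code.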

/Peter

--

Frederick Gotham
Aug 3 '06 #11

peter koch posted:
Right. But that is highly specialized code - actually, you are inventing
your own string class here. A better solution would be to make p a
std::string.

"better" is highly subjective.

I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.
(2) I find it more enjoyable.

I only resort to "std::string" when things get too complicated, or if I'm
too lazy to try to achieve something manually.

--

Frederick Gotham
Aug 3 '06 #12

Frederick Gotham <fg*******@SPAM.com> writes:
Noah Roberts posted:
>>You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.

The only correct way to use strcpy() is not to.


You are a C++ programmer who has a heavy bias toward using C++ features.

I am a C++ programmer who has a heavy bias toward using C features.

To state that one simply shouldn't use "strcpy" is your own opinion -- a
flawed one, in my opinion.

strcpy is an efficient algorithm for copying a null-terminated string. If you
want an efficient algorithm for copying a null-terminated string, then you
would be wise to choose "strcpy".

However, if you like to opt for the inefficient route, then you could always
use "std::string".
I think the point is that since you are assuming a null-terminated
character sequence, you are effectively using your own string type
in any standard C++ environment. Since that is the case, you should
probably wrap it accordingly and use strcpy through a ::strcpy() method.

I do tend to agree with you, though.

Going from C to C++, it often horrifies me how long-winded a lot of code
is for such simple things as string manipulation. You get used to it, and
at the end of the day the compiler sweetens performance up to C levels
most of the time.

Aug 3 '06 #13

Frederick Gotham wrote:
Noah Roberts posted:
You seem to be confused. std::strcpy(), declared in <cstring>, IS one
of the tools C++ provides. And, quite frankly, one that every C++
programmer should know how to use correctly, even if it should not be
the first choice.
The only correct way to use strcpy() is not to.


You are a C++ programmer who has a heavy bias toward using C++ features.

I am a C++ programmer who has a heavy bias toward using C features.

To state that one simply shouldn't use "strcpy" is your own opinion -- a
flawed one, in my opinion.
Yes well, you have a lot of....opinions...carbon unit.
>
strcpy is an efficient algorithm for copying a null-terminated string. If you
want an efficient algorithm for copying a null-terminated string, then you
would be wise to choose "strcpy".
It is also unsafe and introduces many avenues for very difficult to
find bugs. Even if you code in C only you should not use strcpy, just
like you never use gets.
>
However, if you like to opt for the inefficient route, then you could always
use "std::string".
Yes well, a lot of flawed conclusions come from the "I gotta be
efficient" angle. Profile first. I have seen std::string turn up in
profiles, but it wasn't the string's fault... the algorithm was flawed.
Oftentimes std::string is more efficient at runtime and almost
always more efficient in development time.

Aug 3 '06 #14

Frederick Gotham schrieb:
peter koch posted:
>Right. But that is highly specialized code - actually you are inventing
your own string class here. A better solution would be to have p a
std::string.


"better" is highly subjective.

I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.
For what avail?
(2) I find it more enjoyable.

I only resort to "std::string" when things get too complicated, or if I'm
too lazy to try to achieve something manually.
Are you talking about commercial applications or learning and fun projects?

--
Thomas
Aug 3 '06 #15

Noah Roberts wrote:
It is also unsafe and introduces many avenues for very difficult to
find bugs. Even if you code in C only you should not use strcpy, just
like you never use gets.
Neither should you apply rules blindly. It is best to understand why
the rules exist.

gets() should never be used because it is impossible to make safe. The
reason why is that the call is getting data from an outside source so it
is impossible to know how big the buffer needs to be.

strcpy() on the other hand is moving data from a source within the
programmers control. If the logic of your code makes the length of the
string known, then it is possible for strcpy() to be used safely.
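A sketch of that pattern (the helper is hypothetical): because the destination is sized from the source itself, this particular strcpy call cannot overrun:

```cpp
#include <cstring>

// Allocate a destination of exactly the right size, then copy.
// The unbounded-input problem that makes gets() unusable never
// arises here: the program controls both ends of the copy.
char* duplicate(const char* src)
{
    char* copy = new char[std::strlen(src) + 1];  // exact fit, incl. '\0'
    std::strcpy(copy, src);
    return copy;   // caller owns the memory and must delete[] it
}
```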

There is no doubt that strcpy() must be used with great care. But to
lump it in with gets() is unnecessary.

samuel
Aug 3 '06 #16

Noah Roberts wrote:
>>...
strcpy is an efficient algorithm for copying a null-terminated string. If you
want an efficient algorithm for copying a null-terminated string, then you
would be wise to choose "strcpy".

It is also unsafe and introduces many avenues for very difficult to
find bugs.
Wrong. 'strcpy' is perfectly safe. More precisely, the safety level of 'strcpy'
is about the same as the safety level of the language itself (both C and C++).
For this reason, criticizing the safety level of 'strcpy' is a waste of time,
since for its intended application there is no safer alternative in either C or C++.
Even if you code in C only you should not use strcpy, just
like you never use gets.
That's completely incorrect. There's a huge difference between safety of
'strcpy' and safety of 'gets'. The problem with 'gets' is that there's
absolutely no way to make a safe call to 'gets' since there's no way to predict
how much memory it will require. This does not apply to 'strcpy'. If the user
knows what he is doing, he can always make sure that every call to 'strcpy' is safe.
>>
However, if you like to opt for the inefficient route, then you could always
use "std::string".

Yes well, a lot of flawed conclusions come from the "I gotta be
efficient" angle. Profile first.
"Profile first" is a very popular piece of nonsense these days (one of the many
"fake wisdoms" floating around in C/C++ world). The notion of "profiling" is
only applicable to complete application-specific code. But what are we supposed
to do with generic code? When it comes to generic, library-level code, there's
no such thing as "profiling". Library code cannot be meaningfully profiled by
itself. It can only be "profiled" within the context of a concrete application.
But changing the code of _generic_ library based on such a _specific_ profile
is just nonsense.
I have seen std::string turn up in
profiles, but it wasn't the string's fault... the algorithm was flawed.
Oftentimes std::string is more efficient at runtime and almost
always more efficient in development time.
'std::string' implements a resizable string. In general case, using a resizable
string in context when, for example, fixed-size string is really needed is a
design error, most often caused by laziness and lack of common sense on the
programmer's part.

What you seem to advocate is known as "writing Java code in C++". This appears
to be a popular way of misusing C++ these days...

--
Best regards,
Andrey Tarasevich
Aug 3 '06 #17

R Samuel Klatchko wrote:
Noah Roberts wrote:
It is also unsafe and introduces many avenues for very difficult to
find bugs. Even if you code in C only you should not use strcpy, just
like you never use gets.

Neither should you apply rules blindly. It is best to understand why
the rules exist.

gets() should never be used because it is impossible to make safe. The
reason why is that the call is getting data from an outside source so it
is impossible to know how big the buffer needs to be.
That is correct. Practically speaking, though, it is a minor
distinction. And actually, when you know that the input you are
getting is of a certain length, because you wrote the software on the
other side of the stream, why not use gets at that point? That is
pretty close to equivalent to the use of char* to represent strings.
>
strcpy() on the other hand is moving data from a source within the
programmers control. If the logic of your code makes the length of the
string known, then it is possible for strcpy() to be used safely.
Actually, when you come right down to it, the only way to do this safely
is to pass around a length with your char*. The strlen() function will
fail utterly when '\0' is not contained in the buffer. The function
strncpy (the "safe" strcpy) can create such a "string". Why not
encapsulate this length with the string and call it, say... std::string?
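A sketch of the strncpy pitfall being described (the helper name is mine): when the source doesn't fit, the result is silently left unterminated:

```cpp
#include <cstring>

// strncpy copies at most n bytes and does NOT append '\0' when the
// source is n chars or longer -- it can hand back a char buffer
// that is not a string at all.
bool terminated_after_strncpy(const char* src, char (&dst)[4])
{
    std::strncpy(dst, src, sizeof dst);
    for (std::size_t i = 0; i < sizeof dst; ++i)
        if (dst[i] == '\0')
            return true;   // a terminator made it into the buffer
    return false;          // truncated: no terminator anywhere
}
```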
>
There is no doubt that strcpy() must be used with great care. But to
lump it in with gets() is unnecessary.
The amount of care necessary to use the char* functions safely is
pretty much impossible. Not only do you have to be an expert to know
how that is to be done, you have to be anal enough to do it. I have
yet to meet anyone who fits that description, or to see any code
written by them for that matter. The function strcpy just happens to
be one of the worst offenders, as it has no buffer-length input
parameter and depends solely on there being a '\0' before the length
of dest is used up. These functions may be "possible" to use safely,
but the degree of care that has to be put into this, and the type of
bugs that failures create, far outweigh any purported "efficiency" they
give you.

Aug 3 '06 #18

Andrey Tarasevich wrote:
Noah Roberts wrote:
>...
strcpy is an efficient algorithm for copying a null-terminated string. If you
want an efficient algorithm for copying a null-terminated string, then you
would be wise to choose "strcpy".
It is also unsafe and introduces many avenues for very difficult to
find bugs.

Wrong. 'strcpy' is perfectly safe. More precisely, the safety level of 'strcpy'
is about the same as the safety level of the language itself (both C and C++).
For this reason, criticizing the safety level of 'strcpy' is a waste of time,
since for its intended application there is no safer alternative in either C or C++.
Yes, there is. That alternative is strncpy. There are also many
constructs you can build that improve on that alternative as well.
>
Even if you code in C only you should not use strcpy, just
like you never use gets.

That's completely incorrect. There's a huge difference between safety of
'strcpy' and safety of 'gets'. The problem with 'gets' is that there's
absolutely no way to make a safe call to 'gets' since there's no way to predict
how much memory it will require. This does not apply to 'strcpy'. If the user
knows what he is doing, he can always make sure that every call to 'strcpy' is safe.
strcpy can only be used safely given certain assumed preconditions that
cannot be guaranteed except under utopian circumstances. When those
preconditions fail, strcpy fails in ways very difficult to find... as
in, who did the overrun anyway, and why are demons coming out of my nose?
>
>
However, if you like to opt for the inefficient route, then you could always
use "std::string".
Yes well, a lot of flawed conclusions come from the "I gotta be
efficient" angle. Profile first.

"Profile first" is a very popular piece of nonsense these days (one of the many
"fake wisdoms" floating around in C/C++ world). The notion of "profiling" is
only applicable to complete application-specific code. But what are we supposed
to do with generic code? When it comes to generic, library-level code, there's
no such thing as "profiling".
There is no such thing as generic code. All code has a reason to be
there. Library code is profiled in the context of tests exhibiting its
intended use and its algorithms.

Library code cannot be meaningfully profiled by
itself. It can only be "profiled" within the context of a concrete application.
But changing the code of _generic_ library based on such a _specific_ profile
is just nonsense.
I have seen std::string turn up in
profiles, but it wasn't the string's fault... the algorithm was flawed.
Oftentimes std::string is more efficient at runtime and almost
always more efficient in development time.

'std::string' implements a resizable string. In general case, using a resizable
string in context when, for example, fixed-size string is really needed is a
design error, most often caused by laziness and lack of common sense on the
programmer's part.
Fixed sizes are always based on the assumption that the size will
always be enough. That assumption is nearly always flawed. The costs of
debugging buffer overruns caused by this assumption outweigh any
negligible runtime overhead of calling resize() on a string.

Besides, there are a myriad of ways to solve the problem of needing a
fixed-length string without using strcpy under those rare conditions
where you really do.
>
What you seem to advocate is known as "writing Java code in C++". This appears
to be a popular way of misusing C++ these days...
A straw man is always easy to tear down.

Aug 3 '06 #19

Richard posted:
I think the point is that since you are assuming a null terminated
character sequence then you are effectively using your own string type
in any std c++ environment.

Null terminated strings are as old as the wheel.
Since that is the case you should probably
wrap it accordingly and use the strcpy though a ::strcpy() method.

And throw in inefficiency for good measure.
I do tend to agree with you though.

Going from C to C++, it often horrifies me how long-winded a lot of code
is for such simple things as string manipulation. You get used to it, and
at the end of the day the compiler sweetens performance up to C levels
most of the time.

"You get used to it", or "You give in to it"?

I don't care how incompetent Noah Roberts feels when dealing with arrays --
*I* know how to use them, and am perfectly content working efficiently with
null-terminated strings.

--

Frederick Gotham
Aug 4 '06 #20

Noah Roberts posted:
>To state that one simply shouldn't use "strcpy" is your own opinion --
a flawed one, in my opinion.

Yes well, you have a lot of....opinions...carbon unit.

Social difficulties manifesting themselves, I see.
>strcpy is an efficient algorithm for copying a null-terminated string.
If you want an efficient algorithm for copying a null-terminated
string, then you would be wise to choose "strcpy".

It is also unsafe and introduces many avenues for very difficult to
find bugs.

I was about five or six when I took the stabilisers off my bike. (Well,
actually, my Dad did it for me -- but still, it was the right decision.)
Even if you code in C only you should not use strcpy, just
like you never use gets.

I feel an air of incompetence.

Do you really hold your programming ability in such low regard that you
can't handle a simple null-terminated string?

--

Frederick Gotham
Aug 4 '06 #21

Noah Roberts posted:
>gets() should never be used because it is impossible to make safe. The
reason why is that the call is getting data from an outside source so it
is impossible to know how big the buffer needs to be.

That is correct. Practically speaking though it is a minor
distinction.

I don't wear a lead-filled protective vest when I use my microwave. However,
I *do* when I go under an X-Ray machine.

If I know that a string is null-terminated, and not of ridiculous length,
then why should I wear a lead-filled protective vest?
And actually, when you know that the input you are
getting is of a certain length, because you wrote the software on the
other side of the stream, why not use gets at that point? That is
pretty close to equivalent to the use of char* to represent strings.

Arrays of char's are a great way of storing strings.
>strcpy() on the other hand is moving data from a source within the
programmers control. If the logic of your code makes the length of the
string known, then it is possible for strcpy() to be used safely.

Actually, when you come right to it the only way to do this safely is
to pass around a length with your char*.

Wrong. Ask thousands of C programmers.
The strlen() function will fail utterly when '\0' is not contained in the
buffer.

Which is why we're guaranteed that our microwave won't emit X-Rays, and
which is why we don't wear a lead-filled protective vest each time we switch
on our microwave.
The function strncpy (the "safe" strcpy) can create such a "string".
Why not encapsulate this length with the string and call it,
say... std::string?

The same reason my father took the stabilisers off my bike when I was five.
>There is no doubt that strcpy() must be used with great care. But to
lump it in with gets() is unnecessary.

The amount of care necessary to use the char* functions safely is
pretty much impossible.

Incompetence. Nothing more.
Not only do you have to be an expert to know
how that is to be done, you have to be anal enough to do it.

Anal enough to know how a null-terminated string works. Please.
I have yet to meet anyone who fits that description, or to see any code
written by them for that matter.

Hello, my name is Frederick Gotham, how are you on this fine day?
The function strcpy just happens to be one of the worst offenders,
as it has no buffer-length input parameter and depends solely on there
being a '\0' before the length of dest is used up.

Which is why we supply it with a null-terminated string.
These functions may be "possible" to use safely, but the degree of care
that has to be put into this, and the type of bugs that failures create,
far outweigh any purported "efficiency" they give you.

You don't speak for the competent programmers.

--

Frederick Gotham
Aug 4 '06 #22

Andrey Tarasevich posted:
What you seem to advocate is known as "writing Java code in C++". This
appears to be a popular way of misusing C++ these days...

The term, "dumbed-down programming", comes to mind.

--

Frederick Gotham
Aug 4 '06 #23

Noah Roberts posted:
strcpy can only be used safely given certain assumed preconditions that
cannot be guaranteed except under utopian circumstances.

Your definition of "utopian" is away with the fairies.

The prospect of having a null-terminated string whose length doesn't go
into megabytes is quite mundane.
When those preconditions fail, strcpy fails in ways very difficult to
find... as in, who did the overrun anyway, and why are demons coming
out of my nose?

Then wear a lead-filled protective vest when you use your microwave.
There is no such thing as generic code. All code has a reason to be
there. Library code is profiled in the context of tests exhibiting its
intended use and its algorithms.

Here's some generic code for you:

int Five()
{
return 5;
}
Fixed sizes are always based on the assumption that the size will
always be enough.

Yes.
That assumption is nearly always flawed.

No. (Unless of course we're dealing with not-so-smart Java programmers who
are dabbling with C++?)
The costs of debugging buffer overruns caused by this assumption outweigh
any negligible runtime overhead of calling resize() on a string.

$0.00

(Unless of course we're dealing with not-so-smart Java programmers who are
dabbling with C++?)
Besides, there are a myriad of ways to solve the problem of needing a
fixed-length string without using strcpy under those rare conditions
where you really do.

Arrays of char?
>What you seem to advocate is known as "writing Java code in C++". This
appears to be a popular way of misusing C++ these days...

A straw man is always easy to tear down.

Dumbed-down code tends to be inefficient.

--

Frederick Gotham
Aug 4 '06 #24

Thomas J. Gritzan posted:
>I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.

For what avail?

Take the adjective, and then make a verb out of it:

E f f i c i e n c y

> (2) I find it more enjoyable.

I only resort to "std::string" when things get too complicated, or if
I'm too lazy to try to achieve something manually.

Are you talking about commercial applications or learning and fun
projects?

Programming for recreation, rather than monetary gain.

--

Frederick Gotham
Aug 4 '06 #25

Frederick Gotham posted:
Take the adjective, and then make a verb out of it:

*noun*

--

Frederick Gotham
Aug 4 '06 #26

Frederick Gotham wrote:
peter koch posted:
>Right. But that is highly specialized code - actually, you are inventing
your own string class here. A better solution would be to make p a
std::string.


"better" is highly subjective.
Actually, in software engineering, it seems not that subjective. Here,
criteria for quality are mainly dictated by economics: your code has to be
maintainable by others, efficient, correct, portable, etc. - whatever cuts
on costs and makes for better sales. There is no real doubt as to
what "better" means in computer programming. Sometimes it is hard to assess
whether a given piece of code is better than another but that is not
because criteria are subjective, it is because there is more than one goal
to strive for and sometimes going for one leads to sacrifices with regard
to another valid goal.

I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.
I found that, in every case where this statement seemed at first glance
to be supported by measurement, there was either

(a) a (sometimes subtle) bug in the 0-terminated code; and upon
fixing the bugs, the performance gain disappeared

or

(b) the 0-terminated code was optimized and compared to non-optimized
use of strings; upon optimizing the string code, the performance
difference disappeared.

Admittedly, my sample size is not big and it does not cover a vast variety
of application areas, but I am very sceptical about the general claim that
0-terminated code is more amenable to optimization.

(2) I find it more enjoyable.
I can see that.

I only resort to "std::string" when things get too complicated, or if I'm
too lazy to try to achieve something manually.
Your choice, but (1) is IMHO not a valid reason whereas (2) is.
Best

Kai-Uwe Bux
Aug 4 '06 #27

Kai-Uwe Bux posted:
>"better" is highly subjective.

Actually, in software engineering, it seems not that subjective. Here,
criteria for quality are mainly dictated by economics: your code has to
be maintainable by others, efficient, correct, portable, etc. - whatever
cuts on costs and makes for better sales.

When debating topics of programming, people often make arguments with
regard to business, productivity, better sales, etc..

I don't program for money -- it's purely a hobby of mine. Therefore, I
really don't pay much heed to this argument. I don't want my enjoyment of
programming to be poisoned by considerations of business or productivity. I
aim to write programs that perform well.

Consider a person who likes bodybuilding. They do it for enjoyment and fun.
They go to a bodybuilding newsgroup, and every time they discuss their own
way of doing things, they're met with "Well if you want productivity,
better sales, etc., then you're going to have to start taking steroids". Do
you think they want to hear that?

Indeed, if a bodybuilder wants to win the world's best competitions, then
he or she is going to have to take steroids. BUT... that doesn't mean that
it's a good way to do bodybuilding!

If I start using "std::string" to do silly little things like compare null-
terminated strings, then perhaps business people, or Noah Roberts, might
hold me in high regard -- but I don't care. I program for leisure, and I
aim to write fully-portable code and produce efficient programs with a nice
user interface. I won't lower myself to using "std::string" when I'm
perfectly competent working with a null-terminated array of char's.

Pastimes were invented long before monetary gain sucked the fun out of
them. Thankfully, my own programming hasn't fallen victim.

From now on, when I pose an argument here regarding C++ programming, I will
explicitly state that I haven't got a boss looking over my shoulder making
sure I use the most dumbed-down method to achieve an objective. Perhaps
then, others will see why I program the way I program.
>I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.

I found that, in every case where this statement seemed at first glance
to be supported by measurement, there was either

(a) a (sometimes subtle) bug in the 0-terminated code; and upon
fixing the bugs, the performance gain disappeared

Forgive me if I come across as arrogant, but I haven't had any problems
with null-terminated strings in quite a while. Sure, in my first few weeks
of programming, I made "off by one" errors and so forth -- but not now.

--

Frederick Gotham
Aug 4 '06 #28

Frederick Gotham wrote:
Noah Roberts posted:
Not only do you have to be an expert to know
how that is to be done, you have to be anal enough to do it.


Anal enough to know how a null-terminated string works. Please.
I have yet to meet anyone that fits that description or to see any code
written by them for that matter.


Hello, my name is Frederick Gotham, how are you on this fine day?
Heh, anyone that claims to be a perfect programmer is most definitely
not. The better ones admit that they can't write perfect code and
can't possibly track every change in even a small project that they
themselves work on. They use safer types and functions so they don't
have to be perfect and minimize the mistakes they make and later have
to spend hours fixing. "Code should be easy to use correctly and hard
to use incorrectly."

Yes, I am incredibly incompetent; I can get lost in less than 50 lines
of code. That is why I use language features that minimize how often I
shoot myself....and that is why I write code that has less bugs, and is
generally more efficient, in less time than my "competent"
counterparts. I spend more time implementing features with tools
created by people a hell of a lot smarter and more experienced than me
rather than using primitive types to try to outsmart my compiler and
library authors. My code is already written, profiled, and in testing
long before such programmers finish inventing their new square wheel.
When there are bugs, and invariably there are as I am incompetent, the
types I choose to use give me more leverage to find the cause of the
error...I'm not spending hours tracking crashes caused by heap bashing
and stack overwriting.

I know how to deal with char*, I do it every day (I may very well know
it as well or better than you). I choose not to when I can because the
type is incredibly unsafe, difficult to work with safely, fragile, and
less efficient in almost all cases...and I am not so stupid as to think
I am smart enough to write perfect code.

Yes, I stay away from unsafe and error prone types because I _am_
incompetent and know it.

So, I'll leave you to your utopian delusions, carbon unit. I'll just
leave you with a word of wisdom: Perfection is a hard thing to live up
to...good luck.

Aug 4 '06 #29


Frederick Gotham wrote:
I don't program for money -- it's purely a hobby of mine.
Heh, who would have guessed? Probably doesn't do much work in teams
either...

Aug 4 '06 #30

"Noah Roberts" <ro**********@gmail.com> writes:
Frederick Gotham wrote:
>I don't program for money -- it's purely a hobby of mine.

Heh, who would have guessed? Probably doesn't do much work in teams
either...
Sounds like a perfect recruit for the big heads in comp.lang.c that
think using a debugger shows an inability to code (complete with quote
from Kernighan) - obviously next to zero team work, integration or
experience in extending or bug fixing legacy systems. I just love
working with go it alone cowboy types :-;
Aug 4 '06 #31

Noah Roberts posted:
Heh, anyone that claims to be a perfect programmer is most definitely
not.

I don't claim to be a perfect programmer. I don't claim to be perfect at
anything. I do claim to be proficient in some things though.
The better ones admit that they can't write perfect code and
can't possibly track every change in even a small project that they
themselves work on. They use safer types and functions so they don't
have to be perfect and minimize the mistakes they make and later have
to spend hours fixing. "Code should be easy to use correctly and hard
to use incorrectly."

Yes, I am incredibly incompetent; I can get lost in less than 50 lines
of code. That is why I use language features that minimize how often I
shoot myself... and that is why I write code that has fewer bugs, and is
generally more efficient, in less time than my "competent"
counterparts.

OK I see your side of the argument. But I differ as to where the line
should be drawn as regards just HOW safe we need to make it.
I spend more time implementing features with tools
created by people a hell of a lot smarter and more experienced than me
rather than using primitive types to try to outsmart my compiler and
library authors.

This is a viewpoint you have.

I myself naturally opt for the "primitive type" method first, and only go
in search of "string" or "vector" when things become overly complicated.
My code is already written, profiled, and in testing
long before such programmers finish inventing their new square wheel.
When there are bugs, and invariably there are as I am incompetent, the
types I choose to use give me more leverage to find the cause of the
error...I'm not spending hours tracking crashes caused by heap bashing
and stack overwriting.

I know how to deal with char*, I do it every day (I may very well know
it as well or better than you). I choose not to when I can because the
type is incredibly unsafe, difficult to work with safely, fragile, and
less efficient in almost all cases...and I am not so stupid as to think
I am smart enough to write perfect code.

There are many features in C++ that you can make mistakes with. However, I
wouldn't go so far as to say that null-terminated strings are fragile.

Hundreds of thousands (if not millions) of C programs work perfectly well,
and they use null-terminated strings.

--

Frederick Gotham
Aug 4 '06 #32


Frederick Gotham wrote:
I myself naturally opt for the "primitive type" method first, and only go
in search of "string" or "vector" when things become overly complicated.
"Primitive Obsession is actually more of a symptom that causes bloats
than a bloat itself. The same holds for Data Clumps. When a Primitive
Obsession exists, there are no small classes for small entities (e.g.
phone numbers). Thus, the functionality is added to some other class,
which increases the class and method size in the software. With Data
Clumps there exists a set of primitives that always appear together
(e.g. 3 integers for RGB colors). Since these data items are not
encapsulated in a class this increases the sizes of methods and
classes. "

http://www.soberit.hut.fi/mmantyla/B...lsTaxonomy.htm

Aug 4 '06 #33

Frederick Gotham wrote:
Kai-Uwe Bux posted:
>>"better" is highly subjective.

Actually, in software engineering, it seems not that subjective. Here,
criteria for quality are mainly dictated by economics: your code has to
be maintainable by others, efficient, correct, portable, etc. - whatever
cuts on costs and makes for better sales.


When debating topics of programming, people often make arguments with
regard to business, productivity, better sales, etc..

I don't program for money -- it's purely a hobby of mine. Therefore, I
really don't pay much heed to this argument. I don't want my enjoyment of
programming to be poisoned by considerations of business or productivity.
I aim to write programs that perform well.

Consider a person who likes bodybuilding. They do it for enjoyment and
fun. They go to a bodybuilding newsgroup, and every time they discuss
their own way of doing things, they're met with "Well if you want
productivity, better sales, etc., then you're going to have to start
taking steroids". Do you think they want to hear that?

Indeed, if a bodybuilder wants to win the world's best competitions, then
he or she is going to have to take steroids. BUT... that doesn't mean that
it's a good way to do bodybuilding!

If I start using "std::string" to do silly little things like compare
null-terminated strings, then perhaps business people, or Noah Roberts,
might hold me in high regard -- but I don't care. I program for leisure,
and I aim to write fully-portable code and produce efficient programs with
a nice user interface. I won't lower myself to using "std::string" when
I'm perfectly competent working with a null-terminated array of char's.

Pastimes were invented long before monetary gain sucked the fun out of
them. Thankfully, my own programming hasn't fallen victim.
I do not have any issues with your personal choices or goals in programming.
However, none of what you said touches upon the observation that when it
comes to programming, the word "better" has a pretty well established
meaning and criteria for quality are far less subjective than one might
think. Whether you strive to write good programs (in that technical sense)
or whether you strive to write programs that please you (and only you) is
entirely your choice to make.

From now on, when I pose an argument here regarding C++ programming, I
will explicitly state that I haven't got a boss looking over my shoulder
making sure I use the most dumbed-down method to achieve an objective.
Perhaps then, others will see why I program the way I program.
Yes, they will: you program the way you do because you are not part of a
team, because your programs are not subject to code review (for better or
worse), and because you care for efficiency more than anything else.
Anybody in the same situation will take your style as an example and
everybody else will be well advised to avoid it like the plague.

>>I frequently use null terminated char arrays, rather than std::string,
because:

(1) I can make more efficient code in most places.

I found, in any case where this statement was supported by measurement
upon first glance, there was either

(a) a (sometimes subtle) bug in the 0-terminated code; and upon
fixing the bugs, the performance gain disappeared


Forgive me if I come across as arrogant, but I haven't had any problems with
null-terminated strings in quite a while. Sure, in my first few weeks of
programming, I made "off by one" errors and so forth -- but not now.
I take your word for it. Then maybe your code is an example for the second
case, which you snipped: comparison to non-optimized std::string based
code.
Best

Kai-Uwe Bux
Aug 4 '06 #34

Richard wrote:
"Noah Roberts" <ro**********@gmail.com> writes:
Frederick Gotham wrote:
I don't program for money -- it's purely a hobby of mine.
Heh, who would have guessed? Probably doesn't do much work in teams
either...

Sounds like a perfect recruit for the big heads in comp.lang.c that
think using a debugger shows an inability to code (complete with quote
from Kernighan) - obviously next to zero team work, integration or
experience in extending or bug fixing legacy systems. I just love
working with go it alone cowboy types :-;
Get bent, jerk. Most of the people in clc have been working
professionally in teams for a long time.

Brian

Aug 4 '06 #35

Have to respectfully disagree with you on several major points,
Frederick.

Frederick Gotham wrote:
If I know that a string is null-terminated, and not of ridiculous length,
then why should I wear a lead-filled protected vest?
99% of C and C++ programmers don't know beans about their code.
Floyd-Hoare proof rules are excruciatingly difficult.

Unless you sit down and formally prove the mathematical properties of
your algorithm, you don't really _know_ anything about your code. You
have a hunch, possibly backed up by significant experience, but it's
still a hunch and should always be considered suspect.

And for the record, I'm not endorsing people sit down and try formal
proofs of correctness of their code. I did that early in my Master's,
and I'm still recovering my lost sanity.
Arrays of char's are a great way of storing strings.
ASCII strings, yes. The world is embracing Unicode, which can embed
NULLs in C-style strings and thoroughly confound C-style string
handling. (Of course, you have libraries like glib which try to give a
C-style API for Unicode strings...)
The amount of care necessary to use the char* functions safely is
pretty much impossible.

Incompetence. Nothing more.
I'd tentatively agree with the original poster, and nobody's called me
an incompetent programmer.

Looking over major software engineering failures and their root causes,
the root cause on the programming side is disproportionately "badly
handled C-style strings".

Presumably, all of those people who wrote those buffer overflows and
stack smashes thought they were competent to handle C-style strings
appropriately. And yet, they weren't. The OpenBSD programming team,
who are about the most proactively paranoid C programmers out there,
appears to consider C string handling inherently unsafe and
error-prone, and for that reason deserving of extraordinary scrutiny.

Aug 5 '06 #36

Frederick Gotham <fg*******@SPAM.com> writes:
Noah Roberts posted:
>Heh, anyone that claims to be a perfect programmer is most definitely
not.


I don't claim to be a perfect programmer. I don't claim to be perfect at
anything. I do claim to be proficient in some things though.
>The better ones admit that they can't write perfect code and
can't possibly track every change in even a small project that they
themselves work on. They use safer types and functions so they don't
have to be perfect and minimize the mistakes they make and later have
to spend hours fixing. "Code should be easy to use correctly and hard
to use incorrectly."

Yes, I am incredibly incompetent; I can get lost in less than 50 lines
of code. That is why I use language features that minimize how often I
shoot myself... and that is why I write code that has fewer bugs, and is
generally more efficient, in less time than my "competent"
counterparts.


OK I see your side of the argument. But I differ as to where the line
should be drawn as regards just HOW safe we need to make it.
Much as I hate the "wind and water" involved in being too "pc", if you
don't use the basic types right then there is no hope. It makes a mockery
of everything. There is a mindset to respect and acknowledge. If you
don't like it: stay with your amateur C.
>
>I spend more time implementing features with tools
created by people a hell of a lot smarter and more experienced than me
rather than using primitive types to try to outsmart my compiler and
library authors.


This is a viewpoint you have.

I myself naturally opt for the "primitive type" method first, and only go
in search of "string" or "vector" when things become overly
complicated.
And then what happens when the underlying libraries decide to do it
differently from your "know-how"? Yeah, I know: it's annoying - but that's
what could happen.
>
>My code is already written, profiled, and in testing
long before such programmers finish inventing their new square wheel.
When there are bugs, and invariably there are as I am incompetent, the
types I choose to use give me more leverage to find the cause of the
error...I'm not spending hours tracking crashes caused by heap bashing
and stack overwriting.

I know how to deal with char*, I do it every day (I may very well know
it as well or better than you). I choose not to when I can because the
type is incredibly unsafe, difficult to work with safely, fragile, and
less efficient in almost all cases...and I am not so stupid as to think
I am smart enough to write perfect code.


There's many features in C++ that you can make mistakes with. However I
wouldn't go so far as to say that null-terminated strings are fragile.
Yup. They are. Totally. In C++ that is.
>
Hundreds of thousands (if not millions) of C programs work perfectly well,
and they use null-terminated strings.
In C. Yes.
>
--

Frederick Gotham
--
Lint early. Lint often.
Aug 5 '06 #37

In article <11**********************@b28g2000cwb.googlegroups.com>,
ci********@gmail.com says...
Have to respectfully disagree with you on several major points,
Frederick.
As I have to, with you.

[ ... ]
Unless you sit down and formally prove the mathematical properties of
your algorithm, you don't really _know_ anything about your code.
Nonsense!
You
have a hunch, possibly backed up by significant experience, but it's
still a hunch and should always be considered suspect.
Sorry, but that's just plain wrong. I don't need a formal proof to
realize that code that won't compile isn't very useful. I don't need a
formal proof to know that if a program crashes every five minutes,
something is seriously wrong with the code. These are far more than
hunches -- they are absolute facts.

If you find something "suspect" about the conclusion that crashing every
five minutes is unacceptable, I guess that's your business. I find it an
easy conclusion to draw, and consider it fully supported with no
mathematical proof of much of anything.

Bottom line: while a formal proof of a correct algorithm gives you a
basis for knowing some types of things about code, there's still quite a
bit that can be known (and not just thought) about code without it.

[ ... ]
The world is embracing Unicode, which can embed
NULLs in C-style strings and thoroughly confound C-style string
handling. (Of course, you have libraries like glib which try to give a
C-style API for Unicode strings...)
More or less irrelevant. When you've converted them to wide characters,
you can (mostly) treat Unicode strings like ASCII strings -- the main
difference (combining characters) is that they are guaranteed to be non-zero, so they
don't affect anything at hand. As long as you look at an entire
character at a time (even though that won't typically fit in a char) you
don't have to worry about an embedded zero any more than you do with
ASCII strings.

[ ... ]
Looking over major software engineering failures and their root causes,
the root cause on the programming side is disproportionately "badly
handled C-style strings".

Presumably, all of those people who wrote those buffer overflows and
stack smashes thought they were competent to handle C-style strings
appropriately. And yet, they weren't. The OpenBSD programming team,
who are about the most proactively paranoid C programmers out there,
appears to consider C string handling inherently unsafe and
error-prone, and for that reason deserving of extraordinary scrutiny.
While there have certainly been a number of problems caused by C-style
strings, I'm not at all sure what would constitute "disproportionate".
Given the amount of programming done in C, and the degree to which
strings get passed around in things like network programming, I'd expect
many of the failures to happen via that route, simply because it's how a
lot gets done.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Aug 6 '06 #38

I don't need a
formal proof to know that if a program crashes every five minutes,
something is seriously wrong with the code. These are far more than
hunches -- they are absolute facts.
Seriously wrong with the code, or seriously wrong with the compiler?
Or with libc? Or with someone's idea of what inputs the code should or
should not accept?

In the end, you rely on a heuristic: the compiler is _unlikely_ to
refuse to compile valid code, the compiler is _unlikely_ to miscompile
code, the runtime is _unlikely_ to have errors relative to the
likelihood of you introducing an error, etcetera.

These are heuristics. Educated guesses. I'm not downplaying the value
of having a good educated guess, but we need to stop fooling ourselves
with a false sense of security.
If you find something "suspect" about the conclusion that crashing every
five minutes is unacceptable, I guess that's your business.
If it's crashing every five minutes, the only conclusion which can be
drawn, absent other facts, is that it's crashing every five minutes.
Our estimations of the relative risk of component failure informs our
decision of where to debug, but it would be ludicrous to claim that we
know these things.

When a programmer can honestly say they have never been fooled by a bug
which occurred in one place but manifested somewhere completely
different, it is only because they have not been programming long.
While there have certainly be a number of problems caused by C-style
strings, I'm not at all sure what would constitute "disproportionate".
Given the amount of programming done in C, and the degree to which
strings get passed around in things like network programming, I'd expect
many of the failures to happen via that route, simply because it's how a
lot gets done.
I'd refer you to Bugtraq, the Open Source Vulnerability Database, etc.
Wherever you look, it's "mishandled buffer". Clearly, the overwhelming
majority of programmers should not be trusted to write safe and
reliable production code which uses C-style string handling and/or
buffers.

Aug 6 '06 #39

This discussion thread is closed

Replies have been disabled for this discussion.