
# Is it time for secure C ?

Hello,

I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile an existing valid project, I got a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated, etc. I opened STDIO.H
and figured out that one has to define the macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.

I started to search the internet and found a few links, and the following proposal

http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf

After looking into Whidbey Beta header files I started liking this. This is
something I have been using already for static and local buffers using
macro with strncpy() and vsnprintf(), only this is better.
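For reference, macros of the kind described might look like the sketch below. The names are invented for this example; the macros only work when the destination is a true array (so that sizeof yields its size), not a pointer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical convenience macros in the spirit the post describes:
 * copy or format into a fixed-size array, always NUL-terminating. */
#define STRCPY_BOUNDED(dst, src) \
    do { \
        strncpy((dst), (src), sizeof(dst) - 1); \
        (dst)[sizeof(dst) - 1] = '\0'; \
    } while (0)

#define SPRINTF_BOUNDED(dst, ...) \
    snprintf((dst), sizeof(dst), __VA_ARGS__)
```

The do/while wrapper lets STRCPY_BOUNDED be used like a statement, and snprintf already guarantees termination, so SPRINTF_BOUNDED needs no extra step.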

Although this feature should be invoked by defining _USE_SECURE_LIBS
and not be used by default, that's easy to fix in CRTDEFS.H.

Anyway, I am just wondering if anybody knows about the status of this
proposal. I would also like to read some opinions.

Roman
Nov 14 '05 #1
This is very good. I have been arguing in this group against the
situation in C where insecure programming is the rule. This means that
things can change.

In lcc-win32, after a discussion about C strings in this group, I
developed a secure string library, which is distributed with the
compiler.

The Microsoft proposal goes in this direction, although it leaves to
the programmer the work of always specifying the block size correctly.
I posted in comp.lang.lcc an article proposing the usage of *bounded*
pointers, which would solve the problem of strings and other related
problems.
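A rough illustration of the bounded-pointer idea: the pointer travels together with its bounds, so every access can be checked. The struct and function names below are invented for this sketch and are not lcc-win32's actual interface:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative "bounded pointer": the base address carries its own
 * length, so each access can be validated before dereferencing. */
struct bounded {
    char *base;    /* start of the object */
    size_t len;    /* number of valid bytes */
};

/* Checked read: stores the i-th byte into *out, or reports an
 * out-of-bounds access instead of silently reading past the end. */
static int bp_get(struct bounded bp, size_t i, char *out)
{
    if (i >= bp.len)
        return -1;
    *out = bp.base[i];
    return 0;
}
```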

jacob

http://www.cs.virginia.edu/~lcc-win32
Nov 14 '05 #2

[..]
the work of always specifying correctly the block size. I posted in
comp.lang.lcc

There is no such group by this name. ITYM, news:comp.compilers.lcc
Nov 14 '05 #3
Nov 14 '05 #4
"jacob navia" <ja***@jacob.remcomp.fr> wrote:

[ Please, leave _some_ context. ]
This is very good. I have been arguing in this group against the situation
in C where unsecure programming is the rule.
It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
any other language.
In lcc-win32, after a discussion about C strings in this group, I developed
a secure string library, that is distributed with the compiler.
Which, of course, is off-topic here.
The Microsoft proposal goes in this direction,
It is a dreadful solution, but one of the kind which is entirely
expected of those embrace-extend-and-massively-abuse artists.
I posted in comp.lang.lcc an article proposing the usage of *bounded* pointers,

There is a newsgroup especially for lcc? That's great! That means you
won't have to post off-topic material in comp.lang.c anymore, and we
won't have to bother you with our requests for topicality anymore. Now,
all that remains is for you to post lcc-specific material there, and ISO
C material here...

Richard
Nov 14 '05 #5
> I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile an existing valid project, I got a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated, etc. I opened STDIO.H
and figured out that one has to define the macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.

1. And what exactly makes you think that any Microsoft tantrum is
going to mean anything valid in the C world?

2. Since when is Microsoft a well-known authority on secure
programming? There is so much evidence of the opposite,
it's somewhat boggling...

3. So according to them, sprintf, strcpy and the like have
been deprecated? How presumptuous.

4. They seem to keep thinking that software quality is going to be
achieved with software tools for incompetent programmers. I don't
think this is ever going to work.

5. I have heard of Visual Studio 2005, and it sounds like this
new release doesn't offer anything more than hypothetical improved
"programming security". Sounds more like marketing than engineering
to me.
"Secure C" in itself doesn't exist any more than a "secure car".
If you can't drive, no amount of so-called technology is going
to change the fact that you're dangerous behind the wheel - except
if you're not actually driving it yourself. Is that where Microsoft
wants the profession to be heading?
Nov 14 '05 #6

On Wed, 7 Jul 2004, Roman Ziak wrote:

I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile an existing valid project, I got a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated, etc. I opened STDIO.H
and figured out that one has to define the macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.
(This sounds like typical Microsoft behavior. Ick.)

I started to search the internet and found a few links, and the following
proposal
http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf
This is a somewhat interesting proposal, and one I hadn't seen
before (at least, not in such a standardese-specified way). It
doesn't strike me as particularly useful. Read on.
The 'scanf_s' family of functions is slightly broken, from the
implementor's point of view. Consider Example 2 in section 3.2.2.1:

EXAMPLE 2 The call:

    #define __USE_SECURE_LIB__
    #include <stdio.h>
    /* ... */
    int n; char s[5];
    n = fscanf_s(stdin, "%s", s, sizeof s);

with the input line:

    hello

will assign to 'n' the value 0 since a matching failure occurred,
because the sequence 'hello\0' requires an array of six characters
to store it. No assignment to 's' occurs.

In other words, the implementation of 'scanf_s' requires a lookahead
buffer of at least N characters, where N is some value specified by the
user at runtime. This is certainly possible (especially with C99 VLAs
at the implementor's disposal), but is the proposed "security" worth
the inconvenience?

(There's also the issue of what happens when the programmer passes
a 'size_t' argument to a variadic function; I forget exactly what is
supposed to happen, but the integer promotions definitely don't help.
Maybe this is a non-issue in this case, though.)

The 'gets_s' function is exactly equivalent to the existing 'fgets'
function, except that it discards the *USEFUL* end-of-line indicator,
which is the only thing that can tell the program that a full line
has indeed been read. 'gets_s' is thus *WORSE* than nothing! (Though
it's not as bad as 'gets'. ;)
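To make the point concrete, here is a sketch of an fgets-based line reader that keeps the information 'gets_s' throws away: whether a complete line was actually read. The function name and return convention are invented for this example:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Read one line into buf.  Returns 1 if a complete line was read,
 * 0 on EOF or on a partial line (input longer than the buffer).
 * The newline fgets preserves is exactly what tells us which case
 * occurred -- the indicator 'gets_s' discards. */
static int read_line(char *buf, size_t size, FILE *fp)
{
    if (fgets(buf, (int)size, fp) == NULL)
        return 0;
    size_t len = strlen(buf);
    if (len > 0 && buf[len - 1] == '\n') {
        buf[len - 1] = '\0';   /* strip it, but we know the line was whole */
        return 1;
    }
    return 0;                  /* line was truncated */
}
```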

The 'rand_s' function would be absolutely a godsend... if the
author had bothered to specify its behavior!
The 'bsearch_s' function is interesting, but of course it doesn't
add any functionality to the library that didn't already exist in C99
(I don't know whether C90 guaranteed that 'key' would always be the
first argument to 'compar'). And it has *nothing* to do with
security, so it's a little silly to attach it as a "rider" onto the
main proposal.

The 'qsort_s' function is no better than the existing 'qsort'; it
guarantees neither O(NlgN) sorting time nor stable sorting. What
it *does* do is add unnecessary complexity; perhaps the 'context'
argument is an alternative to "locales" in C99? I don't know. It's
certainly not any improvement on the existing C99 functions.

And from the security POV, the author completely forgot to address
the major security hole in both functions: they take two 'size_t'
parameters, right next to each other, and I never remember which
is which. The compiler is never smart enough to help, either. So
this is a potential source of major hard-to-find bugs in C programs,
and the proposed "secure" library doesn't even address the issue!
'memcpy_s(foo, n, bar, n)' replaces the existing 'memcpy(foo, bar, n)',
and likewise 'memmove'. Extra verbosity, no security gain. Bad idea.
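For illustration, the two-size interface amounts to something like the following hypothetical wrapper. The name and return convention are invented here; the actual proposal returns an errno_t and imposes further constraints:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the two-size pattern: copy n bytes into dst only if the
 * declared destination capacity dstmax is large enough.  Refuses to
 * write anything on error instead of overflowing. */
static int memcpy_checked(void *dst, size_t dstmax,
                          const void *src, size_t n)
{
    if (dst == NULL || src == NULL || n > dstmax)
        return -1;           /* would overflow: report, don't write */
    memcpy(dst, src, n);
    return 0;
}
```

Note that nothing verifies dstmax actually matches the destination's real size; the caller still supplies it by hand, which is the weakness the post complains about.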

In practice, 'strncpy_s' now performs exactly the same function as
'memcpy_s'; ironically, the historical extra security of filling the
array out with NUL bytes is removed!

'strlen_s' is interesting, but I hardly think it's useful for its
intended purpose; after all, wasn't the whole point of this string
library proposal so that all strings *would* have well-defined lengths,
thus making the existing 'strlen' perfectly safe?
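For comparison, a bounded length scan in the spirit of 'strlen_s' can be sketched in a few lines; the name and contract here are illustrative, not the proposal's exact specification:

```c
#include <assert.h>
#include <stddef.h>

/* Bounded strlen: never reads past maxlen bytes, so an unterminated
 * buffer cannot cause a runaway scan.  Returns maxlen if no NUL was
 * found within the bound. */
static size_t strlen_bounded(const char *s, size_t maxlen)
{
    size_t i;
    for (i = 0; i < maxlen; i++)
        if (s[i] == '\0')
            return i;
    return maxlen;
}
```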
In conclusion, I think it's pretty ironic that the proposal begins
with the paragraph

Traditionally, the C Library has contained many functions that trust
the programmer to provide output character arrays big enough to hold
the result being produced. Not only do these functions not check that
the arrays are big enough, they frequently lack the information needed
to perform such checks. While it is possible to write safe, robust,
and error-free code using the existing library, the library tends to
promote programming styles that lead to mysterious failures if a
result is too big for the provided array.

when all it does is provide even *more* functions that require "big
enough" character arrays with programmer-specified values, thus promoting
the "mysterious failure" programming style it claims to be trying to
avoid!
Anyway, I am just wondering if anybody knows about the status of this
proposal. I would also like to read some opinions.

I hope this was useful to you.

-Arthur
Nov 14 '05 #7

"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message

It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
any other language.

C makes it very easy to address memory illegally. This problem can be solved
by using another language, at the cost of some runtime inefficiency and loss
of simplicity.
What no language and no compiler can solve is the logic error. If I am
writing control software for an aircraft, and I accidentally use a sine
rather than a cosine in some vital calculation, it will not be picked up
except through testing, or when the aeroplane crashes.

Nov 14 '05 #8
Arthur J. O'Dwyer wrote:
On Wed, 7 Jul 2004, Roman Ziak wrote:
I started to search internet and found few links, and the following
proposal
http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf
[...] The 'bsearch_s' function is interesting, but of course it doesn't
add any functionality to the library that didn't already exist in C99
(I don't know whether C90 guaranteed that 'key' would always be the
first argument to 'compar').
It did.
The 'qsort_s' function is no better than the existing 'qsort'; it
guarantees neither O(NlgN) sorting time nor stable sorting. What
it *does* do is add unnecessary complexity; perhaps the 'context'
argument is an alternative to "locales" in C99? I don't know. It's
certainly not any improvement on the existing C99 functions.

I agree with much of what you wrote, but I think you've overlooked the
usefulness of the 'context' argument. In my experience, a 'context'
argument is an essential part of any properly-designed interface
involving a callback function, allowing customization of the behaviour
of the comparison function at runtime. The lack of it is, I think,
the one major defect in the specification of qsort().

An example is perhaps the easiest way to show this: suppose you want
to sort an array of elements of structure type:

struct element {
    int id;
    char *strings[3];
} elements[] = { ... };

Further, you want to allow sorting by a particular member, say
strings[0], strings[1] or strings[2], and to provide the option of
sorting in forward or reverse. One way to do this is to provide two
separate comparison functions for each possibility:

int compare_zero(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return strcmp(left->strings[0], right->strings[0]);
}

int compare_one(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return strcmp(left->strings[1], right->strings[1]);
}

int compare_two(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return strcmp(left->strings[2], right->strings[2]);
}

int compare_r_zero(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return -strcmp(left->strings[0], right->strings[0]);
}

int compare_r_one(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return -strcmp(left->strings[1], right->strings[1]);
}

int compare_r_two(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return -strcmp(left->strings[2], right->strings[2]);
}

The appropriate comparison function can then be selected at runtime by
an if-else ladder, or an array of pointers to the functions, etc.
Duplicating the comparison code in this way is pretty inelegant,
though, besides being a maintenance burden. Obviously, it's desirable
to replace the almost-identical functions above with a single
function, and parameterize the hard-coded indexes and minus operator.

int compare(const void *l, const void *r)
{
    const struct element *left = l, *right = r;
    return sign * strcmp(left->strings[index], right->strings[index]);
}

The question, of course, is "Where do 'sign' and 'index' come from?"
They could be global variables, but this is undesirable for a number
of reasons: giving up thread safety, re-entrancy, modularity, etc.
The ideal thing would be to pass them as parameters, but qsort()
provides no mechanism for doing this: the signature of the comparison
function is fixed, and only allows two arguments to be passed. Now, I
hope, the purpose of the 'context' parameter starts to become clear.
We can pass data of any type through 'context', so the single function
becomes easy to write:

struct context {
    int sign;
    int index;
};

int compare_ctxt(const void *l, const void *r, void *context)
{
    const struct element *left = l, *right = r;
    const struct context *ctx = context;
    return ctx->sign * strcmp(left->strings[ctx->index],
                              right->strings[ctx->index]);
}

Having dispensed with the plethora of functions, there's no longer any
need for a runtime selection of comparison function, so the code that
invokes qsort() (or rather, qsort_s()) becomes much simpler as well.
Borrowing some syntax from C99:

qsort_s(elements,
        sizeof elements / sizeof elements[0],
        sizeof elements[0],
        compare_ctxt,
        &(struct context){direction, index});

This is essentially the same as the concept of a "closure" in more
functionally-inclined languages, albeit quite a bit more explicit.
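Since qsort_s itself is not widely available, the whole pattern can be demonstrated portably with a hand-rolled sort that threads a context pointer through to the comparison function. Everything below (the names, the signatures, the insertion sort standing in for qsort_s) is an illustrative sketch, not the proposal's API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct ctx { int sign; int index; };

struct elem { int id; const char *strings[3]; };

/* Three-argument comparator: the context selects the field and the
 * direction at runtime, replacing six hard-coded functions. */
static int cmp_elem(const void *l, const void *r, void *context)
{
    const struct elem *a = l, *b = r;
    const struct ctx *c = context;
    return c->sign * strcmp(a->strings[c->index], b->strings[c->index]);
}

/* Minimal context-aware sort (insertion sort) standing in for
 * qsort_s; only handles elements up to 64 bytes in this sketch. */
static void sort_ctx(void *base, size_t n, size_t size,
                     int (*cmp)(const void *, const void *, void *),
                     void *context)
{
    char *p = base, tmp[64];
    if (size > sizeof tmp)
        return;
    for (size_t i = 1; i < n; i++) {
        memcpy(tmp, p + i * size, size);
        size_t j = i;
        while (j > 0 && cmp(p + (j - 1) * size, tmp, context) > 0) {
            memcpy(p + j * size, p + (j - 1) * size, size);
            j--;
        }
        memcpy(p + j * size, tmp, size);
    }
}
```

The same element array can then be sorted forward or in reverse, or by a different string index, just by changing the context structure between calls.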

Jeremy.
Nov 14 '05 #9

On Wed, 7 Jul 2004, Jeremy Yallop wrote:

Arthur J. O'Dwyer wrote:
On Wed, 7 Jul 2004, Roman Ziak wrote:
http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf
<snip> The 'qsort_s' function is no better than the existing 'qsort'; it
guarantees neither O(NlgN) sorting time nor stable sorting. What
it *does* do is add unnecessary complexity; perhaps the 'context'
argument is an alternative to "locales" in C99? I don't know. It's
certainly not any improvement on the existing C99 functions.
I agree with much of what you wrote, but I think you've overlooked the
usefulness of the 'context' argument. In my experience, a 'context'
argument is an essential part of any properly-designed interface
involving a callback function, allowing customization of the behaviour
of the comparison function at runtime. The lack of it is, I think,
the one major defect in the specification of qsort().

[Snip example: sorting structs by fields 0,1,2, forward and reverse]
qsort_s(elements,
        sizeof elements / sizeof elements[0],
        sizeof elements[0],
        compare_ctxt,
        &(struct context){direction, index});
Yes, I recognized that this was the intended usage, I just didn't
see any good reason to want to do this. You say you've had reason to
do this before? Well, perhaps it is useful, then, just not to me. ;)

I would pessimistically think the effort spent initializing
'direction' and 'index' ("on location," before each and every call
to the "contextual" qsort) would often exceed the amount of effort needed
to write six or seven specialized comparison functions in the first
place.
This is essentially the same as the concept of a "closure" in more
functionally-inclined languages, albeit quite a bit more explicit.

Yup.

-Arthur
Nov 14 '05 #10
"Malcolm" <ma*****@55bank.freeserve.co.uk> writes:
[...]
What no language and no compiler can solve is the logic error. If I am
writing control software for an aircraft, and I accidentally use a sine
rather than a cosine in some vital calculation, it will not be picked up
except through testing, or when the aeroplane crashes.

Or code review. If you're writing airplane control software without
doing code review, remind me not to fly in your airplanes. (That's a
general comment, not directed at Malcolm.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #11
Arthur J. O'Dwyer wrote:
On Wed, 7 Jul 2004, Jeremy Yallop wrote:
[Snip example: sorting structs by fields 0,1,2, forward and reverse]
qsort_s(elements,
        sizeof elements / sizeof elements[0],
        sizeof elements[0],
        compare_ctxt,
        &(struct context){direction, index});

I would pessimistically think the effort spent initializing
'direction' and 'index' ("on location," before each and every call
to the "contextual" qsort) would often exceed the amount of effort needed
to write six or seven specialized comparison functions in the first
place.

That's quite possibly true for the fairly simple example I gave, but
in some situations it's impossible to write a comparison function for
each case. Consider the case where the number of "fields" is not
known until runtime (i.e. the array member in the snipped example is
dynamically allocated).

The context argument is just the next logical step in the
parameterization of qsort(). The qsort() function is much more useful
than it would be if its behaviour couldn't be customized by compar().
The context argument is to compar() as compar() is to qsort().

Jeremy.
Nov 14 '05 #12
Very well written Jeremy.

A very convincing argument, well explained.

Kudos for that message.

Nov 14 '05 #13

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> a écrit dans le message de
news:Pi**********************************@unix40.andrew.cmu.edu...

On Wed, 7 Jul 2004, Jeremy Yallop wrote:
I would pessimistically think the effort spent initializing
'direction' and 'index' ("on location," before each and every call
to the "contextual" qsort) would often exceed the amount of effort needed
to write six or seven specialized comparison functions in the first
place.

If your compiler doesn't support default arguments, then
you can always memset it to zero. If the function
is well constructed, a default initialization should be easy.

Nov 14 '05 #14

"Keith Thompson" <ks***@mib.org> wrote
"Malcolm" <ma*****@55bank.freeserve.co.uk> writes:
[...]
What no language and no compiler can solve is the logic error. If I am
writing control software for an aircraft, and I accidentally use a sine
rather than a cosine in some vital calculation, it will not be picked up
except through testing, or when the aeroplane crashes.

Or code review. If you're writing airplane control software without
doing code review, remind me not to fly in your airplanes. (That's a
general comment, not directed at Malcolm.)

Fortunately all my aeroplanes are virtual video game ones. However, we
don't do code reviews; we just play it, and if it plays for a reasonable
length of time without falling over, release it.
Nov 14 '05 #15
Greetings,

Malcolm wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
any other language.

C makes it very easy to address memory illegally. This problem can be solved
by using another language, at the cost of some runtime inefficiency and loss
of simplicity.

No, implementations of C make it very easy to address memory illegally.
I've not read anything in the standard that prohibits an implementation
from actually enforcing the rules.

I've given this a lot of thought of late & don't think it would be that
terribly difficult to add proper bounds checking to a good compiler.

--
Kyle A. York
Sr. Subordinate Grunt
DSBU

Nov 14 '05 #16
"Guillaume" <"grsNOSPAM at NOTTHATmail dot com"> wrote in message
news:40*********************@news.club-internet.fr...
I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile an existing valid project, I got a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated, etc. I opened STDIO.H
and figured out that one has to define the macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.
1. And what exactly makes you think that any Microsoft tantrum is
going to mean anything valid in the C world?

Microsoft is the world's second-biggest software developer, and their
main product is written mostly in C and C++. The most sensitive part
of their product - the kernel - is written almost entirely in C. Yes,
WinNT used to crash a lot, but I have seen WinXP crash maybe once or
twice in my career, and that was due to a third-party driver.

What makes you think MS does not mean anything in the C world? Do you
live in a cave?
2. Since when is Microsoft a well-known authority on secure
programming? There is so much evidence of the opposite,
it's somewhat boggling...

Everybody learns from mistakes, and so does this company. And they have
had a plethora of opportunities.
Nov 14 '05 #17

On Wed, 7 Jul 2004, Roman Ziak wrote:

"Guillaume" <"grsNOSPAM at NOTTHATmail dot com"> wrote:
1. And what exactly makes you think that any Microsoft tantrum is
going to mean anything valid in the C world?

Microsoft is the world's second-biggest software developer, and their
main product is written mostly in C and C++. The most sensitive part
of their product - the kernel - is written almost entirely in C.

How do *you* know? ;) (And BTW, your lines are too long. Stick to
75 characters wide for Usenet, please.)

Yes, WinNT used to crash a lot, but I have seen WinXP crash maybe
once or twice in my career, and that was due to a third-party driver.
Lucky you. WinXP Professional crashes about twice a week at work,
and more, lately. At home it's less of a problem, but it still crashes
every so often. ...OTOH and besides, crashing is a heck of a lot better
than the alternative, from a *security* point of view!
What makes you think MS does not mean anything in the C world? Do you
live in a cave?

The Unix/Linux/network/mainframe/embedded/portable cave, yeah,
probably. Microsoft certainly doesn't mean anything in the world
of C standards, and it doesn't mean a whole lot more even in the
world of C applications programming. Just because they hire a lot
of programmers doesn't make them relevant. ;)

2. Since when is Microsoft a well-known authority on secure
programming? There is so much evidence of the opposite,
it's somewhat boggling...

Everybody learns from mistakes, and so does this company. And they have
had a plethora of opportunities.

You seriously believe that, do you? Check Google News recently;
even their bugfixes apparently need bugfixes!

-Arthur
Nov 14 '05 #18
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote in message
news:Pi***********************************@unix50.andrew.cmu.edu...

On Wed, 7 Jul 2004, Roman Ziak wrote:

"Guillaume" <"grsNOSPAM at NOTTHATmail dot com"> wrote:

1. And what exactly makes you think that any Microsoft tantrum is
going to mean anything valid in the C world?

Microsoft is the world's second-biggest software developer, and their
main product is written mostly in C and C++. The most sensitive part
of their product - the kernel - is written almost entirely in C.

How do *you* know? ;) (And BTW, your lines are too long. Stick to

How do *I* know... hmmm... everybody knows.

I'll give you a hint: if you have the DDK, try searching for C and CPP
files in the examples. In my case - version 3790 - it is 711 C files,
233 CPP files, and 2 ASM files (which belong to a DOS app anyway).

Look into the headers and tell me how many classes you find in the DDK
files. I found 2 files out of 111 which have a 'class'.
What makes you think MS does not mean anything in the C world? Do you live in a cave?

The Unix/Linux/network/mainframe/embedded/portable cave, yeah,
probably. Microsoft certainly doesn't mean anything in the world
of C standards, and it doesn't mean a whole lot more even in the
world of C applications programming. Just because they hire a lot
of programmers doesn't make them relevant. ;)

I do not agree.
Everybody learns from mistakes, and so does this company. And they have
had a plethora of opportunities.

You seriously believe that, do you? Check Google News recently;
even their bugfixes apparently need bugfixes!

Almost every piece of more sophisticated software contains bugs. The
ones used by more users will expose them much more often. And because
98% of people (Google statistics) use Windows, there will theoretically
be 50x more virus crackers searching for security holes in Windows than
in other systems.

So to answer your question: I believe that people and companies learn
from mistakes.

Btw, I have tried Linux; I used it for a short time, and the apps liked
crashing very much too.

Anyway, I am not going to defend MS here; my post was about the new
proposal. I liked your first post, because it was to the point, unlike
Guillaume's, which was just cheap ridicule. I have no interest and no
time to get into arguments about Windows vs. Unix or MS vs. everybody.
I have an interest in C, and that's why I subscribed to this group. To
discuss C.

Roman
Nov 14 '05 #19
Jeremy Yallop <je****@jdyallop.freeserve.co.uk> wrote:
Arthur J. O'Dwyer wrote:
On Wed, 7 Jul 2004, Roman Ziak wrote:
http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf
The 'qsort_s' function is no better than the existing 'qsort'; it
guarantees neither O(NlgN) sorting time nor stable sorting. What
it *does* do is add unnecessary complexity; perhaps the 'context'
argument is an alternative to "locales" in C99? I don't know. It's
certainly not any improvement on the existing C99 functions.
I agree with much of what you wrote, but I think you've overlooked the
usefulness of the 'context' argument. In my experience, a 'context'
argument is an essential part of any properly-designed interface
involving a callback function, allowing customization of the behaviour
of the comparison function at runtime. The lack of it is, I think,
the one major defect in the specification of qsort().

Having dispensed with the plethora of functions, there's no longer any
need for a runtime selection of comparison function, so the code that
invokes qsort() (or rather, qsort_s()) becomes much simpler as well.

Yes... but what does that have to do with discouraging "programming
styles that lead to mysterious failures if a result is too big for the
provided array" and promoting "promote safer, more secure programming"?
It solves a completely different problem, and should not have been in
_this_ proposal.

Richard
Nov 14 '05 #20
On Wed, 7 Jul 2004, Keith Thompson wrote:

KT>"Malcolm" <ma*****@55bank.freeserve.co.uk> writes:
KT>[...]
KT>> What no language and no compiler can solve is the logic error. If I am
KT>> writing control software for an aircraft, and I accidentally use a sine
KT>> rather than a cosine in some vital calculation, it will not be picked up
KT>> except through testing, or when the aeroplane crashes.
KT>
KT>Or code review. If you're writing airplane control software without
KT>doing code review, remind me not to fly in your airplanes. (That's a
KT>general comment, not directed at Malcolm.)

Even code review or using a 'safe' language doesn't help when the problem
specification is already broken. Remember the Ariane 5 crash (software
written in Ada)? Or the Airbus crash in Warsaw?

What you need is a correct problem specification, a correct design, and a
correct implementation done by good programmers in a language carefully
chosen for the problem. Easy, isn't it?

harti
Nov 14 '05 #21
"Malcolm" <ma*****@55bank.freeserve.co.uk> wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message

It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
any other language.
C makes it very easy to address memory illegally. This problem can be solved
by using another language, at the cost of some runtime inefficiency and loss
of simplicity.

It can be solved even more simply by doing what real professionals do:
think before you code, create a solid design, and _do your bloody
bookkeeping_. _You_ (general, not Malcolm specifically) are the one who
writes the program, secure or insecure. If you can't keep tabs on
something as simple as the length of an array, how can you be trusted
with something as complicated as, ooh, a file system?
What no language and no compiler can solve is the logic error. If I am
writing control software for an aircraft, and I accidentally use a sine
rather than a cosine in some vital calculation, it will not be picked up
except through testing, or when the aeroplane crashes.

True; but even that can be alleviated by (gnashing of teeth! rain of
fire!) having someone else proof-read it...

Richard
Nov 14 '05 #22
In <1089240287.803358@sj-nntpcache-3> kyle york <ky***@cisco.com> writes:
Greetings,

Malcolm wrote:
"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message
It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
any other language.

C makes it very easy to address memory illegally. This problem can be solved
by using another language, at the cost of some runtime inefficiency and loss
of simplicity.

No, implementations of C make it very easy to address memory illegally.
I've not read anything in the standard that prohibits an implementation
from actually enforcing the rules.

What rules? You can convert any integer to a pointer value and the
language cannot tell whether the result is a valid pointer value or not.
I've given this a lot of thought of late & don't think it would be that
terribly difficult to add proper bounds checking to a good compiler.

Think harder. Review the answer I gave to Jacob Navia, on this topic,
in this very newsgroup, several months ago.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #23
In <ln************@nuthaus.mib.org> Keith Thompson <ks***@mib.org> writes:
Or code review. If you're writing airplane control software without
doing code review, remind me not to fly in your airplanes. (That's a
general comment, not directed at Malcolm.)

How do you know who wrote the programs of the airplanes you're flying
with? ;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #24
"Roman Ziak" <ro***@nospam.com> wrote:
"Guillaume" <"grsNOSPAM at NOTTHATmail dot com"> wrote in message
news:40*********************@news.club-internet.fr...
I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile an existing valid project, I got a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated, etc. I opened STDIO.H
and figured out that one has to define the macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.
1. And what exactly makes you think that any Microsoft tantrum is
going to mean anything valid in the C world?

Microsoft is the world's second-biggest software developer.

Oh? So who's the first? IBM are a large computer manufacturer, but they
don't sell as much software as Microsoft.
Yes, WinNT used to crash a lot, but I have seen WinXP crash maybe once
or twice in my career, and that was due to a third-party driver.
My mileage varies. The greatest culprits seem to be Word and Internet
Exploder, both M$products. What makes you think MS does not mean anything in C world ? Their own behaviour, and their attitude towards anything Standard - _any_ standard, not just C. Do you live in the cave ? No, they do. Unfortunately, it's a great honking big cave, and there's a lot of sheep in there. They don't yet know that Microsoft is a reincarnation of Polyphemus. 2. Since when Microsoft is a well known authority for secure programming? There is so much evidence of the opposite, it's somewhat boggling... Everybody learns from mistakes and so does this company. Erm... no. They learn to cover up better, but if they _really_ learned, they wouldn't keep overrunning buffers. Richard Nov 14 '05 #25 In <40***************@news.individual.net> rl*@hoekstra-uitgeverij.nl (Richard Bos) writes: "Roman Ziak" <ro***@nospam.com> wrote: Microsoft is world seconds biggest software developper. ^^^^^^^^^^^^^^^^^^^ Oh? So who's the first? IBM are a large computer manufacturer, but they don't sell as much software as Microsoft. ^^^^ Developing software and selling software is not exactly the same thing. I've no idea what metric could be used in order to make a top of the software developers, so Roman's statement is rather vacuous to me. Dan -- Dan Pop DESY Zeuthen, RZ group Email: Da*****@ifh.de Nov 14 '05 #26 "Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message news:40***************@news.individual.net... 2. Since when Microsoft is a well known authority for secure programming? There is so much evidence of the opposite, it's somewhat boggling... Everybody learns from mistakes and so does this company. Erm... no. They learn to cover up better, but if they _really_ learned, they wouldn't keep overrunning buffers. The Secure C Library proposed to WG14 by Microsoft does indeed stem from what they *learned* in eliminating buffer overruns (among other security lapses) from their code. P.J. Plauger Dinkumware, Ltd. 
http://www.dinkumware.com
Nov 14 '05 #27
"Malcolm" <ma*****@55bank.freeserve.co.uk> wrote in message
news:cc*********@news8.svr.pol.co.uk...

"Keith Thompson" <ks***@mib.org> wrote

"Malcolm" <ma*****@55bank.freeserve.co.uk> writes:
[...]

What no language and no compiler can solve is the logic error. If I am
writing control software for an aircraft, and I accidentally use a sine
rather than a cosine in some vital calculation, it will not be picked
up except through testing, or when the aeroplane crashes.

Or code review. If you're writing airplane control software without
doing code review, remind me not to fly in your airplanes. (That's a
general comment, not directed at Malcolm.)

Fortunately all my aeroplanes are virtual video game ones. However we
don't do code reviews, we just play it and if it plays for a reasonable
length of time without falling over, release it.

Cool. That's exactly how we handled code reviews at various places I've
worked. Writing applications for government accounting, utility (e.g.
electric, gas) mapping, building design, etc. The major difference is
that the developers didn't try "playing" much at all. We left that for
the support guy and the customers. Who needs code reviews when there
are customers, who are quite willing to tell you about any problems
they find? Besides, we were generally too busy, trying to work around
fundamental design flaws. And find the causes of those bugs the
customers reported. We had no time for code reviews.
Nov 14 '05 #28
Greetings,

Dan Pop wrote:

In <1089240287.803358@sj-nntpcache-3> kyle york <ky***@cisco.com>
writes:

No, implementations of C make it very easy to address memory illegally.
I've not read anything in the standard that prohibits an implementation
from actually enforcing the rules.

What rules? You can convert any integer to a pointer value and the
language cannot tell whether the result is a valid pointer value or
not.

So you're saying undefined behaviour is undefined. What's new?
Nothing prevents the compiler from emitting code that will
trap/crash/burn in this case. This is assuming I understand 6.3.2.3
paragraphs 6 & 7. The way I read this the implementation is allowed to
say this results in an invalid pointer. Simple enough.

I've given this a lot of thought of late & don't think it would be that
terribly difficult to add proper bounds checking to a good compiler.

Think harder. Review the answer I gave to Jacob Navia, on this topic,
in this very newsgroup, several months ago.

You and Jacob have had many threads in the past few months & I remember
many of them. Please give an example of how it would be impossible to
implement bounds checking. I've yet to come up with a scenario that is
insurmountable. As I said before there's nothing in the language that
prevents an implementation that includes bounds checking. If I'm wrong,
please point me to chapter & verse.

--
Kyle A. York
Sr. Subordinate Grunt
Nov 14 '05 #29
In article <Ok********************@news20.bellglobal.com>,
Roman Ziak <ro***@nospam.com> wrote:

Almost every more sophisticated software contains bugs.

This need not be the case. There's a guy by the name of Donald Knuth
who's written at least one major software package that I know of and
use regularly, and I think a few others as well. I don't know of any
bugs in any of them. Any bugs that do exist are definitely NOT ones
that a mere mortal would come across in normal use.

There's no reason, other than unwillingness or inability to do the job
right, that implies that software *should* have bugs. (Note that the
inability to do the job right may not actually be the fault of the
programmers; this doesn't mean it becomes excusable. Especially when it
*is* the fault of the programmers.)

ObC: There's no reason why C code needs to have bugs, either. You just
need to be a little bit more careful, the same way you need to be more
careful with a chainsaw than with a screwdriver.
dave

--
Dave Vandervies
dj******@csclub.uwaterloo.ca
What turned me into such a stubborn smartass might be a fruitful area
of investigation... --Brenda
Nov 14 '05 #30

> Cool. That's exactly how we handled code reviews at various places
> I've worked. Writing applications for government accounting, utility
> (e.g. electric, gas) mapping, building design, etc. The major
> difference is that the developers didn't try "playing" much at all.
> We left that for the support guy and the customers. Who needs code
> reviews when there are customers, who are quite willing to tell you
> about any problems they find? Besides, we were generally too busy,
> trying to work around fundamental design flaws. And find the causes
> of those bugs the customers reported. We had no time for code reviews.

;-)
Nov 14 '05 #31
kyle york <ky***@cisco.com> writes:
[...]

Please give an example of how it would be impossible to implement
bounds checking. I've yet to come up with a scenario that is
insurmountable. As I said before there's nothing in the language that
prevents an implementation that includes bounds checking. If I'm wrong,
please point me to chapter & verse.

I think (but I'm not certain) that reliable bounds checking could be
provided by a C implementation, but there would be a significant cost.

The simplest way to do it would be to use "fat pointers". For example,
a char* might consist of three elements:

The base address of an object, created either by an object definition
or by a call to an allocation function like malloc();
The size of the object, in bytes; and
An offset, in bytes.

(For pointers to larger types, the size and offset could be measured
either in bytes or in larger units, whichever turns out to be more
efficient.)

Pointer arithmetic (including array indexing) would operate on the
offset, and would trap if the result is outside the known bounds of the
base object.
Any operation on a pointer would check whether the base address is
non-null, and whether the offset is within the bounds of the base
object.

The drawbacks are that the resulting code would be slower, pointers
would take up more space, and many useful instances of undefined
behavior (in non-portable code) would cause traps.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #32
In <1089303659.591128@sj-nntpcache-5> kyle york <ky***@cisco.com> writes:

Greetings,

Dan Pop wrote:

In <1089240287.803358@sj-nntpcache-3> kyle york <ky***@cisco.com>
writes:

No, implementations of C make it very easy to address memory illegally.
I've not read anything in the standard that prohibits an implementation
from actually enforcing the rules.

What rules? You can convert any integer to a pointer value and the
language cannot tell whether the result is a valid pointer value or
not.

So you're saying undefined behaviour is undefined. What's new?

Nothing prevents the compiler from emitting code that will
trap/crash/burn in this case. This is assuming I understand 6.3.2.3
paragraphs 6 & 7. The way I read this the implementation is allowed to
say this results in an invalid pointer. Simple enough.

Unless the address resulting from the conversion is the address of an
object. This adds a bit of complication to the issue.

I've given this a lot of thought of late & don't think it would be that
terribly difficult to add proper bounds checking to a good compiler.

Think harder. Review the answer I gave to Jacob Navia, on this topic,
in this very newsgroup, several months ago.

You and Jacob have had many threads in the past few months & I remember
many of them. Please give an example of how it would be impossible to
implement bounds checking. I've yet to come up with a scenario that is
insurmountable.
As I said before there's nothing in the language that prevents an
implementation that includes bounds checking. If I'm wrong, please
point me to chapter & verse.

What part of "Review the answer I gave to Jacob Navia, on this topic,
in this very newsgroup, several months ago" was too difficult for you
to understand? I'm not saying that it is impossible, I'm saying that
you're way too optimistic when you say that it's not terribly
difficult.

BTW, Jacob gave up the idea after reading my answer ;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #33
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...

kyle york <ky***@cisco.com> writes:
[...]

Please give an example of how it would be impossible to implement
bounds checking. I've yet to come up with a scenario that is
insurmountable. As I said before there's nothing in the language that
prevents an implementation that includes bounds checking. If I'm wrong,
please point me to chapter & verse.

I think (but I'm not certain) that reliable bounds checking could be
provided by a C implementation, but there would be a significant cost.
The simplest way to do it would be to use "fat pointers".

This is the solution I have used in my string library.

For example, a char* might consist of three elements:

The base address of an object, created either by an object definition
or by a call to an allocation function like malloc();
The size of the object, in bytes; and
An offset, in bytes.

My "fat" pointers consist of a length, a pointer, and a pointer to the
base object. Each time the pointer is moved, the implementation checks
that it stays within the bounds of the original string object. This is
done dynamically, i.e. at run time.

Pointer arithmetic (including array indexing) would operate on the
offset, and would trap if the result is outside the known bounds of the
base object.
Any operation on a pointer would check whether the base address is
non-null, and whether the offset is within the bounds of the base
object.

This is what my string library does.

The drawbacks are that the resulting code would be slower, pointers
would take up more space, and many useful instances of undefined
behavior (in non-portable code) would cause traps.

I have never really measured, since the first implementation of the
library is designed as a proof of concept, not as the final version.
The one measurement I did was the cost of function calls. On a 1.5 GHz
P4 it would take several millions of calls to slow down the program
just one second. The speed penalty is quite small, and for most
purposes negligible.
Nov 14 '05 #34
On Thu, 8 Jul 2004, Keith Thompson wrote:

kyle york <ky***@cisco.com> writes:
[...]

Please give an example of how it would be impossible to implement
bounds checking. I've yet to come up with a scenario that is
insurmountable. As I said before there's nothing in the language that
prevents an implementation that includes bounds checking. If I'm wrong,
please point me to chapter & verse.

I think (but I'm not certain) that reliable bounds checking could be
provided by a C implementation, but there would be a significant cost.
The simplest way to do it would be to use "fat pointers".

I seem to recall objections centering around the way pointer
representations interact with, say, arrays of unsigned char and a few
ill-advised memcpys. Consider

  unsigned char *p;
  unsigned char foo[sizeof p];

  p = malloc(42);
  *p = PERFECTLY_FINE;
  memcpy(foo, &p, sizeof p);
  free(p);
  memcpy(&p, foo, sizeof p);
  *p = SAME_BITS_INVOLVED, BUT_INCORRECT;

Now, this is the kind of thing that can be handled perfectly well by a
clever malloc package... but I think there *were* other examples that
really couldn't be handled correctly 100% of the time.
The drawbacks are that the resulting code would be slower, pointers
would take up more space, and many useful instances of undefined
behavior (in non-portable code) would cause traps.

If it causes a trap, it's not very useful, is it now? ;) That last
objection is just saying that non-portable code is not portable to some
implementations --- and that's true by definition! The first two
objections are true enough, though.

Of course, you *could* use the Hypothetical Nice Implementation to test
and debug your code, and then move to the Real-World Dangerous
Implementation for release. It would just be one step more advanced
than the widespread "Debug Version/Release Version" paradigm.

-Arthur
Nov 14 '05 #35
Greetings,

Keith Thompson wrote:

kyle york <ky***@cisco.com> writes:
[...]

For example, a char* might consist of three elements:

The base address of an object, created either by an object definition
or by a call to an allocation function like malloc();
The size of the object, in bytes; and
An offset, in bytes.

I was thinking one more level of indirection -- a pointer has a
descriptor + offset. The descriptor has reference count, base, size,
and flags. The biggest problem at the moment is how to handle pointers
embedded in structures & unions, specifically if a structure is freed
while an embedded pointer is still valid.

Yes, this does lead to a code size & performance hit but I suspect it
would still be incredibly useful, especially for people learning C and
arguably even for most user applications considering the number of
hacks out there trying to prevent things like buffer overflow. If
there's a 10% performance hit but a guarantee of safety I'd buy it. An
added benefit would be garbage collection for free.

Anyway, my original point was simply that there's nothing *in the
language* that forbids safe pointers, it's just no one has bothered to
implement them.

--
Kyle A. York
Sr.
Subordinate Grunt
Nov 14 '05 #36
kyle york <ky***@cisco.com> wrote in message
news:<1089240287.803358@sj-nntpcache-3>...

Greetings,

Malcolm wrote:

"Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote in message

It isn't. Insecure programming is the rule anywhere rank amateurs or
poor professionals program in any language; it is not the rule where
real professionals or dedicated amateurs program, in C no more than in
Ada.

C makes it very easy to address memory illegally. This problem can be
solved by using another language, at the cost of some runtime
inefficiency and loss of simplicity.

No, implementations of C make it very easy to address memory illegally.
I've not read anything in the standard that prohibits an implementation
from actually enforcing the rules. I've given this a lot of thought of
late & don't think it would be that terribly difficult to add proper
bounds checking to a good compiler.

Depends. If you ask for static checking, quite easy[1], if runtime,
difficult but yes possible. But then again it can be implemented in
std C only if there is a universal appeal for it. And IMO we aren't
getting there anytime sooner. Actually many compilers _do_ indeed
support runtime checking, but that is only at debug time; release
builds don't have much of run time checking. But then again you
already knew this.

[1] Will it be able to detect?

  int arr[10];
  int x = 10;

  for ( int i = 0; i <= x; ++i ) {
    arr[i] = 0xCAFE;
  }

--
Imanpreet Singh Arora
isingh AT acm DOT org
Nov 14 '05 #37
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...

kyle york <ky***@cisco.com> writes:
[...]

Please give an example of how it would be impossible to implement
bounds checking. I've yet to come up with a scenario that is
insurmountable. As I said before there's nothing in the language that
prevents an implementation that includes bounds checking. If I'm wrong,
please point me to chapter & verse.
I think (but I'm not certain) that reliable bounds checking could be
provided by a C implementation, but there would be a significant cost.
The simplest way to do it would be to use "fat pointers". For example,
a char* might consist of three elements: .....

Actually, Microsoft's Secure C proposal is even simpler. It provides
augmented library functions for every case where a buffer size is not
explicitly spelled out in the existing calling sequence. That permits
code reviewers, the compiler, and the library code itself to do
considerably more checking -- without the need for widespread use of
fat pointers. Couple that with the inexpensive but effective stack
fences now generated by VC++ and you get quite a bit more reliability
for a remarkably small cost in performance and code complexity.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #38
"P.J. Plauger" <pj*@dinkumware.com> wrote in message
news:0i*******************@nwrddc01.gnilink.net...
[...]

Actually, Microsoft's Secure C proposal is even simpler. It provides
augmented library functions for every case where a buffer size is not
explicitly spelled out in the existing calling sequence. That permits
code reviewers, the compiler, and the library code itself to do
considerably more checking -- without the need for widespread use of
fat pointers. Couple that with the inexpensive but effective stack
fences now generated by VC++ and you get quite a bit more reliability
for a remarkably small cost in performance and code complexity.

What is "stack fence" ? Would it be swapping variables described in
http://blogs.msdn.com/tims/archive/2.../30/57439.aspx

I noticed in VC++ that it sometimes moves the stack pointer by approx
1k down, when calling certain functions and also swaps order of
arguments. I was not able to follow this even when stepping through
single instructions, the stack just changed all of a sudden when
entering the function.

Roman
Nov 14 '05 #39
"Roman Ziak" <ro***@nospam.com> wrote in message
news:ch*********************@news20.bellglobal.com ...
[...]

What is "stack fence" ? Would it be swapping variables described in
http://blogs.msdn.com/tims/archive/2.../30/57439.aspx

I noticed in VC++ that it sometimes moves the stack pointer by approx
1k down, when calling certain functions and also swaps order of
arguments. I was not able to follow this even when stepping through
single instructions, the stack just changed all of a sudden when
entering the function.

The article describes part of the machinery; there's a bit more.
Basically, the stack frames are organized so that it's much harder for
a buffer overrun to subvert a program and go unnoticed.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #40
Roman Ziak wrote:

Hello,

I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile existing valid project, I get a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated etc. I opened STDIO.H
and figured that one has to define a macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.

There is more when writing insecure programs: integer overflows,
unsignedness issues, corrupted heaps, etc. The use of strcpy() itself
isn't always unsafe; its use is unsafe when certain conditions are met.
It's a matter of just thinking before you type the lines of C. The path
that MS chooses doesn't give 100% secure programs, it just gives a
false sense that programs are more secure.

Things like stack protection (stackguard / propolice in gcc), or the
recent guard in MSVC, are good things, but don't match up with the best
thing: simply writing good code. I'm using valgrind at the moment, and
that simply teaches you to write better code: I'm seeing a decrease in
errors that lead to invalid memory reads / writes in my own code, and
you learn what actually happens, and where in the code it happens. That
beats all runtime stuff that prevents it from happening, since it
doesn't tell you why it is happening. Tools like valgrind tell that,
and help make better programmers.

Just my 2 cents.

Igmar
Nov 14 '05 #41
In article <Ok********************@news20.bellglobal.com>,
Roman Ziak <ro***@nospam.com> wrote:

Almost every more sophisticated software contains bugs.

In article <news:cc**********@rumours.uwaterloo.ca>
Dave Vandervies <dj******@csclub.uwaterloo.ca> writes:

This need not be the case. There's a guy by the name of Donald Knuth
who's written at least one major software package that I know of and
use regularly, and I think a few others as well.
I don't know of any bugs in any of them. ...

Programs like TeX and Metafont, for instance? DEK set up something
that few others would dare to do: anyone who found a bug (after the
initial release) got paid, with the amount doubling each time. I
happened to find the 9th or 10th bug in TeX, and have a now-expired
check (which I never cashed, of course :-) ) for either $5.12 or
$10.24 -- I forget which, and would have to find it; I always meant to
get around to framing it and hanging it on the wall, but have not, yet.

Any bugs that do exist are definitely NOT ones that a mere mortal
would come across in normal use.

(I found mine by reading the code -- one could command TeX to copy an
\hbox, and one particular sub-code was not handled in this "copy list
of nodes" function. The default case called TeX's internal "panic"
function.)

It does not take much experience as a programmer to realize that a
"double the reward for each successive bug" system makes releasing
buggy software a *very* expensive proposition. Bug #24 would be worth
$167 772.16, and bug #40 would be worth $10 995 116 277.76. Imagine
how much Microsoft would be paying out. :-)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #42
"Igmar Palsenberg" <ig***@non-existant.local> wrote in message
news:40***********************@news.xs4all.nl...

Roman Ziak wrote:

Hello,

I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile existing valid project, I get a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated etc. I opened STDIO.H
and figured that one has to define a macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.

There is more when writing insecure programs: integer overflows,
unsignedness issues, corrupted heaps, etc.

True. Other parts of Secure C address some of these issues, but nothing
takes the place of good design habits, careful code review, and
thorough testing.

The use of strcpy() itself isn't always unsafe; its use is unsafe when
certain conditions are met. It's a matter of just thinking before you
type the lines of C.
The path that MS chooses doesn't give 100% secure programs, it just
gives a false sense that programs are more secure.

If people were told, "Just use Secure C and you won't have to worry
your pretty head any further", I'd agree with you. Rather, the new
library is presented as part of a complete breakfast. It forces the
programmer to think more about the sizes of things, and it makes more
visible the decisions that programmer reached. "Just thinking" is all
you need to write safe code in assembly language, but you need to do a
helluva lot more thinking with error-prone tools than with safer
tools. Secure C is (just) a tool to help programmers think more
effectively about writing safe code.

Things like stack protection (stackguard / propolice in gcc), or the
recent guard in MSVC, are good things, but don't match up with the
best thing: simply writing good code.

Once again, you're setting up a straw man. *Nobody* is proposing that
Secure C or anything else (with the possible exception of Java
licensed directly from Sun) will eliminate the need to code carefully
and will automatically eliminate all bugs. But tools such as stack
protection do indeed match up well with other good disciplines. They
just don't replace them.

I'm using valgrind at the moment, and that simply teaches you to write
better code: I'm seeing a decrease in errors that lead to invalid
memory reads / writes in my own code, and you learn what actually
happens, and where in the code it happens. That beats all runtime
stuff that prevents it from happening, since it doesn't tell you why
it is happening. Tools like valgrind tell that, and help make better
programmers.

Yet another straw man. *Nobody* is proposing that runtime checks
should take the place of all other code-development tools.
I've been advocating good programming style for over 40 years now, and
I've always emphasized early checking over late bug detection; but I
also happily embrace any tools that mitigate the damage caused by bugs
even as late as runtime. The approaches are not in conflict.

Just my 2 cents.

Mine too.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Nov 14 '05 #43
In <1089309845.873975@sj-nntpcache-3> kyle york <ky***@cisco.com> writes:

Anyway, my original point was simply that there's nothing *in the
language* that forbids safe pointers, it's just no one has bothered to
implement them.

s/bothered/successfully managed/

Big difference.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #44
Chris Torek <no****@torek.net> writes:

It does not take much experience as a programmer to realize that a
"double the reward for each successive bug" system makes releasing
buggy software a *very* expensive proposition. Bug #24 would be worth
$167 772.16, and bug #40 would be worth $10 995 116 277.76.
Imagine how much Microsoft would be paying out. :-)

However, DEK's reward system doesn't work like that as far as I
know. Instead, he doubled the reward at each release of TeX.
Typically a release fixed more than one bug.
--
Ben Pfaff
email: bl*@cs.stanford.edu
web: http://benpfaff.org
Nov 14 '05 #45
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote in message news:<Pi***********************************@unix50 .andrew.cmu.edu>...
On Wed, 7 Jul 2004, Roman Ziak wrote:

"Guillaume" <"grsNOSPAM at NOTTHATmail dot com"> wrote:

1. And what exactly makes you think that any Microsoft's tantrum is
going to mean anything valid in the C world?
Microsoft is world seconds biggest software developper. And their main
product is written mostly in C and C++. The most sensitive part of
their product - kernel - is written almost entirely in C.

How do *you* know? ;) (And BTW, your lines are too long. Stick to
75 characters wide for Usenet, please.)

What do you bet Ada or Pascal or maybe COBOL?
Yes, WinNT used to crash a lot, but I've seen WinXP crash probably
once or twice in my career, and that was for a third party driver.

Lucky you. WinXP Professional crashes about twice a week at work,
and more, lately. At home it's less of a problem, but it still crashes
every so often. ...OTOH and besides, crashing is a heck of a lot better
than the alternative, from a *security* point of view!

I would say that it could possibly be because of some faulty driver.
What makes you think MS does not mean anything in C world ? Do you live in
the cave ?

The Unix/Linux/network/mainframe/embedded/portable cave, yeah,
probably. Microsoft certainly doesn't mean anything in the world
of C standards, and it doesn't mean a whole lot more even in the
world of C applications programming. Just because they hire a lot
of programmers doesn't make them relevant. ;)

Indeed it does not; it is just one of the members of the standards
committee, is it not?
2. Since when Microsoft is a well known authority for secure
programming? There is so much evidence of the opposite,
it's somewhat boggling...

Everybody learns from mistakes and so does this company. And they had
plethora opportunities.

You seriously believe that, do you? Check Google News recently;
even their bugfixes apparently need bugfixes!

So your Linux machine crashes once a century. Right? Well, wait: you
use Linux _once_ in a century. Every OS has its own bugs; just because
we get to _learn_ more about MS bugs does not mean Linux does not have
any. It just has fewer of them, probably. WinXP crashes more than
Linux because of three reasons, not one:

a) More people _care_ to crash Windows.
b) More people use it. Crashes are generally proportional to the
amount of time we use an OS; also, most people who use computers don't
know the basics of using a computer.
c) It has more bugs. Probably.

--
Imanpreet Singh Arora
isingh AT acm DOT org
Nov 14 '05 #46
Roman Ziak wrote:
Hello,

I just downloaded MS Visual Studio 2005 Express Beta. When I tried to
compile existing valid project, I get a lot of warnings like 'sprintf'
has been deprecated, 'strcpy' has been deprecated etc. I opened STDIO.H
and figured that one has to define a macro _CRT_SECURE_NO_DEPRECATE
to stop these warnings.

I started to search internet and found few links, and the following proposal

http://www.open-std.org/jtc1/sc22/wg...docs/n1031.pdf

After looking into Whidbey Beta header files I started liking this. This is
something I have been using already for static and local buffers using
macro with strncpy() and vsnprintf(), only this is better.

Although this feature should be invoked by defining _USE_SECURE_LIBS
and not be used by default, that's easy to fix in CRTDEFS.H.

Anyway, I am just wondering if anybody knows about the status of this
proposal. And also would like to read some opinions.

Roman

A secure C standard (perhaps with some input from the OpenBSD
developers) would be nice, since undefined behavior usually equals
exploitable.

Brian

--
* Remove x's to send me mail *
Nov 14 '05 #47

"kyle york" <ky***@cisco.com> wrote in message

Anyway, my original point was simply that there's nothing *in the
language* that forbids safe pointers, it's just no one has bothered to
implement them.

The language makes it very hard to implement them.

For instance consider this

char *p1 = malloc();
char *p2 = p1;

free(p1);
/* code must trap here */
foo(p2);

How are you going to prevent the programmer from doing this? If you trap,
that's almost as bad as an illegal memory write, and in any case you've got
to implement some scheme for marking p2 as invalid after the free().

You can of course eliminate a lot of overruns by implementing fat pointers
(each pointer contains the address itself and the upper and lower bound of
the object it points to). In some cases this might be worthwhile. However
the overhead may not be insignificant, and that leads you to the question
"why use C?" when we have languages that are designed to prevent direct
Nov 14 '05 #48
"Malcolm" <ma*****@55bank.freeserve.co.uk> wrote in message
news:cc**********@news5.svr.pol.co.uk...

"kyle york" <ky***@cisco.com> wrote in message

Anyway, my original point was simply that there's nothing *in the
language* that forbids safe pointers, it's just no one has bothered to
implement them.

The language makes it very hard to implement them.

For instance consider this

char *p1 = malloc();
char *p2 = p1;

free(p1);
/* code must trap here */
foo(p2);

This is very easy to catch and I believe several compilers including VC
do implement that at least in _Debug_ configuration:

1) the run-time keeps a list of free blocks and under certain circumstances
may check if the pointer being freed is on that list.

2) every valid memory block has a header which contains a _signature_.
There is still a slight chance of random data forming the signature, but
that is of much lesser chance. The method has almost no performance
hit, so it stays in effect even for the _Release_ configuration.

Roman
Nov 14 '05 #49

"Roman Ziak" <ro***@nospam.com> wrote in message
For instance consider this

char *p1 = malloc();
char *p2 = p1;

free(p1);
/* code must trap here */
foo(p2);
This is very easy to catch and I believe several compilers including VC
do implement that at least in _Debug_ configuration:

1) run-time keeps the list of free blocks and under certain circumstances
may check if the pointer being freed is on that list.

I don't think you've looked at the example carefully enough. The call to
free() is valid. However p2 contains a copy. Therefore the compiler has to
keep a list of every pointer which is assigned a value derived from p1, and
tag it as "invalidated".
2) every valid memory block has a header which contains a _signature_.
There is still a slight chance of random data forming the signature, but
that is of much lesser chance. The method has almost no performance
hit, so it stay in effect even for _Release_ configuration.

You can use this to check if a pointer points to the head of a block
allocated with malloc(). Unfortunately C allows you to increment the
pointer, also it allows pointers to allocated blocks and pointers to stack
items to be passed arbitrarily to functions. Worst of all it allows a
pointer to point to one past the end of a valid object, so the pointer is
legal but cannot be dereferenced.
So what this means is that foo() cannot have startup code that says "check
if pointer points to allocated block", because the parameter might easily be
another type of pointer. So the check has to be done in the calling
function.
Nov 14 '05 #50
