Bytes | Developer Community

Our OWN 'Deep C Secrets'

You know, all this furor over this book caused me to go look it up on
Amazon. I've never read this book... but from what I can see from the
legally available table of contents, excerpt, and index at Amazon, it
looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
MYSTERIES OF C!!" sort of affair.

I've got a better idea! Let's discuss some 'Deep C Secrets' of our
own! I'll start.

Testing - If you haven't tested to prove it right, it can almost
certainly be proven to be WRONG. It's terribly easy to screw
something up in C, causing buffer overruns and undefined behavior.
It's gotten to the point where I don't feel comfortable calling coding
done until the tests are written. There are too many places C can bite
you.

I wasn't this paranoid in other programming languages... you know, the
ones that take 1/10 the time to write, and run 100x slower, like VB.
Because you didn't have fine-grained control over the code, there were
fewer places to screw up something fundamental. But you paid for it in
loss of control, loss of speed, and a less stable development
environment, because you had to rely on OTHER programmers' buggy code.
If it's YOUR buggy code, you at least have a fighting chance of
figuring out what's wrong with it and fixing it.

Uh, look there. A pearl of wisdom! Grab it and start your OWN book.
:-P
Nov 14 '05 #1
Kamilche wrote:
You know, all this furor over this book caused me to go look it up on
Amazon. I've never read this book... but from what I can see from the
legally available table of contents, excerpt, and index at Amazon, it
looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
MYSTERIES OF C!!" sort of affair.

I've got a better idea! Let's discuss some 'Deep C Secrets' of our
own! I'll start.

Testing - If you haven't tested to prove it right, it can almost
certainly be proven to be WRONG. It's terribly easy to screw
something up in C, causing buffer overruns and undefined behavior.


Except you can almost *never* prove anything right by testing it. That
is a very dangerous mind set that is bound to screw you over sooner or
later. The only sure way to prove a program correct is to write a
mathematical proof. Testing only works if the size of the input is
bounded and it can all be tested. (But tests are good because they can
prove something is *wrong* with a piece of code.)
--
Daniel Sjöblom
Remove _NOSPAM to reply by mail
Nov 14 '05 #2
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> writes:
Except you can almost *never* prove anything right by testing it. That
is a very dangerous mind set that is bound to screw you over sooner or
later. The only sure way to prove a program correct is to write a
mathematical proof.
However, a mathematical proof created by a human can contain logical
fallacies, so you also have to prove the proof correct. And as the
proof of the proof can again contain logical fallacies, you also have
to prove it correct, and so on ad infinitum.
Testing only works if the size of the input is bounded and it can all
be tested. (But tests are good because they can prove something is
*wrong* with a piece of code.)


Given the choice between a 10-million-line program which comes
with a mathematical proof of correctness but has not been tested, and
a thoroughly tested 10-million-line program which has not been
proven correct, I'd trust the latter more.

As Stanford mathematician Donald E. Knuth once put it [1]: "Beware of
bugs in the above code; I have only proved it correct, not tried it." :)

Martin
[1] http://www-cs-faculty.stanford.edu/~knuth/faq.html

--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #3
Daniel Sjöblom wrote:
Kamilche wrote:

.... snip ...

Testing - If you haven't tested to prove it right, it can almost
certainly be proven to be WRONG. It's terribly easy to screw
something up in C, causing buffer overruns and undefined behavior.


Except you can almost *never* prove anything right by testing it.
That is a very dangerous mind set that is bound to screw you over
sooner or later. The only sure way to prove a program correct is
to write a mathematical proof. Testing only works if the size of
the input is bounded and it can all be tested. (But tests are
good because they can prove something is *wrong* with a piece of
code.)


The technique of loop invariants can prove some assertions about
various routines. They can also help to document the
preconditions required. Unfortunately virtually nobody bothers.
Having proved those results, they can then be used in larger
areas, provided you keep the overall system well and cleanly
structured.

--
fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". - Hutchison
Nov 14 '05 #4
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> wrote in message news:<40**********************@news.mbnet.fi>...
Except you can almost *never* prove anything right by testing it. That
is a very dangerous mind set that is bound to screw you over sooner or
later. The only sure way to prove a program correct is to write a
mathematical proof. Testing only works if the size of the input is
bounded and it can all be tested. (But tests are good because they can
prove something is *wrong* with a piece of code.)


Yes, that's true - testing doesn't prove the absence of errors, it
only stops the stupidest ones. A programmer who does boundary
testing, success and failure testing, and stress testing can rest
easier at night knowing they've made an honest effort.
Believe it or not, in my experience, most programmers DON'T unit test
their programs... they only do so if there's a bug so hard-to-find
that normal testing doesn't catch it. Sad, but true.

What's the testing policy in YOUR shop?
Nov 14 '05 #5

"Kamilche" <kl*******@home.com> wrote in message news:88**************************@posting.google.com...

Yes, that's true - testing doesn't prove the absence of errors, it
only stops the stupidest ones. A programmer who does boundary
testing, success and failure testing, and stress testing can rest
easier at night knowing they've made an honest effort.
Believe it or not, in my experience, most programmers DON'T unit test
their programs... they only do so if there's a bug so hard-to-find
that normal testing doesn't catch it. Sad, but true.


One thing that can undermine testing is management merely
adding discovered flaws to a list of bugs (or "issues") rather
than immediately fixing them.

Another is not automating the analysis of test output. It can
be too easy to take any plausible output as being correct,
especially for our own code. A former colleague once asked
me to help track down an elusive bug in his hash table code.
We reran all his unit tests, one of which involved storing and
retrieving one word beginning with each letter of the alphabet.
Fortunately, I knew that there should have been 26 words output,
not 25. The programmer was a lot brighter than I am but was
faced with the classic problem of proofreading his own work:
he saw what he knew should have been there.

--
John.
Nov 14 '05 #6
Martin Dickopp wrote:
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> writes:

Except you can almost *never* prove anything right by testing it. That
is a very dangerous mind set that is bound to screw you over sooner or
later. The only sure way to prove a program correct is to write a
mathematical proof.

However, a mathematical proof created by a human can contain logical
fallacies, so you also have to prove the proof correct.


Actually no. A proof cannot contain logical fallacies. If it did, it
wouldn't be a proof :-)

Testing only works if the size of the input is bounded and it can all
be tested. (But tests are good because they can prove something is
*wrong* with a piece of code.)

Given the choice between a 10-million-line program which comes
with a mathematical proof of correctness but has not been tested, and
a thoroughly tested 10-million-line program which has not been
proven correct, I'd trust the latter more.


You read too much into my comment. I never said it was an either or
choice. I'd prefer both tested and proved code.
--
Daniel Sjöblom
Remove _NOSPAM to reply by mail

Nov 14 '05 #7
On Sat, 8 May 2004, Kamilche wrote:
You know, all this furor over this book caused me to go look it up on
Amazon. I've never read this book... but from what I can see from the
legally available table of contents, excerpt, and index at Amazon, it
looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
MYSTERIES OF C!!" sort of affair.

I've got a better idea! Let's discuss some 'Deep C Secrets' of our
own! I'll start.

Testing - If you haven't tested to prove it right, it can almost
certainly be proven to be WRONG. It's terribly easy to screw
something up in C, causing buffer overruns and undefined behavior.
It's gotten to the point where I don't feel comfortable calling coding
done until the tests are written. There are too many places C can bite
you.
I'd have to disagree with this. Far too often I have seen people use
testing to make themselves feel good about their code. I believe testing
is just a way to audit your design. It is a way to make sure you did the
right thing. If you did not design the right thing then how are you going
to test it? In other words, if I didn't think to code to prevent X then
why would I test for X?

Design and attention to detail are important. Empirical testing cannot
give 100% certainty. The number of combinations in just a simple project
is incredibly high. If there is user input then the numbers sky-rocket.

You need to think about all the things the program is capable of. Define
what you want it to do for all those capabilities. Then testing becomes a
way to confirm you did what you planned.

If you have introduced undefined behaviour into your program, testing might
not catch it. One of the most common mistakes is that programmers test and
test and test and test. After testing to the point of insanity they decide
it is good and ship it. First problem: how many different configurations
did they test on? Probably one or two. Maybe ten to twenty different
configurations. Second problem: how many total person-hours went into
testing? Let's say 10 people testing every day for 8 hours. They test for
a month (20 business days) for a total of 1600 hours. You ship it and sell
1000 copies. In 2 hours there has been more 'testing' by the customers
than you did in a month. There will be a lot of duplication with the
customers' use, but within a month or two they will find things you did not.

If the program was badly designed or requirements were not well defined,
I'd consider testing the same as using a bucket to keep a leaky boat from
sinking.

Some companies do a 'beta' release just to get more empirical testing on
the product. To me this is using a bigger bucket.
I wasn't this paranoid in other programming languages... you know, the
ones that take 1/10 the time to write, and run 100x slower, like VB.
Because you didn't have fine-grained control over the code, there were
fewer places to screw up something fundamental. But you paid for it in
loss of control, loss of speed, and a less stable development
environment, because you had to rely on OTHER programmers' buggy code.
If it's YOUR buggy code, you at least have a fighting chance of
figuring out what's wrong with it and fixing it.

Uh, look there. A pearl of wisdom! Grab it and start your OWN book.
:-P


--
Send e-mail to: darrell at cs dot toronto dot edu
Don't send e-mail to vi************@whitehouse.gov
Nov 14 '05 #8
On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:
Martin Dickopp wrote:
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> writes:

Except you can almost *never* prove anything right by testing it. That
is a very dangerous mind set that is bound to screw you over sooner or
later. The only sure way to prove a program correct is to write a
mathematical proof.

However, a mathematical proof created by a human can contain logical
fallacies, so you also have to prove the proof correct.


Actually no. A proof cannot contain logical fallacies. If it did, it
wouldn't be a proof :-)


Nonsense. If you call it a proof, I have to disprove your logic to
contradict you. If I don't follow your logic, or if your logical method is
wrong in a way we both miss, your proof is worthless yet we will both
believe it is valid.

Running a suite of tests is (or should be) trivial, and it should catch
all obvious possible errors and all errors related to outliers in the
possible input sets. If errors persist, the only reasonable way to catch
them is through real-life testing and debugging. Which is what really
counts anyway: Real-life performance, not ivory tower mathematics.

In any case, how do we test airframes and cars and other physical objects?
Crash testing, proving grounds, and other engineering methods. Mathematics
only describes what we already know, and it can exhaustively search spaces
we have already tested physically. In an absurd case, that might even be
useful. Actual testing, on the other hand, tells us something new.

--
yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
To email me, rot13 and convert spelled-out numbers to numeric form.
"Makes hackers smile" makes hackers smile.

Nov 14 '05 #9
da*****@NOMORESPAMcs.utoronto.ca.com (Darrell Grainger) wrote in message news:<Pi*******************************@drj.pf>...
I'd have to disagree with this. Far too often I have seen people use
testing to make themselves feel good about their code... ... Second problem: how many total person-hours went into
testing? Let's say 10 people testing every day for 8 hours. They test for
a month (20 business days) for a total of 1600 hours. You ship it and sell
1000 copies. In 2 hours there has been more 'testing' by the customers
than you did in a month. There will be a lot of duplication with the
customers' use, but within a month or two they will find things you did not.

If the program was badly designed or requirements were not well defined,
I'd consider testing the same as using a bucket to keep a leaky boat from
sinking.


You can't artificially inflate the numbers to support your argument,
and use 'many customers will test for me' as a justification to get
out of doing unit testing. Unit testing is done by the programmer, not
users... and trying to 'pass the buck' to quality control or
end-users, is one of the reasons the software industry is in such bad
shape today. There's no way ten people SHOULD be testing every day for
8 hours, for a month, just to make up for a lazy programmer who
refuses to unit test. The programmer who wrote the code, should be the
programmer who writes the unit tests, and the tests should run
automatically, either at code compilation time, or once at startup.
Live human testers should be reserved for system integration tests,
the 'big picture' bugs that programmers don't often catch on their
own.

It sounds to me, like you're trying to feel good WITHOUT testing your
code... and you really shouldn't. You say your code works? Prove it.
When embedded in the source code, the test code serves double duty as
a usage example, as well, so it certainly won't be wasted effort.
Nov 14 '05 #10
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
On Tue, 11 May 2004, August Derleth wrote:

On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:
> Martin Dickopp wrote:
>> Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> writes:
>>> The only sure way to prove a program correct is to write a
>>> mathematical proof.
>>
>> However, a mathematical proof created by a human can contain logical
>> fallacies, so you also have to prove the proof correct.
>
> Actually no. A proof cannot contain logical fallacies. If it did, it
> wouldn't be a proof :-)


Nonsense. If you call it a proof, I have to disprove your logic to
contradict you.


Calling a tail a proof doesn't make it one. :)


True. So we can only conclude that no mathematical proofs are
guaranteed to exist. Anything created by a human being[*] as a proof
could contain logical fallacies, and is therefore not guaranteed to be
a proof. It is therefore futile to argue for proving a program correct,
if there is no guarantee that the result will indeed be a proof. :)

Martin

[*] This also applies recursively to machines etc. created by human
beings.
--
,--. Martin Dickopp, Dresden, Germany ,= ,-_-. =.
/ ,- ) http://www.zero-based.org/ ((_/)o o(\_))
\ `-' `-'(. .)`-'
`-. Debian, a variant of the GNU operating system. \_/
Nov 14 '05 #11
On Tue, 11 May 2004, Kamilche wrote:
da*****@NOMORESPAMcs.utoronto.ca.com (Darrell Grainger) wrote in message news:<Pi*******************************@drj.pf>...
I'd have to disagree with this. Far too often I have seen people use
testing to make themselves feel good about their code...

... Second problem: how many total person-hours went into
testing? Let's say 10 people testing every day for 8 hours. They test for
a month (20 business days) for a total of 1600 hours. You ship it and sell
1000 copies. In 2 hours there has been more 'testing' by the customers
than you did in a month. There will be a lot of duplication with the
customers' use, but within a month or two they will find things you did not.

If the program was badly designed or requirements were not well defined,
I'd consider testing the same as using a bucket to keep a leaky boat from
sinking.


You can't artificially inflate the numbers to support your argument,
and use 'many customers will test for me' as a justification to get
out of doing unit testing. Unit testing is done by the programmer, not
users... and trying to 'pass the buck' to quality control or
end-users, is one of the reasons the software industry is in such bad
shape today. There's no way ten people SHOULD be testing every day for
8 hours, for a month, just to make up for a lazy programmer who
refuses to unit test. The programmer who wrote the code, should be the
programmer who writes the unit tests, and the tests should run
automatically, either at code compilation time, or once at startup.
Live human testers should be reserved for system integration tests,
the 'big picture' bugs that programmers don't often catch on their
own.

It sounds to me, like you're trying to feel good WITHOUT testing your
code... and you really shouldn't. You say your code works? Prove it.
When embedded in the source code, the test code serves double duty as
a usage example, as well, so it certainly won't be wasted effort.


You have snipped a good deal of the original message. If you go back and
re-read the message you will see that I never said that you should not
test. I was trying to stress the point that testing just confirms your
design, and it is the design and requirements that are most important.

The impression I received from the original posting was that with the C
language you had to test more to make sure you caught defects. I was
saying that with the C language the DESIGN is more important. Without a
design, what are you testing? If you didn't do a design or didn't think
about something in your design, then how on earth are you going to think
to test for it?

Additionally, I was NOT saying that testing was pointless and you should
just rely on quality control or the end user to test your product. The
point I was trying to make was that if you rely on JUST testing and don't
have a solid design to guide your testing, you will fail. Random or even
ad hoc testing will not work. The numbers I was giving were to show that if
you rely on random or ad hoc testing, i.e. testing that is not guided by a
good design, then you might as well leave it to the customer to test for
you.

When I code a function I design and code it with testing in mind. By the
time the code is complete I already know all the tests that have to be
run. Testing is a by-product that comes out of a good design.

--
Send e-mail to: darrell at cs dot toronto dot edu
Don't send e-mail to vi************@whitehouse.gov
Nov 14 '05 #12
On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
<se*@sig.now> wrote:
Running a suite of tests is (or should be) trivial, and it should catch
all obvious possible errors and all errors related to outliers in the
possible input sets.


/Running/ the tests is indeed trivial. Defining all possible input sets is
the tricky part. For most programmes, that's an infinite set.

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
Nov 14 '05 #13
Martin Dickopp wrote:
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:

On Tue, 11 May 2004, August Derleth wrote:
On Mon, 10 May 2004 16:38:13 +0300, Daniel Sjöblom wrote:

Martin Dickopp wrote:

>Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> writes:
>
>>The only sure way to prove a program correct is to write a
>>mathematical proof.
>
>However, a mathematical proof created by a human can contain logical
>fallacies, so you also have to prove the proof correct.

Actually no. A proof cannot contain logical fallacies. If it did, it
wouldn't be a proof :-)

Nonsense. If you call it a proof, I have to disprove your logic to
contradict you.


Calling a tail a proof doesn't make it one. :)

True. So we can only conclude that no mathematical proofs are
guaranteed to exist. Anything created by a human being[*] as a proof
could contain logical fallacies, and is therefore not guaranteed to be
a proof. It is therefore futile to argue for proving a program correct,
if there is no guarantee that the result will indeed be a proof. :)


Bah. If you want to get metaphysical, you cannot be sure you are talking
to me, so the correctness of a proof should be the least of your
concerns with respect to certainty :-)

More seriously, you are just being defeatist. Of course I can make a
mistake in a proof, just like I can make many other mistakes in my life.
Does that mean I should do none of the things I may fail at and stay in
bed all day long instead? I'm frankly quite amazed at how many
programmers believe that testing is a replacement for thinking up front
(since you guys can't seem to stand the word proof.) How do you suppose
a programmer writing his own unit tests will write tests to test stuff
that he didn't think of while coding?

And again, I'm not discouraging testing, I'm merely pointing out that it
is not a silver bullet.
--
Daniel Sjöblom
Remove _NOSPAM to reply by mail
Nov 14 '05 #14
On Tue, 11 May 2004 22:27:52 +0100, Mark McIntyre wrote:
On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
<se*@sig.now> wrote:
Running a suite of tests is (or should be) trivial, and it should catch
all obvious possible errors and all errors related to outliers in the
possible input sets.


/Running/ the tests is indeed trivial. Defining all possible input sets is
the tricky part. For most programmes, that's an infinite set.


Indeed, and that's /another/ reason why proofs are rather futile: A proof
holds water only if you can either prove something about the complete set
of possible inputs and build from there, or if you can construct a proof
that will hold for all inputs no matter how cockeyed.

Testing the outliers, on the other hand, is stress testing, which is done
on a routine basis in every engineering shop worth its calipers. (You
can't test every possible wind velocity, so you test the most extreme
velocities you can until something breaks down. If the breakdown happens
at a too-low velocity, you fix it. If the breakdown happens at an absurdly
high velocity, you pat yourself on the back and record the tested limit
for that component or system.)

(Something else occurs to me: A proof is rather useless unless you can
prove things about the hardware. Things can't be virtual all the way down.)

--
yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
To email me, rot13 and convert spelled-out numbers to numeric form.
"Makes hackers smile" makes hackers smile.

Nov 14 '05 #15
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> wrote:
I'm frankly quite amazed at how many
programmers believe that testing is a replacement for thinking up front
(since you guys can't seem to stand the word proof.)


In practice, "proof" does not mean "thinking up front", it means
"justifying a posteriori". Most program proofs I've seen were nothing
but a display of the isomorphism of _this_ program to _that_ formal
specification, which specification was then blithely assumed to be
correct. Unfortunately, in general, that is inevitable; any proof will
be more complex than the algorithm it "proves", and therefore less, not
more, likely to be understandable and correct.

Richard
Nov 14 '05 #16
August Derleth <se*@sig.now> wrote in message news:<pa****************************@sig.now>...
On Tue, 11 May 2004 22:27:52 +0100, Mark McIntyre wrote:
On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
<se*@sig.now> wrote:
Running a suite of tests is (or should be) trivial, and it should catch
all obvious possible errors and all errors related to outliers in the
possible input sets.
/Running/ the tests is indeed trivial. Defining all possible input sets is
the tricky part. For most programmes, that's an infinite set.


Indeed, and that's /another/ reason why proofs are rather futile: A proof
holds water only if you can either prove something about the complete set
of possible inputs and build from there, or if you can construct a proof
that will hold for all inputs no matter how cockeyed.


proofs can deal with infinite things:

x^2 + y^2 = z^2

is saying something about *all* right-angled triangles. To test it
would involve an infinite set of cases. (Ignoring for the moment that
numbers are normally finite on computers.)
Testing the outliers, on the other hand, is stress testing, which is done
on a routine basis in every engineering shop worth its calipers. (You
can't test every possible wind velocity, so you test the most extreme
velocities you can until something breaks down. If the breakdown happens
at a too-low velocity, you fix it. If the breakdown happens at an absurdly
high velocity, you pat yourself on the back and record the tested limit
for that component or system.)
software is discrete rather than continuous. If your bridge works at
20 tons and 30 tons, it's likely OK at 25. Software isn't like this.
What is odd is that those people with the wind tunnel don't despise
mathematics, whilst much of the software world does.
(Something else occurs to me: A proof is rather useless unless you can
prove things about the hardware. Things can't be virtual all the way down.)


there have been attempts at formal descriptions of hardware. There was
even an attempt to prove a CPU formally correct (or construct one that
was formally correct). Viper, I believe. I think the Transputer was
partially proven correct (being partially non-formal is like being a
little bit pregnant). I understand modern hardware is "compiled" from
formal descriptions that look a bit like programming languages.
--
Nick Keighley
Nov 14 '05 #17
On Wed, 12 May 2004 04:51:12 -0600, in comp.lang.c , August Derleth
<se*@sig.now> wrote:
at a too-low velocity, you fix it. If the breakdown happens at an absurdly
high velocity, you pat yourself on the back and record the tested limit
for that component or system.)


no, you reduce the cost^wrobustness of the materials a bit... :-)
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html>
Nov 14 '05 #18
Richard Bos wrote:
Daniel Sjöblom <ds******@mbnet.fi_NOSPAM> wrote:

I'm frankly quite amazed at how many
programmers believe that testing is a replacement for thinking up front
(since you guys can't seem to stand the word proof.)

In practice, "proof" does not mean "thinking up front", it means
"justifying a posteriori". Most program proofs I've seen were nothing
but a display of the isomorphism of _this_ program to _that_ formal
specification, which specification was then blithely assumed to be
correct. Unfortunately, in general, that is inevitable; any proof will
be more complex than the algorithm it "proves", and therefore less, not
more, likely to be understandable and correct.


I cannot disagree with that.

But assuming a proof is too complex, there are still some uses for the
proof, correct or not:

1. Because it's hard to write a big proof, the program will tend to be as
modular as possible, so that each module can be proven separately.

2. A proof by its very nature requires that all the possible input is
defined (note, this is not equivalent to enumerating the input). This
naturally leads to code that documents its behaviour and possible
undefined behaviour on different inputs.

3. Sooner or later, most software will have to be maintained. A proof can
be very useful in assuring that a maintenance procedure (refactoring,
dead code removal or whatever) does not break the code.

4. In general, a proof will tend to be as straightforward as possible,
leading to code that is straightforward and easy to maintain. I've seen
much code that is three times longer (sometimes three times slower) than
it should be, because certain conditions are asserted multiple times
when they shouldn't be, or the author did not clearly understand the
problem.

5. Finally, simple or not so simple proofs are required in standard program
optimizations, if they are to be safe.

Testing will not do anything about issues 1, 2, 4, and 5 (5 possibly being
the compiler's job instead).

I understand that for most projects there is no point in attempting a
whole program proof. It is just too expensive and difficult, and often
not even possible because there are factors that are outside the control
of the programmers. I do think it is critical in software that can
directly affect the lives of people.
--
Daniel Sjöblom
Remove _NOSPAM to reply by mail


Nov 14 '05 #19
On Wed, 12 May 2004 07:28:54 -0700, Nick Keighley wrote:
August Derleth <se*@sig.now> wrote in message news:<pa****************************@sig.now>...
On Tue, 11 May 2004 22:27:52 +0100, Mark McIntyre wrote:
> On Tue, 11 May 2004 01:46:29 -0600, in comp.lang.c , August Derleth
> <se*@sig.now> wrote:
>>Running a suite of tests is (or should be) trivial, and it should catch
>>all obvious possible errors and all errors related to outliers in the
>>possible input sets.
>
> /Running/ the tests is indeed trivial. Defining all possible input sets is
> the tricky part. For most programmes, that's an infinite set.
Indeed, and that's /another/ reason why proofs are rather futile: A proof
holds water only if you can either prove something about the complete set
of possible inputs and build from there, or if you can construct a proof
that will hold for all inputs no matter how cockeyed.


proofs can deal with infinite things:

x^2 + y^2 = z^2

is saying something about *all* right-angled triangles. To test it
would involve an infinite set of cases. (Ignoring for the moment that
numbers are normally finite on computers.)


Well, in algebra class you can assume that x, y, and z will always be
reals (or complexes). In the real world, what if some moron stuffs "foo"
into x? Or, more plausibly, a pointer value?

As I said: Input is where proofs fall down, because idiot-proofing isn't
an exact science.
Testing the outliers, on the other hand, is stress testing, which is done
on a routine basis in every engineering shop worth its calipers. (You
can't test every possible wind velocity, so you test the most extreme
velocities you can until something breaks down. If the breakdown happens
at a too-low velocity, you fix it. If the breakdown happens at an absurdly
high velocity, you pat yourself on the back and record the tested limit
for that component or system.)
software is discrete rather than continuous. If your bridge works at
20 tons and 30 tons it's likely ok at 25. Software isn't like this.
What is odd is that those people with the wind tunnel don't despise
mathematics, whilst much of the software world does.


I'm not anti-math, I'm just pro-testing. Proofs can only go so far, so
when given a choice between a proof and a test, the test wins.

(Of course, an algorithm can be proven if you abstract away a lot of
reality. But algorithms don't need to worry about idiots with keyboards
trying to crack security, for example, or your RAM deciding life is shit
and so is it.)
(Something else occurs to me: A proof is rather useless unless you can
prove things about the hardware. Things can't be virtual all the way down.)


there have been attempts at formal descriptions of hardware. There was
even an attempt to prove a CPU formally correct (or construct one that
was formally correct). Viper I believe. I think the Transputer was
partially proven correct (being partially non-formal is like being a
little bit pregnant). I understand modern hardware is "compiled" from
formal descriptions that look a bit like programming languages.


I'm aware of VHDL and other hardware-description languages. They aren't my
field, but I know how they work: They describe the behavior of the
hardware given the set of possible inputs (using syntax reminiscent of C,
no less) and they leave it up to the computer to design the circuitry to
accomplish that behavior.

The problem with virtual descriptions of hardware is that a static-fried
chip isn't going to conform to your proven models. Nor will a chip made
from a faulty die, or plugged into a decaying motherboard, or fed power
from a dirty supply. In the physical world, components have tolerances,
margins of error, and occasional bad days. Proofs can be thrown out of
whack by some moron with a backhoe, and then where are you?

--
yvoregnevna gjragl-guerr gjb-gubhfnaq guerr ng lnubb qbg pbz
To email me, rot13 and convert spelled-out numbers to numeric form.
"Makes hackers smile" makes hackers smile.

Nov 14 '05 #20
Kamilche wrote:
You know, all this furor over this book caused me to go look it up on
Amazon. I've never read this book... but from what I can see from the
legally available table of contents, excerpt, and index at Amazon, it
looks more like a "Teach me newbie C" book than a "UNCOVER DEEP
MYSTERIES OF C!!" sort of affair.

I've got a better idea! Let's discuss some 'Deep C Secrets' of our
own! I'll start.

Testing - If you haven't tested to prove it right, it can almost
certainly be proven to be WRONG.


Huh?

"Testing" to see if it runs correctly is for the feeble-minded. Your
UB can run "correctly" on your "test" you know!

<snipped>

goose,
I still test anyway, btw ...
Nov 14 '05 #21
