Bytes | Software Development & Data Engineering Community
status of Programming by Contract (PEP 316)?

I just stumbled onto PEP 316: Programming by Contract for Python
(http://www.python.org/dev/peps/pep-0316/). This would be a great
addition to Python, but I see that it was submitted way back in 2003,
and its status is "deferred." I did a quick search on comp.lang.python,
but I don't seem to see much about it. Does anyone know what the real
status is of getting this into standard Python? Thanks.

Aug 29 '07

Alex Martelli wrote:
Russ specifically mentioned *mission-critical applications* as being
outside of Python's possibilities; yet search IS mission critical to
Google. Yes, reliability is obtained via a "systems approach",
Alex, I think you are missing the point. Yes, I'm sure that web
searches are critical to Google's mission and commercial success. But
the point is that a few subtle bugs cannot destroy Google. If your
search engines and associated systems have bugs, you fix them (or
simply tolerate them) and continue on. And if a user does not get the
results he wants, he isn't likely to die over it -- or even care much.

Online financial transactions are another matter altogether, of
course. Users won't die, but they will get very irate if they lose
money. But I don't think that's what you are talking about here.

Aug 31 '07 #51

Neil Cerutti wrote:
Who watches the watchmen? The contracts are composed by the
programmers writing the code. Is it likely that the same person
who wrote a buggy function will know the right contract?
The idea here is that errors in the self-testing code are unlikely to
be correlated with errors in the primary code. Hence, you get a sort
of multiplying effect on reliability. For example, if the chance of
error in the primary code and the self-test code are each 0.01, the
chance of an undetected error is approximately 0.01^2, or 0.0001.
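[Editor's note: the multiplying effect Russ describes, which assumes *independent* failures of the primary code and the self-check, can be sanity-checked with a toy simulation. Everything below is an illustrative sketch, not anyone's actual test harness.]

```python
import random

def undetected_error_rate(p_code=0.01, p_check=0.01, trials=100_000):
    """Estimate how often an error slips past an independent self-check.

    An error goes undetected only when the primary code is wrong AND the
    self-test also fails to catch it. With independent probabilities
    p_code and p_check, the expected rate is roughly p_code * p_check.
    """
    undetected = 0
    for _ in range(trials):
        code_wrong = random.random() < p_code
        check_fails = random.random() < p_check
        if code_wrong and check_fails:
            undetected += 1
    return undetected / trials

rate = undetected_error_rate()
# rate comes out near 0.01 * 0.01 == 0.0001, as the post claims --
# but only because the two failure events were drawn independently.
```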

Aug 31 '07 #52
Neil Cerutti wrote:
On 2007-08-31, Ricardo Aráoz <ri******@gmail.com> wrote:
>Russ wrote:
>>Yes, thanks for reminding me about that. With SPARK Ada, it is
possible for some real (non-trivial) applications to formally
(i.e., mathematically) *prove* correctness by static analysis.
I doubt that is possible without "static declarative type-checking."

SPARK Ada is for applications that really *must* be correct or
people could die.
I've always wondered... Are the compilers (or interpreters),
which take these programs to machine code, also formally proven
correct? And the OS in which those programs operate, are they
also formally proven correct? And the hardware, microprocessor,
electric supply, etc. are they also 'proven correct'?

Who watches the watchmen? The contracts are composed by the
programmers writing the code. Is it likely that the same person
who wrote a buggy function will know the right contract?
Actually my point was that if a program is to be trusted in a critical
situation (critical as in catastrophe if it goes wrong), then the OS,
the compiler/interpreter, etc. should abide by the same rules. That is
obviously not possible, so there's not much of a case for making the
time investment necessary for a correctness proof of a little program
(or usually a little function inside a program) when the possibilities
for failure are all around it, and even in the code that will run that
function. We should resort instead to other, more sensible answers to
the safety problem.
Sep 1 '07 #53
Russ wrote:
Alex, I think you are missing the point. Yes, I'm sure that web
searches are critical to Google's mission and commercial success. But
the point is that a few subtle bugs cannot destroy Google. If your
search engines and associated systems have bugs, you fix them (or
simply tolerate them) and continue on. And if a user does not get the
results he wants, he isn't likely to die over it -- or even care much.
But if this pattern of not getting wanted results is common, then the user
will migrate to alternative search engines and this will *kill* the
business. Wrong results won't impact ONE search, but many will impact the
company business and will be part of the recipe to take it out of business.
Online financial transactions are another matter altogether, of
course. Users won't die, but they will get very irate if they lose
money. But I don't think that's what you are talking about here.
Let's say someone loses his job and has all his financial commitments
compromised because of that lost money, and we might be talking about
people taking their own lives.

Again, this isn't 100% sure to happen, but it *can* happen.

It's the same with a pacemaker: the user won't die if his heart skips
one beat, but if it starts skipping a series of them, you're running
into serious problems.

Just because the result isn't immediate it doesn't mean it isn't critical.
Sep 1 '07 #54

Ricardo Aráoz wrote:
Actually my point was that if a program is to be trusted in a critical
situation (critical as in catastrophe if it goes wrong), then the OS,
the compiler/interpreter, etc. should abide by the same rules. That is
obviously not possible, so there's not much of a case for making the
time investment necessary for a correctness proof of a little program
(or usually a little function inside a program) when the possibilities
for failure are all around it, and even in the code that will run that
function. We should resort instead to other, more sensible answers to
the safety problem.
I don't quite see it that way.

I would agree that if your OS and compiler are unreliable, then it
doesn't make much sense to bend over backwards worrying about the
reliability of your application. But for real safety-critical
applications, you have no excuse for not using a highly reliable OS
and compiler. For really critical stuff, I think the real-time OSs are
usually stripped down to the bare basics. And if you are using
something like SPARK Ada, the language itself is stripped of many of
the fancier features in Ada itself. (There's also something called the
Ada Ravenscar profile, which I believe is geared for safety-critical
use but is not quite as restrictive as SPARK.)

Keep in mind that the OS and compiler are typically also used for many
other applications, so they tend to get tested fairly thoroughly. And
remember also that you won't have extraneous applications running --
like a web browser or a video game -- so the OS will probably not be
heavily stressed. The most likely source of failure is your
application, so bending over backwards to get it right makes sense.

Then again, if you are running C on Windows, you might as well just
give up on reliability from the start. You don't have a prayer.

Sep 1 '07 #55
Jorge Godoy wrote:
Russ wrote:
>Alex, I think you are missing the point. Yes, I'm sure that web
searches are critical to Google's mission and commercial success. But
the point is that a few subtle bugs cannot destroy Google. If your
search engines and associated systems have bugs, you fix them (or
simply tolerate them) and continue on. And if a user does not get the
results he wants, he isn't likely to die over it -- or even care much.

But if this pattern of not getting wanted results is common, then the user
will migrate to alternative search engines and this will *kill* the
business. Wrong results won't impact ONE search, but many will impact the
company business and will be part of the recipe to take it out of business.
>Online financial transactions are another matter altogether, of
course. Users won't die, but they will get very irate if they lose
money. But I don't think that's what you are talking about here.

Let's say someone loses his job and has all his financial commitments
compromised because of that lost money, and we might be talking about
people taking their own lives.

Again, this isn't 100% sure to happen, but it *can* happen.

It's the same with a pacemaker: the user won't die if his heart skips
one beat, but if it starts skipping a series of them, you're running
into serious problems.

Just because the result isn't immediate it doesn't mean it isn't critical.

We probably need to distinguish between "mission-critical", where a
program has to work reliably for an organization to meet its goals, and
"safety-critical" where people die or get hurt if the program misbehaves.

The latter are the ones where you need to employ all possible techniques
to avoid all possible failure modes.

regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
--------------- Asciimercial ------------------
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
----------- Thank You for Reading -------------

Sep 1 '07 #56
On Fri, 31 Aug 2007 21:04:49 -0300, Ricardo Aráoz wrote:
Neil Cerutti wrote:
>On 2007-08-31, Ricardo Aráoz <ri******@gmail.com> wrote:
>>Russ wrote:
Yes, thanks for reminding me about that. With SPARK Ada, it is
possible for some real (non-trivial) applications to formally (i.e.,
mathematically) *prove* correctness by static analysis. I doubt that
is possible without "static declarative type-checking."

SPARK Ada is for applications that really *must* be correct or people
could die.
I've always wondered... Are the compilers (or interpreters), which
take these programs to machine code, also formally proven correct? And
the OS in which those programs operate, are they also formally proven
correct? And the hardware, microprocessor, electric supply, etc. are
they also 'proven correct'?

Who watches the watchmen? The contracts are composed by the programmers
writing the code. Is it likely that the same person who wrote a buggy
function will know the right contract?

Actually my point was that if a program is to be trusted in a critical
situation (critical as in catastrophe if it goes wrong), then the OS,
the compiler/interpreter, etc. should abide by the same rules. That is
obviously not possible, so there's not much of a case for making the
time investment necessary for a correctness proof of a little program
(or usually a little function inside a program) when the possibilities
for failure are all around it, and even in the code that will run that
function. We should resort instead to other, more sensible answers to
the safety problem.
On the systems I work on, the OS is unit tested same as the application
software. Not sure about the compiler; I'm pretty sure we're not using
the latest GCC beta, though.
Carl Banks
Sep 1 '07 #57
On Aug 31, 6:45 pm, Steve Holden wrote:
We probably need to distinguish between "mission-critical", where a
program has to work reliably for an organization to meet its goals, and
"safety-critical" where people die or get hurt if the program misbehaves.
The term "mission critical" itself can have a wide range of
connotations.

If a software failure would force a military pilot to abort his
mission and hobble back home with a partially disabled aircraft,
that's what I think of as "mission critical" software.

If Google needs reliable software on its servers to maintain its
revenue stream, that's another kind of "mission critical" software,
but the criticality is certainly less immediate in that case.

In the first case, the software glitch definitely causes mission
failure. In the Google case, the software problems *may* ultimately
cause mission failure, but probably only if nothing is done for quite
some time to rectify the situation. If that is the case, then the
software itself is not the critical factor unless it cannot be
corrected and made to function properly in a reasonable amount of time.

Sep 1 '07 #58
On Fri, 31 Aug 2007 22:18:09 -0300, Jorge Godoy wrote:
Russ wrote:
>Alex, I think you are missing the point. Yes, I'm sure that web
searches are critical to Google's mission and commercial success. But
the point is that a few subtle bugs cannot destroy Google. If your
search engines and associated systems have bugs, you fix them (or
simply tolerate them) and continue on. And if a user does not get the
results he wants, he isn't likely to die over it -- or even care much.

But if this pattern of not getting wanted results is common, then the
user will migrate to alternative search engines and this will *kill* the
business. Wrong results won't impact ONE search, but many will impact
the company business and will be part of the recipe to take it out of
business.
>Online financial transactions are another matter altogether, of
course. Users won't die, but they will get very irate if they lose
money. But I don't think that's what you are talking about here.

Let's say someone loses his job and has all his financial commitments
compromised because of that lost money, and we might be talking about
people taking their own lives.

Again, this isn't 100% sure to happen, but it *can* happen.

It's the same with a pacemaker: the user won't die if his heart skips
one beat, but if it starts skipping a series of them, you're running
into serious problems.

Just because the result isn't immediate it doesn't mean it isn't
critical.
This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.

BTW, I'm not really agreeing with Russ here. His suggestion (that
because Python is not used in highly critical systems, it is not suitable
for them) is logically flawed. And Alex's point, that Python has a good
track record of reliability (look at Google's 99.9% uptime) is valid
whether Google is a critical system or not.

So please leave the laughable comparisons between flight systems and web
searches out of it. It's unnecessary and makes Pythoners look bad.
Carl Banks

(P.S. 99.9% uptime would be a critical flaw in the systems I work on.)
Sep 1 '07 #59
Russ <uy*******@sneakemail.com> writes:
The idea here is that errors in the self-testing code are unlikely
to be correlated with errors in the primary code. Hence, you get a
sort of multiplying effect on reliability. For example, if the
chance of error in the primary code and the self-test code are each
0.01, the chance of an undetected error is approximately 0.01^2 or
0.0001.
But I think you give a lot of that back when you turn the checks off.
The errors you detect when the checks are enabled are only the ones
that result from your test data. Turn off the checks and expose the
application to data from a potentially different distribution, and
you're back where you started.
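[Editor's note: the on/off switch being discussed already exists in standard Python: `assert` statements, and anything guarded by `__debug__`, are stripped when the interpreter runs with `-O`. A minimal sketch follows; the function and its contracts are invented here for illustration.]

```python
def sqrt_floor(n):
    """Integer square root, with self-checks that vanish under `python -O`."""
    if __debug__:  # precondition: only meaningful for non-negative input
        assert n >= 0, "n must be non-negative"
    r = int(n ** 0.5)
    # floating-point rounding can leave r off by one; correct it
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    if __debug__:  # postcondition: r is the largest integer with r*r <= n
        assert r * r <= n < (r + 1) * (r + 1)
    return r
```

Run normally, both checks execute; run with `python -O`, both disappear, which is exactly the trade-off raised above: production inputs may come from a different distribution than the data that exercised the checks during testing.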
Sep 1 '07 #60
"Carl Banks" <pavl....mail.comwrote:
This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.
This really intrigues me - how do you program a dam? - and why is it
critical?

Most dams just hold water back.

Dam design software - well yes that I would agree is critical.
Is that what you mean?

- Hendrik
Sep 1 '07 #61
On Sat, 01 Sep 2007 10:34:08 +0200, Hendrik van Rooyen wrote:
"Carl Banks" <pavl....mail.comwrote:
>This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.

This really intrigues me - how do you program a dam? - and why is it
critical?

Most dams just hold water back.
And some produce electricity. And most, if not all, can regulate how
much water is let through. If something goes wrong, the valley
downstream of the dam gets flooded. If this is controlled by a
computer, you have the need for reliable software.

Ciao,
Marc 'BlackJack' Rintsch
Sep 1 '07 #62
On 29 Aug., 13:45, Russ <uymqlp...@sneakemail.com> wrote:
I have not yet personally used it, but I am interested in anything
that can help to make my programs more reliable. If you are
programming something that doesn't really need to be correct, than you
probably don't need it. But if you really need (or want) your software
I'm one of the few (thousand) hard-core Eiffel programmers in this
world, and I can tell you that this would not add too much to Python.
To get the benefits of it, you need to use it together with a runtime
that is designed from the ground up with DbC, and a language that is
fast enough to be able to check the contracts; if you don't have the
latter, all you get is a better specification language (which you can
write as comments in Python).

Learn the Eiffel design way and then add assert statements wherever
you need them. That works well when I do C/C++ programming, and maybe
even for script languages - but I have never used it for scripts, as I
don't see a real value there.

Sep 1 '07 #63
Steve Holden wrote:
[...]
If I can blow my own trumpet briefly, two customers (each using over 25
kLOC I have delivered over the years) ran for two years while I was away
in the UK without having to make a single support call. One of the
systems was actually locked in a cupboard all that time (though I have
since advised that client to at least apply OS patches to bring it up to
date).
This was achieved by defensive programming, understanding the user
requirements and just generally knowing what I was doing.
On the one hand, nice work. Made your customers happy and kept
them happy. Can't argue with success. On the other hand, 25K lines
is tiny by software engineering standards. If a support call would
have to go to you, then the project must be small. Software
engineering is only a little about an engineer knowing or not
knowing what he or she is doing; the bigger problem is that
hundreds or thousands of engineers cannot possibly all know what
all the others are doing.

I work on large and complex systems. If I screw up -- O.K., face
facts: *when* I screw up -- the chance of the issue being assigned
to me is small. Other engineers will own the problems I cause,
while I work on defects in code I've never touched. I wish I could
own all my own bugs, but that's not how large and complex systems
work. Root-cause analysis is the hard part. By the time we know
what went wrong, 99.99% of the work is done.
Design-by-contract (or programming-by-contract) shines in large
and complex projects, though it is not a radical new idea in
software engineering. We pretty much generally agree that we want
strong interfaces to encapsulate implementation complexity.
That's what design-by-contract is really about.

There is no strong case for adding new features to Python
specifically for design-by-contract. The language is flexible
enough to support optionally-executed pre-condition and
post-condition checks, without any extension. The good and bad
features of Python for realizing reliable abstraction are set
and unlikely to change. Python's consistency and flexibility
are laudable, while duck-typing is a convenience that will
always make it less reliable than more type-strict languages.
--
--Bryan
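[Editor's note: Bryan's claim that Python can express optionally-executed pre- and postcondition checks without any language extension can be illustrated with a small decorator. This is a sketch; `contract`, `require`, `ensure`, and `CHECKS_ENABLED` are names invented here, not any standard library API.]

```python
import functools

CHECKS_ENABLED = True  # flip to False to disable all contract checks

def contract(require=None, ensure=None):
    """Attach an optional precondition and postcondition to a function.

    `require` is called with the function's arguments; `ensure` is called
    with the result. Both are skipped entirely when CHECKS_ENABLED is False.
    """
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if CHECKS_ENABLED and require is not None:
                assert require(*args, **kwargs), "precondition failed"
            result = func(*args, **kwargs)
            if CHECKS_ENABLED and ensure is not None:
                assert ensure(result), "postcondition failed"
            return result
        return wrapper
    return decorate

@contract(require=lambda xs: len(xs) > 0,
          ensure=lambda m: m is not None)
def maximum(xs):
    return max(xs)
```

Calling `maximum([3, 1, 2])` returns 3 after both checks pass; calling `maximum([])` raises an AssertionError from the precondition rather than a bare error from `max`.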

Sep 1 '07 #64
Carl Banks wrote:
This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.
Who said they were the same? I said that just because it doesn't take
lives, it doesn't mean it isn't important. I wasn't going to reply, so
as not to extend this, but this misunderstanding of yours was bugging me.

I use Python on systems that deal with human health and wrong calculations
may have severe impact on a good sized population. Using Python.

As with nuclear reactors, dams, airplanes and so on, we have a lot of
redundancy and a lot of checkpoints. No one is crazy enough to take
them out, or even to remove some kind of device that allows manual
intervention at critical points.
Sep 1 '07 #65
Bryan Olson wrote:
Steve Holden wrote:
[...]
>If I can blow my own trumpet briefly, two customers (each using over 25
kLOC I have delivered over the years) ran for two years while I was away
in the UK without having to make a single support call. One of the
systems was actually locked in a cupboard all that time (though I have
since advised that client to at least apply OS patches to bring it up to
date).
>This was achieved by defensive programming, understanding the user
requirements and just generally knowing what I was doing.

On the one hand, nice work. Made your customers happy and kept
them happy. Can't argue with success. On the other hand, 25K lines
is tiny by software engineering standards. If a support call would
have to go to you, then the project must be small. Software
engineering is only a little about an engineer knowing or not
knowing what he or she is doing; the bigger problem is that
hundreds or thousands of engineers cannot possibly all know what
all the others are doing.
I agree that programming-in-the-large brings with it problems that
aren't experienced in smaller scale projects such as the ones I mentioned.
I work on large and complex systems. If I screw up -- O.K., face
facts: *when* I screw up -- the chance of the issue being assigned
to me is small. Other engineers will own the problems I cause,
while I work on defects in code I've never touched. I wish I could
own all my own bugs, but that's not how large and complex systems
work. Root-cause analysis is the hard part. By the time we know
what went wrong, 99.99% of the work is done.
This is the kind of realism I like to see in engineering. I am always
suspicious when any method is promoted as capable of reducing the error
rate to zero.
>
Design-by-contract (or programming-by-contract) shines in large
and complex projects, though it is not a radical new idea in
software engineering. We pretty much generally agree that we want
strong interfaces to encapsulate implementation complexity.
That's what design-by-contract is really about.

There is no strong case for adding new features to Python
specifically for design-by-contract. The language is flexible
enough to support optionally-executed pre-condition and
post-condition checks, without any extension. The good and bad
features of Python for realizing reliable abstraction are set
and unlikely to change. Python's consistency and flexibility
are laudable, while duck-typing is a convenience that will
always make it less reliable than more type-strict languages.
Python's dynamic nature certainly makes it more difficult to reason
about programs in any formal sense. I've always thought of it as a
pragmatist's language, and we need to be pragmatic about the downside as
well as the upside.

regards
Steve
Sep 1 '07 #66
Hendrik van Rooyen wrote:
"Carl Banks" <pavl....mail.comwrote:
>This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.

This really intrigues me - how do you program a dam? - and why is it
critical?

Most dams just hold water back.

Dam design software - well yes that I would agree is critical.
Is that what you mean?

- Hendrik
Yup! He was referring to that Damn design software. Just as almost
everyone has at one time or another. ;c)
Sep 1 '07 #68
Carl Banks wrote:
>
This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be another's mundane. Sure a flaw in your
flagship product is a critical problem *for your company*, but are you
really trying to say that the criticalness of a bad web search is even
comparable to the most important systems on airplanes, nuclear reactors,
dams, and so on? Come on.
20 years ago, there was *no* computer at all in nuclear reactors.
Sep 1 '07 #69

"Hendrik van Rooyen" <ma**@microcorp.co.zawrote in message
news:026201c7ec75$07d6e0c0$03000080@hendrik...
| This really intrigues me - how do you program a dam? - and why is it
| critical?
|
| Most dams just hold water back.

Most big dams also generate electricity. Even without that, dams do not
just hold water back, they regulate the flow over a year or longer cycle.
A full dam is great for power generation and useless for flood control. An
empty dam is great for flood control and useless for power generation. So
both power generation and bypass release must be regulated in light of
current level, anticipated upstream precipitation, and downstream
obligations. Downstream obligations can include both a minimum flow rate
for downstream users and a maximum rate so as to not flood downstream
areas.

tjr
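[Editor's note: the regulation problem Terry describes -- balance flood-control headroom against power generation and downstream obligations -- can be sketched as a toy control rule. The function, its parameters, and all numbers here are hypothetical illustration, not real dam-control software.]

```python
def release_rate(level, forecast_inflow, min_flow, max_flow, capacity):
    """Pick a bypass release rate balancing flood control and supply.

    Keep enough headroom for the forecast inflow, but never release
    less than the downstream minimum or more than the flood maximum.
    All quantities are in the same (arbitrary) volume units.
    """
    headroom = capacity - level
    # if forecast inflow exceeds the remaining headroom, the excess
    # must be released now to avoid overtopping the dam
    needed = max(forecast_inflow - headroom, 0)
    # clamp to the downstream obligations: minimum supply flow,
    # maximum safe (non-flooding) flow
    return min(max(needed, min_flow), max_flow)
```

A nearly full reservoir facing a large forecast inflow releases aggressively (up to `max_flow`), while a half-empty one releases only the downstream minimum; the real systems in the post add precipitation forecasting and generation scheduling on top of this basic trade-off.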

Sep 1 '07 #70
Russ <uy*******@sneakemail.com> wrote:
...
the inputs. To test the
post-conditions, you just need a call at the bottom of the function,
just before the return,
...
there's nothing to stop you putting the calls before every return.

Oops! I didn't think of that. The idea of putting one before every
return certainly doesn't appeal to me. So much for that idea.
try:
    blah blah with as many return statements as you want
finally:
    something that gets executed unconditionally at the end

You'll need some convention such as "all the return statements are of
the same form ``return result''" (where the result may be computed
differently each time), but that's no different from the conventions you
need anyway to express such things as ``the value that foobar had at the
time the function was called''.
Alex
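[Editor's note: Alex's try/finally idiom gives one unconditional postcondition check no matter which return statement fires, using his convention that every return is of the form ``return result``. A sketch; the function itself is invented for illustration.]

```python
def clamp(value, low, high):
    """Clamp value into [low, high], checking the postcondition once,
    regardless of which of the three return paths produced the result."""
    result = None
    try:
        if value < low:
            result = low
            return result
        if value > high:
            result = high
            return result
        result = value
        return result
    finally:
        # runs unconditionally, after whichever ``return result`` fired
        assert low <= result <= high, "postcondition failed"
```

Each early return goes through the same `finally` block, so the postcondition is written exactly once instead of being duplicated before every return statement.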
Sep 2 '07 #71
On Sat, 01 Sep 2007 17:19:49 +0200, Pierre Hanser wrote:
Carl Banks wrote:
>>
This is starting to sound silly, people. Critical is a relative term,
and one project's critical may be anothers mundane. Sure a flaw in
your flagship product is a critical problem *for your company*, but are
you really trying to say that the criticalness of a bad web search is
even comparable to the most important systems on airplanes, nuclear
reactors, dams, and so on? Come on.

20 years ago, there was *no* computer at all in nuclear reactors.
But they had electronic (analog) systems that were (supposedly) just as
heavily regulated and scrutinized as the digital computers of today; and
were a lot more scrutinized than, say, the digital computers that banks
were using.
Carl Banks
Sep 2 '07 #72
Ricardo Aráoz <ri******@gmail.com> wrote:
...
We should remember that the level of security of a 'System' is the
same as the level of security of its weakest component,
...
You win the argument, and thanks: you prove my point. You typically
concerned yourself with the technical part of the matter, yet you
completely ignored the point I was trying to make.
That's because I don't particularly care about "the point you were
trying to make" (either for or against -- as I said, it's a case of ROI
for different investments [in either security, or, more germanely to
this thread, reliability] rather than of useful/useless classification
of the investments), while I care deeply about proper system thinking
(which you keep failing badly on, even in this post).
In the third part of your post, regarding security, I think you went off
the road. The weakest component would not be one of the requisites of
access, the weakest component I was referring to would be an actual
APPLICATION,
Again, F- at system thinking: a system's components are NOT just
"applications" (what's the alternative to their being "actual", btw?),
nor is it necessarily likely that an application would be the weakest
one of the system's components (these wrong assertions are in addition
to your original error, which you keep repeating afterwards).

For example, in a system where access is gained *just* by knowing a
secret (e.g., a password), the "weakest component" is quite likely to be
that handy but very weak architectural choice -- or, seen from another
viewpoint, the human beings that are supposed to know that password,
remember, and keep it secret. If you let them choose their password,
it's too likely to be "fred" or other easily guessable short word; if
you force them to make it at least 8 characters long, it's too likely to
be "fredfred"; if you force them to use length, mixed case and digits,
it's too likely to be "Fred2Fred". If you therefore decide that
passwords chosen by humans are too weak and generate one for them,
obtaining, say, "FmZACc2eZL", they'll write it down (perhaps on a
post-it attached to their screen...) because they just can't commit to
memory a lot of long really-random strings (and nowadays the poor users
are all too likely to need to memorize far too many passwords). A
clever attacker has many other ways to try to steal passwords, from
"social engineering" (pose as a repair person and ask the user to reveal
their password as a prerequisite of obtaining service), to keystroke
sniffers of several sorts, fake applications that imitate real ones and
steal the password before delegating to the real apps, etc, etc.

Similarly, if all that's needed is a physical token (say, some sort of
electronic key), that's relatively easy to purloin by traditional means,
such as pickpocketing and breaking-and-entering; certain kind of
electronic keys (such as the passive unencrypted RFID chips that are
often used e.g. to control access to buildings) are, in addition,
trivially easy to "steal" by other (technological) means.

Refusing to admit that certain components of a system ARE actually part
of the system is weak, blinkered thinking that just can't allow halfway
decent system design -- be that for purposes of reliability, security,
availability, or whatever else. Indeed, if certain part of the system's
architecture are OUTSIDE your control (because you can't redesign the
human mind, for example;-), all the more important then to make them the
focus of the whole design (since you must design AROUND them, and any
amelioration of their weaknesses is likely to have great ROI -- e.g., if
you can make the users take a 30-minutes short course in password
security, and accompany that with a password generator that makes
reasonably memorable though random ones, you're going to get substantial
returns on investment in any password-using system's security).
e.g. an ftp server. In that case, if you have several
applications running, your security will be the security of the weakest
of them.
Again, false as usual, and for the same reason I already explained: if
your system can be broken by breaking any one of several components,
then it's generally WEAKER than the weakest of the components. Say that
you're running on the system two servers, an FTP one that can be broken
into by 800 hackers in the world, and a SSH one that can only be broken
into by 300 hackers in the world; unless every single one of the hackers
who are able to break into the SSH server is *also* able to break into
the FTP one (a very special case indeed!), there are now *MORE* than 800
hackers in the world that can break into your system as a whole -- in
other words, again and no matter how often you repeat falsities to the
contraries without a shred of supporting argument, your assertion is
*FALSE*, and in this case your security is *WEAKER* than the security of
the weaker of the two components.

I do not really much care what point(s) you are trying to make through
your glib and false assertions: I *DO* care that these falsities, these
extremely serious errors that stand in the way of proper system
thinking, never be left unchallenged and uncorrected. Unfortunately a
*LOT* of people (including, shudder, ones who are responsible for
architecting, designing and implementing some systems) are under very
serious misapprehensions that impede "system thinking", some of the same
ilk as your falsities (looking at only PART of the system and never the
whole, using far-too-simplified rules of thumb to estimate system
properties, and so forth), some nearly reversed (missing opportunities
to make systems *simpler*, overly focusing on separate components, &c).

As to your specific point about "program proofs" being likely overkill
(which doesn't mean "useless", but rather means "low ROI" compared to
spending comparable resources in other reliability enhancements), that's
probably true in many cases. But when a probably-true thesis is being
"defended" by tainted means, such as false assertions and excessive
simplifications that may cause serious damage if generally accepted and
applied to other issues, debunking the falsities in question is and
remains priority number 1 for me.
Alex
Sep 2 '07 #73
On Sep 1, 4:25 am, Bryan Olson wrote:
Design-by-contract (or programming-by-contract) shines in large
and complex projects, though it is not a radical new idea in
software engineering. We pretty much generally agree that we want
strong interfaces to encapsulate implementation complexity.
That's what design-by-contract is really about.

There is no strong case for adding new features to Python
specifically for design-by-contract. The language is flexible
enough to support optionally-executed pre-condition and
post-condition checks, without any extension. The good and bad
features of Python for realizing reliable abstraction are set
and unlikely to change. Python's consistency and flexibility
are laudable, while duck-typing is a convenience that will
always make it less reliable than more type-strict languages.

Excellent points. As for "no strong case for adding new features to
Python specifically for design-by-contract," if you mean adding
something to language itself, I agree, but I see nothing wrong with
adding it to the standard libraries, if that is possible without
changing the language itself. Someone please correct me if I am wrong,
but I think PEP adds only to the libraries.
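As Bryan notes, optionally-executed checks need no language change at all. A minimal sketch of what a library-only version might look like, assuming a hypothetical module-level `ENABLE_CONTRACTS` flag (all names here are invented for illustration):

```python
# Minimal sketch of optionally-executed contract checks in plain Python.
# ENABLE_CONTRACTS is a hypothetical global flag; set it to False to
# turn the checks off without touching the checked code.
ENABLE_CONTRACTS = True

def require(condition, message="precondition failed"):
    # Precondition check, skipped entirely when contracts are disabled.
    if ENABLE_CONTRACTS and not condition:
        raise AssertionError(message)

def ensure(condition, message="postcondition failed"):
    # Postcondition check, same enable/disable behavior as require().
    if ENABLE_CONTRACTS and not condition:
        raise AssertionError(message)

def isqrt(n):
    require(n >= 0, "n must be non-negative")
    root = int(n ** 0.5)
    ensure(root * root <= n < (root + 1) * (root + 1))
    return root
```

Flipping `ENABLE_CONTRACTS` to `False` removes the run-time cost of the checks while leaving the calls in place.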

Sep 2 '07 #74
On Sep 1, 6:51 pm, al...@mac.com (Alex Martelli) wrote:
try:
blah blah with as many return statements as you want
finally:
something that gets executed unconditionally at the end
Thanks. I didn't think of that.

So design by contract *is* relatively easy to use in Python already.
The main issue, I suppose, is one of aesthetics. Do I want to use a
lot of explicit function calls for pre and post-conditions and "try/
finally" blocks in my code to get DbC (not to mention a global
variable to enable or disable it)?
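For instance, Alex's try/finally idiom lets a postcondition run regardless of which return statement is taken; a minimal sketch (the `clamp` function is invented for illustration):

```python
# Sketch of the try/finally idiom: the postcondition in the finally
# clause runs no matter which of the return statements is taken.
def clamp(x, lo, hi):
    try:
        if x < lo:
            result = lo
            return result
        if x > hi:
            result = hi
            return result
        result = x
        return result
    finally:
        # Postcondition: whichever path returned, the result is in [lo, hi].
        assert lo <= result <= hi, "postcondition failed"
```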

I suppose if I want it badly enough, I will. But I also happen to be a
bit obsessive about the appearance of my code, and this does
complicate it a bit. The nice thing about having it in the doc string
(as per PEP 316) is that, while it is inside the function, it is also
separate from the actual code in the function. I like that. As far as
I am concerned, the self-test code shouldn't be tangled up with the
primary code.

By the way, I would like to make a few comments about the
"reliability" of Python code. Apparently I offended you the other day
by claiming or implying that Python code is inherently unreliable. I
think it is probably possible to write very reliable code in Python,
particularly for small to medium sized applications, but you probably
need top notch software engineers to do it. And analyzing code or
*proving* that a program is correct is technically harder without
static typing. In highly regulated safety critical domains, you need
more than just reliable code; you need to *demonstrate* or *prove* the
reliability somehow.

I personally use Python for its clean syntax and its productivity with
my time, so I am certainly not denigrating it. For the R&D work I do,
I think it is very appropriate. But I did raise a few eyebrows when I
first started using it. I used C++ several years ago, and I thought
about switching to Ada a few years ago, but Ada just seems to be
fading away (which I think is very unfortunate, but that's another
story altogether).

In any case, when you get right down to it, I probably don't know what
the hell I'm talking about anyway, so I will bring this rambling to a
merciful end.

On, one more thing. I see that the line wrapping on Google Groups is
finally working for me after many months. Fantastic! I can't help but
wonder if my mentioning it to you a few days ago had anything to do
with it.

Sep 2 '07 #75
On Sep 2, 7:05 am, Russ <uymqlp...@sneakemail.com> wrote:
Someone please correct me if I am wrong,
but I think PEP adds only to the libraries.
You are wrong, PEPs also add to the core language. Why don't you give a look at the PEP parade on python.org?

Michele Simionato

Sep 2 '07 #76
On Sep 1, 10:44 pm, Russ <uymqlp...@sneakemail.com> wrote:
On, one more thing. I see that the line wrapping on Google Groups is
finally working for me after many months. Fantastic! I can't help but
wonder if my mentioning it to you a few days ago had anything to do
with it.
Well, it's working on the input side anyway.

Sep 2 '07 #77
Russ <uy*******@sneakemail.com> writes:
try:
blah blah with as many return statements as you want
finally:
something that gets executed unconditionally at the end
Thanks. I didn't think of that.
So design by contract *is* relatively easy to use in Python already.
The main issue, I suppose, is one of aesthetics. Do I want to use a
lot of explicit function calls for pre and post-conditions and "try/
finally" blocks in my code to get DbC (not to mention a global
variable to enable or disable it)?
I still don't understand why you don't like the decorator approach,
which can easily implement the above.
I personally use Python for its clean syntax and its productivity with
my time, so I am certainly not denigrating it. For the R&D work I do,
I think it is very appropriate. But I did raise a few eyebrows when I
first started using it. I used C++ several years ago, and I thought
about switching to Ada a few years ago, but Ada just seems to be
fading away (which I think is very unfortunate, but that's another
story altogether).
It seems to be getting displaced by Java, which has some of the same
benefits and costs as Ada does.

I've gotten interested in static functional languages (including proof
assistants like Coq, that can generate certified code from your
mechanically checked theorems). But I haven't done anything serious
with any of them yet. I think we're in a temporary situation where
all existing languages suck (some more badly than others) but the
functional languages seem like a more promising direction to get out
of this hole.
Sep 2 '07 #78
On Sep 1, 10:05 pm, Russ <uymqlp...@sneakemail.com> wrote:
changing the language itself. Someone please correct me if I am wrong,
but I think PEP adds only to the libraries.
I meant to write PEP 316, of course.

Sep 2 '07 #79
On Sep 1, 11:04 pm, Paul Rubin wrote:
I still don't understand why you don't like the decorator approach,
which can easily implement the above.
Well, maybe decorators are the answer. If a function needs only one
decorator for all the conditions and invariants (pre and post-
conditions), and if it can just point to functions defined elsewhere
(rather than defining everything inline), then perhaps they make
sense. I guess I need to read up more on decorators to see if this is
possible.

In fact, the ideal would be to have just a single decorator type, say
"contract" or "self_test", that takes an argument that points to the
relevant functions to use for the function that the decorator applies
to. Then the actual self-test functions could be pushed off somewhere
else, and the "footprint" on the primary code would be minimal.
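A hypothetical sketch of such a single `contract` decorator; the decorator name, the condition functions, and `divide` are all invented for illustration, not an existing library:

```python
import functools

# Hypothetical single "contract" decorator: the condition functions are
# defined elsewhere and merely referenced, so the footprint on the
# primary code is one decorator line.
def contract(pre=None, post=None):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition failed"
            result = func(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), "postcondition failed"
            return result
        return wrapper
    return decorate

# The self-test functions live apart from the primary code:
def _pre_divide(a, b):
    return b != 0

def _post_divide(result, a, b):
    return abs(result * b - a) < 1e-9

@contract(pre=_pre_divide, post=_post_divide)
def divide(a, b):
    return a / b
```

The primary code keeps only the `@contract(...)` line, which is about as small a footprint as Python allows without new syntax.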

Sep 2 '07 #80
Alex Martelli wrote:
Ricardo Aráoz <ri******@gmail.com> wrote:
...
>>>We should remember that the level
of security of a 'System' is the same as the level of security of its
weakest component,
...
>You win the argument, and thanks, you prove my point. You typically
concerned yourself with the technical part of the matter, yet you
completely ignored the point I was trying to make.

That's because I don't particularly care about "the point you were
trying to make" (either for or against -- as I said, it's a case of ROI
for different investments [in either security, or, more germanely to
this thread, reliability] rather than of useful/useless classification
of the investments), while I care deeply about proper system thinking
(which you keep failing badly on, even in this post).
And here you start, followed by 'F- at system thinking', 'glib and false
assertions', 'falsities', etc.
I don't think you meant anything personal, how could you, we don't know
each other. But the outcome feels like a personal attack instead of an
attack on the ideas exposed.
If that's not what you intended, you should check your communication
abilities and see what is wrong. If that is what you meant, well...

So I will not answer your post. I'll let it rest for a while till I
don't feel the sting, then I'll re-read it and try to learn as much as I
can from your thoughts (thank you for them). And even though some of
your thinking process I find objectionable I will not comment on it as
I'm sure it will start some new flame exchange which will have a lot to
do with ego and nothing to do with python.

Sep 2 '07 #81
In article <11*********************@r34g2000hsd.googlegroups.com>,
Russ <uy*******@sneakemail.com> wrote:
>
Excellent points. As for "no strong case for adding new features to
Python specifically for design-by-contract," if you mean adding
something to language itself, I agree, but I see nothing wrong with
adding it to the standard libraries, if that is possible without
changing the language itself. Someone please correct me if I am wrong,
but I think PEP adds only to the libraries.
You're wrong, but even aside from that, libraries need to prove
themselves useful before they get added.
--
Aahz (aa**@pythoncraft.com) <* http://www.pythoncraft.com/

"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer
http://www.lysator.liu.se/c/ten-commandments.html
Sep 2 '07 #82
