Bytes IT Community

Creating a capabilities-based restricted execution system

I've been playing around with Zope's RestrictedPython, and I think I'm
on the way to making the modifications necessary to create a
capabilities-based restricted execution system. The idea is to strip out
any part of RestrictedPython that's not necessary for doing capabilities
and do all security using just capabilities.

The basic idea behind capabilities is that you don't give any piece of
code you don't trust a reference to something you don't want it to have
access to. You use proxies instead (E calls them "facets").
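The facet idea can be sketched in a few lines of plain Python. The `Facet` class, its `revoke` method, and the `Counter` example below are illustrative, not part of RestrictedPython:

```python
class Facet:
    """A revocable forwarding proxy: holders see only the attributes
    explicitly exposed, and access can be cut off at any time."""

    def __init__(self, target, exposed):
        self._target = target          # in the real system, a private slot
        self._exposed = set(exposed)

    def __getattr__(self, name):
        # Only called for names not found on the Facet itself.
        if self._target is None:
            raise RuntimeError("capability revoked")
        if name not in self._exposed:
            raise AttributeError(name)
        return getattr(self._target, name)

    def revoke(self):
        self._target = None


class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):
        self.n += 1
        return self.n


c = Counter()
f = Facet(c, ["bump"])      # hand out f, keep c (and f.revoke) private
f.bump()                    # forwarded to c.bump
f.revoke()                  # from here on, f.bump raises RuntimeError
```

Note that in plain Python the `_target` slot is still reachable from outside, which is exactly why the scheme needs private attributes underneath.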

In order to be able to allow untrusted code to create proxy objects, I
needed to be able to store a reference to the proxied object in a
private attribute.

To create private attributes, I'm using "name mangling," where names
beginning with X_ within a class definition get changed to
_<uuid>_<name>, where the UUID is the same for that class. The UUIDs
don't need to be secure because it's not actually possible to create
your own name starting with an underscore in RestrictedPython; they just
need to be unique across all compiler invocations.
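A sketch of the transform on a list of names (the exact mangled form is a guess at the scheme described above, and a real implementation would rewrite names at compile time inside the compiler, not at runtime):

```python
import uuid

def mangle_names(names, class_uuid=None):
    """Rewrite names beginning with X_ to _<uuid>_<name>.

    One UUID is generated per class definition, so every compiler
    invocation yields private names unique to that class."""
    if class_uuid is None:
        class_uuid = uuid.uuid4().hex
    out = []
    for name in names:
        if name.startswith("X_"):
            out.append("_%s_%s" % (class_uuid, name[2:]))
        else:
            out.append(name)
    return out
```

Because untrusted code cannot write names starting with an underscore, it has no way to spell the mangled form directly.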

The nice thing about using this name mangling is that it's only done at
compile time and doesn't affect runtime performance. An interesting side
effect is that code defined on a class can access private attributes on
all descendants of that class, but only ones that are defined by other
code on that class, so this isn't a security issue.

I was thinking I needed read-only attributes to be able to avoid
untrusted code's being able to sabotage the revoke method on a proxy
object, but I'm thinking that just keeping around a reference to the
revoke method in the original code may be enough.

Does anyone think I'm going in completely the wrong direction here? Am I
missing anything obvious?
Jul 18 '05 #1
30 Replies


"Sean R. Lynch" <se***@chaosring.org> writes:
Does anyone think I'm going in completely the wrong direction here? Am
I missing anything obvious?


Well, I have a dumb question. Have you studied the security failures
of rexec/Bastion and convinced yourself that they don't happen to your
new scheme?

You might look at the PyPy architecture doc if you haven't yet.
Making a separate object space for restricted objects may fit PyPy's
design quite naturally.
Jul 18 '05 #2

Paul Rubin wrote:
Well, I have a dumb question. Have you studied the security failures
of rexec/Bastion and convinced yourself that they don't happen to your
new scheme?
If you know of a location where the known shortcomings of rexec are
documented, please let me know. So far I've only seen a couple examples
and a lot of people saying "it's not secure so let's disable it."

My current methodology is to be very careful about adding any privileges
beyond what RestrictedPython allows.
You might look at the PyPy architecture doc if you haven't yet.
Making a separate object space for restricted objects may fit PyPy's
design quite naturally.


I have looked at PyPy. It's very interesting, but RestrictedPython is
already written and in use in Zope.

I think I've figured out a way to use my name mangling scheme to make
attributes only *writable* by code defined on a class from which an
object descends: do writes through a name-mangled method, and have
RestrictedPython output self._mangled_setattr(attr, val) for each
attempted attribute assignment. This will basically make it impossible
to have attributes that are writable from other classes, but I think
it's probably a prerequisite for capabilities. Most other languages
require attributes to be set via methods anyway, right?
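A sketch of what such a write barrier might look like. The `_mangled_setattr` and `_writable` names stand in for the compiler-generated mangled names; they are not actual RestrictedPython API:

```python
class Guarded:
    # Attributes that methods of this class are allowed to assign.
    _writable = {"balance"}

    def _mangled_setattr(self, attr, val):
        # The restricted compiler would rewrite  self.balance = 10  into
        # self._mangled_setattr("balance", 10), so every attribute
        # assignment funnels through this check.
        if attr not in self._writable:
            raise AttributeError("attribute %r is not writable here" % attr)
        object.__setattr__(self, attr, val)


g = Guarded()
g._mangled_setattr("balance", 10)   # permitted write
```

Code outside the class cannot spell the mangled method name, so only methods defined on the class can perform writes.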
Jul 18 '05 #3

Sean R. Lynch wrote:
If you know of a location where the known shortcomings of rexec are
documented, please let me know. So far I've only seen a couple examples
and a lot of people saying "it's not secure so let's disable it."


The biggest problem is that new-style classes are both available through
the type() builtin, and callable to create new instances.

For example, if you have managed to open a file object f, then

type(f)("/etc/passwd").read()

lets you access a different file, bypassing all machinery that may
have been designed to prevent that from happening.

Of course, for the specific case of file objects, there is additional
machinery preventing that from happening, but in the general case,
there might be more problems in that area. For example,
object.__subclasses__() gives you access to quite a lot of stuff.
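The `__subclasses__()` leak needs no builtins at all; starting from any literal, one can climb back to `object` and enumerate every class the interpreter has loaded:

```python
# No builtins required: climb from a literal back to object.
base = ().__class__.__bases__[0]      # this is the object type itself
leaked = base.__subclasses__()        # every class derived from object
# In CPython 2.x this list included `file`, so a sandbox that merely
# removed open()/file from __builtins__ could still reach the
# filesystem through it.
```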

Regards,
Martin

Jul 18 '05 #4


"Sean R. Lynch" <se***@chaosring.org> wrote in message
news:Lm********************@speakeasy.net...

[...]
Does anyone think I'm going in completely the wrong direction here? Am I
missing anything obvious?


Yes, you're missing something really obvious. Multi-level
security is a real difficult problem if you want to solve it
in a believable (that is, bullet-proof) fashion. The only way
I know of solving it is to provide separate execution
environments for the different privilege domains.
In the current Python structure, that means different
interpreters so that the object structures don't intermix.

If you have separate domains, then the only support
needed is to remove privileged modules from the
built-ins, and virtualize import so that it won't load
modules that aren't on the approved list for that
domain.
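The "virtualize import" step can be sketched as a replacement `__import__` hook that consults a per-domain approved list. The module names here are only examples, and this filters imports only; it is nowhere near a complete sandbox on its own:

```python
APPROVED = {"math", "string"}   # example whitelist for one domain

def restricted_import(name, *args, **kwargs):
    # Refuse anything not on the approved list; dotted imports are
    # checked by their top-level package name.
    if name.split(".")[0] not in APPROVED:
        raise ImportError("module %r is not approved for this domain" % name)
    return __import__(name, *args, **kwargs)

# Untrusted code then runs with this hook installed in its builtins:
env = {"__builtins__": {"__import__": restricted_import}}
exec("import math\nx = math.sqrt(4)", env)   # allowed
# exec("import os", env) would raise ImportError
```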

You also, of course, need some form of gate between
the untrusted and trusted domains.

Once that's done, there's no reason to layer additional
complexity on top, and there is no reason to restrict
any introspection facilities.

John Roth
Jul 18 '05 #5

John Roth wrote:
Yes, you're missing something really obvious. Multi-level
security is a real difficult problem if you want to solve it
in a believable (that is, bullet-proof) fashion. The only way
I know of solving it is to provide separate execution
environments for the different privilege domains.
In the current Python structure, that means different
interpreters so that the object structures don't intermix.


Hmmm, can you give me an example of a Python application that works this
way? Zope seems to be doing fine using RestrictedPython.
RestrictedPython is, in fact, an attempt to provide different execution
environments within the same memory space, which is the whole point of
my exercise. Now, I know that the lack of an example of insecurity is
not proof of security, but can you think of a way to escape from
RestrictedPython's environment? DoS is still possible, but as I'm not
planning on using this for completely untrusted users, I'm not too
concerned about that.
Jul 18 '05 #6

Martin v. Loewis wrote:

The biggest problem is that new-style classes are both available through
the type() builtin, and callable to create new instances.

For example, if you have managed to open a file object f, then

type(f)("/etc/passwd").read()

lets you access a different file, bypassing all machinery that may
have been designed to prevent that from happening.

Of course, for the specific case of file objects, there is additional
machinery preventing that from happening, but in the general case,
there might be more problems in that area. For example,
object.__subclasses__() gives you access to quite a lot of stuff.


RestrictedPython avoids this by removing the type() builtin from the
restricted __builtins__, and it doesn't allow untrusted code to create
names that start with _. Zope3 has a type() builtin, but it returns a
proxy (written in C) to the type object to prevent access.

Right now I'm providing a same_type function instead to compare types.
Later I'll probably start playing around with C proxies.
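Presumably `same_type` reduces to an identity test on the two types that never hands the (callable) type object back to untrusted code; the exact signature is a guess:

```python
def same_type(a, b):
    # The caller learns only a boolean, never a class it could
    # instantiate the way type(f)("/etc/passwd") does.
    return type(a) is type(b)

same_type(1, 2)      # same type
same_type(1, "x")    # different types
```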

I think the main thing that's liable to introduce new security problems
(beyond what RestrictedPython may already have) is that
RestrictedPython is mostly designed to protect the trusted environment
from the untrusted environment, whereas what I'd really like to do is give
programmers in the untrusted environment a way to create objects and
pass them around to one another. For example, in the original setup,
class statements are allowed but not very useful in the restricted
environment, because objects created from those classes would be
read-only: you can't create any special attributes
to tell the system how to handle security from within the restricted
environment. That's why I'm adding private attributes to the system
and figuring out a way to allow methods defined on a class to assign to
attributes on instances of that class without allowing all code to do so.
Jul 18 '05 #7


"Sean R. Lynch" <se***@chaosring.org> wrote in message news:Lm********************@speakeasy.net...
I've been playing around with Zope's RestrictedPython, and I think I'm
on the way to making the modifications necessary to create a
capabilities-based restricted execution system. The idea is to strip out
any part of RestrictedPython that's not necessary for doing capabilities
and do all security using just capabilities.

The basic idea behind capabilities is that you don't give any piece of
code you don't trust a reference to something you don't want it to have
access to. You use proxies instead (E calls them "facets").
"Don't give" sounds good in theory but fails in practice. You can't prevent
leakage 100%, so any security system _must_ help programmer to keep
trusted data away from untrusted code. Do you know that rexec failed
exactly because it didn't help to prevent leakage?

In order to be able to allow untrusted code to create proxy objects, I
needed to be able to store a reference to the proxied object in a
private attribute.

To create private attributes, I'm using "name mangling," where names
beginning with X_ within a class definition get changed to
_<uuid>_<name>, where the UUID is the same for that class. The UUIDs
don't need to be secure because it's not actually possible to create
your own name starting with an underscore in RestrictedPython; they just
need to be unique across all compiler invocations.
This is a problem: you declare private attributes, whereas you should be
declaring public attributes and considering all other attributes private. Otherwise
you don't help prevent leakage. What about doing it this way:

obj.attr means xgetattr(obj,acc_tuple) where acc_tuple = ('attr',UUID)
and xgetattr is
def xgetattr(obj, acc_tuple):
    if not has_key(obj.__accdict__, acc_tuple):
        raise AccessException
    return getattr(obj, acc_tuple[0])

__accdict__ is populated at the time class or its subclasses are created.
If an object without __accdict__ is passed to untrusted code it will
just fail. If new attributes are introduced but not declared in __accdict__
they are also unreachable by default.

The nice thing about using this name mangling is that it's only done at
compile time and doesn't affect runtime performance. An interesting side
effect is that code defined on a class can access private attributes on
all descendants of that class, but only ones that are defined by other
code on that class, so this isn't a security issue.

I was thinking I needed read-only attributes to be able to avoid
untrusted code's being able to sabotage the revoke method on a proxy
object, but I'm thinking that just keeping around a reference to the
revoke method in the original code may be enough.

Does anyone think I'm going in completely the wrong direction here? Am I
missing anything obvious?


It depends on what type of security you want. Did you think about DoS
and covert channels? If you don't care about those, yeah, you don't miss
anything obvious. <wink> You should worry about whether you're missing something
non-obvious.

By the way, did you think about str.encode? Or are you not worried about
bugs in zlib either?

-- Serge.
Jul 18 '05 #8

I hate replying to myself, but I've written some more code. I hope to
have something posted soon so people can rip it apart without needing to
resort to conjecture :)

I had been considering using a name-mangled setattr for doing attribute
assignment to only allow assignment to attributes on descendants of the
class one was writing methods on, but it occurred to me that I could
probably treat "self" as a special name using only compiler
modifications, so I could eliminate RestrictedPython's need to turn all
Getattrs and AssAttrs (shouldn't it be GetAttr?) into method calls. Now,
of course, I'm limited to static checks on names to control access, but
Python already disallows, for example, access to f.func_globals, and
RestrictedPython disallows names that begin with underscore.

Now I need to write a bunch of code that uses this system and attempts
to break it :)
Jul 18 '05 #9

Serge Orlov wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message news:Lm********************@speakeasy.net...
I've been playing around with Zope's RestrictedPython, and I think I'm
on the way to making the modifications necessary to create a
capabilities-based restricted execution system. The idea is to strip out
any part of RestrictedPython that's not necessary for doing capabilities
and do all security using just capabilities.

The basic idea behind capabilities is that you don't give any piece of
code you don't trust a reference to something you don't want it to have
access to. You use proxies instead (E calls them "facets").

"Don't give" sounds good in theory but fails in practice. You can't prevent
leakage 100%, so any security system _must_ help programmer to keep
trusted data away from untrusted code. Do you know that rexec failed
exactly because it didn't help to prevent leakage?


Hmm, this is good information. I think it will probably change the way
I've been looking at this.
In order to be able to allow untrusted code to create proxy objects, I
needed to be able to store a reference to the proxied object in a
private attribute.

To create private attributes, I'm using "name mangling," where names
beginning with X_ within a class definition get changed to
_<uuid>_<name>, where the UUID is the same for that class. The UUIDs
don't need to be secure because it's not actually possible to create
your own name starting with an underscore in RestrictedPython; they just
need to be unique across all compiler invocations.

This is a problem: you declare private attributes, whereas you should be
declaring public attributes and considering all other attributes private. Otherwise
you don't help prevent leakage. What about doing it this way:

obj.attr means xgetattr(obj,acc_tuple) where acc_tuple = ('attr',UUID)
and xgetattr is
def xgetattr(obj, acc_tuple):
    if not has_key(obj.__accdict__, acc_tuple):
        raise AccessException
    return getattr(obj, acc_tuple[0])

__accdict__ is populated at the time class or its subclasses are created.
If an object without __accdict__ is passed to untrusted code it will
just fail. If new attributes are introduced but not declared in __accdict__
they are also unreachable by default.


This is very interesting, and you may convince me to use something
similar, but I don't think you're quite correct in saying that the
name-mangling scheme declares private attributes; what is the difference
between saying "not having X_ in front of the attribute makes it public"
and "having X_ in front of the attribute makes it private?"
The nice thing about using this name mangling is that it's only done at
compile time and doesn't affect runtime performance. An interesting side
effect is that code defined on a class can access private attributes on
all descendants of that class, but only ones that are defined by other
code on that class, so this isn't a security issue.

I was thinking I needed read-only attributes to be able to avoid
untrusted code's being able to sabotage the revoke method on a proxy
object, but I'm thinking that just keeping around a reference to the
revoke method in the original code may be enough.

Does anyone think I'm going in completely the wrong direction here? Am I
missing anything obvious?

It depends on what type of security you want. Did you think about DoS
and covert channels? If you don't care about those, yeah, you don't miss
anything obvious. <wink> You should worry about whether you're missing something
non-obvious.


I am not (particularly) concerned about DoS because I don't plan to be
running anonymous code and having to restart the server isn't that big
of a deal. I do plan to make it hard to accidentally DoS the server, but
I'm not going to sacrifice a bunch of performance for that purpose. As
for covert channels, can you give me an example of what to look for?

I am certainly worried about non-obvious things, but my intent wasn't to
put up a straw man, because if I ask if I'm missing non-obvious things,
the only possible answer is "of course."
By the way, did you think about str.encode? Or are you not worried about
bugs in zlib either?


Well, it'll only take *one* problem of that nature to force me to go
back to converting all attribute accesses to function calls. On the
other hand, as long as any problem that allows a user to access
protected data is actually a bug in that subsystem (zlib, etc.), I think I'm not
going to worry about it too much yet. If there is some method somewhere that will
allow a user access to protected data that is not considered a bug in
that particular subsystem, then I have to fix it in my scheme, which
would probably require going back to converting attribute access to
method calls.
Jul 18 '05 #10

"Sean R. Lynch" <se***@chaosring.org> writes:
RestrictedPython avoids this by removing the type() builtin from the
restricted __builtins__, and it doesn't allow untrusted code to create
names that start with _.


Ah, ok. That might restrict the usefulness of the package (perhaps
that is what "restricted" really means here :-).

People would not normally consider the type builtin insecure, and
might expect it to work. If you restrict Python to, say, just integers
(and functions thereof), it may be easy to see it is safe - but it is
also easy to see that it is useless.

The challenge perhaps is to provide the same functionality as rexec,
without the same problems.

Regards,
Martin
Jul 18 '05 #11

Martin v. Löwis wrote:
"Sean R. Lynch" <se***@chaosring.org> writes:

RestrictedPython avoids this by removing the type() builtin from the
restricted __builtins__, and it doesn't allow untrusted code to create
names that start with _.

Ah, ok. That might restrict the usefulness of the package (perhaps
that is what "restricted" really means here :-).

People would not normally consider the type builtin insecure, and
might expect it to work. If you restrict Python to, say, just integers
(and functions thereof), it may be easy to see it is safe - but it is
also easy to see that it is useless.

The challenge perhaps is to provide the same functionality as rexec,
without the same problems.


Well, I'm providing a same_type function that compares types. What else
do you want to do with type()? The other option is to go the Zope3 route
and provide proxies to the type objects returned by type().
Jul 18 '05 #12


"Sean R. Lynch" <se***@chaosring.org> wrote in message
news:9J********************@speakeasy.net...
John Roth wrote:
Yes, you're missing something really obvious. Multi-level
security is a real difficult problem if you want to solve it
in a believable (that is, bullet-proof) fashion. The only way
I know of solving it is to provide separate execution
environments for the different privilege domains.
In the current Python structure, that means different
interpreters so that the object structures don't intermix.


Hmmm, can you give me an example of a Python application that works this
way? Zope seems to be doing fine using RestrictedPython.
RestrictedPython is, in fact, an attempt to provide different execution
environments within the same memory space, which is the whole point of
my exercise. Now, I know that the lack of an example of insecurity is
not proof of security, but can you think of a way to escape from
RestrictedPython's environment? DoS is still possible, but as I'm not
planning on using this for completely untrusted users, I'm not too
concerned about that.


Restricted Python was withdrawn because of a number of
holes, of which new style classes were the last straw. I don't
know what the exact holes were.

Whether Zope security is subject to those holes is a question
I can't answer (and I don't find it all that interesting, anyway.)
The Restricted Execution environment's disabling access to
__dict__ seems a bit ham-handed, but I suspect that it was
simply the easiest way around one major difficulty. The Bastion
hook (which is what I believe Zope security is built on top of)
seems to be reasonably adequate. The rest of it probably
needs to be rethought.

John Roth

Jul 18 '05 #13


"Sean R. Lynch" <se***@chaosring.org> wrote in message news:d7********************@speakeasy.net...
Serge Orlov wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message news:Lm********************@speakeasy.net...
To create private attributes, I'm using "name mangling," where names
beginning with X_ within a class definition get changed to
_<uuid>_<name>, where the UUID is the same for that class. The UUIDs
don't need to be secure because it's not actually possible to create
your own name starting with an underscore in RestrictedPython; they just
need to be unique across all compiler invocations.

This is a problem: you declare private attributes, whereas you should be
declaring public attributes and considering all other attributes private. Otherwise
you don't help prevent leakage. What about doing it this way:

obj.attr means xgetattr(obj,acc_tuple) where acc_tuple = ('attr',UUID)
and xgetattr is
def xgetattr(obj, acc_tuple):
    if not has_key(obj.__accdict__, acc_tuple):
        raise AccessException
    return getattr(obj, acc_tuple[0])

__accdict__ is populated at the time class or its subclasses are created.
If an object without __accdict__ is passed to untrusted code it will
just fail. If new attributes are introduced but not declared in __accdict__
they are also unreachable by default.


This is very interesting, and you may convince me to use something
similar, but I don't think you're quite correct in saying that the
name-mangling scheme declares private attributes; what is the difference
between saying "not having X_ in front of the attribute makes it public"
and "having X_ in front of the attribute makes it private?"


You're right, the wording is not quite correct. My point was that it should
take effort to make attributes _public_: for example, going to another
source line and typing the attribute name, or doing whatever "declaration" means.
That way, adding a new attribute or leaking an "unsecured" object will
raise an exception when untrusted code tries to access it. Otherwise,
one day something similar to the rexec failure will happen: somebody
added the __class__ and __subclasses__ attributes, and rexec blindly
allowed access to them. The leading underscore doesn't matter; it
could be a name like encode/decode that is troublesome.

By the way, my code above is buggy; it's a good thing you're
not going to use it :) Let me try a second time, in English:
if the attribute 'attr' is declared public, give it. If the function with this
UUID has access to attribute 'attr' on object 'obj', give it. Otherwise, fail.
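That restated rule, as a hedged sketch; the `__public__`, `__accdict__`, and `AccessException` names are illustrative, not from any of the systems under discussion:

```python
class AccessException(Exception):
    pass

def xgetattr(obj, attr, caller_uuid):
    # Rule 1: attributes the class declared public are always readable.
    if attr in getattr(obj, "__public__", ()):
        return getattr(obj, attr)
    # Rule 2: code compiled under this UUID may read attributes its
    # class declared for itself; everything else fails closed.
    if (attr, caller_uuid) in getattr(obj, "__accdict__", {}):
        return getattr(obj, attr)
    raise AccessException(attr)
```

An object without `__public__` or `__accdict__` exposes nothing, so a leaked "unsecured" object fails by default instead of leaking.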


I am not (particularly) concerned about DoS because I don't plan to be
running anonymous code and having to restart the server isn't that big
of a deal. I do plan to make it hard to accidentally DoS the server, but
I'm not going to sacrifice a bunch of performance for that purpose. As
for covert channels, can you give me an example of what to look for?
Never mind, it's just a scary word :) It concerns you if you worry
about information leaking from one security domain to another, like
prisoners knocking on the wall to pass information between cells. In
computers it may look like two plugins: one processes credit
cards, and the other has the capability to make network connections.
If they are written by one evil programmer, the first one can "knock
on the wall" to pass information to the second. "Knocking on the wall"
can be encoded as: quick memory allocation up to failure = 1, no
quick memory allocation = 0. Add error correction and checksumming
and you've got a reliable leak channel.

I am certainly worried about non-obvious things, but my intent wasn't to
put up a straw man, because if I ask if I'm missing non-obvious things,
the only possible answer is "of course."
By the way, did you think about str.encode? Or are you not worried about
bugs in zlib either?


Well, it'll only take *one* problem of that nature to force me to go
back to converting all attribute accesses to function calls. On the
other hand, as long as any problem that allows a user to access
protected data is actually a bug in that subsystem (zlib, etc.), I think I'm not
going to worry about it too much yet. If there is some method somewhere that will
allow a user access to protected data that is not considered a bug in
that particular subsystem, then I have to fix it in my scheme, which
would probably require going back to converting attribute access to
method calls.


I'm not sure how to deal with str.encode either. You can't know for sure
what codecs are registered for that method; one day an unknown
codec could be registered that does something unknown.
Shouldn't you have two (or several) codecs.py modules (instances?)
for trusted and untrusted code, with str.encode transparently
redirected to the proper one?

-- Serge.
Jul 18 '05 #14

In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message
news:9J********************@speakeasy.net...
John Roth wrote:

Yes, you're missing something really obvious. Multi-level security
is a real difficult problem if you want to solve it in a believable
(that is, bullet-proof) fashion. The only way I know of solving it
is to provide separate execution environments for the different
privilege domains. In the current Python structure, that means
different interpreters so that the object structures don't intermix.


Hmmm, can you give me an example of a Python application that works
this way? Zope seems to be doing fine using RestrictedPython.
RestrictedPython is, in fact, an attempt to provide different
execution environments within the same memory space, which is the
whole point of my exercise. Now, I know that the lack of an example
of insecurity is not proof of security, but can you think of a way to
escape from RestrictedPython's environment? DoS is still possible,
but as I'm not planning on using this for completely untrusted users,
I'm not too concerned about that.


Restricted Python was withdrawn because of a number of holes, of which
new style classes were the last straw.


RestrictedPython was *not* withdrawn; rexec was withdrawn. This is a
difficult enough issue to discuss without confusing different modules. See
http://dev.zope.org/Wikis/DevSite/Pr...strictedPython
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

Weinberg's Second Law: If builders built buildings the way programmers wrote
programs, then the first woodpecker that came along would destroy civilization.
Jul 18 '05 #15

Sean R. Lynch wrote:
Well, I'm providing a same_type function that compares types. What else
do you want to do with type()? The other option is to go the Zope3 route
and provide proxies to the type objects returned by type().


I don't know what is needed, but I know that existing code will break
when it did not strictly need to - no existing code uses your function.

If you think your users can accept rewriting their code - fine.

Regards,
Martin

Jul 18 '05 #16


"Aahz" <aa**@pythoncraft.com> wrote in message
news:bt**********@panix2.panix.com...
In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message
news:9J********************@speakeasy.net...
John Roth wrote:

Yes, you're missing something really obvious. Multi-level security
is a real difficult problem if you want to solve it in a believable
(that is, bullet-proof) fashion. The only way I know of solving it
is to provide separate execution environments for the different
privilege domains. In the current Python structure, that means
different interpreters so that the object structures don't intermix.

Hmmm, can you give me an example of a Python application that works
this way? Zope seems to be doing fine using RestrictedPython.
RestrictedPython is, in fact, an attempt to provide different
execution environments within the same memory space, which is the
whole point of my exercise. Now, I know that the lack of an example
of insecurity is not proof of security, but can you think of a way to
escape from RestrictedPython's environment? DoS is still possible,
but as I'm not planning on using this for completely untrusted users,
I'm not too concerned about that.
Restricted Python was withdrawn because of a number of holes, of which
new style classes were the last straw.


RestrictedPython was *not* withdrawn; rexec was withdrawn. This is a
difficult enough issue to discuss without confusing different modules.

See http://dev.zope.org/Wikis/DevSite/Pr...strictedPython

I'm not sure what you're trying to say. The Zope page you reference
says that they were (prior to 2.1) doing things like modifying generated
byte code and reworking the AST. That's fun stuff I'm sure, but it
doesn't have anything to do with "Restricted Execution" as defined in the
Python Library Reference, Chapter 17, which covers Restricted Execution,
RExec and Bastion (which was also withdrawn.)

If I confused you with a subtle nomenclature difference, sorry. I don't
care what Zope is doing or not doing, except for the fact that it seems
to come up in this discussion. I'm only concerned with what Python is
(or is not) doing. The approach in the Wiki page you pointed to does,
however, seem to be a substantially more bullet-proof approach than
Python's Restricted Execution.

John Roth
Jul 18 '05 #17

Serge Orlov wrote:
You're right, the wording is not quite correct. My point was that it should
take effort to make attributes _public_: for example, going to another
source line and typing the attribute name, or doing whatever "declaration" means.
That way, adding a new attribute or leaking an "unsecured" object will
raise an exception when untrusted code tries to access it. Otherwise,
one day something similar to the rexec failure will happen: somebody
added the __class__ and __subclasses__ attributes, and rexec blindly
allowed access to them. The leading underscore doesn't matter; it
could be a name like encode/decode that is troublesome.

By the way, my code above is buggy, it's a good idea that you're
not going to use it :) Let me try it the second time in English words:
If the attribute 'attr' is declared public give it. If the function with
UUID has access to attribute 'attr' on object 'obj' give it. Otherwise fail.
Ok, I think you've pretty much convinced me here. My choices for
protected attributes were to either name them specially and only allow
those attribute accesses on the name "self" (which I treat specially),
or to make everything protected by default, pass all attribute access
through a checker function (which I was hoping to avoid), and check for
a special attribute to define which attributes are supposed to be
public. Do you think it's good enough to make all attributes protected
as opposed to private by default?
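For concreteness, here is a minimal sketch of the checker-function approach being weighed here. The `_public_` declaration and the `caller_uuid` argument are invented names for illustration, not anything from RestrictedPython:

```python
# Sketch only: '_public_' and 'caller_uuid' are hypothetical names.
def guarded_getattr(obj, name, caller_uuid=None):
    # Rule 1: attributes the class explicitly declared public are free.
    public = getattr(type(obj), '_public_', ())
    if name in public:
        return getattr(obj, name)
    # Rule 2: code compiled with the class's UUID may reach the
    # mangled private name _<uuid>_<name>.
    if caller_uuid is not None:
        mangled = '_%s_%s' % (caller_uuid, name)
        if hasattr(obj, mangled):
            return getattr(obj, mangled)
    # Rule 3: otherwise, fail.
    raise AttributeError('access to %r denied' % name)
```

The compiler would rewrite every attribute access into a call like this, which is exactly the runtime cost Sean was hoping to avoid for the name-mangling-only scheme.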
Never mind, it's just a scary word :) It matters if you worry
about information leaking from one security domain to another. Like
prisoners knocking the wall to pass information between them. In
computers it may look like two plugins, one is processing credit
cards and the other one has capability to make network connections.
If they are written by one evil programmer the first one can "knock
the wall" to pass the information to the second. "knocking the wall"
can be encoded like quick memory allocation up to failure = 1, no
quick memory allocation = 0. Add error correction and check summing
and you've got a reliable leak channel.
Hmmm, I think this would be even more difficult to protect against than
doing resource checks. Fortunately, I'm not planning on processing any
credit cards with this code. The primary purpose is so that multiple
programmers (possibly thousands) can work in the same memory space
without stepping on one another.
I'm not sure how to deal with str.encode either. You can't know for sure
what kind of codecs are registered for that method; one day an unknown
codec that does something unknown could be registered.
Shouldn't you have two (or several) codecs.py modules (instances?)
for trusted and untrusted code, with str.encode transparently
redirected to the proper one?


I guess I'll just make attributes protected by default, and force the
programmer to go out of their way to make things public. Then I can use
the Zope/RestrictedPython technique of assuming everything is insecure
until proven otherwise, and only expose parts of the interface on
built-in types that have been audited.
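The "insecure until audited" stance could be sketched as a per-type whitelist of methods. The `SAFE_METHODS` table below is purely illustrative and not Zope's actual checker machinery:

```python
# Illustrative audit table: only these methods of built-in types are
# exposed to restricted code; everything else is denied until audited.
SAFE_METHODS = {
    str: frozenset(['upper', 'lower', 'split', 'strip', 'join']),
    list: frozenset(['append', 'index', 'count']),
}

def restricted_getattr(obj, name):
    allowed = SAFE_METHODS.get(type(obj))
    if allowed is None or name not in allowed:
        raise AttributeError('%s.%s has not been audited'
                             % (type(obj).__name__, name))
    return getattr(obj, name)
```

Note that str.encode is deliberately absent from the table, matching the concern about unknown codecs above.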

Thank you very much for your extremely informative responses!
Jul 18 '05 #18

In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:
"Aahz" <aa**@pythoncraft.com> wrote in message
news:bt**********@panix2.panix.com...
In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:

Restricted Python was withdrawn because of a number of holes, of which
new style classes were the last straw.


RestrictedPython was *not* withdrawn; rexec was withdrawn. This is a
difficult enough issue to discuss without confusing different modules. See
http://dev.zope.org/Wikis/DevSite/Pr...strictedPython


I'm not sure what you're trying to say. The Zope page you reference
says that they were (prior to 2.1) doing things like modifying generated
byte code and reworking the AST. That's fun stuff I'm sure, but it
doesn't have anything to do with "Restricted Execution" as defined in the
Python Library Reference, Chapter 17, which covers Restricted Execution,
RExec and Bastion (which was also withdrawn.)

If I confused you with a subtle nomenclature difference, sorry. I don't
care what Zope is doing or not doing, except for the fact that it seems
to come up in this discussion. I'm only concerned with what Python is
(or is not) doing. The approach in the Wiki page you pointed to does,
however, seem to be a substantially more bullet-proof approach than
Python's Restricted Execution.


Well, I don't care what you do or don't care about, but I do care that
if you're going to post in a thread that you actually read what you're
responding to and that you post accurate information. If you go back to
the post that started this thread, it's quite clear that the reference
was specifically to Zope's RestrictedPython.
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

Weinberg's Second Law: If builders built buildings the way programmers wrote
programs, then the first woodpecker that came along would destroy civilization.
Jul 18 '05 #19


"Aahz" <aa**@pythoncraft.com> wrote in message
news:bt**********@panix3.panix.com...
In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:
"Aahz" <aa**@pythoncraft.com> wrote in message
news:bt**********@panix2.panix.com...
In article <vv************@news.supernews.com>,
John Roth <ne********@jhrothjr.com> wrote:

Restricted Python was withdrawn because of a number of holes, of which
new style classes were the last straw.

RestrictedPython was *not* withdrawn; rexec was withdrawn. This is a
difficult enough issue to discuss without confusing different modules. See
http://dev.zope.org/Wikis/DevSite/Pr...strictedPython
I'm not sure what you're trying to say. The Zope page you reference
says that they were (prior to 2.1) doing things like modifying generated
byte code and reworking the AST. That's fun stuff I'm sure, but it
doesn't have anything to do with "Restricted Execution" as defined in the
Python Library Reference, Chapter 17, which covers Restricted Execution,
RExec and Bastion (which was also withdrawn.)

If I confused you with a subtle nomenclature difference, sorry. I don't
care what Zope is doing or not doing, except for the fact that it seems
to come up in this discussion. I'm only concerned with what Python is
(or is not) doing. The approach in the Wiki page you pointed to does,
however, seem to be a substantially more bullet-proof approach than
Python's Restricted Execution.
Well, I don't care what you do or don't care about, but I do care that
if you're going to post in a thread that you actually read what you're
responding to and that you post accurate information. If you go back to
the post that started this thread, it's quite clear that the reference
was specifically to Zope's RestrictedPython.


I beg to differ. That's what the OP said he started with, not what he's
mostly interested in nor where he wants to end up. I'm including
the first paragraph below:

[extract from message at head of thread]
I've been playing around with Zope's RestrictedPython, and I think I'm
on the way to making the modifications necessary to create a
capabilities-based restricted execution system. The idea is to strip out
any part of RestrictedPython that's not necessary for doing capabilities
and do all security using just capabilities.
[end extract]

To me, at least, it's clear that while he's *started* with a consideration
of Zope's RestrictedPython, he's talking about what would be needed
in regular Python. If he was focusing on Zope, this is the wrong forum
for the thread.

My fast scan of Zope yesterday showed that it was quite impressive,
but some of the things they did seem to be quite specific to the Zope
environment, and don't seem (at least to me) to be all that applicable
to a general solution to having some form of restricted execution.

Much of this thread has focused on "capabilities" and the use of
proxies to implement capabilities. AFAIC, that's not only putting
attention on mechanism before policy, but it's putting attention on
mechanism in the wrong place.

What I *haven't* seen in this thread is very much consideration of
what people want from a security implementation. I've seen that in
some other threads, which included some ways of taming exec and
eval when all you want is a data structure that contains nothing but
known kinds of objects. You don't, however, need exec and eval
for that if you're willing to use the compiler tools (and, I presume,
take a substantial performance hit.)

One problem I've been playing around with is: how would you
implement something functionally equivalent to the Unix/Linux
chroot() facility? The boundaries are that it should not require
coding changes to the application that is being restricted, and it
should allow any and all Python extensions (not C language
extensions) to operate as coded (at least as long as they don't
try to escape the jail!) Oh, yes. It has to work on Windows,
so it's not a legitimate response to say: "use chroot()."

John Roth
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/
Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

Have to be a pretty non-aggressive woodpecker, if it allowed something
that extensive and complicated to happen before attacking it.

Jul 18 '05 #20


"John Roth" <ne********@jhrothjr.com> wrote in message news:vv************@news.supernews.com...

Much of this thread has focused on "capabilities" and the use of
proxies to implement capabilities. AFAIC, that's not only putting
attention on mechanism before policy, but it's putting attention on
mechanism in the wrong place.
I'm not sure why it should be discussed here, since Sean referred
to E in the first post (http://www.erights.org/), so I think he's
comfortable with the policy defined by E? I think he has
missed the part that the implementation should do as much as
it can to help prevent leaking capabilities from one security domain
to another. I pointed that out already.

What I *haven't* seen in this thread is very much consideration of
what people want from a security implementation.
I think Sean is talking about his own implementation. I didn't
see anywhere that he said he's going to write a general implementation
for other people. He said what he wants from his implementation.
One problem I've been playing around with is: how would you
implement something functionally equivalent to the Unix/Linux
chroot() facility? The boundaries are that it should not require
coding changes to the application that is being restricted, and it
should allow any and all Python extension (not C language
extension) to operate as coded (at least as long as they don't
try to escape the jail!) Oh, yes. It has to work on Windows,
so it's not a legitimate response to say: "use chroot()."


I don't see any unsolvable problems. Could you be more specific
about what the problem is? (Besides time, money, the need to support
an alternative Python implementation, etc.)

-- Serge.
Jul 18 '05 #21


"Sean R. Lynch" <se***@chaosring.org> wrote in message news:sr********************@speakeasy.net...
Ok, I think you've pretty much convinced me here. My choices for
protected attributes were to either name them specially and only allow
those attribute accesses on the name "self" (which I treat specially),
or to make everything protected by default, pass all attribute access
through a checker function (which I was hoping to avoid), and check for
a special attribute to define which attributes are supposed to be
public. Do you think it's good enough to make all attributes protected
as opposed to private by default?


Are you talking about C++-like protected fields and methods? What if
untrusted code subclasses your proxy object?
I'm not sure how to deal with str.encode too. You don't know what
kind of codecs are registered for that method for sure, one day there
could be registered an unknown codec that does something unknown.
Shouldn't you have two (or several) codecs.py modules(instances?)
for trusted and untrusted code? And str.encode should be transparently
redirected to the proper one?


I guess I'll just make attributes protected by default, and force the
programmer to go out of their way to make things public. Then I can use
the Zope/RestrictedPython technique of assuming everything is insecure
until proven otherwise, and only expose parts of the interface on
built-in types that have been audited.


Thinking about str.encode, I convinced myself that global state shouldn't
be shared by different security domains, so that means codecs.py and
__builtins__ must be imported into each security domain separately.
It's pretty easy to do with codecs.py since it's Python code. But importing
__builtins__ more than once is pretty hard since it wasn't designed
for that.

-- Serge.
Jul 18 '05 #22

Serge Orlov wrote:
"John Roth" <ne********@jhrothjr.com> wrote in message news:vv************@news.supernews.com...
Much of this thread has focused on "capabilities" and the use of
proxies to implement capabilities. AFAIC, that's not only putting
attention on mechanism before policy, but it's putting attention on
mechanism in the wrong place.

I'm not sure why it should be discussed here, since Sean referred
to E in the first post (http://www.erights.org/), so I think he's
comfortable with the policy defined by E? I think he has
missed the part that the implementation should do as much as
it can to help prevent leaking capabilities from one security domain
to another. I pointed that out already.


I am comfortable (so far) with the policy defined by E. However, I've
been learning more about that policy as I go, including the necessity of
helping the programmer prevent leaks, which I've started to implement by
making objects completely opaque by default and requiring that classes
list attributes that they want to make public. I have kept my
name-mangling scheme for private attributes. I'm working on making
classes opaque while still allowing code to call methods defined on
superclasses but only on self, not on other objects that happen to
inherit from the same superclass.
What I *haven't* seen in this thread is very much consideration of
what people want from a security implementation.

I think Sean is talking about his own implementation. I didn't
see anywhere that he said he's going to write a general implementation
for other people. He said what he wants from his implementation.


I would like my implementation to be as general as possible, but I'm
writing it for my own projects. All this talk of "breaking existing
code" and the like is not particularly relevant to me because, while I'd
like code to look as much like regular Python as possible, it's simply
not possible not to break existing code while helping the programmer
prevent leaks. Making objects opaque by default is going to break a hell
of a lot more code than not having a type() builtin, so I think people
can see why I'm not too concerned about leaving various builtins out.
One problem I've been playing around with is: how would you
implement something functionally equivalent to the Unix/Linux
chroot() facility? The boundaries are that it should not require
coding changes to the application that is being restricted, and it
should allow any and all Python extension (not C language
extension) to operate as coded (at least as long as they don't
try to escape the jail!) Oh, yes. It has to work on Windows,
so it's not a legitimate response to say: "use chroot()."


This is an interesting problem, but not one I'm trying to solve here.
I'm modifying RestrictedPython to make it possible to use a pure
capabilities-based security model in an application server. The
application server must scale to tens of thousands of security domains,
and I see no reason why the security model can't or shouldn't be
language-based instead of OS-based. There's E for Java, why can't we
make something similar for Python? There is nothing particularly special
about Java that makes it more suitable for E than Python is. Both have
unforgeable references. I've already added object encapsulation. I'm
working on eliminating any static mutable state.

Ultimately, I'd like to have user-level threads, too. I'm considering
either using Stackless for this or doing some mangling of ASTs to make
it easier to use generators as coroutines. Unfortunately, I can't think
of a way for the compiler to tell that you're calling a coroutine from
within a coroutine and therefore needs to output "yield (locals,
resultvarname, func, args, kwargs)" instead of a regular function call
without using some special syntax. Actually, I don't even know if it's
possible to modify the locals dict of a running generator without
causing trouble.
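The locals-mutation problem can be sidestepped entirely: rather than rewriting a running generator's locals from outside (which CPython indeed does not support safely), a driver loop can execute each yielded call description and deliver the result back through send(). A simplified sketch, with the (func, args) tuple format invented for illustration:

```python
def trampoline(gen):
    """Drive a generator-based coroutine.  Each yielded (func, args)
    request is executed here, and the result is handed back to the
    coroutine as the value of its yield expression via send()."""
    try:
        request = next(gen)
        while True:
            func, args = request
            request = gen.send(func(*args))
    except StopIteration as stop:
        return stop.value

def worker():
    # Each yield hands a call description to the scheduler; the
    # result comes back as the value of the yield expression.
    a = yield (len, ('hello',))
    b = yield (max, (a, 3))
    return a + b
```

A real scheduler would interleave many such generators, but the same send()-based protocol applies.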
Jul 18 '05 #23

Serge Orlov wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message news:sr********************@speakeasy.net...
Ok, I think you've pretty much convinced me here. My choices for
protected attributes were to either name them specially and only allow
those attribute accesses on the name "self" (which I treat specially),
or to make everything protected by default, pass all attribute access
through a checker function (which I was hoping to avoid), and check for
a special attribute to define which attributes are supposed to be
public. Do you think it's good enough to make all attributes protected
as opposed to private by default?

Are you talking about C++ like protected fields and methods? What if
untrusted code subclasses your proxy object?


Hmmm. I was thinking you'd trust those you were allowing to subclass
your classes a bit more than you'd trust people to whom you'd only give
instances, but now that you mention it, you're right. I should make all
attributes fully private by default, requiring the programmer to declare
both protected and public attributes, and I should make attributes only
writable by the class on which they're declared. I guess I also need to
make it impossible to override any attribute unless it's declared OK to
do so.

I wonder if each of these things can be done with capabilities? A
reference to a class is basically the capability to subclass it. I could
create a concept of "slots" as well. This would require a change in
syntax, however; you'd be calling setter(obj, value) and getter(obj),
and this isn't really something I could cover up in the compiler. I
think I'll forget about this for now because E just uses Java's own
object encapsulation, so I guess I should just stick with creating
Java-like object encapsulation in Python.
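For what it's worth, the setter(obj, value)/getter(obj) idea can be prototyped today with closures, where possession of the two functions is itself the capability. This is only a sketch of the concept, not the proposed compiler support:

```python
from weakref import WeakKeyDictionary

def make_slot():
    """Create a private per-object 'slot' as a setter/getter pair.
    Holding the two closures *is* the capability; there is no
    attribute name on the object for untrusted code to discover."""
    store = WeakKeyDictionary()   # entries vanish when the object dies
    def setter(obj, value):
        store[obj] = value
    def getter(obj):
        return store[obj]
    return setter, getter
```

Handing out only the getter yields a read-only facet on the slot, which is the usual capability pattern.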

I need to implement a callsuper() function as well, because I don't want
to be giving programmers access to unbound methods.
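A callsuper() along those lines might look like the following sketch. The explicit cls argument stands in for whatever the compiler would supply automatically, and no unbound method object ever reaches the caller:

```python
def callsuper(cls, obj, name, *args, **kwargs):
    """Invoke the version of 'name' found above cls in obj's MRO,
    without exposing an unbound method object to the caller."""
    return getattr(super(cls, obj), name)(*args, **kwargs)

class A:
    def greet(self):
        return 'A'

class B(A):
    def greet(self):
        return 'B+' + callsuper(B, self, 'greet')
```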
Thinking about str.encode I conviced myself that global state shouldn't
be shared by different security domains so that means codecs.py and
__builtins__ must be imported into each security domain separately.
It's pretty easy to do with codecs.py since it's python code. But importing
__builtins__ more than once is pretty hard since it wasn't designed
for that.


Global *mutable* state shouldn't be shared, AFAICT. I believe making
sure no mutable state is reachable through __builtins__ and having a new
globals dict for each security domain should be enough. Any modules that
are imported would need to be imported separately for each domain, which
should be possible with a modified __import__ builtin. I don't have any
intention of allowing import of unaudited C modules.
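A per-domain __import__ along these lines might be sketched as below. The whitelist argument is hypothetical, and real isolation would also have to cover submodules and sys.modules:

```python
import importlib.util

def make_domain_import(allowed):
    """Build a per-domain __import__ replacement: each security domain
    gets its own fresh copy of every permitted pure-Python module, so
    no mutable module state is shared across domains."""
    cache = {}
    def domain_import(name, *ignored):
        if name not in allowed:
            raise ImportError('module %r not permitted' % name)
        if name not in cache:
            spec = importlib.util.find_spec(name)
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)   # fresh, unshared instance
            cache[name] = mod
        return cache[name]
    return domain_import
```

Two domains built this way each see their own module object, so mutating module-level state in one domain is invisible to the other.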
Jul 18 '05 #24


"Serge Orlov" <so********@pobox.ru> wrote in message
news:bt***********@nadya.doma...

"John Roth" <ne********@jhrothjr.com> wrote in message news:vv************@news.supernews.com...

What I *haven't* seen in this thread is very much consideration of
what people want from a security implementation.


I think Sean is talking about his own implementation. I didn't
see anywhere he said he's going to write general implementation
for other people. He said what he wants from his implementation.


I see that point, and now that it's been made explicit (I missed
it the first time around, sorry,) I'm ok with it.
One problem I've been playing around with is: how would you
implement something functionally equivalent to the Unix/Linux
chroot() facility? The boundaries are that it should not require
coding changes to the application that is being restricted, and it
should allow any and all Python extension (not C language
extension) to operate as coded (at least as long as they don't
try to escape the jail!) Oh, yes. It has to work on Windows,
so it's not a legitimate response to say: "use chroot()."


I don't see any unsolvable problems. Could you be more specific
what is the problem? (besides time, money, need to support
alternative python implementation, etc...)


Well, I don't see any unsolvable problems either. The biggest
sticking point is that the Unices use hard links to create
a directory tree that has the necessary programs available.
Windows does not have this capability, so an implementation
would have to build a virtual directory structure, intercept all
paths and map them to the virtual structure backwards and
forwards.
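The forward half of that mapping might be sketched like this; jail_path() is a hypothetical helper that re-roots a requested path under the virtual tree and refuses escapes:

```python
import os.path

def jail_path(root, path):
    """Map a path requested by jailed code into a virtual tree rooted
    at 'root' (the forward mapping only), refusing any request that
    would escape the jail."""
    # Treat the requested path as relative to the jail root.
    candidate = os.path.normpath(os.path.join(root, path.lstrip('/\\')))
    if not candidate.startswith(os.path.normpath(root) + os.sep):
        raise ValueError('path escapes the jail: %r' % path)
    return candidate
```

Every file and directory function the restricted code can reach would have to be routed through this mapping (and its reverse for results), which is exactly the redesign of the os module being discussed.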

The reason I find it an interesting problem is that I can't see
any way to do it with the kind of "generic" facility that was
in the Python Restricted execution facility, at least without a
complete redesign of the file and directory functions and
classes in the os module. Without that, it would
require code in the C language implementation modules.
Right now the file and directory management modules are a
real mess.

John Roth

Jul 18 '05 #25


"Sean R. Lynch" <se***@chaosring.org> wrote in message news:mv********************@speakeasy.net...
Serge Orlov wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message news:sr********************@speakeasy.net...

Thinking about str.encode, I convinced myself that global state shouldn't
be shared by different security domains so that means codecs.py and
__builtins__ must be imported into each security domain separately.
It's pretty easy to do with codecs.py since it's python code. But importing
__builtins__ more than once is pretty hard since it wasn't designed
for that.
Global *mutable* state shouldn't be shared, AFAICT.


Right, I missed this simple rule. My mind is still confined by my recent
attempt to add security by only translating bytecode without any changes
to the interpreter.
I believe making
sure no mutable state is reachable through __builtins__
Are you going to create multiple __builtins__, or are you just going
to get rid of any global objects in __builtins__? The first lets you
handle str.encode the right way.
and having a new
globals dict for each security domain should be enough. Any modules that
are imported would need to be imported separately for each domain,
Can C modules be imported more than once in CPython?
which
should be possible with a modified __import__ builtin. I don't have any
intention of allowing import of unaudited C modules.


Agreed.

-- Serge.
Jul 18 '05 #26


"John Roth" <ne********@jhrothjr.com> wrote in message news:vv************@news.supernews.com...

"Serge Orlov" <so********@pobox.ru> wrote in message
news:bt***********@nadya.doma...

"John Roth" <ne********@jhrothjr.com> wrote in message

news:vv************@news.supernews.com...

One problem I've been playing around with is: how would you
implement something functionally equivalent to the Unix/Linux
chroot() facility? The boundaries are that it should not require
coding changes to the application that is being restricted, and it
should allow any and all Python extension (not C language
extension) to operate as coded (at least as long as they don't
try to escape the jail!) Oh, yes. It has to work on Windows,
so it's not a legitimate response to say: "use chroot()."


I don't see any unsolvable problems. Could you be more specific
what is the problem? (besides time, money, need to support
alternative python implementation, etc...)


Well, I don't see any unsolvable problems either. The biggest
sticking point is that the Unices use hard links to create
a directory tree that has the necessary programs availible.
Windows does not have this capability, so an implementation
would have to build a virtual directory structure, intercept all
paths and map them to the virtual structure backwards and
forwards.

The reason I find it an interesting problem is that I can't see
any way to do it with the kind of "generic" facility that was
in the Python Restricted execution facility, at least without a
complete redesign of the file and directory functions and
classes in the os module. Without that, it would
require code in the C language implementation modules.
Right now the file and directory management modules are a
real mess.


Right, you can do it with a custom importer and wrapper
functions over all the file and directory functions. But that's
a mess over a mess, and any mess is *bad* for security.
The way out of the mess is probably a filepath object that
consolidates all access to files and directories.
If you wanted to make the point that the std library should
be designed with security in mind, I agree with you.
One step in that direction is to design everything OO.
OO design plays nice with capabilities.

-- Serge.
Jul 18 '05 #27


"Serge Orlov" <so********@pobox.ru> wrote in message
news:bt***********@nadya.doma...

"John Roth" <ne********@jhrothjr.com> wrote in message news:vv************@news.supernews.com...

"Serge Orlov" <so********@pobox.ru> wrote in message
news:bt***********@nadya.doma...

"John Roth" <ne********@jhrothjr.com> wrote in message

news:vv************@news.supernews.com...
>
> One problem I've been playing around with is: how would you
> implement something functionally equivalent to the Unix/Linux
> chroot() facility? The boundaries are that it should not require
> coding changes to the application that is being restricted, and it
> should allow any and all Python extension (not C language
> extension) to operate as coded (at least as long as they don't
> try to escape the jail!) Oh, yes. It has to work on Windows,
> so it's not a legitimate response to say: "use chroot()."

I don't see any unsolvable problems. Could you be more specific
what is the problem? (besides time, money, need to support
alternative python implementation, etc...)


Well, I don't see any unsolvable problems either. The biggest
sticking point is that the Unices use hard links to create
a directory tree that has the necessary programs availible.
Windows does not have this capability, so an implementation
would have to build a virtual directory structure, intercept all
paths and map them to the virtual structure backwards and
forwards.

The reason I find it an interesting problem is that I can't see
any way to do it with the kind of "generic" facility that was
in the Python Restricted execution facility, at least without a
complete redesign of the file and directory functions and
classes in the os module. Without that, it would
require code in the C language implementation modules.
Right now the file and directory management modules are a
real mess.


Right, you can do it with a custom importer and wrapper
functions over all file and directory functions. But that's
a mess over a mess and any mess is *bad* for security.
The way out the mess is probably filepath object that
should consolidate all access to files and directories.
If you wanted to make a point that std library should
be designed with security in mind I agree with you.
One step in that direction is to design everything OO.
OO design plays nice with capabilities.

-- Serge.


Sean Ross took a pass at this idea in the thread
"Finding File Size" starting on 1/1. That got renamed
to "Filename Type" fairly quickly.

There's now a pre-PEP http://tinyurl.com/2578q
for the notion, thanks to Gerrit Holl.

John Roth

Jul 18 '05 #28

Serge Orlov wrote:
"Sean R. Lynch" <se***@chaosring.org> wrote in message news:mv********************@speakeasy.net...

Global *mutable* state shouldn't be shared, AFAICT.

Right, I missed this simple rule. My mind is still confined by my recent
attempt to add security by only translating bytecode without any changes
to the interpreter.


You were translating bytecode rather than working with ASTs? That would
be hard to maintain, considering that Zope found it too difficult to
maintain even manipulating concrete syntax trees. Also, I don't really
consider that I'm modifying the interpreter, I'm just giving the
interpreter a different globals dict.
I believing making
sure no mutable state is reachable through __builtins__

Are you going to create multiple __builtins__ or you're just going
to get rid of any global objects in __builtins__? The first lets you
handle str.encode the right way.


I'm not sure what you mean by this. I'm creating a dict for
__builtins__, but AFAIK it's not possible for code to modify the
__builtins__ dict other than through the name __builtins__, which starts
with an underscore and so is invalid. All of the objects I have in
__builtins__ right now are immutable within the restricted environment
because they're either functions or classes.
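Building such a __builtins__ dict per domain might be sketched as follows; the whitelist here is illustrative, not an audited set:

```python
import builtins

# Illustrative whitelist: every entry is a function or class, so the
# restricted __builtins__ carries no shared mutable state.
SAFE_NAMES = ('len', 'range', 'abs', 'min', 'max',
              'str', 'int', 'list', 'dict', 'tuple', 'sorted')

def run_restricted(source):
    """Execute untrusted source with a fresh, whitelisted
    __builtins__ dict and a fresh globals dict per call."""
    safe = {name: getattr(builtins, name) for name in SAFE_NAMES}
    env = {'__builtins__': safe}
    exec(source, env)
    return env
```

Anything not in the whitelist, open() included, simply raises NameError inside the restricted code.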

Python modules that are imported in the restricted environment will be
read-only and each domain will get its own copy. This should prevent
leaks caused by two domains importing the same module and then
performing operations that affect the state of the module. Modules will
need to explicitly specify what names they want to export the same way
classes do in order to prevent inadvertent leaks.
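The explicit-exports idea might look like this sketch, with a hypothetical _exports_ declaration playing the role the public-attribute list plays for classes:

```python
import types

def copy_module_for_domain(mod):
    """Give a security domain its own view of a module, exposing only
    the names listed in a hypothetical _exports_ declaration (the
    module-level analogue of a class's public-attribute list)."""
    view = types.ModuleType(mod.__name__)
    for name in getattr(mod, '_exports_', ()):
        setattr(view, name, getattr(mod, name))
    return view
```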
and having a new
globals dict for each security domain should be enough. Any modules that
are imported would need to be imported separately for each domain,

Can C modules be imported more than once in CPython?


Not that I'm aware of, which is why they will need to be audited for
mutable state and other sources of leaks and excess privilege. C modules
that we need that have problems will get proxies the same way E has
proxies for Swing.
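Such a proxy could be an E-style revocable facet; the sketch below is a generic illustration, not Zope code:

```python
class RevocableProxy:
    """Minimal E-style facet: forwards only whitelisted attribute
    lookups to the wrapped object, and stops working once revoked."""
    def __init__(self, target, allowed):
        self._target = target
        self._allowed = frozenset(allowed)
    def __getattr__(self, name):
        if self._target is None:
            raise RuntimeError('capability revoked')
        if name not in self._allowed:
            raise AttributeError(name)
        return getattr(self._target, name)
    def revoke(self):
        self._target = None
```

Keeping a reference to the revoke method on the trusted side, as suggested at the start of the thread, is what lets the grantor cut off access later.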
Jul 18 '05 #29

I put up a page on my Wiki with what I can remember off the top of my
head of what we've discussed so far and some of what I've implemented so
far (though my implementation is in flux). It's at
<http://wiki.literati.org/CapablePython>. Feel free to add comments.
Jul 18 '05 #30

Ok, so how do I handle overriding methods?

Allowing code that inherits from a class to override any method it can
access could result in security problems if the parent doesn't expect a
given method to be overridden. Shadowing the overridden method could
result in unexpected behavior. And, of course, in some cases, we *want*
to allow the programmer to override methods, because this is what code
reuse is all about.

I already have a solution in mind for __init__: provide another method
that is always called on every superclass for any object that is created
from a class that inherits from that superclass. The method takes no
arguments and is called after __init__.

Another option is to simply not provide any protection from inheriting
classes. The more I think about it, the better this sounds. Classes
don't contain any data other than perhaps class data, and a programmer
shouldn't be allowing another programmer to subclass classes that
contain data they want to protect. I could put all attributes back to
being protected (rather than private) by default, and only have a single
extra declaration required in a class statement to declare what
attributes you want to make public.

Can anyone (Serge?) think of an example of a case where this might cause
leaks or privilege escalation? Remember that classes are opaque, so code
can't get at unbound methods. Subclassing a class doesn't give you any
special access to members of that class.
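One possible shape for "declared OK to override": a hypothetical _overridable_ declaration enforced at subclass-creation time. This sketch uses modern Python's __init_subclass__ purely for illustration; the compiler-based system would do the equivalent check itself:

```python
class Guarded:
    """Base-class sketch: a subclass may only redefine names its
    ancestors listed in a hypothetical _overridable_ declaration."""
    _overridable_ = ()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Collect every name the ancestors agreed to let subclasses redefine.
        allowed = set()
        for base in cls.__mro__[1:]:
            allowed.update(getattr(base, '_overridable_', ()))
        # Reject any shadowing of an undeclared inherited name.
        for name in vars(cls):
            if name.startswith('_') or name in allowed:
                continue
            if any(name in vars(base) for base in cls.__mro__[1:]):
                raise TypeError('method %r may not be overridden' % name)
```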
Jul 18 '05 #31
