Bytes IT Community

Metaclass with name overloading.

I would like to write a metaclass which would allow me to overload
names in the definition of its instances, like this

class Foo(object):

    __metaclass__ = OverloadingClass

    att = 1
    att = 3

    def meth(self):
        pass

    def meth(self, arg):
        return arg

I would then like the dictionary received by OverloadingClass.__new__
to look something like this:

{'att': (1, 3),
 'meth': (<function meth at 0x4018e56c>, <function meth at 0x4018e80c>)}

IOW, each name bound in the class definition should have associated
with it a tuple containing all the objects which were bound to that
name, rather than merely keeping the most recent binding for any given
name.

I was wondering whether it would be possible to achieve this by
forcing Python to use some dictionary proxy (which accumulates values,
rather than keeping just the last value to be associated with a key),
instead of dict, when executing the class definition?

Is something like this at all possible in pure Python? Or does it
require fiddling around in the guts of the parser?
Jul 18 '05 #1
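A historical footnote: the hook being asked for here was later added to the language as `type.__prepare__` (PEP 3115, Python 3.0), which lets a metaclass supply the mapping used while the class body executes. A minimal sketch of the accumulating behaviour the question describes, written with that later feature rather than anything available at the time of this thread:

```python
class MultiDict(dict):
    """Class-body namespace that keeps every value bound to a name."""
    def __setitem__(self, key, value):
        if key in self:
            prev = super().__getitem__(key)
            # Grow an existing entry into a tuple of all bound values.
            value = prev + (value,) if isinstance(prev, tuple) else (prev, value)
        super().__setitem__(key, value)

class OverloadingClass(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # Called before the class body runs; the returned mapping
        # becomes the namespace the body's STORE_NAME writes into.
        return MultiDict()
    def __new__(mcls, name, bases, namespace):
        return super().__new__(mcls, name, bases, dict(namespace))

class Foo(metaclass=OverloadingClass):
    att = 1
    att = 3

    def meth(self):
        pass

    def meth(self, arg):
        return arg

# Foo.att is now (1, 3); Foo.meth is a tuple of both functions.
```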
33 Replies


Jacek Generowicz <ja**************@cern.ch> wrote:
...
I would like to write a metaclass which would allow me to overload
names in the definition of its instances, like this ... I was wondering whether it would be possible to achieve this by
forcing Python to use some dictionary proxy (which accumulates values,
rather than keeping just the last value to be associated with a key),
instead of dict, when executing the class definition?

Is something like this at all possible in pure Python? Or does it
require fiddling around in the guts of the parser?


It's not possible in pure Python -- Python will always use a real dict
rather than any other type as you'd wish. The parser OTOH shouldn't
really be involved, either. In the end it boils down to a series of
STORE_NAME pseudocode instructions, which currently do:

case STORE_NAME:
    w = GETITEM(names, oparg);
    v = POP();
    if ((x = f->f_locals) != NULL) {
        if (PyDict_CheckExact(x))
            err = PyDict_SetItem(x, w, v);
        else
            err = PyObject_SetItem(x, w, v);
        Py_DECREF(v);

so they'd be able to work with f_locals being either a dict, or any
other mapping -- so that part should be OK. But having frame f's
f_locals be anything but a dict, now THAT is the problem...
Alex

Jul 18 '05 #2
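Alex's point can be checked from pure Python with the `dis` module: the class body compiles to its own nested code object, and each assignment in it becomes a STORE_NAME. A quick sketch (opcode details vary across CPython versions, but STORE_NAME itself has been stable):

```python
import dis

src = """
class Foo:
    att = 1
    att = 3
"""
module_code = compile(src, "<example>", "exec")
# The class body lives in a nested code object among the constants.
class_body = next(c for c in module_code.co_consts if hasattr(c, "co_code"))
ops = {ins.opname for ins in dis.get_instructions(class_body)}
print("STORE_NAME" in ops)  # both 'att = ...' bindings compile to STORE_NAME
```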

On 27 Sep 2004 13:33:33 +0200, Jacek Generowicz
<ja**************@cern.ch> wrote:
<snip>
I would then like the dictionary received by OverloadingClass.__new__
to look something like this:

{'att': (1, 3),
 'meth': (<function meth at 0x4018e56c>, <function meth at 0x4018e80c>)}

IOW, each name bound in the class definition should have associated
with it a tuple containing all the objects which were bound to that
name, rather than merely keeping the most recent binding for any given
name.

I was wondering whether it would be possible to achieve this by
forcing Python to use some dictionary proxy (which accumulates values,
rather than keeping just the last value to be associated with a key),
instead of dict, when executing the class definition?

Is something like this at all possible in pure Python? Or does it
require fiddling around in the guts of the parser?


No, you can't, and it's not just a parser issue. Python uses direct C
calls to the native dict type. It's hard coded, and I don't see any
obvious or easy way to override it except by rewriting all such code.
The metaclass receives the dictionary after all declarations were
collected, and then it's too late to act upon it.

Now that we're talking about it, I would like to discuss how this type
of hack could possibly be done in a future version of Python (probably
Python 3.0; I don't think it's anywhere close to possible for Python
2.x).

1) One such idea is to provide the metaclass with a __dict__ factory.
For example:

class MyMetaclass(type):
    def __new__(...):
        ...
    def __dict__(cls):
        return CustomDict()

... where CustomDict is a user-defined mapping type that can store
more information about the entries than the native dict:

-- CustomDict[name] would retrieve a tuple containing all entries, in
reverse order. CustomDict[name][0] would retrieve the last definition.

-- The order of the definitions would be preserved, and the iterators
(iterkeys, iteritems, itervalues) would all iterate over the entries
in the order of the definition.

By using a user-defined dict, we would still use the default dict most
of the time without any noticeable performance hit, but would be able
to change the guts of the class declaration system whenever needed.

2) Another (crazy) idea is to have the possibility to declare
anonymous class members, such as in:

class MyClass:
    """the first anonymous member is the doc string"""
    """the second anonymous member is __anon__[0]"""
    1.5  # __anon__[1] = 1.5

By anonymous members, I mean anything that is not a def or a nested
class and was not assigned or bound to a name. That would be nice to
have too :-)

-------
p.s. In the particular case of the original poster, I'm wondering what
kind of application he had in mind. I had similar needs while
studying some alternatives for declaring certain types of data structure in
Python -- forms, reports, webpages, etc. -- stuff where the order of
the entries is potentially as important as the actual member names.
I'm really curious about it...

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #3

Carlos Ribeiro <ca********@gmail.com> wrote:
...
No, you can't, and it's not just a parser issue. Python uses direct C
calls to the native dict type. It's hard coded, and I don't see any
obvious or easy way to override it except by rewriting all such code.
The STORE_NAME used within the code object to which the class body gets
compiled is quite ready for a non-native-dict f_locals of the frame.
The problem is how to get your own favourite object into that f_locals
in the first place, and THAT one is hard -- can't think of any way...
The metaclass receives the dictionary after all declarations were
collected, and then it's too late to act upon it.
Yep, far too late. You'd have to somehow tweak the CALL_FUNCTION
bytecode that's compiled as part of the class statement, so that it
makes a frame whose f_locals is some strange object of your choice.
Now that we're talking about it, I would like to discuss how this type
of hack could possibly be done in a future version of Python (probaly
Python 3.0, I don't think it's anywhere close to possible for Python
2.x).
I think it might be quite feasible in 2.5 (too late for 2.4).

1) One such idea is to provide the metaclass with a __dict__ factory.
If you go that route, then it may indeed require changes too deep for
2.5 or 2.anything. The metaclass gets determined later; at the time
CALL_FUNCTION executes, the metaclass ain't in play yet.
By using a user-defined dict, we would still use the default dict most
of the time without any noticeable performance hit, but would be able
to change the guts of the class declaration system whenever needed.
Yeah, but if you make it a metaclass's job, then you're indeed asking
for deep changes from today's architecture.
2) Another (crazy) idea is to have the possibility to declare
anonymous class members, such as in:

class MyClass:
    """the first anonymous member is the doc string"""
    """the second anonymous member is __anon__[0]"""
    1.5  # __anon__[1] = 1.5

By anonymous members, I mean anything that is not a def or a nested
class and was not assigned or bound to a name. That would be nice to
have too :-)


Maybe, and maybe not, but it would not help in the least with dealing
with two def's for the same name, as the OP wanted.

One thing that might work: have a sys._set_locals_factory(...) which
lets you change (maybe per-thread...?) how a frame's f_locals are made.
Alex
Jul 18 '05 #4

On Mon, 27 Sep 2004 15:27:31 +0200, Alex Martelli <al*****@yahoo.com> wrote:
1) One such idea is to provide the metaclass with a __dict__ factory.


If you go that route, then it may indeed require changes too deep for
2.5 or 2.anything. The metaclass gets determined later; at the time
CALL_FUNCTION executes, the metaclass ain't in play yet.
By using a user-defined dict, we would still use the default dict most
of the time without any noticeable performance hit, but would be able
to change the guts of the class declaration system whenever needed.


Yeah, but if you make it a metaclass's job, then you're indeed asking
for deep changes from today's architecture.


Yes, that's why I said that I don't think this is possible in a short
time frame. But for now, I have only one question. You said that the
metaclass is determined later. I'm not familiar with Python's
internals, but I (naively?) assumed that the metaclass is known very
early in the class declaration process. Is that wrong?

Note 1: If it's possible to determine the metaclass *early*, then it's
no problem to call any extra function to provide the f_locals dict, or
to provide an augmented dict.
--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #5

Carlos Ribeiro <ca********@gmail.com> wrote:
...
On Mon, 27 Sep 2004 15:27:31 +0200, Alex Martelli <al*****@yahoo.com> wrote:
1) One such idea is to provide the metaclass with a __dict__ factory.
If you go that route, then it may indeed require changes too deep for
2.5 or 2.anything. The metaclass gets determined later; at the time
CALL_FUNCTION executes, the metaclass ain't in play yet.

... time frame. But for now, I have only one question. You said that the
metaclass is determined later. I'm not familiar with Python's
internals, but I (naively?) assumed that the metaclass is known very
early in the class declaration process. Is that wrong?
I'm confused as to why you would assume that and what you mean by
'declaration'. I'll take it that you mean the _execution_ of the class
_statement_, in which case I really can't see how you could assume any
such thing. You _do_ know that __metaclass__ can be set anywhere in the
class body, for example, right? So how could Python know the metaclass
before it's done executing the class body? Yet it's exactly in order to
execute the class body in the modified way you desire, that Python would
need to set a certain frame's f_locals differently from the usual dict.

In other words, it's not an issue of implementation details that could
be tweaked while leaving current semantics intact: if you insist that it
must be the _metaclass_ which determines what kind of mapping object is
used for the f_locals of the frame where the class's body executes, then
it cannot be possible any more to determine the metaclass with today's
semantics (which I think are pretty fine, btw).

Come to think of it, this might be a great role for class-decorators.
If I had a way to tweak the kind of f_locals for a _function_, the
general semantics of today's function decorators would be fine (the
tweak might be attached after the function object is built, since it's
only needed when that function gets _called_). But for a class, by the
time the class body starts executing, it's too late, because it must be
executing with some kind of f_locals in its frame. So, class decorators
would have to be given a chance _before_ the class body runs.
Note 1: If it's possible to determine the metaclass *early*, then it's
no problem to call any extra function to provide the f_locals dict, or
to provide an augmented dict.


But Python can never be sure what the metaclass will be until the class
body is done executing, far too late. Basically, you'd need to forbid
the current possibility of setting __metaclass__ in the class body, plus,
you'd have to get deep into analyzing the bases before executing the
class body -- and if you look at what types.ClassType does to delegate
the metaclass choice to other bases if it possibly can, you'll probably
agree that this nicety has to be abrogated too.

All in order to have the f_locals' kind be set by the *metaclass* rather
than by other means -- honestly, it seems the price is too high, unless
there are some big advantages connected to the choice of metaclass vs
other means that I fail to see. What's so terrible about my q&d idea of
having a sys._something function do the job, for example? Then once we
have the ability to do the setting we can think of nice syntax to dress
it up. But the mods to ceval.c needed to accept any kind of setting for
f_locals' type seem a pretty big job already, without needing to put up
the hurdle that the metaclass must be determined early, too...
Alex
Jul 18 '05 #6
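The late, bases-delegated metaclass resolution Alex describes can be observed directly. This sketch uses modern Python 3 spelling (where the metaclass is a keyword argument; the 2.x `__metaclass__` lookup differs in mechanics but is equally late):

```python
class Meta(type):
    """Do-nothing metaclass, used only to observe which metaclass wins."""

class Base(metaclass=Meta):
    pass

class Derived(Base):
    # No metaclass named here: Python inspects the bases only after
    # the body has executed, and picks the most derived metaclass.
    pass

print(type(Derived) is Meta)  # prints True
```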

Jacek Generowicz <ja**************@cern.ch> writes:
I would like to write a metaclass which would allow me to overload
names in the definition of its instances, like this

class Foo(object):

    __metaclass__ = OverloadingClass

    att = 1
    att = 3

    def meth(self):
        pass

    def meth(self, arg):
        return arg

I would then like the dictionary received by OverloadingClass.__new__
to look something like this:

{'att': (1, 3),
 'meth': (<function meth at 0x4018e56c>, <function meth at 0x4018e80c>)}

IOW, each name bound in the class definition should have associated
with it a tuple containing all the objects which were bound to that
name, rather than merely keeping the most recent binding for any given
name.

I was wondering whether it would be possible to achieve this by
forcing Python to use some dictionary proxy (which accumulates values,
rather than keeping just the last value to be associated with a key),
instead of dict, when executing the class definition?

Is something like this at all possible in pure Python? Or does it
require fiddling around in the guts of the parser?


It won't work for ordinary attributes, but for overloading methods you
should be able to play some tricks with decorators and sys._getframe().

Thomas
Jul 18 '05 #7

On Mon, 27 Sep 2004 16:47:31 +0200, Alex Martelli <al*****@yahoo.com> wrote:
I'm confused as to why you would assume that and what you mean by
'declaration'. I'll take it that you mean the _execution_ of the class
_statement_, in which case I really can't see how you could assume any
such thing. You _do_ know that __metaclass__ can be set anywhere in the
class body, for example, right? So how could Python know the metaclass
before it's done executing the class body? Yet it's exactly in order to
execute the class body in the modified way you desire, that Python would
need to set a certain frame's f_locals differently from the usual dict.


Forgive me, because I got quite confused. For some reason, I
assumed that the __metaclass__ statement had to be the first one in
the class declaration. I don't know why. Probably because all the examples
that I have seen so far were done like that, and in a way, it
made sense to me -- probably because I wanted it to be that way.

I think that I'm trying to do too many things at once, with more
enthusiasm than solid knowledge. I really feel that I'm on the right
path, but I'm still missing a lot of stuff. I'll try to rest a little,
think carefully about all the ideas that have popped in my mind over
the past week, and try to study it a little better *before* trying to
post again. Real thanks for all the help.

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #8

Carlos Ribeiro <ca********@gmail.com> wrote:
...
Forgive me, because I got quite confused. For some reason, I
assumed that the __metaclass__ statement had to be the first one in
the class declaration. I don't know why. Probably because all the examples
that I have seen so far were done like that, and in a way, it
made sense to me -- probably because I wanted it to be that way.
Ah, I see. There is no such constraint, nor does Python need to 'peek'
into the class body _at all_ trying to find all the ways in which the
name __metaclass__ could be bound, even if it was the first statement to
do that binding. E.g.,

class foo:
    class __metaclass__(type): ...

or

class bar:
    from baz import fee as __metaclass__

and a bazillion other possibilities to bind name '__metaclass__' in the
class body. Python just executes the class body, _then_ checks if
'__metaclass__' is a key in the resulting dictionary -- that's all. WAY
simpler and more general, this way.

I think that I'm trying to do too many things at once, with more
enthusiasm than solid knowledge. I really feel that I'm on the right
path, but I'm still missing a lot of stuff. I'll try to rest a little,
think carefully about all the ideas that have popped in my mind over
the past week, and try to study it a little better *before* trying to
post again. Real thanks for all the help.


You're welcome! Sure, metaclasses are tempting when one is trying to
shoehorn not-quite-feasible things into Python. And I do appreciate
that one really prefers declarative syntax sometimes... but...

In AB Strakt's CAPS framework, we started out with purely-executable
ways to build Business Logic Modules (calls such as 'blm = MakeBlm(...)'
and 'ent = blm.addEntity(...)' etc etc), then we moved to a simple
"interpreter" churning simple data structures (and doing the
executable-ways of BLM creation under the covers), finally designed a
dedicated, purely declarative language (informally known as 'blam'),
with somewhat Pythonesque syntax, to express each BLM (basically the
result of an Entity-Relationship Diagram analysis, plus embedded Python
code for 'trigger'-like actions, and the like) -- so, now, we parse
'blam' modules into AST's, walk the AST's to generate the needed Python
code, etc.

A dedicated declarative "small language" is, I believe, a better idea
than shoehorning a declarative language into Python. It lets us choose
the best syntax more freely, provide clearer error messages when
something is wrong, process a BLM in alternate ways (e.g. to produce ERD
graphics rather than executable forms), etc. However, I understand the
charm of the alternative "embedding" idea.

The ability to use something else than a dict for a frame's f_locals, in
any case, _would_ be neat. Your other ideas about anonymous members
would require more -- executing the 'function' that's built from the
class body's code-object in a special state where expression's results
aren't just thrown away but processed (much like an interactive
interpreter does -- but merging that with function execution is still
somewhat of a challenge...). For all this sort of ideas, a good grasp
of the relevant Python internals would help -- if you know enough C,
it's mostly not THAT hard, as said internals are mostly quite cleanly
coded (with a few exceptions where speed is paramount, or regarding the
parsing itself, which isn't the most readable parser in the world;-)...

Module dis is your friend -- you can disassemble any piece of code that
interests you to see what bytecode it produces. You can then dip into
ceval.c to see exactly what happens on this bytecode or that... it's a
really fun process of learning, really!
Alex
Jul 18 '05 #9

Thomas Heller <th*****@python.net> wrote:
...
It won't work for ordinary attributes, but for overloading methods you
should be able to play some tricks with decorators and sys._getframe().


Great idea... love it!!! To clarify it a bit for people who may not be
as familiar with internals as Mr Heller...: a decorator 'sees' the
method it's decorating "at once", so it doesn't matter if that method's
name later gets trampled upon... as long as the decorator can stash the
original away somewhere, and the frame in which the class body is
executing is just the right 'somewhere'.

A code snippet may be clearer than words...:

import sys, itertools

_ignore_method = object()

def overloaded(f):
    d = sys._getframe(1).f_locals
    n = '__overloaded__%s__%%d' % f.func_name
    for i in itertools.count():
        nx = n % i
        if nx in d: continue
        d[nx] = f
        break
    return _ignore_method

class blop:
    @overloaded
    def f(self): return 'first f'
    @overloaded
    def f(self): return 'second f'

print blop.__dict__

So, class blop doesn't really have an 'f' (it does have one in the
__dict__, but it's a dummy '_ignore_method' entry which a suitable custom
metaclass could easily prune!-) but has __overloaded__f__0 and
__overloaded__f__1 methods (which, again, a suitable custom metaclass
could do whatever wonders with!-).

For overload purposes, you might have the decorator actually take as
arguments some _types_ and record them so that the metaclass can arrange
for the dispatching based on actual-argument types...

If you bletch at having to decorate each overloaded version with
'@overloaded', consider C# basically requires that "just BECAUSE",
without even having a good excuse such as "we need to do it that way due
to Python's semantics"...;-)
Alex
Jul 18 '05 #10
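To fill in the metaclass half that the snippet above leaves open: a hypothetical `OverloadingMeta` that prunes the `_ignore_method` placeholders and gathers the mangled `__overloaded__<name>__<i>` entries back into tuples. This is a sketch in Python 3 spelling (`f.__name__` instead of `f.func_name`; `sys._getframe` works the same way), not code from the thread:

```python
import itertools
import re
import sys

_ignore_method = object()

def overloaded(f):
    # As in the snippet above: stash each version under a unique
    # mangled name in the class body's local namespace.
    d = sys._getframe(1).f_locals
    template = '__overloaded__%s__%%d' % f.__name__
    for i in itertools.count():
        key = template % i
        if key not in d:
            d[key] = f
            break
    return _ignore_method

class OverloadingMeta(type):
    MANGLED = re.compile(r'^__overloaded__(.+)__(\d+)$')
    def __new__(mcls, name, bases, ns):
        groups = {}
        for key in list(ns):
            m = mcls.MANGLED.match(key)
            if m:
                groups.setdefault(m.group(1), []).append(
                    (int(m.group(2)), ns.pop(key)))
        # Rebind each overloaded name to the tuple of its versions,
        # replacing the _ignore_method placeholder left by the decorator.
        for fname, versions in groups.items():
            versions.sort()
            ns[fname] = tuple(f for _, f in versions)
        for key in [k for k, v in ns.items() if v is _ignore_method]:
            del ns[key]
        return super().__new__(mcls, name, bases, ns)

class Blop(metaclass=OverloadingMeta):
    @overloaded
    def f(self): return 'first f'
    @overloaded
    def f(self): return 'second f'

# Blop.f is now the tuple of both versions, in definition order.
```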

On Mon, 27 Sep 2004 19:11:14 +0200, Alex Martelli <al*****@yahoo.com> wrote:
So, class blop doesn't really have an 'f' (it does have one in the
__dict__, but it's a dummy '_ignore_method' entry which a suitable custom
metaclass could easily prune!-) but has __overloaded__f__0 and
__overloaded__f__1 methods (which, again, a suitable custom metaclass
could do whatever wonders with!-).


The decorator could play it safe, and at the same time, return
something like the original poster expects. Upon decoration the
following would happen:

1) store the newly declared object in the list __overloaded__<$name>.

2) return a new object (to be bound to the <$name>), where
<$name>.__call__ would return __overloaded__<$name>[-1], and
<$name>.__iter__ would return an iterator over all declarations in the
order they appear.

I did it as follows; it _almost_ works (which really means it's
broken), because there's a catch that I was not able to solve:

------------
import sys, itertools

class OverloadedFunction:
    def __init__(self):
        self.overload_list = []
    def __iter__(self):
        for item in self.overload_list:
            yield item
    def __getitem__(self, index):
        return self.overload_list[index]
    def __call__(self, *args, **kw):
        return self.overload_list[-1](*args, **kw)
    def append(self, f):
        self.overload_list.append(f)

def overloaded(f):
    d = sys._getframe(1).f_locals
    n = '__overloaded__%s' % f.func_name
    #ofl = getattr(d, n, OverloadedFunction())
    if n in d:
        print "found it"
        ofl = d[n]
    else:
        print "didn't find it"
        ofl = OverloadedFunction()
    print "<", ofl.overload_list, ">", d, n
    ofl.append(f)
    d[n] = ofl
    return ofl

class blop:
    def f(self): return 'first f'
    f = overloaded(f)

    def f(self): return 'second f'
    f = overloaded(f)

print blop.__dict__

# there's a catch -- methods were not bound to the instance
# so I need to pass the 'self' parameter manually
b = blop()
print blop.f(b)
print blop.f[0](b)
print blop.f[1](b)

The problem is that the methods were not bound to the instance. Adding
individual names to each method won't work, because it'll not bind the
references stored in the overload_list. I thought about using a
closure or curry type of solution, but that's something that I still
don't understand very well. Any tips?

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #11
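One way to solve the binding catch, sketched in Python 3 spelling rather than taken from the thread: make the accumulator a descriptor by giving it `__get__`, so that instance attribute access hands back versions with `self` already filled in. The names (`OverloadedFunction`, `overloaded`) mirror the code above, but the descriptor part is new:

```python
import sys
from functools import partial

class OverloadedFunction:
    """Accumulates versions of a function; binds them on attribute access."""
    def __init__(self):
        self.overload_list = []
    def append(self, f):
        self.overload_list.append(f)
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return _Bound(self.overload_list, obj)

class _Bound:
    """A view of the overloads with 'self' already filled in."""
    def __init__(self, versions, obj):
        self._bound = [partial(f, obj) for f in versions]
    def __call__(self, *args, **kw):
        # Calling the name uses the most recent definition.
        return self._bound[-1](*args, **kw)
    def __getitem__(self, index):
        return self._bound[index]
    def __iter__(self):
        return iter(self._bound)

def overloaded(f):
    # Reuse the accumulator already bound to this name, if any.
    d = sys._getframe(1).f_locals
    ofl = d.get(f.__name__)
    if not isinstance(ofl, OverloadedFunction):
        ofl = OverloadedFunction()
    ofl.append(f)
    return ofl

class Blop:
    @overloaded
    def f(self): return 'first f'
    @overloaded
    def f(self): return 'second f'

b = Blop()
print(b.f())     # 'second f' -- no manual 'self' needed
print(b.f[0]())  # 'first f'
```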

>>>>> "Alex" == Alex Martelli <al*****@yahoo.com> writes:

Alex> For overload purposes, you might have the decorator actually
Alex> take as arguments some _types_ and record them so that the
Alex> metaclass can arrange for the dispatching based on
Alex> actual-argument types...

I believe several implementations of generic functions/multimethods in
Python exist already; quick googling brings up

http://mail.python.org/pipermail/pyt...il/043902.html

def generic(*type_signature):
    """
    A decorator-generator that can be used to incrementally construct
    a generic function that delegates to individual functions based on
    the type signature of the arguments. For example, the following
    code defines a generic function that uses two different actual
    functions, depending on whether its argument is a string or an
    int:

        def f(x) [generic(int)]:
            print x, 'is an int'

        def f(x) [generic(str)]:
            print x, 'is a string'
    """

Alex> If you bletch at having to decorate each overloaded version
Alex> with '@overloaded', consider C# basically requires that
Alex> "just BECAUSE", without even having a good excuse such as
Alex> "we need to do it that way due to Python's semantics"...;-)

I think it's "override" in C#, and it stands for overriding a method in a base class:

http://msdn.microsoft.com/library/de...OverridePG.asp

--
Ville Vainio http://tinyurl.com/2prnb
Jul 18 '05 #12
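Alex's "record types in the decorator, dispatch later" suggestion can be sketched in a few lines. This is a toy exact-type dispatcher with hypothetical names, written in Python 3 spelling; it is not the API of the recipe linked above:

```python
import sys

def generic(*type_signature):
    """Register each version of a method under its argument types."""
    def register(f):
        d = sys._getframe(1).f_locals  # the class body's namespace
        table = d.setdefault('__dispatch__%s' % f.__name__, {})
        table[type_signature] = f
        def dispatcher(self, *args):
            # Exact-type match only -- a real multimethod would also
            # consider subclasses and resolve ambiguities.
            return table[tuple(type(a) for a in args)](self, *args)
        return dispatcher
    return register

class Printer:
    @generic(int)
    def show(self, x): return '%d is an int' % x
    @generic(str)
    def show(self, x): return '%r is a string' % x

p = Printer()
print(p.show(3))     # 3 is an int
print(p.show('hi'))  # 'hi' is a string
```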

Ville Vainio <vi***@spammers.com> wrote:
...
>> "Alex" == Alex Martelli <al*****@yahoo.com> writes:


Alex> For overload purposes, you might have the decorator actually
Alex> take as arguments some _types_ and record them so that the
Alex> metaclass can arrange for the dispatching based on
Alex> actual-argument types...

I believe several implementations of generic functions/multimethods in
Python exist already; quick googling brings up

http://mail.python.org/pipermail/pyt...il/043902.html


They exist, but you can't just 'def thesamename(....):' more than once
within a class, which is what the OP asked for. Building on Heller's
hint I showed how to allow that with a decorator, that's all.
Alex
Jul 18 '05 #13

Carlos Ribeiro <ca********@gmail.com> wrote:
On Mon, 27 Sep 2004 19:11:14 +0200, Alex Martelli <al*****@yahoo.com> wrote:
So, class blop doesn't really have an 'f' (it does have one in the
__dict__, but it's a dummy '_ignore_method' entry which a suitable custom
metaclass could easily prune!-) but has __overloaded__f__0 and
__overloaded__f__1 methods (which, again, a suitable custom metaclass
could do whatever wonders with!-).
The decorator could play it safe, and at the same time, return
something like the original poster expects. Upon decoration the


I think it's a better architecture to have the metaclass process all the
overloads, and until the metaclass runs, leave a marker-value for the
name. Otherwise, uncaught errors are just too likely.
following would happen:

1) store the newly declared object in the list __overloaded__<$name>.
So far, so NP -- I did name mangling, period, but name mangling plus
indexing is fine too.
2) return a new object (to be bound to the <$name>), where
<$name>.__call__ would return __overloaded__<$name>[-1], and
<$name>.__iter__ would return an iterator over all declarations in the
order they appear.
I think you'll have more nondiagnosable error cases this way. You can
add more error checking to the very simple decorator I posted, of
course. The key issue to catch is the error whereby some occurrences of
the name are correctly decorated '@ overload' and others aren't.
Decorator and metaclass working together can do it, but if you don't
ensure the metaclass runs at the end (and it seems to me your approach
wouldn't) then such errors towards the end would stay uncaught.
The problem is that the methods were not bound to the instance. Adding
individual names to each method won't work, because it'll not bind the
references stored in the overload_list. I thought about using a
closure or curry type of solution, but that's something that I still
don't understand very well. Any tips?


Do it in the metaclass. Unless the metaclass was indispensable (and it
is) you could use custom descriptors, too; Raymond Hettinger has a nice
essay on descriptors that shows how to write your own custom ones.
Alex
Jul 18 '05 #14

On Mon, 27 Sep 2004 19:11:14 +0200, al*****@yahoo.com (Alex Martelli) wrote:
<snip>

If there were a way to make a local bare name access work like a property
or other descriptor, by designating such names suitably, then def f...
could trigger the setter of an f property and that could do whatever.

It might be interesting for a function closure variables also, but here
we are talking about class bodies. Here is a straw man:

class blop:
    localdesc:
        f = property(fget, fset)
    def f(self): return 'first f'
    def f(self): return 'second f'

This effectively considers the local namespace as the attribute name space
of _something_, presumably an internal instance of some synthesized Localspace class,
let's say localspace = LocalspaceType()() -- IOW a fresh class as well as its instance,
so as to cut off base class searching and not to have surprising sharing.
Optimization is for later ;-)

The localdesc suite would cause assignment to be via type(localspace).__dict__.__setitem__
whereas normal local names would be evaluated by get/setattr(localspace, barename) and
thus trigger descriptors if present. Note that all bindings including from
def and class as well as ordinary variables could be intercepted by descriptors.

sys._getframe(level).f_locals would presumably be a special proxy object instead of a dict
when there is a localdesc: suite in the body, so that it could decide whether names
are descriptors or ordinary. Otherwise it could remain the usual dict and not get
a performance hit, I suppose.

Regards,
Bengt Richter
Jul 18 '05 #15

Bengt Richter <bo**@oz.net> wrote:
...
If there were a way to make a local bare name access work like a property
or other descriptor, by designating such names suitably, then def f...
could trigger the setter of an f property and that could do whatever.

It might be interesting for a function closure variables also, but here
we are talking about class bodies. Here is a straw man:
In the current implementation, class bodies are made into functions and
run once. If you could use anything but a plain dict as the f_locals of
the frame, you could implement your proposed syntax and anything else
you could dream of. But look at ceval.c and tell me how you'd tell the
functions therein to use a certain special factory for a frame's
f_locals without rewriting hundreds of lines of pretty complicated code.

If the implementation is hard to explain, it's a bad idea. So let's
find a simple-to-explain implementation; if we can't, it's a bad idea.
Optimization is for later ;-)


So is syntax. Let's focus first on how to implement 'smart frames' that
can either use dict (and be fast) or a generic mapping-factory depending
e.g. on a bit in the per-thread state -- for now we can set that bit
with some sys.whatever call, who cares, once the implementation is good
then we can start to wrestle about syntax-sugar issues for it...
Alex
Jul 18 '05 #16

Carlos Ribeiro <ca********@gmail.com> writes:
On Mon, 27 Sep 2004 19:11:14 +0200, Alex Martelli <al*****@yahoo.com> wrote:
so, class blop doesn't really have an 'f' (it does have it in the
__dict__, but it's a dummy '_ignore_method' entry with a suitable custom
metaclass would easily prune!-) but has __overloaded__f__0 and
__overloaded__f__1 methods (which, again, a suitable custom metaclass
could do whatever wonders with!-).


The decorator could play it safe, and at the same time, return
something like the original poster expects. Upon decoration the
following would happen:

1) store the newly declared object in the list __overloaded__<$name>.

2) return a new object (to be bound to the <$name>), where
<$name>.__call__ would return __overloaded__<$name>[-1]; and
f.__iter__ would return an iterator over all declarations in the order
they appear.

I did it as follows; it _almost_ works (which really means it's
broken), because there's a catch that I was not able to solve:

------------
import sys, itertools

class OverloadedFunction:
    def __init__(self):
        self.overload_list = []
    def __iter__(self):
        for item in self.overload_list:
            yield item
    def __getitem__(self, index):
        return self.overload_list[index]
    def __call__(self, *args, **kw):
        return self.overload_list[-1](*args, **kw)
    def append(self, f):
        self.overload_list.append(f)

def overloaded(f):
    d = sys._getframe(1).f_locals
    n = '__overloaded__%s' % f.func_name
    #ofl = getattr(d, n, OverloadedFunction())
    if n in d:
        print "found"      # translated from "achou"
        ofl = d[n]
    else:
        print "not found"  # translated from "não achou"
        ofl = OverloadedFunction()
    print "<", ofl.overload_list, ">", d, n
    ofl.append(f)
    d[n] = ofl
    return ofl

class blop:
    def f(self): return 'first f'
    f = overloaded(f)

    def f(self): return 'second f'
    f = overloaded(f)

print blop.__dict__
# there's a catch -- methods were not bound to the instance
# so I need to pass the 'self' parameter manually
b = blop()
print blop.f(b)
print blop.f[0](b)
print blop.f[1](b)
The problem is that the methods were not bound to the instance. Adding
individual names to each method won't work, because it'll not bind the
references stored in the overload_list. I thought about using a
closure or curry type of solution, but that's something that I still
don't understand very well. Any tips?

Here is my take on decorator overloaded. I implement OverloadedFunction
as a descriptor. It supports method binding.

import sys

class OverloadedFunction(object):
    class BoundMethod:
        def __init__(self, functions, instance, owner):
            self.bm_functions = functions
            self.bm_instance = instance
            self.bm_owner = owner
        def __getitem__(self, index):
            return self.bm_functions[index].__get__(self.bm_instance,
                                                    self.bm_owner)
    def __init__(self):
        self.of_functions = []
    def addFunction(self, func):
        self.of_functions.append(func)
    def __get__(self, instance, owner):
        return self.BoundMethod(self.of_functions,
                                instance,
                                owner)

def overloaded(func):
    try:
        olf = sys._getframe(1).f_locals[func.__name__]
    except KeyError:
        olf = OverloadedFunction()
    olf.addFunction(func)
    return olf

# Test case:
class blob:
    def __init__(self, member):
        self.member = member
    @overloaded
    def f(self):
        return "f 0: member=%s" % self.member
    @overloaded
    def f(self, s):
        return "f 1: member=%s, s=%s" % (self.member, s)

b = blob("XXX")
print b.f[0]()
print b.f[1]("Yet another f")

---- Output ---

f 0: member=XXX
f 1: member=XXX, s=Yet another f
Lenard Lindstrom
<le***@telus.net>
Jul 18 '05 #17

On Mon, 27 Sep 2004 21:23:14 GMT, Lenard Lindstrom <le***@telus.net> wrote:
Carlos Ribeiro <ca********@gmail.com> writes:
<sample code snip>
The problem is that the methods were not bound to the instance. Adding
individual names to each method won't work, because it'll not bind the
references stored in the overload_list. I thought about using a
closure or curry type of solution, but that's something that I still
don't understand very well. Any tips?
Here is my take on decorator overloaded. I implement OverloadedFunction
as a descriptor. It supports method binding.


That's what I was missing. I've read about descriptors last week, but
didn't have the time to get a handle on it. It's interesting. My
development machine is still using 2.3 -- I don't know if this
descriptor fancy stuff would work here... (btw, that's why my original
snippet didn't use the new syntax to call the decorator).

I think that this code is now Cookbook-ready. Any comments?
<sample code snip>

--
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: ca********@gmail.com
mail: ca********@yahoo.com
Jul 18 '05 #18

Carlos Ribeiro <ca********@gmail.com> writes:
On Mon, 27 Sep 2004 21:23:14 GMT, Lenard Lindstrom <le***@telus.net> wrote:
Carlos Ribeiro <ca********@gmail.com> writes:
<sample code snip>
The problem is that the methods were not bound to the instance. Adding
individual names to each method won't work, because it'll not bind the
references stored in the overload_list. I thought about using a
closure or curry type of solution, but that's something that I still
don't understand very well. Any tips?
Here is my take on decorator overloaded. I implement OverloadedFunction
as a descriptor. It supports method binding.


That's what I was missing. I've read about descriptors last week, but
didn't have the time to get a handle on it. It's interesting. My
development machine is still using 2.3 -- I don't know if this
descriptor fancy stuff would work here... (btw, that's why my original
snippet didn't use the new syntax to call the decorator).

Descriptors were introduced in 2.2. This version works with 2.2 and
up. The previous example only worked with the new decorator syntax.

import sys

class OverloadedFunction(object):
    class BoundMethod:
        def __init__(self, functions, instance, owner):
            self.bm_functions = functions
            self.bm_instance = instance
            self.bm_owner = owner
        def __getitem__(self, index):
            return self.bm_functions[index].__get__(self.bm_instance,
                                                    self.bm_owner)
    def __init__(self, functions):
        self.of_functions = functions
    def __get__(self, instance, owner):
        return self.BoundMethod(self.of_functions,
                                instance,
                                owner)

def overloaded(func):
    listattr = '_%s_functions_' % func.__name__
    attrs = sys._getframe(1).f_locals
    try:
        functions = attrs[listattr]
        functions.append(func)
    except KeyError:
        functions = [func]
        attrs[listattr] = functions
    return OverloadedFunction(functions)

# Test case:
class blob:
    def __init__(self, member):
        self.member = member
    def f(self):
        return "f 0: member=%s" % self.member
    f = overloaded(f)
    def f(self, s):
        return "f 1: member=%s, s=%s" % (self.member, s)
    f = overloaded(f)

b = blob("XXX")
print b.f[0]()
print b.f[1]("Yet another f")
I think that this code is now Cookbook-ready. Any comments?

Unfortunately it does not work as is with the staticmethod and
classmethod wrappers since these do not define either a
__name__ or func_name attribute. And it takes an extra bit of
convolution to get at the functions wrappered by these objects.
There are more callables in Python than your philosophy can
imagine. And each requires its own kind of introspection.

Lenard Lindstrom
<le***@telus.net>

Jul 18 '05 #19
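As a follow-up to the staticmethod/classmethod caveat: in modern Python 3 both wrapper types expose the wrapped function as `__func__`, so a decorator can recover a usable `__name__`. A hedged sketch (`underlying_function` is an illustrative helper, not from the thread):

```python
def underlying_function(obj):
    """Return the plain function behind a staticmethod/classmethod wrapper."""
    if isinstance(obj, (staticmethod, classmethod)):
        return obj.__func__   # available on both wrappers in Python 3
    return obj

def f(x):
    return x

print(underlying_function(staticmethod(f)).__name__)  # f
print(underlying_function(classmethod(f)) is f)       # True
```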

Carlos Ribeiro <ca********@gmail.com> writes:
On 27 Sep 2004 13:33:33 +0200, Jacek Generowicz
<ja**************@cern.ch> wrote:
[...]
I was wondering whether it would be possible to achieve this by
forcing Python to use some dicitonary proxy (which accumulates
values, rather that keeping just the last value to be associated
with a key), instead of dict, when executing the class definiton?


[...]
No, you can't, and it's not just a parser issue. Python uses direct
C calls to the native dict type. It's hard coded,
I feared this would be the case.
p.s. In the particular case of the original poster, I'm wondering
what kind of application he had in mind.


A standalone, lightweight SWIG-like tool, in this case. But in
general, I've had cause to wonder about declarative syntaxes in Python
every now and then.

Thanks to all who contributed ideas to the thread, particularly Alex,
Thomas and Lenard.
Jul 18 '05 #20

On 28 Sep 2004 09:47:51 +0200, Jacek Generowicz
<ja**************@cern.ch> wrote:
Carlos Ribeiro <ca********@gmail.com> writes:
On 27 Sep 2004 13:33:33 +0200, Jacek Generowicz
<ja**************@cern.ch> wrote:
[...]
I was wondering whether it would be possible to achieve this by
forcing Python to use some dicitonary proxy (which accumulates
values, rather that keeping just the last value to be associated
with a key), instead of dict, when executing the class definiton?


[...]
No, you can't, and it's not just a parser issue. Python uses direct
C calls to the native dict type. It's hard coded,


I feared this would be the case.
p.s. In the particular case of the original poster, I'm wondering
what kind of application he had in mind.


A standalone, lightweight SWIG-like tool, in this case. But in
general, I've had cause to wonder about declarative syntaxes in Python
every now and then.


I'm also exploring declarative alternatives for a lot of stuff in
Python. It started explicitly as an experiment, mainly because I could
not rationally explain why I did 'feel' that it was the right approach
for a class of applications: form definitions, reports, webpage
templates, etc. Now I think that I'm beginning to get a better
understanding that allows me to articulate better *why* should I
(ab)use Python for declarative programming, instead of using a data
driven approach with XML, or creating my own mini-declarative
language. In short, the argument goes like this:

Generic templating mechanisms start as simple variable substitution
engines, but as they start to be used, there's the need to add control
structures (if, for, etc); it's also needed to provide more ways for
the template to communicate with the main program, exchanging
variables and values. At this point, wouldn't be better to write all
templates in the main programming language of the system?
Thanks to all who contributed ideas to the thread, particularly Alex,
Thomas and Lenard.


I learned a lot through this thread. As a matter of fact, I used your
problem as an exercise on decorators :-) And I think that, while still
exploring and making some (dumb) mistakes, I'm beginning to feel
comfortable with the more esoteric introspection features of Python.

After looking at Lenard's example, I've come to think about other
alternatives. There are a some interesting things that can still be
done, some even more esoteric than all stuff that we've done so far. A
generic solution for this problem would greatly simplify my own
search, and I'll keep looking for it.

--
Carlos Ribeiro
Jul 18 '05 #21

Jacek Generowicz <ja**************@cern.ch> wrote:
...
No, you can't, and it's not just a parser issue. Python uses direct
C calls to the native dict type. It's hard coded,


I feared this would be the case.


It's not (not in 2.4 at least) -- the STORE_NAME is quite ready to find
a non-dict as the frame's f_locals. The problem is getting your object
to be used as the frame's f_locals in the first place -- hard but that
only affects a few spots in ceval.c.
Alex
Jul 18 '05 #22

Carlos Ribeiro <ca********@gmail.com> wrote:
...
Generic templating mechanisms start as simple variable substitution
engines, but as they start to be used, there's the need to add control
structures (if, for, etc); it's also needed to provide more ways for
the template to communicate with the main program, exchanging
variables and values. At this point, wouldn't it be better to write all
templates in the main programming language of the system?


At this point, your templating is not declarative -- it's imperative.
Like everything in Python, btw -- not ONE 'declarative' in sight (except
the 'global' statement, which is part of what makes it a wart;-).

There IS a case for purely declarative stuff _embedding_ Python code,
like strakt.com's "blam" (purely informal name, as Strakt's marketing
may not like it, we just can't keep saying "Business Logic Module
Language" forever;-) does for (basically) ERD + actions/triggers. The
embedding makes the whole non-declarative, of course. But the
declarative part can still be way prettier than it would be if it wasn't
a separate language, e.g. it could use such keywords as 'entity',
'relation', 'attribute' and the like...
Alex
Jul 18 '05 #23

On Tue, 28 Sep 2004 14:37:31 +0200, Alex Martelli <al*****@yahoo.com> wrote:
Carlos Ribeiro <ca********@gmail.com> wrote:
...
Generic templating mechanisms start as simple variable substitution
engines, but as they start to be used, there's the need to add control
structures (if, for, etc); it's also needed to provide more ways for
the template to communicate with the main program, exchanging
variables and values. At this point, wouldn't it be better to write all
templates in the main programming language of the system?
At this point, your templating is not declarative -- it's imperative.
Like everything in Python, btw -- not ONE 'declarative' in sight (except
the 'global' statement, which is part of what makes it a wart;-).


I knew I should have taken more time to write that paragraph :-) The
way I'm writing my code "reads" more like declarative code than
imperative. One can surely argue with my lack of academic rigour. I
think that I'm writing "declarative" code because I'm using class
declarations to create complex, hierarchic data structures. I want to
state __what it is__, not state __how it should be done__ step by
step.

Your comment also made me realize a point that should be highlighted.
Normal templates [1] are clearly imperative, and that's part of my
problem with them. But complex object-oriented structures, although
including code (in the form of methods and descriptors) are much more
dynamic than a simple template. Better than this -- normal templates
are inherently sequential and imperative in the way they're written.
Object oriented structures are much more flexible in this respect.

[1] I stress the term "normal templates" because I'm focusing on
standard, run-of-the-mill templating systems.
There IS a case for purely declarative stuff _embedding_ Python code,
like strakt.com's "blam" (purely informal name, as Strakt's marketing
may not like it, we just can't keep saying "Business Logic Module
Language" forever;-) does for (basically) ERD + actions/triggers. The
embedding makes the whole non-declarative, of course. But the
declarative part can still be way prettier than it would be if it wasn't
a separate language, e.g. it could use such keywords as 'entity',
'relation', 'attribute' and the like...


In the end, you've raised another interesting point -- on the whole,
my current approach is not purely declarative. It's rather a mix of
imperative and declarative, but with a mostly declarative
infrastructure holding things together.

--
Carlos Ribeiro
Jul 18 '05 #24

Jacek Generowicz <ja**************@cern.ch> wrote in message news:<ty*************@pcepsft001.cern.ch>...
I would like to write a metaclass which would allow me to overload
names in the definition of its instances, like this

class Foo(object):

    __metaclass__ = OverloadingClass

    att = 1
    att = 3

    def meth(self):
        pass

    def meth(self, arg):
        return arg

[snip]

Is something like this at all possible in pure Python? or does it
require fiddling around in the guts of the parser?

Not exactly what you asked for, and a bit (litotes) ugly, but it does
allow convenient subgroups within a class. I used a similar trick
once when writing a little parser.
def alltuple(name, bases, clsdict):
    return tuple(clsdict.values())

class Foo(object):

    class att:
        __metaclass__ = alltuple
        _1 = 1
        _2 = 3

    class meth:
        __metaclass__ = alltuple
        def _1(self):
            pass
        def _2(self, arg):
            return arg
Making it nice and pretty left as an exercise.
--
CARL BANKS
Jul 18 '05 #25
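For the record, Carl's function-as-metaclass trick carries over to Python 3 via the `metaclass` keyword argument, as long as the implicit namespace entries (`__module__`, `__qualname__`) are filtered out. A sketch under that assumption:

```python
def alltuple(name, bases, clsdict):
    # The Python 3 class namespace also carries __module__/__qualname__;
    # skip dunders so only the user's bindings end up in the tuple.
    return tuple(v for k, v in clsdict.items() if not k.startswith('__'))

class att(metaclass=alltuple):
    _1 = 1
    _2 = 3

print(att)  # (1, 3)
```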

On Tue, 28 Sep 2004 14:37:31 +0200, al*****@yahoo.com (Alex Martelli) wrote:
[...]

At this point, your templating is not declarative -- it's imperative.
Like everything in Python, btw -- not ONE 'declarative' in sight (except
the 'global' statement, which is part of what makes it a wart;-).


Hm ;-) Is a module source a declaration of (imperative) intent, passive until imported?
ISTM we are getting into shades of semantics. Interesting though ;-)
For a language that plays well both ways, I would try scheme or lisp, I think.

Regards,
Bengt Richter
Jul 18 '05 #26

On 28 Sep 2004 21:07:27 GMT, Bengt Richter <bo**@oz.net> wrote:
On Tue, 28 Sep 2004 14:37:31 +0200, al*****@yahoo.com (Alex Martelli) wrote:
[...]

At this point, your templating is not declarative -- it's imperative.
Like everything in Python, btw -- not ONE 'declarative' in sight (except
the 'global' statement, which is part of what makes it a wart;-).


Hm ;-) Is a module source a declaration of (imperative) intent, passive until imported?
ISTM we are getting into shades of semantics. Interesting though ;-)
For a language that plays well both ways, I would try scheme or lisp, I think.


I was just about to reply to Alex, but managed to stop my fingers.
It's indeed a fine line, and I'm not enough of an academicist to
discuss it with all the detail it deserves. We could go on for weeks debating
it here (and I'm afraid we do). Broadly speaking, my take is as
follows:

Class definitions are executed (imperative), but are normally used to
store definitions (that's declarative, in a broad sense). I think
that's exactly what has attracted me to this kind of 'hack'. The
ability to write intelligent, complex, hierarchic data structures
seamlessly intermingled with code. Templating languages or XML fall
short in this respect. If you really want to *integrate* them both --
and I'm not talking about simply reading static resource files here --
either you have a cross-bred beast that is data-based but has some
imperative statements interspersed with a clumsy syntax, or you have
source code filled with unneeded clutter to manage the data
manipulation part, in a rather obstrusive way to the logic of the
system.

(XML-based systems use complex parsers to work with the data. It's not
possible, in most cases, to include enough intelligence in the data
stream itself for it to instruct the parser to do something
"different" -- unless you care to define your own language to do it,
and that's clumsy, at best, given XML's "great" readability).

--
Carlos Ribeiro
Jul 18 '05 #27
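The "class bodies as hierarchic data" style Carlos describes can be made concrete with a small sketch: nested classes declare the structure, and a walker consumes it (all names here are illustrative, not taken from any of the posts):

```python
class Page:                        # declarative description, not behavior
    title = "Orders"
    class Header:
        logo = "logo.png"
    class Body:
        columns = ("item", "qty", "price")

def describe(node, indent=0):
    """Render a class-based declaration as indented 'name = value' lines."""
    lines = []
    for name, value in vars(node).items():
        if name.startswith('__'):
            continue               # skip the implicit class-dict entries
        if isinstance(value, type):
            lines.append(' ' * indent + name + ':')
            lines.extend(describe(value, indent + 2))
        else:
            lines.append(' ' * indent + '%s = %r' % (name, value))
    return lines

print('\n'.join(describe(Page)))
```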

Bengt Richter <bo**@oz.net> wrote:
On Tue, 28 Sep 2004 14:37:31 +0200, al*****@yahoo.com (Alex Martelli) wrote:
[...]

At this point, your templating is not declarative -- it's imperative.
Like everything in Python, btw -- not ONE 'declarative' in sight (except
the 'global' statement, which is part of what makes it a wart;-).
Hm ;-) Is a module source a declaration of (imperative) intent, passive

until imported?

Every piece of code is 'passive unless executed', but that doesn't mean
every language is declarative.
ISTM we are getting into shades of semantics. Interesting though ;-)
Not all that much (to me), since redefining a word so that it applies to
every possible language is basically robbing that word of any meaning.
For a language that plays well both ways, I would try scheme or lisp, I think.


Hard to argue with this (or Dylan for syntax-sugar reasons, maybe). One
alternative might be to explore pure functional languages, which can be
seen as remapping imperativeness into declarativeness.
Alex
Jul 18 '05 #28

Carlos Ribeiro <ca********@gmail.com> writes:
On 28 Sep 2004 21:07:27 GMT, Bengt Richter <bo**@oz.net> wrote:
[...]
For a language that plays well both ways, I would try scheme or
lisp, I think.


[...]
The ability to write intelligent, complex, hierarchic data
structures seamlessly intermingled with code.


Yes, this is one of the great advantages of Lisp ... and has been for
about four decades.

Jul 18 '05 #29

al*****@yahoo.com (Alex Martelli) writes:
Jacek Generowicz <ja**************@cern.ch> wrote:
...
No, you can't, and it's not just a parser issue. Python uses
direct C calls to the native dict type. It's hard coded,
I feared this would be the case.


It's not (not in 2.4 at least)


For what definition of "hard coded" ?
-- the STORE_NAME is quite ready to find a non-dict as the frame's
f_locals. The problem is getting your object to be used as the
frame's f_locals in the first place -- hard but that only affects a
few spots in ceval.c.


So, from the perspective of trying to code it in pure Python, it _is_
hard coded, IIUC.

(Unfortunately, I cannot afford the luxury of playing with the Python
implementation itself; I must deliver code which works with a
bog-standard Python 2.3.4. I'd love to have the time to play with
ceval.c on my own account ... but that is another luxury I cannot
afford :-( )
Jul 18 '05 #30

im*****@aerojockey.com (Carl Banks) writes:
def alltuple(name,bases,clsdict):
return tuple(clsdict.values()) __metaclass__ = alltuple


WBMSWA12FB !

It never occurred to me that a metaclass didn't have to be a _class_.
Jul 18 '05 #31

Jacek Generowicz <ja**************@cern.ch> wrote:
al*****@yahoo.com (Alex Martelli) writes:
Jacek Generowicz <ja**************@cern.ch> wrote:
...
> No, you can't, and it's not just a parser issue. Python uses
> direct C calls to the native dict type. It's hard coded,

I feared this would be the case.
It's not (not in 2.4 at least)


For what definition of "hard coded" ?


Is there more than one? As I already quoted on this very thread:

if (PyDict_CheckExact(x))
    err = PyDict_SetItem(x, w, v);
else
    err = PyObject_SetItem(x, w, v);

so, the "direct C call to the native dict type" only happens if x is
exactly of that type, otherwise the generic abstract call happens
instead and can deal with dispatching the functionality as needed.

Basically, the PyDict_SetItem is now there, and guarded with a
PyDict_CheckExact, only as an optimization: x will be of native dict
type overwhelmingly often.

-- the STORE_NAME is quite ready to find a non-dict as the frame's
f_locals. The problem is getting your object to be used as the
frame's f_locals in the first place -- hard but that only affects a
few spots in ceval.c.


So, from the perspective of trying to code it in pure Python, it _is_
hard coded, IIUC.


For some value of "it", sure, but NOT because of "direct C calls to the
native dict type" spread hither and yon (as used to be the case).
Rather, the issue is strictly with how a frame gets built and handled.

I never claimed nothing at all is hard-coded (even in 2.4), just that
the specific issue with "direct C calls" _isn't_ (in 2.4) for the case
of interest (STORE_NAME opcodes' execution).
(Unfortunately, I cannot afford the luxury of playing with the Python
implementation itself; I must deliver code which works with a
bog-standard Python 2.3.4. I'd love to have the time to play with
ceval.c on my own account ... but that is another luxury I cannot
afford :-( )


If you need to support 2.3.4 and can't even consider extensions, your
options are indeed severely limited -- I don't recall, but it's even
possible that, for THAT release, the "direct C calls" assertion is valid
(which is why I was careful to say "for 2.4 at least" every time). It
matters hugely (to most would-be extenders, who could surely afford to
use extensions for the purpose) whether it is or not, of course: making
some kind of special-purpose frame and getting it used appropriately
might be feasible, but if there are direct C calls hardwired all over
the place then no solution at all is feasible _within the 2.3.*
constraint_.
Alex
Jul 18 '05 #32

Jacek Generowicz <ja**************@cern.ch> wrote:
im*****@aerojockey.com (Carl Banks) writes:
def alltuple(name,bases,clsdict):
return tuple(clsdict.values())
__metaclass__ = alltuple


WBMSWA12FB !

It never occurred to me that a metaclass didn't have to be a _class_.


You should have seen Guido's face when he first saw me give a
presentation on "use and abuse of custom metaclasses" -- apparently,
judging from his horrified expression, it hadn't occurred to him,
either, and he didn't like it half a bit... since then I've been quite
careful against actually using this idea in production code.

Actually I believed I showed something more like:

class whatever:
    def __metaclass__(clsname, clsbases, clsdict):
        return <I don't remember what>
    ...etc etc...

i.e., an "anonymous metaclass", so to speak. But that's a minor aspect.
After all, something like:

class yetanother:
    class __metaclass__(type):
        ...etc etc...

is just as so-to-speak "anonymous" yet IS nowadays quite an accepted
idiom...!
Alex
Jul 18 '05 #33

Jacek Generowicz <ja**************@cern.ch> wrote:
I would like to write a metaclass which would allow me to overload
names in the definition of its instances, like this
class Foo(object):
    __metaclass__ = OverloadingClass
    def meth(self):
        pass
    def meth(self, arg):
        return arg


It's not the literal syntax you're asking for, but using my technique
for multiple dispatch in Python achieves the effect you're looking
for. It has nothing to do with metaclass, but it lets you provide
multiple "signatures" for a call. Not just number of arguments, but
also their types (if you want: you can also generically specify a
descendent of object).

See http://www-106.ibm.com/developerwork.../l-pydisp.html
for more details.

Yours, David...
Jul 18 '05 #34
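A postscript from a later standard library: a single-argument relative of the multiple dispatch David describes was eventually standardized as `functools.singledispatch` (PEP 443, Python 3.4; annotation-based `register` since 3.7). A sketch with illustrative shapes:

```python
from functools import singledispatch

@singledispatch
def area(shape):
    raise TypeError('unsupported shape: %r' % (shape,))

@area.register
def _(shape: tuple):       # a (width, height) rectangle, for illustration
    w, h = shape
    return w * h

@area.register
def _(shape: float):       # a circle given by its radius
    return 3.14159 * shape * shape

print(area((3, 4)))  # 12
```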
