Sorry not to insert these points in all the right places in your posts and
replies...
Some observations, from the hip:
(1) Classic MI (like SI) is an "is-a" relationship, i.e., implementation
inheritance. I think this is almost always wrong. An object is almost never
so many things at once -- it may look like different objects from various
angles, or behave like different objects in different contexts, but that
does not mean that its *identity* and *implementation* should be conflated
with these concepts -- at least not by mandate. Classic "merged object" MI
is simply one cheap (conceptually, anyway) way of achieving these goals.
Delegation to other objects can achieve the same results, and I think having
better support for explicit and implicit delegation is better than "object
merging" in most cases.
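To make the delegation alternative concrete, here is a minimal sketch in
Python (Python rather than C# only because it keeps implicit forwarding
short; all class names here are hypothetical): the object *has* its facets
as delegates and forwards calls to them, instead of merging their
implementations into a single identity the way classic MI does.

```python
# Illustrative sketch: delegation instead of "merged object" MI.
# Printer/Scanner/Document are made-up names for this example.

class Printer:
    def print_page(self, text):
        return f"printed: {text}"

class Scanner:
    def scan_page(self):
        return "scanned page"

class Document:
    def __init__(self):
        # Explicit delegates instead of extra base classes.
        self._delegates = [Printer(), Scanner()]

    def __getattr__(self, name):
        # Implicit delegation: forward unknown attribute lookups to the
        # first delegate that provides them, without merging identities.
        for d in self._delegates:
            if hasattr(d, name):
                return getattr(d, name)
        raise AttributeError(name)

doc = Document()
print(doc.print_page("hello"))  # looks like a Printer in this context
print(doc.scan_page())          # looks like a Scanner in this one
```

The point of the sketch: Document behaves like different objects in
different contexts without its identity being the union of both classes.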
(2) I think MI is very powerful, but it can have large costs if not done
carefully. This, of course, is true for any language feature -- but there
are some things that are more easily abused than others. If you want "a
slice" of functionality, and use MI to get at it from a class that has this
slice, you end up with all the other baggage of that class. Not just
theoretical baggage, but depending on call-tree snipping by the compiler,
inlining, "v-table" design, late vs. early allocation, etc., real measurable
baggage. A really clever compiler could probably omit both members and code
not used by actual clients, assuming you never want to get at them via
reflection. I agree that better ways to slice-and-dice (whether it's AOP or
whatever) could be useful. This is mitigated by a framework where the
classes were designed from the beginning to be used as "mix-ins", but use by
C# now could result in some truly questionable objects -- as I think ends
up being the case with MI in practice. Some baggage is worse in languages
where all fn's are "virtual", of course, but a lot of powerful MI I've seen
stems from the ability for an object to take on the behavior of a mix-in
class at *run-time* by adding a superclass (consider adding a DB-persistence
manager to objects whose source code you don't have), and this
isn't likely to happen in C#, as a core feature anyway.
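For what that run-time mix-in idea looks like where it is possible, here is
a minimal sketch in Python, which allows re-classing a live object (C# does
not); Account and PersistenceMixin are hypothetical names for this example.

```python
# Illustrative sketch: an existing object picks up persistence behavior
# from a mix-in class *after* creation, with no changes to its source.

class Account:
    def __init__(self, owner):
        self.owner = owner

class PersistenceMixin:
    # Stand-in for a DB-persistence manager's behavior.
    def save(self):
        return f"saved {self.owner}"

acct = Account("alice")
# Re-class the live object: its class becomes a new subclass that also
# inherits the mix-in, roughly "adding a superclass at run-time".
acct.__class__ = type("PersistentAccount", (Account, PersistenceMixin), {})
print(acct.save())
```

This is exactly the kind of dynamism that would be hard to retrofit as a
core C# feature, which is the caveat in the paragraph above.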
(3) I think the most elegant MI designs I've seen are probably in the
Lisp/CLOS and similar areas. This is probably because (a) people there tend
to think much more about design at this level than most programmers do
(not a slight against most programmers -- but most programmers simply have
to get stuff done, and usually in some prescribed way), and (b) because
these languages actually let you control (via multi-method dispatch, macros,
meta-class programming, etc.) exactly how you want your system to work. Most
"mortal" programmers simply don't have the luxury of working on the
meta-meta level, the meta level, and the problem level at the same time --
especially when the problem level may consist of multiple tiers itself. The
danger is that subsequent programmers may not be so clever that they can
design within this system and maintain it.
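To show what multi-method dispatch means, here is a hand-rolled toy sketch
in Python (this is not a standard library feature, and real CLOS generic
functions also handle subtype matching and method combination; this version
dispatches on exact argument types only, and every name is made up):

```python
# Toy multi-method dispatch: the chosen method depends on the types of
# ALL arguments, not just the receiver as in single-dispatch OO.

_registry = {}

def multimethod(*types):
    def register(fn):
        _registry[(fn.__name__, types)] = fn
        def dispatch(*args):
            # Look up the implementation keyed by the actual arg types.
            key = (fn.__name__, tuple(type(a) for a in args))
            return _registry[key](*args)
        return dispatch
    return register

class Asteroid: pass
class Ship: pass

@multimethod(Asteroid, Ship)
def collide(a, b):
    return "asteroid hits ship"

@multimethod(Ship, Ship)
def collide(a, b):
    return "ship hits ship"

print(collide(Asteroid(), Ship()))  # dispatches on both argument types
```

The "danger" noted above applies here too: a later maintainer has to
understand the dispatch machinery before they can safely extend it.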
I'm not against MI (or for that matter meta-programming of some kind -- I
posted a few weeks ago about wishing the event-on-top-of-delegates mechanism
were done in a more general fashion) but such generalizations need to be
done carefully. I admit, it's pretty bad right now when an O/R mapper forces
you to make its class the "root" of all your objects, forcing you to
"insert" this layer into your hierarchy, because inheritance is a natural way
to approach this in lieu of any other language support -- an alternative way
is to "register" all the objects that require persistence with a manager
object, causing what is, in effect, a "linked bi-object" (or "co-objects")
as far as the memory manager is concerned, so it would be more efficient to
just glue them together as classic MI does -- so you could conceivably have
delegation with MI efficiency. (The compiler, with or without user hints via
keywords or attributes, could figure these mostly-overlapping "co-lifetimes"
out -- let's just say this memory-manager/GC feature is patent-pending by me
as of this date :)) This "systems space" problem is one of those cases that may
not fall into the "almost always wrong" category. However, I think in
"problem space" this 80-wrong/20-right rule probably applies. I think MI
just needs to be taken to the next level before it goes mainstream again. I
agree that without extra support for "smart" delegation or some other
MI-like abstraction, MI would be useful in the toolbox, as long as it is
approached with caution.
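A minimal sketch of that "register with a manager" alternative (Python;
PersistenceManager and Customer are hypothetical names): the manager holds
weak references so registration does not extend object lifetimes, loosely
approximating the "co-lifetime" idea without any GC support.

```python
import weakref

class PersistenceManager:
    def __init__(self):
        # Weak references: the manager should not keep objects alive,
        # so registered objects die with their normal owners.
        self._objects = weakref.WeakSet()

    def register(self, obj):
        self._objects.add(obj)

    def save_all(self):
        return [obj.to_row() for obj in self._objects]

class Customer:  # note: no mapper base class forced into the hierarchy
    def __init__(self, name):
        self.name = name

    def to_row(self):
        return {"name": self.name}

mgr = PersistenceManager()
c = Customer("alice")
mgr.register(c)
print(mgr.save_all())
```

The objects keep their own ancestry; what classic MI would buy here is only
the memory-layout efficiency of gluing the two halves together.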
thanks,
m
"Shawnk" <Sh****@discuss ions.microsoft. com> wrote in message
news:0B******** *************** ***********@mic rosoft.com...
I think it's so silly!
I have to apologize for not answering your original post with a more
direct point on MI vs Interface discussions (and 'why not in C#').
I'm a MI AND Interface fan.
IMHO - The functional forte of each (MI and Interface) is;
1. MI - Functional aggregation
2. Interface - Functional decomposition
Point 1 : Aggregation
To 'bring together' a set of functions (F1->Fx - base classes) at a focal
point (the derived class) I prefer MI first followed up by Interface
implementation if I need polymorphic specialization.
So for aggregation I actually use both. For code reuse however you just
can't beat MI. This is the point of resource based
issues/concerns/arguments.
To aggregate pre-existing computing function MI can be PROVED to be the
best.
With Microsoft's budget and cultural drive, Anders' team MUST already have
a theoretical analyst who codes metric analyzers to quantify and verify
the dev team's suspicions. I'm sure they already ran the numbers to have
proved point 1.
Point 2 : Decomposition
Once we've aggregated on a focal point we need to 'Cube' the
functionality.
Passing Note : 'Cubing the problem space' is engineering slang for the
following paradigm. Suspend the focal point in a hypothetical cube of six
facets, each facet an 'interface' to present 6 functionally orthogonal
computing units. Of course '6' is a contrived number used for the sake of
a clear visual diagram. The idea is you can 'decompose' the focal point
functions as a pre-existing Fn (functional pass through via MI) or create
a new Fn as an adaptation to a pre-existing functional interface spec.
Perfect focal points are all pass through (via MI). A theoretical dream
and a pragmatic nightmare (with today's technology). (Hats off to
Joanna).
So with interfaces we can enforce functional orthogonality into the 'real
world architecture' and produce an 'ahead of schedule' deliverable that
people find 'looks pretty good'.
The nice thing about 'cubing' and interfaces is the ability to
form a 'functional channel' that provides a backbone for a succession
of computing units (classes grouped together in some sort of a general
pattern to process some type of data).
Summary Visual : Of Point 1 and 2
MI and Interfaces are like an hourglass with the derived class
as the center point (focal point of computing function).
Both can be used for aggregation and decomposition but their inherent
architecture belies their functional forte.
They complement each other when the inherent functional forte of each is
clearly understood. This is conjecture of course (with no formal or
informal proofs).
Why no MI in C#?
I don't know (I love saying that).
What I suspect. From what little I've seen of
Anders in video (MSDN TV, Channel 9, etc) he;
Guess 1 : Anders - Is not opposed to MI above the covers
Guess 2 : Anders - Has what appears to be an intractable problem below the
covers.
Fear 1 : Anders - Does not have a theoretical analyst to run the numbers
and prove (to him and team) the premise to back Guess 1 above.
Parting note : The monthly MI discussion in this forum.
Someone wondered if the same people keep posting about MI.
I don't know about others but I never have time for anything and I
certainly should not be doing this now :-) This is my first and last post
on MI because it sums up everything I know (or conceivably will know)
about MI vs Interface.
My motivation for responding to your post was 'it seems silly'.
An excellent observation if I may add.
Executives (in high pressure start ups) invariably recognize the following
personality types (they exist in all professional communities).
1. Motor mouth
2. Always has to be right
3. Always has to have the last word
4. etc. (these are not pejorative but 'how to handle' labels)
I suspect that as Human Beings it takes us time (as a community) to
figure something out because we all (at one time or another) become less
than our ideals would have us be.
I personally don't care about MI being in C# Vx since I'm too busy with my
own forte/talent set.
I am (however) passionate about programming and the current 'state of the
art' discussed in this forum.
Your comment on 'silly' translates to the operational/managerial issues
which I trust are minimized by our conduct in this venue.
I never post on forums but I'm so thankful for Skeet, Joanna and many
others. In thanks I'd thought I'd put in my two cents (just this once) on
MI vs Interface.
I hope the above points on the complementary nature of MI and Interface
provides you with a different perspective of thought that helps you (and
the community) better understand the utility of MI and Interfaces.
Shawnk
PS. I always look forward to those who shoot holes in all the above ideas
as it always helps me better understand stuff. I NEVER take such input
personally (so fire away ;-).
Mark