
Strategic Functional Migration and Multiple Inheritance

Some Sr. colleagues and I have had an ongoing discussion relative to when and
if C# will ever support 'true' multiple inheritance.

Relevant to this, I wanted to query the C# community (the 'target' programming
community herein) to get some community input and verify (or not) the following
two statements.

[1] Few programmers (3 to 7%) UNDERSTAND 'Strategic Functional Migration
(SFM)' (see PS below).

[2] Of those few, even fewer (another 3 to 7%) are GOOD at Strategic
Functional Migration (i.e., 0.09% to 0.49% overall).

Do these percentages seem about right? (Less than 1% of the target programming
community are GOOD at SFM.)

Thanks ahead of time for any relevant QUALITY input.

Shawnk

PS.

Strategic Functional Migration (SFM) is described in the post following this
one.

PPS. I submit to my fellow colleagues that the answer (point spread) determines

[A] Short term mitosis of the few to pioneer a new language incorporating C#
productivity with C++ (SFM) powers

[B] Long term community evolution and industry migration away from C# as a
'core competency' solution (subtle point)

Both A/B, in turn, instantiate the 'early adopter' model in the compiler
market.

PPPS.

I have a 'core competency' project I want to do that would be a lot easier
if I could use SFM with C#.
Jun 1 '06
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.c om...

I guess the question is how much benefit there is in the real world
from MI compared with how much abuse there is. While it's reasonable to
keep away from "bad" programmers, I'm more worried about "average"
programmers (which all of us are a lot of the time, I suspect).
I would say that the "average" programmer depends on your working
environment. In my experience, the "average" programmers I've worked with
have been what I'd call "good." I've spent a lot of time with this kind of
programmer in C++, and haven't seen the nightmares that others describe --
that's all I can say.

I have a feeling that MI is one of those things that everyone thinks
that -other- people mess up. Those of us who are comfortable with it use it
appropriately. Those of us who aren't are smart enough not to try to use it
as a hammer on every problem that looks like a nail.
I'd say that C# 1.1 *is* a simple language. C# 2.0 is significantly
more complicated.
C# is simple? Compared to Pascal, assembly language, or C? I guess it
depends on your frame of reference.
Has anyone actually suggested that duplication is *good*? It feels like
it's a straw man here. If you start with the assumption that the lack
of MI *inevitably* leads to code duplication, it's a reasonable
argument - but I don't accept that assumption.


Yes, I do believe that lack of MI leads to duplication (at least in many
cases). If you want to inherit two interfaces, and also reuse implementation
of those interfaces, you have to delegate to at least one contained object.
Delegation implies one-line methods that forward calls to the contained
object. The delegating methods duplicate the interface (which is
unavoidable), but they have to do it in two directions, incoming and
outgoing.
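
To make that concrete, here is a minimal C# sketch of the delegation
boilerplate being described; every type name below is invented purely for
illustration:

using System;

// A base class we are forced to inherit from (hypothetical).
public class Control { }

public interface ILogger { void Log(string message); }

// The reusable implementation we would like to "mix in".
public class FileLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

// Without MI, Widget must contain a FileLogger and forward every
// ILogger call to it - the one-line delegating methods in question.
public class Widget : Control, ILogger
{
    private readonly FileLogger _logger = new FileLogger();

    public void Log(string message) { _logger.Log(message); }
}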

Again, I've said in a previous post that it's not the world's biggest deal,
and that many, many VB, Java and C# programmers write perfectly good code
without MI. I just remember from my C++ experience that "mixing-in" behavior
(both interface and implementation), orthogonally, to a class was useful.
And the downsides that others report simply didn't occur in my (and many
others') programming.

It's a moot point in root-class languages like C# anyway.
Jun 14 '06 #51
Clarification:

The compositional functional relationship within the context of the
Fu behavioral lifecycle is when you call fu.Do_x() and then call
fu.Do_y() on the same instance.

Thus if Do_x() is a primary function and Do_y() is a persistence
function, the call sequence -

fu.Do_y(get);
fu.Do_x();
fu.Do_y(save);

is an example of what is meant by 'compositional functional relationship'
within the Fu behavioral life cycle.
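
A minimal C# sketch of that lifecycle (the enum and the method bodies are
assumptions, added only to make the sequence compile):

public enum PersistenceOp { Get, Save }

public class Fu
{
    private int _state;

    // Primary function: operates on the instance's state space.
    public void Do_x() { _state++; }

    // Persistence function: loads or stores that same state space.
    public void Do_y(PersistenceOp op)
    {
        if (op == PersistenceOp.Get) { /* load _state */ }
        else if (op == PersistenceOp.Save) { /* store _state */ }
    }
}

// The compositional sequence within one behavioral lifecycle:
//   Fu fu = new Fu();
//   fu.Do_y(PersistenceOp.Get);
//   fu.Do_x();
//   fu.Do_y(PersistenceOp.Save);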

The logical separation (in this example of English semantics, not syntax) of
the primary function from the persistence function is key to the ability to
separate the English terminology (primary function/persistence).

The underlying point is that any logically coherent substrate that forms a
paradigm for syntactic expression must, by definition, transcend the
syntactic limitations formed by any individual language specification.

A case in point:

To discuss functional normalization in the context of MI, VI, II, SI and
Interfaces transcends the specs for Java, C# and C++.

While the syntactic proofs and metrics are essential, they can only be
intellectually described with a preceding technical English lexicon (MI, VI,
II, SI, etc).

I note this in passing to help clarify the difference between compositional
sequencing (in time) of functions operating on a state space and structural
aggregation of functionality expressed in such expression mechanisms as MI.

Shawnk
Jun 14 '06 #52
Shawnk <Sh****@discussions.microsoft.com> wrote:

<snip>
I printed the article to a PDF (and marked it up) but could not
find the use of the word 'interference' in the Blog article
you noted above.
Sure: "unintended side-effects of having multiple inheritance, where
the two supposedly orthogonal concepts end up affecting each other in
unexpected ways".
Also if you have any metrics (proposed or otherwise) to elucidate the
comparisons between architectural alternatives that would be helpful.
Nope, no metrics at all. Frankly, I don't find metrics nearly as useful
as it sounds like you do.
PS. Clearly overuse is not intended/desired. Merely the availability
of MI in C#/Java would allow functional normalization in that (C#/Java)
expression medium where such expressions are appropriate/desired/intended.


The availability of overloading the assignment operator in C# would
allow certain things too - but make the language much more potentially
confusing. I find that people tend to abuse powerful features if
they're available - witness the number of people using regular
expressions inappropriately - and that as such, a feature which adds
complexity to the language really has to add a large benefit too.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 14 '06 #53
Shawnk <Sh****@discussions.microsoft.com> wrote:
I have to say, I think that's a very silly way of measuring complexity.
Would C# be a "simpler" language by making all the keywords single
letters? I don't believe so. It *certainly* wouldn't be a more readable
language.
The complexity I'm talking about is how hard it is to think about the
object model, and I believe that MI makes the model more potentially
complicated.


:-)

'Character count' means to add up all the characters in a set of expressions.


And that's exactly what I think is a bad idea.

<snip>
The solution with the least character count is (by definition) a more
powerful, effective, efficient and eloquent design.
That may be by *your* definition, but I certainly wouldn't define it
that way.

1) Efficient code isn't always brief.
2) Powerful code isn't always brief.
3) Eloquent code isn't always brief - often code becomes longer but
easier to understand when a single long statement is broken into
several short ones, for instance. That may introduce extra variables
solely for the purpose of making the code readable.
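
As a concrete illustration of point 3 (the formula and all names here are
invented for the example):

using System;

public static class Mortgage
{
    // One long statement: compact, but the reader must unpick it.
    public static double Payment(double principal, double rate, int months)
    {
        return principal * (rate / 12)
             / (1 - Math.Pow(1 + rate / 12, -months));
    }

    // More characters, more tokens, extra variables - and far easier
    // to read and verify step by step.
    public static double PaymentReadable(double principal, double rate, int months)
    {
        double monthlyRate = rate / 12;
        double discountFactor = 1 - Math.Pow(1 + monthlyRate, -months);
        return principal * monthlyRate / discountFactor;
    }
}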
I have to say, I think that's a very silly way of measuring complexity.


To nitpick, a logical token count is better than a raw character
count, since use of LLP (Low Level Patterns) would express 'Fx x;'
as 'Fx l_Fx_ins;' or 'Fx l_fx;'. The logical point (character count/
token count) of a valid and meaningful metric is the point I was making.


Again, I don't think that token count is a good metric. For instance,
you could easily have a language which has a single token for "declare
first integer variable for the class" and another for "declare second
integer variable for the class", potentially reusing those tokens for
accessing the variables when in an appropriate context. By making
tokens mean different things in different contexts, you can end up
requiring fewer available tokens overall *and* fewer tokens for a
specific piece of code.

That doesn't make it a simpler language, or the code simpler - or more
eloquent.

To give another information-theory example: I believe that if we used
"3" as a number base, that would give the "best" integer base number
system in terms of compact information with a small number of tokens.
However, ask real *people* to use base 3 and they'll start screaming.
People aren't information theory - they're people.
If you can't define a concise terminology (qualification) with
a matching metric set (quantification) into a coherent substrate
for logical thought, then the resulting syntactic expressions,
used as examples to prove your point, are 'pointless'.
I'm not trying to argue this in metrics of information theory, because
I don't believe such metrics give a good idea of the readability of
code. I believe it's to a large extent dependent on the reader, and
that what may be the most readable code for one intended audience may
well not be the most readable code for another audience. For instance,
code to manipulate a string with a regular expression may be absolutely
great for people who use regular expressions day-in-day-out, but could
be much harder than a short series of simple string operations for
other people.
I must apologize, as I thought my reference to 'information theory'
would have made the 'character count' metric clear. Hopefully the
brief dissertation above clarifies what I mean in terms of metrics,
their utility and the general coherence of terminology as a logical
substrate for syntactic expressions.
We apparently still disagree about the usefulness of the metric,
however.
I also note, in passing, that those who use good terminology/metrics
are trying to move (however effectively/ineffectively) away from
opinion towards scientific fact. As you and I both agree our
individual opinions are immaterial relative to the actual science of
logical expression.


No - I believe opinions are very important, and far more important than
arbitrary metrics such as character or even token counts. One could
construct a language which from a purely theoretical point of view was
absolutely beautiful - but which was terrible to actually use. I'd far
rather have a language which mere mortals like myself can use and
understand what any single line of code is going to do, preferably
without having to look it up in a spec due to complicated rules.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Jun 14 '06 #54

Mark Wilden wrote:
C# is simple? Compared to Pascal, assembly language, or C? I guess it
depends on your frame of reference.
Personally, I've never really found C simple :)
Yes, I do believe that lack of MI leads to duplication (at least in
many cases). If you want to inherit two interfaces, and also reuse
implementation of those interfaces, you have to delegate to at least
one contained object. Delegation implies one-line methods that forward
calls to the contained object. The delegating methods duplicate the
interface (which is unavoidable), but they have to do it in two
directions, incoming and outgoing.
What you're calling duplication seems like a cleaner (decoupled, separation
of concerns) approach to me. In most circumstances, delegation should be
preferred over inheritance (I believe GoF said that). Composition through
inheritance sounds like job security to me :)

Jokes aside, even in an SI language like C#, shying away from inheritance
unless it makes sense from a specialization perspective leads to more
extensible and maintainable software.

Please correct me if I'm missing something, but is the only reason to implement
MI in a language the ability to get away from objects having to dispatch a
call (and I'm ignoring the uber benefits of having control over the dispatch)?

If that's really it, then I don't think I have anything left to add to this
argument.
It's a moot point in root-class languages like C# anyway.


Are you saying MI wouldn't make sense in C#? Well, I guess it's decided then :)
Jun 14 '06 #55
Shawnk,

I've read those posts, and I'll have to read them again to fully understand
them.

I'm beginning to feel a little disconnected here. My "formative" years were
spent in an MI environment, where I made the transition from OOP to OOAD,
with the guidance of several extremely bright people and a lot of hard work.
So today I am rather embarrassed by many of my earlier efforts. Today I
feel OOP is a bad word, because it focuses on Programming. I focus on
analysis and design, and of course use OO concepts and pattern recognition
during those activities. For me, "programming" is simply translating design
into code - a simple mechanical task (and I have built a design/metadata
execution engine to partially avoid that step).

Summarising:
The world is full of patterns, many of which are orthogonal;
Analysis and design of real world problems is the hard job;
Coding/programming the output of the Design phase is trivial/mechanical
(given a suitable framework);
MI is a fundamental requirement of a programming language to keep the above
true.

I spent a long time with MI: 1992 - 2001. Again, those were very formative
years and I learned all about pattern recognition and design. The advantage
we had was an MI engine to power our [MI] designs. Very few people were
using OO principles for most of the 90's. We embraced OO and accepted MI as a
natural intrinsic element of OO principles. I feel strongly that it's a bit
like sight or hearing; if you've never had it, you don't know what you're
missing, but if you grew up with it and lose it, it is absolutely
devastating.

I'll find some time this weekend to review your other posts.

Cheers,

Radek

"Shawnk" <Sh****@discussions.microsoft.com> wrote in message
news:A7**********************************@microsof t.com...
Radek,

I can't thank you enough for your excellent point regarding functional
normalization.

I always use the term 'functionally orthogonal' but your 'normalization'
comments are a much better articulation.

With that in mind, could you review my comments in response to some of Jon's
points regarding the potential for metrics to measure MI and functional
normalization?

I look forward to hearing your thoughts on this matter.

Shawnk

Jun 14 '06 #56
"Saad Rehmani" <sa**********@gmail.com> wrote in message
news:44**************************@msnews.microsoft .com...

Mark Wilden wrote:
C# is simple? Compared to Pascal, assembly language, or C? I guess it
depends on your frame of reference.
Personally, I've never really found C simple :)


Do you think C# is simpler than C? Have you ever used assembly language?
Geez -- kids these days.
What you're calling duplication seems like a cleaner (decoupled,
separation of concerns) approach to me.
It can be both, of course. I admit to a strong prejudice against
duplication. Do you agree that what I call duplication is indeed
duplication?
In most circumstances, delegation should be preferred over inheritance (I
believe GoF said that).
Then I guess they were idiots for using inheritance instead of delegation in
their book. Since they're not idiots (I corresponded with Vlissides before
he died, and he is definitely not an idiot), I doubt they made such a
sweeping statement. Or perhaps that while delegation (I think you mean
composition, btw) should be preferred, there are still occasions where both
kinds of inheritance are useful.
Jokes aside, even in an SI language like C#, shying away from inheritance
unless it makes sense from a specialization perspective leads to more
extensible and maintainable software.
Since you clearly don't have extensive MI experience, I wonder on what basis
you make this statement?
Please correct me if I'm missing something, but is the only reason to
implement MI in a language the ability to get away from objects having to
dispatch a call (and I'm ignoring the uber benefits of having control
over the dispatch)?
I don't know if it's the only benefit, but the ability to automatically
compose mix-ins with other functionality is indeed its major benefit.
If that's really it, then I don't think I have anything left to add to
this argument.


That's up to you, of course.
It's a moot point in root-class languages like C# anyway.


Are you saying MI wouldn't make sense in C#? Well, I guess its decided
then :)


I'm not arguing in favor of MI in C#. What gave you a different impression?
Jun 14 '06 #57
"Mark Wilden" <Ma********@newsgroups.nospam> wrote in message
news:OH**************@TK2MSFTNGP02.phx.gbl...
Please correct me if I'm missing something, but is the only reason to
implement MI in a language the ability to get away from objects having to
dispatch a call (and I'm ignoring the uber benefits of having control over
the dispatch)?


BTW, saying "I'm ignoring the uber benefits of having control over the
dispatch" implies that that would not be possible if a language included
multiple inheritance.

Single inheritance means specializing behavior along only one dimension. All
others must use composition and delegation. Most of the time, classes do, in
fact, only need specialization along one dimension. When they don't,
however, the designer has to choose which dimension to model through one
language mechanism, and which to model through another.

Is a given class a Widget or an Observable? Why shouldn't the possibility
exist that it's both? And if it is truly both, how do you decide which class
to inherit from and which to delegate to? Number of methods? (Actually,
that's probably the best approach!)
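
A hedged C# sketch of that dilemma (all types below are invented for
illustration): Widget wins the single inheritance slot, so the observable
dimension must be composed in and forwarded by hand:

using System.Collections.Generic;

public delegate void ChangeHandler();

public class Widget { /* drawing, layout, ... */ }

public interface IChangeNotifier { void Subscribe(ChangeHandler onChange); }

// Reusable implementation of the second dimension.
public class ChangeNotifier : IChangeNotifier
{
    private readonly List<ChangeHandler> _subscribers = new List<ChangeHandler>();
    public void Subscribe(ChangeHandler onChange) { _subscribers.Add(onChange); }
    public void Notify() { foreach (ChangeHandler s in _subscribers) s(); }
}

// The designer chose Widget for the inheritance slot, so the
// observable behavior is delegated to a contained object.
public class ObservableWidget : Widget, IChangeNotifier
{
    private readonly ChangeNotifier _notifier = new ChangeNotifier();
    public void Subscribe(ChangeHandler onChange) { _notifier.Subscribe(onChange); }
}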
Jun 14 '06 #58
Saad,

The following is an opinionated synopsis of 'why MI' as well as
a very brief history of what happened to MI in 'modern' (ahem!)
languages such as Java and C#.

[1] The Functional Structure of a Computing Expression Medium (Logical
English, C, Java, etc).

SFM (Strategic Functional Migration) is the process of normalizing a code
base.

A fully (functionally) normalized code base has no 'diamond' structures within
the fabric of inheritance. Just like a fully normalized database has no
replicated data, a fully normalized code base has no replicated functionality.

Note : Our good friend Jon Skeet (God bless him :-) might suggest that the
normalization could be accomplished via the sequential effects of composition
(normalization being the removal of replicated functionality). The
normalization I speak of is a structural normalization of function that
controls the structural accessibility and containment of function.

Functional normalization is a structural aggregation phenomenon not directly
related to compositional issues/problems/solutions.

In this context (structural aggregation of function - i.e. inheritance) two
visualizations are used to understand the problem of distributing state space
along lines of inheritance. The visualizations are 'diamonds' and 'fans'.

What are 'fans'?

In a normalized code base, instead of diamonds you have 'fans' - radial
lines of inheritance.

In a stratified visual diagram the 'fans' are going from the top (derived/child
classes) to the bottom (base/parent classes). This is the self-centric
viewpoint of children. Each child has its own functional 'viewpoint' defined by
its 'lines of inheritance'. Many children, on the top, can have lines extending
to (and thus incorporating) the parents.

In a circular visual diagram each individual base class (parent) is in the
center of its own circle 'looking out' towards all children (derived classes)
that inherit it (a self-centric subjective viewpoint of the parent class).

The functional 'computing fabric' formed by 'fans' is orthogonal in that no
lines 'touch'.

If two children have replicated function, OR if the same function is desired
in more than one child, then the function can be moved out from the children
to a base class and then incorporated back into the children via MI.

Note : Function ALWAYS includes operations (operators, etc) and state space
(numbers, strings, flags, etc).

Case in point: I have a child class with a handy function I want to use 'over
here', so I break it out, make a base class, bring it into both targets via MI
and - voila! - I'm done :-)

Again, this is a fully normalized code base, no diamond structures.
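
Since C# cannot express that second line of inheritance, a sketch of the
closest single-inheritance approximation (all names invented) shows where
the fabric tears: one target inherits the factored-out base class, while
the other must fall back to containment and forwarding:

public class HandyBase
{
    public string HandyFunction() { return "shared behavior"; }
}

// First target: one radial 'fan' line, straight inheritance.
public class ChildA : HandyBase { }

// Second target: its single inheritance slot is already taken, so it
// must contain and forward. With class MI it could simply list
// HandyBase as a second base and the code base would stay normalized.
public class SomeOtherBase { }

public class ChildB : SomeOtherBase
{
    private readonly HandyBase _handy = new HandyBase();
    public string HandyFunction() { return _handy.HandyFunction(); }
}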

[2] History of MI (super short version :-)

In the beginning C was 'free form' computing with an expression medium
incorporating very few restrictions. C++ introduced functional containment
whose object oriented approach allows the computing fabric to have a
'rigidity' formed by specifications (public/private/protected - MI) which
articulated a more refined and accurate implementation of the designer's
original intentions.

Java, followed by C#, removed the primary normalization mechanism - MI - from
the structural matrix of the computing fabric. The result being that the
radial lines of functionality (fans as defined above) were no longer
structurally possible. Note : Structurally possible as a structural
aggregation phenomenon.

It was suggested/proposed/championed that using sequential functional
composition (calling a sequence of functions, similar to commutativity in
mathematics) could replace the design utility of MI. This only confused the
intellectual landscape of the design medium by removing 'fans' and replacing
them with 'call sequences'.

In real world programming, call sequences ARE more verbose (from an
information theory point of view), complex and confusing. Why? Well, it's
simple - no CENTRAL SPECIFICATION MECHANISM exists to define a sequence of
calls as a functional unit (an interesting thought). This would logically
cast the 'process' as an apparently dynamic phenomenon without the static
costs/problems of state space.

Class is a functional unit (structural aggregation). MI defined a structural
aggregation of function to create functional units without code replication.

A call sequence is free form (compositional effects). A method in one class
does not define a functional unit in the sense of a process definition built
from a sequence of steps, with the design result being a callable process as
an architectural expression (a purely dynamic functional unit, by the way).

The functional linkage of call structures is unique to each code base. It has
no ability to force the structural design of another code base. To state that
the use of a call sequence at runtime (as a compositional device in time) is
suddenly a structural design phenomenon in space is a circular argument that
misses the point. A sequence of component activity is a valid, complementary,
AND FUNDAMENTALLY DIFFERENT phenomenon from structural aggregation. Whether
such a sequence can be architecturally defined and reused is orthogonal and
immaterial to the complementary nature of composition/aggregation.

Replacing structural aggregation with compositional effects is like saying
a business needs 'three people': an engineer, an accountant and a salesman.
Our company (i.e. my design) will use two engineers and an accountant and
we'll be good to go!!! (no cash flow today :-)

During the C++/MI to C#/SI-Interface period the 'sequence in time' vs
'aggregation in space' approaches were explored, understood and formally
defined (as in Wikipedia).

During that time NO LANDMARK logical/metric analysis was performed in the
programming community to define functional normalization, articulate
structural aggregation and compare the complexities of sequential composition
in time to structural aggregation in space ('landmark' meaning - oh yeah, now
we ALL understand MI).

The result being that a generation of programmers was indoctrinated into a
design approach that lacked a clear understanding of functional normalization
and the concept of functional orthogonality within a code base.

The inherent balance of composition/aggregation has also proven elusive, since
many 'so called experts' cannot see beyond the diamond problem to a computing
universe wherein functional normalization is the 'stable state' of the
architectural expressions.

MI proponents believe that 'forced code replication' is an inherent result of
not being able to fully normalize a code base.

At present, and within the cultural context of the programming community,
the degradation of the structural computing fabric due to lack of MI
is a 'de-evolution' towards the problems in the C language without the
necessary freedom/power (due to Java/C# type restrictions) to overcome
those problems.

Fast forward to today.

[3] Summary of a very opinionated dissertation

Nobody has time anymore....

Since we all have schedules, deadlines and prior commitments, many talented
people have been unable (practically) to settle the MI/SI-Interface debate
from a scientific-inquiry/terminology-metric point of view. If this had been
done, Jon, Radek, I and others would have enumerated the top three metrics
(numerics) proving our logical positions.

Fortunately, forums such as this can funnel ideas to Wikipedia where
the foundations for such inquiries can have a solid consensus that
leverages the best of the community's efforts in advancing the state
of our art (computing).

If you understand database normalization and data replication you
will have a fundamental understanding of functional normalization
and code replication.

This community dialog on SFM (Strategic Functional Migration) has been quite
helpful in getting excellent input (Radek Cerny - functional normalization) to
better articulate the differences between composition and aggregation.

I hope the above is helpful to you (and hopefully others) in coming to a
better understanding of the utility of MI and its use as a structural
aggregant for the functional normalization of an architectural expression.

Thank you so much for your thoughtful questions and comments.

Shawnk

PS. The 'strategic' in SFM emphasizes the refactoring of a code base to
remove all diamond structures, or to change an existing normalized code base
and still retain a normalized state.

PPS. Please do not interpret any of this (the history) as a pejorative polemic
to pound MI into the brains of SI-Interface proponents. This is an attempt to
refine the articulation of the MI/SI-Interface debate. The hope being to move
our various opinions towards scientific inquiry and logical/metric analysis.

PPPS. The greatest cost/work of SFM in terms of intellectual energy is the
refactoring of any 'diamonds' into a set of 'fan downs' and 'fan ups'. In 14
years of C++ coding I never had a 'diamond' except once and I just factored it
out. I did have the luxury of doing my own architecture and design however.

To replicate the SFM process consistently in a corporate code base
with millions of lines of code from thousands of programmers is another
matter. Thus the need for a more stringent approach via a formal
analysis.

PPPPS. Also, I have no problem with allowing diamonds to exist since (IMHO)
the permutations of state space collisions are well known. I would
allow all permutations to exist (architecturally) for the programmer,
choose a syntax spec default, and allow a syntax mechanism to change
the default both globally and specifically.

This allows the freedom of non-normalized functionality while retaining
the precise articulation necessary to reflect the intentions of the designer
in the expression medium.

Jun 15 '06 #59

Mark Wilden wrote:
"Saad Rehmani" <sa**********@gmail.com> wrote in message
news:44**************************@msnews.microsoft .com...
Mark Wilden wrote:
C# is simple? Compared to Pascal, assembly language, or C? I guess
it depends on your frame of reference.
Personally, I've never really found C simple :)

Do you think C# is simpler than C? Have you ever used assembly
language? Geez -- kids these days.


*sigh* ... old people ... :)
What you're calling duplication seems like a cleaner (decoupled,
separation of concerns) approach to me.

It can be both, of course. I admit to a strong prejudice against
duplication. Do you agree that what I call duplication is
indeed duplication?
In most circumstances, delegation should be preferred over
inheritance (I believe GoF said that).

Then I guess they were idiots for using inheritance instead of
delegation in their book. Since they're not idiots (I corresponded
with Vlissides before he died, and he is definitely not an idiot), I
doubt they made such a sweeping statement. Or perhaps that while
delegation (I think you mean composition, btw) should be preferred,
there are still occasions where both kinds of inheritance are useful.


Okay, now you're just getting pissy. Anyways, since this isn't comp.object,
I'll try to be civil :)

page 20:
"This leads to our second principle of object-oriented design:
Favor object composition over class inheritance."

While you're at it, you might want to re-read the part where they suggest
inheriting from only abstract classes (i.e., Interfaces?) since they provide
little or no implementation.
Jokes aside, even in an SI language like C#, shying away from
inheritance unless it makes sense from a specialization perspective
leads to more extensible and maintainable software.
Since you clearly don't have extensive MI experience, I wonder on what
basis you make this statement?


I prefaced it with 'even in an SI language' since I obviously don't have
experience with an MI language ... read what I write! :)
Please correct me if I'm missing something, but is the only reason to
implement MI in a language the ability to get away from objects having
to dispatch a call (and I'm ignoring the uber benefits of having control
over the dispatch)?
I don't know if it's the only benefit, but the ability to
automatically compose mix-ins with other functionality is indeed its
major benefit.


By the way, have you considered AOP? I have pretty extensive experience with
that and personally found it quite lacking. Given your MI background, you
might find it more useful.
I'm not arguing in favor of MI in C#. What gave you a different
impression?


Errr ... this is a C# newsgroup? ;)
Jun 15 '06 #60
"Saad Rehmani" <sa**********@gmail.com> wrote in message
news:44**************************@msnews.microsoft .com...
Personally, I've never really found C simple :)
Do you think C# is simpler than C? Have you ever used assembly
language? Geez -- kids these days.


*sigh* ... old people ... :)


I'll just repeat the question then -- do you think C# is simpler than C?
Or perhaps that while
delegation (I think you mean composition, btw) should be preferred,
there are still occasions where both kinds of inheritance are useful.


Okay, now you're just getting pissy. Anyways, since this isn't
comp.object, I'll try to be civil :)

page 20: "This leads to our second principle of object-oriented design:
Favor object composition over class inheritance."

While you're at it, you might want to re-read the part where they suggest
inheriting from only abstract classes (i.e., Interfaces?) since they
provide little or no implementation.


Reread the last sentence of mine quoted above.

Also, read what GoF also say on p. 20: "Reuse by inheritance makes it easier
to make new components that can be composed with old ones. Inheritance and
object composition thus work together."

Again, they wouldn't use inheritance in the book if they thought it
shouldn't be used. They reuse code via inheritance (as in the quote above),
so if they do indeed suggest inheriting from only abstract classes, they're
contradicting themselves. Which is fine -- my correspondence with Vlissides
dealt with our mutual dissatisfaction with the Mediator pattern. GoF aren't
gods.

Abstract classes are not interfaces, and they do not provide little or no
implementation. They are simply classes that must be subclassed. I'm working
with an abstract class right now which contains the bulk of the code, where
derivations only contribute one short method.
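
A minimal sketch of that shape (names invented; this is essentially the
classic Template Method arrangement):

public abstract class ReportGenerator
{
    // The bulk of the code lives here in the abstract class.
    public string Generate()
    {
        string body = RenderBody();
        return "header\n" + body + "\nfooter";
    }

    // Each derivation contributes only this one short method.
    protected abstract string RenderBody();
}

public class SalesReport : ReportGenerator
{
    protected override string RenderBody() { return "sales figures"; }
}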
Jokes aside, even in an SI language like C#, shying away from
inheritance unless it makes sense from a specialization perspective
leads to more extensible and maintainable software.

Since you clearly don't have extensive MI experience, I wonder on what
basis you make this statement?


I prefaced it with 'even in an SI language' since I obviously don't have
experience with an MI language ... read what I write! :)


Sorry, I misunderstood that.
By the way, have you considered AOP? I have pretty extensive experience
with that and personally found it quite lacking. Given your MI background,
you might find it more useful.


No, I'm afraid I don't have any experience in AOP.
I'm not arguing in favor of MI in C#. What gave you a different
impression?


Errr ... this is a C# newsgroup? ;)


Well, you certainly wouldn't know it by most of the messages, which have
nothing to do with the C# language.

No, I simply responded to your request for books pointing out the benefits
of MI.
Jun 15 '06 #61
