Bytes IT Community

multiple inheritance

why doesn't .NET support multiple inheritance?
I think it's so silly!

Cheers,
Mark
Mar 24 '06 #1
47 Replies


Hello Mark,

Why? .NET does support multiple *interface* inheritance.

M> why doesn't .NET support multiple inheritance? I think it's so silly!
M>
M> Cheers,
M> Mark
---
WBR,
Michael Nemtsev :: blog: http://spaces.msn.com/laflour

"At times one remains faithful to a cause only because its opponents do not
cease to be insipid." (c) Friedrich Nietzsche
Mar 24 '06 #2

It's just as silly as Java, which doesn't support multiple inheritance,
either.

Here's a chance for me to learn something (as usual): Is it .NET that
doesn't support multiple inheritance, or just C#? After all, there is a
C++ compiler for .NET, and C++ does support multiple inheritance... so,
can the CLR do it, but it was simply left out of C#, or is it a
limitation of the CLR?

As for why it's not in C#, if you take the trouble to search the
archives of this group you'll find many discussions on why it was left
out or why it should have been included (depending upon the poster's
point of view). There's really no need to go over it all again since
it's all been covered here before, several times. (Unless, of course,
you're bored, and really want to.)

Mar 24 '06 #3

"Mark" <Ma**@discussions.microsoft.com> wrote in message
news:1A**********************************@microsoft.com...
why doesn't .NET support multiple inheritance?
I think it's so silly!


http://www.google.com/search?sourcei...inheritance%22
Mar 24 '06 #4

Hi,

If you look into the archives you will see it has been discussed here
several times before.

In short, with interfaces you do not need the extra complexity that
multiple inheritance introduces.
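What a class gains from multiple *interface* inheritance can be sketched in a few lines. The sketch below is C++ (which this thread also discusses), with pure-virtual classes standing in for C# interfaces; all type names are illustrative, not from any real API:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Two independent "interfaces" (pure-virtual classes, the C++ analogue
// of C# interfaces). A class may implement any number of them.
struct IReader {
    virtual std::string Read() = 0;
    virtual ~IReader() = default;
};
struct IWriter {
    virtual void Write(std::string s) = 0;
    virtual ~IWriter() = default;
};

// One class, two interfaces - the shape C# permits directly with
// "class Buffer : IReader, IWriter".
class Buffer : public IReader, public IWriter {
    std::string data_;
public:
    std::string Read() override { return data_; }
    void Write(std::string s) override { data_ = std::move(s); }
};
```

A `Buffer` can then be handed to code that only knows `IReader` or only knows `IWriter`, which is the substitutability MI is usually wanted for.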
--
Ignacio Machin,
ignacio.machin AT dot.state.fl.us
Florida Department Of Transportation

"Mark" <Ma**@discussions.microsoft.com> wrote in message
news:1A**********************************@microsoft.com...
why doesn't .NET support multiple inheritance?
I think it's so silly!

Cheers,
Mark

Mar 24 '06 #5

Mark wrote:
why doesn't .NET support multiple inheritance?
I think it's so silly!


Because it was considered that the few cases where implementation MI is
useful (especially when you don't have templates to combine with MI) weren't
worth the implementation complexity and the (high) risk of misuse of
implementation MI.

Arnaud
MVP - VC
Mar 24 '06 #6

"Ignacio Machin

In short, with interfaces you do not need the extra complexity that
multiple inheritance introduces

And don't forget design principle #2: whenever possible, favor composition
over inheritance.
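That principle fits in a few lines. A minimal sketch (C++, with hypothetical names - `Logger` and `Service` are illustrative only): instead of *inheriting* logging behaviour, the class *holds* a logger and forwards to it:

```cpp
#include <cassert>
#include <string>

// A small reusable capability.
class Logger {
public:
    std::string Log(const std::string& msg) { return "[log] " + msg; }
};

// Service reuses Logger's behaviour by composition, not inheritance:
// it is free to keep (or change) its own base class, and Logger's
// interface does not leak into Service's public surface.
class Service {
    Logger logger_;          // has-a, not is-a
public:
    std::string DoWork() { return logger_.Log("work done"); }
};
```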

Cheers

Padu
Mar 24 '06 #7

Mark,

You are right. It is silly, but more importantly it severely limits the
ability to combine, in an articulate and elegant manner, functionally
orthogonal processing.

My take on the matter is that 'closed communities' tend to be intellectually
sterile (see summary statement).

In spite of the excellent professionalism of the C# programming community
and the significant contributions of C# (reflection, LINQ, EOP (Event
Observer Pattern)) the core decision making of C# architecture is very, very
small (God Bless Anders!!!).

The argument against C# MI (Multiple Inheritance) deals with complexity.

The complexity they speak of is mostly the implementation 'under the covers'
complexity. Of secondary import (in arguments against) is the supposed
'complexity' of expression (above the covers). I will drop 'under the covers'
issues and focus on the 'above the covers' issue of WHY (IMHO) MI (Multiple
Inheritance) has been rejected by the closed community.

Q. WHY is MI not in C#?
A. Expression complexity

Expression complexity (for lack of a better term) is a most shallow argument
against the inclusion of multiple inheritance. When I say 'expression
complexity' I DO NOT MEAN a 30 line contrived example. I mean the 'expressive
complexity' on the order of millions of lines of code, similar in scope to
Lord of the Rings and Sherlock Holmes (if you can't make the leap, don't read
further).

Currently in C# we divide and conquer complexity by (1) stratification in
(A) call levels and (B) inheritance. So complexity can be 'layered' from L1
to Ln. Single inheritance can be fitted to the concept of just being another
layer - Li.

In C++ we also have multiple inheritance to divide and conquer complexity by
(2) functionally orthogonal articulation. Thus any Lx (level of
stratification) can contain F1 to Fn. So a single class 'inherits' F1 to Fn as
multiple classes via MI. Now the derived class can call embedded classes (the
next Lx) and also call or provide (to a caller) F1 to Fn.

SI as Li VS MI as F1 - Fn is the core 'sticking point' that divides the
FOR/AGAINST camps.

WHY : THOSE OPPOSED TO C# MI : They say an Lx with F1 -> Fx is too complex.

Their summary statements always say something like 'not worth it', meaning it
brings in more problems than it solves.

I would tend to agree if every Lx (level of stratification) had a functional
break out (more than one Fx via more than one parent class via MI). BUT -
IMHO - only 10 to 30 % (a hip shot I admit) of levels in a 'real world'
design have a 'functional breakout'. So I think that MI for functional
decomposition is heterogeneous (and clustered on a few key levels of a real
world component design) and not homogeneous.

I love MI (Multiple Inheritance). It ONLY simplifies things (above the
covers). More importantly, I believe that a simple implementation (under the
covers) is inherently doable (if I wrote the compiler - I love saying that!
:-). But the core dev team of C# does not (so much for MHO :-)

[BTW - Pipelining and OOA (Object Oriented Approach) do not result in a
'single top node' simplification via language feature sets. Multiple
inheritance, for example, is a code element mechanism that formally produces
a functional focal point (the derived class access and 'pass through' of
multiply inherited function).]

I have the deepest respect and admiration for the Anders/C# community of
thought. But I truly believe another solution, post C#, will fill the
programming void of the future caused by this significant and unfortunate
expressive hole (in the fundamental C# architecture).

I think the 'intellectual sterility' (flame me please - I'm asking for it
:-O) resides in the tendency to determine everything (language feature wise)
in a 30 to 40 line contrived example IN C#.

As Einstein said you have to go at least one level 'above' something to talk
about it. So system level arguments (Lx + FX combined as a single point
(classs) via MI) do not carry much weight because they are 'above' that 30 to
40 line intellectual prison of many core community members (MVPs, etc).

So - IMHO - the 'closed community' is formed by the 30 to 40 line mindset.
Like Republican/Democrat dialogs, your 'camp' is defined by your values and
comfort zone. It's always nice to know I'm not alone in my 'camp'.

Good luck to you Mark. It's (also) always nice to know someone else is
sharing my pain :-)

shawnk

PS. It always feels 'so good' to get this off my chest. Thanks for the
opportunity.
"Mark" wrote:
why doesn't .NET support multiple inheritance?
I think it's so silly!

Cheers,
Mark

Mar 26 '06 #8

>> It always feels 'so good' to get this off my chest. Thanks for the
>> opportunity.


Thanks. I feel much better too. I knew I wasn't alone. I grew up with MI
and miss it dearly in .NET. Only today do I find myself COPYing code into a
few places because interfaces just can't do what I need.

Radek
Mar 27 '06 #9

"Radek Cerny" <ra*********@NOSPAM.c1s.com.au> wrote in message
news: ek**************@TK2MSFTNGP09.phx.gbl...

| Thanks. I feel much better too. I knew I wasn't alone. I grew up with MI
| and miss it dearly in .NET. Only today do I find myself COPYing code into a
| few places because interfaces just can't do what I need.

Sounds to me you need to use interface implementation by delegation - a much
better paradigm than MI.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Mar 27 '06 #10

Joanna,
no, I need MI. Have you ever had a decent MI platform to develop on? I
"think" and design (well used to anyway) relying on MI. Unfortunately, I
had tremendous success with this. I decomposed the world into objects with
a few different "base" types. I could combine these at will and achieve
amazing things at minimal time/effort (eg cost). If I could show you my
world you would be convinced that MI is necessary.
If it were not for this thread, the self-brainwashing would have worked.
Now I'll need therapy again.
Bugger.

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:u4*************@TK2MSFTNGP09.phx.gbl...
"Radek Cerny" <ra*********@NOSPAM.c1s.com.au> wrote in message
news: ek**************@TK2MSFTNGP09.phx.gbl...

| Thanks. I feel much better too. I knew I wasn't alone. I grew up with MI
| and miss it dearly in .NET. Only today do I find myself COPYing code into a
| few places because interfaces just can't do what I need.

Sounds to me you need to use interface implementation by delegation - a
much better paradigm than MI.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer

Mar 27 '06 #11

"Radek Cerny" <ra*********@NOSPAM.c1s.com.au> wrote in message
news: %2****************@TK2MSFTNGP11.phx.gbl...

| no, I need MI.

I would contest that one of the things that is the matter with you is that
you think there is nothing the matter with you. :-)) (sort of quote from
R.D.Laing)

| Have you ever had a decent MI platform to develop on?

Yes, I used to use C++, but then I saw the one true light of interfaces :-)

It really is the same animal under a different skin. You can design your
classes just as before; the only difference is that when you want to
combine multiple base classes, you declare an interface for each class and
implement those multiple interfaces in one class by delegating to composite
instances of the other "derived" classes in that sub-class.
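That recipe - one interface per would-be base class, one combined class delegating to composed instances - sketches out as follows. The sketch uses C++ with pure-virtual classes standing in for C# interfaces; `IPrinter`, `IScanner`, `Copier` and friends are illustrative names only:

```cpp
#include <cassert>
#include <string>

// One interface per would-be base class.
struct IPrinter {
    virtual std::string Print() = 0;
    virtual ~IPrinter() = default;
};
struct IScanner {
    virtual std::string Scan() = 0;
    virtual ~IScanner() = default;
};

// The reusable implementations that MI would have inherited from.
class PrinterImpl : public IPrinter {
public:
    std::string Print() override { return "printed"; }
};
class ScannerImpl : public IScanner {
public:
    std::string Scan() override { return "scanned"; }
};

// The "multiply inherited" class: implements both interfaces by
// delegating each call to a composed instance.
class Copier : public IPrinter, public IScanner {
    PrinterImpl printer_;
    ScannerImpl scanner_;
public:
    std::string Print() override { return printer_.Print(); }
    std::string Scan() override { return scanner_.Scan(); }
};
```

Clients see `Copier` through either interface, just as they would with implementation MI; the cost is the explicit forwarding methods.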

| Now I'll need therapy again.

Just keep chanting "C# is good" :-)))

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Mar 27 '06 #12

Joanna,

Thank you so much for this excellent suggestion. Also thanks for earlier
responses to my posts.

I use a 4-5 participant EOP (Event Observer Pattern) that is just delegate
based.
Pattern artifacts (major and most common ones):

1. Delegate type (Signature)
2. Delegate instance
3. Delegate Method
4. Event Method
5. Binder Method

Pattern roles

1. Subject
2. Observer
3. Binder
4. Client
5. Monitor

The Subject (subscribe/unsubscribe) and Observer (attach/unattach) binder
methods allow a fairly wide range of bind processes.
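A minimal sketch of that subscribe/unsubscribe wiring, with `std::function` playing the delegate role (C++; `Subject`, `Raise` and the other names are illustrative, not the actual pattern artifacts described above):

```cpp
#include <functional>
#include <map>
#include <string>

// Subject holds delegate instances keyed by a subscription id, so an
// observer can later unsubscribe just the binding it created.
class Subject {
    std::map<int, std::function<void(const std::string&)>> observers_;
    int next_id_ = 0;
public:
    // Binder method: attach an observer delegate, return its id.
    int Subscribe(std::function<void(const std::string&)> obs) {
        observers_[next_id_] = std::move(obs);
        return next_id_++;
    }
    // Binder method: detach by id.
    void Unsubscribe(int id) { observers_.erase(id); }
    // Event method: fire the event to every attached observer.
    void Raise(const std::string& evt) {
        for (auto& entry : observers_) entry.second(evt);
    }
};
```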

I must agree that the EOP is one of the most fundamental and 'new' forms of
expression. I use it in 'self assembling' container objects that, when passed
to a pattern generator, can assemble themselves into a 'pattern instance'. An
example is a data composition/decomposition pipeline with multiple 'event
stages' (the pattern instance) that are tied to a typed event channel (the
backbone of the data flow through the various stages).

Yet, with all due respect, patterns are in vogue and remind me of expert
systems in the 1980s in the industrial/military startup high tech community.
Expert systems turned out to be not as fundamental as originally perceived
(case in point: the rule engine in BizTalk - runs a process - not the company
:0)

MI allows a call tree to 'travel' (or expand - take your favorite semantic
term) vertically or horizontally in a stratified system diagram.

Many people only envision call trees in a vertical dimension of a stratified
system diagram (ie. ISO 7 layer OS, SOA 3->7 layer systems).

With MI, however, call trees can branch horizontally through several calls,
such as in business logic (SOA-3L). Thus the call tree moves 'horizontally'.

For those engineers who can turn a collection-with-cursor diagram from
vertical into a horizontal Turing machine, the MI argument seems pretty plain.

MI approaches an almost numeric/mathematical fundamental level with von
Neumann 'Turing machines'. Its general nature can be envisioned in a
fundamentally more primal category of computing patterns than, say, an EOP.

I've programmed (real world projects successfully delivered) in HLDV
(hardware/chips/simulators), microcode, assembly, POST (RAS), several high
level languages. I have also done neural nets, expert systems, supercomputers
and natural language processing (I hop technologies every 2 or 3 contracts if
not every one).

I may be missing something but I still have to go on my own beliefs,
experience and (especially) code output.

LLP (Low level patterns - artifact tagging - _ptr, etc) show pretty
conclusively (in my code base) that EOP and other fundamental core C#
patterns (reflection probe patterns, interface drivers, etc) are a conceptual
level above MI.

With MI the turing tape (of the Turing machine) can be wound out (without
break) along the L1-Lx complexity stratification AND the F1->Fx functional
partitioning. Furthermore the path of the tape head (Turing Machine Pattern)
can be represented as a clockwise/counterclockwise single complete path on
any Lx using a pie chart of F1->Fx (but you have to contrive a reordered set
of pie slices for each instance of a functional call path - in the call tree).

In rebuttal one could present the EOP event stream as a Turing machine. But
this proves that EOP inherently provides no complexity breakdown mechanism
(such as with L1->Ln and F1->Fn). Without formally modelled dimensions of
complexity breakdown (such as in the dimensions L and F) the EOP becomes just
a simple Turing machine.

Decorated backbones (of the event channel) could be presented as a
complexity breakdown mechanism. This is a conceptual error however, as event
channel data typing (the most common use of backbone typing) is a 'data flow'
mechanism for 'data flow typing' (base class EventArgs). IMHO - the staged
pipeline pattern is peer to (say) the collection residence pattern
(collections with assignable write iterators).

Note I'm on the edge of the EOP pattern domain and not discussing
self-binding contrived examples of 40 lines of code (which are great and
wonderful solutions for many applications - we all have to 'draw from the
hip' occasionally).

I always appreciate your excellent thoughts on these matters and the EOP may
be more fundamental than I realize. However my passing interest in runtime
pattern generators for automated data composition is in LLP format. The LLP
groupings (artifact sets for EOP, reflection, state machines) tend to prove
the hypothetical existence of a more fundamental CPG 'Composition Process
Generator' of which the EOP is one of many target implementation results (to
implement the data composition itself).

Summary point is if I had to write and deliver a CPG 'on contract' (money
says it all :-) I wouldn't touch it without MI. Since C++ is getting a little
slow productivity wise I'd pass on the project and do more interesting things
until I can find a C# replacement.

IMHO - a C# replacement is inevitable (for von Neumann computing approaches -
VS say neural nets (perception/actuation) and other 'niche' technologies).
(Also I have the most wonderful respect - and especially thanks - to
Anders_and_team for making programming such a wonderful and elegant
experience.)

IMHO - an informal proof of the C# demise (assuming no MI in C#) could be
done with a cup of coffee, and maybe a beer. The proof would be to find the
most fundamental TMP (Turing Machine Pattern) expressed in the C# 'challenger
pattern'.

The 'challenger pattern' proves C# does not need MI.

I deeply respect your posts and thoughts (not just on my posts) and would be
very interested to see you come up with an EOP based informal proof that WAS
not (fundamentally proved to be) a backbone decorated dataflow.

If you have already done an informal 'challenger pattern' (which proves MI is
not needed in any language - let alone C#) could you post it here for all of
us interested in MI?

Please forgive me if this post is too obtuse. With my limited time I can say
so much more with simple logical thought patterns than C# code examples.

Thank you again for your excellent suggestion on EOP. I always enjoy your
comments.

Shawnk


"Joanna Carter [TeamB]" wrote:
Sounds to me you need to use interface implementation by delegation - a much
better paradigm than MI.

Joanna

Mar 27 '06 #13

Joanna,

I hope I have not misunderstood your excellent EOP solution.

I may be wrong about the following so please correct me.

In MI there are NO COMPOSITE code artifacts (embedded classes performing the
F1->Fx) and NO INTERFACE IMPLEMENTATION. The base classes are 'passed right
through' to the client (user of the class based on an MI integration approach
to build a composite). The 'pass through' artifact result of MI is what gives
the coding utility to F1->Fx as base classes B1->Bx.
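The 'pass through' being described - base class members reaching the client directly through the derived class, with no forwarding code at all - looks like this in a minimal C++ sketch (all names illustrative):

```cpp
#include <cassert>
#include <string>

// Two functionally orthogonal "base" capabilities (the F1->Fx).
struct Persistence {
    std::string Save() { return "saved"; }
};
struct Validation {
    bool Check() { return true; }
};

// The entire integration is the base-class list - the ':' and ','.
// Record writes no forwarding methods; Save() and Check() pass
// straight through to the client.
class Record : public Persistence, public Validation {};
```

Compare this with the interface-by-delegation sketch: there, each forwarded method must be written out by hand, which is exactly the character-count difference being argued about in this post.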

So as a character count based formal proof, in MI the ':', ',' and
'each_base_cls[1->x]' are the quantified (numeric) focal point of the proof.

With C# the order of magnitude expression is 2 to 4 orders of magnitude
larger. Note this is an 'off the cuff' formal proof. Better formal proofs
(quantified) can be obtained using a 'cross pollination' approach and
concentric circles on the functional pie chart (within the Ln stratification
level). These additional proofs would make the MI focal point (the class
using MI) a USER/CLIENT of the F1->Fx presented by the base class set.

At first glance the COMPOSITE expression of C# and MI seem identical in
order of magnitude quantifier metrics. However, when the base class set
itself uses MI the argument falls flat because of the recursive nature of
the quantities involved.

If we confine our (present) discussion space to a quantified character
count... how can you propose C# as a logical expression medium with equal
to or less information (character-wise)?

Note I have bypassed the 'intellectual time' burden of having to read and
absorb the additional code lines of the composite aggregation (in C# - as in
code maintenance, refactoring, etc).

So, am I (we) wrong? Based on character count? (So we remain somewhat
professional and not religious.)

I have 14 years of experience coding with C++ but still, experience pales in
comparison with raw intellect (Mozart, Einstein) and, being a regular joe
coder, I'm always learning from the excellent posts in this newsgroup.

Thank you ahead of time for your response. I would really like to hear your
take on this in case I'm wrong or have missed something.

Shawnk

Yes, I used to use C++, but then I saw the one true light of interfaces :-)


Mar 27 '06 #14

Q: Does C# have multiple inheritance?
A: No.

Sort of ends the argument.
Shawnk wrote:
<snip>
Mar 27 '06 #15

"Shawnk" <Sh****@discussions.microsoft.com> wrote in message news:
98**********************************@microsoft.com...

| Please forgive me if this post is too obtuse. With my limited time I can
say
| so much more with simple logical thought patterns than C# code examples.

Well, I have to be honest and say that I didn't understand more than a
couple of sentences of what you wrote, but then I am more interested in real
world architecture and design than the theoretical science of it :-)

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Mar 27 '06 #16

Shawnk wrote:
Mark,

You are right. It is silly but more importantly it severely
limits the ability to combine, in an articulate and elegant manner,
functionally orthogonal processing.

My take on the manner is that 'closed communities' tend to be
intellectually sterile (see summary statement).

<snip>

I read and (tried to) understand your excellent mathematical demonstration,
and if I were the only one to work on my projects, I would mostly agree.

Alas, in the real world, there is one single word that plays against your
whole demonstration: maintainability.
I have yet to see a real-world MI based architecture that is easily
understandable: I have worked with various such models, in C++, and all
failed their promises because the initial design, sound as it may have been,
had been wasted by subsequent evolutions.

The only exception that comes to my mind being ATL/WTL (which is still
considered too complicated by many managers), but there MI is highly
integrated with templates - AKA the "curiously recurring template pattern" -
and is not "pure" MI modelling.
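For readers who haven't met it, the "curiously recurring template pattern" in minimal form: the base class is parameterised on its own derived class, so it can call derived-class methods with no virtual dispatch. The names here are illustrative:

```cpp
#include <cassert>
#include <string>

// CRTP: Base knows its Derived at compile time, so Describe() can call
// Derived::Name() statically - no virtual functions involved.
template <class Derived>
struct Shape {
    std::string Describe() {
        return "shape: " + static_cast<Derived*>(this)->Name();
    }
};

// The "curious recurrence": Circle derives from Shape<Circle>.
struct Circle : Shape<Circle> {
    std::string Name() { return "circle"; }
};
```

In ATL/WTL this is combined with MI, so one class mixes in several such template bases and each base can still reach the most-derived class's members.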

Arnaud
MVP - VC
Mar 27 '06 #17

>> Q: Does C# have multiple inheritance?
>> A: No.
>> Sort of ends the argument.


I don't think that was ever the argument at all.
Check out the OP's post. They already knew that it didn't support MI.

This whole conversation comes around to this thread about once a month,
maybe more.
I'd be interested to know if the same people post each time...

Mar 28 '06 #18

Arnaud,

Both are excellent and valid points.

Point 2 : I've seen good architectures with MI. They tend to use vertical
stratification as a 'main trunk' and then 'branch horizontally' using MI.
IMHO - the key to good MI results is using it to create 'functionally
orthogonal' computing blocks. For this you need a 'natural programmer' who
has an inherent gift that prevents the 'functional confusion and chaos' you
mention. Since my experience and your experience differ, it would make sense
we differ on the utility of MI.

Point 1 : I think maintainability is dependent on architecture. With the
advent of namespaces I believe that inherent, functionally orthogonal designs
can be formally proven using API call domains expressed as namespaces.
Independent of this, the design architect really should show the main
architecture visually (personal opinion). The design 'artwork' is what I look
for in truly gifted software engineers but it's not all that common.

New point : a quantified metric of MI AND interfaces.

I always see MI and interfaces as complementary. I love them both.

A quantified metric of functional dispersion and cross pollination can be
developed by centering on the focal points (of MI and interfaces) and looking
at the immediate upstream (towards parents) classes. For each base B_cls,
total the namespace points. This creates a 'Functional Dependency' (FD)
structure based on platform API calls.

You then come up with a 'Dispersion Index' on a per namespace basis (say a
simple count of classes that use a target namespace). You run a 'Dispersion
Probe' which tunnels through the FD structure and creates a Dispersion Index
for each namespace (in the list of namespaces used - note - each 'namespace'
is really the namespace path with a terminal point containing the used API
call).

Finally you order the Dispersion Array (list of namespace points) according
to Dispersion Index, depending on whether you want to find the worst
functional dispersion (used in most base classes) or the best functional
focus.

Looking at the worst functionally dispersed (most commonly used everywhere)
function points, you can usually find which candidates should really be in a
separate stratification level. You can also find candidates for refactoring
along functionally orthogonal lines within the domain of any Lx
(stratification level).

Finally : The operation can be performed on both MI and Interface designs to
measure functional dispersion and orthogonality.

With the introduction of namespaces in C# I think a natural result is just
such a metric. It would, historically, be similar to the McCabe metric of
cyclomatic complexity for call paths.

Summary : You, as did Joanna, hit the key pejorative point on MI (assuming
character count proofs do show MI to be more powerful from a functional
aggregation point of view).

I still believe the demise of C# is inevitable for the above reasons and
will unfold in the future market space history for the above reasons. But
this is conjecture. Our mutual interest and set of logic points both have an
interest in this type of metric analysis (functional dispersion patterns in
MI and inheritance). Hopefully our community will rise to the challenge some
day and produce a nifty free tool on www.snapfiles.com (or whatever) to help
all programmers produce better code faster.

Thank you for your two excellent points. Along with the expression burden
(character count difference between functional integration/aggregation of MI
and interface expressions) they provide important clues to a quantified
solution to this seemingly intractable problem.

Shawnk

Point 1 : > Alas, in the real world, there is one single word that plays
against your whole demonstration : maintainability.
Point 2 : > I have yet to see a real-world MI based architecture that is
easily understandable.

Mar 28 '06 #19

I've worked directly with a number of fellows and principal scientists and
there is no 'inherent' difference between theory and science. LINQ is an
'envelope point' of the current state of the art in programming languages.
One day it will be a ubiquitous assumption of platform code used by
expressive techniques in the future.

Quantifiable metrics are the meeting point of both. ALL the top people I
work with talk like this. All of them. They all tailor content and
articulation per audience, which is what makes them successful in the
executive realm (operations/management) as well (thus the obtuse indicator).
Also, coding up a metric analyzer is (often) not that difficult - you just
need the time (something, I admit, we all lack).

Summary point : Theory and Science CREATE real world architecture.

I would like to think that the C# community in this newsgroup has enough
range for both newbies and theory, since all of us are newbies in many areas
while being quite proficient in others.

I wanted to thank you for your comment with this post and also to encourage
any other community members who are interested in future architecture within
C# to discuss some of the (admittedly) theoretical points, to better
articulate the architecture features found in future versions of C#.

I would also like to think that some of the community discussion (on theory)
is helpful to Anders and his dev team.

Thank you again for your input.

Shawnk

PS. A case in point is Arnaud Debanee's point on maintainability. Giving an
exploratory rebuttal to his concerns helps to articulate (read that - make it
real world and not just some cheap jerk off polemic) potential (though
informal) proofs that can help the community with new ideas. Thank you again
for clarifying your interest regarding the intersection of real world
architecture and computing theory.

"Joanna Carter [TeamB]" wrote:
"Shawnk" <Sh****@discussions.microsoft.com> wrote in message news:
98**********************************@microsoft.com...

| Please forgive me if this post is too obtuse. With my limited time I can
say
| so much more with simple logical thought patterns than C# code examples.

Well, I have to be honest and say that I didn't understand more than a
couple of sentences of what you wrote, but then I am more interested in real
world architecture and design than the theoretical science of it :-)

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer

Mar 28 '06 #20

P: n/a
Joanna,

", but then I am more interested in real
world architecture and design than the theoretical science of it :-)


Just so I'm not misinterpreted, the newsgroup/engineering community can be
partitioned as;

1. Pragmatic (real world only thank you very much :-)
2. Envelope (real world and theory)
3. Applied research (theory - experimental market solutions)
4. Pure research (University - military contractor - etc)

No group is 'better' than the other. Most everyone has a preference.

I'm group two and (I take it) your center point is group one.

I'm always thankful to have group one and three types around me on a team.
Group ones are quicker on the draw than I, and group threes are great for
code generators and metric analyzers (DALs, etc).

I just wanted to add that my previous post's point was my appreciation to
the Microsoft Newsgroup team (and to MS as a company) for providing a forum
(this
newsgroup) where we can occasionally meet and get/give ideas. The range of
newsgroup (via any given post) contributions is what makes this venue a real
world life saver (in terms of time and money).

I have had my rear end (pardon the expression) saved multiple times by the
more pragmatic members of your community (such as yourself) and I just wanted
to clearly note my thanks to you personally and more importantly to others
for tons of answers to 'newbie questions' for 'senior engineers'.

Thanks again for your input.

Shawnk
Mar 28 '06 #21

P: n/a
> I think it's so silly!

I have to apologize for not answering your original post with a more
direct point on MI vs Interface discussions (and 'why not in C#').

I'm a MI AND Interface fan.

IMHO - The functional forte of each (MI and Interface) is;

1. MI - Functional aggregation
2. Interface - Functional decomposition

Point 1 : Aggregation

To 'bring together' a set of functions (F1->Fx - base classes) at a focal
point (the derived class) I prefer MI first followed up by Interface
implementation if I need polymorphic specialization.

So for aggregation I actually use both. For code reuse however you just
can't beat MI. This is the point of resource based
issues/concerns/arguments.

To aggregate pre-existing computing function, MI can be PROVED to be
the best.

With Microsoft's budget and cultural drive Ander's team MUST already have
a theoretical analyst that codes metric analyzers to quantify and verify
the dev team's suspicions. I'm sure they already ran the numbers to have
proved point 1.

Point 2 : Decomposition

Once we've aggregated on a focal point we need to 'Cube' the
functionality.

Passing Note : 'Cubing the problem space' is engineering slang for the
following paradigm. Suspend the focal point in a hypothetical cube of six
facets, each facet an 'interface' to present 6 functionally orthogonal
computing units. Of course '6' is a contrived number used for the sake of
a clear visual diagram. The idea is you can 'decompose' the focal point
functions as a pre-existing Fn (functional pass through via MI) or create
a new Fn as an adaptation to a pre-existing functional interface spec.

Perfect focal points are all pass-through (via MI). A theoretical dream
and a pragmatic nightmare (with today's technology). (Hats off to
Joanna).

So with interfaces we can enforce functional orthogonality into the 'real
world architecture' and produce an 'ahead of schedule' deliverable that
people find 'looks pretty good'.

The nice thing about 'cubing' and interfaces is the ability to
form a 'functional channel' that provides a backbone for a succession
of computing units (classes grouped together in some sort of a general
pattern to process some type of data).

Summary Visual : Of Point 1 and 2

MI and Interfaces are like an hourglass with the derived class
as the center point (focal point of computing function).

Both can be used for aggregation and decomposition but their inherent
architecture belies their functional forte.

They complement each other when the inherent functional forte of each is
clearly understood. This is conjecture of course (with no formal or
informal proofs).
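To make the hourglass concrete, here is a minimal C# sketch. All names here
(F1, F2, Fcomp, Focal) are hypothetical illustrations, not from any real API:
since C# lacks MI, composition stands in for the pass-through inheritance,
while interfaces carry the decomposition.

```csharp
using System;

interface IF1 { string F1(); }
interface IF2 { string F2(); }

class F1Impl : IF1 { public string F1() { return "f1"; } }
class F2Impl : IF2 { public string F2() { return "f2"; } }

// The focal point: aggregation flows in (composed fields),
// decomposition flows out (interfaces) -- the "hourglass".
class Focal : IF1, IF2
{
    private IF1 f1 = new F1Impl();
    private IF2 f2 = new F2Impl();

    public string F1() { return f1.F1(); }  // pass-through
    public string F2() { return f2.F2(); }  // pass-through

    // Fcomp: new composite functionality built from the aggregated parts.
    public string Fcomp() { return F1() + "+" + F2(); }
}

class Demo
{
    static void Main()
    {
        Focal focal = new Focal();
        Console.WriteLine(focal.Fcomp()); // prints f1+f2
    }
}
```

The pass-through members are the manual cost that real MI would eliminate;
the interfaces are what keep the focal point polymorphically usable.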

Why no MI in C#?

I don't know (I love saying that).

What I suspect. From what little I've seen of
Anders in video (MSDN TV, Channel 9, etc) he;

Guess 1 : Anders - Is not opposed to MI above the covers
Guess 2 : Anders - Has what appears to be an intractable problem below the
covers.

Fear 1 : Anders - Does not have a theoretical analyst to run the numbers
and prove (to him and team) the premise to back Guess 1 above.

Parting note : The monthly MI discussion in this forum.

Someone wondered if the same people keep posting about MI.

I don't know about others but I never have time for anything and I certainly
should not be doing this now :-) This is my first and last post on MI because
it sums up everything I know (or conceivably will know) about MI vs Interface.

My motivation for responding to your post was 'it seems silly'.

An excellent observation if I may add.

Executives (in high pressure start ups) invariably recognize the following
personality types (they exist in all professional communities).

1. Motor mouth
2. Always has to be right
3. Always has to have the last word
4. etc (these are not pejorative but 'how to handle' labels)

I suspect that as Human Beings its takes us time (as a community) to
figure something out because we all (at one time or another) become less
than our ideals would have us be.

I personally don't care about MI being in C# Vx since I'm too busy with my
own forte/talent set.

I am (however) passionate about programming and the current 'state of the
art' discussed in this forum.

Your comment on 'silly' translates to the operational/managerial issues
which I trust are minimized by our conduct in this venue.

I never post on forums but I'm so thankful for Skeet, Joanna and many
others. In thanks I thought I'd put in my two cents (just this once) on
MI vs Interface.

I hope the above points on the complementary nature of MI and Interface
provides you with a different perspective of thought that helps you (and
the community) better understand the utility of MI and Interfaces.

Shawnk

PS. I always look forward to those who shoot holes in all the above ideas
as it always helps me better understand stuff. I NEVER take such input
personally (so fire away ;-).

Mark


Mar 28 '06 #22

P: n/a
> Summary point : Theory and Science CREATE real world architecture.

In part. Try reading

http://www.laputan.org/mud/

Mar 28 '06 #23

P: n/a

Sorry not to insert these points in all the right places in your posts and
replies...

Some observations, from the hip:

(1) Classic MI (like SI) is an "is-a" relationship, i.e., implementation
inheritance. I think this is almost always wrong. An object is almost never
so many things at once -- it may look like different objects from various
angles, or behave like different objects in different contexts, but that
does not mean that its *identity* and *implementation* should be conflated
witth these concepts -- at least not by mandate. Classic "merged object" MI
is simply one cheap (conceptually, anyway) way of acheiving these goals.
Delegation to other objects can acheive the same results, and I think having
better support for explicit and implicit delegation is better than "object
merging" in most cases.
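A minimal sketch of that delegation alternative (the types and names here are
hypothetical, chosen only for illustration): the object behaves like a
printer in printing contexts without merging a printer class into its
identity or implementation.

```csharp
using System;

interface IPrinter { void Print(string s); }

// A mix-in-style behaviour acquired by delegation rather than by
// merging a second base class into Document's inheritance chain.
class ConsolePrinter : IPrinter
{
    public void Print(string s) { Console.WriteLine("printing: " + s); }
}

class Document : IPrinter
{
    private IPrinter printer;
    public Document(IPrinter printer) { this.printer = printer; }

    // Explicit delegation: Document "looks like" a printer from this
    // angle without *being* one.
    public void Print(string s) { printer.Print(s); }
}

class Demo
{
    static void Main()
    {
        Document doc = new Document(new ConsolePrinter());
        doc.Print("report"); // prints: printing: report
    }
}
```

The delegation line is boilerplate that language support (implicit
delegation) could generate, which is the point being argued above.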

(2) I think MI is very powerful, but it can have large costs if not done
carefully. This, of course, is true for any language feature -- but there
are some things that are more easily abused than others. If you want "a
slice" of functionality, and use MI to get at it from a class that has this
slice, you end up with all the other baggage of that class. Not just
theoretical baggage but, depending on call-tree snipping by the compiler,
inlining, "v-table" design, late vs. early allocation, etc., real measurable
baggage. A really clever compiler could probably omit both members and code
not used by actual clients, assuming you never want to get at them via
reflection. I agree that better ways to slice-and-dice (whether it's AOP or
whatever) could be useful. This is mitigated by a framework where the
classes were designed from the beginning to be used as "mix-ins", but use by
C# now could result in some truly questionable objects -- as I think ends
up being the case with MI in practice. Some baggage is worse in languages
where all fns are "virtual", of course, but a lot of the powerful MI I've
seen stems from the ability for an object to take on the behavior of a
mix-in class at *run-time* by adding a superclass (consider adding a
DB-persistence manager to objects whose source code you don't have), and
this isn't likely to happen in C#, as a core feature anyway.

(3) I think the most elegant MI designs I've seen are probably in the
Lisp/CLOS and similar areas. This is probably because (a) people there tend
to think much more about the design at this level than most programmers
(not a slight against most programmers -- but most programmers simply have
to get stuff done, and usually in some prescribed way), and (b) because
these languages actually let you control (via multi-method dispatch, macros,
meta-class programming, etc.) exactly how you want your system to work. Most
"mortal" programmers simply don't have the luxury of working on the
meta-meta level, the meta-level, and the problem level at the same time --
especially when the problem level may consist of multiple tiers itself. The
danger is that subsequent programmers may not be so clever that they can
design within this system and maintain it.

I'm not against MI (or for that matter meta-programming of some kind -- I
posted a few weeks ago about wishing the event-on-top-of-delegates mechanism
were done in a more general fashion) but such generalizations need to be
done carefully. I admit, it's pretty bad right now when an O/R mapper forces
you to make its class the "root" of all your objects, forcing you to
"insert" this layer into your hierarchy, because inheritance is a natural way
to approach this in lieu of any other language support -- an alternative way
is to "register" all the objects that require persistence with a manager
object, causing what is, in effect, a "linked bi-object" (or "co-objects")
as far as the memory manager is concerned, so it would be more efficient to
just glue them together as classic MI does -- so you could conceivably have
delegation with MI efficiency. (The compiler, with or without user hints via
keywords or attributes, could figure these mostly-overlapping "co-lifetimes"
out -- let's just say this memory-manager/GC feature is patent-pending by me
on this date:)) This "systems space" problem is one of those cases that may
not fall into the "almost always wrong" category. However, I think in
"problem space" this 80-wrong/20-right rule probably applies. I think MI
just needs to be taken to the next level before it goes mainstream again. I
agree that, without extra support for "smart" delegation or some other
MI-like abstraction, MI would be useful in the toolbox, as long as it was
approached with caution.
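The "register with a manager" alternative mentioned above might look like
this rough sketch (all names are hypothetical; a real O/R mapper would of
course do far more than print snapshots):

```csharp
using System;
using System.Collections.Generic;

// Persistence without making the mapper a base class: objects opt in
// by implementing an interface and registering with a manager.
interface IPersistable { string Snapshot(); }

class PersistenceManager
{
    private List<IPersistable> objects = new List<IPersistable>();

    public void Register(IPersistable obj) { objects.Add(obj); }

    // "Save" every registered object; here, saving is just printing.
    public int SaveAll()
    {
        foreach (IPersistable obj in objects)
        {
            Console.WriteLine("saving: " + obj.Snapshot());
        }
        return objects.Count;
    }
}

class Customer : IPersistable
{
    private string name;
    public Customer(string name) { this.name = name; }
    public string Snapshot() { return "Customer(" + name + ")"; }
}

class Demo
{
    static void Main()
    {
        PersistenceManager mgr = new PersistenceManager();
        mgr.Register(new Customer("Ada"));
        mgr.Register(new Customer("Lin"));
        Console.WriteLine(mgr.SaveAll()); // prints 2 after the saves
    }
}
```

The manager and the registered object are the "co-objects" of the argument
above: logically linked, but kept out of each other's inheritance chains.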

thanks,
m
"Shawnk" <Sh****@discussions.microsoft.com> wrote in message
news:0B**********************************@microsoft.com...
<snip>


Mar 28 '06 #24

P: n/a
Dear Mr. Shawnk,

I really enjoyed your posts on MI last few days. They were really
insightful and shown that you are a very knowledgeable person, with much
insight and experience. I would like to ask you about your point of view on
the following scenario:

Having a class with implicit conversion to another type, along with
delegation enabling some sort of MI in C#.

class A : C
{
//set of methods
public static implicit operator B(A a)
{
return new B(a);
}
}

class B
{
private A a;

//set of methods
public B(A a)
{
this.a = a;
}
}

I know this does not solve the issue, but along with delegation enabled by
maintaining reference to A in B (and possibly including and delegating
methods from A to B, this gives, in some scenarios, a "feeling" that one has
multiple inheritance (e.g. A is C and "kind of is" B). It has helped me a
few times while I thought I needed MI.
What's your opinion on this?
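For reference, a small self-contained variant of that sketch (with
hypothetical methods FromA/Describe added purely for illustration) shows the
effect in use: A really is a C through inheritance, and is usable as a B
through the conversion.

```csharp
using System;

class C { }

class A : C
{
    public string FromA() { return "A"; }
    // Implicit conversion, as in the sketch above.
    public static implicit operator B(A a) { return new B(a); }
}

class B
{
    private A a;
    public B(A a) { this.a = a; }
    // Delegation back to the wrapped A.
    public string Describe() { return "B wrapping " + a.FromA(); }
}

class Demo
{
    static void Main()
    {
        A a = new A();
        B b = a;                      // conversion: a "kind of is" B
        Console.WriteLine(b.Describe()); // prints B wrapping A
        Console.WriteLine(a is C);       // true inheritance: prints True
    }
}
```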

"Shawnk" <Sh****@discussions.microsoft.com> wrote in message
news:0B**********************************@microsoft.com...
<snip>


Mar 28 '06 #25

P: n/a
Not worth much, I know, but I cast my vote with the MI-clan.

I spent several years programming in C++ and found MI in *some few* cases to
be the only thing for the job. You could get the same effect as an interface
with abstract base classes, so it wasn't as though that option weren't open
to you - but I found that in some cases MI was the only thing for the job.

Not having it forces you to cut and paste code - not good for reliability.

The other thing I desperately missed when I switched to the beta release of
C# (I've been developing in it since before the first release) was
templates. I know that's here now in V2, but I will mourn MI.

Cheers,

Adam.
========
Mar 29 '06 #26

P: n/a
Shawnk <Sh****@discussions.microsoft.com> wrote:

<snip>
I still believe the demise of C# is inevitable for the above reasons and
will unfold in the future market space history for the above reasons.


The demise of C# is inevitable because of the lack of MI? I think
that's a real stretch. C doesn't have MI either - is the demise of C
inevitable too?

Arguably the demise of any particular language is at least very likely
- what are the chances that we'll be using Ruby, Java, Python or even C
in a thousand years' time? Now, if you want to talk about an *imminent*
demise, that's a different matter - care to put a timescale on when you
think C# will stop being used?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Mar 29 '06 #27

P: n/a
Arguably the demise of any particular language is at least very likely
- what are the chances that we'll be using Ruby, Java, Python or even C
in a thousand years' time? Now, if you want to talk about an *imminent*
demise, that's a different matter - care to put a timescale on when you
think C# will stop being used?


Jon,

Always a pleasure to hear from you!

I should do a partial retraction and say 'demotion' instead of demise.

Although some languages, like APL (A Programming Language), exist as long as
our culture does, their utility in present society is gauged as 'value to
software production' and 'market share' per year.

The social utility of most interest to me (and perhaps most engineers)
is the yearly contribution to 'new and improved' (love that phrase)
ways of computing expression (see metric 1 below).

To gauge demotion - a metric profile would be;

1. Current use as a percentage of languages in community conversations.
2. Current yearly contributions as a vehicle for computing expressions.
3. Historical contribution to computing
4. Current per year market share (in its native market) in sales
5. Current use as a percentage of all programmers (on earth)
6. Current use as a percentage of all software produced.

The 'demotion' (no longer demise) I unconsciously inferred (via the term
demise) is points 1 and 2 above. The other points follow from the first
two. Demise would be a position existing only in history (as with, say, APL;
I may be wrong here, but you get the point - a very small production
community, market wise, with APL).

Timescale:

If you'll indulge me :-)

I speak (inherently) from the perspective of 'Thoughtscale'. That is to
say, the exponential (currently, unless some Atlantean event occurs) rise in
the thinking behind our current social technology. I can backtrack this
to answer 'timescale'.

The 'time of demotion' would span two seminal events in the way
we UNDERSTAND computing. Instead of 'years' I look at 'how we think
as a community'. This we can map to timescale, since the timescale is
an effect of the thoughtscape (for lack of a better term).

So for computing expression technologies...

Two seminal events occur in the (A) near term and in the (B) long term.

Near term is a 'market challenger' we shall call 'Language X'. Long term
is the ability of machines to think like men - real intelligence - not
artificial.

The long term perspective is given only to 'frame' the time period of C#'s
non-historical existence.

A. Social inception date : MI + Interface inclusive RTM release of Language
'X'.

In passing the market context for the market entry point of X would be;

-- Assumptive criterion : Language X has comparable core functionality
-- Other language offerings non-C# and non-X

B. Social inception date : RTM release of a 'compiler' driven by human
speech input.

The market context for this means that von Neumann systems are a completely
embedded commodity in a higher level of technology (thinking
machines).

To focus on 'what and when' for the C# demotion I'll include the scenario
where my premise 'inevitable demotion' is wrong.

Scenario 1 : No demotion

C# Vx (in the future) includes MI with Interfaces AFTER INFORMAL AND FORMAL
PROOFS SHOW the value of MI inclusion.

In passing all C# competitors say 'DOOH!'.

Scenario 2 : Demotion across technology line Point A->B above in 'thoughtscape'

Demotion is clear (to most everyone) 5 years after point A, the RTM
release of X.

ANSWER: (To your timescale request)...

Because of the impact of this forum (C# as primary vehicle for computing
expression evolution) on the computing thoughtscape I would say point A
happens in 3 to 7 years from today (a hipshot).

The 'impact' effects funnel out via market competitors reading this forum.

Hypothetical Scenario: (For seminal market event A)

Because the Chinese have cuneiform, a FOUNDER finds a semantic
expressive style (for computing) that is contained in the core X artifacts
(used by role classes coded by the user community of language X). The social
context is too off-subject (Chinese entrepreneurs and startups) but you get
the idea of the true 'different path branch' in the computing thoughtscape
brought about by language X.

Summary : (Tongue in cheek :-)

I admit language X's ascendance is dependent more on its fundamental
semantic utility than on MI (its programming 'style'). The 'rate of X's
ascendance' hinges on;

1. True semantic utility
2. Incorporation of MI and Interface
3. C# never incorporating MI

I think 'language X' is inevitable because of what I find (in MY coding
style) based on points 1->3 (directly above) occurring.

The second part of your post (again, thank you) - *imminent* demise - is
point B. That is to say, point B is the point of 'imminent' (love your
term there) demise even if C# is still the market leader.

I conjecture at best 2012 (five years from 2007) for point B but it may
take as long as 3 more 'career time' generations (3*12) or 36 years from
today.

[Career generations: the 12 years where workers are really hot and
productive]

That would put the horizon at (today plus 36) 2042.

Thanks for your input.

Shawnk
Mar 30 '06 #28

P: n/a

One addendum:

Language X is envisioned to be the last (I know you're laughing - it's OK)
or close to the last 'human programming language'.

It would (ideally) be a complete, closed and correct summation (and
reduction) of computing expression in von Neumann systems.

It would lead to pattern based role/artifact oriented virtual reality
that would provide a logical domain for thinking machines to operate on
(Machine generated code).

Post-Language X offerings would be stylistic expressions but would not have
the inherent impact of Language X. Language X is the last hurrah.

Machine generated code would subsume the programming market space with a
more powerful tool set and expressive paradigm (logical thought).

So when I said demise/demotion I really meant the above as a context for
the historical closure of the MI vs Interface debate.

Shawnk
Mar 30 '06 #29

P: n/a
Mike,

It was a pleasure reading your response.

I agree with your thinking.
To summarize the train of thought.
---------------------------

Point (1) : Above the covers : Explicit/Implicit Delegation
Delegation to other objects can achieve the same results, and I think having
better support for explicit and implicit delegation is better than "object
merging" in most cases.


Point (2) : Below the covers : Real Baggage

A> ... you end up with all the other baggage ...
B> ... Not just theoretical baggage .... real measurable baggage
C> ... better ways to slice-and-dice ....
D> ... could result in some truly questionable objects ....
E> ... powerful MI .. stems from ... behavior of a mix-in class ...

Point (3) : Use case context : Below the covers : Observation

A> ... MI .. design at this level ..[vs].. have to get stuff done ..
B> ... control ... how ... system to work ..
C> ... luxury ..

Point (4) : Use case context : Luxury of a 'clean slate design' (paraphrase)

A> Most "mortal" programmers simply don't have the luxury of working on the
A> meta-meta level, the meta-level, and the problem level at the same time -

Point (5) : Quality of result : Danger

A> The danger is that subsequent programmers may not be so clever that
A> they can design within this system and maintain it.

Point (6) : Under the covers : Functional state scope partitioning

A> An alternative way is to "register" all the objects

B> This "systems space" problem is one of those cases that may
B> not fall into the "almost always wrong" category
---------------------------

As you, Adam, myself and others have mentioned, the 'use case context'
and 'quality of use' determine MI usefulness (disregarding below-cover
issues).

As you may have guessed;

(1) My only concern is with ABOVE THE COVER USAGE (Point 1->3, and 6.A,B)
(2) I take contracts with a CLEAN SLATE and no legacy (Point 4.A)
(3) I have (the user needs) a natural TALENT FOR PARTITIONING systems (Point
5.A)

Within the use case domain of the three points above, MI is just 'peachy'
(works well).

Anders seems to have serious problems with the Jitter target design along
the Point 6 state space problem you mentioned (just a passing hip shot).

Your articulate assessment summarized the market case for MI in C# quite well.

I want to thank you for helping me understand my position (basically the
same as yours I think) in the usage context of the three points just
above.

You really, however, hit the nail on the head when you said...

Point 7 : Use case definition : Next level of MI

A> just needs to be taken to the next level before it goes mainstream again.

C# V.X still has the opportunity to take MI to the 'next level'.

I think many in the C# user community (such as I) would love to see this
happen along the lines you mentioned (...smart..with caution).

Solving the under-the-covers baggage and state space issues (Point 2, Point 6)
is what we pay Microsoft for (read that, Anders and team :-) in MSDN
licenses.

Being adaptive, I'll (most likely) pick up a better language when (and if)
it comes along. I still consider C++ viable (probably the next version) for
leading-edge work (just beyond the technology envelope).

However, I would prefer a 'smart and cautious' MI to be incorporated after
the contributions of LINQ (what a real Godsend that is) are absorbed into
the community.

Thank you so much for your excellent input.
I hope someone in the C# dev team reads it!

Shawnk

Mar 30 '06 #30

P: n/a
Lebesgue,

This is so good I want to 'play' with code for a few days and sleep on it.

I'll post next week in light of 'one last' issue not covered in this thread.

In the derived class (the focal point of the MI expression) we need to create
new composite functionality. Case in point: function Fcomp.

To simplify: Fcomp needs F1 and F2 (incoming to the focal point via MI). Fcomp
needs some instance space.

I just want to think over some issues I have with Fcomp situations using
your recommendation.

Will post next week.

Shawnk
Mar 30 '06 #31

P: n/a


Lebesgue wrote:
Having a class with implicit conversion to another type, along with
delegation, enables some sort of MI in C#.
One drawback of this workaround is that it breaks
runtime type dispatching and the usual type inspections.
I know this does not solve the issue, but along with delegation enabled by
maintaining a reference to A in B (and possibly including and delegating
methods from A to B), this gives, in some scenarios, a "feeling" that one has
multiple inheritance (e.g. A is C and "kind of is" B). It has helped me a
few times while I thought I needed MI.


Personally I think of this as "implicitly adapting by conversion", which
may be good enough for some cases, but it doesn't buy
implementation inheritance, so it's no good for cherry-picking
implementation.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Mar 30 '06 #32
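[Editor's note: to make the drawback under discussion concrete, here is a minimal sketch of the conversion-plus-delegation workaround. The `Writer`/`Logger` names are invented for illustration, not taken from the thread. The implicit conversion lets one class stand in for another at compile time, but the runtime type never changes, so `is`/`as` checks and type dispatch see no relationship at all.]

```csharp
using System;

class Logger
{
    private readonly string prefix;
    public Logger(string prefix) { this.prefix = prefix; }
    public void Log(string msg)  { Console.WriteLine(prefix + msg); }
}

class Writer
{
    public string Name = "writer";

    // The workaround: an implicit conversion manufactures a Logger that
    // acts on behalf of this Writer (delegation via a new wrapper object).
    public static implicit operator Logger(Writer w)
    {
        return new Logger("[" + w.Name + "] ");
    }
}

static class Demo
{
    static void Main()
    {
        Writer w = new Writer();

        Logger l = w;        // conversion fires: Writer "kind of is" a Logger
        l.Log("hello");      // prints "[writer] hello"

        object o = w;
        // The runtime type is still Writer, so type inspection sees no
        // base/derived relationship - this is the broken type dispatching.
        Console.WriteLine(o is Logger);   // prints "False"
    }
}
```

Note the conversion only applies at compile time, where the target type is statically known; once the instance is held as `object`, the "inheritance" disappears.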

P: n/a
Excellent article. Loved the 'keep it working' pattern.

Reminds me of the executive summary approach for;

1. Quality (good)
2. Cost (cheap)
3. Time (fast)

The hip shot response is 'you can choose just one'.
The reality is you have to prioritize all three in a shifting field
of tactical and strategic forces.

Thank you again for the article link.

Shawnk

"Bruce Wood" wrote:
Summary point : Theory and Science CREATE real world architecture.


In part. Try reading

http://www.laputan.org/mud/

Mar 31 '06 #33

P: n/a
Reminds me of the executive summary approach for; 1. Quality (good)
2. Cost (cheap)
3. Time (fast)


Passing note:

Of course, on a project-by-project basis the budget/schedule freezes the
force field (somewhat, and just enough) to allow a reasonable priority
of QCT to be established.

The executive summary being that the project will be 'done right' or
'done cheap' or 'done fast'. Tactical project tendency is 'done cheap/fast' and
strategic project tendency is 'done right'.

Shawnk

Mar 31 '06 #34

P: n/a
What I've always loved about that paper was that Foote and Yoder were
the only academics I had seen who examined the Big Ball of Mud
architecture from the point of view that business programmers aren't
uninformed louts, so there must be good reasons why people build these
systems. As a result they came up with some telling insights.

The original version of the paper was unrepentantly sympathetic to the
business programmer's plight. The current version has been altered
somewhat to play more to the academic audience, so now the original
lives on only in my file drawer. Too bad: it was better before the
revisions. :-)

Anyway, it's one of my favourite research papers because it's so true
and so real. Glad you enjoyed it.

Mar 31 '06 #35

P: n/a
Shawnk <Sh****@discussions.microsoft.com> wrote:
Arguably the demise of any particular language is at least very likely
- what are the chances that we'll be using Ruby, Java, Python or even C
in a thousand years' time? Now, if you want to talk about an *imminent*
demise, that's a different matter - care to put a timescale on when you
think C# will stop being used?

<snip - I'm afraid I don't have time to answer this fully>
I conjecture at best 2012 (five years from 2007) for point B but it may
take as long as 3 more 'career time' generations (3*12) or 36 years from
today.

[Career generations: the 12 years when workers are really hot and
productive]

That would put the horizon at (today plus 36) 2042.


So your conjecture is that C# won't be the dominant language in 36
years. I don't think that's particularly surprising, and I don't think
it's got anything to do with MI. *No* programming language has been
dominant for 36 years (C itself is only 34 years old). It's a bit like
saying, "I don't think that person will live to be 200 years old
because they're left-handed."

Indeed, I would be surprised if object-orientation as we think of it
now is the dominant paradigm in 36 years (or earlier than that).

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Apr 1 '06 #36

P: n/a
Shawnk <Sh****@discussions.microsoft.com> wrote:
One addendum:

Language X is envisioned to be the last (I know your laughing -its OK) or
close to the last 'human programming language'.


That suggests that there's the possibility for one language to be the
most suitable one for all situations. I don't think that's feasible,
myself - there will always be a call for low level languages, and a
separate need for high level languages. Whether there will always be
scripting languages separate from statically compiled languages, I
don't know but I wouldn't be surprised. I suspect there are likely to
be further divisions we haven't even considered now.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Apr 1 '06 #37

P: n/a


Jon Skeet [C# MVP] wrote:
<snip>

So your conjecture is that C# won't be the dominant language in 36
years. I don't think that's particularly surprising, and I don't think
it's got anything to do with MI. *No* programming language has been
dominant for 36 years (C itself is only 34 years old). It's a bit like
saying, "I don't think that person will live to be 200 years old
because they're left-handed."

Indeed, I would be surprised if object-orientation as we think of it
now is the dominant paradigm in 36 years (or earlier than that).


Computer languages and programming paradigms are to a large extent driven by
technology.

The implementation of C# is really only possible because of the massive amounts
of memory and computing power that we now have compared to 40 years ago.

In 36 years time - who knows ? When we have the equivalent of several
tera-zillion bytes of molecular memory, perhaps we won't need computer languages.

I'm pretty sure that programmers won't be sitting at terminals tapping out code.
Apr 1 '06 #38

P: n/a

"Ian Semmel" <is***********@NOKUNKrocketcomp.com.au> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...


<snip>


Computer languages and programming paradigms are to a large extent driven
by technology.

The implementation of C# is really only possible because of the massive
amounts of memory and computing power that we now have compared to 40
years ago.

In 36 years time - who knows ? When we have the equivalent of several
tera-zillion bytes of molecular memory, perhaps we won't need computer
languages.

I'm pretty sure that programmers won't be sitting at terminals tapping out
code.


I hope not, at least not like today. Ignoring hard AI, natural language and
molecular memory, the basic methods of programming really haven't changed
since the discipline's conception. Semi-smart contextual help (Intellisense,
etc.), "visual tools", OOP, etc., are small refinements to the basic model.
("Managed memory" systems are probably the biggest leap in productivity,
IMHO.)
It's amazing really -- almost all of programming (more today than ever) is
just plumbing -- connecting pieces and fixing "impedance mismatches". But
still, we start from scratch almost every time - even code generators get
you only to step 1 and are usually one-way affairs. For a long, long time
we'll probably need a certain number of low-level coders to bootstrap new
systems and make the new breed of systems, but I'd think more of the
"plumbing" code will probably be taken over by dynamical systems with dials
that let you tweak mappings and implementations, both before and during the
lifetime of the entities involved.

m
Apr 2 '06 #39

P: n/a
Jon,

I strongly agree with you. There will always be many languages. Language 'X'
would be a dominant (if not the dominant) market leader, with the metrics of
leadership being:

1. Market size
2. General utility (most application areas)
3. Most contributions to leading the computing envelope.

For example, C++, Java, C#, Basic, Cobol, Fortran, XML, and HTML are all
important languages with contributions in their respective areas of focus.

But, IMHO, C# is my favorite with the exception of the MI feature of C++
(that's a personal preference). So I still think a 'Language X' market shift
(as per this thread) is quite feasible.

So, I agree

1) MOST suitable for ALL occasions - No.
2) MOST suitable for MOST occasions - Yes

(comparing Language X to other contenders for the MOST/MOST metric).

As always, Jon, a true pleasure to hear your thoughts.

Shawnk
Apr 2 '06 #40

P: n/a
Please note that Eiffel is a .NET language that supports a form of MI.

Regards,
Jeff

*** Sent via Developersdex http://www.developersdex.com ***
Apr 3 '06 #41

P: n/a
Jeff Louie <an*******@devdex.com> wrote:
Please note that Eiffel is a .NET language that supports a form of MI.


Yes - but the way it does it is horrible. It bends .NET pretty badly
out of shape in order to achieve MI. In particular, clients in other
languages which try to use an Eiffel object which supposedly derives
from two classes won't see it that way (and can't, of course).

Full Kudos to the Eiffel team for getting the kludge to work, but it's
still a kludge.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Apr 3 '06 #42

P: n/a
Lebesgue,

Thanks for your patience with the length of my response. Loved the
proposed approach which (for brevity) I'll call the PI (Pseudo
Inheritance) design technique. I recoded and experimented with this
approach this weekend.

Comparing 'PI' to 'MI' (after coding), my executive summary (short form)
of the differences is as follows.

----------

1. Top level specification of state space in multiple inheritance

A. The PI design technique is a wrapper for functionality around
pre-existing state space.

B. MI allows (obviously) state and functional (static and dynamic) inheritance

C. The PI technique offered (by Lebesque) is a 'functional inheritance'
(dynamic component) high cost approach.

D. The approach fails if the inherited state space (static component) is
intrinsic to the target functionality.

2. Therefore - (e.g. Child_cls is Parent_cls and "kind of is" House_cls).

A. Dynamically true
B. Statically false

C. Which is a formal agreement of your phrase '(e.g. A is C and "kind of is"
B)'.

D. The 'kind of' is a logical marker for the PI non-inheritance of
functional state space.

----------

Since I use MI for 'top level specifications' (as above), my use of PI
would be as a 'functional wrapper'. As a 'functional wrapper' on
pre-existing state space it's a good approach if all you need is a small
wrapping functionality around a pre-existing state space.

I have a follow-up to this (post) if it is of interest. It has your code
redone to prove the 2.A, 2.B assertions above. The proof also shows the
severe limitations of C# as a strategic migration tool for factoring out
common code. It brings out in code the 'kind of' (see above) inheritance
architecture flaw in C#.

To restate: the rewrite of your code shows the 'statically false' assertion
in your 'e.g.' given in the executive summary above - it sort of explains the
executive summary in code.

Thanks for your thoughtful and insightful input. I hadn't used this
approach before, but to overcome C# limitations (no MI, IMHO) this approach is
pretty handy :-)

Shawnk

PS. Let me know if you want me to post the rewritten proof of your code.

PPS. High cost based on quantitative/qualitative metrics: (1) character
count, (2) visual complexity of pass-through code and (3) logical complexity
of pass-through code.
Apr 3 '06 #43

P: n/a
Jon,

Do you think this (valiant but flawed) effort is indicative of:

1. Inherent Jitter architecture flaw/weakness
2. Inherent MSIL architecture flaw/weakness
3. An inherent multiple inheritance problem with von Neumann (Turing tape)
engines

Thanks as always for your insightful comments and input.

Shawnk

Apr 3 '06 #44

P: n/a
Shawnk,

thanks for your input. I agree with your arguments and would be glad to
see the rewritten proof. If I did my estimation right, then I think I have a
proposal which will improve the design "towards shared state space".
Looking forward to hearing from you.

Lebesgue

"Shawnk" <Sh****@discussions.microsoft.com> wrote in message
news:C4**********************************@microsof t.com...
<snip>

Apr 3 '06 #45

P: n/a
Lebesgue,

(IMHO)

The import of your PI technique to C# language architecture is the
potential for a 'low cost' functional wrapper on existing state space.
The 'wrapper' expression would allow a semantic pass through of all
'existing target' (the class being wrapped) functionality and state space.

The state space of the wrapper itself would not be accessible to the
target. It would be accessible, as coded, to the client.

The temporal data channel (return/parameter streams of target methods)
could be passed through.

The semi-persistent data facade (properties, fields) of the target could
be passed through.

I am not qualified to judge if this is good (or bad) without significant
efforts to compare to strategic C# design and alternative solutions. But
hopefully Anders and the dev team will look at this if they are not
already aware of the approach (which I suspect they are).

I DO think (hip shot) that the PI technique is an excellent way of
understanding the architectural partitioning of the state space.

I recast your code into role based artifacts as inheritance (parent/child)
and embedding (house/mansion) phenomena.

I really liked coding this approach and it helped me to better articulate
some of the key issues with MI vs. interfaces (state space specification at
the top level). Without shooting myself in the foot (hip shot astray), I
can see where this is a good solution to have when approaching the C# MI
inheritance flaw (IMHO).

Thanks so much for your input and the 'mud' article.

BTW - The 'ball of mud' paradigm is an eloquent articulation of the use
of MI as a strategic approach to long-term functional migration
(away from a big ball of mud :-)

Shawnk

PS. The rewrite of the code is below (I hope I copied it in right -
it compiles and runs OK for me).

Key point: there is no instance of 'House' identity, because the intrinsic
state space was removed via the PI approach.

---------------------

using System;

public
class MS_example_03_dem_cls
{
Parent m_parent_ins;
Child m_child_ins;
House m_house_ins;
Mansion m_mansion_ins;

public
void Demo_MI_operator()
{
m_parent_ins = new Parent ( "Fred Flintstone" );
m_child_ins = new Child ( "Pebbles" );

// What we would like to do is MI for House with the identity
// functionality (instance name).

m_house_ins = new House ( m_child_ins );
m_mansion_ins = new Mansion ( "Mansion" );

// House now contains the 'resident' functionality of Child/Parent.
// All 'resident' functions must be 'coded out' individually.
// Incoming client requests are funnelled to the delegated
// functionality resident in Child.

m_parent_ins.Print_identity();
m_child_ins.Print_identity();

// See if House can behave like Parent.
//
// With MI we could just 'pass through' the Child functionality to do
// the following, and set the name of the house to 'House' for its
// correct identity:
//
//     m_house_ins.Print_identity();
//
// With PI, the code to 'pass through' the identity function is the
// Delegation_cost_Print_identity() method.

m_house_ins.Delegation_cost_Print_identity();

// With true inheritance we get a no-cost 'pass through' of both state
// and function.

m_mansion_ins.Print_identity();

// This inheritance is an excellent mechanism for the strategic
// migration of common function (inclusive of state) toward a target
// base class library.

}

} // End_of_class

class Parent
{
private string name;

public Parent( string identity )
{
name = identity;
}
public void Print_identity()
{
Console.WriteLine(" My identity is {0} ", name);
}
}

class Child : Parent
{
public Child( string identity ) : base ( identity )
{
}

public static implicit operator House(Child child)
{
return new House(child);
}
}

class House
{
private Child child;

public House(Child child)
{
this.child = child;
}
public void Delegation_cost_Print_identity()
{
child.Print_identity();
}
}

class Mansion : Child
{
public Mansion( string identity ) : base ( identity )
{
}
}


Apr 3 '06 #46

P: n/a
Lebesgue,

In passing, I guess the PI implementation (as a C# expression artifact) would
be a 'pass-through conversion operator'. The embedded entities in the 'house'
could be decorated with an attribute (the pass-through attribute) that would
signal the compiler to generate all valid (non-conflicting) pass-through
requests of the client.

The compiler would have to prevent any semantic collision (between house and
child) by ensuring no artifacts in the child were duplicated in the house
(functional separation to ensure the semantic synchronization of the
'pass-through' functional channel - the set of artifacts being passed
through).

Shawnk

Apr 3 '06 #47
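[Editor's note: absent such compiler support, the nearest hand-written idiom is to factor the pass-through channel into an interface that both the target and the wrapper implement, with the wrapper forwarding each member by hand - effectively the manual version of the attribute-driven generation proposed above. A minimal sketch follows; the `IIdentity` interface is invented for illustration, reusing the Child/House roles from the code earlier in the thread.]

```csharp
using System;

interface IIdentity
{
    void Print_identity();
}

class Child : IIdentity
{
    private readonly string name;
    public Child(string name) { this.name = name; }
    public void Print_identity() { Console.WriteLine("My identity is " + name); }
}

class House : IIdentity
{
    private readonly Child child;
    public House(Child child) { this.child = child; }

    // Hand-written 'pass through': each IIdentity member is forwarded to
    // the embedded Child. The attribute proposal above would have the
    // compiler emit these forwarding bodies automatically.
    public void Print_identity() { child.Print_identity(); }
}

static class Demo
{
    static void Main()
    {
        IIdentity h = new House(new Child("Pebbles"));
        h.Print_identity();              // prints "My identity is Pebbles"

        // Unlike the conversion trick, the shared interface survives
        // runtime type tests and virtual dispatch.
        Console.WriteLine(h is House);   // prints "True"
    }
}
```

The cost is the same as the thread's Delegation_cost_... method - one forwarding member per passed-through artifact - but clients can now treat Child and House uniformly through `IIdentity`.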

P: n/a
Shawnk <Sh****@discussions.microsoft.com> wrote:
Do you think this (valiant but flawed) effort is indicative of:

1. Inherent Jitter architecture flaw/weakness
2. Inherent MSIL architecture flaw/weakness
3. An inherent multiple inheritance problem with von Neumann (Turing tape)
engines


I think it's the normal problem of trying to make a square peg fit in a
round hole. The framework has its own idioms, and Eiffel wants to
bypass those idioms.

Without *some* idioms/rules, the CLR would be too general to be useful,
IMO. However, no set of idioms/rules can accommodate every way of
approaching things.

In short, I don't think it's a flaw in anything particularly - you have
to accept that no framework is going to be perfect for every
programming paradigm.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Apr 6 '06 #48
