According to Troelsen in "C# and the .NET Platform":
"Boxing can be formally defined as the process of explicitly converting a value
type into a corresponding reference type."
I think that my biggest problem with this process is that the terms "value type"
and "reference type" mean something entirely different than what they mean on
every other platform in every other language. Normally a value type is the
actual data itself stored in memory (such as an integer), and a reference type
is simply the address of this data.
It seems that .NET has made at least one of these two terms mean something
entirely different. Can someone please give me a quick overview of what the
terms "value type" and "reference type" actually mean in terms of their
underlying architecture?
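To make the question concrete, here is a minimal C# sketch of the conversion Troelsen's definition describes (identifiers invented for illustration):

class BoxingSketch
{
    static void Main()
    {
        int i = 42;          // value type: the 32 bits live directly in the variable
        object boxed = i;    // boxing: the value is copied into a new heap object,
                             // and 'boxed' holds a reference (an address) to it
        int j = (int)boxed;  // unboxing: a runtime type check, then a copy back out
    }
}

If I have this right, the heap copy is what makes this more than a compile-time conversion.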
"Barry Kelly" <ba***********@gmail.comwrote in message
news:is********************************@4ax.com...
Peter Olcott wrote:
>A strongly type language like C++ effectively prevents any accidental type errors,
Not all errors. If you take the address of a variable, C++ doesn't do
anything to ensure that the variable you've taken the address of lives
longer than the variable which stores the taken address. This is more what I
mean by memory safety, over type safety. It's a far stronger commitment.
The mere existence of access violations in commercial programs is
evidence enough for this.
>why bother with more than this?
There is also another class of error: intentional errors, to (e.g.)
violate security when running in a browser as another poster indicated,
or in some hosted process such as a web hosting provider's ASP.NET
context, or in a SQL Server 2005 process, etc.
I was originally thinking that it might be useless to make one set of languages
completely type safe as long as another set of languages exists that is not type
safe. The authors of malicious code simply would not migrate to the new
technology.
>
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
That would seem to be a fine restriction. Now if we can only add an [in]
parameter qualifier that passes all large objects by reference, yet makes them
read-only.
I don't mean to be harsh, but why don't you try programming in C# & .NET
for a year or two before you suggest ways to improve it?
The way I personally see it, you've got a myopic view of the world based
on a C++ perspective, and want to "fix" things to make you yourself feel
more comfortable.
Don't take that as me saying that I think a const by-ref for value type
parameters would be a bad thing (I don't). However, I don't think it's
badly needed either.
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
I want to fully understand exactly how the underlying architecture works so that
I can design it from the ground up using the best means. With C++ I already know
exactly what kind of machine code that anything and everything will translate
into. I need to acquire this degree of understanding of .NET before I begin
using it.
I think learning about a system is most easily achieved *while* using
it, not *before* using it. Experimentation and experience are better
teachers than replies to questions on newsgroups.
I am not
comfortable switching to C# until I know every detail of exactly how to at least
match the performance of native code C++.
I suggest you:
1) Make a first attempt at converting some algorithm you're concerned
about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to
improve and / or redesign a particular construct / technique.
-- Barry
-- http://barrkel.blogspot.com/
"Barry Kelly" <ba***********@gmail.comwrote in message
news:sa********************************@4ax.com...
Peter Olcott wrote:
>I want to fully understand exactly how the underlying architecture works so that I can design it from the ground up using the best means. With C++ I already know exactly what kind of machine code that anything and everything will translate into. I need to acquire this degree of understanding of .NET before I begin using it.
I think learning about a system is most easily achieved *while* using
it, not *before* using it. Experimentation and experience are better
teachers than replies to questions on newsgroups.
>I am not comfortable switching to C# until I know every detail of exactly how to at least match the performance of native code C++.
I suggest you:
1) Make a first attempt at converting some algorithm you're concerned
about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to
improve and / or redesign a particular construct / technique.
I don't have time to do it this way. By working 90 hours a week, I am still 80
hours a week short of what I need to get done.
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
It might be possible to design a language that has essentially all of the
functionally capabilities of the lower level languages, without the requirement
of ever directly dealing with pointers. I myself have always avoided pointers,
(since the early 1980's) they were always too difficult to debug. Instead of
using pointers I used static arrays, at least in this case I could print out the
subscripts.
You know, memory is just one big static array of bytes (albeit sparse
due to OS address space allocation), and pointers are just indexes into
the array. It follows that for every problem with pointers, there's an
analog with array indexes, though many of those problems seem contrived
unless one is really trying to replace all pointers with indexes.
I once wrote an in-memory database that used a .NET byte array for its
storage. "Dangling" indexes, incorrectly typed indexes, all the usual
problems had to be ironed out with diagnostic tools and integrity
checkers early on in the development.
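A minimal sketch of the kind of problem I mean, with a byte array standing in for memory and an integer offset standing in for a pointer (names and layout invented for illustration):

class IndexSketch
{
    static void Main()
    {
        byte[] heap = new byte[1024];   // the "memory"
        int recordOffset = 128;         // an "address" into it

        // "Write through the pointer":
        System.BitConverter.GetBytes(12345).CopyTo(heap, recordOffset);

        // If the record is later moved or freed but recordOffset is kept,
        // this read silently returns garbage - a dangling index, the exact
        // analog of a dangling pointer.
        int value = System.BitConverter.ToInt32(heap, recordOffset);
    }
}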
-- Barry
-- http://barrkel.blogspot.com/
"Barry Kelly" <ba***********@gmail.comwrote in message
news:21********************************@4ax.com...
Peter Olcott wrote:
>That would seem to be a fine restriction. Now if we can only add an [in] parameter qualifier that passes all large objects by reference, yet makes them read-only.
I don't mean to be harsh, but why don't you try programming in C# & .NET
for a year or two before you suggest ways to improve it?
The way I personally see it, you've got a myopic view of the world based
on a C++ perspective, and want to "fix" things to make you yourself feel
more comfortable.
Don't take that as me saying that I think a const by-ref for value type
parameters would be a bad thing (I don't). However, I don't think it's
badly needed either.
Although I do not have nearly the same degree of experience with C# as most C#
programmers, I do have more experience with computer language design than most
C# programmers. So if you are rating the quality of my suggestion on the
incorrect basis of credibility rather than the correct basis of validity, you
are rating from an incorrect basis.
>
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
"Arne Vajhøj" <ar**@vajhoej.dkwrote in message
news:45***********************@news.sunsite.dk...
>Peter Olcott wrote:
>>What I am looking for is all of the extra steps that form what is referred to as boxing and unboxing. In C/C++ converting a value type to a reference type is a very simple operation and I don't think that there are any runtime steps at all. All the steps are done at compile time. Likewise for converting a reference type to a value type.
In C/C++: int X = 56; int *Y = &X; now both X and *Y hold 56, and Y is a reference to X.
That code is not equivalent to what we are discussing in C#.
In fact it does not really have any equivalent in C# (not using unsafe code).
Couldn't there possibly be a way to create safe code that does not ever require
any extra runtime overhead? Couldn't all the safety checking somehow be done at
compile time?
Maybe.
I doubt that the final truth on language design is written yet.
But C# in its current versions is as it is.
Arne
Peter Olcott wrote:
I want to fully understand exactly how the underlying architecture works so that
I can design it from the ground up using the best means. With C++ I already know
exactly what kind of machine code that anything and everything will translate
into. I need to acquire this degree of understanding of .NET before I begin
using it.
The systems that I am developing are not business information systems where
something can be 10,000-fold slower than necessary and there is no way for
anyone to notice the difference. In some cases a two-fold difference in the
speed of an elemental operation can noticeably affect response time. I am not
comfortable switching to C# until I know every detail of exactly how to at least
match the performance of native code C++.
In most cases C# will be as fast as C++.
But if you want to understand the language-to-machine-instructions
stuff, then maybe C# is not for you.
It is a language designed to abstract that stuff away. You are not
supposed to care about it.
Arne
Peter Olcott wrote:
>I suggest you:
1) Make a first attempt at converting some algorithm you're concerned about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to improve and / or redesign a particular construct / technique.
I don't have time to do it this way. By working 90 hours a week, I am still 80
hours a week short of what I need to get done.
90 hours of work per week and quality code do not get along very well.
Arne
Peter Olcott wrote:
A strongly typed language like C++ effectively prevents any accidental type
errors, why bother with more than this?
Effectively?
Ever seen someone forget a copy constructor and assignment
operator and get memory messed up?
Arne
Peter Olcott wrote:
I was originally thinking that it might be useless to make one set of languages
completely type safe as long as another set of languages exists that is not type
safe. The authors of malicious code simply would not migrate to the new
technology.
????
The point is to prevent the black hat in exploiting
bugs in the code written by the white hat.
I am not aware of any language that attempts to prevent
black hats from writing malicious code (in the general
context - there are attempts to limit applications'
access).
Arne
Peter Olcott wrote:
"Jesse McGrew" <jm*****@gmail.comwrote in message
>The desire to avoid that overhead (as well as other problems with reference counting) is, presumably, why .NET uses a garbage collector instead.
That does not really eliminate reference counting; it merely delegates it to the
GC.
.NET GC does not use reference counting.
Arne
Peter Olcott wrote:
Barry Kelly wrote:
Don't take that as me saying that I think a const by-ref for value type
parameters would be a bad thing (I don't). However, I don't think it's
badly needed either.
So if you are rating the quality of my suggestion on the
incorrect basis of credibility rather than the correct basis of validity, you
are rating from an incorrect basis.
I'm not rating the quality of your suggestion at all. In fact, I
explicitly say that I'm not saying it's a bad thing. I'm just giving you
a suggestion, from my own personal opinion - that it's best to be as
well-informed on the subject as possible before suggesting improvements!
-- Barry
-- http://barrkel.blogspot.com/
"Arne Vajhøj" <ar**@vajhoej.dkwrote in message
news:45***********************@news.sunsite.dk...
Peter Olcott wrote:
>"Arne Vajhøj" <ar**@vajhoej.dkwrote in message news:45***********************@news.sunsite.dk. ..
>>Peter Olcott wrote: What I am looking for is all of the extra steps that form what is referred to as boxing and unboxing. In C/C++ converting a value type to a reference type is a very simple operation and I don't think that there are any runtime steps at all. All the steps are done at compile time. Likewise for converting a reference type to a value type.
in C/C++ int X = 56; int *Y = &X; Now both X and *Y hold 56, and Y is a reference to X; That code is not equivalent to what we are discussing in C#.
In fact it does not really have any equivalent in C# (not using unsafe code).
Couldn't there possibly be a way to create safe code that does not ever require any extra runtime overhead? Couldn't all the safety checking somehow be done at compile time?
Maybe.
I doubt that the final truth on language design is written yet.
But C# in its current versions is as it is.
Arne
Yes and they are improving all the time.
"Arne Vajhøj" <ar**@vajhoej.dkwrote in message
news:45***********************@news.sunsite.dk...
Peter Olcott wrote:
>>I suggest you:
1) Make a first attempt at converting some algorithm you're concerned about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to improve and / or redesign a particular construct / technique.
I don't have time to do it this way. By working 90 hours a week, I am still 80 hours a week short of what I need to get done.
90 hours of work per week and quality code do not get along very well.
Arne
If the absolute prerequisite to all coding is a 100% complete design, then
quality remains optimal.
Peter Olcott wrote:
"Barry Kelly" <ba***********@gmail.comwrote:
Peter Olcott wrote:
I am not comfortable switching to C# until I know every detail of exactly
how to at least match the performance of native code C++.
I suggest you:
1) Make a first attempt at converting some algorithm you're concerned
about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to
improve and / or redesign a particular construct / technique.
I don't have time to do it this way.
OK. Let me rephrase it: I think you'll get the information you need
faster by doing it that way, not least because, as the mantra of every
performance expert goes, you need to measure, measure, measure!
It doesn't need to be a whole application, just some basic routines.
Start with the innermost loops; if that's too much, go deeper, if it's
still too much, then simplify. But without measurement, even with all
the questions and answers you get here on the forums, it's essentially
just guessing. I'm discounting algorithmic analysis because I'm sure
it's mostly baked and done already since you've done it before in other
languages.
You might even find out relatively quickly that C# & .NET won't do what
you need. Even then, all isn't lost; you could look at using C++/CLI to
integrate some existing unmanaged C++ code, and still get benefits from
the rest of .NET.
By working 90 hours a week, I am still 80
hours a week short of what I need to get done.
Huh? I'm very sorry that you can't get a better job? How do you expect
people to reply to this - it's not a technical problem!
-- Barry
-- http://barrkel.blogspot.com/
Arne Vajhøj wrote:
But if you want to understand the language-to-machine-instructions
stuff, then maybe C# is not for you.
It is a language designed to abstract that stuff away.
You are not
supposed to care about it.
I strongly disagree, especially with this last statement. By Joel's Law
of Leaky Abstractions, I believe it's important to be aware of the
abstractions and assumptions one is basing one's position upon. And the
CPU hasn't gone away: for working with GCHandle / interop with unmanaged
code, for writing correct lock-free code, for getting maximum
performance when it's needed - you need to peer under the covers to get
that info.
-- Barry
-- http://barrkel.blogspot.com/
"Barry Kelly" <ba***********@gmail.comwrote in message
news:8t********************************@4ax.com...
Peter Olcott wrote:
>"Barry Kelly" <ba***********@gmail.comwrote:
Peter Olcott wrote:
I am not comfortable switching to C# until I know every detail of exactly
how to at least match the performance of native code C++.
I suggest you:
1) Make a first attempt at converting some algorithm you're concerned
about to C#.
2) Profile it to find out what's slower than your budget allows.
3) Then perhaps ask specific questions on the newsgroups about how to
improve and / or redesign a particular construct / technique.
I don't have time to do it this way.
OK. Let me rephrase it: I think you'll get the information you need
faster by doing it that way, not least because, as the mantra of every
performance expert goes, you need to measure, measure, measure!
It doesn't need to be a whole application, just some basic routines.
Start with the innermost loops; if that's too much, go deeper, if it's
still too much, then simplify. But without measurement, even with all
the questions and answers you get here on the forums, it's essentially
just guessing. I'm discounting algorithmic analysis because I'm sure
it's mostly baked and done already since you've done it before in other
languages.
You might even find out relatively quickly that C# & .NET won't do what
you need. Even then, all isn't lost; you could look at using C++/CLI to
integrate some existing unmanaged C++ code, and still get benefits from
the rest of .NET.
I need to thoroughly understand all of the means to improve the performance of
C# programs before I invest the time to learn any other aspect of C# or the .NET
architecture. Directly asking questions on this newsgroup has proved to very
efficiently fulfill my requirements.
I am learning the kinds of nuances that are tiny little footnotes in 1,000 page
books, without having to carefully study these 1,000 page books. Trial and
error (as you are suggesting) is an even less efficient approach than carefully
studying these 1,000 page books. Trial and error would have never told me about
the [ref] parameter qualifier, and it was only a tiny little footnote in a 1,000
page book.
>
>By working 90 hours a week, I am still 80 hours a week short of what I need to get done.
Huh? I'm very sorry that you can't get a better job? How do you expect
people to reply to this - it's not a technical problem!
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
I need to thoroughly understand all of the means to improve the performance of
C# programs before I invest the time to learn any other aspect of C# or the .NET
architecture. Directly asking questions on this newsgroup has proved to very
efficiently fulfill my requirements.
I doubt you will understand C# performance just by asking questions
here without actually coding.
When it comes to performance, knowing some pieces here and there may
be worse than not knowing anything.
I am learning the kinds of nuances that are tiny little footnotes in 1,000 page
books, without having to carefully study these 1,000 page books. Trial and
error (as you are suggesting) is an even less efficient approach than carefully
studying these 1,000 page books. Trial and error would have never told me about
the [ref] parameter qualifier, and it was only a tiny little footnote in a 1,000
page book.
The point is that there is actually useful and necessary stuff in
those 1000 pages.
Arne
"Barry Kelly" <ba***********@gmail.comwrote in message
news:o8********************************@4ax.com...
Peter Olcott wrote:
>Barry Kelly wrote:
Don't take that as me saying that I think a const by-ref for value type
parameters would be a bad thing (I don't). However, I don't think it's
badly needed either.
So if you are rating the quality of my suggestion on the incorrect basis of credibility rather than the correct basis of validity, you are rating from an incorrect basis.
I'm not rating the quality of your suggestion at all. In fact, I
explicitly say that I'm not saying it's a bad thing. I'm just giving you
a suggestion, from my own personal opinion - that it's best to be as
well-informed on the subject as possible before suggesting improvements!
I am well-informed on the general subject of computer language design. The
ideal computer language design involves reducing programmer effort without
reducing program performance.
Since my suggestion reduces programmer effort, AND increases program
performance, it is therefore an optimal improvement to the current design.
>
-- Barry
-- http://barrkel.blogspot.com/
Barry Kelly wrote:
Arne Vajhøj wrote:
>But if you want to understand the language -machine instructions stuff, then maybe C# is not for you.
It is a language designed to abstract that stuff away.
>You are not supposed to care about it.
I strongly disagree especially with this last statement. By Joel's Law
of Leaky Abstractions, I believe it's important to be aware of the
abstractions and assumptions one is basing one's position upon. And the
CPU hasn't gone away: for working with GCHandle / interop with unmanaged
code, for writing correct lock-free code, for getting maximum
performance when it's needed - you need to peer under the covers to get
that info.
For interop you obviously need to know something about
the native stuff to use the functionality. That is not
"looking under the hood".
Regarding getting maximum performance then it can
be necessary to look under the hood, but it can also
be dangerous. Often you end up optimizing for the current
.NET version on your current PC. Performance characteristics
for the .NET versions and the hardware being used in the
code's lifetime may be significantly different.
The term "virtual machine" is not popular in the .NET world
due to its Java sound. But you are coding to such a beast.
And MS or whoever provides the .NET runtime for the platform
handles the mapping from that to the real thing.
If the programmer wants to see inside his/her head the
native instructions for the code being written (as was the
case in the post I commented on), then I don't think
C# is the right choice.
Arne
"Arne Vajhøj" <ar**@vajhoej.dkwrote in message
news:45***********************@news.sunsite.dk...
Peter Olcott wrote:
>I need to thoroughly understand all of the means to improve the performance of C# programs before I invest the time to learn any other aspect of C# or the .NET architecture. Directly asking questions on this newsgroup has proved to very efficiently fulfill my requirements.
I doubt you will understand C# performance just by asking questions
here without actually coding.
In my case I understand the performance from understanding the details of the
underlying architecture. If you don't fully understand the underlying
architecture, then you can't fully understand the performance issues. From a
theoretical point of view there is no reason why the .NET architecture needs to
be any slower than straight native code.
With the advent of generics it is now far simpler to very closely match the
performance of straight native code. Eventually the .NET architecture will be
able to exceed native code performance. It will be able to exceed native code
performance by simplifying the process of computer language construction, thus
allowing more time to be spent on creating better compilers.
>
When it comes to performance, knowing some pieces here and there may
be worse than not knowing anything.
>I am learning the kinds of nuances that are tiny little footnotes in 1,000 page books, without having the carefully study these 1,000 page books. Trial and error (as you are suggesting) is an even less efficient approach than carefully studying these 1,000 page books. Trial and error would have never told me about the [ref] parameter qualifier, and it was only a tiny little footnote in a 1,000 page book.
The point is that there is actually useful and necessary stuff in
those 1000 pages.
Arne
"Arne Vajhøj" <ar**@vajhoej.dkwrote in message
news:45***********************@news.sunsite.dk...
Barry Kelly wrote:
>Arne Vajhøj wrote:
>>But if you want to understand the language -machine instructions stuff, then maybe C# is not for you.
It is a language designed to abstract that stuff away.
>>You are not supposed to care about it.
I strongly disagree especially with this last statement. By Joel's Law of Leaky Abstractions, I believe it's important to be aware of the abstractions and assumptions one is basing one's position upon. And the CPU hasn't gone away: for working with GCHandle / interop with unmanaged code, for writing correct lock-free code, for getting maximum performance when it's needed - you need to peer under the covers to get that info.
For interop you obviously need to know something about
the native stuff to use the functionality. That is not
"looking under the hood".
Regarding getting maximum performance then it can
be necessary to look under the hood, but it can also
be dangerous. Often you end up optimizing for the current
.NET version on your current PC. Performance characteristics
for the .NET versions and the hardware being used in the
code's lifetime may be significantly different.
The term "virtual machine" is not popular in the .NET world
due to its Java sound. But you are coding to such a beast.
And MS or whoever provides the .NET runtime for the platform
handles the mapping from that to the real thing.
If the programmer wants to see inside his/her head the
native instructions for the code being written (as was the
case in the post I commented on), then I don't think
C# is the right choice.
Arne
If a C# programmer lacks the ability to see inside his head the underlying
machine code corresponding to the specified C# code, then this programmer lacks
a sufficient understanding of C# and .NET.
For business apps that read and write text to databases, this won't make much of
a difference. To the author of the database management system itself, this makes
a significant difference. The difference is between excellent quality and
mediocrity.
Peter Olcott wrote:
I am well-informed on the general subject of computer language design.
The
ideal computer language design involves reducing programmer effort without
reducing program performance.
I think this definition of 'ideal' is faulty. There are many ideals in
computer languages: imperative languages are not ideal for querying
(that's why we have SQL), nor pattern matching (that's why we have
regular expressions), etc.
"Programmer effort" can only be measured in terms of solving some
problem. Some languages are better at some classes of problems than
others. There is no single ideal.
Also, your statement precludes any reduction in programmer effort if it
reduces program performance. By that logic, we'd all be programming in
assembler. Programming languages are abstractions: they hide details
behind a conceptual framework. In fact, they are abstraction-building
abstractions, and different languages are better suited to different
abstractions, cf. functional languages for functional abstractions,
object oriented, logic languages, etc.
Since my suggestion reduces programmer effort, AND increases program
performance, it is therefore an optimal improvement to the current design.
It also increases language complexity, which can increase programmer
effort.
I'm not strongly opposed to your suggestion at all, BTW. Just want to
make that absolutely clear. I do think I'd hardly ever use it, though -
it's usually better to work with reference types instead, and reserve
value types for value-oriented abstractions, such as complex numbers,
Point, Matrix, int, decimal, that kind of thing.
In fact, I'd be more strongly in favour of explicitly immutable value
types, and let the CLR figure out if a 'const &' calling convention
could be applied, because I think that would more closely reflect
situations when value types are useful (in my experience).
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott <No****@SeeScreen.comwrote:
The systems that I am developing are not business information systems where
something can be 10,000-fold slower than necessary and there is no way for
anyone to notice the difference. In some cases a two-fold difference in the
speed of an elemental operation can noticeably affect response time. I am not
comfortable switching to C# until I know every detail of exactly how to at least
match the performance of native code C++.
But, without evidence, you seem obsessed with boxing and unboxing.
There are many, many, *many* other areas of performance to consider -
why are you so hung up about boxing and unboxing, which almost
certainly *won't* be a significant factor?
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Peter Olcott <No****@SeeScreen.comwrote:
<snip>
That would seem to be a fine restriction. Now if we can only add an [in]
parameter qualifier that passes all large objects by reference, yet makes them
read-only.
I still don't believe you understand reference types, or what's passed
- either that, or you're considering creating large value types, which
is almost certainly a mistake.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Arne Vajhøj wrote:
Barry Kelly wrote:
Arne Vajhøj wrote:
But if you want to understand the language-to-machine-instructions
stuff, then maybe C# is not for you.
It is a language designed to abstract that stuff away.
You are not
supposed to care about it.
I strongly disagree especially with this last statement. By Joel's Law
of Leaky Abstractions, I believe it's important to be aware of the
abstractions and assumptions one is basing one's position upon. And the
CPU hasn't gone away: for working with GCHandle / interop with unmanaged
code, for writing correct lock-free code, for getting maximum
performance when it's needed - you need to peer under the covers to get
that info.
For interop you obviously need to know something about
the native stuff to use the functionality. That is not
"looking under the hood".
I think doing a lot of interop effectively requires a fair amount of
knowledge of what the CLR is doing. In particular, there are some almost
sneaky things the CLR can do, such as the GC collecting your class
before its methods have finished executing, that can cause insidious bugs.
There was one which stuck in my memory last year: http://groups.google.com/group/micro...33389a2b0aa149
A bug in Ping which causes access violations. Even examining the C# code
of the Ping class, it's not obvious how it happened. I also wrote a blog
post on the exact mechanics: http://barrkel.blogspot.com/2006/07/...collector.html
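The usual defence is GC.KeepAlive. A minimal sketch of the failure mode and the fix (the wrapper class is invented for illustration; GC.KeepAlive is the real framework call):

class NativeWrapper
{
    public System.IntPtr Handle =
        System.Runtime.InteropServices.Marshal.AllocHGlobal(16); // stands in for a native resource
    ~NativeWrapper()
    {
        System.Runtime.InteropServices.Marshal.FreeHGlobal(Handle); // finalizer frees it
    }
}

class KeepAliveSketch
{
    static void Main()
    {
        NativeWrapper w = new NativeWrapper();
        System.IntPtr h = w.Handle;
        // From here on the JIT may treat 'w' as dead; a collection during a
        // long unmanaged call that uses 'h' could run the finalizer and free
        // the memory out from under it.
        // ... imagine the unmanaged call using h here ...
        System.GC.KeepAlive(w); // keeps 'w' reachable, and 'h' valid, until this point
    }
}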
Regarding getting maximum performance then it can
be necessary to look under the hood, but it can also
be dangerous. Often you end up optimizing for current
.NET version on your current PC. Performance characteristics
for the .NET versions and the hardware being used in the
code's lifetime may be significantly different.
Perhaps, but I doubt things like Windows, CPUs and caches / memory
hierarchies are going away anytime soon, and you need to have an idea of
the behaviour of those guys, even if it's simple rules of thumb, like
"row-then-column" rather than "column-then-row" when iterating through
large 2D arrays, or writing asynchronous code to avoid thread-switching
overhead, etc.
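For example, a minimal sketch of the row-then-column rule for a rectangular C# array (sizes invented for illustration):

class IterationSketch
{
    static void Main()
    {
        int[,] a = new int[1000, 1000];
        long sum = 0;
        for (int row = 0; row < 1000; row++)      // row-then-column walks memory
            for (int col = 0; col < 1000; col++)  // sequentially, because rectangular
                sum += a[row, col];               // arrays are stored row-major
        // Swapping the two loops visits the same elements but strides through
        // memory, defeating the cache on large arrays.
    }
}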
The term "virtual machine" is not popular in the .NET world
due to its Java sound. But you are coding to such a beast.
And MS or whoever provides the .NET runtime for the platform
handles the mapping from that to the real thing.
If the programmer wants to see inside his/her head the
native instructions for the code being written (as was the
case in the post I commented on), then I don't think
C# is the right choice.
Sure, if one is tweaking one's code so it inlines "just so", or adds
comments like:
// not creating variable for this expression because it forces
// register spill to stack
... then I think one is going too far, I agree.
-- Barry
-- http://barrkel.blogspot.com/
"Barry Kelly" <ba***********@gmail.comwrote in message
news:02********************************@4ax.com...
Peter Olcott wrote:
>I am well-informed on the general subject of computer language design.
>The ideal computer language design involves reducing programmer effort without reducing program performance.
I think this definition of 'ideal' is faulty. There are many ideals in
computer languages: imperative languages are not ideal for querying
(that's why we have SQL), nor pattern matching (that's why we have
regular expressions), etc.
"Programmer effort" can only be measured in terms of solving some
problem. Some languages are better at some classes of problems than
others. There is no single ideal.
Also, your statement precludes any reduction in programmer effort if it
reduces program performance. By that logic, we'd all be programming in
assembler. Programming languages are abstractions: they hide details
behind a conceptual framework. In fact, they are abstraction-building
abstractions, and different languages are better suited to different
abstractions, cf. functional languages for functional abstractions,
object oriented, logic languages, etc.
>Since my suggestion reduces programmer effort, AND increases program performance, it is therefore an optimal improvement to the current design.
It also increases language complexity, which can increase programmer
effort.
It reduces complexity. The programmer would not even need to know what the terms
reference type and value type mean, much less explicitly distinguish when one is
more appropriate than the other. There would only be two types of parameters;
[out] I want to be able to change it in the function, and [in], I want to make
sure that it won't be changed in the function, and neither one of these ever has
to be passed by value. The underlying CLR can pass by value if it is quicker
than by reference for items of the size of [int] and smaller. The [ref]
parameter qualifier could be discarded.
>
I'm not strongly opposed to your suggestion at all, BTW. Just want to
make that absolutely clear. I do think I'd hardly ever use it, though -
it's usually better to work with reference types instead, and reserve
value types for value-oriented abstractions, such as complex numbers,
Point, Matrix, int, decimal, that kind of thing.
In fact, I'd be more strongly in favour of explicitly immutable value
types, and let the CLR figure out if a 'const &' calling convention
could be applied, because I think that would more closely reflect
situations when value types are useful (in my experience).
-- Barry
-- http://barrkel.blogspot.com/
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP************************@msnews.microsoft.c om...
Peter Olcott <No****@SeeScreen.comwrote:
>The systems that I am developing are not business information systems where something can be 10,000-fold slower than necessary and there is no way for anyone to notice the difference. In some cases a two-fold difference in the speed of an elemental operation can noticeably effect response time. I am not comfortable switching to C# until I know every detail of exactly how to at least match the performance of native code C++.
But, without evidence, you seem obsessed with boxing and unboxing.
There are many, many, *many* other areas of performance to consider -
why are you so hung up about boxing and unboxing, which almost
certainly *won't* be a significant factor?
It is not taking very much effort to completely eliminate all factors
significant and otherwise. Certainly passing large aggregate data types by value
is significant. I want to make sure that I have a 100% understanding on how to
always avoid passing large aggregate data types by value. Boxing and Unboxing
can be comparable to passing a large aggregate data type by value.
Imagine passing an array with millions of elements by value, instead of by
reference. Imagine further still that the only need of this array was to do a
binary search. Now we have a function that is 100,000-fold slower than necessary
simply because the difference between boxing and unboxing was not completely
understood.
>
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP************************@msnews.microsoft.c om...
Peter Olcott <No****@SeeScreen.comwrote:
<snip>
>That would seem to be a fine restriction. Now if we can only add an [in] parameter qualifier that passes all large objects by reference, yet makes them read-only.
I still don't believe you understand reference types, or what's passed
When I say passed by reference, I mean this term literally, in other words by
memory address. The term in .NET has acquired somewhat of a figure-of-speech
meaning. Literally passing by reference means passing by machine address and
nothing more.
- either that, or you're considering creating large value types, which
is almost certainly a mistake.
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Peter Olcott <No****@SeeScreen.comwrote:
But, without evidence, you seem obsessed with boxing and unboxing.
There are many, many, *many* other areas of performance to consider -
why are you so hung up about boxing and unboxing, which almost
certainly *won't* be a significant factor?
It is not taking very much effort to completely eliminate all factors
significant and otherwise. Certainly passing large aggregate data types by value
is significant. I want to make sure that I have a 100% understanding on how to
always avoid passing large aggregate data types by value. Boxing and Unboxing
can be comparable to passing a large aggregate data type by value.
Well, they're not really similar and boxing/unboxing is relatively rare
when generics are available. Even when boxing/unboxing *is* involved,
as shown in the benchmark you were worried about, the cost of the
boxing was negligible compared with the copying involved in the rest of
the benchmark (so neatly sidestepped by the "updated" C++ version which
made the comparison completely irrelevant).
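A minimal sketch of why generics make boxing rare (assuming .NET 2.0):

class GenericsSketch
{
    static void Main()
    {
        System.Collections.ArrayList untyped = new System.Collections.ArrayList();
        untyped.Add(42);              // boxes: the int is copied into a heap object
        int a = (int)untyped[0];      // unboxes: a type check plus a copy back out

        System.Collections.Generic.List<int> typed =
            new System.Collections.Generic.List<int>();
        typed.Add(42);                // no boxing: stored directly as an int
        int b = typed[0];             // no unboxing
    }
}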
Imagine passing an array with millions of elements by value, instead of by
reference. Imagine further still that the only need of this array was to do a
binary search. Now we have a function that is 100,000-fold slower than necessary
simply because the difference between boxing and unboxing was not completely
understood.
If I do:
int[] x = new int[100000];
DoSomething (x);
how many bytes do you think are copied?
You seem to imagine that arrays are value types. Arrays are reference
types, so you could never pass the contents by value. That's why I've
said repeatedly that you *really* need to know more about value types
and reference types (and the fact that you very, very rarely *get* big
value types) before going much further. You have latched on to one
particular aspect of .NET and want to go very deeply into it without
getting a reasonable understanding of the rest of it. Getting the
basics right across the board will help you work out where it *is*
worth getting deeper understanding, and help you in achieving that
understanding too.
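(The answer: just the reference itself - a pointer-sized copy, no matter how long the array is. A minimal sketch of the consequence, names invented:)

class ArraySketch
{
    static void DoSomething(int[] data)
    {
        data[0] = 99;  // writes through the copied reference into the caller's array
    }

    static void Main()
    {
        int[] x = new int[100000];
        DoSomething(x);                  // no elements are copied at the call
        System.Console.WriteLine(x[0]);  // prints 99: both names refer to one array
    }
}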
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Peter Olcott <No****@SeeScreen.comwrote:
<snip>
It also increases language complexity, which can increase programmer
effort.
It reduces complexity. The programmer would not even need to know what the terms
reference type and value type mean, much less explicitly distinguish when one is
more appropriate than the other. There would only be two types of parameters;
[out] I want to be able to change it in the function, and [in], I want to make
sure that it won't be changed in the function, and neither one of these ever has
to be passed by value. The underlying CLR can pass by value if it is quicker
than by reference for items of the size of [int] and smaller. The [ref]
parameter qualifier could be discarded.
There's a difference between passing a value type by reference and
passing a reference type value (i.e. a reference) by value.
Please read http://www.pobox.com/~skeet/csharp/parameters.html
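A minimal sketch of that distinction (types and names invented for illustration):

struct PointValue { public int X; }
class PointRef { public int X; }

class ParameterSketch
{
    static void ValueTypeByRef(ref PointValue p) { p.X = 1; } // the caller's struct itself changes
    static void ReferenceByValue(PointRef p) { p.X = 1; }     // the shared object changes, but
                                                              // reassigning 'p' here would not
                                                              // affect the caller's variable
    static void Main()
    {
        PointValue v = new PointValue();
        ValueTypeByRef(ref v);           // v.X is now 1
        PointRef r = new PointRef();
        ReferenceByValue(r);             // r.X is now 1, via the copied reference
    }
}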
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Peter Olcott wrote:
"Barry Kelly" <ba***********@gmail.comwrote in message
news:02********************************@4ax.com...
Peter Olcott wrote:
It also increases language complexity, which can increase programmer
effort.
It reduces complexity.
I disagree. Unless we're in fantasy land and we're talking about a
completely different language here, adding *anything* to C# is going to
increase its complexity, by definition. Any new feature needs to add
enough value to justify itself.
The programmer would not even need to know what the terms
reference type and value type mean, much less explicitly distinguish when one is
more appropriate than the other.
Which programmer are you talking about:
1) The guy instantiating types and calling methods? If the types are
well-designed, this guy typically doesn't need to know already.
2) The guy writing types and methods? This is the guy who needs to make
the choice, so unless the two become semantically identical, he needs to
know the difference. And if you're suggesting some kind of semantic
fusion, then you'll need to be a whole lot more specific about what
you're talking about.
There would only be two types of parameters;
[out] I want to be able to change it in the function
and [in], I want to make
sure that it won't be changed in the function,
and neither one of these ever has
to be passed by value.
The underlying CLR can pass by value if it is quicker
than by reference for items of the size of [int] and smaller.
I had a long reply composed, but I discarded it, because I realised that
your statements don't cohere into a fully-formed whole.
You need to expand much more on what you're talking about, with
precision and detail, and give example code.
And don't forget, you can't break any existing C# code or semantics.
The [ref]
parameter qualifier could be discarded.
No it can't! You've just renamed it to 'out' above.
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott <No****@SeeScreen.comwrote:
I still don't believe you understand reference types, or what's passed
When I say passed by reference, I mean this term literally, in other words by
memory address. The term in .NET has acquired somewhat of a figure-of-speech
meaning.
No, it hasn't. It's used in a somewhat woolly manner by some people,
particularly with respect to "pass by reference", which is a different
kettle of fish entirely, but the specs are pretty specific.
Literally passing by reference means passing by machine address and
nothing more.
No, "pass by reference" has a deeper semantic meaning.
See http://www.pobox.com/~skeet/csharp/parameters.html
and http://www.pobox.com/~skeet/java/parameters.html
(The latter has pretty specific definitions in it.)
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Peter Olcott wrote:
Imagine passing an array with millions of elements by value, instead of by
reference.
Arrays in C# are reference types. You can't pass an array by value (or
rather, to keep Jon happy, you can only pass a reference to the array by
value, you can't pass the array value itself by value).
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
"Barry Kelly" <ba***********@gmail.comwrote in message
news:02********************************@4ax.com...
Peter Olcott wrote:
[...]
Since my suggestion reduces programmer effort, AND increases program
performance, it is therefore an optimal improvement to the current design.
It also increases language complexity, which can increase programmer
effort.
It reduces complexity. The programmer would not even need to know what the terms
reference type and value type mean, much less explicitly distinguish when one is
more appropriate than the other. There would only be two types of parameters;
[out] I want to be able to change it in the function, and [in], I want to make
sure that it won't be changed in the function, and neither one of these ever has
to be passed by value. The underlying CLR can pass by value if it is quicker
than by reference for items of the size of [int] and smaller. The [ref]
parameter qualifier could be discarded.
The difference between out and ref is important, though. Out parameters
are basically extra return values - they don't have to be initialized
on the way in, but they do have to be initialized on the way out:
void DivMod(int a, int b, out int quotient, out int remainder) { ... }
If out were the same as ref, the locations passed for quotient and
remainder would have to be initialized before calling DivMod, which is
needless work for the programmer.
Also, out and ref perform differently when you're calling methods
remotely over a network or process boundary. Ref parameter values have
to be passed both ways; out parameter values only have to be passed
back to the caller.
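A minimal usage sketch of the call-site difference (the method body is invented):

class OutSketch
{
    static void DivMod(int a, int b, out int quotient, out int remainder)
    {
        quotient = a / b;   // 'out' parameters must be definitely assigned
        remainder = a % b;  // before the method returns
    }

    static void Main()
    {
        int q, r;                     // no initialization needed for 'out' arguments
        DivMod(17, 5, out q, out r);
        // With 'ref' parameters, q and r would have to be assigned before the call.
    }
}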
Jesse
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP************************@msnews.microsoft.c om...
Peter Olcott <No****@SeeScreen.comwrote:
But, without evidence, you seem obsessed with boxing and unboxing.
There are many, many, *many* other areas of performance to consider -
why are you so hung up about boxing and unboxing, which almost
certainly *won't* be a significant factor?
It is not taking very much effort to completely eliminate all factors significant and otherwise. Certainly passing large aggregate data types by value is significant. I want to make sure that I have a 100% understanding on how to always avoid passing large aggregate data types by value. Boxing and Unboxing can be comparable to passing a large aggregate data type by value.
Well, they're not really similar and boxing/unboxing is relatively rare
when generics are available. Even when boxing/unboxing *is* involved,
as shown in the benchmark you were worried about, the cost of the
boxing was negligible compared with the copying involved in the rest of
the benchmark (so neatly sidestepped by the "updated" C++ version which
made the comparison completely irrelevant).
>Imagine passing an array with millions of elements by value, instead of by reference. Imagine further still that the only need of this array was to do a binary search. Now we have a function that is 100,000-fold slower than necessary simply because the difference between boxing and unboxing was not completely understood.
If I do:
int[] x = new int[100000];
DoSomething (x);
how many bytes do you think are copied?
You seem to imagine that arrays are value types. Arrays are reference
types, so you could never pass the contents by value. That's why I've
said repeatedly that you *really* need to know more about value types
and reference types (and the fact that you very, very rarely *get* big
value types) before going much further. You have latched on to one
particular aspect of .NET and want to go very deeply into it without
getting a reasonable understanding of the rest of it. Getting the
basics right across the board will help you work out where it *is*
worth getting deeper understanding, and help you in achieving that
understanding too.
Maybe it is getting to that point now. It was not to that point before I began
this thread.
>
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
"Barry Kelly" <ba***********@gmail.comwrote in message
news:83********************************@4ax.com...
Peter Olcott wrote:
>"Barry Kelly" <ba***********@gmail.comwrote in message news:02********************************@4ax.com.. .
Peter Olcott wrote:
It also increases language complexity, which can increase programmer
effort.
It reduces complexity.
I disagree.
Right, glance at a couple of words before forming the preconceived refutation.
Unless we're in fantasy land and we're talking about a
completely different language here, adding *anything* to C# is going to
increase its complexity, by definition. Any new feature needs to add
enough value to justify itself.
Add [in], remove [ref]; the net difference is no more elements. However we are
adding one simple parameter qualifier and removing a complex qualifier.
>
>The programmer would not even need to know what the terms reference type and value type means much less explicitly distinguish when one is more appropriate than the other.
Which programmer are you talking about:
The one writing the programs in the C# language.
>
1) The guy instantiating types and calling methods? If the types are
well-designed, this guy typically doesn't need to know already.
2) The guy writing types and methods? This is the guy who needs to make
the choice, so unless the two become semantically identical, he needs to
know the difference. And if you're suggesting some kind of semantic
fusion, then you'll need to be a whole lot more specific about what
you're talking about.
There are programmers that only call methods and never write methods? That seems
like quite a stretch. Where do they put the code that calls the methods, if not
in another method?
>
>There would only be two types of parameters; [out] I want to be able to change it in the function and [in], I want to make sure that it won't be changed in the function, and neither one of these ever has to be passed by value. The underlying CLR can pass by value if it is quicker than by reference for items of the size of [int] and smaller.
I had a long reply composed, but I discarded it, because I realised that
your statements don't cohere into a fully-formed whole.
You need to expand much more on what you're talking about, with
precision and detail, and give example code.
int SomeMethod(in SomeType SomeName) // C#
Exactly Equals
int SomeMethod(const SomeType& SomeName) // C++
>
And don't forget, you can't break any existing C# code or semantics.
>The [ref] parameter qualifier could be discarded.
No it can't! You've just renamed it to 'out' above.
I took two different existing parameter qualifiers and combined them into a
single parameter qualifier that accomplished the purpose of both. Like I said
[ref] can be discarded. Is there really a need to make sure that a parameter
that will be written to was initialized?
There is no useful distinction between [ref] and [out]. Unify [ref] and [out]
into [out], and add [in] as a read-only pass by address parameter qualifier. The
CLR can be free to pass by value if it would be faster for very small items,
because on a read-only parameter there is no semantic difference.
>
-- Barry
-- http://barrkel.blogspot.com/
"Jesse McGrew" <jm*****@gmail.comwrote in message
news:11*********************@51g2000cwl.googlegrou ps.com...
Peter Olcott wrote:
>"Barry Kelly" <ba***********@gmail.comwrote in message news:02********************************@4ax.com.. .
Peter Olcott wrote:
[...]
>Since my suggestion reduces programmer effort, AND increases program performance, it is therefore an optimal improvement to the current design.
It also increases language complexity, which can increase programmer
effort.
It reduces complexity. The programmer would not even need to know what the terms reference type and value type mean, much less explicitly distinguish when one is more appropriate than the other. There would only be two types of parameters; [out] I want to be able to change it in the function, and [in], I want to make sure that it won't be changed in the function, and neither one of these ever has to be passed by value. The underlying CLR can pass by value if it is quicker than by reference for items of the size of [int] and smaller. The [ref] parameter qualifier could be discarded.
The difference between out and ref is important, though. Out parameters
are basically extra return values - they don't have to be initialized
on the way in, but they do have to be initialized on the way out:
void DivMod(int a, int b, out int quotient, out int remainder) { ... }
If out were the same as ref, the locations passed for quotient and
remainder would have to be initialized before calling DivMod, which is
needless work for the programmer.
Also, out and ref perform differently when you're calling methods
remotely over a network or process boundary. Ref parameter values have
to be passed both ways; out parameter values only have to be passed
back to the caller.
Jesse
That last subtle distinction is why discussing these things in a newsgroup is
much more effective than merely reading books. I have a very good 1,000 page
book that never bothers to mention this distinction. In any case it would still
seem that adding an [in] parameter qualifier might be an improvement.
Peter Olcott wrote:
"Barry Kelly" <ba***********@gmail.comwrote:
Which programmer are you talking about:
1) The guy instantiating types and calling methods? If the types are
well-designed, this guy typically doesn't need to know already.
2) The guy writing types and methods? This is the guy who needs to make
the choice, so unless the two become semantically identical, he needs to
know the difference. And if you're suggesting some kind of semantic
fusion, then you'll need to be a whole lot more specific about what
you're talking about.
There are programmers that only call methods and never write methods? That seems
like quite a stretch. Where do they put the code that calls the methods, if not
in another method?
There are two 'hats' for a programmer: programmer as User of types, and
programmer as Designer of types. It's useful to distinguish between
them, because they require two different ways of thinking. The User of
types solves algorithmic problems based on the spec / purpose of the
method whose body they're writing. The Designer of types creates
abstractions to model a problem or domain, but the primary purpose is to
simplify the job of the guy who Uses the types that he / she is
creating.
You need to expand much more on what you're talking about, with
precision and detail, and give example code.
int SomeMethod(in SomeType SomeName) // C#
Exactly Equals
int SomeMethod(const SomeType& SomeName) // C++
What if SomeType is a reference type? If you're suggesting full C++
semantics for a deeper notion of const than simply 'pass value types by
reference for performance reasons', then I'll direct you to Anders
Hejlsberg's opinions on the matter. For example read this: http://www.artima.com/intv/choicesP.html
There is no useful distinction between [ref] and [out].
I disagree. 'ref' means 'modify this location', while 'out' means
'return into this location'. 'out' is a mechanism to get around the fact
that C# can't return tuples. 'ref' is a means of passing by reference.
Unify [ref] and [out]
into [out], and add [in] as a read-only pass by address parameter qualifier. The
CLR can be free to pass by value if it would be faster for very small items,
because on a read-only parameter there is no semantic difference.
Effectively (ISTM) the upshot of what you're asking for is a C++-style
'const &' for value types, to avoid boxing overhead when passing large
value types.
What many other people have been trying to tell you on this thread is:
1) Large value types aren't a good idea
2) Large value types don't even occur very often (e.g. arrays are
reference types, as you've found out)
3) Reference types are good, you should try them!
In fact, the primary advantage of value types is that they're usually
copied wherever they go, and thus reduce GC overhead.
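A minimal sketch of that copying behaviour (the type is invented for illustration):

struct Complex { public double Re, Im; }

class CopySketch
{
    static void Main()
    {
        Complex a = new Complex();
        a.Re = 1.0;
        Complex b = a;   // a full bitwise copy: no heap allocation, nothing for the GC
        b.Re = 2.0;      // a.Re is still 1.0 - the two copies are independent
    }
}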
Basically, you're suggesting a feature that wouldn't be used a lot.
Again, I think it wouldn't be harmful or anything, just not very useful.
-- Barry
-- http://barrkel.blogspot.com/
Peter Olcott wrote:
"Jesse McGrew" <jm*****@gmail.comwrote in message
news:11*********************@51g2000cwl.googlegrou ps.com...
[...]
Also, out and ref perform differently when you're calling methods
remotely over a network or process boundary. Ref parameter values have
to be passed both ways; out parameter values only have to be passed
back to the caller.
That last subtle distinction is why discussing these things in a newsgroup is
much more effective than merely reading books. I have a very good 1,000 page
book that never bothers to mention this distinction. In any case it would still
seem that adding an [in] parameter qualifier might be an improvement.
I don't see the advantage. If you leave the qualifiers off, the
parameter is already "in" by default. Optimizations like the one you're
proposing can just as well be handled by the CLR detecting that a large
value-type parameter is never modified, and deciding internally to pass
it by reference instead of copying. Adding an extra language keyword to
suggest that behavior is the kind of hint that might be common in C
(e.g. the "register" keyword) but doesn't have much of a place in C#.
Jesse
Peter Olcott wrote:
By working 90 hours a week, I am still 80
hours a week short of what I need to get done.
You realise this means you are doomed to failure?
--
Larry Lard la*******@googlemail.com
The address is real, but unread - please reply to the group
For VB and C# questions - tell us which version
"Larry Lard" <la*******@googlemail.comwrote in message
news:51*************@mid.individual.net...
Peter Olcott wrote:
>By working 90 hours a week, I am still 80 hours a week short of what I need to get done.
You realise this means you are doomed to failure?
It means that I can't afford to waste any time, and must find shortcuts to
achieve the required end-results.
>
--
Larry Lard la*******@googlemail.com
The address is real, but unread - please reply to the group
For VB and C# questions - tell us which version
"Jesse McGrew" <jm*****@gmail.comwrote in message
news:11**********************@a75g2000cwd.googlegr oups.com...
Peter Olcott wrote:
>"Jesse McGrew" <jm*****@gmail.comwrote in message news:11*********************@51g2000cwl.googlegro ups.com...
[...]
Also, out and ref perform differently when you're calling methods
remotely over a network or process boundary. Ref parameter values have
to be passed both ways; out parameter values only have to be passed
back to the caller.
That last subtle distinction is why discussing these things in a newsgroup is much more effective than merely reading books. I have a very good 1,000 page book that never bothers to mention this distinction. In any case it would still seem that adding an [in] parameter qualifier might be an improvement.
I don't see the advantage. If you leave the qualifiers off, the
parameter is already "in" by default. Optimizations like the one you're
It's pass by value, which is not the same thing as "in" by default. My suggestion
is to make [in] pass by address and read-only, exactly equivalent to:
ReturnType SomeMethod(const ValueType& VariableName) // C++
proposing can just as well be handled by the CLR detecting that a large
value-type parameter is never modified, and deciding internally to pass
it by reference instead of copying. Adding an extra language keyword to
Is it read-only? If it's not read-only, then it's not the same.
suggest that behavior is the kind of hint that might be common in C
(e.g. the "register" keyword) but doesn't have much of a place in C#.
Jesse
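To make the proposal concrete, here is a sketch of the suggested qualifier
next to the closest approximation available today; the [in] form is
hypothetical, and the type and method names are invented:

    // Proposed (hypothetical -- not valid C#):
    //     double Trace(in Matrix m)          // read-only, passed by address
    // intended to compile as if it were the C++:
    //     double Trace(const Matrix& m);     // C++

    struct Matrix
    {
        public double M11, M12, M21, M22;   // imagine many more fields
    }

    class InDemo // C#
    {
        // 'ref' avoids copying the struct, but gives no read-only guarantee.
        static double Trace(ref Matrix m)
        {
            return m.M11 + m.M22;   // reads only -- by convention, not contract
        }
    }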
"Barry Kelly" <ba***********@gmail.comwrote in message
news:2j********************************@4ax.com...
Peter Olcott wrote:
>"Barry Kelly" <ba***********@gmail.comwrote:
Which programmer are you talking about:
1) The guy instantiating types and calling methods? If the types are
well-designed, this guy typically doesn't need to know already.
2) The guy writing types and methods? This is the guy who needs to make
the choice, so unless the two become semantically identical, he needs to
know the difference. And if you're suggesting some kind of semantic
fusion, then you'll need to be a whole lot more specific about what
you're talking about.
There are programmers that only call methods and never write methods? That seems like quite a stretch. Where do they put the code that calls the methods, if not in another method?
There are two 'hats' for a programmer: programmer as User of types, and
programmer as Designer of types. It's useful to distinguish between
them, because they require two different ways of thinking. The User of
types solves algorithmic problems based on the spec / purpose of the
method whose body they're writing. The Designer of types creates
abstractions to model a problem or domain, but the primary purpose is to
simplify the job of the guy who Uses the types that he / she is
creating.
You need to expand much more on what you're talking about, with
precision and detail, and give example code.
int SomeMethod(in SomeType SomeName) // C#
Exactly Equals
int SomeMethod(const SomeType& SomeName) // C++
What if SomeType is a reference type? If you're suggesting full C++
semantics for a deeper notion of const than simply 'pass value types by
reference for performance reasons', then I'll direct you to Anders
Hejlsberg's opinions on the matter. For example read this:
http://www.artima.com/intv/choicesP.html
>There is no useful distinction between [ref] and [out].
I disagree. 'ref' means 'modify this location', while 'out' means
'return into this location'. 'out' is a mechanism to get around the fact
that C# can't return tuples. 'ref' is a means of passing by reference.
So then [ref] could be named [io] for input and output.
>
>Unify [ref] and [out] into [out], and add [in] as a read-only pass by address parameter qualifier. The CLR can be free to pass by value if it would be faster for very small items, because on a read-only parameter there is no semantic difference.
Effectively (ISTM) the upshot of what you're asking for is a C++-style
'const &' for value types, to avoid boxing overhead when passing large
value types.
What many other people have been trying to tell you on this thread is:
1) Large value types aren't a good idea
2) Large value types don't even occur very often (e.g. arrays are
reference types, as you've found out)
3) Reference types are good, you should try them!
In fact, the primary advantage of value types is that they're usually
copied wherever they go, and thus reduce GC overhead.
Yet the language lacks the alternative capability even when it is needed.
>
Basically, you're suggesting a feature that wouldn't be used a lot.
It would only be used when one needs to pass aggregate data without wasting the
machine time of boxing and unboxing, or the programmer time lost when a user
writes to a parameter intended to be read-only. The case that I envision is
something like the C++ friend function: it can't be a class member, yet it
requires direct access to internal data. Good design minimizes these cases, yet
cannot eliminate them.
Again, I think it wouldn't be harmful or anything, just not very useful.
-- Barry
-- http://barrkel.blogspot.com/
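One idiomatic alternative in the direction Barry suggests (reference types) is
an immutable class around the large aggregate: passing it costs one reference
copy, and read-only access is guaranteed by construction rather than by a
parameter qualifier. A sketch, with an invented type:

    sealed class Histogram // C#
    {
        private readonly int[] bins;

        public Histogram(int[] source)
        {
            bins = (int[])source.Clone();   // defensive copy at the boundary
        }

        public int this[int i]
        {
            get { return bins[i]; }         // callers can read, never write
        }

        public int Count
        {
            get { return bins.Length; }
        }
    }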
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:ED*******************@newsfe18.lga...
>
"Larry Lard" <la*******@googlemail.comwrote in message
news:51*************@mid.individual.net...
>Peter Olcott wrote:
>>By working 90 hours a week, I am still 80 hours a week short of what I need to get done.
You realise this means you are doomed to failure?
It means that I can't afford to waste any time, and must find
shortcuts to achieve the required end-results.
You wish to switch to a dotnet-based solution to reduce the development
time, but you don't want to put the time into learning the
language (C#). Sounds to me like you are doomed to failure anyway.
I suggest sticking with C++, since that seems to be where your comfort
zone lies. Without putting in the time to learn C# and the dotnet
framework, there is no way that you will save any time on your
development. Or, if you did save time, it would perform like a pig,
because you have no desire to work with the framework, and you would
fight it instead.
For months now, you have been posting the same old recycled arguments.
Had you listened to us then, you might actually be showing progress now.
I am amazed at how much time you have to argue, when you tell us you
need to work 90 hours a week just to tread water.
Bill
Peter Olcott wrote:
Ah so we could create a new parameter qualifier that works like [out] and [ref]
yet in the opposite direction. We could have an [in] parameter qualifier that
allows all large objects (larger than int) to be passed by reference, yet these
are all read-only objects. The compiler does not allow writing to them. This way
we avoid the unnecessary overhead of making copies of large objects just to
avoid accidentally making changes to these large objects.
Funny you should mention that. So far as I know, the C# team has been
mulling over how to include the concept of "const" arguments (which, I
think, is really what you're proposing with "in"), whereby an object
can be passed-by-ref but be unchangeable. I'm not up on the technical
challenges facing them, but I know that they're thinking about it.
One thing I believe they do want to avoid is the C++ cascading-const
nightmare, whereby including one const declaration in a method forces
some other method's parameter to be const, which forces another
method's parameter to be const.... C++ solves this (as I recall) with a
special kind of cast (const_cast) to remove the "const" nature of an object, but
they're hoping to avoid that in C# should they ever introduce the
feature.
It might be possible to design a language that has essentially all of the
functional capabilities of the lower-level languages, without the requirement
of ever directly dealing with pointers.
True, but one question I know the C# team constantly asks is whether a
feature is worth the additional complexity it adds to the language. (I
know this because they frequently cite that as a reason for not
including certain features.) What does it really buy you being able to
take the address of an arbitrary variable (in safe code... I know that
you can do it in unsafe code)? As I said, I think that Java (and now
C#) have demonstrated that it doesn't buy you much. You mentioned
boxing overhead, but in .NET 2.0 you can pretty-much avoid boxing...
all you have to do is learn a new idiom: a new way to do what you've
always done, but now in a new language.
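For example, the .NET 2.0 generic collections store value types without
boxing, where the older untyped collections box every element (a minimal
sketch):

    using System.Collections;           // ArrayList (pre-generics)
    using System.Collections.Generic;   // List<T> (.NET 2.0)

    class BoxingDemo // C#
    {
        static void Main()
        {
            ArrayList untyped = new ArrayList();
            untyped.Add(42);             // boxes the int into a heap object
            int a = (int)untyped[0];     // unboxes; the cast can fail at runtime

            List<int> typed = new List<int>();
            typed.Add(42);               // stored as a raw int -- no boxing
            int b = typed[0];            // no cast, no unboxing
        }
    }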
That, in the end, is what it comes down to: C# works very well. It's
just that it does things differently than does C++, and you can't take
C++ idioms and concepts and start writing C# as though it were C++. In
a few domains, C++ is much better suited to the problems than is C#,
but in most domains C# gives you all the functionality you need while
helping keep you out of trouble.
If you really, really need to use pointers and arbitrary addressing, C#
has unsafe mode... but that's why it's called "unsafe". Like any other
"escape hatch" in a language, if you find yourself using unsafe code
you should be thinking long and hard about why you need it, and whether
you have a real need or just a mental disconnect with how you're
intended to use the language.
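A minimal sketch of that escape hatch (must be compiled with /unsafe; the
type name is invented):

    class UnsafeDemo // C#
    {
        unsafe static void Main()
        {
            int n = 42;
            int* p = &n;                     // address of a local, C++-style
            *p = 43;                         // write through the pointer
            System.Console.WriteLine(n);     // prints 43
        }
    }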
"Bruce Wood" <br*******@canada.comwrote in message
news:11**********************@q2g2000cwa.googlegro ups.com...
>
Peter Olcott wrote:
>Ah so we could create a new parameter qualifier that works like [out] and [ref] yet in the opposite direction. We could have an [in] parameter qualifier that allows all large objects (larger than int) to be passed by reference, yet these are all read-only objects. The compiler does not allow writing to them. This way we avoid the unnecessary overhead of making copies of large objects just to avoid accidentally making changes to these large objects.
Funny you should mention that. So far as I know, the C# team has been
mulling over how to include the concept of "const" arguments (which, I
think, is really what you're proposing with "in"), whereby an object
can be passed-by-ref but be unchangeable. I'm not up on the technical
challenges facing them, but I know that they're thinking about it.
One thing I believe they do want to avoid is the C++ cascading-const
nightmare, whereby including one const declaration in a method forces
some other method's parameter to be const, which forces another
method's parameter to be const.... C++ solves this (as I recall) with a
special kind of cast (const_cast) to remove the "const" nature of an object, but
they're hoping to avoid that in C# should they ever introduce the
feature.
>It might be possible to design a language that has essentially all of the functional capabilities of the lower-level languages, without the requirement of ever directly dealing with pointers.
True, but one question I know the C# team constantly asks is whether a
feature is worth the additional complexity it adds to the language. (I
know this because they frequently cite that as a reason for not
including certain features.) What does it really buy you being able to
take the address of an arbitrary variable (in safe code... I know that
you can do it in unsafe code)? As I said, I think that Java (and now
C#) have demonstrated that it doesn't buy you much. You mentioned
boxing overhead, but in .NET 2.0 you can pretty-much avoid boxing...
all you have to do is learn a new idiom: a new way to do what you've
always done, but now in a new language.
Are you referring to Generics? Does this address the issue of passing a struct
by (address) reference?
>
That, in the end, is what it comes down to: C# works very well. It's
just that it does things differently than does C++, and you can't take
C++ idioms and concepts and start writing C# as though it were C++. In
a few domains, C++ is much better suited to the problems than is C#,
but in most domains C# gives you all the functionality you need while
helping keep you out of trouble.
I think that it is possible to take the concept of C# further along: to provide
every required feature of a language such as C++, yet to do this in an entirely
type-safe way, with essentially no additional execution-time overhead, while
drastically reducing the number of details that must be handled by the
programmer. I think that C# has done an excellent job of achieving these goals
up to this point. I think that there are two opportunities for improvement:
(1) Little tweaks here and there to eliminate more of the additional execution
time overhead.
(2) Abstract out the distinction between reference types and value types so that
the programmer will not even need to know the difference. The underlying
architecture can still have separate reference types and value types, yet this
can be handled entirely transparently by the compiler and CLR.
This last objective may be very difficult to achieve, yet will reap great
rewards in increased programmer productivity.
>
If you really, really need to use pointers and arbitrary addressing, C#
has unsafe mode... but that's why it's called "unsafe". Like any other
"escape hatch" in a language, if you find yourself using unsafe code
you should be thinking long and hard about why you need it, and whether
you have a real need or just a mental disconnect with how you're
intended to use the language.
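One reason the transparency proposed above is hard to achieve: the two kinds
of type give different answers to the same code, so a compiler that concealed
the difference would silently change program behavior. A minimal illustration
(the types are invented):

    struct SPoint { public int X; }   // value type
    class CPoint { public int X; }    // reference type

    class SemanticsDemo // C#
    {
        static void Main()
        {
            SPoint s1 = new SPoint();
            SPoint s2 = s1;                    // copy
            s2.X = 9;

            CPoint c1 = new CPoint();
            CPoint c2 = c1;                    // alias
            c2.X = 9;

            System.Console.WriteLine(s1.X);    // 0 -- s2 was an independent copy
            System.Console.WriteLine(c1.X);    // 9 -- c2 aliased the same object
        }
    }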
"Bill Butler" <qw****@asdf.comwrote in message
news:IlPqh.592$Hc5.306@trnddc03...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:ED*******************@newsfe18.lga...
>> "Larry Lard" <la*******@googlemail.comwrote in message news:51*************@mid.individual.net...
>>Peter Olcott wrote:
>>>By working 90 hours a week, I am still 80 hours a week short of what I need to get done.
You realise this means you are doomed to failure?
It means that I can't afford to waste any time, and must find shortcuts to achieve the required end-results.
You wish to switch to a dotnet-based solution to reduce the development time,
but you don't want to put the time into learning the language (C#). Sounds to
me like you are doomed to failure anyway.
I can only put in a little time to learn the language now. I am focusing this
time on assessing the feasibility of using C# and .NET for my future needs. It
looks like C# does have the performance that I need. In order to get this
performance, I must use relatively obscure nuances of C# that are barely
mentioned in even the most comprehensive textbooks. Specifically because of this
newsgroup thread, I was able to learn these relatively obscure nuances. It
definitely looks like C# and .NET will be the future of essentially ALL MS
Windows development.
I suggest sticking with C++, since that seems to be where your comfort zone
lies. Without putting in the time to learn C# and the dotnet framework, there
is no way that you will save any time on your development. Or, if you did save
time, it would perform like a pig, because you have no desire to work with the
framework, and you would fight it instead.
For months now, you have been posting the same old recycled arguments. Had you
listened to us then, you might actually be showing progress now.
I am amazed at how much time you have to argue, when you tell us you need to
work 90 hours a week just to tread water.
Bill
Peter Olcott <No****@SeeScreen.com> wrote:
Peter Olcott wrote:
By working 90 hours a week, I am still 80 hours a week short of what I need
to get done.
You realise this means you are doomed to failure?
It means that I can't afford to waste any time, and must find shortcuts to
achieve the required end-results.
No, if you need to achieve the impossible in order to succeed, then you
*are* doomed to failure. Whatever you do, you're not going to be able
to get 170 hours of work done in a week. You can reduce the amount of
work you do by working smarter, but that's not the same thing.
By the way, I'd consider trying to redesign .NET instead of going with
what has already been designed and trying to work with it as wasting
your time...
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too