Bytes IT Community

Roll your own std::vector ???

P: n/a
I need std::vector-like capability for several custom classes. I already
discussed this extensively in the thread named ArrayList without Boxing and
Unboxing. The solution was to simply create non-generic (non-C++-template)
std::vector-like capability for each of these custom classes. (The solution
must work in Visual Studio 2002.)

Since I have already written one std::vector for a Ye Olde C++ compiler (Borland
C++ 1.0) that had neither templates nor the STL, I know how to do this. What I
don't know how to do is to directly re-allocate memory in garbage-collected C#.

I have written what I need in pseudocode, what is the correct C# syntax for
this?

int Size;
int Capacity;

bool AppendDataItem(DataItemType Data) {
    if (Size == Capacity) {
        (1) Capacity = Capacity * 2; // Or * 1.5
        (2) Temp = MemoryPointer;
        (3) MemoryPointer = Allocate(Capacity);
        (4) Copy Data from Temp to MemoryPointer;
        (5) DeAllocate(Temp);
    }
    (6) MemoryPointer[Size] = Data;
    (7) Size++;
}

Dec 17 '06 #1
82 Replies


P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
wJ*******************@newsfe18.lga...

|I need std::vector like capability for several custom classes.

| I have written what I need in pseudocode, what is the correct C# syntax
for
| this?

In C#, you should not normally deallocate memory manually, as this slows down
the GC.

If you really can't or don't want to use the generic List<T> class, which
would be faster and easier, then try this code; I think it achieves what you
want :

public class Test
{
    private int size = 0;

    int[] ia = new int[4];

    public void AppendDataItem(int data)
    {
        if (size == ia.Length)
            Array.Resize(ref ia, ia.Length * 2);

        ia[size++] = data;
    }
}

{
    Test t = new Test();

    for (int i = 0; i < 10; i++)
        t.AppendDataItem(i);
}

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #2

P: n/a
"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news: ee**************@TK2MSFTNGP04.phx.gbl...

| If you really can't or don't want to use the generic List<T> class, which
| would be faster and easier, then try this code, I think it achieves what
| you want :

Sorry, I used Array.Resize(...) which is only available in .NET 2.0; you
said you wanted .NET 1 compatibility, therefore you need to use Copy or
CopyTo :

public class Test
{
    private int size = 0;

    int[] ia = new int[4];

    public void AppendDataItem(int data)
    {
        if (size == ia.Length)
        {
            int[] temp = new int[ia.Length * 2];

            Array.Copy(ia, temp, ia.Length);

            ia = temp;
        }

        ia[size++] = data;
    }
}

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #3
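As an aside, the grow-and-copy pattern from the post above can be bundled into
a self-contained class for .NET 1.x, where Array.Resize is unavailable. This is
only a sketch; the class and member names (IntVector, Add, Count, Capacity) are
my own invention, not from the thread:

```csharp
using System;

// Minimal growable int array for .NET 1.x (no generics, no Array.Resize).
// Names here (IntVector, Add, Count, Capacity) are illustrative only.
public class IntVector
{
    private int[] items = new int[4];
    private int size = 0;

    public int Count    { get { return size; } }
    public int Capacity { get { return items.Length; } }

    public int this[int index]
    {
        get { return items[index]; }
        set { items[index] = value; }
    }

    public void Add(int data)
    {
        if (size == items.Length)
        {
            // Double the capacity; copy only the elements in use.
            int[] temp = new int[items.Length * 2];
            Array.Copy(items, temp, size);
            items = temp; // the old array becomes eligible for GC
        }
        items[size++] = data;
    }
}
```

An equivalent class per element type (double, char, or a struct) avoids boxing
on .NET 1.x, at the cost of duplicated code.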

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:%2***************@TK2MSFTNGP06.phx.gbl...
"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news: ee**************@TK2MSFTNGP04.phx.gbl...

| If you really can't or don't want to use the generic List<T> class, which
| would be faster and easier, then try this code, I think it achieves what
| you want :

Sorry, I used Array.Resize(...) which is only available in .NET 2.0; you
said you wanted .NET 1 compatibility, therefore you need to use Copy or
CopyTo :

| public class Test
| {
| private int size = 0;
|
| int[] ia = new int[4];
I am assuming that you are allocating four integers here, and that I could just
as easily allocate one.
|
| public void AppendDataItem(int data)
| {
if (size == ia.Length)
{
int[] temp = new int[ia.Length * 2];

Array.Copy(ia, temp, ia.Length);
If I understand this correctly, we could improve the performance a little using
this statement instead:
Array.Copy(ia, temp, size);
Am I correct? (Destination, Source, Length) ???
>
ia = temp;
Is this like a pointer assignment in C++ ???
}

ia[size++] = data;
| }
| }

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 17 '06 #4

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
ga******************@newsfe23.lga...

| I am assuming that you are allocating four integers here, and that I could
just
| as easily allocate one.

Yes, no problem; I was assuming a default starting size that you would
typically expect with something like ArrayList.

| If I understand this correctly, we could improve the performance a little
using
| this statement instead:
| Array.Copy(ia, temp, size);

Minimally, yes.

| Am I correct? (Destination, Source, Length) ???

No, (source, destination, length).

| Is this like a pointer assignment in C++ ???

.NET does not use explicit pointers or indirection, except in unsafe code.
Everything derives from System.Object and all "object holders" are
implicitly "pointers", although the address of the pointer or of the
contents of the pointer are deemed irrelevant. Objects just "are".

The only real difference you will see in assignment is that value types
(int, double, or any other struct) are assigned as a copy, whereas reference
types (proper classes) are assigned the "reference".

e.g.

struct S
{
    public int x;
}

class C
{
    public int x;
}

{
    S i = new S();
    i.x = 2; // i.x contains 2

    S j = new S();
    j = i;   // j.x contains 2

    i.x = 3; // j.x still contains 2

    C o = new C();
    o.x = 2; // o.x contains 2

    C p = o; // p.x contains 2

    o.x = 3; // p.x now also contains 3
}

... and not a ->, & or * to be seen :-)

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #5

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:eg*************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
ga******************@newsfe23.lga...

| I am assuming that you are allocating four integers here, and that I could
just
| as easily allocate one.

Yes, no problem; I was assuming a default starting size that you would
typically expect with something like ArrayList.

| If I understand this correctly, we could improve the performance a little
using
| this statement instead:
| Array.Copy(ia, temp, size);

Minimally, yes.
If the current allocation is 100 MB, and the current size is 50 MB, the
difference might not be so trivial.
>
| Am I correct? (Destination, Source, Length) ???

No, (source, destination, length).

| Is this like a pointer assignment in C++ ???

.NET does not use explicit pointers or indirection, except in unsafe code.
Everything derives from System.Object and all "object holders" are
implicitly "pointers", although the address of the pointer or of the
contents of the pointer are deemed irrelevant. Objects just "are".
So it's sort of like the way that C/C++ treats arrays?
>
The only real difference you will see in assignment is that value types
(int, double, or any other struct) are assigned as a copy, whereas reference
types (proper classes) are assigned the "reference".
Array.Copy(ia, temp, ia.Length);
ia = temp;

So exactly what is going on under the covers with this last statement? It
certainly is not copying the whole array again, is it?

It has got to be some sort of copying of pointers doesn't it? Would it copy the
whole array again in the last statement if the underlying type was a value type
such as int?
>
e.g.

struct S
{
public int x;
}

class C
{
public int x;
}

{
S i = new S();
i.x = 2; // i.x contains 2

S j = new S();
j = i; // j.x contains 2

i.x = 3; // j.x still contains 2

C o = new C();
o.x = 2; // o.x contains 2

C p = o; // p.x contains 2

o.x = 3; // p.x now also contains 3
}

... and not a ->, & or * to be seen :-)

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 17 '06 #6

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
FV******************@newsfe23.lga...

| If the current allocation is 100 MB, and the current size is 50 MB, the
| difference might not be so trivial.

Assuming that the underlying implementation of the array holds a field that
holds the length of the array, then all that the Length property would do
would be to return the value held in that field. In which case the only
overhead would be the call through the property accessor rather than
straight to an externally held field.

| So its sort of like the way that C/C++ treats arrays?

It's a long time since I did C/C++ so I could not comment on that.

| Array.Copy(ia, temp, ia.Length);
| ia = temp;
|
| So exactly what is going on under the covers with this last statement? It
| certainly is not copying the whole array again, is it?

No, the first statement copies the items in the ia array to the first half
of the new temp array as expected. Then the second assignment is a simple
"reference" assignment where ia gets a reference to temp which will replace
the original array it was holding; the original array will be lost and then
be eligible for garbage collection.

Change an item in temp and ia would see the change; they are both "pointers"
to the same array.
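A short sketch of that aliasing (variable names are mine, not from the thread):

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        int[] ia = new int[] { 1, 2, 3, 4 };
        int[] temp = new int[8];
        Array.Copy(ia, temp, ia.Length);

        ia = temp;    // reference assignment: no element copying happens here
        temp[0] = 99; // visible through ia as well

        Console.WriteLine(ia[0]);                            // 99
        Console.WriteLine(object.ReferenceEquals(ia, temp)); // True
    }
}
```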

| It has got to be some sort of copying of pointers doesn't it? Would it
copy the
| whole array again in the last statement if the underlying type was a value
type
| such as int?

As I said in my previous post, the contents of value types are copied on
assignment, but those of reference types are simply "pointed to" by the
second reference.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #7

P: n/a
Peter Olcott <No****@SeeScreen.com> wrote:
The only real difference you will see in assignment is that value types
(int, double, or any other struct) are assigned as a copy, whereas reference
types (proper classes) are assigned the "reference".

Array.Copy(ia, temp, ia.Length);
ia = temp;

So exactly what is going on under the covers with this last statement? It
certainly is not copying the whole array again, is it?
It's copying the reference, not the contents of the array.
It has got to be some sort of copying of pointers doesn't it? Would it copy the
whole array again in the last statement if the underlying type was a value type
such as int?
No. Arrays are always reference types, even if they're arrays of value
types. The difference is that if you have an array of a reference type
(eg String[]) each element of the array is a *reference* to a string,
not the string data itself. If you have an array of a value type
(eg int[]) then each element of the array is a value directly.
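A small illustration of this distinction (a hypothetical snippet, not from the
thread):

```csharp
using System;

class ElementDemo
{
    static void Main()
    {
        // Array of a value type: each slot holds the int itself.
        int[] a = { 1, 2, 3 };
        int[] b = new int[3];
        Array.Copy(a, b, 3);     // copies the values
        b[0] = 42;
        Console.WriteLine(a[0]); // 1 - a is unaffected

        // Array of a reference type: each slot holds a reference.
        string[] s = { "x", "y" };
        string[] t = new string[2];
        Array.Copy(s, t, 2);     // copies the references, not the string data
        Console.WriteLine(object.ReferenceEquals(s[0], t[0])); // True
    }
}
```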

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 17 '06 #8

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:OR**************@TK2MSFTNGP02.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
FV******************@newsfe23.lga...

| If the current allocation is 100 MB, and the current size is 50 MB, the
| difference might not be so trivial.

Assuming that the underlying implementation of the array holds a field that
holds the length of the array, then all that the Length property would do
would be to return the value held in that field. In which case the only
overhead would be the call through the property accessor rather than
straight to an externally held field.
I saw where I goofed: I thought you were copying the capacity of the new array;
you are actually copying the capacity of the old array. Copying the capacity of
the new array would index out-of-bounds on the old array. My mistake.
| So its sort of like the way that C/C++ treats arrays?

It's a long time since I did C/C++ so I could not comment on that.
The commonly understood basic principle where an array name is
one-and-the-same-thing as an array address.
>
| Array.Copy(ia, temp, ia.Length);
| ia = temp;
|
| So exactly what is going on under the covers with this last statement? It
| certainly is not copying the whole array again, is it?

No, the first statement copies the items in the ia array to the first half
of the new temp array as expected. Then the second assignment is a simple
"reference" assignment where ia gets a reference to temp which will replace
the original array it was holding; the original array will be lost and then
be eligible for garbage collection.

Change an item in temp and ia would see the change; they are both "pointers"
to the same array.
Yes that is what I was talking about, they are both pointers within the
underlying architecture.
>
| It has got to be some sort of copying of pointers doesn't it? Would it
copy the
| whole array again in the last statement if the underlying type was a value
type
| such as int?

As I said in my previous post, the contents of value types are copied on
assignment, but those of reference types are simply "pointed to" by the
second reference.
Array.Copy(ia, temp, ia.Length);
ia = temp;

So the last statement contains two reference types, even though the underlying
type of array element may be a value type or a reference type ???
>
Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 17 '06 #9

P: n/a

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Peter Olcott <No****@SeeScreen.com> wrote:
The only real difference you will see in assignment is that value types
(int, double, or any other struct) are assigned as a copy, whereas
reference
types (proper classes) are assigned the "reference".

Array.Copy(ia, temp, ia.Length);
ia = temp;

So exactly what is going on under the covers with this last statement? It
certainly is not copying the whole array again, is it?

It's copying the reference, not the contents of the array.
>It has got to be some sort of copying of pointers doesn't it? Would it copy
the
whole array again in the last statement if the underlying type was a value
type
such as int?

No. Arrays are always reference types, even if they're arrays of value
types. The difference is that if you have an array of a reference type
(eg String[]) each element of the array is a *reference* to a string,
not the string data itself. If you have an array of a value type
(eg int[]) then each element of the array is a value directly.
So it looks like if I want to avoid the huge overhead of boxing and unboxing
that value types have, I must compose all of my (hand-rolled std::vector-like)
arrays of value types, which can include elemental types such as int, char,
double, and also composite types such as struct, but not composite types such
as class. Is this right ???
>
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Dec 17 '06 #10

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
pV******************@newsfe19.lga...

| The commonly understood basic principle where an array name is
| one-and-the-same-thing as an array address.

In that case, that would appear to be the principle for all .NET types. The
variable that you see *is* the object that the variable points to. There is
no indirection on the surface, even though the underlying framework may
involve such.

I think the thing that you are having trouble with is this lack of
indirection; when it comes to C#, you need to try to forget about ->, & and
* and simply think of variables as being one and the same as the object
which they hold. For the sake of your comprehension, forget about pointers
and addresses :-)

| Yes that is what I was talking about, they are both pointers within the
| underlying architecture.

Maybe but that should not concern you; forget pointers :-)).

| As I said in my previous post, the contents of value types are copied on
| assignment, but those of reference types are simply "pointed to" by the
| second reference.
|
| Array.Copy(ia, temp, ia.Length);
| ia = temp;
|
| So the last statement contains two reference types, even though the
underlying
| type of array element may be a value type or a reference type ???

Array is a reference type, regardless of the type that it holds. See Jon's
post as well.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #11

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:eJ**************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
pV******************@newsfe19.lga...

| The commonly understood basic principle where an array name is
| one-and-the-same-thing as an array address.

In that case, that would appear to be the principle for all .NET types. The
variable that you see *is* the object that the variable points to. There is
no indirection on the surface, even though the underlying framework may
involve such.

I think the thing that you are having trouble with is this lack of
indirection; when it comes to C#, you need to try to forget about ->, & and
* and simply think of variables as being one and the same as the object
which they hold. For the sake of your comprehension, forget about pointers
and addresses :-)
One can't forget about this completely, otherwise one makes the mistake of
taking a shallow copy to be one-and-the-same thing as a deep copy. I think that
.NET may have simplified this somewhat in some ways; I am currently not sure of
exactly how they did this. The way that this problem is typically simplified is
to always provide all of the overhead of a deep copy just in case that is what
was wanted.
>
| Yes that is what I was talking about, they are both pointers within the
| underlying architecture.

Maybe but that should not concern you; forget pointers :-)).
Yet then one must still wonder about the shallow versus deep copy problem, and
avoiding the overhead of the deep copy when it is not needed. The best solution
at this point in time for applications programming might be to simply always do
a deep copy, and make doing a shallow copy syntactically impossible. There are
cases in systems programming where this would be unacceptable.
>
| As I said in my previous post, the contents of value types are copied on
| assignment, but those of reference types are simply "pointed to" by the
| second reference.
|
| Array.Copy(ia, temp, ia.Length);
| ia = temp;
|
| So the last statement contains two reference types, even though the
underlying
| type of array element may be a value type or a reference type ???

Array is a reference type, regardless of the type that it holds. See Jon's
post as well.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 17 '06 #12

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
ZY******************@newsfe19.lga...

| So it look like if I want to avoid the huge overhead of boxing and
unboxing that
| value types have, I must comprise all of my (hand-rolled std::vector like)
| arrays of value type which can include elemental types such as int, char,
| double, and also composite types such as struct, but not composite types
such as
| class. Is this right ???

Or you can simply use the List<T> generic class under .NET 2.0, as this
provides a natively typesafe, dynamic collection that doesn't use any boxing
or unboxing.

WYSIWYG. It says it is a list of a certain type, it *is* a list of a certain
type - no casting required.

Why do you want to write your own version of something that already exists
and does the job, possibly better than your attempt ?

{
    List<int> intList = new List<int>();

    intList.Add(123);   // compiles

    intList.Add("123"); // will not compile

    ...
}

Or if you want to have your own generic Vector class, then do something like
this :

public class Vector<T> : IEnumerable<T>
{
    private List<T> items = new List<T>();

    public int Size
    {
        get { return items.Count; }
    }

    public bool Empty
    {
        get { return items.Count == 0; }
    }

    public void PushBack(T item)
    {
        items.Add(item);
    }

    public T this[int index]
    {
        get { return items[index]; }
    }

    // ... etc

    #region IEnumerable members

    IEnumerator IEnumerable.GetEnumerator()
    {
        return items.GetEnumerator();
    }

    #endregion

    #region IEnumerable<T> members

    public IEnumerator<T> GetEnumerator()
    {
        return items.GetEnumerator();
    }

    #endregion
}

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #13

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
FF*****************@newsfe22.lga...

| One can't forget about this completely otherwise ones makes the mistake of
| taking a shallow copy to be one-and-the-same-thing as a deep copy. I think
that
| .NET may have simplified this somewhat in some ways, I am currently not
sure of
| exactly how they did this. The way that this problem is typically
simplified is
| to always provide all of the overhead of a deep copy just in case that is
what
| was wanted.

You don't need to think about pointers and addresses to separate shallow and
deep copy semantics.

All types support shallow cloning by means of the derived protected
System.Object.MemberwiseClone() method which simply copies the contents of
instance fields following the semantics of the field type.

| Yet then one must still wonder about the shallow versus deep copy problem,
and
| avoiding the overhead of the deep copy, when it is not needed. The best
solution
| at this point in time for applications programming might be to simply
always do
| a deep copy, and make doing a shallow copy syntactically impossible. There
are
| cases on systems programming where this would be unacceptable.

Deep cloning is usually only available if the type supports ICloneable, but
is still at the discretion of the implementer as to how deep that cloning
goes.
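To illustrate the distinction, here is a minimal sketch; the Person and Address
types are invented examples, not from the thread. MemberwiseClone gives the
shallow copy, and an ICloneable implementation can choose to go deeper:

```csharp
using System;

// Invented example types for shallow vs deep copy.
class Address
{
    public string City;
}

class Person : ICloneable
{
    public string Name;
    public Address Home = new Address();

    // Shallow: copies the Name reference and the Home reference,
    // so both Person objects share one Address instance.
    public Person ShallowClone()
    {
        return (Person)MemberwiseClone();
    }

    // Deep (by choice): this Clone also duplicates the nested Address.
    public object Clone()
    {
        Person copy = (Person)MemberwiseClone();
        copy.Home = new Address();
        copy.Home.City = Home.City;
        return copy;
    }
}
```

After p.ShallowClone(), a change to p.Home.City is visible through the clone;
after (Person)p.Clone(), it is not.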

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 17 '06 #14

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:Oh**************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
ZY******************@newsfe19.lga...

| So it look like if I want to avoid the huge overhead of boxing and
unboxing that
| value types have, I must comprise all of my (hand-rolled std::vector like)
| arrays of value type which can include elemental types such as int, char,
| double, and also composite types such as struct, but not composite types
such as
| class. Is this right ???

Or you can simply use the List<T> generic class under .NET 2.0 as this provides
a native typesafe, dynamic collection that doesn't use any boxing or
unboxing.
Or even better, use the actual std::vector itself that is now available with
the currently released version of Visual Studio; that way I reduce my learning
curve further still. There are two reasons why I don't want to upgrade yet:
(1) price, (2) the exams refer to the older version.
>
WYSIWYG. It says it is a list of a certain type, it *is* a list of a certain
type - no casting required.

Why do you want to write your own version of something that already exists
and does the job, possibly better than your attempt ?

{
List<int> intList = new List<int>();

intList.Add(123); // compiles

intList.Add("123"); // will not compile

...
}

Or if you want to have your own generic Vector class, then do something like
this :

public class Vector<T> : IEnumerable<T>
{
    private List<T> items = new List<T>();

    public int Size
    {
        get { return items.Count; }
    }

    public bool Empty
    {
        get { return items.Count == 0; }
    }

    public void PushBack(T item)
    {
        items.Add(item);
    }

    public T this[int index]
    {
        get { return items[index]; }
    }

    // ... etc

    #region IEnumerable members

    IEnumerator IEnumerable.GetEnumerator()
    {
        return items.GetEnumerator();
    }

    #endregion

    #region IEnumerable<T> members

    public IEnumerator<T> GetEnumerator()
    {
        return items.GetEnumerator();
    }

    #endregion
}

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 18 '06 #15

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
XB****************@newsfe14.lga...

| Of even better use the actual std::vector itself that is now available
with the
| currently released version of visual studio, that way I further still
reduce my
| learning curve. There are two reason why I don't want to upgrade yet, (1)
Price,
| (2) The exams refer to the older version.

I think you will find that List<T> should give you most of the functionality
you need. (1) You can get a copy of Visual Studio 2005 Express for free;
(2) are you more interested in writing code and achieving results or passing
exams ? :-)

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 18 '06 #16

P: n/a

Are you doing all this work because you've profiled your app or done
tests to confirm that boxing/unboxing is an unacceptable performance
hit or is all this extra work and overhead just because you think
boxing is going to be a problem?

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Sun, 17 Dec 2006 08:42:52 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>I need std::vector like capability for several custom classes. I already
discussed this extensively in the thread named ArrayList without Boxing and
Unboxing. The solution was to simply create non-generic (non C++ template)
std::vector like capability for each of these custom classes. (Solution must
work in Visual Studio 2002).

Since I have already written one std::vector for a YeOlde C++ compiler (Borland
C++ 1.0) that had neither templates nor STL, I know how to do this. What I don't
know how to do is to directly re-allocate memory in the garbage collected C#.

I have written what I need in pseudocode, what is the correct C# syntax for
this?

int Size;
int Capacity;

bool AppendDataItem(DataItemType Data) {
if (Size == Capacity) {
(1) Capacity = Capacity * 2; // Or * 1.5
(2) Temp = MemoryPointer;
(3) MemoryPointer = Allocate(Capacity);
(4) Copy Data from Temp to MemoryPointer;
(5) DeAllocate(Temp);
(6) MemoryPointer[Size] = Data;
(7) Size++;
}
}

Dec 18 '06 #17

P: n/a
Systems programming is entirely different than applications programming. When
you are amortizing development costs over millions of users, you just don't test
something to see if it's good enough. In this case you shoot for the ballpark of
as good as possible, right from the very beginning.

"Samuel R. Neff" <sa********@nomail.com> wrote in message
news:e9********************************@4ax.com...
>
Are you doing all this work because you've profiled your app or done
tests to confirm that boxing/unboxing is an unacceptable performance
hit or is all this extra work and overhead just because you think
boxing is going to be a problem?

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Sun, 17 Dec 2006 08:42:52 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>>I need std::vector like capability for several custom classes. I already
discussed this extensively in the thread named ArrayList without Boxing and
Unboxing. The solution was to simply create non-generic (non C++ template)
std::vector like capability for each of these custom classes. (Solution must
work in Visual Studio 2002).

Since I have already written one std::vector for a YeOlde C++ compiler
(Borland
C++ 1.0) that had neither templates nor STL, I know how to do this. What I
don't
know how to do is to directly re-allocate memory in the garbage collected C#.

I have written what I need in pseudocode, what is the correct C# syntax for
this?

int Size;
int Capacity;

bool AppendDataItem(DataItemType Data) {
if (Size == Capacity) {
(1) Capacity = Capacity * 2; // Or * 1.5
(2) Temp = MemoryPointer;
(3) MemoryPointer = Allocate(Capacity);
(4) Copy Data from Temp to MemoryPointer;
(5) DeAllocate(Temp);
(6) MemoryPointer[Size] = Data;
(7) Size++;
}
}


Dec 18 '06 #18

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:Oo*************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
XB****************@newsfe14.lga...

| Of even better use the actual std::vector itself that is now available
with the
| currently released version of visual studio, that way I further still
reduce my
| learning curve. There are two reason why I don't want to upgrade yet, (1)
Price,
| (2) The exams refer to the older version.

I think you will find that List<T> should give you most of the functionality
you need. (1)You can get a copy of Visual Studio 2005 Express for free (2)
are you more interested in writing code and achieving results or passing
exams ? :-)
Both simultaneously without any tradeoffs if possible.
(1) Where can I get this free copy?
(2) This free copy probably is not licensed for commercial use, and even if it
is, it would most likely be missing the code optimizers.
>
Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 18 '06 #19

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
mG*****************@newsfe19.lga...

| Systems programming is entirely different than applications programming.
When
| you are amortizing development costs over millions of users, you just
don't test
| something to see if its good enough. In this case you shoot for the
ballpark of
| as good as possible, right from the very beginning.

There is a well known anti-pattern called "Premature Optimisation", beware
of it, prove that something is too slow before wasting time and effort
trying to speed it up.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 18 '06 #20

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
HI*****************@newsfe19.lga...

| Both simultaneously without any tradeoffs if possible.

Hmmm, if I were you, I would download VS Express and see what it can achieve
before planning for failure :-)

| (1) Where can I get this free copy?

http://msdn.microsoft.com/vstudio/express/default.aspx

| (2) This free copy probably is not licensed for commercial use, and even
if it
| is, would most likely be missing the code optimizers.

http://msdn.microsoft.com/vstudio/express/support/faq/

... and I doubt whether the compiler would be specialised to be inefficient.

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 18 '06 #21

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:Oi**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
mG*****************@newsfe19.lga...

| Systems programming is entirely different than applications programming.
When
| you are amortizing development costs over millions of users, you just
don't test
| something to see if its good enough. In this case you shoot for the
ballpark of
| as good as possible, right from the very beginning.

There is a well known anti-pattern called "Premature Optimisation", beware
of it, prove that something is too slow before wasting time and effort
trying to speed it up.
Yes, and I have found that I sometimes make this mistake, and must guard
against it. On the other hand, when the cost of development is to be amortized
across millions of users, one must provide a much better initial design than
is typical of most development.

If I as one programmer intend to make a product that is superior to the products
offered by the software giants, I can not just throw a few things together and
test them to see if they are good enough. The essential architecture must be
very good in every way, and it must be very good from the very beginning.

I intend to make the Borland Turbo Pascal 3.0 of my product category (10 times
the quality for 1/10 the cost).
>
Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 18 '06 #22

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:um**************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
HI*****************@newsfe19.lga...

| Both simultaneously without any tradeoffs if possible.

Hmmm, if I were you, I would download VS Express and see what it can achieve
before planning for failure :-)

| (1) Where can I get this free copy?

http://msdn.microsoft.com/vstudio/express/default.aspx

| (2) This free copy probably is not licensed for commercial use, and even
if it
| is, would most likely be missing the code optimizers.

http://msdn.microsoft.com/vstudio/express/support/faq/

... and I doubt whether the compiler would be specialised to be inefficient.
Lacking a code optimizer (as all of the cheaper versions of Visual Studio)
typically results in code that executes from 500% to twenty-fold slower. Writing
the code in C# .NET instead of C++ .NET often results in 500% slower code, since
the C++ optimizer has had many more years of fine-tuning it can produce superior
code.
>
Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 18 '06 #23

P: n/a
Peter Olcott <No****@SeeScreen.com> wrote:
No. Arrays are always reference types, even if they're arrays of value
types. The difference is that if you have an array of a reference type
(eg String[]) each element of the array is a *reference* to a string,
not the string data itself. If you have an array of a value type
(eg int[]) then each element of the array is a value directly.

So it look like if I want to avoid the huge overhead of boxing and unboxing that
value types have, I must comprise all of my (hand-rolled std::vector like)
arrays of value type which can include elemental types such as int, char,
double, and also composite types such as struct, but not composite types such as
class. Is this right ???
Yes, pretty much. I'd take a bit longer learning the basics of the
language before going too much further with a performance-critical
project, however: using a language you're unfamiliar with, you're
always likely to do things the way you're familiar with rather than the
idiomatic way to start with.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 18 '06 #24

P: n/a
Peter Olcott <No****@SeeScreen.com> wrote:
http://msdn.microsoft.com/vstudio/express/support/faq/

... and i doubt whether the compiler would be specialised to be inefficient.

Lacking a code optimizer (as all of the cheaper versions of Visual Studio)
typically results in code that executes from 500% to twenty-fold slower.
The C# compiler is part of the framework, not part of Visual Studio
itself. Likewise, the JIT compiler (which is where most of the
optimisations occur) is part of the CLR itself.

Don't assume that just because cheap/free versions of other editions of
Visual Studio had weaker compilers means the same is true for VS.NET.
Writing the code in C# .NET instead of C++ .NET often results in 500%
slower code, since the C++ optimizer has had many more years of
fine-tuning it can produce superior code.
Care to produce evidence for this "often" statement? It can happen,
certainly - but in my experience most of the time, C# will end up
"roughly" the same speed as C++ (eg within 20% of the C++ performance,
and sometimes faster than the C++).

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 18 '06 #25

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
a7*******************@newsfe23.lga...

| Lacking a code optimizer (as all of the cheaper versions of Visual Studio)
| typically results in code that executes from 500% to twenty-fold slower.
Writing
| the code in C# .NET instead of C++ .NET often results in 500% slower code,
since
| the C++ optimizer has had many more years of fine-tuning it can produce
superior
| code.

AFAICT, no version of Visual Studio has an optimiser. Optimisation of .NET
code is usually done by the runtime, platform/cpu specific JITter.

You quote a possible 500% difference in performance, yet you still haven't
actually tried your particular algorithm. Do yourself a favour, end your own
speculation, and get yourself the free version, which will give just the
same end IL code as the pro versions.

Coming from a Delphi background, this native vs .NET speed argument has
cropped up often in the Borland fora. Always, the naysayers were basing
their opinions on rumour and gossip, not on actual evaluations of .NET code
which, on many occasions, turns out to be faster than native code.

It is true that .NET may be slow on the very first execution of a particular
piece of code, but after that first JIT compilation, subsequent executions
turn out to be as fast, if not faster than standard native code. This is
mainly because, once compiled and cached, .NET code *is* native code. What
is more the JIT compiler optimises according to the platform the program
runs on.

Stop prevaricating and let us know what you think of the real C# compiler,
not your theoretical one :-))

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer
Dec 18 '06 #26

P: n/a

So you're saying you haven't done any testing to see if boxing is
producing any noticable performance effect and you are prematurely
optimizing based on assumptions with no real-world profiling or
testing.

Thanks for confirming exactly what I asked. :-)

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Mon, 18 Dec 2006 13:06:14 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>Systems programming is entirely different than applications programming. When
you are amortizing development costs over millions of users, you just don't test
something to see if its good enough. In this case you shoot for the ballpark of
as good as possible, right from the very beginning.

"Samuel R. Neff" <sa********@nomail.com> wrote in message
news:e9********************************@4ax.com...
>>
Are you doing all this work because you've profiled your app or done
tests to confirm that boxing/unboxing is an unacceptable performance
hit or is all this extra work and overhead just because you think
boxing is going to be a problem?

Sam
Dec 18 '06 #27

P: n/a

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Peter Olcott <No****@SeeScreen.com> wrote:
http://msdn.microsoft.com/vstudio/express/support/faq/

... and i doubt whether the compiler would be specialised to be
inefficient.

Lacking a code optimizer (as all of the cheaper versions of Visual Studio)
typically results in code that executes from 500% to twenty-fold slower.

The C# compiler is part of the framework, not part of Visual Studio
itself. Likewise, the JIT compiler (which is where most of the
optimisations occur) is part of the CLR itself.

Don't assume that just because cheap/free versions of other editions of
Visual Studio had weaker compilers means the same is true for VS.NET.
>Writing the code in C# .NET instead of C++ .NET often results in 500%
slower code, since the C++ optimizer has had many more years of
fine-tuning it can produce superior code.

Care to produce evidence for this "often" statement? It can happen,
certainly - but in my experience most of the time, C# will end up
"roughly" the same speed as C++ (eg within 20% of the C++ performance,
and sometimes faster than the C++).
http://www.tommti-systems.de/go.html...enchmarks.html

This is from published benchmarks and Microsoft's own recommendations. At least
much of optimization, and possibly most of optimization must occur before the
jitter ever sees the code. This is because much semantic information is lost
after the code has been initially translated, thus there is less of a basis for
semantically equivalent transformations.
>
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Dec 18 '06 #28

P: n/a

"Joanna Carter [TeamB]" <jo****@not.for.spam> wrote in message
news:e4****************@TK2MSFTNGP02.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message news:
a7*******************@newsfe23.lga...

| Lacking a code optimizer (as all of the cheaper versions of Visual Studio)
| typically results in code that executes from 500% to twenty-fold slower.
Writing
| the code in C# .NET instead of C++ .NET often results in 500% slower code,
since
| the C++ optimizer has had many more years of fine-tuning it can produce
superior
| code.

AFAICT, no version of Visual Studio has an optimiser. Optimisation of .NET
code is usually done by the runtime, platform/cpu specific JITter.
That would directly contradict Microsoft's published recommendation of using C++
for maximum .NET performance, thus I would expect that you are simply wrong on
this point.
>
You quote a possible 500% difference in performance, yet you still haven't
actually tried your particular algorithm. Do yourself a favour, end your own
speculation, and get yourself the free version, which will give just the
same end IL code as the pro versions.
C++ .NET also has a smaller learning curve for me than C# .NET.
>
Coming from a Delphi background, this native vs .NET speed argument has
cropped up often in the Borland fora. Always, the naysayers were basing
their opinions on rumour and gossip, not on actual evaluations of .NET code
which, on many occasions, turns out to be faster than native code.
http://www.tommti-systems.de/go.html...enchmarks.html
http://www.osnews.com/img/5602/results.jpg

The above shows a big difference between native code and .NET code on the first
link.
The second link shows a big difference between .NET code across different
language compilers.
>
It is true that .NET may be slow on the very first execution of a particular
piece of code, but after that first JIT compilation, subsequent executions
turn out to be as fast, if not faster than standard native code. This is
mainly because, once compiled and cached, .NET code *is* native code. What
is more the JIT compiler optimises according to the platform the program
runs on.

Stop prevaricating and let us know what you think of the real C# compiler,
not your theoretical one :-))

Joanna

--
Joanna Carter [TeamB]
Consultant Software Engineer


Dec 18 '06 #29

P: n/a
I don't have the time to learn something to see if it is worth learning, I must
accomplish at least 40 hours worth of work, every day (notice that I did not say
every week, I must accomplish 60% more work than there are hours in a day).

"Samuel R. Neff" <sa********@nomail.com> wrote in message
news:fk********************************@4ax.com...
>
So you're saying you haven't done any testing to see if boxing is
producing any noticable performance effect and you are prematurely
optimizing based on assumptions with no real-world profiling or
testing.

Thanks for confirming exactly what I asked. :-)

Sam

------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Mon, 18 Dec 2006 13:06:14 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>>Systems programming is entirely different than applications programming. When
you are amortizing development costs over millions of users, you just don't
test
something to see if its good enough. In this case you shoot for the ballpark
of
as good as possible, right from the very beginning.

"Samuel R. Neff" <sa********@nomail.com> wrote in message
news:e9********************************@4ax.com...
>>>
Are you doing all this work because you've profiled your app or done
tests to confirm that boxing/unboxing is an unacceptable performance
hit or is all this extra work and overhead just because you think
boxing is going to be a problem?

Sam

Dec 18 '06 #30

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote:
http://www.tommti-systems.de/go.html...enchmarks.html
http://www.osnews.com/img/5602/results.jpg
The above shows a big difference between native code and .NET code on the first
link.
???

The link shows C# to be within a factor between 1 and 2 for pretty
much everything. The only outlier is "list", but it's a bad benchmark:
the c++ version uses vector<int>, and the c# uses ArrayList, but as
we've discussed in this thread it should instead use List<int>.

--
Lucian
Dec 19 '06 #31

P: n/a

"Lucian Wischik" <lu***@wischik.com> wrote in message
news:pr********************************@4ax.com...
"Peter Olcott" <No****@SeeScreen.com> wrote:
>>
http://www.tommti-systems.de/go.html...enchmarks.html
http://www.osnews.com/img/5602/results.jpg
The above shows a big difference between native code and .NET code on the
first
link.

???

The link shows C# to be within a factor between 1 and 2 for pretty
much everything. The only outlier is "list", but it's a bad benchmark:
the c++ version uses vector<int>, and the c# uses ArrayList, but as
we've discussed in this thread it should instead use List<int>.

--
Lucian
But then that is my whole point in a prior thread. Generics were not available
when this benchmark was created, and thus the drastic boxing and unboxing
overhead becomes quite apparent.
Dec 19 '06 #32

P: n/a
Peter Olcott <No****@SeeScreen.com> wrote:
Care to produce evidence for this "often" statement? It can happen,
certainly - but in my experience most of the time, C# will end up
"roughly" the same speed as C++ (eg within 20% of the C++ performance,
and sometimes faster than the C++).

http://www.tommti-systems.de/go.html...ti-systems.de/
main-Dateien/reviews/languages/benchmarks.html

This is from published benchmarks and Microsoft's own recommendations.
a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation
c) There's only *one* test where C# is five times slower than C++ - so
surely you can accept that your statement that .NET "typically" results
in code that executes 5-20x slower is a gross exaggeration. Do you
usually regard one in fourteen as "typical"?
At least
much of optimization, and possibly most of optimization must occur before the
jitter ever sees the code.
No, I believe the JIT does most of the optimisation, actually.
This is because much semantic information is lost after the code has
been initially translated, thus there is less of a basis for
semantically equivalent transformations.
If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimisation.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 19 '06 #33

P: n/a
Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
>a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation
Okay, here are my results for that benchmark:

c++ using vector: 28 seconds
C#.net1 using ArrayList: 19.3 seconds
C#.net2 using ArrayList: 19.5 seconds
C#.net2 using List<int>: 19.1 seconds
Also, VS2005 and Visual C# Express were identical in speed.

Of those 19 seconds, 6 of them are spent building a 10,000 element
list by repeatedly inserting a new member at the front, 10 of them are
spent clearing two 10,000 element lists by repeatedly removing the
element at the front, and the rest of the work is O(n) rather than
O(n^2).

So this benchmark is basically just doing memcpy() on a 40k block of
memory, repeatedly. It says nothing about the relative cost of boxing
or unboxing. That's why the generics are no faster than the ArrayList.

I don't know why I'm getting the exact opposite results from that
webpage -- I'm getting a moderately slower c++vector, whereas the
webpage gets an extremely slower c#.net1. Obviously one of us has a
typo somewhere! I also don't know why ArrayList should be any faster
than c++vector.

Here, I've distilled out the essence (the O(n^2) part) of that anomalous
benchmark:

// c++ version
//
int main()
{ int t = GetTickCount();
for (int j=0; j<100; j++)
{ vector<int> v;
for (int i=0; i<10000; i++) v.push_back(i);
vector<int> v2 = v;
vector<int> v3;
for (int i=0; i<10000; i++) v3.insert(v3.begin(),i);
while (!v.empty()) v.erase(v.begin());
while (!v2.empty()) v2.erase(v2.begin());
printf(".");
}
printf("\nTime: %i\n",(int)(GetTickCount()-t));
return 0;
}
// c# version
//
static void Main(string[] args)
{ DateTime startTime = DateTime.Now;
for(int j=0; j<100; j++)
{ ArrayList v = new ArrayList();
for (int i=0; i<10000; i++) v.Insert(v.Count, i);
ArrayList v2 = new ArrayList(v);
ArrayList v3 = new ArrayList();
for (int i=0; i<10000; i++) v3.Insert(0,i);
while (v2.Count>0) v2.RemoveAt(0);
while (v3.Count>0) v3.RemoveAt(0);
Console.Write(".");
}
Console.WriteLine("\nVector elapsed time: " +
DateTime.Now.Subtract(startTime).TotalMilliseconds + " ms");
}
--
Lucian
Dec 19 '06 #34

P: n/a

If you don't have time to learn new things you're really in the wrong
business.

Sam
------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Mon, 18 Dec 2006 17:44:04 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>I don't have the time to learn something to see if it is worth learning
Dec 19 '06 #35

P: n/a

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Peter Olcott <No****@SeeScreen.com> wrote:
Care to produce evidence for this "often" statement? It can happen,
certainly - but in my experience most of the time, C# will end up
"roughly" the same speed as C++ (eg within 20% of the C++ performance,
and sometimes faster than the C++).

http://www.tommti-systems.de/go.html...ti-systems.de/
main-Dateien/reviews/languages/benchmarks.html

This is from published benchmarks and Microsoft's own recommendations.

a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation
c) There's only *one* test where C# is five times slower than C++ - so
surely you can accept that your statement that .NET "typically" results
in code that executes 5-20x slower is a gross exaggeration. Do you
usually regard one in fourteen as "typical"?
Since I always use the construct where C# did 500% worse, and good programming
practice would require always using such a construct, then C# is often (if not
typically) 500% slower. As it turns out, the solution derived from this thread
directly addresses this specific problem. The Boxing and UnBoxing of ArrayList
simply costs way too much.
>
>At least
much of optimization, and possibly most of optimization must occur before the
jitter ever sees the code.

No, I believe the JIT does most of the optimisation, actually.
>This is because much semantic information is lost after the code has
been initially translated, thus there is less of a basis for
semantically equivalent transformations.

If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimization.
That is not the way that compiler optimization works. Too much semantic
information is lost after it has been translated into the intermediate code for
much optimization to be applied. I posted another benchmark across the different
.NET languages, there was a substantial difference.
>
--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Dec 19 '06 #36

P: n/a
"Lucian Wischik" <lu***@wischik.com> wrote in message
news:a6********************************@4ax.com...
Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
>>a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation

Okay, here are my results for that benchmark:

c++ using vector: 28 seconds
C#.net1 using ArrayList: 19.3 seconds
C#.net2 using ArrayList: 19.5 seconds
C#.net2 using List<int>: 19.1 seconds
Also, VS2005 and Visual C# Express were identical in speed.
Obviously something has changed. Did you use the same version of .NET ???
Also it might be that the compiler did much worse on std::vector rather than did
much better on ArrayList. You might try this test with different levels of
optimization, in some cases the optimizer might be eliminating some of the
steps.
>
Of those 19 seconds, 6 of them are spent building a 10,000 element
list by repeatedly inserting a new member at the front, 10 of them are
spent clearing two 10,000 element lists by repeatedly removing the
element at the front, and the rest of the work is O(n) rather than
O(n^2).

So this benchmark is basically just doing memcpy() on a 40k block of
memory, repeatedly. It says nothing about the relative cost of boxing
or unboxing. That's why the generics are no faster than the ArrayList.

I don't know why I'm getting the exact opposite resoluts from that
webpage -- I'm getting a moderately slower c++vector, whereas the
webpage gets an extremely slower c#.net1. Obviously one of us has a
typo somewhere! I also don't know why ArrayList should be any faster
than c++vector.

Here, I've distilled out the essence (the O(n^2) part) of that anomalous
benchmark:

// c++ version
//
int main()
{ int t = GetTickCount();
for (int j=0; j<100; j++)
{ vector<int> v;
for (int i=0; i<10000; i++) v.push_back(i);
vector<int> v2 = v;
vector<int> v3;
for (int i=0; i<10000; i++) v3.insert(v3.begin(),i);
while (!v.empty()) v.erase(v.begin());
while (!v2.empty()) v2.erase(v2.begin());
printf(".");
}
printf("\nTime: %i\n",(int)(GetTickCount()-t));
return 0;
}
// c# version
//
static void Main(string[] args)
{ DateTime startTime = DateTime.Now;
for(int j=0; j<100; j++)
{ ArrayList v = new ArrayList();
for (int i=0; i<10000; i++) v.Insert(v.Count, i);
ArrayList v2 = new ArrayList(v);
ArrayList v3 = new ArrayList();
for (int i=0; i<10000; i++) v3.Insert(0,i);
while (v2.Count>0) v2.RemoveAt(0);
while (v3.Count>0) v3.RemoveAt(0);
Console.Write(".");
}
Console.WriteLine("\nVector elapsed time: " +
DateTime.Now.Subtract(startTime).TotalMilliseconds + " ms");
}
--
Lucian

Dec 19 '06 #37

P: n/a

"Samuel R. Neff" <sa********@nomail.com> wrote in message
news:ae********************************@4ax.com...
>
If you don't have time to learn new things you're really in the wrong
business.

Sam
I did not say that I don't have time to learn new things. I said I don't have
time to learn new things to see if they are worth learning. Spending six months
learning something that ends up proving to be useless would bankrupt me at this
particular stage of my venture.
>

------------------------------------------------------------
We're hiring! B-Line Medical is seeking Mid/Sr. .NET
Developers for exciting positions in medical product
development in MD/DC. Work with a variety of technologies
in a relaxed team environment. See ads on Dice.com.

On Mon, 18 Dec 2006 17:44:04 -0600, "Peter Olcott"
<No****@SeeScreen.com> wrote:
>>I don't have the time to learn something to see if it is worth learning

Dec 19 '06 #38

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote:
>"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
>If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimization.
That is not the way that compiler optimization works. Too much semantic
information is lost after it has been translated into the intermediate code for
much optimization to be applied. I posted another benchmark across the different
.NET languages, there was a substantial difference.
Peter, that's incorrect. Have you ever looked at the IL? It really has
lost very little information. That's why e.g. Microsoft's code
analysis FxCop is able to run and give useful results on IL.

http://www.osnews.com/img/5602/results.jpg

This was the benchmark you posted. It did not show what you claimed.
It showed both .net languages to have identical performance, except
that the VB version used a slower IO library.

>Since I always use the construct where C# did 500% worse, and good programming
practice would require always using such a construct, then C# is often (if not
typically) 500% slower. As it turns out, the solution derived from this thread
directly addresses this specific problem. The Boxing and UnBoxing of ArrayList
simply costs way too much.
The 500% you quote was a benchmark that did NOT test the performance
of boxing and unboxing. The construct it used was an O(n^2) operation
where an O(n) operation was appropriate. If this is a construct you
use often, then I suggest you fix your code!

--
Lucian
Dec 19 '06 #39

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote:
>"Lucian Wischik" <lu***@wischik.com> wrote in message
>c++ using vector: 28 seconds
C#.net1 using ArrayList: 19.3 seconds
C#.net2 using ArrayList: 19.5 seconds
C#.net2 using List<int>: 19.1 seconds
Also, VS2005 and Visual C# Express were identical in speed.
Obviously something has changed. Did you use the same version of .NET ???
Also it might be that the compiler did much worse on std::vector rather than did
much better on ArrayList. You might try this test with different levels of
optimization
I tested it on VS2003 and VS2005. I did the c++ version with
optimizations for speed, and profile-guided optimization.

In any case, the c++ runs just as fast with optimizations turned on as
with them turned off. That's because the benchmark is testing how fast
the library can copy a 40k block of memory (as it removes or inserts
the head element from a 10,000 element vector). This is sure as
anything going to be a ready-provided small machine code library
routine, e.g. memcpy, and outside the realm of compiler optimizations.

--
Lucian
Dec 19 '06 #40

P: n/a

"Lucian Wischik" <lu***@wischik.com> wrote in message
news:lv********************************@4ax.com...
"Peter Olcott" <No****@SeeScreen.com> wrote:
>>"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
>>If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimization.
That is not the way that compiler optimization works. Too much semantic
information is lost after it has been translated into the intermediate code
for
much optimization to be applied. I posted another benchmark across the
different
.NET languages, there was a substantial difference.

Peter, that's incorrect. Have you ever looked at the IL? It really has
lost very little information. That's why e.g. Microsoft's code
analysis FxCop is able to run and give useful results on IL.

http://www.osnews.com/img/5602/results.jpg

This was the benchmark you posted. It did not show what you claimed.
It showed both .net languages to have identical performance, except
that the VB version used a slower IO library.
You are not nearly precise enough in your interpretation. It actually showed
that Visual C++ .NET was the fastest of every language listed except for I/O
where C# beat it by 6%. C++ beat C# by 277% on double precision math.
>
>>Since I always use the construct where C# did 500% worse, and good programming
practice would require always using such a construct, then C# is often (if not
typically) 500% slower. As it turns out, the solution derived from this thread
directly addresses this specific problem. The Boxing and UnBoxing of ArrayList
simply costs way too much.

The 500% you quote was a benchmark that did NOT test the performance
of boxing and unboxing. The construct it used was an O(n^2) operation
where an O(n) operation was appropriate. If this is a construct you
use often, then I suggest you fix your code!

--
Lucian

Dec 19 '06 #41

P: n/a

"Lucian Wischik" <lu***@wischik.com> wrote in message
news:59********************************@4ax.com...
"Peter Olcott" <No****@SeeScreen.com> wrote:
>>"Lucian Wischik" <lu***@wischik.com> wrote in message
>>c++ using vector: 28 seconds
C#.net1 using ArrayList: 19.3 seconds
C#.net2 using ArrayList: 19.5 seconds
C#.net2 using List<int>: 19.1 seconds
Also, VS2005 and Visual C# Express were identical in speed.
Obviously something has changed. Did you use the same version of .NET ???
Also it might be that the compiler did much worse on std::vector rather than
did
much better on ArrayList. You might try this test with different levels of
optimization

I tested it on VS2003 and VS2005. I did the c++ version with
optimizations for speed, and profile-guided optimization.

In any case, the c++ runs just as fast with optimizations turned on as
with them turned off. That's because the benchmark is testing how fast
the library can copy a 40k block of memory (as it removes or inserts
the head element from a 10,000 element vector). This is sure as
anything going to be a ready-provided small machine code library
routine, e.g. memcpy, and outside the realm of compiler optimizations.

--
Lucian
If you did not get the same results as the original benchmarks then you did not
match the original benchmark conditions correctly.
Dec 19 '06 #42

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote:
>If you did not get the same results as the original benchmarks then you did not
match the original benchmark conditions correctly.
Ah, okay, I see what happened. The original benchmark used vector<int>
in c++ and ArrayList in C#, and performance was comparable. That's
what he got, and it's what I got, so we agree.

For the later tests he switched to using a linked list for c++.
Unsurprisingly this was fast, since the benchmark was about
performance at inserting and removing the head element: O(n). But he
stuck to the vector-like ArrayList for C#, O(n^2).

(which means that his benchmark figures are irrelevant for your topic,
"roll your own std::vector"...)

Isn't that how it always goes. Instead of focusing on the efficiency
of different compiler switches, or different language constructs, or
of boxing and unboxing, we're better off checking that we've picked
the correct basic datastructure and algorithm!

--
Lucian
Dec 19 '06 #43

P: n/a

"Lucian Wischik" <lu***@wischik.com> wrote in message
news:di********************************@4ax.com...
"Peter Olcott" <No****@SeeScreen.com> wrote:
>>If you did not get the same results as the original benchmarks then you did
not
match the original benchmark conditions correctly.

Ah, okay, I see what happened. The original benchmark used vector<int>
in c++ and ArrayList in C#, and performance was comparable. That's
what he got, and it's what I got, so we agree.

For the later tests he switched to using a linked list for c++.
Unsurprisingly this was fast, since the benchmark was about
performance at inserting and removing the head element: O(n). But he
stuck to the vector-like ArrayList for C#, O(n^2).

(which means that his benchmark figures are irrelevant for your topic,
"roll your own std::vector"...)

Isn't that how it always goes. Instead of focusing on the efficiency
of different compiler switches, or different language constructs, or
of boxing and unboxing, we're better off checking that we've picked
the correct basic datastructure and algorithm!

--
Lucian
The main reason that generics were created was that the boxing/unboxing
overhead of ArrayList was too expensive.
Dec 19 '06 #44

Peter Olcott <No****@SeeScreen.com> wrote:
a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation
c) There's only *one* test where C# is five times slower than C++ - so
surely you can accept that your statement that .NET "typically" results
in code that executes 5-20x slower is a gross exaggeration. Do you
usually regard one in fourteen as "typical"?

Since I always use the construct where C# did 500% worse, and good programming
practice would require always using such a construct, C# is often (if not
typically) 500% slower.
No - it may be often/typically 500% slower *in your particular
application* (although I doubt that, too) but that's not the same as
your *general* statement.

I could make a similarly false claim that "houses typically exist in the UK
rather than the US". After all, nearly all the houses that *I* see are
in the UK, because I live there. That doesn't make the general
statement true, any more than it makes your general statement about the
speed of .NET true.
As it turns out, the solution derived from this thread directly
addresses this specific problem. The Boxing and UnBoxing of ArrayList
simply costs way too much.
Except that boxing and unboxing have almost *no* cost in the benchmark
you keep talking about.

As Lucian has pointed out, most of the time in the List/Vector
benchmark is actually taken removing elements from the start of a list
or inserting them into the start. That has nothing to do with
boxing/unboxing *and* it's not the typical usage for a list in most
applications I've either written or seen. Does your application
typically spend *most* of its time adding/removing entries to/from the
start of a list? If so, you should consider using a LinkedList instead.
If not, you shouldn't be basing your conclusions on this benchmark.

Changing the benchmark code to use a reference type instead of a value
type (i.e. removing the boxing/unboxing entirely) makes very little
difference to the performance of the code. (In fact, on my box it
actually makes it worse - I'm not entirely sure why at the moment.)

However, changing the benchmark code to always add/remove from the
*end* of the list makes a *vast* difference to the performance - it
makes it about *100 times* faster on my box, despite it not changing
how much boxing and unboxing occurs. (There is a slight difference in
terms of what's going on, as the lists are always reversed instead of
sometimes being copied in order, but I don't think you can possibly
argue that that is responsible for the change in performance.)

In a nutshell, the benchmark you're using to show that boxing/unboxing
are too expensive for you isn't in any way, shape or form dominated by
the cost of boxing/unboxing. Your conclusion is completely invalid.
This is because much semantic information is lost after the code has
been initially translated, thus there is less of a basis for
semantically equivalent transformations.
If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimization.

That is not the way that compiler optimization works. Too much semantic
information is lost after it has been translated into the intermediate code for
much optimization to be applied.
If I can decompile back from IL to the original C#, what information do
you claim has been lost, precisely? Last time I looked, comments and
local variable names aren't part of what an optimiser looks at - and
often that's the only difference between the decompiled code and the
original.
I posted another benchmark across the different
.NET languages, there was a substantial difference.
That doesn't mean that the JIT doesn't do most of the optimisation. The
C++ compiler does more than the C# compiler in some cases, but I
believe the JIT is still the main optimization contributor.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 19 '06 #45

Lucian Wischik <lu***@wischik.com> wrote:
"Peter Olcott" <No****@SeeScreen.com> wrote:
If you did not get the same results as the original benchmarks then you did not
match the original benchmark conditions correctly.

Ah, okay, I see what happened. The original benchmark used vector<int>
in c++ and ArrayList in C#, and performance was comparable. That's
what he got, and it's what I got, so we agree.

For the later tests he switched to using a linked list for c++.
Unsurprisingly this was fast, since the benchmark was about
performance at inserting and removing the head element: O(n). But he
stuck to the vector-like ArrayList for C#, O(n^2).
What a cheat! I'm not entirely surprised, given the code - it's pretty
horrible for the C#, clearly not what anyone really familiar with C#
would use.
(which means that his benchmark figures are irrelevant for your topic,
"roll your own std::vector"...)
Indeed. Just for fun, I converted the code to use LinkedList<int> in C#
- and surprise surprise, it's much, much faster than the original.
Isn't that how it always goes. Instead of focusing on the efficiency
of different compiler switches, or different language constructs, or
of boxing and unboxing, we're better off checking that we've picked
the correct basic datastructure and algorithm!
Indeed - and when checking the differences between languages, checking
that we've got reasonable benchmarks!

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 19 '06 #46


"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com...
Peter Olcott <No****@SeeScreen.com> wrote:
a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation
c) There's only *one* test where C# is five times slower than C++ - so
surely you can accept that your statement that .NET "typically" results
in code that executes 5-20x slower is a gross exaggeration. Do you
usually regard one in fourteen as "typical"?

Since I always use the construct where C# did 500% worse, and good programming
practice would require always using such a construct, C# is often (if not
typically) 500% slower.

No - it may be often/typically 500% slower *in your particular
application* (although I doubt that, too) but that's not the same as
your *general* statement.

I could make a similarly false claim that "houses typically exist in the UK
rather than the US". After all, nearly all the houses that *I* see are
in the UK, because I live there. That doesn't make the general
statement true, any more than it makes your general statement about the
speed of .NET true.
>As it turns out, the solution derived from this thread directly
addresses this specific problem. The Boxing and UnBoxing of ArrayList
simply costs way too much.

Except that boxing and unboxing have almost *no* cost in the benchmark
you keep talking about.

As Lucian has pointed out, most of the time in the List/Vector
benchmark is actually taken removing elements from the start of a list
or inserting them into the start. That has nothing to do with
boxing/unboxing *and* it's not the typical usage for a list in most
applications I've either written or seen. Does your application
typically spend *most* of its time adding/removing entries to/from the
start of a list? If so, you should consider using a LinkedList instead.
If not, you shouldn't be basing your conclusions on this benchmark.

Changing the benchmark code to use a reference type instead of a value
type (i.e. removing the boxing/unboxing entirely) makes very little
difference to the performance of the code. (In fact, on my box it
actually makes it worse - I'm not entirely sure why at the moment.)
The most time critical aspect of my application spends about 99% of its time
comparing elements in one array to elements in other arrays. This subsystem can
tolerate no additional overhead at all, having a real-time constraint of 1/10
second.
>
However, changing the benchmark code to always add/remove from the
*end* of the list makes a *vast* difference to the performance - it
makes it about *100 times* faster on my box, despite it not changing
how much boxing and unboxing occurs. (There is a slight difference in
terms of what's going on, as the lists are always reversed instead of
sometimes being copied in order, but I don't think you can possibly
argue that that is responsible for the change in performance.)

In a nutshell, the benchmark you're using to show that boxing/unboxing
are too expensive for you isn't in any way, shape or form dominated by
the cost of boxing/unboxing. Your conclusion is completely invalid.
>This is because much semantic information is lost after the code has
been initially translated, thus there is less of a basis for
semantically equivalent transformations.

If you decompile some C#, you can get back to something very close to
the original - so very little information is being lost, leaving the
JIT with plenty of room for optimization.

That is not the way that compiler optimization works. Too much semantic
information is lost after it has been translated into the intermediate code for
much optimization to be applied.

If I can decompile back from IL to the original C#, what information do
you claim has been lost, precisely? Last time I looked, comments and
local variable names aren't part of what an optimiser looks at - and
often that's the only difference between the decompiled code and the
original.
>I posted another benchmark across the different
.NET languages, there was a substantial difference.

That doesn't mean that the JIT doesn't do most of the optimisation. The
C++ compiler does more than the C# compiler in some cases, but I
believe the JIT is still the main optimization contributor.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too

Dec 19 '06 #47

"Lucian Wischik" <lu***@wischik.com> wrote in message
news:a6********************************@4ax.com...
Jon Skeet [C# MVP] <sk***@pobox.com> wrote:
>>a) That's .NET 1.1 - it would be interesting to see the .NET 2.0
results
b) I haven't looked at the code yet, but the fact that Java is
significantly faster in a couple of the benchmarks suggests there's
further room for optimisation

Okay, here are my results for that benchmark:

c++ using vector: 28 seconds
C#.net1 using ArrayList: 19.3 seconds
C#.net2 using ArrayList: 19.5 seconds
C#.net2 using List<int>: 19.1 seconds
Also, VS2005 and Visual C# Express were identical in speed.

Of those 19 seconds, 6 of them are spent building a 10,000 element
list by repeatedly inserting a new member at the front, 10 of them are
spent clearing two 10,000 element lists by repeatedly removing the
element at the front, and the rest of the work is O(n) rather than
O(n^2).

So this benchmark is basically just doing memcpy() on a 40k block of
memory, repeatedly. It says nothing about the relative cost of boxing
or unboxing. That's why the generics are no faster than the ArrayList.

I don't know why I'm getting the exact opposite results from that
webpage -- I'm getting a moderately slower c++vector, whereas the
webpage gets an extremely slower c#.net1. Obviously one of us has a
typo somewhere! I also don't know why ArrayList should be any faster
than c++vector.

Here, I've distilled out the essence (O(n^2)) of that anomalous
benchmark:

// c++ version
//
#include <windows.h>
#include <cstdio>
#include <vector>
using std::vector;

int main()
{ int t = GetTickCount();
  for (int j=0; j<100; j++)
  { vector<int> v;
    for (int i=0; i<10000; i++) v.push_back(i);
    vector<int> v2 = v;
    vector<int> v3;
    for (int i=0; i<10000; i++) v3.insert(v3.begin(),i);
    while (!v.empty()) v.erase(v.begin());
    while (!v2.empty()) v2.erase(v2.begin());
    printf(".");
  }
  printf("\nTime: %i\n",(int)(GetTickCount()-t));
  return 0;
}
// c# version
//
static void Main(string[] args)
{ DateTime startTime = DateTime.Now;
for(int j=0; j<100; j++)
{ ArrayList v = new ArrayList();
for (int i=0; i<10000; i++) v.Insert(v.Count, i);
ArrayList v2 = new ArrayList(v);
ArrayList v3 = new ArrayList();
for (int i=0; i<10000; i++) v3.Insert(0,i);
while (v2.Count>0) v2.RemoveAt(0);
while (v3.Count>0) v3.RemoveAt(0);
Console.Write(".");
}
Console.WriteLine("\nVector elapsed time: " +
DateTime.Now.Subtract(startTime).TotalMilliseconds + " ms");
}
--
Lucian

The following code shows the VectorTest method I modified to use the generic List to get a fair
comparison with the vector template class.
Compiled with csc /o filename.cs
and run on XP using 8.00.50727 of the CSharp compiler and 14.00.50727.42 of the C++ compiler
and framework....

result C#:

Vector elapsed time: 7107 ms - 10000

while the unmodified C++ code compiled with cl /EHsc /O2 filename.cpp

results in ...
Vector elapsed time: 7606 ms - 10000
(ArrayList result: Vector elapsed time: 17565 ms - 10000)
You see, C# is the clear winner.

Don't know why you got such bad figures when using List<>

[Modified C# code]
/* original code copyright 2004 Christopher W. Cowell-Shah
http://www.cowell-shah.com/research/benchmark/code */
/* other code portions copyright http://dada.perl.it/shootout/ and Doug Bagley
http://www.bagley.org/~doug/shootout */
/* combined, modified and fixed by Thomas Bruckschlegel - http://www.tommti-systems.com */

....
static public int VectorTest()
{
    // create a list of integers (Li1) from 1 to SIZE
    List<int> Li1 = new List<int>();
    for (int i = 1; i < lSIZE + 1; i++)
    {
        Li1.Insert(Li1.Count, i); //addlast
        //Li1.addLast(new Integer(i));
    }
    // copy the list to Li2 (not by individual items)
    List<int> Li2 = new List<int>(Li1);
    List<int> Li3 = new List<int>();
    // remove each individual item from left side of Li2 and
    // append to right side of Li3 (preserving order)
    while (Li2.Count > 0)
    {
        Li3.Insert(Li3.Count, Li2[0]); //addlast
        Li2.RemoveAt(0);
    }
    // Li2 must now be empty
    // remove each individual item from right side of Li3 and
    // append to right side of Li2 (reversing list)
    while (Li3.Count > 0)
    {
        Li2.Insert(Li2.Count, Li3[Li3.Count - 1]); //addlast
        Li3.RemoveAt(Li3.Count - 1);
    }
    // Li3 must now be empty
    // reverse Li1
    List<int> tmp = new List<int>();
    while (Li1.Count > 0)
    {
        tmp.Insert(0, Li1[0]); //addfirst
        Li1.RemoveAt(0);
    }
    Li1 = tmp;
    // check that first item is now lSIZE
    if ((int)(Li1[0]) != lSIZE)
    {
        Console.WriteLine("first item of Li1 != lSIZE");
        return (0);
    }
    // compare Li1 and Li2 for equality
    // where is the == operator?
    // do a Li1!=Li2 comparison
    if (Li1.Count != Li2.Count)
    {
        Console.WriteLine("Li1 and Li2 differ");
        return (0);
    }
    for (int i = 0; i < Li1.Count; i++)
    {
        if (Li1[i] != Li2[i])
        {
            Console.WriteLine("Li1 and Li2 differ");
            return (0);
        }
    }
    // return the length of the list
    return (Li1.Count);
}

The only two tests where C++ is a clear winner are the "Exception" test and the nested loop
test (nl); exceptions are known to be slow in managed code, and for the nested loop test the
C++ compiler's optimizer has more time to spend hoisting the loop. For all other tests C# is
just as fast as, or somewhat faster than, C++.

Willy.
Dec 19 '06 #48

Peter Olcott <No****@SeeScreen.com> wrote:

<snip>
http://www.osnews.com/img/5602/results.jpg
I've finally (after a while) found the source code for these
benchmarks:

http://www.ocf.berkeley.edu/~cowell/...enchmark/code/

Running the tests myself, the "double" result is only 50% faster in C++
than in C#, rather than the 100% faster shown. This could partly be
down to compiler options - but they aren't given, as far as I can tell.

Now, I looked at the generated code with Reflector in both cases, and
one of the interesting differences is that the C# and C++ forms are
doing different things - C++ doesn't specify the order of the
assignments in the statement:

x *= y++;

(for example)

and the C++ compiler is treating it as:

x *= y;
y++;

In C#, that's not allowed - the order is guaranteed to be:

tmp = y;
y++;
x *= tmp;

Changing the body of the loop in doubleArithmetic to this:

while (i < doubleMax)
{
doubleResult -= i;
i++;
doubleResult += i;
i++;
doubleResult *= i;
i++;
doubleResult /= i;
i++;
}

made the generated code much closer between the two.

There's still a significant timing difference though, even with the
generated IL being almost identical - and for a while I really couldn't
explain it. Then I noticed that the stacks were significantly
different, due to the reporting mechanism: the C# uses
Console.WriteLine and formats a string, whereas the C++ uses printf
directly.
At this point (after much jigging around), I redownloaded the source,
and *just* changed the reporting mechanisms for the doubleArithmetic
methods to the following:

C++:
System::Console::WriteLine(doubleResult);
System::Console::WriteLine(elapsedTime);

C#:
Console.WriteLine(doubleResult);
Console.WriteLine(elapsedMilliseconds);

The results, with them still doing different things in terms of the
post-increment operator:

C#: 8158ms
C++: 7744ms

Suddenly the C++ result looks only 5% better than C#.

Changing the i++ in each statement into a ++i (to remove the language
difference) makes the two methods run in exactly the same time, within
the boundaries of timing error (on some runs the C++ wins, on some runs
the C# wins).

Just goes to show the importance of making like-for-like comparisons.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 19 '06 #49

Peter Olcott <No****@SeeScreen.com> wrote:
The main reason that generics were created was because the boxing/unboxing
overhead of ArrayList was too expensive.
Well, that was *one* reason. The other (principal, IMO) reasons were
type safety and better expression of ideas.

Generics are in Java 5+ as well, without removing the impact of
boxing/unboxing.

Of course, even if generics *had* primarily been invented to alleviate
the performance hit of boxing/unboxing, that still wouldn't make the
benchmark you're looking at any more relevant, because boxing/unboxing
simply isn't a significant factor in it.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Dec 19 '06 #50
