Bytes | Developer Community

For vs. For Each

Is there a performance difference between this:

\\\
Dim i As Integer
For i = 0 To myObject.Controls.Count - 1
    myObject.Controls(i) = ...
Next
///

and this:

\\\
Dim ctl As Control
For Each ctl In myObject.Controls
    ctl = ...
Next
///

Or is For Each just "prettier"?

Thanks,

Eric
Nov 21 '05 #1
Hi,

This was discussed a while ago, and I think I remember that there was almost
no difference. Besides, this is very easy to check: just write a small program,
iterate both ways, and compare the results.
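For instance, a minimal timing sketch along those lines (this assumes `Me` is a Form with a populated Controls collection; the repeat count is arbitrary, and Environment.TickCount only has roughly 15 ms resolution, so each loop is run many times):

\\\
Dim start As Integer = Environment.TickCount
Dim pass, i As Integer
For pass = 1 To 100000
    For i = 0 To Me.Controls.Count - 1
        Dim c As Control = Me.Controls(i)   ' indexed access
    Next
Next
Console.WriteLine("For: {0} ms", Environment.TickCount - start)

start = Environment.TickCount
For pass = 1 To 100000
    Dim ctl As Control
    For Each ctl In Me.Controls            ' enumerator access
    Next
Next
Console.WriteLine("For Each: {0} ms", Environment.TickCount - start)
///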

Finally, this is a VB.NET question, not a C# one; there is no need to post it
to microsoft.public.dotnet.languages.csharp.

cheers,

--
Ignacio Machin,
ignacio.machin AT dot.state.fl.us
Florida Department Of Transportation
Nov 21 '05 #2
In VB6, for a Collection you should use For Each. Here (.NET) it probably
doesn't matter.

I suggest you use For Each because it's cleaner and might benefit from
optimizations whenever that becomes possible. IMHO it looks better too.

- Joris

Nov 21 '05 #3
There should be very little performance difference between these two code
snippets, especially as the number of controls is not likely to be large
anyway.

One point to note: with For Each, you cannot add items to or remove items from
the collection while iterating, otherwise it complains. With a For loop,
however, you can iterate backwards and remove items.
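For example, a sketch of the backwards-removal pattern (ShouldRemove is a hypothetical predicate standing in for whatever test you need):

\\\
' Removing inside For Each raises InvalidOperationException, but a
' backwards For loop is safe: a removal never shifts the indexes of
' items that have not been visited yet.
Dim i As Integer
For i = myObject.Controls.Count - 1 To 0 Step -1
    If ShouldRemove(myObject.Controls(i)) Then
        myObject.Controls.RemoveAt(i)
    End If
Next
///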


--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
If U Need My Email ,Ask Me

Time flies when you don't know what you're doing

Nov 21 '05 #4
Anders Hejlsberg: "Generally my answer is, always use FOR EACH if you can,
because chances are you will have fewer bugs if you use FOR EACH. There are
just more pitfalls with the regular FOR statement. It is true that in some
cases the FOR statement is more efficient. In the vast majority of cases, I
don't think you would ever notice. My advice would be: always use FOR EACH,
profile your app, and if they turn out to be your problem, then change them to
FOR statements, but I don't think you ever will in real code. I think FOR EACH
is much more expressive, and in theory allows us to optimize your code more in
the future. There are certain optimizations that we will do, because we can
tell that you're going over the entire collection or whatever, and so we
could, at least in theory, generate even better code. I would highly recommend
FOR EACH unless you really do need the index."

http://msdn.microsoft.com/msdntv/epi...h/manifest.xml
Nov 21 '05 #5
Eric,
In addition to the other comments:

Are you asking about the Controls collection specifically, or about
collections in general?

Each collection type has its own performance characteristics!
(ControlCollection versus ArrayList versus Collection versus an Array versus
Hashtable versus your favorite collection here.)

I have to ask: Does it really matter which is faster?

I would not worry about which performs better; I would go with whichever is
more straightforward. I find For Each more straightforward, so that is the one
I favor.

Remember that most programs follow the 80/20 rule (link below): 80% of the
execution time of your program is spent in 20% of your code. I will optimize
(worry about the performance of) the 20% once that 20% has been identified and
proven to be a performance problem via profiling (CLR Profiler is one
profiling tool).

For info on the 80/20 rule & optimizing only the 20% see Martin Fowler's
article "Yet Another Optimization Article" at
http://martinfowler.com/ieeeSoftware...timization.pdf

Hope this helps
Jay

Nov 21 '05 #6
Would you have been happier if Eric had written the question in C#? This is
very much as important a question in C# as it is in VB.NET.

foreach (Control ctl in myObject.Controls)
{
    // do something useful with 'ctl'
}

I've had folks tell me that 'for' is more efficient than 'foreach' because of
enumerator overhead. For most of my code, however, this is a moot point.
Unless the code is in a critical loop, the difference in processing is so tiny
that the improvement in code readability greatly outweighs the overhead of
allowing .NET to manipulate the enumerator.

--- Nick

"Ignacio Machin ( .NET/ C# MVP )" <ignacio.machin AT dot.state.fl.us> wrote
in message news:%2******************@TK2MSFTNGP11.phx.gbl...
<<clipped>>
and finally this is a VB.net question, not a C# one, there is no need to
post it on microsoft.public.dotnet.languages.csharp
<<clipped>>
Nov 21 '05 #7
Jay B. Harlow [MVP - Outlook] wrote:
> Are you asking about the Controls collection specifically or are you asking
> any collection in general? As each collection type has its own performance
> characteristics!

I just meant collections in general, but you make a good point.

> I have to ask: Does it really matter which is faster?

Not on my current projects, but it's good to know for the future.

> Remember that most programs follow the 80/20 rule (link below) that is 80%
> of the execution time of your program is spent in 20% of your code. I will
> optimize (worry about performance) the 20% once that 20% has been identified
> & proven to be a performance problem via profiling (CLR Profiler is one
> profiling tool).

I usually follow the 80/20/100 rule, where I optimize the 20% and give the
remaining 80% a good work over anyway to make sure the aggregate is as
efficient as possible. ;-)

Thanks for your reply, Jay.

Eric
Nov 21 '05 #8
Thanks for the post, Nick.

Nick Malik wrote:
I've had folks tell me that 'for' is more efficient than 'foreach' because
of enumerator overhead.


A newbie question... Where do enumerators come into play when using For Each?
In addition, since enumerators can only be of type byte, short, int, or long,
what kind of overhead is introduced?

Thanks again,

Eric
Nov 21 '05 #9
He is referring to the enumerator interface, IEnumerator. See below; you can
create your own.

\\\
' A minimal working implementation (the original skeleton left the
' members empty; here it enumerates over an array).
Public Class Class1
    Implements IEnumerator

    Private items() As Object
    Private position As Integer = -1

    Public Sub New(ByVal items() As Object)
        Me.items = items
    End Sub

    Public ReadOnly Property Current() As Object _
            Implements System.Collections.IEnumerator.Current
        Get
            Return items(position)
        End Get
    End Property

    Public Function MoveNext() As Boolean _
            Implements System.Collections.IEnumerator.MoveNext
        position += 1
        Return position < items.Length
    End Function

    Public Sub Reset() Implements System.Collections.IEnumerator.Reset
        position = -1
    End Sub
End Class
///


Nov 21 '05 #10

By 'enumerators' Nick was referring to the IEnumerator interface, which is
used by the For Each statement to iterate through the collection (For Each
uses the IEnumerable interface to get the IEnumerator).

I think you're confusing the terminology with an enumeration, such as one
declared by VB.NET's Enum statement. A completely different animal.
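Roughly, the compiler expands a For Each loop into a sketch like this (simplified; the real expansion also disposes the enumerator when it implements IDisposable):

\\\
Dim e As IEnumerator = _
    CType(myObject.Controls, IEnumerable).GetEnumerator()
While e.MoveNext()
    Dim ctl As Control = CType(e.Current, Control)
    ' the body of the For Each loop runs here
End While
///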

--
mikeb
Nov 21 '05 #11
"mikeb" wrote:
I think you're confusing the terminology with an enumeration, such as
declared by VB.NET's Enum statement. A completely different animal.


That's exactly what I was doing. Thank you for the clarification, Mike &
Terry.

Eric
Nov 21 '05 #12
I disagree that the foreach construct being read-only is a bad thing. Not to
completely disregard Alvin's gripe, but here's my point of view.

Typically, you use foreach to iterate through the collection. Adding/removing
items from the collection during this time puts the collection into a funny
mode that others may not be ready to deal with; what if you have multiple
enumerators? (This is very common in a nested foreach scenario; yes, that'd be
O(n^2).) I believe in the C++ STL you can remove the currently iterated item,
but that opens a can of worms: you always have to worry whether your current
item has been deleted by another thread.

Also, I highly disagree that the for construct is faster than the foreach
construct. That is only true when you are talking about ARRAY collection
types. In a linked-list implementation, each foreach step is O(1), while
indexing element n in a for loop costs O(n), so foreach would be faster for
iterating through the collection. The fact that the .NET Framework collections
are almost solely based on array types may make the statement correct 90%+ of
the time, but it is not a correct statement to make in general. And besides, I
wait for generics!

Typically, hashtables are iterated for the entire set of key/value pairs.
Given that accessing a value in a hashtable is O(1), if you need to iterate
through a hashtable it's easier to iterate through the keys, O(n), grabbing
each value as you go, O(1). But I can't think of why anyone would be iterating
through a hashtable except maybe as a debug step to see its contents.
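For completeness, a sketch of iterating a Hashtable's key/value pairs (the table contents here are made up for illustration):

\\\
Dim table As New Hashtable
table("one") = 1
table("two") = 2

' For Each over a Hashtable yields DictionaryEntry values: O(n)
' over the entries, with each value already in hand as you go.
Dim entry As DictionaryEntry
For Each entry In table
    Console.WriteLine("{0} = {1}", entry.Key, entry.Value)
Next
///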

jliu - www.ssw.com.au - johnliu.net
Nov 21 '05 #13
"JohnLiu" <jo******@gmail.com> wrote in message
news:37**************************@posting.google.c om...
I disagree that foreach-construct being readonly is a bad thing. Not
to completely disregard Alvin's gripe, but here's my point of view.

Typically, you use foreach to iterate throught the collection.
Adding/Removing items from the collection during this time puts the
collection into a funny mode that others may not be ready to deal
with, what if you have multiple emunerators? (this is very common for
a nested foreach scenario, yes, that'd be O(n^2) ). I believe in C++
STL libraries you can remove current iterated item, but that opens a
can of worms, you always have to worry whether your current item has
been deleted by another thread.

Also, I highly disagree that the for-construct is faster than
foreach-construct. That is only true when you are talking about ARRAY
collection types. In a linked-list implementation, foreach-construct
O(1) would be faster than for-construct O(n) for iterating through a
collection. The fact that the .NET framework collections are almost
solely based on array types may make the statement correct in 90%+ of
the time, but it is not a correct statement to make generally. And,
besides, I wait for generics!

Typically, hashtables are iterated for the entire key/value pairs,
given that accessing the value of a hashtable is O(1), if you need to
iterate through a hastable, it's easier to iterate through the keys
O(n), and grabbing the value as you go O(1). But I can't think of why
anyone would be iterating through a hashtable except may be as a debug
step to see the contents of the hashtable.

jliu - www.ssw.com.au - johnliu.net


With a For n = start To end Step loop you would still have to worry whether
the current element has been deleted by either this or another thread, since
start, end, and step are evaluated only once for the loop.

--
Jonathan Bailey.
Nov 21 '05 #14
On 2004-08-11, Alvin Bruney [MVP] <> wrote:
> I'll chime in here with my longtime gripe.
>
> The foreach implementation is flawed because the container is marked as
> readonly during the iteration. This is a crime in my opinion because it is
> *normal to effect a change on the container while iterating especially from
> a vb point of view.


I see your point, but look at it from the implementers' point of view.

Making the container read-only allows for very efficient implementations of
the enumerator, and also makes writing new enumerators fairly simple. Also, it
eliminates a real ambiguity in the For Each statement: does For Each iterate
over the original collection, or over the collection as it changes over time?

Dim i As Integer
For Each o As Object In MyCollection
    i += 1
    If i = 3 Then
        MyCollection.Insert(0, New Object())
        MyCollection.Add(New Object())
    End If
Next

What would the iteration be in this case? Should both new objects be
iterated, or neither? Or just one? You could think of some reasonable
rules to apply to arrays, but what about things like hashes where
position doesn't have a fixed meaning? And how is the Enumerator
supposed to keep track of what's happening to the collection? Do we add
some kind of event to the IEnumerable interface? If so, that could turn
into a lot of overhead since the enumerator has to check for changes on
each iteration.

For efficiency's sake, maybe we could have two different enumeration
types, one for mutable containers and one for read-only, but then not
only are you complicating the class library tremendously, but calling
conventions can get strange (since only one of them can use For Each).

David
Nov 21 '05 #15
Can you give some sample applications where this statement of yours is true?

> Although due to the nature of loops, they oftentimes fall into
> the 20 percent of code that consumes 80 percent of the time.

It is in my opinion definitely not true for applications that do, for
instance, screen painting and/or data processing.

It is in my opinion definitely true for applications that do image processing
where the GDI+ encoding is not used.

However, that is in my opinion surely not the majority of applications.

So I am curious: in what other types of application can stand-alone loops
consume 80% of the time?

Just my thought,

Cor
Nov 21 '05 #16
> Making the container read-only allows for very efficient implementations
> of the enumerator, and also makes writing new enumerators fairly simple.
> Also, it eliminates a real ambiguity to the For Each statement, does
> foreach iterate over the original collection, or over the entire
> collection as it changes over time?

I don't disagree with that; a very good point indeed. But the current approach
makes it impossible to perform simple tasks inherent in UI programming (like
removing multiselects in a listbox, for instance). Where such simple tasks are
overly complicated, I believe the design should be reviewed.
> For efficiency's sake, maybe we could have two different enumeration
> types, one for mutable containers and one for read-only, but then not
> only are you complicating the class library tremendously,

I think it is a reasonable approach. It would just be another way to iterate a
container, and it shouldn't complicate matters since it could be made to
appear as an overload.

> but calling conventions can get strange (since only one of them can use For
> Each).

That's really a design issue which needs to be hashed out in a way that makes
this approach feasible.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"David" <df*****@woofix.local.dom> wrote in message
news:slrnchmj96.j7j.df*****@woofix.local.dom... On 2004-08-11, Alvin Bruney [MVP] <> wrote:
I'll chime in here with my longtime gripe.

The foreach implementation is flawed because the container is marked as
readonly during the iteration. This is a crime in my opinion because it
is
*normal to effect a change on the container while iterating especially
from
a vb point of view.


I see your point, but look at it from the implementers' point of view.

Making the container read-only allows for very efficient implementations
of the enumerator, and also makes writing new enumerators fairly simple.
Also, it eliminates a real ambiguity to the For Each statement, does
foreach iterate over the original collection, or over the entire
collection as it changes over time?

Dim i As Integer
For Each o as Object in MyCollection
i += 1
If i = 3 Then
MyCollection.Insert(0, new Object())
MyCollection.Add(New Object())
End If
Next

What would the iteration be in this case? Should both new objects be
iterated, or neither? Or just one? You could think of some reasonable
rules to apply to arrays, but what about things like hashes where
position doesn't have a fixed meaning? And how is the Enumerator
supposed to keep track of what's happening to the collection? Do we add
some kind of event to the IEnumerable interface? If so, that could turn
into a lot of overhead since the enumerator has to check for changes on
each iteration.

For efficiency's sake, maybe we could have two different enumeration
types, one for mutable containers and one for read-only, but then not
only are you complicating the class library tremendously, but calling
conventions can get strange (since only one of them can use For Each).

David

Nov 21 '05 #17
> Typically, you use foreach to iterate through the collection.
> Adding/Removing items from the collection during this time puts the
> collection into a funny mode that others may not be ready to deal
> with, what if you have multiple enumerators?

That is a design and implementation issue, not a programming issue. Iterating
a container which can change during iteration is rightly handled internally by
the construct itself, not by the iterating code, so there should be no funny
mode. For instance, what's to stop the internal code from re-adjusting its
contents based on the removal or addition of an item on the fly? This is very
basic functionality available in VB, if memory serves me right.

Multiple enumerators can be handled internally through synchronization means,
and this can all be hidden from the programmer so that she is not aware how
the iterating construct is implemented (good design). I think the choice to
implement this construct as read-only must have come down to efficiency over
functionality. That's the only reason I can think of.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"JohnLiu" <jo******@gmail.com> wrote in message
news:37**************************@posting.google.c om...I disagree that foreach-construct being readonly is a bad thing. Not
to completely disregard Alvin's gripe, but here's my point of view.

Typically, you use foreach to iterate throught the collection.
Adding/Removing items from the collection during this time puts the
collection into a funny mode that others may not be ready to deal
with, what if you have multiple emunerators? (this is very common for
a nested foreach scenario, yes, that'd be O(n^2) ). I believe in C++
STL libraries you can remove current iterated item, but that opens a
can of worms, you always have to worry whether your current item has
been deleted by another thread.

Also, I highly disagree that the for-construct is faster than
foreach-construct. That is only true when you are talking about ARRAY
collection types. In a linked-list implementation, foreach-construct
O(1) would be faster than for-construct O(n) for iterating through a
collection. The fact that the .NET framework collections are almost
solely based on array types may make the statement correct in 90%+ of
the time, but it is not a correct statement to make generally. And,
besides, I wait for generics!

Typically, hashtables are iterated for the entire key/value pairs,
given that accessing the value of a hashtable is O(1), if you need to
iterate through a hastable, it's easier to iterate through the keys
O(n), and grabbing the value as you go O(1). But I can't think of why
anyone would be iterating through a hashtable except may be as a debug
step to see the contents of the hashtable.

jliu - www.ssw.com.au - johnliu.net

Nov 21 '05 #18
On Wed, 11 Aug 2004 08:21:47 -0500, an*******@discussions.microsoft.com
wrote:

They are almost identical when your collection is some sort of array. But if
the collection is, e.g., a linked list, then executing .Controls(n) will cause
your app to traverse n elements; the bigger n is, the longer it takes to find
the n-th element. Using enumerators (For Each) is considerably faster here.
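A sketch of why indexed access hurts on list-like collections (all names here are hypothetical, purely for illustration; .NET 1.x has no built-in linked list):

\\\
Public Class Node
    Public Value As Object
    Public NextNode As Node
End Class

Public Class SimpleLinkedList
    Public Head As Node

    ' ItemAt(n) must walk n hops from the head, so a For loop calling
    ' ItemAt(0), ItemAt(1), ..., ItemAt(Count - 1) does
    ' 0 + 1 + ... + (Count - 1) hops in total: O(n^2). An enumerator
    ' just remembers its current node and advances once per step: O(n).
    Public Function ItemAt(ByVal index As Integer) As Object
        Dim node As Node = Head
        Dim i As Integer
        For i = 1 To index
            node = node.NextNode
        Next
        Return node.Value
    End Function
End Class
///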

Best regards,
Michal Dabrowski
Nov 21 '05 #19
> I don't disagree with that. very good point indeed. but, the current
> approach makes it impossible to perform simple tasks inherent in UI
> programming (like removing multiselects in a listbox for instance). Where
> such simple tasks are overly complicated, i believe the design should be
> reviewed.

Good point. I like the idea of a collection object that isn't read-only,
separate from other types of collections. Didn't another thread mention a bit
of code that Ericgu put out that does exactly this?
Nov 21 '05 #20
From this document

http://msdn.microsoft.com/library/de...tchPerfOpt.asp

The performance difference between For and For Each loops does not appear to
be significant.

I hope this helps.

Cor
Nov 21 '05 #21
Alvin,
VB, for as long as I can remember (VB1, VB2, VB3, VB5, VB6, VBA), has had
trouble with modifying the collection itself while you use For Each. There may
have been one or two specific collections that worked, or more than likely one
thought they worked but they really didn't.

The problem is that the delete/insert code would need some method of notifying
(an event, possibly) one or more enumerators that the collection itself was
modified; this notification IMHO is for the most part too expensive to justify
adding in all cases.

Although I do agree it would be nice if collections had optional enumerators:
the "firehose" version of today, which is normally used, plus a "safe" version
that allows modifying the collection itself while you're iterating. For
example, the collection from DataTable.Rows is not modifiable under For Each,
while the array from DataTable.Select is! By modifiable I mean you can call
DataRow.Delete or Rows.Add...
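A sketch of that DataTable point, following Jay's description (myTable and the filter expression are hypothetical):

\\\
' Select() returns a DataRow array snapshot, so calling Delete while
' iterating the snapshot does not disturb the enumeration the way
' deleting during For Each over myTable.Rows would.
Dim row As DataRow
For Each row In myTable.Select("Amount < 0")
    row.Delete()
Next
///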

Hope this helps
Jay

"Alvin Bruney [MVP]" <vapor at steaming post office> wrote in message
news:uZ**************@TK2MSFTNGP10.phx.gbl...
Typically, you use foreach to iterate throught the collection.
Adding/Removing items from the collection during this time puts the
collection into a funny mode that others may not be ready to deal
with, what if you have multiple emunerators?
That is a design and implementation issue, not a programming issue.
Iterating a container which can change during iteration is rightly handled
internally by the construct itself and not by the iterating code so there
should be no funny mode. For instance, what's to stop the internal code

from re-adjusting its contents based on the removal or addition of an item on the fly? This is very basic functionality available in vb if memory serves me
right.

Multiple enumerators can be handled internally thru synchronization means
and this can all be hidden from the programmer so that she is not aware how the iterating construct is implemented (good design). I think the choice to implement this construct as readonly must have come down to efficiency over functionality. That's the only reason I can think of.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"JohnLiu" <jo******@gmail.com> wrote in message
news:37**************************@posting.google.c om...
I disagree that foreach-construct being readonly is a bad thing. Not
to completely disregard Alvin's gripe, but here's my point of view.

Typically, you use foreach to iterate throught the collection.
Adding/Removing items from the collection during this time puts the
collection into a funny mode that others may not be ready to deal
with, what if you have multiple emunerators? (this is very common for
a nested foreach scenario, yes, that'd be O(n^2) ). I believe in C++
STL libraries you can remove current iterated item, but that opens a
can of worms, you always have to worry whether your current item has
been deleted by another thread.

Also, I highly disagree that the for-construct is faster than
foreach-construct. That is only true when you are talking about ARRAY
collection types. In a linked-list implementation, foreach-construct
O(1) would be faster than for-construct O(n) for iterating through a
collection. The fact that the .NET framework collections are almost
solely based on array types may make the statement correct in 90%+ of
the time, but it is not a correct statement to make generally. And,
besides, I wait for generics!

Typically, hashtables are iterated for the entire key/value pairs,
given that accessing the value of a hashtable is O(1), if you need to
iterate through a hastable, it's easier to iterate through the keys
O(n), and grabbing the value as you go O(1). But I can't think of why
anyone would be iterating through a hashtable except may be as a debug
step to see the contents of the hashtable.

jliu - www.ssw.com.au - johnliu.net


Nov 21 '05 #22
I gave this a little bit of thought. I realized that the sort of coding I do
is quite different from what "most" people are doing. I perform mostly
engineering and geospatial analysis. For me, this involves many loops, and
loops within loops, along with iterating over recordsets countless times. In a
literal sense, this is data processing in the extreme, but certainly not like
manual data entry.

However, loops are used to perform some sort of search and/or work on a block of
items. By nature, they can consume a decent portion of the overall processing
time as they are oftentimes the place where much of the actual work is taking
place. Any number of smaller functions may be performed, but potentially it is
performed many if not thousands of times. In this particular thread's example,
the operator is iterating through all the controls in a collection, presumably
to do something with them. I would hazard a guess that if you compared the
overall processor time spent within the scope of the loop, it would be
significant relative to other non-loop functions.

So while what I do may in fact be much different than most others, I still stand
by my statement. Just look at the number of times people want to know how to
keep their GUI responsive while some sort of iterative process is occurring.
Forget for a moment about the design considerations of what is really happening.
Bottom line is that the iterative processing is consuming an amount of time
significant enough to be noticeable to the operator.

Since you asked, and to exemplify Alvin's comments, here is a common
occurrence for me. (For those who don't want to read a confusing and
long-winded example, stop reading here.)
I have a geospatial dataset that contains some number of polygons/regions.
I need to find any overlapping/intersecting regions and degenerate those
intersections into separate regions.
This requires iterating through every element in the dataset and compare it to
every other element.
Additionally, for every potentially intersecting element combination, you must
iterate through every combination of vertices/segments to determine
intersection.
Each combination of intersections could result in the creation of a new region.
Each new region could also intersect with subsequent existing and/or new
regions, which could also generate new regions...
Now, if, whenever existing regions were degenerated into sub-regions, I could
remove the existing regions from the collection and add the new regions to the
end of it, then theoretically I could perform all possible tests within the
scope of one top-level For Each loop. But instead I must be creative and do
something like mark the existing regions for deletion within the master
collection and add the newly created regions to a separate collection. Then I
perform the same iteration over the new collection, potentially creating an
additional collection, and so on. Once all combinations are resolved, I must
go back and iterate through all of the resulting collections to recreate the
master collection. Now, in practice the resulting implementation isn't exactly
like that, but logically it is similar.

So for me, loop performance and implementation is extremely important.

Gerald

"Cor Ligthert" <no**********@planet.nl> wrote in message
news:ei**************@TK2MSFTNGP10.phx.gbl...
Can you give some sample applications where this statement of you is true?
Although due to the nature of loops, they oftentimes fall into
the 20 percent of code that consumes 80 percent of the time.


It is in my opinion definitly not with applications where is by instance
screen painting or/and dataprocessing.

It is in my opinion definitly true for applications where is image
processing where not the GDI+ encoding is used.

However that is in my opinion surely not the majority of the applications.

So I am curious in what type of other applications stand alone loops can
consume 80% of the time?

Just my thought,

Cor

Nov 21 '05 #23
My recollection is that MSFT claimed that .NET, for practical purposes,
eliminated the difference in speed between For and For Each, but I've not
recently tested that assertion.

--
http://www.standards.com/; See Howard Kaikow's web site.
"Nick Malik" <ni*******@hotmail.nospam.com> wrote in message
news:BgqSc.130747$eM2.70902@attbi_s51...
Would you have been happier if Eric has written the question in C#? This is very much as important a question in C# as it is in VB.NET.

foreach (Control ctl in myObject.Controls)
{
// do something useful with 'ctl'

}

I've had folks tell me that 'for' is more efficient than 'foreach' because
of enumerator overhead. For most of my code, however, this is a moot point. Unless the code is in a critical loop, the difference in processing so tiny that the improvement in code readability greatly outweighs the overhead of
allowing .NET to manipulate the enumerator.

--- Nick

"Ignacio Machin ( .NET/ C# MVP )" <ignacio.machin AT dot.state.fl.us> wrote in message news:%2******************@TK2MSFTNGP11.phx.gbl...
<<clipped>>
and finally this is a VB.net question, not a C# one, there is no need to
post it on microsoft.public.dotnet.languages.csharp

<<clipped>>

<an*******@discussions.microsoft.com> wrote in message
news:OY**************@TK2MSFTNGP11.phx.gbl...
Is there a performance difference between this:

\\\
Dim i As Integer
For i = 0 to myObject.Controls.Count - 1
myObject.Controls(i) = ...
Next
///

and this:

\\\
Dim ctl As Control
For Each ctl In myObject.Controls
ctl = ...
Next
///

Or is For Each just "prettier"?

Thanks,

Eric



Nov 21 '05 #24
Gerald,

I read it completely. However, in my opinion everything that happens
between processor and memory nowadays is extremely fast, and that is often
forgotten. (I am not writing this about your situation.)

There is a lot of looping in every program even when you try to avoid it. I
think that the code which is created by the ILS will make a lot of loops.

Looping is in my opinion the basis of good programming, and people who think
they can avoid it mostly end up writing even more code to process or stop the
loop (for instance by adding a test inside the loop, which of course costs more
than a simple change of a byte).

The performance difference between the methods can be neglected; see the
MSDN article I pointed to earlier in this thread.

I find the amount of attention people give to a loop mostly overdone, while
the total throughput time will usually not change.

I wrote "mostly". I think a loop always needs to be done well, and for you
especially it needs extra attention in a lot of situations. However,
when it comes to optimizing throughput, in most cases I would first look
at other parts of the program.

Just my thought

Cor
I gave this a little bit of thought. I realized that the sort of coding I do is quite different than what "most" people are doing. I perform mostly engineering and geospatial analysis. For me, this involves many loops, and loops within loops, along with iterating over recordsets countless times. In a literal sense, this is data processing in the extreme, but certainly not like manual data
entry.

However, loops are used to perform some sort of search and/or work on a block of items. By nature, they can consume a decent portion of the overall processing time as they are oftentimes the place where much of the actual work is taking place. Any number of smaller functions may be performed, but potentially it is performed many if not thousands of times. In this particular thread's example, the operator is iterating through all the controls in a collection, presumably to do something with them. I would hazard a guess that if you compared the
overall processor time spent within the scope of the loop, it would be
significant relative to other non-loop functions.

So while what I do may in fact be much different than most others, I still stand by my statement. Just look at the number of times people want to know how to keep their GUI responsive while some sort of iterative process is occurring. Forget for a moment about the design considerations of what is really happening. Bottom line is that the iterative processing is consuming an amount of time significant enough to be noticeable to the operator.

Since you asked, and to exemplify Alvin's comments, here is a common occurrence for me.
(For those who don't want to read a confusing and long-winded example, stop reading here.)
I have a geospatial dataset that contains some number of polygons/regions.
I need to find any overlapping/intersecting regions and degenerate those
intersections into separate regions.
This requires iterating through every element in the dataset and comparing it to every other element.
Additionally, for every potentially intersecting element combination, you must iterate through every combination of vertices/segments to determine
intersection.
Each combination of intersections could result in the creation of a new region. Each new region could also intersect with subsequent existing and/or new
regions, which could also generate new regions...
Now, if existing regions could be degenerated into sub-regions, I could remove the existing regions from the collection and add the new regions to the end of the collection; then theoretically I could determine all possible tests within the scope of one top-level For Each loop. But instead, I must be creative and do something like mark the existing regions for deletion within the master collection and add the newly created regions to a separate collection. Then I perform the same iteration over the new collection, and potentially create an additional collection, and so on. Once all combinations are resolved, I must go back and iterate through all of the resulting collections to recreate the master collection. Now in practice, the resulting implementation isn't exactly like that, but logically it is similar.
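Stripped of the geometry, the mark-and-requeue scheme described above is a classic worklist loop; here is a highly simplified Python sketch of that logic (the `halve` function is a purely hypothetical stand-in for real intersection splitting, and `resolve` is an invented name):

```python
def resolve(regions, split):
    """Repeatedly split regions until none can be split further.

    `split(region)` returns a list of sub-regions, or None when the
    region is already atomic (it stands in for intersection testing).
    """
    work = list(regions)   # pending collection, mutated freely
    resolved = []          # the master collection being rebuilt
    while work:
        region = work.pop()
        pieces = split(region)
        if pieces:
            # New regions go back on the worklist: they may split again.
            work.extend(pieces)
        else:
            resolved.append(region)
    return resolved

# Toy split rule: any "region" of size >= 4 degenerates into two halves.
halve = lambda n: [n // 2, n - n // 2] if n >= 4 else None
print(sorted(resolve([8, 3], halve)))  # [2, 2, 2, 2, 3]
```

Because the worklist is separate from the result collection, no enumeration is ever invalidated by the additions and removals.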

So for me, loop performance and implementation is extremely important.

Gerald

"Cor Ligthert" <no**********@planet.nl> wrote in message
news:ei**************@TK2MSFTNGP10.phx.gbl...
Can you give some sample applications where this statement of yours is true?
Although due to the nature of loops, they oftentimes fall into
the 20 percent of code that consumes 80 percent of the time.


In my opinion it is definitely not true for applications that do, for instance,
screen painting and/or data processing.

In my opinion it is definitely true for applications that do image
processing without using the GDI+ encoding.

However, in my opinion that is surely not the majority of applications.
So I am curious: in what other types of applications can standalone loops
consume 80% of the time?

Just my thought,

Cor


Nov 21 '05 #25
Cor,

If I understand your comments, then I completely agree.
1. Use loops appropriately.
2. Don't loop when it is not necessary.
3. Do loop when appropriate.
4. When you do use a loop, in the end it makes little difference if you use Do,
While, For Index, or For Each. My own testing has shown this to be true.
5. What you do while in the loop is much more important than the loop itself.
Make it as efficient as practical.
6. Just make sure that your overall code design and implementation is done
well/correctly.
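Point 4 above is easy to verify for yourself. A rough timing harness in the same spirit, written in Python for brevity (illustrative only — it times Python's index loop against its iterator loop, not .NET's For/For Each):

```python
import timeit

data = list(range(100_000))

def by_index():
    # "For i = 0 To Count - 1" style: explicit index into the list.
    total = 0
    for i in range(len(data)):
        total += data[i]
    return total

def by_iterator():
    # "For Each" style: let the enumerator walk the list.
    total = 0
    for x in data:
        total += x
    return total

# Both loops must agree before the timings mean anything.
assert by_index() == by_iterator()
print("index:   ", timeit.timeit(by_index, number=20))
print("iterator:", timeit.timeit(by_iterator, number=20))
```

Whatever the absolute numbers, the useful habit is measuring on your own data rather than trusting folklore about which loop is faster.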

In the end, follow Jay's advice. Try to do it right in the first place, and if
you find out it is a problem, then worry about the extra code to try to make it
faster.

Gerald

"Cor Ligthert" <no**********@planet.nl> wrote in message
news:%2****************@TK2MSFTNGP10.phx.gbl...
Gerald,

I read it completely. However, in my opinion everything that happens
between processor and memory nowadays is extremely fast, and that is often
forgotten. (I am not writing this about your situation.)

There is a lot of looping in every program even when you try to avoid it. I
think that the code which is created by the ILS will make a lot of loops.

Looping is in my opinion the basis of good programming, and people who think
they can avoid it mostly end up writing even more code to process or stop the
loop (for instance by adding a test inside the loop, which of course costs more
than a simple change of a byte).

The performance difference between the methods can be neglected; see the
MSDN article I pointed to earlier in this thread.

I find the amount of attention people give to a loop mostly overdone, while
the total throughput time will usually not change.

I wrote "mostly". I think a loop always needs to be done well, and for you
especially it needs extra attention in a lot of situations. However,
when it comes to optimizing throughput, in most cases I would first look
at other parts of the program.

Just my thought

Cor
I gave this a little bit of thought. I realized that the sort of coding I do is
quite different than what "most" people are doing. I perform mostly engineering
and geospatial analysis. For me, this involves many loops, and loops within
loops, along with iterating over recordsets countless times. In a literal sense,
this is data processing in the extreme, but certainly not like manual data
entry.

However, loops are used to perform some sort of search and/or work on a block of
items. By nature, they can consume a decent portion of the overall processing
time, as they are oftentimes the place where much of the actual work is taking
place. Any number of smaller functions may be performed, but potentially
performed many if not thousands of times. In this particular thread's example,
the operator is iterating through all the controls in a collection, presumably
to do something with them. I would hazard a guess that if you compared the
overall processor time spent within the scope of the loop, it would be
significant relative to other non-loop functions.

So while what I do may in fact be much different than most others, I still stand
by my statement. Just look at the number of times people want to know how to
keep their GUI responsive while some sort of iterative process is occurring.
Forget for a moment about the design considerations of what is really happening.
The bottom line is that the iterative processing is consuming an amount of time
significant enough to be noticeable to the operator.

Since you asked, and to exemplify Alvin's comments, here is a common occurrence
for me.
(For those who don't want to read a confusing and long-winded example, stop
reading here.)
I have a geospatial dataset that contains some number of polygons/regions.
I need to find any overlapping/intersecting regions and degenerate those
intersections into separate regions.
This requires iterating through every element in the dataset and comparing it to
every other element.
Additionally, for every potentially intersecting element combination, you must
iterate through every combination of vertices/segments to determine
intersection.
Each combination of intersections could result in the creation of a new region.
Each new region could also intersect with subsequent existing and/or new
regions, which could also generate new regions...
Now, if existing regions could be degenerated into sub-regions, I could
remove the existing regions from the collection and add the new regions to the
end of the collection; then theoretically I could determine all possible tests
within the scope of one top-level For Each loop. But instead, I must be creative
and do something like mark the existing regions for deletion within the master
collection and add the newly created regions to a separate collection. Then I
perform the same iteration over the new collection, and potentially create an
additional collection, and so on. Once all combinations are resolved, I must go
back and iterate through all of the resulting collections to recreate the master
collection. Now in practice, the resulting implementation isn't exactly like
that, but logically it is similar.

So for me, loop performance and implementation is extremely important.

Gerald

"Cor Ligthert" <no**********@planet.nl> wrote in message
news:ei**************@TK2MSFTNGP10.phx.gbl...
Can you give some sample applications where this statement of yours is true?
> Although due to the nature of loops, they oftentimes fall into
> the 20 percent of code that consumes 80 percent of the time.

In my opinion it is definitely not true for applications that do, for instance,
screen painting and/or data processing.

In my opinion it is definitely true for applications that do image
processing without using the GDI+ encoding.

However, in my opinion that is surely not the majority of applications.
So I am curious: in what other types of applications can standalone loops
consume 80% of the time?

Just my thought,

Cor



Nov 21 '05 #26
On 2004-08-12, Alvin Bruney [MVP] <> wrote:
David Wrote
Also, it eliminates a real ambiguity to the For Each statement, does
foreach iterate over the original collection, or over the entire
collection as it changes over time?
I don't disagree with that. Very good point indeed. But the current
approach makes it impossible to perform simple tasks inherent in UI
programming (like removing multiselects in a listbox, for instance).


Why is that difficult?

For Each item As Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next
Where
such simple tasks are overly complicated, I believe the design should be
reviewed.


Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy. I don't see why we'd need a different enumerator
for that. And the case where you want to alter the iteration based on
actions during the iteration is

a) ambiguous and generally domain-specific, so not suitable for the CLR; and
b) probably a really bad idea.
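The snapshot idiom above translates directly to other languages. A minimal Python sketch (Python used here only for brevity; the thread's own code is VB.NET/C#) shows why the copy matters — mutating a list while iterating it directly misbehaves, while iterating a snapshot is safe:

```python
items = ["a", "b", "c", "d"]
selected = {"b", "c"}

# Iterate over a snapshot copy so the original list can be mutated safely;
# iterating `items` itself while removing from it would skip elements.
for item in list(items):
    if item in selected:
        items.remove(item)

print(items)  # ['a', 'd']
```

The copy freezes "the original collection" at the moment the loop starts, which is exactly the semantics the ambiguity question is about.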

There's a deeper issue too, which I won't really get into. But IMHO
there's a tendency for .Net developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself. It's hard to
avoid doing that, but I think it's a bad habit and I'm not sure I want
to see more language features that encourage it.

Nov 21 '05 #27
> For each item as Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next
maybe if you spent 10 seconds testing your code BEFORE you posted it, you
would find out what all this discussion is about!
Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy.
for a tutorial on how references work have a look at this most excellent
article:
http://www.dotnet247.com/247referenc...box.com/~skeet

There's a deeper issue too, which I won't really get into. But IMHO
there's a tendency for .Net developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself. It's hard to
avoid doing that, but I think it's a bad habit and I'm not sure I want
to see more language features that encourage it.
I have no clue as to what you are trying to say. This apparently has no
bearing on the previous threads.
Or maybe you may want to try again AFTER reading the relevant threads.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"David" <df*****@woofix.local.dom> wrote in message
news:slrncho6th.knv.df*****@woofix.local.dom... On 2004-08-12, Alvin Bruney [MVP] <> wrote:
David Wrote
Also, it eliminates a real ambiguity to the For Each statement, does
foreach iterate over the original collection, or over the entire
collection as it changes over time?


I don't disagree with that. very good point indeed. but, the current
approach makes it impossible to perform simple tasks inherent in UI
programming (like removing multiselects in a listbox for instance).


Why is that difficult?

For each item as Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next
Where
such simple tasks are overly complicated, i believe the design should be
reviewed.


Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy. I don't see why we'd need a different enumerator
for that. And the case where you want to alter the iteration based on
actions during the iteration is

a) ambiguous and generally domain-specific, so not suitable for the CLR;
and
b) probably a really bad idea.

There's a deeper issue too, which I won't really get into. But IMHO
there's a tendency for .Net developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself. It's hard to
avoid doing that, but I think it's a bad habit and I'm not sure I want
to see more language features that encourage it.

Nov 21 '05 #28
Hello David,

Your response certainly has a lot of emotion.

too bad it isn't coherent.

(that's my long-winded way of saying "what the heck are you talking about?")
Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy.
And this is efficient how? If you want to do that, then do that... but
don't make my code pay for the overhead of that functionality because you
may want to do that once out of ten-thousand calls.

I don't see why we'd need a different enumerator for that.
You just described a different enumerator

There's a deeper issue too, which I won't really get into.
I was starting to hope, but then...
But IMHO
Oh darn, you went into it...
there's a tendency for .Net
developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself.
It's hard to avoid doing that, but I think it's a bad habit and I'm not sure I want to see more language features that encourage it.


So collections are a bad idea and we should all create "smart" classes that
wrap our types with logic, like how to do a sorted list, or how to do a
stack... (ignoring the debugged code for this that is in the CLR... that's
right, if you didn't write it, it isn't any good... Sorry... I forgot).

I promise not to get you started on generics.
Nov 21 '05 #29
"Cor Ligthert" <no**********@planet.nl> wrote in message news:<Ow**************@TK2MSFTNGP12.phx.gbl>...
From this document

http://msdn.microsoft.com/library/de...tchPerfOpt.asp

The performance difference between For and For Each loops does not appear to
be significant.

I hope this helps?

Cor


It is not significant if you are working with array-based collections
(almost all .NET collections are array-based), because an enumerator
over an array collection is pretty much just an index on a particular
position.

If you are working with linked-list-based collections, enumerators
(O(1) per step) are vastly superior to index-based access (O(n) per access);
over large collections this is easily visible.

jliu - www.ssw.com.au - johnliu.net
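The O(1)-versus-O(n) distinction above can be seen with a toy singly linked list (a hand-rolled Python sketch, not any real .NET or Python collection): the iterator advances one node per step, while indexed access must re-walk from the head on every call.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    def __init__(self, values):
        self.head = None
        for v in reversed(list(values)):
            self.head = Node(v, self.head)

    def __iter__(self):
        # Enumerator style: O(1) per step, O(n) for the whole walk.
        node = self.head
        while node is not None:
            yield node.value
            node = node.next

    def item(self, i):
        # Index style: O(n) per access, so an indexed loop is O(n^2).
        node = self.head
        for _ in range(i):
            node = node.next
        return node.value

ll = LinkedList(range(5))
assert list(ll) == [ll.item(i) for i in range(5)] == [0, 1, 2, 3, 4]
```

Both loops visit the same values; only the cost per element differs, which is why the choice matters for list-based structures but not for array-backed ones.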
Nov 21 '05 #30
On 2004-08-13, Nick Malik <ni*******@hotmail.nospam.com> wrote:
Hello David,

Your response certainly has a lot of emotion.
It really doesn't.
too bad it isn't coherent.

(that's my long winded way of say "what the heck are you talking about?")
Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy.


And this is efficient how? If you want to do that, then do that... but
don't make my code pay for the overhead of that functionality because you
may want to do that once out of ten-thousand calls.


It's not particularly efficient, but you only have two choices if you
want to mutate the collection: either copy the references or keep track
of the changes to the collection (with an event or something). Neither
is as efficient as keeping the collection read-only, and which is more
efficient depends entirely on what you're doing during the enumeration.

And if you want to enumerate over only the original items in the
collection, copying the references is your only reasonable choice. From
the point of view of efficiency, it doesn't matter whether you do this
explicitly or if .Net does it implicitly behind the scenes with a new
enumerator type.

For the example given, removing selected items from a ListBox, copying
the references is going to take a trivial amount of time compared to the
time it takes to redraw the ListBox.
I don't see why we'd need a different enumerator for that.


You just described a different enumerator


And I didn't need a new enumerator type or new language construct to
achieve the effect.
There's a deeper issue too, which I won't really get into.


I was starting to hope, but then...


Heh, somehow I doubt you were. This last part was putting a large
design issue into a very short paragraph, so it's understandable that
it's been misunderstood.
But IMHO


Oh darn, you went into it...
there's a tendency for .Net
developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself.
It's hard to avoid doing that, but I think it's a bad habit and I'm not

sure I want
to see more language features that encourage it.


So collections are a bad idea and we should all create "smart" classes that
wrap our types with logic, like how to do a sorted list, or how to do a
stack... (ignoring the debugged code for this that is in the CLR... that's
right, if you didn't write it, it isn't any good... Sorry... I forgot).


Well, now who's being emotional?

Obviously we should be using the collections, but IMO they should be
used as base classes or through composition much more often than they
are now. For example, I tend to use typed collections much more often
than the generic ArrayList, etc., and .Net gives me a rich set of tools
to create them with. In my experience, I'm not unusual at all in doing
this.

But once you begin to think of collections as being not just a dumb set
of objects, but a class representing a group of specific types, then
the next step is to start treating it like a full-fledged class in its
own right. And in classic OO terms, we shouldn't be iterating over the
privates of another class to get something done, we should be sending a
message to the class to ask it to perform the action.

Alvin's example is right on target. The only unique property a
ListBoxItemCollection holds onto is whether an item is selected;
removing those items should be a public method of the collection class
(or possibly of the ListBox itself).
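The design sketched above — the collection exposing the bulk operation instead of callers iterating its internals — might look like this hypothetical sketch (Python for brevity; `ItemCollection` and `remove_selected` are invented names, not a real .NET API):

```python
class ItemCollection:
    """A typed collection that owns its own bulk operations."""

    def __init__(self):
        self._items = []  # list of (value, selected) pairs

    def add(self, value, selected=False):
        self._items.append((value, selected))

    def remove_selected(self):
        # The collection mutates itself in one step; callers never
        # enumerate its internals while removing, so no enumerator
        # is ever invalidated.
        self._items = [(v, s) for (v, s) in self._items if not s]

    def values(self):
        return [v for (v, _) in self._items]

coll = ItemCollection()
coll.add("a")
coll.add("b", selected=True)
coll.add("c")
coll.remove_selected()
print(coll.values())  # ['a', 'c']
```

Moving the operation inside the class both hides the iteration strategy and makes the "modified during enumeration" problem the collection's responsibility rather than every caller's.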

Nov 21 '05 #31
On 2004-08-13, Alvin Bruney [MVP] <> wrote:
For each item as Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next
maybe if you spent 10 seconds testing your code BEFORE you posted it, you
would find out what all this discussion is about!


I did. The code works. Why do you think it doesn't? Out of curiosity,
have you run it?
Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy.


for a tutorial on how references work have a look at this most excellent
article:
http://www.dotnet247.com/247referenc...box.com/~skeet


I may be missing something basic here (not a rare event), but for the
life of me I can't figure out what you think I'm missing. I've been to
Jon's pages quite often, BTW. I realize that for some reason we're in
the midst of fun usenet snarkiness here, but I'd appreciate it if you
could explain the error more clearly, because I really don't know what
you're getting at.

<snip>
I have no clue as to what you are trying to say. This apparently has no
bearing on the previous threads.


It's only tangentially related to the thread. Sorry, I thought I made
that clear.

Nov 21 '05 #32
> >
So collections are a bad idea and we should all create "smart" classes that wrap our types with logic, like how to do a sorted list, or how to do a
stack... (ignoring the debugged code for this that is in the CLR... that's right, if you didn't write it, it isn't any good... Sorry... I forgot).


Well, now who's being emotional?


me. I was way too low on coffee. My apologies.

--- N
Nov 21 '05 #33

Collection was modified; enumeration operation may not execute.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Collection was
modified; enumeration operation may not execute.

Source Error:
Line 50: private void Button2_Click(object sender, System.EventArgs e)
Line 51: {
Line 52: foreach(ListItem li in ListBox1.Items)
Line 53: ListBox1.Items.Remove(li);
Line 54: }
Running your code with a multiselect as opposed to one selection (which I
suspect you didn't do):
Server Error in '/WebApplication2' Application.
--------------------------------------------------------------------------------

Collection was modified; enumeration operation may not execute.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Collection was
modified; enumeration operation may not execute.

Source Error:

Line 50: private void Button2_Click(object sender, System.EventArgs e)
Line 51: {
Line 52: foreach(ListItem li in ListBox1.Items)
Line 53: if(li.Selected)
Line 54: ListBox1.Items.Remove(li);

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"David" <df*****@woofix.local.dom> wrote in message
news:slrnchp5ph.qgv.df*****@woofix.local.dom...
On 2004-08-13, Alvin Bruney [MVP] <> wrote:
For each item as Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next


maybe if you spent 10 seconds testing your code BEFORE you posted it, you
would find out what all this discussion is about!


I did. The code works. Why do you think it doesn't? Out of curiosity,
have you run it?
Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy.


for a tutorial on how references work have a look at this most excellent
article:
http://www.dotnet247.com/247referenc...box.com/~skeet


I may be missing something basic here (not a rare event), but for the
life of me I can't figure out what you think I'm missing. I've been to
Jon's pages quite often, BTW. I realize that for some reason we're in
the midst of fun usenet snarkiness here, but I'd appreciate it if you
could explain the error more clearly, because I really don't know what
you're getting at.

<snip>

I have no clue as to what you are trying to say. This apparently has no
bearing on the previous threads.


It's only tangentially related to the thread. Sorry, I thought I made
that clear.

Nov 21 '05 #34
<"Alvin Bruney [MVP]" <vapor at steaming post office>> wrote:
Collection was modified; enumeration operation may not execute.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Collection was
modified; enumeration operation may not execute.

Source Error:
Line 50: private void Button2_Click(object sender, System.EventArgs e)
Line 51: {
Line 52: foreach(ListItem li in ListBox1.Items)
Line 53: ListBox1.Items.Remove(li);
Line 54: }

running your code with a multiselect as opposed to one selection which i
suspect you didn't do.


That's nothing like the code that David posted, however. Try:

foreach (ListItem li in new ArrayList(ListBox1.Items))
{
ListBox1.Items.Remove(li);
}

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 21 '05 #35
On 2004-08-13, Alvin Bruney [MVP] <> wrote:

Collection was modified; enumeration operation may not execute.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Collection was
modified; enumeration operation may not execute.

Source Error:
Line 50: private void Button2_Click(object sender, System.EventArgs e)
Line 51: {
Line 52: foreach(ListItem li in ListBox1.Items)
Line 53: ListBox1.Items.Remove(li);
Line 54: }
Now, with all that in mind, let's go back a couple of posts and look at
the code I actually posted....
For each item as Object In New ArrayList(ListBox1.SelectedItems)


You're iterating over the original collection, I am not. Now, I read
this in the VB.Net group so I just posted as VB, so I apologize if the
translation to C# threw you off, it's hard to know which group people
are posting from.

Try...

foreach(Object o in new ArrayList(ListBox1.SelectedItems))
{
ListBox1.Items.Remove(o);
}
running your code with a multiselect as opposed to one selection which i
suspect you didn't do.


No, you either had trouble converting the code or just didn't read it
closely enough, I'm not sure which. Anyway, now that you see the
correct code hopefully my comments will make a little more sense to you.
Nov 21 '05 #36
On 2004-08-13, Nick Malik <ni*******@hotmail.nospam.com> wrote:
>

Well, now who's being emotional?


me. I was way too low on coffee. My apologies.


Heh, caffeine deprivation strikes us all at some point...

Nov 21 '05 #37
You are right.

I am guilty of skimming over the code and didn't catch the ArrayList inside
the loop construct. This code is a gem, by the way. It is the best approach I
have seen for this problem, and I have been keeping an eye out for a
feasible solution for a while (see a thread in the C# newsgroups about 8-10
months ago).

Tomorrow, I'm gonna re-read the thread you posted a while back on that
topic. I believe you may be on to something.

Thanks for your vigilance.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"David" <df*****@woofix.local.dom> wrote in message
news:slrnchq9kl.rgn.df*****@woofix.local.dom...
On 2004-08-13, Alvin Bruney [MVP] <> wrote:

Collection was modified; enumeration operation may not execute.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.InvalidOperationException: Collection was
modified; enumeration operation may not execute.

Source Error:
Line 50: private void Button2_Click(object sender, System.EventArgs e)
Line 51: {
Line 52: foreach(ListItem li in ListBox1.Items)
Line 53: ListBox1.Items.Remove(li);
Line 54: }


Now, with all that in mind, let's go back a couple of posts and look at
the code I actually posted....
For each item as Object In New ArrayList(ListBox1.SelectedItems)


You're iterating over the original collection, I am not. Now, I read
this in the VB.Net group so I just posted as VB, so I apologize if the
translation to C# threw you off, it's hard to know which group people
are posting from.

Try...

foreach(Object o in new ArrayList(ListBox1.SelectedItems))
{
ListBox1.Items.Remove(o);
}
running your code with a multiselect as opposed to one selection which i
suspect you didn't do.


No, you either had trouble converting the code or just didn't read it
closely enough, I'm not sure which. Anyway, now that you see the
correct code hopefully my comments will make a little more sense to you.

Nov 21 '05 #38
And the case where you want to alter the iteration based on
actions during the iteration is

a) ambiguous and generally domain-specific, so not suitable for the CLR;
and
b) probably a really bad idea.

There are issues with your approach. It really isn't domain-specific, or rather
the domain is large enough to be common to a lot of applications. From what I
understand of your solution, the problem is that this approach may work well
for small collections but will not scale, because you force a copy of the
collection per request.

Consider a large dataset where rows are to be removed. For n items, you
introduce n copies. This effectively doubles your memory allocation. In a
high-concurrency environment, or in applications which run on finite memory
resources, this approach is not feasible because the cost of iteration is
prohibitive. Also, the cost is higher if the copy involves live
objects, or objects in the collection which contain children, as in the case
of hierarchical data. So it definitely is not trivial.

This is where a non-readonly collection would be more scalable and quicker.

I do agree that it is a good approach for small collections.

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok
"David" <df*****@woofix.local.dom> wrote in message
news:slrncho6th.knv.df*****@woofix.local.dom...

On 2004-08-12, Alvin Bruney [MVP] <> wrote:
David Wrote
Also, it eliminates a real ambiguity to the For Each statement, does
foreach iterate over the original collection, or over the entire
collection as it changes over time?


I don't disagree with that. very good point indeed. but, the current
approach makes it impossible to perform simple tasks inherent in UI
programming (like removing multiselects in a listbox for instance).


Why is that difficult?

For each item as Object In New ArrayList(ListBox1.SelectedItems)
ListBox1.Items.Remove(item)
Next
Where
such simple tasks are overly complicated, i believe the design should be
reviewed.


Well, the case where you want to iterate over the original items while
mutating the collection is always trivial, just copy the references and
enumerate the copy. I don't see why we'd need a different enumerator
for that. And the case where you want to alter the iteration based on
actions during the iteration is

a) ambiguous and generally domain-specific, so not suitable for the CLR;
and
b) probably a really bad idea.

There's a deeper issue too, which I won't really get into. But IMHO
there's a tendency for .Net developers to overuse dumb collections, and
put a lot of logic into various enumerations in controller classes that
should really be handled by the collection class itself. It's hard to
avoid doing that, but I think it's a bad habit and I'm not sure I want
to see more language features that encourage it.

Nov 21 '05 #39
> There's a deeper issue too, which I won't really get into. But IMHO
> there's a tendency for .Net developers to overuse dumb collections

We need to hash this one out. Provide all the details (get into it).

--
Regards,
Alvin Bruney
[ASP.NET MVP http://mvp.support.microsoft.com/default.aspx]
Got tidbits? Get it here... http://tinyurl.com/27cok

Nov 21 '05 #40
On 2004-08-14, Alvin Bruney [MVP] <> wrote:
And the case where you want to alter the iteration based on
actions during the iteration is

a) ambiguous and generally domain-specific, so not suitable for the CLR;
and
b) probably a really bad idea.

There are issues with your approach. it really isn't domain specific or the
domain is large enough to be common to a lot of applications.


The basic question upfront here is this: should changes to the
collection make during an enumeration affect the enumeration? If so,
how exactly should the enumeration be affected?

foreach(DictionaryEntry entry in MyDictionary)
{
MyDictionary.Add(CalculateNewKey(), "New Entry");
MyDictionary.Remove(CalculateKeyToRemove());
}

Is that an infinite loop? Or does it depend on the values returned by
CalculateNewKey()? Or does it run once for each entry that exists in
the dictionary at the start of the loop? Or does it run once for each
entry that exists at the start of the loop, unless that entry has been
removed from the list. There's a large number of possible definitions,
and each is significantly flawed in its own way.
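The CLR's collections resolve this ambiguity by simply forbidding it: the enumerator tracks a version number and throws as soon as the collection changes under it. A minimal sketch of that behavior (hypothetical class name; standard ArrayList semantics):

```csharp
using System;
using System.Collections;

class VersionedEnumeration
{
    static void Main()
    {
        ArrayList items = new ArrayList();
        items.Add("a");
        items.Add("b");

        try
        {
            foreach (object o in items)
            {
                // Mutating the collection bumps its internal version;
                // the next MoveNext() notices and throws.
                items.Remove(o);
            }
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Collection was modified");
        }
    }
}
```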
From what i
understand from your solution, the problem is this approach may work well
for small collections but will not scale because you force a copy of the
collection per request.

I do copy the collection, but I don't copy the objects contained in the
collection, only the references to those objects. There's a huge
difference between those two things.

Consider a large dataset where rows are to be removed. For n-items, you
introduce n copies. This effectively doubles your memory allocation.

No. Because the rows aren't being copied, only references to the rows.
Memory allocation is significantly less than double.
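A quick way to see this (a hypothetical demo; only the reference is duplicated, never the object it points at):

```csharp
using System;
using System.Collections;
using System.Text;

class ReferenceCopyDemo
{
    static void Main()
    {
        ArrayList rows = new ArrayList();
        rows.Add(new StringBuilder("row 1"));   // stand-in for a big "live" object

        // Copies only the references; the StringBuilder itself is not duplicated.
        ArrayList snapshot = new ArrayList(rows);

        // Both lists point at the same underlying object.
        Console.WriteLine(Object.ReferenceEquals(rows[0], snapshot[0])); // True
    }
}
```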
In a
high concurrency environment or in applications which run on finite memory
resources, this approach is not feasible because the cost of iteration is
prohibitive. Also, the cost is more expensive if the copy involves live
objects or objects in the collection which contain children as in the case
of hierarchical data. So it definitely is not trivial.

No, again because only references are being copied. There's a second
iteration, but it's impossible to know the performance effect of that
without knowing the internals of the collection (e.g., what is the cost
of a removal?).

This is where a non-readonly collection would be more scalable and quicker.


As I said to Nick, an enumerator over a non-readonly collection would
have to do the *exact same thing* at a minimum. The fact that some
enumerator type in the CLR does it instead of you doing it explicitly
doesn't affect scalability or performance at all.

In fact, though, performance in the CLR would be much worse than this,
how much worse depends entirely on how you chose to define the behavior
you're requesting. Copying the references as I did is trivial, but it
only works if I limit myself to very specific types of editing to the
underlying collection.


Nov 21 '05 #41
On 2004-08-14, Alvin Bruney [MVP] <> wrote:
There's a deeper issue too, which I won't really get into. But IMHO
there's a tendency for .Net developers to overuse dumb collections


We need to hash this one out. Provide all the details (get into it).


If you really want to get into it, see my reply to Nick on the subject
and/or my recent post in the "Design/Implementation Considerations"
thread.

In a nutshell, it generally violates encapsulation to enumerate a
non-member collection. If you want domain-specific information from a
collection, or if you want to perform a domain-specific action on
members of a collection, then the collection class should know how to do
those things.

As a simple example, consider...

foreach(Stock stock in portfolio) {
if(stock.ticker == "MSFT") {
stock.Sell();
}
}

... as opposed to

portfolio.FindByTicker("MSFT").Sell(); // ignore the possible null ref for now

What if my stock lookup is taking too much time? In the second example,
I could add a Hash lookup for tickers and correct the issue, but the
first example will always be O(n). What if I want to move the portfolio
to a webservice? What if my portfolio becomes too large to keep in
available memory (hey, I can dream), and I want to keep it in the
database? Or whatever, there's a lot of possibilities here.

The point is that portfolio manipulation should be encapsulated in a
class, and shouldn't be spread out across the app. Sure, the first
implementation of portfolio is probably a simple member ArrayList, but
there's no reason that other classes should need to know that.
Enumerations seem benign, but in fact they make a lot of assumptions
about the inner implementation of the collection.
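A rough sketch of the encapsulated version (all names here are hypothetical, and the Hashtable is just one possible internal implementation, not the prescribed one):

```csharp
using System;
using System.Collections;

// Callers ask the Portfolio; they never enumerate its internals.
class Stock
{
    public readonly string Ticker;
    public Stock(string ticker) { Ticker = ticker; }
    public void Sell() { /* ... */ }
}

class Portfolio
{
    // Today a Hashtable; tomorrow a web service or a database query,
    // without any caller changing.
    private Hashtable byTicker = new Hashtable();

    public void Add(Stock s) { byTicker[s.Ticker] = s; }

    public Stock FindByTicker(string ticker)
    {
        return (Stock)byTicker[ticker];   // null when absent
    }
}

class Demo
{
    static void Main()
    {
        Portfolio portfolio = new Portfolio();
        portfolio.Add(new Stock("MSFT"));
        Console.WriteLine(portfolio.FindByTicker("MSFT").Ticker); // MSFT
    }
}
```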

Well, I guess that wasn't really a nutshell. I've been putting a lot of
thought into these types of issues lately because I'm putting together a
curriculum for an OO design class.
Nov 21 '05 #42
On Sat, 14 Aug 2004 17:49:53 -0700, David <df*****@woofix.local.dom>
wrote:

<snip>

foreach(Stock stock in portfolio) {
if(stock.ticker == "MSFT") {
stock.Sell();
}
}

</snip>

David et al:

Just curious what you would think of:

Stock stock =
Array.BinarySearch(portfolio, "MSFT", stock.TickerComparer)

Where the comparisons are taken care of by IComparer implementations
nested into the domain class.

I've taken this approach a few times. I feel the collection does what
it does best (search), while the domain specific logic (the different
ways to compare two stocks) stays encapsulated into the domain
objects.

--
Scott
http://www.OdeToCode.com
Nov 21 '05 #43
Scott,
I think you are making too many assumptions in your code that can cause
obscure problems later. :-)
foreach(Stock stock in portfolio) {

Supports multiple MSFT stocks in the portfolio that could be sold. Doesn't
require the portfolio to be sorted in ticker order.

Array.BinarySearch(portfolio, "MSFT", stock.TickerComparer)

Assumes (restricts) a single MSFT stock in the portfolio, plus requires that
the portfolio be sorted by ticker. Also requires portfolio to be an array
(rather than some abstraction of an actual Portfolio with its own behaviors
& attributes).

I would consider making "sell stock" a behavior of the portfolio object,
then how it was implemented would be immaterial. I would implement it as a
for each per the 80/20 rule.

portfolio.SellStock("MSFT");

Hope this helps
Jay

Nov 21 '05 #44
On 2004-08-15, Scott Allen <bitmask@[> wrote:

Just curious what you would think of:

Stock stock =
Array.BinarySearch(portfolio, "MSFT", stock.TickerComparer)

Where the comparisons are taken care of by IComparer implementations
nested into the domain class.

I've taken this approach a few times. I feel the collection does what
it does best (search), while the domain specific logic (the different
ways to compare two stocks) stays encapsulated into the domain
objects.


Jay makes some salient points in another post, but I'd just add that
IMHO the code above could be fine as long as a) it's encapsulated in a
single place, and b) you understand the trade-offs you've made.

A sorted array might be the best implementation of a portfolio for now,
but that choice forces you into a lot of assumptions about behavior,
assumptions which are likely to change in the future. The question I'd
have is how many places in your code have knowledge of a portfolio's
internal implementation?
Nov 21 '05 #45
Hi Jay:

I agree this does make many assumptions as a specific example.

Speaking more generally, however, I like the fact that item retrieval
from a collection is abstracted away (the details of using a
BinarySearch or a hashing algorithm will be hidden if you add another
layer of indirection - which I didn't show).

I'm sure you can imagine scenarios where foreach doesn't scale well
enough to use for all look up scenarios (the 20%). This is not the
difference between for and foreach but the difference between O(n) and
O(log n).

I also have the flexibility of using IComparer to search based on
ticker symbol, or price, or any other public member of the object. As
long as the selection of the IComparer instance is kept encapsulated.

Which is all too complex for the 80% case I agree... :)

One note: BinarySearch wouldn't restrict the array to holding a single
MFST ticker. BinarySearch doesn't guarantee that it will return the
object with the lowest index in the array, but if there are multiples
it does give back an index that is 'in the ballpark', so to speak.
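A sketch of that neighbor-checking idea (a hypothetical helper, assuming the list is already sorted): start from the BinarySearch hit, walk left to the first equal element, then collect rightwards.

```csharp
using System;
using System.Collections;

class NeighborScan
{
    // BinarySearch lands somewhere inside a run of equal keys;
    // walk left to the first match, then collect to the right.
    public static ArrayList FindAll(ArrayList sorted, object key)
    {
        ArrayList matches = new ArrayList();
        int hit = sorted.BinarySearch(key);
        if (hit < 0) return matches;                 // key not present

        while (hit > 0 && sorted[hit - 1].Equals(key)) hit--;
        for (int i = hit; i < sorted.Count && sorted[i].Equals(key); i++)
            matches.Add(sorted[i]);
        return matches;
    }

    static void Main()
    {
        ArrayList tickers = new ArrayList();
        tickers.Add("GE");
        tickers.Add("MSFT");
        tickers.Add("MSFT");
        tickers.Add("ORCL");    // already in sorted order

        Console.WriteLine(FindAll(tickers, "MSFT").Count); // 2
    }
}
```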

--
Scott
http://www.OdeToCode.com



Nov 21 '05 #46
Scott,
I'm sure you can imagine scenarios where foreach doesn't scale well
enough to use for all look up scenarios (the 20%). This is not the
difference between for and foreach but the difference between O(n) and
O(log n).

As I stated earlier, I code for the 80% and only use special cases when the
20% is proven...

If the implementation within Portfolio warranted using BinarySearch instead
of For Each then I would consider using it.

My point is: I consider BinarySearch versus For Each an implementation
detail, and I'm sure you know the importance of hiding implementation
details.
I also have the flexibility of using IComparer to search based on
ticker symbol, or price, or any other public member of the object. As
long as the selection of the IComparer instance is kept encapsulated

As long as you realize that BinarySearch requires your collection be sorted
by the "field" you are searching on. I hope you also realize this sorting
may wreak havoc with your scalability.

Which is where I find (pun intended) that Whidbey's Array.Find, Array.FindAll,
and Array.FindIndex might be a better choice. Again, implementation details.

Of course, there is nothing stopping Portfolio from having tickerIndex,
priceIndex, and symbolIndex sorted arrays that you run BinarySearch on
individually, based on which "field" you are searching. Again, an implementation detail...
One note: BinarySearch wouldn't restrict the array to holding a single
MSFT ticker. BinarySearch doesn't guarantee that it will return the

It's not a literal restriction as much as it's a conceptual one, in that
BinarySearch is only able to return a single Stock. You would not know which
stock was returned. Your sample effectively says to sell 1 of N MSFT stock,
whereas David's code says to sell N of N MSFT stock. I hope you agree 1 of
N is very different than N of N!

Also as you pointed out you may not know which MSFT stock you sold, it may
be the first, last, or middle shares in your portfolio...

Hope this helps
Jay

Nov 21 '05 #47
On Sun, 15 Aug 2004 13:47:36 -0500, "Jay B. Harlow [MVP - Outlook]"
<Ja************@msn.com> wrote:

Hi Jay:

<snip>
I hope you also realize this sorting
may wreak havoc with your scalability.

If you are suggesting I might be choosing algorithms at random, then I
want to assure you I have specific performance and scalability metrics
to meet. The applications are tested and measured, then specific areas
are targeted for tuning to achieve the goals.

<snip>
One note: BinarySearch wouldn't restrict the array to holding a single
MSFT ticker. BinarySearch doesn't guarantee that it will return the

It's not a literal restriction as much as it's a conceptual one, in that
BinarySearch is only able to return a single Stock. You would not know which
stock was returned. Your sample effectively says to sell 1 of N MSFT stock,
whereas David's code says to sell N of N MSFT stock. I hope you agree 1 of
N is very different than N of N!

<snip>

Being nit picky, but BinarySearch doesn't return an object reference,
it returns an index value. You can use the index to find other
occurrences of what you are looking for. *If* you are in a performance
critical section, this *may* still be a win over straight iteration.
The devil is always in the details.

--
Scott

Nov 21 '05 #48
Scott,
If you are suggesting I might be choosing algorithms at random, then I

I was not suggesting you were choosing algorithms at random; please re-read
what I stated!

BinarySearch requires that the array be sorted. Correct?

Sorting the array each time you search on a different "field" may negate the
benefit of the BinarySearch. Correct?

If you feel either of the above statements are false, outside my suggestion
of multiple arrays, I would like to see a working sample of how you get it
to work, while being performant & scalable! (as I may find it useful in my
solutions).

I was suggesting that using For Each or BinarySearch is IMHO an
implementation detail. I would pick the appropriate one for the requirements
of the application.
Being nit picky, but BinarySearch doesn't return an object reference,
it returns an index value. You can use the index to find other

Yes, it returns an index. It returns only a single index, and this index is able
to retrieve a single stock. Correct?

In your original example you only showed changing a single stock, if you
want to show an example of using BinarySearch to change N stocks, then we
can continue this discussion...

Thanks for understanding
Jay



Nov 21 '05 #49
Hi again Jay:

On Sun, 15 Aug 2004 22:53:48 -0500, "Jay B. Harlow [MVP - Outlook]"
<Ja************@msn.com> wrote:
Scott,
If you are suggesting I might be choosing algorithms at random, then I

I was not suggesting you were choosing algorithms at random; please re-read
what I stated!


You stated:

"As long as you realize that BinarySearch requires your collection be
sorted by the "field" you are searching on. I hope you also realize
this sorting may wreak havoc with your scalability."

Yes, I am aware the collection has to be in sorted order. I also
wanted to make it clear I'm not doing binary searches because I
watched Oprah Winfrey's show in the morning and Oprah said binary
searches were cool so I wanted to try it too. I took this as a
condescending statement, but please accept my apologies if it wasn't
meant this way.
I was suggesting that using For Each or BinarySearch is IMHO an
implementation detail. I would pick the appropriate one for the requirements
of the application.

Agreed.
Being nit picky, but BinarySearch doesn't return an object reference,
it returns an index value. You can use the index to find other

Yes it returns an Index. It returns only a single index, this index is able
to retrieve a single stock. Correct?


Correct, but also note index + 1 and index - 1 can also contain a
match. Checking in the vicinity can, in the right scenario, be faster
than iterating an entire collection and is non-trivial to
implement, in fact .....
In your original example you only showed changing a single stock, if you
want to show an example of using BinarySearch to change N stocks, then we
can continue this discussion...


..... I'm sure you've done this yourself by using the Select method of
the DataTable, or the Find method of the DataView, and then
manipulating the rows the methods return. One doesn't need to know
that Select uses a binary search, or that arrays (indexes, if you
will) are built to do to the binary search. These details are all
nicely encapsulated - someone tells the container what to search for
and the container does the specialized work. (Though like everything,
there can be a need to know in exceptional cases).
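For instance, a minimal demo of that encapsulation with DataTable.Select (column name and data are hypothetical):

```csharp
using System;
using System.Data;

class SelectDemo
{
    static void Main()
    {
        DataTable portfolio = new DataTable();
        portfolio.Columns.Add("Ticker", typeof(string));
        portfolio.Rows.Add(new object[] { "MSFT" });
        portfolio.Rows.Add(new object[] { "GE" });
        portfolio.Rows.Add(new object[] { "MSFT" });

        // Callers state *what* they want; how the table finds it
        // (linear scan, internal index, ...) stays encapsulated.
        DataRow[] hits = portfolio.Select("Ticker = 'MSFT'");
        Console.WriteLine(hits.Length); // 2
    }
}
```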

--
Scott
http://www.OdeToCode.com
Nov 21 '05 #50
