"Alvin Bruney [MVP]" <some guy without an email addresswrote in message
news:eA**************@TK2MSFTNGP02.phx.gbl...
>Sure, but it's the most inefficient solution posted yet.
>Did you care to test the code before making that statement?
I don't need to. It makes one additional memory copy vs the "naive"
solution of using indexes instead of an iterator.
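For reference, the index-based version I'm calling "naive" is roughly the
following (Foo and ShouldRemove are just stand-ins for whatever element type
and test you actually have). It walks the list backwards by index so the
removals don't disturb the positions still to be visited:

    // Walk backwards so RemoveAt doesn't invalidate the
    // indexes we haven't examined yet.
    for (int i = list.Count - 1; i >= 0; i--)
    {
        if (ShouldRemove(list[i]))
            list.RemoveAt(i);  // still shifts the tail left on each removal
    }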
That single copy of the list, plus the need to test every element, is already
more expensive than the straightforward, highly efficient solution of building
a new list containing only the items that are not removed. On top of that, your
solution calls Remove a number of times, and each Remove does a linear search
to find the element being removed and then shifts the rest of the list down to
close the gap.
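Building a new list of the kept items is a single pass with no searching and no
shifting. A sketch, again with Foo and ShouldRemove as placeholders:

    // One pass: copy the survivors into a fresh list.
    List<Foo> kept = new List<Foo>(list.Count);
    foreach (Foo item in list)
    {
        if (!ShouldRemove(item))
            kept.Add(item);
    }
    list = kept;  // or copy back, depending on who owns the list

Compared with copy-then-Remove, that's one O(n) pass instead of an O(n) copy
followed by an O(n) search-and-shift for every removal.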
>
>If you did, you'd find that the "inefficiency" is not noticeable for 10,000
>items being removed from the collection - at least on my laptop. And that
>isn't even a real-world scenario anyway. You should be far more concerned
>with inefficiencies and performance issues due to network bandwidth; SQL
>queries; start-up times; and obscure counters to track and flag items for
>deletion before railing about inefficient code.
There's no reason to write an inefficient version when the framework already
provides List<T>.RemoveAll(Predicate<T>). Premature optimization is rolling
your own instead of using an existing function. What you suggested was
premature de-optimization: rolling your own and ending up with worse
performance than the existing one.
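For completeness, here's the one-liner I mean (ShouldRemove again stands in for
whatever condition flags an item for deletion, and Foo for the element type):

    // RemoveAll makes a single pass, compacts the list in place,
    // and returns the number of elements it removed.
    int removed = list.RemoveAll(delegate(Foo item)
    {
        return ShouldRemove(item);
    });

With C# 3.0 you can write the predicate as a lambda; the anonymous-delegate
form above also works on .NET 2.0.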