On Thu, 15 May 2008 15:40:02 -0700, amir <am**@discussions.microsoft.com
wrote:
> [...]
> foreach (item in the list)
>     mylist = list.findall(item);
> foreach (myitem in mylist)
>     dosomething
>
> the problem occurs in the outer foreach where it already has
> processed some items from the original list. how do i exclude them?
If you really must process your list items in groups according to your
FindAll() results, I think using LINQ as Tim suggests would work well.
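For the archives, the LINQ version might look something like the sketch below. It groups the list once by the same key the FindAll() predicate would be matching on, so each element lands in exactly one group and nothing gets processed twice. Note that "KeyProperty", "ListItem", and the sample data are stand-in names I've made up for illustration, not anything from the original post:

```csharp
// Sketch of the LINQ approach (C# 3.0 / .NET 3.5).
// "ListItem" and "KeyProperty" are hypothetical stand-ins for
// whatever type and grouping criterion the OP actually has.
using System;
using System.Collections.Generic;
using System.Linq;

class ListItem
{
    public string KeyProperty;
    public string Name;
}

class GroupDemo
{
    static void Main()
    {
        List<ListItem> list = new List<ListItem>
        {
            new ListItem { KeyProperty = "a", Name = "first" },
            new ListItem { KeyProperty = "b", Name = "second" },
            new ListItem { KeyProperty = "a", Name = "third" },
        };

        // One pass builds the groups; each element appears in
        // exactly one group, so no item is visited twice.
        foreach (var group in list.GroupBy(item => item.KeyProperty))
        {
            foreach (ListItem item in group)
            {
                Console.WriteLine("{0}: {1}", group.Key, item.Name);
            }
        }
    }
}
```

GroupBy() is deferred and single-pass, so this avoids the repeated FindAll() scans entirely.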
However, I have to wonder why you want to do this. Nothing in the code
you posted indicates an actual need to do this, and it seems like you'd be
better off just enumerating the list and processing each element one by
one. The way you've shown it, you've basically got an O(N^2) algorithm
even if we assume you somehow address the duplicated item issue at no cost
(which isn't a realistic assumption).
If you can't use LINQ and you must process in groups, an alternative
solution would be to use a Dictionary whose key is the same criterion
you're using for the FindAll() search. Then, when enumerating the list,
rather than doing work during that enumeration, simply build up lists of
elements in your Dictionary keyed on that criterion, and enumerate those
lists later:
Dictionary<KeyType, List<ListItem>> dict =
    new Dictionary<KeyType, List<ListItem>>();

// First pass: bucket each item under its key.
foreach (ListItem item in list)
{
    List<ListItem> listDict;

    if (!dict.TryGetValue(item.KeyProperty, out listDict))
    {
        listDict = new List<ListItem>();
        dict.Add(item.KeyProperty, listDict);
    }

    listDict.Add(item);
}

// Second pass: process each group of items.
foreach (List<ListItem> listItems in dict.Values)
{
    foreach (ListItem item in listItems)
    {
        // do something
    }
}
That's only O(N) instead of O(N^2), and IMHO it makes it clearer that
you're specifically trying to group the items before processing them.
Pete