
# Efficiently Extracting Identical Values From A List/Array

As a result of a graphics-based algorithm, I have a list of indices to
a set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence
the output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this
problem could become a bottleneck.

Jul 23 '05 #1
As a result of a graphics-based algorithm, I have a list of indices to
a set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence
the output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this
problem could become a bottleneck.

Tom

--
________________________________________________________________________
Dipl.-Ing. Thomas Maier-Komor http://www.rcs.ei.tum.de
Institute for Real-Time Computer Systems (RCS) fon +49-89-289-23578
Technische Universitaet Muenchen, D-80290 Muenchen fax +49-89-289-23555
Jul 23 '05 #2
"Adam Hartshorne" <or********@yahoo.com> wrote in message
news:cv**********@wisteria.csv.warwick.ac.uk...
As a result of a graphics-based algorithm, I have a list of indices to a
set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence the
output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this problem
could become a bottleneck.

An "easy" way would be to use:
std::multimap< int/*nodeIndex*/, std::vector<int/*arrayIndex*/> > myList;
// for each index:
myList[aNodeIndex].push_back( anArrayIndex );

Likely to be more efficient:
std::vector< std::pair<int/*nodeIndex*/,int/*arrayIndex*/> > myList;
myList.reserve( theSizeOfTheArrayOfIndices );
// for each index:
myList.push_back( std::pair<int,int>( aNodeIndex, anArrayIndex ) );
std::sort( myList.begin(), myList.end() );
// --> scan for consecutive items with the same node index

A hash_map (or unordered_map) could be tested too, but I would expect
the vector version to be faster (just a guess...).
Ivan
--
http://ivan.vecerina.com/contact/?subject=NG_POST <- email contact form
Brainbench MVP for C++ <> http://www.brainbench.com

Jul 23 '05 #3
Ivan Vecerina wrote:
"Adam Hartshorne" <or********@yahoo.com> wrote in message
news:cv**********@wisteria.csv.warwick.ac.uk...
As a result of a graphics-based algorithm, I have a list of indices to a
set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence the
output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this problem
could become a bottleneck.
An "easy" way would be to use:
std::multimap< int/*nodeIndex*/, std::vector<int/*arrayIndex*/> > myList;
// for each index:
myList[aNodeIndex].push_back( anArrayIndex );

Likely to be more efficient:
std::vector< std::pair<int/*nodeIndex*/,int/*arrayIndex*/> > myList;
myList.reserve( theSizeOfTheArrayOfIndices );
// for each index:
myList.push_back( std::pair<int,int>( aNodeIndex, anArrayIndex ) );
std::sort( myList.begin(), myList.end() );
// --> scan for consecutive items with the same node index

A hash_map (or unordered_map) could be tested too, but I would expect
the vector version to be faster (just a guess...).
Ivan

Maybe I'm missing something, but using this way:
An "easy" way would be to use:
std::multimap< int/*nodeIndex*/, std::vector<int/*arrayIndex*/> > myList;
// for each index:
myList[aNodeIndex].push_back( anArrayIndex );

will give me the list of lists, but I only want to consider those nodes
which are mentioned multiple times in the storage array. The above will
form the list of lists, based upon the node indices and, for each of
those, a list of array indices.

I would then have to search / sort the whole myList to isolate the
elements in the new myList that had multiple values stored. Is that correct?

Jul 23 '05 #4

As a result of a graphics-based algorithm, I have a list of indices to
a set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence
the output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this
problem could become a bottleneck.

Yep. That definitely will become an issue for large point sets.
What you need to do:
sort the points(*) and keep track of where each point was in the
original data structure.
You might want to use a helper data structure for that.

After the sort has been done, all points with identical coordinates
are consecutive, and the additional information will tell you where
each was in the original data set.

(*) sorting criterion:
if x_coordinates are equal
if y_coordinates are equal
return z1 < z2
else
return y1 < y2
else
return x1 < x2

--
Karl Heinz Buchegger
Jul 23 '05 #5
Karl Heinz Buchegger wrote:
As a result of a graphics-based algorithm, I have a list of indices to
a set of nodes.

I want to efficiently identify any node indices that are stored multiple
times in the array, and their locations in the array/list. Hence
the output would be some list of lists, containing groups of indices of the
storage array that point to the same node index.

This is obviously a trivial problem, but if my storage list is large and
the set of nodes large (and hence lots of repeated indices), this
problem could become a bottleneck.

Yep. That definitely will become an issue for large point sets.
What you need to do:
sort the points(*) and keep track of where each point was in the
original data structure.
You might want to use a helper data structure for that.

After the sort has been done, all points with identical coordinates
are consecutive, and the additional information will tell you where
each was in the original data set.

(*) sorting criterion:
if x_coordinates are equal
if y_coordinates are equal
return z1 < z2
else
return y1 < y2
else
return x1 < x2

I think you may have misunderstood: there are no actual point
coordinates. Simply a list of points, a list of lines, and a list that is
being used to link lines to the points.

What I am concerned with is the linking list. So say the following:

I = {10,10,4,6,5,5}

That says lines 1 and 2 are linked to node 10, line 3 to node 4, and so on.

What I want is a result of the search that gives me:

O = {10{1,2}, 5{5,6}}
Jul 23 '05 #6

I think you may have misunderstood,
Maybe
there are no actual point
coordinates. Simply a list of points, a list of lines, and a list that is
being used to link lines to the points.

What I am concerned with is the linking list. So say the following

I = {10,10,4,6,5,5}

That says lines 1 and 2 are linked to node 10, line 3 to node 4, and so on.

What I want is a result of the search that gives me

O = {10{1,2}, 5{5,6}}

Same strategy.
Set up a helper data structure:

struct SortHelper
{
int NodeIndex;
int OriginalPosition;
};

and create an array (or whatever) of that:

I = { 10, 4, 8, 10, 4, 5 }

becomes

{ 10, 1 }
{ 4, 2 }
{ 8, 3 }
{ 10, 4 }
{ 4, 5 }
{ 5, 6 }

Now sort that array according to NodeIndex:

{ 4, 2 }
{ 4, 5 }
{ 5, 6 }
{ 8, 3 }
{ 10, 1 }
{ 10, 4 }

and scan through it: there are 2 consecutive '4' nodes in the list, and they
appeared in the original I at positions 2 and 5. '5' is single and thus
of no interest to you (if I understand correctly), same for '8'. But then
there is '10', which occurs 2 times in I at positions 1 and 4.

The strategy is always the same. If you need to compare each element with each
other element in a data structure, you have a potential O(n^2) algorithm. If
possible (and often it is), sort that thing such that equal elements become
consecutive. Sorting is of order O(n*log(n)), plus an additional O(n) for
running through the data structure and sorting things out. Much better
than O(n^2) for large values of n.

--
Karl Heinz Buchegger
Jul 23 '05 #7
"Adam Hartshorne" <or********@yahoo.com> wrote in message
news:cv**********@wisteria.csv.warwick.ac.uk...
Maybe I'm missing something, but using this way:
An "easy" way would be to use:
std::multimap< int/*nodeIndex*/, std::vector<int/*arrayIndex*/> > myList;

NB: I actually meant to write std::map< .... >

// for each index:
myList[aNodeIndex].push_back( anArrayIndex );

will give me the list of lists, but I only want to consider those nodes
which are mentioned multiple times in the storage array. The above will
form the list of lists, based upon the node indices and, for each of
those, a list of array indices.

I would then have to search / sort the whole myList to isolate the
elements in the new myList that had multiple values stored.
Is that correct?

Yes: sorting first tends to be the fastest way to find
identical values in a list.

This said, in your case, aNodeIndex values are in a known 0-based
interval. Because of that, you could probably use a faster approach:
// initial map filled with -1 to say no arrayIndex points to that node yet
std::vector<int> nodeToFirstInd( maxNodeIndex, -1 );

// this will store only nodes with multiple indices
std::map< int/*nodeIndex*/, std::vector<int/*arrayIndex*/> > multiLinked;

for(....)// for each arrayIndex, nodeIndex pair:
{
if( nodeToFirstInd[nodeIndex]==-1 )
nodeToFirstInd[nodeIndex] = arrayIndex; // mark node as 'used'
else {
std::vector<int>& list = multiLinked[nodeIndex];
if( list.empty() ) // put the initial item in
list.push_back( nodeToFirstInd[nodeIndex] );
list.push_back( arrayIndex ); // add the new value index
}
}

// now multiLinked contains what you want
Sorry the code samples are a mess - just written in a rush.
I hope it is understandable and helpful, though.

Ivan
--
http://ivan.vecerina.com/contact/?subject=NG_POST <- email contact form
Jul 23 '05 #8
