Bytes IT Community

Inefficiency in search_n

Hello!

For those who aren't familiar: the "search_n" STL function takes two
iterators, a "count" value, and either a "value" or a "value" and a
binary predicate, and searches between the iterators for the first
occurrence of "count" consecutive values which are all equal to "value"
(with respect to the binary predicate if given, == if not).
In my STL manual, the complexity of search_n is given as O(dist·count),
where dist is the distance between the two given iterators. It would, I
think, be easy to implement search_n in O(dist), with the further
guarantee that it dereferences each iterator at most once.

Furthermore, while it is not possible to get a better worst-case
complexity, where the iterators are random-access iterators it is
possible to get an average case of O(dist/count) in many cases.
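The skipping idea can be sketched like this (hypothetical helper name search_n_skip; index-based for clarity, so it assumes random access — and note the worst case is not improved, only the average):

```cpp
#include <cstddef>
#include <vector>

// Returns the index of the first run of `count` elements equal to `value`,
// or v.size() if there is no such run.
std::size_t search_n_skip(const std::vector<int>& v, std::size_t count, int value) {
    const std::size_t n = v.size();
    if (count == 0) return 0;
    std::size_t pos = 0;
    while (pos + count <= n) {
        // Scan the candidate window [pos, pos + count) from the back: the
        // first mismatch found lets us skip the whole prefix in one step.
        std::size_t j = pos + count;
        while (j > pos && v[j - 1] == value) --j;
        if (j == pos) return pos;  // the whole window matched
        pos = j;                   // no run can start at or before the mismatch
    }
    return n;
}
```

When matches are rare, the last element of the window usually mismatches immediately, so each probe skips about count positions — hence the O(dist/count) average.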

Is there some reason the required complexity is so lax? Might it be
improved in a future revision, and is there any chance of a more
optimised version for random-access iterators? (Or would such a design
decision be down to implementors?)

Thank you,

Chris
Jul 22 '05 #1
4 Replies


Chris Jefferson wrote:
[snip]
In my STL manual, the complexity of search_n is given as O(dist·count)
where dist is the distance between the two given iterators. It would, I
think, be easy to implement search_n in O(dist), with the further
guarantee that it dereferences each iterator at most once.
[snip]
Is there some reason the required complexity is so lax? Might it be
improved in a future revision, and is there any chance of a more
optimised version for random-access iterators?


I do not think that there is any good reason for the standard
to be so far off. From the description of SGI's reference
implementation (http://www.sgi.com/tech/stl/search_n.html):

Complexity
Linear. Search_n performs at most last - first comparisons.

(The C++ standard permits the complexity to be O(n (last - first)),
but this is unnecessarily lax. There is no reason for search_n to
examine any element more than once.)

So, chances are that the library you are using already provides linear
complexity.

As for your last question: the standard, as far as I can tell, does not
specify average-case complexities, so I gather the average case will be
up to the implementation.
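The single-pass version mentioned above can be sketched as follows (hypothetical name search_n_linear): each iterator is dereferenced exactly once, so the whole thing is O(dist) for any forward iterator.

```cpp
#include <cstddef>

template <class FwdIt, class T>
FwdIt search_n_linear(FwdIt first, FwdIt last, std::size_t count, const T& value) {
    if (count == 0) return first;
    std::size_t run = 0;       // length of the current run of matches
    FwdIt run_start = first;   // where the current run began
    for (FwdIt it = first; it != last; ++it) {
        if (*it == value) {
            if (run == 0) run_start = it;      // a new run begins here
            if (++run == count) return run_start;
        } else {
            run = 0;  // run broken; no element is ever visited twice
        }
    }
    return last;  // no run of the required length
}
```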
Kai-Uwe Bux

Jul 22 '05 #2


"Chris Jefferson" <ca*@cs.york.ac.uk> wrote in message
news:cf*********@pump1.york.ac.uk...
[snip]
In my STL manual, the complexity of search_n is given as O(dist·count)
where dist is the distance between the two given iterators.
[snip]


Have a look at this, it seems relevant.

http://www.codeproject.com/vcpp/stl/SearchN.asp

john
Jul 22 '05 #3

Kai-Uwe Bux wrote:
[snip]
So, chances are that the library you are using already provides linear
complexity.
[snip]


Thank you :)

One extra quick related (at least to me) question :)

Is there any way to have a templated function which takes as input two
forward iterators, and then have a specialised version which takes
random-access iterators?
Thanks,

Chris
Jul 22 '05 #4

On Mon, 09 Aug 2004 13:47:34 +0100, Chris Jefferson
<ca*@cs.york.ac.uk> wrote:
One extra quick related (at least to me) question :)

Is there any way to have a templated function which takes as input two
forward iterators, and then have a specialised version which takes
random-access iterators?


This is the usual way:

template <class RanIt>
void f(RanIt beg, RanIt end, std::random_access_iterator_tag);

template <class FwdIt>
void f(FwdIt beg, FwdIt end, std::forward_iterator_tag);

template <class It>
void f(It beg, It end)
{
    typedef typename
        std::iterator_traits<It>::iterator_category category;
    f(beg, end, category());  // the tag type selects the overload
}

Many variations are possible of course.
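For example, a self-contained sketch where the bodies just report which overload was chosen (the function name f and the returned strings are illustrative). Note that std::list iterators are bidirectional, and bidirectional_iterator_tag derives from forward_iterator_tag, so the forward version is picked for them:

```cpp
#include <iterator>
#include <list>
#include <string>
#include <vector>

template <class RanIt>
std::string f(RanIt, RanIt, std::random_access_iterator_tag) {
    return "random-access";  // exact tag match for vector/deque/pointers
}

template <class FwdIt>
std::string f(FwdIt, FwdIt, std::forward_iterator_tag) {
    return "forward";  // chosen via derived-to-base tag conversion otherwise
}

template <class It>
std::string f(It beg, It end) {
    typedef typename std::iterator_traits<It>::iterator_category category;
    return f(beg, end, category());  // dispatch on the iterator category
}
```

Calling f on vector iterators picks the random-access overload; on list iterators, the forward one.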

Tom
Jul 22 '05 #5

This discussion thread is closed; replies have been disabled.