
Search algorithm to use

If I have an array of data that I know to be sorted in increasing
order, and the array is less than 50 elements, and I want to find the
first element greater than a certain value, is a simple linear search
the best here (for loop from beginning to end of array)? It seems like
some other search algorithm like binary or whatever would be of no
benefit on this data set.

Nov 14 '05
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
"joshc" <jo********@gma il.com> writes:
If I have an array of data that I know to be sorted in increasing
order, and the array is less than 50 elements, and I want to find the
first element greater than a certain value, is a simple linear search
the best here (for loop from beginning to end of array)? It seems like
some other search algorithm like binary or whatever would be of no
benefit on this data set.
Gosh folks...

A linear search will take about 25 compares on the average.

A binary search will take about 6 compares on the average.

Roughly speaking, a reasonable guess is that the binary search will
run about 3 times as fast as the linear search. Whether that makes
any perceptible difference in the application or not is another
question altogether.


This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.
Programming a binary search should be something any competent
programmer can do almost as fast as he/she can write it down:
I haven't checked your code; however coding a binary search completely
correctly is notoriously trickier than it seems.

/* Return the smallest index of an element greater than
* the supplied argument x. If there is no such element,
* return the index where the element would be if there
* were one (which will be 'how_many' if how_many < 0
* is true).
*
* NEEDS: argument 'v' array be sorted in increasing
* order. NOT CHECKED.
*
* Disclaimer: not tested or compiled in any way.
*/

int
first_larger( SomeType v[], int how_many, SomeType x ){
int i = 0, j = how_many;

if( i >= j || v[0] > x ) return 0;

while( i + 1 != j ){
int m = (i + j) / 2;
if( v[m] <= x ) i = m;
else j = m;
}

return j;
}
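
(For concreteness, a quick usage sketch - not part of the original
post - taking SomeType as int. Same disclaimer as above: written
down, not compiled.)

#include <stdio.h>

typedef int SomeType;   /* assumption for the sketch */

int first_larger( SomeType v[], int how_many, SomeType x );

int main( void ){
    SomeType v[] = { 2, 4, 4, 7, 9 };

    printf( "%d\n", first_larger( v, 5, 4 ) );  /* 3: first element > 4 is v[3] == 7 */
    printf( "%d\n", first_larger( v, 5, 1 ) );  /* 0: v[0] == 2 is already > 1 */
    printf( "%d\n", first_larger( v, 5, 9 ) );  /* 5: no element > 9 */
    return 0;
}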

And, just a few meta-comments.

The comp.lang.c group isn't normally the place to ask this kind of
question; comp.programming is better.
True.

If you asked this question because you got assigned it as homework,
then look over the comments and the code until you think you see how
it works, then stop looking at it and try to write the code yourself.
If you don't, you're cheating yourself (and besides which karma will
get you :).


IMO it doesn't sound like a homework question; rather it sounds like
the sort of question that an inexperienced programmer would ask.

Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
Nov 14 '05 #21
cr*@tiac.net (Richard Harter) writes:
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
"joshc" <jo********@gma il.com> writes:
If I have an array of data that I know to be sorted in increasing
order, and the array is less than 50 elements, and I want to find the
first element greater than a certain value, is a simple linear search
the best here (for loop from beginning to end of array)? It seems like
some other search algorithm like binary or whatever would be of no
benefit on this data set.


Gosh folks...

A linear search will take about 25 compares on the average.

A binary search will take about 6 compares on the average.

Roughly speaking, a reasonable guess is that the binary search will
run about 3 times as fast as the linear search. Whether that makes
any perceptible difference in the application or not is another
question altogether.


This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.


The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.

The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".

Programming a binary search should be something any competent
programmer can do almost as fast as he/she can write it down:


I haven't checked your code; however coding a binary search completely
correctly is notoriously trickier than it seems.


I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.

Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.

/* Return the smallest index of an element greater than
* the supplied argument x. If there is no such element,
* return the index where the element would be if there
* were one (which will be 'how_many' if how_many < 0
* is true).
*
* NEEDS: argument 'v' array be sorted in increasing
* order. NOT CHECKED.
*
* Disclaimer: not tested or compiled in any way.
*/

int
first_larger( SomeType v[], int how_many, SomeType x ){
int i = 0, j = how_many;

if( i >= j || v[0] > x ) return 0;

while( i + 1 != j ){
int m = (i + j) / 2;
if( v[m] <= x ) i = m;
else j = m;
}

return j;
}

Nov 14 '05 #22
On Wed, 16 Feb 2005 05:17:17 -0800, Chris Williams wrote:

Lawrence Kirby wrote:
On Tue, 15 Feb 2005 20:46:00 -0800, Chris Williams wrote:
> If it is exponential then the algorithm isn't so good (O(n^2)).


That is polynomial; exponential would be something like O(2^n), e.g. an
algorithm that searched every bit pattern of n bits.


Hm. Had thought exponential growth referred to _any_ curve which
increased "steepness" as you went along the x axis.


Exponential typically refers to something of the form X^N where ^ is the
power operator. It is sometimes loosely used to refer to something that
grows faster than polynomial (i.e. N^X for arbitrarily large X). For
example 2^N will exceed N^X with sufficiently large N, no matter how
large you make X: with X = 10, say, 2^N overtakes N^10 at around N = 59.

Lawrence
Nov 14 '05 #23

In article <kf*************@alumnus.caltech.edu>, Tim Rentsch <tx*@alumnus.caltech.edu> writes:
cr*@tiac.net (Richard Harter) writes:
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
"joshc" <jo********@gma il.com> writes:

> If I have an array of data that I know to be sorted in increasing
> order, and the array is less than 50 elements, and I want to find the
> first element greater than a certain value, is a simple linear search
> the best here (for loop from beginning to end of array)?

A linear search will take about 25 compares on the average.

A binary search will take about 6 compares on the average.

Roughly speaking, a reasonable guess is that the binary search will
run about 3 times as fast as the linear search. Whether that makes
any perceptible difference in the application or not is another
question altogether.

However, a linear search will correctly predict nearly all the
branches, while a binary search will mispredict half of them, on
average.

Unless comparison time dominates, I wouldn't call your guess
"reasonable ", for that reason alone - and there are others.
(Consider we know nothing of the size of items and the memory-
management features of the platform, so we can't guess whether the
binary search will tend to cause more cache misses, for example.)
This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.


The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.


Any decent optimizer will unroll the loop, so loop index comparisons
will only add N/O comparisons, where O is the unrolling factor.
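
(For illustration - my sketch, not anything from the posts - here is
a hand-unrolled linear search with factor 4, the sort of thing an
optimizer might generate:)

/* Hand-unrolled linear search, unrolling factor 4. Returns the
 * smallest index of an element greater than x, or n if there is
 * none. Only every fourth iteration pays the index check. */
int
first_larger_unrolled4( int v[], int n, int x ){
    int i = 0;
    while( i + 4 <= n ){
        if( v[i]     > x ) return i;
        if( v[i + 1] > x ) return i + 1;
        if( v[i + 2] > x ) return i + 2;
        if( v[i + 3] > x ) return i + 3;
        i += 4;
    }
    while( i < n && v[i] <= x ) i++;   /* tail: at most 3 elements */
    return i;
}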

And if loop index comparisons take about as long as value comparisons,
then the latter must be very quick - so you fall foul of the branch-
misprediction penalty of binary search.

And this is why premature optimization is a poor idea.

Now, in this case, the OP did follow up by stating that profiling had
indicated his search function was a hot spot; and since that's the
case, he might indeed want to try various alternatives. If comparisons
are very expensive, the binary search might even help - though
I suspect he'd be better off revisiting his data structure, as
various people suggested.

--
Michael Wojcik mi************@microfocus.com

Every allegiance to some community eventually involves such a fetish,
which functions as the disavowal of its founding crime: is not 'America'
the fetish of an infinitely open space enabling every individual to
pursue happiness in his or her own way? -- Slavoj Zizek
Nov 14 '05 #24
On 17 Feb 2005 02:30:11 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
cr*@tiac.net (Richard Harter) writes:
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
>"joshc" <jo********@gma il.com> writes:
>
>> If I have an array of data that I know to be sorted in increasing
>> order, and the array is less than 50 elements, and I want to find the
>> first element greater than a certain value, is a simple linear search
>> the best here (for loop from beginning to end of array)? It seems like
>> some other search algorithm like binary or whatever would be of no
>> benefit on this data set.
>
>Gosh folks...
>
>A linear search will take about 25 compares on the average.
>
>A binary search will take about 6 compares on the average.
>
>Roughly speaking, a reasonable guess is that the binary search will
>run about 3 times as fast as the linear search. Whether that makes
>any perceptible difference in the application or not is another
>question altogether.
This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.


The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.


One comparison suffices if you use a sentinel. The following code
fragment is illustrative (but unchecked).

if (data[0] >= testval) return 0;
ptr = data + data_length;
while (*--ptr >= testval);
return ptr-data;

One decrement, one compare per element.

The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".
It's a reasonable guess if you don't do the arithmetic and count
operations, and if you've never benchmarked the two approaches. For a
linear search the costs per iteration are

1 data compare & branch
1 increment/decrement

For the binary search the costs per iteration are:

1 data compare & branch
1/2 branch (then clause)
1 index compare and branch
1 addition
1 assignment
1 divide by 2
1 addition by 1 (avoidable)

If we just count operations that's 2 per iteration for a linear search
and 6.5 for a binary search; over roughly 25 iterations versus 6, the
totals are just about a wash. However the code for the
linear search is a better candidate for optimization. Some years ago
I ran benchmarks on just this question; the break even point was
somewhere around 50. I expect that the results would be about the
same on more modern machines.


>Programming a binary search should be something any competent
>programmer can do almost as fast as he/she can write it down:
I haven't checked your code; however coding a binary search completely
correctly is notoriously trickier than it seems.


I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.


It's not just me or you; errors in writing binary searches are (or
were) common. IIRC Bentley makes just that observation in
"Programmin g Pearls" and, of course, Dijkstra had some pungent
comments on the subject.

Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.
Well you see, "looks right" is not quite as reassuring as "proven
right". I expect it probably is - most of us reuse validated code
schemas for these little exercises.

> /* Return the smallest index of an element greater than
> * the supplied argument x. If there is no such element,
> * return the index where the element would be if there
> * were one (which will be 'how_many' if how_many < 0
> * is true).
> *
> * NEEDS: argument 'v' array be sorted in increasing
> * order. NOT CHECKED.
> *
> * Disclaimer: not tested or compiled in any way.
> */
>
> int
> first_larger( SomeType v[], int how_many, SomeType x ){
> int i = 0, j = how_many;
>
> if( i >= j || v[0] > x ) return 0;
>
> while( i + 1 != j ){
> int m = (i + j) / 2;
> if( v[m] <= x ) i = m;
> else j = m;
> }
>
> return j;
> }
>


Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
Nov 14 '05 #25
cr*@tiac.net (Richard Harter) writes:
On 17 Feb 2005 02:30:11 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
cr*@tiac.net (Richard Harter) writes:
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:

>"joshc" <jo********@gma il.com> writes:
>
>> If I have an array of data that I know to be sorted in increasing
>> order, and the array is less than 50 elements, and I want to find the
>> first element greater than a certain value, is a simple linear search
>> the best here (for loop from beginning to end of array)? It seems like
>> some other search algorithm like binary or whatever would be of no
>> benefit on this data set.
>
>Gosh folks...
>
>A linear search will take about 25 compares on the average.
>
>A binary search will take about 6 compares on the average.
>
>Roughly speaking, a reasonable guess is that the binary search will
>run about 3 times as fast as the linear search. Whether that makes
>any perceptible difference in the application or not is another
>question altogether.

This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.


The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.


One comparison suffices if you use a sentinel. The following code
fragment is illustrative (but unchecked).

if (data[0] >= testval) return 0;
ptr = data + data_length;
while (*--ptr >= testval);
return ptr-data;

One decrement, one compare per element.

The binary search needs an add and a divide by 2 (usually a shift);
also one assignment. The linear search needs an increment. Since
25/6 > 4, I think "about 3 times as fast" is a fair statement, especially
since it was given as a "reasonable guess".


It's a reasonable guess if you don't do the arithmetic and count
operations, and if you've never benchmarked the two approaches. For a
linear search the costs per iteration are

1 data compare & branch
1 increment/decrement

For the binary search the costs per iteration are:

1 data compare & branch
1/2 branch (then clause)
1 index compare and branch
1 addition
1 assignment
1 divide by 2
1 addition by 1 (avoidable)

If we just count operations that's 2 per iteration for a linear search
and 6.5 for a binary search; over roughly 25 iterations versus 6, the
totals are just about a wash. However the code for the
linear search is a better candidate for optimization. Some years ago
I ran benchmarks on just this question; the break even point was
somewhere around 50. I expect that the results would be about the
same on more modern machines.


>Programming a binary search should be something any competent
>programmer can do almost as fast as he/she can write it down:

I haven't checked your code; however coding a binary search completely
correctly is notoriously trickier than it seems.


I used to think that too before I learned how to write a binary search
that is both simple and solid. I suspect the idea that writing a
binary search is tricky was much of the motivation of the OP; that's
part of why I wrote this.


It's not just me or you; errors in writing binary searches are (or
were) common. IIRC Bentley makes just that observation in
"Programmin g Pearls" and, of course, Dijkstra had some pungent
comments on the subject.

Incidentally, looking over the code, it still looks right to me. The
comment, however, has an error - should read 'how_many >= 0'. And for
those who notice nits, "increasing" should be "non-decreasing"
instead.


Well you see, "looks right" is not quite as reassuring as "proven
right". I expect it probably is - most of us reuse validated code
schemas for these little exercises.

> /* Return the smallest index of an element greater than
> * the supplied argument x. If there is no such element,
> * return the index where the element would be if there
> * were one (which will be 'how_many' if how_many < 0
> * is true).
> *
> * NEEDS: argument 'v' array be sorted in increasing
> * order. NOT CHECKED.
> *
> * Disclaimer: not tested or compiled in any way.
> */
>
> int
> first_larger( SomeType v[], int how_many, SomeType x ){
> int i = 0, j = how_many;
>
> if( i >= j || v[0] > x ) return 0;
>
> while( i + 1 != j ){
> int m = (i + j) / 2;
> if( v[m] <= x ) i = m;
> else j = m;
> }
>
> return j;
> }
>


Actually I think we are mostly in agreement here, despite the apparent
differences in the postings. There are some different assumptions and
the language used may have been misleading in places. Let's see if
those can be straightened out.

The phrase "is a reasonable guess" (for the performance factor of
three) probably is better put as "is a reasonable back of the envelope
estimate". That's meant with the usual errors bars and ignoring of
(unknown or unspecified) second-order effects.

I admit, the estimate was skewed toward the case where the element
compare was "expensive" relative to other operations. That assumption
is reasonably good if the elements being compared are strings; fair
if the elements being compared are floating point or mixed-mode; poor
if the elements being compared are int's. I still think it's a fair
assumption for calculating a BOTE estimate, since it's standard in
comparing algorithm performance; but you're right that it's not a
good assumption when the elements being compared are int's.

You're also right that, for int arrays of 50 elements, a straight
linear search (not with sentinel) runs only a little slower than a
binary search like the one I illustrated. Out of curiosity I ran a
few benchmarks, and the simple linear search was only 25% slower than
the simple binary search. (In case anyone was wondering, the platform
here is an x86 architecture, with gcc -O2 doing the compiles.)
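
(For anyone wanting to reproduce that kind of measurement, a minimal
harness along the following lines would do. This is my sketch, not
the benchmark actually run; N, REPS, and the test data are arbitrary
choices.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    50
#define REPS 1000000

typedef int SomeType;   /* sketch assumption */
int first_larger( SomeType v[], int how_many, SomeType x );

int main( void ){
    SomeType v[N];
    volatile int sink = 0;   /* keeps the searches from being optimized away */
    clock_t t0, t1;
    int i;

    for( i = 0; i < N; i++ ) v[i] = 2 * i;   /* sorted test data */

    t0 = clock();
    for( i = 0; i < REPS; i++ )
        sink += first_larger( v, N, rand() % (2 * N) );
    t1 = clock();

    /* rand() cost is included, but it is the same for every variant */
    printf( "%.3f s for %d searches (sink=%d)\n",
            (double)(t1 - t0) / CLOCKS_PER_SEC, REPS, sink );
    return 0;
}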

Using a sentinel on the linear search is a nice technique, and
definitely an improvement. (Incidentally, the code snippet posted has
two errors - the two >= comparisons should be > comparisons, and the
return value should be 'ptr-data + 1' rather than 'ptr-data'.) A
linear search using the sentinel technique ran roughly 1.7 times as
fast as a simple linear search, or 40% faster than a simple binary
search. Completely unrolling the linear search -- which is reasonable
for only 50 elements -- gives a 1.2x speedup relative to the sentinel
linear, or a factor of almost 2.1 over the original simple linear
search. Nice!
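
(Folding those two fixes back into the fragment gives something like
the following - my reconstruction, untested in the same spirit as the
rest of the thread:)

/* Backward linear search using v[0] as the sentinel, with both fixes
 * applied: > instead of >=, and ptr - data + 1 as the return value.
 * Returns the smallest index of an element greater than x, or n if
 * there is none. NEEDS: n > 0. */
int
first_larger_sentinel( int data[], int n, int x ){
    int *ptr;
    if( data[0] > x ) return 0;
    ptr = data + n;
    while( *--ptr > x )
        ;   /* data[0] <= x guarantees termination */
    return (ptr - data) + 1;
}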

I will grant you that many programmers will get binary search wrong
(and that this observation may have been made by messieurs Bentley and
Dijkstra). However, I believe this result obtains not because binary
search is inherently difficult, but because it is practiced wrongly by
less competent programmers, and because the myth of it being difficult
is perpetuated by instructors who don't teach it very well. When I
said the binary search code I wrote "looks right" I meant it looked
like there were no clerical errors; I'm sure the invariants were
right, as I was sure they were when I first posted it. It's become
second nature for me to make sure loop invariants are right, and
anyone who adopts this discipline can program binary searches with
confidence. (Come to think of it, asking a candidate to program a
simple binary search might make a good interview question.)

Fortunately for the poor binary search, it still has something to
offer in response to the performance of linear/sentinel search and
unrolled linear search in our 'int array of 50 element' benchmarks.

Combining two levels of binary comparison and then doing a simple
linear search (that is, not using the sentinel technique) runs a tad
faster (roughly 10%) than a linear search with sentinel.
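
(That two-split variant isn't shown here; presumably it looked
something like this reconstruction of mine:)

/* Hypothetical reconstruction: two binary splits, then a plain linear
 * scan over what remains (no sentinel). Returns the smallest index of
 * an element greater than x, or n if there is none. */
unsigned
two_split_linear( int v[], unsigned n, int x ){
    unsigned lo = 0, hi = n;   /* answer lies in [lo, hi] */
    int k;

    for( k = 0; k < 2 && lo < hi; k++ ){   /* two binary splits */
        unsigned m = lo + (hi - lo) / 2;
        if( v[m] <= x ) lo = m + 1;
        else            hi = m;
    }
    while( lo < hi && v[lo] <= x ) lo++;   /* simple linear scan */
    return lo;
}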

The function

uint
binary_sentinel( int v[], uint n, int x ){
/** NEEDS: n > 0 **/
if( v[n/2] > x ){
uint i = 0;
while( v[i] <= x ) i++;
return i;
} else {
while( v[--n] > x ) {}
return n+1;
}
}

which combines a single binary split with a linear/sentinel search,
outperforms even the fully unrolled linear search. (Here also the
speedup is roughly 10%.)

Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.

That all fit with what you were saying?
Nov 14 '05 #26
mw*****@newsguy.com (Michael Wojcik) writes:
In article <kf*************@alumnus.caltech.edu>, Tim Rentsch <tx*@alumnus.caltech.edu> writes:
cr*@tiac.net (Richard Harter) writes:
On 16 Feb 2005 19:04:00 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
>"joshc" <jo********@gma il.com> writes:
>
>> If I have an array of data that I know to be sorted in increasing
>> order, and the array is less than 50 elements, and I want to find the
>> first element greater than a certain value, is a simple linear search
>> the best here (for loop from beginning to end of array)?
>
>A linear search will take about 25 compares on the average.
>
>A binary search will take about 6 compares on the average.
>
>Roughly speaking, a reasonable guess is that the binary search will
>run about 3 times as fast as the linear search. Whether that makes
>any perceptible difference in the application or not is another
>question altogether.
However, a linear search will correctly predict nearly all the
branches, while a binary search will mispredict half of them, on
average.

Unless comparison time dominates, I wouldn't call your guess
"reasonable ", for that reason alone - and there are others.
(Consider we know nothing of the size of items and the memory-
management features of the platform, so we can't guess whether the
binary search will tend to cause more cache misses, for example.)
This isn't right. First of all you need two compares in the binary
search loop, one for the element compare, and one for the loop test,
so the number of compares is about 12. Secondly the binary search
loop requires an addition and a divide by 2. On the other hand the
linear compare loop only requires a compare.


The linear search needs two compares per loop iteration also - one to
test the value, and one to check the loop index.


Any decent optimizer will unroll the loop, so loop index comparisons
will only add N/O comparisons, where O is the unrolling factor.

And if loop index comparisons take about as long as value comparisons,
then the latter must be very quick - so you fall foul of the branch-
misprediction penalty of binary search.

And this is why premature optimization is a poor idea.

Now, in this case, the OP did follow up by stating that profiling had
indicated his search function was a hot spot; and since that's the
case, he might indeed want to try various alternatives. If comparisons
are very expensive, the binary search might even help - though
I suspect he'd be better off revisiting his data structure, as
various people suggested.


I have the sense you think I was trying to provide some sort of
detailed performance analysis. I wasn't. There was no optimization -
premature or otherwise - in my posting. There were only observations,
no recommendation.

What I was trying to do is give a not-very-deep, but helpful, answer
to a question about whether binary searching would be an improvement
over linear searching. The answer I gave was intended to address the
question asked, as it was stated.

I was also trying to illustrate a way of thinking that would likely
be helpful to someone thinking about whether it might be worth some
effort trying to improve the performance of this particular function.
I don't specialize in performance enhancement; but engineers that I
know who do (and who are good at what they do) say that big gains in
performance are almost always due to algorithmic improvements rather
than fussing with low-level machine details. I think that view is
borne out here; please see my response to Richard Harter's posting
for details.
Nov 14 '05 #27
On 23 Feb 2005 02:41:17 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:


Actually I think we are mostly in agreement here, despite the apparent
differences in the postings. There are some different assumptions and
the language used may have been misleading in places. Let's see if
those can be straightened out.
Agreed. (Mostly :-))
The phrase "is a reasonable guess" (for the performance factor of
three) probably is better put as "is a reasonable back of the envelope
estimate". That's meant with the usual errors bars and ignoring of
(unknown or unspecified) second-order effects.
Yes and no. The problem here is that we are comparing algorithms with
different cost functions on a restricted range. If we just use number
of iterations for our reasonable guess we're ignoring cost per
iteration. The "reasonable guess" will be off by some constant
factor. This doesn't matter for large n; big O dominates. It's a
different matter for a small restricted range of n. Still, it's a
reasonable guess.

I admit, the estimate was skewed toward the case where the element
compare was "expensive" relative to other operations. That assumption
is reasonably good if the elements being compared are strings; fair
if the elements being compared are floating point or mixed-mode; poor
if the elements being compared are int's. I still think it's a fair
assumption for calculating a BOTE estimate, since it's standard in
comparing algorithm performance; but you're right that it's not a
good assumption when the elements being compared are int's.
Granted. As a caveat, comparison costs on strings are distribution
dependent. For quasi-uniform distributions comparison costs for
binary searches will be less than those of linear searches. Surprise!

You're also right that, for int arrays of 50 elements, a straight
linear search (not with sentinel) runs only a little slower than a
binary search like the one I illustrated. Out of curiosity I ran a
few benchmarks, and the simple linear search was only 25% slower than
the simple binary search. (In case anyone was wondering, the platform
here is an x86 architecture, with gcc -O2 doing the compiles.)

Using a sentinel on the linear search is nice technique, and
definitely an improvement. (Incidentally, the code snippet posted has
two errors - the two >= comparisons should be > comparisons, and the
return value should be 'ptr-data + 1' rather than 'ptr-data'.)
Oops.
A linear search using the sentinel technique ran roughly 1.7 times as
fast as a simple linear search, or 40% faster than a simple binary
search. Completely unrolling the linear search -- which is reasonable
for only 50 elements -- gives a 1.2x speedup relative to the sentinel
linear, or a factor of almost 2.1 over the original simple linear
search. Nice!

I will grant you that many programmers will get binary search wrong
(and that this observation may have been made by messieurs Bentley and
Dijkstra). However, I believe this result obtains not because binary
search is inherently difficult, but because it is practiced wrongly by
less competent programmers, and because the myth of it being difficult
is perpetuated by instructors who don't teach it very well. When I
said the binary search code I wrote "looks right" I meant it looked
like there were no clerical errors; I'm sure the invariants were
right, as I was sure they were when I first posted it. It's become
second nature for me to make sure loop invariants are right, and
anyone who adopts this discipline can program binary searches with
confidence. (Come to think of it, asking a candidate to program a
simple binary search might make a good interview question.)
Ah, okay. I agree, programming a simple binary search would make a
good interview question.

Fortunately for the poor binary search, it still has something to
offer in response to the performance of linear/sentinel search and
unrolled linear search in our 'int array of 50 element' benchmarks.

Combining two levels of binary comparison and then doing a simple
linear search (that is, not using the sentinel technique) runs a tad
faster (roughly 10%) than a linear search with sentinel.

The function

uint
binary_sentinel( int v[], uint n, int x ){
/** NEEDS: n > 0 **/
if( v[n/2] > x ){
uint i = 0;
while( v[i] <= x ) i++;
return i;
} else {
while( v[--n] > x ) {}
return n+1;
}
}

which combines a single binary split with a linear/sentinel search,
outperforms even the fully unrolled linear search. (Here also the
speedup is roughly 10%.)
Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.
You can use a switch for the unrolled linear search. Still, I like
the unrolled binary search.
That all fit with what you were saying?


Pretty much. Whether or not optimization is premature, successful
optimization requires a full assessment of alternatives, a full
understanding of the data being operated upon, and careful
measurement.
Richard Harter, cr*@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
All my life I wanted to be someone;
I guess I should have been more specific.
Nov 14 '05 #28
cr*@tiac.net (Richard Harter) writes:
On 23 Feb 2005 02:41:17 -0800, Tim Rentsch <tx*@alumnus.caltech.edu>
wrote:
[snip]

Finally, if we compare fully unrolled linear search and fully unrolled
binary search -- both reasonable on 50 element arrays if the search is
on the critical path for performance -- the binary search is 2.3 times
as fast as the linear search. The fully unrolled binary search also
has the nice property that it can work on arrays of any size up to 50
(ok, 63), rather than just a single fixed size of 50.
You can use a switch for the unrolled linear search.


I tried code that used a 'switch' to vector into an unrolled linear
search. On this problem at least, the simple loop-using-sentinel ran
faster, by about 20%.

Still, I like the unrolled binary search.


I have an interesting story about that. I tried two versions of the
unrolled binary search - one version like this:

if( v[3] > x )
if( v[1] > x ) return v[0] > x ? 0 : 1;
else return v[2] > x ? 2 : 3;
else
if( v[5] > x ) return v[4] > x ? 4 : 5;
else return v[6] > x ? 6 : 7;

(only of course more so, to cover all 50 cases), where the number of
elements in the array must match exactly the unrolling in the code;
and another version like this:

if( n < 4 || v[3] > x )
if( n < 2 || v[1] > x ) return n < 1 || v[0] > x ? 0 : 1;
else return n < 3 || v[2] > x ? 2 : 3;
else
if( n < 6 || v[5] > x ) return n < 5 || v[4] > x ? 4 : 5;
else return n < 7 || v[6] > x ? 6 : 7;

where the number of array elements is checked in parallel construction
with the tests of the values, so now any array size up to the unrolled
size will work. To my surprise the second version actually ran faster
than the first - not by much, only a few percent, but it was nice to
get the more general functionality and have it be "better than free".

Incidentally, I have no explanation for why this might be so. The
code in the second case had roughly twice as many instructions down
each path. I guess branch prediction was doing its thing.
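
(For completeness, the second fragment wrapped as a self-contained
function, sized to 7 elements as in the excerpt; the real version
extends the same pattern out to 63. Untested sketch, same disclaimer
as everything else here.)

/* Fully unrolled binary search over at most 7 elements. Returns the
 * smallest index of an element greater than x, or n if there is none.
 * The n < k tests make any 0 <= n <= 7 work, not just n == 7. */
int
first_larger7( int v[], int n, int x ){
    if( n < 4 || v[3] > x ){
        if( n < 2 || v[1] > x ) return n < 1 || v[0] > x ? 0 : 1;
        else                    return n < 3 || v[2] > x ? 2 : 3;
    } else {
        if( n < 6 || v[5] > x ) return n < 5 || v[4] > x ? 4 : 5;
        else                    return n < 7 || v[6] > x ? 6 : 7;
    }
}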

That all fit with what you were saying?


Pretty much. Whether or not optimization is premature, successful
optimization requires a full assessment of alternatives, a full
understanding of the data being operated upon, and careful
measurement.


Roger that.
Nov 14 '05 #29
