factor 50.000 between std::list and std::set?

If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Jun 25 '07 #1
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #2
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Yes. Searching a list is linear, i.e. O(N), to find an element. A set is
required to have logarithmic search time, O(log N). So the times specified
make perfect sense.
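To see the counts concretely, here is a small editorial sketch (not part of
the original reply; the container names, the counting comparator and the
size are invented for illustration) that counts the element comparisons each
lookup performs:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <list>
#include <set>

int main()
{
    const int N = 1000000;
    std::size_t listComparisons = 0;
    std::size_t setComparisons = 0;

    std::list<int> sortedList;
    for (int i = 1; i <= N; ++i)
        sortedList.push_back(i);

    // Comparator that counts how many comparisons the set performs.
    auto countingLess = [&](int a, int b) { ++setComparisons; return a < b; };
    std::set<int, decltype(countingLess)> sortedSet(countingLess);
    for (int i = 1; i <= N; ++i)
        sortedSet.insert(i);
    setComparisons = 0;                     // only count the lookup below

    // Linear search: one comparison per element visited.
    std::find_if(sortedList.begin(), sortedList.end(),
                 [&](int x) { ++listComparisons; return x == N; });

    sortedSet.find(N);

    std::cout << "list comparisons: " << listComparisons << '\n'  // 1.000.000
              << "set comparisons:  " << setComparisons << '\n';  // roughly 20-40
}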
Jun 25 '07 #3
Pete Becker wrote:
desktop wrote:
> If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.
Well, in a sorted list that takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!
Jun 25 '07 #4
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
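To illustrate that last point, here is a rough editorial sketch (not from
the thread; the element count, repetition count and variable names are
arbitrary) that compares lookups in a small sorted vector against lookups in
a std::set:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    const int N = 64;                       // deliberately small
    std::vector<int> v;
    std::set<int> s;
    for (int i = 0; i < N; ++i) { v.push_back(i); s.insert(i); }

    using clock = std::chrono::steady_clock;
    volatile bool sink = false;             // keep the optimizer from removing the loops

    auto t0 = clock::now();
    for (int r = 0; r < 1000000; ++r)
        sink = std::binary_search(v.begin(), v.end(), r % N);
    auto t1 = clock::now();
    for (int r = 0; r < 1000000; ++r)
        sink = (s.find(r % N) != s.end());
    auto t2 = clock::now();

    std::cout << "vector: " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "set:    " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}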

--
Erik Wikström
Jun 25 '07 #5
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
> If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?
Jun 25 '07 #6
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.
In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.

Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?
Sure, just write a benchmark test. There is no more precise way,
because of course the time depends on your CPU, your compiler, your
operating system, and what applications are running at the time. A
simple test like the following should work (on Windows).

std::vector<int> intVector;
populateIntVector(&intVector);
std::set<int> intSet;
populateIntSet(&intSet);

DWORD d = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform Vector operation
}

DWORD d2 = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform set operation
}

DWORD d3 = timeGetTime();

DWORD millisecondsForVector = d2 - d;
DWORD millisecondsForSet = d3 - d2;

double millisecondsForSingleVectorOp = (double)millisecondsForVector /
(double)1000000;
double millisecondsForSingleSetOp = (double)millisecondsForSet /
(double)1000000;
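As an editorial follow-up (not part of Zachary's post), the same skeleton
can be written portably with the standard <chrono> clock instead of the
Windows-only timeGetTime(). The loop bodies are still placeholders, and an
optimizer may remove empty loops, so put the real operations inside before
trusting the numbers:

#include <chrono>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    std::vector<int> intVector;
    std::set<int> intSet;
    for (int i = 0; i < 1000000; ++i)   // stand-in for the populate helpers
    {
        intVector.push_back(i);
        intSet.insert(i);
    }

    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    for (int i = 0; i < 1000000; ++i)
    {
        // Perform vector operation here
    }
    auto t1 = clock::now();
    for (int i = 0; i < 1000000; ++i)
    {
        // Perform set operation here
    }
    auto t2 = clock::now();

    double msForVector = std::chrono::duration<double, std::milli>(t1 - t0).count();
    double msForSet = std::chrono::duration<double, std::milli>(t2 - t1).count();

    std::cout << "per vector op (ms): " << msForVector / 1000000.0 << '\n'
              << "per set op (ms):    " << msForSet / 1000000.0 << '\n';
}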

Jun 25 '07 #7
Zachary Turner wrote:
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
>Erik Wikström wrote:
>>On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.
In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?

Sure, just write a benchmark test. There is no more precise way,
because of course the time depends on your CPU, your compiler, your
operating system, and what applications are running at the time. A
simple test like the following should work (on Windows).

[snip benchmark code]
But wouldn't that show the asymptotic difference rather than the "constant"
difference in time to execute a single operation?
Jun 25 '07 #8
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
>desktop wrote:
>> If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
>>
Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list that takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!
Why is that supernatural? Searching a std::list is an O(n) operation,
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n) (try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tanget>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>
Jun 25 '07 #9
Andre Kostur wrote:
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
>Pete Becker wrote:
>>desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
>>Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.
Well, in a sorted list that takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation,
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n) (try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.
What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time and quadratic in worst case.

I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe if the list is sorted but that does not correspond
to the worst case).

>
std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>
Don't you mean change container/structure instead of algorithm?
Jun 25 '07 #10
In article <f5**********@news.net.uni-c.dk>, ff*@sss.com says...

[ ... ]
Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?
Sure -- for a specific operation on a specific implementation, for some
sufficiently loose definition of "exact".

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 25 '07 #11
desktop wrote:
Erik Wikström wrote:
>On 2007-06-25 22:21, desktop wrote:
>> If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse the
whole vector.

Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?
Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #12
Pete Becker wrote:
desktop wrote:
>Erik Wikström wrote:
>>On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.

Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?
I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - that way it's possible to
get an idea of how many elements you need to work with before it makes
sense to use a more complicated structure with a better asymptotic
complexity.
Jun 25 '07 #13
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
[snip]
>Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase
the amount of data. It answers questions like: it takes twenty
seconds to find all the records matching X in my database; if I
double the number of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - that way it's possible
to get an idea of how many elements you need to work with before
it makes sense to use a more complicated structure with a better
asymptotic complexity.
A couple of points to keep in mind:

1) Have you measured your application and determined that this is where
your time is being consumed? If not, you are likely wasting your time
trying to optimize this particular operation (there are a few good quotes
about this.... search for Hoare's Dictum).

2) Today your application is working with x objects (where x is this magic
number where the list performs better than the set). Tomorrow your
application is called upon to work on 10 times x objects (and the day
after, 100 times). Your list's time grows by a factor of 10 (and then by
another 10), while the set needs only about 3-4 more comparisons (and
finally about 7 more).
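To put rough numbers on point 2 (an editorial sketch, not Andre's; the
sizes are arbitrary), a balanced-tree lookup needs only about log2(10) ≈ 3.3
extra comparisons each time the data grows tenfold, while a linear scan
grows tenfold:

#include <cmath>
#include <cstdio>

int main()
{
    // Worst-case comparison counts for a linear scan vs. a balanced tree.
    for (double n = 1000; n <= 100000000; n *= 10)
        std::printf("n = %9.0f  linear ~ %9.0f  log2 ~ %4.1f\n",
                    n, n, std::log2(n));
}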
Jun 25 '07 #14
On 2007-06-25 23:51, desktop wrote:
Andre Kostur wrote:
>desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
>>Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
>>>Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list that takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation,
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n) (try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time and quadratic in worst case.
No, search is for finding subranges, not a lone element (though it can
be used for that). What Andre Kostur meant was that it's possible to
perform the search with O(log n) comparisons (if the list is sorted), but
it will still require O(n) iterator steps, which goes to show that it's
important to be clear about what you are measuring. I believe it's common
to measure the number of operations on elements (comparisons), which means
that the number of steps can differ from the asymptotic running time.
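std::lower_bound behaves exactly like that on a sorted std::list: it is
specified to use O(log n) comparisons, but with non-random-access iterators
the advances between probes still cost O(n) steps. A minimal editorial
sketch (not from the thread):

#include <algorithm>
#include <iostream>
#include <list>

int main()
{
    std::list<int> sortedList;
    for (int i = 1; i <= 1000000; ++i)
        sortedList.push_back(i);

    // O(log n) element comparisons, but the iterator still has to be
    // advanced element by element, so roughly O(n) traversal work overall.
    auto it = std::lower_bound(sortedList.begin(), sortedList.end(), 1000000);

    if (it != sortedList.end() && *it == 1000000)
        std::cout << "found " << *it << '\n';
}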
I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe if the list is sorted but that does not correspond
to the worst case).
Well, even for a sorted list some searches are "worse" than others and
will require the full log n comparisons, while other searches will only
require a fraction of them.
>std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>

Don't you mean change container/structure instead of algorithm?
Not necessarily, probably more like container/algorithm since what
algorithms are available often depend on the container used. But for any
given container there might be some algorithms that are better than
others depending on the situation.

--
Erik Wikström
Jun 26 '07 #15
On 2007-06-26 00:30, desktop wrote:
Pete Becker wrote:
>desktop wrote:
>>Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, but not necessarily in time. If an operation on the
list takes 1 time unit and an operation on the set takes 50,000, then
they'll be equally fast. That will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time, it doesn't automatically perform better in practice. All it
says is that for a sufficiently large input, the algorithm with the better
complexity will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.
Is it possible to make an exact measurement of the difference in time
for a single operation on a set and on a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - that way it's possible to
get an idea of how many elements you need to work with before it makes
sense to use a more complicated structure with a better asymptotic
complexity.
The time one operation needs is usually not interesting, since you rarely
use only one operation; instead you use a mix of a few. So what's
interesting is how long your mix of operations takes. Consider an
application where you perform a number of operations on a collection,
let's say 10 operations of type A, 1 of type B and 100 of type C. Given
that mix it's quite uninteresting whether the B operation is fast or not;
what matters is, probably, how well C performs, and then A.

--
Erik Wikström
Jun 26 '07 #16
