factor 50.000 between std::list and std::set?

If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to
iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Jun 25 '07 #1
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to
iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #2
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to
iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Yes. Searching a list is linear: O(N) to find an element. A set is
required to have logarithmic-time search, O(log N). So the times
specified make perfect sense.
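
A minimal sketch of that difference, counting comparisons rather than
wall-clock time (the counting comparator is an editorial illustration,
not part of the original post):

#include <cstddef>
#include <iostream>
#include <list>
#include <set>

// Comparator that counts how many comparisons the set performs.
struct CountingLess {
    static std::size_t count;
    bool operator()(int a, int b) const { ++count; return a < b; }
};
std::size_t CountingLess::count = 0;

int main() {
    const int N = 1000000;
    std::list<int> lst;
    std::set<int, CountingLess> st;
    for (int i = 1; i <= N; ++i) {
        lst.push_back(i);            // the list is built already sorted
        st.insert(i);
    }

    // Linear search through the list for the largest value.
    std::size_t listComparisons = 0;
    for (std::list<int>::const_iterator it = lst.begin(); it != lst.end(); ++it) {
        ++listComparisons;
        if (*it == N) break;
    }

    CountingLess::count = 0;         // discard comparisons made during insert
    st.find(N);                      // descends the balanced tree
    std::cout << "list: " << listComparisons << " comparisons\n";     // 1000000
    std::cout << "set:  " << CountingLess::count << " comparisons\n"; // roughly 20-40
    return 0;
}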
Jun 25 '07 #3
Pete Becker wrote:
desktop wrote:
>If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.
Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!
Jun 25 '07 #4
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to
iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
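
A minimal illustration of that point (a sketch; absolute timings depend
on machine, compiler, and optimization level):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <set>
#include <vector>

int main() {
    const int N = 100;                  // a "small" collection
    std::vector<int> vec;
    std::set<int> st;
    for (int i = 0; i < N; ++i) { vec.push_back(i); st.insert(i); }

    const int reps = 1000000;
    volatile bool sink = false;         // keep the optimizer from removing the loops

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
        sink = std::find(vec.begin(), vec.end(), N - 1) != vec.end();
    auto t1 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
        sink = st.find(N - 1) != st.end();
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> vecMs = t1 - t0;
    std::chrono::duration<double, std::milli> setMs = t2 - t1;
    std::cout << "vector scan: " << vecMs.count() << " ms\n";
    std::cout << "set find:    " << setMs.count() << " ms\n";
    // At this size the contiguous vector often wins on cache locality,
    // despite doing O(N) comparisons against the set's O(log N).
    return 0;
}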

--
Erik Wikström
Jun 25 '07 #5
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
>If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?
Jun 25 '07 #6
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.
In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.

Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?
Sure, just write a benchmark test. There is no more precise way,
because of course the time depends on your CPU, your compiler, your
operating system, and what applications are running at the time. A
simple test like the following should work (on Windows).

std::vector<int> intVector;
populateIntVector(&intVector);
std::set<int> intSet;
populateIntSet(&intSet);

DWORD d = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform Vector operation
}

DWORD d2 = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform set operation
}

DWORD d3 = timeGetTime();

DWORD millisecondsForVector = d2 - d;
DWORD millisecondsForSet = d3 - d2;

double millisecondsForSingleVectorOp = (double)millisecondsForVector /
(double)1000000;
double millisecondsForSingleSetOp = (double)millisecondsForSet /
(double)1000000;
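
A portable variant of the same sketch using std::chrono instead of the
Windows-only timeGetTime (the populate helpers are hypothetical
stand-ins, as in the original post):

#include <chrono>
#include <iostream>
#include <set>
#include <vector>

// Hypothetical helpers standing in for the populate functions above.
void populateIntVector(std::vector<int>* v) {
    for (int i = 0; i < 1000000; ++i) v->push_back(i);
}
void populateIntSet(std::set<int>* s) {
    for (int i = 0; i < 1000000; ++i) s->insert(i);
}

int main() {
    std::vector<int> intVector;
    populateIntVector(&intVector);
    std::set<int> intSet;
    populateIntSet(&intSet);

    auto d = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; ++i) {
        // Perform vector operation
    }
    auto d2 = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; ++i) {
        // Perform set operation
    }
    auto d3 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> vectorMs = d2 - d;
    std::chrono::duration<double, std::milli> setMs = d3 - d2;
    std::cout << "per vector op: " << vectorMs.count() / 1000000 << " ms\n";
    std::cout << "per set op:    " << setMs.count() / 1000000 << " ms\n";
    return 0;
}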

Jun 25 '07 #7
Zachary Turner wrote:
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
>Erik Wikström wrote:
>>On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.
In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.
Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?

Sure, just write a benchmark test. There is no more precise way,
because of course the time depends on your CPU, your compiler, your
operating system, and what applications are running at the time. A
simple test like the following should work (on Windows).

std::vector<int> intVector;
populateIntVector(&intVector);
std::set<int> intSet;
populateIntSet(&intSet);

DWORD d = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform Vector operation
}

DWORD d2 = timeGetTime();

for (int i=0; i < 1000000; ++i)
{
// Perform set operation
}

DWORD d3 = timeGetTime();

DWORD millisecondsForVector = d2 - d;
DWORD millisecondsForSet = d3 - d2;

double millisecondsForSingleVectorOp = (double)millisecondsForVector /
(double)1000000;
double millisecondsForSingleSetOp = (double)millisecondsForSet /
(double)1000000;
But wouldn't that show the asymptotic difference rather than the
"constant" difference in time to execute a single operation?
Jun 25 '07 #8
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
>desktop wrote:
>>If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this
case?
>>
Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!
Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus in the worst case you must traverse the entire list. You may be
able to get away with O(ln n) comparisons in the list if you can assume
the list is sorted and you use a binary search algorithm.

std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends
to be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>
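
A sketch of that binary-search-on-a-sorted-list idea; note that
std::lower_bound on bidirectional iterators performs O(log n)
comparisons but still O(n) iterator steps, so the overall search is not
logarithmic:

#include <algorithm>
#include <iostream>
#include <list>

int main() {
    std::list<int> lst;
    for (int i = 0; i < 1000000; ++i) lst.push_back(i);  // built sorted

    // O(log n) comparisons, but O(n) advances through the list's nodes.
    std::list<int>::iterator it =
        std::lower_bound(lst.begin(), lst.end(), 999999);
    std::cout << (it != lst.end() && *it == 999999 ? "found\n" : "not found\n");
    return 0;
}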
Jun 25 '07 #9
Andre Kostur wrote:
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
>Pete Becker wrote:
>>desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this
case?
>>Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems
supernatural in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus in the worst case you must traverse the entire list. You may be
able to get away with O(ln n) comparisons in the list if you can assume
the list is sorted and you use a binary search algorithm.
What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time on average and in quadratic time in the worst case.

I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe if the list is sorted, but that does not correspond
to the worst case).

>std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>
Don't you mean change container/structure instead of algorithm?
Jun 25 '07 #10
In article <f5**********@news.net.uni-c.dk>, ff*@sss.com says...

[ ... ]
Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?
Sure -- for a specific operation on a specific implementation, for some
sufficiently loose definition of "exact".

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 25 '07 #11
desktop wrote:
Erik Wikström wrote:
>On 2007-06-25 22:21, desktop wrote:
>>If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse the
whole vector.

Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?
Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?
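
(A worked instance of that question, assuming the search dominates the
running time: with a linear O(n) scan, doubling the data roughly doubles
the time, so 20 seconds becomes about 40; with an O(log n) search the
time only grows by the ratio log(2n)/log(n), which for n = 1.000.000 is
about 21/20, so roughly 21 seconds.)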

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #12
Pete Becker wrote:
desktop wrote:
>Erik Wikström wrote:
>>On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.

Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?
I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate on before it
makes sense to use a more complicated structure with a better asymptotic
complexity.
Jun 25 '07 #13
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
[snip]
>Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase
the amount of data. It answers questions like: it takes twenty
seconds to find all the records matching X in my database; if I
double the number of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement
of the difference in time for a single operation - this way it's
possible to give an idea of how many elements you need to operate on
before it makes sense to use a more complicated structure with a
better asymptotic complexity.
A couple of points to keep in mind:

1) Have you measured your application and determined that this is where
your time is being consumed? If not, you are likely wasting your time
trying to optimize this particular operation (there are a few good
quotes about this.... search for Hoare's Dictum).

2) Today your application is working with x objects (where x is this magic
number where list is performing better than set). Tomorrow your
application is called upon to work on 10 times x objects (and the day
after, 100 times). Your list time grows by a factor of 10 (and then
another 10), while the set only grows by about 3-4 comparisons per
tenfold increase (log2 10 is about 3.3, so about 7 in total after the
hundredfold increase).
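
A quick back-of-the-envelope check of those growth figures (a sketch;
the counts are approximate comparison counts, using base-2 logarithms):

#include <cmath>
#include <cstdio>

int main() {
    const double x = 1e6;  // the hypothetical "magic number" of objects
    const double factors[] = {1.0, 10.0, 100.0};
    for (double f : factors) {
        double n = x * f;
        std::printf("n = %.0f: list ~ %.0f steps, set ~ %.1f comparisons\n",
                    n, n, std::log2(n));
    }
    // list: 1e6 -> 1e7 -> 1e8 (a factor of 10 each time)
    // set:  ~19.9 -> ~23.3 -> ~26.6 (about +3.3 per tenfold increase)
    return 0;
}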
Jun 25 '07 #14
On 2007-06-25 23:51, desktop wrote:
Andre Kostur wrote:
>desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
>>Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).
>
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this
case?
>>>Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems
supernatural in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus in the worst case you must traverse the entire list. You may be
able to get away with O(ln n) comparisons in the list if you can assume
the list is sorted and you use a binary search algorithm.

What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time on average and in quadratic time in the worst case.
No, search is for finding subranges, not a lone element (though it can
be used for that). What Andre Kostur meant was that it's possible to
perform the search in O(log n) comparisons (if the list is sorted), but
it will still require O(n) steps, which goes to show that it's important
to be clear about what you are measuring. I believe it's common to
measure the number of operations on elements (comparisons), which means
that the number of steps can differ from the asymptotic running time.
I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe if the list is sorted, but that does not correspond
to the worst case).
Well, even for a sorted list there are some combinations of elements
that are "worse" than others, which will require the entire log n
operations, while some other combinations will only require a fraction
of them.
>std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>

Don't you mean change container/structure instead of algorithm?
Not necessarily, probably more like container/algorithm since what
algorithms are available often depend on the container used. But for any
given container there might be some algorithms that are better than
others depending on the situation.

--
Erik Wikström
Jun 26 '07 #15
On 2007-06-26 00:30, desktop wrote:
Pete Becker wrote:
>desktop wrote:
>>Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you
need to iterate through the whole list).
>
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If each operation on the
list takes 1 unit of time and each operation on the set takes 50,000
units, they'll be equally fast. That will of course not be true in any
real implementation (the set will be significantly faster than the
list), but it shows that just because one container/algorithm has a
better asymptotic running time it won't necessarily perform better in
every case. All it says is that for a sufficiently large input, the
algorithm with the better asymptotic running time will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.
Is it possible to make an exact measurement of the difference in time
for one operation on a set versus a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate on before it
makes sense to use a more complicated structure with a better asymptotic
complexity.
The time one operation needs is usually not interesting, since you rarely
use only one operation; instead you use a mix of a few. So what's
interesting is how long your mix takes. Consider an application
where you perform a number of operations on a collection; let's say you
perform 10 operations of type A, 1 of type B and 100 of type C. Given
that mix it's quite uninteresting whether the B operation is fast or not;
what matters is, probably, how well C performs, then A.
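
A tiny sketch of that kind of weighting (the per-operation costs here
are made-up numbers, purely illustrative):

#include <cstdio>

int main() {
    // Hypothetical per-operation costs in microseconds, and how often
    // each operation occurs in the workload described above.
    double costA = 5.0,  countA = 10;
    double costB = 10.0, countB = 1;
    double costC = 1.0,  countC = 100;

    // The workload's total cost is the count-weighted sum; the dominant
    // term tells you which operation is worth optimizing.
    double total = costA * countA + costB * countB + costC * countC;
    std::printf("total: %.0f us, C's share: %.1f%%\n",
                total, 100.0 * costC * countC / total);
    return 0;
}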

--
Erik Wikström
Jun 26 '07 #16
