factor 50.000 between std::list and std::set?

If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Jun 25 '07 #1
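
[Editor's sketch: the two lookups the question compares, side by side. Illustrative only; the ~20-comparison figure assumes the set's internal tree is balanced.]

#include <algorithm>
#include <iostream>
#include <list>
#include <set>

int main()
{
    const int N = 1000000;

    std::list<int> sortedList;
    std::set<int> tree;
    for (int i = 1; i <= N; ++i) {
        sortedList.push_back(i);   // inserted in sorted order
        tree.insert(i);
    }

    // Linear search: visits up to N nodes to find the largest value.
    std::list<int>::iterator li =
        std::find(sortedList.begin(), sortedList.end(), N);

    // Tree search: roughly lg N, i.e. about 20 comparisons for N = 1000000.
    std::set<int>::iterator si = tree.find(N);

    std::cout << (li != sortedList.end()) << " "
              << (si != tree.end()) << "\n";
    return 0;
}
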
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #2
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. A list requires a linear search, i.e. O(N), to find an element. A set
is required to have logarithmic-time search, O(log N). So the times
specified make perfect sense.
Jun 25 '07 #3
Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still a factor of 50.000 seems supernatural
in the previous example!
Jun 25 '07 #4
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the
whole vector.

--
Erik Wikström
Jun 25 '07 #5
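
[Editor's sketch of Erik's last point: a crude benchmark, not a rigorous one. SIZE is an arbitrary "small" choice, and both the timings and the size at which the set wins are entirely machine-dependent.]

#include <algorithm>
#include <ctime>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    const int SIZE = 32;          // a deliberately small collection
    const int LOOKUPS = 1000000;

    std::vector<int> vec;
    std::set<int> s;
    for (int i = 0; i < SIZE; ++i) {
        vec.push_back(i);
        s.insert(i);
    }

    long hits = 0;
    std::clock_t t0 = std::clock();
    for (int i = 0; i < LOOKUPS; ++i)    // linear scan of the whole vector
        hits += std::find(vec.begin(), vec.end(), i % SIZE) != vec.end();
    std::clock_t t1 = std::clock();
    for (int i = 0; i < LOOKUPS; ++i)    // tree lookup in the set
        hits += s.find(i % SIZE) != s.end();
    std::clock_t t2 = std::clock();

    std::cout << "vector: " << (t1 - t0) << " clocks, set: "
              << (t2 - t1) << " clocks (hits " << hits << ")\n";
    return 0;
}
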
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?
Jun 25 '07 #6
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Sure, just write a benchmark test. There is no more precise way, because
of course the time depends on your CPU, your compiler, your operating
system, and what applications are running at the time. A simple test like
the following should work (on Windows).

// Needs <vector>, <set>, and <windows.h>; link against winmm.lib for
// timeGetTime(). populateIntVector()/populateIntSet() fill the containers.
std::vector<int> intVector;
populateIntVector(&intVector);
std::set<int> intSet;
populateIntSet(&intSet);

DWORD d = timeGetTime();

for (int i = 0; i < 1000000; ++i)
{
    // Perform vector operation
}

DWORD d2 = timeGetTime();

for (int i = 0; i < 1000000; ++i)
{
    // Perform set operation
}

DWORD d3 = timeGetTime();

DWORD millisecondsForVector = d2 - d;
DWORD millisecondsForSet = d3 - d2;

double millisecondsForSingleVectorOp = (double)millisecondsForVector /
                                       (double)1000000;
double millisecondsForSingleSetOp = (double)millisecondsForSet /
                                    (double)1000000;

Jun 25 '07 #7
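
[Editor's sketch: the same skeleton written portably with std::chrono (C++11) instead of the Windows-only timeGetTime(). The populate functions are stand-ins from the post above, and note that an optimizer may remove loops whose results are never used.]

#include <chrono>
#include <iostream>
#include <set>
#include <vector>

// Stand-ins for the hypothetical populate functions in the post above.
void populateIntVector(std::vector<int>* v)
{
    for (int i = 0; i < 1000000; ++i) v->push_back(i);
}
void populateIntSet(std::set<int>* s)
{
    for (int i = 0; i < 1000000; ++i) s->insert(i);
}

int main()
{
    std::vector<int> intVector;
    populateIntVector(&intVector);
    std::set<int> intSet;
    populateIntSet(&intSet);

    typedef std::chrono::steady_clock Clock;

    Clock::time_point t0 = Clock::now();
    for (int i = 0; i < 1000000; ++i) {
        // perform the vector operation under test
    }
    Clock::time_point t1 = Clock::now();
    for (int i = 0; i < 1000000; ++i) {
        // perform the set operation under test
    }
    Clock::time_point t2 = Clock::now();

    double nsPerVectorOp =
        std::chrono::duration<double, std::nano>(t1 - t0).count() / 1000000;
    double nsPerSetOp =
        std::chrono::duration<double, std::nano>(t2 - t1).count() / 1000000;
    std::cout << nsPerVectorOp << " ns/op (vector), "
              << nsPerSetOp << " ns/op (set)\n";
    return 0;
}
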
Zachary Turner wrote:
On Jun 25, 3:51 pm, desktop <f...@sss.com> wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Sure, just write a benchmark test. There is no more precise way, because
of course the time depends on your CPU, your compiler, your operating
system, and what applications are running at the time. A simple test like
the following should work (on Windows).

std::vector<int> intVector;
populateIntVector(&intVector);
std::set<int> intSet;
populateIntSet(&intSet);

DWORD d = timeGetTime();

for (int i = 0; i < 1000000; ++i)
{
    // Perform vector operation
}

DWORD d2 = timeGetTime();

for (int i = 0; i < 1000000; ++i)
{
    // Perform set operation
}

DWORD d3 = timeGetTime();

DWORD millisecondsForVector = d2 - d;
DWORD millisecondsForSet = d3 - d2;

double millisecondsForSingleVectorOp = (double)millisecondsForVector /
                                       (double)1000000;
double millisecondsForSingleSetOp = (double)millisecondsForSet /
                                    (double)1000000;

But would that not show the asymptotic difference and not the "constant"
difference in time to execute a single operation?
Jun 25 '07 #8
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>
Jun 25 '07 #9
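
[Editor's sketch of Andre's binary-search point, visible in the standard library itself: std::lower_bound accepts the list's bidirectional iterators and performs only about lg n comparisons, but it still has to advance through nodes one at a time, so the traversal stays O(n).]

#include <algorithm>
#include <iostream>
#include <list>

int main()
{
    std::list<int> sorted;
    for (int i = 0; i < 1000; ++i)
        sorted.push_back(i);

    // ~10 comparisons for 1000 elements, but the iterator advances
    // between probes are linear, so the walk is still O(n).
    std::list<int>::iterator it =
        std::lower_bound(sorted.begin(), sorted.end(), 900);

    std::cout << (it != sorted.end() ? *it : -1) << "\n";
    return 0;
}
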
Andre Kostur wrote:
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time on average and quadratic time in the worst case.

I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe one exists if the list is sorted, but that does not
correspond to the worst case).

std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).

<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>

Don't you mean change container/structure instead of algorithm?
Jun 25 '07 #10
In article <f5**********@news.net.uni-c.dk>, ff*@sss.com says...

[ ... ]
Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?
Sure -- for a specific operation on a specific implementation, for some
sufficiently loose definition of "exact".

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 25 '07 #11
desktop wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #12
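
[Editor's sketch: Pete's doubling question answered with back-of-the-envelope arithmetic, using his twenty-second figure. The record count n is an assumption, and the exact ratio is an idealization.]

#include <cmath>
#include <iostream>

int main()
{
    const double n = 1000000.0;   // current number of records (assumed)
    const double t = 20.0;        // seconds for the current search

    // O(n): time scales with n, so doubling the data doubles the time.
    double linearTime = t * (2.0 * n) / n;                // 40 s

    // O(log n): time scales with log n, so doubling barely matters.
    double logTime = t * std::log(2.0 * n) / std::log(n); // ~21 s

    std::cout << linearTime << " s vs " << logTime << " s\n";
    return 0;
}
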
Pete Becker wrote:
desktop wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?
I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate with before it
makes sense to use a more complicated structure with a better asymptotic
complexity.
Jun 25 '07 #13
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
[snip]
Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase
the amount of data. It answers questions like: it takes twenty
seconds to find all the records matching X in my database; if I
double the number of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - this way it's possible
to give an idea of how many elements you need to operate with before
it makes sense to use a more complicated structure with a better
asymptotic complexity.

A couple of points to keep in mind:

1) Have you measured your application and determined that this is where
your time is being consumed? If not, you are likely wasting your time
trying to optimize this particular operation (there are a few good quotes
about this.... search for Hoare's Dictum).

2) Today your application is working with x objects (where x is this magic
number where list is performing better than set). Tomorrow your
application is called upon to work on 10 times x objects (and the day
after, 100 times). Your list time grows by a factor of 10 (and then by
another 10), while the set lookup only grows by 3-4 comparisons (and
finally by about 7).
Jun 25 '07 #14
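
[Editor's sketch combining both points: benchmark the lookup at several sizes and watch where the curves cross. A rough sketch only; the crossover is entirely machine- and workload-specific.]

#include <algorithm>
#include <ctime>
#include <iostream>
#include <set>
#include <vector>

int main()
{
    for (int size = 4; size <= 4096; size *= 4) {
        std::vector<int> vec;
        std::set<int> s;
        for (int i = 0; i < size; ++i) {
            vec.push_back(i);
            s.insert(i);
        }

        const int reps = 200000;
        long sink = 0;
        std::clock_t t0 = std::clock();
        for (int i = 0; i < reps; ++i)   // linear scan of the vector
            sink += std::find(vec.begin(), vec.end(), i % size) != vec.end();
        std::clock_t t1 = std::clock();
        for (int i = 0; i < reps; ++i)   // tree lookup in the set
            sink += s.find(i % size) != s.end();
        std::clock_t t2 = std::clock();

        std::cout << size << " elements: vector " << (t1 - t0)
                  << " clocks, set " << (t2 - t1)
                  << " clocks (sink " << sink << ")\n";
    }
    return 0;
}
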
On 2007-06-25 23:51, desktop wrote:
Andre Kostur wrote:
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation;
searching a std::set is an O(ln n) operation. As you increase n, O(n)
grows faster than O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences....)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

runs in linear time on average and quadratic time in the worst case.

No, search is for finding subranges, not a lone element (though it can
be used for that). What Andre Kostur meant was that it's possible to
perform the search using only O(log n) comparisons (if the list is
sorted), but it will still require O(n) steps, which goes to show that
it's important to be clear about what you are measuring. I believe it's
common to measure the number of operations on elements (comparisons),
which means that the number of steps can differ from the asymptotic
running time.

I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe one exists if the list is sorted, but that does not
correspond to the worst case).

Well, even for a sorted list there are some combinations of elements
that are "worse" than others, which will require the entire log n
operations, while some other combinations will only require a fraction
of them.
std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).

<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>

Don't you mean change container/structure instead of algorithm?
Not necessarily; probably more like container/algorithm, since which
algorithms are available often depends on the container used. But for any
given container there might be some algorithms that are better than
others depending on the situation.

--
Erik Wikström
Jun 26 '07 #15
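
[Editor's sketch of Erik's distinction between comparisons and steps: a comparator that counts its own invocations. A measurement trick, not production code; the exact count depends on the library's tree implementation.]

#include <iostream>
#include <set>

struct CountingLess {
    static long count;                 // shared invocation counter
    bool operator()(int a, int b) const { ++count; return a < b; }
};
long CountingLess::count = 0;

int main()
{
    std::set<int, CountingLess> s;
    for (int i = 0; i < 1000000; ++i)
        s.insert(i);

    CountingLess::count = 0;
    s.find(999999);

    // Expect a count in the neighborhood of lg 1.000.000, i.e. about 20.
    std::cout << CountingLess::count << " comparisons for one find\n";
    return 0;
}
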
On 2007-06-26 00:30, desktop wrote:
Pete Becker wrote:
desktop wrote:
Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the list
takes 1 time unit and an operation on the set takes 50,000, then they'll
be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list), but
it shows that just because one container/algorithm has a better asymptotic
running time does not mean it will in fact perform better. All it says is
that for a sufficiently large input, the algorithm will perform better.

In practice you'll often find that using a vector for small sets will be
faster than most other containers, even if you need to traverse the whole
vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

I am not interested in the asymptotic difference but in a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate with before it
makes sense to use a more complicated structure with a better asymptotic
complexity.

The time one operation needs is usually not interesting, since you rarely
use only one operation; instead you use a mix of a few. So what's
interesting is how long your mix of operations takes. Consider an
application where you perform a number of operations on a collection;
let's say you perform 10 operations of type A, 1 of type B and 100 of
type C. Given that mix it's quite uninteresting whether the B operation
is fast or not; what matters is, probably, how well C performs, then A.

--
Erik Wikström
Jun 26 '07 #16
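
[Editor's sketch of Erik's "measure the mix" advice. The 10/1/100 ratio comes from his example; insert, erase and count here are arbitrary stand-ins for whatever operations A, B and C really are in your program.]

#include <ctime>
#include <iostream>
#include <set>

int main()
{
    std::set<int> s;
    long sink = 0;

    std::clock_t t0 = std::clock();
    for (int round = 0; round < 10000; ++round) {
        for (int i = 0; i < 10; ++i)          // operation A, 10 times
            s.insert(round * 10 + i);
        s.erase(round);                       // operation B, once
        for (int i = 0; i < 100; ++i)         // operation C, 100 times
            sink += s.count(i);
    }
    std::clock_t t1 = std::clock();

    std::cout << (t1 - t0) << " clocks for the whole mix (sink "
              << sink << ")\n";
    return 0;
}
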
