
factor 50.000 between std::list and std::set?

If I have a sorted std::list with 1.000.000 elements it takes 1.000.000
operations to find the element with value = 1.000.000 (you need to iterate
through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will only
take approx lg 1.000.000 = 20 operations! Can it really be true that the
difference is a factor of 1.000.000/20 = 50.000 in this case?
Jun 25 '07
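For concreteness, a minimal sketch of the two lookups being compared; the
container size and searched value are illustrative, not taken from the thread:

    #include <algorithm>
    #include <cstdio>
    #include <list>
    #include <set>

    int main()
    {
        const int n = 1000000;
        std::list<int> lst;
        std::set<int> st;
        for (int i = 1; i <= n; ++i) {
            lst.push_back(i);   // the list is sorted by construction
            st.insert(i);
        }

        // Linear scan: std::find walks the list node by node, up to n steps.
        std::list<int>::iterator li = std::find(lst.begin(), lst.end(), n);

        // Tree lookup: std::set::find does about log2(n) ~ 20 comparisons.
        std::set<int>::iterator si = st.find(n);

        std::printf("%d %d\n", *li, *si);
        return 0;
    }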
In article <f5**********@news.net.uni-c.dk>, ff*@sss.com says...

[ ... ]
Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?
Sure -- for a specific operation on a specific implementation, for some
sufficiently loose definition of "exact".

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 25 '07 #11
desktop wrote:
Erik Wikström wrote:
>On 2007-06-25 22:21, desktop wrote:
>>If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the
list takes 1 unit of time and an operation on the set takes 50,000 then
they'll be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list),
but it shows that just because one container/algorithm has a better
asymptotic running time it will not necessarily perform better in
practice. All it says is that for a sufficiently large input, the
algorithm will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse the
whole vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?
Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Jun 25 '07 #12
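To put rough numbers on that question (the figures are invented for
illustration): if the 20-second query is a linear scan, doubling the data
gives roughly 2 x 20 = 40 seconds. If it is a balanced-tree lookup costing
about c * log2(n), going from n to 2n adds a single comparison level, so for
n around one million the 20 seconds grow to only about 20 * 21/20 = 21 seconds.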
Pete Becker wrote:
desktop wrote:
>Erik Wikström wrote:
>>On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).

In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the
list takes 1 unit of time and an operation on the set takes 50,000 then
they'll be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list),
but it shows that just because one container/algorithm has a better
asymptotic running time it will not necessarily perform better in
practice. All it says is that for a sufficiently large input, the
algorithm will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.

Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?
I am not interested in the asymptotic difference but a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate with before it
makes sense to use a more complicated structure with a better asymptotic
complexity.
Jun 25 '07 #13
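One way to get such a measurement is to time a fixed number of lookups and
divide. A rough sketch (using C++11 <chrono>; sizes and repetition counts are
arbitrary, and the result is specific to one compiler, library, and machine):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <list>
    #include <ratio>
    #include <set>

    template <class F>
    double average_microseconds(F operation, int repetitions)
    {
        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
        for (int i = 0; i < repetitions; ++i)
            operation();
        std::chrono::steady_clock::time_point stop = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::micro> elapsed = stop - start;
        return elapsed.count() / repetitions;
    }

    int main()
    {
        const int n = 1000000;
        const int repetitions = 100;
        std::list<int> lst;
        std::set<int> st;
        for (int i = 1; i <= n; ++i) { lst.push_back(i); st.insert(i); }

        volatile bool found = false;   // discourage the optimizer from dropping the lookups
        double list_us = average_microseconds(
            [&] { found = std::find(lst.begin(), lst.end(), n) != lst.end(); }, repetitions);
        double set_us = average_microseconds(
            [&] { found = st.find(n) != st.end(); }, repetitions);

        std::printf("list find: %.2f us, set find: %.2f us (found=%d)\n",
                    list_us, set_us, (int)found);
        return 0;
    }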
desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
Pete Becker wrote:
[snip]
>Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase
the amount of data. It answers questions like: it takes twenty
seconds to find all the records matching X in my database; if I
double the number of data elements, how long will it take?

I am not interested in the asymptotic difference but a measurement of
the difference in time for a single operation - this way it's possible
to give an idea of how many elements you need to operate with before
it makes sense to use a more complicated structure with a better
asymptotic complexity.
A couple of points to keep in mind:

1) Have you measured your application and determined that this is where
your time is being consumed? If not, you are likely wasting your time
trying to optimize this particular operation (there are a few good quotes
about this; search for Hoare's Dictum).

2) Today your application is working with x objects (where x is this magic
number where the list is performing better than the set). Tomorrow your
application is called upon to work on 10 times x objects (and the day
after, 100 times). Your list time grows by a factor of 10 (and then by
another factor of 10), while the set's cost grows by only about 3-4 extra
comparisons (and about 7 in total).
Jun 25 '07 #14
On 2007-06-25 23:51, desktop wrote:
Andre Kostur wrote:
>desktop <ff*@sss.com> wrote in news:f5**********@news.net.uni-c.dk:
>>Pete Becker wrote:
desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).
>
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?
>>>Yes. Now do the same exercise, but look for the first element. The
difference isn't as dramatic, but it's there.

Well, in a sorted list it takes constant time. But in the set it might
take 20 operations, unless you use some kind of header that always has
pointers to min and max. But still, a factor of 50.000 seems supernatural
in the previous example!

Why is that supernatural? Searching a std::list is an O(n) operation,
searching a std::set is an O(ln n) operation. As you increase n, the O(n)
grows faster than the O(ln n). (Try it with an O(n!) algorithm and see what
happens to the differences...)

std::list doesn't have a random-access iterator, only bidirectional.
Thus you must at least traverse the entire list (in the worst case). You
may be able to get away with O(ln n) comparisons in the list if you have
the assumption that the list is sorted, and you use a binary search
algorithm.

What algorithm are you referring to? "search":

http://www.cppreference.com/cppalgorithm/search.html

typically runs in linear time, and in quadratic time in the worst case.
No, search is for finding subranges, not a lone element (though it can
be used for that). What Andre Kostur meant was that it's possible to
perform the search in O(log n) comparisons (if the list is sorted) but
it will still require O(n) steps, which goes to show that it's important
to be clear about what you are measuring. I believe it's common to
measure the number of operations on elements (comparisons), which means
that the number of steps can differ from the asymptotic running time.
I assume there exists no algorithm that can find an element in a list in
O(lg n) time (maybe if the list is sorted but that does not correspond
to the worst case).
Well, even for a sorted list there are some combinations of elements
that are "worse" than others, which will require the entire log n
operations, while some other combinations will only require a fraction
of them.
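That distinction can be seen with std::lower_bound, which only requires
forward iterators and so also works on a sorted std::list: it performs
O(log n) comparisons, but advancing the list's bidirectional iterators still
costs O(n) steps. A small sketch (the comparison counter is added purely for
illustration):

    #include <algorithm>
    #include <cstdio>
    #include <list>

    long comparisons = 0;

    bool counted_less(int a, int b)
    {
        ++comparisons;
        return a < b;
    }

    int main()
    {
        std::list<int> lst;
        for (int i = 1; i <= 1000000; ++i)
            lst.push_back(i);               // already sorted

        std::list<int>::iterator it =
            std::lower_bound(lst.begin(), lst.end(), 1000000, counted_less);

        // Expect roughly log2(1.000.000) ~ 20 comparisons, even though the
        // algorithm still advanced through on the order of n list nodes.
        std::printf("found %d after %ld comparisons\n", *it, comparisons);
        return 0;
    }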
>std::set is likely stored in some sort of tree-like structure, and thus
gets the tree's efficiency in searching, O(ln n).
<tangent>Although this is a brilliant example of how optimization tends to
be more effective if you change algorithms vs. attempting to tune the
existing algorithm.</tangent>

Don't you mean change container/structure instead of algorithm?
Not necessarily, probably more like container/algorithm, since what
algorithms are available often depends on the container used. But for any
given container there might be some algorithms that are better than
others depending on the situation.

--
Erik Wikström
Jun 26 '07 #15
On 2007-06-26 00:30, desktop wrote:
Pete Becker wrote:
>desktop wrote:
>>Erik Wikström wrote:
On 2007-06-25 22:21, desktop wrote:
If I have a sorted std::list with 1.000.000 elements it takes
1.000.000 operations to find the element with value = 1.000.000 (you need
to iterate through the whole list).
>
In comparison, if I have a std::set with 1.000.000 elements it will
only take approx lg 1.000.000 = 20 operations! Can it really be true
that the difference is a factor of 1.000.000/20 = 50.000 in this case?

In operations yes, not necessarily in time. If an operation on the
list takes 1 unit of time and an operation on the set takes 50,000 then
they'll be equally fast. This will of course not be true in any real
implementation (the set will be significantly faster than the list),
but it shows that just because one container/algorithm has a better
asymptotic running time it will not necessarily perform better in
practice. All it says is that for a sufficiently large input, the
algorithm will perform better.

In practice you'll often find that using a vector for small sets will
be faster than most other containers, even if you need to traverse
the whole vector.
Is it possible to make an exact measurement of the difference in time
for 1 operation for a set and a list?

Yes, but that's not what asymptotic complexity is about. Asymptotic
complexity measures how well an algorithm scales when you increase the
amount of data. It answers questions like: it takes twenty seconds to
find all the records matching X in my database; if I double the number
of data elements, how long will it take?

I am not interested in the asymptotic difference but a measurement of
the difference in time for a single operation - this way it's possible to
give an idea of how many elements you need to operate with before it
makes sense to use a more complicated structure with a better asymptotic
complexity.
The time one operation needs is usually not interesting since you rarely
use only one operation; instead you use a mix of a few. So what's
interesting is how long your mix takes. Consider an application
where you perform a number of operations on a collection, let's say you
perform 10 operations of type A, 1 of type B and 100 of type C. Given
that mix it's quite uninteresting whether the B operation is fast or not;
what matters is, probably, how well C performs, then A.
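As a back-of-the-envelope illustration of that weighting (the per-operation
times are invented): if A costs 10 us and runs 10 times, B costs 50 us and
runs once, and C costs 5 us and runs 100 times, the totals are 100 us +
50 us + 500 us. C dominates the workload and A comes second, even though B
is the slowest single operation.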

--
Erik Wikström
Jun 26 '07 #16

