Bytes | Software Development & Data Engineering Community
fastest navigation in a list

hello all,

I have an application in which I build a list<node>, with potentially
thousands of nodes.

each node has an "index", and all nodes are ordered by this index. this
index reflects a value that has been computed before, and will range from
0 to N.

node 0: index 0
node 1: index 6
node 2: index 9
node 3: index 15
and so on.

my question is: what would theoretically be the fastest way of finding a
node by its index?

I now have a second list<node> in which I keep bookmarks, i.e. I run
through the first list and every 100 or so nodes, I copy that node into
the second list. So when I'm looking for the node with index 5000, I
don't have to run through the entire (first) list of nodes, but only a
dozen or so (in the second list). I also keep track of the last node
referenced, since most lookups will be in each other's neighbourhood,
and begin my next search from that node.

This works very well; it is very fast. But I was wondering if using an
"advance" method would be equally fast and efficient? If so, I would
replace my list of bookmarks with navigation based on "advance"...

greetings, D.

Jul 23 '05 #1
danny van elsen wrote:
[...] my question is: what would theoretically be the fastest way of
finding a node by its index?


Perhaps a better data structure? std::set is already ordered, giving
O(log n) search times.
Jul 23 '05 #2
danny van elsen <da*************@hotmail.com> scribbled on the stall wall:
[...] my question is: what would theoretically be the fastest way of
finding a node by its index?


If you have an index and that index is suitable to use as a "key" then
create a map or multimap instead of a list.

Jul 23 '05 #3

"danny van elsen" <da*************@hotmail.com> wrote in message
news:pa***************************@hotmail.com...
[...] my question is: what would theoretically be the fastest way of
finding a node by its index?

If you want to keep that structure, instead of using one of the alternatives
mentioned by others, then a binary search is probably your best bet. That's
what it's designed for.
[...] This works very well; it is very fast. But I was wondering if
using an "advance" method would be equally fast and efficient?


(Sorry, I don't know what an "advance" method is.)

-Howard

Jul 23 '05 #4
In message <BH*******************@bgtnsc04-news.ops.worldnet.att.net>,
Howard <al*****@hotmail.com> writes

"danny van elsen" <da*************@hotmail.com> wrote in message
news:pa***************************@hotmail.com. ..
[...]
my question is: what would theoretically be the fastest way of finding a
node by its index?
Other posters have suggested using map, but that's not necessarily any
better. There are too many unanswered questions:

Is there a particular reason for choosing a list as the basic storage
structure?
Are the index values actually in sorted order?
Are elements inserted all at once or sporadically?
Are elements ever removed?
If you want to keep that structure, instead of using one of the alternatives
mentioned by others, then a binary search is probably your best bet. That's
what it's designed for.


Except that std::list only has bidirectional iterators. binary_search
will work, but it may not be as fast as you expect.
I now have a second list<node> in which I keep bookmarks [...] This
works very well; it is very fast.
It's sometimes known as "indexed sequential" in database circles.
But I was wondering if using an
"advance" method would be equally fast and efficient? If yes, I would
replace my list of bookmarks by a navigation based on "advance"...


(Sorry, I don't know what an "advance" method is.)


It's what you use to move non-random-access iterators by more than one
step ;-)
--
Richard Herring
Jul 23 '05 #5

"Richard Herring" <ju**@[127.0.0.1]> wrote in message
news:yz**************@baesystems.com...
If you want to keep that structure, instead of using one of the
alternatives
mentioned by others, then a binary search is probably your best bet.
That's
what it's designed for.


Except that std::list only has bidirectional iterators. binary_search will
work, but it may not be as fast as you expect.


Ah. That would tend to make the search less efficient, eh? :-)
[...]
(Sorry, I don't know what an "advance" method is.)


It's what you use to move non-random-access iterators by more than one
step ;-)


Ok, thanks for the info.

-Howard

Jul 23 '05 #6
Other posters have suggested using map, but that's not necessarily any
better. There are too many unanswered questions:

Is there a particular reason for choosing a list as the basic storage
structure?
Are the index values actually in sorted order? Are elements inserted all
at once or sporadically? Are elements ever removed?


The list will be the result of an analysis phase in which I start out
with perhaps tens of thousands of nodes; each node will be looked at
and most probably split up into several new nodes. So insertion is
certainly not all at once, and 'a little bit random', because not all
nodes will be treated sequentially. I will certainly end up with a
multiple of the original number of nodes.

The indexes will always remain ordered, however.

list is certainly better than vector, because of the inserts and
removals; but my artificial way of looking up a node made me think that
perhaps there would be better predefined methods available?
If you want to keep that structure, instead of using one of the
alternatives mentioned by others, then a binary search is probably your
best bet. That's what it's designed for.


Except that std::list only has bidirectional iterators. binary_search
will work, but it may not be as fast as you expect.


Do I understand correctly that what makes you say this is that there
are no random-access iterators for a list? So every next step in the
binary search is necessarily a sequence of increments or decrements?
Perhaps then my way of searching is not that bad at all?

Jul 23 '05 #7
In message <pa****************************@hotmail.com>, danny van elsen
<da*************@hotmail.com> writes
The list will be the result of an analysis phase in which I start out with
perhaps tens of thousands of nodes; each node will be looked at and most
probably split up in several new nodes. So insertion is certainly not all
at once, and 'a little bit random', because not all nodes will be
treated sequentially. I will certainly end up with a multitude of the
original number of nodes.

The indexes will always remain ordered, however.

list is certainly better than vector, because of the inserts and removals;


That makes sense. But if there were no removals, and the sequence didn't
need to be sorted until the end of the analysis phase, you might
consider just appending to a vector, sorting it once only, and then
using binary_search.

And for the removals you might consider using the erase(remove_if(begin,
end, predicate), end) approach of copying wanted elements down over the
unwanted ones rather than removing them individually.
Do I understand correctly that what makes you say this is that there
are no random iterators for a list? so every next step in the binary
search is necessarily a sequence of increments or decrements?


Correct. Under the hood std::list is probably (though the standard
doesn't require it) a straightforward doubly-linked list whose nodes
are randomly scattered around in memory, so the only way to traverse it
is to follow the links one by one. So a binary search on a list still
does only O(log N) comparisons, but the iterator stepping between probe
points adds up to O(N) increments in total. If the cost of the
comparisons dominates, that's not too bad. If the increments are
expensive (probably not for std::list, where each is just an
indirection) there might be a problem.
Perhaps then
my way of searching is not that bad at all?


Maybe not. It has a long history (think of those dictionaries with a
thumb index!). Knuth mentions something similar (The Art of Computer
Programming, vol. III, p. 473). If you index every M'th element, a
lookup takes O(N/M + M) comparisons and increments.

Ultimately the only way to find the best strategy is to try them all and
time them.
--
Richard Herring
Jul 23 '05 #8
Richard Herring wrote:
In message <pa****************************@hotmail.com>, danny van elsen
<da*************@hotmail.com> writes

[...]


Thought: I'm making the assumption that once your list is built up,
it's unchanging. If this is incorrect, the following suggestion no
longer works.

Make a std::set of iterators into your list. The sort function for the
set is "*iter1 < *iter2". You have the overhead of sorting the
iterators exactly once. There was an article in either CUJ or DDJ on a
"view library" which used a similar mechanism to create alternate views
into an STL Container.

The point being that once you've built up your set of iterators, all
searches are O(log n). Hell, you don't even need to use a set, you
could use a vector of iterators, sort the vector once, and then use a
binary search on the vector.

Kind of like what people did in C when they had data structures that
were too big to really move around efficiently -- keep a list/set/vector
of pointers to the data, and throw them around instead.
Jul 23 '05 #9
ok, this is an interesting idea, I'll try that...

thanks for all answers!
Jul 23 '05 #10
