
Fast list traversal

I want to see if there is an alternative method for fast list
traversal. The code is very simple:

dict_long_lists = defaultdict(list)
for long_list in dict_long_lists.itervalues():
    for element in long_list:
        array_a[element] = m + n + p    # m, n, p are variable numbers

The long_list's are read from a defaultdict(list) dictionary and so
don't need initializing. The elements of long_list are integers and
ordered (sorted before placing in dictionary). There are 20,000
long_list's each with a variable number of elements (>5,000). The
elements of long_list are immutable (i.e. don't change). The above
code is within a def function.

I've tried set() using defaultdict(set) but the elements are not
ordered.

What is the fastest way to traverse these long_list's sequentially
from the beginning to the end? Maybe there is another data structure
that can be used instead of a list.

Dinesh
Nov 2 '08 #1
4 Replies


On Nov 2, 1:00 am, Dennis Lee Bieber <wlfr...@ix.netcom.com> wrote:
On Sun, 2 Nov 2008 00:25:13 -0700 (PDT), dineshv
<dineshbvad...@hotmail.com> declaimed the following in comp.lang.python:
I want to see if there is an alternative method for fast list
traversal. The code is very simple:

dict_long_lists = defaultdict(list)
for long_list in dict_long_lists.itervalues():
    for element in long_list:
        array_a[element] = m + n + p    # m, n, p are variable numbers

The long_list's are read from a defaultdict(list) dictionary and so
don't need initializing. The elements of long_list are integers and
ordered (sorted before placing in dictionary). There are 20,000

Out of curiosity, what code is used to put the values in? The sample
you give above creates an empty dictionary rigged, if I understand
the help file, to automatically give an empty list if a non-existent key
is requested. But in your loop, there is no possibility of a
non-existent key being requested -- .itervalues() will only traverse
over real data (i.e. keys that DO exist in the dictionary).

And, if you are sorting a list "before placing in dictionary", why
need the defaultdict()? A plain

    dict[key] = presorted_list_of_integers

would be sufficient.

Or do you mean to imply that you are using something like:

    thedefaultdict[key].append(single_value)
    thedefaultdict[key].sort()

EACH time you obtain another value from where-ever? If so, that's going
to be the biggest time sink...
What is the fastest way to traverse these long_list's sequentially
from the beginning to the end? Maybe there is another data structure
that can be used instead of a list.

So far as I know, the list IS the fastest structure available for
sequential processing.
--
        Wulfraed        Dennis Lee Bieber       KD6MOG
        wlfr...@ix.netcom.com           wulfr...@bestiaria.com
                HTTP://wlfraed.home.netcom.com/
        (Bestiaria Support Staff:       web-a...@bestiaria.com)
                HTTP://www.bestiaria.com/
dict_long_lists is a dictionary of lists and is NOT empty. Thank you.
Nov 2 '08 #2

dineshv:
What is the fastest way to traverse these long_list's sequentially
from the beginning to the end? Maybe there is another data structure
that can be used instead of a list.
Psyco can help a lot in that kind of code.

>The elements of long_list are immutable (ie. don't change).<
A tuple may fit then, but it probably doesn't improve the
situation.

Bye,
bearophile
Nov 2 '08 #3

On Sun, 02 Nov 2008 00:25:13 -0700, dineshv wrote:
I want to see if there is an alternative method for fast list traversal.
The code is very simple:

dict_long_lists = defaultdict(list)
for long_list in dict_long_lists.itervalues():
    for element in long_list:
        array_a[element] = m + n + p    # m, n, p are variable numbers

It might help if you showed some sample data, because your explanation is
confusing. You are asking about traversing lists, but your data is in a
defaultdict. Why is it in a dict, when you don't seem to be using the key
anywhere? Putting that aside, I'm going to make a guess and assume your
data looks something like this...

dict_long_lists = {
'key1': [0, 1, 2, 3, 4, 5],
'key2': [16, 17, 18, 19],
'key3': [7, 9, 11, 13, 15],
'key4': [6, 8, 10, 12, 14] }

Then you do something like this:

array_a = [None]*20

Then after running your code, you end up with:

array_a == [x0, x1, x2, x3, .... , x19]
where each x is calculated from m + n + p.
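Running that guess with concrete numbers (m, n and p are arbitrary stand-ins here, since the original post doesn't give their values; .values() is the Python 3 spelling of .itervalues()):

```python
from collections import defaultdict

dict_long_lists = defaultdict(list)
dict_long_lists.update({
    'key1': [0, 1, 2, 3, 4, 5],
    'key2': [16, 17, 18, 19],
    'key3': [7, 9, 11, 13, 15],
    'key4': [6, 8, 10, 12, 14],
})

m, n, p = 1, 2, 3            # arbitrary stand-ins for the OP's variables
array_a = [None] * 20        # one slot per possible element value

for long_list in dict_long_lists.values():   # .itervalues() in Python 2
    for element in long_list:
        array_a[element] = m + n + p

# The four lists together cover every index 0..19 exactly once,
# so every slot ends up holding m + n + p == 6
assert array_a == [6] * 20
```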
The long_list's are read from a defaultdict(list) dictionary and so
don't need initializing. The elements of long_list are integers and
ordered (sorted before placing in dictionary). There are 20,000
long_list's each with a variable number of elements (>5,000). The
elements of long_list are immutable (ie. don't change). The above code
is within a def function.

I've tried set() using defaultdict(set) but the elements are not
ordered.
It's not clear what you have tried to do with set(), or why the elements
need to be ordered.

What is the fastest way to traverse these long_list's sequentially from
the beginning to the end? Maybe there is another data structure that
can be used instead of a list.
I doubt you'll find anything faster than a list.

You have 20,000 lists of 5,000 items each, which means you have a
*minimum* of 100,000,000 items to traverse. If each iteration takes 0.1
millisecond, not an unreasonably slow speed depending on the amount of
computation each iteration is, that will take 10,000 seconds or nearly
three hours. The only solutions to that are to reduce the amount of
computation in each loop, reduce the number of items, or get a faster
computer.

Have you tried running the profiler to see where the time is actually
going? I suggest you write a small set of test data (say, 1000 items),
and profile it, then write a longer set of test data (say, 100,000
items), and if it's still not clear where the time is being lost, ask for
help.
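For the profiling step, something along these lines would do (the synthetic data set and the traverse() wrapper are invented for this sketch; .values() is the Python 3 spelling of .itervalues()):

```python
import cProfile
import io
import pstats

def traverse(dict_long_lists, array_a, m, n, p):
    # Same shape as the loop in question
    for long_list in dict_long_lists.values():
        for element in long_list:
            array_a[element] = m + n + p

# Small synthetic data set, as suggested: 10 lists of 100 items
data = {k: list(range(k * 100, k * 100 + 100)) for k in range(10)}
array_a = [None] * 1000

profiler = cProfile.Profile()
profiler.enable()
traverse(data, array_a, 1, 2, 3)
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

Scaling the data up and comparing the two profiles shows whether the time is really in the traversal or in the surrounding computation.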
--
Steven
Nov 2 '08 #4

Steven D'Aprano:
The only solutions to that are to reduce the amount of
computation in each loop, reduce the number of items, or get a faster
computer.
Changing languages is an option too :-)
Languages like Java, D, C, or C++ may help :-)

Bye,
bearophile
Nov 2 '08 #5
