Bytes | Software Development & Data Engineering Community

How to make this unpickling/sorting demo faster?

I'm involved in a discussion thread in which it has been stated that:

"""
Anything written in a language that is 20x slower (Perl, Python,
PHP) than C/C++ should be instantly rejected by users on those grounds
alone.
"""

I've challenged someone to beat the snippet of code below in C, C++,
or assembler, for reading in one million pairs of random floats and
sorting them by the second member of the pair. I'm not a master
Python programmer. Is there anything I could do to make this even
faster than it is?

Also, if I try to write the resulting list of tuples back out to a
gdbm file, it takes a good 14 seconds, which is far longer than the
reading and sorting takes. The problem seems to be that the 'f' flag
to gdbm.open() is being ignored and writes are being sync'd to disk
either on each write, or on close. I'd really prefer to let the OS
decide when to actually write to disk.

I'm using Python 2.5.2, libgdbm 1.8.3, and python-gdbm 2.5.2 under
Ubuntu 8.04 beta on an x86_64 architecture.

Thanks for any tips.

=====
import cPickle, gdbm, operator

dbmIn = gdbm.open('float_pairs_in.pickel')  # read-only by default
print "Reading pairs..."
pairs = cPickle.loads(dbmIn['pairs'])
print "Sorting pairs..."
pairs.sort(key=operator.itemgetter(1))  # sort by second member of each pair
print "Done!"
=====
The input file was created with this:

=====
import random, gdbm, cPickle
print "Creating pairs file..."
pairs = [(random.random(), random.random()) for pair in xrange(1000000)]
dbmOut = gdbm.open('float_pairs_in.pickel', 'nf')
dbmOut['pairs'] = cPickle.dumps(pairs, 2)
dbmOut.close()
print "Done!"
=====
Jun 27 '08 #1
On Apr 17, 6:15 pm, Steve Bergman <sbergma...@gmail.com> wrote:
I'm involved in a discussion thread in which it has been stated that:

"""
Anything written in a language that is 20x slower (Perl, Python,
PHP) than C/C++ should be instantly rejected by users on those grounds
alone.
"""
The above is applied slavishly by those who value machine time over
people's time. Do you want to work with them?

- Paddy.
Jun 27 '08 #2
The above is applied slavishly by those who value machine time over
people's time. Do you want to work with them?
I understand where you are coming from. But in writing the code
snippet I learned something about pickling/unpickling, which I knew
*about* but had never actually used before. And also learned the
quick and easy way of DSU sorting, which got easier and faster in
2.4. So, personally, I'm ahead on this one. Otherwise, I agree that
arguing the matter in long flame threads is a waste of time.
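For anyone reading along, here is a rough sketch (with made-up data) of the two styles I mean: the classic decorate-sort-undecorate pattern versus the key= form that 2.4 made easy:

```python
import operator

pairs = [(0.1, 0.9), (0.5, 0.2), (0.3, 0.7)]

# Classic decorate-sort-undecorate (DSU): build (key, item) tuples,
# sort them, then strip the keys back off.
decorated = [(b, (a, b)) for (a, b) in pairs]
decorated.sort()
dsu_sorted = [item for (_, item) in decorated]

# The 2.4+ shortcut: hand the key function straight to sort()/sorted().
key_sorted = sorted(pairs, key=operator.itemgetter(1))

assert dsu_sorted == key_sorted
```

Both give the same ordering; the key= form skips building and tearing down the intermediate list by hand.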
Jun 27 '08 #3
Steve Bergman <sb********@gmail.com> writes:
Anything written in a language that is 20x slower (Perl, Python,
PHP) than C/C++ should be instantly rejected by users on those grounds
alone.
Well, if you time it starting from when you sit down at the computer
and start programming, until the sorted array is output, Python
might be 20x faster than C/C++ and 100x faster than assembler.
I've challenged someone to beat the snippet of code below in C, C++,
or assembler, for reading in one million pairs of random floats and
sorting them by the second member of the pair. I'm not a master
Python programmer. Is there anything I could do to make this even
faster than it is?
1. Turn off the cyclic garbage collector during the operation since
you will have no cyclic garbage.

2. See if there is a way to read the array directly (using something
like the struct module or ctypes) rather than a pickle.

3. Use psyco and partition the array into several smaller ones with a
quicksort-like partitioning step, then sort the smaller arrays in
parallel using multiple processes, if you have a multicore CPU.

4. Write your sorting routine in C or assembler and call it through
the C API. If the sorting step is a small part of a large program and
the sort is using a lot of cpu, this is a good approach since for the
other parts of the program you still get the safety and productivity
gain of Python.
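As a rough sketch of what 1 and 2 might look like together (assuming the doubles are simply written back-to-back in a flat binary file; the filename and helper are made up):

```python
import gc
from array import array

def read_and_sort(path, n_pairs):
    # Point 1: switch off the cyclic collector while we build the list;
    # a flat list of float tuples creates no reference cycles.
    gc.disable()
    try:
        # Point 2: read the raw doubles directly instead of unpickling.
        a = array('d')
        with open(path, 'rb') as f:
            a.fromfile(f, 2 * n_pairs)
        pairs = list(zip(a[0::2], a[1::2]))
        pairs.sort(key=lambda p: p[1])
        return pairs
    finally:
        gc.enable()
```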
Also, if I try to write the resulting list of tuples back out to a
gdbm file,
I don't understand what you're doing with gdbm. Just use a binary
file.
Jun 27 '08 #4
Thanks for the help. Yes, I see now that gdbm is really superfluous.
Not having used pickle before, I started from the example in "Python
in a Nutshell", which uses anydbm. Using a regular file saves some
time. By far, the biggest improvement has come from using marshal
rather than cPickle, especially for writing. gc.disable() shaved off
another 20%. I can't use psyco because I'm on x86_64. And my goal is
to demonstrate how Python can combine simplicity, readability, *and*
speed when the standard library is leveraged properly. So, in this
case, calling C or assembler code would defeat the purpose, as would
partitioning into smaller arrays. And my processor is single core.
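For reference, the marshal-to-plain-file approach boils down to just this (the filename is made up):

```python
import marshal

pairs = [(0.25, 0.75), (0.5, 0.5)]

# Dump straight to an ordinary binary file; 2 selects the
# version-2 binary marshal format.
with open('float_pairs.marshal', 'wb') as f:
    marshal.dump(pairs, f, 2)

with open('float_pairs.marshal', 'rb') as f:
    loaded = marshal.load(f)

assert loaded == pairs
```

Note that marshal is not guaranteed stable across Python versions, so it is only suitable when the same interpreter writes and reads the file.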

I started at about 7 seconds to just read in and sort, and at 20
seconds to read, sort, and write. I can now read in, sort, and write
back out in almost exactly 5 seconds, thanks to marshal and your
suggestions.

I'll look into struct and ctypes. Although I know *of* them, I'm not
very familiar *with* them.

Thank you, again, for your help.

Jun 27 '08 #5
Steve Bergman <sb********@gmail.com> writes:
I started at about 7 seconds to just read in and sort, and at 20
seconds to read, sort, and write. I can now read in, sort, and write
back out in almost exactly 5 seconds, thanks to marshal and your
suggestions.
That is not bad if you aren't using a raid disk. If you require that
the output actually be written to the disk though, make sure to type
"sync" after running your program and count that as part of the
elapsed time. If those are 64-bit floats then a million pairs of them
is 16 MB, and you're doing pretty well if you can get that much through a
file system in 1 second, so reading and writing the data is about 2
seconds all by itself. Therefore the likelihood of a C or asm program
being 20x faster including disk i/o is dim. But realistically,
counting just CPU time, you might get a 20x speedup with assembler if
you're really determined, using x86 SSE (128-bit vector) instructions,
cache prefetching, etc.
Jun 27 '08 #6
On Apr 17, 7:33 pm, Steve Bergman <sbergma...@gmail.com> wrote:
to demonstrate how Python can combine simplicity, readability, *and*
speed when the standard library is leveraged properly. So, in this
But that's not true. You're just creating an artificial example to
prove your point.

Consider this C code that does what you say. The code is very basic
and readable. In my machine, it takes a little longer than one second
to execute.

#include <stdio.h>
#include <stdlib.h>

#define size 1000000

struct float_pair {
    double a;
    double b;
};

static struct float_pair s[size];

int compare(const void *a, const void *b)
{
    const struct float_pair *sa = a;
    const struct float_pair *sb = b;
    if (sa->b > sb->b) return 1;
    if (sa->b == sb->b) return 0;
    return -1;
}

int main(void)
{
    FILE *f;
    f = fopen("floats", "rb");
    puts("Reading pairs...");
    fread(s, sizeof(s), 1, f);
    fclose(f);
    qsort(s, size, sizeof(struct float_pair), compare);
    puts("Done!");
    return 0;
}

Is the code too big? Yes. Can you make it faster in Python?
Probably not in the 10 minutes it took to write this. The code that
generates the file is the following:

#include <stdlib.h>
#include <stdio.h>
#include <time.h>

struct float_pair {
    double a;
    double b;
};

static struct float_pair s[1000000];

int main(void)
{
    FILE *f;
    unsigned i;
    f = fopen("floats", "wb");
    srand(time(NULL));
    for (i = 0; i < 1000000; i++) {
        s[i].a = (double)rand()/RAND_MAX;
        s[i].b = (double)rand()/RAND_MAX;
    }
    fwrite(s, sizeof(s), 1, f);
    fclose(f);
    return 0;
}

The binary file used is not portable, because the "double" type is
not standardized in C. However:
- if you restrict this code to run only on machines that follow the
IEEE standard, this code will probably be more portable than the
python one
- in python, there may be small rounding errors when exchanging the
data between machines, since "float" is not standardized either.
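One way to sidestep that caveat from the Python side is to pin the wire format down explicitly with the struct module. A sketch: '<2d' means two little-endian IEEE 754 doubles per record, so this matches the C layout above only on little-endian IEEE machines:

```python
import struct

# Two little-endian IEEE 754 doubles per record.
record = struct.Struct('<2d')

def write_pairs(path, pairs):
    with open(path, 'wb') as f:
        for pair in pairs:
            f.write(record.pack(*pair))

def read_pairs(path):
    with open(path, 'rb') as f:
        data = f.read()
    return [record.unpack_from(data, off)
            for off in range(0, len(data), record.size)]
```

Because the format string fixes both endianness and size, files written this way read back identically on any machine with IEEE doubles.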

Jun 27 '08 #7
On Apr 17, 7:37 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
Therefore the likelihood of a C or asm program
being 20x faster including disk i/o is dim. But realistically,
counting just CPU time, you might get a 20x speedup with assembler if
you're really determined, using x86 SSE (128-bit vector) instructions,
cache prefetching, etc.
I think that the attitude that is prevalent is that although Python
and other "interpreted" languages are slow, there are places where
that slowness is not important and using them is OK. The point that I
am trying to make with my example is that often, by leveraging the
care and optimization work that others have put into the Python
standard library over the years, a rather casual programmer can match
or better the performance of 'average' C code. i.e. C code that was
written by a good C programmer while no one was looking over his
shoulder, and no pressures motivated him to spend a lot of time
optimizing the C to the hilt, or switching to asm.

With the reading and writing of the data (which actually works out to
about 23MB, marshalled) now down to 1 second each, I'm content. In
the beginning, the io time was overshadowing the sort time by a good
bit. And it was the sort time that I wanted to highlight.

BTW, no one has responded to my challenge to best the original sample
Python code with C, C++, or asm.

Jun 27 '08 #8
