Hi,
I'm working on a pivot table. I would like to write it in Python. I
know I should be doing it in C, but I would like to create a
cross-platform version that can deal with smaller databases (no more
than a million facts).
The data is first imported from a csv file: the user selects which
columns contain dimension and measure data (and which columns to
ignore). In the next step I would like to build up a database that is
efficient enough to be used for making pivot tables. Here is my idea for
the database:
Original CSV file with column header and values:
"Color","Year","Make","Price","VMax"
Yellow,2000,Ferrari,100000,254
Blue,2003,Volvo,50000,210
Using the GUI, it is converted to this:
dimensions = [
    { 'name': 'Color', 'colindex': 0,
      'values': [ 'Red', 'Blue', 'Green', 'Yellow' ] },
    { 'name': 'Year', 'colindex': 1,
      'values': [ 1995, 1999, 2000, 2001, 2002, 2003, 2007 ] },
    { 'name': 'Make', 'colindex': 2,
      'values': [ 'Ferrari', 'Volvo', 'Ford', 'Lamborghini' ] },
]
measures = [
    { 'name': 'Price', 'colindex': 3 },
    { 'name': 'VMax', 'colindex': 4 },
]
facts = [
    ( (3, 2, 0), (100000.0, 254.0) ),  # ( dimension_value_indexes, measure_values )
    ( (1, 5, 1), (50000.0, 210.0) ),
    # ... a million rows or fewer
]
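A minimal sketch of how that conversion step could work, using the sample dimensions and measures above (the helper names `lookups` and `encode_row` are my own, not from the original post):

```python
# Hypothetical sketch of the GUI's conversion step: replace each raw CSV
# cell with the index of its value in the dimension's value list.
dimensions = [
    {'name': 'Color', 'colindex': 0, 'values': ['Red', 'Blue', 'Green', 'Yellow']},
    {'name': 'Year',  'colindex': 1, 'values': [1995, 1999, 2000, 2001, 2002, 2003, 2007]},
    {'name': 'Make',  'colindex': 2, 'values': ['Ferrari', 'Volvo', 'Ford', 'Lamborghini']},
]
measures = [{'name': 'Price', 'colindex': 3}, {'name': 'VMax', 'colindex': 4}]

# Precomputed value -> index lookups; CSV cells arrive as strings, so key on str.
lookups = [{str(v): i for i, v in enumerate(d['values'])} for d in dimensions]

def encode_row(row):
    """Turn one parsed CSV row into (dimension_value_indexes, measure_values)."""
    dims = tuple(lookups[i][row[d['colindex']]] for i, d in enumerate(dimensions))
    vals = tuple(float(row[m['colindex']]) for m in measures)
    return dims, vals

# encode_row(['Yellow', '2000', 'Ferrari', '100000', '254'])
# -> ((3, 2, 0), (100000.0, 254.0)), the first fact above
```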
The core of the idea is that, with a relatively small number of
possible values for each dimension, the facts table becomes
significantly smaller and easier to process. (Processing the facts
means: iterate over the facts, filter some of them out, and compute
statistics over the measures, grouped by dimensions.)
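The processing step described above can be sketched like this (a minimal illustration; the `aggregate` helper and the hard-coded two-measure buckets are my assumptions, not part of the original post):

```python
from collections import defaultdict

# Iterate over facts, filter, and sum measures grouped by one dimension.
# Here group_axis=2 groups by Make; each bucket is [count, sum(Price), sum(VMax)].
facts = [
    ((3, 2, 0), (100000.0, 254.0)),
    ((1, 5, 1), (50000.0, 210.0)),
    ((1, 5, 1), (60000.0, 230.0)),
]

def aggregate(facts, group_axis, keep=lambda dims: True):
    """Return {dimension_value_index: [count, sum_measure_1, sum_measure_2]}."""
    sums = defaultdict(lambda: [0, 0.0, 0.0])
    for dims, vals in facts:
        if keep(dims):
            bucket = sums[dims[group_axis]]
            bucket[0] += 1
            bucket[1] += vals[0]
            bucket[2] += vals[1]
    return dict(sums)

by_make = aggregate(facts, group_axis=2)
# by_make[1] -> [2, 110000.0, 440.0]  (two Volvo facts)
# by_make[0] -> [1, 100000.0, 254.0]  (one Ferrari fact)
```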
The facts table cannot be kept in memory because it is too big. I need
to store it on disk, read it incrementally, and compute statistics.
In most cases, the "statistic" will be a simple sum of the measures,
plus a count of the facts affected. To be efficient, reading the
facts from disk should not involve complex conversions. For this reason,
storing them in CSV, XML, or any other textual format would be bad. I'm
thinking about a binary format, but how can I interface that with Python?
I already looked at:
- xdrlib, which raises a DeprecationWarning when I store some integers
- struct, which parses a format string for each read operation; I'm
concerned about its speed
What else can I use?
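For what it's worth, struct's format-string cost can be amortized by precompiling the format once with `struct.Struct` and reusing it for every record. A minimal sketch, assuming a fixed record of three dimension-index bytes plus two double measures (that layout is my assumption, not from the post):

```python
import struct
import io

# One fact = 3 unsigned index bytes + 2 doubles = 19 bytes, little-endian,
# no padding. struct.Struct parses the format string once, up front.
FACT = struct.Struct('<BBBdd')

def write_facts(fileobj, facts):
    for dims, vals in facts:
        fileobj.write(FACT.pack(*dims, *vals))

def read_facts(fileobj):
    """Yield (dimension_value_indexes, measure_values) incrementally."""
    while True:
        chunk = fileobj.read(FACT.size)
        if len(chunk) < FACT.size:
            break
        rec = FACT.unpack(chunk)
        yield rec[:3], rec[3:]

# Demo with an in-memory file; on disk this would be open(path, 'rb'/'wb').
buf = io.BytesIO()
write_facts(buf, [((3, 2, 0), (100000.0, 254.0)), ((1, 5, 1), (50000.0, 210.0))])
buf.seek(0)
```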
Thanks,
Laszlo
On Aug 7, 1:41 pm, Laszlo Nagy <gand...@shopzeus.com> wrote:
Take a look at the mmap module. You get direct memory access, backed
by the file system. struct + mmap, if you keep your strings small?
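A minimal sketch of that struct + mmap suggestion, reusing a fixed 19-byte record layout (three index bytes plus two doubles; the layout and file name are my assumptions, not something from the thread):

```python
import mmap
import os
import struct
import tempfile

FACT = struct.Struct('<BBBdd')  # hypothetical fixed-size fact record

# Create a small demo file; in real use this would be the prepared facts file.
path = os.path.join(tempfile.mkdtemp(), 'facts.bin')
with open(path, 'wb') as f:
    f.write(FACT.pack(3, 2, 0, 100000.0, 254.0))
    f.write(FACT.pack(1, 5, 1, 50000.0, 210.0))

total_price = 0.0
count = 0
with open(path, 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # unpack_from reads straight out of the mapped pages, so there is no
    # per-record copy and no repeated format parsing.
    for offset in range(0, mm.size(), FACT.size):
        color, year, make, price, vmax = FACT.unpack_from(mm, offset)
        total_price += price
        count += 1
    mm.close()
# total_price == 150000.0, count == 2
```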
Laszlo Nagy <ga*****@shopzeus.com> writes:
pytables (<http://www.pytables.org/>) looks like the right kind of
thing.
-M-