python: ascii read

Hello,

I tried to read in some large ascii files (200MB-2GB) in Python using
scipy.io.read_array, but it did not work as I expected. The whole idea
was to find a fast Python routine to read in arbitrary ascii files, to
replace Yorick (which I use right now and which is really fast, but not
as general as Python). The problem with scipy.io.read_array was that it
is really slow, returns errors when trying to process large files, and
also changes (cuts) the files (after scipy.io.read_array processed a 2GB
file, its size was only 64MB).

Can someone give me a hint how to use Python to do this job correctly
and fast? (Maybe with another read-in routine.)

Thanks.

Greetings,
Sebastian
Jul 18 '05 #1
Sebastian Krause <ca*****@gmx.net> wrote:
I tried to read in some large ascii files (200MB-2GB) in Python using
scipy.io.read_array, but it did not work as I expected. [...] Can
someone give me a hint how to use Python to do this job correctly and
fast? (Maybe with another read-in routine.)


If all you need is what you say -- read a huge amount of ASCII data into
memory -- it's hard to beat
data = open('thefile.txt').read()

mmap may in fact be preferable for many uses, but it doesn't actually
read (it _maps_ the file into memory instead).
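
A minimal sketch of the contrast (assuming 'thefile.txt' exists; the
slice size is arbitrary):

import mmap

# read(): allocates memory for the whole file and copies it in at once.
data = open('thefile.txt').read()

# mmap: returns immediately; pages are read from disk only as they are
# actually touched.
f = open('thefile.txt', 'rb')
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
header = m[:100]      # only now are the first page(s) faulted in
m.close()
f.close()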
Alex
Jul 18 '05 #2
Sebastian Krause wrote:
I tried to read in some large ascii files (200MB-2GB) in Python using
scipy.io.read_array, but it did not work as I expected. [...] Can
someone give me a hint how to use Python to do this job correctly and
fast? (Maybe with another read-in routine.)


What kind of data is it? What operations do you want to perform on the
data? What platform are you on?

Some of the scipy.io.read_array behaviors that you describe look like bugs. We
would greatly appreciate it if you were to send a complete bug report to
the scipy-dev mailing list. Thank you.

--
Robert Kern
rk***@ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
Jul 18 '05 #3
I did not explicitly mention that the ascii file should be read in as an
array of numbers (either integer or float).
Using open() and read() is very fast, but it only reads in the data as a
string, and it also does not work with large files.

Sebastian

Alex Martelli wrote:
Sebastian Krause <ca*****@gmx.net> wrote: [...]

If all you need is what you say -- read a huge amount of ASCII data into
memory -- it's hard to beat
data = open('thefile.txt').read()

mmap may in fact be preferable for many uses, but it doesn't actually
read (it _maps_ the file into memory instead).
Alex

Jul 18 '05 #4
The input data is a large ascii file of astrophysical parameters
(integer and float) from gas dynamics calculations. They should be read
in as an array of integer and float numbers, not as a string (as open()
and read() do). The array is then used to make different plots from the
data and to do some (simple) operations: subtraction and division of
columns. I am using Scipy with Python 2.3.x under Linux (SuSE 9.1).
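
(For reference, once such a file is in a 2-D array, the operations
described are one-liners. A minimal sketch with present-day numpy --
the Numeric package of the time is analogous; 'params.txt' is a
placeholder and is assumed to have at least four numeric columns:)

import numpy as np

data = np.loadtxt('params.txt')    # whitespace-separated numeric columns
diff = data[:, 1] - data[:, 2]     # subtraction of two columns
ratio = data[:, 1] / data[:, 3]    # division of two columns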

Sebastian

Robert Kern wrote:
Sebastian Krause wrote: [...]

What kind of data is it? What operations do you want to perform on the
data? What platform are you on?

Some of the scipy.io.read_array behaviors that you describe look like bugs. We
would greatly appreciate it if you were to send a complete bug report to
the scipy-dev mailing list. Thank you.

Jul 18 '05 #5
Sebastian Krause <ca*****@gmx.net> wrote:
I did not explicitly mention that the ascii file should be read in as an
array of numbers (either integer or float).
Ah, right, you didn't. So I was answering the literal question you
asked rather than the one you had in mind.
Using open() and read() is very fast, but it only reads in the data as a
string, and it also does not work with large files.


It works just fine with files as large as you have memory for (and mmap
works for files as large as you have _spare address space_ for, if your
OS is decently good at its job). But if what you want is not the job
that .read() and mmap do, the fact that they _do_ perform that job quite
well on large files is of course of no use to you.

Back to why scipy.io.read_array works so badly for you -- I don't know:
it's rather complicated code, as well as maybe old-ish (it wraps files
into class instances to be able to iterate on their lines) and very
general (lots of options regarding what the separators are, etc). If your
needs are very specific (you know a lot about the format of those huge
files -- e.g. they're column-oriented, or only use whitespace separators
and \n line termination, or other such specifics) you might well be able
to do better -- likely even in Python, worst case in C. I assume you
need Numeric arrays, 2-d, specifically, as the result of reading your
files? Would you know in advance whether you're reading int or float
(it might be faster to have two separate functions)? Could you
pre-dimension the Numeric array and pass it in, or do you need it to
dimension itself dynamically based on file contents? The less
flexibility you need, the simpler and faster the reading can be...
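
For instance, a minimal sketch under the simplest of those assumptions
(whitespace separators, \n line termination, floats only; the filename
and helper name are hypothetical):

import array

def read_floats(filename):
    # Stream line by line instead of building one huge string;
    # array('d') stores the values compactly as C doubles.
    values = array.array('d')
    ncols = None
    for line in open(filename):
        fields = line.split()
        if not fields:
            continue
        if ncols is None:
            ncols = len(fields)    # column count, for reshaping later
        values.extend(map(float, fields))
    return values, ncols

The flat array can then be turned into a 2-d array of shape
(len(values) // ncols, ncols).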
Alex
Jul 18 '05 #6
Sebastian Krause wrote:
The input data is a large ascii file of astrophysical parameters
(integer and float) from gas dynamics calculations. [...] I am using
Scipy with Python 2.3.x under Linux (SuSE 9.1).
Well, one option is to use the "lines" argument to scipy.io.read_array
to only read in chunks at a time. It probably won't help speed any, but
hopefully it will be correct.
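
The same chunking idea in plain Python, independent of scipy's exact
"lines" semantics (a sketch; the chunk size is arbitrary):

from itertools import islice

def iter_line_chunks(filename, n=100000):
    # Yield the file n lines at a time, so only one chunk ever
    # needs to be held (and parsed) in memory at once.
    f = open(filename)
    while True:
        chunk = list(islice(f, n))
        if not chunk:
            break
        yield chunk
    f.close()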


--
Robert Kern
rk***@ucsd.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
Jul 18 '05 #7
al*****@yahoo.com (Alex Martelli) writes:
If your needs are very specific (you know a lot about the format of
those huge files -- e.g. they're column-oriented, or only use
whitespace separators and \n line termination, or other such
specifics) you might well be able to do better -- likely even in
Python, worst case in C. [...]


The last time I wanted to be able to read large lumps of numerical
data from an ASCII file, I ended up using (f)lex, for performance
reasons. (Pure C _might_ have been faster still, of course, but it
would _quite certainly_ also have been pure C.)

This has caused minor irritation - the code has been in use through
several upgrades of Python, and it is considered polite to recompile
to match the current C API - but I'd probably do it the same way again
in the same situation.

Des
--
"[T]he structural trend in linguistics which took root with the
International Congresses of the twenties and early thirties [...] had
close and effective connections with phenomenology in its Husserlian
and Hegelian versions." -- Roman Jakobson
Jul 18 '05 #8
Alex Martelli said unto the world upon 2004-09-16 07:22:
Sebastian Krause <ca*****@gmx.net> wrote: [...]

If all you need is what you say -- read a huge amount of ASCII data into
memory -- it's hard to beat
data = open('thefile.txt').read()

mmap may in fact be preferable for many uses, but it doesn't actually
read (it _maps_ the file into memory instead).
Alex


Hi all,

[neophyte question warning]

I'd not been aware of mmap until this post. Looking at the Library
Reference and my trusty copy of Python in a Nutshell, I've gotten some
idea of the differences between using mmap and the .read() method on a
file object -- such as that it returns a mutable object rather than an
immutable string, and the constraint on slice assignment that
len(oldslice) must equal len(newslice).

But I don't really feel I've a handle on the significance of saying it
maps the file into memory versus reading the file. The naive thought is
that since the data gets into memory, the file must be read. But this
makes me sure I'm missing a distinction in the terminology. Explanations
and pointers for what to read gratefully received.

And, since mmap behaves differently on different platforms: I'm mostly
a win32 user looking to transition to Linux.

Best to all,

Brian vdB

Jul 18 '05 #9
On Thursday, 16 September 2004, 17:56, Brian van den Broek wrote:
But I don't really feel I've a handle on the significance of saying it
maps the file into memory versus reading the file. [...]


read()ing a file into memory does what it says; it reads the binary data from
the disk all at once, and allocates main memory (as needed) to fit all the
data there. Memory mapping a file (or device or whatever) means that the
virtual memory architecture is involved. What happens here:

mmapping a file creates virtual memory pages (just like virtual memory which
is put into your paging file), which are registered with the MMU of the
processor as being absent initially.

Now, when the program tries to access the memory page (pages have some
fixed small size, like 4k for most Pentium-style computers), a (page)
fault is generated by the MMU, which invokes the operating system's
handler for page faults. Now that the operating system sees that a
certain page is accessed (from the page address it can deduce the offset
in the file that you're trying to access), it loads the corresponding
page from disk, puts it into memory at some position, and marks the
page-table entry as present.

Future accesses to the page will take place immediately (without a page fault
taking place).

Changes in memory are written to disk once the page is flushed (meaning
that it gets removed from main memory because too few pages of real main
memory are available). Now, when a page is forcefully flushed (not due
to closing the mmap), the operating system marks the page-table entry as
absent again, and the next time the program tries to access this
location, a page fault again takes place and the OS can load the page
from disk.

For speed, the operating system allows you to mmap read-only, which means that
once a page is discarded, it does not need to be written back to disk (which
of course is faster). Some MMUs (IIRC not the Pentium-class MMU) set a dirty
bit on the page-table entry once the page has been altered; this can also be
used to control whether the page needs to be written back to disk after
access.

So, basically, what you get is load-on-demand file handling, which is
similar to what the paging file (virtual memory file) on win32 does for
ordinary memory. Actually, internally, the architecture to handle
mmapped files and
virtual memory is the same, and you could think of the swap file as an
operating system mmapped file, from which programs can allocate slices
through some OS calls (well, actually through the normal malloc/calloc
calls).
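
From the Python side, that laziness is directly visible (a sketch;
'bigfile.dat' is a placeholder):

import mmap

f = open('bigfile.dat', 'rb')
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # returns at once
first = m[:4096]     # touching this range faults in the first page(s)
last = m[-4096:]     # and this the last page(s) -- the middle is never read
m.close()
f.close()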

HTH!

Heiko.
Jul 18 '05 #10
Heiko Wundram said unto the world upon 2004-09-16 12:56:
On Thursday, 16 September 2004, 17:56, Brian van den Broek wrote:
But I don't really feel I've a handle on the significance of saying it
maps the file into memory versus reading the file. [...]

read()ing a file into memory does what it says; it reads the binary data from
the disk all at once, and allocates main memory (as needed) to fit all the
data there. Memory mapping a file (or device or whatever) means that the
virtual memory architecture is involved. What happens here:


<Much helpful detail SNIPed>


HTH!

Heiko.


Thanks a lot for the detailed account, Heiko.

Best,

Brian vdB

Jul 18 '05 #11
Brian van den Broek wrote:
But I don't really feel I've a handle on the significance of saying it
maps the file into memory versus reading the file. [...]
Eventually the file is read, of course (or at least parts thereof). Mmap
is a feature of the virtual memory system in modern operating systems,
so you need a basic understanding of virtual memory in order to
understand mmap. All details can be found e.g. in Modern Operating
Systems by Andrew Tanenbaum.
http://mirrors.kernel.org/LDP/LDP/tlk/tlk.html does a good job of
explaining how Linux handles it, but I'll try to explain the general
basics here in short.

With virtual memory systems, the addresses that are used by application
programs don't refer directly to memory locations. Instead the addresses
are split into two parts: the first part is a page number, the second is
the offset of the memory location in the page. The system keeps a list
of all pages. When an address is referenced, the page is looked up in
that list (pages are blocks of memory, typically 4-8 kB). There are two
possibilities:
- The page is already in memory. In that case, the list contains the
real physical address of the page in memory. That address is combined
with the offset to form the physical address of the memory location.
- The page is not in memory. The virtual memory system loads it in
memory and stores the physical address in the list. Processing then
continues as in the other case. Note that it may be necessary to remove
another page from memory in order to load a new one; in that case, the
other page is paged to disk if it is still needed so that it can be read
again later.

This behind-the-scenes translation and paging to and from disk is what
allows modern operating systems to use much more memory than what's
physically available in the system.

mmap creates an entry in the list that says the page is not in memory,
but tells the system what file to load it from: a range of addresses is
'mapped' to the data in the file. It also returns the logical address of
the data. When an address in the range is referenced, the virtual memory
system loads the appropriate page from disk (or possibly more than one
page at a time, for efficiency reasons) into memory and stores its
(their) location in the list. An application program can access it
exactly the same way as any other part of memory.
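
A small sketch of the write side in Python (assuming 'data.bin' exists
and is at least 4 bytes long): slice assignment goes into the mapped
pages and reaches the file when they are flushed, and the replacement
must keep the length, per the constraint Brian mentioned.

import mmap

f = open('data.bin', 'r+b')
m = mmap.mmap(f.fileno(), 0)    # read-write mapping of the whole file
m[0:4] = b'ABCD'                # dirties a page; 4 bytes replace 4 bytes
m.flush()                       # force dirty page(s) back to the file
m.close()
f.close()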
And, since mmap behaves differently on different platforms: I'm mostly
a win32 user looking to transition to Linux.


I think Python hides many of the differences between the Windows and
Unix implementations of mmap (Windows doesn't really have mmap; instead
you use CreateFileMapping and MapViewOfFile).

--
"Codito ergo sum"
Roel Schroeven
Jul 18 '05 #12
