Bytes | Software Development & Data Engineering Community

parsing large text file and pairwise comparison

Hi. I have programmed in C++ before, but I'm a couple of years out
of practice. I am seeking some advice on getting started on a quickie
project. . . .

I have to read a 54MB text file and do a pairwise comparison
among 2500 or so items in the file. Each of those items has to be
compared to every other item. Many of the comparisons will only require
comparing one field of the items. I will probably sort on this field
before I do the comparison, to speed things up. There are about 1000
sets of these items, so I'll do this about 1000 times.

Would you recommend reading it into vectors of strings for
this? Is there a completely different, better way to do this?

Thanks, Alan

Oct 25 '06 #1
"Alan" <ja*********@verizon.netwrote in message
news:11**********************@b28g2000cwb.googlegr oups.com...
Hi. I have programmed in C++ before, but I'm a couple of years out
of practice. I am seeking some advice on getting started on a quickie
project. . . .

I have to read a 54MB text file and do a pairwise comparison
among 2500 or so items in the file. Each of those items has to be
compared to every other item. Many of the comparisons will only require
comparing one field of the items. I will probably sort on this field
before I do the comparison, to speed things up. There are about 1000
sets of these items, so I'll do this about 1000 times.

Would you recommend reading it into vectors of strings for
this? Is there a completely different, better way to do this?

Thanks, Alan
Hard to say without knowing the data. Is it string data, or integer data or
floating point data or?
And what do you need to do with the comparisons, build another file? Output
something?
Is the data unique?

54MB is quite a large bit of data to try to fit into memory all at once.
Can you break the data up into some logical units?
Oct 25 '06 #2
Alan wrote:
I have to read a 54MB text file and do a pairwise comparison
among 2500 items or so in the file.
That sounds like a lot of data. Unless you want to wrestle with
resource limitations (for practice or fun) you'd be better off to
use an existing tool such as GNU sort for the heavy work. If the
file needs some pre-processing (such as combining multiple related
lines into one) then write a small filter program, to be invoked
in a command script, or let the main program do the pre-processing
and invoke sort using system() or more OS-specific means.

I'd use explicit temporary files with sort, rather than pipes, as
pipes often have amazingly bad performance for large quantities of
data. This also leaves evidence you can check for debugging.

Once the data are sorted on the relevant fields, checking for
uniqueness is easily done with just two one-line buffers.

Of course, if your purpose really requires comparing or otherwise
combining pairs outside a linear sort order you'll have to build up
an appropriate in-memory structure. Vectors of strings are a fine
way to store the data. If memory is limited you can store just the
part that's needed globally, together with a file position so you
can read the same line again when and if you need to copy the rest
of the line to the output.

If performance is too slow (which would be an issue only if this
isn't a one-time application) you can speed up the file read using
a large buffer, up to the file size, then making C strings by
writing 0 over the line separator.

Make a smaller test input file so your debugging time won't be spent
waiting for disk I/O.
Oct 25 '06 #3
It consists entirely of numbers (some integer, some real). The
primary "screening" comparison involves one integer field. For 90+% of
the data, this field will not match in the comparison. This will
reduce the data considerably.

The data does come in sequential, logical units (output of a time
cycle of a simulation). To clarify, I have to read in something like
2500 (variable) of the items at a time and compare them. Then I
calculate results and move on to the next group of 2500. If it might
be wise, I could create 1000 separate files instead of one big honker.
Any thoughts?

I will output a small file with summarized results (CSV), which
will be used in Excel for further analysis.
Thanks for the advice,
Alan

Oct 25 '06 #4
Alan wrote:
It consists entirely of numbers (some integer, some real). The
primary "screening" comparison involves one integer field. For 90+% of
the data, this field will not match in the comparison. This will
reduce the data considerably.

The data does come in sequential, logical units (output of a time
cycle of a simulation). To clarify, I have to read in something like
2500 (variable) of the items at a time and compare them. Then I
calculate results and move on to the next group of 2500. If it might
be wise, I could create 1000 separate files instead of one big honker.
Any thoughts?

I will output a small file with summarized results (CSV), which
will be used in Excel for further analysis.
Thanks for the advice,
If these are numbers then I would not read them into strings,
for you will have to convert them to int or double anyway,
to compare. String comparison will not suffice, because
0, -0.0, 0.0000E+00 are all the same number.

I'd go for a vector of structs, if you know the struct in
advance. If you can reasonably guess the number of elements,
or if it is in a header of this file, then .reserve() the
vector in advance, this will reduce the penalty of reallocating
it during .push_back().

My experience shows that the bottleneck of such calculations
is usually the string-to-number conversion hidden behind the
scenes in

input_stream >> some_number;

i.e. the program becomes CPU-bound, not I/O-bound, because of
the conversion. I suggest trying the stream-based approach
first and, only if the performance is unsatisfactory, trying to
re-code into sscanf()/atof() or the like, being extremely
careful.

On the other hand, if you intend to compare all pairs
of data, I suspect you'll have O(n^2) complexity, or
O(n log n) if you sort it -- that way, for large sets of
data, the cost of reading it in, which scales as O(n),
becomes less and less of a hassle. Still, 2500 numbers
and a 54MB file does not sound like an awful lot.

I would not split the input into more smaller files,
IMHO it will only require more file opens and closes.

HTH,
- J.
Oct 25 '06 #5
Jacek and everyone:

Thank you for the very helpful advice. Alan

Oct 25 '06 #6
Just to follow up, for anyone who is interested . . . I followed
most of the good advice provided, reading in the data as integers,
doubles, etc. and putting it into a vector of structures. This worked
great. On my Dell Latitude D600 laptop (not the latest and greatest),
it takes less than a minute to read in all this data and iterate
through the vector for a pairwise comparison.

Thanks again for all the great advice. Alan

Oct 27 '06 #7

This thread has been closed and replies have been disabled. Please start a new discussion.