Bytes IT Community

Fastest way to fill a structure

From my previous post...

If I have a structure,

struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};

I read the file as follows

FILE *f = fopen( szPath, "rb" );

int nLineSize = 190;
BYTE b[nLineSize+1]; // note: runtime-sized arrays are a compiler extension

fread( b, sizeof(BYTE), nLineSize, f );
b[nLineSize] = '\0'; // terminate before atoi
int numofrecords = atoi( (char*)b ); // first line is num of records only

// read the data itself.
while( fread( b, sizeof(BYTE), nLineSize, f ) == nLineSize )
{
// fill data
// The locations of each items is known
// sString1 = 0->39, with blank spaces filler after data
// sString2 = 40->79, with blank spaces filler after data
// iNum1 = 80->99, with blank spaces filler after data
// iNum2 = 100->end, with blank spaces filler after data
}

What would be the best way to fill the data into an array (vector)?

Many thanks.

Simon.
Jul 23 '05 #1
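For reference, one way to split a single 190-byte record into the four fields described in the layout comments above. This is a minimal sketch, not code from the thread: the struct and function names, the trim helper, and the trailing-space handling are assumptions based on the stated offsets.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <string>

struct Fields {
    std::string s1, s2;
    int n1, n2;
};

// Copy a fixed-width field [beg, beg+len) minus its trailing blanks.
static std::string field(const char* beg, std::size_t len) {
    while (len > 0 && beg[len - 1] == ' ') --len;
    return std::string(beg, len);
}

// Parse one record laid out as: s1 = 0->39, s2 = 40->79,
// n1 = 80->99, n2 = 100->189 (all ASCII, space-padded).
Fields parseRecord(const char* b) {
    Fields f;
    f.s1 = field(b,       40);
    f.s2 = field(b + 40,  40);
    f.n1 = std::atoi(field(b + 80,  20).c_str());
    f.n2 = std::atoi(field(b + 100, 90).c_str());
    return f;
}
```

Each buffer filled by the fread loop could then be passed to parseRecord and the result pushed into the vector.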
Share this Question
Share on Google+
21 Replies


simon wrote:
If I have a structure,

struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};

I read the file as follows

FILE *f = fopen( szPath, "rb" );

int nLineSize = 190;
BYTE b[nLineSize+1];

fread( b, sizeof(BYTE), nLineSize, f );
int numofrecords = atoi( b ); // first line is num of records only,

// read the data itself.
while( fread( b, sizeof(BYTE), nLineSize, f ) == nLineSize )
{
// fill data
// The locations of each items is known
// sString1 = 0->39, with blank spaces filler after data
// sString2 = 40->79, with blank spaces filler after data
// iNum1 = 80->99, with blank spaces filler after data
// iNum2 = 100->end, with blank spaces filler after data
}

what would be the best way to fill the data into an array, (vector)?


I presume nLineSize is greater than 100. Then, something along the lines of

// as soon as you know the number of structures
yourvector.reserve(numofrecords);

// read the data themselves
while (fread(... )
{
yourvector.push_back(
sFileData(
std::string(b, b+40).c_str(),
std::string(b+40, b+80).c_str(),
strtol(std::string(b+80,b+100).c_str(),0,10),
strtol(std::string(b+100,b+nLineSize).c_str(),0,10)
)
);
}

You will need to create another constructor for your 'sFileData',
which will take two pointers to const char, and two ints (or longs):

sFileData(char const*, char const*, int, int);

Take those pointers and extract the C strings from them to create your
members.

In general, I think it's better to have 'std::string' as members instead
of 'char*'. You may need to fix the rest of your class if you make that
switch.

V
Jul 23 '05 #2
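A sketch of the struct after the switch to std::string members that the reply suggests, with the extra constructor it describes. The details are assumed; the '...' bodies of the original disappear because the compiler-generated special members now do the right thing.

```cpp
#include <cassert>
#include <string>

struct sFileData {
    std::string sSomeString1;
    std::string sSomeString2;
    int iSomeNum1;
    int iSomeNum2;

    sFileData(const char* s1, const char* s2, int n1, int n2)
        : sSomeString1(s1), sSomeString2(s2),
          iSomeNum1(n1), iSomeNum2(n2) {}
    // No hand-written destructor, copy constructor or operator= needed:
    // with std::string members the compiler-generated ones are correct.
};
```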

This way is _not_ fast, as there are loads of unnecessary memory
allocations. Simon, you had the right idea from the start, but the
data structure can be modified to:

struct sFileData
{
char sSomeString1[40];
char sSomeString2[40];
int iSomeNum1;
int iSomeNum2;
// ...
};

Then, you can use either an array or a vector. Since you know the size
ahead of time, you can create an array:

struct sFileData array[ numofrecords ]; // runtime-sized arrays are a compiler extension; std::vector is the portable choice
// read the data itself.
int i = 0;
while( fread( b, sizeof(BYTE), nLineSize, f ) == nLineSize )
{
// only valid if the on-disk bytes match the in-memory layout exactly
array[ i ] = *(struct sFileData *)b;
++i;
}

Jul 23 '05 #3

I failed to see that the file format is most likely ASCII.

Jul 23 '05 #4

>>
struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};

I presume nLineSize is greater than 100. Then, something in line with
Why would it have to be > 100? or are you saying that because of my
definition?

// as soon as you know the number of structures
yourvector.reserve(numofrecords);
Ok, it does speed things up a bit.

// read the data themselves
while (fread(... )
{
yourvector.push_back(
sFileData(
std::string(b, b+40).c_str(),
std::string(b+40, b+80).c_str(),
strtol(std::string(b+80,b+100).c_str(),0,10),
strtol(std::string(b+100,b+nLineSize).c_str(),0,10)
)
);
}


I still think that I am doing something wrong here.
To read a file with 100000 lines takes 0.66 sec, (windows machine).

But filling the structure above takes +28 seconds.

Is that normal?

Simon
Jul 23 '05 #5


"Simon" <sp********@example.com> wrote in message
news:3i************@individual.net...

struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};


I presume nLineSize is greater than 100. Then, something in line with


Why would it have to be > 100? or are you saying that because of my
definition?

// as soon as you know the number of structures
yourvector.reserve(numofrecords);


Ok, it does speed things up a bit.

// read the data themselves
while (fread(... )
{
yourvector.push_back(
sFileData(
std::string(b, b+40).c_str(),
std::string(b+40, b+80).c_str(),
strtol(std::string(b+80,b+100).c_str(),0,10),
strtol(std::string(b+100,b+nLineSize).c_str(),0,10)
)
);
}


I still think that I am doing something wrong here.
To read a file with 100000 lines takes 0.66 sec, (windows machine).

But filling the structure above takes +28 seconds.

Is that normal?


You won't know until you profile and see where the time is spent.

Jeff Flinn
Jul 23 '05 #6

Simon wrote:
struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};


I presume nLineSize is greater than 100. Then, something in line with

Why would it have to be > 100? or are you saying that because of my
definition?

// as soon as you know the number of structures
yourvector.reserve(numofrecords);

Ok, it does speed things up a bit.

// read the data themselves
while (fread(... )
{
yourvector.push_back(
sFileData(
std::string(b, b+40).c_str(),
std::string(b+40, b+80).c_str(),
strtol(std::string(b+80,b+100).c_str(),0,10),
strtol(std::string(b+100,b+nLineSize).c_str(),0,10)
)
);
}

I still think that I am doing something wrong here.
To read a file with 100000 lines takes 0.66 sec, (windows machine).

But filling the structure above takes +28 seconds.

Is that normal?


May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;

and then you could construct it a bit faster. You will still need to
convert the third and the fourth fields since they can't be memcpy'ed.

V
Jul 23 '05 #7

>
May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;


I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer than 5
chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The sizes
are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's of
lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object of dynamic memory allocation.

Simon
Jul 23 '05 #8

Simon wrote:
I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer than
5 chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The
sizes are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's
of lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object dynamic memory allocations.

Simon

What about using std::string with std::string::reserve(5), or something
close to the maximum "normal" value? That way, you have a minimum
preallocated, but it can still grow.
--
If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true.-Bertrand Russell
Jul 23 '05 #9

Simon wrote:
May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;

I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer than 5
chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The sizes
are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's of
lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object dynamic memory allocations.


Perhaps then you need to invent a smarter scheme for storing those strings
than keeping a pointer to a dynamic array of chars. Do you know that most
heap managers, when you allocate even 1 char, slap 2*sizeof(void*) of
overhead on top of it to make a dynamic array? So you're still wasting
plenty of memory (not to mention all the CPU cycles spent allocating and
then deallocating them along with other objects).

Imagine that your 'sFileData' class has a static storage for all its
strings, from which all individual strings are cut out (or, rather, in
which all individual strings are stacked up). If your objects never
change, and only get allocated once and deallocated together at some
point, then it might be the simple custom memory manager you need. You
can allocate that static storage in large chunks and give your class some
mechanism to account for allocations... Well, as you can see, all you may
need to improve the performance is a custom memory manager. You can
probably use an open source one, if you can find it.

V
Jul 23 '05 #10
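The "static storage in large chunks" idea above can be sketched as a bump allocator. This is a minimal illustration, not production code: the chunk size and names are assumptions, and nothing here handles alignment or freeing individual strings.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <memory>
#include <vector>

// Bump allocator for many short strings that are never freed one by
// one: all storage is released together when the arena is destroyed.
class StringArena {
    std::vector<std::unique_ptr<char[]>> chunks_;
    std::size_t used_ = 0;   // bytes consumed in the current chunk
    std::size_t cap_  = 0;   // size of the current chunk
    static constexpr std::size_t kChunk = 64 * 1024;
public:
    const char* store(const char* s, std::size_t len) {
        if (used_ + len + 1 > cap_) {           // current chunk is full
            std::size_t n = (len + 1 > kChunk) ? len + 1 : kChunk;
            chunks_.emplace_back(new char[n]);
            used_ = 0;
            cap_  = n;
        }
        char* dst = chunks_.back().get() + used_;
        std::memcpy(dst, s, len);
        dst[len] = '\0';
        used_ += len + 1;
        return dst;
    }
};
```

Strings live packed back-to-back and all go away together when the arena does, which matches the load-once, free-together pattern of this file.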

Victor Bazarov wrote:
Simon wrote:
May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;

I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer
than 5 chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The
sizes are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's
of lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object dynamic memory allocations.


Perhaps then you need to invent a smarter scheme for storing those strings
than keeping a pointer to a dynamic array of chars. Do you know that most
heap managers when you need to allocate 1 char would slap 2*sizeof(void*)
on top of it to make a dynamic array? So, you're still wasting enough
memory (not to say all the CPU cycles to allocate and then deallocate them
along with other objects).

Imagine that your 'sFileData' class has a static storage for all its
strings, from which all individual strings are cut out (or, rather, in
which all individual strings are stacked up). If your objects never
change, and only get allocated once and deallocated together at some
point, then it might be the simple custom memory manager you need. You
can allocate that static storage in large chunks and give your class some
mechanism to account for allocations... Well, as you can see, all you may
need to improve the performance is a custom memory manager. You can
probably use an open source one, if you can find it.

V


I just thought about another source of what the slowness might be. It may
be a question of jumping back and forth between I/O and other operations.
I'd suggest using C++ I/O, rather than C, and try a buffered stream. I
don't know much about those, but I do know the counterpart in Java could
make a big difference. Alternatively, the file could be read into memory
explicitly in one slurp, and then processed with some kind of input stream
that reads from memory.
Jul 23 '05 #11
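The "read the file into memory in one slurp" idea from the previous post, sketched with standard streams (the function name is assumed):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Read an entire stream into one string in a single bulk copy,
// so all later parsing works on memory, not on the file.
std::string slurp(std::istream& in) {
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}
```

With the whole file in memory, each fixed-length record is just an offset into the string, and no per-record read calls are needed.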


"Simon" <sp********@example.com> wrote in message
news:3i************@individual.net...

May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;


I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer than
5 chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The
sizes are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's
of lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object dynamic memory allocations.


If you're concerned about 36 chars, why not avoid all of this memory
allocation to begin with? Considering your file structure is fixed record
lengths of a known number of records, you shouldn't need to copy/process
anything until it's needed. I've successfully used a version of this
approach in memory limited handheld pc(33Mhz no less) to access several
multi-megabyte files.

For example (simplified, incomplete and untested):

class Record
{
std::string mString;
public:

Record( const std::string& rec ):mString(rec){}

std::string S1()const{ ... } // extract S1 from record
std::string S2()const{ ... }
int N1()const{ ... }
int N2()const{ ... }

};

class structured_file
{
class memory_mapped_file
{
// use os specific implementation

const char* mBegin;

public:

memory_mapped_file( const std::string& name )
: ...
, mBegin( ... )
{}

const char* operator[]( size_t idx )const
{
return mBegin + idx; // return the raw pointer; no string temporary here
}

...

};

memory_mapped_file mData;
size_t mRecSize;

public:
structured_file( const std::string& name, size_t rec_size )
: mData(name), mRecSize(rec_size){}

Record operator[]( size_t idx )const
{
const char* lBeg = mData[ mRecSize * idx ];

return Record( std::string( lBeg, lBeg + mRecSize ) );
}

};

int main()
{
structured_file lData( "data.dat", 192 ); // record length in bytes

Record r1 = lData[ 123];
Record r2 = lData[2456];

int n1n1 = r1.N1();
std::string s2s2 = r2.S2();

return 0;
}

Jeff Flinn
Jul 23 '05 #12

Simon wrote:
May not be. You may want to change the structure and make it contain
arrays of char instead of pointers to dynamically allocated arrays.

Then the construction will be a bit faster, you could simply drop the
'string' thing there. Also, if you're sure about the source of the
data, and their format, you could avoid constructing temporaries. Play
with making 'sFileData' look like

char s1[41]; // if it's a C string, reserve the room for the null char
char s2[41];
int one, two;


I know I am going to be told I am too difficult, but the reason why I
dynamically create the string is because they are almost never longer than 5
chars.
So by declaring s1[41] I know that I am wasting around 36 chars, (The sizes
are different, there could be a string of 40 chars).

I know that we are only talking about 36 chars here, but I load 100000's of
lines and the waste really seems unnecessary to me, (and I don't like
wasting memory).
It seems to defeat the object dynamic memory allocations.

Simon


Dynamic memory allocation of many small segments causes
extremely poor memory utilization.

malloc and new (new often uses malloc) get memory from
the operating system in pages (4k, 8k, etc). They use
part of the obtained memory to implement control structures
(for keeping track of allocated and freed/reuseable chunks).
Each allocation also includes bookkeeping overhead (typically
8 bytes on a 32-bit OS), and normally no less than 16 bytes
is used per allocation - even if the user only asked for
one byte, malloc(1). So, in general, at least 16 bytes plus
a pointer in the malloc control structure (a linked list)
is allocated for each request.

Your program with the 100000 calls to allocate 5 bytes and
another 100000 calls to allocate 8 bytes will use AT LEAST
twice as much memory as you think - because of the hidden
extra memory used to keep track of everything.

These two articles explain it in detail (your OS may vary,
but the generalities apply):

http://www.cs.utk.edu/~plank/plank/c...2/lecture.html
http://www.cs.utk.edu/~plank/plank/c...n/lecture.html

Regards,
Larry
Jul 23 '05 #13
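Using the figures above — a 16-byte minimum block plus roughly 8 bytes of bookkeeping per allocation, both platform-dependent assumptions — the hidden cost of those 200000 tiny allocations can be estimated mechanically:

```cpp
#include <cassert>
#include <cstddef>

// Rough lower bound on the heap bytes consumed by `count` allocations
// of `size` bytes each, using the figures quoted above: blocks are
// rounded up to 16 bytes and carry ~8 bytes of bookkeeping. Real
// allocators differ, so treat this as an estimate only.
std::size_t heapEstimate(std::size_t count, std::size_t size) {
    const std::size_t kMinBlock = 16;
    const std::size_t kHeader   = 8;
    std::size_t block = (size < kMinBlock) ? kMinBlock : size;
    return count * (block + kHeader);
}
```

At 24 bytes per 5-byte string, the two string members alone cost about 4.8 MB across 100000 records, before the structs themselves are counted.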

simon wrote:
From my previous post...

If I have a structure,

struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};

I read the file as follows

FILE *f = fopen( szPath, "rb" );

int nLineSize = 190;
BYTE b[nLineSize+1];

fread( b, sizeof(BYTE), nLineSize, f );
int numofrecords = atoi( b ); // first line is num of records only,

// read the data itself.
while( fread( b, sizeof(BYTE), nLineSize, f ) == nLineSize )
{
// fill data
// The locations of each items is known
// sString1 = 0->39, with blank spaces filler after data
// sString2 = 40->79, with blank spaces filler after data
// iNum1 = 80->99, with blank spaces filler after data
// iNum2 = 100->end, with blank spaces filler after data
}

what would be the best way to fill the data into an array, (vector)?

Many thanks.

Simon.


You state that each line in the file (including the first one) is
190 bytes long: 'int nLineSize = 190;' Yet your data items are all
ascii, and they occupy the first 100+ bytes. Is the ascii data
followed by additional (possibly binary) data that fills out the
record to a length of 190 bytes? Does each 190 byte record include
a trailing newline (Windows style "\r\n" or non-Windows "\n")?

Since you open the file in binary mode ("rb"), we might infer
at least 4 things:

1) the file contains mixed ascii/binary data records;
each of which is 190 bytes long with NO delimiting
newlines (aka Fixed Block in IBM parlance).

2) the file contains mixed ascii/binary data records;
each of which is 190 bytes long INCLUDING a delimiting
newline (Windows style "\r\n" or non-Windows "\n").

3) the file contains ascii-only data records with
a fixed length of 190 bytes with NO delimiting
newlines.

4) the file contains ascii-only data records with
a fixed length of 190 bytes INCLUDING a delimiting
newline (Windows style "\r\n" or non-Windows "\n").

It will be much easier for us to suggest efficient coding
approaches if you would please describe the EXACT layout
of the 190 byte records - including what follows 'iNum2',
and whether or not each of the 190 byte records includes
a trailing newline.

I have some ideas, but knowing the complete layout of the
190 byte records is key to picking the best approach.

Regards,
Larry
Jul 23 '05 #14

Larry I Smith wrote:
simon wrote:
From my previous post...

If I have a structure,

struct sFileData
{
char*sSomeString1;
char*sSomeString2;
int iSomeNum1;
int iSomeNum2;
sFileData(){...};
~sFileData(){...};
sFileData(const sFileData&){...};
const sFileData operator=( const sFileData &s ){...}
};

I read the file as follows

FILE *f = fopen( szPath, "rb" );

int nLineSize = 190;
BYTE b[nLineSize+1];

fread( b, sizeof(BYTE), nLineSize, f );
int numofrecords = atoi( b ); // first line is num of records only,

// read the data itself.
while( fread( b, sizeof(BYTE), nLineSize, f ) == nLineSize )
{
// fill data
// The locations of each items is known
// sString1 = 0->39, with blank spaces filler after data
// sString2 = 40->79, with blank spaces filler after data
// iNum1 = 80->99, with blank spaces filler after data
// iNum2 = 100->end, with blank spaces filler after data
}

what would be the best way to fill the data into an array, (vector)?

Many thanks.

Simon.


You state that each line in the file (including the first one) is
190 bytes long: 'int nLineSize = 190;' Yet your data items are all
ascii, and they occupy the first 100+ bytes. Is the ascii data
followed by additional (possibly binary) data that fills out the
record to a length of 190 bytes? Does each 190 byte record include
a trailing newline (Windows style "\r\n" or non-Windows "\n")?

Since you open the file in binary mode ("rb"), we might infer
at least 4 things:

1) the file contains mixed ascii/binary data records;
each of which is 190 bytes long with NO delimiting
newlines (aka Fixed Block in IBM parlance).

2) the file contains mixed ascii/binary data records;
each of which is 190 bytes long INCLUDING a delimiting
newline (Windows style "\r\n" or non-Windows "\n").

3) the file contains ascii-only data records with
a fixed length of 190 bytes with NO delimiting
newlines.

4) the file contains ascii-only data records with
a fixed length of 190 bytes INCLUDING a delimiting
newline (Windows style "\r\n" or non-Windows "\n").

It will be much easier for us to suggest efficient coding
approaches if you would please describe the EXACT layout
of the 190 byte records - including what follows 'iNum2',
and whether or not each of the 190 byte records includes
a trailing newline.

I have some ideas, but knowing the complete layout of the
190 byte records is key to picking the best approach.

Regards,
Larry


Two more questions:

5) do you wish to have leading/trailing whitespace
stripped from the first 2 string fields before
they are put into the structure?

6) might the first 2 string fields contain embedded
whitespace (e.g. sSomeString1 could be "hello there")?

Regards,
Larry
Jul 23 '05 #15

>>
You state that each line in the file (including the first one) is
190 bytes long: 'int nLineSize = 190;' Yet your data items are all
ascii, and they occupy the first 100+ bytes. Is the ascii data
followed by additional (possibly binary) data that fills out the
record to a length of 190 bytes? Does each 190 byte record include
a trailing newline (Windows style "\r\n" or non-Windows "\n")?
That's part of the problem: the line is 190 chars long + '\n',
but the only meaningful data to me is 0->110.

Since you open the file in binary mode ("rb"), we might infer
at least 4 things:

1) the file contains mixed ascii/binary data records;
each of which is 190 bytes long with NO delimiting
newlines (aka Fixed Block in IBM parlance).
It is all ASCII + '\n'. I open it "rb" because that's what I usually do.
But it is a flat text file.
It will be much easier for us to suggest effecient coding
approaches if you would please describe the EXACT layout
of the 190 byte records - including what follows 'iNum2',
and whether or not each of the 190 byte records includes
a trailing newline.

What follows is more text and number data, (but all in ASCII).

5) do you wish to have leading/trailing whitespace
stripped from the first 2 string fields before
they are put into the structure?
Yes, but only the trailing spaces. The data is left-aligned in its section.

6) might the first 2 string fields contain embedded
whitespace (e.g. sSomeString1 could be "hello there")?
Yes, if that makes it faster to load; the data is 'protected' and I use
functions to return the values.
The problem is the numbers: it would not be very efficient to call
something like atoi("1234") all the time.

Regards,
Larry


many thanks for your help.
Simon
Jul 23 '05 #16

>>> You state that each line in the file (including the first one) is
190 bytes long: 'int nLineSize = 190;' Yet your data items are all
ascii, and they occupy the first 100+ bytes. Is the ascii data
followed by additional (possibly binary) data that fills out the
record to a length of 190 bytes? Does each 190 byte record include
a trailing newline (Windows style "\r\n" or non-Windows "\n")?


That's part of the problem, the line is 190 char long+'\n'
but the only meaningful data to me is 0->110


Sorry, I made a mistake, the file is 'windows style' 190+ '\r\n'

Simon
Jul 23 '05 #17

simon wrote:
You state that each line in the file (including the first one) is
190 bytes long: 'int nLineSize = 190;' Yet your data items are all
ascii, and they occupy the first 100+ bytes. Is the ascii data
followed by additional (possibly binary) data that fills out the
record to a length of 190 bytes? Does each 190 byte record include
a trailing newline (Windows style "\r\n" or non-Windows "\n")?

That's part of the problem, the line is 190 char long+'\n'
but the only meaningful data to me is 0->110


Sorry, I made a mistake, the file is 'windows style' 190+ '\r\n'

Simon


Ok, so the lines are each 192 bytes long (including the \r\n).

If you use fread() to read the data, then fread needs to read
192 bytes - NOT 190. "\r\n" is not special to fread() - it reads
raw bytes. So, if you read only 190 bytes when each 'line'
is actually 192 bytes long, then the fields for all records
except the first one will each be off by 2 bytes from the previous
record, e.g. by the time you get to record 40, your data fields
will be off by 80 bytes from where you think they are. This will
cause your sFileData structs to NOT have the contents you expect,
and may be contributing to the terrible performance that you
are seeing.

I have written 3 small programs that I will post in a few
minutes. I wrote them using a 190 byte line length (including
the trailing "\r\n"). As soon as I change them to use 192
byte lines, I'll post them. They are:

simondat.c: to create a test input data file named "simon.dat"
with 100000 records for use by the other 2 programs.

simon.cpp: uses 'char *' with new/delete for the string
fields in sFileData.

simon2.cpp: uses std::string for the string fields in
sFileData.

On my pc (an old Gateway PII 450MHZ with 384MB of RAM):

simon.cpp runs in 2.20 seconds and uses 5624KB of memory.

simon2.cpp runs in 2.22 seconds and uses 6272KB of memory.

Your mileage may vary. I'm running SuSE Linux v9.3 and
using the GCC "g++" compiler v3.3.5.

Regards,
Larry
Jul 23 '05 #18
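A sketch of the corrected read loop, assuming 190 payload bytes plus a trailing "\r\n" per line, so that 192 bytes are consumed per record as described above (names and the std::string-per-record choice are assumptions):

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

const std::size_t kPayload = 190;  // meaningful bytes per line
const std::size_t kRecord  = 192;  // payload plus "\r\n"

// Collect the 190-byte payload of every complete record; reading
// kRecord (not kPayload) bytes per pass avoids the 2-byte drift.
std::vector<std::string> readRecords(std::istream& in) {
    std::vector<std::string> out;
    char buf[kRecord];
    while (in.read(buf, kRecord))
        out.push_back(std::string(buf, kPayload));
    return out;
}
```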

>>
On my pc (an old Gateway PII 450MHZ with 384MB of RAM):

simon.cpp runs in 2.20 seconds and uses 5624KB of memory.
Thanks for that, I get 1.24 sec and 6 MB.
I just need to check what the difference is with my code.

simon2.cpp runs in 2.22 seconds and uses 6272KB of memory.

Your mileage may vary. I'm running SuSE Linux v9.3 and
using the GCC "g++" compiler v3.3.5.

Regards,
Larry
Here are the 3 programs:


<snip code>

Regards,
Larry


Thanks for that, this is great.
I wonder if my Trim(...) function was not part of the problem.

After profiling I noticed that delete [], (or even free(..) ) takes around
50% of the whole time.

Maybe I should get rid of the dynamic allocation altogether.

Simon
Jul 23 '05 #19
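If Trim and delete[] dominate the profile, both the trailing-space trim and the number parsing can be done in place on the read buffer, with no temporaries at all. A sketch with assumed names; note that strtol itself skips the blank filler after the digits, though an entirely blank field would need a guard:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Length of a fixed-width field once trailing blanks are dropped;
// no copy, no allocation, unlike a Trim that builds a new string.
std::size_t trimmedLen(const char* beg, std::size_t len) {
    while (len > 0 && beg[len - 1] == ' ') --len;
    return len;
}

// Parse a space-padded ASCII number straight from the buffer. strtol
// stops at the first blank after the digits, so the filler needs no
// trimming (an entirely blank field would parse whatever follows it,
// so guard for that case in real code).
long fieldToLong(const char* beg) {
    return std::strtol(beg, 0, 10);
}
```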

simon wrote:
On my pc (an old Gateway PII 450MHZ with 384MB of RAM):

simon.cpp runs in 2.20 seconds and uses 5624KB of memory.
Thanks for that, I get 1.24 sec and 6mb.
I just need to check what the difference is with my code.
simon2.cpp runs in 2.22 seconds and uses 6272KB of memory.

Your mileage may vary. I'm running SuSE Linux v9.3 and
using the GCC "g++" compiler v3.3.5.

Regards,
Larry

Here are the 3 programs:


<snip code>
Regards,
Larry


Thanks for that, this is great.
I wonder if my Trim(...) function was not part of the problem.

After profiling I noticed that delete [], (or even free(..) ) takes around
50% of the whole time.

Maybe I should get rid of the dynamic allocation all together.

Simon


What does your profiler say about simon2.cpp?

Actually 1.24 seconds is pretty good for 100000 records.

As far as the memory usage goes, did you read the 2
articles on malloc that I posted earlier? Whether you
use new/delete or std::string (which does its own new/delete
behind the scenes) doesn't make much difference in performance
or memory usage, but std::string allows you much more
flexibility when manipulating the strings after you've
filled your vector (i.e. later in the program).

Due to the many (200000) tiny memory allocations, your memory
usage would be about:

2.5 * (sizeof(sFileData) * 100000)

when both strings (sSomeString1 & sSomeString2) are small.

16 bytes minimum (plus the pointer kept in sFileData) will
be allocated for each of those strings. So, using pointers
in sFileData, the actual memory used for one sFileData
is at least 48 bytes.

Regards,
Larry
Jul 23 '05 #20
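The arithmetic above can be checked mechanically. The exact figures depend on the platform, so the sketch below asserts only the safe lower bound (struct layout copied from the thread, helper name assumed):

```cpp
#include <cassert>
#include <cstddef>

// The struct from the thread, pointers and all.
struct sFileData {
    char* sSomeString1;
    char* sSomeString2;
    int   iSomeNum1;
    int   iSomeNum2;
};

// Per-record footprint: the struct itself plus two heap blocks of at
// least 16 bytes each (the minimum allocation size discussed above).
std::size_t perRecordLowerBound() {
    return sizeof(sFileData) + 2 * 16;
}
```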

Larry I Smith wrote:
simon wrote:
On my pc (an old Gateway PII 450MHZ with 384MB of RAM):

simon.cpp runs in 2.20 seconds and uses 5624KB of memory.


Thanks for that, I get 1.24 sec and 6mb.
I just need to check what the difference is with my code.
simon2.cpp runs in 2.22 seconds and uses 6272KB of memory.

Your mileage may vary. I'm running SuSE Linux v9.3 and
using the GCC "g++" compiler v3.3.5.

Regards,
Larry
Here are the 3 programs:


<snip code>
Regards,
Larry


Thanks for that, this is great.
I wonder if my Trim(...) function was not part of the problem.

After profiling I noticed that delete [], (or even free(..) ) takes
around 50% of the whole time.

Maybe I should get rid of the dynamic allocation all together.

Simon


What does your profiler say about simon2.cpp?

Actually 1.24 seconds is pretty good for 100000 records.

<snip memory-usage discussion, quoted from the previous post>


What am I missing here? There must be some part of the problem that I
missed. The following reads in 100000 data pairs, one per line, in far
less than a second.

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include <iomanip>
#include <iterator>

using namespace std;

template<typename Key_T, typename Value_T>
struct Data {
Key_T _key;
Value_T _value;

istream& fromStream(istream& in) {
return in >> _key >> _value;
}

ostream& toStream(ostream& out) const {
ios::fmtflags oldFlags(out.flags());
out.setf(ios::left);
out << setw(25) << _key << _value;
out.flags(oldFlags); // restore the saved flags
return out;
}

};

template<typename Key_T, typename Value_T>
istream& operator>>(istream& in, Data<Key_T, Value_T>& data) {
return data.fromStream(in);
}

template<typename Key_T, typename Value_T>
ostream& operator<<(ostream& out, const Data<Key_T,Value_T>& data) {
return data.toStream(out);
}

int main(int argc, char* argv[]) {
if(!(argc > 1)) {
cerr << "records filename: " << endl;
return -1;
}

ifstream ifs(argv[1]);

if(!ifs.is_open()) {
cerr << "Failed to open file: " << argv[1] << endl;
return -1;
}

typedef Data<string, string> D_T;
typedef vector<D_T> DV_T;
DV_T dv;

copy(istream_iterator<D_T>(ifs), istream_iterator<D_T>(), back_inserter(dv));
// copy(dv.begin(), dv.end(),ostream_iterator<D_T>(cout,"\n"));
}

--
If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true.-Bertrand Russell
Jul 23 '05 #21

Steven T. Hatton wrote:
Larry I Smith wrote:
simon wrote:
<snip benchmark and memory-usage discussion, quoted in full above>

What am I missing here? There must be some part of the problem that I
missed. The following reads in 100000 data pairs, one per line, in far
less than a second.

<snip code>


Each of his 100000 records (each 192 bytes long) contains multiple
fields that must be parsed out of the record, then have lead/trail
blanks trimmed; additional fields in the record must be parsed out
and converted to int. Each record also contains fields that are to
be skipped over (i.e. ignored). Once all of the fields are parsed
out of a record, an object of class sFileData is constructed using
the data parsed from the record; then that sFileData object is
put into the vector. Only after all of this is the next record
read from the file. So most of the work is data parsing.

Larry
Jul 23 '05 #22

This discussion thread is closed

Replies have been disabled for this discussion.