
Data transfer problem - ideas/solutions wanted (please)

Hi,

I have an interesting problem. I have a (LARGE) set of historical data
that I want to keep on a central server, as several separate files. I
want a client process to be able to request the data in a specific file
by specifying the file name, start date/time and end date/time.

The files are in binary format, to conserve space on the server (as well
as to reduce processing time). The data in each file can be quite
large, covering several years of data. New data will be appended to
these files each day, by a (PHP) script. The server machine is likely to
be a Unix machine, whereas my clients will be running on Windows
machines. My client program is written in C++.

My three main problems/questions are as follows:

1). Transfer method issue:
What is the best (i.e. most efficient and fastest) way to transfer data
from the server to clients? I think SOAP is likely to be too slow,
because of the sheer size of the data.

2). Cross-platform issue:
How can I ensure that the (binary?) data sent from the Unix server
can be correctly interpreted at the client side?

3). Security issue:
How can I prevent clients from directly accessing the files (to prevent
malicious or accidental corruption of the data files)?

Feb 4 '06 #1
11 Replies


E.T. Grey wrote:
> (original post snipped)


Since this is going to be on a Unix machine, might I suggest one of the
Unix/Linux groups? Other than the fact that this is going to be appended
to by a PHP script (which isn't part of your question), I don't see
anything indicating PHP is involved, much less a PHP question.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Feb 4 '06 #2

Jerry Stuckle wrote:
> (original post snipped)
>
> Since this is going to be on a Unix machine, might I suggest one of the
> Unix/Linux groups? Other than the fact that this is going to be appended
> to by a PHP script (which isn't part of your question), I don't see
> anything indicating PHP is involved, much less a PHP question.


With the benefit of hindsight, I did not make myself clear. A further
clarification is thus in order:

I have implemented the server side of the solution using PHP. I am
communicating with the C++ front end using SOAP (i.e. communicating
between PHP on the server and C++ at the client).
My first question was asked because I assume SOAP is too heavy for file
transfer (OK, not directly a PHP question).

My second question was asked because the files will be created (and
appended to) using PHP scripts - and I was wondering whether binary
files written by PHP on Unix can be read by a C++ application running
on Windows.

My third question was asked because I'm a relative WAMP/LAMP & PHP
newbie and I do not fully understand the security issues in this
framework. I simply know that I want to prevent clients from directly
accessing the files.

Hope this helps clarify things.
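For concreteness, the kind of plain-HTTP endpoint I have in mind as a
lighter alternative to SOAP might look roughly like this (the script
name, query parameters and directory path are all hypothetical, and the
actual date-range filtering is left out):

<?php
// getdata.php - hypothetical endpoint: getdata.php?file=NAME&start=TS&end=TS
// Assumes the data files live in a directory outside the web root.
$dataDir = '/var/data/history';                      // assumed, not web-accessible

$file  = isset($_GET['file'])  ? basename($_GET['file']) : '';  // basename() blocks "../" tricks
$start = isset($_GET['start']) ? (int) $_GET['start'] : 0;
$end   = isset($_GET['end'])   ? (int) $_GET['end']   : PHP_INT_MAX;

$path = $dataDir . '/' . $file;
if ($file === '' || !is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

// A real version would seek to the records whose timestamps fall in
// [$start, $end]; for brevity this sketch just streams the whole file.
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
readfile($path);
?>

The same script would double as the access-control point for the third
question, since the files themselves never sit under the web root.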

Feb 4 '06 #3

NC
E.T. Grey wrote:
> I have a (LARGE) set of historical data that I want to keep
> on a central server, as several separate files.

How large exactly?

> I want a client process to be able to request the data in a
> specific file by specifying the file name, start date/time and
> end date/time.

The start/end date/time bit actually is a rather fat hint that you
should consider using a database... Searching through large files will
eat up enormous amounts of disk and processor time.

> New data will be appended to these files each day, by a
> (PHP) script.

Yet another reason to consider a database...

> What is the best (i.e. most efficient and fastest) way to transfer
> data from the server to clients?

Assuming you are using HTTP, compressed (gzip) CSV will probably be the
fastest.

> How can I ensure that the (binary?) data sent from the Unix server
> can be correctly interpreted at the client side?

Why should the data be binary? Compressed CSV is likely to be at least
as compact as binary data, plus CSV will be human-readable, which
should help during debugging.
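To illustrate the compressed-CSV idea on the PHP side (the record layout
and the loader function are made up; only the CSV/gzip plumbing is the
point):

<?php
// Hypothetical: $rows is the date-filtered result for one data file,
// e.g. array(array(1138838400, 12.34, 56.78), ...).
$rows = load_rows_for_range($file, $start, $end);   // placeholder for the real lookup

// Build the CSV in memory, then gzip it and send it in one go.
$buf = fopen('php://temp', 'r+');
foreach ($rows as $row) {
    fputcsv($buf, $row);
}
rewind($buf);
$csv = stream_get_contents($buf);
fclose($buf);

header('Content-Type: text/csv');
header('Content-Encoding: gzip');   // the custom C++ client is assumed to handle gzip
echo gzencode($csv, 6);             // level 6: reasonable speed/size trade-off
?>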
> How can I prevent clients from directly accessing the files (to
> prevent malicious or accidental corruption of the data files)?

Import them into a database and lock the originals in a safe place.

Cheers,
NC

Feb 4 '06 #4

E.T. Grey wrote:
> (earlier quoting snipped)
>
> My second question was asked because the files will be created (and
> appended to) using PHP scripts - and I was wondering whether binary
> files written by PHP on Unix can be read by a C++ application running
> on Windows.
>
> My third question was asked because I'm a relative WAMP/LAMP & PHP
> newbie and I do not fully understand the security issues in this
> framework. I simply know that I want to prevent clients from directly
> accessing the files.


Well, as for reading and writing the files - C++ should be able to read
any file written by PHP, COBOL or any other language. You may be forced
to do some massaging of the bytes, but that should be all.

As for not accessing the files directly - just don't put them in your
web root directory (or anywhere below it). Then they can't be reached
through the website.
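By way of illustration, if the PHP side sticks to pack() format codes
with an explicit byte order, there is essentially no massaging left to
do on the Windows side. A small sketch with a made-up fixed-width
record (timestamp plus two values stored as scaled integers, since
pack()'s float formats follow the machine's byte order):

<?php
// Hypothetical 12-byte record: timestamp + two scaled integer values.
// 'V' = unsigned 32-bit little-endian regardless of the server's CPU,
// so an x86 Windows client can read the fields as plain 32-bit ints.
function append_record($path, $timestamp, $val1, $val2)
{
    $fp = fopen($path, 'ab');          // 'b' matters on Windows, harmless on Unix
    fwrite($fp, pack('VVV', $timestamp, $val1, $val2));
    fclose($fp);
}

// Reading one record back on the PHP side (e.g. for a sanity check):
function read_record($fp)
{
    $raw = fread($fp, 12);
    if (strlen($raw) < 12) {
        return false;                  // end of file or short read
    }
    return unpack('Vtimestamp/Vval1/Vval2', $raw);
}
?>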

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Feb 4 '06 #5

NC wrote:
>> I have a (LARGE) set of historical data that I want to keep
>> on a central server, as several separate files.
>
> How large exactly?

At last count, there are about 65,000 distinct files (and increasing).

>> I want a client process to be able to request the data in a
>> specific file by specifying the file name, start date/time and
>> end date/time.
>
> The start/end date/time bit actually is a rather fat hint that you
> should consider using a database... Searching through large files will
> eat up enormous amounts of disk and processor time.

Not necessarily true. Each file has the equivalent of approx 1M rows
(yes - that's 1 million), yet the binary files (which use compression
algorithms) are approx 10-15 KB in size. If you multiply the number of
rows (on average) by the number of files, you can quickly see why using
a database as a repository would be a poor design choice.

>> New data will be appended to these files each day, by a
>> (PHP) script.
>
> Yet another reason to consider a database...

See above.

>> What is the best (i.e. most efficient and fastest) way to transfer
>> data from the server to clients?
>
> Assuming you are using HTTP, compressed (gzip) CSV will probably be the
> fastest.

This involves converting the read data to a string first, before
(possibly) zipping it and sending it. This incurs overhead (that I
would like to avoid) on both server and client.
>> How can I ensure that the (binary?) data sent from the Unix server
>> can be correctly interpreted at the client side?
>
> Why should the data be binary? Compressed CSV is likely to be at least
> as compact as binary data, plus CSV will be human-readable, which
> should help during debugging.

See above.

>> How can I prevent clients from directly accessing the files
>> (to prevent malicious or accidental corruption of the data files)?
>
> Import them into a database and lock the originals in a safe place.


Feb 4 '06 #6

NC
E.T. Grey wrote:
>>> I have a (LARGE) set of historical data that I want to keep
>>> on a central server, as several separate files.
>>
>> How large exactly?
>
> At last count, there are about 65,000 distinct files (and increasing)
>
> ... Each file has the equivalent of approx 1M rows (yes - that's 1
> million) ... If you multiply the number of rows (on average) by the
> number of files - you can quickly see why using a db as a repository
> would be a poor design choice.

Sorry, I can't. 65 million records is a manageable database.

> This involves converting the read data to a string first, before
> (possibly) zipping it and sending it. This incurs overhead (that I
> would like to avoid) on both server and client.

And yet you are willing to convert EVERY BIT of that data when you
search through it...

Cheers,
NC

Feb 4 '06 #7

NC wrote:
>> At last count, there are about 65,000 distinct files (and increasing)
>>
>> ... Each file has the equivalent of approx 1M rows (yes - that's 1
>> million) ... If you multiply the number of rows (on average) by the
>> number of files - you can quickly see why using a db as a repository
>> would be a poor design choice.
>
> Sorry, I can't. 65 million records is a manageable database.


I agree... I have designed and deployed binary and ASCII data loads in
excess of 250 million records/day. Searching the data was a piece of
cake - if you actually design the database correctly.

65M records is peanuts to a database - even MySQL. With proper indexing
you can do a direct-row lookup in 4-8 I/Os - not so with the path
you are currently trying to traverse... you are looking at up to 65M
reads - and reads are very expensive!!

Use the proper tools/mechanisms for the job at hand...
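For illustration, with a made-up schema (the point is the composite key
on series + timestamp, which keeps a date-range request down to a short
index range scan; all names and credentials below are hypothetical):

<?php
// Hypothetical schema - one row per (series, timestamp) observation:
//
//   CREATE TABLE observations (
//       series_id INT UNSIGNED NOT NULL,
//       ts        INT UNSIGNED NOT NULL,      -- Unix timestamp
//       val1      INT          NOT NULL,
//       val2      INT          NOT NULL,
//       PRIMARY KEY (series_id, ts)           -- range scans stay on the index
//   );

$db = mysqli_connect('localhost', 'user', 'pass', 'history');   // placeholder credentials

$seriesId = 42;            // would come from the client's "file name"
$start    = 1104537600;    // requested start time
$end      = 1138838400;    // requested end time

$sql = sprintf('SELECT ts, val1, val2 FROM observations
                 WHERE series_id = %d AND ts BETWEEN %d AND %d
                 ORDER BY ts',
               $seriesId, $start, $end);

$result = mysqli_query($db, $sql);
while ($row = mysqli_fetch_assoc($result)) {
    // stream each row back to the client here
}
?>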
Michael Austin
DBA
(stuff snipped)
Feb 4 '06 #8

NC wrote:
>> At last count, there are about 65,000 distinct files (and increasing)
>>
>> ... Each file has the equivalent of approx 1M rows (yes - that's 1
>> million) ... If you multiply the number of rows (on average) by the
>> number of files - you can quickly see why using a db as a repository
>> would be a poor design choice.
>
> Sorry, I can't. 65 million records is a manageable database.


It's amazing how some people, once having set their mind on one thing,
won't change it - even when presented with the facts. Last time I
checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well - you
can't win them all.

Feb 6 '06 #9

E.T. Grey wrote:
> (earlier quoting snipped)
>
> It's amazing how some people, once having set their mind on one thing,
> won't change it - even when presented with the facts. Last time I
> checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well -
> you can't win them all.

Well, my question is "Do all 65 billion records need to be active at all
times?" If not, a roll-up/archival strategy may reduce this to a
usable size.

-david-

Feb 6 '06 #10

noone wrote:
> (earlier quoting snipped)
>
> I agree... I have designed and deployed binary and ASCII data loads in
> excess of 250 million records/day. Searching the data was a piece of
> cake - if you actually design the database correctly.
>
> 65M records is peanuts to a database - even MySQL. With proper indexing
> you can do a direct-row lookup in 4-8 I/Os - not so with the path
> you are currently trying to traverse... you are looking at up to 65M
> reads - and reads are very expensive!!
>
> Use the proper tools/mechanisms for the job at hand...
>
> Michael Austin
> DBA


Please do not patronise me. Like NC, you completely overlooked the
obvious fact that the number of records we are talking about (if a
database design is used) runs into billions - not millions. Furthermore,
the datasets are time-series data and therefore order is of paramount
importance. Instead of trying to impose a design on me (without fully
understanding the problem), it would have been infinitely preferable if
you had simply answered the question I asked in the first place. But
judging by the way you have overlooked basic facts - whilst being
hell-bent that a db solution is *definitely* the way forward - you have
instantly lost any credibility you may have had - and consequently, I
will ignore any "advice" you care to offer in the future.

Feb 6 '06 #11

NC
E.T. Grey wrote:
>>> At last count, there are about 65,000 distinct files (and increasing)
>>>
>>> ... Each file has the equivalent of approx 1M rows (yes - that's 1
>>> million) ... If you multiply the number of rows (on average) by the
>>> number of files - you can quickly see why using a db as a repository
>>> would be a poor design choice.
>>
>> Sorry, I can't. 65 million records is a manageable database.
>
> It's amazing how some people, once having set their mind on one thing,
> won't change it - even when presented with the facts. Last time I
> checked, 65,000 x 1 million = 65 billion - not 65 million.


OK, I obviously made a stupid typo; I'll gladly correct it:

65 billion records is a manageable database

Even MySQL (which is often thought of as a departmental rather than
enterprise system, although with MySQL 5.0 available this may be
reconsidered) is capable of maintaining large databases. Since MySQL
3.23, you can store up to 65,536 terabytes using the MyISAM storage
engine (which effectively means that the size of your table is limited
only by your operating system's file size limit), or a mere 64 TB using
the InnoDB storage engine (but in this case, the file size limit does
not apply, because an InnoDB table can be spread over several files).
You stated earlier that a compressed set of one million records takes
10-15 kilobytes to store, so an uncompressed record would probably be
just a few bytes long. This is a load that a single server with a
properly configured RAID could handle...
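Back-of-envelope, with a guessed record size (nothing in the thread pins
it down beyond the 10-15 KB per million compressed rows):

<?php
// Illustrative arithmetic only - the 16 bytes/record is an assumption.
$files         = 65000;
$rows_per_file = 1000000;
$bytes_per_row = 16;

$total_rows  = $files * $rows_per_file;          // 65,000,000,000 rows
$total_bytes = $total_rows * $bytes_per_row;     // ~1.04e12 bytes

printf("%.1f billion rows, roughly %.2f TB of raw data\n",
       $total_rows / 1e9, $total_bytes / pow(1024, 4));
// Prints: 65.0 billion rows, roughly 0.95 TB of raw data
// - well inside the storage-engine limits above, though indexes and
//   per-row overhead would multiply the on-disk footprint.
?>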

Cheers,
NC

Feb 6 '06 #12
