Data transfer problem - ideas/solutions wanted (please)

Hi,

I have an interesting problem. I have a (LARGE) set of historical data
that I want to keep on a central server, as several separate files. I
want a client process to be able to request the data in a specific file
by specifying the file name, start date/time and end date/time.

The files are in binary format, to conserve space on the server (as well
as to speed up processing). The data in each file can be quite
large, covering several years of data. New data will be appended to
these files each day, by a (PHP) script. The server machine is likely to
be a Unix machine, whereas my clients will be running on Windows
machines. My client program is written in C++.

My three main problems/questions are as follows:

1). Transfer method issue:
What is the best (i.e. most efficient and fastest) way to transfer data
from the server to clients? I think SOAP is likely to be too slow,
because of the sheer size of the data.

2). Cross-platform issue:
How can I ensure that the (binary?) data sent from the Unix server
can be correctly interpreted at the client side?

3). Security issue:
How can I prevent clients from directly accessing the files (to prevent
malicious or accidental corruption of the data files)?

Feb 4 '06 #1
E.T. Grey wrote:
(original post snipped)

Since this is going to be on a Unix machine, might I suggest one of the
Unix/Linux groups? Other than the fact this is going to be appended to
by a PHP script (which isn't part of your question), I don't see
anything indicating PHP is involved, much less a PHP question.

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Feb 4 '06 #2


Jerry Stuckle wrote:
E.T. Grey wrote:
(original post snipped)


Since this is going to be on a Unix machine, might I suggest one of the
Unix/Linux groups? Other than the fact this is going to be appended to
by a PHP script (which isn't part of your question), I don't see
anything indicating PHP is involved, much less a PHP question.


With the benefit of hindsight, I did not make myself clear. A further
clarification is thus in order:

I have implemented the server side of the solution using PHP. I am
communicating with the C++ front end using SOAP (i.e. communicating
between PHP on the server and C++ at the client).

My first question was asked because I assume SOAP is too heavy for file
transfer (OK, not directly a PHP question).

My second question was asked because the files will be created (and
appended to) using PHP scripts, and I was wondering whether binary files
written by PHP on Unix can be read correctly by a C++ application
running on Windows.

My third question was asked because I'm a relative WAMP/LAMP & PHP
newbie and I do not fully understand the security issues in this
framework. I simply know that I want to prevent clients from directly
accessing the files.

Hope this helps clarify things

Feb 4 '06 #3
NC
E.T. Grey wrote:

I have a (LARGE) set of historical data that I want to keep
on a central server, as several separate files.
How large exactly?
I want a client process to be able to request the data in a
specific file by specifying the file name, start date/time and
end date/time.
The start/end date/time bit actually is a rather fat hint that you
should consider using a database... Searching through large files will
eat up enormous amounts of disk and processor time.
New data will be appended to these files each day, by a
(PHP) script.
Yet another reason to consider a database...
What is the best (i.e. most efficient and fastest) way to transfer data
from the server to clients?
Assuming you are using HTTP, compressed (gzip) CSV will probably be the
fastest.
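Something along these lines, perhaps - an untested sketch, and the file
layout, directory and parameter names are made up, so adjust to your own
data:

<?php
// getdata.php -- stream one dataset as gzip-compressed CSV.
// Example request: getdata.php?set=ibm_daily&from=2005-01-01&to=2005-12-31
ob_start('ob_gzhandler');            // compresses output if the client accepts gzip

$set  = basename($_GET['set']);      // strip any path components from the name
$from = strtotime($_GET['from']);
$to   = strtotime($_GET['to']);
$path = '/home/data/archive/' . $set . '.csv';   // directory outside the web root

if ($from === false || $to === false || !is_readable($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: text/csv');

$fp = fopen($path, 'r');
while (($row = fgetcsv($fp, 4096)) !== false) {
    $t = strtotime($row[0]);         // assumes the first column is the timestamp
    if ($t >= $from && $t <= $to) {
        echo implode(',', $row), "\n";
    }
}
fclose($fp);
?>

If the client really needs raw binary instead, the same approach works
with fread() and no CSV parsing - the gzip output buffering compresses
whatever you send.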
How can I ensure that the (binary?) data sent from the Unix server
can be correctly interpreted at the client side?
Why should the data be binary? Compressed CSV is likely to be at least
as compact as binary data, plus CSV will be human-readable, which
should help during debugging.
How can I prevent clients from directly accessing the files
(to prevent malicious or accidental corruption of the data files)?


Import them into a database and lock the originals in a safe place.

Cheers,
NC

Feb 4 '06 #4
E.T. Grey wrote:


(earlier posts snipped)


Well, as for reading and writing the files - C++ should be able to read
any file written by PHP, COBOL or any other language. You may be forced
to do some massaging of the bytes, but that should be all.
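For what it's worth, most of that massaging disappears if the PHP side
writes fixed-width, explicitly little-endian fields. A rough sketch - the
record layout here is invented, just to show the idea:

<?php
// Append one record: 4-byte Unix timestamp + price scaled to an integer
// number of 1/10000ths. 'V' means unsigned 32-bit, little-endian, regardless
// of the server's CPU, so a Win32/x86 client can read the 8-byte records
// straight into a struct with no byte swapping.
function append_record($fp, $timestamp, $price)
{
    fwrite($fp, pack('VV', $timestamp, (int) round($price * 10000)));
}

$fp = fopen('/home/data/archive/ibm_daily.dat', 'ab');
append_record($fp, time(), 123.4567);
fclose($fp);
?>

Floats are the awkward case (pack()'s 'f' and 'd' formats use the
machine's own byte order), which is why the sketch scales the price to an
integer instead.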

As for not accessing the files directly - just don't put them in your
web root directory (or anywhere below it). Then no one can access them
through the website.
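In other words, something like this (paths are only an example):

/var/www/html/              <- web root: only the PHP entry script lives here
    getdata.php
/home/data/archive/         <- data files live here, not reachable by any URL
    ibm_daily.dat
    ...

The PHP script opens files under /home/data/archive by name (after
stripping any ".." from the requested name), so the web server only ever
hands out what the script chooses to send.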

--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp.
js*******@attglobal.net
==================
Feb 4 '06 #5


NC wrote:
E.T. Grey wrote:
I have a (LARGE) set of historical data that I want to keep
on a central server, as several separate files.

How large exactly?


At last count, there are about 65,000 distinct files (and increasing)

I want a client process to be able to request the data in a
specific file by specifying the file name, start date/time and
end date/time.

The start/end date/time bit actually is a rather fat hint that you
should consider using a database... Searching through large files will
eat up enormous amounts of disk and processor time.


Not necessarily true. Each file has the equivalent of approx 1M rows
(yes - that's 1 million) - yet the binary files (which use compression
algorithms) are only approx 10-15 KB in size. If you multiply the number
of rows (on avg) by the number of files - you can quickly see why using
a db as a repository would be a poor design choice.
New data will be appended to these files each day, by a
(PHP) script.

Yet another reason to consider a database...

See above
What is the best (i.e. most efficient and fastest) way to transfer data
from the server to clients?

Assuming you are using HTTP, compressed (gzip) CSV will probably be the
fastest.

This involves converting the read data to a string first, before
(possibly) zipping it and sending it. This incurs overhead (that I
would like to avoid) on both server and client.
How can I ensure that the (binary?) data sent from the Unix server
can be correctly interpreted at the client side?

Why should the data be binary? Compressed CSV is likely to be at least
as compact as binary data, plus CSV will be human-readable, which
should help during debugging.


See above
How can I prevent clients from directly accessing the files
(to prevent malicious or accidental corruption of the data files)?

Import them into a database and lock the originals in a safe place.

Cheers,
NC


Feb 4 '06 #6
NC
E.T. Grey wrote:
I have a (LARGE) set of historical data that I want to keep
on a central server, as several separate files.

How large exactly?


At last count, there are about 65,000 distinct files (and increasing)

...
Each file has the equivalent of approx 1M rows (yes - that's 1 million)
...
If you multiply the number of rows (on avg) by the number of files -
you can quickly see why using a db as a repository would be a
poor design choice.
Sorry, I can't. 65 million records is a manageable database.
This involves converting the read data to a string first, before
(possibly) zipping it and sending it. This incurs overhead (that I
would like to avoid) on both server and client.


And yet you are willing to convert EVERY BIT of that data when you
search through it...

Cheers,
NC

Feb 4 '06 #7
NC wrote:
E.T. Grey wrote:
I have a (LARGE) set of historical data that I want to keep
on a central server, as several separate files.
How large exactly?


At last count, there are about 65,000 distinct files (and increasing)


...
Each file has the equivalent of approx 1M rows (yes - that's 1 million)


...
If you multiply the number of rows (on avg) by the number of files -
you can quickly see why using a db as a repository would be a
poor design choice.

Sorry, I can't. 65 million records is a manageable database.


I agree... I have designed and deployed binary and ASCII data loads in
excess of 250 million records/day. Searching the data was a piece of
cake - if you know how to actually design the database correctly.

65M records is peanuts to a database - even MySQL. With proper indexing
you can do a direct-row lookup in 4-8 I/Os or fewer - not so with the
path you are currently trying to traverse... you are looking at up to
65M reads, and reads are very expensive!

Use the proper tools/mechanisms for the job at hand...
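To make that concrete, the sort of layout I mean is roughly this - table
and column names are invented, and the snippet is untested:

<?php
// One row per observation. The composite primary key keeps each series
// ordered by time, so a date-range fetch is a single index range scan
// instead of a crawl over the whole table.
mysql_connect('localhost', 'user', 'password');
mysql_select_db('history');

mysql_query("
    CREATE TABLE IF NOT EXISTS observation (
        series_id INT UNSIGNED NOT NULL,  -- which of the ~65,000 datasets
        obs_time  DATETIME     NOT NULL,
        value     DOUBLE       NOT NULL,
        PRIMARY KEY (series_id, obs_time)
    )");

// The client's request (file name + start/end date) maps straight onto this:
$series_id = 42;
$from = '2005-01-01 00:00:00';
$to   = '2005-12-31 23:59:59';
$result = mysql_query(sprintf(
    "SELECT obs_time, value FROM observation
      WHERE series_id = %d AND obs_time BETWEEN '%s' AND '%s'
      ORDER BY obs_time",
    $series_id, mysql_real_escape_string($from), mysql_real_escape_string($to)));
?>

The daily PHP append then becomes a batch of INSERTs instead of a file
write.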
Michael Austin
DBA
(stuff snipped)
Feb 4 '06 #8


NC wrote:
E.T. Grey wrote:
I have a (LARGE) set of historical data that I want to keep
on a central server, as several separate files.
How large exactly?


At last count, there are about 65,000 distinct files (and increasing)


...
Each file has the equivalent of approx 1M rows (yes - that's 1 million)


...
If you multiply the number of rows (on avg) by the number of files -
you can quickly see why using a db as a repository would be a
poor design choice.

Sorry, I can't. 65 million records is a manageable database.


It's amazing how some people, once having set their mind on one thing,
won't change it - even when presented with the facts. Last time I
checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well - you
can't win them all.

Feb 6 '06 #9
E.T. Grey wrote:


(stuff snipped)

Well, my question is "Do all 65 billion records need to be active at all
times?" If not, doing some roll-up/archival strategy may reduce this to
a usable size.

-david-

Feb 6 '06 #10


noone wrote:
(stuff snipped)


Please do not patronise me. Like NC, you completely overlooked the
obvious fact that the number of records we are talking about (if a
database design is used) runs into billions - not millions. Furthermore,
the datasets are time-series data and therefore order is of paramount
importance. Instead of trying to impose a design on me (without fully
understanding the problem), it would have been infinitely preferable if
you had simply answered the question I had asked in the first place. But
judging by the way you have overlooked basic facts - whilst being
hell-bent that a db solution is *definitely* the way forward - you have
instantly lost any credibility you may have had - and consequently, I
will ignore any "advice" you care to offer in the future.

Feb 6 '06 #11
NC
E.T. Grey wrote:
NC wrote:
E.T. Grey wrote:
At last count, there are about 65,000 distinct files (and increasing)

...
Each file has the equivalent of approx 1M rows (yes - that's 1 million)

...
If you multiply the number of rows (on avg) by the number of files -
you can quickly see why using a db as a repository would be a
poor design choice.


Sorry, I can't. 65 million records is a manageable database.


It's amazing how some people, once having set their mind on one thing,
won't change it - even when presented with the facts. Last time I
checked, 65,000 x 1 million = 65 billion - not 65 million.


OK, I obviously made a stupid typo; I'll gladly correct it:

65 billion records is a manageable database

Even MySQL (which is often thought of as a departmental rather than
enterprise system, although with MySQL 5.0 available this may be
reconsidered) is capable of maintaining large databases. Since MySQL
3.23, you can store up to 65536 terabytes using the MyISAM storage
engine (which effectively means that the size of your table is limited
only by your operating system's file size limit) or a mere 64 TB using
the InnoDB storage engine (but in this case, the file size limit does
not apply, because an InnoDB table can be spread over several files).
You stated earlier that a compressed set of one million records takes
10-15 kilobytes to store, so an uncompressed record would probably be
just a few bytes long. This is a load that a single server with a
properly configured RAID could handle...
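(For a rough sense of scale, assuming something like 8 bytes per
uncompressed record: 65,000 files x 1,000,000 rows = 65,000,000,000 rows,
and 65 billion x 8 bytes is on the order of 500 GB of raw data. That is a
lot, but nowhere near the table-size limits above, and with the kind of
indexing Michael described each date-range request only has to read the
rows it returns, plus a handful of index pages.)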

Cheers,
NC

Feb 6 '06 #12
