Hi,
I have an interesting problem. I have a (LARGE) set of historical data
that I want to keep on a central server, as several separate files. I
want a client process to be able to request the data in a specific file
by specifying the file name, start date/time and end date/time.
The files are in binary format, to conserve space on the server (as well
as to reduce processing time). The data in each file can be quite
large, covering several years. New data will be appended to
these files each day by a (PHP) script. The server machine is likely to
be a Unix machine, whereas my clients will be running on Windows
machines. My client program is written in C++.
My three main problems/questions are as follows:
1) Transfer method issue:
What is the best (i.e. most efficient and fast) way to transfer data
from the server to clients? I think SOAP is likely to be too slow,
because of the sheer size of the data.
2) Cross-platform issue:
How can I ensure that the (binary?) data sent from the Unix server
can be correctly interpreted at the client side?
3) Security issue:
How can I prevent clients from directly accessing the files (to prevent
malicious or accidental corruption of the data files)?
E.T. Grey wrote:
> I have an interesting problem. I have a (LARGE) set of historical data
> that I want to keep on a central server, as several separate files. I
> want a client process to be able to request the data in a specific
> file by specifying the file name, start date/time and end date/time.
> ...
Since this is going to be on a Unix machine, might I suggest one of the
Unix/Linux groups? Other than the fact that the files are going to be
appended to by a PHP script (which isn't part of your question), I don't
see anything indicating PHP is involved, much less a PHP question.
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp. js*******@attglobal.net
==================
Jerry Stuckle wrote:
> Since this is going to be on a Unix machine, might I suggest one of
> the Unix/Linux groups? Other than the fact that the files are going to
> be appended to by a PHP script (which isn't part of your question), I
> don't see anything indicating PHP is involved, much less a PHP
> question.
With the benefit of hindsight, I did not make myself clear. A further
clarification is thus in order:
I have implemented the server side of the solution using PHP. I am
communicating with the C++ front end using SOAP (i.e. communicating
between PHP on the server and C++ at the client).
I asked my first question because I assume SOAP is too heavy for file
transfer (OK, not directly a PHP question).
I asked my second question because the files will be created (and
appended to) using PHP scripts - and I was wondering whether binary
files written by PHP on Unix can be read by a C++ application
running on Windows.
I asked my third question because I'm a relative WAMP/LAMP and PHP
newbie and I do not fully understand security issues in this framework.
I simply know that I want to prevent clients from directly accessing the
files.
Hope this helps clarify things.
E.T. Grey wrote:
> I have a (LARGE) set of historical data that I want to keep on a
> central server, as several separate files.
How large exactly?
> I want a client process to be able to request the data in a specific
> file by specifying the file name, start date/time and end date/time.
The start/end date/time bit actually is a rather fat hint that you
should consider using a database... Searching through large files will
eat up enormous amounts of disk and processor time.
> New data will be appended to these files each day, by a (PHP) script.
Yet another reason to consider a database...
> What is the best (i.e. most efficient and fast) way to transfer data
> from the server to clients?
Assuming you are using HTTP, compressed (gzip) CSV will probably be the
fastest.
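
For instance, here is a minimal sketch of serving a requested date range
as gzip-compressed CSV from PHP. The rows() function is hypothetical -
it stands in for whatever actually reads and filters your data files:

<?php
// Hypothetical: rows() yields one [timestamp, value] record at a time
// for the requested file and date range, however the data is stored.
function rows(string $file, string $start, string $end): iterable
{
    /* ... open the file, skip to $start, stop after $end ... */
    yield ['2006-01-03 09:30:00', 42.5];
}

// ob_gzhandler negotiates gzip with the client (via Accept-Encoding)
// and sets the Content-Encoding header by itself.
ob_start('ob_gzhandler');
header('Content-Type: text/csv');

// In real code, validate these request parameters first.
$out = fopen('php://output', 'w');
foreach (rows($_GET['file'], $_GET['start'], $_GET['end']) as $row) {
    fputcsv($out, $row);   // one CSV line per record
}
fclose($out);
?>

The C++ client then only needs an HTTP library and a gzip decoder, both
of which are widely available.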
> How can I ensure that the (binary?) data sent from the Unix server
> can be correctly interpreted at the client side?
Why should the data be binary? Compressed CSV is likely to be at least
as compact as binary data, plus CSV will be human-readable, which
should help during debugging.
> How can I prevent clients from directly accessing the files (to
> prevent malicious or accidental corruption of the data files)?
Import them into a database and lock the originals in a safe place.
Cheers,
NC
E.T. Grey wrote:
> I asked my second question because the files will be created (and
> appended to) using PHP scripts - and I was wondering whether binary
> files written by PHP on Unix can be read by a C++ application running
> on Windows.
> ...
> I simply know that I want to prevent clients from directly accessing
> the files.
Well, as for reading and writing the files - C++ should be able to read
any file written by PHP, COBOL or any other language. You may be forced
to do some massaging of the bytes (byte order, struct padding), but that
should be all.
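
For instance, here is a sketch of writing fixed-width, explicitly
little-endian records from PHP - the record layout is made up for
illustration - so a C++ client on x86 Windows can read them back
without any byte swapping:

<?php
// Hypothetical record: 32-bit Unix timestamp + 64-bit double value.
// pack()'s 'V' code is always 32-bit unsigned little-endian, and 'e'
// is a little-endian double, so the byte order is pinned down
// regardless of the server's native endianness.
function append_record(string $file, int $timestamp, float $value): void
{
    $record = pack('Ve', $timestamp, $value);  // 4 + 8 = 12 bytes
    $fp = fopen($file, 'ab');                  // binary-safe append
    flock($fp, LOCK_EX);                       // guard concurrent appends
    fwrite($fp, $record);
    flock($fp, LOCK_UN);
    fclose($fp);
}

append_record('/var/data/history/FOO.dat', time(), 42.5);
?>

On the C++ side, the matching struct would need to be packed (no padding
bytes), or be read field by field, for the 12-byte records to line up.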
As for not accessing the files directly - just don't put them in your
web root directory (or anywhere below it). Then no one can access them
directly through the website.
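
For instance, a small gatekeeper script (a sketch - the directory and
parameter names are made up) can serve files from outside the web root
while validating the requested name, so clients never touch the files
themselves:

<?php
// Hypothetical gatekeeper: clients request ?file=FOO, never a raw path.
$dataDir = '/var/data/history';   // outside the web root

$name = isset($_GET['file']) ? $_GET['file'] : '';
// Whitelist the name to block traversal like "../../etc/passwd".
if (!preg_match('/^[A-Za-z0-9_-]+$/', $name)) {
    http_response_code(400);
    exit('bad file name');
}

$path = "$dataDir/$name.dat";
if (!is_file($path)) {
    http_response_code(404);
    exit('not found');
}

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
readfile($path);
?>

Since the web server only exposes this script, read access is all the
clients ever get; the files themselves stay writable only by the
nightly PHP append job.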
--
==================
Remove the "x" from my email address
Jerry Stuckle
JDS Computer Training Corp. js*******@attglobal.net
==================
NC wrote:
>> I have a (LARGE) set of historical data that I want to keep on a
>> central server, as several separate files.
> How large exactly?
At last count, there are about 65,000 distinct files (and increasing).
>> I want a client process to be able to request the data in a specific
>> file by specifying the file name, start date/time and end date/time.
> The start/end date/time bit actually is a rather fat hint that you
> should consider using a database... Searching through large files will
> eat up enormous amounts of disk and processor time.
Not necessarily true. Each file has the equivalent of approx 1M rows
(yes - that's 1 million) - yet the binary files (which use compression
algos) are approx 10K-15K in size. If you multiply the number of rows
(on avg) by the number of files - you can quickly see why using a db as
a repository would be a poor design choice.
>> New data will be appended to these files each day, by a (PHP) script.
> Yet another reason to consider a database...
See above.
>> What is the best (i.e. most efficient and fast) way to transfer data
>> from the server to clients?
> Assuming you are using HTTP, compressed (gzip) CSV will probably be
> the fastest.
This involves converting the read data to a string first, before
(possibly) zipping it and sending it. This incurs overhead (that I
would like to avoid) on both server and client.
>> How can I ensure that the (binary?) data sent from the Unix server
>> can be correctly interpreted at the client side?
> Why should the data be binary? Compressed CSV is likely to be at least
> as compact as binary data, plus CSV will be human-readable, which
> should help during debugging.
See above.
>> How can I prevent clients from directly accessing the files (to
>> prevent malicious or accidental corruption of the data files)?
> Import them into a database and lock the originals in a safe place.
> Cheers,
> NC
E.T. Grey wrote:
>> How large exactly?
> At last count, there are about 65,000 distinct files (and increasing).
> ...
> Each file has the equivalent of approx 1M rows (yes - that's 1 million)
> ...
> If you multiply the number of rows (on avg) by the number of files -
> you can quickly see why using a db as a repository would be a poor
> design choice.
Sorry, I can't. 65 million records is a manageable database.
> This involves converting the read data to a string first, before
> (possibly) zipping it and sending it. This incurs overhead (that I
> would like to avoid) on both server and client.
And yet you are willing to convert EVERY BIT of that data when you
search through it...
Cheers,
NC
NC wrote:
>> If you multiply the number of rows (on avg) by the number of files -
>> you can quickly see why using a db as a repository would be a poor
>> design choice.
> Sorry, I can't. 65 million records is a manageable database.
I agree... I have designed and deployed binary and ASCII data loads in
excess of 250 million records/day. Searching the data was a piece of
cake - if you know how to actually design the database correctly.
65M records is peanuts to a database - even MySQL. With proper indexing
you can do a direct-row lookup in 4-8 I/Os - not so with the path
you are currently trying to traverse... you are looking at up to 65M
reads - and reads are very expensive!!
Use the proper tools/mechanisms for the job at hand...
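
As an illustration (table and column names are hypothetical), a schema
keyed on series + timestamp turns the date-range request into an indexed
seek instead of a scan:

<?php
/* Hypothetical schema, one row per observation:
     CREATE TABLE history (
         series_id INT UNSIGNED NOT NULL,
         ts        DATETIME     NOT NULL,
         value     DOUBLE       NOT NULL,
         PRIMARY KEY (series_id, ts)   -- the index that does the work
     );
   A range query on (series_id, ts) touches only the matching rows. */
$db = new mysqli('localhost', 'user', 'pass', 'histdb');

$seriesId = 42;
$start    = '2006-01-01 00:00:00';
$end      = '2006-12-31 23:59:59';

$stmt = $db->prepare(
    'SELECT ts, value FROM history
      WHERE series_id = ? AND ts BETWEEN ? AND ?
      ORDER BY ts'
);
$stmt->bind_param('iss', $seriesId, $start, $end);
$stmt->execute();
$stmt->bind_result($ts, $value);
while ($stmt->fetch()) {
    // stream each row back to the client here
}
?>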
Michael Austin
DBA
(stuff snipped)
NC wrote:
> Sorry, I can't. 65 million records is a manageable database.
It's amazing how some people, once they have set their mind on one
thing, won't change it - even when presented with the facts. Last time I
checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well - you
can't win them all.
E.T. Grey wrote:
>> Sorry, I can't. 65 million records is a manageable database.
> It's amazing how some people, once they have set their mind on one
> thing, won't change it - even when presented with the facts. Last time
> I checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well -
> you can't win them all.
Well, my question is: "Do all 65 billion records need to be active at
all times?" If not, a roll-up/archival strategy may reduce this to a
usable size.
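
For instance (table names are hypothetical), detail rows older than a
year could be collapsed into daily aggregates by a scheduled PHP job:

<?php
// Hypothetical roll-up: summarize old rows, then delete the detail,
// shrinking the active table to a workable size.
$db = new mysqli('localhost', 'user', 'pass', 'histdb');
$cutoff = date('Y-m-d', strtotime('-1 year'));   // no user input here

$db->begin_transaction();
$db->query(
    "INSERT INTO history_daily (series_id, day, min_v, max_v, avg_v)
     SELECT series_id, DATE(ts), MIN(value), MAX(value), AVG(value)
       FROM history
      WHERE ts < '$cutoff'
      GROUP BY series_id, DATE(ts)"
);
$db->query("DELETE FROM history WHERE ts < '$cutoff'");
$db->commit();
?>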
-david-