I once supported a similar application that recorded lab measurement data
to SQL Server. Data was first inserted into staging tables and then
imported into a separate schema for reporting and analysis. It is common
to segregate operational and reporting data like this so that you can
optimize each schema (including its indexes) for the different
requirements.
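Here is a rough sketch of that pattern in T-SQL. All of the table and
column names (dbo.MeasurementStaging, rpt.MeasurementHistory, and so on)
are placeholders for whatever fits your design, not anything prescribed:

-- Narrow staging table kept index-free so per-second inserts stay cheap.
CREATE TABLE dbo.MeasurementStaging (
    DeviceID      int           NOT NULL,
    MeasuredAt    datetime      NOT NULL,
    MeasuredValue decimal(18,6) NOT NULL
);

-- Reporting table clustered on the columns you search by most often.
CREATE TABLE rpt.MeasurementHistory (
    DeviceID      int           NOT NULL,
    MeasuredAt    datetime      NOT NULL,
    MeasuredValue decimal(18,6) NOT NULL,
    CONSTRAINT PK_MeasurementHistory
        PRIMARY KEY CLUSTERED (DeviceID, MeasuredAt)
);

-- Periodic import: lock the staging table so no rows arrive between the
-- copy and the delete, then move everything across in one transaction.
BEGIN TRAN;
INSERT INTO rpt.MeasurementHistory (DeviceID, MeasuredAt, MeasuredValue)
    SELECT DeviceID, MeasuredAt, MeasuredValue
    FROM dbo.MeasurementStaging WITH (TABLOCKX, HOLDLOCK);
DELETE FROM dbo.MeasurementStaging;
COMMIT TRAN;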
As Simon and Erland said, the key to performance with large tables is
appropriate indexing. SQL Server can certainly handle VLDBs (very large
databases), but be aware that performance is ultimately constrained by
your hardware, particularly disk I/O.
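For example, with a clustered index on (DeviceID, MeasuredAt) as in the
sketch above, a search for one device over a date range seeks directly to
the relevant rows instead of scanning the whole table. The device ID and
dates here are made up for illustration:

-- Resolved by an index seek rather than a scan of hundreds of GB.
SELECT DeviceID, MeasuredAt, MeasuredValue
FROM rpt.MeasurementHistory
WHERE DeviceID = 42
  AND MeasuredAt >= '20040101'
  AND MeasuredAt <  '20040201';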
--
Hope this helps.
Dan Guzman
SQL Server MVP
"NewbieNewsGrouper" <ne***************@hotmail.com> wrote in message
news:1e**************************@posting.google.com...
I need to log data collected from instruments in a laboratory at a
very high rate. I need to log timestamps and values measured from
various devices at a rate as high as once a second.
This means my table will grow to tens or hundreds of gigabytes within
a year. I want to know if SQL Server will be able to handle this much
data. In addition, is it better to separate this data into different
tables? Does a table become hard to search (by the timestamp field I am
logging) once it reaches a certain size?
Does anyone have experience or advice about this problem? Is there
possibly another product out there which is better suited for logging
time-based data like this than SQL Server?
Thanks