"Me" wrote...
One of my concerns is that in the future
I will have users who
do not have or want to run SQL Server.
The approach many advocate is to modularize your application into layers,
so that you can easily exchange the data layer from one DBMS to another
when the need arises.
The best performance is almost always achieved when the database part of the
application is written specifically for the target database at hand.
So you should encapsulate all queries and other calls to the database in a
separate module, which the rest of the application can use without regard
to which database it is talking to. This module should have a coherent
interface, so that when you exchange it for another module the change is
transparent to the rest of the application.
Hence, a recommendation would be to define a separate interface for this
"database-proxy", which can then be reused for the next implementation.
Will I still be able to run
stored procedures and use
disconnected data sets if
I use ODBC?
Sure, but do the customers' databases even support stored procedures?
That is another argument for writing the database-proxy. Even if your proxy
for SQL Server is based heavily on stored procedures, a proxy for e.g. MySQL
must take a different approach, since MySQL doesn't support stored procedures
(yet).
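To illustrate (again only a hedged Python sketch: the procedure name,
table, and column are made up, and I assume an ODBC-style connection with
"?" placeholders for SQL Server versus a MySQLdb-style connection with
"%s" placeholders for MySQL):

    class SqlServerCustomerProxy:
        """Proxy for SQL Server: delegates to a stored procedure."""
        def __init__(self, connection):
            self._conn = connection   # assumed: ODBC-based DB-API connection

        def find_customer(self, customer_id):
            cur = self._conn.cursor()
            # ODBC call escape; 'usp_GetCustomer' is a hypothetical procedure
            cur.execute("{CALL usp_GetCustomer(?)}", (customer_id,))
            return cur.fetchone()

    class MySqlCustomerProxy:
        """Proxy for MySQL (no stored procedures): plain SQL instead."""
        def __init__(self, connection):
            self._conn = connection   # assumed: MySQLdb-style connection

        def find_customer(self, customer_id):
            cur = self._conn.cursor()
            cur.execute("SELECT * FROM customers WHERE customer_id = %s",
                        (customer_id,))
            return cur.fetchone()

Both classes present the same method to the caller, which is the whole
point of the proxy.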
Will there be any loss of performance?
Absolutely, but in some cases ODBC may be the only way to go, when there
are no other data providers available.
Hence, maybe you *should* write one database-proxy for ODBC *as well as*
for other DBMSs.
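Such a generic ODBC proxy could serve as the last-resort fallback behind
the same interface. A minimal sketch, assuming the pyodbc package and a
made-up DSN name (credentials and error handling omitted):

    import pyodbc

    class OdbcCustomerProxy:
        """Fallback proxy: talks to whatever DBMS the ODBC driver points
        at, using only plain, portable SQL."""
        def __init__(self, conn_str="DSN=CustomerDb"):  # hypothetical DSN
            self._conn = pyodbc.connect(conn_str)

        def find_customer(self, customer_id):
            cur = self._conn.cursor()
            cur.execute("SELECT * FROM customers WHERE customer_id = ?",
                        (customer_id,))
            return cur.fetchone()

The price is the performance noted above: the portable SQL cannot exploit
anything database-specific, so keep this proxy as the fallback rather than
the default.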
[Followup stripped down to only
microsoft.public.data.ado and
microsoft.public.data.odbc]
// Bjorn A