
How much to convert to SQL server

Jan
Hi:

I have an Access database that's been running (in one form or another)
for a couple of different clients for a few years. Now a new client has
requested that it be implemented with a SQL server back-end. I'm doing
my best to learn about SQL server, and I plan to leave the front-end
more or less as-is, just linking to the SQL server back end, but here's
a basic question:

The db has a front-end linked to two back-ends. One of the back-ends has
completely static data, and so in an all-Access installation it sits on
the C drive along with the front end. Only the 2nd backend sits
on the server.

Now, should I convert both back-ends to SQL server, or just the one on
the server? Reasons, pros, cons?

TIA.

Jan
Feb 15 '06 #1
Your new client is the one who wants the conversion and, presumably, is
paying you to convert the database. You should be asking the client what
_they_ want, but be ready with logical arguments if they want something
"flakey".

The following applies only to Access MDB, using ODBC drivers, to link to an
SQL server database:

Relatively unchanging lookup tables are often kept local to an Access
application -- States in the US is a prime example, company structure
(divisions, departments) is another. I'd personally see no need to migrate
static data to the back-end and have to access it across a
possibly-slower-than-I'd-like network.

Still, the client may have their own reasons (or even prejudices) and ideas
about what can reasonably be moved to the backend.

Larry Linson
Microsoft Access MVP

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]

Feb 15 '06 #2
Jan
Hi, Larry:

Thanks for the quick reply.

Unfortunately, the client isn't directly mine; I'm more or less a
subcontractor. I'm not sure they know exactly why they want SQL Server,
except that they think it's the "better" way. And I'm not in a position
to argue. They do have some issues with lots of users possibly spread
out over several offices.

Anyway, I just wanted to be sure there wouldn't be some advantage to
having all the linked tables be converted to SQL Server. And yes, I'm
planning to use the ODBC link to the SQL Server backend. My preference
is certainly to keep as much of the data local, and in Access, as
possible, if only because it's what I know best.
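
For reference, here's the sort of DAO relink I have in mind - just a
minimal sketch, and the server, database, and table names below are
placeholders:

    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    ' Link one SQL Server table over a DSN-less ODBC connection.
    Set tdf = db.CreateTableDef("dbo_Units")
    tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                  "DATABASE=MyDb;Trusted_Connection=Yes"
    tdf.SourceTableName = "dbo.Units"
    db.TableDefs.Append tdf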

Anyone have any other views?

Jan

Larry Linson wrote:
[quoted text clipped]


Feb 15 '06 #3
Jan wrote:
[quoted text clipped]

Well, even if you want to keep a few tables local (I see no reason to),
you would often want another copy of the same table on the server if it
is ever used in a query. You don't want to create queries in Access
that use both a local table and a link to a server table. That would
definitely be an inefficient query to run unless the local table was
very small.
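
Pushing such a copy up is just an append query once the server table is
linked - a sketch, assuming a linked table named dbo_StateLookup with
the same columns as the local StateLookup (both names invented):

    ' One-off copy of a local lookup table into its SQL Server twin.
    CurrentDb.Execute _
        "INSERT INTO dbo_StateLookup SELECT * FROM StateLookup;", _
        dbFailOnError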

In my Access FE/SQL Server BE apps ALL the tables are on the server and
performance is just fine on a standard 100 Mbps LAN.

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com

Feb 16 '06 #4
If you are using transactions, you probably want to re-write
any transactions that include the static tables. If you use
local jet tables, the transactions are more likely to block
internally, and if you use SQL Server tables, the transactions
are likely to block other users.
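
A minimal DAO sketch of what I mean - keep the server-side work inside
the transaction and leave the static tables out of it entirely (table
and field names here are made up):

    Dim wrk As DAO.Workspace
    Set wrk = DBEngine.Workspaces(0)
    wrk.BeginTrans
    ' Only the linked SQL Server table takes part in the transaction.
    wrk.Databases(0).Execute _
        "UPDATE dbo_Orders SET Posted = True " & _
        "WHERE OrderID = 123;", dbFailOnError
    wrk.CommitTrans
    ' Reads and writes against the static lookup tables happen
    ' outside the transaction, so they can't block anyone.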

(david)

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]


Feb 16 '06 #5
Jan
No transactions in this one.

david epsom dot com dot au wrote:
[quoted text clipped]

Feb 16 '06 #6
Jan
Hi, Rick:

This is a key one, I guess, because there are certainly queries that run
with some tables from each of the different sets. Most of the tables
are in the local set, but there are a crucial few that need to be on
the server. In the past, when it has been an entirely Access database,
I moved all the static tables to the local drive because it vastly
improved performance.

So do you think that with a SQL server backend I wouldn't run into the
slowness issues that I had with an all-Access db?

I have to say I'm really anxious about this whole conversion process.
The client is out of town and I have to make it work in a very short
time period when I'm out there. I can test it here on my machine, with
the "developers" version of SQL Server, but I worry that it isn't a good
proxy for the
"real thing."

Jan

Rick Brandt wrote:
[quoted text clipped]

Feb 16 '06 #7

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]


If a query is slow because it is poorly designed then putting the tables on a
server won't magically fix that. If a query is slow because it is working
against very large tables then putting those tables on a server won't magically
cure that either.

There are many advantages to moving to a server-based data engine. Raw query
performance is not one of them. People often see performance gains when setting
up a new box for SQL Server because they will usually build that server with
high-spec'd hardware. Several years ago just about any server would be WAY more
capable than a desktop PC. That is still true, but not to the degree that it
once was because desktop PCs are simply very capable these days.

Client/Server performance is largely driven by minimizing traffic over
the LAN and by good design. Those same strategies would likely result
in an MDB-based database that also performed well.
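
As a trivial illustration of minimizing traffic (table and column names
invented): ask the server for exactly the rows and columns you need,
rather than opening the whole linked table and filtering client-side.

    Dim rs As DAO.Recordset
    ' The WHERE clause is evaluated on the server, so only the
    ' matching rows ever cross the LAN.
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT UnitID, Rent FROM dbo_Units " & _
        "WHERE Region = 'North';", dbOpenSnapshot)
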
--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com
Feb 16 '06 #8

"Rick Brandt" <ri*********@hotmail.com> wrote
Well even if you want to keep a few tables
local (I see no reason to) you would
often want another copy of the same table
on the server if it is ever used in a
query.
I specifically said, rarely-changing _lookup_ tables, with examples. I've
done this, with a master copy on the server, from which the local tables
are refreshed at startup, if need be. Running over a WAN, it is likely
you'll see some performance improvement if there are quite a few.
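
The startup refresh amounts to two action queries - a sketch with
hypothetical names, assuming the master copy is linked as
dbo_StateLookup:

    ' Refresh the local lookup copy from the master on the server.
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "DELETE * FROM StateLookup;", dbFailOnError
    db.Execute _
        "INSERT INTO StateLookup SELECT * FROM dbo_StateLookup;", _
        dbFailOnError
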
You don't want to create queries in Access that
use both a local table and a link to a server table.
That would definitely be an inefficient query to
run unless the local table was very small).
I _strongly_ agree. In fact, even if the local table is very small, the
performance impact can be substantial because the entire table may be
brought from the server to the user's machine for the join. <OUCH!>
In my Access FE/SQL Server BE apps ALL
the tables are on the server and performance is
just fine on a standard 100 mb LAN.


Of course. But, move some of your users to the boonies on a WAN that shares
a T-1 line and you are likely to see a discernible difference. That's where
you may need to resort to "performance tricks".

If you have appropriate instrumentation, the difference may be "measurable"
on a 100 Mbps LAN, but it's unlikely to be discernible to someone sitting in
front of a screen. We didn't bother with local tables when everyone was on
the high-speed LAN.

Larry Linson
Microsoft Access MVP
Feb 16 '06 #9
Jan
I don't know why exactly the database runs slow with the tables on the
server; I only know that when I moved them to the local disk, a
particular activity (which involves a lot of data manipulation, writing
to a Word document, and a variety of other tasks) went from taking 5
minutes down to less than one. I always blamed it on the slowness of
the client's network, but frankly didn't spend a lot of time
contemplating the reasons; the solution was very effective and I left it
at that.

Maybe this will help you understand the situation: the users are
entering data on rental units. The database is running a model that
involves manipulating a large amount of stored data on other rental
units (that's the static part; the research is done once and stays the
same for a year), but the newly-entered unit has to be compared against
it, and calculations made, in order to come to some recommendations on
that unit. Thus, most of the data is static, but the new record has to
be involved.

Make any sense?

So maybe the question is this:
if I haven't had a performance hit from running queries that involved
both local and server tables in the past, would I have that hit when the
"server" tables are SQL Server and the local tables are Access?

Jan

Rick Brandt wrote:
"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
Hi, Rick:

This is a key one, I guess, because there are certainly queries
that run with some tables from each of the different sets. Most of
the tables are in the local set, but there are a crucial few that
need to be on the server. In the past, when it has been an
entirely Access database, I moved all the static tables to the
local drive because it vastly improved performance.

So do you think that with a SQL server backend I wouldn't run into
the slowness issues that I had with an all-Access db?

If a query is slow because it is poorly designed then putting the
tables on a server won't magically fix that. If a query is slow
because it is working against very large tables then putting those
tables on a server won't magically cure that either.

There are many advantages to moving to a server-based data engine.
Raw query performance is not one of them. People often see
performance gains when setting up a new box for SQL Server because
they will usually build that server with high-spec'd hardware.
Several years ago just about any server would be WAY more capable
than a desktop PC. That is still true, but not to the degree that it
once was because desktop PCs are simply very capable these days.

Client/Server performance is largely driven by minimizing traffic
over the LAN and good design. Those same strategies would likely
result in an MDB Based database that also performed well.

Feb 16 '06 #10

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]


Impossible to predict, but on the surface I can think of no reason for
performance to be worse. I just don't see much reason to believe it would be
any better either.

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com
Feb 16 '06 #11
Jan
Well, it doesn't have to be better, it just has to be comparable.

Slightly different question here: will my testing on the local copy of
SQL Server be in any way comparable to how the thing will behave in a
real production environment? I know that when I develop in general,
systems that work fast on my single-user machine can get very poky once
they hit the client's network. Is it the same with the Developer
Edition of SQL Server vs. the full-blown one?

Rick Brandt wrote:
"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
I don't know why exactly the database runs slow with the tables on
the server; I only know that when I moved them to the local disk, a
particular activity (which involves a lot of data manipulation,
writing to a Word document, and a variety of other tasks) went
from taking 5 minutes down to less than one. I always blamed it
on the slowness of the client's network, but frankly didn't spend a
lot of time contemplating the reasons; the solution was very
effective and I left it at that.

Maybe this will help you understand the situation: the users are
entering data on rental units. The database is running a model
that involves manipulating a large amount of stored data on other
rental units (that's the static part; the research is done once and
stays the same for a year), but it has to be compared and
calculations made in order to come to some recommendations on the
newly-entered unit. Thus, most of the data is static, but that new
record has to be involved.

Make any sense?

So maybe the question is this: if I haven't had a performance hit
from running queries that involved both local and server tables in
the past, would I have that hit when the "server" tables are SQL
Server and the local tables are Access?

Impossible to predict, but on the surface I can think of no reason
for performance to be worse. I just don't see much reason to believe
it would be any better either.

Feb 16 '06 #12
If it's been running for a few years, undoubtedly it needs a complete
revision. Take the opportunity to trash the whole thing and start over.
I get sick to my stomach whenever I see anything of mine that is more
than two years old.

Feb 16 '06 #13
I had a client ask me to substitute the Jet backend with SQL Server. The
logic they used for deciding which tables to put on SQL Server was:

"Our data needs to be on SQL Server."

Therefore, presumably, data which was not theirs (like zip code tables,
etc.) could remain local. Also, my proprietary tables could remain local
as well. I didn't do the job and haven't thoroughly read the other posts
in this thread; just passing on what their criteria were.
Jan wrote:
No transactions in this one.
[quoted text clipped]


--
Message posted via AccessMonster.com
http://www.accessmonster.com/Uwe/For...ccess/200602/1
Feb 16 '06 #14
Jan wrote:
I don't know why exactly the database runs slow with the tables on the
server; I only know that when I moved them to the local disk, a
particular activity (which involves a lot of data manipulation, writing
to a Word document, and a variety of other tasks) went from taking 5
minutes down to less than one.
That's a phenomenally long time. I call that coffee-break coding...as
in I want to run the process but take a break.

If I'm not getting results back in 5 seconds (absolute tops) or less,
then there is a problem. Of course, I'm not dealing with millions of
records, but for a typical desktop app 5 minutes is way, way too long.

Back in the old days I'd immediately suspect a field I am filtering on,
in a where clause, is not indexed. But in Access where I see people
discussing their coffee-break queries they usually are doing sub-selects
in the query. That's nearly a guaranteed yawner of a query.
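
Checking the first suspect is cheap - Jet takes the index DDL directly
(table and field names are examples only):

    ' Index a field that appears in WHERE clauses or joins.
    CurrentDb.Execute _
        "CREATE INDEX idxUnitRegion ON Units (Region);", dbFailOnError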

I suspect that most regulars in this newsgroup could get your query down
to a couple of seconds if they were working on your application.

I always blamed it on the slowness of the client's network, but frankly didn't spend a lot of time
contemplating the reasons; the solution was very effective and I left it
at that.
Time is money. That's why there's a FedEx. If I can shave 5 minutes of
processing on something that is done every weekday by 10 users, in a
year the time saving is about 210 hours. You do that for a few more
processes and you're talking significant time.
[quoted text clipped]
Well, I can see why the client may want to go to SQL Server, anything
that may help cut down that 5-minute processing time. I simply don't
see SQL Server as the optimum solution. If the time lag was bad before,
why would it be better with SQL Server? Doubtful. Will your client be
happy if, instead of a 5-minute lag, you cut it down to 4 minutes? I
wouldn't. The more instantaneous, the better.

You do mention a "large amount of data". That means nothing to me or
many developers. Does that mean millions of records? Or does it mean a
few thousand records with a couple hundred fields?

I would really look at your queries. If you use subselects in any of
them, change tactics. Check your indexing and filtering. Do some time
tests.

Feb 16 '06 #15
Br@dley
Lyle Fairfield wrote:
I get sick to my stomach whenever I see anything of mine that
is more than two years old.


I'm like that...
--
regards,

Br@dley
Feb 16 '06 #16
I don't know about different editions, but
my code runs much faster against the SQL Server
installed on my PC than it does running against
the SQL Server in another building in another
suburb on another subnet with a different name
server with the other production databases
also loaded.

(david)

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]

Feb 16 '06 #17
'Theoretically' anything can happen. Using a mixture of
local tables and ODBC links, Access is unable to do a
number of optimisations.

For example, if Access can't convert all of your nested
queries into a single SQL statement, it will run separate
SQL queries against the ODBC source. This never happens
with all-Jet tables or with all-SQL Server stored procedures:
internally, Jet executes its own query plan rather than SQL
text, and so does SQL Server, so neither needs to construct
separate SQL statements for complex nested queries.
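
One way to sidestep that, for what it's worth, is a pass-through
query - Access hands the SQL text to the server verbatim, so it can't
be carved up client-side. A sketch, with placeholder connection
details and names:

    Dim qdf As DAO.QueryDef
    ' A saved pass-through query, executed entirely by SQL Server.
    Set qdf = CurrentDb.CreateQueryDef("qryUnitsPassThrough")
    qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                  "DATABASE=MyDb;Trusted_Connection=Yes"
    qdf.SQL = "SELECT UnitID, Rent FROM dbo.Units WHERE UnitID = 123;"
    qdf.ReturnsRecords = True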

On the other hand, from what you say, you might be returning
a single record from your SQL Server, and matching that
to your cloud of static data. There is no reason to expect
that returning a single record from SQL Server will take
any longer than returning a single record from a File Server.

(david)

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]

Feb 16 '06 #18
"Br@dley" <no@mail.com.au> wrote in message
news:dt**********@news-02.connect.com.au...
Lyle Fairfield wrote:
I get sick to my stomach whenever I see anything of mine that
is more than two years old.


I'm like that...
--
regards,

Br@dley

I'm like that with my wife.
Feb 16 '06 #19
"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:11**********************@g47g2000cwa.googlegr oups.com...
If it's been running for a few years undoubtedly it needs a complete
revision. Take the opportunity to trash the whole thing and start over.
I get sick to my stomach whenever I see anything of mine that is more
than two years old.


I'm impressed. For me 6 months is usually enough :-)

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com
Feb 16 '06 #20
This is a very interesting subject that I am also involved in, but
which I don't fully understand. I have some questions and some
answers.....

1. I have a database program in Access with FE on workstations and BE
on a file server. It works fine. But I've never optimized anything for
anything; I just did it!

2. I need to convert the BE to MySQL; it doesn't matter why for now. If
I use ODBC and just link my existing FE without changing anything, will
performance be better, worse, or the same? My initial experiments
suggest that it depends what you are doing and what table type you use.
If I choose the MyISAM table type then most operations are fast,
especially ones where you are trying to display a single record, or run
a simple query. Faster than Access? Not really. The problem is with
more complex queries - they can take forever, but I haven't got to the
bottom of this yet. As a workaround I copy the necessary data to the
local machine and work on it there - it works for me but it's not
exactly elegant.

3. I hear people talking about optimizing FEs to use SQL Server, but
what does that mean? Does it mean using something other than the ODBC
linking to a BE that we are all used to? Or does it mean adopting a
totally different approach?

4. Am I right in assuming that the objective is to move as little data
as possible across the network, and that a "proper" client-server
approach is the way to go? And if that's right, then is an Access FE
and SQL BE doing that or not?

If anyone can throw any light on this ramble, I for one would love to
hear it.

Thanks
Dave

Feb 16 '06 #21
Jan
Thanks for your comments, but did you see the rest of what I said? Once
I moved all the static tables to the local drive, the process took less
than a minute. I believe that some of the slowness has to do with the
fact that this process is creating a complicated document in Word. Any
thoughts on whether I should convert the local tables to SQL Server when
I convert the ones on the server?

salad wrote:
[quoted text clipped]

Feb 16 '06 #22
Jan
Hi, Larry:

Regarding your last point about "the entire table may be brought from
the server..." so-- if I can make sure that the db is dealing with only
one record from the server table when working with the local tables, I
should be ok? I know this is possible, although I can't remember at the
moment if it's completely implemented in the current version.

Also, I've always had the "local" tables in a separate backend on the C:
drive, just because I'm so stuck in the frontend-backend model that it
didn't occur to me to actually put them in the front end! We're talking
a lot of tables here: close to a couple dozen, and they literally never
change except in the annual overhaul, which is done in an entirely
different process that the end-user doesn't see. (FYI, there are only
about 6 tables that need to stay on the server.)

Jan

Larry Linson wrote:
"Rick Brandt" <ri*********@hotmail.com> wrote
Well even if you want to keep a few tables local (I see no reason
to) you would often want another copy of the same table on the
server if it is ever used in a query.


I specifically said, rarely-changing _lookup_ tables, with examples.
I've done this, with a master copy on the server, from which the
local tables are refreshed at startup, if need be. Running over a
WAN, it is likely you'll see some performance improvment if there are
quite a few.
You don't want to create queries in Access that use both a local
table and a link to a server table. That would definitely be an
inefficient query to run unless the local table was very small).


I _strongly_ agree. In fact, even if the local table is very small,
the performance impact can be substantial because the entire table
may be brought from the server to the user's machine for the join.
<OUCH!>
In my Access FE/SQL Server BE apps ALL the tables are on the server
and performance is just fine on a standard 100 mb LAN.


Of course. But, move some of your users to the boonies on a WAN that
shares a T-1 line and you are likely to see a discernable difference.
That's where you may need to resort to "performance tricks".

If you have appropriate instrumentation, the difference may be
"measurable" on a 100 MBPS LAN, but it's unlikely to be discernable
to someone sitting in front of a screen. We didn't bother with local
tables when everyone was on the high-speed LAN.

Larry Linson Microsoft Access MVP

Feb 16 '06 #23
Jan
Well, actually I've done a lot of that in the past couple of years. It
improves every time we revise it for a new client. I'm glad to know I'm
not the only one who has that reaction to past work.

Lyle Fairfield wrote:
If it's been running for a few years undoubtedly it needs a complete
revision. Take the opportunity to trash the whole thing and start
over. I get sick to my stomach whenever I see anything of mine that
is more than two years old.

Feb 16 '06 #24
Jan
Hi, David:

I was afraid of that. It makes it really hard to troubleshoot, doesn't it?

david epsom dot com dot au wrote:
[quoted text clipped]


Feb 16 '06 #25
Jan wrote:
Thanks for your comments, but did you see the rest of what I said? Once
I moved all the static tables to the local drive, the process took less
than a minute.
I still find a minute exceptionally long.

I believe that some of the slowness has to do with the fact that this
process is creating a complicated document in Word. Any thoughts on
whether I should convert the local tables to SQL Server when I convert
the ones on the server?
I usually look at my bottlenecks. What processes are fast, which are
slow? I address the processes that are slow. I look at my code and put
in breakpoints to determine where something might be slow. Once I know
what's slow, I know what to fix.
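
Even a crude stopwatch around each step will point at the slow one.
For instance (the two procedure names are hypothetical stand-ins for
your real steps):

    Dim t As Single
    t = Timer
    RunTheModelQueries        ' step under test
    Debug.Print "Queries: " & Format(Timer - t, "0.00") & " s"
    t = Timer
    BuildTheWordDocument      ' step under test
    Debug.Print "Word doc: " & Format(Timer - t, "0.00") & " s"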

For example, you say that the Word document is slow. But you really
don't know. If I were the owner of the company, I'd get myself a Word
guru as a consultant. Someone that really knows their stuff. And have
them look at the document and see how I could get that document to mail
merge as fast as possible. I might do this before I rework the entire
app and bring in SQL Server. A few hours with a Word guru will be a
heck of a lot cheaper.

I remember taking over an app (not done in Access) years ago where a few
reports, run daily, would each take over 30 minutes to process. I got
them down to the 2-to-3-second range. I was saving about 4 hours a day
in processing time just by fixing up those reports. The rest of the
system was built for the developer, not for end-user comfort. So the
entire system ran slow and crashed a lot. What amazed me is that the
client even employed the developer...he was atrocious. I think he felt
sorry for the developer. Oh well, the client went off on tangents like
web stuff and poor database planning (ex: going from relational to
flat-file processing), utilized that developer for the new technology,
and didn't focus on what he knew best. He's now out of business.

Time is of the essence.

If this is coming over a WAN, then there's not much to expect but slow.

salad wrote:
[quoted text clipped]

Feb 16 '06 #26
I've got complex Word automation mail merge code.
It used to take 45 minutes to print 45 sets.

Then the client got a faster computer and printer,
and it took 45 minutes to print 75 sets.

Then the client got a faster computer and printer,
and it now takes 20 minutes to print 150 sets.

When I see the printer working without a print queue,
then it will be time to optimise my mail merge code.

Either that, or the client will buy a color laser.

(david)

"Jan" <ja*@stempelconsulting.com> wrote in message
news:11*************@corp.supernews.com...
[quoted text clipped]

Feb 16 '06 #27
salad <oi*@vinegar.com> wrote in
news:kT******************@newsread3.news.atl.earth link.net:
Back in the old days I'd immediately suspect a field I am
filtering on, in a where clause, is not indexed. But in Access
where I see people discussing their coffee-break queries they
usually are doing sub-selects in the query. That's nearly a
guaranteed yawner of a query.


Not necessarily. If the subselect is happening first, yes, it will
probably be slow. But if there is other filtering of the recordset
that can be applied before the subselect runs, it's often going to
be very fast.

I just don't know where people get the idea that all subselects are
slow. Correlated subqueries, sure, of course they are, because they
have to run for every row of the recordsource, but other types of
subselects are not by definition slow. They can be incredibly useful
tools.

IN clauses can often be replaced with joins to "virtual tables"
(i.e., a subselect treated as a data source), and those are often
very fast, too.
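
For example (Access SQL in VBA strings; table and field names are
invented), both of these return units having expired leases, but only
the first stays editable:

    Dim strIn As String, strJoin As String
    ' IN-clause subselect: the resulting recordset stays editable.
    strIn = "SELECT * FROM Units WHERE UnitID IN " & _
            "(SELECT UnitID FROM Leases WHERE Expired = True);"
    ' Join to a 'virtual table': often faster, but not editable.
    ' DISTINCT keeps the join from multiplying rows.
    strJoin = "SELECT u.* FROM Units AS u INNER JOIN " & _
              "(SELECT DISTINCT UnitID FROM Leases " & _
              "WHERE Expired = True) AS x ON u.UnitID = x.UnitID;"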

IN clauses are a little unpredictable sometimes in regard to when
the indexes on both sides are used or not, but most of the time,
they are.

I am less convinced of the utility and performance advantages of
other types of subselects (like EXISTS and so forth), because they
are just not useful as often as an IN clause subselect (which is the
best way to get editable recordsets from uneditable equivalent
recordsets that use joins).

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Feb 16 '06 #28
"Lyle Fairfield" <ly***********@aim.com> wrote in
news:11**********************@g47g2000cwa.googlegr oups.com:
If it's been running for a few years undoubtedly it needs a
complete revision. Take the opportunity to trash the whole thing
and start over. I get sick to my stomach whenever I see anything
of mine that is more than two years old.


While my old code often dismays me (since I'm constantly learning
better ways to accomplish certain tasks), I would never, ever under
any circumstances chuck a working application and rewrite it.

That is stupidity of the highest order.

If one needs to extend an existing application, then it's often
profitable to refactor the existing app to bring it up to current
coding standards, and then re-engineer after the refactoring to
enable the addition of the extended features.

But I'd never refactor just because I was offended by the code. If
it works, it works. Any changes you make are likely to introduce
bugs, which means that you've changed working code into non-working
code.

That's why Netscape basically no longer exists -- they did exactly
that: chucked their existing codebase, which was chock-full of badly
engineered code but also of code that embodied a lot of hard-won
solutions to really bad problems (like bug fixes for specific
platforms). In starting from scratch, Netscape lost 5 years when they
just didn't have a product to offer.

DON'T DINK WITH WORKING CODE.

Ever.

No matter how offensive it may be.

I have a client app, still in production use, that is 10 years old
this spring. It uses a lot of macros. But I'm not about to go in and
try to convert it to not use macros BECAUSE IT WORKS JUST FINE.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Feb 16 '06 #29
This is how I handle reporting over a WAN in a similar situation (a
rough code sketch of steps 1 and 2 follows the list):

1. The query on the BE returns only the minimal amount of data, raw and
unformatted - often foreign keys to the bulky, static tables that are
stored in the FE - the ones that, as you say, rarely if ever change.

2. The query result is sent to a table in the FE - I don't use temp
tables; instead, I have several tables just to store the data -
cleverly named like "Report_LLiDD".

3. My reports then have as their record source a large query which
brings in several of these 'static' or 'lookup' tables from the FE, the
newly filled table of dynamic data from the BE, and all those
formatting commands.
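
In code, steps 1 and 2 boil down to a delete-then-append into the FE
staging table. A rough sketch - the column names are invented, and
Report_LLiDD is the FE table mentioned above:

    Dim db As DAO.Database
    Set db = CurrentDb
    ' Empty the FE staging table, then land the minimal raw rows
    ' from the linked BE table in it.
    db.Execute "DELETE * FROM Report_LLiDD;", dbFailOnError
    db.Execute _
        "INSERT INTO Report_LLiDD (UnitID, Rent) " & _
        "SELECT UnitID, Rent FROM dbo_Units " & _
        "WHERE Surveyed = True;", dbFailOnError
    ' The report's record source then joins Report_LLiDD to the
    ' local lookup tables only - no further trips to the server.
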
--
Darryl Kerkeslager
"Jan" <ja*@stempelconsulting.com> wrote
[quoted text clipped]

Feb 17 '06 #30
