Bytes IT Community

Conversion to adp

Hi

I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?

Thanks

Regards
Nov 13 '05 #1
16 Replies


John,
1. If the new back-end is SQL Server and the old back-end was Jet/Access
then much and perhaps all of the SQL in service may need to be revisited to
ensure it runs as intended on SQL Server. Things like DCount, etc., don't
exist on SQL Server. IIf() is implemented as CASE and works differently.
Date() is GETDATE(), "" has to be '', lots of little details are different.
2. A straight dump of Jet SQL with bug fixes & workarounds to just make the
existing SQL function on SQL Server risks really crappy performance on SQL
Server because it's possible to cause Access to do some really boneheaded
translation of SQL into something that will run on SQL Server.
3. Transact-SQL is light-years more capable than Jet SQL & VBA. The
performance & stability gains possible will come from a thorough review of
the business processes implemented in Access and a study of the best way to
implement them on SQL Server.
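To make the flavour of those rewrites concrete, here is a rough sketch. The Orders table and its columns are made up, and SQLite is used below only as a convenient stand-in to run the CASE/quoting shape; Date() vs GETDATE() still differs per engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Qty INTEGER, Notes TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(25, "rush"), (3, "call first"), (7, "")])

# Jet SQL:  SELECT IIf(Qty > 10, "Bulk", "Single") ... WHERE Notes <> ""
# T-SQL shape: CASE instead of IIf(), single quotes instead of double.
rows = conn.execute("""
    SELECT CASE WHEN Qty > 10 THEN 'Bulk' ELSE 'Single' END
    FROM Orders
    WHERE Notes <> ''
    ORDER BY Qty DESC
""").fetchall()
print(rows)  # [('Bulk',), ('Single',)]
```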

--
Alan Webb
kn*******@SPAMhotmail.com
"It's not IT, it's IS"
"John" <Jo**@nospam.infovis.co.uk> wrote in message
news:42*********************@news-text.dial.pipex.com...
Hi

I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?

Thanks

Regards

Nov 13 '05 #2

John wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully
slow), have difficulty in opening a form at a specific record, have
recurring and sometimes unsolvable updating problems, require arcane
procedures for report filtering, and create a vulnerable insecure route
to the database.

Other than that they're fabulous.

Did I mention that they make your data insecure?
that's INSECURE
I-N-S-E-C-U-R-E!

--
Lyle

"The aim of those who try to control thought is always the same. They
find one single explanation of the world, one system of thought and
action that will (they believe) cover everything; and then they try to
impose that on all thinking people."
- Gilbert Highet
Nov 13 '05 #3

I need the sql server backend to establish replication/sync with some laptop
users who would like to disconnect and then sync when they reconnect. In my
case what is the solution? Should I keep the mdb front-end and link the
tables off sql server? Anything else?

Thanks

Regards

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:0P*********************@read2.cgocable.net...
John wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully slow),
have difficulty in opening a form at a specific record, have recurring and
sometimes unsolvable updating problems, require arcane procedures for
report filtering, and create a vulnerable insecure route to the database.

Other than that they're fabulous.

Did I mention that they make your data insecure?
that's INSECURE
I-N-S-E-C-U-R-E!

--
Lyle

"The aim of those who try to control thought is always the same. They find
one single explanation of the world, one system of thought and action that
will (they believe) cover everything; and then they try to impose that on
all thinking people."
- Gilbert Highet

Nov 13 '05 #4

I yield to the replication experts.

John wrote:
I need the sql server backend to establish replication/sync with some laptop
users who would like to disconnect and then sync when they reconnect. In my
case what is the solution? Should I keep the mdb front-end and link the
tables off sql server? Anything else?

Thanks

Regards

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:0P*********************@read2.cgocable.net...
John wrote:

I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully slow),
have difficulty in opening a form at a specific record, have recurring and
sometimes unsolvable updating problems, require arcane procedures for
report filtering, and create a vulnerable insecure route to the database.

--
Lyle

"The aim of those who try to control thought is always the same. They
find one single explanation of the world, one system of thought and
action that will (they believe) cover everything; and then they try to
impose that on all thinking people."
- Gilbert Highet
Nov 13 '05 #5

There's a whole newsgroup at microsoft.public.access.replication.
It can certainly be done in Access, without requiring SQL Server.

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:xE****************@read1.cgocable.net...
I yield to the replication experts.

John wrote:
I need the sql server backend to establish replication/sync with some laptop
users who would like to disconnect and then sync when they reconnect. In my
case what is the solution? Should I keep the mdb front-end and link the
tables off sql server? Anything else?

Thanks

Regards

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:0P*********************@read2.cgocable.net...
John wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?

Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully slow),
have difficulty in opening a form at a specific record, have recurring and
sometimes unsolvable updating problems, require arcane procedures for
report filtering, and create a vulnerable insecure route to the database.

--
Lyle

"The aim of those who try to control thought is always the same. They
find one single explanation of the world, one system of thought and
action that will (they believe) cover everything; and then they try to
impose that on all thinking people."
- Gilbert Highet

Nov 13 '05 #6

From the questions you are asking, I'm assuming your situation is that you
have a central LAN with a connection to the internet, laptop users who will
have Internet access where they're trying to do their remote work, and an IT
administrator capable of managing your LAN's firewall ports, etc. If that's
the case, then your best bet is probably to use some kind of remote control
software and allow users to run their desktop systems remotely.

If remote control will not work for you, then your next best choice is
probably replication, since Access may not handle the flaky SQL Server
connections you may get over the Internet, and you'll have to add a VPN layer
for security or live with the lack of security (probably not a good idea).

Note that Access replication may or may not be the best way to do replication.
Sometimes, it's better to implement a replication scheme at the application
level. One way to do this is to implement a record ID generation system that
includes a machine identifier so there can't be collisions between IDs
generated on different machines, and add fields for a current and previous
record revision ID so you can tell if a record in the central database is the
same revision that a remote system made a change to while off-line, and allow
the change to post if so.
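A minimal sketch of the ID-and-revision scheme described above; the machine identifier, field names, and revision strings are all hypothetical:

```python
import uuid

MACHINE_ID = "LT01"  # hypothetical per-laptop identifier

def new_record_id():
    # Prefixing with the machine ID means IDs minted on different
    # machines can never collide.
    return f"{MACHINE_ID}-{uuid.uuid4().hex[:12]}"

def can_post(central_rev, remote_prev_rev):
    # The offline edit is safe to post only if the central record is
    # still at the revision the remote copy started from.
    return central_rev == remote_prev_rev

rid = new_record_id()
print(rid.startswith("LT01-"))   # True
print(can_post("rev7", "rev7"))  # True  -> apply the offline edit
print(can_post("rev8", "rev7"))  # False -> conflict, needs review
```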

On Sun, 24 Apr 2005 15:49:14 +0100, "John" <Jo**@nospam.infovis.co.uk> wrote:
I need the sql server backend to establish replication/sync with some laptop
users who would like to disconnect and then sync when they reconnect. In my
case what is the solution? Should I keep the mdb front-end and link the
tables off sql server? Anything else?

Thanks

Regards

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:0P*********************@read2.cgocable.net...
John wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully slow),
have difficulty in opening a form at a specific record, have recurring and
sometimes unsolvable updating problems, require arcane procedures for
report filtering, and create a vulnerable insecure route to the database.

Other than that they're fabulous.

Did I mention that they make your data insecure?
that's INSECURE
I-N-S-E-C-U-R-E!

--
Lyle

"The aim of those who try to control thought is always the same. They find
one single explanation of the world, one system of thought and action that
will (they believe) cover everything; and then they try to impose that on
all thinking people."
- Gilbert Highet


Nov 13 '05 #7

Is access replication reliable enough?

Regards

"MacDermott" <ma********@nospam.com> wrote in message
news:VF*******************@newsread2.news.atl.earthlink.net...
There's a whole newsgroup at microsoft.public.access.replication.
It can certainly be done in Access, without requiring SQL Server.

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:xE****************@read1.cgocable.net...
I yield to the replication experts.

John wrote:
> I need the sql server backend to establish replication/sync with some
> laptop users who would like to disconnect and then sync when they
> reconnect. In my case what is the solution? Should I keep the mdb
> front-end and link the tables off sql server? Anything else?
>
> Thanks
>
> Regards
>
> "Lyle Fairfield" <ly******@yahoo.ca> wrote in message
> news:0P*********************@read2.cgocable.net...
>
>>John wrote:
>>
>>>I am converting my access front-end/backend mdb app to adp. Are there
>>>any pitfalls I should be aware of?
>>
>>Only a few minor ones:
>>
>>ADPs are unreliable, exhibit erratic anomalies (they happen one day but
>>not the next), open many connections (all of which are wonderfully
>>slow), have difficulty in opening a form at a specific record, have
>>recurring and sometimes unsolvable updating problems, require arcane
>>procedures for report filtering, and create a vulnerable insecure route
>>to the database.
--
Lyle

"The aim of those who try to control thought is always the same. They
find one single explanation of the world, one system of thought and
action that will (they believe) cover everything; and then they try to
impose that on all thinking people."
- Gilbert Highet


Nov 13 '05 #8

John,
It was good enough for a Y2K inventory done at a financial services company
I worked for back when I was getting paid for this stuff. As Steve points
out, there are cases where writing your own replication code ends up being
cheaper/better than the code-free replication Microsoft provides in Access.
--
Alan Webb
kn*******@SPAMhotmail.com
"It's not IT, it's IS"

"John" <Jo**@nospam.infovis.co.uk> wrote in message
news:42*********************@news-text.dial.pipex.com...
I need the sql server backend to establish replication/sync with some
laptop users who would like to disconnect and then sync when they
reconnect. In my case what is the solution? Should I keep the mdb front-end
and link the tables off sql server? Anything else?

Thanks

Regards

"Lyle Fairfield" <ly******@yahoo.ca> wrote in message
news:0P*********************@read2.cgocable.net...
John wrote:
I am converting my access front-end/backend mdb app to adp. Are there
any pitfalls I should be aware of?


Only a few minor ones:

ADPs are unreliable, exhibit erratic anomalies (they happen one day but
not the next), open many connections (all of which are wonderfully slow),
have difficulty in opening a form at a specific record, have recurring
and sometimes unsolvable updating problems, require arcane procedures for
report filtering, and create a vulnerable insecure route to the database.

Other than that they're fabulous.

Did I mention that they make your data insecure?
that's INSECURE
I-N-S-E-C-U-R-E!

--
Lyle

"The aim of those who try to control thought is always the same. They
find one single explanation of the world, one system of thought and
action that will (they believe) cover everything; and then they try to
impose that on all thinking people."
- Gilbert Highet


Nov 13 '05 #9

Steve Jorgensen <no****@nospam.nospam> wrote in
news:10********************************@4ax.com:
Note that Access replication may or may not be the best way to do
replication. Sometimes, it's better to implement a replication
scheme at the application level. One way to do this is to
implement a record ID generation system that includes a machine
identifier so there can't be collisions between IDs generated on
different machines, and add fields for a current and previous
record revision ID so you can tell if a record in the central
database is the same revision that a remote system made a change
to while off-line, and allow the change to post if so.


This is an enormously difficult task, even if it is only one-way
between a mere two copies of the data file.

I've done it. It's complicated (think about how deletions are
propagated; think about order of inserts and referential integrity).

If you're trying to have multiple dbs in the field, all being
updated, and you want those changes pushed up to the server, and you
also want to make the changes made by people on the servers to be
pulled down to the db in the field, it becomes a hugely complicated
task, unless a few conditions are met:

1. no records are edited in more than one location.

2. each person in the field has their own dataset that they work on
alone, and no other people actually edit that data (though they may
view it and analyze it).

But even then, you have to solve the PK issue. You have three
choices:

1. use a natural key, and run the risk of the same natural key being
used in two different copies of the database.

2. pre-allocated blocks of surrogate keys to each copy of the
database.

3. include a source db identifier in a compound PK in every table.
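Option 2 above (pre-allocated blocks of surrogate keys) can be sketched roughly like this; the block size is an arbitrary illustration:

```python
# Hand each replica a disjoint block of surrogate keys so that keys
# minted offline can never collide with another copy's keys.
BLOCK_SIZE = 100_000

def key_range(replica_no):
    # Inclusive (first, last) key for this replica's block.
    start = replica_no * BLOCK_SIZE + 1
    return start, start + BLOCK_SIZE - 1

# Replica 0 mints 1..100000, replica 1 mints 100001..200000, and so on.
print(key_range(0))  # (1, 100000)
print(key_range(1))  # (100001, 200000)
```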

But if everyone's working on the same datasets, it becomes nearly
impossible to program from scratch.

Keep in mind that, theoretically speaking, there is a form of
heterogenous replication where the main mother ship is a SQL Server
db and the laptops have Jet dbs that synchronize with the SQL
Server. However, like pure SQL Server replication itself, the whole
scenario is much more limited than the capabilities of pure Jet
replication, and the rules much more strict.

I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #10

"Alan Webb" <kn*******@hotSPAMmail.com> wrote in
news:mo********************@comcast.com:
As Steve points
out, there are cases where writing your own replication code ends
up being cheaper/better than what Microsoft provides in Access
without code.


Having done both (i.e., rolling my own and using Jet replication), I
strongly dispute this assertion of Steve's.

The number of problems is astronomical, even when you have very
limited synchronization scenarios. It's even complex enough when you
have a master/slave relationship between two or more dbs, where data
is updated/added/deleted only in the master (think about the
deletion problem and how you propagate the deletion of a record that
no longer exists at the time of the synchronization).

The problem could be much more easily solved with Terminal Server,
or with a browser-based application, but either of those requires
constant Internet access.

I could engineer an all-Jet replication scenario using indirect
replication over dialup networking or over a VPN over the Internet
(i.e., not using Internet replication, which has a host of basic
requirements that makes it extremely prone to fall over), but I
wouldn't want to have to do it.

I don't have any clients who need to update data and synch with the
mother ship while still in the field, so I no longer do that kind of
thing (it used to be one of my specializations). Nowadays I'm just
supporting travellers who need to take data with them and update it
while on the road, but don't need to re-synchronize with the mother
ship until back in the office. That's *very* easy to do, and can be
done safely with simple direct replication (and some of my clients
do it themselves, via the Access UI, to save the money on
programming that it would cost them).

But it's the requirement for getting and sending updates while in
the field that makes this very hard. If a client required it, I'd
definitely make a VPN a prerequisite to building it, as the
alternative (Internet replication) requires that the client run IIS
and an open FTP server. That's just no longer safe these days, and
it was never ever very stable.

My bet is that given the cost of engineering this with SQL Server
(which lacks some of the flexibility of Jet replication), that the
requirement for synch from the field could be easily dropped. If
that's the case, then the Terminal Server solution starts to look
attractive, as you've saved money you can now throw at the Internet
access costs, instead, while you'll be saving an enormous amount of
money on administrative costs in the long run, perhaps enough to pay
for the in-the-field Internet access costs.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #11

"John" <Jo**@nospam.infovis.co.uk> wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


I can understand why some apps would want their data to be stored in SQL Server. But
there's no good reason to spend the extra time in converting the FE MDB to an ADP.
Leave the SQL Server tables as linked tables.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #12

On Mon, 25 Apr 2005 03:57:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:10********************************@4ax.com:
Note that Access replication may or may not be the best way to do
replication. Sometimes, it's better to implement a replication
scheme at the application level. One way to do this is to
implement a record ID generation system that includes a machine
identifier so there can't be collisions between IDs generated on
different machines, and add fields for a current and previous
record revision ID so you can tell if a record in the central
database is the same revision that a remote system made a change
to while off-line, and allow the change to post if so.


This is an enormously difficult task, even if it is only one-way
between a mere two copies of the data file.

I've done it. It's complicated (think about how deletions are
propagated; think about order of inserts and referential integrity).

If you're trying to have multiple dbs in the field, all being
updated, and you want those changes pushed up to the server, and you
also want to make the changes made by people on the servers to be
pulled down to the db in the field, it becomes a hugely complicated
task, unless a few conditions are met:

1. no records are edited in more than one location.

2. each person in the field has their own dataset that they work on
alone, and no other people actually edit that data (though they may
view it and analyze it).

But even then, you have to solve the PK issue. You have three
choices:

1. use a natural key, and run the risk of the same natural key being
used in two different copies of the database.

2. pre-allocated blocks of surrogate keys to each copy of the
database.

3. include a source db identifier in a compound PK in every table.

But if everyone's working on the same datasets, it becomes nearly
impossible to program from scratch.

Keep in mind that, theoretically speaking, there is a form of
heterogenous replication where the main mother ship is a SQL Server
db and the laptops have Jet dbs that synchronize with the SQL
Server. However, like pure SQL Server replication itself, the whole
scenario is much more limited than the capabilities of pure Jet
replication, and the rules much more strict.

I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.


All the problems you've described are real, but I've found they can often be
managed by limiting the scope of the replication features to what's important
for the real-world requirements of the app.

Here are some ideas that should help in most applications I can think of:

1. Don't try to resolve replication conflicts. Just let the loser either
abort the replication or continue and lose the conflicting changes. The user
can then abort, copy down the important information, run the replication
again, and manually enter the changes as required.

2. Don't try to replicate every table, just the ones that will really need to
be updated in the field. If the user needs a new lookup value, they can make
a comment in the notes, and fix it later when they're back on the LAN.

3. Don't allow deletions in disconnected mode. Just allow a status change to
something like "Inactive" - your application may work this way anyway.
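Idea 3 can be sketched in a few lines; the record shape and status values are hypothetical:

```python
# In disconnected mode a "delete" is just a status flip; the status
# change then replicates like any other field edit, with no need to
# propagate an actual deletion.
records = [
    {"id": 1, "status": "Active"},
    {"id": 2, "status": "Active"},
]

def deactivate(recs, rec_id):
    for r in recs:
        if r["id"] == rec_id:
            r["status"] = "Inactive"

deactivate(records, 2)
print([r["status"] for r in records])  # ['Active', 'Inactive']
```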

Nov 13 '05 #13

David,
Ok. With the project I worked on there were only a half-dozen replicas.
Even with that I spent a fair amount of administrative time chasing down
replication errors that caused synchronization to fail. We used the
replication setup available through the Access UI. Even with my time
working out synchronization kinks this was still faster than having a clerk
rekey data collected by temps set around the company to inventory equipment
that may or may not be in-scope for Y2K.

--
Alan Webb
kn*******@SPAMhotmail.com
"It's not IT, it's IS"

"David W. Fenton" <dX********@bway.net.invalid> wrote in message
news:Xn*********************************@24.168.128.90...
"Alan Webb" <kn*******@hotSPAMmail.com> wrote in
news:mo********************@comcast.com:
As Steve points
out, there are cases where writing your own replication code ends
up being cheaper/better than what Microsoft provides in Access
without code.


Having done both (i.e., rolling my own and using Jet replication), I
strongly dispute this assertion of Steve's.

The number of problems is astronomical, even when you have very
limited synchronization scenarios. It's even complex enough when you
have a master/slave relationship between two or more dbs, where data
is updated/added/deleted only in the master (think about the
deletion problem and how you propagate the deletion of a record that
no longer exists at the time of the synchronization).

The problem could be much more easily solved with Terminal Server,
or with a browser-based application, but either of those requires
constant Internet access.

I could engineer an all-Jet replication scenario using indirect
replication over dialup networking or over a VPN over the Internet
(i.e., not using Internet replication, which has a host of basic
requirements that makes it extremely prone to fall over), but I
wouldn't want to have to do it.

I don't have any clients who need to update data and synch with the
mother ship while still in the field, so I no longer do that kind of
thing (it used to be one of my specializations). Nowadays I'm just
supporting travellers who need to take data with them and update it
while on the road, but don't need to re-synchronize with the mother
ship until back in the office. That's *very* easy to do, and can be
done safely with simple direct replication (and some of my clients
do it themselves, via the Access UI, to save the money on
programming that it would cost them).

But it's the requirement for getting and sending updates while in
the field that makes this very hard. If a client required it, I'd
definitely make a VPN a prerequisite to building it, as the
alternative (Internet replication) requires that the client run IIS
and an open FTP server. That's just no longer safe these days, and
it was never ever very stable.

My bet is that given the cost of engineering this with SQL Server
(which lacks some of the flexibility of Jet replication), that the
requirement for synch from the field could be easily dropped. If
that's the case, then the Terminal Server solution starts to look
attractive, as you've saved money you can now throw at the Internet
access costs, instead, while you'll be saving an enormous amount of
money on administrative costs in the long run, perhaps enough to pay
for the in-the-field Internet access costs.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc

Nov 13 '05 #14

Steve Jorgensen <no****@nospam.nospam> wrote in
news:f9********************************@4ax.com:
On Mon, 25 Apr 2005 03:57:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:10********************************@4ax.com:
Note that Access replication may or may not be the best way to
do replication. Sometimes, it's better to implement a
replication scheme at the application level. One way to do this
is to implement a record ID generation system that includes a
machine identifier so there can't be collisions between IDs
generated on different machines, and add fields for a current
and previous record revision ID so you can tell if a record in
the central database is the same revision that a remote system
made a change to while off-line, and allow the change to post if
so.
This is an enormously difficult task, even if it is only one-way
between a mere two copies of the data file.

I've done it. It's complicated (think about how deletions are
propagated; think about order of inserts and referential
integrity).

If you're trying to have multiple dbs in the field, all being
updated, and you want those changes pushed up to the server, and
you also want to make the changes made by people on the servers to
be pulled down to the db in the field, it becomes a hugely
complicated task, unless a few conditions are met:

1. no records are edited in more than one location.

2. each person in the field has their own dataset that they work
on alone, and no other people actually edit that data (though they
may view it and analyze it).

But even then, you have to solve the PK issue. You have three
choices:

1. use a natural key, and run the risk of the same natural key
being used in two different copies of the database.

2. pre-allocated blocks of surrogate keys to each copy of the
database.

3. include a source db identifier in a compound PK in every table.

But if everyone's working on the same datasets, it becomes nearly
impossible to program from scratch.

Keep in mind that, theoretically speaking, there is a form of
heterogenous replication where the main mother ship is a SQL
Server db and the laptops have Jet dbs that synchronize with the
SQL Server. However, like pure SQL Server replication itself, the
whole scenario is much more limited than the capabilities of pure
Jet replication, and the rules much more strict.

I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.


All the problems you've described are real, but I've found they
can often be managed by limiting the scope of the replication
features to what's important for the real-world requirements of
the app.

Here are some ideas that should help in most applications I can
think of:

1. Don't try to resolve replication conflicts. Just let the
loser either abort the replication or continue and lose the
conflicting changes. The user can then abort, copy down the
important information, run the replication again, and manually
enter the changes as required.


Steve, that's an absolutely ludicrous proposal. I'd call that data
corruption, because the end result is that you don't know if the
multiple copies of the data file have anything like the same data
set in them.
2. Don't try to replicate every table, just the ones that will
really need to be updated in the field. If the user needs a new
lookup value, they can make a comment in the notes, and fix it
later when they're back on the LAN.
I don't believe in manual processes for things that can be
automated.
3. Don't allow deletions in disconnected mode. Just allow a
status change to something like "Inactive" - your application may
work this way anyway.


You still have to propagate that field value like other edits, and
the order of it can make a difference to any number of operations.

I am sorry to say that I don't think you have any credibility on
this issue based on the quality of your response. I would never
dream of accepting money for such a slap-dash system as you
describe.

And Jet replication offers much, much more than what you're
offering, without requiring massive amounts of coding (and testing
to see if it works).

I find your attitude towards data here quite inconsistent with your
detail-oriented approach to coding. Why are you running all these
tests on your code and wanting the compiler to catch as many
problems as possible, and then implementing systems like you
describe above, where the data isn't reliable?

Looks like a huge inconsistency to me, as though you are a code geek
who doesn't really care about data integrity.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #15

"Alan Webb" <kn*******@hotSPAMmail.com> wrote in
news:sZ********************@comcast.com:
With the project I worked on there were only a half-dozen replicas.
Even with that I spent a fair amount of administrative time
chasing down replication errors that caused synchronization to
fail. . . .
Well, perhaps your schema was poorly designed for replication.
Here's an example of a schema design that will cause replication
errors:

Have a self-join on a table enforced with RI. Say you have a group
of records in a Company table, each record representing a branch
office. You want one of those records to be the master record (the
main office), so you store its PK in a field within the table,
limiting the values in it with RI.

This will fail every time you create a master record, because the
value in the field can't be propagated until the record itself has
been inserted.

(this is a real-world report -- I learned)
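The failure mode can be reproduced in miniature. SQLite stands in here only to show the insert-order RI problem; a synchronizer that propagates the branch row before its master record hits the analogous error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
# Self-join enforced with RI: MainOfficeID points back at the same table.
conn.execute("""
    CREATE TABLE Company (
        CompanyID    INTEGER PRIMARY KEY,
        MainOfficeID INTEGER REFERENCES Company(CompanyID)
    )
""")

# Push the branch row before its master record exists:
try:
    conn.execute("INSERT INTO Company VALUES (2, 1)")  # no row 1 yet
    failed = False
except sqlite3.IntegrityError:
    failed = True
print(failed)  # True
```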
. . . We used the replication setup available through the Access
UI. Even with my time working out synchronization kinks this was
still faster than having a clerk rekey data collected by temps set
around the company to inventory equipment that may or may not be
in-scope for Y2K.


Well, I couldn't agree more about that. My understanding was that
the replication scenario under consideration was a choice between
SQL Server/MSDE or rolling your own with code. If someone were
contemplating only using the Access UI and was not trying to do the
synch over dialup networking (or any other narrow bandwidth pipe),
it would be fine.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #16

On Wed, 27 Apr 2005 00:55:32 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

....
I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.
All the problems you've described are real, but I've found they
can often be managed by limiting the scope of the replication
features to what's important for the real-world requirements of
the app.

Here are some ideas that should help in most applications I can
think of:

1. Don't try to resolve replication conflicts. Just let the
loser either abort the replication or continue and lose the
conflicting changes. The user can then abort, copy down the
important information, run the replication again, and manually
enter the changes as required.


Steve, that's an absolutely ludicrous proposal. I'd call that data
corruption, because the end result is that you don't know if the
multiple copies of the data file have anything like the same data
set in them.


Well, it depends on what the scope of replication is, doesn't it? In most
applications, we end up talking about one or a few master tables, where the
odds are that the number of collisions per replication attempt is somewhere
between zero and two, with zero being more common. Under these conditions, it's
pretty easy for the user to figure out the likely consequences of refreshing
from the main copy, then re-applying the edits manually when a conflict is
detected.
2. Don't try to replicate every table, just the ones that will
really need to be updated in the field. If the user needs a new
lookup value, they can make a comment in the notes, and fix it
later when they're back on the LAN.


I don't believe in manual processes for things that can be
automated.


That's the kind of anal retentive thinking I used to be so well known for.
Yes, automating things is good where the benefit is high enough and/or the
cost is low enough.

If we can provide substantial benefit by implementing limited replication at a
low cost, that amounts to a high ROI. If, after implementing, we find that
there would be a benefit to improving the replication that's higher than the
benefit of working on something else, then we can proceed to do it.
3. Don't allow deletions in disconnected mode. Just allow a
status change to something like "Inactive" - your application may
work this way anyway.


You still have to propagate that field value like other edits, and
the order of it can make a difference to any number of operations.


Again, the complications are in proportion to the number of tables involved
and the number of conflicts that would reasonably arise. Keep those numbers
small, and the complications are minor.
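As a hypothetical sketch of that status-change idea (SQLite again standing in for Jet, with invented table names): instead of a DELETE, which would have to be propagated in the right order relative to other operations, the disconnected copy only flips a flag, and that merges back as an ordinary field edit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Equipment (
        EquipmentID INTEGER PRIMARY KEY,
        Description TEXT NOT NULL,
        Status      TEXT NOT NULL DEFAULT 'Active'   -- 'Active' or 'Inactive'
    )
""")
conn.execute("INSERT INTO Equipment (EquipmentID, Description) VALUES (1, 'Printer')")

def deactivate(conn, equipment_id):
    """Soft delete: a status flip replicates as a plain field edit,
    with none of the ordering hazards of a real DELETE."""
    conn.execute(
        "UPDATE Equipment SET Status = 'Inactive' WHERE EquipmentID = ?",
        (equipment_id,),
    )

deactivate(conn, 1)
# Queries and forms filter on Status rather than relying on row absence.
active = conn.execute("SELECT * FROM Equipment WHERE Status = 'Active'").fetchall()
print(active)  # the deactivated printer no longer shows up
```

Many Access apps already work this way, so the restriction often costs nothing in disconnected mode.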
I am sorry to say that I don't think you have any credibility on
this issue based on the quality of your response. I would never
dream of accepting money for such a slap-dash system as you
describe.
I come to my opinion based on having recently written one slap-dash system
that's working very well for a lot of paying customers. The trick is to be
very clear about the use cases and the spec to ensure that the slap-dash
solution employed happens to do what's needed.

The system I'm referring to didn't have to do with replication per se, but
something similar. We needed to have certain kinds of data pushed back and
forth between 2 systems that were written by different authors, having
different schemas and with about a 40% overlap in data coverage.

After several short iterations of reviewing the spec and trying to optimize
cost/benefit, we figured out that the vast majority of duplicate user effort
we wanted to remove could be achieved by replicating data from just one table
in each direction (2 tables in all). It was further determined that it was OK
to simply fail the update if lookup tables were out of sync in one direction,
and describe the problems so the user could fix them, and to allow the sync
with problems in the other direction, informing the user what values were
replaced with what defaults.
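As a rough illustration only (the real systems and field names aren't mine to show, so everything here is invented), the two policies might look like this: strict fail-and-describe in one direction, substitute-a-default-and-report in the other.

```python
def map_strict(value, known_lookups):
    """Direction 1: fail the update if the lookup value isn't known,
    describing the problem so the user can fix it and re-sync."""
    if value not in known_lookups:
        raise ValueError(
            f"unknown lookup value {value!r}: fix the lookup table and re-sync"
        )
    return value

def map_with_default(value, known_lookups, default, report):
    """Direction 2: allow the sync anyway, substituting a default and
    telling the user what was replaced with what."""
    if value not in known_lookups:
        report.append(f"{value!r} replaced with default {default!r}")
        return default
    return value

known = {"Laptop", "Desktop", "Printer"}   # invented lookup values
report = []
print(map_with_default("Typewriter", known, "Unknown", report))  # -> Unknown
print(report)
```

The point isn't the ten lines of code; it's that once the spec was narrowed to one table in each direction, this is all the "replication engine" the customers actually needed.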

In production, this simple schema has been a huge hit with the users, and we
can now legitimately claim to have an integrated suite of products.

I don't see why a similar process could not be used to develop a limited,
relatively stupid off-line capability that is nevertheless highly useful, and
tells the user enough so they can deal with problems that occur.
And Jet replication offers much, much more than what you're
offering, without requiring massive amounts of coding (and testing
to see if it works).
Well, you may have a point there since you know much more about that than I
do. It has always been my impression that by trying to do so much, the Access
replication system leaves me unable to predict the side effects and
consequences of employing it. This could well be a symptom of my lack of
knowledge, and I'm interested in anything you would like to say on the
subject.
I find your attitude towards data here quite inconsistent with your
detail-oriented approach to coding. Why are you running all these
tests on your code and wanting the compiler to catch as many
problems as possible, and then implementing systems like you
describe above, where the data isn't reliable?
I agree that the tone of my post seems unlike me, but it's because I'm seeing
that anal retentivity misapplied costs more than it gains - for both me and
for the customer paying the bills. That does not mean ignoring essential
issues such as data integrity, but it does mean doing analysis to figure out
which issues really are not essential, so we can table them.

Less is more. A smaller spec is a smaller bill, and if the majority of the
value is delivered with a trivial spec, that amounts to a high ROI and a happy
customer. If, after delivery, there seems to be more value to be gained from
a more powerful implementation, and that's higher than the value of other
features in the queue, then we can absolutely start on the improvement.

With regard to testing, I find that doing more testing often leads to doing
less coding, because the tests can provide confidence that the code we
write really does cover the use cases. At the company I'm working for now,
besides the programmer tests, we have a full-time test engineer who's good at
breaking things and uses a tool to run automated regression tests at the GUI
level. I believe that if my code can get past him, chances are, it'll work
right for the customer - period.
Looks like a huge inconsistency to me, as though you are a code geek
who doesn't really care about data integrity.


I hope what I've said above shows that I have no lack of concern for data
integrity. I simply assert that when scope can be sufficiently contained,
maintaining data integrity doesn't have to require complex code.
Nov 13 '05 #17
