Bytes IT Community

Merging databases with conflicting unique key values

3rd time posting this as the first two simply disappeared!
Here's the issue:

We currently run an Access application on the West Coast for tracking
resource-centric data. For those located in the West, the program zips
along fine, but the folks on the East Coast are having a nightmarish
performance issue.

We've had our tech guys take a look, and it's due to the network speed
across the long distance; there's not much that can be done.
We've minimized the number of hops and we're practically on the network
backbone.

What we're planning to do is create a copy on another server on the East
Coast. Doing this poses a serious problem for us: all the tables have
AutoNumber IDs enabled, so if records are created on both machines,
merging them just won't be possible due to conflicting IDs, since the
AutoNumber will sequentially select the next numeric value.

We can't create a replicated version as we'll have the same network
performance issue. Has anyone found a solution or a workaround for
this problem?
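The merge described here is usually handled by re-keying one side's records and patching up the foreign keys in any child tables. A rough sketch of the idea in Python, purely as an illustration (this is not Access code, and the table and column names are invented):

```python
# Merge two row sets whose autonumber IDs overlap, by re-keying the
# second set and remapping its foreign-key references.

west_orders = [{"id": 1, "item": "cable"}, {"id": 2, "item": "switch"}]
east_orders = [{"id": 1, "item": "router"}, {"id": 2, "item": "rack"}]
east_lines = [{"order_id": 1, "qty": 5}, {"order_id": 2, "qty": 3}]

# Start new IDs above the highest ID already used in the West copy.
next_id = max(o["id"] for o in west_orders) + 1
remap = {}
for row in east_orders:
    remap[row["id"]] = next_id
    row["id"] = next_id
    next_id += 1

# Patch every child-table row to point at the new parent IDs.
for line in east_lines:
    line["order_id"] = remap[line["order_id"]]

merged = west_orders + east_orders
assert len({o["id"] for o in merged}) == len(merged)  # no collisions left
```

The same remapping would have to be applied to every table that references the re-keyed IDs, which is why disjoint key ranges per site (discussed below in the thread) are usually preferred.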

Jan 12 '07 #1
31 Replies


lo*******@lycos.com wrote:
>[original post snipped]

Add a field to the table so you can identify the East vs
West.

--
Marsh
Jan 12 '07 #2

Set up a terminal server on the West Coast; then the East Coast can
connect to a session on the terminal server to access the application.
Or move the data (backend mdb) to MSDE or SQL Server.

louish...@lycos.com wrote:
[original post snipped]
Jan 12 '07 #3

Great idea, and we were kicking this idea around, but we have multiple
users on the East Coast, so the front end would be accessed concurrently
by multiple users. It's just too unstable.

We also thought about using SQL server as the backend but from what we
read from the newsgroups performance goes way down.

As for adding an east/west column, we would need to re-write (fix) all
the queries and forms to make this combination of key and region a
unique combination.

Is there a way to set the auto ID to even IDs in one mdb and odd IDs
for the other? Any other possible solutions? Thanks.
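The even/odd idea amounts to giving each site its own (seed, step) pair. Jet 4's DDL can reportedly set both the AutoNumber seed and increment (ALTER TABLE ... ALTER COLUMN ID COUNTER(1, 2), executed through an ADO connection), which would give one mdb odd IDs and the other even. The allocation pattern itself, sketched in Python purely as an illustration:

```python
# Two ID allocators that can never collide: West issues odd IDs,
# East issues even IDs (the same idea as COUNTER(seed, increment)).

def make_allocator(seed, step):
    current = seed - step
    def next_id():
        nonlocal current
        current += step
        return current
    return next_id

west_next = make_allocator(1, 2)   # 1, 3, 5, ...
east_next = make_allocator(2, 2)   # 2, 4, 6, ...

west_ids = [west_next() for _ in range(4)]
east_ids = [east_next() for _ in range(4)]
assert set(west_ids).isdisjoint(east_ids)  # the two ranges never meet
```

Widely spaced seeds (e.g., one site starting at 1 and the other at 1,000,000,000) achieve the same disjointness without needing a custom increment.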
lesperan...@natpro.com wrote:
>[previous reply snipped]

louish...@lycos.com wrote:
>[original post snipped]
Jan 12 '07 #4

lo*******@lycos.com wrote:
>Is there a way to set the auto ID to even IDs in one mdb and odd IDs
for the other? Any other possible solutions? Thanks.
You can set the AutoNumber field to Random instead of Increment. There MIGHT
be an occasional collision, but it is pretty doubtful.
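Worth quantifying "doubtful": assuming random AutoNumbers are drawn roughly uniformly from the full 32-bit Long range, the birthday problem gives a quick collision estimate. Rare for small tables, but no longer doubtful once the merged tables get large (illustrative Python, not Access code):

```python
import math

# Birthday-problem estimate of at least one collision among n random
# AutoNumbers, assuming uniform draws from the 32-bit Long range.
N = 2 ** 32

def collision_probability(n):
    # 1 - exp(-n(n-1)/2N) is the standard birthday approximation.
    return 1 - math.exp(-n * (n - 1) / (2 * N))

assert collision_probability(1_000) < 0.001    # negligible at a thousand rows
assert collision_probability(100_000) > 0.5    # better than even at 100k rows
```

So the random-AutoNumber approach is reasonable for modest tables, but a merge routine should still detect and re-key duplicates rather than assume none occur.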
HTH
Mike

--
Mike Pippins

"We have met the enemy and he is us." -- Pogo Possum

Message posted via AccessMonster.com
http://www.accessmonster.com/Uwe/For...ccess/200701/1

Jan 13 '07 #5

lo*******@lycos.com wrote in
news:11**********************@q2g2000cwa.googlegroups.com:

[re: terminal server]
Great idea, and we were kicking this idea around but we have
multiple users on the east coast so the front end would be
accessed concurrently from multiple users. It's just too
unstable.
Huh? With Terminal Server, you give each user an individual copy of
the front end, just like you do on a LAN. There's no issue
whatsoever with multiple users in the same front end.
We also thought about using SQL server as the backend but from
what we read from the newsgroups performance goes way down.

As for adding a east/west column, we would need to re-write (fix)
all the queries and forms to make this combination of key and
region a unique combination.

Is there a way to set the auto ID to even IDs in one mdb and odd
IDs for the other? Any other possible solutions?
Replication uses random Autonumbers and in my almost 10 years of
creating replicated apps, I've never seen a random Autonumber
collision.

The correct and simple solution is Terminal Server, properly
deployed with a front end for each user (stored in the user's
profile).

You could also use replication in your scenario, though you'd have
to use indirect replication. But with fixed locations, I'd never
recommend a replicated solution, even though I specialize in
replication. Instead, I'd tell them just what you've been told:
Terminal Server. It really does work extremely well, and also makes
administering the database phenomenally easy. You might even
consider moving the local users to use Terminal Server, just to keep
everything really simple.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 13 '07 #6

lo*******@lycos.com wrote in
news:11**********************@51g2000cwl.googlegroups.com:
[original post snipped]
We can't create a replicated version as we'll have the same
network performance issue.
Er, what?

No, you can't use *direct* replication, but you *can* use indirect
or Internet replication (indirect is vastly preferable because it
has fewer outside dependencies).

But Terminal Server is the best answer to your problem.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 13 '07 #7

Hi David,
Just to clarify: when you suggest having multiple copies of the TS for
each user, do you mean to also have multiple copies of the front end on
the machine, so that up to x number of people can TS into the machine
under different TS accounts (each running their copy of the front end)?
If that is what you're suggesting, how many concurrent TS sessions have
you seen successfully run on a single box? With multiple front ends
running concurrently I would expect a lot of activity on this blade
server. Can you elaborate on your past experience on this? Thanks.
David W. Fenton wrote:
[previous reply snipped]
Jan 15 '07 #8

Just a couple of quick questions regarding your problem: Is the
application actually written entirely in MS Access, or is it simply
using a Jet (mdb) database as a backend? If the application is written
entirely in Access, what is the method of data access used (DAO, ADO,
letting Access build the forms, etc.)? Lastly, how many users are on the
East Coast, and what kind of bandwidth do you have available between
East and West?

Cheers

The Frog

Jan 16 '07 #9

lo*******@lycos.com wrote in
news:11**********************@51g2000cwl.googlegroups.com:
Just to clarify, when you suggest having multiple copies of the TS
for each user do you mean to also have multiple copies of the
front end on the machine so that up to x number of people can TS
into the machine under different TS accounts (each running their
copy of the front end)?
Each user would have a copy of the front end stored in their user
profile, just as they would have a front end if running from a
workstation on the LAN.
If that is what you're suggesting, how many concurrent TS sessions
have you seen successfully run on a single box? With multiple front
ends running concurrently I would expect a lot of activity on this
blade server. Can you elaborate on your past experience on this?
Eh? The only resource the extra copies of the front end use up is
disk space. The amount of RAM is going to be exactly the same, as
each user has to have their own copy of the front end in memory even
when they are loading it from a single shared file.

So, the number of users that can be supported is exactly the same
for sharing a front end and for giving each user an individual copy
of the front end, unless the disk space available is severely
limited. In that case, you'd probably be having problems with the
shared front end as well. With today's servers often having half a
terabyte of storage available on them, it seems to me that worrying
about multiple copies of a 10- or 20MB file is really penny-wise and
pound-foolish (have you considered how many front ends can be stored
in a GB of disk space?). You'll much sooner run out of RAM and
bandwidth than you will disk space.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 16 '07 #10

lo*******@lycos.com wrote in
news:11**********************@51g2000cwl.googlegroups.com:
With multiple front ends
running concurrently I would expect a lot of activity on this
blade server. Can you elaborate on your past experience on this?
I kind of forgot to indicate my experience.

I have no TS apps running with more than 10-15 simultaneous users.
The best functioning of those is running on a single-processor
Compaq blade server dedicated to serving as Terminal Server, and has
only a GB of RAM (it's also the one with the most users). The
Internet connection is a T1 (i.e., 1.5 Mbps) and the remote sites are
connecting over 384K DSL lines. Performance is splendid. The LAN has
a couple of terabytes of disk space, but for security reasons, TS
users are restricted to using the terminal server's drives, which if
I remember correctly are a couple hundred GBs (nothing is stored
there except data for TS users).

I was very impressed with the performance of the whole setup, which
cost only $3K or so and replaced systems that just didn't work well
(sharing Quickbooks data, and trying to share Access data by just
copying it back and forth).

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 16 '07 #11

Hey Frog,
It's entirely in Access, with the backend (mdb) split from the front
end. I would estimate around 5 concurrent users from each region, so 10
connections max.
David,

I should have been more specific in my question. It's not the memory
that concerns me; it's the single processor that I'm worried about.
Prior to me taking over this project, we had the front end located on
the server, and multiple executions of the program would degrade the
performance noticeably.

If we go by the size of the program, yes, I do agree that we can fit
multiple copies in a single GB, but that's not where the constraint comes
into play. When the users execute reports and queries without any load,
they can run 1-2 minutes. Some of these queries have countless
calculations involved. If we have multiple executions, then the CPU
will be severely impacted. I'll test the TS approach and execute
multiple reports and queries simultaneously, but I have a feeling that
the performance will go south drastically. We'll just have to see.

Jan 16 '07 #12

lo*******@lycos.com wrote in
news:11**********************@11g2000cwr.googlegroups.com:
I should have been more specific on my question. It's not the
memory that concerns me. It's the single processor that I'm
worried about. Prior to me taking over this project we had the
front end located on the server and multiple executions of the
program would degrade the performance noticeably.
The performance of *what*? The file server? Or was this a Terminal
Server and TS users were seeing performance degradation?

Certainly, sharing a front end leads to performance problems,
because you've unnecessarily introduced contention for data for
which there is no reason for them to be shared.
If we go by the size of the program, yes, I do agree that we can
fit multiple copies in a single GB but that's not where the
constraint comes into play. When the users execute reports and
queries without any load they can run 1-2 minutes. Some of these
queries have countless calculations involved. If we have multiple
executions then the CPU will be severely impacted. I'll test the
TS approach and execute multiple reports and queries
simultaneously but I have a feeling that the performance will go
south drastically. We'll just have to see.
How much RAM? I forget MS's recommendations, but I always shoot for
a TS having 128MB per TS user, which would be 8 users per GB of
RAM. Most of the time an Access application is just sitting there,
waiting for user input. I wouldn't really think that these
processor-intensive reports and queries would be much of an issue
unless all 10 users ran them simultaneously. That's simply not
going to happen terribly often.
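That sizing rule of thumb is easy to sanity-check:

```python
# Terminal Server sizing rule of thumb from the post: ~128 MB of RAM
# per TS session, so sessions-per-box scales linearly with RAM.
ram_gb = 1
mb_per_user = 128
users = (ram_gb * 1024) // mb_per_user
assert users == 8  # one GB supports roughly eight TS sessions
```

With 10 total users (5 per coast, per the thread), roughly 1.5-2 GB of RAM would cover the sessions with headroom, by this estimate.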

But, test it and see. Open several TS sessions with your Access app
in it. Then open an admin session and open Task Manager and display
processes from all users. Then launch your intensive reports and
queries and see what happens. My bet is that you won't really have
an issue for any practical situation.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 17 '07 #13

"Performance of *what*"? I'm a little dumbfounded by your question.
Of course I'm referring to the file server or any box that's hosting
the database. What else can it be??? In your example *TS* is the
*program* being launched on the server. We can name the server
whatever we wish...it's doing the same thing.

As I said, the memory is not the issue...it's the processor.

As for the performance degradation you might want to read up:
http://www.granite.ab.ca/access/performancefaq.htm

I'm guessing that you have not tested this in a real-world environment.
This performance issue is as old as Access itself and has been well
documented since the start of time. It's a good read.
From reading other threads I'm seeing that random keys may be the real
solution to this. TS is a possibility, but I know for a fact that
performance will degrade quickly once a few sessions kick off
*processor*-intensive activities. Having 10-20 people doing read
functions isn't going to cost much, and for this the TS solution
sounds about right.

David W. Fenton wrote:
[previous reply snipped]
Jan 17 '07 #14

Oh my! Unless you have an extremely unusual application,
Access programs use very little CPU. After all, Access is a
DATA centric tool that will be I/O bound loooonng before it
becomes CPU bound. Try David's experiment and see exactly
what your app is doing.

Of course, a top notch design can overcome most hardware
limitations even with dozens of users. OTOH, a poor design
can fall flat on its face with only a few users.

If you look at Tony's Performance FAQ carefully, you will
note that just about every suggestion is related to the
reduction of I/O operations.

I will trust David's real world experiences in this matter -
period.
--
Marsh
lo*******@lycos.com wrote:
>[previous reply snipped]
Jan 17 '07 #15

Hi again,

Another question regarding the operation of the database: does the
database have to operate in real time? That is to say, is this a
transaction-processing type database, or more of a data storage and
reporting database? The reason I ask is that it may be possible to
make a local copy of some tables to keep with the front end, so that
the processing load is not on the "server" per se. In theory it may be
possible to split the workload between the client machines and the
back end database by keeping copies of non-real-time information
locally. It really depends on the design, needs, and operation of the
database app.

As another thought, it may also be possible to build some shortcuts to
the data processing used for reporting, by using for example some
temporary tables that hold the underlying data used for generating the
reports. Again it depends on the design and needs of the app. But these
things might be worth considering.

There is one thing that can always improve the performance of an
application, and that is to always "keep the wires cold". The less need
to utilise the network for a process, the better. For example, if all
the processing for a report / query is being done on a client machine,
but the data is being pulled from the server, you may experience loss
of performance while the data is moved back and forth along the
network. As an alternative, the query could be placed in the back end,
and then using programming like ADO / DAO you could "run" the query on
the back end and only the results would come through. In theory. In
practice of course, this is Access and the results may not match the
theory.
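The "cold wires" principle can be sketched with Python's sqlite3 standing in for the back end (illustrative only; the real app would go through DAO/ADO against Jet or SQL Server):

```python
import sqlite3

# Principle: let the database engine do the aggregation and ship only
# the result, instead of dragging every row across the wire and
# summing on the client.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE usage (region TEXT, hours REAL)")
con.executemany("INSERT INTO usage VALUES (?, ?)",
                [("East", 4.0), ("East", 6.0), ("West", 3.0)])

# "Cold wires": one summary row comes back, not the whole table.
total_east, = con.execute(
    "SELECT SUM(hours) FROM usage WHERE region = 'East'").fetchone()
assert total_east == 10.0
```

The client-side alternative would be fetching all rows and summing locally: same answer, but every row crosses the network, which is exactly the slow pattern the thread is trying to avoid.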

Also worth considering, especially if the application is split, is to
move the database to SQL Server 2005. The Express version is free, and
will offer you the ability to handle data processing (again in theory)
at much higher levels of performance for this type of situation. You
can definitely run the underlying query (view) on the server, and send
only the results down the wires - and this saves on a lot of bandwidth
(depends how big the result set is of course...).

Just a few ideas to play with :)

Cheers

The Frog

Jan 17 '07 #16

lo*******@lycos.com wrote in
news:11**********************@s34g2000cwa.googlegroups.com:

David W. Fenton wrote:
>lo*******@lycos.com wrote in
news:11**********************@11g2000cwr.googlegroups.com:
I should have been more specific on my question. It's not the
memory that concerns me. It's the single processor that I'm
worried about. Prior to me taking over this project we had the
front end located on the server and multiple executions of the
program would degrade the performance noticeably.

The performance of *what*? The file server? Or was this a
Terminal Server and TS users were seeing performance degradation?

Certainly, sharing a front end leads to performance problems,
because you've unnecessarily introduced contention for data for
which there is no reason for them to be shared.
If we go by the size of the program, yes, I do agree that we
can fit multiple copies in a single GB but that's not where the
constraint comes into play. When the users execute reports and
queries without any load they can run 1-2 minutes. Some of these
queries have countless calculations involved. If we have
multiple executions then the CPU will be severely impacted.
I'll test the TS approach and execute multiple reports and
queries simultaneously but I have a feeling that the
performance will go south drastically. We'll just have to see.

How much RAM? I forget MS's recommendations, but I always shoot
for a TS having 128MBs per TS user, which would be 8 users per GB
of RAM. Most of the time an Access application is just sitting
there, waiting for user input. I wouldn't really think that these
processor-intensive reports and queries would be much of an issue
unless all 10 users ran them simultaneously. That's simply not
going to happen terribly often.

But, test it and see. Open TS several sessions with your Access
app in it. Then open an admin session and open Task Manager and
display processes from all users. Then launch your intensive
reports and queries and see what happens. My bet is that you
won't really have an issue for any practical situation.

"Performance of *what*"? I'm a little dumbfounded by your
question. Of course I'm referring to the file server or any box
that's hosting the database. What else can it be??? In your
example *TS* is the *program* being launched on the server. We
can name the server whatever we wish...it's doing the same
thing.

As I said, the memory is not the issue...it's the processor.
I just don't understand what you're talking about. You aren't using
TS, so I assume you mean that you have a bunch of workstations with
front ends connecting to a file server with the back end. When you
say there are performance issues in *that* context, then I have to
ask:

Where?

1. On the workstations?

2. On the file server?

If it's #1, it may be a network issue, i.e., that the app is slow
because you're pulling a huge amount of data across the wire.

If it's #2, then it's because you're pulling so much data that it
bogs down the file server's CPU. If that's the case, I'd expect your
workstations to take an incredibly long time to pull the data. I've
never seen an Access app pull so much data that it causes a file
server's CPU to rise to any significant level (anything above 25%
sustained CPU utilization).

On a TS, if you have the back end stored on storage attached
directly to the TS, then you'll not have any network bandwidth
restrictions and your app should SPEED UP.

If the problem is serving requests for data, then I would say you
should redesign your application from scratch, as I've never seen a
properly-designed Access app that could come even close to bogging
down a modern file server with sufficient RAM and processor cycles.
As for the performance degradation you might want to read up:
http://www.granite.ab.ca/access/performancefaq.htm
Do you think I'm an inexperienced idiot?
I'm guessing that you have not tested this in a real world
environment.
One of my clients is running an Access app on TS with 10 users,
shared with a half dozen users running Quickbooks on the same TS.
There are no performance issues.
This performance issue is as old as Access itself and has been well
documented since the start of time. It's a good read.
*What* performance issue are you talking about? There are a dozen
different issues in Tony's performance FAQ.
From reading other threads I'm seeing that random keys may be the
real solution to this. TS is a possibility, but I know for a fact
that performance will degrade quickly once a few sessions kick off
*processor*-intensive activities. Having 10-20 people doing read
functions isn't going to cost much, and for this the TS solution
sounds about right.
Huh? The issue of reading the back end data file is no different on
a terminal server than it is with workstations connecting across a
LAN.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 17 '07 #17

P: n/a
"The Frog" <an**************@eu.effem.comwrote in
news:11**********************@11g2000cwr.googlegro ups.com:
Another question regarding the operation of the database: is the
database having to operate in real time, that is to say, is this a
transaction processing type database, or more of a data storage
and reporting database. The reason I ask, is that it may be
possible to make a local copy of some tables to keep with the
front end, so that processing load is not on the "server" per se.
In theory it may be possible to split the workload between the
client machines and the back end database by keeping copies of non
real-time information locally. It really depends on the design,
needs, and operation of the database app.
I think you need to return to the original post in this thread. The
situation was two offices, East Coast and West Coast, and how to
supply the same data to both. The alternatives:

1. store an MDB on one server. Let the other office connect to it
across the WAN -- WAY TOO SLOW AND WAY TOO DANGEROUS.

2. put the data in SQL Server -- this was found to be too slow.

3. indirect replication would work, but is a pain to set up and
administer and has latency issues.

4. Having one office run the app on TS running on a server in the
other office is what is being contemplated.

Thus, there is no data issue that would be helped along by local
caching of data, since the workstations in the remote office WON'T
HAVE ANY DATA -- they'll just be running TS remote, and just getting
the pictures. All data will be on the TS or the TS's local LAN.
As another thought, it may also be possible to build some shortcuts
to the data processing used for reporting, by using for example
some temporary tables that hold the underlying data used for
generating the reports. Again it depends on the design and needs
of the app. But these things might be worth considering.
What are you talking about? TS or not? If TS, then all of this is
completely irrelevant, especially if the back end is stored on the
TS's local storage.
There is one thing that can always improve on the performance of
an application, and that is to always "keep the wires" cold.
When running an app on TS with the data file stored on the TS, THERE
ARE NO WIRES INVOLVED IN DATA RETRIEVAL, except the ones between the
TS's I/O bus and the hard drive controller.
The less need
to utilise the network for a process the better.
When running on a TS, THERE IS NO NETWORK IN USE, unless you're
storing the back end on a different server than the TS. If the app
is data intensive. you should store the data on the TS itself. If
it's just a normal app, there should be no issues with storing it on
a different server -- it would then be just like running the app
front end on a workstation connected to the same LAN.
For example, if all
the processing for a report / query is being done on a client
machine,
It would be done in the Terminal Server's RAM.
but the data is being pulled from the server,
Ideally for a data-intensive app, the data would be stored on the
server.
you may experience loss
of performance while the data is moved back and forth along the
network.
With a TS setup, there's NO NETWORK, unless you choose to store the
back end on a different server (which is perfectly reasonable in
many situations).
As an alternative the query could be placed in the back end,
and then using programming like ADO / DAO you could "run" the
query on the back end and only the results would come through.
That is COMPLETE BULLSHIT.

Access with a Jet back end has NO SERVER-SIDE PROCESSING.
In theory.
No, not in theory.
In
practice of course, this is Access and the results may not match
the theory.
Your theory is WRONG.
Also worth considering, especially if the application is split, it
to move the database to SQL Server 2005. The Express version is
free, and will offer you the ability to handle data processing
(again in theory) at much higher levels of performance for this
type of situation.
If your SQL Server is running on a different server than the TS,
then, yes, you'd get increased performance, because you've got the
TS handling running Access and the front end, and another server
handling the data processing.

It may or may not be faster than running the SQL Server on the TS,
or using a Jet back end, because the network may introduce enough of
a bandwidth bottleneck to obviate the performance advantage that
would come from having two different servers involved.
You
can definitely run the underlying query (view) on the server, and
send only the results down the wires - and this saves on a lot of
bandwidth (depends how big the result set is of course...).
And with a TS running the Access app with a Jet back end stored on
the same TS, THERE ARE NO WIRES INVOLVED, except extremely high
throughput wires (i.e., between the CPU and the disk I/O
subsystems).

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 17 '07 #18

P: n/a
I know this is a lo-tech answer to the autonumber 'problem' but start
one access database autonumber field at say 50,000,000 and the other at
10,000,000 then merging them will no longer be a problem until you
reach 49,999,999. Physically merging the data (writing a query to
append the data) will of course append the correct values to your main
dataset!

This is a technique we use whenever multiple Access databases are
required (on our standalone field capture systems, for example). If you
search for setting autonumber start values you can get chapter and
verse on setting the field value!

5 or so concurrent users per site shouldn't be a problem though!

Or am I missing something ;-)
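The seed idea above can be illustrated end to end. This is only a sketch: it uses SQLite's AUTOINCREMENT counter as a stand-in for an Access AutoNumber field, and the table name and seed values are invented. Each site's counter is started in its own range, so rows created independently never share an ID:

```python
import sqlite3

def make_site_db(seed):
    # In-memory stand-in for one site's database; `seed` plays the
    # role of the AutoNumber start value in Access.
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE TABLE resource "
        "(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
    )
    # Prime the counter: insert and delete a throwaway row with
    # id = seed - 1; AUTOINCREMENT remembers the high-water mark,
    # so the next real insert starts at `seed`.
    con.execute("INSERT INTO resource (id, name) VALUES (?, 'seed')",
                (seed - 1,))
    con.execute("DELETE FROM resource")
    return con

east = make_site_db(10_000_000)
west = make_site_db(50_000_000)

east.execute("INSERT INTO resource (name) VALUES ('east record')")
west.execute("INSERT INTO resource (name) VALUES ('west record')")

east_ids = [r[0] for r in east.execute("SELECT id FROM resource")]
west_ids = [r[0] for r in west.execute("SELECT id FROM resource")]
# The two ranges never overlap, so a later append query can merge
# the tables without renumbering any keys.
```

In Access itself the seed would presumably be set once per site on the AutoNumber field (e.g. via Jet 4.0 DDL such as `ALTER TABLE ... ALTER COLUMN id COUNTER(50000000, 1)` run through ADO); after that each site just uses its database normally.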

Jan 17 '07 #19

P: n/a
Hey Frog,
Migrating the backend to a SQL Server has been one consideration but
from what I've read in the newsgroup the performance is quite slow.
What has your past experience been on SQL Server? I also like the
greater security that's offered via SQL Server.

Keeping some lookup files in the local database and the transaction
tables on the backend is interesting. I haven't thought about that
before. It's an interesting solution. Keeping the lookup tables in
sync, should they change, would be a problem, but these are slowly
changing dimensions.
Thanks for your suggestions.
The Frog wrote:
Hi again,

Another question regarding the operation of the database: is the
database having to operate in real time, that is to say, is this a
transaction processing type database, or more of a data storage and
reporting database. The reason I ask, is that it may be possible to
make a local copy of some tables to keep with the front end, so that
processing load is not on the "server" per se. In theory it may be
possible to split the workload between the client machines and the back
end database by keeping copies of non real-time information locally. It
really depends on the design, needs, and operation of the database app.

As another thought, it may also be possible to build some shortcuts to
the data processing used for reporting, by using for example some
temporary tables that hold the underlying data used for generating the
reports. Again it depends on the design and needs of the app. But these
things might be worth considering.

There is one thing that can always improve on the performance of an
application, and that is to always "keep the wires" cold. The less need
to utilise the network for a process the better. For example, if all
the processing for a report / query is being done on a client machine,
but the data is being pulled from the server, you may experience loss
of performance while the data is moved back and forth along the
network. As an alternative the query could be placed in the back end,
and then using programming like ADO / DAO you could "run" the query on
the back end and only the results would come through. In theory. In
practice of course, this is Access and the results may not match the
theory.

Also worth considering, especially if the application is split, it to
move the database to SQL Server 2005. The Express version is free, and
will offer you the ability to handle data processing (again in theory)
at much higher levels of performance for this type of situation. You
can definitely run the underlying query (view) on the server, and send
only the results down the wires - and this saves on a lot of bandwidth
(depends how big the result set is of course...).

Just a few ideas to play with :)

Cheers

The Frog
Jan 17 '07 #20

P: n/a
Hey Purpleflash,
Thanks! Didn't know you can specify the starting number for these
keys. That's just perfect! Between you and Frog you guys have given
me a ton of ideas. I'm interested in the SQL Server method as it offers
much better security, and that is one of our (Access') weak points, but
this suggestion is a really simple one.

purpleflash wrote:
I know this is a lo-tech answer to the autonumber 'problem' but start
one access database autonumber field at say 50,000,000 and the other at
10,000,000 then merging them will no longer be a problem until you
reach 49,999,999. Physically merging the data (writing a query to
append the data) will of course append the correct values to your main
dataset!

This is a technique we use whenever multiple Access databases are
required (on our standalone field capture systems, for example). If you
search for setting autonumber start values you can get chapter and
verse on setting the field value!

5 or so concurrent users per site shouldn't be a problem though!

Or am I missing something ;-)
Jan 17 '07 #21

P: n/a
David W. Fenton wrote:
lo*******@lycos.com wrote in
news:11**********************@11g2000cwr.googlegroups.com:
I just don't understand what you're talking about. You aren't using
TS, so I assume you mean that you have a bunch of workstations with
front ends connecting to a file server with the back end. When you
say there are performance issues in *that* context, then I have to
ask:

Where?

1. On the workstations?

2. On the file server?
If you go back and re-read the posts it's clear that we're only talking
about the file server.

As for the performance degradation you might want to read up:
http://www.granite.ab.ca/access/performancefaq.htm

Do you think I'm an inexperienced idiot?
No but it's quite clear that you think all others are. Go easy on the
coffee.

One of my clients is running an Access app on TS with 10 users,
shared with a half dozen users running Quickbooks on the same TS.
There are no performance issues.
You don't mention how many simultaneous TS sessions are open, and more
importantly, what kind of activity they're doing. If they're doing
read functions then 10 users is nothing.

*What* performance issue are you talking about? There are a dozen
different issues in Tony's performance FAQ.
Exactly my point. Many variables.
From reading other threads I'm seeing that random keys may be the
real solution to this. TS is a possibility but I know for a fact that
performance will degrade quickly once a few sessions kick off
*processor* intensive activities. Having 10-20 people doing read
functions isn't going to cost much, and for this the TS solution
sounds about right.

Huh? The issue of reading the back end data file is no different on
a terminal server than it is with workstations connecting across a
LAN.
Huh? As the original post states, if I create a copy on the East Coast
I will have conflicts in unique keys once the data is merged. Since Frog
and the other fellow came up with real possible solutions (and your TS
solution is not bad either) then creating a local copy for the folks on
the East Coast is possible. There will be no performance lag due to
the distance and a nightly sync can be performed.
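The "random keys" idea mentioned earlier in the thread can be sketched concretely. This is an illustration only, with SQLite and `uuid4` standing in for an Access ReplicationID (GUID) AutoNumber field; the table layout is invented. Because each site mints globally unique keys, a nightly sync can copy rows straight across with no renumbering:

```python
import sqlite3
import uuid

def site_db():
    # Hypothetical schema; in Access the id column would be an
    # AutoNumber field of type ReplicationID (a GUID), not TEXT.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE resource (id TEXT PRIMARY KEY, name TEXT)")
    return con

def new_row(con, name):
    # Each site mints its own random key; uuid4 collisions are
    # negligible, so no coordination between coasts is needed.
    key = str(uuid.uuid4())
    con.execute("INSERT INTO resource (id, name) VALUES (?, ?)",
                (key, name))
    return key

east, west = site_db(), site_db()
made = [new_row(east, "e1"), new_row(east, "e2"), new_row(west, "w1")]

# Nightly sync: rows copy straight across with their keys intact,
# because GUIDs from the two sites cannot clash.
for row in west.execute("SELECT id, name FROM resource"):
    east.execute("INSERT INTO resource (id, name) VALUES (?, ?)", row)

merged = {r[0] for r in east.execute("SELECT id FROM resource")}
```

The trade-off is that GUID keys are wider and less readable than sequential numbers, but they remove the merge conflict entirely.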

Jan 17 '07 #22

P: n/a
lo*******@lycos.com wrote in
news:11**********************@v45g2000cwv.googlegroups.com:
David W. Fenton wrote:
lo*******@lycos.com wrote in
news:11**********************@11g2000cwr.googlegroups.com:
I just don't understand what you're talking about. You aren't
using TS, so I assume you mean that you have a bunch of
workstations with front ends connecting to a file server with the
back end. When you say there are performance issues in *that*
context, then I have to ask:

Where?

1. On the workstations?

2. On the file server?

If you go back and re-read the posts it's clear that we're only
talking about the file server
Well, yes, I see that *you* are talking about the file server, but
so far, I've been unable to figure out why you continue to obsess on
that issue.
As for the performance degradation you might want to read up:
http://www.granite.ab.ca/access/performancefaq.htm

Do you think I'm an inexperienced idiot?

No but it's quite clear that you think all others are. Go easy on
the coffee.
I think you look like an idiot when you continue to reply to my
posts as though I'm discussing anything other than a TS solution,
which is by far the simplest, most efficient and cheapest way to
support users on two coasts who need to share data.
>One of my clients is running an Access app on TS with 10 users,
shared with a half dozen users running Quickbooks on the same TS.
There are no performance issues.

You don't mention how many simultaneous TS sessions are open, and
more importantly, what kind of activity are they doing. If
they're doing read functions then 10 users is nothing.
At any one time there can be 1 to 10 open for the Access app, and
often all six open for the QB users (especially at times when they
are all trying to get their financials ready for the CFO). The
number of simultaneous users on the Access app depends on the time
of year -- part of the year, nobody is using it at all. At other
times, at least 5 of the 10 are *always* using it.

The app is read-only for 5 of the TS users and read-write for the
other 5. They don't add records, only edit them (the records are added
by the team leaders in the corporate office, over the local LAN,
rather than through the TS).

But, as I said, the only issue here that is different from running
the app with workstations on a LAN with the same number of users is
RAM usage on the TS, which has never been a problem with the 1GB on
this TS (even though I'm conservative in provisioning 128MBs per
user -- more RAM could be very easily added).

In any event, I've already posted all of this information.
>*What* performance issue are you talking about? There are a dozen
different issues in Tony's performance FAQ.

Exactly my point. Many variables.
And which ones are you saying are relevant to TS that aren't
relevant to standard workstation front end to file server back end
configurations?

There's nothing specific to TS, so there are no real performance
issues specific to that solution.
From reading other threads I'm seeing that random keys may be
the real solution to this. TS is a possibility but I know for a
fact that performance will degrade quickly once a few sessions
kick off *processor* intensive activities. Having 10-20 people
doing read functions isn't going to cost much, and for this the
TS solution sounds about right.

Huh? The issue of reading the back end data file is no different
on a terminal server than it is with workstations connecting
across a LAN.

Huh? As the original post states, if I create a copy on the East
Coast I will have conflicts in unique keys once the data is merged.
Since Frog
The poster going by the name "Frog" did not even come close to
posting a solution. His post was full of factual inaccuracies.
and the other fellow came up with real possible solutions (and
your TS solution is not bad either) then creating a local copy for
the folks on the East Coast is possible. There will be no
performance lag due to the distance and a nightly sync can be
performed.
Huh? What the heck are you talking about?

The whole point of a TS solution is that you have only one back end,
edited by all the users on both the East and West Coasts. And there
is no front end on the workstations of the users connecting to the
TS -- they only run the remote desktop client.

So, all file I/O is on the terminal server (no file server involved
for the TS users) for the TS users, unless the data file is stored
on a different server on the TS's local LAN. Then the *other* file
server is involved, not the TS.

In any event, the issue for the TS users is not going to be file
I/O, because that's local to their remote TS sessions. You don't
seem to understand this.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 17 '07 #23

P: n/a
"purpleflash" <ki**@bgs.ac.ukwrote in
news:11**********************@q2g2000cwa.googlegro ups.com:
I know this is a lo-tech answer to the autonumber 'problem' but
start one access database autonumber field at say 50,000,000 and
the other at 10,000,000 then merging them will no longer be a
problem until you reach 49,999,999. Physically merging the data
(writing a query to append the data will of course append the
correct values to your main dataset!
There is absolutely no reason to keep two separate databases that
have to be merged.

None.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 17 '07 #24

P: n/a
lo*******@lycos.com wrote:
>Huh? As the original post stats if I create a copy on the East Coast I
will have conflict in unique keys once the data is merged. Since Frog
and the other fellow came up with real possible solutions (and your TS
solution is not bad either) then creating a local copy for the folks on
the East Coast is possible. There will be no performance lag due to
the distance and a nightly sync can be performed.
If you're still worried about merging the two sets of data,
you can use an Insert Into query and modify the key values
from the west table so they are distinct from the east's
keys. I.e. you can insert values into an autonumber field
as long as there is no conflict. Maybe something like:

INSERT INTO east
SELECT 1000000+key, fa, fb, ...
FROM west

After you compact the result, the autonumber field will pick
up from the highest number in the merged table.
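The same append can be exercised against a throwaway SQLite copy of the two tables (a sketch only: the table names follow the query above, and the offset of 1,000,000 assumes no west key ever reaches that value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE east (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE west (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    INSERT INTO east (name) VALUES ('e1'), ('e2');
    INSERT INTO west (name) VALUES ('w1'), ('w2');
""")

# Same shape as the Access query above: shift every west key above
# the east range while appending, so nothing collides.
OFFSET = 1_000_000
con.execute(
    "INSERT INTO east (id, name) SELECT id + ?, name FROM west",
    (OFFSET,),
)

merged_ids = [r[0] for r in con.execute("SELECT id FROM east ORDER BY id")]

# The counter continues past the highest merged id, so rows added
# after the merge cannot collide with the imported ones either
# (in Access, this is what the compact step guarantees).
con.execute("INSERT INTO east (name) VALUES ('e3')")
next_id = con.execute("SELECT max(id) FROM east").fetchone()[0]
```

Note that if any other tables hold foreign keys pointing at the west ids, those columns would need the same offset applied in the same pass.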

Once that is done, the TS approach is far and away the
easiest to manage, most efficient and certainly over time
the least expensive. If you can get past being irritated
with David's responses, put your preconceptions aside, dig
out the facts in his replies and think about it. He really
does know what he's talking about.

--
Marsh
Jan 18 '07 #25

P: n/a
Hey Marsh,
I agree with you that the TS solution sounds the easiest. I'm
hesitant about this solution because we have a maximum of 3 login
attempts before an account needs to be reset manually. That, and the
fact that a portion of this blade server is hosting installed
applications, means I can't afford for this server to lock up. Access
locks up too often and we need to recompile the backend. I have a
feeling that if I opt for the TS
solution there will be a lot of manual maintenance involved. What I
would appreciate is someone with a real solid experience with this
approach.

I can see people using TS to login and getting their information in a
hot minute, but having it open all day long and timing out is just too
unstable for my taste. The settings for the TS sessions are set by a
higher authority and are out of my hands.

As I mentioned, I'll need to do some real world tests and see what
results we get. I'll try out the SQL Server approach first then try
out the TS approach.

It's amazing how many good ideas you can get from these newsgroups. As
far as getting past the irritating noise...that's just asking too much
of me. Now for the real work!

Marshall Barton wrote:
lo*******@lycos.com wrote:
Huh? As the original post stats if I create a copy on the East Coast I
will have conflict in unique keys once the data is merged. Since Frog
and the other fellow came up with real possible solutions (and your TS
solution is not bad either) then creating a local copy for the folks on
the East Coast is possible. There will be no performance lag due to
the distance and a nightly sync can be performed.

If you're still worried about merging the two sets of data,
you can use an Insert Into query and modify the key value
from the west table so they are distinct from the east's
keys. I.e. you can insert values into an autonumber field
as long as there is no conflict. Maybe something like:

INSERT INTO east
SELECT 1000000+key, fa, fb, ...
FROM west

After you compact the result, the autonumber field will pick
up from the highest number in the merged table.

Once that is done, the TS approach is far and away the
easiest to manage, most efficient and certainly over time
the least expensive. If you can get past being irritated
with David's responses, put your preconceptions aside, dig
out the facts in his replies and think about it. He really
does know what he's talking about.

--
Marsh
Jan 18 '07 #26

P: n/a
Hi again :-)

The use of a separated backend from the front end is potentially a very
reliable and cost effective way of solving remote data handling issues.
The approach to take is relatively independent of the underlying choice
of technology (SQL Server, MS Access, Some custom service app with a
Jet database, etc...). The idea driving it is to keep the wires
(communications) to as little as is actually needed to get the job done
in a reliable manner.

Simply put the method looks like this:
- The client application is "cut-off" per se from direct access to the
tables / data in the back end database. In this way there are no direct
reads / writes.

- The client application can create / close a connection to the backend
when actual data movement is required. There are multiple ways to
accomplish this. The most effective is to quite literally have a
"server" application that is listening on the network for requests. If
you consider this in a purely MS Access environment that would mean
that you have an Access application built that actually does the work
of retrieving data from the backend database, and hands the results
only to the client front end application. This is what is known as a
client / server type architecture.

As an example, the frontend Access Application really only works with
local data, and when data is required from / to the backend the request
for information can be sent to the backend server application. You can
use network programming for this. You would for example send only a
text string over the network to the backend (containing the specifics
needed for the data request, such as any variable values, the specific
query or function name you want the backend to perform / use, etc, and
some form of identifier information so the backend knows who to reply
to / log as having requested the actions or whatever else is needed).
The frontend client application then waits for the backend application
to return the results over the network, again probably as text / xml.
This usually limits the data flowing back and forth quite a bit, unless
the result sets are very large.

Used in this way, you can, even with Access, implement a lot of data
safety handling to ensure reliable operation. The network listener in
the back end application can hand the requests to a queue, for example
(if it suits), and thereby avoid locking issues with the database
tables and multiple concurrent requests, as it is the only point that
actually reads / writes data to the backend database. You can pretty
much code the back end to handle data any way you wish. This gives you
an enormous amount of flexibility in the application design, and as
such you can build it specifically for performance. You can keep some
data up at the frontend for quick access when possible, especially if
it doesn't change often, and the backend will perform better because it
is only performing a limited range of tasks.
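The request/response shape described above can be sketched in miniature. This is a toy illustration only: Python sockets and SQLite stand in for the Access front end and back end, and the request name, table layout, and JSON framing are all invented. The client never touches the tables; it sends one small text request and gets back only the result rows:

```python
import json
import socket
import sqlite3
import threading

# Canned requests the server is willing to run; clients can only
# name one of these, never issue raw reads/writes.
QUERIES = {"resource_by_region": "SELECT name FROM resource WHERE region = ?"}

def serve_once(server_sock, con):
    # Accept one connection, run the named query with the supplied
    # parameters, and send back only the result rows as JSON text.
    client, _ = server_sock.accept()
    with client:
        request = json.loads(client.makefile().readline())
        rows = con.execute(QUERIES[request["query"]],
                           request["params"]).fetchall()
        client.sendall((json.dumps(rows) + "\n").encode())

con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE resource (name TEXT, region TEXT)")
con.executemany("INSERT INTO resource VALUES (?, ?)",
                [("alpha", "east"), ("beta", "west"), ("gamma", "east")])

server = socket.create_server(("127.0.0.1", 0))  # pick any free port
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server, con), daemon=True).start()

# Client side: one small text request out, one small result set back;
# the bulk of the data never crosses the wire.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall((json.dumps({"query": "resource_by_region",
                           "params": ["east"]}) + "\n").encode())
    result = json.loads(c.makefile().readline())
```

In a real deployment the listener would loop, queue requests, and log callers, as described above; the point of the sketch is only the traffic pattern.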

Other flexibilities that come into it, depending on how you want to
design the application, are that changes to the back-end design don't
necessarily have to affect the front end at all, as the backend
application is really only handing back results, so if there are
structural changes made to the backend your code there can still make
the results look the same to the client.

Another thought here that can make network latency / performance issues
more acceptable to the users, is giving feedback to them on what the
application is doing. For example you could have a label in one corner
of a form that keeps the user informed with things like: Asking server
for data, receiving results, processing report, etc... You would be
amazed at how much a user will relax when they know what is going on.
Performance issues and application design cover more than just access
to tables and running reports :-)

Doing this type of approach with a tool such as SQL Server is a little
more complex, but the results can be outstanding if properly designed.
It is important to note however that SQL Server is not nearly as easy
as Access to work with, and development may take longer. The advantages
of SQL server though, if used in this context, can be great indeed.
Apart from the security, the reliability of SQL Server as a platform
for data is far greater than most other databases available. There are
a few that compete, such as Oracle (which also has a free version), and
MySQL to name a few. If you want to head down the path of working with
SQL Server, and I would advise you to consider it, you will end up with
a very robust, reliable, and scalable solution if properly designed. If
you are new to SQL Server, then I would suggest doing some background
reading to get an idea of what you are in for. It is also interesting
to note, that the Visual Studio Express programming environments such
as VB and C# etc. all have heavy integration for working with SQL
Server, and may provide you a good starting point to get your feet wet.

I have typically found that solving a problem in a corporate
environment is very much a matter of going through a lot of potential
options, discussing with people the ideas both good and bad, and
determining a suitable way through. Whatever solution you look at I
would try and involve as many internally interested parties as possible
when it comes to the ideas stage. In your case it would seem that your
network people (if you have them), are going to play an important role
here in determining what can and cannot be done.

You should always test any solutions first of course too. Taking the
Terminal Services approach, for example, is actually not as simple as
just installing the software. There are considerations about the network
environment, possible security risks, licensing costs, maintenance and
support, and is anyone in your environment willing to actually take on
the responsibility of implementing an infrastructure change. The same
would apply to putting an SQL Server on the network.

My experiences in this area are that things are never as simple as they
first seem, especially when other people are involved. If you want to
get a good grip on designing solutions with MS Access, there is a book
on the MS Press website by John Viescas called "Building Microsoft
Access Applications" (ISBN: 9780735620391). This guy is very much the
authority on the subject, and when I need to figure out how to approach
a problem its this book that I go to. The website for the book is
http://www.microsoft.com/MSPress/books/7107.aspx

There is one last thing to consider here, and that is this: All the
information / opinions you receive here are just that - opinions. We
dont have the full working knowledge of your application and your needs
or environment, only an overview. You will almost always get some idiot
"flaming" others and ranting or raving about one thing or another.
Usually there is not one single answer to a problem, but many possible
ways to solve things. Treat everything you read as suggestions, not
solutions, and check things out for yourself. You can figure out pretty
quickly what will and won't work for you, and if you get too stuck then
ask for more help.

I am going to wish all the best with this, and withdraw from this
thread before another self-aggrandizing idiot whose ego is writing
cheques his brain can't cash decides to start flaming again. I would
urge you to get the book I mentioned before, it will provide you many
valuable insights to solving this current problem as well as others you
may encounter in the future. And I strongly believe that a well thought
out client / server type approach will suit you well - but as I said
before don't take my word for it, do some research and speak to your
co-workers and network people to figure out what can and cannot be
done.

Best of luck and all Success

The Frog

Jan 18 '07 #27

P: n/a
lo*******@lycos.com wrote in
news:11**********************@m58g2000cwm.googlegroups.com:
I agree with you that the TS solution sounds the easiest. I'm
hesitant about this solution because we have a maximum of 3 login
attempts before an account needs to be reset manually.
Er, why do that? And why is it an issue logging onto TS more than it
would be logging on to the workstation?
That and the fact that portion
of this blade server is hosting install applications I can't
afford this server to lock up.
The server won't lock up.
Access locks up too often and we need to
recompile the backend.
Huh? Access doesn't lock up in any situation I know of, except when
I've got an error in my code (such as an infinite loop, or a bug
that happens while the screen painting is off).

Secondly, what do you mean by "recompile the backend"? The back end
shouldn't have any code in it to be compiled, so you must actually
mean "compact".

And if a TS logon session locks up, you just kill it. TS will not
itself lock up. That's been the case for all NT-based versions of
Windows as long as they've existed and been running 32-bit
applications.
I have a feeling that if I opt for the TS
solution there will be a lot of manual maintenance involved. What
I would appreciate is someone with a real solid experience with
this approach.
Zero maintenance. And what maintenance there is is much much easier
than when you've got the damned app installed on a bunch of
workstations across the country.

Consider this: if you don't use the TS solution, you've got to
program some method to combine your databases, and some method for
retrieving the other data file, and for distributing the combined
data file back to the two sites. With TS, you don't have to do
anything at all -- no changes to your app, not new coding, no
distribution issues for the data file.
I can see people using TS to login and getting their information
in a hot minute, but having it open all day long and timing out is just
too unstable for my taste.
It isn't unstable. If the workstation of the person logged into the
TS crashes, nothing happens on the terminal server. Depending on how
you've got it set up, either the logged-on session sits there
waiting for the user to reconnect to it, or the session ends
(closing out the application in an orderly fashion). There's no
instability here at all. In fact, if you're storing your back end on
the TS itself, there's less chance for corruption than if you're
storing it on a file server to which your workstations connect
across the LAN.
The settings for the TS sessions are set by a
higher authority and are out of my hands.
The default settings should work just fine. I've never once mucked
with any of the TS settings on any of the servers where I use it or
where my clients' apps use it -- out of the box, it just works. I
actually have admin permissions on most of those servers, but have
never needed to use them to make any adjustments to TS settings. The
only exception would be with one particular Win2K server where I
seem to often lose connections for some reason, and then sometimes
have to kill the old session from the TS management applet. But I
have always attributed that to being Win2K, since I've never had
that problem on any of the Win2K3 Server terminal servers that I've
used.
As I mentioned, I'll need to do some real world tests and see what
results we get. I'll try out the SQL Server approach first then
try out the TS approach.
That's incredibly foolish. Try TS first, since it doesn't require
any changes to your application for you to try it.
It's amazing how many good idea you can get from these newsgroups.
And yet, you seem to be determined to ignore the advice you're
getting.
As
far as getting past the irritating noise...that's just asking too
much of me. Now for the real work!
Don't worry -- I've killfiled you because you're obviously a
complete idiot, so if you don't reply to any of my posts, you'll
never get any more advice from *me*.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 18 '07 #28

P: n/a
"The Frog" <an**************@eu.effem.comwrote in
news:11**********************@s34g2000cwa.googlegroups.com:
The use of a separated backend from the front end is potentially a
very reliable and cost-effective way of solving remote data
handling issues. The approach to take is relatively independent of
the underlying choice of technology (SQL Server, MS Access, Some
custom service app with a Jet database, etc...). The idea driving
it is to keep the wires (communications) to as little as is
actually needed to get the job done in a reliable manner.
And the absolutely most reliable method for that IS TO RUN WINDOWS
TERMINAL SERVER.
Simply put the method looks like this:
- The client application is "cut-off" per se from direct access to
the tables / data in the back end database. In this way there are
no direct reads / writes.

- The client application can create / close a connection to the
backend when actual data movement is required. There are multiple
ways to accomplish this. The most effective is to quite literally
have a "server" application that is listening on the network for
requests. If you consider this in a purely MS Access environment
that would mean that you have an Access application built that
actually does the work of retrieving data from the backend
database, and hands the results only to the client front end
application. This is what is known as a client / server type
architecture.
You are SO FULL OF SHIT. With a Jet back end THIS CAN NEVER BE DONE.
As an example, the frontend Access Application really only works
with local data, and when data is required from / to the backend
the request for information can be sent to the backend server
application.
Of which there is none if you're using a Jet back end.
You can
use network programming for this.
WTF is "network programming?"
You would for example send only a
text string over the network to the backend (containing the
specifics needed for the data request, such as any variable
values, the specific query or function name you want the backend
to perform / use, etc, and some form of identifier information so
the backend knows who to reply to / log as having requested the
actions or whatever else is needed). The frontend client
application then waits for the backend application to return the
results over the network, again probably as text / xml. This
usually limits the data flowing back and forth quite a bit, unless
the result sets are very large.
That's more or less how a database server works (which is
client/server), but there is no way in hell that it works that way
with a Jet back end.
Used in this way, you can, even with Access, implement a lot of
data safety handling to ensure reliable operation. The network
listener
Jet has no "network listener."
in
the back end application
Jet has no "back end application."
can hand the requests to a queue, for example
(if it suits), and thereby avoid locking issues with the database
tables and multiple concurrent requests, as it is the only point
that actually reads / writes data to the backend database. You can
pretty much code the back end to handle data any way you wish.
This gives you an enormous amount of flexibility in the
application design, and as such you can build it specifically for
performance. You can keep some data up at the frontend for quick
access when possible, especially if it doesn't change often, and
the backend will perform better because it is only performing a
limited range of tasks.
What you've described is only relevant to using SQL Server or MySQL
or some other database server for the back end. If you're using an
MDB to store your data, then none of these things apply (even though
you claimed at the top that they do).
Other flexibilities that come into it, depending on how you want
to design the application, are that changes to the back-end design
don't necessarily have to affect the front end at all, as the
backend application is really only handing back results, so if
there are structural changes made to the backend your code there
can still make the results look the same to the client.

Another thought here that can make network latency / performance
issues more acceptable to the users, is giving feedback to them on
what the application is doing. For example you could have a label
in one corner of a form that keeps the user informed with things
like: Asking server for data, receiving results, processing
report, etc... You would be amazed at how much a user will relax
when they know what is going on. Performance issues and
application design cover more than just access to tables and
running reports :-)

Doing this type of approach with a tool such as SQL Server is a
little more complex,
My god, but you are a complete moron.

[snip]

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 18 '07 #29

P: n/a
Thank you for making my point :-)

Jan 19 '07 #30

P: n/a
Woohoo! Let the party start!

David W. Fenton wrote:
lo*******@lycos.com wrote in
news:11**********************@m58g2000cwm.googlegroups.com:
>
... so if you don't reply to any of my posts, you'll
never get any more advice from *me*.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 19 '07 #31

P: n/a
Thanks Frog...I do appreciate your help and suggestions.
The Frog wrote:
Hi again :-)

The use of a separated backend from the front end is potentially a very
reliable and cost-effective way of solving remote data handling issues.
The approach to take is relatively independent of the underlying choice
of technology (SQL Server, MS Access, Some custom service app with a
Jet database, etc...). The idea driving it is to keep the wires
(communications) to as little as is actually needed to get the job done
in a reliable manner.

Simply put the method looks like this:
- The client application is "cut-off" per se from direct access to the
tables / data in the back end database. In this way there are no direct
reads / writes.

- The client application can create / close a connection to the backend
when actual data movement is required. There are multiple ways to
accomplish this. The most effective is to quite literally have a
"server" application that is listening on the network for requests. If
you consider this in a purely MS Access environment that would mean
that you have an Access application built that actually does the work
of retrieving data from the backend database, and hands the results
only to the client front end application. This is what is known as a
client / server type architecture.

As an example, the frontend Access Application really only works with
local data, and when data is required from / to the backend the request
for information can be sent to the backend server application. You can
use network programming for this. You would for example send only a
text string over the network to the backend (containing the specifics
needed for the data request, such as any variable values, the specific
query or function name you want the backend to perform / use, etc, and
some form of identifier information so the backend knows who to reply
to / log as having requested the actions or whatever else is needed).
The frontend client application then waits for the backend application
to return the results over the network, again probably as text / xml.
This usually limits the data flowing back and forth quite a bit, unless
the result sets are very large.
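The text-request/response flow described above can be sketched in a few lines. Since the thread is about Access/VBA, treat this purely as an illustration of the pattern in Python: the query name, the "|" delimiter, and the stand-in result string are all made up for the example, and the handler simulates what the back-end Access application would do against its tables.

```python
import socket
import threading

# Hypothetical request handler: maps a query name and argument to a result
# string, standing in for the back-end app doing the actual table reads.
def handle_request(request: str) -> str:
    query, _, arg = request.partition("|")
    if query == "GetCustomer":
        return f"id={arg};name=Example Corp"   # stand-in result row
    return "ERROR|unknown query"

def serve_one(sock: socket.socket) -> None:
    # Back-end side: accept one connection, read one text request,
    # send back one text reply.
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(4096).decode("utf-8")
        conn.sendall(handle_request(request).encode("utf-8"))

def ask_backend(host: str, port: int, request: str) -> str:
    # Front-end side: open a connection only for the duration of the
    # request, send the text string, then wait for the reply.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(request.encode("utf-8"))
        conn.shutdown(socket.SHUT_WR)          # signal end of request
        chunks = []
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
        return b"".join(chunks).decode("utf-8")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                  # pick any free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_one, args=(server,))
t.start()
reply = ask_backend("127.0.0.1", port, "GetCustomer|42")
t.join()
server.close()
print(reply)   # → id=42;name=Example Corp
```

The point of the design is visible in the byte counts: only a short request string and a short result string cross the wire, instead of whole table pages.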

Used in this way, you can, even with Access, implement a lot of data
safety handling to ensure reliable operation. The network listener in
the back end application can hand the requests to a queue, for example
(if it suits), and thereby avoid locking issues with the database
tables and multiple concurrent requests, as it is the only point that
actually reads / writes data to the backend database. You can pretty
much code the back end to handle data any way you wish. This gives you
an enormous amount of flexibility in the application design, and as
such you can build it specifically for performance. You can keep some
data up at the frontend for quick access when possible, especially if
it doesn't change often, and the backend will perform better because it
is only performing a limited range of tasks.
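The queue idea above can be sketched like this (again in Python rather than VBA, and with an in-memory list standing in for the Jet back end): many producers enqueue write requests, but only one worker thread ever touches the data store, so there is never more than one concurrent writer and no lock contention on the tables.

```python
import queue
import threading

data_store = []                          # stand-in for the back-end table
write_queue: "queue.Queue" = queue.Queue()

def writer_worker() -> None:
    # The single point that actually writes to the back end.
    while True:
        item = write_queue.get()
        if item is None:                 # sentinel: shut down
            break
        data_store.append(item)          # the only place writes happen
        write_queue.task_done()

worker = threading.Thread(target=writer_worker)
worker.start()

# Several "clients" enqueue write requests; none writes directly.
for i in range(5):
    write_queue.put(f"record-{i}")

write_queue.join()                       # wait until all writes are applied
write_queue.put(None)                    # stop the worker
worker.join()
print(len(data_store))                   # → 5
```

Because the worker drains the queue in order, requests are also serialized first-in-first-out, which makes logging who asked for what straightforward.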

Other flexibilities that come into it, depending on how you want to
design the application, are that changes to the back-end design don't
necessarily have to affect the front end at all, as the backend
application is really only handing back results, so if there are
structural changes made to the backend your code there can still make
the results look the same to the client.

Another thought here that can make network latency / performance issues
more acceptable to the users, is giving feedback to them on what the
application is doing. For example you could have a label in one corner
of a form that keeps the user informed with things like: Asking server
for data, receiving results, processing report, etc... You would be
amazed at how much a user will relax when they know what is going on.
Performance issues and application design cover more than just access
to tables and running reports :-)
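The feedback idea is just a status callback wrapped around each long-running phase. In Access this would mean setting a label's Caption (and repainting the form) before each step; the Python below only illustrates the pattern, with the phase names and the pretend fetch made up for the example.

```python
# Pattern sketch: report each phase of a long operation before doing it,
# so the UI never goes silent. Here the "label" is just a list of messages.
status_log = []

def set_status(message: str) -> None:
    status_log.append(message)       # stand-in for updating a form label

def run_report() -> str:
    set_status("Asking server for data...")
    data = ["row1", "row2"]          # pretend network fetch
    set_status("Receiving results...")
    set_status("Processing report...")
    return f"{len(data)} rows"

result = run_report()
print(result)          # → 2 rows
print(status_log[0])   # → Asking server for data...
```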

Doing this type of approach with a tool such as SQL Server is a little
more complex, but the results can be outstanding if properly designed.
It is important to note however that SQL Server is not nearly as easy
as Access to work with, and development may take longer. The advantages
of SQL server though, if used in this context, can be great indeed.
Apart from the security, the reliability of SQL Server as a platform
for data is far greater than most other databases available. There are
a few that compete, such as Oracle (which also has a free version), and
MySQL to name a few. If you want to head down the path of working with
SQL Server, and I would advise you to consider it, you will end up with
a very robust, reliable, and scalable solution if properly designed. If
you are new to SQL Server, then I would suggest doing some background
reading to get an idea of what you are in for. It is also interesting
to note, that the Visual Studio Express programming environments such
as VB and C# etc. all have heavy integration for working with SQL
Server, and may provide you a good starting point to get your feet wet.

I have typically found that solving a problem in a corporate
environment is very much a matter of going through a lot of potential
options, discussing with people the ideas both good and bad, and
determining a suitable way through. Whatever solution you look at I
would try and involve as many internally interested parties as possible
when it comes to the ideas stage. In your case it would seem that your
network people (if you have them), are going to play an important role
here in determining what can and cannot be done.

You should always test any solutions first of course too. To take the
Terminal Services approach for example, is actually not just as simple
as installing the software. There are considerations about the network
environment, possible security risks, licensing costs, maintenance and
support, and is anyone in your environment willing to actually take on
the responsibility of implementing an infrastructure change. The same
would apply to putting an SQL Server on the network.

My experiences in this area are that things are never as simple as they
first seem, especially when other people are involved. If you want to
get a good grip on designing solutions with MS Access, there is a book
on the MS Press website by John Viescas called "Building Microsoft
Access Applications" (ISBN: 9780735620391). This guy is very much the
authority on the subject, and when I need to figure out how to approach
a problem it's this book that I go to. The website for the book is
http://www.microsoft.com/MSPress/books/7107.aspx

There is one last thing to consider here, and that is this: All the
information / opinions you receive here are just that - opinions. We
don't have the full working knowledge of your application and your needs
or environment, only an overview. You will almost always get some idiot
"flaming" others and ranting or raving about one thing or another.
Usually there is not one single answer to a problem, but many possible
ways to solve things. Treat everything you read as suggestions, not
solutions, and check things out for yourself. You can figure out pretty
quickly what will and won't work for you, and if you get too stuck then
ask for more help.

I am going to wish all the best with this, and withdraw from this
thread before another self aggrandizing idiot whose ego is writing
cheques his brain can't cash decides to start flaming again. I would
urge you to get the book I mentioned before, it will provide you many
valuable insights to solving this current problem as well as others you
may encounter in the future. And I strongly believe that a well thought
out client / server type approach will suit you well - but as I said
before don't take my word for it, do some research and speak to your
co-workers and network people to figure out what can and cannot be
done.

Best of luck and all Success

The Frog
Jan 19 '07 #32

This discussion thread is closed

Replies have been disabled for this discussion.