Hi all,
I am working on a portal which uses an MS Access 2003 database. The
problem I am facing is that whenever I receive data from vendors, I
have to upload the whole Access database file to the server.
Initially this was not an issue, but now that the database file has
grown to nearly 50 MB, it has become difficult to upload the file to
the server.
Is there any way to connect remotely to the MS Access database file
and update the tables, or to import tables remotely?
Thanks a lot
Asif
You can use replication/synchronization or you can automate exports from a
local table to a remote table.
Looked at any of that info yet?
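For the export route, something along these lines would do it from VBA.
This is only a rough sketch: it assumes the server copy is reachable as
a file share, and the path and table name are made up for illustration.

Public Sub PushVendorData()
    ' Assumed path to the server copy of the database; adjust to suit.
    Const REMOTE_DB As String = "\\server\share\Portal.mdb"

    With CurrentDb
        ' Clear the remote copy of the table, then append the current
        ' local rows, using Jet's IN clause to address the remote file.
        .Execute "DELETE * FROM tblVendorData IN '" & REMOTE_DB & "'", _
                 dbFailOnError
        .Execute "INSERT INTO tblVendorData IN '" & REMOTE_DB & "' " & _
                 "SELECT * FROM tblVendorData", dbFailOnError
    End With
End Sub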
w_a_n_n_a_l_l_ -@-_s_b_c_g_l_o_b_a_l._n_e_t wrote in
news:ra*********************@newssvr21.news.prodigy.com:

> You can use replication/synchronization or you can automate exports
> from a local table to a remote table.
> Looked at any of that info yet?
In a website situation with an ISP, replication is a no-go, because
you won't be able to install the synchronizer on the ISP's server in
order to be able to run an indirect synchronization.
And if you don't understand what that paragraph is talking about,
you shouldn't be mentioning replication as a solution to a problem.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
> In a website situation with an ISP, replication is a no-go, because

Until I see a reference to a web server, I leave open the possibility
that the server could be a file server.

> you won't be able to install the synchronizer on the ISP's server in
> order to be able to run an indirect synchronization.
Even if there is a web server involved, if it is in-house, as ours is, I can
replicate and synchronize via the share. I don't need to install anything
on the server. The fact that the share is later accessed by web-related
activity is irrelevant.
Even if the server is not in-house, as long as my provider makes a share
available to me I can do the same thing.
FYI: By reference to a "share" I mean a pathway to a folder via the file
system, as through a VPN.
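For what it's worth, once the replicas exist the synchronization itself
is a one-line DAO call. A minimal sketch, assuming the local .mdb has
already been converted to a Design Master and a replica sits on the
share; the paths are made up for illustration.

Public Sub SyncWithShare()
    Dim db As DAO.Database

    ' Open the local Design Master and exchange changes in both
    ' directions with the replica sitting on the file share.
    Set db = DBEngine.OpenDatabase("C:\Data\Portal.mdb")
    db.Synchronize "\\server\share\Portal_Replica.mdb", dbRepImpExpChanges

    db.Close
    Set db = Nothing
End Sub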
"Rick Wannall" <wa*****@notadomain.de> wrote in
news:mW*******************@newssvr14.news.prodigy. com: In a website situation with an ISP, replication is a no-go, because Until I see a reference to web server, I leave open the possibility that server could be a file server.
you won't be able to install the synchronizer on the ISP's server in order to be able to run an indirect synchronization.
Even if there is a web server involved, if it is inhouse, as ours is, I can replicate and synchronize via the share. . . .
Assuming a LAN, yes. Once you're on a WAN, though, that's not the
case at all.
> . . . I don't need to install anything on the server. The fact that
> the share is later accessed by web-related activity is irrelevant.
>
> Even if the server is not in-house, as long as my provider makes a
> share available to me I can do the same thing.
No, you can't. At least not safely. Or with anything like reasonable
speed. Unless, of course, you've got a really big pipe to the ISP.
You clearly don't know what you're talking about with regard to Jet
replication if you aren't taking those very likely aspects of the
situation into account.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
"Rick Wannall" <wa*****@notadomain.de> wrote in
news:u8*******************@newssvr14.news.prodigy. com: FYI: By reference to a "share" I mean a pathway to a folder via the file system, as through a VPN.
Irrelevant if it's a WAN connection. That makes it *possible* to do
direct replication, but it is far, far from being advisable. Anyone
with real replication experience over a WAN would know this.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Why not wait until after the OP identifies the environment and then start
excluding inappropriate options?
"Rick Wannall" <wa*****@notadomain.de> wrote in
news:Zn********************@newssvr21.news.prodigy .com: Why not wait until after the OP identifies the environment and then start excluding inappropriate options?
Because the situations where replication can be appropriately used
are extremely narrow, and the original
description of the problem space sounded very much like one in which
it would not work.
If your history of postings demonstrated any experience with
replication, I wouldn't be ragging on you. But so few Access
developers have actually delved into it in any depth, and there is
such a huge gulf between the promises made in the documentation for
replication and what you can actually do reliably that I feel it's
important to nip in the bud any off-hand suggestion to use
replication.
I have nearly 10 years of firsthand experience with Jet replication,
and there are very few people who approach that. When people spout
off about replication without providing caveats, it is a red flag to
me that they really don't have any deep expertise in the subject.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
I'm not convinced that you're as knowledgeable on this topic as you believe.
The numbers just don't justify the attitude quotient.
For the hell of it, I set up a local master at home and a replica in
Florida, going through my household VPN via DSL to get to the Houston
server and from there via T1 to the Florida server. Not what I would
call an optimal setup for any serious endeavor, but good enough for my
little test.
Master contained 10,000 names and addresses. Replicated that and copied it
to Florida. Then updated all 10,000 rows, increasing text by about 60% in
both name and address fields. Then appended the updated 10,000 rows to the
table, doubling the data exactly.
Path: home -> LAN -> DSL -> Houston office -> T1 -> Florida office
Synch Start 8:55:30
(10,000 records updated, 10,000 records added)
9:05:00
For a more likely case, I trimmed back to update 1,000 and add 1,000
Synch started 9:09:45
(1,000 updated, 1,000 added)
Synch 9:12:45
10,000 records: just under 10 minutes
1,000 records: 3 minutes
Given the availability of the tool, the ease of setup, and the numbers
above, to my own surprise I wouldn't rule out use of synchronization for
updates in this range, even going through my LAN to my DSL to my Houston
office through our T1 to Florida. Especially if I were working on a small
budget and found I could make do with tools I already had handy, and which I
understood somewhat.
Beyond that, anyone who's considering using Access to solve this sort of
problem is not likely to be appending or appending 10,000 records in a
session, and possibly not even the 1,000.
You're clearly a clever guy, but you need to do more field work to give
you some experience to put in line and filter those rants so you can
turn them into analyses of use to someone.
"appending or appending" should read "appending or updating"
Forgot to state: Access XP working on data in Access 2000 format.
w_a_n_n_a_l_l_ -@-_s_b_c_g_l_o_b_a_l._n_e_t wrote in
news:CU*********************@newssvr29.news.prodigy.net:

> I'm not convinced that you're as knowledgeable on this topic as you
> believe. The numbers just don't justify the attitude quotient.
*You* are the one with the attitude, claiming something would work
in a scenario where it is ill-advised.
Sharing a front end may work in many situations, but it's stupid to
do it.
Direct replication over a WAN is far less likely to work reliably on
a long-term basis than sharing a front end. Any failure of
replication over a WAN is going to corrupt your replica set and lead
to massive work in recovering from the failure.
Anyone with significant experience in replication would know this,
either from first-hand experience or from investigation of the
resources available on replication.
As to level of knowledge, just examine the history of the newsgroup
microsoft.public.access.replication. Yes, Michael Kaplan has much
more knowledge of replication than I do, but he doesn't do
replication any longer, so it's left to people like me with
in-the-trenches experience with replication to take up the slack
left by his departure from the field. And you'll see that in that
newsgroup for the last year or so, I've been providing a lot of the
answers.
Where's the evidence of *your* knowledge of replication? Mine is
right there in Google Groups for everyone to see.
> For the hell of it, I set up a local master at home and a replica in
> Florida, going through my household VPN via DSL to get to the Houston
> server and from there via T1 to the Florida server. Not what I would
> call an optimal setup for any serious endeavor, but good enough for
> my little test.
>
> Master contained 10,000 names and addresses. Replicated that and
> copied it to Florida. Then updated all 10,000 rows, increasing text
> by about 60% in both name and address fields. Then appended the
> updated 10,000 rows to the table, doubling the data exactly.
>
> Path: home -> LAN -> DSL -> Houston office -> T1 -> Florida office
> Synch Start 8:55:30
> (10,000 records updated, 10,000 records added)
> 9:05:00
>
> For a more likely case, I trimmed back to update 1,000 and add 1,000
> Synch started 9:09:45
> (1,000 updated, 1,000 added)
> Synch 9:12:45
>
> 10,000 records: just under 10 minutes
> 1,000 records: 3 minutes
>
> Given the availability of the tool, the ease of setup, and the
> numbers above, to my own surprise I wouldn't rule out use of
> synchronization for updates in this range, even going through my LAN
> to my DSL to my Houston office through our T1 to Florida. Especially
> if I were working on a small budget and found I could make do with
> tools I already had handy, and which I understood somewhat.
You're a complete idiot.
The issue is not *simply* the amount of time it takes. It's that a
direct replication opens and writes to the remote database across
the wire. This means that any drop in the connection will result in
corruption of the remote database. The very least that will happen
is a loss of replicability, which means that the remote database
will no longer be part of the replica set. All of its data will have
to be recovered manually.
I've been there and done that. It happened to me before I understood
that direct replication was *not* viable over anything but a LAN.
Indirect replication passes message files back and forth and does
not open the remote replica, so it's completely safe over a
slow/unreliable connection.
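To make the distinction concrete, here is a minimal sketch using the
Jet and Replication Objects (JRO) library. The paths are made up, and
the indirect call only applies once Synchronizers have been configured
through Replication Manager on both ends; treat it as an illustration,
not a drop-in solution.

Public Sub SyncReplica(ByVal useIndirect As Boolean)
    ' Requires a reference to "Microsoft Jet and Replication Objects".
    Dim rep As New JRO.Replica
    rep.ActiveConnection = "C:\Data\Portal.mdb"

    If useIndirect Then
        ' Indirect: drops message files for the remote Synchronizer to
        ' apply; the remote replica is never opened across the wire.
        rep.Synchronize "\\server\share\Portal_Replica.mdb", _
                        jrSyncTypeImpExp, jrSyncModeIndirect
    Else
        ' Direct: opens and writes to the remote replica over the
        ' connection, which is why a dropped WAN link can corrupt it.
        rep.Synchronize "\\server\share\Portal_Replica.mdb", _
                        jrSyncTypeImpExp, jrSyncModeDirect
    End If
End Sub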
> Beyond that, anyone who's considering using Access to solve this
> sort of problem is not likely to be appending or appending 10,000
> records in a session, and possibly not even the 1,000.
You have absolutely *no* comprehension of how replication really
works if you consider your test to be a large data update for
replication. There's a helluva lot more involved in synching two
replicas than merely propagating updates and appends. You have taken
the simplest possible case and are trying to make it stand in for
real-world situations. If you understood more about replication
you'd see why your test case is not much of a stress on the system
and why it doesn't actually prove anything at all about the
long-term safety of direct replication over a WAN.
> You're clearly a clever guy, but you need to do more field work to
> give you some experience to put in line and filter those rants so
> you can turn them into analyses of use to someone.
Stop giving advice on subjects in which you clearly lack the
expertise and I'll not criticise your posts.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
No one in his right mind would ignore the issues you mention. Neither
would he make the leaps you routinely make.
"Rick Wannall" <wa*****@notadomain.de> wrote in
news:e6*******************@newssvr27.news.prodigy. net: No one in his right mind would ignore the issues you mention. . .
Yet, you made a recommendation for using replication without the
important caveats about using it in a WAN environment (which is what
was implied in the original question).
> . . . Neither would he make the leaps you routinely make.
I didn't make any leaps. I provided the important cautions that you
failed to include when you suggested replication as an option.
Replication is complicated.
It doesn't work as reliably as Microsoft would like to make you
think.
It should be avoided in every case where there is an alternative
solution to sharing data.
--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Did you ever get the information you needed?
w_a_n_n_a_l_l_ wrote:

> I'm not convinced that you're as knowledgeable on this topic as you
> believe. The numbers just don't justify the attitude quotient.
>
> For the hell of it, I set up a local master at home and a replica in
> Florida, going through my household VPN via DSL to get to the Houston
> server and from there via T1 to the Florida server. Not what I would
> call an optimal setup for any serious endeavor, but good enough for
> my little test.
When it comes to Replication a very healthy dose of paranoia is
required. I would analogise your test to stepping off the curb into
the street without looking. The danger you are in depends on how busy
that street is, how careful the drivers are, and plain old dumb luck.
Sure, you might get away with it, but one day you're gonna be
flattened. Why take the chance when you can avoid the problem?
Add me to the list of people with years of replication experience that
wouldn't trust your test setup in production with any data I really
wanted to keep.
--
Bri