Bytes IT Community

Compact db corrupts link to another

This is a little strange.

The link in a FE database gets corrupted when the BE gets compacted, and the
FE is not even open.

I have checked. The data is fine in the BE. The data is fine in the link in
the FE before compacting the BE. I shut down the FE and then compact the BE.
Open the FE and the data 'looks' corrupt. In datasheet view the first
columns on the left are okay. Then the last several rows have been shifted
to the right, and some ring-in data replaces the data in the first column as
it moves right. Weird.

Unlink the table and relink it, and everything is good again.

The link is obviously corrupted.

I have restored the files and repeated this and it occurs again.

Why?

Jeff
Jan 9 '06 #1
9 Replies


"Jeff" <je************@asken.com.au> wrote
> The link in a FE database gets corrupted when the BE gets compacted,
> and the FE is not even open.
[snip]


You seem to have independently verified that "relink on startup" is good
advice. Remember the Solutions 2000 example database had a "relink on
startup" section? I'm not certain that it includes an admonition to "relink
on _every_ startup" but that is what I do -- and, oh, my, I've been doing it
for so long that I don't have a reference to where I heard it.
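For what it's worth, the relink-on-every-startup routine can be as small as a loop over the TableDefs collection. This is only a sketch, not the Solutions 2000 code; the GetBackEndPath() helper is hypothetical and stands in for however you store the BE location:

```vba
' Sketch only: relink every attached table at startup (DAO).
' GetBackEndPath() is a hypothetical helper returning the BE file path.
Public Sub RelinkOnStartup()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then        ' only linked tables have a connect string
            tdf.Connect = ";DATABASE=" & GetBackEndPath()
            tdf.RefreshLink                 ' re-reads the remote table definition
        End If
    Next tdf
End Sub
```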

I seem to remember seeing posts indicating similar problems when the System
Administrator did some revising of server storage, etc. Links only contain
pointers to data stored elsewhere, so it makes sense that changing the data
stored elsewhere could lead to difficulties in accessing it.

Larry Linson
Microsoft Access MVP


Jan 9 '06 #2

"Jeff" <je************@asken.com.au> wrote in
news:43***********************@per-qv1-newsreader-01.iinet.net.au:

> The link in a FE database gets corrupted when the BE gets compacted,
> and the FE is not even open.
[snip]


Have you tried compacting the FE? That might clear some of the
cached data in the links.

Table links became problematic with the introduction of A2K, because
MS started storing substantial metadata in the link that doesn't get
updated often enough. I don't know the exact details of what this
metadata is, but it was the source of the slow form-loading problem
that many of us experienced in early A2K development, when we moved an
app from the development to the production environment and updated the
connect strings of the links instead of deleting and recreating them.
This sounds like a very similar problem.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 9 '06 #3

The only problem with relinking every table is the time it takes. Typically,
my systems have many tables, 100-150 plus, and a total relink usually gets
complaints.

When I install a new version this has to be done by resetting the connection
string. Without fail I have users ask why it takes a while to load with new
versions.

Now that I say this, how do you relink? Do you delete the linked table fully
and do a complete relink or just refresh the connection string? Is there any
difference in time?

Jeff

"Larry Linson" <bo*****@localhost.not> wrote in message
news:DRBwf.19818$em5.12905@trnddc05...

> You seem to have independently verified that "relink on startup" is
> good advice.
[snip]

Jan 9 '06 #4

Yes, after compacting the back end after making schema changes,
you have to re-link or compact the front end, although you
normally get away with it if you just add a new field at
the end of a table.

The odd thing is that you can add a new field in the
middle of the table, and links keep working until you
compact the back end!

Then when you compact the BE, the fields don't line
up anymore, and the links are broken.

From which we can infer that when you add a new field
to a table, the database is not really restructured
until you compact.

So you would expect performance implications if you added
a new field to the middle of a table and did not compact.

(david)

"Jeff" <je************@asken.com.au> wrote in message
news:43***********************@per-qv1-newsreader-01.iinet.net.au...

> The link in a FE database gets corrupted when the BE gets compacted,
> and the FE is not even open.
[snip]

Jan 10 '06 #5

This explains a few problems that I have encountered.

Typically, in development I make changes to the development BE and then
compact the MDB and remake the MDE for distribution. I then go to the
client, make the same changes there on their BE, and install the MDE.

Most of the time this works. I assume that as long as I make the changes the
same way (i.e. add a new field in exactly the same position), or the changes
are minor and not critical to the link metadata, I may get away with this,
but if I do it differently on the client BE I am in potential trouble.

So it looks like I should change my link-checking code to delete and remake
links to tables, not just update the connection string.

Mmmm...that's life.

Jeff

"david epsom dot com dot au" <david@epsomdotcomdotau> wrote in message
news:43***********************@lon-reader.news.telstra.net...

> Yes, after compacting the back end after making schema changes,
> you have to re-link or compact the front end.
[snip]


Jan 10 '06 #6

"Jeff" <je************@asken.com.au> wrote in
news:43***********************@per-qv1-newsreader-01.iinet.net.au:

> The only problem with relinking every table is the time it takes.
> Typically, my systems have many tables, 100-150 plus, and a
> total relink usually gets complaints.

Contra Larry, I never relink unless it's needed. To see if it's
needed, I test the connect string of a particular table and see if
it's accessible. Generally in any of my apps, there's at least one
crucial central table that *must* be available for the app to work,
and it's a decent proxy for checking everything.

Of course, it's also possible for a relinking to fail part way
through, leaving this one table properly linked, but many others
not. But I view that as an unlikely event, one that will produce
obvious errors, and that should be corrected by a full relinking.
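The kind of availability check I mean is just an attempted open. A sketch, with the table name being whatever crucial table your app depends on:

```vba
' Sketch: probe one crucial linked table to see whether the links are usable.
Public Function LinksOK(ByVal tblName As String) As Boolean
    Dim rs As DAO.Recordset
    On Error Resume Next
    Set rs = CurrentDb.OpenRecordset(tblName, dbOpenSnapshot)
    LinksOK = (Err.Number = 0)              ' open failed => link is bad
    On Error GoTo 0
    If Not rs Is Nothing Then rs.Close
End Function
```

If this returns False, run the full relink; otherwise skip it and start up fast.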
> When I install a new version this has to be done by resetting the
> connection string. Without fail I have users ask why it takes a
> while to load with new versions.

I have not found relinking tables to be terribly slow, but I have
far fewer tables in my apps than you.
> Now that I say this, how do you relink? Do you delete the linked
> table fully and do a complete relink or just refresh the
> connection string? Is there any difference in time?


I change the connect string. I just did a utility for someone that
gives the option of changing the connect string or deleting and
recreating the links. The latter is many times slower than the
former running locally (I've not tested it on a network).
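For comparison, the two approaches look roughly like this (a sketch; the names are mine, not from the utility I mentioned):

```vba
' Fast path: point the existing link at the new location in place.
Public Sub RefreshOneLink(tdf As DAO.TableDef, ByVal bePath As String)
    tdf.Connect = ";DATABASE=" & bePath
    tdf.RefreshLink
End Sub

' Slow path: delete the link and recreate it from scratch, which also
' rebuilds the link metadata that a plain refresh leaves behind.
Public Sub RecreateOneLink(db As DAO.Database, ByVal tblName As String, _
                           ByVal bePath As String)
    Dim srcName As String
    Dim tdf As DAO.TableDef
    srcName = db.TableDefs(tblName).SourceTableName
    db.TableDefs.Delete tblName
    Set tdf = db.CreateTableDef(tblName)
    tdf.Connect = ";DATABASE=" & bePath
    tdf.SourceTableName = srcName
    db.TableDefs.Append tdf
End Sub
```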

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 10 '06 #7

"Jeff" <je************@asken.com.au> wrote in
news:43***********************@per-qv1-newsreader-01.iinet.net.au:

> So it looks like I change my link checking code to delete and
> remake links to tables, not just update the connection string.


That only makes sense if you intend to be constantly changing the
schema. While that may be a frequent task during a certain stage of
development, for the long term, I'd suggest that you *not* go the
delete/recreate route, as it's *much* slower than simply updating
the connect string on existing links.

I would suggest that you have a custom property in your back end
that indicates the schema version. Have your linking code record the
back end version number in a custom property in the front end, and
have your relinking code check if the versions match. If they don't,
delete and recreate. If they do, just update the connection string.

That way you're doing the fastest possible relink in all cases.
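In code, the version check might look something like this (a sketch; the property name "SchemaVersion" is just an example, and you'd create the property with CreateProperty the first time):

```vba
' Sketch: compare a schema-version property in the BE against one cached
' in the FE; do the slow delete/recreate relink only when they differ.
Public Function NeedFullRelink(beDb As DAO.Database) As Boolean
    Dim beVer As String, feVer As String
    beVer = beDb.Properties("SchemaVersion").Value
    On Error Resume Next                    ' FE property may not exist yet
    feVer = CurrentDb.Properties("SchemaVersion").Value
    On Error GoTo 0
    NeedFullRelink = (beVer <> feVer)
End Function
```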

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 10 '06 #8

Unfortunately, a couple of systems are in continual development. I mean
continual. They have been going since A2. One is huge and the client is
continually adding new features or changing something.

This is similar to what I have been doing for years. I have a list of the
multiple BE files that get linked to. Each file has a version number in a
table.

In the link checking, for each BE I extract the path from the linked version
table and check that the BE file is actually there. If it is missing I ask
for the location and then update the version table link. I then use that to
check the version of the back-end. If the version numbers are different or
the path has changed I then walk through all linked tables and find those
that are linked to that BE file (look for the filename in the connect
string) and update the connection string.

I do not delete and remake the link. That is going to be my change.

Jeff

"David W. Fenton" <XX*******@dfenton.com.invalid> wrote in message
news:Xn**********************************@127.0.0.1...

> I would suggest that you have a custom property in your back end
> that indicates the schema version.
[snip]

Jan 10 '06 #9

You must be using different optimisations than I am. I
get around 50 seconds for creating and deleting around
1000 links - and the same time for just refreshing
existing links.

I do hold the first table open while I link additional
tables to the same database.

When running over a slow network, I can see that it
takes much longer when the links fail (that's mostly
because there are a series of retries in my code),
and much longer when I have to search the entire
dataset (10 files, 500 tables) to find the target
table when switching from ODBC to Jet. (Because of
my Access 2000-induced aversion to exceptions,
I iterate the collections before attempting connection
when I don't know which file to use.)

(david)

"David W. Fenton" <XX*******@dfenton.com.invalid> wrote in message
news:Xn**********************************@127.0.0.1...

> I'd suggest that you *not* go the delete/recreate route, as it's
> *much* slower than simply updating the connect string on existing links.
[snip]

Jan 11 '06 #10
