Bytes IT Community

AC97 - Replication - File Sharing Lock Count Exceeded

Bri
Greetings,

After making various edits and deletes on approximately 40,000 records in
one table (on the Design Master), synchronization fails with Error 3052 -
File Sharing Lock Count Exceeded. After reading the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get the
error. Since the default value is only 9,500 I would have thought that
100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough, or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?

Setup: AC97, Frontend/backend split db (sync done on backend only using
dbBackend.Synchronize in code), backend on WinNT 4 Network, Win98 on PC
with Replica, edits were to data only no design changes.
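As a rough back-of-envelope check (plain Python, not Jet/DAO code; the per-page and per-index figures below are assumptions for illustration, not measured Jet internals), it's plausible that 40,000 edits plus 32,000 deletes in one synchronization transaction need far more than 100,000 locks once index pages are counted:

```python
# Back-of-envelope lock estimate (illustrative only; the per-page and
# per-index figures are assumptions, not measured Jet internals).
def estimate_locks(edited_rows, deleted_rows, indexes_per_table,
                   rows_per_data_page=20):
    # Assume roughly one lock per touched data page...
    data_page_locks = (edited_rows + deleted_rows) / rows_per_data_page
    # ...plus, pessimistically, one lock per index entry touched per change.
    index_locks = (edited_rows + deleted_rows) * indexes_per_table
    return int(data_page_locks + index_locks)

locks = estimate_locks(edited_rows=40_000, deleted_rows=32_000,
                       indexes_per_table=2)
print(locks)  # 147600 -- well above a 100,000-lock limit under these assumptions
```

Under these assumed figures the estimate already exceeds 100,000, which would be consistent with the synch still failing at that setting.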

Thank you in advance for any help you can give me.

Bri

Nov 13 '05 #1
8 Replies


Bri <no*@here.com> wrote in news:l01wc.632282$Pk3.594248@pd7tw1no:
After making various edits and deletes on approximately 40,000
records in one table (on the Design Master), synchronization fails
with Error 3052 - File Sharing Lock Count Exceeded. After reading
the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get
the error. Since the default value is only 9,500 I would have
thought that 100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough, or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?


On a one-time basis, set it to something extremely large and do your
synchronization.

If you regularly need to synch quantities of changes that large,
then you need to increase the frequency of your synchs.

But something smells fishy, as I've only encountered this when doing
huge numbers of changes to files that hadn't been synched for an
extremely long time.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #2

Bri
David,

Yes, this is hopefully a one-time-only event. We did a batch update to
adjust the values in one field and then deleted the records that no
longer qualified to remain in the table. So the table started out with
40,000 records, all of which had the one field changed, and then 32,000
records were deleted. From this point forward the table will only have
records appended to it.

I'm not at the client again until next Tuesday, so I'm hoping to go in
with as many options as possible.

Bri

David W. Fenton wrote:
Bri <no*@here.com> wrote in news:l01wc.632282$Pk3.594248@pd7tw1no:

After making various edits and deletes on approximately 40,000
records in one table (on the Design Master), synchronization fails
with Error 3052 - File Sharing Lock Count Exceeded. After reading
the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get
the error. Since the default value is only 9,500 I would have
thought that 100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough, or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?

On a one-time basis, set it to something extremely large and do your
synchronization.

If you regularly need to synch quantities of changes that large,
then you need to increase the frequency of your synchs.

But something smells fishy, as I've only encountered this when doing
huge numbers of changes to files that hadn't been synched for an
extremely long time.

Nov 13 '05 #3

Bri <no*@here.com> wrote in news:eG4wc.669405$oR5.195744@pd7tw3no:
Yes, this is hopefully a one-time-only event. We did a batch
update to adjust the values in one field and then deleted the
records that no longer qualified to remain in the table. So the
table started out with 40,000 records, all of which had the one
field changed, and then 32,000 records were deleted. From this
point forward the table will only have records appended to it.


You realize that deleting large numbers of records in a replicated
database is a bad idea, since it makes the MSysTombstones table
huge? And you can't clear that table except by changing the
expiration interval.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #4

Bri
David,

I didn't know that this was a no-no, but I guess I do now. I checked and
I have ~65,000 records in the MSysTombstones table. How do I change the
expiration interval and what are the repercussions of doing so? What is
the MSysTombstones table actually tracking? I looked at the other MSys
tables and the only other one with any significant numbers of records is
the MSysOthersHistory table with ~6000 records.

As I said, this was a one-time-only operation, and if anything like it
comes up again, I will think it through with this in mind (i.e., clean up
the data first before bringing it into the replicated db). It just
seemed easier to import all of the data, modify the field and then
delete the ones that didn't qualify. I guess I should have used another
temp db to do this and then imported the final table of 8000 records.

So for now, I need to get the data synced and then do what needs to be
done to clean up after myself. Next time I'll hopefully remember all
this and do it right.

Thanks

Bri
David W. Fenton wrote:
Bri <no*@here.com> wrote in news:eG4wc.669405$oR5.195744@pd7tw3no:

You realize that deleting large numbers of records in a replicated
database is a bad idea, since it makes the MSysTombstones table
huge? And you can't clear that table except by changing the
expiration interval.

Nov 13 '05 #5

Bri <no*@here.com> wrote in news:X59wc.674841$Ig.214587@pd7tw2no:
I didn't know that this was a no-no, but I guess I do now. I
checked and I have ~65,000 records in the MSysTombstones table.
How do I change the expiration interval and what are the
repercussions of doing so? What is the MSysTombstones table
actually tracking? . . .
It tracks deletion of records, as opposed to changes of existing
records (which are tracked in the replication fields of the records
themselves). It's a requirement, since otherwise there'd be nothing
to synchronize.

Keep in mind that any two replicas from the same replica set are,
theoretically speaking, only one two-way synch away from being
identical. For this to be true, records of deleted records have to
be kept. It's not necessary to keep records of intermediate stages
of edits to existing records, because all that matters is which
generation is latest for each of the replicas in the exchange (i.e.,
greatest number of changes wins). But deletions have to be kept
separately because you can't synchronize to nothing.

This is the purpose of the tombstones table.
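The idea can be sketched as a toy model (plain Python, not Jet/DAO code; all class and method names here are invented for illustration): the row itself disappears on deletion, but a tombstone entry survives so the deletion can still be propagated to other replicas.

```python
# Conceptual model of delete tracking in a replica (illustrative only;
# this is not Jet/DAO code, and all names are invented for the sketch).
class Replica:
    def __init__(self):
        self.rows = {}           # record id -> value
        self.tombstones = set()  # ids of deleted records

    def insert(self, rid, value):
        self.rows[rid] = value

    def delete(self, rid):
        # The row goes away, but a tombstone must remain so the
        # deletion itself can be synchronized to other replicas.
        self.rows.pop(rid, None)
        self.tombstones.add(rid)

    def sync_from(self, other):
        # Apply the other replica's deletions first, then its rows.
        for rid in other.tombstones:
            self.rows.pop(rid, None)
            self.tombstones.add(rid)
        for rid, value in other.rows.items():
            if rid not in self.tombstones:
                self.rows.setdefault(rid, value)

a, b = Replica(), Replica()
a.insert(1, "x"); a.insert(2, "y")
b.sync_from(a)   # b now has both rows
a.delete(2)
b.sync_from(a)   # without the tombstone, b could never learn of the delete
print(sorted(b.rows))  # [1]
```

If the tombstone were simply discarded with the row, the second synch would have nothing to carry and record 2 would live on in the other replica forever.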
. . . I looked at the other MSys tables and the only
other one with any significant numbers of records is the
MSysOthersHistory table with ~6000 records.
Six thousand? Well, I guess that's not *that* many, if the
replicated file has been in use for a while, and there are several
replicas.

That table tracks the history of synchronization operations at a
granular level. The nickname field refers to which replica (see the
MSysReplicas table, which gives the pathname of each replica in the
replica set), and MSysOthersHistory indicates the number of
exchanges of information that take place within a synchronization.
Look, for instance, at the MSysExchangeLog table. There should be a
much smaller number of records there, because it tracks
synchronizations, i.e., each time you tell Access to synch, a new
record is added to this table. The MSysOthers table, however, breaks
that down into individual synchronization operations.

Sort your MSysOthersHistory by date and then by Nickname. Then,
consider, if you have two replicas, and Replica1 has a table that
has 2 records with updates and Replica2 has a table with 3 records
that have been updated. Remember that each time a record is updated,
the generation gets incremented. So, if you want to synch those two
replicas, operations have to go in both directions.

But they have to go in *generation* order, otherwise, you won't
necessarily have the latest data at the end of each operation. So,
say you have this:

Replica1

Record 1 Gen 3
Record 2 Gen 4

Replica2

Record 2 Gen 5
Record 4 Gen 3
Record 5 Gen 6

What would happen is that in the first operation, generation 3 would
be exchanged, from Replica1 to Replica2 and from Replica2 to Replica1.
Then in the next operation, generation 4 would be exchanged. In this
case, Record 2 from Replica2 would overwrite the same record in
Replica1 because it has a higher generation (more changes have been
made to it). Last, generations 5 and 6 would be exchanged.

That's a gross oversimplification, but the key point is that it's
this low-level set of small exchanges that this table tracks. That's
why it has lots of records.
As I said, this was a one-time-only operation and if anything like
it comes up again, I will think it through with this in mind (i.e.,
clean up the data first before bringing it into the replicated
db). It just seemed easier to import all of the data, modify the
field and then delete the ones that didn't qualify. I guess I
should have used another temp db to do this and then imported the
final table of 8000 records.
Yep.

Don't feel so bad -- in my first replicated project, I stupidly
included a temp table in the replica set, one that had batches of
100 or so records written to it and deleted several times a day. The
result was that the file bloated to an enormous size very quickly,
with 100s of thousands of records in the tombstones table!

So your mistake wasn't half as bad as mine!
So for now, I need to get the data synced and then do what needs
to be done to clean up after myself. Next time I'll hopefully
remember all this and do it right.


There are two ways to set the retention period for a replica set:

1. use Replication Manager (from the Office Developer Tools). To do
this, the Design Master has to be managed (otherwise, you're not
allowed to edit it), and you can change it in the properties sheet
for the design master.

2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.

If you have trouble with either of these, I suggest you browse the
Google archives of microsoft.public.access.replication for articles
discussing "retention period".

The default is 1000 days. If you synch all your replicas so that
they are identical, then change the period to 1 day, synch all of
them, then change it back to 1000 days and resynch all 'round, that
should clear the tombstones table.
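The retention trick can be modeled in miniature (plain Python, illustrative only; Jet's real retention period is configured through ReplMan or the TSI Synchronizer, not through code like this): tombstones older than the retention period simply expire and can be purged.

```python
# Miniature model of tombstone expiration (illustrative only; Jet's real
# retention period is set via ReplMan or the TSI Synchronizer).
def purge_expired(tombstones, retention_days, today):
    # Keep only tombstones younger than the retention period; once a
    # tombstone expires, its deletion can no longer be propagated.
    return [t for t in tombstones
            if (today - t["deleted_on"]) < retention_days]

tombstones = [{"id": i, "deleted_on": 0} for i in range(65_000)]
# A 1000-day retention keeps everything on day 400...
assert len(purge_expired(tombstones, 1000, today=400)) == 65_000
# ...but dropping retention to 1 day lets them all expire.
print(len(purge_expired(tombstones, 1, today=400)))  # 0
```

This is why all replicas must be identical before shrinking the period: any deletion that hasn't yet reached every replica is lost once its tombstone expires.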
--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #6

Bri
David,
What is the MSysTombstones table actually tracking? . . .

It tracks deletion of records, as opposed to changes of existing
records (which are tracked in the replication fields of the records
themselves). It's a requirement, since otherwise there'd be nothing
to synchronize.
<snip>
That makes a lot of sense.
Six thousand? Well, I guess that's not *that* many, if the
replicated file has been in use for a while, and there are several
replicas.

Three replicas synced every business day for a year in a cycle to get
all in the set to be identical (1,2,3,1,2).

<snip a lot of good technical info>

Thanks for the easy-to-understand summary explanation of the guts of
replication.
I guess I
should have used another temp db to do this and then imported the
final table of 8000 records.

Yep.

Don't feel so bad -- in my first replicated project, I stupidly
included a temp table in the replica set, one that had batches of
100 or so records written to it and deleted several times a day. The
result was that the file bloated to an enormous size very quickly,
with 100s of thousands of records in the tombstones table!

So your mistake wasn't half as bad as mine!


I too have temp tables that are written to, processed and then deleted.
I did have the foresight to put these tables in a temp db that I compact
each time the app starts. :{) I had that all in place before we even
looked at replication. I won't bother to mention here all of the other
bonehead mistakes I've made. Mistakes are how we learn.
So for now, I need to get the data synced and then do what needs
to be done to clean up after myself. Next time I'll hopefully
remember all this and do it right.

There are two ways to set the retention period for a replica set:

1. use Replication Manager (from the Office Developer Tools). To do
this, the Design Master has to be managed (otherwise, you're not
allowed to edit it), and you can change it in the properties sheet
for the design master.


I don't have Replication Manager.
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.


I'll take a look at the TSI Synchronizer. From what I've seen in the
years of posts in this group, Michael has probably built something
better than the MS tool. I had noticed that he was fairly active in the
group lately and had hoped he would have joined this thread, although
you also seem to have a good grasp of replication.

Thanks again for all the help and info. I hope I can get this all sorted
out on Tuesday.

Bri

Nov 13 '05 #7

Bri
David,

Just to follow up on this. I set the locks limit to 1,000,000 and the
sync worked. Is there any reason not to leave the limit this high or
should I change it back?
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.


I've downloaded the TSI Synchronizer and will use it to clear out the
extra records in MSysTombstones et al.

Thanks again.
Bri
Nov 13 '05 #8

Bri <no*@here.com> wrote in news:rDGxc.718570$Ig.533865@pd7tw2no:
Just to follow up on this. I set the locks limit to 1,000,000 and
the sync worked. Is there any reason not to leave the limit this
high or should I change it back?


I've always set it back on the theory that the default is set low
for some kind of general performance reason.
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael
Kaplan's website. He's the acknowledged world expert on Jet
replication.


I've downloaded the TSI Synchronizer and will use it to clear out
the extra records in MSysTombstones et al.


If you have problems, post questions in the
microsoft.public.access.replication newsgroup, because Michael pays
close attention to that newsgroup and is very helpful with
questions.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #9

This discussion thread is closed; replies have been disabled.