Bytes | Software Development & Data Engineering Community

AC97 - Replication - File Sharing Lock Count Exceeded

Bri
Greetings,

After making various edits and deletes on approximately 40,000 records in
one table (on the Design Master), synchronization fails with Error 3052 -
File Sharing Lock Count Exceeded. After reading the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get the
error. Since the default value is only 9,500 I would have thought that
100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?

Setup: AC97, frontend/backend split db (sync done on backend only using
dbBackend.Synchronize in code), backend on a WinNT 4 network, Win98 on the
PC with the Replica; edits were to data only, no design changes.
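
For reference, the in-code synch mentioned above might look roughly like
this (a sketch only; the paths and the error handling are assumptions,
since the post only names the dbBackend.Synchronize call):

```vba
Public Sub SyncBackend()
    ' Placeholder paths -- the real ones aren't given in the thread.
    Dim dbBackend As DAO.Database
    On Error GoTo SynchErr

    Set dbBackend = DBEngine.OpenDatabase("\\Server\Data\Backend.mdb")
    ' Two-way exchange between this member and the named replica.
    dbBackend.Synchronize "\\Laptop\Data\Replica.mdb", dbRepImpExpChanges
    dbBackend.Close
    Exit Sub

SynchErr:
    If Err.Number = 3052 Then
        MsgBox "File sharing lock count exceeded - raise MaxLocksPerFile."
    Else
        MsgBox "Synch failed: " & Err.Description
    End If
End Sub
```

Trapping error 3052 explicitly at least confirms that the lock limit,
rather than something else, is what is killing the synch.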

Thank you in advance for any help you can give me.

Bri

Nov 13 '05 #1
Bri <no*@here.com> wrote in news:l01wc.632282$Pk3.594248@pd7tw1no:
After making various edits and deletes on approximately 40,000
records in one table (on the Design Master), synchronization fails
with Error 3052 - File Sharing Lock Count Exceeded. After reading
the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get
the error. Since the default value is only 9,500 I would have
thought that 100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?


On a one-time basis, set it to something extremely large and do your
synchronization.

If you regularly need to synch quantities of changes that large,
then you need to increase the frequency of your synchs.

But something smells fishy, as I've only encountered this when doing
huge numbers of changes to files that hadn't been synched for an
extremely long time.
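
For Access 97 the limit lives in the Jet 3.5 registry key described in KB
173006. A .reg fragment like the following sets it machine-wide on the
replica PC (the value shown is 1,000,000, i.e. hex F4240; double-check the
key path against your own registry before merging):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\3.5\Engines\Jet 3.5]
"MaxLocksPerFile"=dword:000f4240
```

Remember this affects every Jet 3.5 database on that machine, which is one
reason to set it back after the one-time synch.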

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #2
Bri
David,

Yes, this is hopefully a one-time-only event. We did a batch update to
adjust the values in one field and then deleted the records that no
longer qualified to remain in the table, so the table started out with
40,000 records, all of which had the one field changed, and then 32,000
records were deleted. From this point forward the table will only have
records appended to it.

I'm not at the client again until next Tuesday, so I'm hoping to go in
with as many options as possible.

Bri

David W. Fenton wrote:
Bri <no*@here.com> wrote in news:l01wc.632282$Pk3.594248@pd7tw1no:

After making various edits and deletes on approximately 40,000
records in one table (on the Design Master), synchronization fails
with Error 3052 - File Sharing Lock Count Exceeded. After reading
the MS KB article 173006
(http://support.microsoft.com/default...b;en-us;173006)
I increased the locks to 100,000 on the Replica PC and I still get
the error. Since the default value is only 9,500 I would have
thought that 100,000 was excessive, but it still fails.

My questions are:
- Is 100,000 large enough or should I just keep trying bigger numbers
until it either works or the PC crashes from a lack of resources?
- Do I need to change the locks on the Server as well?
- Is there something else I can do to solve the problem?

On a one-time basis, set it to something extremely large and do your
synchronization.

If you regularly need to synch quantities of changes that large,
then you need to increase the frequency of your synchs.

But something smells fishy, as I've only encountered this when doing
huge numbers of changes to files that hadn't been synched for an
extremely long time.

Nov 13 '05 #3
Bri <no*@here.com> wrote in news:eG4wc.669405$oR5.195744@pd7tw3no:
Yes, this is hopefully a one-time-only event. We did a batch
update to adjust the values in one field and then deleted the
records that no longer qualified to remain in the table, so the
table started out with 40,000 records, all of which had the one
field changed, and then 32,000 records were deleted. From this point
forward the table will only have records appended to it.


You realize that deleting large numbers of records in a replicated
database is a bad idea, since it makes the MSysTombstones table
huge? And you can't clear that table except by changing the
expiration interval.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #4
Bri
David,

I didn't know that this was a no-no, but I guess I do now. I checked and
I have ~65,000 records in the MSysTombstones table. How do I change the
expiration interval and what are the repercussions of doing so? What is
the MSysTombstones table actually tracking? I looked at the other MSys
tables and the only other one with any significant number of records is
the MSysOthersHistory table with ~6,000 records.
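
One quick way to get that count from code, assuming you have read
permission on system objects (the table name is as discussed in this
thread):

```vba
' Rough count of pending deletions awaiting synchronization.
Dim lngTombstones As Long
lngTombstones = DCount("*", "MSysTombstones")
Debug.Print "Tombstone records: " & lngTombstones
```

The same DCount approach works for peeking at MSysOthersHistory or any of
the other replication system tables.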

As I said, this was a one-time-only operation and if anything like it
comes up again, I will think it through with this in mind (i.e., clean up
the data first before bringing it into the replicated db). It just
seemed easier to import all of the data, modify the field and then
delete the ones that didn't qualify. I guess I should have used another
temp db to do this and then imported the final table of 8,000 records.

So for now, I need to get the data synced and then do what needs to be
done to clean up after myself. Next time I'll hopefully remember all
this and do it right.

Thanks

Bri
David W. Fenton wrote:
Bri <no*@here.com> wrote in news:eG4wc.669405$oR5.195744@pd7tw3no:

You realize that deleting large numbers of records in a replicated
database is a bad idea, since it makes the MSysTombstones table
huge? And you can't clear that table except by changing the
expiration interval.

Nov 13 '05 #5
Bri <no*@here.com> wrote in news:X59wc.674841$Ig.214587@pd7tw2no:
I didn't know that this was a no-no, but I guess I do now. I
checked and I have ~65,000 records in the MSysTombstones table.
How do I change the expiration interval and what are the
repercussions of doing so? What is the MSysTombstones table
actually tracking? . . .
It tracks deletion of records, as opposed to changes of existing
records (which are tracked in the replication fields of the records
themselves). It's a requirement, since otherwise there'd be nothing
to synchronize.

Keep in mind that any two replicas from the same replica set are,
theoretically speaking, only one two-way synch away from being
identical. For this to be true, records of deleted records have to
be kept. It's not necessary to keep records of intermediate stages
of edits to existing records, because all that matters is which
generation is latest for each of the replicas in the exchange (i.e.,
greatest number of changes wins). But deletions have to be kept
separately because you can't synchronize to nothing.

This is the purpose of the tombstones table.
. . . I looked at the other MSys tables and the only
other one with any significant number of records is the
MSysOthersHistory table with ~6,000 records.
Six thousand? Well, I guess that's not *that* many, if the
replicated file has been in use for a while, and there are several
replicas.

That table tracks the history of synchronization operations at a
granular level. The nickname field refers to which replica (see the
MSysReplicas table, which gives the pathname of each replica in the
replica set), and MSysOthersHistory records the number of
exchanges of information that take place within a synchronization.
Look, for instance, at the MSysExchangeLog table. There should be a
much smaller number of records there, because it tracks
synchronizations, i.e., each time you tell Access to synch, a new
record is added to this table. The MSysOthers table, however, breaks
that down into individual synchronization operations.

Sort your MSysOthersHistory by date and then by Nickname. Then,
consider, if you have two replicas, and Replica1 has a table that
has 2 records with updates and Replica2 has a table with 3 records
that have been updated. Remember that each time a record is updated,
the generation gets incremented. So, if you want to synch those two
replicas, operations have to go in both directions.

But they have to go in *generation* order, otherwise, you won't
necessarily have the latest data at the end of each operation. So,
say you have this:

Replica1

Record 1 Gen 3
Record 2 Gen 4

Replica2

Record 2 Gen 5
Record 4 Gen 3
Record 5 Gen 6

What would happen is that in the first operation, generation 3 would
be exchanged, from 1 to 2 and from 2 to 1. Then in the next
operation, generation 4 would be exchanged. In this case, Record 2
from Replica2 would overwrite the same record in Replica1 because it
has a higher generation (more changes have been made to it). Last,
generations 5 and 6 would be exchanged.

That's a gross oversimplification, but the key point is that it's
this low-level set of small exchanges that this table tracks. That's
why it has lots of records.
As I said, this was a one time only operation and if anything like
it comes up again, I will think it through with this in mind (ie,
clean up the data first before bringing it into the replicated
db). It just seemed easier to import all of the data, modify the
field and then delete the ones that didn't qualify. I guess I
should have used another temp db to do this and then imported the
final table of 8000 records.
Yep.

Don't feel so bad -- in my first replicated project, I stupidly
included a temp table in the replica set, one that had batches of
100 or so records written to it and deleted several times a day. The
result was that the file bloated to an enormous size very quickly,
with 100s of thousands of records in the tombstones table!

So your mistake wasn't half as bad as mine!
So for now, I need to get the data synced and then do what needs
to be done to clean up after myself. Next time I'll hopefully
remember all this and do it right.


There are two ways to set the retention period for a replica set:

1. use Replication Manager (from the Office Developer Tools). To do
this, the Design Master has to be managed (otherwise, you're not
allowed to edit it), and you can change it in the properties sheet
for the design master.

2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.

If you have trouble with either of these, I suggest you browse the
Google archives of microsoft.public.access.replication for articles
discussing "retention period".

The default is 1000 days. If you synch all your replicas so that
they are identical, then change the period to 1 day, synch all of
them, then change it back to 1000 days and resynch all 'round, that
should clear the tombstones table.
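
The synch-all-round step can be sketched as a loop (the paths are
invented for illustration; the retention-period change itself has to be
made with ReplMan or the TSI Synchronizer, since DAO 3.5 exposes no
property for it):

```vba
Public Sub ResynchAllReplicas()
    ' Example paths only -- substitute your own replica set members.
    Dim db As DAO.Database
    Dim varReplica As Variant

    Set db = DBEngine.OpenDatabase("\\Server\Data\DesignMaster.mdb")
    For Each varReplica In Array( _
            "\\Server\Data\Replica1.mdb", _
            "\\Server\Data\Replica2.mdb", _
            "\\Server\Data\Replica3.mdb")
        db.Synchronize CStr(varReplica), dbRepImpExpChanges  ' two-way
    Next varReplica
    db.Close
End Sub
```

Run the loop once while all replicas are identical, change the retention
period, run it again, change the period back, and run it a final time.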
--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #6
Bri
David,
What is the MSysTombstones table actually tracking? . . .

It tracks deletion of records, as opposed to changes of existing
records (which are tracked in the replication fields of the records
themselves). It's a requirement, since otherwise there'd be nothing
to synchronize.
<snip>
That makes a lot of sense.
Six thousand? Well, I guess that's not *that* many, if the
replicated file has been in use for a while, and there are several
replicas.

Three replicas synced every business day for a year in a cycle to get
all in the set to be identical (1,2,3,1,2).

<snip a lot of good technical info>

Thanks for the easy-to-understand summary explanation of the guts of
replication.
I guess I
should have used another temp db to do this and then imported the
final table of 8000 records.

Yep.

Don't feel so bad -- in my first replicated project, I stupidly
included a temp table in the replica set, one that had batches of
100 or so records written to it and deleted several times a day. The
result was that the file bloated to an enormous size very quickly,
with 100s of thousands of records in the tombstones table!

So your mistake wasn't half as bad as mine!


I too have temp tables that are written to, processed and then deleted.
I did have the foresight to put these tables in a temp db that I compact
each time the app starts. :{) I had that all in place before we even
looked at replication. I won't bother to mention here all of the other
bonehead mistakes I've made. Mistakes are how we learn.
So for now, I need to get the data synced and then do what needs
to be done to clean up after myself. Next time I'll hopefully
remember all this and do it right.

There are two ways to set the retention period for a replica set:

1. use Replication Manager (from the Office Developer Tools). To do
this, the Design Master has to be managed (otherwise, you're not
allowed to edit it), and you can change it in the properties sheet
for the design master.


I don't have Replication Manager.
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.


I'll take a look at the TSI Synchronizer. From what I've seen in the
years of posts in this group, Michael has probably built something
better than the MS tool. I had noticed that he was fairly active in the
group lately and had hoped he would have joined this thread, although
you also seem to have a good grasp of replication.

Thanks again for all the help and info. I hope I can get this all sorted
out on Tuesday.

Bri

Nov 13 '05 #7
Bri
David,

Just to follow up on this. I set the locks limit to 1,000,000 and the
sync worked. Is there any reason not to leave the limit this high or
should I change it back?
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael Kaplan's
website. He's the acknowledged world expert on Jet replication.


I've downloaded the TSI Synchronizer and will use it to clear out the
extra records in MSysTombstones et al.

Thanks again.
Bri
Nov 13 '05 #8
Bri <no*@here.com> wrote in news:rDGxc.718570$Ig.533865@pd7tw2no:
Just to follow up on this. I set the locks limit to 1,000,000 and
the sync worked. Is there any reason not to leave the limit this
high or should I change it back?


I've always set it back on the theory that the default is set low
for some kind of general performance reason.
2. if you don't have ReplMan, then you can download the TSI
Synchronizer from http://trigeminal.com, which is Michael
Kaplan's website. He's the acknowledged world expert on Jet
replication.


I've downloaded the TSI Synchronizer and will use it to clear out
the extra records in MSysTombstones et al.


If you have problems, post questions in the
microsoft.public.access.replication newsgroup, because Michael pays
close attention to that newsgroup and is very helpful with
questions.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #9