
VBA Import & Parse: File Sharing Lock Count Exceeded - Access 97 Bug?

P: 14
It's a very long time since I used Access; I'm more of an MS Project / Excel-with-VBA fish.

I am developing a VBA app under Access 97 and running into problems when importing data and creating large data tables. I receive the error: "File sharing lock count exceeded (Error 3052)."


If I remove my encapsulating BeginTrans ... CommitTrans all works well, but painfully slowly. I had presumed that when committing a transaction all locks would be released ... this seems not to be happening; is it me? [In an attempt to overcome the error I include in my import loop an "If recordcount > (some threshold) Then CommitTrans, BeginTrans, recordcount = 0" hook, but this does not help regardless of how low I set (some threshold).]

I also increased my lock limit using DAO.DBEngine.SetOption dbMaxLocksPerFile (to increasingly large limits), all to no avail.
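For reference, the call I am making looks like the sketch below (the figure of 500,000 is purely illustrative, not a recommendation, and my understanding is that SetOption affects only the current DBEngine session rather than the registry):

    ' Raise the Jet lock limit for this session only; the value is illustrative.
    ' The persistent equivalent would be the MaxLocksPerFile registry value for
    ' the Jet 3.5 engine, if I have understood the KB articles correctly.
    DBEngine.SetOption dbMaxLocksPerFile, 500000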

The table I am creating/to which I am appending comprises 11 fields, 8 of which are indexed. By design it is not normalised and provides a parsed representation of a non-delimited text file. I am trying to append approximately 500,000 records. Any clues please? If in fact the dbMaxLocksPerFile setting was taking effect (it seems to be write-only so I can't check?), is there a means of determining how many locks I need? Can this setting be increased on the fly?

Many questions I know . . . I shall keep my fingers crossed.

TIA,
Mal
Apr 23 '12 #1


NeoPa
Expert Mod 15k+
P: 31,494
Too many questions. I will make one fundamental suggestion. From there, if more is required, a specific question would be needed to cover it.

My suggestion is to import the data in as basic a way as possible. No indices. No properties that could ever cause a record to fail.

From there, you can do your error checking with your own reporting and then build up whatever indices you need in single steps. I expect many of the problems (time; resources; etc.) are related to doing all the extra work within the basic import.
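For example (and this is only a sketch, with placeholder table, field and index names), once the raw import has completed the indices can be added one at a time with simple DDL:

    ' Add the indices in single steps once the index-free import has finished.
    ' Table, field and index names below are placeholders.
    Dim dbVar As DAO.Database
    Set dbVar = CurrentDb()
    dbVar.Execute "CREATE INDEX idxED ON tblImport (ED);", dbFailOnError
    dbVar.Execute "CREATE INDEX idxET ON tblImport (ET);", dbFailOnError
    ' ...and so on for each field that needs an index.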
Apr 23 '12 #2

P: 14
@NeoPa
Thanks for the reply.

Importing to a non-indexed table then using SQL to append imported results to the indexed destination table is ultra-slow, and seems unnecessary (if my assumption that CommitTrans should release all file locks is true . . . is it?).
Apr 23 '12 #3

NeoPa
Expert Mod 15k+
P: 31,494
CommitTrans simply ensures the table is updated with any transaction updates. Locks for the related records may be released at this time, but locking systems vary (even within Access options) so not all locks are necessarily released.

Furthermore, what I suggested was not quite what you seemed to infer. I was suggesting that the indices get applied to the table the data is in, rather than transferring the data to a table where the indices are already prepared.

MalFarrelle:
Importing to a non-indexed table then using SQL to append imported results to the indexed destination table is ultra-slow
Interesting. I wouldn't have expected this. I didn't suggest it, but nevertheless would not have expected a great deterioration of performance compared to your original approach. What timings do you have? How much slower was it?
Apr 23 '12 #4

P: 14
Thank you again. It was a wild assumption that CommitTrans should release the locks. I didn't mean to imply that I would transfer the data to a table where the indices are already prepared; I meant simply to append the records to a table that is indexed (so that Access will create new index entries for new records).

I don't have timings recorded now. I shall rebuild my VBA using the technique you suggest and compare with what I have now (which issues an .AddNew ... .Update transaction for *every* new record to overcome the issue of seemingly having no direct control over the release of locks).

I still foresee problems. In the event that the source file spawns many more records than available locks, and having no obvious means of forcing a release of locks, what is the preferred technique? Trap Error 3052 and just repeat until the error goes away?
Apr 23 '12 #5

NeoPa
Expert Mod 15k+
P: 31,494
The preferred approach would surely be to avoid the situation entirely. I would hope my suggestion, followed through accurately, would give you that.

There still seems to be some confusion over the approach I'm suggesting though. I hope it helps if I point out that #2 of the two possible approaches outlined below is the one I mean:

Approach #1:
    • Delete all data from the table used to import new data.
    • Import new data into the table (which has no indices defined).
    • Potentially do any validity checks (and reporting) on the data at this stage.
    • Copy data from this table to a table which does have indices defined.

Approach #2:
    • Delete all data from the table used to import new data.
    • Delete any indices from the table used to import data into.
    • Import new data into the table (which now has no indices defined).
    • Potentially do any validity checks (and reporting) on the data at this stage.
    • Add indices to the table with the imported data in it. No copying/moving of data is required at this point.

I would expect the speed improvement to come from the importing of the data being able to continue at full speed without the interruption, for every record, of finding and building index values. I would expect that doing this in a single phase, at the point of creating the index, would take less time.

An alternative, of course, is to avoid the use of transactions at this stage. Determine whether the import step succeeded after it has returned. If yes, then continue. If no, then reset all affected data and start over (or even allow an abort if appropriate).
Apr 23 '12 #6

P: 14
Thanks again, I appreciate the advice.

Suggestion #1 above is exactly what I inferred from your earlier reply. My issue remains: if I am importing data to create thirteen squizillion records I want to bunch them up into chunky transactions, but there seems no way to do this while deterministically avoiding the lock limit. Does Access really offer no way of iteratively creating a set of n records in a transaction when the lock limit is known to be n + some records?

I would not delete records from the file used to import (it takes 5 to 10 minutes to do the record delete) . . . preferring to drop the table and recreate it anew.

Avoiding transactions works reliably and is what I am doing now, but gives me a 6 hour process time to import a typical data set.
Apr 24 '12 #7

NeoPa
Expert Mod 15k+
P: 31,494
MalFarrelle:
Suggestion #1 above is exactly what I inferred from your earlier reply.
Correct use of 'Infer'. I like that :-)

You did notice though, that I indicated that #1 was most definitely not my suggestion? I was suggesting #2.

MalFarrelle:
I would not delete records from the file used to import (it takes 5 to 10 minutes to do the record delete) . . . preferring to drop the table and recreate it anew.
I won't argue with that. I usually find that deleting records takes minimal time, but I doubt I deal with as many records as you do and a DROP/CREATE is about as clean as you can get anyway.

MalFarrelle:
Avoiding transactions works reliably and is what I am doing now, but gives me a 6 hour process time to import a typical data set.
Are you saying it takes longer than with transactions? Otherwise you know that generally using transactions slows down the process somewhat, as it introduces more work into the process. If it takes 6 hours without transactions then I would say that is simply a reflection of the amount of data you have. It's the best you can hope for.
Apr 24 '12 #8

P: 14
That is precisely what I am saying. It is much quicker (reference follows) to save a batch of records in a single transaction than to write single-record transactions to the database . . . leastways it is under a process such as mine, where one simply reads sequential data from a text file and saves it to a data table. Hence my original question: how can I avoid the file lock issue, i.e. take control of the release of locks?

Quoting Microsoft (http://support.microsoft.com/kb/146908): "You can speed up database operations in a Microsoft Access database by using transactions. A transaction starts with a BeginTrans statement and ends with a CommitTrans or Rollback statement. However, for reasons relating to stack usage it is recommended that these code regions are kept as small as possible. This ensures optimal performance and reliability. For more information as to when to use transactions, please see the following article in the Microsoft Knowledge Base:

145757 (http://support.microsoft.com/kb/145757/EN-US/): Ideas to Consider When Using Transactions

The sample program below is over 17 times faster when using BeginTrans/CommitTrans. Performance may vary on different computers."
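For what it's worth, the pattern I am describing is essentially the one below. This is a stripped-down sketch, not my actual code: the field value is a placeholder and I've used an explicit Workspace object rather than the bare BeginTrans/CommitTrans statements.

    Dim wrk As DAO.Workspace
    Dim dbVar As DAO.Database
    Dim rs As DAO.Recordset
    Dim lngLine As Long

    Set wrk = DBEngine.Workspaces(0)
    Set dbVar = CurrentDb()
    Set rs = dbVar.OpenRecordset("Events")

    wrk.BeginTrans                      ' batch the writes into one transaction
    For lngLine = 1 To 10000            ' in reality, one iteration per parsed line
        rs.AddNew
        rs!EI = "Sample"                ' parsed field values would go here
        rs.Update
    Next lngLine
    wrk.CommitTrans                     ' writes (and, I had hoped, locks) are flushed here

    rs.Close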
Apr 24 '12 #9

NeoPa
Expert Mod 15k+
P: 31,494
MalFarrelle:
That is precisely what I am saying. It is much quicker (reference follows) to save a batch of records in a single transaction than to write single-record transactions to the database
Although the rest of it seems to explain that using transactions is actually faster, your quoted sentence makes that ambiguous by comparing large transactions with smaller ones, which is not the point of the question. The question was about comparing transactional updates with non-transactional ones. This leaves me somewhat confused, but I'll look into it.

Having now read the linked articles I now see that this is an apples v pears situation. It explains that transactions speed things up when updating using VBA (i.e. using Recordset.Edit and Recordset.Update calls in VBA). That's perfectly possible, but we were discussing SQL processes (I thought), and these are typically many times faster than recordset processing in VBA. If that is not what you're doing, and you're actually importing data from somewhere and placing it carefully in another table using .Update in VBA, then my advice would certainly be to cease and desist ASAP. SQL should be used where possible if performance is any one of your priorities.
Apr 25 '12 #10

P: 14
By single-record transaction I mean a single .Addnew . . . Update transaction (implied transaction, no BeginTrans . . . CommitTrans bracketing).

The source data exists in a text file. It is not delimited data, it is not fixed-width data, and the actual data content is not prescribed. SQL in and of itself won't help me here: I have developed (in VBA for Access) a parser which reads the source file line by line and populates unnormalised Access data tables which correlate with the source data file. (Subsequently, and not the subject of this thread, I normalise the data and produce further higher-order data tables.)

So, no, I can't use SQL alone. Yes, I am using VBA to import and parse text files. Yes, I want to batch the records together in large transactions to take advantage of the well-documented time savings associated with doing so.

My question remains: is it possible to deterministically avoid the file lock limits which I encounter when importing extremely large data tables? Anyone?
Apr 25 '12 #11

NeoPa
Expert Mod 15k+
P: 31,494
MalFarrelle:
By single-record transaction I mean a single .Addnew . . . Update transaction (implied transaction, no BeginTrans . . . CommitTrans bracketing).
Ah. Much clearer without that unfortunate use of the terminology. I follow now.

I can see now (with the fuller explanation) that your situation is far from straightforward. I don't know how complex the parsing of your data is. There may be scope for importing as a job lot (file import to table), but there again, it's very possible there is not.

The only thing I can think of, which is pretty non-intuitive, is to try closing and reopening the recordset between transactions. That generally adds to the overhead of table processing, but the added cost may be outweighed by the benefit of using the larger transactions. It may be worth trying, just to see.

Frankly, like yourself, I'm surprised that the recordset isn't reset to its earlier condition (relating to any locks) after a CommitTrans.

PS. If your code isn't too long (over 100 lines) you may like to post it for perusal. It's not impossible for there to be a bug in there somewhere that you've overlooked. More eyes may catch it. Just a thought.
Apr 25 '12 #12

P: 14
Closing and opening the recordset might just do it; I shall try. Unfortunately I can't post the code without significantly cleansing it, so what I will do is try to recreate the problem as simply as possible and then post any resultant trigger.

Thanks again for the help, I wouldn't have come up with the close/open recordset idea and it does sound promising.
Apr 25 '12 #13

NeoPa
Expert Mod 15k+
P: 31,494
Give it a try, yes. That's the easiest way to check it.

As for the code, I can understand sometimes it can't be made public, and that's fine. I wouldn't bother cleansing it; that's more work than warranted. It was only an idea on the off-chance. Cleansing also bears the risk of removing the offending part anyway, which is not a reflection on your abilities, but rather of the nature of such problems being hard to notice.
Apr 25 '12 #14

P: 14
I returned to this issue; it remains unsolved. Even after performing a recordset close/reopen when an arbitrary threshold is reached (even one of just 100 records), the lock limit is always exceeded after a cumulative import of about 400,000 records. Here is the code I used, which was being accessed and executed at the appropriate times:
  1. importct = importct + 1
  2. If importct > 100 Then
  3.     CommitTrans
  4.     AddedRecordset.Close
  5.     Set AddedRecordset = CurrentDb.OpenRecordset("Events")
  6.     importct = 0
  7.     BeginTrans
  8. End If
. . . Does anybody have any understanding of how to manage record locks under Access 97?

Thanks in Advance,

Mal
May 8 '12 #15

TheSmileyCoder
Expert Mod 100+
P: 2,321
I don't think many of the current users of this forum use Access 97. I myself have only used it once or twice, and only for about 10 hours combined.


Now, using Recordset.AddNew and Recordset.Update is bound to be slow, as for each update the indexes have to be rebuilt.

6 hours sounds quite intense for only 11 fields (unless some of these are memo type fields).

My suggested approach would be:

If possible, import the whole file into a temporary table.
If that's not possible, try reading the file in as a string, manipulating it to add delimiters, saving it to a .txt file and then importing that.

Once you have the temporary table, use a combination of VBA and SQL to transfer it (possibly in batches) to your main table.
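As a rough sketch of the second option (file names, the delimiter, the parsing step and the table name are all placeholders, and the real parsing would of course be more involved):

    ' Read the raw log line by line, write out a delimited copy, then let
    ' Access import the delimited file in one go.
    Dim intIn As Integer, intOut As Integer
    Dim strLine As String

    intIn = FreeFile
    Open "C:\Data\source.txt" For Input As #intIn
    intOut = FreeFile
    Open "C:\Data\source_delim.txt" For Output As #intOut

    Do While Not EOF(intIn)
        Line Input #intIn, strLine
        ' ...parse strLine here and rebuild it with a delimiter of your choosing...
        Print #intOut, strLine
    Loop

    Close #intOut
    Close #intIn

    ' Bulk-import the delimited file into a temporary table.
    DoCmd.TransferText acImportDelim, , "tblImportTemp", "C:\Data\source_delim.txt", False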


Could you please provide a few sample rows of data, and also the total size (kilobytes) of the file.

Unless you are working over a network connection, and/or the data is complex (memo or objects), I would believe the import should be possible to do in no more than 30 minutes.
May 8 '12 #16

P: 14
Thanks for the input.

Unfortunately '97 is mandated.

The source data comprises logged event data, sometimes on one line of the file, sometimes two. Sometimes data fields are absent, no data is fixed-width and there are no delimiters. There is an amount of parsing required to make sense of the files, and while this in and of itself is not adding a huge overhead to the process it is preventing easy import of the data.

I have indices set on just about everything in the Events table, which comprises the following fields:

1) ID: Autonumber
2) G: Text (typically 20 chars)
3) ED: Date/Time (dd-mmm-yyyy in source file)
4) ET: Date/Time (hh:mm:ss in source file)
5) D: Boolean
6) EI: Text (typically 10 chars)
7) C: Text (typically 5 chars)
8) S: Text (typically 10 chars)
9) F: Text (typically 10 chars)
10) EG: Boolean
11) P: Long Integer

All fields except 2, 5 and 10 are indexed. Regrettably I can't publish any of the data.
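For what it's worth, since I drop and recreate the table for each run, the structure is roughly equivalent to the following (a sketch only: the real field sizes differ slightly, and the eight single-field indices sit on top of this):

    ' Drop and recreate the Events table rather than deleting rows from it.
    Dim dbVar As DAO.Database
    Set dbVar = CurrentDb()
    dbVar.Execute "DROP TABLE Events;", dbFailOnError
    dbVar.Execute "CREATE TABLE Events (" & _
        "ID COUNTER CONSTRAINT pkEvents PRIMARY KEY, " & _
        "G TEXT(20), ED DATETIME, ET DATETIME, D BIT, " & _
        "EI TEXT(10), C TEXT(5), S TEXT(10), F TEXT(10), " & _
        "EG BIT, P LONG);", dbFailOnError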

I just completed an import of 1,484,204 records from a collection of source text files. This took me 5 hours. Source files were as follows:

f1.txt: 284,408 kb
f2.txt: 68,145 kb
f3.txt: 57,747 kb
f4.txt: 74,766 kb
f5.txt: 53,637 kb

Thank you for your consideration,

Mal
May 8 '12 #17

NeoPa
Expert Mod 15k+
P: 31,494
I would guess your problem is right there with line #5 of the code in post #15.

Repeated use of the function CurrentDb() is likely to give such problems. This is true in later versions too.

Try instead :
    Dim importCt As Integer
    Dim dbVar As DAO.Database

    Set dbVar = CurrentDb()
    ...
    importCt = importCt + 1
    If importCt > 100 Then
        CommitTrans
        AddedRecordset.Close
        Set AddedRecordset = dbVar.OpenRecordset("Events")
        importCt = 0
        BeginTrans
    End If
Of course, much of the code introduced to try to avoid this issue may be redundant after that's fixed.
May 8 '12 #18

P: 14
Many thanks. I shall change all references to CurrentDb as advised; this seems good practice (though I make only a handful of references to CurrentDb during the 5 or 6 hour process cycle, so I am less than confident this is contributing so much overhead, unless every variable bound to CurrentDb is somehow accessed very slowly?). Well, I shall try it and report back. Thanks again.
May 8 '12 #19

NeoPa
Expert Mod 15k+
P: 31,494
Every time the code processes through that loop it creates a new instance of the database object, and that instance is never released until the whole project is closed or it otherwise leaves the scope of any references to the result (which includes any variables it's set to as well as the code that calls it). That seems to me like many thousands (and more probably tens or hundreds of thousands) of instances of a DAO.Database object. I wouldn't expect that to be negligible, but come back and report when you've tested; then we'll know.
May 8 '12 #20

P: 14
I arrived at the 100 record limit at the conclusion of a ridiculous sequence starting at 100,000, in a futile bid to find *some* limit that would avoid the record lock limit, to no avail. But it's a well received point, thank you.
May 8 '12 #21

NeoPa
Expert Mod 15k+
P: 31,494
No worries Mal. Notice that this also explains why nothing you've tried so far has had any effect on the problem. It's not related to the recordset at all, but to the DAO.Database objects left lying around so many times.
May 8 '12 #22

P: 14
Replacing references to CurrentDb made no measurable difference to performance.

Importing into a blank table without indices brought the import time down from 5+ hours to 20 minutes. Appending the new records (qty 184,000+) to an extant indexed table took 1 hr 20 minutes. I guess this total of 1 hr 40 mins is the best Access 97 can offer me; I will go with that and modify my algorithms to suit. I will eradicate my CurrentDb references in line with received wisdom; there are 12 such references, none within any loop constructs.

Thanks all for your help. I guess Access just isn't that good at dealing with large, indexed files.
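For anyone who finds this thread later: the append step is now a single SQL statement run from VBA, along the lines of the sketch below ("tblImportTemp" is just my name for the index-free staging table; the field list matches the abbreviations given earlier).

    ' Append the freshly imported, index-free staging data into the indexed
    ' Events table in one SQL operation.
    Dim dbVar As DAO.Database
    Set dbVar = CurrentDb()
    dbVar.Execute "INSERT INTO Events (G, ED, ET, D, EI, C, S, F, EG, P) " & _
                  "SELECT G, ED, ET, D, EI, C, S, F, EG, P FROM tblImportTemp;", _
                  dbFailOnError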
May 8 '12 #23

NeoPa
Expert Mod 15k+
P: 31,494
MalFarrelle:
Replacing references to CurrentDb made no measurable difference to performance.
Indeed. That was to stop it crashing after exhausting available resources. Did it manage to have that effect?
May 8 '12 #24

P: 14
Unfortunately not, no.
May 8 '12 #25

TheSmileyCoder
Expert Mod 100+
P: 2,321
MalFarrelle
Importing into a blank table without indices brought the import time down from 5+ hours to 20 minutes. Appending the new records (qty 184,000+) to an extant indexed table took 1 hr 20 minutes.
Now we are at least approaching something that sounds more reasonable. Maybe more performance can be squeezed out of the process, but only you can say whether it's worth the time to pursue.

One thing we haven't really touched on is whether the computer running this is an old or a new machine. It could simply be that the performance of the PC is not good enough, if it's an old computer. Finally, is the database stored on a network share or on a local hard drive?
May 8 '12 #26

NeoPa
Expert Mod 15k+
P: 31,494
I'm at a loss. My understanding of the issue is that I would expect the situation you described if your code were still similar to that which you posted. It's very difficult to test properly via a third party, mind you. I can't see the code, so I only know that you see no further occurrences of the CurrentDb() references anywhere in your code. That's not intended as disrespect: I have a fair amount of experience dealing with people remotely, and that tells me that most people make mistakes and confuse things, not just the more careless. It's the norm rather than the exception.

At the end of the day, I can only say what I think from the limited information available to me at a distance, and I have to rely on your making the changes and reporting the results accurately. I don't know you from Adam, so I have no reason to expect anything other than the norm; I can't know, of course. Unfortunately it is the separation from the project that makes progress difficult.

I'd be interested to hear anything you come up with of course, and I'm glad at least that the ideas of closing and re-opening the recordset regularly, and adding the data initially to a table without indices and adding the required indices afterwards, proved helpful.
May 8 '12 #27

NeoPa
Expert Mod 15k+
P: 31,494
I should really add that Smiley's point about networked data (and database, of course) could also prove very important. It's barely noticeable in most situations, but your situation is one that deals with massive amounts of data. If that data is held locally, rather than being accessed across a network, that could make an appreciable difference to performance times. I would expect the most important issue here to be disk performance rather than processor, though. Everything makes some difference, but in a situation such as this I would expect the processor difference to be entirely dwarfed by the disk-access difference.
May 8 '12 #28

P: 14
@NeoPa
Because of the performance issues I have been doing all the updates with local data on my laptop. It's a pretty high-spec machine, a Dell Precision laptop with a Core i7 processor.

I think I've gone as far as I can toward a tolerable process, thanks all for your help.

All the best.
May 10 '12 #29

NeoPa
Expert Mod 15k+
P: 31,494
You're welcome Mal. An interesting situation :-)
May 10 '12 #30
