Per David W. Fenton:
Have you changed to avoiding stored querydefs with parameters? If
not, then I don't think you've actually identified the problem.
I cut that routine over to inline SQL about 2:00 this morning and fired up a
volume test.
First thing I noticed was that it runs 25% faster with inline SQL. Tried a few
additional runs and they all supported that number.
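In case it helps anybody, the cut-over looked roughly like this (query, table, and field names here are placeholders, not the real ones):

```vba
' Before: stored querydef with a parameter
Dim qd As DAO.QueryDef
Dim rs As DAO.Recordset
Set qd = CurrentDb.QueryDefs("qryReturnsByFund")
qd.Parameters("pFundID") = lngFundID
Set rs = qd.OpenRecordset(dbOpenSnapshot)

' After: inline SQL built in code
Set rs = CurrentDb.OpenRecordset( _
    "SELECT * FROM tblReturns WHERE FundID = " & lngFundID & _
    " ORDER BY ReturnDate;", dbOpenSnapshot)
```

Same WHERE logic either way; only difference is whether Jet gets a saved parameterized querydef or a SQL string assembled on the fly.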
I've got code that opens tens of thousands of recordsets and the db
does not bloat.
On my first low-volume tests, I thought I had the bloat licked - there was
some...but I expect that from the first time forms are opened and things like
that.
But on the full-volume test, the 5-meg app was coming right up on a hundred megs
by about 5 a.m.
So I fell back and regrouped. Instead of .Opening/.Closing the RS every time
I called the routine, I opened an RS into the entire Returns table in the
calling routine. Then, as the calling routine loops through another RS,
invoking the problem routine, I pass a pointer to the permanently-opened RS and
the problem routine does .FindFirst/.FindLast to identify the range of records
I'm interested in and then loops through them.
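Roughly, the pattern looks like this (all identifiers invented, and the date-literal formatting assumes US regional settings):

```vba
' Opened once, up in the calling routine:
Dim rsReturns As DAO.Recordset
Set rsReturns = CurrentDb.OpenRecordset("tblReturns", dbOpenDynaset)

' The problem routine gets the already-open RS instead of
' .Opening/.Closing its own every call:
Sub CalcPeriod(rs As DAO.Recordset, lngFundID As Long, _
               datFrom As Date, datThru As Date)
    Dim strCriteria As String
    strCriteria = "FundID = " & lngFundID & _
                  " AND ReturnDate BETWEEN #" & datFrom & _
                  "# AND #" & datThru & "#"
    rs.FindFirst strCriteria
    Do Until rs.NoMatch
        ' ...accumulate this period's return stream here...
        rs.FindNext strCriteria
    Loop
End Sub
```

One caveat, as far as I know: .FindFirst on a dynaset doesn't use an index the way .Seek on a table-type recordset does, which may have something to do with the full-volume timing below.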
In low-volume testing, it looked *really* good...like three times faster: .5
seconds per Return as opposed to 1.4 seconds with the inline SQL.
Each return involves many iterations of the problem routine as I calculate
rolling rates of return for 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 years. So, for
quarterly returns, the first one is four calcs; the second one is eight, the
third one is 12...and so on. Total per return: 220 calculations. There are
also monthlies....
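Sanity-checking that 220 figure for quarterlies - periods of 1 through 10 years, each period needing 4 returns per year:

```vba
' 4 + 8 + 12 + ... + 40 = 4 * (1 + 2 + ... + 10) = 4 * 55
Dim y As Integer
Dim lngCalcs As Long
For y = 1 To 10
    lngCalcs = lngCalcs + 4 * y
Next y
Debug.Print lngCalcs    ' prints 220
```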
For each calculation, I have to ensure that there is a return present for each
quarter within the period and build a return stream from them.... So that was
.Opening/.Closing a recordset 220 times for each return....
220 * 116,000 = Lotsa .Opens/.Closes.... (25.5 million+). And that's only for
quarterlies; monthlies are worse with 12 per year....
When I went to a full-volume test of the always-open-sequential-processing
approach my little bubble burst: 4.1 seconds per return! This seems to
conflict with what I've heard about indexed searching - that if all fields
involved are indexed, the nature of binary searches makes speed relatively
independent of number of records.
But it *does* seem to have stopped the bloat completely, albeit at the
cost of bringing the app to its knees.
For high-volume (i.e. production) we're talking 116,000 records instead of the
low-volume's 1000....and that ain't gonna fly....
(116,000 * 4.1) / 3600 = 132 hours....
I'm going to try to come up with some kind of compromise - maybe loop through
the driver RS one fund at a time and re-create the Return RS with a single
fund's returns - which might put me back at .5 seconds/return or even faster.
If that doesn't work out, maybe I'll just fall back to bloatsville and refresh
the copy of the app every so often.
But it *would* be nice to know what's causing the bloat.... I'm also logging
beaucoup error messages using the same kind of DAO IO: for each message, I open
up a recordset, append a record, and then close the RS. And I don't *think*
I'm getting any bloat out of that.
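That logging routine is basically this shape (table and field names here are stand-ins for the real ones):

```vba
Sub LogError(strMsg As String)
    ' Open, append one record, close - once per message
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblErrorLog", dbOpenDynaset)
    rs.AddNew
    rs!LoggedAt = Now()
    rs!Message = strMsg
    rs.Update
    rs.Close
    Set rs = Nothing
End Sub
```

Same open/append/close shape as the problem routine, just at a much lower volume - which is part of why the bloat is puzzling me.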
--
PeteCresswell