Hi,
I have an Excel 97 app in which I'm using SQL INSERTs to insert data
into an Access 97 database table.
The table structure is as follows:
VersionID (PK) (Long)
MetricID (PK) (Long)
MetricNumber (Double)
MetricText (Memo)
I chose this structure because each time I write to the database I'm
saving the contents of over 400 cells (metrics). Some are numeric, and
some are text longer than 255 characters (hence the Memo field).
My code executes an INSERT statement for each of the metrics (i.e. 400
INSERTs per save). For testing purposes, I'm inserting 200 numeric ones
and 200 text ones (each text value is only 7 or 8 chars).
Here's what happens:
I compact the database and run the 400 INSERTs.
On this first run, the db jumps from 8000K to 14674K. I have no idea
why.
On subsequent runs (same data each time) the growth is:
Run 2: 14674K -> 14930K (256K)
Run 3: 14930K -> 15122K (192K)
Run 4: 15122K -> 15362K (240K)
Run 5: 15362K -> 15602K (240K)
Run 6: 15602K -> 15858K (256K)
Note: I am using BeginTrans and CommitTrans to ensure all 400 go
through or none at all.
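For reference, here's roughly what the write routine looks like (a
simplified sketch, assuming a reference to the DAO 3.5 object library;
the path, table name, and BuildInsertSql helper are just illustrative):

    Function SaveMetrics() As Boolean
        Dim ws As DAO.Workspace
        Dim db As DAO.Database
        Dim i As Long

        Set ws = DBEngine.Workspaces(0)
        Set db = ws.OpenDatabase("C:\Data\Metrics.mdb")  ' illustrative path

        ws.BeginTrans
        On Error GoTo RollbackAll

        For i = 1 To 400
            ' BuildInsertSql (hypothetical helper) builds one INSERT per metric, e.g.
            ' INSERT INTO tblMetrics (VersionID, MetricID, MetricNumber, MetricText)
            ' VALUES (12, 34, 56.7, 'some text')
            db.Execute BuildInsertSql(i), dbFailOnError
        Next i

        ws.CommitTrans
        db.Close
        SaveMetrics = True
        Exit Function

    RollbackAll:
        ws.Rollback
        db.Close
        SaveMetrics = False
    End Function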
Questions I have:
1. Why the big jump in db size from compact to first write?
2. Why doesn't the database increase in size in a consistent manner for
each run of the same data?
3. Why is it >200K per run? I calculate 4 bytes + 4 bytes + 8 bytes +
8 bytes (for the 8 test chars) = 24 bytes per row, x 400 rows = 9,600
bytes, or roughly 9.4K. Is it the Memo field? I thought a Memo only
took up as much space as the text actually stored...
4. I will have multiple users reading and writing to this table in
400-row chunks from their Excel workbooks. What do you suggest I do to
handle collisions/sharing errors? Currently my plan is to trap any
read/write errors and retry behind the scenes a few times until the
operation succeeds (see the sketch after this list).
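The retry wrapper I have in mind looks something like this (just a
sketch; the attempt count and the one-second back-off are guesses on
my part):

    Function SaveWithRetry() As Boolean
        Dim attempt As Integer
        Const MAX_ATTEMPTS As Integer = 5    ' illustrative retry count

        For attempt = 1 To MAX_ATTEMPTS
            If SaveMetrics() Then            ' the transactional write shown above
                SaveWithRetry = True
                Exit Function
            End If
            ' Back off briefly so another user's locks can clear.
            Application.Wait Now + TimeSerial(0, 0, 1)
        Next attempt

        SaveWithRetry = False                ' give up and report the error to the user
    End Function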
Any help is greatly appreciated.
Michael