Hi everyone,
I am trying to refine some ADO code that extracts data from Excel
spreadsheets and runs a simple ETL-type process to place the data into
a table. The code works fine and seems reliable enough. It is also as
slow as <insert metaphor>.
Basically the process takes the Excel sheet and reads it into a
disconnected recordset. Easily done. The next step is done with a
combination of ADO and DAO code. The incoming rows are checked one at
a time against the existing import data table: new rows are added as
needed and existing rows are updated as needed. This is achieved using
two loops, one nested inside the other.
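For context, the nested-loop section looks roughly like this (a sketch only - `rsExcel` is the disconnected ADO recordset, `rsTable` is a DAO recordset on the import table, and the field names are made up):

```vba
' Rough shape of the current code - field/variable names are illustrative
Dim found As Boolean
Do While Not rsExcel.EOF
    found = False
    rsTable.MoveFirst                    ' rescan the whole table for every incoming row
    Do While Not rsTable.EOF
        If rsTable!KeyField = rsExcel!KeyField Then
            rsTable.Edit                 ' existing row: update it
            rsTable!SomeField = rsExcel!SomeField
            rsTable.Update
            found = True
            Exit Do
        End If
        rsTable.MoveNext
    Loop
    If Not found Then
        rsTable.AddNew                   ' new row: add it
        rsTable!KeyField = rsExcel!KeyField
        rsTable!SomeField = rsExcel!SomeField
        rsTable.Update
    End If
    rsExcel.MoveNext
Loop
```

So for N incoming rows against M existing rows it is doing on the order of N x M comparisons, which I suspect is where the time goes.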
Before I go and make this more 'efficient', I thought I would ask for
people's opinions on speeding it up. I was thinking of using filters
to reduce the size of the loops, but maybe there is an even better way.
It was suggested that I load the ADO recordset into an array for
processing, but I really can't see any advantage in that approach.
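As I understand it, the array suggestion would be along these lines, using GetRows (again just a sketch):

```vba
' The suggested array approach, as I understand it
Dim vData As Variant
Dim r As Long
vData = rsExcel.GetRows()           ' 2-D Variant array: vData(fieldIndex, rowIndex)
For r = 0 To UBound(vData, 2)
    ' process vData(0, r), vData(1, r), ... for each incoming row
Next r
```

But that only changes how I walk the incoming rows; the inner search over the import table stays exactly as it is, so I don't see where the saving would come from.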
Any suggestions would be greatly appreciated before I revamp this
piece of code. Is there a speed difference between the Find and Filter
methods, for example?
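One option I am considering is replacing the inner loop entirely with an indexed DAO Seek, so each incoming row becomes a single lookup instead of a scan. Something like this (a sketch, assuming the import table is opened as a table-type recordset with an index on the key field - names again hypothetical):

```vba
' Possible replacement for the inner loop: indexed Seek on the DAO table
' Assumes: rsTable opened with dbOpenTable, and an index named "PrimaryKey"
' exists on KeyField
rsTable.Index = "PrimaryKey"
Do While Not rsExcel.EOF
    rsTable.Seek "=", rsExcel!KeyField
    If rsTable.NoMatch Then
        rsTable.AddNew                   ' new row: add it
        rsTable!KeyField = rsExcel!KeyField
    Else
        rsTable.Edit                     ' existing row: update it
    End If
    rsTable!SomeField = rsExcel!SomeField
    rsTable.Update
    rsExcel.MoveNext
Loop
```

Would Seek be expected to beat Find or Filter here, or is there a better pattern again?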
Thanks in advance
The Frog