You will achieve far better performance if you read the file and process it as
you go. Make sure to use objects that are "memory frugal", since otherwise you
are likely to trigger a lot of garbage collection.
A dataset with 20 MB of data in it is not a good use of datasets.
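To illustrate the streaming idea: here is a rough sketch of the pattern in
Python with sqlite3 (standing in for ADO.NET purely to show the shape of it;
the table, columns, and file format are made up). It reads line by line and
inserts in small batches, so only one batch is ever held in memory:

```python
import sqlite3

# Sketch of the "read and process as you go" pattern. The table name,
# columns, and tab-separated file format are hypothetical.
def load_file(conn, path, batch_size=1000):
    cur = conn.cursor()
    batch = []
    with open(path) as f:
        for line in f:  # streams the file; never loads it whole
            fields = line.rstrip("\n").split("\t")
            batch.append(fields)
            if len(batch) >= batch_size:
                # flush one batch, then reuse the list
                cur.executemany(
                    "INSERT INTO items (name, value) VALUES (?, ?)", batch)
                batch.clear()
    if batch:  # flush the final partial batch
        cur.executemany(
            "INSERT INTO items (name, value) VALUES (?, ?)", batch)
    conn.commit()
```

The batching is a compromise between one round trip per line (slow) and one
giant in-memory set (the problem described above); memory use stays bounded by
the batch size no matter how big the file is.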
--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik
Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
--
"Chris" <Ch***@discussions.microsoft.com> wrote in message
news:18**********************************@microsoft.com...
When you have to read a big file (5-30 MB) and throw the data into the
database (with some logic in between, of course, but that doesn't matter
here), which of the ADO methods is recommended?
1. Read line by line and do the execute for each line.
2. Consolidate the file into a dataset, for example, and use the dataset's
Update method, i.e.

for each line
    .addnew in the dataset
end
dataset.Update(.Added)
Which one should we prefer in these situations, and will it make any
difference in performance? Are there any limitations on how many rows can be
inserted using the dataset.Update method?
Thanks much
Chris