Bytes | Software Development & Data Engineering Community

Which is best - recordsets/VBA V SQL statements for updating data

For years I have been using VBA extensively for updating data to tables
after processing. By this I mean if I had to do some intensive processing
that resulted in data in temp tables, I would have VBA code that wrote the
results of that away to the db, either creating new records or updating
existing records, whichever was relevant. This may also include deleting
records.

Now I generally do this by opening a recordset on the source data and then
work through that in a loop and make my data changes. This is a simple view
of the situation, but in reality there may be calculations going on and many
other things. But for this question assume a simple creation of records
and/or updating existing records.

A new acquaintance of mine has questioned this as he uses SQL statements
extensively. His code includes constructing SQL statements that he then
executes to make the changes. I can see this would work and it has made me
think about the pros and cons of this. This would not always be a suitable
solution, but may be a simpler and cleaner one in some situations.

I am interested in any comments on this approach.

I assume there are no problems with executing multiple SQL statements within
a transaction so that either everything succeeds or fails.
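(That all-or-nothing behaviour is exactly what a transaction gives you. A minimal sketch in Python/sqlite3 rather than VBA, with hypothetical table names, just to show the rollback:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, val TEXT)")

try:
    with conn:  # one transaction: commit on success, rollback on any error
        conn.execute("INSERT INTO t1 (val) VALUES ('a')")
        conn.execute("INSERT INTO t2 (val) VALUES ('b')")
        conn.execute("INSERT INTO nosuch (val) VALUES ('c')")  # this fails
except sqlite3.OperationalError:
    pass

# The earlier inserts were rolled back along with the failed one.
print(conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0])  # → 0
```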

Changes to the data structure would require revisiting all relevant SQL
statements, but then it may also require revisiting my code (except that in
a lot of situations I write the data across by looping through the fields
collection of the destination and don't actually code in field names except
where absolutely necessary).

Is there a speed advantage? I would think there is with the SQL statements.
Jan 2 '06 #1
Jeff wrote:
[quoted text snipped]

Is there a speed advantage? I would think there is with the SQL
statements.


SQL is nearly always faster (and preferable) to RecordSet updates. I would only
use the latter if I needed to use the value or result from one row in some other
row's calculation.

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com
Jan 2 '06 #2

"Rick Brandt" <ri*********@hotmail.com> wrote in message
news:nL******************@newssvr21.news.prodigy.com...
Jeff wrote:
Is there a speed advantage? I would think there is with the SQL
statements.
SQL is nearly always faster (and preferable) to RecordSet updates. I would
only use the latter if I needed to use the value or result from one row in
some other row's calculation.

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com


This is another one of those questions that will prompt some heated debate.
While I find it rare to contradict Rick, I think that statement is
misleading. If you are doing a single insert or single row update, SQL
would likely be faster. But if you expect to insert multiple records, that
is not necessarily the case. Under some circumstances recordsets can be
much faster.

Private Declare Function GetTickCount Lib "kernel32" () As Long

Private Sub rh()

Dim T As Long
Dim I As Integer
Const C As Integer = 1000
Dim rs As New ADODB.Recordset

On Error Resume Next
CurrentProject.Connection.Execute ( _
"DROP Table T1")
On Error GoTo 0

CurrentProject.Connection.Execute ( _
"CREATE Table T1 ( " & _
"Field1 Text, Field2 Text)")

CurrentProject.Connection.Execute ("Delete * from t1")
T = GetTickCount
For I = 1 To C
CurrentProject.Connection.Execute ( _
"Insert Into T1(Field1, Field2) " & _
"Values ('first', 'second')")
Next I
Debug.Print GetTickCount - T

CurrentProject.Connection.Execute ("Delete * from t1")
T = GetTickCount
rs.Open ("T1"), CurrentProject.Connection, adOpenForwardOnly, _
    adLockOptimistic
For I = 1 To C
rs.AddNew
rs(0) = "first": rs(1) = "Second"
rs.Update
Next I
rs.Close
Debug.Print GetTickCount - T

End Sub

Output:
1610
31
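(For what it's worth, the same effect can be reproduced in spirit outside Access. A rough Python/sqlite3 analogue of Randy's comparison, with hypothetical table and values, contrasting one statement executed and committed per row against a single prepared, batched insert:)

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (field1 TEXT, field2 TEXT)")
rows = [("first", "second")] * 1000

# Path 1: one statement executed (and committed) per row, like the
# Execute-in-a-loop version above.
t = time.perf_counter()
for f1, f2 in rows:
    conn.execute("INSERT INTO t1 (field1, field2) VALUES (?, ?)", (f1, f2))
    conn.commit()  # each statement pays its own commit
per_row = time.perf_counter() - t

conn.execute("DELETE FROM t1")
conn.commit()

# Path 2: one prepared statement, one transaction -- closer in spirit to
# the recordset loop, which keeps one open cursor on the table.
t = time.perf_counter()
with conn:
    conn.executemany("INSERT INTO t1 (field1, field2) VALUES (?, ?)", rows)
batched = time.perf_counter() - t

print(f"per-row: {per_row:.4f}s  batched: {batched:.4f}s")
print(conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0])  # → 1000
```

On most machines the batched path is dramatically faster, for the same reason Randy's recordset loop was: the per-statement parse/commit overhead dominates.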

--
Randy Harris
tech at promail dot com
I'm pretty sure I know everything that I can remember.
Jan 2 '06 #3
Jeff wrote:
[quoted text snipped]

Is there a speed advantage? I would think there is with the SQL statements.


This is not a yes/no question. The real question would be: what is the
best way to combine code and SQL to achieve the best performance? There
also may be other considerations besides speed, such as the ease with
which you can insert long text into a Memo field (much easier with a
recordset).

One example of a combination approach might be where you need to update
a specific group of records using a complex formula implemented as a VBA
function. Making VBA function calls from queries is slow, and finding a
subset of records by looping through a recordset is slow, so the best
mix is to use SQL to retrieve the set of rows to be updated, then loop
through that resulting recordset to apply the updates.
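(That combination might look like the following Python/sqlite3 sketch; the table, columns, and scoring function are all hypothetical stand-ins for the "complex formula implemented as a VBA function":)

```python
import sqlite3

def complex_score(name: str) -> int:
    # Stand-in for a per-row calculation too awkward to express in SQL.
    return sum(ord(c) for c in name) % 100

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, "
    "flagged INTEGER, score INTEGER)")
conn.executemany("INSERT INTO items (name, flagged) VALUES (?, ?)",
                 [("alpha", 1), ("beta", 0), ("gamma", 1)])

with conn:
    # SQL narrows the working set cheaply ...
    subset = conn.execute(
        "SELECT id, name FROM items WHERE flagged = 1").fetchall()
    # ... and code applies the per-row calculation only to those rows.
    for row_id, name in subset:
        conn.execute("UPDATE items SET score = ? WHERE id = ?",
                     (complex_score(name), row_id))

print(conn.execute(
    "SELECT COUNT(*) FROM items WHERE score IS NOT NULL").fetchone()[0])  # → 2
```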
Jan 2 '06 #4
"Randy Harris" <ra***@SpamFree.com> wrote in message
news:Ao*******************@newssvr27.news.prodigy.net...
"Rick Brandt" <ri*********@hotmail.com> wrote in message
news:nL******************@newssvr21.news.prodigy.com...
Jeff wrote:
> Is there a speed advantage? I would think there is with the SQL
> statements.


SQL is nearly always faster (and preferable) to RecordSet updates.
I would only use the latter if I needed to use the value or result from
one row in some other row's calculation.

This is another one of those questions that will prompt some heated debate.
While I find it rare to contradict Rick, I think that statement is
misleading. If you are doing a single insert or single row update, SQL
would likely be faster. But if you expect to insert multiple records, that
is not necessarily the case. Under some circumstances recordsets can be
much faster. [examples snipped]


A scenario that requires a SQL statement in a loop is another where I agree that
a Recordset can be faster. I was referring to using a looping Recordset
operation to update many rows as compared to a single SQL statement that would
update all of those same rows with one statement.

--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com

Jan 2 '06 #5
SQL is better than Recordset Updating and Vice Versa.

When using a Server DB, SQL happens on the Server. Suppose the SQL is
demanding and takes 5 minutes to run. Suppose 20 Users call the SQL (on
discrete sets of records) right now. Can the Server machine deal with
the demand? Probably, it can. But is this more efficient than each User
getting his/her set of records and processing them on the Client
machine? Who knows? It may take some thought, perhaps even testing to
decide.

Sometimes we inherit non-normalized DBs. The data is related but there
is no clear rule that defines the relationship. We find we have to
examine (with a function in code) much of the data in each record to
find which record in Table1 relates to which record in Table2. And then
we have to do the work, eg, create genealogical html files and upload
them to our genealogy site. Is this a job for SQL or for Recordset
manipulation? I use Recordset manipulation.

Sometimes we process data in multiple scans. During these scans we are
trying, testing, discarding and choosing. After each scan we assess our
results with respect, not to one row, but to the whole data. Is a
Recordset a good tool for this? I think so. If we are going to change
the value of something 47 times, do we want to write each of those
changes to our DB? Disconnecting the Recordset and doing our work in
memory seems a better idea to me.

With recent versions of MS-SQL one can do almost anything that one can
do with a Recordset and VBA on the Client machine, with a Stored
Procedure or UDF on the Server machine, using loops, conditionals,
cursors or whatever. The first situation I describe above becomes quite
important. How much work do you want the Server machine to do? How much
work do you want the Client machine to do? The Server machine is
serving many Clients. The Client machine is probably serving one
Client.

Disconnected Recordsets, reconnection and UpdateBatch make your
question much more pertinent than it was a few years ago. With them we
can have a mini independent data store on our Client machines. We can
play with it forever, then reconnect it and save our changes in an
instant.
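(A rough Python/sqlite3 analogue of that disconnect-rework-reconnect pattern, with a hypothetical table; a plain dict stands in for the disconnected Recordset:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, n INTEGER)")
conn.executemany("INSERT INTO t (n) VALUES (?)", [(1,), (2,), (3,)])

# "Disconnect": copy the rows into plain memory.
local = {row_id: n for row_id, n in conn.execute("SELECT id, n FROM t")}

# Rework the values as many times as we like; nothing touches the DB.
for _ in range(47):
    local = {k: v + 1 for k, v in local.items()}

# "Reconnect" and save every change in one batch, one transaction.
with conn:
    conn.executemany("UPDATE t SET n = ? WHERE id = ?",
                     [(n, row_id) for row_id, n in local.items()])

print(sorted(n for (n,) in conn.execute("SELECT n FROM t")))  # → [48, 49, 50]
```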

And of course, with a clever use of Table and Index Creation within
Transactions we can effect disconnected Recordsets in DAO. [We always
Rollback the Transaction]. Nothing is ever written to the DB (unless we
want it to be). Such procedures while demanding to code can be
extremely fast and powerful. (They can blow SQL away after the Tables
and Indexes have been created, but we have to have enough work for them
to do so that this advantage makes the time required for their creation
worthwhile).

After all that, I must note that I estimate that I use SQL 95% of the
time.

Jan 2 '06 #6
Bri
Randy Harris wrote:
"Rick Brandt" <ri*********@hotmail.com> wrote in message
news:nL******************@newssvr21.news.prodigy.com...
Jeff wrote:
Is there a speed advantage? I would think there is with the SQL
statements.


SQL is nearly always faster (and preferable) to RecordSet updates.
I would only use the latter if I needed to use the value or result
from one row in some other row's calculation.


This is another one of those questions that will prompt some heated
debate. While I find it rare to contradict Rick, I think that
statement is misleading. If you are doing a single insert or single
row update, SQL would likely be faster. But if you expect to insert
multiple records, that is not necessarily the case. Under some
circumstances recordsets can be much faster.

[code example snipped; it timed 1000 one-at-a-time SQL Inserts at 1610
ticks against 31 ticks for the equivalent recordset loop]


Your example is comparing inserting 1000 records ONE AT A TIME. A more
realistic comparison would be the SQL doing ALL of the records in one
statement vs the one-at-a-time method, which is all that recordsets can do.

Private Declare Function GetTickCount Lib "kernel32" () As Long

Sub bri()
Dim rs As New ADODB.Recordset
Dim T As Long

'Assume that the table with a thousand records from sub rh has been created
T = GetTickCount
CurrentProject.Connection.Execute ("UPDATE T1 SET T1.Field1 = 'third', " & _
    "T1.Field2 = 'fourth'")
Debug.Print GetTickCount - T

T = GetTickCount
rs.Open ("T1"), CurrentProject.Connection, adOpenForwardOnly, _
adLockOptimistic
Do Until rs.EOF
rs(0) = "fifth": rs(1) = "sixth"
rs.Update
rs.MoveNext
Loop
rs.Close
Debug.Print GetTickCount - T

End Sub

Output: 31, 100

This is the more likely scenario, IMNSHO. If you can do a bulk operation
using SQL, it will always be faster than walking the recordset (unless the
SQL calls a function, in which case it might not be; that depends on the
function). I realize that you were showing the exception to the rule, but
thought it worth showing the rule too.

--
Bri

Jan 2 '06 #7
No disagreement whatever, Bri. As you point out, my example was
deliberately "biased". My point was, and I think most everyone would agree,
that both techniques have their uses. When to use each is just one more
aspect of developing programs for optimal efficiency.

--
Randy Harris
tech at promail dot com
I'm pretty sure I know everything that I can remember.

Jan 2 '06 #8
Bri

Randy Harris wrote:
No disagreement whatever, Bri. As you point out, my example was
deliberately "biased". My point was, and I think most everyone would agree,
that both techniques have their uses. When to use each is just one more
aspect of developing programs for optimal efficiency.


I agree completely. There are several tools that do similar things and
one is the better choice depending on the specifics of the task.

--
Bri

Jan 2 '06 #9
Much as I suspected. Simple data updates, such as copying from a temp
table, updating a few fields, or deleting some records, would probably be
better handled by executing an SQL statement.

I will consider using SQL statements more often, where suitable.

Jeff

"Bri" <no*@here.com> wrote in message
news:vBhuf.236497$ki.164921@pd7tw2no...

Randy Harris wrote:
No disagreement whatever, Bri. As you point out, my example was
deliberately "biased". My point was, and I think most everyone would
agree,
that both techniques have their uses. When to use each is just one more
aspect of developing programs for optimal efficiency.


I agree completely. There are several tools that do similar things and one
is the better choice depending on the specifics of the task.

--
Bri

Jan 2 '06 #10
95% is high. I doubt that I would do the same, though I think I'll use SQL
statements more often. Especially with simple updates, such as creating new
records from data in a temp table.

Jeff

"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:11*********************@g43g2000cwa.googlegroups.com...
SQL is better than Recordset Updating and Vice Versa.

[remainder of quoted post snipped]

Jan 2 '06 #11
As you use SQL more and more, you will find fewer and fewer good
reasons to use Recordsets.

Temp Tables! EWWWWWWWWW!

Jan 2 '06 #12
Lyle's estimate of 95% is probably low for me. I've got to say that if I
need to update data, my first thought is to use a SQL statement, but then
most of my work involves SQL Server rather than Jet.

--------------
Terry Kreft
"Jeff" <je************@asken.com.au> wrote in message
news:43**********************@per-qv1-newsreader-01.iinet.net.au...
95% is high. I doubt that I would do the same, though I think I'll use SQL
statements more often. Especially with simple updates, such as creating
new records from data in a temp table.

Jeff

"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:11*********************@g43g2000cwa.googlegroups.com...
SQL is better than Recordset Updating and Vice Versa.

[remainder of quoted post snipped]


Jan 2 '06 #13
"Jeff" <je************@asken.com.au> wrote in
news:43**********************@per-qv1-newsreader-01.iinet.net.au:
Much as I suspected. Simple updating data, such as copying from a
temp table, updating a few fields, deleting some records, would
probably be better handled by executing an SQL statement.

I will consider using SQL statements more often, where suitable.


You should *start* with SQL, and only edit a recordset when you
*can't* do the job with SQL.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 3 '06 #14
"Terry Kreft" <te*********@mps.co.uk> wrote in
news:2F********************@karoo.co.uk:
Lyle's estimate of 95% is probably low for me. I've got to say
that if I need to update data, my first thought is to use a SQL
statement, but then most of my work involves SQL Server rather
than Jet.


All of my work is Jet, and I never contemplate using a recordset
until I see that what needs to be done *can't* be done with SQL.

SQL is the first choice.

Editing a recordset is for operations that can't easily be done with
set manipulation (which is what SQL is).

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 3 '06 #15
As all my work is with Jet, are there any pros or cons of upsizing to SQL
Server at a later date, with SQL statements versus recordsets? As I have
never had to do this yet I am lacking info here. However, a couple of
clients have expressed an interest in doing so, though I don't believe their
needs warrant the move at this stage.

Jeff

"David W. Fenton" <XX*******@dfenton.com.invalid> wrote in message
news:Xn**********************************@127.0.0.1...
"Terry Kreft" <te*********@mps.co.uk> wrote in
news:2F********************@karoo.co.uk:
Lyle's estimate of 95% is probably low for me. I've got to say
that if I need to update data, my first thought is to use a SQL
statement, but then most of my work involves SQL Server rather
than Jet.


All of my work is Jet, and I never contemplate using a recordset
until I see that what needs to be done *can't* be done with SQL.

SQL is the first choice.

Editing a recordset is for operations that can't easily be done with
set manipulation (which is what SQL is).

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/

Jan 3 '06 #16
Bri

Lyle Fairfield wrote:
As you use SQL more and more, you will find fewer and fewer good
reasons to use Recordsets.
Agreed!
Temp Tables! EWWWWWWWWW!


There are times that they are the only way to do a task and in those
cases I use them. As a last resort. After I've tried every other thing I
can think of.

Mind you, my definition of a temp table might not be the same as yours.
To me a temp table is a permanent table that holds data temporarily. An
example would be a POS product selection screen where the selection of
products and quantities are all entered (into the temp table) and
checked and payment is verified before the records are moved to the
Billing table.

--
Bri

Jan 3 '06 #17
"Jeff" <je************@asken.com.au> wrote in
news:43**********************@per-qv1-newsreader-01.iinet.net.au:
As all my work is with Jet, are there any pros or cons with
upsizing to SQL Server at a later date, with SQL statements versus
recordsets. As I have never had to do this yet I am lacking info
here. However, a couple of clients have expressed an interest in
doing so, though I don't believe their needs warrant the move at
this stage.


If you're comparing SQL to DAO recordsets, then the upsizing
question is neutral, in my opinion. The introduction of a server
back end changes the balance between the two and adds a third option
(as Lyle has pointed out), i.e., doing recordset operations in the
server's programming language (TSQL in the case of SQL Server).

But there's no predictable metric about what will change -- it
depends entirely on your application, your schema, and what tasks
are being performed.

It's also possible that in the majority of cases, there will be no
difference in the balance between SQL and recordsets. Where there
*is* a difference, then one would make some kind of alteration to
accommodate the differences.

In other words, I'd say that there's no predictability to upsizing
in terms of side effects.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 3 '06 #18
That is what I mean. A permanent table, in a separate back-end file, that
holds data temporarily. For the same reasons as mentioned, and where there is a
lot of processing that depends on the result of previous processing etc.

A pain in the butt necessary sometimes.

Jeff

"Bri" <no*@here.com> wrote in message news:Uszuf.127773$2k.11652@pd7tw1no...

Lyle Fairfield wrote:
As you use SQL more and more, you will find fewer and fewer good
reasons to use Recordsets.


Agreed!
Temp Tables! EWWWWWWWWW!


There are times that they are the only way to do a task and in those cases
I use them. As a last resort. After I've tried every other thing I can
think of.

Mind you, my definition of a temp table might not be the same as yours. To
me a temp table is a permanent table that holds data temporarily. An
example would be a POS product selection screen where the selection of
products and quantities are all entered (into the temp table) and checked
and payment is verified before the records are moved to the Billing table.

--
Bri

Jan 3 '06 #19
Double EWWWWWWWWWWWW!

Jan 3 '06 #20
Bri
Lyle Fairfield wrote:
Double EWWWWWWWWWWWW!


So, you have an alternative solution?

--
Bri
Jan 4 '06 #21
Bri wrote:
Lyle Fairfield wrote:
Double EWWWWWWWWWWWW!


So, you have an alternative solution?

--
Bri


Self-immolation!

--
Lyle Fairfield
Jan 4 '06 #22
To?

Jan 4 '06 #23
Bri

Lyle Fairfield wrote:
Bri wrote:
Lyle Fairfield wrote:
Double EWWWWWWWWWWWW!


So, you have an alternative solution?

--
Bri


Self-immolation!

That's just as useless a response as your original one. And this is from
someone who regularly asks others to post example code to prove their
points.

Unless, of course, you meant to have a burning affair with a fiery red
head? Oh, Wait! You said 'self'... Do you have/had red hair?

--
Bri
Jan 4 '06 #24
Bri wrote:
That's just as useless a response as your original one.


I hesitate to ask you to explain a situation where one might need a
temporary table. I suppose that it's quite possible that in some
situations one might get to the point of requiring a temporary table. I
have never created or experienced those situations. But to debate them,
one might have to delve back a long way to the initial design and
conceptual framework of the application. IMO, such an examination would
be almost impossible on CDMA.

But I do not think one should say here on CDMA that temporary tables have
a legitimate use without allowing someone else to say that in his/her
opinion temporary tables are unnecessary in any well designed
application. That is my opinion, but I am perfectly happy to be shown
to be wrong. But, please, understand that if you say "In Situation 'A'
Temporary Tables are required" that Situation 'A' and all its
precedents must be defensible.

As to my original response I am sorry that you found it useless. I'm
including it here so that you can expand on that, and perhaps point out
if some parts are more useless than others for my edification:

"the original one"

*******

SQL is better than Recordset Updating and Vice Versa.

When using a Server DB, SQL happens on the Server. Suppose the SQL is
demanding and takes 5 minutes to run. Suppose 20 Users call the SQL (on
discrete sets of records) right now. Can the Server machine deal with
the demand? Probably, it can. But is this more efficient than each User
getting his/her set of records and processing them on the Client
machine? Who knows? It may take some thought, perhaps even testing to
decide.

Sometimes we inherit non-normalized DBs. The data is related but there
is no clear rule that defines the relationship. We find we have to
examine (with a function in code) much of the data in each record to
find which record in Table1 relates to which record in Table2. And then
we have to do the work, eg, create genealogical html files and upload
them to our genealogy site. Is this a job for SQL or for Recordset
manipulation? I use Recordset manipulation.

Sometimes we process data in multiple scans. During these scans we are
trying, testing, discarding and choosing. After each scan we assess our
results with respect, not to one row, but to the whole data. Is a
Recordset a good tool for this? I think so. If we are going to change
the value of something 47 times, do we want to write each of those
changes to our DB? Disconnecting the Recordset and doing our work in
memory seems a better idea to me.

With recent versions of MS-SQL one can do almost anything that one can
do with a Recordset and VBA on the Client machine, with a Stored
Procedure or UDF on the Server machine, using loops, conditionals,
cursors or whatever. The first situation I describe above becomes quite
important. How much work do you want the Server machine to do? How much
work do you want the Client machine to do? The Server machine is
serving many Clients. The Client machine is probably serving one
Client.

Disconnected Recordsets, reconnection and UpdateBatch make your
question much more pertinent than it was a few years ago. With them we
can have a mini independent data store on our Client machines. We can
play with it forever, then reconnect it and save our changes in an
instant.
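
As a hedged sketch (not from the thread; the table and field names are invented), that disconnect/reconnect pattern in ADO looks something like this:

```vba
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset

Set cn = CurrentProject.Connection
Set rs = New ADODB.Recordset

' A client-side, batch-optimistic cursor can be disconnected
rs.CursorLocation = adUseClient
rs.Open "SELECT * FROM tblWidgets", cn, adOpenStatic, adLockBatchOptimistic

' Disconnect: edits now happen only in local memory
Set rs.ActiveConnection = Nothing

Do Until rs.EOF
    rs!Price = rs!Price * 1.1   ' change values as often as we like
    rs.Update                   ' pending in the local batch, not in the DB
    rs.MoveNext
Loop

' Reconnect and save all pending changes in one go
Set rs.ActiveConnection = cn
rs.UpdateBatch
```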

And of course, with a clever use of Table and Index Creation within
Transactions we can effect disconnected Recordsets in DAO. [We always
Rollback the Transaction]. Nothing is ever written to the DB (unless we
want it to be). Such procedures, while demanding to code, can be
extremely fast and powerful. (They can blow SQL away after the Tables
and Indexes have been created, but we have to have enough work for them
to do so that this advantage makes the time required for their creation
worthwhile).
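
A minimal sketch of that rollback trick in DAO (table and field names invented; the point is only the BeginTrans/Rollback bracket):

```vba
Dim ws As DAO.Workspace
Dim db As DAO.Database

Set ws = DBEngine.Workspaces(0)
Set db = CurrentDb

ws.BeginTrans
' Create the scratch table and index inside the transaction
db.Execute "CREATE TABLE tmpWork (ID LONG, Amount CURRENCY)", dbFailOnError
db.Execute "CREATE INDEX idxID ON tmpWork (ID)", dbFailOnError

' ... fill tmpWork and do the heavy processing here ...

' Roll back: table, index and data all vanish; the MDB is left unmodified
ws.Rollback
```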

After all that, I must note that I estimate that I use SQL 95% of the
time.

******

Jan 4 '06 #25
Lyle Fairfield typed:
But I do not think one should say here on CDMA, temporary tables have
a legitimate use, without allowing someone else to say that in his/her
opinion temporary tables are unnecessary in any well designed
application. That is my opinion, but I am perfectly happy to be shown
to be wrong. But, please, understand that if you say "In Situation 'A'
Temporary Tables are required" that Situation 'A' and all its
precedents must be defendable.


One may define some situations in good old structured programming:
when you have to force order or grouping ("levels") on intermediate
data because of a structure clash. This pattern occurs in many
situations - nowadays it's often hidden in high-level functions like
reporting and query processing. But if you roll your own, there must
be some multi-record/row data buffer. In a database context using a
temporary table could make sense.

--
Paul
Jan 4 '06 #26
Bri
Comments in-line.

Lyle Fairfield wrote:
Bri wrote:

That's just as useless a response as your original one.

I hesitate to ask you to explain a situation where one might need a
temporary table. I suppose that it's quite possible that in some
situations one might get to the point of requiring a temporary table. I
have never created or experienced those situations. But to debate them,
one might have to delve back a long way to the initial design and
conceptual framework of the application. IMO, such an examination would
be almost impossible on CDMA.


From my first post in this thread (responding to your 'Temp Tables!
EWWWWWWWWW!' response):
An example would be a POS product selection screen where the
selection of products and quantities are all entered (into the temp
table) and checked and payment is verified before the records are
moved to the Billing table.

But I do not think one should say here on CDMA, temporary tables have a
legitimate use, without allowing someone else to say that in his/her
opinion temporary tables are unnecessary in any well designed
application. That is my opinion, but I am perfectly happy to be shown
to be wrong. But, please, understand that if you say "In Situation 'A'
Temporary Tables are required" that Situation 'A' and all its
precedents must be defendable.
I agree that temp tables can be misused, but as with every other tool
they can be put to good use in the right circumstances. That too is an
opinion I am willing to be shown wrong about.
As to my original response I am sorry that you found it useless. I'm
including it here so that you can expand on that, and perhaps point out
if some parts are more useless than others for my edification:

"the original one"
I was referring to your original response to temp tables: "Temp Tables! EWWWWWWWWW!"


I was trying to point out that as a last resort a temp table could be
used to solve some problems. Your response indicates that there is no
good reason to use temp tables. I then asked for your alternative
solution. Setting myself on fire would certainly solve my problems
(permanently), but then my clients' problems would just begin.

So, for my example above, what would you use to store the entries prior
to posting them to the Billing table? A DAO (rather than ADO) solution
would be preferred as I do 90% of my work in AC97.

You usually have very creative solutions to problems and I would truly
like to know what you would use for this scenario.

--
Bri

Jan 4 '06 #27
Now I am intrigued and seriously would like to know of a suitable
alternative.

One typical requirement was a client who required a report that forecast
materials requirements for a 12 month period. This required the analysis of
forecasted sales and current inventory to determine expected production for
the period. That information was then used to determine materials, through a
BOM, and other production requirements. On the report, the result had to be
displayed in three formats - two graphs and a table - and no, they had to
have it all on one report.

This description makes it sound reasonably simple, but believe me it wasn't.
There are many system defaults that need to be checked and incorporated, and
complex calculations that draw data from many areas.

I know this is probably not enough information, but how would you approach a
task like this?

Jeff

"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:11*********************@g43g2000cwa.googlegro ups.com...
Bri wrote:
That's just as useless a response as your original one.


I hesitate to ask you to explain a situation where one might need a
temporary table. I suppose that it's quite possible that in some
situations one might get to the point of requiring a temporary table. I
have never created or experienced those situations. But to debate them,
one might have to delve back a long way to the initial design and
conceptual framework of the application. IMO, such an examination would
be almost impossible on CDMA.

But I do not think one should say here on CDMA, temporary tables have a
legitimate use, without allowing someone else to say that in his/her
opinion temporary tables are unnecessary in any well designed
application. That is my opinion, but I am perfectly happy to be shown
to be wrong. But, please, understand that if you say "In Situation 'A'
Temporary Tables are required" that Situation 'A' and all its
precedents must be defendable.

As to my original response I am sorry that you found it useless. I'm
including it here so that you can expand on that, and perhaps point out
if some parts are more useless than others for my edification:

"the original one"

<snip: post #25 quoted in full above>

Jan 4 '06 #28
"Lyle Fairfield" <ly***********@aim.com> wrote in
news:11*********************@g43g2000cwa.googlegro ups.com:
But I do not think one should say here on CDMA, temporary tables
have a legitimate use, without allowing someone else to say that
in his/her opinion temporary tables are unnecessary in any well
designed application. That is my opinion, but I am perfectly happy
to be shown to be wrong. But, please, understand that if you say
"In Situation 'A' Temporary Tables are required" that Situation
'A' and all its precedents must be defendable.


I would say that temp tables are not first choice, the way that
recordsets are not first choice (after all, a temp table is just a
persistent recordset, no?). You try the SQL and if that doesn't work
well, you move to the recordset, or in cases where it solves other
problems, a temp table. But suitability to task depends on the
nature of the task, the context in which it is being performed, and
the requirements that that task must fulfill.

Many around here recommend downloading lookup tables once a session
to a local data store in order to keep from having to hit the server
for lookup tables. I've never understood this one, as in my
experience, Access and the OS cache such small bits of data after
the first lookup. Since the process of downloading once a session
assumes that these tables are relatively non-volatile, the idea that
Access's underlying data caching will be invalidated by changes and
need to be re-retrieved seems non-operable to me. So, I've never
understood why one would choose to engineer an app in this fashion --
seems like a lot of work for absolutely no benefit whatsoever.

However, change the operating environment to a marginal network with
low bandwidth, and there could end up being a benefit to pulling the
data over once and *not* relying on a cache over which you have no
programmatic control.

A case where I do use a temp table is for my query by form results.
Almost always, I want to include a checkbox to allow the results
list to be selected/deselected. Rather than maintain a checkbox
table (with the resulting maintenance it entails, and it is,
nonetheless, a temp table, as well), I prefer to write the data to a
temp table. Now, one reason for that is that I almost always use
that temp table resultset as the basis for exporting data, or for
allowing users to do mail merges. One could do both of these things
with SQL and with saved queries, but not nearly as easily as with a
temp table.
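
A hedged sketch of filling such a results table with a selection flag (all names invented):

```sql
-- Write the query-by-form hits to the temp table, deselected by default
INSERT INTO tmpResults (CustomerID, CustomerName, Selected)
SELECT CustomerID, CustomerName, False
FROM tblCustomers
WHERE Region = 'West';
```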

Also, in some cases, the temp tables include summarized data that
would have to be calculated on the fly in SQL, and that are
something of a performance drain. I think it's better to do it once
and persist the result than to risk needing to recalculate it later.

But the balance is tipped not by the task itself, nor by performance
issues concerning the methods available, but by the outside
requirements. In this case, the Word mail merge requirement is the
one that makes the persistent temp table most attractive.

Another case where I use temp tables is for assembling data from
non-Access data sources, and then processing it for export to other
programs. In those cases, it often can't be done any other way without
extraordinarily complex and poor-performing SQL
(would you really want to maintain a UNION query that drew data from
8 different ODBC data tables and 4 Access tables?).

But I don't use temp tables for any form of data entry, as some
people suggest in order to decouple data entry processes from the
back end data source (usually to improve concurrency). However, I
haven't had any apps where there was a need for the creation of a
header/child structure (as with an invoice) that had concurrency
problems that needed solving. So I can conceive of cases where it
could be an attractive approach.

I am intrigued by the transaction-wrapped approach to temporary data
that Lyle has often cleverly suggested, but have never thought it
worth the trouble to implement all that code just to avoid having
the temp objects persist.

Others mileage may vary, naturally.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 4 '06 #29
I think I'll withdraw and leave the field to you. Without a deep
understanding of what actually happens with the POS / Billing stuff I
shouldn't suggest anything and I won't. If you've analyzed the problem,
etc, etc and concluded a temp table is called for, then by all means go
for it.
Do you create your temp table within a Transaction and RollBack the
Transaction when you're done with it? That way the Temporary Table is
never saved and your DB is unmodified. But a Report or Form cannot use
such a Temp Table; TTBOMK Access Objects can use only Tables etc that
appear in the DB window. What about when the DB window is hidden ...
well, TTBOMK it never is.

Jan 5 '06 #30
rkc
Lyle Fairfield wrote:
Do you create your temp table within a Transaction and RollBack the
Transaction when you're done with it? That way the Temporary Table is
never saved and your DB is unmodified. But a Report or Form cannot use
such a Temp Table; TTBOMK Access Objects can use only Tables etc that
appear in the DB window. What about when the DB window is hidden ...
well, TTBOMK it never is.


Say you use your VirtualDAORecordSet() function to create a recordset
and then use that function in another procedure like so:

set rs = VirtualDAORecordSet()

How/when do you clean up? If you don't clean up, the temp table is left
hanging around.

Or have you never used it that way?

Jan 5 '06 #31
Bri

Lyle Fairfield wrote:
I think I'll withdraw and leave the field to you. Without a deep
understanding of what actually happens with the POS / Billing stuff I
shouldn't suggest anything and I won't. If you've analyzed the problem,
etc, etc and concluded a temp table is called for, then by all means go
for it.
Well, I couldn't think of any other way to do it. That isn't the same as
'there is no other way to do it'. I'm always looking for better ways of
doing things.
Do you create your temp table within a Transaction and RollBack the
Transaction when you're done with it? That way the Temporary Table is
never saved and your DB is unmodified.
Not in this case, but I have used that technique elsewhere. I don't use
it often either, but there are times where it comes in handy.
But a Report or Form cannot use
such a Temp Table; TTBOMK Access Objects can use only Tables etc that
appear in the DB window.


That hits the nail on the head for my POS form. The temp table is in a
separate MDB and is permanently linked. The table always exists, so it
is in the DB Window. It has data added to it in the POS screen. Once the
order has been finished and payment has been received, the data is
inserted from the temp table into the Billing table via an INSERT INTO query
and then deleted from the temp table. So, the temp table is really a
batch-process collection. The MDB with the temp table in it is
compacted when the app is open, each user has their own copy of the temp
table MDB in their TEMP folder.
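
Finding that per-user folder from code might look like this (the MDB name is invented):

```vba
' Locate the per-user temp MDB via the TEMP environment variable
Dim strTempMdb As String
strTempMdb = Environ("TEMP") & "\POSTemp.mdb"
' strTempMdb can then be used to relink the temp table or compact the file
```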

I did in a previous post in this thread state that my definition of temp
table in this instance was that the data was temporary, ie the table was
a place to temporarily hold data. This isn't the same as a table
created, used and then deleted all in code. If I do this I always
enclose it in a Transaction as per your above suggestion.

Since this discussion is getting OT I'll bring this back around to the
original topic of this thread: I use SQL to move the data from the temp
table to the Billing table; I do not walk the recordset and do separate
inserts for each record.
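
The set-based move described might look roughly like this (table and column names are invented):

```sql
-- Post the verified order in one set-based statement
INSERT INTO tblBilling (OrderID, ProductID, Qty, Price)
SELECT OrderID, ProductID, Qty, Price
FROM tmpOrderLines;

-- Then clear the batch from the temp table
DELETE FROM tmpOrderLines;
```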

--
Bri

Jan 5 '06 #32
rkc
rkc wrote:
How/when do you clean up? If you don't clean up, the temp table is left
hanging around.


Never mind.

I got it.

Stupid question.
Jan 5 '06 #33
Bri <no*@here.com> wrote in news:HX_uf.134699$2k.54242@pd7tw1no:
I did in a previous post in this thread state that my definition
of temp table in this instance was the the data was temporary, ie
the table was a place to temporarily hold data. This isn't the
same as a table created, used and then deleted all in code. If I
do this I always enclose it in a Transaction as per your above
suggestion.


I don't see any utility in repeatedly recreating and deleting a
table. If you're going to re-use the same data structure, it makes
perfect sense to me to persist the empty table. Why spend the time
writing and testing all the code to create the table and add the
indexes and so forth when you can just do it once by hand and be
done with it?

Of course, you don't want it in your front end, as it leads to bloat
when you clean out the records in it.

Theoretically, rolled-back transactions oughtn't bloat the MDB, but
I'm suspicious of that kind of thing. If temporary querydefs can
cause bloat (and they can), then I don't see why rolled-back
transactions wouldn't as well.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jan 5 '06 #34
Bri

David W. Fenton wrote:
Bri <no*@here.com> wrote in news:HX_uf.134699$2k.54242@pd7tw1no:

I did in a previous post in this thread state that my definition
of temp table in this instance was the the data was temporary, ie
the table was a place to temporarily hold data. This isn't the
same as a table created, used and then deleted all in code. If I
do this I always enclose it in a Transaction as per your above
suggestion.

I don't see any utility in repeatedly recreating and deleting a
table. If you're going to re-use the same data structure, it makes
perfect sense to me to persist the empty table. Why spend the time
writing and testing all the code to create the table and add the
indexes and so forth when you can just do it once by hand and be
done with it?


That was exactly my thinking. I do exactly that.
Of course, you don't want it in your front end, as it leads to bloat
when you clean out the records in it.
The empty table is in its own MDB that sits in the user's TEMP folder
(found via the environment variable). I compact it as part of the app's
startup so it never gets bloated.
Theoretically, rolled-back transactions oughtn't bloat the MDB, but
I'm suspicious of that kind of thing. If temporary querydefs can
cause bloat (and they can), then I don't see why rolled-back
transactions wouldn't as well.


In my experience, the db does grow some during these rolled-back
transactions. The growth is nowhere near the growth I got before
wrapping the process in a transaction, but it still grows. I have also
noticed that temp queries cause bloat. In most cases, I create a saved
query called qryTemp whose SQL property I then modify in code. This
has the added advantage of having a query you can view the design of
during debugging.
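
In DAO that pattern is just a property assignment (the criteria here are invented):

```vba
Dim qdf As DAO.QueryDef

' Rewrite the saved query's SQL instead of creating a temporary QueryDef
Set qdf = CurrentDb.QueryDefs("qryTemp")
qdf.SQL = "SELECT * FROM tblOrders WHERE OrderDate >= #1/1/2006#;"
' qryTemp can now be run, or opened in design view while debugging
```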

Just to reclarify; the linked temp tables and the rollback transactions
are not being used at the same time. I use one OR the other depending on
what I'm trying to accomplish.

--
Bri

Jan 6 '06 #35

This thread has been closed and replies have been disabled. Please start a new discussion.
