Bytes IT Community

access 97, insert from sql server slow

I've got an access97 reporting mdb that pulls data (77,000 rows) from a
sql server table into a local table to run reports

if the local table is part of the reporting MDB, the insert statement
(16 fields) takes less than 30secs, but because of db-bloat, I moved
the local table to a 2nd MDB

and per postings, this 2nd MDB is copied into a folder and linked as a
'temp' MDB every time I run my reporting mdb

also, per postings, my main form opens a recordset to a table in the
'temp' MDB to prevent continuous LDB activity

but the insert statement now takes 20 mins....

both the reporting MDB and the temp MDB are on the same terminal server
(on different drives) and the sql server is connected to the terminal
server via a 1-GB network

updating to the 'temp' MDB is slow with / without any indexing on the
local table

the 'temp' MDB has been compacted, no difference

I created a new temp MDB and imported the tables, no difference

what am I missing ?

Jan 2 '07 #1
21 Replies


le*********@natpro.com wrote:
I've got an access97 reporting mdb that pulls data (77,000 rows) from a
sql server table into a local table to run reports
That's really not that many records. Why not just have the mdb/e run
reports against linked tables or pass through queries? It would
eliminate maintenance of a copy of your data.
--
Tim http://www.ucs.mun.ca/~tmarshal/
^o<
/#) "Burp-beep, burp-beep, burp-beep?" - Quaker Jake
/^^ "Whatcha doin?" - Ditto "TIM-MAY!!" - Me
Jan 2 '07 #2

the final report is based on a query of many tables (consisting of both
sql server and access)
with user-selected filtering

and the current sql-server app does not allow indexing on its
tables

and .....

Since the posting, I changed the insert query to insert just one field
(6-char text) and it's still slow

And if I open the temp MDB and look at the contents of the table, it's
empty until the 20-odd minutes are up... so it's not a data transfer /
dribble issue...

It's like the whole thing is suspended on the sql server waiting for
the query to run

I'll try the insert without using the terminal server

Tim Marshall wrote:
le*********@natpro.com wrote:
I've got an access97 reporting mdb that pulls data (77,000 rows) from a
sql server table into a local table to run reports

That's really not that many records. Why not just have the mdb/e run
reports against linked tables or pass through queries? It would
eliminate maintenance of a copy of your data.
--
Tim http://www.ucs.mun.ca/~tmarshal/
^o<
/#) "Burp-beep, burp-beep, burp-beep?" - Quaker Jake
/^^ "Whatcha doin?" - Ditto "TIM-MAY!!" - Me
Jan 2 '07 #3

<le*********@natpro.com> wrote:
I've got an access97 reporting mdb that pulls data
(77,000 rows) from a sql server table into a local
table to run reports
Almost certainly, a report with 77,000 rows of data is, from a usability
point of view, something that needs work. Consider creating a View on SQL
Server to ensure that the extraction and manipulation is all done
server-side, or a pass-through Query (but, my experience with Access, Jet,
ODBC, and SQL Server has been that, unless your joins are what Jet considers
complex*, the SQL generated by the combination is generally efficient). If
you only bring across the network the actual data to appear on a
"human-usable" report, I'll bet you would not have performance problems
(unless there is something seriously wrong with the Table design).

* Jet's criteria for "complex" vary between versions, but
if it deems "too complex" it will bring, or try to bring,
all the underlying data to the user's machine on what
seems to be the assumption that will be more efficient
than doing it on the server.

Larry Linson
Microsoft Access MVP

Jan 2 '07 #4

<le*********@natpro.com> wrote:
the final report is based on a query of many tables
(consisting of both sql server and access)
with user-selected filtering
The join of the combination of SQL Server and Jet tables forces much more
data to be retrieved from SQL Server, because the join can ONLY be done on
the machine where Jet is being processed (the user's machine). For example,
suppose that your SQL Server table has 100,000 records, and the Jet table
only has 100 records... you'll be retrieving all 100,000 records from SQL
Server, because only after the join will the 99,900 unused records be
determined.
and the current sql-server app does not allow
for indexing on its tables
Are you interfacing with a commercial application that your company is
using? If not, if it is your company's application, you should have a chat
with the database administrator (DBA) to get the DBA's advice, and perhaps
the DBA will offer the solution of indexing, or creating Views, or maybe
even moving some of your Jet tables to SQL Server, so the big report query
join can be accomplished server-side without bringing all that extra data
across the network.

Larry Linson
Microsoft Access MVP

Jan 3 '07 #5

"Larry Linson" <bo*****@localhost.notwrote in message
news:rZEmh.1728$IT2.610@trnddc06...
<le*********@natpro.com> wrote:
the final report is based on a query of many tables
(consisting of both sql server and access)
with user-selected filtering

The join of the combination of SQL Server and Jet tables forces much more data
to be retrieved from SQL Server, because the join can ONLY be done on the
machine where Jet is being processed (the user's machine). For example,
suppose that your SQL Server table has 100,000 records, and the Jet table only
has 100 records... you'll be retrieving all 100,000 records from SQL Server,
because only after the join will the 99,900 unused records be determined.
Actually, in my testing that is not the case. What is sent to the server is a
WHERE clause with a 100-value OR criteria list on the field used in the join.
Still a lousy way to run a query, but not as bad as pulling 100,000 rows.
--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com
Jan 3 '07 #6

that's why I'm inserting the sql server table data into an ms-access
table
so I don't run into the restrictions you mention
I'm working with ms-access data exclusively for reporting purposes

and this is a commercial application, which has a newer version that
allows indices to be created, but the upgrade to this version won't
happen till the 2nd half of 2007, because there's a need to update
hardware / OS

Larry Linson wrote:
<le*********@natpro.com> wrote:
the final report is based on a query of many tables
(consisting of both sql server and access)
with user-selected filtering

The join of the combination of SQL Server and Jet tables forces much more
data to be retrieved from SQL Server, because the join can ONLY be done on
the machine where Jet is being processed (the user's machine). For example,
suppose that your SQL Server table has 100,000 records, and the Jet table
only has 100 records... you'll be retrieving all 100,000 records from SQL
Server, because only after the join will the 99,900 unused records be
determined.
and the current sql-server app does not allow
for indexing on its tables

Are you interfacing with a commercial application that your company is
using? If not, if it is your company's application, you should have a chat
with the database administrator (DBA) to get the DBA's advice, and perhaps
the DBA will offer the solution of indexing, or creating Views, or maybe
even moving some of your Jet tables to SQL Server, so the big report query
join can be accomplished server-side without bringing all that extra data
across the network.

Larry Linson
Microsoft Access MVP
Jan 3 '07 #7

The report summarizes these 77,000 rows of data, so it is only
about 17 pages long.
The 77,000 rows are further processed based on business rules to obtain
the final table that is used for reporting purposes.

the rules and filtering options are such that I can not preload the
data in a data warehouse format

I eventually will move the ms-access tables to sql-server and will use
stored procedures to implement the business rules and will run the
reports from the sql-server tables - but that won't happen until after
we upgrade to the latest OS, sql-server and application versions as
mentioned in another response

I just can't understand why inserting one field from 77000 sql-server
rows into an ms-access table takes 30 secs if the ms-access table is
local, but 20 mins if it's linked to another MDB on the same server.
Larry Linson wrote:
<le*********@natpro.com> wrote:
I've got an access97 reporting mdb that pulls data
(77,000 rows) from a sql server table into a local
table to run reports

Almost certainly, a report with 77,000 rows of data is, from a usability
point of view, something that needs work. Consider creating a View on SQL
Server to ensure that the extraction and manipulation is all done
server-side, or a pass-through Query (but, my experience with Access, Jet,
ODBC, and SQL Server has been that, unless your joins are what Jet considers
complex*, the SQL generated by the combination is generally efficient). If
you only bring across the network the actual data to appear on a
"human-usable" report, I'll bet you would not have performance problems
(unless there is something seriously wrong with the Table design).

* Jet's criteria for "complex" vary between versions, but
if it deems "too complex" it will bring, or try to bring,
all the underlying data to the user's machine on what
seems to be the assumption that will be more efficient
than doing it on the server.

Larry Linson
Microsoft Access MVP
Jan 3 '07 #8

"Rick Brandt" <ri*********@hotmail.comwrote
the final report is based on a query of many tables
(consisting of both sql server and access)
with user-selected filtering

The join of the combination of SQL Server and Jet tables forces much
more data
>to be retrieved from SQL Server, because the join can ONLY be done on
the
>machine where Jet is being processed (the user's machine). For example,
suppose that your SQL Server table has 100,000 records, and the Jet
table only
>has 100 records... you'll be retrieving all 100,000 records from SQL
Server,
>because only after the join will the 99,900 unused records be
determined.
>
Actually, in my testing that is not the case. What is sent to the server
is a
WHERE clause with a 100 value OR criteria on the field used in the join.
Still
a lousy way to run a query, but not as bad as pulling 100,000 rows.
Are we talking about the same thing?

If I have a Query with an Access Table of 100 Records, with a field called
SerialNumber, and an SQL Server Table with a field called SerialNumber, and,
in this Query, I join the Serial Number fields in the two Tables, do you
mean that Jet-ODBC is going to send to the server 100 queries with a WHERE
clause instead of the Join, or a WHERE clause with the 100 values for
Serial Number that were pre-retrieved from the local Jet Table?

Suppose it was 1,000 Records? 10,000 Records? 50,000?

Larry Linson
Microsoft Access MVP

Jan 3 '07 #9

Have you looked at the articles, and examples, at MVP Tony Toews site,
http://www.granite.ab.ca/accsmstr.htm on performance?

Apparently, I was in error about how much information is retrieved -- see
Rick's post in this thread. It has, for certain, been several versions ago
when I last looked at what was transmitted/received from Access client to
Server database, and there was no similar situation in that database
application.

And, if the application is commercial, it's likely that you cannot create
Views in that database, nor stored procedures -- approaches frequently used
to improve performance. But, it is probable that you could use passthrough
Queries to send Queries that aren't altered or modified, but executed
exactly as you wrote them, on the server. On the other hand, passthrough
Queries will only have access to the SQL Server tables, so aren't any help
in the problem of joining local and server Tables.
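For what it's worth, a pass-through Query can also be created from DAO code. This is only a sketch: the DSN, database, and query names below are made up, not anything from the poster's setup (only tblLineBooking_v comes from elsewhere in this thread):

```
' Hedged sketch: create a pass-through query from DAO code.
' "MyServerDSN", "MyDatabase" and "qryPT_LineBooking" are placeholder
' names only.
Dim db As DAO.Database
Dim qdf As DAO.QueryDef

Set db = CurrentDb
Set qdf = db.CreateQueryDef("qryPT_LineBooking")
qdf.Connect = "ODBC;DSN=MyServerDSN;DATABASE=MyDatabase"
' The SQL is sent to the server exactly as written (T-SQL, not Jet SQL).
qdf.SQL = "SELECT slsNbr FROM tblLineBooking_v " & _
          "WHERE transDate BETWEEN '20060401' AND '20061231'"
qdf.ReturnsRecords = True
qdf.Close
```

The saved query can then be used like any other query as a report or form record source, with the whole statement executing server-side.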

Larry Linson
Microsoft Access MVP

<le*********@natpro.com> wrote in message
news:11*********************@n51g2000cwc.googlegroups.com...
the report, is summarizing these 77000 rows of data, so it is only
about 17 pages long
the 77000 rows are further processed based on business rules, to obtain
the final table that is used for reporting purposes

the rules and filtering options are such that I can not preload the
data in a data warehouse format

I eventually will move the ms-access tables to sql-server and will used
stored procedures to implement the business rules and will run the
reports from the sql-server tables - but that won't happen until after
we upgrade to the latest OS, sql-server and application versions as
mentioned in another response

I just can't understand why inserting one field from 77000 sql-server
rows into an ms-access table takes 30 secs if the ms-access table is
local, but 20 mins if it's linked to another MDB on the same server.

Larry Linson wrote:
<le*********@natpro.com> wrote:
I've got an access97 reporting mdb that pulls data
(77,000 rows) from a sql server table into a local
table to run reports

Almost certainly, a report with 77,000 rows of data is, from a usability
point of view, something that needs work. Consider creating a View on
SQL Server to ensure that the extraction and manipulation is all done
server-side, or a pass-through Query (but, my experience with Access,
Jet, ODBC, and SQL Server has been that, unless your joins are what Jet
considers complex*, the SQL generated by the combination is generally
efficient). If you only bring across the network the actual data to
appear on a "human-usable" report, I'll bet you would not have
performance problems (unless there is something seriously wrong with the
Table design).

* Jet's criteria for "complex" vary between versions, but
if it deems "too complex" it will bring, or try to bring,
all the underlying data to the user's machine on what
seems to be the assumption that will be more efficient
than doing it on the server.

Larry Linson
Microsoft Access MVP

Jan 3 '07 #10

Larry Linson wrote:
Are we talking about the same thing?

If I have a Query with an Access Table of 100 Records, with a field
called SerialNumber, and an SQL Server Table with a field called
SerialNumber, and, in this Query, I join the Serial Number fields in
the two Tables, do you mean that Jet-ODBC is going to send to the
server 100 queries with a WHERE clause instead of the Join, or a
WHERE clause with the 100 values for Serial Number that were
pre-retrieved from the local Jet Table?
Suppose it was 1,000 Records? 10,000 Records? 50,000?
A single SELECT statement with criteria of...

WHERE SerNum = This
Or SerNum = That
Or SerNum = ...

I don't know if there is a number of local rows where the behavior changes, but
I suspect that to be the case. I only know that if the local table is
relatively small performance is too good for it to be pulling all records for a
local join.

--
Rick Brandt, Microsoft Access MVP
Email (as appropriate) to...
RBrandt at Hunter dot com

Jan 3 '07 #11

Try a persistent connection.......
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Jan 3 '07 #12

I have a persistent connection
and right now I have 2 identical queries, consisting of
INSERT INTO tblLineBooking ( slsNbr )
SELECT slsNbr
FROM tblLineBooking_v
WHERE transDate Between #4/1/2006# And #12/31/2006#;

one query takes 15secs (which is correct)
the other takes 20mins

then I shut down the MDB and reopen it, and now both queries take 20
mins....

so I'm not sure what's happening
Albert D. Kallal wrote:
Try a persistent connection.......
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Jan 3 '07 #13


"Rick Brandt" <ri*********@hotmail.comwrote in message
news:lg*****************@newssvr12.news.prodigy.ne t...
Larry Linson wrote:
>Are we talking about the same thing?

If I have a Query with an Access Table of 100 Records, with a field
called SerialNumber, and an SQL Server Table with a field called
SerialNumber, and, in this Query, I join the Serial Number fields in
the two Tables, do you mean that Jet-ODBC is going to send to the
server 100 queries with a WHERE clause instead of the Join, or a
WHERE clause with the 100 values for Serial Number that were
pre-retrieved from the local Jet Table?
Suppose it was 1,000 Records? 10,000 Records? 50,000?

A single SELECT statement with criteria of...

WHERE SerNum = This
Or SerNum = That
Or SerNum = ...

I don't know if there is a number of local rows where the behavior
changes, but I suspect that to be the case. I only know that if the local
table is relatively small performance is too good for it to be pulling all
records for a local join.
Thanks, Rick. There's always something to learn about our favorite database
product, no matter how long we've been using it, isn't there?

Larry Linson
Microsoft Access MVP
Jan 3 '07 #14


Is this "temp" table deleted before each time you fill it up? If so,
consider creating a *new* table each time you fill it up. You may
want to do this in a second "temp data" mdb. Feel free to create any
indexes you want, the jet engine will create them as the data goes
into the table.
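As a sketch of that idea (the folder path below is only an example, and this assumes the source view tblLineBooking_v is already linked in the front end), a make-table query can write straight into the second mdb:

```
' Hedged sketch: recreate the temp table in the external mdb each run
' via SELECT ... INTO, instead of deleting rows from an existing table.
' SELECT ... INTO fails if the table already exists, which is fine when
' the temp mdb is a fresh copy each time, as the poster described.
' The path "C:\ReportTemp\temp.mdb" is an example only.
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "SELECT slsNbr, transDate " & _
           "INTO tblLineBooking IN 'C:\ReportTemp\temp.mdb' " & _
           "FROM tblLineBooking_v " & _
           "WHERE transDate BETWEEN #4/1/2006# AND #12/31/2006#;", _
           dbFailOnError
```

Because the table is created fresh, Jet builds any indexes as the rows go in rather than maintaining them row by row.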

Remember that both the front-end (the mdb that is running the query)
and the data mdb need to be on the computer's local drive. (Having
them both on the same drive is "better".)

Is this a MS SQL Server you're pulling the data from? If not, we need
to know that. I know it sounds silly, but different SQL Servers
sometimes require different techniques.

As Larry suggested, if you can create a view, do so. Views can
rapidly speed up the retrieval process. (If this is a MS SQL Server,
running a view repeatedly will cause MS SQL Server to sort of create
its own indexes, regardless of what's created/allowed, which also
speeds things up.)

Also, make sure you have plenty of free space on the local computer
this is running on. Queries against a (non-MS) SQL Database can eat
memory like crazy.

You mentioned Terminal Server.... That itself can cause problems if
it's not optimized correctly. Especially when it's a Citrix set-up.
There are times when it's *very* intelligent to hire a Citrix Guru to
set the Terminal Server up properly. They're expensive in the short
run, but the employee time saved over a few days can offset it rather
rapidly. (No, I'm not a Citrix Guru, but I've seen the results of
their work, and it's danged impressive!)
On 2 Jan 2007 14:52:00 -0800, le*********@natpro.com wrote:
>I've got an access97 reporting mdb that pulls data (77,000 rows) from a
sql server table into a local table to run reports

if the local table is part of the reporting MDB, the insert statement
(16 fields) takes less than 30secs, but because of db-bloat, I moved
the local table to a 2nd MDB

and per postings, this 2nd MDB is copied into a folder and linked as a
'temp' MDB every time I run my reporting mdb

also, per postings, my main form opens a recordset to a table in the
'temp' MDB to prevent continuous LDB activity

but the insert statement now takes 20 mins....

both the reporting MDB and the temp MDB are on the same terminal server
(on different drives) and the sql server is connected to the terminal
server via a 1-GB network

updating to the 'temp' MDB is slow with / without any indexing on the
local table

the 'temp' MDB has been compacted, no difference

I created a new temp MDB and imported the tables, no difference

what am I missing ?

--
Drive C: Error. (A)bort (R)etry (S)mack The Darned Thing

Jan 4 '07 #15

<le*********@natpro.com> wrote in message
news:11*********************@i12g2000cwa.googlegroups.com...
>I have a persistent connection
by persistent connection, I mean that your front end opens a table in the
back end "mdb" file BEFORE you run those queries.

That means the front end opens a table in that back end "temp" mdb you
talked about. You keep this table open, and then run your queries...

the issue is that when ms-access opens and closes the connection, the
overhead of creating the ldb locking file for that temp mdb can slow
things down. Keeping a persistent connection will prevent this problem
(and, thus, often removes the inconsistent performance issues - the
persistent connection will DO NOTHING for sql server).
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Jan 4 '07 #16

from the ODBC documentation (MSDN):

"ODBC provides a Driver Manager to manage simultaneous access to
multiple DBMSs. Although the use of drivers solves the problem of
accessing multiple DBMSs simultaneously, the code to do this can be
complex. Applications that are designed to work with all drivers cannot
be statically linked to any drivers. Instead, they must load drivers at
run time and call the functions in them through a table of function
pointers. The situation becomes more complex if the application uses
multiple drivers simultaneously.

Rather than forcing each application to do this, ODBC provides a Driver
Manager. The Driver Manager implements all of the ODBC functions -
mostly as pass-through calls to ODBC functions in drivers - and is
statically linked to the application or loaded by the application at
run time. Thus, the application calls ODBC functions by name in the
Driver Manager, rather than by pointer in each driver.

When an application needs a particular driver, it first requests a
connection handle with which to identify the driver and then requests
that the Driver Manager load the driver. The Driver Manager loads the
driver and stores the address of each function in the driver. To call
an ODBC function in the driver, the application calls that function in
the Driver Manager and passes the connection handle for the driver. The
Driver Manager then calls the function by using the address it stored
earlier."

****

Is it possible that the ODBC Driver Manager must load the drivers for
each record (77,000) because it identifies the external mdb as a
separate entity?

****
I do something similar to what you are talking about (as a backup
actually) by maintaining two helper files, one an MDB and the other an
ADP. The ADP is never opened other than in code from the MDB but just
functions as an ADO/OLEDB connection to the SQL-Server, while the MDB
simply offers a place from which to run the code and to temporarily
store the tables.

The code below (it's in a startup form module) runs in an mdb (the
equivalent of your home mdb) to move SQL-data to an mdb (newly created
each time) on a different drive. I've never checked time because it's
never given me cause for concern.

Well, I ran it now starting at 05:21. The filetime of the new file is
also 05:21. The SQL database is on an internet-enabled MS SQL Server
3000 miles away. We might be talking only 10000 records max though.

Const ADPFile As String = "C:\Documents and Settings\Lyle Fairfield\" _
    & "My Documents\Access\FFDBABooks.adp"
Const Backupfile As String = "D:\SpecificFiles\FFDBABooks.mdb"

Const b As String = "PROVIDER=SQLOLEDB.1;" _
    & "PERSIST SECURITY INFO=FALSE;" _
    & "INITIAL CATALOG=Database;" _
    & "DATA SOURCE=Server"
Const u As String = "UserID"
Const p As String = "Password"

Private Sub BackupSQLTablesAsJET()
Dim c As ADODB.Connection
Dim r As ADODB.Recordset

' zap old tables
Set c = New ADODB.Connection
With c
.Open CurrentProject.BaseConnectionString
End With
Set r = c.OpenSchema( _
adSchemaTables, Array(Empty, Empty, Empty, "Table"))
With r
Do While Not .EOF
CurrentProject.Connection.Execute ("DROP TABLE " & !TABLE_NAME)
.MoveNext
Loop
End With

' refresh TableDefs
DBEngine(0)(0).TableDefs.Refresh

' set persist security information
' in the adp file to true
SecurityInformation "TRUE"

' connect to the ADP file
With c
.Close
.Open b & ";USER ID=" & u & ";PASSWORD=" & p
End With

Set r = c.OpenSchema( _
adSchemaTables, Array(Empty, Empty, Empty, "Table"))

' import the SQL tables (as JET)
With r
Do While Not .EOF
If Left(!TABLE_NAME, 2) <> "dt" Then _
DoCmd.TransferDatabase acImport, "Microsoft Access", _
ADPFile, acTable, !TABLE_NAME, !TABLE_NAME, False
.MoveNext
Loop
.Close
End With

' set persist security information
' in the adp file to false
SecurityInformation "FALSE"

' copy the tables to the BackUp Device
SaveAsText 6, "", Backupfile

DoCmd.Close acForm, Me.Name

End Sub

Private Sub Form_Close()
Application.Quit
End Sub

Private Sub Form_Open(Cancel As Integer)
Me.Visible = False
BackupSQLTablesAsJET
End Sub

Private Sub SecurityInformation(ByVal vPERSIST As String)
Dim a As Access.Application
Set a = New Access.Application
With a
.OpenAccessProject ADPFile
With .CurrentProject
If .IsConnected Then .CloseConnection
.OpenConnection Replace(b, "FALSE", vPERSIST), u, p
End With
.Quit
End With
End Sub

Jan 4 '07 #17

Hi again! 8)

le*********@natpro.com wrote:
the final report is based on a query of many tables (consisting of both
sql server and access)
with user-selected filtering
SO what? My current app against an existing Oracle application writes
SQL that is more than 100K characters long (like your sql server
app, it is a very bad design), which of course is broken into 1 to 4
separate views. There are 40+ tables involved. There are more than
1000 controls on the main user criteria selection form (broken into
several sub forms, of course).

Yes, the app took me several years to write, but it can be done and is a
lot more effective than the method you've chosen (which is similar to
what I used to do, BTW). In my case, I had other things to run and do
plus the app I took so long to do is generic, all singing, all dancing
and totally comprehensive (with respect to the business of the main
app). I'm sure you could do the same sort of thing in a reasonable
amount of time with specific function/analysis in mind.
and the current sql-server app does not allowing for indexing on its
tables
Then IMO someone needs to be kicked very hard in the gonads and pushed
out the airplane door. That's ridiculous and there's no excuse for it.
If you can't make a case to force the developer to properly index the
main SQL Server app, then 1) the app is raw sewage from someone who
doesn't have a frakking clue, and 2) the organization involved doesn't
know its head from a hole in the ground.

You're welcome to show the developer/organization head(s) this reaction
if you wish.
Since the posting, I changed the insert query to insert just one field
(6-char text) and it's still slow

And if I open the temp MDB and look at the contents of the table, it's
empty until the 20-odd minutes are up... so it's not a data transfer /
dribble issue...
Like Albert has mentioned, the persistent connection could help - I used
this a lot when I used to do what you're doing. My data upload which
included some manipulation from about 40 tables and several hundred
thousand records would still take up to 45 minutes total. However, that
was from a time period in my career when I wasn't comfortable as I am
now with going into the (garbage) Oracle app I run and adding my own
indexes.

I also wonder if a proper primary key definition might help the single
table uploading you're doing?
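If it does, a sketch of adding one from code might look like this. Note the assumption (and it is only a guess) that slsNbr uniquely identifies rows in the temp table; if it doesn't, an autonumber surrogate key would be needed instead:

```
' Hedged sketch: add a primary key to the local temp table via DDL.
' Assumes slsNbr is unique in tblLineBooking, which may not hold here.
CurrentDb.Execute "ALTER TABLE tblLineBooking " & _
                  "ADD CONSTRAINT pkLineBooking PRIMARY KEY (slsNbr);", _
                  dbFailOnError
```

Whether a key helps or hurts the bulk insert itself would need testing; it mainly pays off in the downstream report queries.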
--
Tim http://www.ucs.mun.ca/~tmarshal/
^o<
/#) "Burp-beep, burp-beep, burp-beep?" - Quaker Jake
/^^ "Whatcha doin?" - Ditto "TIM-MAY!!" - Me
Jan 4 '07 #18

Tim Marshall <TI****@PurplePandaChasers.Moertherium> wrote in
news:enisbs$6****@coranto.ucs.mun.ca:
My current app against an existing Oracle application writes
SQL that is more than 100K characters long (like your sql server
app, it is a very bad design), which of course is broken into 1 to 4
separate views. There are 40+ tables involved. There are more than
1000 controls on the main user criteria selection form (broken into
several sub forms, of course).
Does it give out any destructive electro-magnetic waves? Are there any
unusual radioactive emissions? Does the building where it lives glow in the
dark?
I am planning to go to Nfld for my summer holiday but a guy has to be
careful about these things!

--
lyle fairfield
Jan 4 '07 #19

Lyle Fairfield wrote:
Does it give out any destructive electro-magnetic waves? Are there any
unusual radioactive emissions?
Why do you think Mr Martin eventually caved in and actually fulfilled
his promise to Newfoundland last year before he was rewarded for
publicly stabbing his leader in the back? Our Williams had a little
help....
Does the building where it lives glow in the
dark?
No, but my computer is under my desk and I have a perpetual grin...
I am planning to go to Nfld for my summer holiday but a guy has to be
careful about these things!
You'll have to drop by the university then and experience it for yourself!
--
Tim http://www.ucs.mun.ca/~tmarshal/
^o<
/#) "Burp-beep, burp-beep, burp-beep?" - Quaker Jake
/^^ "Whatcha doin?" - Ditto "TIM-MAY!!" - Me
Jan 4 '07 #20

correct, I have a global recordset variable that opens a recordset on a
table found in the temp.mdb

Albert D. Kallal wrote:
<le*********@natpro.comwrote in message
news:11*********************@i12g2000cwa.googlegro ups.com...
I have a persistent connection

by persistent connection, I mean that your front end opens a table in the
back end "mdb" file BEFORE you run those queries.

That means the front end opens a table in that back end "temp" mdb you
talked about. You keep this table open, and then run your queries...

the issue is that when ms-access opens and closes the connection, the
overhead of creating the ldb locking file for that temp mdb can slow
things down. Keeping a persistent connection will prevent this problem
(and, thus, often removes the inconsistent performance issues - the
persistent connection will DO NOTHING for sql server).
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Jan 5 '07 #21

yes, this Insert is against ms-sql server 7

your idea about a 'make-table' query into the temp.mdb is a good one...
thx

the terminal server is not Citrix, it's older than 4 years, windows
2000

the terminal server, sql server, and the ERP application will soon be
replaced with a Citrix server (with the help of gurus)

I created a pass-through query that is the equivalent of what is
currently being done in ms-access to prepare the data for reporting
purposes, and just running the query sometimes, not always, crashes with
some network connection errors. The query is run on the terminal server,
which is connected via a 1GB link to the sql-server; the 2 servers are
physically less than 12 ft apart. I'm wondering if I have a network
hardware issue?

Chuck Grimsby wrote:
Is this "temp" table deleted before each time you fill it up? If so,
consider creating a *new* table each time you fill it up. You may
want to do this in a second "temp data" mdb. Feel free to create any
indexes you want, the jet engine will create them as the data goes
into the table.

Remember that both the front-end (the mdb that is running the query)
and the data mdb need to be on the computer's local drive. (Having
them both on the same drive is "better".)

Is this a MS SQL Server you're pulling the data from? If not, we need
to know that. I know it sounds silly, but different SQL Servers
sometimes require different techniques.

As Larry suggested, if you can create a view, do so. Views can
rapidly speed up the retrieval process. (If this is a MS SQL Server,
running a view repeatedly will cause MS SQL Server to sort of create
its own indexes, regardless of what's created/allowed, which also
speeds things up.)

Also, make sure you have plenty of free space on the local computer
this is running on. Queries against a (non-MS) SQL Database can eat
memory like crazy.

You mentioned Terminal Server.... That itself can cause problems if
it's not optimized correctly. Especially when it's a Citrix set-up.
There are times when it's *very* intelligent to hire a Citrix Guru to
set the Terminal Server up properly. They're expensive in the short
run, but the employee time saved over a few days can offset it rather
rapidly. (No, I'm not a Citrix Guru, but I've seen the results of
their work, and it's danged impressive!)
On 2 Jan 2007 14:52:00 -0800, le*********@natpro.com wrote:
I've got an access97 reporting mdb that pulls data (77,000 rows) from a
sql server table into a local table to run reports

if the local table is part of the reporting MDB, the insert statement
(16 fields) takes less than 30secs, but because of db-bloat, I moved
the local table to a 2nd MDB

and per postings, this 2nd MDB is copied into a folder and linked as a
'temp' MDB every time I run my reporting mdb

also, per postings, my main form opens a recordset to a table in the
'temp' MDB to prevent continuous LDB activity

but the insert statement now takes 20 mins....

both the reporting MDB and the temp MDB are on the same terminal server
(on different drives) and the sql server is connected to the terminal
server via a 1-GB network

updating to the 'temp' MDB is slow with / without any indexing on the
local table

the 'temp' MDB has been compacted, no difference

I created a new temp MDB and imported the tables, no difference

what am I missing ?


--
Drive C: Error. (A)bort (R)etry (S)mack The Darned Thing
Jan 5 '07 #22

This discussion thread is closed

Replies have been disabled for this discussion.