Bytes | Software Development & Data Engineering Community

Performance across a LAN

I know that it is best to limit the amount of data you bring across the
LAN, from the back-end (BE) database to the front end.

If I have an app with 10000 records in the BE database:

Is it equally efficient to use a query (based on the table) and a
selection clause in the form,

or

use the table as the record source and a selection clause in the form?

Or should the selection criteria be in the record source definition?

Or something else?

Bob
Feb 15 '07 #1
On Feb 15, 1:51 pm, Bob Alston <bobalst...@yahoo.com> wrote:
> I know that it is best to limit the amount of data you bring across the
> LAN, from BE database to front end.
> [snip]

I prefer to use a SQL statement as the record source. I find them
easier to manipulate than form-based filters. I would also think the
SQL would be faster than having Access process filters and other
clauses too. Instead, the SQL just gets passed to the Jet engine.
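A quick illustration of the difference being debated here: the win comes from letting the engine restrict rows before they cross the wire, rather than fetching everything and filtering afterwards. The thread is about Access/Jet, but the idea can be sketched with Python's built-in sqlite3 module (the table and all names are invented for the example):

```python
import sqlite3

# In-memory stand-in for the 10,000-row back-end table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT)")
conn.executemany(
    "INSERT INTO customers (id, last_name) VALUES (?, ?)",
    [(i, f"Name{i % 100}") for i in range(1, 10001)],
)

# "Filter in the form" analogue: pull every row, then discard most of them.
all_rows = conn.execute("SELECT id, last_name FROM customers").fetchall()
filtered_client_side = [r for r in all_rows if r[1] == "Name42"]

# "Selection in the record source" analogue: the engine restricts rows first.
filtered_in_sql = conn.execute(
    "SELECT id, last_name FROM customers WHERE last_name = ?", ("Name42",)
).fetchall()

assert filtered_client_side == filtered_in_sql  # same rows, far less data moved
```

In the Access case, a RecordSource with its own WHERE clause roughly plays the role of the second query; filtering a form bound to a full table roughly plays the role of the first.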

Feb 15 '07 #2
storrboy wrote:
> On Feb 15, 1:51 pm, Bob Alston <bobalst...@yahoo.com> wrote:
> > [original question snipped]
>
> I prefer to use a SQL statement as the record source. I find them
> easier to manipulate than form-based filters.
> [snip]
Thanks. I was starting to think that way, but then I found a Microsoft
Knowledge Base entry that says:

"Save the SQL statement as a query

If the RecordSource property for a form or report is set to an SQL
statement, save the SQL statement as a query and then set the
RecordSource property to the name of the query."

http://support.microsoft.com/kb/209126/

Bob
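For what it's worth, the KB's "save the SQL statement as a query" advice is close to what a view does in other SQL engines: a named, saved definition instead of an ad-hoc SQL string. A rough sqlite3 analogy (an Access saved query is not literally a view, and all names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "West"), (2, "East"), (3, "West")])

# Ad-hoc SQL string (RecordSource set to an SQL statement):
adhoc = conn.execute(
    "SELECT id FROM customers WHERE region = 'West'").fetchall()

# Named, saved definition (RecordSource set to a query name):
conn.execute("CREATE VIEW qryWestCustomers AS "
             "SELECT id FROM customers WHERE region = 'West'")
saved = conn.execute("SELECT id FROM qryWestCustomers").fetchall()

assert adhoc == saved  # same rows; the saved form is named and reusable
```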
Feb 15 '07 #3
Well, the key concept here is to restrict records.

It makes no sense to open a form, and THEN filter it...

So, you can well base a form on the table or on a query, and then use a
"where" clause to restrict records. A query on the table does not help
by itself in this case.

So, most important is to simply "ask" the user for the record they need to
work on, and then load the form to that one record. This approach tends to
work the best...
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
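Kallal's "load the form to that one record" pattern amounts to a WHERE clause on the primary key. The same idea, sketched outside Access with Python's sqlite3 module (table and names invented for illustration):

```python
import sqlite3

# Stand-in for the back-end table with 10,000 records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"Customer{i}") for i in range(1, 10001)])

def open_form_on(order_id):
    """Analogue of opening the form restricted to one record:
    only the requested row ever leaves the back end."""
    return conn.execute(
        "SELECT order_id, customer FROM orders WHERE order_id = ?",
        (order_id,),
    ).fetchone()

row = open_form_on(42)  # -> (42, 'Customer42'); the other 9,999 rows stay put
```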

Feb 15 '07 #4
Albert D. Kallal wrote:
> Well, the key concept here is to restrict records.
> [snip]
> So, most important is to simply "ask" a user for the record they need to
> work on, and then load the form to that one record.

More info on what I do. I open a record selection form with a
selection key equal to a record key that does not exist. The record key
is the primary key, so the form opens with no data.

I let the user enter the selection criteria for the user or users
desired. Options are last name, first name, and several others. From
those I dynamically create a where clause that I put in the form spec
and then refresh the query.

This seems to work well.
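Building the where clause only from the criteria the user actually filled in can be sketched like this (Python/sqlite3 stand-in for the Access form; all names invented). Passing the values as parameters, rather than pasting them into the SQL string, also avoids quoting bugs with names like O'Brien:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people "
             "(id INTEGER PRIMARY KEY, last_name TEXT, first_name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "Alston", "Bob"), (2, "Smith", "Ann"), (3, "Alston", "Sue")])

def build_where(criteria):
    """Turn only the filled-in criteria into a WHERE clause plus parameters."""
    clauses, params = [], []
    for column, value in criteria.items():
        if value:  # skip criteria the user left blank
            # Column names must come from your own code, never from user input.
            clauses.append(f"{column} = ?")
            params.append(value)
    where = " AND ".join(clauses) if clauses else "1=1"  # no criteria -> no filter
    return where, params

where, params = build_where({"last_name": "Alston", "first_name": ""})
rows = conn.execute(f"SELECT id FROM people WHERE {where}", params).fetchall()
# rows -> [(1,), (3,)]
```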

Previously I was using a query as the data source. I tried changing it
to the table itself and added a sorting clause in the form; that seems
slightly faster on my PC, but it is hard to tell because I am not testing
it across the network.

My selection criteria are indexed, so there should be minimal reading of
the records just to decide whether each one is selected or not.
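Indexed criteria matter because the engine can seek on the index instead of scanning every record. In sqlite you can confirm this with EXPLAIN QUERY PLAN; Jet has no direct equivalent, so this is only an analogy (names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, last_name TEXT)")
conn.execute("CREATE INDEX idx_last_name ON people (last_name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM people WHERE last_name = ?",
    ("Alston",),
).fetchall()
detail = plan[0][-1]
# The plan reports a SEARCH using idx_last_name rather than a full-table
# SCAN, i.e. the engine seeks on the index instead of reading every record.
```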

Anything I should be doing differently?

P.S. I did read again some of the performance material in this newsgroup.

I had already turned off Name AutoCorrect.

I still need to set the subdatasheet property from Auto to None.

I checked, and this query does not include a user-defined function (which
can make it run 5x slower).

Thanks

Bob
Feb 15 '07 #5
Bri
Bob Alston wrote:
> storrboy wrote:
> > I prefer to use a SQL statement as the record source.
> > [snip]
>
> Thanks. I was starting to think that way, but then I found a Microsoft
> Knowledge Base entry that says:
>
> "Save the SQL statement as a query
>
> If the RecordSource property for a form or report is set to an SQL
> statement, save the SQL statement as a query and then set the
> RecordSource property to the name of the query."
>
> http://support.microsoft.com/kb/209126/
Yes, that is more efficient, because the query is compiled and optimized
when it is saved. However, if you start having lots of different saved
queries it can get confusing later on. Also, if you need variations on
the query (i.e. filtering data, applying criteria on the fly), then you
would either have to have a saved query for every possible filter value,
use a parameter query (which isn't optimized, since the value changes, so
you lose the advantage anyway), or filter the form. All of these cancel
out the benefit of the optimization. Also, on modern equipment, the
overhead of optimizing an SQL string before it executes is so small you
won't notice it.

Bottom line: use whatever method makes the coding easier to write and
understand, as the execution time saved by pre-optimizing a saved query is
extremely marginal. The MS KB articles include it because it used to be
significant on older PCs and it does give a small boost to speed, but the
other points that article makes about improving efficiency are a lot more
important.

--
Bri
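Bri's "parameter query" corresponds to a parameterized statement in most engines: one query text reused with different values, instead of one saved query per filter value. A sqlite3 sketch (names invented); the sqlite3 module caches the compiled statement on the connection and reuses it for each new value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, last_name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(1, "Alston"), (2, "Smith"), (3, "Alston")])

# One parameterized query text replaces a whole family of saved queries.
QUERY = "SELECT id FROM people WHERE last_name = ?"

alstons = conn.execute(QUERY, ("Alston",)).fetchall()  # -> [(1,), (3,)]
smiths = conn.execute(QUERY, ("Smith",)).fetchall()    # -> [(2,)]
```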
Feb 16 '07 #6

This thread has been closed and replies have been disabled. Please start a new discussion.
