Tony Toews Performance FAQ

http://www.granite.ab.ca/access/performancefaq.htm

I hope Tony doesn't mind my opening a discussion of some issues on
his performance FAQ page here in the newsgroup. This is not meant as
criticism, at all, as I am not alleging error. I'm just asking about
a couple of things to open up the discussion to see what people have
to say about them.

1. BeginTrans/CommitTrans to improve performance: has anyone ever
done this? The KB article explaining this is very old (describing
performance on a 486 CPU), and I wonder if it would ever be useful.

2. Wireless issues: isn't the real reason that wireless is
unreliable simply because it's such low bandwidth? 802.11b (the most
common standard) is 11Mbps max, and usually is closer to 1/2 or 1/3
of that in actual performance. That's substantially less than
10BaseT. And because it's shared bandwidth (the radio spectrum
cannot be expanded beyond a certain point), the actual throughput
can drop as more overlapping networks are added within the same
radio range if running on the same channel. 802.11g, which is now
extremely cheap with the new router from Linksys, is 54Mbps, but it
uses the same 2.4GHz radio spectrum, so it ought to have the same
kind of problems, and since it shares channels with 802.11b, it's
going to be affected by 802.11b networks in the same radio range.
Basically, it seems to me that this point in the FAQ ought to
recommend read-only use, with writes only very occasionally, perhaps
with unbound forms (so the connections are open for a very short
period of time, and edits take a very short time). Basically,
wireless is more like a WAN than a LAN in regard to how Access is
affected by it.

3. I'm very interested in the hint about assigning recordsources
and rowsources at runtime. Why would this be such a big deal in A2K?
Is it a big deal in A97 as well? Is it maybe because of the double
OnCurrent problem? Has anyone experimented with this?

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #1
On Mon, 14 Jun 2004 22:40:37 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

I just have a general comment: performance issues are best
addressed by systematic testing. I have been interested in that
topic for a long time, but the few times I floated a trial balloon
on this subject, I got no takers.
Anecdotal evidence just isn't very convincing to me.

-Tom.

[original post quoted in full; snipped]
Nov 13 '05 #2
"David W. Fenton" <dX********@bway.net.invalid> wrote in message
news:Xn**********************************@24.168.128.78...
http://www.granite.ab.ca/access/performancefaq.htm
1. BeginTrans/CommitTrans to improve performance: has anyone ever
done this? The KB article explaining this is very old (describing
performance on a 486 CPU), and I wonder if it would ever be useful.
Actually, the above can really make a difference. You see, when you
wrap things in a transaction, the pending work has to be stored
somewhere. Guess what: that store is on your LOCAL hard drive. I can
write out 5000 records... and then process the data 10 times in a
loop. Each of those 10 loops will cause NO network traffic. So,
sure, common sense would dictate using LOCAL temp tables for this
kind of stuff, but in fact a lazy man's way to accomplish a temp
table is to wrap things in a transaction. Like all things, it would
be silly to suggest that wrapping everything in a transaction just
solves a performance problem. We all know that performance tips must
be taken with a grain of salt. However, transactions can
substantially reduce bandwidth requirements, and for temp table
processing, you can (often) use transactions in place of temp
tables.
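
Something along these lines (untested air code -- the table and
field names are just made up for illustration):

Sub BatchWriteInTransaction()

    Dim wrk As DAO.Workspace
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim i As Long

    Set wrk = DBEngine.Workspaces(0)
    Set db = CurrentDb()

    wrk.BeginTrans                 ' from here, writes buffer locally
    Set rs = db.OpenRecordset("tblWork", dbOpenDynaset)
    For i = 1 To 5000
        rs.AddNew
        rs!SomeValue = i
        rs.Update
    Next i
    rs.Close
    wrk.CommitTrans                ' one flush out to the back end

End Sub

Until the CommitTrans, those 5000 records live in the local temp
store, so you can loop over them and re-process them without
generating network traffic.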

2. Wireless issues: isn't the real reason that wireless is
unreliable simply because it's such low bandwidth? 802.11b
Actually, the problem is still dropouts. I think we've all been in
a car listening to FM radio. When you are coming to a stop at the
traffic lights, often the signal drops out. You let your car creep
forward a bit, and presto, the signal is now really good. It turns
out that some cars in Japan actually have two antennas (one on the
front of the car, and one on the back). This is called a diversity
receiver. By having two antennas you eliminate those "blind" spots.
They are caused by destructive interference from the same signal
reaching the receiver at different times: due to bounces off walls,
etc., the same signal arrives at the receiver out of sync (you can
pick up your high school physics book if you forgot what
destructive interference means for waves).

As a result, you will note that high quality wireless microphones
for music performers have receivers with two antennas (this
prevents the dropouts as the person moves while singing on stage).
You will also notice that wireless hubs have two antennas, and it
is for this exact same reason (to prevent dropouts... not that two
antennas pick up signals better!).

However, EVEN with two antennas (a diversity receiver), you still
get a LOT MORE interruptions on a wireless network than on a hard
wire.

So, you are correct that most of the problem with wireless is
speed. I am currently writing this post on my notebook, and I am
using a 54g link. I find that I can run and test split/linked
databases over this link with ease. However, in a lot of cases when
the phone rings, I get a dropout. Why? Because a lot of new
cordless phones operate on the same frequency band as the hub. So,
a stupid phone ring can cause the link to die. I now make sure I
set the Linksys box to the lowest channel, and my cordless phone
doesn't seem to bother it now. So, to me, the issue with wireless
is still the dropout issue.

If you move your notebook around while it is on, even things like
grounding yourself can change the signal strength.

Between cordless phones, moving around, and your neighbor
purchasing a wireless hub that starts up on the same frequency as
yours... the signal can be broken.

Simply put... wireless is subject to temporary breaks in the
connection, and MS Access doesn't like that. So, once again, the
warning about using wireless systems needs to be heeded. I also
have a fixed desktop computer here with a wireless card, and that
computer is NEVER moved. So, dropouts occur much less often than on
my notebook, but still nowhere near as rarely as on a Cat5 hard
cable. So, I think the warning on this issue is just fine, and as
ALWAYS there are some exceptions to the rule. A commercial high
quality wireless system on a new spread spectrum hub that is quite
immune to signal interference would be an example of an exception.
So, once again, some exceptions apply. I'm sure someone with FAR
more knowledge than me could write a few thousand pages on this
one...

3. I'm very interested about the hint about assigning recordsources
and rowsources at runtime. Why would this be such a big deal in A2K?
Or is it also a big deal in A97 as well? Is it maybe because of the
double oncurrent problem? Has anyone experimented with this?


Golly! Never even heard of the above... I better go and read that FAQ!
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
http://www.attcanada.net/~kallal.msn
Nov 13 '05 #3
1) Dunno about BeginTrans/CommitTrans. And the documentation of
implicit/explicit transactions is at best obscure.

2) I've used slow (64K) networks without any reliability
problems. TCP/IP is supposed to be self-correcting, but
my database stuff still failed when used with a flaky radio
network that dropped connections.

3) I assign the recordsource at runtime on some forms in A97. We
have other optimisations as well, but a specific problem was that
Access would requery dependent cbo's when we changed records on the
form. Since selecting a record was part of the form load process,
we could see the cbo's query and requery as the form loaded. Now we
assign the recordsources of all objects in order.
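
In outline it looks something like this (air code -- the control
and table names are invented, and the right order depends on your
own dependencies):

Private Sub Form_Load()
    ' Row sources first, so that setting the form's recordsource
    ' below doesn't trigger a query-and-requery of dependent cbo's.
    Me.cboCustomer.RowSource = "SELECT CustomerID, CustomerName " & _
        "FROM tblCustomers ORDER BY CustomerName"
    Me.cboStatus.RowSource = "SELECT StatusID, StatusName " & _
        "FROM tblStatus"
    ' The form's own recordsource last.
    Me.RecordSource = "SELECT * FROM tblOrders"
End Sub
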
Saved cbo rowsources are saved querydefs. For static data, saved
querydefs ran faster than ad-hoc SQL under Jet 3.x. For changing
data, the query stats can get out of date, and ad-hoc SQL may be
better. I haven't compared the speed of saved qdf's in Access 2K:
it may be that the load time is now greater than it used to be.

Obviously, when saving a form, the saved querydefs are created,
which can slow the process. They also take up space in the
database. Saving a form is slower in A2K than in A97.
[original post quoted in full; snipped]
Nov 13 '05 #4
Tom van Stiphout <to*****@no.spam.cox.net> wrote in
news:gq********************************@4ax.com:
I just have a general comment: performance issues are best
addressed by systematic testing. I have been interested in that
topic for a long time, but the few times I floated a trial balloon
on this subject, I got no takers.
Anecdotal evidence just isn't very convincing to me.


What do you mean by systematic testing? It sounds very interesting.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #5
"Albert D. Kallal" <Pl*******************@msn.com> wrote in
news:6Dxzc.757952$oR5.370355@pd7tw3no:
"David W. Fenton" <dX********@bway.net.invalid> wrote in message
news:Xn**********************************@24.168.1 28.78...
http://www.granite.ab.ca/access/performancefaq.htm

1. BeginTrans/CommitTrans to improve performance: has anyone ever
done this? The KB article explaining this is very old (describing
performance on a 486 CPU), and I wonder if it would ever be
useful.


Actually, the above can really make a difference. You see, when
you wrap a transaction, it has to be stored some where. Guess
what, that store is on your LOCAL hard drive. I can write out 5000
records..and then process the data 10 times in a loop. Each of
those 10 loops will cause NO network traffic. So, sure common
sense would dictate to use LOCAL temp tables for this kind of
stuff, but in fact a lazy man way to accomplish temp table is to
wrap things in a transaction. Like all things, it would be silly
to suggest that wrapping everything just solves a performance
problem. We all know that performance tips must be taken with a
grain of salt. However, transactions can substantially reduce
bandwidth requirements, and for temp table processing, you can
(often) use transactions in place of temp tables.


I use transactions when I want to do a bunch of adds/edits as a
group that must all succeed to be committed as a batch.

I can see using transactions as an alternative to temp tables, yes.
But I don't think it's advisable to use transactions merely as a
performance-improving technique. It sounds dangerous to me.
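
The all-or-nothing pattern I mean is the usual one. A rough sketch
(the table names and SQL are invented, and error handling is pared
down):

Sub PostBatch()
    On Error GoTo ErrHandler

    Dim wrk As DAO.Workspace
    Dim db As DAO.Database
    Dim fInTrans As Boolean

    Set wrk = DBEngine.Workspaces(0)
    Set db = CurrentDb()

    wrk.BeginTrans
    fInTrans = True
    ' Both statements must succeed, or neither is kept.
    db.Execute "UPDATE tblOrders SET Posted = True " & _
        "WHERE BatchID = 42", dbFailOnError
    db.Execute "INSERT INTO tblBatchLog (BatchID) VALUES (42)", _
        dbFailOnError
    wrk.CommitTrans
    fInTrans = False
    Exit Sub

ErrHandler:
    If fInTrans Then wrk.Rollback   ' undo the partial batch
    MsgBox "Batch failed and was rolled back: " & Err.Description
End Sub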

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #6
"David W. Fenton" wrote
What do you mean by systematic
testing? It sounds very interesting.


I haven't gone back and read the EULA, but testing and publication of
performance results is a violation of the terms of some products' EULAs. I
can't recall whether I've heard Microsoft mentioned in that regard or that
I've heard of the provision being enforced in court by any vendor, however,
and I've certainly seen a lot of published performance comparisons.

Nov 13 '05 #7
On Tue, 15 Jun 2004 22:12:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
The idea is simple: develop some real-world tests that most people
can agree on, and implement them on different versions of Access,
using different middleware (DAO, ADO, etc.), on reference hardware,
and settle the discussions on what is better.
Some tests already exist, such as TPC-C, and we might be able to
borrow from them or at least get some ideas on what tests would be
relevant (TPC-C, which measures transaction volumes, most likely
would not be).

We would have to circumvent language in some license agreements that
tries to prevent people from publishing performance data.

-Tom.
[quoted text snipped]

Nov 13 '05 #8
It is very difficult to get any kind of meaningful measurement of
some things, like "form load time". There just aren't any Access
hooks to hang the timer off, and most of the events are
asynchronous.

Even the most easily measurable actions (data access) are almost
meaningless in a strict sense, are contingent on OS version, are
largely data-dependent, and have also been shown to depend on patch
level in unpredictable and quirky ways.

In short, if you want to see whether your application runs better
when coded differently, the only useful approach is to try it and
see.

(david)
"Tom van Stiphout" <to*****@no.spam.cox.net> wrote in message
news:t1********************************@4ax.com...
On Tue, 15 Jun 2004 22:12:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
The idea is simple: develop some real-world tests that most people can
agree on, and implement them on different versions of Access, using
different middleware (DAO, ADO, etc), on reference hardware, and
settle the discussions on what is better.
Some tests already exists, such as TPC-C, and we might be able to
borrow from them or at least get some ideas on what tests would be
relevant (TPC-C which measures transaction volumes most likely would
not be).

We would have to circumvent language in some license agreements that
tries to prevent people from publishing performance data.

-Tom.
Tom van Stiphout <to*****@no.spam.cox.net> wrote in
news:gq********************************@4ax.com :
I justt have a general comment: performance issues can best be
addressed by systematic testing. I have been interested in that
topic for a long time, but the few times I floated a trial balloon
on this subject, I got no takers.
Anecdotal evidence just isn't very convincing to me.


What do you mean by systematic testing? It sounds very interesting.

Nov 13 '05 #9
On Wed, 16 Jun 2004 17:04:33 +1000, "david epsom dot com dot au"
<david@epsomdotcomdotau> wrote:

I think we can find real-world measurements that will provide
meaningful data. Of course the measurements have to be taken in a
controlled environment. I have some ideas for that.

Form load time can be measured by recording the time interval between
the Button_Click event that launches the form, and the Form_Current
event that fires after the first record has been loaded. Or any other
interval we could agree on.
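
For example (a quick untested sketch -- the form and control names
are placeholders, and the Static flag copes with Form_Current
firing more than once):

' In a standard module:
Public g_sngStart As Single

' Behind the launching form:
Private Sub cmdOpenOrders_Click()
    g_sngStart = Timer()
    DoCmd.OpenForm "frmOrders"
End Sub

' Behind frmOrders:
Private Sub Form_Current()
    Static fTimed As Boolean
    If Not fTimed Then
        fTimed = True
        Debug.Print "frmOrders load: " & _
            Format$(Timer() - g_sngStart, "0.00") & " sec"
    End If
End Sub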

My clients can't afford to pay for much experimentation. They expect
that a professional developer will use the best procedures for the
given circumstances. Performance stats will help the developer make
that determination.

-Tom.

[quoted text snipped]
Nov 13 '05 #10
On Wed, 16 Jun 2004 01:19:24 GMT, "Larry Linson"
<bo*****@localhost.not> wrote:

One idea is that these stats don't violate the license agreement
because they don't compare Access with competitors in a
non-reproducible fashion. Rather, they compare Access with itself.

I have other ideas as well, which I can share if some of us get
serious about this topic.

-Tom.

"David W. Fenton" wrote
What do you mean by systematic
testing? It sounds very interesting.


I haven't gone back and read the EULA, but testing and publication of
performance results is a violation of the terms of some products' EULAs. I
can't recall whether I've heard Microsoft mentioned in that regard or that
I've heard of the provision being enforced in court by any vendor, however,
and I've certainly seen a lot of published performance comparisons.


Nov 13 '05 #11
David W. Fenton wrote:
http://www.granite.ab.ca/access/performancefaq.htm

[intro snipped]

1. BeginTrans/CommitTrans to improve performance: has anyone ever
done this? The KB article explaining this is very old (describing
performance on a 486 CPU), and I wonder if it would ever be useful.
I haven't tested this for speed lately, but whenever I have,
there's usually been an improvement. I don't think performance is
the top reason for using transactions, though, but rather data
stability: the ability to roll back if one update goes pear-shaped
is invaluable, otherwise the data will be left in a half-cocked
state.
2. Wireless issues: isn't the real reason that wireless is
unreliable simply because it's such low bandwidth? [snip]
My home wireless network is 802.11g, 54Mb/s, but you're right about
only being able to use half that: if I copy a file from my laptop
(wireless) to my desktop (100Mb/s) and monitor bandwidth usage with
Task Manager, the desktop uses around 22-24%. I will take into
account what you say about sharing and will disable 802.11b on my
(very expensive DrayTek :) router and see if that makes any
difference to what I can get on the 802.11g. Probably nothing, as
I'll be the only one on it (unless one of my neighbors knows my WEP
key).

I don't think low bandwidth alone will give a reliability issue; I
used to run on ARCNET, which was about 2.5Mb/s. Slow, yes, but no
reliability issues.
3. I'm very interested about the hint about assigning recordsources
and rowsources at runtime. Why would this be such a big deal in A2K?
Or is it also a big deal in A97 as well? Is it maybe because of the
double oncurrent problem? Has anyone experimented with this?


I only do this if the recordsource may be different, but even then
rarely, as mostly a different recordset can be achieved with the
same query but different criteria. It's unlikely I'd use a
different source to look up on a combo box, as there'd be a
relationship there and it'd have to pull from the same related
table.

--
Error reading sig - A)bort R)etry I)nfluence with large hammer
Nov 13 '05 #12
Trevor Best <nospam@localhost> wrote in
news:40***********************@auth.uk.news.easynet.net:
David W. Fenton wrote:
http://www.granite.ab.ca/access/performancefaq.htm
[question 2 snipped]

My home wireless network is 802.11g, 54Mb/s, but you're right
about only being able to use half that [snip]


If I'm not mistaken, you can only get the full 54Mbps when you bind
the 3 available channels into one and disable 802.11b.
I don't think low bandwidth alone will give a reliability issue; I
used to run on ARCNET, which was about 2.5Mb/s. Slow, yes, but no
reliability issues.


Ah, but ARCNET had the advantage of fewer collisions than Ethernet, no?

But yes, you and Albert are right that connectivity reliability is
the worst risk. But you also do have the same problems with
bandwidth that people encounter with WANs.

[question 3 and Trevor's reply snipped]


Well, I do have some combo boxes and subforms where I assign the
rowsource/recordsource only on request, but I've only done this
when I already thought it was a heavyweight form, not as a matter
of course.
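
"Only on request" amounts to something like this (just a sketch --
the combo and table names are made up):

Private Sub cboProject_Enter()
    ' Assign the rowsource the first time the user enters the
    ' combo, instead of paying for the query at form load.
    If Len(Me.cboProject.RowSource) = 0 Then
        Me.cboProject.RowSource = "SELECT ProjectID, ProjectName " & _
            "FROM tblProjects ORDER BY ProjectName"
    End If
End Sub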

I suspect the performance improvement has something to do with the
double OnCurrent event in subforms. If you reduce that kind of
thing in the form's opening events, that could be significant
enough to be noticeable even when the form already loads OK without
it.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #13
I'm following up again to Trevor's message because Tom's response to
my question about his proposal for performance testing has just
scrolled off my news server (I was unable to post to it for 2 days
because of ISP problems).

Anyway, I agree with whoever it was who said that it's pretty hard
to create tests for Access, as the tasks are not uniform from
application to application.

I'm not sure exactly how you could test the issue of assigning
recordsources/rowsources only after a form is open, except by using
an existing application and existing forms. And then the results in
any individual case would be highly dependent on the specifics of
the particular forms and data sources, and the operating
environment. For any meaningful results, you'd really have to have
a bunch of people run the tests on a bunch of applications.

Now, I think that if Tom wrote some code to test this that I could
run in a test copy of one of my apps, it would do this (a sketch of
the timing part follows the list):

1. go through the forms and copy each one that has a recordsource
to an identical form

2. then edit that copy so that the recordsource and rowsources are
loaded in the OnLoad event, and save the form

3. then run a test suite where the opening of each original form is
timed

4. then time the opening of the duplicate, altered form

5. record the timings in a table

6. and last, print out a report with timings, percentages and
overall averages.
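
The timing half (steps 3 through 5) would be the easy part. Air
code only -- it assumes a local tblTimings table (FormName text,
Seconds double), needs A2K or later for CurrentProject.AllForms,
and since OpenForm returns before loading really finishes, the
numbers would only be approximate:

Sub TimeFormOpens()
    Dim ao As AccessObject
    Dim sngStart As Single
    Dim db As DAO.Database

    Set db = CurrentDb()
    For Each ao In CurrentProject.AllForms
        sngStart = Timer()
        DoCmd.OpenForm ao.Name
        DoEvents               ' let pending events run first
        db.Execute "INSERT INTO tblTimings (FormName, Seconds) " & _
            "VALUES ('" & ao.Name & "'," & _
            Str(Timer() - sngStart) & ")", dbFailOnError
        DoCmd.Close acForm, ao.Name
    Next ao
End Sub

The hard part is steps 1 and 2: programmatically copying each form
and rewriting it to defer its recordsource.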

I'd be thrilled to run it on a dozen of my apps.

But the amount of time it would take to write such an automated test
would be huge! Not as much time as it would take for dozens of
people to do it all manually, but still, a lot of time.

So, I just don't see how it would be valuable.

In any event, I think the main issue with performance of form
loading is *perception*, not actual usability. A perfect example of
this is the way Access displays the beginning of the recordset as
soon as it receives it and continues loading the rest in the
background. I think that this trick probably just makes it *feel*
faster, since the form itself pops up more quickly, but it's not
really going to be editable any faster, is my bet.

Also, I doubt there's any way to actually determine "form is now
ready to edit" in a programmatic fashion.

This will be the case with many such useful testing scenarios, I
fear.

As to the legal argument, I think that's bunk. MS has those terms
only for things like SQL Server. Also, since the tests would be
comparative within Access, there's no issue.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #14
David W. Fenton wrote:
Ah, but ARCNET had the advantage of fewer collisions than Ethernet, no?
Yes, best case practically zero, due to the star topology. Another
advantage of that was security: no network sniffers there. I've used
a packet sniffer on an Ethernet network (IIRC L0phtCrack or
something like that) as a test; it sat there all day and collected
everyone's login name and password on the network. Encrypted, of
course, as it was NT-based, but that wasn't too much of a problem,
and most users' passwords were pretty pathetic. I didn't even need
to be on the domain to do this. The point is that Ethernet is very
insecure in this respect, although I suspect the situation is
better nowadays with intelligent switches instead of hubs. I don't
think you could do that on ARCNET (or Token Ring, for that matter).
I suspect the performance improvement has something to do with the
double OnCurrent event in subforms. If you reduce that kind of
thing in the form's opening events, that could be significant
enough to be noticeable even when the form already loads OK without
it.


At one point my supplier form was so bad, with the number of
subforms, that I put them on a tab control and only set the
SourceObject when the relevant tab was clicked. This improved
loading of the main form no end; very good if all you wanted to do
was go in and look up a telephone number, as it just didn't ask for
all the other info, e.g. contacts (unless you wanted a specific
contact's number), goods and services, accreditation, purchase
agreements, etc.
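
The pattern was roughly this (reconstructed from memory as air
code; the page order and control names are invented):

Private Sub tabSupplier_Change()
    ' Load each subform the first time its tab is clicked; until
    ' then the subform controls have no SourceObject at all.
    Select Case Me.tabSupplier.Value
        Case 1   ' Contacts page
            If Me.subContacts.SourceObject = "" Then
                Me.subContacts.SourceObject = "fsubContacts"
            End If
        Case 2   ' Purchase agreements page
            If Me.subAgreements.SourceObject = "" Then
                Me.subAgreements.SourceObject = "fsubAgreements"
            End If
    End Select
End Sub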

I don't do this any more though, as performance tends to be much
better, maybe due to later versions of Access, or to using SQL
Server as the back end, or that we have better kit nowadays (back
then it would have been Pentium 100s, 10BaseT or 10Base2 (coax)
Ethernet, etc.).
--
Error reading sig - A)bort R)etry I)nfluence with large hammer
Nov 13 '05 #15
Trevor Best <nospam@localhost> wrote in
news:40***********************@auth.uk.news.easynet.net:
At one point my supplier form was so bad, with the number of
subforms, that I put them on a tab control and only set the
SourceObject when the relevant tab was clicked. This improved
loading of the main form no end [snip]
Well, of course, if you're cutting out 50%+ of the recordsources
from the initial load.

But I thought this advantage was with forms where there weren't
multiple non-visible subforms and combo boxes. That's the true test,
because it's only in those cases that one would be loading all the
recordsources/rowsources at once.

And, of course, the base recordsource of the main form.
I don't do this any more though, as performance tends to be much
better [snip]


I still do this kind of thing if I have a heavyweight form, but I
have fewer of those these days in the apps I'm working on.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #16
"David W. Fenton" <dX********@bway.net.invalid> wrote:
I hope Tony doesn't mind my opening a discussion of some issues on
his performance FAQ page here in the newsgroup.


Not at all. I've been kinda busy these last few days so haven't been able to join
the discussion on this topic yet.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #17
