Bytes IT Community

Access XP / MySQL / MyODBC: Access' caching system?

Hi all,

While working on an Access UI to a MySQL database (which should be a
reasonable, low-cost, flexible interface to the DB, better than
web-based, much less costly than a full-fledged .NET app or so.... is
it?), I tried measuring the bandwidth consumed by the
Access/MyODBC/MySQL link, which turned out to be, er, quite high.

I fancied it would be interesting to look at the queries Access throws
at MySQL through the ODBC link, so I set up a query log for MySQL. It
now appears that Access tries to cache the records in the database, but
it does so in quite a weird way.

The db is accessed through a normal (one-record view) form, linked to a
query, linked to a "linked table". When I go to record #5 (of 5000) in
the form, a whole bunch of queries is executed (because of a subform,
but there also appears a bunch of lines in the log like "SET AUTOCOMMIT
= 0" every time I scroll... Pretty strange, this is a huge overhead).
What struck me most is that Access seems to fetch records 41 to 50.
When I press PgDn again, scrolling to record #6, Access fetches 51 to
60, and it fetches 61 to 70 for record #7 on the screen, and so on.
Scrolling back to #5 makes Access fetch 41 to 50 again.

Is there some ridiculously failing caching system at work here? Is
there a way to disable it? I read some things about CacheSize; how
should I use it in a form with an updatable query?

Thanks for your time; this seems to be the final big problem until we
can start working seriously on this system, so I'd be really glad if I
could fix this :)

Yours sincerely,
Onno

Aug 7 '06 #1
6 Replies


> Is there some ridiculously failing caching system at work here? Is
> there a way to disable it? I read some things about CacheSize; how
> should I use it in a form with an updatable query?

The amount of bandwidth consumed is not really any different from that of
using C++, assembler, or your favourite web system to hit the server
database.

SQL Server is indeed a high-performance system, and also a system that can
scale to many, many users.

If you write your application in C++, or VB, or in your case with MS Access,
in GENERAL the performance of all of these tools will BE THE SAME.

In other words... SQL Server is rather nice, and is a standard system used in
the IT industry.

However, before you convert... how well does your application run now?

We often see posts here that an application is too slow with one user. If the
application is too slow with one user... then what can one expect when they
try to run 10 users? That is now 10 times the requirements.

The other issue is how well is the database setup?

Further..how well are the forms designed?

How well does the application work with 5 users... and then when you jump to
10 users... how much of a slowdown do you notice?

A few things:

Having a table with 75k records is quite small. Let's assume you have 12
users. With just a 100% file-based system (JET), and no SQL server, the
performance of that system should really have screamed.

Before Microsoft started "really" selling SQL Server, they rated JET as
easily able to handle 50 users. We have credible reports here of people
running 100 users; however, in those cases everything must be
"perfect".

I have some applications out there with 50 or 60 HIGHLY related tables.
With 5 to 10 users on a network, response time is instant. I don't think any
form load takes more than one second. Many of those 60+ tables are highly
relational... and in the 50 to 75k record range.

So, with my 5 users..I see no reason why I can't scale to 15 users with
such small tables in the 75,000 record range.

If the application did not perform with such small tables of only 75k
records... then upsizing to SQL Server will do absolutely nothing to fix
performance issues. In fact, in the SQL Server newsgroups you see weekly
posts by people who find that upgrading to SQL actually slowed things down.
I have even seen some very cool numbers showing that some queries were actually
MORE EFFICIENT in terms of network use with JET than with SQL Server.

My point here is that technology will NOT solve performance problems.
However, good designs that make careful use of limited bandwidth resources
are the key here. So, if the application was not written with good
performance in mind... then you are kind of stuck with a poor design!

I mean, when using a JET file share, if you grab an invoice from the 75k record
table... only the one record is transferred down the network with a file share
(and SQL Server will also only transfer one record). So, at this point, you
really will NOT notice any performance difference by upgrading to SQL
Server. There is no magic here.

SQL Server is a more robust and more scalable product than JET. And security,
backup, and a host of other reasons make SQL Server a good choice.
However, SQL Server will NOT solve a performance problem when dealing
with such small tables as 75k records.

Of course, when efforts are made to utilize sql server, then
significant advances in performance can be realized.

I will give a few tips...these apply when using ms-access as a file
share (without a server), or even odbc to sql server:

** Ask the user what they need before you load a form!

The above is so simple, but so often I see the above concept ignored.
For example, when you walk up to an instant teller machine, does it
download every account number and THEN ASK YOU what you want to do? In
Access, it is downright silly to open up a form attached to a table WITHOUT
FIRST asking the user what they want! So, if it is a customer invoice, get
the invoice number, and then load up the form with the ONE record (how can
one record be slow!). When done editing the record... the form is closed, and
you are back at the prompt, ready to do battle with the next customer. You
can read up on how this "flow" of a good user interface works here (and this
applies to both JET and SQL Server applications):

http://www.members.shaw.ca/AlbertKal...rch/index.html

My only point here is to restrict the form to only the ONE record the user
needs. Of course, sub-forms and detail records don't apply to this rule,
but I am always dismayed how often a developer builds a nice form, attaches
it to a large table, opens it... and then throws this form attached to
some huge table at the users and tells them to go have at it and have fun. Don't
we have any kind of concern for those poor users? Often, the user will not
even know how to search for something! (So, prompting and asking the user
also makes a HUGE leap forward in usability. And the big bonus is reduced
network traffic too!... Gosh... better, and faster, and less network
traffic... what more do we want!)

In other words, that web-based application, or that C++ program, NEVER EVER
attached a huge recordset to a HUGE table. You ALWAYS restricted that form, or
web page, to the ONE record. (So what if a few extra lines of SQL and a few
extra SET commands get sent to the server... these are usually INSIGNIFICANT
in terms of bandwidth.) So, simply use a WHERE clause to restrict the form to
ONE record.
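The same idea can be sketched outside Access, too. Here's a minimal stand-alone illustration, with Python's built-in sqlite3 standing in for the real server and made-up table and field names. The point is only that the WHERE clause does the restricting before any data travels:

```python
import sqlite3

# A stand-in "Invoices" table (hypothetical names, 5,000 rows).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Invoices (InvoiceID INTEGER PRIMARY KEY, CustomerName TEXT)"
)
con.executemany(
    "INSERT INTO Invoices (InvoiceID, CustomerName) VALUES (?, ?)",
    [(i, f"Customer {i}") for i in range(1, 5001)],
)

def load_invoice(invoice_id):
    # Ask the user FIRST, then fetch exactly ONE record -- the
    # equivalent of opening a form restricted by a WHERE clause.
    return con.execute(
        "SELECT InvoiceID, CustomerName FROM Invoices WHERE InvoiceID = ?",
        (invoice_id,),
    ).fetchone()

print(load_invoice(42))  # one row crosses the wire, not 5,000
```

In Access itself the same restriction is what the WhereCondition argument of OpenForm accomplishes.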

** Don't use queries that require more than one linked table

(This ONLY applies to ODBC to SQL Server... you CAN and are FREE to do this
with an MDB JET file share, and also with ADP projects to SQL Server.)

When you use
ODBC, one table could be on the corporate server, and the other ODBC link
might be a FoxPro table three computers to the left of you. As a result, JET
has a really difficult time joining these tables together... JET cannot
assume that the two tables are on the same box, and thus cannot have the "box"
join the tables. Thus, while JET does its best... these types of joins can
often be really slow. The simple solution in these cases is to change the
query to a view... and link to that. This is the least amount of work, and
means the joins occur on the server side. This also applies to combo boxes.
Most combo boxes have SQL embedded in them. That SQL has to be processed, and
then thrown at a linked ODBC table. This is a bit sluggish. (A form can have
maybe one or two combos... but after that... it will start to load slowly.)
So, remove the SQL from the combo box, build a view... and link the combo box
directly to that view (JUST USE the view name... the sort, and any SQL, need
to be in the view). The result is quite good combo box load performance. (And
again, not very much work. There are other approaches that can speed this up
even more... but we have to balance the benefits VS. the amount of work and
coding. I don't think one should re-code all combo boxes as a callback with a
pass-through recordset... but that can be a solution also.)
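To make the "put the SQL in a view" idea concrete, here is a small sketch. Again Python's built-in sqlite3 stands in for the server, and the table, view, and column names are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, Active INTEGER)"
)
con.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?)",
    [(1, "Zeta Corp", 1), (2, "Acme Ltd", 1), (3, "Old Co", 0)],
)

# The filter AND the sort live in the view, so they run server-side...
con.execute("""
    CREATE VIEW vActiveCustomers AS
    SELECT CustomerID, Name FROM Customers
    WHERE Active = 1
    ORDER BY Name
""")

# ...and the combo box's row source becomes just the view name.
rows = con.execute("SELECT CustomerID, Name FROM vActiveCustomers").fetchall()
print(rows)
```

The client-side query is now trivial; all the filtering and sorting logic was pushed behind the view.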

** Of course, if you do have SQL with more than one table... then pass-through
is the best if using ODBC. (Again... this does NOT apply to an MDB JET file
share.) These pass-through queries are raw SQL sent to the server. They
are not usually updatable... but keep their use in mind.
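A pass-through query in this sense is just raw SQL handed straight to the server, so the join or aggregation happens there and only the (read-only) result rows come back. A sketch, with sqlite3 standing in for the server and invented names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, Amount REAL);
    INSERT INTO Orders VALUES (1, 7, 10.0), (2, 7, 15.0), (3, 8, 99.0);
""")

# Pass-through: the server evaluates the whole statement and returns
# only the aggregated result rows -- no row-by-row work on the client.
rows = con.execute(
    "SELECT CustomerID, SUM(Amount) FROM Orders GROUP BY CustomerID"
).fetchall()
print(rows)
```

Compare that with the client fetching all order rows and summing them locally, which is what a multi-table linked-table query can degenerate into.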

** You can continue to use bound forms... but as mentioned... restrict the
form to the one record you need. You can safely open up a single invoice, and
even continue to use the "where" clause of OpenForm. Bound forms are way less
work than un-bound forms... and performance is generally just as good anyway
when done right.

** Large loading of combo boxes. A combo box is good for about 100
entries. After that... you are torturing the user (what... they've got to look
through hundreds of entries?). So, keep things like combo boxes down
to a minimum size. This is both faster... and MORE importantly it is
kinder to your users.

After all, at the end of the day... what we really want is to make
things easy for the users... and treat them well. It seems that
treating the users well and reducing the bandwidth
(amount of data) go hand in hand. So, better applications
treat the users well... and run faster! (This is good news!)

Most of the bandwidth problems can be solved with decent designs. The reason
why so many MS Access applications
perform poorly with SQL Server or MySQL is that the designer does things that
NONE of the other developers would consider. So, VB or a web page doesn't
necessarily perform better with less bandwidth, but then again that web page,
or WinForm, NEVER simply attaches a WHOLE TABLE to a form... and then thinks
one is done for the day....
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Aug 7 '06 #2

"on****@gmail.com" <on****@gmail.com> wrote in
news:11*********************@75g2000cwc.googlegroups.com:
> The db is accessed through a normal (one-record view) form, linked
> to a query, linked to a "linked table". When I go to record #5 (of
> 5000) in the form, a whole bunch of queries is executed (because
> of a subform, but there also appears a bunch of lines in the log
> like "SET AUTOCOMMIT = 0" every time I scroll... Pretty strange,
> this is a huge overhead).
> What struck me most is that Access seems to fetch records 41 to
> 50. When I press PgDn again, scrolling to record #6, Access
> fetches 51 to 60, and it fetches 61 to 70 for record #7 on the
> screen, and so on. Scrolling back to #5 makes Access fetch 41 to
> 50 again.
Why are you retrieving more than one record at a time? A proper
client/server design would *not* be bound to the whole table or a
large recordset, but to only one record (or a small subset of
records matching selected criteria). It's the fact that you're not
filtering anything that causes Jet to behave in an inefficient way.

That is, it's YOUR FAULT.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Aug 8 '06 #3

On 7 Aug 2006 14:01:05 -0700, "on****@gmail.com" <on****@gmail.com>
wrote:

Give MSFT and the developer of the ODBC driver some credit. Just
because YOU don't understand the reason for so many SET AUTOCOMMIT
= 0 calls doesn't mean there isn't one.
If you were to look at the driver operating in assembly, you might see
a lot of MOV AX,0 calls. Perhaps another manifestation of a
ridiculously failing program that surely you could optimize?

I recommend you stay at the level of good database design and good
client/server adherence, and let the low-level stuff take care of
itself.

-Tom.
Aug 8 '06 #4

Thank you all very much for your replies; they have been very helpful.
And also thanks for correctly giving me a small rap on the head;
sorry for being a bit too irritated here.

I can assure you that I tried quite a few things to optimize the
queries and forms, although indeed I have to admit that never binding
one's form to a full, unfiltered 5k table is a reasonable guideline ;-)

Still, I'm puzzled by the way that Access retrieves its records (#11-20
when viewing #2, #21-30 when viewing #3), but apparently there's
nothing to do about that. Well, that's a pity (it means you retrieve
ten times the amount of data you need), but not a disaster, I suppose.

Anyway, I will certainly continue reading these articles about db
access and UI optimization!

Yours sincerely,
Onno

Aug 8 '06 #5

On 8 Aug 2006 04:25:17 -0700, "on****@gmail.com" <on****@gmail.com>
wrote:

Part of what you are seeing may be Access either lazily loading
additional records in anticipation of the user scrolling to the next
record, or Access efficiently loading a page of data rather than
fetching rows one at a time, which is slower.

-Tom.
Aug 8 '06 #6

"on****@gmail.com" <on****@gmail.com> wrote in
news:11*********************@b28g2000cwb.googlegroups.com:
> Still, I'm puzzled by the way that Access retrieves its records
> (#11-20 when viewing #2, #21-30 when viewing #3), but apparently
> there's nothing to do about that. Well, that's a pity (it means
> you retrieve ten times the amount of data you need), but not a
> disaster, I suppose.
Well, think for a change -- what environment was Access/Jet designed
for? It was designed for a LAN environment and a file-server
environment. When you bind to a whole table, it pre-fetches data on
the assumption that when scrolling you're going to want to see the
next set of records. This is much more sensible than retrieving the
whole data set, and a reasonable assumption in a form that is bound
to an entire table.
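The fetch pattern Onno logged is consistent with a simple fixed-size read-ahead window. As a toy model (the page size of 10 is taken straight from his log; the exact formula is only a guess from the three data points he reports):

```python
def fetch_window(record_no, page_size=10):
    # Record #5 triggered a fetch of rows 41-50, #6 -> 51-60, #7 -> 61-70:
    # one fresh page of page_size rows per cursor move, read well ahead
    # of the record actually on screen.
    start = (record_no - 1) * page_size + 1
    return (start, start + page_size - 1)

print(fetch_window(5))  # (41, 50), matching the logged behaviour
```

So each PgDn costs one page of rows, not the whole table -- wasteful against a server, but cheap in the file-share world Jet was built for.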

But that whole design is based on the assumption that there is no
server process on the other end to do the work of deciding what to
retrieve. Once you've got a server on the other end, you design your
forms differently.

And I suspect that what you'll find is that when you have a WHERE
clause, Jet is *very* smart in what it sends to the server, and will
be very efficient.

Understand the software you're using before you criticize its
design.

When you do that, it may become clearer that the problem is pilot
error and not a flaw in the software itself.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Aug 8 '06 #7
