
Local tables or linked local database?

I have a split front end/back end system. However, I create a number of
local tables to carry out certain operations. There is a tendency for
the front end to bloat, so I have set 'compact on close'.

I think that I have read in some threads (although I cannot find them
now) that others place such tables in a local, linked database. I
could do this, but I am interested to know what the advantages would be.
And the disadvantages, if any.

Any observations on the relative merits of local tables vs a local
database would be welcome.

Jim

Oct 25 '06 #1
10 Replies


I think the advantages are that you don't need the compact on close, and
really, temp junk data that you plan to throw out is better placed
elsewhere.

The idea here is that a split system means that your "mde" is your
executable file that you distribute. It contains code. Presumably, you could
issue an update, and those temp files would NOT be overwritten.

There is some sample code at Tony's site that does create a temp mdb file,
so, obviously, some of us do view the extra effort of creating a temp mdb
file as an advantage. (And you can simply delete the file and not worry about
bloat after you've done so.)

I don't think it is a huge deal, nor a hard and fast rule, but I would as a
rule use a temp external mdb file for that working type stuff.
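A minimal sketch of that approach (the file path, table name, and procedure names below are illustrative assumptions, not taken from Tony's sample code):

```vba
' Create a scratch MDB next to the front end, link a working table from it,
' and delete the whole file when finished -- no compact on close needed.
' Assumes a DAO reference; all names here are hypothetical.
Public Sub MakeTempMdb()
    Dim strPath As String
    strPath = CurrentProject.Path & "\temp_work.mdb"

    ' Start clean: remove any leftover scratch file from a prior run.
    If Dir(strPath) <> "" Then Kill strPath

    ' Create the empty scratch database.
    Dim dbTemp As DAO.Database
    Set dbTemp = DBEngine.CreateDatabase(strPath, dbLangGeneral)

    ' Build the working table directly in the scratch file.
    dbTemp.Execute "CREATE TABLE tblWork (ID LONG, Descr TEXT(50))"
    dbTemp.Close

    ' Link it into the front end so queries and forms can use it.
    DoCmd.TransferDatabase acLink, "Microsoft Access", strPath, _
        acTable, "tblWork", "tblWork"
End Sub

Public Sub DropTempMdb()
    ' Remove the link, then delete the file -- all the bloat goes with it.
    DoCmd.DeleteObject acTable, "tblWork"
    Kill CurrentProject.Path & "\temp_work.mdb"
End Sub
```

Because the front end only ever holds a link, redeploying a new mde never touches the scratch data, which is the point being made above.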
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Oct 25 '06 #2

Access does a miserable job of mitigating the amount of data that it
passes back and forth to tables when it runs any operation. You will
probably get better performance having the local tables in the database
itself. The only issue that I have ever had with compact on close is
one of Windows permissions - for example, if you have special
permissions set on the database file, those will be gone on the
compacted version.

Jim Devenish wrote:
<snip>
Oct 25 '06 #3

"Pachydermitis" <pr*******@gmail.comwrote in
news:11*********************@i3g2000cwc.googlegrou ps.com:
Access does a miserable job of mitigating the amount of data that
it passes back and forth to tables when it runs any operation.
You will probably get better performance having the local tables
in the database itself.
Hogwash. Access is extremely efficient in drawing data from linked
tables.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Oct 26 '06 #4

I would not say 'extremely', but it does an acceptable job of 'drawing'
or retrieving data. The moment you ask Access to manipulate that data
in any way - say queries, continuous forms, reports, etc. - the data
management of the interface - not necessarily the Jet engine - is just
terrible. Plug one of those lovely Fluke network analysis toys into
your network and you will see what I mean.

David W. Fenton wrote:
<snip>
Nov 2 '06 #5

"Pachydermitis" <pr*******@gmail.comwrote in
news:11**********************@i42g2000cwa.googlegr oups.com:
I would not say 'extremely', but it does an acceptable job of 'drawing'
or retrieving data. The moment you ask Access to manipulate that data
in any way - say queries, continuous forms, reports, etc. - the data
management of the interface - not necessarily the Jet engine - is just
terrible.
Perhaps YOU find this to be just terrible. If you do you could say, "I find
it to be just terrible." But do not say, "The moment YOU ask ...."; you
have no knowledge of what happens when I ask Access to manipulate the data.

I am happy to report that I do not find it to be just terrible. In fact I
find it to be just peachy!

--
Lyle Fairfield

from http://msdn.microsoft.com/library/de...l=/library/en-
us/dnmdac/html/data_mdacroadmap.asp

Obsolete Data Access Technologies
Obsolete technologies are technologies that have not been enhanced or
updated in several product releases and that will be excluded from future
product releases. Do not use these technologies when you write new
applications. When you modify existing applications that are written using
these technologies, consider migrating those applications to ADO.NET.
The following components are considered obsolete:
....
Data Access Objects (DAO): DAO provides access to JET (Access) databases.
This API can be used from Microsoft Visual Basic®, Microsoft Visual C++®,
and scripting languages. It was included with Microsoft Office 2000 and
Office XP. DAO 3.6 is the final version of this technology. It will not be
available on the 64-bit Windows operating system.
.....
Nov 2 '06 #6

"Pachydermitis" <pr*******@gmail.comwrote in
news:11**********************@i42g2000cwa.googlegr oups.com:
David W. Fenton wrote:
>"Pachydermitis" <pr*******@gmail.comwrote in
news:11*********************@i3g2000cwc.googlegro ups.com:
Access does a miserable job of mitigating the amount of data
that it passes back and forth to tables when it runs any
operation. You will probably get better performance having the
local tables in the database itself.

Hogwash. Access is extremely efficient in drawing data from
linked tables.

I would not say 'extremely', but it does an acceptable job of
'drawing' or retrieving data.
It does *way* more than acceptable. It retrieves only the minimum
amount of data needed, and if you've properly indexed your tables,
this is a very small amount of data.

The only time it bogs down is when you do something stupid, like
select on an expression or sort or select on an unindexed field.
That's not a flaw of Jet, it's pilot error.
The moment you ask Access to manipulate that data
in any way - say queries, continuous forms, reports, etc. - the data
management of the interface - not necessarily the Jet engine - is
just terrible.
Hogwash.
Plug one of those lovely Fluke network analysis toys into
your network and you will see what I mean.
In comparison to *what*? What other program manipulates data across
a network with such efficiency without the help of a process on the
other end of the connection?

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Nov 3 '06 #7

Wow, I can't believe I have sparked such a response. David Fenton, I
have seen your posts and admire your knowledge and contribution to the
Access community. I completely agree with what you say about poor data
structure and database design being the root of most users'
performance issues. Please understand that I continue this discussion
only with the greatest respect.

The question began with a user asking if they should use internal or
linked tables. Are you saying that linked is as fast and efficient as
internal? For example if I have a query joining two linked tables,
will anyone say that it is just as fast for Access to analyze each
record for a match across a network as it is internally? If so, build
two tables with 300k records each, link them, join them, time it.

Out of all the forms and reports you have created, how many only use a
single table and have no query or SQL recordsources? Not many, I
imagine, or you probably have issues with normalization. Have you ever
analyzed the amount of data Access retrieves when you open a form on a
complex dataset - even a single-record form? If not, take those 300k-record
tables and write two queries, one that limits your data to a
record and one that allows the form to do so in the default manner, and
time it. How good a job does Access do mitigating the amount of
data it passes back and forth for that single record - or is it
faster when you manually limit the data? If you are surprised, wait
until you try this on a continuous form.
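A rough sketch of that timing test (the table name, field, and key value here are hypothetical, invented only to make the comparison concrete):

```vba
' Compare fetching one record by restricting in the SQL itself vs.
' opening the full set and filtering client-side. Assumes a DAO
' reference; tblOrders / OrderID / 12345 are illustrative only.
Public Sub TimeSingleRecordFetch()
    Dim db As DAO.Database, rs As DAO.Recordset
    Dim t As Single
    Set db = CurrentDb

    ' 1) Restrict in the WHERE clause -- only the matching index and
    '    data pages should need to come across the wire.
    t = Timer
    Set rs = db.OpenRecordset( _
        "SELECT * FROM tblOrders WHERE OrderID = 12345", dbOpenSnapshot)
    Debug.Print "WHERE in SQL:", Timer - t, "sec"
    rs.Close

    ' 2) Open everything, then filter -- the default-form behavior
    '    the post is warning about.
    t = Timer
    Set rs = db.OpenRecordset("SELECT * FROM tblOrders", dbOpenSnapshot)
    rs.Filter = "OrderID = 12345"
    Set rs = rs.OpenRecordset   ' re-open with the filter applied
    Debug.Print "Filter after open:", Timer - t, "sec"
    rs.Close
End Sub
```

Run it against linked tables on a shared drive to see the difference the post is describing; the gap is much smaller against local tables.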

To add one more brand to the fire, most people use linked tables when
they have a multiuser setup. Both of the timed sequences you ran
earlier, run them again with 10 users trying to get data from that
backend. If you give those users phones you will discover a whole new
realm of data transfer. And yes I know about the subdatasheets issue.

In comparison to what? You have used passthrough queries. That in
itself deprecates the methods Jet uses to manipulate queried data. Set
up those two tables on a SQL Server or Oracle backend - even MSDE or SQL
Express - heck, MySQL or Postgres. Use passthroughs. Time your data
transfers. Time them with multiple users. Time them with a real-life
network load - with a network full of people sending emails, printing,
reading files, viewing security cams - and tell everyone how Access
fared in the performance test.

Like you, I love Access and think that it is a great tool. I have been
using it for many years as a cost effective solution to solve some very
complex issues. My posts come from my experience. I have tested many
of the scenarios I mentioned above, and have come to the conclusions
that I wrote earlier. I understand that you also speak from a wealth
of experience, so please quantify your remarks. "Hogwash" does
nothing for less experienced users trying to learn from these posts.
Pachydermitis

David W. Fenton wrote:
<snip>
Nov 6 '06 #8

"Pachydermitis" <pr*******@gmail.comwrote in
news:11**********************@f16g2000cwb.googlegr oups.com:
The question began with a user asking if they should use internal
or linked tables. Are you saying that linked is as fast and
efficient as internal?
Of course not. I'm only disputing overly broad statements that claim
that Jet is pulling scads of data across the network.
For example if I have a query joining two linked tables,
will anyone say that it is just as fast for access to analyze each
record for a match across a network as it is internally?
Access doesn't analyze each record, no matter if they are stored in
a local table or in a linked table on the other end of the LAN
(unless you're joining on unindexed fields, but, you seem to agree
with me that that would constitute bad design, i.e., pilot error,
and not be due to any Jet inefficiencies).
If so build
two tables with 300k records each, link them, join them, time it.
The time will be longer for the data that comes through the
smaller pipe. Since a LAN has a smaller pipe than the data bus on
the local PC (which may be reading the MDB out of RAM), of course it
will be faster.

However, the amount of data retrieved to do the exact same join WILL
BE EXACTLY THE SAME. First the query will be optimized using
metadata about the tables. That will be read locally or retrieved
remotely. Once the query is optimized, index pages will be retrieved
for the joined fields, the indexes will be joined and then the data
pages for the matching records will be retrieved.

The process will be PRECISELY THE SAME regardless of whether the
data is on a local hard drive or on a file server.
Out of all the forms and reports you have created, how many only
use a single table and have no query or sql recordsources? Not
many I imagine or you probably have issues with normalization.
For the most part, I very seldom have forms bound to anything but a
single table, but I'd never use anything but a SQL rowsource.
Have you ever
analyzed the amount of data Access retrieves when you open a form
on a complex dataset - even a single record form?
Yes. I have. It's very little data even with a very large data set
because Jet is efficient in optimizing SQL.

What I do *not* do, except in very small apps, is bind a form to all
the records in a table (I consider small to be <10K records or so).
If not take those 300k
record tables and write two queries, one that limits your data to
a record and one that allows the form to do so in the default
manner, and time it. How good of a job does Access do mitigating
the amount of data it passes back and forth for that single record
- or is it faster when you manually limit the data? If you are
surprised, wait until you try this on a continuous form.
What are you babbling about? Who in their right mind would populate
a form with a recordsource populated from a join of two 300K-record
tables? That's insane.

I can make it worse:

Sort the resultset.

But that, again, is just STUPIDITY -- PILOT ERROR. A user can't use
that many records. A user can only operate on a small set of records
at a time, so my apps only operate on small sets of records. I have
an app whose main table has 250K records and has a child table
displayed in a subform of 650K records. The user enters data to load
small groups of records at a time (name search), and the results
come back instantaneously. It's in use by 10-12 simultaneous users
all the time, and the network runs just fine.
To add one more brand to the fire, most people use linked tables
when they have a multiuser setup. Both of the timed sequences you
ran earlier, run them again with 10 users trying to get data from
that backend. If you give those users phones you will discover a
whole new realm of data transfer. And yes I know about the
subdatasheets issue.
But do you know anything at all about efficient application design?
Do you know about problems with CREATE/DELETE collisions on the LDB
file on the back end?
In comparison to what? You have used passthrough queries.
No, I haven't. I don't do server applications -- my apps are ALL
JET.
That in
itself deprecates the methods jet uses to manipulate queried data.
No, it doesn't.
Set
up those two tables on a sql or oracle backend - even msde or sql
express - heck mysql or postgre. Use passthroughs. Time your
data transfers. Time them with multiple users. Time them with a
real life network load - with a network full of people sending
emails, printing, reading files, viewing security cams, and tell
everyone how Access fared in the performance test.
I can design a scenario where a server database will be very poor if
I want to.

You can design a scenario where Jet will look poor.

So what?
Like you, I love Access and think that it is a great tool. I have
been using it for many years as a cost effective solution to solve
some very complex issues. My posts come from my experience. I
have tested many of the scenarios I mentioned above, and have come
to the conclusions that I wrote earlier. I understand that you
also speak from a wealth of experience, so please quantify your
remarks. "Hogwash" does nothing for less experienced users trying
to learn from these posts.
What you wrote was HOGWASH.

It was misleading at best, because it lacked context and nuance.

And what you write above indicates that YOU DON'T UNDERSTAND HOW JET
WORKS.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Nov 7 '06 #9

>so my apps only operate on small sets of records
That is what I am saying; Access does a lousy job of managing the
records - forcing developers to do it.
>Who in their right mind would populate. . .
I guess this means that you are in your right mind and would never
trust Access, in its legendary efficiency, to manage the data for you?
>I'm only disputing overly broad statements that claim
that Jet is pulling scads of data cross the network.
Reread the post, no one said that. I said Access does a miserable job
and further clarified in my second post. Based on your above
statement, you agree - at least about the interface.
>even with a very large data set
because Jet is efficient in optimizing SQL.
Try this: Limit a table's recordset in a query, then join that query to
another table. Say table1 has 200k records and its limited qry now has
10 records. Table2 has 200k records. Joining 10 to 200k is much
faster than joining 200k to 200k.
Put a function in the second query so you can watch the records.
Rather than limiting the compared records to the first query, Access
optimizes the join for us so that it has to join every record in both
tables. More robust engines allow this type of optimization - often a
must when your recordsets are in the millions.
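The two query shapes being compared look roughly like this (table and field names are made up for illustration; whether Jet actually treats them differently is exactly what the thread is arguing about):

```sql
-- Restrict first, then join: the join only ever sees the 10 rows
-- that survive the inner query.
SELECT t2.*
FROM (SELECT * FROM table1 WHERE SomeFlag = True) AS q1
INNER JOIN table2 AS t2 ON q1.ID = t2.FK_ID;

-- Join first, restrict afterward: the engine is free to consider
-- matches across both 200k-row tables before the WHERE applies.
SELECT t2.*
FROM table1 AS t1
INNER JOIN table2 AS t2 ON t1.ID = t2.FK_ID
WHERE t1.SomeFlag = True;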
>but I'd never use anything but a SQL rowsource.
Guess what, Access compiles its queries, and using SQL as a rowsource is
generally frowned upon as poor for performance.
http://support.microsoft.com/kb/209126
>You can design a scenario where Jet will look poor.
If you ever have to build an application that functions at the
enterprise level, you will find out that I did not design a scenario
for jet to look poor, that is life. Jet is not that robust and
probably not designed to be.

Well, what a disappointment, who would have ever thought that you would
be such a pompous jerk. I take back what I said earlier about
respecting you. You don't even have the decency to be polite.

As you say, you only work in jet. Maybe you should get more experience
before you talk about things you don't know about. You are like a guy
on a moped shouting to everyone that he's the fastest thing on the
road. So keep shouting - to anyone foolish enough to listen - or (like
me) foolish enough to answer you.
P
David W. Fenton wrote:
<snip>
Nov 7 '06 #10

"Pachydermitis" <pr*******@gmail.comwrote in
news:11**********************@m73g2000cwd.googlegr oups.com:
>>so my apps only operate on small sets of records

That is what I am saying; Access does a lousy job of managing the
records - forcing developers to do it.
If you send stupid SQL to SQL Server you're going to get a huge
dataset, too, so I don't see how the developer is *not* responsible.

The only difference between Jet and a server is that Jet has to get
the index pages first (though it may not need to get all of them if
there's a condition on the index that limits how much of it is
needed). That's a pretty small amount of data (how much space do you
think it takes to store the index for an Autonumber PK for a table
with 300K records?). Multiple joins would, of course, up the number
of indexes to be retrieved, and criteria on non-PK fields in the
joined tables could also add to it, but we're talking about less
than 20K for the whole index for each PK table (assuming Autonumber
and assuming the whole index is needed), and something of comparable
magnitude for any other non-primary indexes (though they'd be longer
if they were dates (8 bytes) or text (variable)), so, certainly the
amount of data retrieved in indexes alone is a difference, and the
more indexes used, the more of a difference it will be.

But other than that, there's not going to be much difference except
if you compare filtering/joining on non-indexed fields, which, of
course, a server database will do more efficiently than Jet, without
the application designer needing to do anything.

Yes, choose the reductio ad absurdum case and, yes, you're going to
be CORRECT.

In all other instances, not so much.

But back to what you said:
Access does a lousy job of managing the
records - forcing developers to do it.
Access does *not* "do a lousy job of managing the records" at all.
It's extremely efficient. But the management that a developer does
is forced on the developer by *human* considerations. A person can't
deal with a set of records that is created by joining two
300K-record tables -- no human being can make any sense out of that.
A human being would only ever work with subsets of that data, so
providing a user interface that retrieves subsets of the data is not
only going to be efficient (for both Jet and a server database
engine), but it's also going to be good application design. A list of 300K
records is of no use whatsoever to a human being without navigation
tools, and if you're going to provide navigation tools, why not make
them data retrieval tools and remove the inefficiency?

And I'm not convinced a db server is going to be any more efficient
-- the only difference in the amount of data sent is, again, the
indexes for the two tables that are used in the join. If those are
long integer or Autonumbers, that's 4 bytes per record. We're
talking less than 50K of data difference for Jet to retrieve that
wouldn't be sent by the database server. Who's going to notice
*that*?

Secondly, Access may be able to start presenting data sooner when
retrieved from Jet than from the database, since Access can then use
its Rushmore technology to start displaying the beginning of the
recordset before the whole thing is retrieved (assuming no sorting
on the results).
>>Who in their right mind would populate. . .

I guess this means that you are in your right mind and would never
trust Access, in its legendary efficiency, to manage the data for
you?
Eh? What I wrote was:
Who in their right mind would populate a form with a recordsource
populated from a join of two 300K-record tables?
This would be inefficient and COMPLETELY USELESS regardless of
whether you're retrieving the data from Jet or from a server
database.

I wouldn't do it.

It doesn't make any sense in a real-world application.
>>I'm only disputing overly broad statements that claim
that Jet is pulling scads of data cross the network.

Reread the post, no one said that. I said Access does a miserable
job and further clarified in my second post. Based on your above
statement, you agree - at least about the interface.
No, I don't agree to any such thing.

Jet does nearly as well as a server database would in terms of the
amount of data pulled across the wire. When you're joining two
tables and displaying the whole resultset, the difference between
Jet and a server database is going to be tiny in terms of the amount
of data retrieved, because in both cases, you're pulling 100s of
thousands of rows, and that's going to be most of the data.
>>even with a very large data set
because Jet is efficient in optimizing SQL.

Try this: Limit a table's recordset in a query then join that
query to another table. Say table1 has 200k records and its
limited qry now has 10 records. Table2 has 200k records. Joining
10 to 200k is much faster than joining 200k to 200k.
Well, d'oh.

Of course it is, because an index join will vastly reduce the amount
of data pages that need to be retrieved.
Put a function in the second query so you can watch the records.
Eh, what? I don't have a clue what the hell you mean here.
Rather than limiting the compared records to the first query,
Access optimizes the join for us so that it has to join every
record in both tables. More robust engines allow this type of
optimization - often a must when your recordsets are in the
millions.
Bullshit.

If your join fields are indexed and you aren't filtering on an
expression, Jet will do an index merge and then retrieve only the
data pages needed.

Haven't you ever worked with SHOWPLAN to see how Jet optimizes
queries? If you haven't, then you don't have a clue what you are
talking about.

Or are you talking about badly designed tables that are not
appropriately indexed?
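For readers who haven't seen it: Jet's SHOWPLAN output is switched on with a registry flag. This is the commonly cited Jet 4.0 location (the Debug key usually has to be created by hand; treat the exact path as something to verify for your Jet version):

```
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug]
"JETSHOWPLAN"="ON"
```

With that set, Jet writes a SHOWPLAN.OUT text file describing the plan it chose - which indexes it used and in what order it joined - each time a query is compiled, which is how claims like the ones in this thread can actually be checked.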
>but I'd never use anything but a SQL rowsource.

Guess what, Access compiles its query's and using sql as a
rowsource is generally frowned upon as performance poor.
http://support.microsoft.com/kb/209126
Eh? I don't see that in there. It does say to save your SQL as
queries, but I don't see where there's any performance benefit to
that, except that forms based on the same query would only need to
have the saved query compiled once, but two forms based on the
corresponding SQL recordsource would need it compiled twice, once
for each form. But the SQL gets compiled whether it's in a saved
query or in a form's recordsource (or a combo/listbox rowsource).
Ever looked at MSysObjects and noticed all those ~sql_... objects?
That's the compiled SQL for rowsources and recordsources. They go
away when you compact.
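You can list those hidden compiled-SQL objects yourself, assuming your security settings allow reading the system tables (the pattern uses Access wildcard syntax):

```sql
SELECT Name, Type
FROM MSysObjects
WHERE Name LIKE "~sql*";
```

Run it before and after a compact to see the objects appear and disappear as described above.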

Of course, this is all moot as a performance issue, because a
properly-designed front end doesn't ever need to be compacted. This
means that once all forms have been opened once, their recordsources
and rowsources are all compiled and they stay that way.

This is one of those "performance tips" like "use DBEngine(0)(0)
instead of CurrentDB() because it's 1000 times FASTER" that don't
hold up under scrutiny because in real life, they don't amount to
anything at all.

Any experienced developer would know not to trust the
recommendations in Microsoft's Knowledge Base without outside
confirmation.
>>You can design a scenario where Jet will look poor.

If you ever have to build an application that functions at the
enterprise level, you will find out that I did not design a
scenario for jet to look poor, that is life. Jet is not that
robust and probably not designed to be.
Well, now you've completely changed the subject. You seem to think
I'm arguing that Jet is appropriate for all situations. It obviously
IS NOT. No one but a moron would think so.

But no one but a moron would think that was what I was suggesting.
Well, what a disappointment, who would have ever thought that you
would be such a pompous jerk. I take back what I said earlier
about respecting you. You don't even have the decency to be
polite.
You're a complete idiot. You change the subject when you feel like
it. You deliberately misread my comments to mean something that they
don't. You mischaracterise what I've written. You take quotations
out of context. You chop the full quotation in order to make your
point.

You're intellectually dishonest.

And you haven't actually refuted a single one of the technical
points I've made about exactly how Jet actually works.
As you say, you only work in jet. Maybe you should get more
experience before you talk about things you don't know about. You
are like a guy on a moped shouting to everyone that he's the
fastest thing on the road. So keep shouting - to anyone foolish
enough to listen - or (like me) foolish enough to answer you.
I know Jet inside and out.

I know what it does well, and the scenarios you've described are
ones that Jet either handles extremely efficiently, or that no
real-world application would ever include.

<PLONK>

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Nov 8 '06 #11

This discussion thread is closed; replies have been disabled.