
Backend Database Performance

Hi All,
I have two backend databases that are linked to a frontend database
where information is entered, retrieved and deleted. The information
accessed from the frontend database comes from both backends.

Both backend databases are at least 225MB each, so accessing data is
slow, running queries is slow and frustrating, and this causes the
databases to crash at times. At first I thought it was the network
connection, so I copied the databases to the C drive to test; it was
the same, very slow.

I have tried running the Performance Analyzer and applying the
improvements for combo boxes, reports, queries and subreports
suggested by Microsoft at
http://office.microsoft.com/assistan...CH062526191033
but accessing the data is still slow.

My idea is to delete a large number of records, since users hardly
use them, but there is no query I can use to delete a range of
records, for example everything from Product ID 0 to Product ID
10000.

Does anyone have an idea how I can do this, deleting the records in
both backend databases? I have tried, but it deletes the records in
certain tables only, not in all tables. I have tried creating
relationships between the tables with foreign keys to make sure the
related records are deleted, but it is not working.

Can anyone suggest how I can do this, or otherwise improve the
performance of accessing the information?

If I delete a large number of records from both backend databases,
would that make data access faster?

Thanks
Nov 13 '05 #1
16 Replies


How exactly are the back-ends linked? My front-end has a link to each
table in my back-end, and that works fine. If yours is attached some
other way, you may want to try linking each table to the front-end
individually.

When I need to delete records I do it with a query in the back-end,
not the front-end, querying the table I'm going to delete the records
from. Maybe you could try that instead of relying on links and
cascade deletes. If you do want to delete from more than one table at
a time, make sure "Cascade Delete Related Records" is enabled in the
join properties between the tables (go to the Relationships window
for this).

But instead of deleting records, you could move them into a different
database (an archive), so that if you ever need them again you can
still get at them. You could use make-table queries for this,
transfer the new tables into a new .mdb file, and then delete the
records from your current tables.
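
For instance, something along these lines (just a sketch; Products,
ProductID, and the archive path are placeholders for your real names,
and the archive .mdb must already exist):

' Archive a range of records to a separate MDB, then delete them.
' Products/ProductID and the archive path are placeholder names.
Sub ArchiveOldProducts()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Make-table query: copy the old rows into the archive database.
    db.Execute "SELECT * INTO Products IN 'C:\Archive\Archive.mdb' " & _
               "FROM Products WHERE ProductID BETWEEN 0 AND 10000", _
               dbFailOnError

    ' Then delete them from the live table. With cascade deletes
    ' enabled on the relationships, related child rows go too.
    db.Execute "DELETE * FROM Products " & _
               "WHERE ProductID BETWEEN 0 AND 10000", dbFailOnError

    Set db = Nothing
End Sub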

jn**********@yahoo.com (John) wrote in message news:<b5**************************@posting.google.com>...
[snip]

Nov 13 '05 #2

"Julia Baresch" <jb******@oldrepublic.com> wrote in message
news:50**************************@posting.google.c om...
What do you mean by area link? My front-end has a link to each table
in my back-end, and that works fine. Is an area link a different kind
of link? If so, you may want to try linking each table to the
front-end.

When I need to delete records I do it with a query in the back-end,
not the front-end, and query the table I'm going to delete the records
from. [snip]


In Access, when using a Jet back end, ALL query processing is local.
There is no advantage to storing the query in the back end, as Access
will just have to drag the query definition over the network before
executing it, and when it does execute it will behave exactly as if
the query were stored in the front end.
--
I don't check the Email account attached
to this message. Send instead to...
RBrandt at Hunter dot com

Nov 13 '05 #3

jn**********@yahoo.com (John) wrote:
Both backend databases are at least 225MB each, so accessing data is
slow... [snip]
The three most common performance problems in Access 2000 are:
- LDB locking, which a persistent recordset connection or an always-open bound form corrects (multiple users); a sketch of that trick follows below
- subdatasheet Name property set to [Auto]; it should be [None]
- Track name AutoCorrect should be off

For more information on these, less likely causes, other tips and links to MS KB
articles visit my Access Performance FAQ page at
http://www.granite.ab.ca/access/performancefaq.htm
Can anyone suggest how I can do this, or otherwise improve the
performance of accessing the information?


Try the above first. If your tables are properly indexed then it
shouldn't matter much if you have 10,000 or 500,000 records.
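
For the first item, a minimal sketch of the persistent-connection
trick (assumes a DAO reference; tblDummy is a placeholder for any
small linked table):

' Module-level recordset held open for the life of the application,
' so Jet doesn't repeatedly create and tear down the LDB lock file.
Dim rsAlwaysOpen As DAO.Recordset

Sub OpenPersistentConnection()
    ' Open once at startup, e.g. from a hidden form's Open event.
    Set rsAlwaysOpen = CurrentDb.OpenRecordset("tblDummy", dbOpenSnapshot)
End Sub

Sub ClosePersistentConnection()
    ' Close at shutdown.
    On Error Resume Next
    rsAlwaysOpen.Close
    Set rsAlwaysOpen = Nothing
End Sub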

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #4

You have got some good comments here.

Both backend databases are at least 225MB each


That is a good size... but not too large. Is the above size AFTER a compact?
(You have to compact the back end, not the front end.)
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
http://www.attcanada.net/~kallal.msn
Nov 13 '05 #5

Tony Toews wrote:
jn**********@yahoo.com (John) wrote:
<snip>
- subdatasheet Name property set to [Auto]; it should be [None]
Backend and/or front end?
- Track name AutoCorrect should be off


Took me ages to work this out for myself :)
--
regards,

Bradley
Nov 13 '05 #6

>The three most common performance problems in Access 2000 are:
- LDB locking, which a persistent recordset connection or an always-open bound form corrects (multiple users)
- subdatasheet Name property set to [Auto]; it should be [None]
- Track name AutoCorrect should be off


I believe I read on this site a long time ago that keeping a form
open to a table in the back end was a good thing. Since that time I
have always done it. I have run some benchmark tests and the
performance can almost always be improved by this method.
Can someone clear this up for me?
Thanks,
Hank Reed
Nov 13 '05 #7

Hi All,
Both backend databases are 225MB after a compact. I have tried the
following:

- LDB locking, which a persistent recordset connection or an always-open bound form corrects (multiple users)
- subdatasheet Name property set to [Auto]; it should be [None]
- Track name AutoCorrect should be off

But the problem remains; it is still slow when users access the
database. I believe the tables are not properly indexed.

This may be silly, but what exactly should I be looking at when
indexing tables in order to access the data faster?

As Tony commented above, if the tables are properly indexed then it
shouldn't matter much whether I have 10,000 or 500,000 records. I
would like the databases to reach this potential.

Regards

John


"Albert D. Kallal" <Pl*******************@msn.com> wrote in message news:<ZC5tc.600192$Ig.40260@pd7tw2no>...
You have go some good comments here

Both backend databases are at least 225MB each


That is a good size..but not too large. Is the above size AFTER a compact?
(you have to compact the back end..not the front end).

Nov 13 '05 #8

ha********@aol.com (Hank Reed) wrote in message news:<f4*************************@posting.google.com>...
I believe I read on this site a long time ago that keeping a form
open to a table in the back end was a good thing. [snip]

Instead of this, I have coded the Open events of the two main forms
to open an empty table. My users use shortcuts to open one or the
other of those forms, so one is always open.

I'm working on a query-by-form project with an unbound criteria form.
In that form I found I had to add error-handling code to this
procedure and cancel the Open event, because if it couldn't connect
to the back-end database it generated a run-time error.
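
Something along these lines (a sketch, not my exact code; rsKeepOpen
would be a module-level DAO.Recordset variable and tblDummy is a
placeholder table name):

Private Sub Form_Open(Cancel As Integer)
    ' Hold a connection to the back end; bail out cleanly if the
    ' back end can't be reached instead of raising a run-time error.
    ' (rsKeepOpen is declared at module level: Dim rsKeepOpen As DAO.Recordset)
    On Error GoTo ErrHandler
    Set rsKeepOpen = CurrentDb.OpenRecordset("tblDummy", dbOpenSnapshot)
    Exit Sub
ErrHandler:
    MsgBox "Could not connect to the back-end database: " & Err.Description
    Cancel = True
End Sub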
Nov 13 '05 #9

jn**********@yahoo.com (John) wrote in message news:<b5**************************@posting.google.com>...

This may be silly, but what exactly should I be looking at when
indexing tables in order to access the data faster?

My understanding is that you should index the fields the database is
sorted on, or is most likely to be sorted on. Usually this would be
last name, first name, ID number if any, SS number, and the like.

I have mine indexed this way and it works pretty well, although my
database is not nearly as big as yours. In case you don't already
know: you set indexes in table design view. For each field you can
set it to Indexed (No Duplicates) or Indexed (Duplicates OK).
Nov 13 '05 #10

"Bradley" <br*****@REMOVETHIScomcen.com.au> wrote:
- subdatasheet Name property set to [Auto]; it should be [None]


Backend and/or front end?


Backend.
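
If there are a lot of tables, a DAO loop can set it in one pass; a
rough sketch (untested), run inside the back-end MDB:

' Set the SubdatasheetName property of every user table to "[None]".
' The property doesn't exist until it has been set once, hence the
' CreateProperty fallback.
Sub TurnOffSubdatasheets()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left$(tdf.Name, 4) <> "MSys" Then   ' skip system tables
            On Error Resume Next
            tdf.Properties("SubdatasheetName") = "[None]"
            If Err.Number <> 0 Then            ' property not yet created
                Err.Clear
                tdf.Properties.Append _
                    tdf.CreateProperty("SubdatasheetName", dbText, "[None]")
            End If
            On Error GoTo 0
        End If
    Next tdf
End Sub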

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #11

ha********@aol.com (Hank Reed) wrote:
The three most common performance problems in Access 2000 are:
- LDB locking, which a persistent recordset connection or an always-open bound form corrects (multiple users)
[snip]


I believe I read on this site a long time ago that keeping a form
open to a table in the back end was a good thing. Since that time I
have always done it. I have run some benchmark tests and the
performance can almost always be improved by this method.
Can someone clear this up for me?


The first point above starting with "LDB locking" is the same as your suggestion.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #12

jn**********@yahoo.com (John) wrote:
But the problem remains; it is still slow when users access the
database. I believe the tables are not properly indexed.


Quite possible that they're not properly indexed. But there can be
other reasons too, which are mentioned on that website. For example,
virus scanning of MDB files on the client or server. Several times in
the past few years an antivirus vendor (a different one each time,
thankfully) has broken something, and for a week or two Access
performance suffered until an update arrived that sped things up again.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #13

Thank you all for your comments. I will index the tables on the backend databases.

Regards

John

Tony Toews <tt****@telusplanet.net> wrote in message news:<g0********************************@4ax.com>...
[snip]

Nov 13 '05 #14

John wrote:
Thank you all for your comments. I will index the tables on the
backend databases. [snip]


You should have an index on join fields. For example: you have a
table called Orders and another called OrderItems, each with a field
called OrderID. OrderID in Orders is the primary key and OrderID in
OrderItems is the foreign key. You would want an index on OrderID in
OrderItems as well.

Other fields to index are the ones you filter on. For example, you
may want to filter records by OrderDate and CustomerID. Indexes on
those fields help return records more quickly when you want records
for a specific customer or orders in a date range.
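
In code, those indexes could be created with Jet DDL, something like
this sketch (using the example names above; note the DDL has to run
in the database that owns the tables, i.e. the back end, not against
linked tables):

Sub AddHelpfulIndexes()
    With CurrentDb
        ' Foreign key used in joins:
        .Execute "CREATE INDEX idxOrderID ON OrderItems (OrderID)", dbFailOnError
        ' Fields commonly filtered on:
        .Execute "CREATE INDEX idxOrderDate ON Orders (OrderDate)", dbFailOnError
        .Execute "CREATE INDEX idxCustomerID ON Orders (CustomerID)", dbFailOnError
    End With
End Sub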

If you are already doing this, your problem lies elsewhere, and I
suspect it does.

Nov 13 '05 #15

> You should have an index on join fields. [snip]


On the advice of Tony and other threads I have followed here, I
ran some benchmarks using shorter paths and back-end database names.
Each has a noticeable effect, but the best result comes from
shortening both. The highest performance was with a path name like
S:\ABC\ABC.mdb.

The results are dramatic (I hope I'm not fooling myself), but a 30%
improvement should be attainable. I ran numerous tests, with various
configurations, early in the morning with a minimum of users on.
Perhaps I should also run tests when all users are on.

You should run your own benchmarks and judge for yourself. I would
be eager to hear any results; write to me if you want more help.
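
A crude timing harness for this kind of benchmark might look like the
following sketch (tblOrders is a placeholder for one of your linked
tables):

Sub BenchmarkTableRead()
    ' Time how long a full fetch of a linked table takes.
    Dim rs As DAO.Recordset
    Dim sngStart As Single

    sngStart = Timer
    Set rs = CurrentDb.OpenRecordset("SELECT * FROM tblOrders", dbOpenSnapshot)
    If Not rs.EOF Then rs.MoveLast   ' force the full fetch
    Debug.Print rs.RecordCount & " rows in " & Timer - sngStart & " seconds"
    rs.Close
End Sub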
Hank Reed
Nov 13 '05 #16

ha********@aol.com (Hank Reed) wrote:
On the advice of Tony and other threads I have followed here, I
ran some benchmarks using shorter paths and back-end database names. [snip]


Thanks for posting back. Nice to see test results for this.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #17
