
Awful performance and millions of packets transferred

I have the weirdest problem with an Access 97 database...

The application uses the usual front-end/back-end split. It's
built around Access 97 and has been running without serious problems
for around seven years. The back-ends are replicated across two
servers (although I don't think this is a replication problem).

Around two weeks ago users suddenly started to complain about slow
performance. One particular operation that used to run in five or six
seconds was taking as many minutes to complete. After a great deal of
digging, I eventually found that the slow performance was due to an
enormous number of data packets being transferred between the server
and the client PC - I used Ethereal to look at network traffic on one
of the clients. I ran the same test on a separate network and found
that normally around 4000 packets were transferred for that particular
operation. On the live network the same operation resulted in nearly
half a million packets being transferred.

The PC, switches, routers and the server were all checked for network
errors - nothing found.

The network guys looked at the Ethereal trace and said that everything
looked normal. The client was passing requests to the server and the
server was responding with data. They couldn't see any
re-transmissions. They couldn't offer a suggestion as to why the
client was requesting half a million packets on one network, and only
4000 on another. They said it was down to the application...

This thing has been running for seven years. It's a fairly large
database - about 270 meg when freshly compacted - and has grown to that
size in a very linear fashion over the past seven years. The largest
table contains about 450,000 rows. It's compacted every week but it's
not a critical job - historically I've been unable to detect any
difference in performance between the pre- and post-compact environments.

I rebuilt the database and the problem disappeared - for a few hours -
but then it returned exactly as before.

I've just rebuilt the database a second time and performance is now
back to normal (at least for the moment).

The effect I'm seeing is almost as if the indexes get trashed and the
client responds by having to read a load more data...

Nothing changed on the application at the time the problem first
started.

Has anyone come across similar symptoms before?

Thanks,

Mike

Nov 13 '05 #1
3 Replies


Since the tables have presumably been growing steadily, it's possible you
just crossed some threshold with regard to how queries are optimized, and JET
is now tending to make a non-optimal choice. If so, there may or may not be
much you can do about it. The temporary fix produced by the rebuild could be
explained by the fact that query plans are discarded when you compact the
front-end.

Things to try:

1. Change the query designs. Try replacing subqueries with outer joins or
vice versa, and try replacing some joins with left joins even though there
should always be a match (see the sketch after this list).

2. Make sure indexes are appropriate. See if you should add or remove some
indexes.

3. Archive some of the older data.
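
As a rough sketch of the kind of rewrite in 1 (an IN subquery turned into a
join) plus an index for 2 - the table, query and field names (tblOrders,
tblCustomers, qryWestOrders, CustomerID, Region) are made up, so substitute
your own:

Sub RewriteQueryExample()
    Dim db As DAO.Database
    Set db = CurrentDb()

    ' Original form using a subquery (can go badly once JET picks a poor plan):
    '   SELECT * FROM tblOrders
    '   WHERE CustomerID IN
    '     (SELECT CustomerID FROM tblCustomers WHERE Region = 'West');

    ' Equivalent join form, which JET can usually satisfy from the indexes:
    db.QueryDefs("qryWestOrders").SQL = _
        "SELECT tblOrders.* " & _
        "FROM tblOrders INNER JOIN tblCustomers " & _
        "ON tblOrders.CustomerID = tblCustomers.CustomerID " & _
        "WHERE tblCustomers.Region = 'West';"

    ' Point 2: make sure the join/criteria columns are indexed.
    db.Execute "CREATE INDEX idxOrdersCustomerID ON tblOrders (CustomerID);", dbFailOnError

    Set db = Nothing
End Sub
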
Nov 13 '05 #2



In addition to compacting the front end, compact the back end, too.
That can have a significant impact on performance.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #3


I was (perhaps erroneously) assuming that was already being done. If there's
no scheduled compact of the back-end, I recommend doing one at least once per
week, right after a regularly scheduled backup.
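
A rough sketch of what that weekly job could look like in DAO/VBA - the paths
are placeholders, and every user has to be out of the back-end while it runs:

Sub CompactBackEnd()
    ' Placeholder paths - point these at the real back-end file.
    Const strBackEnd As String = "\\server\share\data\backend.mdb"
    Const strTemp As String = "\\server\share\data\backend_compact.mdb"

    ' CompactDatabase needs a different target file, so compact to a new
    ' file, keep the old one as a backup, then swap the compacted copy in.
    DBEngine.CompactDatabase strBackEnd, strTemp

    FileCopy strBackEnd, strBackEnd & ".bak"
    Kill strBackEnd
    Name strTemp As strBackEnd
End Sub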
Nov 13 '05 #4
