Awful performance and millions of packets transferred

I have the weirdest problem with an Access 97 database...

The application comprises the usual front-back split database. It's
built around Access 97 and had been running without serious problems
for around seven years. The back-ends are replicated across two
servers (although I don't think this is a replication problem).

Around two weeks ago users suddenly started to complain about slow
performance. One particular operation that used to run in five or six
seconds was taking as many minutes to complete. After a great deal of
digging, I eventually found that the slow performance was due to an
enormous number of data packets being transferred between the server
and the client PC - I used Ethereal to look at network traffic on one
of the clients. I ran the same test on a separate network and found
that normally around 4000 packets were transferred for that particular
operation. On the live network the same operation resulted in nearly
half a million packets being transferred.

The PC, switches, routers and the server were all checked for network
errors - nothing found.

The network guys looked at the Ethereal trace and said that everything
looked normal. The client was passing requests to the server and the
server was responding with data. They couldn't see any
re-transmissions. They couldn't offer a suggestion as to why the
client was requesting half a million packets on one network, and only
4000 on another. They said it was down to the application...

This thing has been running for seven years. It's a fairly large
database - about 270 meg when freshly compacted - and has grown to that
size in a very linear fashion over the past seven years. The largest
table contains about 450,000 rows. It's compacted every week but it's
not a critical job - historically I've been unable to detect any difference
in performance between the pre- and post-compact environments.

I rebuilt the database and the problem disappeared - for a few hours -
but then it returned exactly as before.

I've just rebuilt the database a second time and performance is now
back to normal (at least for the moment).

The effect I'm seeing is almost as if the indexes get trashed and the
client responds by having to read a load more data...

Nothing changed on the application at the time the problem first
started.

Has anyone come across similar symptoms before?

Thanks,

Mike

Nov 13 '05 #1
Since the tables have presumably been growing steadily, it's possible you
just crossed some threshold with regard to how queries are optimized, and JET
is now tending to make a non-optimal choice. If so, there may or may not be
much you can do about it. The temporary fix you got from the rebuild could be
explained by the fact that query plans are discarded when you compact the
front-end.
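
If stale plans are part of it, one cheap thing to try from the front-end is
forcing the stored queries to recompile without a full compact. Assigning a
query's SQL back to itself is a commonly used way to make JET rebuild the
plan the next time the query runs - treat that as an assumption to verify,
and the procedure name is just illustrative:

    Sub RefreshQueryPlans()
        ' Re-save every stored query so JET rebuilds its plan on next use.
        Dim db As DAO.Database
        Dim qdf As DAO.QueryDef
        Set db = CurrentDb
        For Each qdf In db.QueryDefs
            ' Skip the hidden queries Access creates for form/combo row sources.
            If Left$(qdf.Name, 4) <> "~sq_" Then
                qdf.SQL = qdf.SQL
            End If
        Next qdf
        Set db = Nothing
    End Sub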

Things to try:

1. Change the query designs. Try replacing subqueries with joins (or vice
versa), and replacing some inner joins with left joins even though there
should always be a match (see the sketch after this list).

2. Make sure indexes are appropriate. See if you should add or remove some
indexes.

3. Archive some of the older data.
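
To make point 1 concrete (and point 2 alongside it), here's the kind of
rewrite meant, with made-up table, field and index names - your real queries
will obviously differ. The subquery form can push JET into scanning far more
rows than the equivalent join, and the CREATE INDEX statement (Jet DDL, run
via CurrentDb.Execute or a DDL query) is the sort of supporting index worth
checking for:

    Subquery form, often the slower of the two on large tables:

        SELECT o.OrderID, o.OrderDate
        FROM Orders AS o
        WHERE o.CustomerID IN
              (SELECT c.CustomerID FROM Customers AS c WHERE c.Region = 'North');

    Equivalent join form, which JET can usually satisfy from indexes:

        SELECT o.OrderID, o.OrderDate
        FROM Orders AS o INNER JOIN Customers AS c
             ON o.CustomerID = c.CustomerID
        WHERE c.Region = 'North';

    Supporting index on the filter column:

        CREATE INDEX idxCustomersRegion ON Customers (Region);
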
On 25 Jul 2005 18:52:07 -0700, "MikeH" <mh****@bigfoot.com> wrote:

Nov 13 '05 #2
Steve Jorgensen <no****@nospam.nospam> wrote in
news:50********************************@4ax.com:

In addition to compacting the front end, compact the back end, too.
That can have a significant impact on performance.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #3
On Tue, 26 Jul 2005 14:26:19 -0500, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
In addition to compacting the front end, compact the back end, too.
That can have a significant impact on performance.


I was (perhaps erroneously) assuming that was already being done. If there's
no scheduled compaction of the back-end, I recommend running one at least
once per week, right after a regularly scheduled backup.
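
In case it helps, here's a minimal sketch of that kind of scheduled compact
using DAO's CompactDatabase - the paths are made up, it must run while every
user is out of the back-end, and you'd keep the backup taken just before it:

    Sub CompactBackEnd()
        ' Compact into a temporary file, then swap it in place of the original.
        Const strBackEnd As String = "\\Server\Data\MyApp_be.mdb"
        Const strTemp As String = "\\Server\Data\MyApp_be_tmp.mdb"

        DBEngine.CompactDatabase strBackEnd, strTemp
        Kill strBackEnd
        Name strTemp As strBackEnd
    End Sub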
Nov 13 '05 #4


