Bytes IT Community

large scale Access table

P: 157

Some questions I can't find the answer to on the internet or in books.

I have a table which is growing by about 3,000 rows a day. Today it is at 111,000 rows with about 15 fields. A day means 8 hours (normal work hours in Norway).

QUESTION 1: If I index a field whose values may each repeat 2,000 or 3,000 times (about 2-3% of the rows), is that an advantage for speed?
I saw a web page which said an indexed field should have fewer than 20 repeats per value in a table...
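For reference, this is roughly what adding such an index looks like from VBA, using Jet DDL. This is a minimal sketch assuming the [VARER Lokasjonsantall] table from the code later in this post; the index name is made up:

```vba
' Hypothetical sketch: create a non-unique index on Varenr via DDL.
' Run once (e.g. from the Immediate window); "idxVarenr" is an arbitrary name.
CurrentProject.Connection.Execute _
    "CREATE INDEX idxVarenr ON [VARER Lokasjonsantall] (Varenr)"
```

The same thing can also be done in the table designer, which is easier to undo when comparing timings with and without the index.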

I also thought that 111,000 rows would be too much for Access and I was planning to build an archive function, but it still works fine in a 20-user production facility. I mainly use ADO code with the

', CurrentProject.Connection, adOpenForwardOnly, adLockReadOnly' locks

I am going to request a SQL Server and upscale the database if needed in the future; right now I am just testing how much load I can put on the database.

I use
    Function justere_saldo(Varenr As String, Antall_tabletter As Integer, Lokasjon As String)
        Dim rst As New ADODB.Recordset
        Dim lngAutonr As Long
        ' Note: Lokasjon is quoted in the WHERE clause but Varenr is not;
        ' if Varenr is a Text field it needs quotes as well.
        rst.Open "SELECT * FROM [VARER Lokasjonsantall] WHERE Varenr=" & Varenr & _
                 " AND Lokasjon='" & Lokasjon & "'", _
                 CurrentProject.Connection, adOpenDynamic, adLockPessimistic
        With rst
            If .EOF And .BOF Then .AddNew   ' no matching row yet: create one
            !Varenr = Varenr
            !Lokasjon = Lokasjon
            ' Nz() guards against the Null a freshly added row starts with
            ![Antall tabletter] = Nz(![Antall tabletter], 0) + Antall_tabletter
            If ![Antall tabletter] = 0 Then
                ' Balance is zero: remember the key, discard the pending edit,
                ' then delete the row (the GoTo version skipped .Close here)
                lngAutonr = !Autonr
                .CancelUpdate
                DoCmd.SetWarnings False
                DoCmd.RunSQL "DELETE * FROM [VARER Lokasjonsantall] WHERE Autonr=" & lngAutonr
                DoCmd.SetWarnings True
            Else
                .Update
            End If
            .Close
        End With
        Set rst = Nothing
    End Function
on a table with 700 rows which all the users are connected to, changing the data on the fly (2,000-3,000 times a day), and this also works almost without lag.

The database is about 50 MB in compacted state.

QUESTION 2: Will I see some performance gain if I convert it to the Access 2002-2003 file format?
I can't find an answer for this on the internet.

Sorry if my English is a little bit confusing, I am from Norway :)
Aug 15 '08 #1
5 Replies

P: 23
If the same database is shared by 20 users, then you could consider splitting it into a back-end/front-end model, keeping the main tables in the back-end database linked to the front-end. You can create more than one front-end database, one for each group of 5 users. This can improve performance.

Alternatively, the front-end database can be copied to each workstation by a DOS batch file when the user attempts to open the database, so that the front-end copy runs on the local machine. This method also helps distribute updated versions of the front-end database.
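A minimal sketch of such a launcher batch file, with made-up share and file names for illustration:

```bat
@echo off
rem Copy the latest front-end from the server share to the local machine,
rem then open the local copy. Paths and file names are examples only.
copy /Y "\\server\share\Frontend.mdb" "C:\Apps\Frontend.mdb"
start "" "C:\Apps\Frontend.mdb"
```

Users get a shortcut to the batch file instead of the .mdb itself, so every launch picks up the newest front-end automatically.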

We have an Access database with 5 million records, but it is not shared by as many users as yours.

Aug 15 '08 #2

Expert 100+
P: 1,134
Hi MrDeej
Why don't you make a copy of the database, add the index as you think it should be, and then run some comparison queries against the two databases to see whether it is faster?

I find that approach to be the best way to find the answer to questions like that.
In time you will make better judgement calls, once you have seen the results of experiments like that.
Indexes improve performance by reducing the number of records your query has to check. If an index reduces the check from 1,000,000 records down to 20, that should give a performance boost. Reducing it from 1,000,000 down to 2,000 should still improve things. As I say, copy the database and try it and see.

Having said that, if, as you say,
all the users are connected to and are changing the data on the fly (2000-3000 times a day) and this also works almost without lag
then any performance boost you gain through the index may turn out to be negligible. If there are lots of inserts/deletes of records among those 2000-3000 changes a day, the index may even slow things down.

I don't know
Aug 15 '08 #3

Expert 100+
P: 1,134
I am going to request a SQL server and upscale the database if needed in the future
SQL Server with Access as a front end works well with a small user base. We did that in my previous job: 45 users in a production environment.

In my current job I still use SQL Server, but we have about 400 users. For that reason we have moved away from Access and use an intranet as the main "frontend". It makes deployment and change control much easier.
We still use Access sometimes, when the user count is small and the flexibility required is large. Even so, web pages have come so close to the power and programming ease of Access that I haven't really needed it for two years now. All our solutions have become web based.
Aug 15 '08 #4

P: 157

In my current job I am still using SQL server but we have about 400 users.
Haha.. then we have an advantage in Norway, because almost no companies here have that many users :)
In Norway, a firm with 200 people is considered a large company.

If we go for SQL Server I will still keep Access as the user interface. That is the best part of the Access application, I think.

Regarding testing the difference between indexing and not indexing, I am a little bit scared of that. All the other performance enhancements I have applied haven't shown their real effect until they were live. I have 14 tables with information in the database and use VBA-SQL to create temporary tables which are created and deleted through the day (the user session). These temporary tables hold manipulated data from maybe 3-4 different tables (sometimes 6-7 or more). The temp tables are also the source for most of the reports we print per day (maybe 4-5 for some users) and most of the production forms.
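(To illustrate the per-session temp table approach described above, here is a minimal sketch using SELECT ... INTO; the temp table name and the filter are made up for the example:)

```vba
' Hypothetical sketch: build a session-local working table from a shared
' table, use it for reports/forms, then drop it at the end of the session.
' tmp_Rapport and the WHERE filter are illustrative names only.
DoCmd.SetWarnings False
DoCmd.RunSQL "SELECT * INTO tmp_Rapport FROM [VARER Lokasjonsantall] " & _
             "WHERE Lokasjon='A1'"
DoCmd.SetWarnings True
' ... reports and forms read from tmp_Rapport instead of the shared table ...
' DoCmd.RunSQL "DROP TABLE tmp_Rapport"
```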

These temp tables act as if you had split the database into pieces, and they minimize the shared load. I also have 3-4 other databases (with functions other than production), and from experimenting with splitting the database I have found that when all 20 users need to access a single table 100-200 times a day, you need adLockPessimistic to keep the data from being corrupted/false, and you need to keep the users connected to the same table (for locking purposes).

I also have some IT constraints to consider. I can't use local databases because we run in a heavily locked-down Citrix environment. The database is also already connected to another database (with the entire product catalogue of 40,000 products); the production database builds local buffer tables from it with all the currently active products (900). So in one way we have already done a lot of database splitting :)

But actually, the biggest performance gain came from taking the LEAN strategy to the user interaction. Eliminating idiotic code which did nothing of value for the user has actually made the database faster now, with 110,000 posts in the main 'pot', than it was with 30,000. But I think it may be 'full' when it reaches 200,000. In that case I have maybe 2-3 months to either start splitting the 'pot' into an archive or to learn SQL :)

How was your experience with migrating from Access to SQL Server? And how does indexing in SQL Server compare to Access?

I see a complete SQL Server in Norway costs about $10,000-15,000. Any good tips on things to avoid? I really don't know anything about SQL servers.
Aug 15 '08 #5

Expert 100+
P: 1,134
Converting from Access as a back end to SQL Server was no problem. Access's dialect of SQL and T-SQL are similar enough that many queries can simply be copied. Tables and data proved simple to import straight from Access into SQL Server.

Indexing on SQL server is great and works very well.

Tips: get the best server your budget can afford, and research its features for the kind of activity it will do most (data store, reporting, etc.). You can make the server(s) do most of the work, so that the only thing the front end has to do is display the records returned to it or send the data to be stored.
Good database design, of course, always has the biggest impact on performance, but good hardware gives good database design more room to pay off.

Whether or not serving 20 users is worth the expense, I will leave for you to prove to those who hold the purse strings :)
The smallish company I worked for, with 45 users, was bought out by a worldwide company, and SQL Server was part of the software package deal they had with Microsoft, so it fell into our laps.
Aug 20 '08 #6
