Bytes | Software Development & Data Engineering Community

Performance and second table

Hi,

I have a small theoretical issue.
I have one table which is pretty large. There are a lot of evaluations
running on this table, so each process needs to wait for another one to
finish. Sometimes, for some critical functions, it takes too long.

I don't think I can speed up the processes by changing the indexes on
the table (to improve scan times, for example), because I have already
experimented with that and it was not good enough.

My question is: will it improve performance if I create a second table,
exactly like this one, and split the evaluations, so that the ones which
definitely need to run on the source table run on the first one, and
the other evaluations run on the second one?

To keep the data consistent between these two tables, I was thinking
about a trigger on insert on the mother table, which would transport
the data to the other one.

The second part is: to improve selects on the table, should I set the
indexes with a fill factor as close as possible to 100%, or as close as
possible to 0%? Or maybe should I set the pad index option?

What about clustered indexes? Is it better to use them if I would like
to improve performance for selects?

Thanks in advance

Mateusz

Sep 27 '05 #1
SQL
For selects you should have the fill factor of your indexes as close to
100% as possible; if the table is frequently modified, make it around
80%-90%.
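
Something along these lines, just as an illustration (the index, table
and column names are made-up placeholders, not from your database):

CREATE NONCLUSTERED INDEX IX_BigTable_EvalDate
ON dbo.BigTable (EvalDate)
WITH PAD_INDEX, FILLFACTOR = 90, DROP_EXISTING
-- FILLFACTOR = 90 leaves about 10% free space on each leaf page
-- PAD_INDEX applies the same free space to the intermediate pages
-- DROP_EXISTING rebuilds the index in place if it already exists
--   (leave it out when the index is created for the first time)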

Also, did you check the fragmentation level of your table?
A lot of times this improves speed dramatically.
Run DBCC SHOWCONTIG ('YourTableName') and look at Scan Density, Avg.
Bytes Free per Page and the fragmentation levels.
If your density is low and/or fragmentation is high, run DBCC INDEXDEFRAG
(dbname, tablename, 1).
Look up DBCC SHOWCONTIG and DBCC INDEXDEFRAG in Books Online.
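
A quick sketch of the whole check-and-defragment cycle (database and
table names are placeholders; index id 1 is the clustered index):

USE YourDatabase
GO
-- Report Scan Density, Avg. Bytes Free per Page and the fragmentation
-- levels for all indexes on the table
DBCC SHOWCONTIG ('dbo.YourTableName') WITH ALL_INDEXES
GO
-- Online defragmentation of the clustered index of that table
DBCC INDEXDEFRAG (YourDatabase, 'dbo.YourTableName', 1)
GO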

Have you tried horizontal partitioning? This might benefit you.
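
As a rough illustration of the idea (all names and the split by year are
invented for the example), horizontal partitioning can be done with a
local partitioned view over several member tables:

CREATE TABLE dbo.Eval2004 (
    EvalId   int   NOT NULL,
    EvalYear int   NOT NULL CHECK (EvalYear = 2004),
    Amount   money NOT NULL,
    CONSTRAINT PK_Eval2004 PRIMARY KEY (EvalId, EvalYear)
)
CREATE TABLE dbo.Eval2005 (
    EvalId   int   NOT NULL,
    EvalYear int   NOT NULL CHECK (EvalYear = 2005),
    Amount   money NOT NULL,
    CONSTRAINT PK_Eval2005 PRIMARY KEY (EvalId, EvalYear)
)
GO
-- Queries that filter on EvalYear only touch the matching member table
CREATE VIEW dbo.EvalAll
AS
SELECT EvalId, EvalYear, Amount FROM dbo.Eval2004
UNION ALL
SELECT EvalId, EvalYear, Amount FROM dbo.Eval2005
GO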

http://sqlservercode.blogspot.com/

Sep 27 '05 #2
On 27 Sep 2005 09:35:55 -0700, Matik wrote:
> Hi,
>
> I have a small theoretical issue.
> I have one table which is pretty large. There are a lot of evaluations
> running on this table, so each process needs to wait for another one
> to finish. Sometimes, for some critical functions, it takes too long.

Hi Mateusz,

If the processes are only reading the data without modifying it, then
there is no need to wait. They can run concurrently.

> My question is: will it improve performance if I create a second table,
> exactly like this one, and split the evaluations, so that the ones
> which definitely need to run on the source table run on the first one,
> and the other evaluations run on the second one?

I doubt it. SQL Server doesn't know that the data in both tables is
equal. So if one query reads row #12345 from table #1, and the other
query reads row #12345 from table #2, SQL Server will fetch the
corresponding data from both tables from disk to cache. In short, you
are effectively halving the amount of cache SQL Server can use for these
queries. I expect performance to decrease.

> To keep the data consistent between these two tables, I was thinking
> about a trigger on insert on the mother table, which would transport
> the data to the other one.

And this will hurt performance even more. The speed of inserts will slow
down because the trigger has to be executed. As a result, locks on the
main table will live longer, keeping other queries blocked for longer
amounts of time. And the second table will be blocked as well.
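
For concreteness, the kind of trigger you describe would look something
like this (table and column names are invented), and it runs inside the
same transaction as every insert:

CREATE TRIGGER trg_BigTable_CopyInsert
ON dbo.BigTable
AFTER INSERT
AS
    -- Runs as part of every INSERT's transaction, so the original
    -- insert keeps its locks until this copy has finished as well
    INSERT INTO dbo.BigTable_Copy (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM inserted
GO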

Since the data is apparently updated while you are querying it, you
might find benefit in a variation on your idea: make a copy of the
table, but don't use triggers to copy over all modifications. Instead,
set up a job that periodically synchronises the data. Then make sure
that all queries that don't need up-to-the-second precision run against
the copy table (which is only updated periodically).
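
A bare-bones version of such a job, just to illustrate the idea (the
table names are invented; a real job would probably copy only new or
changed rows and be scheduled through SQL Server Agent):

-- Refresh the reporting copy in one go
BEGIN TRAN
    TRUNCATE TABLE dbo.BigTable_Copy
    INSERT INTO dbo.BigTable_Copy (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM dbo.BigTable
COMMIT TRAN
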
> The second part is: to improve selects on the table, should I set the
> indexes with a fill factor as close as possible to 100%, or as close
> as possible to 0%? Or maybe should I set the pad index option?
>
> What about clustered indexes? Is it better to use them if I would like
> to improve performance for selects?


There is no magic bullet here. Each problem needs its own solution;
that's why there are so many options.

Read more about performance at www.sqlserver-performance.com, or post
here with full details of your tables, indexes, queries and execution
plans for more advice.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Sep 27 '05 #3
On Tue, 27 Sep 2005 23:30:53 +0200, Hugo Kornelis wrote:
> Read more about performance at www.sqlserver-performance.com


I goofed when typing that URL from memory. The correct URL is
http://www.sql-server-performance.co...erformance.asp.

Unfortunately, the site has been revamped since my last visit. The
content is still there, but buried in lots of irritating advertising.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Sep 27 '05 #4
Thanks Hugo and SQL :)

I see that there is no better way than to just experiment with these
indexes and maybe modify some statements. During the past two days I've
done that, and now it is much better.

Anyway, the structure of the tables is bad, so there is also no
possibility to use better indexing.

I have one more question, since we are already on the topic of indexes.
My question is about the physical space the indexes use.
I emptied all the tables, truncated, shrank the database and so on, but
the file size is still a little too big. I know that the indexes also
take space to be stored (especially clustered ones), but after I've
removed all the data, the indexes should be cleared as well, right? Or
maybe I need to rebuild them?
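
To make the question concrete, this is roughly what I am looking at
(object and file names are only placeholders):

-- How much space do the data and the indexes still reserve?
EXEC sp_spaceused 'dbo.BigTable', @updateusage = 'true'
-- Rebuild all indexes on the table (does this also release the space?)
DBCC DBREINDEX ('dbo.BigTable', '', 90)
-- Then shrink the data file again (target size in MB)
DBCC SHRINKFILE (YourDataFileName, 100)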

Greetings

Mateusz

Sep 30 '05 #5

This thread has been closed and replies have been disabled. Please start a new discussion.
