Bytes | Software Development & Data Engineering Community

Performance and second table

Hi,

I have a small theoretical issue.
I have one table which is pretty large. There are a lot of evaluations
running on this table, so each process needs to wait for another one
to finish. Sometimes, for some critical functions, it takes too long.

I don't think I can speed up the processes by changing the indexes on
the table (to reduce scan time, for example), because I have already
experimented with that, and the results were not good enough.

My question is: will it improve performance if I create a second
table, exactly like this one, and split the evaluations, so that those
which definitely need to run on the source table run on the first one,
and the other evaluations run on the copy?

To keep the data consistent between these two tables, I was thinking
about an insert trigger on the mother table which would transport the
data to the other one.
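Roughly, the trigger I have in mind would look like this (table and column names are just placeholders):

```sql
-- Sketch of the idea: copy every newly inserted row from the mother
-- table to the shadow copy. All names here are made up.
CREATE TRIGGER trg_CopyToShadow ON dbo.MotherTable
FOR INSERT
AS
    INSERT INTO dbo.ShadowTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM inserted
```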

The second part is: to improve selects on the table, should I set the
indexes' fill factor as close as possible to 100%, or as close as
possible to 0%? Or maybe should I set the pad index option?

What about clustered indexes? Is it better to use them if I would like
to increase performance for selects?

Thanks in advance

Mateusz

Sep 27 '05 #1
SQL
For selects you should have your fill factor as close to 100% as
possible; if the table is frequently modified, make it around 80%-90%.
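For example, a rebuild with an explicit fill factor might look like this (the index and table names are invented):

```sql
-- Hypothetical example: rebuild an index leaving ~10% free space in
-- each leaf page, and pad the intermediate-level pages the same way.
CREATE INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH FILLFACTOR = 90, PAD_INDEX, DROP_EXISTING
```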

Also, did you check the fragmentation level of your table?
A lot of times fixing this improves speed dramatically.
Run DBCC SHOWCONTIG ('YourTableName') and look at Scan Density, Avg.
Bytes Free per Page, and the fragmentation levels.
If your density is low and/or fragmentation is high, run DBCC
INDEXDEFRAG (dbname, tablename, 1).
Look up DBCC SHOWCONTIG and DBCC INDEXDEFRAG in Books Online.
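Put together, the check-and-defragment sequence looks roughly like this (database and table names are placeholders):

```sql
-- Report fragmentation statistics for the table
DBCC SHOWCONTIG ('YourTableName')

-- If Scan Density is low or Logical Scan Fragmentation is high,
-- defragment the clustered index (index id 1); INDEXDEFRAG is an
-- online operation, so it won't block your queries for long.
DBCC INDEXDEFRAG (YourDatabase, YourTableName, 1)
```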

Have you tried horizontal partitioning? This might benefit you
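In SQL Server 2000, horizontal partitioning is typically done with a partitioned view over member tables that each carry a CHECK constraint on the partitioning column. A minimal sketch (all names invented):

```sql
-- Each member table holds one year; the CHECK constraint lets the
-- optimizer skip member tables that cannot satisfy the query range.
CREATE TABLE dbo.Sales2004 (
    SaleId   INT NOT NULL,
    SaleYear INT NOT NULL CHECK (SaleYear = 2004),
    Amount   MONEY,
    PRIMARY KEY (SaleId, SaleYear)
)

CREATE TABLE dbo.Sales2005 (
    SaleId   INT NOT NULL,
    SaleYear INT NOT NULL CHECK (SaleYear = 2005),
    Amount   MONEY,
    PRIMARY KEY (SaleId, SaleYear)
)
GO

-- Queries against the view with a predicate on SaleYear only touch
-- the relevant member table.
CREATE VIEW dbo.Sales AS
SELECT SaleId, SaleYear, Amount FROM dbo.Sales2004
UNION ALL
SELECT SaleId, SaleYear, Amount FROM dbo.Sales2005
```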

http://sqlservercode.blogspot.com/

Sep 27 '05 #2
On 27 Sep 2005 09:35:55 -0700, Matik wrote:
> Hi,
>
> I have a small theoretical issue.
> I have one table which is pretty large. There are a lot of evaluations
> running on this table, so each process needs to wait for another one
> to finish. Sometimes, for some critical functions, it takes too long.
Hi Mateusz,

If the processes are only reading the data without modifying it, then
there is no need to wait. They can run concurrently.
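Shared locks taken by concurrent SELECTs are compatible with each other; only writers block readers. If (and this is an assumption about your workload) some evaluations can tolerate reading uncommitted data, you can sidestep even that blocking:

```sql
-- NOLOCK reads without taking shared locks, so it is never blocked
-- by writers, at the cost of possibly seeing uncommitted (dirty)
-- data. Table and column names here are placeholders.
SELECT COUNT(*)
FROM dbo.BigTable WITH (NOLOCK)
WHERE Status = 'open'
```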
> My question is: will it improve performance if I create a second
> table, exactly like this one, and split the evaluations, so that those
> which definitely need to run on the source table run on the first one,
> and the other evaluations run on the copy?
I doubt it. SQL Server doesn't know that the data in both tables is
equal. So if one query reads row #12345 from table #1, and the other
query reads row #12345 from table #2, SQL Server will fetch the
corresponding data from both tables from disk to cache. In short, you
are effectively halving the amount of cache SQL Server can use for these
queries. I expect performance to decrease.
> To keep the data consistent between these two tables, I was thinking
> about an insert trigger on the mother table which would transport the
> data to the other one.
And this will hurt performance even more. The speed of inserts will slow
down because the trigger has to be executed. As a result, locks on the
main table will live longer, keeping other queries blocked for longer
amounts of time. And the second table will be blocked as well.

Since the data is apparently updated while you are querying it, you
might find benefit in a variation on your idea: make a copy of the
table, but don't use triggers to copy over all modifications. Instead,
set up a job that periodically synchronises the data. Then make sure
that all queries that don't need up-to-the-second precision are run
against the copy table (which is only updated periodically).
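A bare-bones version of that job could be a scheduled step that refreshes the copy in one transaction (all names are placeholders; a real job would copy incrementally):

```sql
-- Scheduled e.g. every 15 minutes via a SQL Server Agent job.
-- A full refresh is shown for simplicity; an incremental copy keyed
-- on a timestamp column would hold locks on the source for far less
-- time.
BEGIN TRANSACTION
    TRUNCATE TABLE dbo.ReportCopy
    INSERT INTO dbo.ReportCopy (Id, Col1, Col2)
    SELECT Id, Col1, Col2 FROM dbo.MotherTable
COMMIT TRANSACTION
```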
> The second part is: to improve selects on the table, should I set the
> indexes' fill factor as close as possible to 100%, or as close as
> possible to 0%? Or maybe should I set the pad index option?
>
> What about clustered indexes? Is it better to use them if I would like
> to increase performance for selects?


There is no magic bullet here. Each problem needs its own solution;
that's why there are so many options.

Read more about performance at www.sqlserver-performance.com, or post
here with full details of your tables, indexes, queries and execution
plans for more advice.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Sep 27 '05 #3
On Tue, 27 Sep 2005 23:30:53 +0200, Hugo Kornelis wrote:
> Read more about performance at www.sqlserver-performance.com


I goofed when typing that URL from memory. The correct URL is
http://www.sql-server-performance.co...erformance.asp.

Unfortunately, the site has been revamped since my last visit. The
content is still there, but buried under lots of irritating advertising.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)
Sep 27 '05 #4
Thanks Hugo and SQL :)

I see that there is no better way than just experimenting with the
indexes and maybe modifying some statements. Over the past two days
I've done that, and it is much better now.

Anyway, the structure of the tables is bad, so there is also no way to
index them any better.

I have one more question, while we are on the topic of indexes. My
question is about the physical storage the indexes are using.
I emptied all the tables, truncated them, shrank the database and so
on, but the file size is still a little too big. I know that indexes
also take space to be stored (especially clustered ones), but after
I've removed all the data, the indexes should be cleared as well,
right? Or maybe I need to rebuild them?
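One way to check and then reclaim that space (a sketch; the names are placeholders) is to rebuild the indexes and shrink the database afterwards:

```sql
-- Show how many pages the table and its indexes still reserve
sp_spaceused 'YourTableName'

-- Rebuild all indexes on the table (an empty index name means all);
-- 90 is the fill factor to apply
DBCC DBREINDEX ('YourTableName', '', 90)

-- Return the freed pages to the operating system
DBCC SHRINKDATABASE (YourDatabase)
```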

Greetings

Mateusz

Sep 30 '05 #5
