Bytes | Software Development & Data Engineering Community
Horizontal Partitioning question

I recently came across a database where the data are horizontally partitioned
into 4 tables. I'm not sure if this was a poor design choice, or if it was
done for valid performance reasons. The schemas of the tables are essentially
the same; they are just named differently, and the columns are named
differently to differentiate the data from a business-usage perspective. The
tables could easily be combined into one by adding a new column to the
clustered index that would be used to differentiate the business usage. I am
trying to evaluate whether combining the tables would improve performance or
whether it would be better to leave them the way they are. Many queries that
run against these tables do not request records from more than one of the
tables, which is good. However, a number of processes query against all of
the tables on the identical clustered index range. I am not sure exactly how
many rows are in the tables, but I'm fairly certain the entire database is
< 50 GB.
Jul 20 '05 #1
Hi

You don't say whether they have been set up as a partitioned view, but your
comment about business usage would tend to imply they haven't. If they
haven't, then this is the change I would look at first, especially if the
growth rate of the system indicates that federation will eventually be
necessary.

If only a small percentage of queries access all the tables, then that may
also indicate a performance benefit. If the tables are on different
filegroups and on different disc subsystems, then performance may have been
a valid reason to split them up.

Without having been there when the decision to partition them was made, you
will not know the underlying stats or reasons for this design, and I would
bet they have not been documented!

If you are going to combine them, create a benchmark test so that you can
compare each configuration, and test the two alternatives in a controlled
environment. If you can't do that, then unless there is a specific reason to
change what is already working (and performing well!), I wouldn't.
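For reference, a local partitioned view in SQL Server 2000 might look like the sketch below. The table and column names here are hypothetical; the key point is the CHECK constraint on the partitioning column of each member table, which lets the optimizer touch only the table(s) that can satisfy a query:

```sql
-- Each member table carries a CHECK constraint on the partitioning column.
CREATE TABLE SalesEast (
    SaleID  int           NOT NULL,
    Region  char(4)       NOT NULL CHECK (Region = 'EAST'),
    Amount  decimal(10,2) NOT NULL,
    PRIMARY KEY (SaleID, Region)
)

CREATE TABLE SalesWest (
    SaleID  int           NOT NULL,
    Region  char(4)       NOT NULL CHECK (Region = 'WEST'),
    Amount  decimal(10,2) NOT NULL,
    PRIMARY KEY (SaleID, Region)
)

-- The partitioned view unions the members; a query that filters on
-- Region is routed to the single member table that can satisfy it.
CREATE VIEW Sales AS
    SELECT SaleID, Region, Amount FROM SalesEast
    UNION ALL
    SELECT SaleID, Region, Amount FROM SalesWest
```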

John

Jul 20 '05 #2
>> I recently came across a database where the data are horizontally
partitioned into 4 tables. I'm not sure if this was a poor design choice,
or if it was done for valid performance reasons. <<

Without knowing any more than that, the smart money would bet on poor design.

>> The schemas of the tables are essentially the same, it's just that they
are named differently and the columns are named differently to
differentiate the data from a business usage perspective. <<

Here we MAY have a valid design reason. Is the data logically
different in each case? Not just a status change (paid versus unpaid
bills, etc.), but really different? If not, then this is a mess.

>> The tables could easily be combined into one by adding a new column to
the clustered index that would be used to differentiate the
business usage. <<

Bingo! No logical differences, no separate tables in the data model.

>> I am trying to evaluate whether combining the tables would improve
performance or if it would be better to leave them the way they are. <<

Performance is a secondary issue. Correctness and removing redundant
data element names is the first issue. Make it right, then make it
fast.

>> Many queries that run against these tables do not request records
[sic] from more than one of the tables, which is good. However, there
are a number of processes that query against all of the tables on the
identical clustered index range. I am not sure exactly how many rows
are in the tables but I'm fairly certain the entire database is < 50
GB. <<

Write some VIEWs on the data. Performance with a clustered index
starting on the status column will be fine.
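A sketch of what that might look like once the four tables are merged into one; all names here are hypothetical stand-ins for the real schema:

```sql
-- Combined table: the old four tables become one, distinguished by a
-- business-usage column that leads the clustered index.
CREATE TABLE Forecast (
    UsageType  char(1)       NOT NULL,  -- 'A'..'D', one per old table
    ForecastID int           NOT NULL,
    Amount     decimal(12,2) NOT NULL,
    PRIMARY KEY CLUSTERED (UsageType, ForecastID)
)

-- One view per old table name preserves the legacy interface for
-- queries that only ever touch one business usage.
CREATE VIEW ForecastTypeA AS
    SELECT ForecastID, Amount FROM Forecast WHERE UsageType = 'A'
```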
Jul 20 '05 #3

You don't say if they have been set up as a partitioned view, but your
comment about business usage would tend to imply they haven't?

Correct. There is no partitioned view. I don't think the current design
lends itself to that, since there is currently no column that could be used
for the check constraint. There is data spread across all the tables with
the same primary key, and rows with the same PK are logically related from a
business perspective. To create a check constraint, I think we'd have to add
another column like the one I mention below.
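Adding such a discriminator column might look like the sketch below (table and constraint names are hypothetical); the CHECK constraint is what would make each table eligible to be a member of a UNION ALL partitioned view:

```sql
-- Hypothetical: TableA is one of the four partitions. A discriminator
-- column with a CHECK constraint makes it usable in a partitioned view.
ALTER TABLE TableA ADD UsageType char(1) NOT NULL
    CONSTRAINT DF_TableA_UsageType DEFAULT 'A'

ALTER TABLE TableA ADD CONSTRAINT CK_TableA_UsageType
    CHECK (UsageType = 'A')
```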
> ...unless there is a specific reason to change what is already working
> (and performing well!) then I wouldn't.

Performance is definitely a problem, though, with operations that need to
query against all of the tables at the same time. For example, one thing
that users routinely need to do is copy a large range of rows from all of
the tables and insert them back into the same tables (with a new PK, of
course). I will try to find out whether different filegroups were used for
the different tables, but I'm guessing this is not the case.
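That copy operation is presumably a set of INSERT ... SELECT statements, one per table, along the lines of this sketch (table, column names, and version numbers are hypothetical):

```sql
-- Hypothetical: copy forecast version 7 into a new version 8 within the
-- same table; the same statement is repeated for each of the 4 tables.
INSERT INTO ForecastA (VersionID, LineID, Amount)
SELECT 8, LineID, Amount
FROM ForecastA
WHERE VersionID = 7
```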

In my case, since sometimes we need to access all of the tables at once and
sometimes not, what I need to do is measure the trade-off between improved
performance in situations where only one of the tables needs to be accessed
versus the penalty paid when all tables need to be accessed. My gut feeling
is that the increase in time spent traversing the B-tree in the combined
table should be less significant than the penalty paid for having the data
split up when we need to access all tables at the same time. But again, I
really need to measure this.
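One simple way to take those measurements in Query Analyzer is to wrap each candidate query with statistics output and compare logical reads and elapsed time between the two layouts (the SELECT below is a placeholder):

```sql
-- Compare logical reads and CPU/elapsed time for the two layouts.
SET STATISTICS IO ON
SET STATISTICS TIME ON

DBCC DROPCLEANBUFFERS  -- cold-cache run; requires sysadmin, test server only

SELECT COUNT(*) FROM Forecast WHERE ForecastID BETWEEN 1000 AND 2000

SET STATISTICS IO OFF
SET STATISTICS TIME OFF
```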

Thanks.
Jul 20 '05 #4
> Bingo! No logical differences, no separate tables in the data model.

Hmm, may I present a problem I had? I was in charge of redesigning a
database. This database contained a table called Directories that held the
absolute paths of some folders frequently used by other tables. There was a
need to differentiate three kinds of folders: input, output, and binary
folders. The goal was to use nicknames for the folders in other tables. So I
had this schema:

Directories
nick_name varchar(20)
type byte //0: input, 1: output, 2:binary
path varchar(1000)
primary key(nick_name, type)

Jobs
input_folder
output_folder
binary_folder
I was told that this was not a good design because I was not able to link
the Jobs table to the Directories one (the join would require a constant;
for example, input_folder is the nick_name and the type is 0).
The suggested way to solve the problem was to create 3 different tables,
InputDirectories, OutputDirectories, and BinaryDirectories, and to link the
Jobs table to those 3 tables.
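For illustration, the "join with a constant" that the original schema requires might look like this sketch (the folder columns on Jobs are assumed to hold nicknames):

```sql
-- Each folder column on Jobs joins to Directories with a literal type.
SELECT j.input_folder,
       d_in.path  AS input_path,
       d_out.path AS output_path
FROM Jobs AS j
JOIN Directories AS d_in
  ON d_in.nick_name = j.input_folder AND d_in.type = 0
JOIN Directories AS d_out
  ON d_out.nick_name = j.output_folder AND d_out.type = 1
```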

Which is the better design?

--
Vincent
Jul 20 '05 #5
MissLivvy (Xe*******************@yahoo.com) writes:
In my case, since sometimes we need to access all of the tables at once,
and sometimes not, what I need to do is measure the trade-off between
improved performance in situations where only one of the tables needs to be
accessed, vs the penalty paid when all tables need to be accessed. My gut
feeling is that the increase in time spent traversing the B-tree in the
combined table should be less significant than the penalty paid for
having the data split up when we need to access all tables at the same
time. But again, I really need to measure this.


One option would be to retain the tables and then build an indexed view
that combines them. Of course, this will double the disk space, and it also
carries a cost for updates. But if the main activity is querying, this
could be the best of both worlds.

Note: to be able to fully use indexed views, you need Enterprise Edition.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 20 '05 #6
Thanks, Erland.
Yes, there is a lot of inserting and updating going on with these tables, so
I think we'd be paying too high a price for the querying benefit of the
indexed view.
Jul 20 '05 #7
What about:

Directories
nick_name varchar(20)
type byte //0: input, 1: output, 2:binary
path varchar(1000)
primary key(nick_name, type)

Job
(JobID int primary key,
JobName varchar(20)
)

Job_Directory
(JobID int,
nickname varchar(20),
type byte
)
with PK on JobID + nickname + type

Jul 20 '05 #8
> For example, one thing that users routinely need to do is copy a large
> range of rows from all of the tables and insert them back into the same
> tables (with a new PK, of course).

This seems to me like a lot of redundant data being created needlessly; it
is probably a big part of why the database is as large as it is, and it is
also a good indication of poor design. Is this data historic or frequently
updated? If it is historic and does not change (like a POS sales record),
why copy the data around so much?

Jul 20 '05 #9
> Directories
nick_name varchar(20)
type byte //0: input, 1: output, 2:binary
path varchar(1000)
primary key(nick_name, type)

Job
(JobID int primary key,
JobName varchar(20)
)

Job_Directory
(JobID int,
nickname varchar(20),
type (byte)
)
with PK on JobID + nickname + type


Considering that any job has one and exactly one path of each type, you have
a 1-to-3 relationship. I don't know if that is better than the 1-to-1, which
I have heard is bad. :)
And it makes the SQL queries more complex to write (for no added value).

--
Vincent
Jul 20 '05 #10
It's a financial forecasting application, and the data are heavily
manipulated by the users after being copied from another version of the
forecast. Copying is just an easier way for them to get started versus
starting over completely from scratch. They also run variance reports to
compare different versions of the forecast. To reduce the size of the
database, I think an archiving strategy would be appropriate.

Jul 20 '05 #11
Maybe I misunderstood the problem. The way I understood it:

1] a directory can be one of 3 types: input, output, or binary.
2] A job has up to 3 directories: input, output and binary.
3] A directory can be shared by more than one job.

Is that correct?

Jul 20 '05 #12
> 1] a directory can be one of 3 types: input, output, or binary.

True. And the same nickname can be used for different types.

> 2] A job has up to 3 directories: input, output and binary.

Half true: a job has exactly 3 directories: one input, one output, and one
binary directory.

> 3] A directory can be shared by more than one job.

True.

--
Vincent
Jul 20 '05 #13
Then I like my design better. You may find it a pain to have to join to an
extra table, but with your design you need 3 joins to get each of the 3
directories related to a job. Also, if you ever have a 4th directory related
to a job, you have to add a new column.

If you have no other attributes to add to the Job table, then you could get
rid of the Job table and just do:

Directory (
nick_name varchar(20)
type byte //0: input, 1: output, 2: binary
path varchar(1000)
)
(with primary key(nick_name, type))

Job_Directory
(JobName nvarchar(20),
nickname varchar(20),
type byte
)

(with primary key(JobName, nickname, type)
and fk to Directory on nickname, type)
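Under this design, fetching all the paths for a job is a single join plus a filter, one row per directory type (a sketch using the names above; the job name is made up):

```sql
-- All directories for one job, one row per type.
SELECT jd.JobName, jd.type, d.path
FROM Job_Directory AS jd
JOIN Directory AS d
  ON d.nick_name = jd.nickname AND d.type = jd.type
WHERE jd.JobName = N'nightly-build'
ORDER BY jd.type
```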

Jul 20 '05 #14

