Bytes | Software Development & Data Engineering Community

Q: Analysis Services Cube Design

I have a (hopefully typical) problem when it comes to cube design. We
store millions of product records every year, broken down by
month/quarter. Each product can be assigned to various hierarchical
classification groups, etc. The data in an OLTP database occupies
roughly 100 GB for a typical year.

We're looking at breaking this out into OLAP to provide faster access
to the data in various configurations and groupings. This is not a
problem, as this is the intended use for Analysis Services.

The problem is that we apply projection factors on the product prices
and quantities. This would be ok if it only happened once, however,
this happens every quarter (don't ask why). The projection factors
change 4 times a year, and they affect all historical product records.

This presents a challenge because to aggregate the data into a useful
configuration in the cubes, you throw out the detail data, but this
means throwing out the price and quantities which are needed to apply
the projection.

So if you have Product A at $10 and Product B at $20, and roll both up
into Category X, you'll have $30, but you'll lose the ability to apply
a projection factor of .5 to Product A and .78 to Product B. They're
rolled up.
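To make the arithmetic concrete, here is a minimal Python sketch using the product names and factors from the example above (quantities omitted for brevity):

```python
# Detail-level facts: per-product price.
prices = {"Product A": 10.0, "Product B": 20.0}

# This quarter's projection factors, which change four times a year.
factors = {"Product A": 0.5, "Product B": 0.78}

# Rolled up first, the per-product identity is gone: no single factor
# applied to the $30 category total can recover the projected value.
category_total = sum(prices.values())  # 30.0

# The projection has to be applied at the detail level, before rollup:
projected_total = sum(price * factors[name]
                      for name, price in prices.items())
# 10 * 0.5 + 20 * 0.78 = 20.6
```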

I don't want to regenerate the cubes every 3 months. That's absurd.
But we can't live without the ability to apply the projections to the
prices/quantities at the product (detail) level. So how can this be
achieved when the cubes are built at a higher level, with less detail
and only sums of the detail data?

My initial guess is that we have to update the product data, and then
reaggregate all the other data that is built upon that product data.
Is there any other way to apply math to the data on the way out?

Thanks in advance!

Regards,

Zach
Jul 20 '05 #1
zn*****@hpis.net (znelson) wrote in message news:<f5**************************@posting.google. com>...


You might want to post this in microsoft.public.sqlserver.olap - you
may get a better answer there.

Simon
Jul 20 '05 #2
znelson wrote:

> This presents a challenge because to aggregate the data into a useful
> configuration in the cubes, you throw out the detail data, but this
> means throwing out the price and quantities which are needed to apply
> the projection.

Not necessarily.

> So if you have Product A at $10 and Product B at $20, and roll both up
> into Category X, you'll have $30, but you'll lose the ability to apply
> a projection factor of .5 to Product A and .78 to Product B. They're
> rolled up.

They're rolled up, yes, but the detail data can also live in the cube,
depending on your dimension structure. For example, if your hierarchy
has the actual products in it, the hierarchy in this example would be:

Category X
  Product A
  Product B
Category Y
  Product C
  Product D
etc.
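With the products kept at the leaf level like this, the factors can be applied at the leaves and then summed up the hierarchy at query time. In Analysis Services terms that would be a calculated measure multiplying the detail-level price by a factor from a small factor table; the Python below only sketches the arithmetic, with hypothetical prices and factors filled in for the hierarchy above:

```python
# The hierarchy above, with detail-level prices kept at the leaves
# (values are hypothetical).
hierarchy = {
    "Category X": {"Product A": 10.0, "Product B": 20.0},
    "Category Y": {"Product C": 15.0, "Product D": 25.0},
}

# The current quarter's projection factors: a small table that changes
# four times a year, while the stored detail facts stay untouched.
factors = {"Product A": 0.5, "Product B": 0.78,
           "Product C": 1.0, "Product D": 0.9}

def projected(category):
    """Apply each factor at the leaf, then aggregate up the hierarchy."""
    return sum(price * factors[product]
               for product, price in hierarchy[category].items())
```

The point of the sketch: only the factor table needs refreshing each quarter, because the multiplication happens on the way out rather than being baked into stored aggregates.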

> I don't want to regenerate the cubes every 3 months. That's absurd.

If you want the historical data to reflect the new projections then I
don't see that you have a choice. Plus, you'll be updating your cube
with new sales data anyway.
> But we can't live without the ability to project the
> prices/quantities at the product (detail) level. So how can this
> be achieved when the other cubes are created at a higher level with
> less detail and sums of the detail data?
>
> My initial guess is that we have to update the product data, and then
> reaggregate all the other data that is built upon that product data.
> Is there any other way to apply math to the data on the way out?


Unfortunately I really don't understand enough about your scenario or
how the projections are used to be able to offer any suggestions beyond
what I've already mentioned above. I hope that helps you some.

Zach Wells

Jul 20 '05 #3

This thread has been closed and replies have been disabled. Please start a new discussion.
