Bytes IT Community

Computed Column calculations

If I want to split a computed column into two or more columns based on
the length (it's a varchar) of the computed column, how often will
SQL Server evaluate the computed column? Roughly along the lines of:

select comp_col,
       'comp_col 2' = case when datalength(comp_col) <= 100 then comp_col
                           else left(comp_col, 99) + '~' end,
       datalength(comp_col)
from aTable

As you can see, this scenario uses the computed column, comp_col, in
several places, so does SQL Server need to calculate it each time? I'm
working on the assumption that it does, and so am trying to shift the
computed column out to a variable and then manipulate and return from
there, but that has its own problems when you throw in additional
parameters (trying to join table UDFs). So if SQL Server is smart enough
not to calculate the column each time, I'd save a lot of hassle.

Cheers Dave
Jul 23 '05 #1
9 Replies


DMAC (dr***@drmcl.free-online.co.uk) writes:
> If I want to split a computed column into two or more columns based on
> the length (it's a varchar) of the computed column, how often will
> SQL Server evaluate the computed column? Roughly along the lines of:
>
> select comp_col,
>        'comp_col 2' = case when datalength(comp_col) <= 100 then comp_col
>                            else left(comp_col, 99) + '~' end,
>        datalength(comp_col)
> from aTable
>
> As you can see, this scenario uses the computed column, comp_col, in
> several places, so does SQL Server need to calculate it each time? I'm
> working on the assumption that it does, and so am trying to shift the
> computed column out to a variable and then manipulate and return from
> there, but that has its own problems when you throw in additional
> parameters (trying to join table UDFs). So if SQL Server is smart enough
> not to calculate the column each time, I'd save a lot of hassle.


Unless you are calling a scalar UDF in the expression for the computed
column, I would not be too worried. While there is probably some overhead,
it pales in comparison with the time for disk access etc.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
Jul 23 '05 #2

Adding storage for a computed column requires negligible resources,
especially with respect to disk access. However, SQL Server imposes
restrictions on how computed columns can be used. Thus, I don't see the
value of using computed columns: you are already accepting
denormalization by creating the computed column in the first place, so
why not take it one efficient step further?

Jul 23 '05 #3

(ma**@msn.com) writes:
> Adding storage for a computed column requires negligible resources,
> especially with respect to disk access. However, SQL Server imposes
> restrictions on how computed columns can be used. Thus, I don't see the
> value of using computed columns: you are already accepting
> denormalization by creating the computed column in the first place, so
> why not take it one efficient step further?


You can have it both ways: you can materialize your computed column
by adding an index on it.

If you stay with plain vanilla columns, and have a column whose value
is derived from other columns, you will have to compute that value in a
trigger. This has some overhead when inserting data (more overhead than
an indexed computed column).

Myself, I have used computed columns only very occasionally. Once it
was a fairly complex expression for an important test. In another case
it was a simple formula, but the value in question was essential and
queried in several places; and before I reworked this data, there was
such a column, non-computed, in another table.
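A sketch of the indexed-computed-column approach described above. The
table and column names here are hypothetical, and on SQL Server of this
era the expression must be deterministic (with the required SET options
on) for the index to be allowed:

```sql
-- Hypothetical table: comp_col is computed from the base columns.
CREATE TABLE aTable (
    id       int IDENTITY(1,1) PRIMARY KEY,
    part1    varchar(50) NOT NULL,
    part2    varchar(50) NOT NULL,
    comp_col AS (part1 + ' ' + part2)
);

-- Indexing the computed column materializes its values in the index,
-- so queries can read the stored value instead of re-evaluating the
-- expression on every access.
CREATE INDEX ix_aTable_comp_col ON aTable (comp_col);
```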

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Jul 23 '05 #4

Unfortunately, in this instance I do use UDFs in the computed column.
Actually I use one which calls three more, the third of which uses a
cursor to run a dozen logical ifs on some data to see whether the data
should go in front of or after another value, or be prefixed or suffixed
by something else, which makes it dead slow anyway. The problem is
obviously the business logic that says a bunch of rules need to be
applied to generate an item's description. This description can have
from 1 to 80 values, each of which needs all the rules applied, hence
the cursor, hence the UDF, hence the headache. Now I need to substitute
some replacements for words inside this finished description, which
comes from, yup, another cursor rattling through a few thousand
'potential' replacements to check for a match. Eventually I want to know
whether the computed description, after the replacements have been
applied, is bigger than 100 chars, and get a list. So far I've narrowed
it down to 20 hours for 125,000 line items. But the potential
replacements are growing rapidly and hurting, so I'm not a happy bunny
right now.

Dave

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!
Jul 23 '05 #5

an*******@devdex.com (an*******@devdex.com) writes:
> Unfortunately, in this instance I do use UDFs in the computed column.
> Actually I use one which calls three more, the third of which uses a
> cursor to run a dozen logical ifs on some data to see whether the data
> should go in front of or after another value, or be prefixed or
> suffixed by something else, which makes it dead slow anyway. The
> problem is obviously the business logic that says a bunch of rules
> need to be applied to generate an item's description. This description
> can have from 1 to 80 values, each of which needs all the rules
> applied, hence the cursor, hence the UDF, hence the headache. Now I
> need to substitute some replacements for words inside this finished
> description, which comes from, yup, another cursor rattling through a
> few thousand 'potential' replacements to check for a match. Eventually
> I want to know whether the computed description, after the
> replacements have been applied, is bigger than 100 chars, and get a
> list. So far I've narrowed it down to 20 hours for 125,000 line items.
> But the potential replacements are growing rapidly and hurting, so I'm
> not a happy bunny right now.


Now, does that sound ugly or what?

In this case, I would consider materializing the column. It doesn't sound
like this would be possible by indexing it, so you would have a plain
normal column, updated from a trigger.

Of course, that would incur an overhead when inserting rows into the
table, and it assumes that the components that make up this ugly beast
are somewhat static.
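A minimal sketch of the trigger approach, assuming a hypothetical key
column id and a hypothetical dbo.BuildDescription function standing in
for the real rule logic:

```sql
-- Plain stored column in place of the computed one.
ALTER TABLE aTable ADD description varchar(200) NULL;
GO

-- Keep the stored value in step with the source columns, paying the
-- cost once at write time instead of on every read.
CREATE TRIGGER tr_aTable_description ON aTable
AFTER INSERT, UPDATE
AS
    UPDATE t
    SET    description = dbo.BuildDescription(t.id)  -- hypothetical UDF
    FROM   aTable t
    JOIN   inserted i ON i.id = t.id;
```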

In the short term, this may not be possible. But it may be a good idea
to get all the values that you need to work with into a temp table, so
that the column is computed no more than once per row.
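The temp-table idea might look like this (assuming a hypothetical key
column id); the point is that the expensive expression is evaluated once
per row, and everything afterwards reads the stored copy:

```sql
-- Evaluate the expensive computed column exactly once per row.
SELECT id, comp_col
INTO   #work
FROM   aTable;

-- Subsequent manipulation reads the stored value, not the expression.
SELECT comp_col,
       CASE WHEN datalength(comp_col) <= 100 THEN comp_col
            ELSE left(comp_col, 99) + '~'
       END AS comp_col2,
       datalength(comp_col) AS len
FROM   #work;
```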

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Jul 23 '05 #6

In other words, store the values.

Jul 23 '05 #7

Obviously the values are stored, but they are used to create the
description, so the description itself is never stored: there is no
point storing it when it can change all the time, either as it grows or
as it is amended.

Think of it this way: people are made up of characteristics which have
values, e.g.

hair colour - grey, blonde, brunette
eye colour - green, blue
country of origin - British, American
sex - male, female, etc.

Thus, the description of one person is 'female, blonde, green eyes,
British born'. The description includes the stored values, but is itself
derived. That way, as the subject gains more values for its
characteristics, or is edited, or the rules change (e.g. we applied a
'born' label to the country characteristic to make the value read
'British born', which may change to 'born in Britain', i.e. the value is
now prefixed instead of suffixed by the new text), the description is
never stored, always computed, because you do not want to store every
label with every value, just store the label once and apply the current
one to the description as it is derived. Obvious really. And now
somebody wants 'born' to be 'brn', or 'British' to be 'GB', etc., which
is applied after the description is computed, but the list of
abbreviations is prohibitive and NOT related to anything, just a big
list of word replacements.
Jul 23 '05 #8

an*******@devdex.com (an*******@devdex.com) writes:
> And now somebody wants 'born' to be 'brn', or 'British' to be 'GB',
> etc., which is applied after the description is computed, but the list
> of abbreviations is prohibitive and NOT related to anything, just a
> big list of word replacements.


Could you stash those abbreviations in a table? Then the user-defined
function that builds the description string could take an argument
specifying whether to look up that table, and you would make the
abbreviated string a second computed column.
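The layout being suggested might be sketched like this (all names here
are hypothetical, including the dbo.BuildDescription function):

```sql
-- One row per word replacement.
CREATE TABLE abbreviations (
    word varchar(50) NOT NULL PRIMARY KEY,
    abbr varchar(50) NOT NULL
);

-- A hypothetical scalar UDF that builds the description and, when
-- @abbreviate = 1, also applies the replacements from the table above:
--
--   dbo.BuildDescription(@id int, @abbreviate bit)
--
-- The item table could then carry both versions as computed columns:
--
--   description      AS dbo.BuildDescription(id, 0)
--   description_abbr AS dbo.BuildDescription(id, 1)
```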
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Jul 23 '05 #9


Thanks all for your contributions. I do indeed have the abbreviated
description as a computed column, based on the results of the
replacements to the computed descriptions after having looped through
the abbreviations table looking for matches. As this is the bit that
takes the most amount of time, this is where I can save the most, which
I propose to do by simply calculating the abbreviated description once
and using the stored value for as many retrievals as I can get away
with. That way, all I need to track ('all', he says in an almost casual
way) are any changes that might result in changes to the computed
column, and then re-calculate only those, which I can schedule as a job.
The only real-time access users have to the abbreviated description
(other than exports and prints) is via a button, so that path is always
calculated in real time and will always be accurate (about 1.5 secs per
click); and if the user only reads the descriptions, I have nothing to
re-calculate. The only pain with this method is changes to the
abbreviations table itself: given that we do not know which descriptions
are affected by new or updated abbreviations, we need to re-calculate
them all, which is the big pain (handled by a simple trigger on the
abbreviations table to set a flag for an overnight run to pick up on).
The core problem is obviously the lack of a relationship between the
abbreviations and the various pieces of the computed descriptions to
which they apply.
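The flag-and-overnight-run arrangement described above could be sketched
as follows (table and trigger names are hypothetical; the abbreviations
table is assumed to be a plain word/replacement list):

```sql
-- Single-row flag table polled by the scheduled overnight job.
CREATE TABLE recalc_flag (needs_recalc bit NOT NULL DEFAULT 0);
INSERT INTO recalc_flag (needs_recalc) VALUES (0);
GO

-- Any change to the abbreviations marks all descriptions stale; the
-- overnight job checks the flag, re-calculates, and resets it to 0.
CREATE TRIGGER tr_abbreviations_flag ON abbreviations
AFTER INSERT, UPDATE, DELETE
AS
    UPDATE recalc_flag SET needs_recalc = 1;
```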
Jul 23 '05 #10

This discussion thread is closed

Replies have been disabled for this discussion.