Views vs Stored Procedures, what's the difference?

The purpose of my questions is accessing these technologies from
applications. I develop both applications and databases, working with
Microsoft C#.NET and Microsoft SQL Server 2000 (production) and 2005
(test) environments.

What is the purpose of a view if I can just copy the code from a view
and put it into a stored procedure?

Should I be accessing views from stored procedures?

Should I use views to get information, and stored procedures for
inserts, updates and deletes?

What are the performance differences between the two?

Thank you for any and all information.

SBProgrammer

Jan 27 '06 #1

The difference between a view and a stored procedure is exactly what the
names say. With a view, you define a SELECT query that retrieves
specific information from one or more tables (a stored SELECT
statement). A stored procedure can do much more: it can run
multiple SQL commands, has control-of-flow statements and accepts
parameters. Also, giyf:
http://www.google.be/search?q=view+vs+stored+procedure
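
To make the distinction concrete, here is a minimal T-SQL sketch (all object
names are invented for illustration): the view stores a reusable SELECT,
while the procedure takes a parameter and can branch.

CREATE VIEW dbo.ActiveCustomers
AS
SELECT CustomerID, Name, Region
FROM dbo.Customers
WHERE IsActive = 1;
GO

CREATE PROCEDURE dbo.GetActiveCustomersByRegion
    @Region varchar(50)
AS
BEGIN
    -- A procedure can branch, accept parameters and run several
    -- statements; a view cannot.
    IF @Region IS NULL
        SELECT CustomerID, Name, Region FROM dbo.ActiveCustomers;
    ELSE
        SELECT CustomerID, Name, Region
        FROM dbo.ActiveCustomers
        WHERE Region = @Region;
END
GO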

Stevel
Jan 27 '06 #2
Hi,

Think of a view as a 'virtual table'; it's not physical but is materialised
when you query it.

You can set permissions on the view, but users can write their own queries
if they are given access to the database, which is often not recommended -
it's the old Joe Bloggs running a SELECT on the view in MS Access without a
WHERE clause!

If you are writing applications, a better, more modular and more secure
approach is to use stored procedures.

Encapsulate your logic in the stored procedure and call that from your
application. Make sure you don't build dynamic SQL in the app; instead use
the command object. Basically, don't do this in your app, because you open
yourself up to SQL injection...

' Vulnerable pattern: whatever is typed into tbName.Text is concatenated
' straight into the command text, so user input becomes executable T-SQL.
Dim strSQL As String

strSQL = "exec myproc @parm1='" & tbName.Text & "'"

dbconn.Execute(strSQL)

From a performance point of view, plans are cached now anyway, so the old
adage that procs are better because the execution plan is in cache is no
longer valid: the execution plan for the query against the view will be in
cache as well, probably parameterised as well.

In summary, if you want to build a good, secure, modular system then use
stored procedures. One last thing: it's significantly - and I'm talking
significantly - easier to performance tune a stored procedure. I often go on
site and sometimes find places with an app that has not used stored
procedures - it's like tying my hands behind my back; I can only play with
indexes or indexed views, whereas with a stored procedure I can rewrite the
SQL more efficiently.

Hope that helps.

Tony.

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"mooreit" <mm***@yahoo.com> wrote in message
news:11**********************@o13g2000cwo.googlegr oups.com...
The purpose for my questions is accessing these technologies from
applications. I develop both applications and databases. Working with
Microsoft C#.NET and Microsoft SQL Server 2000 Production and 2005 Test
Environments.

What is the purpose of a view if I can just copy the vode from a view
and put it into a stored procedure?

Should I be accessing views from stored procedures?

Should I use views to get information? and Stored Procedures for
Inserts, Updates and Deletes?

What are the performance differences between the two?

Thank you for any and all information.

SBProgrammer

Jan 27 '06 #3

Hi SBProgrammer,

I wrote about the differences between stored procedures and views ten
days ago. Here's a link to the article on Google (warning - long URL,
might wrap)

http://groups.google.com/group/micro...cc4092b8603807

--
Hugo Kornelis, SQL Server MVP
Jan 27 '06 #4
Create a view.

Then put that same view's query inside a stored procedure.

Everything you can do in a view you can do in a stored procedure. But in
a stored procedure, you can do MUCH more, much faster, and with much
more flexibility.

Think of a view as memorizing keystrokes for an old-time macro.

Think of a stored procedure as a full-fledged programming language.

Jan 30 '06 #5
Holy cow! I read the other answers and just can't decide which one to
respond to... I learned a lot reading them, though.
Here's my take; never mind my footer, my answer is DBMS neutral.

> What is the purpose of a view if I can just copy the code from a view
> and put it into a stored procedure?

The purpose of a view is that it can be used within a query.
The optimizer of the DBMS can see through a view definition.
That means you can encapsulate complexity within a view while maximizing
latitude for the optimizer.
Views are used for access control as well as to provide a level of
abstraction from the underlying DB schema.
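
A small sketch of what "seeing through" a view means (table and view names
are invented): the optimizer expands the view into the outer query, so the
outer WHERE clause can be pushed down to the base tables.

CREATE VIEW dbo.OrderSummary
AS
SELECT o.OrderID, o.OrderDate, c.Name AS CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
GO

-- The predicate below is folded into the expanded view text, so the plan
-- is the same as querying the base tables directly with this WHERE clause.
SELECT OrderID, CustomerName
FROM dbo.OrderSummary
WHERE OrderDate >= '20060101';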

By contrast, a procedure is a server-side extension of your client
application. Its purpose can be threefold:
* access control
* reduction in client-server traffic
* concentrating processing cost on the server (thin client).

> Should I be accessing views from stored procedures?

They are orthogonal: stored procedures do procedural logic, views do
relational transformations. So: yes, absolutely!
> Should I use views to get information, and stored procedures for
> inserts, updates and deletes?

No. You can INSERT, UPDATE and DELETE through views just fine.
Use stored procedures to encapsulate LOGIC.
Use views to encapsulate set processing (like JOINs, UNIONs, ...).
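
For example, in SQL Server a straightforward single-table view is updatable,
as long as a given modification touches only one base table (names below are
invented); add WITH CHECK OPTION if you want the view's WHERE clause
enforced on modifications made through it.

CREATE VIEW dbo.UkCustomers
AS
SELECT CustomerID, Name, Region, IsActive
FROM dbo.Customers
WHERE Region = 'UK';
GO

-- Both statements modify dbo.Customers through the view.
UPDATE dbo.UkCustomers
SET IsActive = 0
WHERE CustomerID = 42;

INSERT INTO dbo.UkCustomers (CustomerID, Name, Region, IsActive)
VALUES (43, 'New Customer Ltd', 'UK', 1);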
> What are the performance differences between the two?

There is little the DBMS can do to tune and parallelize a stored
procedure: things happen exactly the way you code them.
There is a lot the optimizer can do with complex SQL, including choosing
join orders and join types, exploiting SMP parallelism, ...
> Thank you for any and all information.

No problem.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab
Jan 30 '06 #6
hmmmm.......

Like Serge said, there are a lot of different answers and it depends a
lot on how you think about things.

Personally, I hate views. I can't control what the thing does, and
sometimes the optimizer doesn't do what is best. With a stored
procedure, I have more options. With more options, I also have more
responsibilities.

For sure I would agree to learn views first. It teaches data set
mentality. When views don't cut it, upgrade to stored procedures.
Shrug - that's just how I think about it.

A couple of dissensions. Within the stored procedure will be the exact
same code as is in the view, if you do the basics. The optimizer can
optimize that SQL code just like it can in the view.
Stored procedures can be used to create data sets, and are MUCH more
flexible in their capabilities than views.

As an example, perhaps you want 5 columns, one data row long, that are
calculated from 7 tables. You can absolutely use a view to set up the
joins.
However, if you utilize a stored procedure, you can utilize your
knowledge of how the data really works to "optimize" your approach to
data retrieval. So instead of a giant join, you can select the data
using optimized indexes to home in on the data you are after.
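
One possible shape of that approach, as a hedged sketch (all object names
are invented): each value is fetched with its own narrow, index-friendly
query inside the procedure, then assembled into the single-row result.

CREATE PROCEDURE dbo.GetAccountSnapshot
    @AccountID int
AS
BEGIN
    DECLARE @Balance money, @LastOrderDate datetime, @OpenTickets int;

    -- Each value comes from its own targeted query instead of one
    -- giant multi-table join.
    SELECT @Balance = Balance
    FROM dbo.Accounts
    WHERE AccountID = @AccountID;

    SELECT @LastOrderDate = MAX(OrderDate)
    FROM dbo.Orders
    WHERE AccountID = @AccountID;

    SELECT @OpenTickets = COUNT(*)
    FROM dbo.SupportTickets
    WHERE AccountID = @AccountID AND Status = 'Open';

    -- Return the assembled one-row result.
    SELECT @AccountID     AS AccountID,
           @Balance       AS Balance,
           @LastOrderDate AS LastOrderDate,
           @OpenTickets   AS OpenTickets;
END
GO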

For a small database (sub 10 million rows) it probably really doesn't
matter. Get a larger database, or a LOT of hits, and all of a sudden a
few extra table scans can really add up.

Shrug, Smile, and listening!!!!!!

-doug

Jan 30 '06 #7
>> Personally, I hate views. I can't control what the thing does, and sometimes the optimizer doesn't do what is best. <<

How does the smart money bet?

>> With a stored procedure, I have more options. With the more options, I do have more responsibilities. <<

Myself, I do not want to have to control 100+ factors that can change
the next time I use the procedure. T-SQL is a simple one-pass
compiler -- it does not re-arrange my if-then-else logic or optimize
my loops.

>> For sure I would agree to learn views first. It teaches data set mentality. <<

Yes. And that is why a newbie likes a procedure better than a VIEW.
Suddenly, you have to change your mindset and learn how to use DCL and
WITH CHECK OPTION.

>> A couple of dissensions. Within the stored procedure will be the exact same code as is in the view, if you do the basics. The optimizer can optimize that SQL code just like it can in the view. <<

Not really. A procedure programmer will tend to use if-then-else while a
VIEW programmer would tend to use CASE expressions. A procedure
programmer will tend to use temp tables as scratch files while a VIEW
programmer would tend to use CTEs and derived tables in the query.

>> Stored procedures can be used to create data sets, and are MUCH more flexible in their capabilities than views. <<

Only because they can take parameters. The two things serve different
purposes.

Feb 4 '06 #8
If you use VIEWS to encapsulate logic and your security, just how do you
prevent a user from connecting to the database and writing their own
queries?

Answer: With VIEWS you can't (easily), but with stored procedures they don't
get the opportunity, nor is your schema design (and possible pitfalls) sent
across the wire for all and sundry to see your bad habits - and exploit
them!

Your view will have a different plan each time it's compiled anyway - when
values change, statistics distribution changes, etc. It behaves just like a
stored procedure in that respect.

The smart money uses stored procedures because:

1) they neatly encapsulate logic so
a) it can be easily developed in a multi-person team/teams
b) faults can be very easily diagnosed and solved without having
to revert to a nasty application recompile and redistribution
2) they are really great for implementing proper security, your schema
is not exposed to anybody except administrators and app devs.
3) you can code IF ELSE to make more efficient queries instead of
getting general plans and tons of code that needs testing and supporting

Tony Rogerson, SQL Server MVP
http://sqlserverfaq.com

"--CELKO--" <jc*******@earthlink.net> wrote in message
news:11*********************@z14g2000cwz.googlegro ups.com...
Personally, I hate views. I can't control what the thing does, and
sometimes the optimizer doesn't do what is best. <<
How does the smart money bet?
With a stored procedure, I have more options. With the more options, I
do have more responsibilities. <<

Myself, i do not want to have to control 100+ factors that can change
the next time I use the procedure. But T-SQL is a simple one-pass
compiler -- it does nto not re-arrange my if-the-else logic or optimize
my loops.
For sure I would agree to learn views first. It teaches data set
mentality. <<
Yes. And that is why a newbie likes it better than a VIEW. Suddenly,
you have to change your mindset, how to use DCL and WITH CHECK OPTIONs
A couple of dissentions. Within the stored procedure will be the exact
same code as is in the view, if you do the basics. The optimizer can
optimize that SQL code just like it can in the view.<<
Not really. Procedure programmer will tend to use if-then-else while a
VIEW programmer would tend to use CASE expressions. Procedure
programmer will tend to use temp tables as scratch files while a VIEW
programmer would tend to use CTE and derived tables in the query.
Stored procedures can be used to create data sets, and are MUCH more
flexible in their capabilities then views. <<


Only because they can take parameters. The two things serve different
purposes

Feb 4 '06 #9
Tony Rogerson wrote:
> If you use VIEWS to encapsulate logic and your security, just how do you
> prevent a user from connecting to the database and writing their own
> queries?
>
> Answer: With VIEWS you can't (easily), but with stored procedures they
> don't get the opportunity, nor is your schema design (and possible
> pitfalls) sent across the wire for all and sundry to see your bad habits
> - and exploit them!

One does not exclude the other. You can use procedures as the external
interface for the application and still use views within the procedures
as appropriate.
> Your view will have a different plan each time it's compiled anyway -
> when values change, statistics distribution changes, etc. It behaves
> just like a stored procedure in that respect.

Absolutely not true.
Stored procedure logic will be exactly as fast as the algorithm you
chose when you wrote it. The DBMS can neither optimize nor parallelize
stored procedure logic. It can only optimize the small pieces of SQL
that you left in.
The SQL inside the view will be as fast as the optimizer can make it,
depending on the statistics, using 30 years of research. The entire idea
of RDBMS and SQL is to separate the WHAT from the HOW. Any time you use
logic you take the HOW away from the RDBMS and you have exactly one
choice to combine results: nested loop join (aka nested cursors).
> The smart money uses stored procedures because:

The fast money perhaps, absolutely not the smart money.
Folks like Joe and I spend a lot of time digging companies who fell prey
to this thinking out of the ditch.
> 1) they neatly encapsulate logic so
>    a) it can be easily developed in a multi-person team/teams
>    b) faults can be very easily diagnosed and solved without having
>       to revert to a nasty application recompile and redistribution
> 2) they are really great for implementing proper security, your schema
>    is not exposed to anybody except administrators and app devs.

Both points above are orthogonal to the usage of views.

> 3) you can code IF ELSE to make more efficient queries instead of
>    getting general plans and tons of code that needs testing and
>    supporting

How good is your QA? Do you design limit test cases for each and every
combination of IF THEN ELSE logic? Do you believe the harm of a nested
cursor is undone by the advantage of an IF THEN ELSE statement?
I understand that SQL Server supports hints.
Try a couple of experiments forcing SQL Server to use different join
implementations (nested loop, merge, hash, ...) on decent-sized tables,
then think about whether you can afford nested loop (i.e. nested cursors).
My company sells hardware, I don't mind if the stuff you produce
requires resources beyond measure :-)
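
For anyone who wants to try that experiment, SQL Server's query hints let
you force a join strategy on an otherwise unchanged query (table names
below are invented):

-- Default: the optimizer chooses the join strategy.
SELECT o.OrderID, c.Name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- Forced variant for comparison; swap in MERGE JOIN or HASH JOIN to see
-- how the plan, I/O and elapsed time change on large tables.
SELECT o.OrderID, c.Name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (LOOP JOIN);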

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab
Feb 4 '06 #10
> One does not exclude the other. You can use procedures as the external
> interface for the application and still use views within the procedures
> as appropriate.
Absolutely, so long as you are using stored procedures as the external
access mechanism then that is a good thing.

I hope you are not suggesting you embed SQL queries into the application?

If you are embedding SQL, just how are you going to stop users from
accessing those views directly through their desktop applications? You can
be in all kinds of mess.
> Absolutely not true.
> Stored procedure logic will be exactly as fast as the algorithm you
> chose when you wrote it. The DBMS can neither optimize nor parallelize
> stored procedure logic. It can only optimize the small pieces of SQL
> that you left in.
> The SQL inside the view will be as fast as the optimizer can make it,
> depending on the statistics, using 30 years of research. The entire idea
> of RDBMS and SQL is to separate the WHAT from the HOW. Any time you use
> logic you take the HOW away from the RDBMS and you have exactly one
> choice to combine results: nested loop join (aka nested cursors).
Yes, it is true from the optimiser point of view - the execution plan will
remain in cache unless kicked out; if it is kicked out then the procedure
will be recompiled, and that means the SQL within it, which is the level at
which views operate - the proc is simply a container for the SQL.

I am not talking about writing cursors, or about placing optimiser hints on
the SQL either (which you can do with views as well).

These two, aside from an unmeasurable amount of CPU time in plan
compilation, will give the same performance...

SELECT ... FROM yourview

Or..

CREATE PROC yourproc
AS
BEGIN
    SELECT ... FROM yourview
END

The difference is the second one is modular and gives development and
support benefits far above just calling the view in isolation.
> The fast money perhaps, absolutely not the smart money.
> Folks like Joe and I spend a lot of time digging companies who fell prey
> to this thinking out of the ditch.

If you are championing embedded SQL in applications then God help those
companies, and all I can say is - catch up with the current thinking around
modular, **secure** and supportable programming methods.

The smart developer considers such things as scalability, maintainability
and supportability; they do not try to apply 80's programming models from
when most products didn't have the concept of stored procedures. I remember
- I used to program PL/1, CICS and DB2 on the mainframe that way.
> How good is your QA? Do you design limit test cases for each and every
> combination of IF THEN ELSE logic?
Your QA needs to test all possible routes through your code, period.

That is more easily accomplished at a unit test level with a small module
aka the stored procedure. Trying to find and execute all those embedded SQL
statements is a really dumb way of going about QA and design and will lead
to bugs and security problems.
> Do you believe the harm of a nested cursor is undone by the advantage of
> an IF THEN ELSE statement?
> I understand that SQL Server supports hints.
> Try a couple of experiments forcing SQL Server to use different join
> implementations (nested loop, merge, hash, ...) on decent-sized tables,
> then think about whether you can afford nested loop (i.e. nested cursors).
> My company sells hardware, I don't mind if the stuff you produce requires
> resources beyond measure :-)
I've no idea what you are getting at there.

With SQL Server you can use IF ELSE to great effect; consider this query...

SELECT ...
FROM yourtable
WHERE mycol1 = COALESCE( @myparm1, mycol1 )
  AND mycol2 = COALESCE( @myparm2, mycol2 )
etc...

In Microsoft SQL Server you will get a general plan. Now, if only @myparm1
is passed and @myparm2 is NULL, it is better to code this...

SELECT ...
FROM yourtable
WHERE mycol1 = @myparm1

In a stored procedure you can put IF ELSE logic to determine the parameters
passed and execute the correct (best) statement accordingly; if you have a
lot of parameters there are things you can do with dynamic SQL within the
stored procedure.

Tony Rogerson, SQL Server MVP
http://sqlserverfaq.com
"Serge Rielau" <sr*****@ca.ibm.com> wrote in message
news:44************@individual.net... Tony Rogerson wrote:
If you use VIEWS to encapsulate logic and your security just how do you
prevent a user from connecting to the database and writing their own
queries?

Answer: With VIEWS you can't (easily) but with stored procedures they
don't get the opporunity, nor is your schema design (and possible
pitfalls) sent across the wire for all and sundry to see your bad
habits - and exploit them!

One does not exclude the other. You can use procedures as external
interface for the application and still use views within the procedures as
appropriate.

You view will have a different plan each time its compiled anyway - when
values change, statistics distribution etc... it behaves just like a
stored procedure in that respect.

Absolutely not true.
A stored procedure logic will be exactly as fast as the algorithm you
chose when you wrote it. The DBMS can neither optimize nor parallelize
stored procedure logic. It can only optimize teh small pieces of SQL that
you left in.
The SQL inside the view will be as fast as the optimizer can make it
depending on the statistics using 30 years of research. The entire idea
RDBMS and SQL is to separate the WHAT from the HOW. Anytime you use logic
you take the HOW away from the RDBMS and you have exactly one choice to
combine results: Nested loop join (aka nested cursors).
The smart money use stored procedures because:

The fast money perhaps, absolutely not the smart money.
Folks like Joe and I spend a lot of time digging companies who fell prey
to this thinking out the ditch.
1) they neatly encapsulate logic so
a) it can be easily developed in a multi-person team/teams
b) faults can be very easily diagnosed and solved without
having to revert to a nasty application recompile and redistribution
2) they are really great for implementing proper security, your
schema is not exposed to anybody except administrators and app devs.

Both points above are orthoginal to the usage of views.
3) you can code IF ELSE to make more efficient queries instead of
getting general plans and tons of code that needs testing and supporting

How good is your QA, do you design limits testcases for each and every
combination of IF THEN ELSE logic? Do you believe the harm of a nested
cursor is undone by the advantage of an IF THEN ELSE statement?
I understant that SQL Server supports hints.
Try a couple experiements forcing SQL Server to use different join
implementations (nestedloop, merge, hash, ..) on decent sized tables, then
think about whether you can afford nested loop (i.e. nested cursors).
My company sells hardware, I don't mind if the stuff you produce requires
resources beyond measure :-)

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab

Feb 4 '06 #11
Tony Rogerson wrote:
>> One does not exclude the other. You can use procedures as the external
>> interface for the application and still use views within the procedures
>> as appropriate.
>
> Absolutely, so long as you are using stored procedures as the external
> access mechanism then that is a good thing.

OK.. perhaps we violently agree? You are hung up on embedded SQL. I'm
not debating that point at all.

> I hope you are not suggesting you embed SQL queries into the application?

No, I'm not. To each their paradigm. I'm just trying to differentiate
between two fundamentally different SQL objects.

<snip>
> These two, aside from an unmeasurable amount of CPU time in plan
> compilation, will give the same performance...
>
> SELECT ... FROM yourview
>
> Or..
>
> CREATE PROC yourproc
> AS
> BEGIN
>     SELECT ... FROM yourview
> END

So the CALL is free? No doubt it's highly optimized, but it won't be free.

You are looking only at one end of the picture.
Let's focus on the scenario where the select is NOT in the application,
but inside of a nested procedure.

Let's use the following SQL (never mind if I mess up the dialect..
that's not the point):
CREATE TABLE P(pk INT NOT NULL PRIMARY KEY, c1 INT)
CREATE TABLE C(pk INT NOT NULL PRIMARY KEY,
               fk INT FOREIGN KEY REFERENCES P(pk), c2 INT)
CREATE PROCEDURE nestedproc(IN arg)
BEGIN
    SELECT c1, c2 FROM P, C WHERE P.pk = C.fk AND P.pk = arg;
END

CREATE VIEW nestedview AS SELECT P.pk, c1, c2 FROM P, C WHERE P.pk = C.fk;

Now let's consider the following:
"What is the maximum value of c2 for a given arg?"
Using the procedure, you CALL the procedure with the argument and you
then have to process the result set (with some sort of logic, unless I
missed some T-SQL feature) to find the MAX.
Using the view I simply write:
SELECT MAX(c2) FROM nestedview
WHERE pk = arg

What's the difference? An SQL optimizer can clearly see that there is
nothing interesting in P. It will DROP access to table P altogether
because first it projects out c1, then it kicks out P using the foreign
key relationship to prove that the join is row preserving.
Even better, with the right index (fk, c2 desc) this whole thing will be
done in a single I/O.
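
In SQL Server syntax, the supporting index Serge describes might look like
this (the index name is invented), together with the query the optimizer
can reduce the view access to once P has been eliminated:

-- Supporting index for the lookup.
CREATE INDEX IX_C_fk_c2 ON C(fk, c2 DESC);

-- What the view query can be reduced to after join elimination:
DECLARE @arg INT;
SET @arg = 1;
SELECT MAX(c2) FROM C WHERE fk = @arg;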

Using the procedure you have broken this vital link between the consumer
of the data and the data access. The DBMS has no clue that c1 will never
be consumed because the procedure is written generically for users who
want to retrieve c1 and c2.
Now, even if your DBMS does not do this optimization today, chances
are it will the next time you upgrade; the DBMS will show why you paid more
for the new release than the old one.

The problem lies NOT in the application. The problem lies in the fact
that procedures are being nested and important context is being lost.
> The smart developer considers such things as scalability, maintainability
> and supportability; they do not try to apply 80's programming models from
> when most products didn't have the concept of stored procedures. I
> remember - I used to program PL/1, CICS and DB2 on the mainframe that way.

You are older than me... anyway being too heavy on procedures is poison
for scalability. When I teach customers I teach that both logic and SQL
have their place. What I call "trivial procedures" are my prime example
to explain what I mean.

Cheers
Serge

PS: Check out table functions (aka parameterized views).
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab
Feb 4 '06 #12
Actually I think we are violently agreeing lol.

With your example I would never ever do that - mind you, ask Celko about
doing paging and he would! All formatting done on the client etc...

In the proc I would write the MAX as you've done.

In the MS space, we can use simple logic without having to resort to cursors
or sending the entire result set back. Back to the parameters example....

create proc myproc
    @optional_parm1 int = NULL,
    @optional_parm2 int = NULL
as
begin
    if @optional_parm1 is not null and @optional_parm2 is not null
        select max( blah )
        from yourquery (or view)
        where col1 = @optional_parm1
          and col2 = @optional_parm2
    else if @optional_parm1 is not null
        select max( blah )
        from yourquery (or view)
        where col1 = @optional_parm1
    else if @optional_parm2 is not null
        select max( blah )
        from yourquery (or view)
        where col2 = @optional_parm2
    else
        select max( blah )
        from yourquery (or view)

end

While the above does contain logic, it will give you the best plan in MS SQL
Server. You could have used COALESCE, but you would not get a good plan; you
could also build your SQL dynamically in the stored proc based on the
parameter inputs, parameterise it and execute it.
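
A hedged sketch of that dynamic-SQL variant, reusing the placeholder names
from the procedure above (the _dynamic name and yourtable are invented):
build only the predicates for the parameters that were supplied, then run
the statement through sp_executesql so it stays parameterised - no value
concatenation, and the plan can be reused.

create proc myproc_dynamic
    @optional_parm1 int = NULL,
    @optional_parm2 int = NULL
as
begin
    declare @sql nvarchar(2000)

    set @sql = N'select max( blah ) from yourtable where 1 = 1'

    -- Append only the predicates for parameters that were actually supplied.
    if @optional_parm1 is not null
        set @sql = @sql + N' and col1 = @p1'
    if @optional_parm2 is not null
        set @sql = @sql + N' and col2 = @p2'

    -- Parameterised execution: the values are passed as parameters,
    -- not spliced into the string.
    exec sp_executesql @sql,
         N'@p1 int, @p2 int',
         @p1 = @optional_parm1,
         @p2 = @optional_parm2
end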

In some respects we are agreeing, but I'm not sure about your stance on
using IF ELSE; I would not want to go to a one-proc-per-parameter-combination
model, because it's just too much overhead when we have IF ELSE to
lessen the support and development burden.

Tony

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Serge Rielau" <sr*****@ca.ibm.com> wrote in message
news:44************@individual.net...
Tony Rogerson wrote:
One does not exclude the other. You can use procedures as external
interface for the application and still use views within the procedures
as appropriate.

Absolutely, so long as you are using stored procedures as the external
access mechanism then that is a good thing.

OK.. perhaps we violently agree? You are hung up on embedded SQL. I'm not
debating that point at all.

I hope you are not suggesting you embed SQL queries into the application?

No I'm not. Each their paradigm. I'm just trying to differentiate between
two fundamentally different SQL objects.

<snip>
These two, asside from an unmeasurable amount of CPU time in plan
compilation will give the same performance...

So the CALL is free? No doubt it's highly optimized, but it won't be free.

SELECT ... FROM yourview

Or..

CREATE PROC yourproc
AS
BEGIN
SELECT .... FROM yourview

END

You are looking only at one end of the picture.
Let's focus on the scenario where the select is NOT in the application,
but it is inside of a nested proecdure.

Let's use the following SQL (never mind if I mess up the dialect.. that's
not the point):
CREATE TABLE P(pk INT NOT NULL PRIMARY KEY, C1 INT)
CREATE TABLE C(pk INT NOT NULL PRIMARY KEY,
fk INT FOREIGN KEY T(pk), C2 INT)
CREATE PROCEDURE nestedproc(IN arg)
BEGIN
SELECT c1, c2 FROM P, C WHERE P.pk = C.fk AND P.pk = arg;
END

CREATE VIEW nestedview AS SELECT c1, c2 FROM P, C WHERE P.pk = C.fk;

Now let's consider the following:
"What is the maximum value c2 for a given arg?"
Using the procedure you CALL the procedure with the argument and you then
have to process the resultset (with swoem sort of logic unless I missed
some T-SQL feature) to find the MAX.
Using the view I simply write:
SELECT MAX(c2) FROM nestedview
WHERE pk = arg

What's the difference? An SQL optimizer can clearly see that there is
nothing interesting in P. It will DROP access to table P altogether
because first it projects out c1, then it kicks out P using the foreign
key relationship to prove that the join is row preserving.
Even better, with the right index (fk, c2 desc) this whole thing will be
done in a single I/O.

Using the procedure you have broken this vital link between the consumer
of the data and the data access. The DBMS has no clue that c1 will never
be consumed because the procedure is written generically for users who
want to retrieve c1 and c2.
Now, even if your DBMS were not to do this optimization today. Chances are
it will next time you upgrade; the DBMS will show why you paid more for
the new release than the old one.

The problem lies NOT in the application. The problem lies in the fact that
procedures are being nested and important context is being lost.
The smart developer considers such things as scalability, maintainability
and supportability, they do not try to apply 80's programming models
where most products didn't have the concept of stored procedures, I
remember - I used to program PL/1, CICS and DB2 on the mainframe that
way.

You are older than me... anyway being too heavy on procedures is poison
for scalability. When I teach customers I teach that both logic and SQL
have their place. What I call "trivial procedures" are my prime example to
explain what I mean.

Cheers
Serge

PS: Check out table functions (aka parameterized views).
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab

Feb 5 '06 #13
Tony Rogerson (to**********@sqlserverfaq.com) writes:
> While the above does contain logic, it will give you the best plan in MS
> SQL Server,

Maybe. Keep in mind that SQL Server employs parameter sniffing, so if
you first call the procedure with NULL in the input parameters, all
the SELECTs will be optimized for NULL, which may not give the best
plan.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 5 '06 #14
That's true, and true of SQL generally.

There are ways to specify a RECOMPILE though.
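
For reference, a few ways to ask SQL Server for a recompile, sketched
against the placeholder procedure above (the _recompile name is invented):

-- Option 1: mark the procedure so every execution gets a fresh plan.
create proc myproc_recompile
    @optional_parm1 int = NULL,
    @optional_parm2 int = NULL
with recompile
as
begin
    select max( blah )
    from yourtable
    where col1 = COALESCE( @optional_parm1, col1 )
      and col2 = COALESCE( @optional_parm2, col2 )
end
GO

-- Option 2: request a fresh plan for a single call of an existing proc.
exec myproc @optional_parm1 = 1 with recompile
GO

-- Option 3: flag an existing procedure for recompilation on its next run.
exec sp_recompile 'myproc'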

Tony.

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Tony Rogerson (to**********@sqlserverfaq.com) writes:
create proc myproc
@optional_parm1 int = NULL,
@optional_parm2 int = NULL
begin
if @optional_param1 is not null and @optional_param2 is not null
select max( blah )
from yourquery (or view)
where col1 = @optional_parm1
and col2 = @optional_parm2
else if @optional_param1 is not null
select max( blah )
from yourquery (or view)
where col1 = @optional_parm1
else if @optional_parm2 is not null
select max( blah )
from yourquery (or view)
where col2 = @optional_parm2
else
select max( blah )
from yourquery (or view)

end

While the above does contain logic, it will give you the best plan in MS
SQL Server,

Maybe. Keep in mind that SQL Server employs parameter sniffing, so if
you first call the procedure with NULL in the input parameters, all
the SELECT:s will be optimized for NULL, which may not give the best
plan.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx

Feb 5 '06 #15
imagine if you will 5 tables. 3 of them are over a billion rows.

you want to provide a primary key, and pull the detail data out of the
children.

For absolute, positive sure, in sql 2000, if you use a view, the engine
will do this ugly huge join hash table thing on the entire thing, THEN
look for your data.

Take the exact same code, use sense as to how the data is laid out, and you
can cut your response times from 20 seconds to sub-second where they
should be.
Could I have "optimized" the view with hints? Nope. BTDT, didn't work.
The point is the engine is trying to guess blindly at how the data
really works, where a good analyst can KNOW how the data works.
Something to think about. Doug has written sprocs that broke a view
into pieces that made the whole thing significantly faster than a giant
select statement.
Btw, the compiling of a sproc or any sql statement is trivial in the
grand scheme of data access of large systems.

SQL embedded in code is really bad. Celko is just barely past that
stage, as he thinks sprocs are limited to replacing embedded SQL, and
views are a better solution.
Not.

Feb 6 '06 #16
>> imagine if you will 5 tables. 3 of them are over a billion rows .. <<

Since you did not post DDL, I do not see how they are related or what
the query is. When you say "children" I assume you mean referencing
tables? But in the real world, if I am dealing with an RDBMS that has
billion row tables, I would not use SQL Server at all.

This is a job for Teradata, SAND, DB2, etc. I would want to have at
least table partitioning and parallelism.
>> sql embedded in code is really bad. <<

Unh? I never said anything about embedded SQL. In fact, I explicitly
said that VIEWs and Stored Procedures serve different purposes.

We used embedded SQL a lot in the early days with Cobol, PL/I and C (in
fact, I have an example in C in my book INSTANT SQL). The precompilers
were easier to use and more portable than the assorted APIs. You could
get an executable that did not reveal source code to the guys that bought
your package. But the source code showed you both the host code and
the SQL code when you maintained it.

Then along came ODBC, ADAPI and SQL/CLI, which match a tiered
architecture, and embedded SQL fell out of favor. I doubt most current
programmers would recognize this:

EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
host_name character_string(20);
host_emp_number integer;
EXEC SQL END DECLARE SECTION;
EXEC SQL WHENEVER SQLERROR STOP;
EXEC SQL CONNECT frans;

There is a website at
http://www.csc.liv.ac.uk/~frans/Note...QL.html#syntax if you
are interested.

Feb 6 '06 #17
> This is a job for Teradata, SAND, DB2, etc. I would want to have at
least table partitioning and parallelism.


LOL - read the product specs; SQL Server has partitioning and parallelism.

Jeez, you are soooo out of date on your research and opinions.

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"--CELKO--" <jc*******@earthlink.net> wrote in message
news:11**********************@g43g2000cwa.googlegr oups.com...
imagine if you will 5 tables. 3 of them are over a billion rows .. <<
Since you did not post DDL, I do not see how they are related or what
the query is. When you say "children" I assume you mean referencing
tables? But in the real world, if I am dealing with an RDBMS that has
billion row tables, I would not use SQL Server at all.

This is a job for Teradata, SAND, DB2, etc. I would want to have at
least table partitioning and parallelism.
sql embedded in code is really bad. <<


Unh? I never said anything about embedded SQL. I fact, I explicitly
said that VIEWs and Stored Procedures serve different purposes.

We used embedded SQL a lot in the early days wtih Cobol, PL/I and C (in
fact, I have an example in C in my book INSTANT SQL). The precompilers
were easier to use and more portable than the assorted APIs. You could
get an executable that did not reveal source code the guys that bought
your package. But the source code showed you both the host code and
the SQL code when you maintained it.

Then along came ODBC, ADAPI and SQL/CLI which match a teired
architecture. embedded SQL fell out of favor. I doubt most current
programmers would recognize this:

EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
host_name character_string(20);
host_emp_number integer;
EXEC SQL END DECLARE SECTION;
EXEC SQL WHENEVER SQLERROR STOP;
EXEC SQL CONNECT frans;

There is a website at
http://www.csc.liv.ac.uk/~frans/Note...QL.html#syntax if you
are interested.

Feb 6 '06 #18
On 5 Feb 2006 21:05:10 -0800, Doug wrote:
> imagine if you will 5 tables. 3 of them are over a billion rows.
>
> you want to provide a primary key, and pull the detail data out of the
> children.
>
> For absolute, positive sure, in sql 2000, if you use a view, the engine
> will do this ugly huge join hash table thing on the entire thing, THEN
> look for your data.
(snip)

Hi Doug,

I'm sorry, but this is not correct. (Unless you are using indexed views,
but in that case the results of the view are pre-computed, so it
shouldn't take any noticeable time).

For non-indexed views, SQL Server 2000 will first replace the view name
in the query with the view definition, then feed the resulting query to
the query optimizer, which will try to find a good execution plan.

The resulting plan will be the same as if you had coded the complete
query from the base tables without using the views.
> Take the exact same code, use sense as to how the data is laid out, and
> you can cut your response times from 20 seconds to sub-second where they
> should be.
(snip)
> Doug has written sprocs that broke a view into pieces that made the whole
> thing significantly faster than a giant select statement.


Yes, there will certainly be situations where you can optimize a very
big and complex query by breaking it into pieces. The optimizer is
pretty good (IMO), but certainly not flawless. It lacks some techniques
that it really should have, and if the complexity of a query increases,
it might choke on the explosion of the number of possibilities to
consider. The optimizer is also very dependent on good statistics.

On the other hand, there are also lots of queries that will perform lots
worse when you break them into pieces. I don't think that dismissing
views because you've had a bad experience with them is the right answer.
Especially since the bad experience is not really caused by the view,
but by the optimizer not doing a perfect job on a (probably) very
complicated query.

--
Hugo Kornelis, SQL Server MVP
Feb 6 '06 #19
>> SQL Server has partitioning and parallelism. <<

Yes, Tony, I ALWAYS believe what MS tells me :) . And I never look at
the foundation of the product (is it contiguous storage, bit
vector, etc.?). I always ignore the independent industry groups, and
my own experience doing consulting work with SQL products, etc. Get
old enough to know the history; partitioning and parallelism are
add-ons, not foundations, in SQL Server. There is nothing wrong with
that design, but it has trade-offs. It sucks for VLDB.
>> Jeez, you are soooo out of date on your research and opinions. <<


Jeez, you have never left your little ghetto to do any research in the
big world of RDBMS.

Feb 7 '06 #20
sellko writes:
> But in the real world, if I am dealing with an RDBMS that has
> billion row tables, I would not use SQL Server at all.

Perhaps you shouldn't be giving advice on using SQL Server on larger
databases.

Feb 7 '06 #21
>> perhaps you shouldn't be giving advice on using sql server on larger databases. <<

Think about it for a minute. Would you preach to the knowledgeable or
to the ignorant?

I know of a major home supply company that is still keeping its
contractor DB on ACCESS and will not move to SQL Server. Sure, they can
do it in ACCESS, if they re-boot a few times a day and keep a backup.
Gee, I must be out of line when I suggested that SQL Server could be a
usable solution. Duh!

There is no single magic answer to all problems. Sorry, Doug.

Feb 7 '06 #22
> There is no single magic answer to all problems. Sorry, Doug.

But in order to make that decision you need to be informed!

Geez, you don't even know SQL Server's scalability; you didn't even know it
does queries with parallelisation, and partitioning has been in the product
for 3 releases now.

Did your industry experience stop 8 years ago? Have you done any real work
since then or do you just do the odd consultancy once a quarter?

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"--CELKO--" <jc*******@earthlink.net> wrote in message
news:11**********************@g43g2000cwa.googlegr oups.com...
perhaps you shouldn't be giving advice on using sql server on larger
databases. <<


Think about it for a minute. Would you preach to the knowledgable or
to the ignorantt?

I know of major home supply company that is still keeping its
contractor DB on ACCESS and will not move to SQL Server. Sure they can
do itn ACCESS, if they re-boot a few times day and keep a back up.
Gee, I must be out off line when I suggested that SQL Server could be a
usable solution. Duh!

There is no single magic answer to all problems. Sorry, Doug.

Feb 7 '06 #23
> Yes, Tony, I ALWAYS believe what MS tells me :) . And I never look at
> the foundation of the product (is it contiguous storage, bit
> vector, etc.?). I always ignore the independent industry groups, and
> my own experience doing consulting work with SQL products, etc. Get
> old enough to know the history; partitioning and parallelism are
> add-ons, not foundations, in SQL Server. There is nothing wrong with
> that design, but it has trade-offs. It sucks for VLDB.
Go buy Ken Henderson's book, 'The Guru's Guide to SQL Server Architecture
and Internals'.

SQL Server uses a thread-based model, unlike the Sybase product; it releases
threads onto available CPUs. When necessary, queries are broken up into
streams and done **in parallel** - you can validate it's doing that by a)
looking at the query plan and b) while it is executing, using perfmon to
monitor which processes are using which CPUs.

If you don't know these fundamental things about SQL Server then you aren't
much of an expert.

Stick to logical database design and leave the implementation to people who
have been trained and have experience.

Partitioning works fine; in fact, it helped SQL Server to become the first
product to break the 1 million TPS mark on the TPC benchmarks - google it.
> Jeez, you have never left your little ghetto to do any research in the
> big world of RDBMS.

Unlike your often ambiguous ranting, I've just backed my statement up - go
get Ken's book, or go and read some of the many white papers on SQL Server
internals.

The folder you keep all your research in is 10 years out of date;
parallelism was brought into SQL Server in version 7.0 - you are talking
about 6.5, which nobody uses anymore.

LOL - you just don't understand. Your ego is undermining your credibility;
you are sooo stuck in the past with very dated and unreliable research that
it's just not funny - check your own website, celko.com: the majority of your
links don't even work and yet you do nothing about it - tut tut tut.

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"--CELKO--" <jc*******@earthlink.net> wrote in message
news:11*********************@g47g2000cwa.googlegro ups.com...
SQL Server has partitioning and parallelism. <<
Yes, Tony, I ALWAYS believe what MS tells me :) . And I never look at
the foundation of the product (is it contigous storage, bit
vector,etc., ?). I always ignore the independent industry groups, and
my own experience doing consutlign work with SQL products, etcl. Get
old enought to know the history; partitioning and parallelism are
add-ons, not fundations in SQL Server. There si nothign wrong with
that design, buit it has trade-offs. It sucks for VLDB.
Jeez you are soooo out of date on your research and opinions. <<


Jeez, you have never left your little ghetto to do any research in the
big world of RDBMS.

Feb 7 '06 #24
Tony Rogerson wrote:
> Partitioning works fine; in fact, it helped SQL Server to become the first
> product to break the 1 million TPS mark on the TPC benchmarks - google it.

Are you referring to TPC-C and TpmC? If so, the first to break 1M was
Oracle: http://www.tpc.org/tpcc/results/tpcc...p?id=103110401
The only > 1M result I see from MS is this one:
http://www.tpc.org/tpcc/results/tpcc...p?id=105112801

I do very much doubt that any vendor, SQL Server included, is using SMP
parallelism in the TPC-C benchmark, because the load is driven by the high
number of concurrent users. SMP parallelism is mostly useful for low
concurrency environments (such as warehouses (TPC-H)).

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab
Feb 7 '06 #25
Hi Serge,

You are right - sorry, Serge, I'm not sure what I was thinking of when I
stated that. Anyway, this link gives some more information on its current
performance: http://www.microsoft.com/sql/prodinfo/compare/tpcc.mspx.

It will use parallelism from the perspective of the OS. Windows uses
threads, and SQL Server allocates a number of 'worker threads' that
connections can use, so multiple connections will use multiple threads.
One connection will only use one CPU at a time unless the query runs
using parallelisation, which means that the query is broken up into
streams, each stream is allocated a processor (via threads) and executed
in parallel, and the results are merged back at the end.
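
As a hedged illustration (the table name is invented), you can see from the
query plan whether a statement goes parallel, and you can cap or disable
parallelism per query with the MAXDOP hint:

-- Let the optimizer parallelise the scan/aggregate if it is worthwhile.
SELECT Region, COUNT(*) AS OrderCount
FROM dbo.BigOrders
GROUP BY Region;

-- Same query restricted to a single scheduler (no parallel plan).
SELECT Region, COUNT(*) AS OrderCount
FROM dbo.BigOrders
GROUP BY Region
OPTION (MAXDOP 1);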

Query plans that use parallelism tend to need to do a lot of work, so they
are, like you say, best in a reporting / data warehouse environment, assuming
that you haven't used Analysis Services to build your dimensional MOLAP
database.

But, going back to my point, Celko is completely wrong: SQL Server does do
parallelism and does have partitioning, and has done for 3 versions of the
product now.

Tony.

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Serge Rielau" <sr*****@ca.ibm.com> wrote in message
news:44************@individual.net...
Tony Rogerson wrote:
Partitioning works fine, in fact, it helped SQL Server to become the
first product to break then 1million TPS on the TPC benchmarks - google
it.

Are you refering to TPC-C and TpmC? If so the first to break 1M was Oracle
http://www.tpc.org/tpcc/results/tpcc...p?id=103110401
The only > 1M result I see from MS is this one:
http://www.tpc.org/tpcc/results/tpcc...p?id=105112801

I do very much doubt that any vendor including SQL Server is using SMP
paralelism in the TPC-C benchmark because the load is driven by the high
number of concurrent users. SMP parallelism is mostly useful for low
concurrency enviroments (such as warehouses (TPC-H)).

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
DB2 UDB for Linux, Unix, Windows
IBM Toronto Lab

Feb 7 '06 #26
>> perhaps you shouldn't be giving advice on using sql server on larger databases. <<

Which industry watch group was it that also said SQL Server was not
ready for VLDB problems last year? Perhaps telling someone that they
have the wrong tool IS good advice. Unless, of course, your idea of
being a consultant is to say "The Answer is MicroSoft!" to every
question.

Feb 7 '06 #27
The answer IS Microsoft.

SQL Server works just perfectly with 1 billion records. More
importantly, SQL Server Analysis Services is the market leader in
price, performance - everything.

www.olapreport.com

Feb 9 '06 #28
been gone for a while.

There are good points here.

If you are going to do much analysis, go OLAP.

For transactions, KISS rules the day, and MS SQL Server is quite
capable of handling billions of rows.
I suppose if we were discussing solutions for MS SQL Server, we
would be on a forum with "ms sql server" in the title.......

oh wait.....

lol

regards,
doug

Feb 26 '06 #29
