Bytes | Software Development & Data Engineering Community

C# transactions v SQL transactions

I've been using C# transactions for a while and had no problems with
them. Using try catch blocks I can trap basically all possible errors
and rollback all necessary data.
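The pattern I mean looks roughly like this (a sketch only - the connection string, table, and SQL are made-up placeholders, not my real code):

```csharp
// Sketch of the "C# transaction" try/catch pattern (ADO.NET).
// Table name, column names, and SQL text are hypothetical.
using System.Data.SqlClient;

public class TransferDemo
{
    public void Transfer(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlTransaction tran = conn.BeginTransaction();
            try
            {
                SqlCommand cmd = conn.CreateCommand();
                cmd.Transaction = tran;

                cmd.CommandText =
                    "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1";
                cmd.ExecuteNonQuery();

                cmd.CommandText =
                    "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2";
                cmd.ExecuteNonQuery();

                tran.Commit();
            }
            catch
            {
                // Any exception from either statement lands here,
                // and both updates are rolled back together.
                tran.Rollback();
                throw;
            }
        }
    }
}
```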

Over the last few days I've been trying to convert some of this code to
SQL Server stored procedures, but it seems to lack many of the benefits
of C# transactions - a lot of the errors don't seem to be trapped by the
SQL error trapping (e.g. if I do an update on a row that doesn't exist,
no rollback occurs and execution continues, if a table I am trying to
access doesn't exist then the sproc crashes).
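For context on why this happens: on SQL Server 2000, a statement-level error does not abort the batch by default, and an UPDATE that matches zero rows is not an error at all. You have to check @@ERROR (and @@ROWCOUNT, if zero rows should count as failure) by hand after every statement - a sketch, with hypothetical table names:

```sql
-- SQL Server 2000-era manual error checking (hypothetical tables).
-- An UPDATE matching zero rows leaves @@ERROR at 0, so @@ROWCOUNT
-- must be tested explicitly if "no such row" should trigger a rollback.
BEGIN TRAN

UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1
IF @@ERROR <> 0 OR @@ROWCOUNT = 0
BEGIN
    ROLLBACK TRAN
    RETURN
END

UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2
IF @@ERROR <> 0 OR @@ROWCOUNT = 0
BEGIN
    ROLLBACK TRAN
    RETURN
END

COMMIT TRAN
```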

Can anybody tell me what reasons there are for writing your transactions
in stored procedures rather than your .NET code, as there don't seem to
be any to me.
Any assistance would be really appreciated.
Cheers,

Mike
*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!
Nov 16 '05 #1
Mike P wrote:
Can anybody tell me what reasons there are for writing your transactions
in stored procedures rather than your .NET code, as there don't seem to
be any to me.


I can think of ...
Smaller and faster client applications
Changing procedures without modifying the client application
List goes on... :)
Nov 16 '05 #2
What do you mean by a C# transaction?

Are you talking about System.EnterpriseServices.ServicedComponent? or

SqlConnection.BeginTransaction?

or something else?

Regards

Richard Blewett - DevelopMentor
http://staff.develop.com/richardb/weblog

nntp://news.microsoft.com/microsoft.public.dotnet.languages.csharp/<e4**************@TK2MSFTNGP14.phx.gbl>

[snip - quote of original post]


[microsoft.public.dotnet.languages.csharp]
Nov 16 '05 #3
Mike,

Personally, I think that managing transaction state in your stored
procedures is a very, very bad idea. It limits how they can be used
elsewhere, and for the most part, they are harder to maintain (as you have
more and more sprocs calling each other, maintaining transaction state
becomes a pain).

I would recommend using EnterpriseServices for handling transaction
state. There are a number of reasons for this, the best one being easy
maintainability (you can turn transactions on and off with the flick of a
switch, or change their behavior). Another reason is that it offers the
easiest migration path to Indigo.
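A minimal sketch of what that looks like - class and method names here are invented for illustration; the attributes are the real EnterpriseServices ones:

```csharp
// Minimal ServicedComponent sketch (EnterpriseServices / COM+).
// The [Transaction] attribute only supplies the initial setting;
// an administrator can change the behavior later in Component
// Services without touching the code.
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class OrderManager : ServicedComponent
{
    // [AutoComplete] votes to commit when the method returns normally
    // and to abort when it throws - no explicit BEGIN/COMMIT/ROLLBACK.
    [AutoComplete]
    public void PlaceOrder(int customerId)
    {
        // Data access here (e.g. calling stored procedures) enlists
        // in the COM+ distributed transaction automatically.
    }
}
```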

Hope this helps.

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Mike P" <mr*@telcoelectronics.co.uk> wrote in message
news:e4**************@TK2MSFTNGP14.phx.gbl...
[snip - quote of original post]

Nov 16 '05 #4
Hi mrp:

As others have commented, error handling in stored procs is very
difficult to do - it tends to clutter up the T-SQL quite a bit and it's
terribly easy to overlook a problem. It's an approach I try to avoid.

SQL 2005 has some improvements, but that's not here yet, and if you are
concerned about portability at all it's not an option.
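The improvement in question is structured TRY...CATCH in T-SQL, which brings the sproc-side handling much closer to the C# version. A sketch, with a hypothetical table:

```sql
-- SQL Server 2005+ structured error handling.
BEGIN TRY
    BEGIN TRAN
    UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1
    UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2
    COMMIT TRAN
END TRY
BEGIN CATCH
    -- Any error in the TRY block lands here, like a C# catch.
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN
    -- ERROR_NUMBER(), ERROR_MESSAGE() etc. are available here.
END CATCH
```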

--
Scott
http://www.OdeToCode.com

On Thu, 23 Sep 2004 06:44:40 -0700, Mike P
<mr*@telcoelectronics.co.uk> wrote:
[snip - quote of original post]


Nov 16 '05 #5
Are you talking about database transactions or something else? Database
transactions, i.e., BEGIN TRAN, COMMIT TRAN, ROLLBACK TRAN, really should not
be used in the front-end application. You greatly increase your risk of
locking, blocking, inconsistent data, corrupt data, and long-running
transactions (which lead back to locking and blocking), uncommitted
transactions, and more. Transactions should be kept as short as possible to
keep database access clean. I take the totally opposite stance and say that
transactions and typed SQL statements (even though you didn't mention these,
they go hand-in-hand with creating transactions in the FE) should never
exist in the front-end. I believe that front-end access should be limited
to executing stored procedures.
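The front-end style I'm advocating, sketched in C# - the procedure and parameter names are hypothetical:

```csharp
// Front end only executes a stored procedure; all transaction
// logic lives server-side, inside the sproc itself.
using System.Data;
using System.Data.SqlClient;

public class OrderClient
{
    public void PlaceOrder(string connectionString, int customerId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("usp_PlaceOrder", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```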

Eric


Nicholas Paldino [.NET/C# MVP] wrote:
[snip - quoted text]

Nov 16 '05 #6
Eric,

I would agree with you, transactions should not exist in the front end
(nor should dynamic SQL statements).

I am referring to the business layer. It is at this level that the
transactions should be controlled, and not by the programmer, but rather,
through the administrative tool (which gets the initial values from
attributes declared on the class).

The reason for this is that it allows for easy extensibility in the
future. For example, say you have your transaction code in your stored
procedure now, and some piece of code that calls the stored procedure.
Later on, you have a requirement to write to a file when the procedure
completes, and not to write to the file when it fails. You would want some
sort of transaction manager to coordinate this. Of course, this assumes
that the file system is transactional (which it is in Longhorn).

The point is that when you start performing transaction management in
the resource that is being handled by the transaction, you severely limit the
other business processes that the resource can be included in. The file is
a simple case, but what about message queues? What about both queues and
emails? As you add more resources to the transaction state (outside of the
database), you need a way to abort them all.

Because I am going to make my business process transactional, and not
just my database operations, I use something like Enterprise Services, or
the new transaction model being introduced in .NET 2.0. I believe that the
mindset of the db being the only transactional resource is on its way to
being well behind us, and that external entities are needed to control all
of these resources, should something fail while trying to perform
operations on them.
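The .NET 2.0 model I mean is System.Transactions. A minimal sketch - the connection string and SQL are placeholders:

```csharp
// System.Transactions (.NET 2.0): resources opened inside the scope
// enlist automatically. scope.Complete() votes to commit; disposing
// the scope without calling Complete() rolls everything back.
using System.Data.SqlClient;
using System.Transactions;

public class BusinessProcess
{
    public void DoWork(string connectionString)
    {
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "UPDATE Accounts SET Balance = 0 WHERE Id = 1", conn);
                cmd.ExecuteNonQuery();
            }

            // Other transactional resources (MSMQ, a second database, ...)
            // could enlist in this same scope here.

            scope.Complete();
        }
    }
}
```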

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com
"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in message
news:%2***************@TK2MSFTNGP14.phx.gbl...
[snip - quoted text]


Nov 16 '05 #7
I do not believe we are discussing the same thing. I am strictly speaking
about SQL Server transactions, and as you have indicated, you are talking
about business layer "transactions." If the requirement that file I/O take
place were passed to the DBA writing the stored procedure, he should kick
it back and say that it is outside the scope of the SQL Server transaction.

SQL Server transactions are strictly for maintaining data integrity and
consistency, not to mention they give you a named place to restore to
instead of a datetime stamp. I believe you would benefit from not one, but
both, i.e., the data integrity transaction and the process integrity
transaction.

Eric

Nicholas Paldino [.NET/C# MVP] wrote:
[snip - quoted text]

Nov 16 '05 #8
Eric,

What I'm trying to say is that for any resource (DB included), if you
have the resource itself determine its transaction state, instead of a
transaction coordinator, it then becomes difficult to integrate that
resource into larger-scale transactions.

Because of that, I would recommend against handling transactions in SP
code, and have an external transaction coordinator (Enterprise
Services/COM+) handle it for you. Because SQL Server is registered as
having a resource manager, it can allow ES/COM+ to handle the transaction
management for it.

Ultimately, yes, ES/COM+ would call the resource manager for SQL
Server, and issue the appropriate transaction commands, based on
configuration. The idea here is to avoid locking those procedures into a
pattern which would be difficult to integrate other operations into.

So, for SQL Server, or any resource that supports transactions, I would
never write a stored procedure that has transaction management code in it.

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in message
news:Op**************@TK2MSFTNGP09.phx.gbl...
[snip - quoted text]


Nov 16 '05 #9
My gut reaction is "I don't like it." That said, I am not an expert on
Enterprise Services or the .NET 2.0 transaction model. I feel that this
external transaction coordinator (ETC) would cause me performance problems
in my SQL Server because (if I am understanding it properly) the ETC would
begin a transaction, start some process, get data from disparate sources,
write a text file, send data to a web service, and so on; then some element
bombs out and the ETC says "OK, let's remain consistent, roll everything
back!". But suppose the item that bombed out had a timeout error at 20
seconds. I take it that my SQL Server portion - which was an update on a
commonly used table, and which happened to update enough rows that the lock
was escalated from a row lock to a table lock - also had to wait those 20
seconds (plus the other elements' transaction times) before its transaction
was rolled back. Meanwhile, my users trying to enter orders got blocked.

I believe the purpose of the ETC is probably business "consistency", and
I'm all for that - don't get me wrong, I'm all about the consistency :-) -
but it probably comes at a performance price. I see that happening when you
take the database transaction away from the database, making it dependent
not on the DML (data modification language, i.e., the inserts, updates, and
deletes) but on the business process instead.

If you could guarantee that the ETC waits on the sql server transaction as
the last process I could be convinced, but I don't see how you get scalable
and properly performing applications if your database is forced to wait on
other non-sql processes.

Eric


Nicholas Paldino [.NET/C# MVP] wrote:
[snip - quoted text]

Nov 16 '05 #10
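For readers following along, the client-side pattern Mike describes — BeginTransaction plus try/catch — might look roughly like the sketch below (assuming .NET 2.0-era ADO.NET; the connection string, Accounts table, and column names are invented for illustration). It also shows why "an update on a row that doesn't exist" slips past server-side error trapping: matching zero rows is not an error, so the rows-affected count has to be checked explicitly.

```csharp
using System;
using System.Data.SqlClient;

class TransferExample
{
    // Sketch only: table and column names are placeholders.
    static void UpdateBalance(string connStr, int accountId, decimal delta)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlTransaction tran = conn.BeginTransaction();
            try
            {
                SqlCommand cmd = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance + @delta " +
                    "WHERE AccountID = @id", conn, tran);
                cmd.Parameters.AddWithValue("@delta", delta);
                cmd.Parameters.AddWithValue("@id", accountId);

                // An UPDATE that matches no rows succeeds silently in
                // SQL Server, so check the rows-affected count ourselves.
                int rows = cmd.ExecuteNonQuery();
                if (rows == 0)
                    throw new InvalidOperationException(
                        "Account " + accountId + " does not exist.");

                tran.Commit();
            }
            catch
            {
                tran.Rollback();
                throw;
            }
        }
    }
}
```

Inside a stored procedure the equivalent guard is @@ROWCOUNT, since @@ERROR remains 0 after a zero-row UPDATE — which is exactly the behavior Mike observed.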
Eric,

I can understand your gut reaction, and to be honest, it's a different
way of thinking than most people are accustomed to. However, I would make
the argument that if you have a unit of work that takes 20 seconds, then you
have a bigger problem anyways (such as your design, where you are defining
your transaction, process, assembly, etc, etc boundaries).

You could argue that the ETC (COM+) is for business consistency, but at
the same time, these larger "transactions" on resources really, for the most
part, should have a one-to-one mapping between your business operations and
the transaction. This doesn't mean that you can't nest transactions within
each other (and indeed, for the most part, you should), but rather, one
completed unit of work is one business transaction, and that should map to
your action in COM+. Now you can have other units of work, but they all
contribute to the success or failure of the overall unit of work that they
are included in (which requires a transaction).

Is there a performance hit? Absolutely. Depending what you are doing,
it varies, but I would say that for every aspect in Enterprise Services that
you add (security, out of process, transactions, etc, etc), performance
suffers by a factor of 10.

Now people freak out when they see this, as they think that anything
that runs ten times slower is automatically a bad thing. This is not so. On
a 1.6 GHz IBM ThinkPad, I could implement a full ES application, integrating
security, out of process calls, saves to the database, transactioning,
object pooling, etc, etc, etc, and get you anywhere between 20 and 40 calls
a second. This is a single processor machine, hosting SQL server, COM+
running out of process, etc, etc.

For what most people do, that kind of throughput is more than
acceptable. If it is not, then that is fine, but they have to accept the
consequences of their actions. ES offers a tremendous amount of
infrastructure (as do most other environments), and having to code and
maintain that yourself would be a nightmare.

Also, the moment that you introduce another resource that is involved in
a transaction, you have to write the code that will basically handle the
commitment or abortion of the transaction should anything fail. That kind
of coordination is something that most programmers should not have to do,
and should never do.

The only way you could make sure that SQL Server is the last process to
be performed is to code it that way, so that it is hit last. However, in
any correctly designed component based system, your operations on SQL server
are going to be encapsulated as part of a framework which can be used
anywhere in COM+.

Your argument about a 20 second lock on the table is a good one, but
that goes against the inherent guidelines for COM+ processes. Transactions
should be relatively quick. If you are performing an operation that takes
20 seconds in one transaction, it should be re-designed. Most likely, it can be
re-designed so that it is performed in other committable units of work, and
then have a final command which will commit all of them (through a distinct
flag in the db on the records which is switched, or something of that
nature).

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com
"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in message
news:Ow**************@TK2MSFTNGP14.phx.gbl...
My gut reaction is "I don't like it." That said, I am not an expert on
Enterprise Services or the .NET 2.0 Transaction model. I feel that this
external transaction coordinator (ETC) would cause me performance problems
in my sql server because (if I am understanding it properly) the ETC would
begin a transaction and say, start some process (transaction), get data
from disparate sources, write a text file, send data to a web service,
more, more, more, then some element bombs out and the ETC says "OK let's
remain consistent, roll everything back!". But the item that bombed out
had a timeout error at 20 seconds, so I take it that my sql server
portion, which was an update on a commonly used table, which happened to
update enough rows that the lock was escalated from a row lock to a table
lock, also had to wait for these 20 seconds (plus other elemental
transaction times) before its transaction was rolled back. Meanwhile, my
users trying to enter orders got blocked.

I believe the purpose of the ETC is probably for business "consistency"
and I'm all for that - don't get me wrong, I'm all about the consistency
:-) , but it probably comes at a performance price, which I see happening
when you take the database-transaction away from the database and you make
it no longer dependent on the DML (data manipulation language, i.e., the
inserts, updates, and deletes) and you make it dependent on the business
process instead.

If you could guarantee that the ETC waits on the sql server transaction as
the last process I could be convinced, but I don't see how you get
scalable and properly performing applications if your database is forced
to wait on other non-sql processes.

Eric


Nicholas Paldino [.NET/C# MVP] wrote:
Eric,

What I'm trying to say is that for any resource (DB included), if
you have the resource itself determine its transaction state, instead
of a transaction coordinator, it then becomes difficult to integrate
that resource into larger-scale transactions.

Because of that, I would recommend against handling transactions
in SP code, and have an external transaction coordinator (Enterprise
Services/COM+) handle it for you. Because SQL Server is registered as
having a resource manager, it can allow ES/COM+ to handle the
transaction management for it.

Ultimately, yes, ES/COM+ would call the resource manager for SQL
server, and issue the appropriate transaction commands, based on
configuration. The idea here isn't to lock those procedures into a
pattern which would be difficult to integrate other operations into.

So, for SQL server, or any resource that supports transactions, I
would never write a stored procedure that has transaction management
code in it.

"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in message
news:Op**************@TK2MSFTNGP09.phx.gbl...
I do not believe we are discussing the same thing. I am strictly
speaking about SQL Server transactions and as you have indicated you
are talking about business layer "transactions." If the requirement
were passed to the DBA who was writing the stored procedure that
File IO needs to take place, he should kick it back and say that is
outside the scope of the sql server transaction.

SQL Server transactions are strictly for maintaining data integrity
and consistency, not to mention they give you a named place to
restore to instead of a datetime stamp. I believe you would benefit
from not one, but both, i.e., the data integrity transaction and the
process integrity transaction.

Eric

Nicholas Paldino [.NET/C# MVP] wrote:
Eric,

I would agree with you, transactions should not exist in the
front end (nor should dynamic SQL statements).

I am referring to the business layer. It is at this level that
the transactions should be controlled, and not by the programmer,
but rather, through the administrative tool (which gets the initial
values from attributes declared on the class).

The reason for this is that it allows for easy extensibility in
the future. For example, say you have your transaction code in your
stored procedure now. Say that you have some piece of code that
calls the stored procedure. Later on, you have a requirement to
write to a file when the procedure completes, and to not write to
the file when it fails. You would want some sort of transaction
manager to coordinate this. Of course, this assumes that the file
system is transactional (which it is in Longhorn).
The point is that when you start performing transaction
management in the resource that is being handled by the transaction,
you severely limit the other business processes that the resource
can be included in. The file is a simple case, but what about
message queues? What about both queues and emails? As you add
more resources to the transaction state (outside of the database),
you need a way to abort them all.
It is because of the fact that I am going to make my business
process transactional, and not just my database operations
transactional that I use something like Enterprise Services, or the
new Transaction model being introduced in .NET 2.0. I believe that
the mindset of the db being the only transactional resource is well on
its way to being behind us, and that external entities are
needed to control all of these resources, should something fail
while trying to perform operations on them.

"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in
message news:%2***************@TK2MSFTNGP14.phx.gbl...
> Are you talking about database transactions or something else?
> Database transactions, i.e., BEGIN TRAN, COMMIT TRAN, ROLLBACK TRAN,
> really should not be used in the front-end application. You
> greatly increase your risk to locking, blocking, inconsistent
> data, corrupt data, long running transactions (which leads back to
> locking and blocking), uncommitted transactions, and more.
> Transactions should be kept as short as possible to keep database
> access clean. I take
> the totally opposite stance and say transactions and typed sql
> statements (even though you didn't mention these but they would go
> hand-in-hand with someone who was creating transactions in the FE)
> should never exist in the front-end. I believe that the only
> access from a front-end should be limited to executing stored
> procedures.
>
> Eric
>
>
>
>
> Nicholas Paldino [.NET/C# MVP] wrote:
>> Mike,
>>
>> Personally, I think that managing transaction state in your
>> stored procedures is a very, very bad idea. It limits how they
>> can be used elsewhere, and for the most part, are harder to
>> maintain (as you have more and more sp's calling each other,
>> trying to maintain transaction state is a pain).
>>
>> I would recommend using EnterpriseServices for handling
>> transaction state. There are a number of reasons for this, the
>> best one being easy maintainability (you can turn transactions on
>> and off with the flick of a switch, or change their behavior).
>> Another reason to handle this would be because it offers the
>> easiest translation path to Indigo.
>> Hope this helps.
>>
>>
>> "Mike P" <mr*@telcoelectronics.co.uk> wrote in message
>> news:e4**************@TK2MSFTNGP14.phx.gbl...
>>> I've been using C# transactions for a while and had no problems
>>> with them. Using try catch blocks I can trap basically all
>>> possible errors and rollback all necessary data.
>>>
>>> Over the last few days I've been trying to convert some of this
>>> code to SQL Server stored procedures, but it seems to lack many
>>> of the benefits of C# transactions - a lot of the errors don't
>>> seem to be trapped by the SQL error trapping (e.g. if I do an update
>>> on a row that doesn't exist, no rollback occurs and execution
>>> continues, if a table I am trying to access doesn't exist then
>>> the sproc crashes). Can anybody tell me what reasons there are
>>> for writing your
>>> transactions in stored procedures rather than your .NET code, as
>>> there don't seem to be any to me.
>>>
>>>
>>> Any assistance would be really appreciated.
>>>
>>>
>>> Cheers,
>>>
>>> Mike
>>>
>>>
>>> *** Sent via Developersdex http://www.developersdex.com ***
>>> Don't just participate in USENET...get rewarded for it!


Nov 16 '05 #11
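To make the Enterprise Services approach Nicholas describes concrete: the transaction is declared with attributes on a ServicedComponent rather than coded in T-SQL or in a try/catch. A minimal sketch, assuming System.EnterpriseServices as it shipped with .NET 1.1/2.0 — the class, methods, and resource calls are invented, and a real component also needs a strong-named assembly registered with COM+:

```csharp
using System;
using System.EnterpriseServices;

// The [Transaction] attribute only supplies the initial setting; an
// administrator can change it in the COM+ catalog without recompiling,
// which is the "flick of a switch" maintainability mentioned above.
[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    // [AutoComplete] commits the distributed transaction when the method
    // returns normally and aborts it if an exception escapes.
    [AutoComplete]
    public void PlaceOrder(int customerId, decimal amount)
    {
        // Every resource manager touched here (SQL Server, MSMQ, ...)
        // enlists in the same DTC transaction automatically; the stored
        // procedures themselves contain no BEGIN TRAN.
        SaveOrderToDatabase(customerId, amount);
        QueueConfirmationMessage(customerId);
    }

    private void SaveOrderToDatabase(int customerId, decimal amount)
    {
        // ... ordinary ADO.NET code, with no transaction statements ...
    }

    private void QueueConfirmationMessage(int customerId)
    {
        // ... e.g. a System.Messaging send ...
    }
}
```

Instead of [AutoComplete], a method can vote explicitly with ContextUtil.SetComplete() and ContextUtil.SetAbort(), which gives finer control over when the distributed transaction outcome is decided.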
Your arguments "for" are good enough that it's something I would be
interested in looking at. Perhaps I don't know enough (or anything for that
matter) about COM+ servers and how to use them. Thanks for the good
discussion.

Eric

Nicholas Paldino [.NET/C# MVP] wrote:
Eric,

I can understand your gut reaction, and to be honest, it's a
different way of thinking than most people are accustomed to. However, I
would make the argument that if you have a unit of work
that takes 20 seconds, then you have a bigger problem anyways (such
as your design, where you are defining your transaction, process,
assembly, etc, etc boundaries).
You could argue that the ETC (COM+) is for business consistency,
but at the same time, these larger "transactions" on resources
really, for the most part, should have a one-to-one mapping between
your business operations and the transaction. This doesn't mean that
you can't nest transactions within each other (and indeed, for the
most part, you should), but rather, one completed unit of work is one
business transaction, and that should map to your action in COM+. Now you
can have other units of work, but they all contribute to the
success or failure of the overall unit of work that they are included
in (which requires a transaction).
Is there a performance hit? Absolutely. Depending what you are
doing, it varies, but I would say that for every aspect in Enterprise
Services that you add (security, out of process, transactions, etc,
etc), performance suffers by a factor of 10.

Now people freak out when they see this, as they think that
anything that runs ten times slower is automatically a bad thing. This
is not so. On a 1.6 GHz IBM ThinkPad, I could implement a full ES
application, integrating security, out of process calls, saves to the
database, transactioning, object pooling, etc, etc, etc, and get you
anywhere between 20 and 40 calls a second. This is a single
processor machine, hosting SQL server, COM+ running out of process,
etc, etc.
For what most people do, that kind of throughput is more than
acceptable. If it is not, then that is fine, but they have to accept
the consequences of their actions. ES offers a tremendous amount of
infrastructure (as do most other environments), and having to
code and maintain that yourself would be a nightmare.

Also, the moment that you introduce another resource that is
involved in a transaction, you have to write the code that will
basically handle the commitment or abortion of the transaction should
anything fail. That kind of coordination is something that most
programmers should not have to do, and should never do.

The only way you could make sure that SQL Server is the last
process to be performed is to code it that way, so that it is hit
last. However, in any correctly designed component based system,
your operations on SQL server are going to be encapsulated as part of
a framework which can be used anywhere in COM+.

Your argument about a 20 second lock on the table is a good one,
but that goes against the inherent guidelines for COM+ processes.
Transactions should be relatively quick. If you are performing an
operation that takes 20 seconds in one transaction, it should be re-designed.
Most likely, it can be re-designed so that it is performed in other
committable units of work, and then have a final command which will
commit all of them (through a distinct flag in the db on the records
which is switched, or something of that nature).
"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in message
news:Ow**************@TK2MSFTNGP14.phx.gbl...
My gut reaction is "I don't like it." That said, I am not an expert
on Enterprise Services or the .NET 2.0 Transaction model. I feel
that this external transaction coordinator (ETC) would cause me
performance problems in my sql server because (if I am understanding
it properly) the ETC would begin a transaction and say, start some
process (transaction), get data from disparate sources, write a text
file, send data to a web service, more, more, more, then some
element bombs out and the ETC says "OK let's remain consistent, roll
everything back!". But the item that bombed out had a timeout error
at 20 seconds, so I take it that my sql server portion, which was an
update on a commonly used table, which happened to update enough
rows that the lock was escalated from a row lock to a table lock,
also had to wait for these 20 seconds (plus other elemental
transaction times) before its transaction was rolled back. Meanwhile, my
users trying to enter orders got blocked. I believe the purpose of the
ETC is probably for business
"consistency" and I'm all for that - don't get me wrong, I'm all
about the consistency :-) , but it probably comes at a performance
price, which I see happening when you take the database-transaction
away from the database and you make it no longer dependent on the
DML (data manipulation language, i.e., the inserts, updates, and
deletes) and you make it dependent on the business process instead.

If you could guarantee that the ETC waits on the sql server
transaction as the last process I could be convinced, but I don't
see how you get scalable and properly performing applications if
your database is forced to wait on other non-sql processes.

Eric


Nicholas Paldino [.NET/C# MVP] wrote:
Eric,

What I'm trying to say is that for any resource (DB included), if
you have the resource itself determine its transaction state,
instead of a transaction coordinator, it then becomes difficult to
integrate that resource into larger-scale transactions.

Because of that, I would recommend against handling transactions
in SP code, and have an external transaction coordinator (Enterprise
Services/COM+) handle it for you. Because SQL Server is registered
as having a resource manager, it can allow ES/COM+ to handle the
transaction management for it.

Ultimately, yes, ES/COM+ would call the resource manager for SQL
server, and issue the appropriate transaction commands, based on
configuration. The idea here isn't to lock those procedures into a
pattern which would be difficult to integrate other operations into.

So, for SQL server, or any resource that supports transactions, I
would never write a stored procedure that has transaction management
code in it.

"Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in
message news:Op**************@TK2MSFTNGP09.phx.gbl...
I do not believe we are discussing the same thing. I am strictly
speaking about SQL Server transactions and as you have indicated
you are talking about business layer "transactions." If the
requirement were passed to the DBA who was writing the stored
procedure that File IO needs to take place, he should kick it back
and say that is outside the scope of the sql server transaction.

SQL Server transactions are strictly for maintaining data integrity
and consistency, not to mention they give you a named place to
restore to instead of a datetime stamp. I believe you would benefit
from not one, but both, i.e., the data integrity transaction and
the process integrity transaction.

Eric

Nicholas Paldino [.NET/C# MVP] wrote:
> Eric,
>
> I would agree with you, transactions should not exist in the
> front end (nor should dynamic SQL statements).
>
> I am referring to the business layer. It is at this level that
> the transactions should be controlled, and not by the programmer,
> but rather, through the administrative tool (which gets the
> initial values from attributes declared on the class).
>
> The reason for this is that it allows for easy extensibility
> in the future. For example, say you have your transaction code
> in your stored procedure now. Say that you have some piece of
> code that calls the stored procedure. Later on, you have a
> requirement to write to a file when the procedure completes, and
> to not write to the file when it fails. You would want some sort
> of transaction manager to coordinate this. Of course, this
> assumes that the file system is transactional (which it is in
> Longhorn). The point is that when you start performing
> transaction management in the resource that is being handled by the
> transaction, you severely limit the other business processes that
> the resource can be included in. The file is a simple case, but
> what about message queues? What about both queues and emails? As you
> add more resources to the transaction state (outside of
> the database), you need a way to abort them all.
> It is because of the fact that I am going to make my business
> process transactional, and not just my database operations
> transactional that I use something like Enterprise Services, or
> the new Transaction model being introduced in .NET 2.0. I
> believe that the mindset of the db being the only transactional
> resource is well on its way to being behind us, and that external
> entities are needed to control all of these resources, should
> something fail while trying to perform operations on them.
>
> "Eric Sabine" <mopar41@mail_after_hot_not_before.com> wrote in
> message news:%2***************@TK2MSFTNGP14.phx.gbl...
>> Are you talking about database transactions or something else?
>> Database transactions, i.e., BEGIN TRAN, COMMIT TRAN, ROLLBACK
>> TRAN, really should not be used in the front-end application. You
>> greatly increase your risk to locking, blocking, inconsistent
>> data, corrupt data, long running transactions (which leads back
>> to locking and blocking), uncommitted transactions, and more.
>> Transactions should be kept as short as possible to keep database
>> access clean. I take
>> the totally opposite stance and say transactions and typed sql
>> statements (even though you didn't mention these but they would
>> go hand-in-hand with someone who was creating transactions in
>> the FE) should never exist in the front-end. I believe that the
>> only access from a front-end should be limited to executing
>> stored procedures.
>>
>> Eric
>>
>>
>>
>>
>> Nicholas Paldino [.NET/C# MVP] wrote:
>>> Mike,
>>>
>>> Personally, I think that managing transaction state in your
>>> stored procedures is a very, very bad idea. It limits how they
>>> can be used elsewhere, and for the most part, are harder to
>>> maintain (as you have more and more sp's calling each other,
>>> trying to maintain transaction state is a pain).
>>>
>>> I would recommend using EnterpriseServices for handling
>>> transaction state. There are a number of reasons for this, the
>>> best one being easy maintainability (you can turn transactions
>>> on and off with the flick of a switch, or change their
>>> behavior). Another reason to handle this would be because it
>>> offers the easiest translation path to Indigo.
>>> Hope this helps.
>>>
>>>
>>> "Mike P" <mr*@telcoelectronics.co.uk> wrote in message
>>> news:e4**************@TK2MSFTNGP14.phx.gbl...
>>>> I've been using C# transactions for a while and had no problems
>>>> with them. Using try catch blocks I can trap basically all
>>>> possible errors and rollback all necessary data.
>>>>
>>>> Over the last few days I've been trying to convert some of this
>>>> code to SQL Server stored procedures, but it seems to lack many
>>>> of the benefits of C# transactions - a lot of the errors don't
>>>> seem to be trapped by the SQL error trapping (e.g. if I do an
>>>> update on a row that doesn't exist, no rollback occurs and
>>>> execution continues, if a table I am trying to access doesn't
>>>> exist then the sproc crashes). Can anybody tell me what reasons
>>>> there are
>>>> for writing your
>>>> transactions in stored procedures rather than your .NET code,
>>>> as there don't seem to be any to me.
>>>>
>>>>
>>>> Any assistance would be really appreciated.
>>>>
>>>>
>>>> Cheers,
>>>>
>>>> Mike
>>>>
>>>>
>>>> *** Sent via Developersdex http://www.developersdex.com ***
>>>> Don't just participate in USENET...get rewarded for it!

Nov 16 '05 #12
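For completeness: the "new Transaction model being introduced in .NET 2.0" referred to in this thread shipped as System.Transactions. TransactionScope keeps transaction control out of both the stored procedure and the ADO.NET code proper, and only escalates to a distributed (DTC) transaction when a second resource manager enlists. A hedged sketch — the connection string, table, and SQL text are placeholders:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class ScopeExample
{
    static void TransferFunds(string connStr)
    {
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                // Opening inside the scope enlists the connection in the
                // ambient transaction automatically.
                conn.Open();

                new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance - 100 " +
                    "WHERE AccountID = 1", conn).ExecuteNonQuery();
                new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance + 100 " +
                    "WHERE AccountID = 2", conn).ExecuteNonQuery();
            }

            // If Complete() is never reached (e.g. an exception is thrown),
            // the transaction rolls back when the scope is disposed.
            scope.Complete();
        }
    }
}
```

Neither the stored procedures nor the individual commands contain any transaction statements, which is essentially the separation of concerns argued for in this thread.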

This thread has been closed and replies have been disabled. Please start a new discussion.
