Hello All,
Some time ago, I implemented a data access layer that included a simple
connection pool. At the time, I did it all myself: I created N
connections, each associated with a worker thread that would
execute the db commands. The pool was fixed and all the connections
were created when the db access class was instantiated. The
connections remained open for the whole execution. If a connection
was not available at the moment, the request would block until a
connection was freed.
In my approach, I did not use the SQL Server connection pooling (I
actually disabled it in my connection strings).
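In ADO.NET (the target platform here), the equivalent knobs live in the connection string. A minimal sketch, with placeholder server and database names that are not from the original post:

```csharp
// Hypothetical SqlClient connection strings illustrating the pooling
// keywords; "myServer"/"myDb" are placeholders.
const string pooled =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;" +
    "Min Pool Size=5;Max Pool Size=100;";   // pooling is on by default

const string unpooled =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;" +
    "Pooling=false;";                       // every Open() builds a fresh connection
```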
That was done in C++ using ADO. Now I am in a position where I need to
implement a db access layer in .NET, and I was wondering what kind of
support I can get from .NET.
I want to implement a similar model. I would appreciate it if you could
weigh in on my approach and on the ways to accomplish something similar in
ADO.NET. I would also appreciate your comments on the approach itself.
Thank you
CD
I have a few thoughts:
1. Is this a Windows Forms or ASP.NET application? The former does not really need a connection pooling mechanism, as the entire app might be able to share a common connection.
2. If it's ASP.NET, then it makes little sense to code, debug, test and deploy your own pooler. The code built into ADO.NET's .NET data providers should work fine.
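The built-in pooling Bill refers to is what the standard "open late, close early" pattern relies on; a minimal sketch, with a placeholder table name:

```csharp
using System.Data.SqlClient;

// Sketch of the open/use/close/dispose pattern; Open() is cheap after
// the first call because the provider's pool reuses the connection.
static int CountOrders(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
    {
        conn.Open();                  // borrows a connection from the pool
        return (int)cmd.ExecuteScalar();
    }                                 // Dispose() returns it to the pool
}
```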
--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker www.betav.com/blog/billva www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.com> wrote in message news:11**********************@f1g2000cwa.googlegroups.com...
CRB,
I think you have a different interpretation of connection pooling than most of us:
you see it from the application side, while the regulars here usually mean
the way the database handles all the connections it needs. AFAIK the pool
is set to 100 by default, which is usually enough to serve all applications
at the same time even with all connections open (even with several
ASP.NET applications running). Hence the advice to close your connection as
often as possible (in other words, whenever it is not needed, which in a
disconnected scenario is most of the time, in computer terms).
I hope this answers your question a little.
Cor
<cr****@yahoo.com> wrote in message
news:11**********************@f1g2000cwa.googlegroups.com...
Hello Cor,
Thank you for your message.
You are correct: I am considering connection pooling at the client
side. I am aware of, and have used, connection pooling at the server
level as you describe. In my case, for example, I open 10
connections and all of them remain open for the life of the
application, each associated with a worker thread that executes
the client's db operations.
I know that this is very different from what I hear in general. Open a
connection for every transaction and close as soon as possible.
Nevertheless, I had some motivations for implementing this pooling at
the client side:
- The machine where SQL Server was installed was shared with other
applications, and I found it easier to tune the CPU load by controlling the
throughput at the client side.
- I did not want my transaction to fail when the maximum size of the
pool was reached. This is the typical behavior when all connections in
the pool are in use.
- In some cases (when a failover occurred, for example) I ended up with
"bogus" connections re-enlisted in the pool. When those connections were
reused, the application would simply get stuck and had to be restarted.
The only way for me to eliminate this problem back then was to disable
connection pooling at the server.
- By having all connections already open, I thought I would save some
time.
This was my experience. I am not saying that this is what I would like
to do again or whether these motivations are still valid with .NET. The
nature of my application might have been the difference: It was a
real-time video application.
The nature of the application that I want to implement now is a typical
client-server using .NET (no ASP, no WEB services, no Internet,
everything in the local network). The server will be very busy due to
the projected load of users (automatic and people).
Can you see any situation where my approach would be beneficial?
Am I just making things more complicated for myself?
Do I gain anything by keeping the connections open?
Are my original motivations valid?
Your input is greatly appreciated.
Kind regards
Cassiano
Hello Bill,
Thank you for your message.
No, this will be a simple client-server application using .NET over a
local network without ASP or Windows Forms.
I would appreciate if you could comment on the reply I sent to "Cor" in
this thread.
Thank you
CD
Hello Mark,
Please see my reply to "Cor" in this thread.
Thank you
CD
Mark R. Dawson wrote:
Hi CD,
What extra functionality are you looking for that the standard connection
pooling in .NET does not give you?
Mark.
-- http://www.markdawson.org
Personally, I would *tend* to just use the inbuilt pooling in this
scenario, with the "open, use, close, dispose" model, which in
turn uses the inbuilt pooling. To throttle the connections, I might put
a semaphore (or equivalent) around my DAL, such that only <x> threads
can be using the DAL. Assuming correct "close, dispose" steps, this
should effectively limit the number of connections, and you may find that in
reality it uses *fewer* connections than your cap. You could even use a
shared semaphore (perhaps named using the server/db pair) to limit the
throughput per client machine (rather than per process/appdomain); compared to
network IO, the cost of a Win32 semaphore (as opposed to a .NET counter)
is negligible.
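A minimal sketch of that throttling idea; the class, semaphore name, and limit of 10 are invented for illustration:

```csharp
using System;
using System.Threading;

// Cap concurrent DAL use with a semaphore instead of a hand-rolled pool.
// The name makes it a Win32 named semaphore, so the cap applies per
// machine rather than per process/appdomain.
class ThrottledDal
{
    // At most 10 threads may be doing db work at once.
    private static readonly Semaphore Gate =
        new Semaphore(10, 10, "MyApp.DbGate");

    public static T Execute<T>(Func<T> dbWork)
    {
        Gate.WaitOne();          // blocks (like the original design) rather than failing
        try
        {
            return dbWork();     // open/use/close inside, using the built-in pool
        }
        finally
        {
            Gate.Release();
        }
    }
}
```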
Re your bullets, I believe this addresses them:
- throttle: the semaphore
- failing on acquire: the thread blocks on acquiring the semaphore [although
if you are hitting the pool limit, it sounds like you are simply using
too many connections?]
- dead connections: hmmm... not sure how to "fix" this one; but
assuming you handle the immediate exception, does the connection not
de-pool itself after finding itself invalid?
- already open: they will be
Just my thoughts,
Marc
Hi crbd,
Can you see any situation where my approach would be beneficial?
No.
Am I just making things more complicated for me?
Very much yes.
Do I gain anything for keeping the connections open?
Yes: more problems.
Are my original motivations valid?
I cannot see how, but if it solved something, it must have been valid.
I hope this helps,
Cor
Let's review: (see >>>)
--
William (Bill) Vaughn
<cr****@yahoo.com> wrote in message
news:11*********************@l12g2000cwl.googlegroups.com...
Hello Cor,
Thank you for your message.
You are correct. I am considering connection pooling at the client
side. I am aware of, and have used, connection pooling at the server
level as you describe below. In my case, for example, I open 10
connections and all of them remain open for the life of the
application. Each of them is associated with a worker thread that executes
the client's db operations.
>As I've said many times before (and again in Chapter 9) opening one or more connections on the client and holding them can work just fine--up to a point. SS is capable of handling a lot of connections--each consumes 40-60K of memory and some CPU cycles. It makes no difference if you use the Connection pool or not--the impact on the server with this approach is the same. The process of opening (and closing) a connection does consume client-side and server-side resources as the credentials are revalidated and the connection state is reset.
>
I know that this is very different from what I hear in general. Open a
connection for every transaction and close as soon as possible.
Nevertheless, I had some motivations for implementing this pooling at
the client side:
- The machine where SQL Server was installed was shared with other
applications and I found it easier to tune the CPU load by controlling the
throughput at the client side.
- I did not want my transaction to fail when the maximum size of the
pool was reached. This is the typical behavior when all connections in
the pool are in use.
>>If you're filling a standard (100 connections) pool, there's something wrong with your design. If each client is using that many connections, I suspect that you have a recursion problem or something wrong with your mechanism that manages your pooler.
- In some cases (when a failover occurred, for example) I ended up with
"bogus" connections re-enlisted in the pool. When those connections were
reused, the application would simply get stuck and had to be restarted.
The only way for me to eliminate this problem back then was to disable
connection pooling at the server.
- By having all connections already open, I thought I would save some
time.
>>The ADO.NET 2.0 pooler is a lot smarter about handling connections post failover. The old one was not. It also permits you to flush the pool manually.
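The manual flush Bill mentions is a minimal sketch along these lines; the method and retry policy are invented for illustration:

```csharp
using System.Data.SqlClient;

// After a failover, pooled connections may be dead. ADO.NET 2.0 lets
// you discard them explicitly instead of disabling pooling entirely.
static void RunWithPoolRecovery(SqlConnection conn, SqlCommand cmd)
{
    // Assumes conn is open and cmd is associated with it.
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException)
    {
        // Drop every pooled connection sharing conn's connection string;
        // the next Open() builds fresh ones. ClearAllPools() is the
        // more drastic variant.
        SqlConnection.ClearPool(conn);
        throw;   // let the caller retry with a fresh connection
    }
}
```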
>
This was my experience. I am not saying that this is what I would like
to do again or whether these motivations are still valid with .NET. The
nature of my application might have been the difference: It was a
real-time video application.
>>Ah... yes.
>
The nature of the application that I want to implement now is a typical
client-server using .NET (no ASP, no WEB services, no Internet,
everything in the local network). The server will be very busy due to
the projected load of users (automatic and people).
Can you see any situation where my approach would be beneficial?
>>I would have to see more of the overall design to make that call.
Am I just making things more complicated for me?
>>Perhaps...
Do I gain anything for keeping the connections open?
>>Yes: less overhead, but also less scalability (once you reach the server's limit).
Are my original motivations valid?
Your input is greatly appreciated.
Kind regards
Cassiano
Hello Bill,
Thank you for your detailed reply.
Based on your response, maybe the only advantage of keeping the
connections to the db open for the life of the application would be to
avoid the delay of opening and closing the connection. Without caching,
this would be prohibitive, of course. With caching, I imagine that the
time to open and close is significantly reduced. I wonder, however, if
this time would still compound significantly in the case of applications
that interact with the db at a very high frequency. For instance, my
application is driven by a (multithreaded) automatic process that can
issue more than 100 transactions/second (sustained). Would the compound
cost of opening/closing the connection in this scenario justify my
idea of keeping a pool of open connections at the client? So, the
question at the center of my problem is: "What is the cost of
opening/executing/closing a connection when caching is used?"
Thank you
CD
Depends on what you mean by caching in this scenario... do you mean the
in-built pooling? In that case the answer would tend to be "not much,
as long as you re-use the same connection string", where "not much"
obviously includes a few lookups inside the pool, which will be
absolutely trivial compared to network IO.
But: in general, the performance of something operating in a tight loop
can only really be assessed by testing it... since it wouldn't take
much code (less, in fact), I would suggest giving it a go...
Marc
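A sketch of what "giving it a go" might look like: time N open/execute/close cycles against N reuses of one held-open connection. The method, query, and iteration scheme are invented; run it as a release build against your own connection string:

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

static void ComparePatterns(string connStr, int iterations)
{
    var sw = Stopwatch.StartNew();
    // Pattern 1: open/use/close every time (pooled Open() after the first).
    for (int i = 0; i < iterations; i++)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();
            cmd.ExecuteScalar();
        }
    }
    sw.Stop();
    Console.WriteLine("open/use/close: " + sw.ElapsedMilliseconds + " ms");

    // Pattern 2: one connection held open for the whole loop.
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("SELECT 1", conn))
    {
        conn.Open();
        sw.Reset();
        sw.Start();
        for (int i = 0; i < iterations; i++)
            cmd.ExecuteScalar();
        sw.Stop();
    }
    Console.WriteLine("held open:      " + sw.ElapsedMilliseconds + " ms");
}
```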
If you use your own connection pooler (for this specific case), you'll find
that the additional cost of re-authenticating the SSPI credentials is
eliminated, but you'll also have to deal with server state persistence
issues. The built-in pooler deals with a lot of issues, but since you are
holding connections open and trying to share them, you really need to
consider these other issues as well. Architecturally, I would find a
strategy to use a dedicated connection for those high-traffic operations.
Only you and your own tests can really measure the impact of any pooling
architecture configuration. Remember not to measure the speed in debug mode
as it's notoriously slow.
____________________________________
William (Bill) Vaughn
<cr****@yahoo.com> wrote in message
news:11*********************@f1g2000cwa.googlegroups.com...
Hello Bill,
Thank you for your detailed reply.
Based on your response, maybe the only advantage of keeping the
connections to the db open for the life of the application would be to
avoid the delay of opening and closing the connection. Without caching,
this would be prohibitive, of course. With caching, I imagine that the
time to open and close is significantly reduced. I wonder, however, if
this time would still compound significantly in the case of applications
that interact with the db with a very high frequency. For instance, my
application is driven by a (multithreaded) automatic process that can
issue more than 100 transactions/second (sustained). Would the compound
cost of opening/closing the connection in this scenario justify my
idea of keeping a pool of open connections at the client? So, the
question at the center of my problem is: "What is the cost of
opening/executing/closing a connection when caching is used?"
Thank you
CD
William (Bill) Vaughn wrote:
>Let's review: (see >>>)
<cr****@yahoo.com> wrote in message news:11*********************@l12g2000cwl.googlegroups.com...
Hello Cor,
Thank you for your message.
You are correct. I am considering connection pooling at the client
side. I am aware of, and have used, connection pooling at the server
level as you describe below. In my case, for example, I open 10
connections and all of them remain open for the life of the
application. Each of them is associated with a worker thread that executes
the client's db operations.
>As I've said many times before (and again in Chapter 9) opening one or more connections on the client and holding them can work just fine--up to a point. SS is capable of handling a lot of connections--each consumes 40-60K of memory and some CPU cycles. It makes no difference if you use the Connection pool or not--the impact on the server with this approach is the same. The process of opening (and closing) a connection does consume client-side and server-side resources as the credentials are revalidated and the connection state is reset.
>
I know that this is very different from what I hear in general. Open a
connection for every transaction and close as soon as possible.
Nevertheless, I had some motivations for implementing this pooling at
the client side:
- The machine where SQL server was installed was shared with other
applications and I found it easier to tune the cpu load by controlling the
throughput at the client side.
- I did not want my transaction to fail when the maximum size of the
pool was reached. This was the typical behavior when all connections in
the pool were being used.
>>If you're filling a standard (100 connections) pool, there's something wrong with your design. If each client is using that many connections, I suspect that you have a recursion problem or something wrong with your mechanism that manages your pooler.
- In some cases (when a failover occurred, for example) I ended up with
"bogus" connections reenlisted in the pool. When those connections were
reused, the application would simply get stuck and had to be restarted.
The only way for me to eliminate this problem back then was to disable
connection pooling at the server.
- By having all connections already open, I thought I would save some
time.
>>The ADO.NET 2.0 pooler is a lot smarter about handling connections post failover. The old one was not. It also permits you to flush the pool manually.
>
This was my experience. I am not saying that this is what I would like
to do again or whether these motivations are still valid with .NET. The
nature of my application might have been the difference: It was a
real-time video application.
>>Ah... yes.
>
The nature of the application that I want to implement now is a typical
client-server using .NET (no ASP, no WEB services, no Internet,
everything in the local network). The server will be very busy due to
the projected load of users (automatic and people).
Can you see any situation where my approach would be beneficial?
>>I would have to see more of the overall design to make that call.
Am I just making things more complicated for me?
>>Perhaps...
Do I gain anything for keeping the connections open?
>>Yes, less overhead; but also less scalability (once you reach the server's limit).
Are my original motivations valid?
Your input is greatly appreciated.
Kind regards
Cassiano
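(For illustration of the design Cassiano describes — N connections opened up front, each request blocking until one is freed — here is a rough sketch in Python. `DummyConnection` and the pool size are invented stand-ins, not a real driver API, and a real implementation would also need the failover and state-reset handling discussed later in this thread.)

```python
import queue

class DummyConnection:
    """Hypothetical stand-in for a real database connection object."""
    def __init__(self, conn_id):
        self.conn_id = conn_id
    def execute(self, command):
        return f"conn {self.conn_id}: {command}"

class FixedPool:
    """Fixed-size pool: all N connections are created up front and stay open.
    acquire() blocks until a connection is freed, mirroring the design above."""
    def __init__(self, size):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(DummyConnection(i))
    def acquire(self, timeout=None):
        # Blocks when the pool is exhausted, like the described C++/ADO layer.
        return self._free.get(timeout=timeout)
    def release(self, conn):
        self._free.put(conn)

pool = FixedPool(size=10)
c = pool.acquire()
result = c.execute("SELECT 1")
pool.release(c)
```

queue.Queue already provides the blocking and thread-safety, which is a large part of what the hand-rolled C++ pool had to implement.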
Cor Ligthert [MVP] wrote: CRB,
The way I see it, you have a different interpretation of connection pooling than we do: you look at it from the application side, while most regulars here think of it in terms of the pool the provider keeps toward the database. AFAIK that pool is set to 100 connections, which is often enough to serve all applications at the same time even with all connections open (this can even be with several ASP.NET applications running). Hence the usual advice to close your connection as often as possible (in other words, whenever it is not needed, which in a disconnected scenario is most of the time in computer terms).
I hope this answers your question a little.
Cor
Hello All,
Thank you for all your input.
Based on your suggestions, I decided to perform a simple test to assess
the impact of pooling. I tested 3 scenarios:
(1) SQL Pooling disabled and doing open/execute/close (I knew this was
going to be the one with worst performance but I just wanted to
quantify)
(2) SQL Pooling enabled and doing open/execute/close
(3) SQL Pooling off and using a global connection (opening at the
beginning of the application and closing it only at the end)
In each scenario, I executed 1000 and 10000 queries in a loop (the same
in each case). Here are the results (times in milliseconds):

#queries    (3)    (2)     (1)
1000        172    200     1900
10000       984    1250    18406
I should note that in my tests both the database and the client
application resided in the SAME MACHINE. I do not know whether this is
a factor.
THE FUNDAMENTAL QUESTION:
In scenario (2), is any network access required to authenticate the
credentials when the connection is opened? Or is everything done at the
client side without requiring a round trip? If validation at the
server is required, then the times in (2) will be considerably higher.
If all the validation is performed at the client side, then the results
above are valid in general. They show an overhead of about 20% when
using the built-in pooling (2) when compared to the global connection
approach (3). I think this would not justify the implementation of my
own connection pool. Once I implement my own full client-side
connection pool with all the synchronization, threads, etc., I will
probably lose the advantage.
Please let me know what you think.
Kind regards
CD
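(To make the three scenarios concrete, here is a hedged toy model in Python. Counters stand in for physical connection opens; the real pooler additionally resets connection state and revalidates credentials, as Bill explains elsewhere in this thread. The sketch only shows the structural reason why (2) and (3) land close together while (1) does not: only scenario (1) pays a physical open per query.)

```python
physical_opens = 0   # stands in for real TCP/login handshakes
_pool = []           # toy version of a provider-managed pool

class DummyConn:
    """Invented stand-in for a real connection object."""
    def execute(self, sql):
        return sql.upper()        # pretend to run the query
    def close(self):
        pass                      # physical close (a no-op in the toy model)

def connect(pooling):
    """Hand out a connection; with pooling on, reuse one already opened."""
    global physical_opens
    if pooling and _pool:
        return _pool.pop()        # reuse: no new physical open
    physical_opens += 1
    return DummyConn()

def release(conn, pooling):
    if pooling:
        _pool.append(conn)        # back to the pool, still physically open
    else:
        conn.close()

def scenario(n, pooling, hold_open):
    if hold_open:                 # (3) one global connection for the run
        conn = connect(pooling)
        for _ in range(n):
            conn.execute("select 1")
        conn.close()
    else:                         # (1)/(2) open/execute/close per query
        for _ in range(n):
            conn = connect(pooling)
            conn.execute("select 1")
            release(conn, pooling)

counts = []
for pooling, hold_open in [(False, False), (True, False), (False, True)]:
    physical_opens = 0
    _pool.clear()
    scenario(1000, pooling, hold_open)
    counts.append(physical_opens)

print(counts)   # [1000, 1, 1]: only scenario (1) opens per query
```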
See >>>
--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker www.betav.com/blog/billva www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.com> wrote in message
news:11**********************@16g2000cwy.googlegroups.com...
Hello All,
Thank you for all your input.
Based on your suggestions, I decided to perform a simple test to assess
the impact of pooling. I tested 3 scenarios:
(1) SQL Pooling disabled and doing open/execute/close (I knew this was
going to be the one with worst performance but I just wanted to
quantify)
(2) SQL Pooling enabled and doing open/execute/close
(3) SQL Pooling off and using a global connection (opening at the
beginning of the application and closing it only at the end)
In each scenario, I executed 1000 and 10000 queries in a loop (the same
in each case). Here are the results (times in milliseconds):

#queries    (3)    (2)     (1)
1000        172    200     1900
10000       984    1250    18406
I should note that in my tests both the database and the client
application resided in the SAME MACHINE. I do not know whether this is
a factor.
>>Yes, to some extent. The providers used when bypassing the network perform differently (better) than those that have to interface via the NIC.
>
THE FUNDAMENTAL QUESTION:
In scenario (2), is any network access required to authenticate the
credentials when the connection is opened? Or is everything done at the
client side without requiring a round trip? If validation at the
server is required, then the times in (2) will be considerably higher.
If all the validation is performed at the client side, then the results
above are valid in general. They show an overhead of about 20% when
using the built-in pooling (2) when compared to the global connection
approach (3). I think this would not justify the implementation of my
own connection pool. Once I implement my own full client-side
connection pool with all the synchronization, threads, etc., I will
probably lose the advantage.
>>I'm of the same opinion. Consider that the pooler is dealing with any number of subtle (and some not so subtle) issues that will bite you in the butt (eventually). I expect part of the performance difference would be seen by turning on the Profiler. This would show that the reused connections are being reset on first use (not when closed). The pooler is also revalidating the SSPI credentials with the Domain so this would also add to the overhead. I wish this could be disabled, but that's not an option. It's clearly faster to leave the pooler on and let it manage the connection. It's cheaper too--it's code you don't have to debug and support. ;)
Chapter 9 in my book provides a lot more detail on getting connected,
monitoring the pool, performance issues and much more.
hth
>
Please let me know what you think.
Kind regards
CD
As Bill said, SQL Server cleans up pooled connections before each use, so if
you do your own connection pooling you will have to deal with the
consequences of not doing that. For example, if someone leaves a
transaction open, any uncommitted updates will be added to your new
transaction. The rechecking of security credentials prevents the
security hole of a user being deleted from Windows but still being able to
access SQL Server because his connection is still open in the cache.
This is a pretty small hole, so it's probably not something you would need to
replicate in your solution.
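(For completeness: with the built-in pooler, the knobs the hand-rolled pool had to implement — whether to pool at all, and the fixed pool size — are exposed as SqlClient connection-string keywords. A sketch, where `myServer` and `myDb` are placeholder names; `Pooling`, `Min Pool Size`, and `Max Pool Size` are the real keyword names:)

```
Server=myServer;Database=myDb;Integrated Security=SSPI;
Pooling=true;Min Pool Size=10;Max Pool Size=100;
```

Setting `Min Pool Size=10` roughly reproduces the "10 connections opened up front and kept open" behavior described earlier, without custom pooling code.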
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples is subject to the terms specified at http://www.microsoft.com/info/cpyright.htm
"William (Bill) Vaughn" <bi**************@nwlink.comwrote in message
news:ub**************@TK2MSFTNGP02.phx.gbl...
See >>>
--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker www.betav.com/blog/billva www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no
rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.comwrote in message
news:11**********************@16g2000cwy.googlegro ups.com...
>Hello All,
Thank you for all your input.
Based on your suggestions, I decided to perform a simple test to asses the impact of pooling. I tested 3 scenarios:
(1) SQL Pooling disabled and doing open/execute/close (I knew this was going to be the one with worst performance but I just wanted to quantify) (2) SQL Pooling enabled and doing open/execute/close (3) SQL Pooling off and using a global connection (opening at the beginning of the application and closing it only at the end)
In each scenario, I executed 1000 and 10000 queries in a loop (the same in each case). Here are the results(the times in milliseconds)
#queries (3) (2) (1) 1000 172 200 1900 10000 984 1250 18406
>I should note that in my tests both the database and the client application resided in the SAME MACHINE. I do not know whether this is a factor.
>>>Yes, to some extent. The provider used when bypassing the network perform differently (better) than those that have to interface via the NIC.
>> THE FUNDAMENTAL QUESTION: In scenario (2), is any network access required to authenticate the credentials when the connection is opened? Or everything is done at the client side without requiring a round trip? If a validation at the server is required, then the times in (2) will be considerably higher.
If all the validation is performed at the client side, then the results above are valid in general. They show an overhead of about 20% when using the built-in pooling (2) when compared to the global connection approach (3). I think this would not justify the implementation of my own connection pool. Once I implement my full own client side connection pool with all synchronizations, threads, etc, I will probably loose the advantage.
>>>I'm of the same opinion. Consider that the pooler is dealing with any number of subtle (and some not so suble) issues that will bite you in the butt (eventually). I expect part of the performance difference would be seen by turning on the Profiler. This would show that the reused connections are being reset on first use (not when closed). The pooler is also revalidating the SSPI credentials with the Domain so this would also add to the overhead. I wish this could be disabled, but that's not an option. It's clearly faster to leave the pooler on and let it manage the connection. It's cheaper too--it's code you don't have to debug and support. ;)
Chapter 9 in my book provides a lot more detail on getting connected,
monitoring the pool, performance issues and much more.
hth
>>Please let me know what you think.
Kind regards
CD
William (Bill) Vaughn wrote:
>>If you use your own connection pooler (for this specific case), you'll find that the additional cost of re-authenticating the SSPI credentials is eliminated, but you'll also have to deal with server-state persistence issues. The built-in pooler deals with a lot of issues, and since you are holding connections open and trying to share them, you really need to consider these other issues as well. Architecturally, I would find a strategy to use a dedicated connection for those high-traffic operations.
Only you and your own tests can really measure the impact of any pooling architecture configuration. Remember not to measure the speed in debug mode, as it's notoriously slow.
<cr****@yahoo.com> wrote in message news:11*********************@f1g2000cwa.googlegroups.com...
Hello Bill,
Thank you for your detailed reply.
Based on your response, maybe the only advantage of keeping the connections to the db open for the life of the application would be to avoid the delay of opening and closing connections. Without caching, this cost would be prohibitive, of course. With caching, I imagine the time to open and close is significantly reduced. I wonder, however, whether this time would still compound significantly for applications that interact with the db at a very high frequency. For instance, my application is driven by a (multithreaded) automatic process that can issue more than 100 transactions/second (sustained). Would the compound cost of opening/closing the connection in this scenario justify my idea of keeping a pool of open connections at the client? So, the question at the center of my problem is: "What is the cost of opening/executing/closing a connection when caching is used?"
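A back-of-the-envelope estimate, using the local-machine numbers from the earlier test as a rough input (real numbers over a network will differ), suggests the compound cost at 100 transactions/second is small:

```python
# Extra cost per open/close cycle, taken from the earlier local-machine
# test (pooled scenario 2 vs. held global connection scenario 3).
pooled_ms_per_op = 1250 / 10000   # 0.125 ms per open/execute/close
global_ms_per_op = 984 / 10000    # 0.0984 ms per execute on a held connection
extra_ms_per_op = pooled_ms_per_op - global_ms_per_op

tps = 100  # the sustained transaction rate mentioned above
extra_ms_per_second = extra_ms_per_op * tps
print(extra_ms_per_second)  # ~2.7 ms of extra work per second of wall time
```

On those assumptions the open/close overhead consumes well under 1% of the available time even at the sustained rate, though only a test against the real server can confirm it.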
Thank you
CD
William (Bill) Vaughn wrote: Let's review: (see >>>)
<cr****@yahoo.com> wrote in message news:11*********************@l12g2000cwl.googlegroups.com...
Hello Cor,
Thank you for your message.
You are correct. I am considering connection pooling at the client side. I am aware of, and have used, connection pooling at the server level as you describe below. In my case, for example, I open 10 connections and all of them remain open for the life of the application, each associated with a worker thread that executes the client's db operations.
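The model described above -- a fixed set of connections opened up front, with requests blocking until one is free -- can be sketched generically. The `connect` factory and the connection's `execute` method are placeholders, not ADO.NET types:

```python
import queue
import threading  # callers may share the pool across worker threads

class FixedPool:
    """Fixed-size client-side pool: N connections opened at construction;
    a request blocks until a connection is returned to the pool."""
    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # all connections opened up front

    def acquire(self, timeout=None):
        return self._pool.get(timeout=timeout)  # blocks while pool is empty

    def release(self, conn):
        self._pool.put(conn)

    def execute(self, sql, timeout=None):
        conn = self.acquire(timeout)
        try:
            return conn.execute(sql)
        finally:
            self.release(conn)  # always return the connection to the pool
```

The blocking `Queue` gives the back-pressure behavior described: when all connections are busy, callers simply wait instead of failing.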
>>>As I've said many times before (and again in Chapter 9), opening one or more connections on the client and holding them can work just fine--up to a point. SQL Server is capable of handling a lot of connections--each consumes 40-60K of memory and some CPU cycles. It makes no difference whether you use the connection pool or not--the impact on the server is the same with this approach. The process of opening (and closing) a connection does consume client-side and server-side resources, as the credentials are revalidated and the connection state is reset.
I know that this is very different from what I hear in general: open a connection for every transaction and close it as soon as possible. Nevertheless, I had some motivations for implementing this pooling at the client side:
- The machine where SQL Server was installed was shared with other applications, and I found it easier to tune the CPU load by controlling the throughput at the client side.
- I did not want my transactions to fail when the maximum size of the pool was reached. This is the typical behavior when all connections in the pool are being used.
>>>If you're filling a standard (100-connection) pool, there's something wrong with your design. If each client is using that many connections, I suspect you have a recursion problem or something wrong with the mechanism that manages your pooler.
- In some cases (when a failover occurred, for example) I ended up with "bogus" connections re-enlisted in the pool. When those connections were reused, the application would simply get stuck and had to be restarted. The only way for me to eliminate this problem back then was to disable connection pooling.
- By having all connections already open, I thought I would save some time.
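One way a hand-rolled pool can cope with the stale-connections-after-failover problem described above is to validate each connection at checkout and replace it if it is dead. This is a generic sketch: `pool` is any queue-like object, and `connect`/`is_alive` are placeholder callables (a real `is_alive` might issue a cheap probe query), not ADO.NET API:

```python
def acquire_valid(pool, connect, is_alive):
    """Checkout-time validation: take a connection from the pool, probe it;
    if it is dead (e.g. left over from a failover), discard and reconnect."""
    conn = pool.get()
    if not is_alive(conn):
        try:
            conn.close()
        except Exception:
            pass             # it is already broken; ignore close errors
        conn = connect()     # replace it with a fresh connection
    return conn
```

The probe adds a small per-checkout cost, which is the usual trade-off for not handing a broken connection to the application.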
>>>The ADO.NET 2.0 pooler is a lot smarter about handling connections post-failover. The old one was not. It also permits you to flush the pool manually.
This was my experience. I am not saying that this is what I would like to do again, or that these motivations are still valid with .NET. The nature of my application might have made the difference: it was a real-time video application.
>>>Ah... yes.
The nature of the application I want to implement now is a typical .NET client-server application (no ASP, no Web services, no Internet; everything on the local network). The server will be very busy due to the projected load of users (automated processes and people).
Can you see any situation where my approach would be beneficial?
>>>I would have to see more of the overall design to make that call.
Am I just making things more complicated for myself?
>>>Perhaps...
Do I gain anything by keeping the connections open?
>>>Yes, less overhead--but also less scalability (once you reach the server's limit).
Are my original motivations valid?
Your input is greatly appreciated.
Kind regards
Cassiano
Cor Ligthert [MVP] wrote:
CRB,
I think you have a different interpretation of connection pooling than we do; you see it from your side of the application. The regulars here mostly think of it in terms of the database handling all the connections it needs. AFAIK that limit is set to 100, which is often enough to serve all applications at the same time with all connections open (even with several ASP.NET applications running). Hence the advice to close your connection as often as possible (in other words, whenever it is not needed--which, in a disconnected scenario, is most of the time in computer terms).
I hope I answer your question a little bit with this.
Cor
Hello Bill,
Thank you for your message. Based on all the discussions, I decided to
use the built-in pooling (and also buy your book :-) ).
I tried my tests in a database over the network and found some results
that surprised me quite a bit. I compared the times of (1) reusing a
global connection versus the time of (2) using a connection from the
connection pool. In each case, I performed a sequence of queries (1000 and
10000 respectively). The times are in milliseconds.
# of queries   (1)    (2)
1000           562    578
10000          6109   5735
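A quick sanity check on those numbers: per query, the two approaches land within a few percent of each other either way, which is consistent with the network round trip dominating the cost once the server is remote (a reading of the data above, not a separate measurement):

```python
# Per-query latency from the network test above (milliseconds per query).
runs = {1000: (562, 578), 10000: (6109, 5735)}  # queries: (global (1), pooled (2))

diffs = {}
for n, (global_ms, pooled_ms) in runs.items():
    diffs[n] = (pooled_ms - global_ms) / global_ms * 100
    print(f"{n}: {global_ms / n:.3f} vs {pooled_ms / n:.3f} ms/query")
print(diffs)  # within a few percent either way
```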
I did this many times. I was expecting that the global connection
approach would be faster (as it was in the case where the db server
was on the same machine as the client). But they are basically
equivalent, as the numbers show. I monitored the activity using the SQL
profiler at the server and perfmon at the client. I noticed the extra
sp_reset_connection sent for every new query.
Does this make sense?
Cheers
CD
Yup. The folks at MS have really tuned up the pooler. Note that the new
version (2.0) supports the ability (and intelligence) to flush the pool if
any of the connections fail to connect--which means the server has gone
down. I expect there are a dozen hidden features and issues that you would
have to replicate (and support) if you took your own (slower) approach...
If you like the book, please post a review on Amazon.
Thanks
--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker www.betav.com/blog/billva www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------
"CD" <cr****@yahoo.comwrote in message
news:11**********************@j72g2000cwa.googlegr oups.com...
Hello Bill,
Thank you for your message. Based on all the discussions, I decided to
use the built-in pooling (and also buy your book :-) ).
I tried my tests in a database over the network and found some results
that surprised me quite a bit. I compared the times of (1) reusing a
global connection versus the time of (2) using a connection from the
connection pool. In each case, I perfom a sequence of queries (1000 and
10000 respectivelly). The times are in milliseconds.
# of queries (1) (2)
1000 562 578
10000 6109 5735
I did this many times. I was expecting that the global connection
approach would be faster (as it was in the case where the db server
was in the same machine as the client). But they are basically
equivalent, as the numbers show. I monitored the activity using the SQL
profiler at the server and perfmon at the client. I noticed the extra
sp_reset_connection sent for every new query.
Does this make sense?
Cheers
CD
William (Bill) Vaughn wrote:
>See >>>
-- ____________________________________ William (Bill) Vaughn Author, Mentor, Consultant Microsoft MVP INETA Speaker www.betav.com/blog/billva www.betav.com Please reply only to the newsgroup so that others can benefit. This posting is provided "AS IS" with no warranties, and confers no rights. __________________________________ Visit www.hitchhikerguides.net to get more information on my latest book: Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition) and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook) -----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.comwrote in message news:11**********************@16g2000cwy.googlegr oups.com...
Hello All,
Thank you for all your input.
Based on your suggestions, I decided to perform a simple test to asses
the impact of pooling. I tested 3 scenarios:
(1) SQL Pooling disabled and doing open/execute/close (I knew this was
going to be the one with worst performance but I just wanted to
quantify)
(2) SQL Pooling enabled and doing open/execute/close
(3) SQL Pooling off and using a global connection (opening at the
beginning of the application and closing it only at the end)
In each scenario, I executed 1000 and 10000 queries in a loop (the same
in each case). Here are the results(the times in milliseconds)
#queries (3) (2) (1)
1000 172 200 1900
10000 984 1250 18406
I should note that in my tests both the database and the client
application resided in the SAME MACHINE. I do not know whether this is
a factor.
>>Yes, to some extent. The provider used when bypassing the network perform differently (better) than those that have to interface via the NIC.
>
THE FUNDAMENTAL QUESTION:
In scenario (2), is any network access required to authenticate the
credentials when the connection is opened? Or everything is done at the
client side without requiring a round trip? If a validation at the
server is required, then the times in (2) will be considerably higher.
If all the validation is performed at the client side, then the results
above are valid in general. They show an overhead of about 20% when
using the built-in pooling (2) when compared to the global connection
approach (3). I think this would not justify the implementation of my
own connection pool. Once I implement my full own client side
connection pool with all synchronizations, threads, etc, I will
probably loose the advantage.
>>I'm of the same opinion. Consider that the pooler is dealing with any number of subtle (and some not so suble) issues that will bite you in the butt (eventually). I expect part of the performance difference would be seen by turning on the Profiler. This would show that the reused connections are being reset on first use (not when closed). The pooler is also revalidating the SSPI credentials with the Domain so this would also add to the overhead. I wish this could be disabled, but that's not an option. It's clearly faster to leave the pooler on and let it manage the connection. It's cheaper too--it's code you don't have to debug and support. ;)
Chapter 9 in my book provides a lot more detail on getting connected, monitoring the pool, performance issues and much more.
hth
>
Please let me know what you think.
Kind regards
CD
William (Bill) Vaughn wrote: If you use your own Connection pooler (for this specific case), you'll find that the additional cost of re-authenticating the SSPI credentials are eliminated, but you'll also have to deal with server state persistence issues. The built-in pooler deals with lot of issues but since you are holding connections open and trying to share them, you really need to consider these other issues as well. Architecturally, I would find a strategy to use a dedicated connection for those high-traffic operations.
Only you and your own tests can really measure the impact of any pooling architecture configuration. Remember not to measure the speed in debug mode as it's notoriously slow. ____________________________________ William (Bill) Vaughn Author, Mentor, Consultant Microsoft MVP INETA Speaker www.betav.com/blog/billva www.betav.com Please reply only to the newsgroup so that others can benefit. This posting is provided "AS IS" with no warranties, and confers no rights. __________________________________ Visit www.hitchhikerguides.net to get more information on my latest book: Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition) and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook) -----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.comwrote in message news:11*********************@f1g2000cwa.googlegro ups.com...
Hello Bill,
Thank you for your detailed reply.
Based on your response, maybe the only advantage of keeping the
connections to the db open for the life of the application would be
to
avoid the delay of opening and closing the connection. Without
caching,
this would be prohibitive, of course. With caching, I immagine that
the
time to open and close is significantly reduce. I wonder, however if
this time would still compund significantly in the case of
applications
that interact with the db with a very high frequency. For instance,
my
application is driven by an (multithreaded) automatic process that
can
issue more that 100 transactions/second (sustained). Would the
compound
cost of opening/closing/ the connection in this scenario justify my
idea of keeping a pool of open connections at the client ? So, the
question at the center of my problem is:"What is the cost for
opening/executing/closing a connection in the case caching is used?"
Thank you
CD
William (Bill) Vaughn wrote: Let's review: (see >>>)
-- ____________________________________ William (Bill) Vaughn Author, Mentor, Consultant Microsoft MVP INETA Speaker www.betav.com/blog/billva www.betav.com Please reply only to the newsgroup so that others can benefit. This posting is provided "AS IS" with no warranties, and confers no rights. __________________________________ Visit www.hitchhikerguides.net to get more information on my latest book: Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition) and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook) -----------------------------------------------------------------------------------------------------------------------
<cr****@yahoo.comwrote in message news:11*********************@l12g2000cwl.googlegr oups.com...
Hello Cor,
Thank you for your message.
You are correct. I am considering connection pooling at the
client
side. I am aware of,and have used connection pooliing at the
server
level as you describe below. In my case, for example, I open 10
connections and all of them remain open for the life of the
application. Each of them associated with a worker thread that
executes
the client's db operations.
As I've said many times before (and again in Chapter 9) opening one or more connections on the client and holding them can work just fine--up to a point. SS is capable of handling a lot of connections--each consumes 40-60K of memory and some CPU cycles. It makes no difference if you use the Connection pool or not--the impact on the server with this approach is the same. The process of opening (and closing) a connection does consume client-side and server-side resources as the credentials are revalidated and the connection state is reset.
I know that this is very different from what I hear in general.
Open
a
connection for every transaction and close as soon as possible.
Nevertheless, I had some motivations for implementing this
pooling
at
the client side:
- The machine where SQL server was installed was shared with
other
applications and I found easier to tune the cpu load by
controlling
the
throughput at the client side.
- I did not want my transaction to fail when the maximum size of
the
pool was reached. This is the typical behavior when all
connections
in
the pool were being used.
If you're filling a standard (100 connections) pool, there's something wrong with your design. If each client is using that many connections, I suspect that you have a recursion problem or something wrong with your mechanism that manages your pooler.
- In some cases (when a failover occured, for example) I ended up
with
"bogus" connections reenlisted in the pool. When those
connections
were
reused, the application would simply get stuck and had to be
restarted.
The only way for me to eliminate this problem back then was to
disable
connecion pooling at the server.
- By having all connections already open, I thought I would save
some
time.
The ADO.NET 2.0 pooler is a lot smarter about handling connections post failover. The old one was not. It also permits you to flush the pool manually.
This was my experience. I am not saying that this is what I would like to do again, or whether these motivations are still valid with .NET. The nature of my application might have been the difference: it was a real-time video application.
Ah... yes.
The nature of the application that I want to implement now is a typical client-server application using .NET (no ASP, no web services, no Internet; everything on the local network). The server will be very busy due to the projected load of users (automated clients and people).
Can you see any situation where my approach would be beneficial?
I would have to see more of the overall design to make that call.
Am I just making things more complicated for me?
Perhaps...
Do I gain anything for keeping the connections open?
Yes, less overhead; but also no, less scalability (once you reach the server's limit).
Are my original motivations valid?
Your input is greatly appreciated.
Kind regards
Cassiano
Cor Ligthert [MVP] wrote: CRB,
In my view you have a different interpretation of connection pooling than we do; you see it from your side of the application. Most regulars here think of it in terms of the database handling all the connections it needs. AFAIK the default pool size is 100, which is usually enough to serve all applications at the same time while the connections stay open (this can even be with several ASP.NET applications running). Hence the advice to close your connection as often as possible (in other words, whenever it is not needed, which in a disconnected scenario is most of the time, in computer terms).
I hope I answer your question a little bit with this.
Cor
<cr****@yahoo.com> wrote in message news:11**********************@f1g2000cwa.googlegroups.com...
Hello All,
Some time ago, I implemented a data access layer that included a simple connection pool. At the time, I did it all by myself: I created N connections, each connection associated with a worker thread that would execute the db commands. The pool was fixed and all the connections were created when the db access class was instantiated. The connections remained open during the whole execution. If a connection was not available at the moment, then the request would block until one connection was freed.
In my approach, I did not use the SQL Server connection pooling (I actually disabled it in my connection strings).
That was done in C++ using ADO. Now, I am in a position where I need to implement a db access layer in .NET. I was wondering what kind of support I can get from .NET.
I want to implement a similar model. I would appreciate it if you could weigh in on my approach and on the ways to accomplish something similar in ADO.NET. I would also appreciate your comments on the approach itself.
Thank you
CD