"John A Grandy" <johnagrandy-at-yahoo-dot-com> wrote:
> I'm trying to get a decent idea of the relative performance of three
> types of implementations of data-access classes in ASP.NET 2.0.
If your code is accessing a database, then database access code
(including the network round-trip I assume it's going to involve) is
going to dominate your time, and eliminating as many round-trips to the
database as possible will be your first and most fruitful source of
performance improvements. That would probably mean batching requests
together if you can't do any caching.
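As a rough sketch of what batching looks like in ADO.NET, two statements can share one `SqlCommand` (and thus one round-trip), with `NextResult()` advancing between result sets. The connection string, table, and column names below are placeholders, not anything from the original question:

```csharp
using System;
using System.Data.SqlClient;

class BatchedQuery
{
    // Both statements in one CommandText share a single round-trip.
    // Table and column names are illustrative placeholders.
    public static string BuildBatchedSql()
    {
        return "SELECT Id, Name FROM Customers WHERE Id = @id; " +
               "SELECT OrderId, Total FROM Orders WHERE CustomerId = @id";
    }

    static void Main()
    {
        // Placeholder connection string -- substitute your own.
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=true"))
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = BuildBatchedSql();
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* first result set: the customer */ }
                reader.NextResult();   // advance to the second result set
                while (reader.Read()) { /* second result set: the orders */ }
            }
        }
    }
}
```

Two queries, one network hop - versus two separate commands costing a round-trip each.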
> so many queries must be performed, implying many data-access object
> instantiations.
Database calls (implying a network round-trip) are so many times more
expensive than object instantiations that the instantiation cost is in
the realm of a rounding error. I think you need to measure the two costs
before trying too hard to optimize object instantiation.
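To put a number on the cheap side of that comparison, something like the following measures the per-object cost of instantiating a small class (the `Order` class and the iteration count are made up for illustration); compare the result against a single LAN round-trip, which is typically in the hundreds of microseconds:

```csharp
using System;
using System.Diagnostics;

class Order
{
    public int Id;
    public string Status = "new";
}

class AllocationCost
{
    // Rough nanoseconds per instantiation of a small object.
    public static double NanosecondsPerAllocation(int count)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            Order o = new Order();
            o.Id = i;   // touch the object so the loop isn't optimized away
        }
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds * 1000000.0 / count;
    }

    static void Main()
    {
        double ns = NanosecondsPerAllocation(1000000);
        // A single database round-trip is usually ~100000+ ns by comparison.
        Console.WriteLine("~" + ns + " ns per instantiation");
    }
}
```

A crude micro-benchmark, but usually enough to show the two costs are orders of magnitude apart.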
> In the course of writing a scalable ASP.NET 2.0 web-app, has anyone done
> any benchmarking (either formal or informal) ... or has a general sense
> of the relative performance of these 3 implementations?
With respect to memory management on .NET, probably one of the most
important things is to try to achieve a low percentage of time spent on
GC, which in turn means reducing the rate of generation 2 garbage
collections. Keeping gen-2 GCs low implies being aware of how the GC
works, of how large your objects are, and how long you are keeping them
in memory - but you have to measure GC% before you know this is where
your problem is.
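In production the "% Time in GC" performance counter is the metric to watch; as a quick self-contained sketch, `GC.CollectionCount` (available since .NET 2.0) can show how many collections per generation a given piece of work triggers. The delegate type and the workload below are illustrative assumptions:

```csharp
using System;

class GcCounters
{
    public delegate void Work();

    // Returns how many gen-0/1/2 collections occurred while 'work' ran.
    public static int[] CollectionsDuring(Work work)
    {
        int g0 = GC.CollectionCount(0);
        int g1 = GC.CollectionCount(1);
        int g2 = GC.CollectionCount(2);
        work();
        return new int[] {
            GC.CollectionCount(0) - g0,
            GC.CollectionCount(1) - g1,
            GC.CollectionCount(2) - g2 };
    }

    static void Main()
    {
        int[] c = CollectionsDuring(delegate
        {
            // Short-lived garbage: should die in gen-0, rarely reach gen-2.
            for (int i = 0; i < 1000000; i++) { byte[] buf = new byte[128]; }
        });
        Console.WriteLine("gen0=" + c[0] + " gen1=" + c[1] + " gen2=" + c[2]);
    }
}
```

If a request scenario drives the gen-2 number up, that's where the object-lifetime analysis below starts to matter.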
The most important thing is to measure: benchmark a 'typical' request
scenario and find out if your time is really going into object
instantiations. When you start talking about the database, I seriously
doubt that micro-optimizing allocations is going to be of much benefit.
Allocating an object in .NET is the cost of incrementing a pointer and
running whatever code is in the constructor - i.e. not much cost at all
and most of it is under your control through the constructor. Objects
get more expensive if you attach them to structures which are going to
live for a while, because that gives them a chance to:
1) Be promoted out of gen-0 into gen-1 (and thus usually out of the CPU
cache)
2) or (worst-case scenario) have a mid-life crisis (live until gen-2 and
then become garbage)
The shorter your objects' lives with respect to the overall allocation
rate, the better. Sometimes it's cheaper to allocate and initialize a
new object than it is to keep a cached one around, depending on how big
it is and how costly it is to initialize - but measure.
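One way to run that measurement is to time the two strategies side by side; here's a minimal sketch (buffer size and iteration count are arbitrary choices) comparing clearing a cached buffer against allocating a fresh, pre-zeroed one:

```csharp
using System;
using System.Diagnostics;

class CacheVsAllocate
{
    const int Size = 4096;
    static byte[] cached = new byte[Size];

    // Reuse strategy: keep one buffer alive and re-initialize it each time.
    public static double TimeReuseMs(int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            Array.Clear(cached, 0, Size);
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }

    // Fresh strategy: allocate each time; new arrays come already zeroed.
    public static double TimeFreshMs(int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            byte[] buf = new byte[Size];
            buf[0] = 1;   // touch it so the allocation isn't optimized away
        }
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }

    static void Main()
    {
        int n = 100000;
        // Which wins depends on object size and init cost -- measure yours.
        Console.WriteLine("reuse: " + TimeReuseMs(n) + " ms, " +
                          "fresh: " + TimeFreshMs(n) + " ms");
    }
}
```

Note no expected winner is printed: the whole point is that the answer depends on your object size and initialization cost.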
If it's easy to restructure your code to avoid allocations, then it
might be worthwhile (but measure first!), but if it means bending over
backwards not to allocate, then it almost certainly isn't worth it. You
should look at your algorithms and the actual code running on the
hottest paths first, IMHO.
-- Barry