
Collecting different execution statistics of C++ programs

Hi,

I'm facing the problem of analyzing the memory allocation and object
creation dynamics of a very big C++ application, with the goal of
optimizing its performance and eventually also identifying memory
leaks. The application in question is the Mozilla web browser. I have
had similar tasks before in the area of compiler construction, and it
is easy to come up with many more examples where such statistics would
be useful, both for optimizing C++ applications and for analyzing
their behavior.

While trying to solve the problem on a per-project basis with custom
solutions (Mozilla, for example, provides some custom means to collect
at least some of these statistics), I realized that in many cases the
kind of statistics that is nice to have does not really depend on the
application. Therefore I will try to write down what I consider useful
execution statistics for a C++ application, and then check which tools
are available for collecting some of them.

Some of the most interesting statistics can be related to the
following characteristics of the application (the list is incomplete
and very memory-management centric):

Code execution statistics:
1) Execution time statistics for the whole application
2) Fine-grained execution time statistics for each (used) function
3) Code coverage information
4) Where was a certain function/method called from, and how often?
5) When was a certain function/method called?

Memory allocation and memory access related statistics:

6) Cumulative memory usage

7) Dynamic memory allocation information:
Which memory blocks were allocated/freed?
When was a certain memory block allocated/freed?
Which allocated blocks are never freed, i.e. leaked?

7.1) All statistics from (7), but extended with type/class information

8) Memory access dynamics:
Which memory regions were accessed by the application?
Which parts of the code accessed these memory regions?
When were certain/all memory regions accessed?

8.1) All statistics from (8), but extended with type/class information
where appropriate

C++ specific statistics:
9) Object creation statistics (a minimal counting sketch follows this
list):
How many objects of a given type/class were created - overall?
How many objects of a given type/class were created as global
variables?
How many objects of a given type/class were created on the stack?
How many objects of a given type/class were created on the heap?

10) Object creation dynamics:
Where were objects of a given class created, and how many?
When were objects of a given class created, and how many?

11) Object access dynamics:
How many read/write accesses were made to a given (or every) object of
a given (or every) class?
Where did these accesses happen?
When did these accesses happen?

12) (Non-)static method invocation dynamics:
How many invocations of a given member method have happened for a
given object/class?
Where did these invocations happen?
When did these invocations happen?
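To make (9) and (10) concrete, here is a minimal sketch of the kind of
hand-written counter this implies ("Counted" is a hypothetical helper
written for this post, not an existing library, and the counters are
not thread-safe):

#include <cstdio>

// Per-class instance counter mixin (sketch).
template <typename T>
struct Counted {
    static long created;  // total ever constructed (bullet 9)
    static long alive;    // currently alive
    Counted() { ++created; ++alive; }
    Counted(const Counted&) { ++created; ++alive; }
    ~Counted() { --alive; }
};
template <typename T> long Counted<T>::created = 0;
template <typename T> long Counted<T>::alive = 0;

// Every class to be measured has to opt in explicitly:
class Widget : public Counted<Widget> { /* ... */ };

int main() {
    Widget a, b;
    { Widget c; }  // created and destroyed again
    std::printf("Widget: created=%ld, alive=%ld\n",
                Counted<Widget>::created, Counted<Widget>::alive);
    // prints: Widget: created=3, alive=2
}

Note that this still does not record *where* and *when* each object
was created - exactly the information bullet (10) asks for.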
For some of the mentioned bullets, tools are already available:

(1) and (2) can be solved by prof/gprof-like tools.
(3) can be solved by gcov-like tools.
(4) and (5) can probably also be solved by prof/gprof and/or static
code analyzers that can build a call tree.
(6) and (7) can be solved by special debug-malloc libraries, e.g.
dmalloc or mpatrol, or by tools like Purify or Valgrind.

But I'm not aware of the tools and libraries that can be used to
collect statistics described in other bullets:

(7.1) - this is particularly interesting if you want to know "per
class" allocation statistics. In C, memory allocation primitives like
malloc/free are untyped. In C++, operator new and operator delete do
actually know the type of their arguments - at least the compiler
knows it. Unfortunately, this type information is in general not
available at the program level (only for classes that define their own
operator new; and even then it is only implicit, and there is no way
to distinguish between a call for the class itself and one for a
derived class). It is not possible with current tools, since all tools
external to the compiler "do not know enough" about the types and
classes used by the program. As a result, all type-related information
is essentially lost. And we do not have any useful
reflection/introspection facilities in C++ yet.
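To illustrate the derived-class problem just mentioned, here is a
small sketch (log_alloc is a hypothetical logging hook invented for
this example):

#include <cstddef>
#include <cstdio>
#include <new>

// Hypothetical logging hook for this sketch.
static void log_alloc(const char* cls, unsigned long n) {
    std::fprintf(stderr, "alloc %s: %lu bytes\n", cls, n);
}

struct Base {
    virtual ~Base() {}
    static void* operator new(std::size_t n) {
        log_alloc("Base", (unsigned long)n);  // class name is fixed here
        return ::operator new(n);
    }
    static void operator delete(void* p) { ::operator delete(p); }
};

struct Derived : Base { int extra[4]; };

int main() {
    Base* b = new Base;     // logged as "Base"
    Base* d = new Derived;  // also logged as "Base": the inherited
                            // operator new cannot name the derived
                            // class; only the requested size differs
    delete b;
    delete d;
}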

(8), (8.1) - these statistics are important for understanding the
dynamic behavior of the application with regard to memory usage. They
could be used for analyzing and identifying memory access patterns,
and they can provide useful input for designing or selecting more
efficient memory allocators and garbage collectors for a given
application. They could also provide some insight into the
paging/swapping behavior of the application, and eventually provide
hints for the VM manager.

(9) is also related to memory management, but it has a somewhat
broader scope. It gives you an idea of object creation on a per-class
basis.

(10) extends (9) with the dynamics of object creation; it is rather
similar to (7), but concentrates on objects.

(11) is interesting for a better understanding of object usage
patterns. It provides information at object or member-variable
granularity and can be collected on a per-object or per-class basis.

(12) is similar to (11), but collects statistics about member method
invocations.

Of course, I realize that many of the bullets I have marked as not
solved by currently available tools can be solved by some
project-specific means. But virtually all such solutions require
instrumenting the original application, most likely at the source code
level, by inserting statements for statistics collection (e.g. inside
constructors, destructors and/or other member methods, inside
operators new and delete, etc.). Even worse, this is likely to be done
by hand, since there are not that many C++ analysis tools that could
do it automatically. To recap, we have two options:

a) Instrumentation by hand

This is very inconvenient and very time-consuming, especially for big
code bases like Mozilla's. It also introduces code at the source level
that is not directly related to the application's semantics. Without
automated tools, it is probably only feasible when done right from the
beginning of a project, with the instrumentation added to each class
as it is designed. Applying such changes to an already existing big
project would be rather painful - just imagine modifying several
hundred classes by hand to insert such code.

b) Doing the instrumentation automatically

Automating the instrumentation of C++ code makes the task much easier.
This can be done either at the source code level or at the machine
code level.

When we speak about automated source-code instrumentation, tools like
AspectC++ or OpenC++, as well as other source-to-source transformation
tools, come to mind. As can easily be seen, they come from areas such
as aspect-oriented programming and meta-object protocols. This is not
surprising, since collecting statistics can be considered just an
"optional aspect" of the application. I guess it is possible to
instrument any C++ application for statistics collection using these
tools (see the sketch below), but again, this introduces changes at
the source level, which can be considered a drawback in some
situations.
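For a taste of what this looks like, a tracing aspect in AspectC++
might read roughly as follows (written from memory - treat it as a
sketch and consult the AspectC++ documentation for the exact
match-expression syntax):

// Trace.ah - AspectC++ sketch: run code before every method execution.
#include <cstdio>

aspect Trace {
    advice execution("% ...::%(...)") : before() {
        std::printf("entering %s\n", tjp->signature());
    }
};

The aspect weaver then generates an instrumented source tree that is
compiled as usual; the original sources stay untouched.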

When it comes to machine-code level instrumentation, I'm not aware of
any tools that can cope with C++. Valgrind with its plugins and Daikon
are probably the closest candidates, but they do much more than
required and slow down execution greatly. It is also not obvious
whether all of the described kinds of statistics can be gathered using
these tools. At the same time, for some other languages, especially
those with a virtual machine, e.g. Java and the .NET languages, this
can be done rather easily using aspect-oriented programming and
instrumenting at run time! Of course, these languages have much
simpler and higher-level semantics, and the availability of a virtual
machine makes it easy to intercept certain instructions and actions.
Another big advantage of these languages is their rather powerful
reflection mechanisms, which can be used at run time.

And I'd like to stress it again: a common, application-independent way
of gathering such statistics is needed. It would make the whole
process more straightforward, less error-prone, more portable and
faster. The current situation, where everybody introduces his own
solution to the same problem, is not really how it should be, is it?

Having said all that, I'd like to ask others for their opinion on the
state of execution statistics collection for C++ programs.

What are your experiences?
What tools do you use?
Where do you see shortcomings?
What statistics would you like to be able to collect?

Do you see any solutions other than instrumentation for collecting the
described kinds of statistics in C++? If instrumentation is required,
what would be better to have: source-level instrumentation or
machine-code level instrumentation?

How can this be achieved?
Should we have special tools for that?
Should we extend compilers to support this kind of instrumentation
(e.g. you could tell the compiler to insert a certain call at the
beginning/end of each function/method, or to call a certain function
before/after each 'operator new' call)?
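(GCC, for instance, already has a hook of roughly this kind: with
-finstrument-functions it inserts calls to a user-supplied pair of
functions at the entry and exit of every compiled function. A minimal
sketch:

// trace.cpp - build the application with:
//   g++ -finstrument-functions app.cpp trace.cpp
#include <cstdio>

extern "C" {
// The hooks themselves must be excluded from instrumentation,
// otherwise they would recurse into themselves.
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void* fn, void* call_site) {
    std::fprintf(stderr, "enter %p (from %p)\n", fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void* fn, void* call_site) {
    (void)call_site;
    std::fprintf(stderr, "exit  %p\n", fn);
}
}

The raw addresses can be mapped back to function names afterwards with
addr2line or dladdr. This covers function entry/exit, but none of the
per-class or per-access statistics above.)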

I'm very interested in your opinions and hope that we can have an
interesting discussion about these topics.

Best Regards,
Roman

Apr 5 '06 #1
ro*******@googlemail.com wrote:

What are your experiences?
What tools do you use?
Where do you see shortcomings?
What statistics would you like to be able to collect?

Do you see any solutions other than instrumentation for collecting the
described kinds of statistics in C++? If instrumentation is required,
what would be better to have: source-level instrumentation or
machine-code level instrumentation?

How can this be achieved?
Should we have special tools for that?
Should we extend compilers to support this kind of instrumentation
(e.g. you could tell the compiler to insert a certain call at the
beginning/end of each function/method, or to call a certain function
before/after each 'operator new' call)?

I'm very interested in your opinions and hope that we can have an
interesting discussion about these topics.

Best Regards,
Roman


Hi Roman,

a tool you should probably take a look at is libumem on Solaris. It
doesn't require any instrumentation, has almost no slowdown effect,
and you can analyze running programs. You will get detailed statistics
on which stack traces did allocations, what the distribution of
allocation sizes looks like, information on memory leaks, and a ton of
other things. I use it to hunt bugs, analyze allocation performance,
and more.

And the best thing: you just preload the shared object, set some
environment variables, and run your executable. You don't have to care
about the language your program is written in, and at any point during
program execution you can just pull a core file with gcore and run
mdb. Then you see what is going on. You can even attach to the running
process, but this will stop the program.

Concerning other performance-related problems, I again have to refer
to the Solaris on-board utilities. Take a look at cputrack, mpstat,
and friends.

Cheers,
Tom

P.S.: I just tested libumem against Mozilla and it found a ton of
leaks in libfontconfig...


Apr 6 '06 #2
Hi Tom,

Thanks a lot for your answer. Following your recommendation, I've
looked at the mentioned Solaris utilities. I had never used them
before, since I mainly develop for Linux and sometimes for Windows.

It looks like libumem is a typical (well, maybe a bit more advanced)
memory allocation debugging library. Linux systems also have things
like this, e.g. mpatrol, dmalloc, etc. Indeed, you get statistics
about what was allocated, what was leaked, and when these allocations
took place.

And cpustat & friends give you some overall application performance
stats.

I would say that these things solve problems (1) through (7) in my
categorization. But as I stated in my original mail, exactly these
bullets are not a great problem, and there are tools to solve them.

But the solution you describe, like the ones I mentioned, collects
memory allocation stats at a rather coarse granularity if you look at
bullets 7.1 and higher. What I'd like to have are statistics per
class, per object, per field, per method. Basically, this would
provide more precise information at a much finer granularity, and it
would also include type information, which is often very important.
You would be able to analyze issues at the language level rather than
at the level of the OS memory allocation API. Obviously, this type of
statistics is programming-language specific.

And I'd like to have the dynamics of memory access and modification,
not only overall stats about allocation and deallocation. By
"dynamics" I mean information about which memory regions were
accessed, when, which of these accesses led to page faults, etc., plus
statistics derived from that. If it were possible to get this, one
could try to analyze it (preferably automatically) and derive some
_memory usage patterns_. For example, the analysis might reveal that
you always access memory very sequentially, or that you always
traverse very big regions of memory, which leads to a large number of
page faults. One can think of many more use cases. Based on all that,
you get an idea of how to optimize your application: you might
redesign your data structures, or decide to use a different, more
efficient memory allocator or garbage collector.

It may not be so obvious why such detailed statistics are useful.
Well, the short answer is that they give you much more insight into
the "inner life" of your application. Standard tools are not
sufficient in many cases, especially if you really want to optimize
the memory-related performance of your application. These additional
statistics and logs could greatly improve analysis capabilities.

Cheers,
Roman

Apr 6 '06 #3
Looks like my x-posted reply disappeared into a black hole, so I'll
just post it again here.

ro*******@googlemail.com wrote:
Having said all that, I'd like to ask others for their opinion on the
state of execution statistics collection for C++ programs.

What are your experiences?
What tools do you use?
Where do you see shortcomings?
What statistics would you like to be able to collect?

OK, I'll bite.

I do most of my development on Solaris, which has a comprehensive set of
tools for application performance analysis.

The Studio analyser gives you the data for 1-5, along with the
application's memory page usage (which you didn't mention). This is a
useful data point if you wish to minimise paging. There is a garbage
collector library that gives you the data for 6.

I use my own global new/delete to give me 7.
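A minimal sketch of that technique, for readers who haven't seen it
(counters only, not thread-safe; a real version would also record call
sites, and operator new[]/operator delete[] need the same treatment):

#include <cstdlib>
#include <new>

// Global allocation counters (sketch).
static unsigned long g_allocs = 0;
static unsigned long g_frees = 0;
static unsigned long g_bytes = 0;

void* operator new(std::size_t n) throw(std::bad_alloc) {
    ++g_allocs;
    g_bytes += n;
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) throw() {
    if (p) {
        ++g_frees;  // freed bytes cannot be counted without storing
                    // the block size in a small header
        std::free(p);
    }
}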

That's as much data as I've ever required to tune an application. I've
found global memory allocation statistics adequate for tuning any local
allocators.

If I wanted to dig deeper, the dtrace utility would probably give me all
the hooks I'd require to gather most of your other data, but as I said,
I haven't had to do this.

Do you see any other solutions than instrumentation for collecting the
described kinds of statistics in C++? If instrumentation is required,
what would be better to have: a source-level instrumentation or a
machine-code level instrumentation ?

I think this is best left to platform tools, like the Solaris
performance analyser and dtrace, that can be applied to any running
executable without the requirement for an instrumented build.

How can this be achieved?
Should we have special tools for that?

I think we do.

-- Ian Collins.
Apr 6 '06 #4
Ian Collins wrote:
Looks like my x-posted reply disappeared into a black hole, so I'll
just post it again here.


When you reply to a message cross-posted to a moderated group, it
won't appear in ANY group until it has been approved in the moderated
group. Messages should never be cross-posted to clc++m and here.

Brian
Apr 6 '06 #5
Default User wrote:
Ian Collins wrote:

Looks like my x-posted reply disappeared into a black hole, so I'll
just post it again here.

When you reply to a message cross-posted to a moderated group, it
won't appear in ANY group until it has been approved in the moderated
group. Messages should never be cross-posted to clc++m and here.

I'm aware of the delay; that's why I waited 36 hours before reposting here.

--
Ian Collins.
Apr 7 '06 #6
ro*******@googlemail.com wrote:

But the solution you describe, like the ones I mentioned, collects
memory allocation stats at a rather coarse granularity if you look at
bullets 7.1 and higher. What I'd like to have are statistics per
class, per object, per field, per method. Basically, this would
provide more precise information at a much finer granularity, and it
would also include type information, which is often very important.
You would be able to analyze issues at the language level rather than
at the level of the OS memory allocation API. Obviously, this type of
statistics is programming-language specific.

If you think it doesn't address 7.1 and higher, then either I don't
get your point or you didn't have a deep enough look at libumem. Read
umem_debug(3MALLOC).

Simple example:

$ env LD_PRELOAD=/lib/libumem.so UMEM_DEBUG=audit=8 \
      UMEM_LOGGING=transaction=16M mozilla

$ mdb `pgrep mozilla-bin`
> ::umalog ! c++filt | less

T-0.000000000 addr=3a6c240 umem_alloc_56
libumem.so.1`umem_cache_free+0x50
libumem.so.1`process_free+0x78
libxpcom.so`PL_ProcessPendingEvents+0x258
libxpcom.so`unsigned nsEventQueueImpl::ProcessPendingEvents()+0x20
libwidget_gtk2.so`int event_processor_callback(_GIOChannel*,GIOCondition,void*)+0x18
libglib-2.0.so.0.400.1`g_main_dispatch+0x19c
libglib-2.0.so.0.400.1`g_main_context_dispatch+0x9c
libglib-2.0.so.0.400.1`g_main_context_iterate+0x454

T-0.000119000 addr=3a6c240 umem_alloc_56
libumem.so.1`umem_cache_alloc+0x210
libumem.so.1`umem_alloc+0x60
libumem.so.1`malloc+0x28
libxpcom.so`void nsTimerImpl::PostTimerEvent()+0x20
libxpcom.so`unsigned TimerThread::Run()+0x150
libxpcom.so`void nsThread::Main(void*)+0x8c
libnspr4.so`_pt_root+0xcc
libc.so.1`_lwp_start

T-1.589364400 addr=3a6c0d8 umem_alloc_56
libumem.so.1`umem_cache_free+0x50
libumem.so.1`process_free+0x78
libxpcom.so`PL_ProcessPendingEvents+0x258
libxpcom.so`unsigned nsEventQueueImpl::ProcessPendingEvents()+0x20
libwidget_gtk2.so`int event_processor_callback(_GIOChannel*,GIOCondition,void*)+0x18
libglib-2.0.so.0.400.1`g_main_dispatch+0x19c
libglib-2.0.so.0.400.1`g_main_context_dispatch+0x9c
libglib-2.0.so.0.400.1`g_main_context_iterate+0x454

T-1.589370300 addr=2dc8708 umem_alloc_56
libumem.so.1`umem_cache_free+0x50
libumem.so.1`process_free+0x78
libCrun.so.1`void operator delete(void*)+4
libxpcom.so`unsigned nsTimerImpl::Release()+0x58
libxpcom.so`void destroyTimerEvent(TimerEventType*)+0x10
libxpcom.so`PL_ProcessPendingEvents+0x258
libxpcom.so`unsigned nsEventQueueImpl::ProcessPendingEvents()+0x20
libwidget_gtk2.so`int event_processor_callback(_GIOChannel*,GIOCondition,void*)+0x18

T-1.589378700 addr=348b1e8 umem_alloc_32
libumem.so.1`umem_cache_free+0x50
libumem.so.1`process_free+0x78
libCrun.so.1`void operator delete(void*)+4
libnecko.so`unsigned nsFileOutputStream::Release()+0x34
libnecko.so`unsigned nsCookieService::Write()+0x50c
libnecko.so`void nsCookieService::DoLazyWrite(nsITimer*,void*)+4
libxpcom.so`void* handleTimerEvent(TimerEventType*)+0x174
libxpcom.so`PL_ProcessPendingEvents+0x1e4
This is only a very small snippet, and I limited stack tracing to 8
frames, as you can see in the setup. But you get timing information,
information about the objects allocated or freed, and the call stack
at the point of the event. And this is not the limit of libumem...
And I'd like to have the dynamics of memory access and modification,
not only overall stats about allocation and deallocation. By
"dynamics" I mean information about which memory regions were
accessed, when, which of these accesses led to page faults, etc., plus
statistics derived from that. If it were possible to get this, one
could try to analyze it (preferably automatically) and derive some
_memory usage patterns_. For example, the analysis might reveal that
you always access memory very sequentially, or that you always
traverse very big regions of memory, which leads to a large number of
page faults. One can think of many more use cases. Based on all that,
you get an idea of how to optimize your application: you might
redesign your data structures, or decide to use a different, more
efficient memory allocator or garbage collector.

This is a whole different area. These kinds of problems are IMO best
debugged from an OS-level view. I must again refer to my original
posting and point towards the Solaris tools. Page faults, for example,
can easily be analyzed using vmstat with its various options. There is
much more to all these utilities than you can see at first glance. Use
them for some time and you will get a lot out of them.

As far as possible I do my development under Solaris, although I might
be targeting another system. The reason I stick with it, and take on
the effort of supporting another target, is the availability of these
tools.

Don't forget that many performance problems that come up are related
to memory, but not caused directly by your application - dynamic
linking or I/O, for example, can be a source of problems. Sometimes
one can use a different I/O pattern and achieve drastic improvements.
It may not be so obvious why such detailed statistics are useful.
Well, the short answer is that they give you much more insight into
the "inner life" of your application. Standard tools are not
sufficient in many cases, especially if you really want to optimize
the memory-related performance of your application. These additional
statistics and logs could greatly improve analysis capabilities.


The white-box view is the best precondition for a successful analysis
of bottlenecks. It is impossible to go into the details of what kind
of information you can gather with all the tools I am referring to,
but I can list some manpages you might want to take a look at:

ld.so.1(1)
mpss.so.1(1)
ppgsz(1)
lari(1)
apptrace(1)
truss(1)
cputrack(1)
busstat(1m)

The following tools are included in Sun's Studio compilers (C, C++,
Fortran), which are available for free for any use:
collect(1)
analyzer(1)
er_print(1)
tcov(1)

And, most important: DTrace. Take a look at it. You won't regret it:
it will change the way you think about performance analysis and the
view you have of the system. You become more aware of the impact of
the interactions between your application and the OS. And most of the
improvements you can achieve this way under Solaris are also
beneficial on other OSs.
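As a quick taste - a sketch, assuming the target process is already
running and the pid provider is available - this one-liner aggregates
malloc() calls per user stack:

$ dtrace -n 'pid$target::malloc:entry { @[ustack(8)] = count(); }' \
    -p `pgrep mozilla-bin`

On Ctrl-C, dtrace prints each distinct eight-frame stack together with
the number of allocations it performed.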
Cheers,
Tom

P.S.: The days when Solaris cost a premium are over. Install it on an
old PC and test the tools. Once you find out how to use them, you will
stick with them. I have found nothing comparable under Linux, *BSD or
Windows.


Apr 7 '06 #7
Hi Ian, Hi Tom,

I have looked more closely at the Solaris tools you described, and
they really are more powerful than they seemed to me at first glance -
in particular, the DTrace tool. I'll probably try to develop something
under Solaris and get some hands-on experience.

But coming back to the question of granularity: do you claim that
having statistics and logs at the object/method/class level for C++
(including access logs at this granularity) is a sort of overkill and
does not add any value? That the statistics delivered by the tools you
described are just enough?

For example, looking at the umem_debug example given by Tom: I agree
that it is very useful. But one still has to decrypt a stack trace to
realize that a given allocation was made, say, by 'operator new' or
released by 'operator delete'. And unless you define a class-specific
'operator new' and 'operator delete' for each class, you do not really
know an object of which class was allocated/freed by a given
allocation.

Also, the fact that the various statistics come from different tools
in different formats, and cannot easily be correlated with each other,
makes the analysis harder and more time-consuming.

That is why I still think it might be very useful to also have
statistics at the programming-language abstraction level (objects,
methods, classes). And customizable instrumentation (i.e. where you
can define the actions to be taken on certain events that it tracks,
a la AOP) that takes programming-language semantics into account could
give you a lot of flexibility: it would provide the ability to collect
and process the execution information about your program in a form
closer to the semantics of the original source program, instead of
looking at everything from the OS level.

In Java, most of these things are possible without any great problems;
in C++, not yet. I'm trying to understand why. Is it because it is not
needed in usual scenarios? Or is it due to the lack of corresponding
tools for C++? Do you use the tools you mentioned because there are no
better alternatives, or would you say that you are happy with them and
do not need anything else that is more high-level?

Roman

Apr 7 '06 #8

<ro*******@googlemail.com> wrote in message
news:11**********************@g10g2000cwb.googlegroups.com...
Hi,

I'm facing the problem of analyzing the memory allocation and object
creation dynamics of a very big C++ application ...
[many types of statistics... some methods for collecting them]

b) Doing the instrumentation automatically

Automating the instrumentation of C++ code makes the task much easier.
This can be done either at the source code level or at the machine
code level.

When we speak about automated source-code instrumentation, tools like
AspectC++ or OpenC++, as well as other source-to-source transformation
tools, come to mind. As can easily be seen, they come from areas such
as aspect-oriented programming, meta-object protocols, etc.

Well, some come from there. Our perspective is that massive change is
a capability that should be in the hands of programmers, and has
nothing specific to do with aspects (except that they are a special
case).

I guess it is possible to instrument any C++ application for
statistics collection using these tools. But again, this would
introduce some changes at the source level, which can be considered a
drawback in some situations.


I don't see it that way. If you can easily instrument your code, you
can instrument as you desire, collect your data, and throw the
instrumented code away. A really good instrumentation tool knows the
complete language, can insert arbitrary probes, and can optimize probe
insertion in appropriate circumstances to minimize the Heisenprobe
effect. Where's the disadvantage?

The DMS Software Reengineering Toolkit can be (and is) used for this
purpose. The page
http://www.semanticdesigns.com/Produ...ers/index.html
has a white paper on how to insert a particular type of code coverage
probe; we do this for many languages, including C++. Changing probe
styles isn't particularly hard, and many of the kinds of data
collection activities you described could easily be implemented this
way.
--
Ira Baxter, CTO
www.semanticdesigns.com


Apr 9 '06 #9
ro*******@googlemail.com wrote:
Hi Ian, Hi Tom,

I have looked more closely at the Solaris tools you described, and
they really are more powerful than they seemed to me at first glance -
in particular, the DTrace tool. I'll probably try to develop something
under Solaris and get some hands-on experience.

But coming back to the question of granularity: do you claim that
having statistics and logs at the object/method/class level for C++
(including access logs at this granularity) is a sort of overkill and
does not add any value? That the statistics delivered by the tools you
described are just enough?
In my case, yes - both for host and embedded development.
For example, looking at the umem_debug example given by Tom: I agree
that it is very useful. But one still has to decrypt a stack trace to
realize that a given allocation was made, say, by 'operator new' or
released by 'operator delete'. And unless you define a class-specific
'operator new' and 'operator delete' for each class, you do not really
know an object of which class was allocated/freed by a given
allocation.

Also, the fact that the various statistics come from different tools
in different formats, and cannot easily be correlated with each other,
makes the analysis harder and more time-consuming.
Probably because they all do a different job and would be used where
appropriate.
That is why I still think it might be very useful to also have
statistics at the programming-language abstraction level (objects,
methods, classes). And customizable instrumentation (i.e. where you
can define the actions to be taken on certain events that it tracks,
a la AOP) that takes programming-language semantics into account could
give you a lot of flexibility: it would provide the ability to collect
and process the execution information about your program in a form
closer to the semantics of the original source program, instead of
looking at everything from the OS level.
That's what I don't like: having to instrument the program. I prefer
tools that can be applied without special builds.
In Java, most of these things are possible without any great problems;
in C++, not yet. I'm trying to understand why. Is it because it is not
needed in usual scenarios? Or is it due to the lack of corresponding
tools for C++? Do you use the tools you mentioned because there are no
better alternatives, or would you say that you are happy with them and
do not need anything else that is more high-level?

Yes. Either individually or in combination, they can tell you just
about everything you might want to know about an application, even a
third-party one (obviously no data that requires symbols).

--
Ian Collins.
Apr 9 '06 #10
