Havatcha wrote:
Does anyone have a benchmark for the processing overhead of the STL
Vector class, vs a C style array?
I would dearly love to use Vectors, but am paranoid about slowing my
real-time code down. Can anyone reassure?
As an embedded programmer, I use vector regularly. As Stroustrup notes,
there are many problems with arrays in C++
(http://www.research.att.com/~bs/bs_faq2.html#arrays), and the overall
development time of a project can often be reduced by using a smarter
container like vector because it helps reduce programming errors.
As for speed, since all the calls should be inlined in release mode, it
really should have little to no overhead (apart from the additional
allocation time compared to statically declared arrays), and in debug
mode you automatically get range checking and so forth without
cluttering your code with #ifdefs.
In any case, run some tests and see what happens. Your worries sound
like you might be prematurely optimizing, which is also a pet peeve of
Herb Sutter (http://www.gotw.ca/publications/mill09.htm):
Beware Premature Optimization
If you're a regular reader of this column, you'll already be familiar
with my regular harangues against premature optimization. The rules
boil down to: "1. Don't optimize early. 2. Don't optimize until you
know that it's needed. 3. Even then, don't optimize until you know
what's needed, and where."
By and large, programmers--that includes you and me--are notoriously
bad at guessing the actual space/time performance bottlenecks in their
own code. If you don't have performance profiles or other empirical
evidence to guide you, you can easily spend days optimizing something
that doesn't need optimizing and that won't measurably affect runtime
space or time performance. What's even worse, however, is that when you
don't understand what needs optimizing you may actually end up
pessimizing (degrading your program) by saving a small cost while
unintentionally incurring a large cost. Once you've run performance
profiles and other tests, and you actually know that a particular
optimization will help you in your particular situation, then it's the
right time to optimize.
By now, you can probably see where I'm going with this: The most
premature optimization of all is an optimization that's enshrined in
the standard. In particular, vector<bool> intentionally favors "less
space" at the expense of "slower speed"--and forces this optimization
choice on all programs. This optimization choice implicitly assumes
that virtually all users of a vector of bools will prefer "less space"
at the expense of "potentially slower speed," that they will be more
space-constrained than performance-constrained, and so on. This is
clearly untrue; on many popular implementations a bool is the same size
as an 8-bit char, and a vector of 1,000 bools consumes about 1K of
memory. Saving that 1K is unimportant in many applications. (Yes,
clearly a 1K space saving is important in some environments, such as
embedded systems, and clearly some applications may manipulate vectors
containing millions of bools where the saved megabytes are real and
significant.) The point is that the correct optimization depends on the
application: If you are writing an application that manipulates a
vector<bool> with 1,000 entries in an inner loop, it is much more
likely that you would benefit from potentially faster raw performance
due to reduced CPU workload (without the overhead of proxy objects and
bit-fiddling) than from the marginal space savings, even if the space
savings would reduce cache misses.
Cheers! --M