Bytes | Software Development & Data Engineering Community

Is C faster than C++

In one of my interviews, some people asked me why C is faster than C++, and told
me to illustrate at least two reasons.

I can't find the answer on the web.

I'd appreciate any suggestions on this.
Thank you.
Sep 18 '05 #1
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.
I can't find the answer in the web.
It isn't faster.
I'll appreciate any suggestion on this.


Go to different interviews.

Sep 18 '05 #2
* zhaoyandong:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.


Probably they wanted to check whether you understood the difference between C
and C++. And if you did, you would have answered that every C program can
also be expressed in C++ with just minor cleaning up of syntax and
declarations, and that very often the same compiler is used for C and C++. So
given a fast C program, you also have a just-as-fast C++ program.

And then you might have gone on to talk about how some of that performance
could be traded for shorter development time and general maintainability, in
C++, but not in C.

And concluded that neither language is inherently faster than the other, but
that C++ gives you a much wider range of practical development techniques: all
of C, plus plus (ah, _that_'s what the "++" stands for!).

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Sep 18 '05 #3
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.


The question is like asking "which is faster, a Lamborghini or a
Ferrari". The answer should be: they're only as fast as their driver's
skill can make them go.
Sep 18 '05 #4
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and
tell me to illustrate at least two reasons.


It's a stupid question. C++ is mostly a superset of C. There's no reason why
a C program compiled by a C compiler should be faster than the same program
compiled by a C++ compiler. And if you use features that exist only in C++,
such as virtual functions, you would have to simulate the dynamic dispatch
in C somehow to do a comparison, and C shouldn't be able to do it any
faster. Or you might need a completely different design in C, and you can't
generalize about the performance comparison of functionally equivalent
programs of different design. The interviewers are either clueless about C++
or they expected you to tell them they were talking nonsense.

DW
Sep 19 '05 #5
On Mon, 19 Sep 2005 07:20:22 +0800, "zhaoyandong" <zh*********@sina.com.cn>
wrote:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.


The idea that C is inherently faster than C++ is a myth, but one that I
encounter regularly, especially with ex-C programmers who don't really
understand the C++ language.

I'd go to a different interview if I wanted to use C++ in my daily work, or
else prepare to regularly fight an uphill battle against people who hold on to
that unfounded prejudice.

-dr
Sep 19 '05 #6
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.


Never be reluctant to ask for clarification or to disagree with the
premise of a question, particularly an interview question. After all,
the question may be designed specifically to test for those traits. In
other words, how the candidate handles the loaded question is being
assessed, not the content of the answer given.

In this case, asking for clarification would be in order:

"By what measurement have you found C++ to be slower than C?"

If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.

Otherwise, feel free to question the premise of the question,
particularly a loaded question:

"Not having any specific measurements, I would not be able
to conclude that C++ is slower than C."

Greg

Sep 19 '05 #7
Greg wrote:
[snip] In this case, asking for clarification would be in order:

"By what measurement have you found C++ to be slower than C?"

If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.


In fact it is *never* an issue, because it isn't slower. Not if you
compare a virtual function call with the *equivalent* functionality
in C. And that is all that matters: virtual function calls in code
are there for a reason. So you need to satisfy that reason in C or C++.
When doing so, one figures out that C++ virtual functions are the fastest
possible way to satisfy that reason. Plus there is one benefit: maintainability.

Other than that, I agree with everything else you said.

--
Karl Heinz Buchegger
kb******@gascad.at
Sep 19 '05 #8

"zhaoyandong" <zh*********@sina.com.cn> wrote in message
news:dg**********@mail.cn99.com...
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.


Tell them that performance bottlenecks are usually lurking in unanticipated
locations in the code, so you use a profiler and attempt to identify where
any bottlenecks are before wasting time optimizing the non-bottlenecks.
Sep 19 '05 #9
Greg wrote:
....
If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.


Bzzt - wrong.

It is NOT a given that virtual calls are slower than non-virtual. On
some architectures, they may even be faster than regular function calls.

Performance is one of those things that depends on the situation at hand.
Sep 19 '05 #10
C++ leads to very generic code. But it may be the case that you do
not always use such genericity. For instance, you can have a generic sort
function in C++. But if you only ever use a specific 'quicksort on
doubles', maybe a naive C or Fortran program written specifically for
this would be faster. My opinion is that C++ may introduce some hidden
costs that may be visible only to an experienced programmer. The lower the
level of programming, (maybe) the lower the overheads. This is IMHO.

Sep 19 '05 #11
Ganesh wrote:
C++ leads to a very generic code. But it may be the case that you do
not use such genericity always. For instance, you can have generic sort
function in C++. But if you always just use a specific 'quicksort on
doubles', may be a naive C or Fortran program written specifically for
this would be faster.
The sorting thing is a very poor example: A naive C program would use qsort
and be slower, since the call to the comparison function is not inlined,
whereas the use of templates allows the compiler to inline and optimize
the whole instantiation of sort for the particular type.

In order to actually beat C++ std::sort() you would have to roll your own
sorting code for, say, doubles in C. Now, when you do that, you could as
well just do it in C++ and provide a new sorting template. [Also, every
once in a while I try to beat the sorting and nth_element routines in the
STL of my compiler. It is *very* hard to just get even. I assure you:
implementing quick-sort for some built in data type is not going to beat
std::sort().]

My opinion is that C++ may introduce some hidden
costs, that may be visible only to an experienced programmer. Lower the
level of programming, (maybe) lower the overheads. This is IMHO.


In C++ you can program as low-level as you can in C. Also, not every level
of abstraction incurs overhead at run-time. Templates incur overhead at
compile time but can considerably improve performance during run time.
Best

Kai-Uwe Bux

Sep 19 '05 #13
Ganesh wrote:
My opinion is that C++ may introduce some hidden costs, that may be
visible only to an experienced programmer. Lower the level of
programming, (maybe) lower the overheads. This is IMHO.


What overhead? I don't call overhead something so minimal that nobody but an
expert can measure it. And the cost of programming at a lower level is
usually very visible.

And by the way, there are no hidden costs. You can take the object code and
inspect it, provided you have that experienced programmer at hand.

--
Salu2
Sep 19 '05 #14

Karl Heinz Buchegger wrote:
Greg wrote:

[snip]
In this case, asking for clarification would be in order:

"By what measurement have you found C++ to be slower than C?"

If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.


In fact it is *never* an issue, because it isn't slower. Not if you
compare a virtual function call with the *equivalent* functionality
in C. And that is all that matters: virtual function calls in a code
are there for a reason. So you need to satisfy that reason in C or C++.
When doing so, one figures out, that C++ virtual functions are the fastest
possible way to satisfy that reason. Plus there is one benefit: maintainability.

Other then that, I agree to everything else you said.

--
Karl Heinz Buchegger
kb******@gascad.at


I would agree that the difference in dispatch time between a virtual
function call and a direct call should not be an issue in a
well-written C++ program; but that statement is not the same as saying
the difference is always negligible in any C++ program, no matter how
it is written. Or that a C++ programmer need not be aware of the
difference.

For instance, one could imagine that adding a virtual method to a class
like std::string would have a measurable negative effect on the
performance of a C++ program with many stack-based strings. Clearly, a
virtual method in this case is a bad idea. But that fact may
not be readily apparent to a C programmer. In other words, a C++
programmer has to be aware that virtual function calls are not
completely free; and as a consequence, methods should not be declared
virtual indiscriminately.

C++ is obviously a more complex language than C. Greater complexity
does not necessarily imply less efficiency. But it does require more
care at times to recognize inefficiency when it arises. I believe this
is true if for no other reason than that there are simply more ways to
make such mistakes in C++. Now, just to be clear: I am not arguing that
C++ is too dangerous a language in which to write programs. On the
contrary - I'm not really stating anything other than that the benefits
of C++'s greater expressiveness cannot be realized in the absence of a
solid understanding of the language itself.

Greg

Sep 20 '05 #15
Gianni Mariani wrote:
Greg wrote:
...
If the response is "average dispatch time per function call," you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.


Bzzt - wrong.

It is NOT a given that virtual calls are slower than non-virtual. On
some architectures, they may even be faster than regular function calls.

Performance is one of those things that depends on the situation at hand.


But it is not impossible either.

And remember the scenario. In response to our challenge, the
interviewer produces a huge stack of highly detailed profiling data
that conclusively shows that for the program profiled, calls dispatched
to virtual functions required more cycles than direct calls.

So what is the candidate to do now?

One approach would be to angrily denounce the data as a pack of lies,
while slamming his or her fist on the table. This approach is a
surefire way to be hired if the intent of the question was to assess
the candidate's passion for C++. If that was not the intent of the
question, though, this technique may have mixed results.

The problem with the "angry denunciation" response though, is that the
data could very well be accurate. It cannot be dismissed.

Next, there is the approach that you are advocating: for the candidate
to argue that this data can be ignored since there are other programs
in which virtual function calls are not slower. But that response is
not well connected to the situation at hand. And it is of little
comfort here, if the program that the company sells happens not to be
one of those programs for which virtual function calls are faster than
direct calls. Again, the candidate appears to skirt the issue or be
unwilling to accept facts.

The best approach is head on: -Yes, there may be additional overhead
when calling a virtual function but that overhead is a good investment.
And here's why - and then rattle off the reasons.

Making excuses or saying that there may be exceptions in some cases is
simply "lame". It only makes the candidate look like an apologist.

Greg

Sep 20 '05 #16
Greg wrote:
Karl Heinz Buchegger wrote:
In fact it is *never* an issue, because it isn't slower. Not if you
compare a virtual function call with the *equivalent* functionality
in C. And that is all that matters: virtual function calls in a code
are there for a reason. So you need to satisfy that reason in C or
C++.
When doing so, one figures out, that C++ virtual functions are the
fastest possible way to satisfy that reason. Plus there is one
benefit: maintainability.

Other than that, I agree with everything else you said.


I would agree that the difference in dispatch time between a virtual
function call and a direct call should not be an issue in a
well-written C++ program; but that statement is not the same as saying
the difference is always negligible in any C++ program, no matter how
it is written. Or that a C++ programmer need not be aware of the
difference.


I didn't think KHB was talking about the difference in dispatch time between
a virtual call and a direct call. I think he was saying that you have a
virtual function for a reason, which is that at run-time the same call can
end up in different places at different times. Therefore, you cannot simply
replace a virtual call in C++ with a direct call in C. You would have to
somehow replicate the C++ virtual call in C, e.g., with a function pointer,
switch, etc.

DW
Sep 20 '05 #17
On Mon, 19 Sep 2005 07:20:22 +0800, "zhaoyandong"
<zh*********@sina.com.cn> wrote in comp.lang.c++:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.


You can definitely type "C" in one-half to two-thirds of the time it
takes to type "C++".

In the hands of the type of programmer who asks questions like this
(although the people interviewing you might not have been
programmers), each is worser and slower than the other.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Sep 20 '05 #18
By C++ I mean {C++} - {C}. That is, those features exclusively meant
for C++. I wouldn't consider a program that just uses main + some plain
functions a real C++ program. You can write a C program in C++. But
a real C++ program should have all those OO features - templates,
virtual methods, inheritance and some design patterns. We have to
compare such a C++ program and its equivalent C counterpart to show
the relative merits. This is also MHO. A better comparison is C++ vs
FORTRAN.

Ganesh

Sep 20 '05 #19
Ganesh wrote:
By C++ I mean {C++} - -{C}. That is, those features exclusively meant
for C++. I wouldn't consider a program that just uses main + some plain
functions as a real C++ program. You can write a C program in C++. But
a real C++ program should have all those OO features - templates,
virtual methods, inheritance and some design patterns. We have to
compare such a C++ program and its equivalent C counterpart to show
the relative merits. This is also MHO.
(a) If you refer to something, please quote it. So here is a relevant part
of our discussion:
not use such genericity always. For instance, you can have generic sort
function in C++. But if you always just use a specific 'quicksort on
doubles', may be a naive C or Fortran program written specifically for
this would be faster.


The sorting thing is a very poor example: A naive C program would use
qsort and be slower, since the call to the comparison function is
not inlined, whereas the use of templates allows the compiler to inline
and optimize the whole instantiation of sort for the particular type.


(b) Your response completely misses the point [which is not apparent since
you snipped it].

As you can see from the quotation, I was specifically talking about
*idiomatic* ways of doing your example (sorting) in C versus C++. As it
turns out, the {C++} - {C} features involved (in this case, templates)
actually *improve* performance in idiomatically written programs. This, one
can actually measure. For a detailed discussion of sorting efficiency, see:

B. Stroustrup: Learning C++ as a New Language (page 6ff).
[http://www.research.att.com/~bs/new_learning.pdf]

A better comparison is C++ vs FORTRAN.


In which sense would that be a "better" comparison? And would you choose
number-crunching programs, network protocol implementations, or GUI-driven
programs for the comparison?
Best

Kai-Uwe Bux
Sep 20 '05 #20
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.
<several people wrote> It is not faster.


I once was at a workshop held by Sun's compiler development team. They
said that their compiler does some optimizations in C which it does not
do in C++, but I do not exactly recall the reason (might look it up later).
Is it possible that a compiler can make assumptions in C which it
cannot make in C++, allowing better optimization in C?

Gabriel
Sep 20 '05 #21
Greg wrote:

Karl Heinz Buchegger wrote:
Greg wrote:
[snip]
In this case, asking for clarification would be in order:

"By what measurement have you found C++ to be slower than C?"

If the response is "average dipatch time per function call." you could
then explain how virtual functions differ from directly called
routines. But you could also note that the overhead is minimal and is
rarely an issue.


In fact it is *never* an issue, because it isn't slower. Not if you
compare a virtual function call with the *equivalent* functionality
in C. And that is all that matters: virtual function calls in a code
are there for a reason. So you need to satisfy that reason in C or C++.
When doing so, one figures out, that C++ virtual functions are the fastest
possible way to satisfy that reason. Plus there is one benefit: maintainability.

Other than that, I agree with everything else you said.

--
Karl Heinz Buchegger
kb******@gascad.at


I would agree that the difference in dispatch time between a virtual
function call and a direct call should not be an issue in a
well-written C++ program;


This is not what I am talking about.

Example:

#include <cstdio>

class Pet
{
public:
    virtual ~Pet() {}
    virtual void MakeNoise() = 0;
};

class Cat : public Pet
{
public:
    virtual void MakeNoise() { printf( "Miau\n" ); }
};

class Dog : public Pet
{
public:
    virtual void MakeNoise() { printf( "Wuff\n" ); }
};

void foo( Pet* p )
{
    p->MakeNoise();
}

int main()
{
    Cat c;
    Dog d;

    foo( &c );
    foo( &d );
}

Now rewrite that in C and you will figure out that you need an
additional mechanism besides the actual function call in order
to replace the virtual functions, e.g. some sort of type code in
the structure and a switch statement in foo().

When talking about virtual functions, then yes, the actual dispatch
time of a virtual function is almost always larger than the time cost
of an ordinary call. But that is comparing apples with oranges. In order
to make a fair comparison, you need to compare virtual functions with
ordinary functions *plus* the additional mechanism needed to supply the
functionality of a virtual function call. And then things turn around:
virtual functions aren't that slow any more.

It is like comparing addition with multiplication. On most CPUs multiplication
takes longer than addition. But there really is no point in comparing them. If I
need multiplication, then replacing that multiplication with additions just
because addition is faster is not going to save the day. Except, of course, for
the special case of multiplying by 2, which is a special case the compiler
knows about. In the same way the compiler knows about special cases of virtual
function calls and replaces them with ordinary function calls.

For instance, one could imagine that adding a virtual method to a class
like std::string would have a measurable negative effect on the
performance of a C++ program with many stack-based strings.
Could be, since the size of an object increases by the size of an additional
pointer (I know, nowhere is it written that a compiler has to use a vtable,
but in fact no implementation other than vtables is known for implementing
virtual functions). That increase could disturb the caches. But all of this
is extremely hardware dependent and outside the scope of C++.
C++ is obviously a more complex language than C. Greater complexity
does not necessarily imply less efficiency. But it does require more
care at times to recognize inefficiency when it arises.


I am with you here. But virtual functions really do not qualify as an example
of this. In fact the opposite is true. The C++ solution from above is much simpler
(and more maintainable) than an equivalent C solution. The programmer must take care
of many more things to get it right in C:

#include <stdio.h>

#define CAT 1
#define DOG 2

typedef struct Pet
{
    unsigned char Type;
} Pet;

void MakeNoiseForCat( Pet* p )
{
    printf( "Miau\n" );
}

void MakeNoiseForDog( Pet* p )
{
    printf( "Wuff\n" );
}

void foo( Pet* p )
{
    switch( p->Type ) {
    case CAT:
        MakeNoiseForCat( p );
        break;

    case DOG:
        MakeNoiseForDog( p );
        break;
    }
}

int main()
{
    Pet c = { CAT };
    Pet d = { DOG };

    foo( &c );
    foo( &d );
}

Note that the C++ virtual function call:

p->MakeNoise();

has been replaced by the sequence:

switch( p->Type ) {
case CAT:
    MakeNoiseForCat( p );
    break;

case DOG:
    MakeNoiseForDog( p );
    break;
}

Now we have a fair comparison, and I guess there is no doubt that the
virtual function call actually is faster on most machines than all of
that switch-case-function_call mumbo jumbo.

--
Karl Heinz Buchegger
kb******@gascad.at
Sep 20 '05 #22
All very well stated.

Two other side points:

Years ago I was asked a similar question: "Why is C so much faster
than machine code?" When I asked for clarification, they said, "We had
an expert write a function in C, and another write the same function in
assembler. The C was faster by a factor of 2".

My unfortunate reply: "Well, since C is compiled into assembler, I
would assume that the compiler writer understood that assembly language
better than the person who wrote the assembler function in this case.
For that assembly language I've always beaten the C compiler any time I
tried." Wrong answer: the interviewer was the one who'd written the
assembler.

But the other point is more significant:

Who will win a race: the person who has a bicycle, or the person who
has a choice between a bicycle, a car, a motorcycle, and an airplane?
Typically the C programmer will use better algorithms than the assembly
programmer, just because the assembly programmer doesn't have the time,
and the C++ programmer will use better data structures than the C
programmer because he has the tools and the libraries to help him.
Where a C programmer might use an array, a C++ programmer might use a
vector, a set, a splay tree, a map, or something else more appropriate
to the problem space. Not to mention that the C++ programmer can more
easily model the problem domain directly in the language, so can create
a more maintainable and correct by design solution. So given the same
PROBLEM, the C++ programmer ought to get a faster solution than the C
programmer given the same time constraints.

Stuart

Sep 22 '05 #23
"Ganesh" <ga*****@gmail.com> wrote in message
news:11*********************@g49g2000cwa.googlegroups.com...
My opinion is that C++ may introduce some hidden
costs, that may be visible only to an experienced programmer.
Lower the level of programming, (maybe) lower the overheads.
This is IMHO.


And it is wrong. Read "The Design and Evolution of C++" by Bjarne
Stroustrup.

Pg. 28:
"The explicit aim was to match C in terms of run-time code..."
"To wit: Someone once demonstrated a 3% systematic decrease in overall
run-time efficiency compared with C caused by the use of a spurious
temporary introduced into the function return mechanism by the C with
Classes (FYI: precursor to C++) preprocessor. This was deemed
unacceptable and the overhead promptly removed."

The basic tenet of C++ is to be a better C, and not to add overhead to
programs that do not use C++ features.

Anything you believe about "hidden costs" is similar to your beliefs
about UFOs and gods. I.e.: non-existent.

--
Mabden
Sep 27 '05 #24
"Mabden" <mabden@sbc_global.net> wrote in message
news:q5***************@newssvr11.news.prodigy.com...
"Ganesh" <ga*****@gmail.com> wrote in message
news:11*********************@g49g2000cwa.googlegroups.com...
My opinion is that C++ may introduce some hidden
costs, that may be visible only to an experienced programmer.
Lower the level of programming, (maybe) lower the overheads.
This is IMHO.


And it is wrong. Read "The Design and Evolution of C++" by Bjarne
Stroustrup.

Pg. 28:
"The explicit aim was to match C in terms of run-time code..."
"To wit: Someone once demonstrated a 3% systematic decrease in overall
run-time efficiency compared with C caused by the use of a spurious
temporary introduced into the function return mechanism by the C with
Classes (FYI: precursor to c++) preprocessor. This was deemed
unacceptable and the overhead promptly removed."

The basic tenet of C++ is to be a better C, and not to add overhead to
programs that do not use C++ features.

Anything you believe about "hidden costs" is similar to your beliefs
about UFOs and gods. I.e.: non-existent.


Well, no. Anything you believe about "hidden costs" is similar to
whatever other beliefs you can prove or disprove by reproducible
experiment. And experiments repeatedly show varying nonzero costs
intrinsic to C++ that are not present in C.

Exception handling is an obvious case in point -- the implementation
can effectively eliminate performance costs by raising code size,
or conversely, but the cost is there in some form. You can sometimes
argue that the equivalent checking in C has a nonzero space/time
cost, and that is true; but whether the costs are comparable depends
strongly on the particulars of a program. You can also argue (as
above) that the extra costs are being steadily eliminated as they
are discovered, but that is only true up to a point. It's fun to
highlight the successes, less fun to admit the failures (so far)
to eliminate extra overheads.

Virtual function calls raise issues similar to those for exceptions,
but typically with much smaller costs. OTOH, input/output using
the Standard C++ library drags in *way* more code than does the
Standard C library for (loosely) comparable operations. And the
I/O performance gap has closed dramatically over the past decade,
but it's still there.

Then there's the oft-repeated mantra that C++ can be *more*
efficient than C; and occasionally that's doubtless true.
But on average, IME, a C++ program will be 5 to 15 per cent
larger and/or slower than a comparable program written in C.
Again IME, that's a price well worth paying, in practically
all cases, for the improved productivity that you can get
by writing large programs in C++ instead of C. Historically,
C began winning over assembly language when its extra overhead
dropped to about 30 per cent. It's a rare application, even
embedded, where even 50 per cent overheads are worth addressing
by going to lower-level coding techniques. It's way cheaper to
use faster or bigger hardware, even when you're shipping
hundreds of thousands of devices.

I consider it a mark of zealotry to pretend that there are
*no* additional overheads when using C++ instead of C. That
requires a leap of faith akin to believing in UFOs and gods.
But more important, you don't have to be a C++ zealot to
decide on the basis of reasonable evidence that it's well
worth the real and measurable costs that are still there.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 27 '05 #25
On Tue, 27 Sep 2005 09:40:44 -0400, "P.J. Plauger" <pj*@dinkumware.com> wrote:
I consider it a mark of zealotry to pretend that there are
*no* additional overheads when using C++ instead of C. That
requires a leap of faith akin to believing in UFOs and gods.
But more important, you don't have to be a C++ zealot to
decide on the basis of reasonable evidence that it's well
worth the real and measurable costs that are still there.


The additional cost is there because the C++ programmer chooses to use
additional features that are not available in the C version of the code.
Exception handling adds overhead, but you get exception handling in return. No
fair comparing that to a C program that has no way of handling exceptions like
C++ does.

Same goes for virtual functions--a fair comparison can only be made to a
similar run-time dynamic dispatch in C, e.g. a function pointer table. In this
case the C++ code could conceivably be faster because an optimizer has the
opportunity to short-circuit calls to virtual functions that always resolve to
a single function within the link, say, thus removing the indirection. A C
optimizer may never have that opportunity.

-dr
Sep 28 '05 #26
"Dave Rahardja" <as*@me.com> wrote in message
news:l0********************************@4ax.com...
On Tue, 27 Sep 2005 09:40:44 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
I consider it a mark of zealotry to pretend that there are
*no* additional overheads when using C++ instead of C. That
requires a leap of faith akin to believing in UFOs and gods.
But more important, you don't have to be a C++ zealot to
decide on the basis of reasonable evidence that it's well
worth the real and measurable costs that are still there.
The additional cost is there because the C++ programmer chooses to use
additional features that are not available in the C version of the code.
Exception handling adds overhead, but you get exception handling in
return. No
fair comparing that to a C program that has no way of handling exceptions
like
C++ does.


Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.

C++ pays lip service to the doctrine that you don't pay for
what you don't use. Standard C++ fails to achieve that goal
in several important areas. Exception handling and I/O, which
I cited in my earlier post, are two biggies. Both can cause
C programs to swell even when compiled unchanged as C++.
Same goes for virtual functions--a fair comparison can only be made to a
similar run-time dynamic dispatch in C, e.g. a function pointer table. In
this
case the C++ code could conceivably be faster because an optimizer has the
opportunity to short-circuit calls to virtual functions that always
resolve to
a single function within the link, say, thus removing the indirection. A C
optimizer may never have that opportunity.


Not quite the same case. I did try to indicate that you have to
compare the intrinsic overhead of a C++ feature to the explicit
overhead of doing the same thing by hand in C. Again IME, the
use of virtuals tends to be a wash. It's an article of faith
among C++ zealots that optimization will give C++ the edge,
but I haven't seen a case where that's true (or at least where
it makes a dime's worth of difference).

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 28 '05 #27
> I once was at a workshop held by Sun's compiler development team. They
said that their compiler does some optimizations in C which it does not
do in C++, but I do not exactly recall the reason (might look it up later).
Is it possible that a compiler can make assumptions in C which it
cannot make in C++, allowing better optimization in C?


Usually, it is the other way around. Specifically, type-based alias
analysis, and the resulting optimizations, occur more frequently with
C++, because of its strong typing and compile-time polymorphism
(templates). However, given the complexity of C++, especially
exceptions, I suppose there may be some assumptions which are valid in
a C program but not in the equivalent C++ program. That being said, I
am not a compiler writer, so I am only speculating.

- Jeremy Jurksztowicz

Sep 28 '05 #28
P.J. Plauger wrote:

Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.


Well, I jumped out of my chair when I saw this, but fortunately
gcc only generates exception handlers for functions that have
automatic objects with destructors.
The handler code is really simple. It just calls destructors in
switch/case style, adjusts the stack pointer,
then calls _Unwind_Resume.
That's just ten to twenty bytes of code if there are one or two
destructors.
But every C++ object file gets an exception-handling stack frame
+ an exception handler table (if there are any handlers; every 'case'
has one), which is more or less a few kilobytes of data, with or without
destructors.
IMO, that's acceptable overhead.
So we could say that C++ programs carry at least more data than
C programs.

Greetings, Bane.

Sep 28 '05 #29

"Gabriel" <ab***@127.0.0.1> wrote in message
news:ne********************@svr01squid.pfa.researc h.philips.com...
zhaoyandong wrote:
In one of my interview, some people asked me why C is faster C++, and
tell
me to illustrate at least two reasons.

I can't find the answer in the web.

I'll appreciate any suggestion on this.
Thank you.


<several people wrote>
It is not faster.


I once was at a workshop held by Sun's compiler development team. They
said that their compiler does some optimizations in C which it does not
do in C++, but I do not exactly recall the reason (might look it up later).
Is it possible that a compiler can make assumptions in C which it cannot
make in C++, allowing better optimization in C?


Well, in C99 there is the keyword 'restrict' (a restricted pointer is one
the compiler may assume has no aliases),
which allows optimisations that are common in Fortran compilers.

Greetings, Bane.
Sep 28 '05 #30
On Wed, 28 Sep 2005 09:33:48 -0400, "P.J. Plauger" <pj*@dinkumware.com> wrote:
Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.
This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If exception
handling is not required, we can properly declare non-throwing functions with
the throw() specification, or resort to compiler switches. I also imagine that
a C++ compiler can assume that C functions declared extern "C" do not throw.
C++ pays lip service to the doctrine that you don't pay for
what you don't use. Standard C++ fails to achieve that goal
in several important areas. Exception handling and I/O, which
I cited in my earlier post, are two biggies. Both can cause
C programs to swell even when compiled unchanged as C++.
Still addressing the exception overhead you mentioned above, I admit it is
unfortunate that the C++ language assumes that plainly-declared functions can
"throw anything" instead of "throw nothing". The latter assumption could have
eliminated the exception overhead, but would require large bodies of code to
be modified before being reused.

Do you attribute the I/O bloat to the design of the C++ streams specification?
Does it help if the programmer sticks to C-style printf()s?
Not quite the same case. I did try to indicate that you have to
compare the intrinsic overhead of a C++ feature to the explicit
overhead of doing the same thing by hand in C. Again IME, the
use of virtuals tends to be a wash. It's an article of faith
among C++ zealots that optimization will give C++ the edge,
but I haven't seen a case where that's true (or at least where
it makes a dime's worth of difference).


I don't think anyone has made a claim that C++ may be anything but trivially
more efficient than C. However, my experience has shown that the counter-claim
that C is _always_ more efficient than C++ shows a misunderstanding of the
differences between the languages, or at least reveals the deep denial of a C
coder refusing to learn about C++.

The two languages start with different levels of basic functionality enabled
(C++ obviously starts with more features enabled, such as exception handling).
In C you add features explicitly, and in C++ you disable default features (by
adding a throw() specification to your functions, for example). The
understanding and use of these fundamental differences goes a long way to
demystify the apparent inefficiency of C++.

-dr
Sep 29 '05 #31

"Dave Rahardja" <as*@me.com> wrote in message
news:kg********************************@4ax.com...
On Wed, 28 Sep 2005 09:33:48 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.


This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If
exception
handling is not required, we can properly declare non-throwing functions
with
the throw() specification, or resort to compiler switches. I also imagine
that
a C++ compiler can assume that C functions declared extern "C" do not
throw.


throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."

In the gcc implementation, an exception spec only adds overhead to the
function that is called (the unexpected() machinery et al.).
The calling function does not care at all about exception specifications.
It only generates exception handling code if a function calls destructors
for automatic objects, that is, of course, if there are no try/catch blocks.
The overhead is simply that every object file gets exception handling data.
There is no run-time overhead, but there is executable-size overhead.
I guess there are implementations where exactly the opposite is true :)

Greetings, Bane.

Sep 29 '05 #32
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams specification?
Does it help if the programmer sticks to C-style printf()s?

[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and I/O
performance go, but using them trades away type safety and a common
coding style for streaming objects in exchange for speed. The classic
example of printf failure cannot happen with iostreams:

char c = 'a';
printf("%s", c); /* Boom! undefined behavior: %s expects a char*, not a char */

IME, in all but the most thoroughly tested (or trivial) code, not every
branch that prints an error message actually gets exercised, meaning
that such bugs can lie latent in the software, waiting for a user to
find them. Thus, I'm quite willing to accept some bloat and
inefficiency in exchange for type safety.

Cheers! --M

Sep 29 '05 #33
"Dave Rahardja" <as*@me.com> wrote in message
news:kg********************************@4ax.com...
On Wed, 28 Sep 2005 09:33:48 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
Sorry, but it *is* fair. Any function you call whose contents
are unknown might throw an exception, so the compiled code must
be prepared to handle exceptions in the most innocent of code.
Perhaps the compiler is smart enough to avoid the worst overheads
in a function that contains no destructible autos, but it is not
likely to eliminate all overheads. IME, you at least get kilobytes
of stack-walking code for *any* C++ program, even a C program
compiled as C++. In some cases you also get slower function
entry/exit code as well.
This is still an apples-to-oranges comparison. You pay extra for exception
handling because you are _using_ the exception handling facility. If
exception
handling is not required, we can properly declare non-throwing functions
with
the throw() specification, or resort to compiler switches. I also imagine
that
a C++ compiler can assume that C functions declared extern "C" do not
throw.


Yes, if you indulge in sufficient heroics, and you know what they
are, and you have an appropriate compiler, you *can* eliminate
the overhead of exception handling. Otherwise, you pay extra
for exception handling whether or not you are using that facility.
That is the norm in the world of C++.
C++ pays lip service to the doctrine that you don't pay for
what you don't use. Standard C++ fails to achieve that goal
in several important areas. Exception handling and I/O, which
I cited in my earlier post, are two biggies. Both can cause
C programs to swell even when compiled unchanged as C++.


Still addressing the exception overhead you mentioned above, I admit it is
unfortunate that the C++ language assumes that plainly-declared functions
can
"throw anything" instead of "throw nothing". The latter assumption could
have
eliminated the exception overhead, but would require large bodies of code
to
be modified before being reused.

Do you attribute the I/O bloat to the design of the C++ streams
specification?


Yes.
Does it help if the programmer sticks to C-style printf()s?
Sometimes. But, depending on the implementation, you might still
get a boatload of code just because the program unintentionally
forces the instantiation of the default locale object. (And that's
intimately associated with the bad design of iostreams.)
Not quite the same case. I did try to indicate that you have to
compare the intrinsic overhead of a C++ feature to the explicit
overhead of doing the same thing by hand in C. Again IME, the
use of virtuals tends to be a wash. It's an article of faith
among C++ zealots that optimization will give C++ the edge,
but I haven't seen a case where that's true (or at least where
it makes a dime's worth of difference).


I don't think anyone has made a claim that C++ may be anything but
trivially
more efficient than C.


Uh, no. Some have argued that it can be *materially* more
efficient.
However, my experience has shown that the
counter-claim
that C is _always_ more efficient than C++ shows a misunderstanding of the
differences between the languages, or at least reveals the deep denial of
a C
coder refusing to learn about C++.
I've certainly never claimed that C is always more efficient
than C++, mostly because I don't believe that's true.
The two languages start with different levels of basic functionality
enabled
(C++ obviously starts with more features enabled, such as exception
handling).
In C you add features explicitly, and in C++ you disable default features
(by
adding a throw() specification to your functions, for example). The
understanding and use of these fundamental differences goes a long way to
demistify the apparent inneficiency of C++.


The apparent inefficiency is a *real* inefficiency if you have
to do sophisticated things to avoid the overheads of features
you don't use.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 29 '05 #34
mlimber wrote:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?

[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them trades speed for type saftey and a
common coding style for streaming objects. The classic example of
printf failure cannot happen with iostreams:


It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly ASCII
to decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).

DW
Sep 29 '05 #35
"David White" <no@email.provided> wrote in message
news:6V******************@nasal.pacific.net.au...
mlimber wrote:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?

[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them trades speed for type saftey and a
common coding style for streaming objects. The classic example of
printf failure cannot happen with iostreams:


It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly
ASCII
to decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).


First, the time spent decoding a format string is trivial compared
to practically any conversion. Second, the Standard C++ library
requires that all conversions be performed by locale facets, each
of which is a class typically with several virtual functions called
through public interfaces. And each conversion call requires the
construction of an ios object plus an istreambuf_iterator object.
The time overhead for all this nonsense is about 30 per cent greater
(at a minimum) than a corresponding printf/scanf call.

The space overhead can be *much* worse. The Dinkumware implementation
avoids instantiating most facets that never get called. That keeps
the space overhead around 50 KB, compared to about 500 KB (sic) for
those that don't perform this optimization. Once you instantiate a
facet, you link in all the virtuals for that facet, whether or not
they are actually used. (The zealots have been insisting for nearly
a decade now that unused virtuals can be optimized away, but
nobody has demonstrated a working implementation that does so.)
Compare this with a typical 10-20 KB overhead for linking in all
of printf/scanf, even including the format decoder and all those
conversions you don't use.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 30 '05 #36
In article <Uq********************@giganews.com>,
"P.J. Plauger" <pj*@dinkumware.com> wrote:
"David White" <no@email.provided> wrote in message
news:6V******************@nasal.pacific.net.au...
mlimber wrote:
Dave Rahardja wrote:
[snip]
Do you attribute the I/O bloat to the design of the C++ streams
specification? Does it help if the programmer sticks to C-style
printf()s?
[snip]

As P.J.P. said above, printf/scanf do help as far as code bloat and
I/O performance go, but using them trades speed for type saftey and a
common coding style for streaming objects. The classic example of
printf failure cannot happen with iostreams:


It's not immediately apparent to me why printf/scanf should be faster than
streams, given that streams have a dedicated function for I/O of each type
and can do what's appropriate for that type without delay, whereas
printf/scanf have to decode a format string first, including possibly
ASCII
to decimal conversions for precision and field width (which themselves
require an amount of work similar to formatted input).


First, the time spent decoding a format string is trivial compared
to practically any conversion. Second, the Standard C++ library
requires that all conversions be performed by locale facets, each
of which is a class typically with several virtual functions called
through public interfaces. And each conversion call requires the
construction of an ios object plus an istreambuf_iterator object.
The time overhead for all this nonsense is about 30 per cent greater
(at a minimum) than a corresponding printf/scanf call.

The space overhead can be *much* worse. The Dinkumware implementation
avoids instantiating most facets that never get called. That keeps
the space overhead around 50 KB, compared to about 500 KB (sic) for
those that don't perform this optimization. Once you instantiate a
facet, you link in all the virtuals for that facet, whether or not
they are actually used. (The zealots have been insisting for nearly
a decade now that unused virtuals can be optimized away, but
nobody has demonstrated a working implementation that does so.)
Compare this with a typical 10-20 KB overhead for linking in all
of printf/scanf, even including the format decoder and all those
conversions you don't use.


What really gets me is that just printing an integer instantiates all of
the code for formatting floating point (even with lazy facet
instantiation). You can't even strip it out at link time, at least not
without whole program analysis. And just for the reason P.J. says:
they're in the same facet. You either get all number formatting or none
of it. There's no way to select at compile time "just int" formatting.

There's also no way to select at compile time: And don't bother me with
all that thousands-separator stuff. It's in your executable just
itching to get executed even if you never mention "locale". And again,
the linker can't strip it out without fairly heroic WPA.

And of course since all the floating point code is in there, you've also
got code sitting around to select a custom decimal point character (at
run time), just in case you might decide to print that double in Germany
after you've compiled your code.

If you're printing to a file (or likely even just the console), then
you've also got some heavy lifting equipment allowing for very flexible
code conversion magic (and the specific encoding is to be selected at
run time).

-- All for the price of printing an int.

Yes, C++ I/O could be much smaller. But as currently specified it
isn't. There are a ton of major run time decisions buried under the
most innocent looking I/O.

We (the industry) have the expertise to do it much better today than we
did a decade ago. But will we? I really don't know.

-Howard
Sep 30 '05 #37
"Howard Hinnant" <ho************@gmail.com> wrote in message
news:ho**********************************@syrcnyrd rs-01-ge0.nyroc.rr.com...
Yes, C++ I/O could be much smaller. But as currently specified it
isn't. There are a ton of major run time decisions buried under the
most innocent looking I/O.

We (the industry) have the expertise to do it much better today than we
did a decade ago. But will we? I really don't know.


Thanks, Howard, for the added details. At the risk of stirring up
a slumbering dragon, I have to point out that there is one C++
library that avoids the worst excesses of Standard C++ -- the
one specified for EC++. We offer both in our Unabridged library
package so our customers can choose. Obviously, the better solution
would be to do as Howard suggests and fix the size/performance
problems in the Standard C++ library. But that work isn't on the
horizon.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 30 '05 #38

"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...

"Dave Rahardja" <as*@me.com> wrote in message
news:kg********************************@4ax.com...
On Wed, 28 Sep 2005 09:33:48 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
If
exception
handling is not required, we can properly declare non-throwing functions
with
the throw() specification, or resort to compiler switches. I also imagine
that
a C++ compiler can assume that C functions declared extern "C" do not
throw.


throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."

in gcc implementation, exception spec would only add overhead to function
that is called (unexpected stuff et al).
Calling function does not care at all about exception specifications.
It only generate exception handling code if function calls destructors
for automatic objects,that is, of course, if there are no try/catch
blocks.
Overhead is simply that every object file gets exception handling data.
There is no run time overhead, but executable size overhead.
I guess, there are implementations where true is exactly opposite :)

Greetings, Bane.


I've just investigated further. An exception specification adds run-time
overhead to a function
only if the function throws, or calls a function that does not have a
throw() specification.
That means:

void baz() throw();
void foo() throw()
{
    // foo does not register unexpected() (no runtime overhead)
    // because baz has a throw() specification
    baz();
}

void baz() throw()
{
    // no run-time overhead: baz neither throws nor calls anything
    // that can throw
}

On Windows, with gcc, exceptions have zero cost if not used (neither code
nor data).
That means that C programs compiled as C++ with gcc produce exactly the same
object file (excluding symbol names) on Windows.
I wonder why on Linux C++ object files get an EH frame? Strange.

Greetings, Bane.

Sep 30 '05 #39
"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...
"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...

"Dave Rahardja" <as*@me.com> wrote in message
news:kg********************************@4ax.com...
On Wed, 28 Sep 2005 09:33:48 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
If
exception
handling is not required, we can properly declare non-throwing functions
with
the throw() specification, or resort to compiler switches. I also
imagine
that
a C++ compiler can assume that C functions declared extern "C" do not
throw.


throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."

in gcc implementation, exception spec would only add overhead to function
that is called (unexpected stuff et al).
Calling function does not care at all about exception specifications.
It only generate exception handling code if function calls destructors
for automatic objects,that is, of course, if there are no try/catch
blocks.
Overhead is simply that every object file gets exception handling data.
There is no run time overhead, but executable size overhead.
I guess, there are implementations where true is exactly opposite :)

Greetings, Bane.


I've just investigated further. Exception specification makes run time
overhead to function
only if function throws or calls function that does not have throw()
specification.
that means
void baz()throw();
void foo()throw()
{
// foo does not register unexpected (no runtime overhead)
// because baz has throw() specification
baz();
}

void baz()throw()
{
// no run time overhead , baz does not throws neither calls something that
can throw
}

On windows, with gcc, exceptions have zero cost if not used (neither code
nor data).
That means that c programs compiled as c++ with gcc produce exactly the
same
object file (excluding symbol names) on windows.
I wonder why on linux c++ object files get eh frame ? strange


Uh, I just tried a quick test with mingw (gcc V3.2) under Windows.
Compiled as C++, a smallish program is 100,000 bytes bigger than
when compiled as C.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 30 '05 #40

"P.J. Plauger" <pj*@dinkumware.com> wrote in message
news:Z8********************@giganews.com...
"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...
On windows, with gcc, exceptions have zero cost if not used (neither code
nor data).
That means that c programs compiled as c++ with gcc produce exactly the
same
object file (excluding symbol names) on windows.
I wonder why on linux c++ object files get eh frame ? strange


Uh, I just tried a quick test with mingw (gcc V3.2) under Windows.
Compiled as C++, a smallish program is 100,000 bytes bigger than
when compiled as C.


Perhaps that was because the headers have some #ifdef __cplusplus and pull in
some additional libraries, but I've tested with a trivial example.

bmaxa@MAXA ~
$ g++ -Wall test.c baz.c -o test

bmaxa@MAXA ~
$ gcc -Wall test.c baz.c -o testc

bmaxa@MAXA ~
$ ls -l
total 17
-rw-r--r-- 1 bmaxa Administ 14 Sep 30 17:20 baz.c
-rw-r--r-- 1 bmaxa Administ 174 Sep 30 18:58 test.c
-rwxr-xr-x 1 bmaxa Administ 15879 Sep 30 19:18 test.exe
-rwxr-xr-x 1 bmaxa Administ 15879 Sep 30 19:18 testc.exe

bmaxa@MAXA ~
$ nm test.exe > test.nm

bmaxa@MAXA ~
$ nm testc.exe > testc.nm

bmaxa@MAXA ~
$ ls -l *.nm
-rw-r--r-- 1 bmaxa Administ 7834 Sep 30 19:19 test.nm
-rw-r--r-- 1 bmaxa Administ 7822 Sep 30 19:19 testc.nm

bmaxa@MAXA ~
$ strip test*.exe

bmaxa@MAXA ~
$ ls -l
total 15
-rw-r--r-- 1 bmaxa Administ 14 Sep 30 17:20 baz.c
-rw-r--r-- 1 bmaxa Administ 174 Sep 30 18:58 test.c
-rwxr-xr-x 1 bmaxa Administ 5632 Sep 30 19:19 test.exe
-rw-r--r-- 1 bmaxa Administ 7834 Sep 30 19:19 test.nm
-rwxr-xr-x 1 bmaxa Administ 5632 Sep 30 19:19 testc.exe
-rw-r--r-- 1 bmaxa Administ 7822 Sep 30 19:19 testc.nm

bmaxa@MAXA ~
$ diff test.exe testc.exe
Binary files test.exe and testc.exe differ

bmaxa@MAXA ~
$ cat test.c baz.c
#include <stdlib.h>
#include <stdio.h>

void baz();
void bar();
void foo();

int main()
{
printf("\n");
foo();
return 0;
}

void foo()
{
bar();
}
void bar()
{
baz();
}

void baz(){}
bmaxa@MAXA ~
$ uname -a
MINGW32_NT-5.1 MAXA 1.0.10(0.46/3/2) 2004-03-15 07:17 i686 unknown

It is the same if I first compile baz into an object file and then link it
with test.o.
So I guess that some additional libraries were linked.

Greetings, Bane.
Sep 30 '05 #41
"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...
"P.J. Plauger" <pj*@dinkumware.com> wrote in message
news:Z8********************@giganews.com...
"Branimir Maksimovic" <bm***@eunet.yu> wrote in message
news:dh**********@news.eunet.yu...
On windows, with gcc, exceptions have zero cost if not used (neither
code nor data).
That means that c programs compiled as c++ with gcc produce exactly the
same
object file (excluding symbol names) on windows.
I wonder why on linux c++ object files get eh frame ? strange


Uh, I just tried a quick test with mingw (gcc V3.2) under Windows.
Compiled as C++, a smallish program is 100,000 bytes bigger than
when compiled as C.


Perhaps that was because headers have some #ifdef __cplusplus and added
some additional libraries , but I've tested with trivial example.


I rest my case.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Sep 30 '05 #42
On Fri, 30 Sep 2005 17:49:09 +0200, "Branimir Maksimovic" <bm***@eunet.yu>
wrote:
If
exception
handling is not required, we can properly declare non-throwing functions
with
the throw() specification, or resort to compiler switches. I also imagine
that
a C++ compiler can assume that C functions declared extern "C" do not
throw.

throw() wouldn't help.
To quote part of 15.5.1:
"An implementation is not permitted to finish stack unwinding prematurely
based on a determination that the unwind process will eventually cause
a call to terminate()."


[snip]
I've just investigated further. Exception specification makes run time
overhead to function
only if function throws or calls function that does not have throw()
specification.


This has also been my experience. The overhead of non-throwing functions is
exactly ZERO, both in run time code and in data space.

Here is a trivial C and C++ program:

--- C program ---

int f(int i)
{
return i * 2;
}

int main()
{
return f(1);
}

--- C++ program ---

int f(int i) throw()
{
return i * 2;
}

int main()
{
try {
return f(1);
} catch (...) {
return 1;
}
}

---

Under Microsoft's VC++ .NET, both programs compile to exactly the same size of
22,016 bytes (under the Release settings), with or without the throw() clause.

To see the effect of the declaration of f() when the compiler has no access to
the code during the compilation of main(), I did the following experiment:

In the C++ program, splitting out f() to a different file and declaring it

extern "C" int f(int);

causes no change to the final program size, which indicates that the exception
handling mechanism has been successfully suppressed. However, declaring it

int f(int);

and compiling f() as a C++ function causes the build to bloat to 26,624 bytes.
Examining the map file shows that execption handling routines have been added,
accounting for the bloat. Changing the declaration to

int f(int) throw();

brings the size back down to 22,016 bytes, suggesting that exception handling
has been completely removed.

With that quick exercise it is easy to see that exception handling overhead is
exactly _zero_ with MSVC++ .NET when functions are properly declared throw().
Furthermore, MSVC is smart enough to figure out that extern "C" functions do
not throw.
My experience with the embedded compiler that I use daily has been the same:
the exception handling overhead for properly-declared code is not only
trivial, it is _zero_.

However, I will admit again that the decision to assume that functions with no
throw clause can "throw anything" instead of "throw nothing" makes it very
difficult for compilers to automatically elide the generation of exception
handling infrastructure. Declaring functions throw() when they really
don't throw anything is good practice whenever resources are at a premium.
Yes, the iostream library accounts for most of the immense bloat that C++
programs have, but C++ as a language is perfectly viable (especially on
embedded targets) without that library; there is nothing _inherently_
inefficient with the language itself.
Oct 1 '05 #43
"Dave Rahardja" <as*@me.com> wrote in message
news:62********************************@4ax.com...
With that quick exercise it is easy to see that exception handling
overhead is exactly _zero_ with MSVC++ .NET when functions are properly
declared throw(). Furthermore, MSVC is smart enough to figure out that
extern "C" functions do not throw.

My experience with the embedded compiler that I use daily has been the
same: the exception handling overhead for properly-declared code is not
only trivial, it is _zero_.

However, I will admit again that the decision to assume that functions
with no throw clause can "throw anything" instead of "throw nothing"
makes it very difficult for compilers to automatically elide the
generation of exception handling infrastructure. Declaring functions
throw() when they really don't throw anything is good practice whenever
resources are at a premium.
So if you declare *every* function either as extern "C" or with a
throw() qualifier, you can match the lower overhead of native C
code, at least with *some* compilers. But the least lapse in this
style -- such as writing C++ the way all the books tell you to --
and you get the bloat of exception handling whether or not you
use it.
Yes, the iostream library accounts for most of the immense bloat that C++
programs have, but C++ as a language is perfectly viable (especially on
embedded targets) without that library; there is nothing _inherently_
inefficient with the language itself.


So if you don't use the library *designed* for use with Standard
C++, the way all the books tell you, you can match the lower overhead
of native C style, at least with *some* compilers.

Now please review your words and consider whether there is anything
*inherently* inefficient with the language itself. Asymptotic behavior
doesn't mean much in the world of real programming.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Oct 1 '05 #44
On Sat, 1 Oct 2005 06:55:03 -0400, "P.J. Plauger" <pj*@dinkumware.com> wrote:
So if you declare *every* function either as extern "C" or with a
throw() qualifier, you can match the lower overhead of native C
code, at least with *some* compilers. But the least lapse in this
style -- such as writing C++ the way all the books tell you to --
and you get the bloat of exception handling whether or not you
use it. So if you don't use the library *designed* for use with Standard
C++, the way all the books tell you, you can match the lower overhead
of native C style, at least with *some* compilers.

Now please review your words and consider whether there is anything
*inherently* inefficient with the language itself. Asymptotic behavior
doesn't mean much in the world of real programming.


It's obvious that we're not going to convince each other in this matter. It
appears that our views of what constitutes a fair comparison differ
considerably.

My experience with C++ in both the desktop and embedded worlds is that the
overhead for _equivalent_ behavior is zero. Each overhead in C++ comes with an
additional feature, which C does not afford you. C++ defaults to a larger set
of features that you can "turn off", while C starts out with a basic set to
which you must add. In my mind, a fair comparison involves turning off
features in your C++ program to match the behavior of the C program that
you're comparing it to. Anything else is not a fair comparison in my mind, but
you obviously think that that's too much to ask in the "real world".

Maybe I can convince others, if not you, that what you consider "asymptotic
behavior" is par for the course in many application domains, including mine
(embedded code), although for most applications, the overhead associated with
the "traditional" coding style is more than made up for by the features that
C++ brings to the table. However, C-equivalent efficiency is attainable by
correct specification.

I stand by my statements and observations (and take advantage of them daily),
although you probably disagree with them.

I do concede that the two areas you highlight--default exception handling
behavior and iostreams--contribute to significant data and code bloat in C++
programs and must be aggressively managed for efficiency-critical code.

-dr
Oct 1 '05 #45
"Dave Rahardja" <as*@me.com> wrote in message
news:tp********************************@4ax.com...
On Sat, 1 Oct 2005 06:55:03 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
So if you declare *every* function either as extern "C" or with a
throw() qualifier, you can match the lower overhead of native C
code, at least with *some* compilers. But the least lapse in this
style -- such as writing C++ the way all the books tell you to --
and you get the bloat of exception handling whether or not you
use it.
So if you don't use the library *designed* for use with Standard
C++, the way all the books tell you, you can match the lower overhead
of native C style, at least with *some* compilers.

Now please review your words and consider whether there is anything
*inherently* inefficient with the language itself. Asymptotic behavior
doesn't mean much in the world of real programming.


It's obvious that we're not going to convince each other in this matter.
It appears that our views of what constitutes a fair comparison differ
considerably.

My experience with C++ in both the desktop and embedded worlds is that
the overhead for _equivalent_ behavior is zero. Each overhead in C++
comes with an additional feature, which C does not afford you. C++
defaults to a larger set of features that you can "turn off", while C
starts out with a basic set to which you must add. In my mind, a fair
comparison involves turning off features in your C++ program to match
the behavior of the C program that you're comparing it to. Anything
else is not a fair comparison in my mind, but you obviously think that
that's too much to ask in the "real world".


In the usual sense, yes. What's advertised for C++ is: if you don't
use it, you don't pay for it. What you admit above, and several others
have demonstrated, is that you have to "use" quite a bit of stuff from
exception handling to *not* pay for it. Specifically, you have to make
liberal use of throw specifications. (Or you can declare all of your
functions extern "C", don't use any C++ functions from the library,
and hence "turn off" cross-module type checking in the bargain.)

And BTW, not all implementations of Standard C++ eliminate all space
and time overheads for exceptions even if you do the above. It's
well over a decade now since compilers started shipping with exceptions,
and that's still the state of the art.

So yes, I do consider my comparisons fair, and yes, I think that's
too much to ask in the real world. Having said that, I observe that
some C++ compilers have a switch that disables exception handling
unilaterally. All you need then is a library that can tolerate
such antics. We happen to provide one called EC++. It's part of our
standard library package, alongside the Standard C++ library. But
the same people who argue that C++ has zero overheads usually foam
at the mouth at the mention of subsetting...
Maybe I can convince others, if not you, that what you consider
"asymptotic behavior" is par for the course in many application
domains, including mine (embedded code), although for most
applications, the overhead associated with the "traditional" coding
style is more than made up for by the features that C++ brings to the
table.
Please note that I made exactly this point in my first posting.
My major point was that it's a mark of zealotry to deny that
overheads do exist.
However, C-equivalent efficiency is attainable by
correct specification.
For some implementations. If you know what "correct" means.
And if you do your best to write C code masquerading as C++.
Please also note that this style is fragile. All you need is
one maintainer who's not clued in to the correct style to
add one innocent call (possibly copied from a reputable text
book) and suddenly bloat up an executable.
I stand by my statements and observations (and take advantage of them
daily), although you probably disagree with them.
Actually, I don't.
I do concede that the two areas you highlight--default exception
handling behavior and iostreams--contribute to significant data and
code bloat in C++ programs and must be aggressively managed for
efficiency-critical code.


Yep. And it's the "aggressively managed" part that puts the lie
to the opinion that prompted my first posting:

: "Mabden" <mabden@sbc_global.net> wrote in message
: news:q5***************@newssvr11.news.prodigy.com. ..
: ...
: Anything you believe about "hidden costs" are similar to your beliefs
: about UFOs and gods. Ie: non-existent.

The hidden costs are real if you have to work hard to drag
them out of hiding before you can exorcise them.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Oct 1 '05 #46
On Sat, 1 Oct 2005 14:57:34 -0400, "P.J. Plauger" <pj*@dinkumware.com> wrote:

I do concede that the two areas you highlight--default exception
handling behavior and iostreams--contribute to significant data and
code bloat in C++ programs and must be aggressively managed for
efficiency-critical code.


Yep. And it's the "aggressively managed" part that puts the lie
to the opinion that prompted my first posting:

: "Mabden" <mabden@sbc_global.net> wrote in message
: news:q5***************@newssvr11.news.prodigy.com. ..
: ...
: Anything you believe about "hidden costs" are similar to your beliefs
: about UFOs and gods. Ie: non-existent.

The hidden costs are real if you have to work hard to drag
them out of hiding before you can exorcise them.


It's obvious that we're not going to convince each other in this matter.


-dr
Oct 2 '05 #47
"Dave Rahardja" <as*@me.com> wrote in message
news:mh********************************@4ax.com...
On Sat, 1 Oct 2005 14:57:34 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:

I do concede that the two areas you highlight--default exception
handling behavior and iostreams--contribute to significant data and
code bloat in C++ programs and must be aggressively managed for
efficiency-critical code.


Yep. And it's the "aggressively managed" part that puts the lie
to the opinion that prompted my first posting:

: "Mabden" <mabden@sbc_global.net> wrote in message
: news:q5***************@newssvr11.news.prodigy.com. ..
: ...
: Anything you believe about "hidden costs" are similar to your beliefs
: about UFOs and gods. Ie: non-existent.

The hidden costs are real if you have to work hard to drag
them out of hiding before you can exorcise them.


It's obvious that we're not going to convince each other in this matter.


There exists at least one implementation where, with
"aggressively managed" code, you can reduce exception
overhead to zero.

Therefore, the "hidden costs" of using C++ instead of C
are nonexistent.

If you're convinced that this is a logically tight
argument, then you're right, I'll never convince
you otherwise. Please look up the definition of
zealotry.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Oct 2 '05 #48
On Sun, 2 Oct 2005 09:42:56 -0400, "P.J. Plauger" <pj*@dinkumware.com> wrote:
It's obvious that we're not going to convince each other in this matter.


There exists at least one implementation where, with
"aggressively managed" code, you can reduce exception
overhead to zero.

Therefore, the "hidden costs" of using C++ instead of C
are nonexistent.

If you're convinced that this is a logically tight
argument, then you're right, I'll never convince
you otherwise. Please look up the definition of
zealotry.


PJ, please stop putting words in my mouth, get off your high horse about
zealotry, and listen for a change. Instead of that asinine sequence of
arguments that you wrote, what I said was:

- C++ has features "turned on" by default that _do_ add inefficiencies in the
code. These inefficiencies afford features such as exception handling that do
not exist in C without manual addition.

- These features can be turned off by conscientiously managing your code (or
even managing your compiler switches), thus allowing your C++ code to match
the efficiency of C.

Notice that nowhere in my statements have I said that C++ overhead is
"nonexistent". I said that C++ code is not /necessarily/ less efficient than
C.

We are always dependent on compiler implementation for the proper generation
of our code. Just because _not all_ compilers support a standard construct
doesn't mean that the construct is worthless, or not a good practice. As far
as I can tell, the MSVC and g++ compilers (as well as my embedded compiler,
Diab) support the suppression of exception handling via the throw()
specification. That seems to be a large enough base of compiler installations
to me. How large of a base would you like to see before a compiler's
implementation becomes more than a curiosity in your eyes?

Having said all that, in most applications the overhead caused by leaving the
C++ features "on" is not significant. Where they do count, such as in the
real-time embedded world that I live in, aggressive management of code is par
for the course, not just for C++ overhead reasons, but for other efficiency
reasons such as type conversions, word sizes, and hardware considerations.

Look: I understand your arguments and agree with most of them--and it seems
that the converse is true as well. Our differences lie in the extent to which
we are willing to advocate code management to (re)gain efficiencies in C++. If
that makes you want to use a dismissive label like "zealot" on me, then I at
least claim to be a _practical_ zealot, because I practice just that sort of
"zealotry" in efficiency-sensitive portions of my code every day, with
positive results. Your mileage may vary.

-dr
Oct 2 '05 #49
"Dave Rahardja" <as*@me.com> wrote in message
news:1v********************************@4ax.com...
On Sun, 2 Oct 2005 09:42:56 -0400, "P.J. Plauger" <pj*@dinkumware.com>
wrote:
> It's obvious that we're not going to convince each other in this matter.
There exists at least one implementation where, with
"aggressively managed" code, you can reduce exception
overhead to zero.

Therefore, the "hidden costs" of using C++ instead of C
are nonexistent.

If you're convinced that this is a logically tight
argument, then you're right, I'll never convince
you otherwise. Please look up the definition of
zealotry.


PJ, please stop putting words in my mouth, get off your high horse about
zealotry, and listen for a change. Instead of that asinine sequence of
arguments that you wrote, what I said was:

- C++ has features "turned on" by default that _do_ add inefficiencies
in the code. These inefficiencies afford features such as exception
handling that do not exist in C without manual addition.

- These features can be turned off by conscientiously managing your
code (or even managing your compiler switches), thus allowing your C++
code to match the efficiency of C.


And that's where I part company with your quibble. There's
nothing in the C++ Standard that requires the kind of
efficiencies you claim, and in fact not all compilers
provide them. There's an important missing qualifier here.
Notice that nowhere in my statements have I said that C++ overhead is
"nonexistent".
But that indeed was what was asserted at the beginning of
this thread, and defended in various ways by you and others.
Your first post emphasized the phrase "exactly zero" three
times, while carefully decorating it with qualifiers.
There are important *disguised* qualifiers there.
I said that C++ code is not /necessarily/ less efficient than C.
And I've agreed that, if you work at it and if you have a
suitable compiler, you can achieve that goal for some
class of programs.
We are always dependent on compiler implementation for the proper
generation of our code. Just because _not all_ compilers support a
standard construct doesn't mean that the construct is worthless, or not
a good practice.
And I didn't say that. It's not a worthless practice; it can
be "good" if carefully enforced.
As far as I can tell, the MSVC and g++ compilers (as well as my
embedded compiler, Diab) support the suppression of exception handling
via the throw() specification. That seems to be a large enough base of
compiler installations to me. How large of a base would you like to see
before a compiler's implementation becomes more than a curiosity in
your eyes?
My experiments with gcc are not as unequivocally positive as
yours. Nor is it true of all the embedded compilers I've
encountered. And I've never dubbed the "throw()" practice a
mere curiosity. I observed instead that:

a) It violates the principle that those who know nothing about
a feature (exceptions in this case) don't pay for it.

b) It's a fragile programming technique to require widespread
decoration of function declarations.

c) Even then, it's not universally guaranteed to eliminate
*all* space and time overheads with all compilers.

Those are important, practical, qualifiers.
Having said all that, in most applications the overhead caused by
leaving the C++ features "on" is not significant.
And I pointed that out in my first posting.
Where they do count, such as in the real-time embedded world that I
live in, aggressive management of code is par for the course, not just
for C++ overhead reasons, but for other efficiency reasons such as type
conversions, word sizes, and hardware considerations.
Well, there's aggressive and there's aggressive, in my book.
Doing the obvious to save time and space is par for the
course. Doing magic by rote is fragile.
Look: I understand your arguments and agree with most of them--and it
seems that the converse is true as well. Our differences lie in the
extent to which we are willing to advocate code management to (re)gain
efficiencies in C++. If that makes you want to use a dismissive label
like "zealot" on me, then I at least claim to be a _practical_ zealot,
because I practice just that sort of "zealotry" in efficiency-sensitive
portions of my code every day, with positive results. Your mileage may
vary.


My charge of zealotry is to the OP, who proclaimed that such
overheads are nonexistent. But I have almost as much problem
with apologists who seem to say, "Well, he's not completely
crazy *all* the time." (This brouhaha reminds me of the
elderly lady who calls the cops, complaining that a man
across the street is behaving indecently before his window.
The cop looks and sees nothing. "Stand on the bed!" she
insists.)

The fundamental issue is whether programming in C++ yields
programs that are larger and slower than comparable programs
written in C. In my extensive experience, the simple answer
is yes. The most prudent course for a project manager is to
assume 5 to 15 per cent overheads, and take for granted that
the benefits of C++ will justify the extra costs. I went
through a similar period with C a few decades ago, selling
it against assembly language. I agree the overheads can be
managed, often significantly reduced; and it's worth teaching
people how to do so. But pretending that the problem doesn't
exist at all does not help our credibility with a rightly
critical audience of potential new users.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Oct 2 '05 #50
