
Educated guesses for efficiency?



Commonly, people may ask a question along the lines of, "Which code
snippet is more efficient?".

If the code is anything other than assembler (e.g. C or C++), then
there's no precise answer because we don't know the instruction set of
the target system, or how the compiler will "map" each executable line of
C code to assembly instructions.

I consider myself to be fairly proficient in C, but I'll admit I know
very little about machine code, instruction sets, and the like...

When writing fully-portable code, are there any guidelines as to how to
write your code in order to make an "educated guess" as to what would be
more efficient on *most* target platforms?

For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:

*p++ = something; /* (p is a pointer variable) */

Therefore, at the time, it would have made sense to make use of "*p++" as
much as possible in the code.

So, my question is: Is there any sort of guide available on the web which
discusses constructs which should be used in fully-portable code because
they're likely to be quite efficient across the board?
--

Frederick Gotham
Jun 17 '06 #1
Frederick Gotham wrote:
Commonly, people may ask a question along the lines of, "Which code
snippet is more efficient?".

If the code is anything other than assembler (e.g. C or C++), then
there's no precise answer because we don't know the instruction set of
the target system, or how the compiler will "map" each executable line of
C code to assembly instructions.

I consider myself to be fairly proficient in C, but I'll admit I know
very little about machine code, instruction sets, and the like...

When writing fully-portable code, are there any guidelines as to how to
write your code in order to make an "educated guess" as to what would be
more efficient on *most* target platforms?

For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:

*p++ = something; /* (p is a pointer variable) */

Therefore, at the time, it would have made sense to make use of "*p++" as
much as possible in the code.

So, my question is: Is there any sort of guide available on the web which
discusses constructs which should be used in fully-portable code because
they're likely to be quite efficient across the board?


If two code snippets are strictly equivalent, neither is inherently
more efficient than the other. Today's optimizing compilers are quite
good at generating very tight and fast assembly code.

On modern workstations, most of the traditional "optimizations", such as

"replace a*2 with a << 1 because a shift is cheaper",

are no longer worthwhile. Much more important than the instructions
themselves is the data layout.

Data layout is where efficiency can be improved today. Memory runs at
more or less 400 MHz while the processor runs at 2-3 GHz, so by improving
the locality of data accesses you can gain a lot: every expensive main
memory read you avoid saves the cost of ten or more cheap one-cycle
instructions.
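
As an illustration (my sketch, not part of the original post), the two
loops below compute the same sum over the same array, but the row-major
version walks memory sequentially while the column-major version strides
across it; on most cached machines the first is several times faster:

#include <stdio.h>
#include <time.h>

#define N 2048

static double grid[N][N];

/* Row by row: consecutive accesses touch consecutive addresses,
   so each cache line fetched from main memory is fully used. */
static double sum_row_major(void)
{
    double total = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            total += grid[i][j];
    return total;
}

/* Column by column: each access jumps N * sizeof(double) bytes,
   so nearly every access misses the cache. */
static double sum_col_major(void)
{
    double total = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            total += grid[i][j];
    return total;
}

int main(void)
{
    clock_t t0 = clock();
    double a = sum_row_major();
    clock_t t1 = clock();
    double b = sum_col_major();
    clock_t t2 = clock();

    printf("row-major %.3fs  column-major %.3fs  (sums %g %g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
    return 0;
}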

Data layout is a hot research subject, and I would recommend reading the
CPU vendors' manuals on the topic.

For the AMD/Intel world, both have produced optimization manuals that
are a very instructive read.

jacob
Jun 17 '06 #2
Frederick Gotham (in Xn***************************@194.125.133.14)
said:

| So, my question is: Is there any sort of guide available on the web
| which discusses constructs which should be used in fully-portable
| code because they're likely to be quite efficient across the board?

If there is, I'd be inclined not to trust it. It's really the compiler
writers' job to turn any and all valid source code into efficient
intermediate and/or executable code. This allows the programmer to
focus on producing _valid_ source code solutions.

--
Morris Dovey
DeSoto Solar
DeSoto, Iowa USA
http://www.iedu.com/DeSoto
Jun 17 '06 #3
Frederick Gotham wrote:

For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:

*p++ = something; /* (p is a pointer variable) */

Therefore, at the time, it would have made sense to make use of "*p++" as
much as possible in the code.

Modern compilers are very good at optimizing array operations, so you
shouldn't go out of your way to use pointers. If you do use pointers,
any good compiler should be able to turn

*p = something;
++p;
into equally efficient code, provided "something" doesn't involve
conflicting operations. So the old advice to make your source code as
clear as possible to the humans who need to understand it is better than
ever. The most likely gain from the ++ operator is a saving in typing
and typos.
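
For illustration (my sketch, not from the post), here is the same copy
written in both styles; with optimization enabled, a decent modern
compiler will typically emit the same machine code for each:

#include <stddef.h>

/* Index style: the clearest statement of intent. */
void copy_index(int *dst, const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

/* Pointer style: the hand-optimized idiom of the PDP-11 era. */
void copy_pointer(int *dst, const int *src, size_t n)
{
    while (n--)
        *dst++ = *src++;
}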

A certain architecture which has had its fanatics is better suited to

*++p = something;

and the difference in performance could be detected.
Jun 17 '06 #4
Frederick Gotham wrote:
Commonly, people may ask a question along the lines of, "Which code
snippet is more efficient?".

If the code is anything other than assembler (e.g. C or C++), then
there's no precise answer because we don't know the instruction set of
the target system, or how the compiler will "map" each executable line of
C code to assembly instructions.

I consider myself to be fairly proficient in C, but I'll admit I know
very little about machine code, instruction sets, and the like...

When writing fully-portable code, are there any guidelines as to how to
write your code in order to make an "educated guess" as to what would be
more efficient on *most* target platforms?

For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:

*p++ = something; /* (p is a pointer variable) */


Some guy named Ritchie writes, in reference to the development
of C's immediate ancestor B:

Thompson went a step further by inventing the ++ and --
operators, which increment or decrement; their prefix or
postfix position determines whether the alteration occurs
before or after noting the value of the operand. [...]
People often guess that they were created to use the
auto-increment and auto-decrement address modes provided
by the DEC PDP-11 on which C and Unix first became popular.
This is historically impossible, since there was no PDP-11
when B was developed.

As for the PDP-11's addressing modes, there were indeed some
instances where the compiler could translate *p++ or *--p to a
single instruction. But it certainly could not do so in all
circumstances! The hardware's increment or decrement was by
either one or two bytes (for byte- or word-using instructions),
so other operand sizes needed additional instructions to adjust
the pointer value.

As for the larger question, there's a rather bitter thread
raging at this very moment over the question of whether micro-
optimizations of this sort are necessary nowadays, or even whether
they're truly optimizations at all. The doubters seem to be in
the majority, but can't be said to be winning -- nobody "wins" in
silly debates like that one.

Concerning the even larger question, I'd recommend reading

http://www.codeproject.com/tips/optimizationenemy.asp

for some good sense about optimization. The principal message
is simple: "Measure, measure, measure!" The corollary is also
simple: Since you can't know anything useful about performance
until you've measured the running code, micro-optimizations in
the initial development are silly.

--
Eric Sosman
es*****@acm-dot-org.invalid

Jun 17 '06 #5
Frederick Gotham said:


Commonly, people may ask a question along the lines of, "Which code
snippet is more efficient?".
The one that costs the maintainer the least time to fix.

<snip>
When writing fully-portable code, are there any guidelines as to how to
write your code in order to make an "educated guess" as to what would be
more efficient on *most* target platforms?
Sure.

1) Use good algorithms.
2) Write clear code.
3) All code should either do something good or stop something bad happening.
4) If, as you write a section of code, a smile comes over your face and you
think "hey, this is way cool", it probably needs re-writing.
For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:


Who cares about a single instruction? If you're interested in efficiency, go
for macro-efficiencies. Maybe x = -x is a nanosecond quicker than x *= -1,
and maybe it isn't, but we know for sure that a binary search is vastly
quicker than a linear search in almost all circumstances.
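
To make that concrete, here is a minimal sketch (mine, not from the
post) of a binary search over a sorted array: on a million elements it
needs about twenty comparisons, where a linear scan averages half a
million.

#include <stddef.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if
   it is absent.  O(log N) comparisons versus O(N) for a linear scan. */
long find_sorted(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;          /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else if (a[mid] > key)
            hi = mid;
        else
            return (long)mid;
    }
    return -1;
}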

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Jun 17 '06 #6
Richard Heathfield wrote:
4) If, as you write a section of code, a smile comes over your face and you
think "hey, this is way cool", it probably needs re-writing.


I do not agree with this at all.

Sometimes I write a program I am proud of, or at least I like the way
I wrote it.

It is short, powerful, there is nothing too much, nothing missing.

Programming is fun.

Yes, you can see the world as a sea of tears.

Or you can see the world as a place where you can find joy and
satisfaction with what you have done.

jacob
Jun 17 '06 #7
Richard Heathfield wrote:
Frederick Gotham said:
When writing fully-portable code, are there any guidelines as to how to
write your code in order to make an "educated guess" as to what would be
more efficient on *most* target platforms?
Sure.

1) Use good algorithms.


Yes.
2) Write clear code.
Yes.
3) All code should either do something good or stop something bad happening.
Depends on your definition of good and bad.
4) If, as you write a section of code, a smile comes over your face and you
think "hey, this is way cool", it probably needs re-writing.


No and yes.

I usually smile when the roles identified during specification
and design really work as intended, everything comes down to
exactly the right level of granularity and clarity, and I just
get the feeling that it will be joy to revisit the code
throughout the years.
Or if I found a clever way to break complexity without
endangering clarity and conciseness.
And I find this "way cool".

On the other hand, nifty tricks essentially coming down to
micro-optimization can give this feeling, too -- for them, I
agree with you for most applications. However, the threshold
is very different for different projects, people, departments,
etc.
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Jun 17 '06 #8
Michael Mair said:
Richard Heathfield wrote:
4) If, as you write a section of code, a smile comes over your face and
you think "hey, this is way cool", it probably needs re-writing.
No and yes.

I usually smile when the roles identified during specification
and design really work as intended, everything comes down to
exactly the right level of granularity and clarity, and I just
get the feeling that it will be joy to revisit the code
throughout the years.
Or if I found a clever way to break complexity without
endangering clarity and conciseness.
And I find this "way cool".


So do I, and that's why I included the weasel word "probably". I'm afraid
that you and I are in a minority.

On the other hand, nifty tricks essentially coming down to
micro-optimization can give this feeling, too -- for them, I
agree with you for most applications.
That's the kind of thing I was talking about, yes.

However, the threshold is very different for different projects, people,
departments, etc.


Sure. If your target machine has 256 octets of storage and you find a
micro-optimisation that reduces your object code size from 257 octets to
254, then you have every right to feel pleased.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Jun 17 '06 #9
jacob navia <ja***@jacob.remcomp.fr> writes:
Richard Heathfield wrote:
4) If, as you write a section of code, a smile comes over your face
and you think "hey, this is way cool", it probably needs re-writing.


I do not agree with this at all.

Sometimes I write a program I am proud of, or at least I like the way
I wrote it.

It is short, powerful, there is nothing too much, nothing missing.

Programming is fun.

Yes, you can see the world as a sea of tears.

Or you can see the world as a place where you can find joy and
satisfaction with what you have done.


You and Richard are talking about two very different forms of "hey,
this is way cool". If the reaction comes from having written
well-constructed and *clear* code, then it really is way cool. If it
comes from having written code that is so incredibly clever that nobody
else will ever understand it, file it away for possible submission to
the IOCCC, or delete it so it will never see the light of day.

On this point, I suspect the three of us are in complete agreement.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jun 17 '06 #10
Richard Heathfield <in*****@invalid.invalid> writes:
1) Use good algorithms.
2) Write clear code.
3) All code should either do something good or stop something bad happening.


I'm not sure I have anything worthwhile to add, but I do have an
observation. I've noticed that as I gain more and more
experience programming, the code I write gets to be simpler and
simpler.
--
Ben Pfaff
email: bl*@cs.stanford.edu
web: http://benpfaff.org
Jun 18 '06 #11
"Richard Heathfield" <in*****@invalid.invalid> wrote

1) Use good algorithms.
2) Write clear code.
3) All code should either do something good or stop something bad
happening.
4) If, as you write a section of code, a smile comes over your face and
you
think "hey, this is way cool", it probably needs re-writing.
For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:


Who cares about a single instruction? If you're interested in efficiency,
go
for macro-efficiencies. Maybe x = -x is a nanosecond quicker than x *= -1,
and maybe it isn't, but we know for sure that a binary search is vastly
quicker than a linear search in almost all circumstances.

What people always say is that algorithmic optimisation should come first.
Actually, it is comparatively rare for a program to run too slowly because
an algorithm with the wrong big-O analysis was chosen. Normally the reason
programs are inefficient is that they go through layers of indirection and
reformatting of data, because they have been written in a modular fashion.

To take a simple example

/*
calculate the standard deviation of a set of values
*/
double stdev(double *x, int N)

A perfectly sensible function. However in the real world I will usually have
a structure like

struct Monetary
{
    long dollars;
    int cents;
};

struct Employee
{
    char name[64];
    struct Monetary salary;
};

So to calculate the standard deviation I will need to call malloc() to
allocate an array of double. I then iterate through it, and call a
conversion function to express the monetary member as a double. Then I call
stdev. Then I free my array of doubles.

In fact I am unlikely to care about the cents, and no-one is likely to be
paid anything like the square root of 4 billion dollars. So I can knock up a
function to calculate salary standard deviations using mainly integer
arithmetic that will execute in a fraction of the time. Is it worth it? For
mean(), almost certainly yes. For standard deviation, possibly. For
calculating the modal value, no.
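
A rough sketch of such a routine (my illustration, assuming the struct
Employee defined above and ignoring the cents, as suggested):

#include <math.h>
#include <stddef.h>

/* Standard deviation of salaries in whole dollars: the sums are
   accumulated in integer arithmetic, and only the final division
   and square root use floating point. */
double salary_stdev(const struct Employee *e, size_t n)
{
    long long sum = 0, sumsq = 0;

    for (size_t i = 0; i < n; i++) {
        long long d = e[i].salary.dollars;
        sum += d;
        sumsq += d * d;
    }

    double mean = (double)sum / n;
    return sqrt((double)sumsq / n - mean * mean);
}
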
--
Buy my book 12 Common Atheist Arguments (refuted)
$1.25 download or $7.20 paper, available www.lulu.com/bgy1mm

Jun 18 '06 #12
Malcolm wrote:
"Richard Heathfield" <in*****@invalid.invalid> wrote
1) Use good algorithms.
2) Write clear code.
3) All code should either do something good or stop something bad
happening.
4) If, as you write a section of code, a smile comes over your face and
you
think "hey, this is way cool", it probably needs re-writing.

For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform the
following:
Who cares about a single instruction? If you're interested in efficiency,
go
for macro-efficiencies. Maybe x = -x is a nanosecond quicker than x *= -1,
and maybe it isn't, but we know for sure that a binary search is vastly
quicker than a linear search in almost all circumstances.


What people always say is that algorithmic optimisation should come first.
Actually, it is comparatively rare for a program to run too slowly because
an algorithm with the wrong big-O analysis was chosen. Normally the reason
programs are inefficient is that they go through layers of indirection and
reformatting of data, because they have been written in a modular fashion.

To take a simple example

/*
calculate the standard deviation of a set of values
*/
double stdev(double *x, int N)

A perfectly sensible function. However in the real world I will usually have
a structure like

struct Monetary
{
    long dollars;
    int cents;
};

struct Employee
{
    char name[64];
    struct Monetary salary;
};

So to calculate the standard deviation I will need to call malloc() to
allocate an array of double. I then iterate through it, and call a
conversion function to express the monetary member as a double. Then I call
stdev. Then I free my array of doubles.

In fact I am unlikely to care about the cents, and no-one is likely to be
paid anything like the square root of 4 billion dollars.


??????

The square root of 4Gig is 65,536, and that is not a very high salary
if you are talking about a yearly salary, excuse me...

So I can knock up a function to calculate salary standard deviations using mainly integer
arithmetic that will execute in a fraction of the time. Is it worth it?
This supposes that floating point operations are more expensive than
integer operations, which is no longer true on Pentium/AMD ...
For mean(), almost certainly yes. For standard deviation, possibly. For
calculating the modal value, no.


Jun 18 '06 #13
Malcolm wrote:

What people always say is that algorithmic optimisation should come first.
Actually, it is comparatively rare for a program to run too slowly because
an algorithm with the wrong big-O analysis was chosen. [...]


Doesn't jibe with my experience. People tend to do
something that appears to get the job done and passes their
tiny test cases, and then when the Real World hits it ...
The programmer has a notion of "how big" things will be, and
chooses algorithms (wisely or unwisely) based on that notion.
Then when the "size" turns out to be different (maybe the
notion was wrong to begin with, maybe the program is being
put to a new use), the programmer looks dumb for having
chosen the wrong algorithm.

At a PPOE, I once investigated a horrible performance
problem reported by an irate customer. I very soon found
that the thing was spending all its time sorting a linked
list (this part of the product was written in Lisp, so a lot
of things tended to be stored in lists). Was some strange
sort algorithm being used? No: the code was calling Lisp's
built-in sort function. Was something wrong with the sort
implementation? Not really: Some of the constant factors
could have been improved, but it was a straightforward merge
sort, O(N log N). So why was it taking so long?

The culprit turned out to be the comparison function. Its
first check was to search a completely different list for the
two items being compared; if they were both present in the
reference list, their order in that list determined their
order in the sort. Well, the reference list was approximately
the same length as the list being sorted, so each pairwise
comparison took O(N) time.

Yes, folks: The result was an O(N^2 log N) sort! This holds
the record for the worst asymptotic complexity of any sort I've
ever seen in actual production. (Things like "bogosort" are worse,
of course, but I seriously doubt anyone's ever shipped them in
a paid-for product -- I, at least, have never run across them
in "real" software.)

The scary part is that the programmer achieved this horrible
result by putting together pieces that considered individually
were quite innocuous: an O(N log N) sort utility and an O(N)
list search. It's just that he happened to assemble them in such
a way as to multiply their complexities ... And, of course, he
tested his code, probably with lots of N=2 and N=3 lists that
checked the corner cases. All the tests worked, and the code
shipped, and the customer ran it with N = a few hundred ...
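
The anti-pattern is easy to reproduce. A hypothetical C rendering of it
(my sketch, not the original Lisp) might look like this:

#include <stdlib.h>
#include <string.h>

/* Reference list that defines the desired order; roughly the same
   length as the array being sorted.  (Hypothetical names.) */
static char **reference;
static size_t reference_len;

/* O(N) lookup of an item's position in the reference list. */
static size_t rank_of(const char *item)
{
    for (size_t i = 0; i < reference_len; i++)
        if (strcmp(reference[i], item) == 0)
            return i;
    return reference_len;           /* not listed: sort it last */
}

/* qsort comparator: every call costs O(N), so the surrounding
   O(N log N) sort silently becomes O(N^2 log N). */
static int cmp_by_reference(const void *a, const void *b)
{
    size_t ra = rank_of(*(char *const *)a);
    size_t rb = rank_of(*(char *const *)b);
    return (ra > rb) - (ra < rb);
}

/* Usage: qsort(items, nitems, sizeof *items, cmp_by_reference); */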

Programmers *will* use algorithms with high complexities.
Your program maintains a little LRU cache of something or other
and when you write it you have a mental image of a table of
maybe eight items, so you use simple linear methods to search
and maintain it. Later, something changes elsewhere in the
program to increase the "pressure" on the cache, and some other
programmer changes #define CACHESIZE 8 to #define CACHESIZE 8192.
Or your program keeps lists of the students enrolled in various
courses; there are usually only a couple dozen students per class
except for a very few big lecture sessions, so again you don't
bother wheeling out a hash table. "Cannons and canaries," you
mutter to yourself while writing `for(i = 0; i < class_size; ++i)'.
And then somebody in Alumni Records thinks of a great way to use
your program, and starts by "enrolling" every living alumnus of
the University in one giant artificial "class" ...

Programmers make trade-offs when they select algorithms.
Sometimes they make good trades, sometimes they don't. Sometimes
a trade that was good at the time turns bad later; this just means
that the programmer wasn't Nostradamus.

--
Eric Sosman
es*****@acm-dot-org.invalid
Jun 18 '06 #14
Malcolm wrote:
What people always say is that algorithmic optimisation should come first.
Actually, it is comparatively rare for a program to run too slowly because
an algorithm with the wrong big-O analysis was chosen.


I have personal experience of how bad using a linear search (as opposed
to hashing) can be.

Admittedly, probably only /one/ such example, so comparatively rare ...

[It was supposed to be a hash lookup in a chained hash table. However,
it turned out that I hadn't written the hashing bit yet. The resulting
compiler took over 24 hours to compile its test data - it was run
by an interpreting bytecode VM, which introduced an additional
slowdown factor of between 10 and 100. Putting in the hashing made
a /really significant/ difference.]

--
Chris "see stupid programmer! oh. it's a mirror." Dollin
"Never ask that question!" Ambassador Kosh, /Babylon 5/

Jun 19 '06 #15
Chris Dollin wrote:
Malcolm wrote:
What people always say is that algorithmic optimisation should
come first. Actually, it is comparatively rare for a program to
run too slowly because an algorithm with the wrong big-O analysis
was chosen.


I have personal experience of how bad using a linear search (as opposed
to hashing) can be.

Admittedly, probably only /one/ such example, so comparatively rare ...

[It was supposed to be a hash lookup in a chained hash table.
However, it turned out that I hadn't written the hashing bit yet.
The resulting compiler took over 24 hours to compile its test data
- it was run by an interpreting bytecode VM, which introduced an
additional slowdown factor of between 10 and 100. Putting in the
hashing made a /really significant/ difference.]


The testsuite for my hashlib includes a truly awful hash function,
to check that the results (but not the run time) are independent.
The hash always returns 1, converting the table into an effective
linked list. See:

<http://cbfalconer.home.att.net/download/>

--
"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discover of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
Jun 19 '06 #16

Eric Sosman wrote:

Yes, folks: The result was an O(N^2 log N) sort! This holds
the record for the worst asymptotic complexity of any sort I've
ever seen in actual production. (Things like "bogosort" are worse,
of course, but I seriously doubt anyone's ever shipped them in
a paid-for product -- I, at least, have never run across them
in "real" software.)


The help program on the old PDP-10 had a sort
routine with N^3 running time.

Jun 20 '06 #17
Richard Heathfield <in*****@invalid.invalid> wrote:
For instance, I heard that on the original machine for which C was
intended, there was a single CPU instruction which could perform
the following:


Who cares about a single instruction? If you're interested in efficiency,
go for macro-efficiencies. Maybe x = -x is a nanosecond quicker than x *=
-1, and maybe it isn't, but we know for sure that a binary search is
vastly quicker than a linear search in almost all circumstances.


Do almost all circumstances have the strong potential for more than 120
things to search for? I did some tuning on a small search that was going
to be done quadrillions of times. It turned out that the linear search was
faster than binary searches (using the highest optimization level) up to
list sizes of ~120. As there is no chance we will get that high in the
foreseeable future, I just documented this fact and left the linear search
in. (Using no compiler optimization, the crossover point where binary
became superior was about 10, rather than 120.)
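
A crude way to reproduce that kind of measurement (my sketch, not the
poster's code) is to time both searches over the same small sorted
array and vary N around the suspected crossover point:

#include <stdio.h>
#include <time.h>

#define N    120
#define REPS 10000000

static int linear_find(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

static int binary_find(const int *a, int n, int key)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid;
        else                   return mid;
    }
    return -1;
}

int main(void)
{
    int a[N];
    for (int i = 0; i < N; i++)
        a[i] = 2 * i;               /* sorted data, so both searches apply */

    volatile int sink = 0;          /* keeps the calls from being optimized away */
    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        sink += linear_find(a, N, 2 * (r % N));
    clock_t t1 = clock();
    for (int r = 0; r < REPS; r++)
        sink += binary_find(a, N, 2 * (r % N));
    clock_t t2 = clock();

    printf("N=%d  linear %.2fs  binary %.2fs\n", N,
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}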

Xho

--
-------------------- http://NewsReader.Com/ --------------------
Usenet Newsgroup Service $9.95/Month 30GB
Jun 20 '06 #18
xh*****@gmail.com said:
Richard Heathfield <in*****@invalid.invalid> wrote:
> For instance, I heard that on the original machine for which C was
> intended, there was a single CPU instruction which could perform
> the following:


Who cares about a single instruction? If you're interested in efficiency,
go for macro-efficiencies. Maybe x = -x is a nanosecond quicker than x *=
-1, and maybe it isn't, but we know for sure that a binary search is
vastly quicker than a linear search in almost all circumstances.


Do almost all circumstances have the strong potential for more than 120
things to search for? I did some tuning on a small search that was going
to be done quadrillions of times. It turned out that the linear search
was faster than binary searches (using the highest optimization level)
up to
list sizes of ~120. As there is no chance we will get that high in the
foreseeable future, I just documented this fact and left the linear search
in. (Using no compiler optimization, the crossover point where binary
became superior was about 10, rather than 120.)


120 is a pretty small list. A linear search will take, on average, sixty
comparisons, compared to six or seven comparisons for a binary search. If
your comparisons are very, very cheap and your divide-by-two op very, very
expensive, I can just about see how there *might* be a case here, given the
simplicity of the linear search code. But I doubt very much whether it
would scale much higher, and frankly I'm surprised it scaled as high as it
did. I guess it's a tribute to the optimiser, if anything.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Jun 20 '06 #19
On Sat, 17 Jun 2006 14:03:26 -0400, Eric Sosman
<es*****@acm-dot-org.invalid> wrote:
<snip>
As for the PDP-11's addressing modes, there were indeed some
instances where the compiler could translate *p++ or *--p to a
single instruction. But it certainly could not do so in all
circumstances! The hardware's increment or decrement was by
either one or two bytes (for byte- or word-using instructions),
so other operand sizes needed additional instructions to adjust
the pointer value.
And in those other cases the *p part (fetch, store, or modify)
couldn't be a single instruction anyway.

Tiny nit: -11 autoinc/dec is 2 for word instructions always, and 1 for
byte instructions _except_ if the register is 6 (SP) or 7 (PC) in
which case it is forced back to 2 to keep those word-aligned.
As for the larger question, there's a rather bitter thread
raging at this very moment over the question of whether micro-
optimizations of this sort are necessary nowadays, or even whether
they're truly optimizations at all. The doubters seem to be in
the majority, but can't be said to be winning -- nobody "wins" in
silly debates like that one.

Hear hear!
- David.Thompson1 at worldnet.att.net
Jun 26 '06 #20
