Bytes | Software Development & Data Engineering Community

Reducing build-times for large projects.

I have read item 26 of "Exceptional C++" that explains a way of minimising
compile-time dependencies and, therefore, a way to improve compile speeds.
The procedure involves replacing #include directives with forward
declarations from within header files.

However, a potentially large number of source files may have previously
relied on the removed #include directives being present. Therefore it is
likely that a large number of source files will not compile as a result of
the modification and the "missing" #include directives must be inserted into
each of them. This is a pain.

Here is an example. The source file Z.cpp includes Y.h. The function defined
in Y.h is inlined.

// X.h
#include <iostream>
struct X { void f() { std::cout << "X::f" << std::endl; } };

// Y.h
#include "X.h"
inline void y(X x) { x.f(); }

// Z.cpp
#include "Y.h"
int main()
{
y(X());
return 0;
}

When we use a forward declaration to remove the dependency of Y.h on X.h, we
see that Z.cpp must include X.h because main calls X::X(). So Z.cpp compiles
no faster than before. We might as well have left the dependency in
place and saved ourselves all the extra work.

// X.h (as before)

// Y.h
class X;
void y(X x);

// Y.cpp
#include "X.h"
void y(X x) { x.f(); }

// Z.cpp
#include "Y.h"
#include "X.h"
// (rest as before)

The example shows that it doesn't always pay to minimise compile-time
dependencies. Here is the question. How do you make a judgement on what
dependencies should (or should not) be removed? Any thoughts are
appreciated.
Jul 23 '05 #1
I think you have the concept of forward declaration wrong. The idea is

____________
// X.h
class X{};

//Y.h
#include "X.h"

void foo( X* t);
______________

can be replaced with

____________
// X.h
class X{};

//Y.h
class X;

void foo( X* t);
______________

This is because Y doesn't need to know the layout of class X. But in the
example below you cannot forward declare X, as foo passes X by value. If
you do what you have done, you will have to include X.h before Y.h in
every cpp file. This is a bad idea on a large project, as people would
have to remember what order the headers need to be included in.

_________________
// X.h
class X{};

//Y.h
#include "X.h"

void foo( X t);
______________
If you want to speed up compilation, see if your compiler supports some
sort of precompiled headers and use that. Don't use too many inline
functions. With regular functions, only a prototype change triggers a
rebuild, but with inline functions even changing the function body will
trigger rebuilds.

Raj

Jul 23 '05 #2
Jason Heyes wrote:
However, a potentially large number of source files may have previously
relied on the removed #include directives being present. Therefore it is
likely that a large number of source files will not compile as a result of
the modification and the "missing" #include directives must be inserted into
each of them. This is a pain.
A pain, maybe, but good software IMHO.

In big projects you absolutely must, must reduce coupling or chaining of
header files whenever and where ever possible. Sure, the source files
that need to see the full declarations will have to include the header
files that they need. This is a good thing - it lets you know what
they're grabbing right there - without having to go grovelling through a
zillion chained header files.

Also, it *will* save you time on compiles of source files that do not
need the full declarations. Since the headers they include will be
simpler, not chained to a bunch of extraneous headers, they will compile
faster.

Precompiled headers and small to modest sized projects lull far too many
programmers into lazy practices. Then you hit a project with a few
million SLOC with a few thousand source files and all of a sudden it's
taking hours to build... :-(
The example shows that it doesn't always pay to minimise compile-time
dependencies. Here is the question. How do you make a judgement on what
dependencies should (or should not) be removed? Any thoughts are
appreciated.


On the contrary, I would say that it does always pay to minimize
compile-time dependencies. Even if you don't see an immediate or
impressive reduction in compile times. It makes good sense, and it
reduces coupling between files and compilation units.

Jul 23 '05 #3
"Phil Staite" <ph**@nospam.com> wrote in message
news:42**************@nospam.com...
Jason Heyes wrote:
The example shows that it doesn't always pay to minimise compile-time
dependencies. Here is the question. How do you make a judgement on what
dependencies should (or should not) be removed? Any thoughts are
appreciated.


On the contrary, I would say that it does always pay to minimize
compile-time dependencies. Even if you don't see an immediate or
impressive reduction in compile times. It makes good sense, and it
reduces coupling between files and compilation units.


Thanks for your excellent reply.

Inline functions go away as compile-time dependencies are minimised. Do you
see this as an issue worth worrying about? Will program performance be
compromised in favour of compilation speed?
Jul 23 '05 #4
You can do this neat inline trick.

A.h contains the class declarations:
______________________________

class A { public: void test(); };

#ifdef RELEASE
#define MYINLINE inline
#include "a.inl"
#endif

______________________________

A.inl contains all the inline function bodies:
_________________________________

MYINLINE void A::test()
{
}

_____________________________

A.cpp contains the method implementations. Include a.inl in both A.h and
A.cpp: guarded by #ifdef RELEASE in A.h (as above), and by #ifndef
RELEASE in A.cpp, with MYINLINE defined as empty there.

In debug builds a.inl actually expands in A.cpp, but in release it can
be genuinely inline. You get shorter compile times in debug builds and
better performance in release.

Jul 23 '05 #5
Jason Heyes wrote:

Inline functions go away as compile-time dependencies are minimised. Do you
see this as an issue worth worrying about? Will program performance be
compromised in favour of compilation speed?


Actually, there's some evidence that inline functions can *hurt* program
performance. Consider, any reasonably fast CPU will have a clock speed
far in excess of main memory's ability to keep up. Hence instruction
cache is king in terms of performance. Cache "misses" mean wait states
while awaiting a fetch from main memory.

Now consider a reasonably heavily used, short function. (eg. accessor
functions for protected/private data come to mind) Sounds like a great
candidate for inlining, right? But inlining puts a copy of that code
(admittedly small) at N different addresses in the executable code.
Whereas a non-inlined function has one copy, at one address. So if the
cache is big enough, and the program stays local enough, the non-inlined
copy may stay in cache most or all of the time. Meanwhile, the inlined
code, being at unique addresses, must be fetched each time it is needed
from main memory.

Depending on a lot of other factors, it is not beyond the realm of
possibility that the non-inlined code is faster, since a cache hit for
the code would outweigh the penalty of a call/return. While the penalty
for a cache miss might outweigh the benefit of avoiding a call/return.

The upshot of this little thought experiment is the same old song:
Don't ever assume X is better than Y in terms of performance. Always,
always always test to see the real story on your cpu, OS, compiler,
program, and data set... The corollary is, make it work first, then
make it work fast. Of course that's not a license to choose explicitly
stupid algorithms, data structures, and program structure. ;-)

Jul 23 '05 #6
"Phil Staite" <ph**@nospam.com> wrote in message
news:42**************@nospam.com...
Jason Heyes wrote:

Inline functions go away as compile-time dependencies are minimised. Do
you see this as an issue worth worrying about? Will program performance
be compromised in favour of compilation speed?


Actually, there's some evidence that inline functions can *hurt* program
performance. Consider, any reasonably fast CPU will have a clock speed
far in excess of main memory's ability to keep up. Hence instruction
cache is king in terms of performance. Cache "misses" mean wait states
while awaiting a fetch from main memory.

Now consider a reasonably heavily used, short function. (eg. accessor
functions for protected/private data come to mind) Sounds like a great
candidate for inlining, right? But inlining puts a copy of that code
(admittedly small) at N different addresses in the executable code.
Whereas a non-inlined function has one copy, at one address. So if the
cache is big enough, and the program stays local enough, the non-inlined
copy may stay in cache most or all of the time. Meanwhile, the inlined
code, being at unique addresses, must be fetched each time it is needed
from main memory.

Depending on a lot of other factors, it is not beyond the realm of
possibility that the non-inlined code is faster, since a cache hit for the
code would outweigh the penalty of a call/return. While the penalty for a
cache miss might outweigh the benefit of avoiding a call/return.

The upshot of this little thought experiment is the same old song: Don't
ever assume X is better than Y in terms of performance. Always, always
always test to see the real story on your cpu, OS, compiler, program, and
data set... The corollary is, make it work first, then make it work fast.
Of course that's not a license to choose explicitly stupid algorithms,
data structures, and program structure. ;-)


Ok. Thanks dude.
Jul 23 '05 #7
I'm sorry, but Inline functions are loaded right along with the rest of the
code. If the line before it executed, then the very next inline code has
already been loaded.

You are correct that not loading code is faster than loading and waiting for
it to execute. The argument is like saying don't push the BP because the
actual code to push the BP has to be loaded, SO DOES THE F"N CODE TO CALL
THE FUNCTION THAT IS ALREADY IN CACHE!

And it doesn't have to load as much code inline as it does using the stack.

Loop unrolling has been going on for years, and it's a lot faster loading
that same chunk of code over and over than staying in the same spot doing
all of the compares.

If, however you don't like inline, you can also use register calling, which
the compiler can streamline to gain even more speed. The difference between
that and inline is a CALL [addr];

"Jason Heyes" <ja********@optusnet.com.au> wrote in message
news:42**********************@news.optusnet.com.au ...
"Phil Staite" <ph**@nospam.com> wrote in message
news:42**************@nospam.com...
Jason Heyes wrote:

Inline functions go away as compile-time dependencies are minimised. Do
you see this as an issue worth worrying about? Will program performance
be compromised in favour of compilation speed?


Actually, there's some evidence that inline functions can *hurt* program
performance. Consider, any reasonably fast CPU will have a clock speed
far in excess of main memory's ability to keep up. Hence instruction
cache is king in terms of performance. Cache "misses" mean wait states
while awaiting a fetch from main memory.

Now consider a reasonably heavily used, short function. (eg. accessor
functions for protected/private data come to mind) Sounds like a great
candidate for inlining, right? But inlining puts a copy of that code
(admittedly small) at N different addresses in the executable code.
Whereas a non-inlined function has one copy, at one address. So if the
cache is big enough, and the program stays local enough, the non-inlined
copy may stay in cache most or all of the time. Meanwhile, the inlined
code, being at unique addresses, must be fetched each time it is needed
from main memory.

Depending on a lot of other factors, it is not beyond the realm of
possibility that the non-inlined code is faster, since a cache hit for
the code would outweigh the penalty of a call/return. While the penalty
for a cache miss might outweigh the benefit of avoiding a call/return.

The upshot of this little thought experiment is the same old song: Don't
ever assume X is better than Y in terms of performance. Always, always
always test to see the real story on your cpu, OS, compiler, program, and
data set... The corollary is, make it work first, then make it work
fast. Of course that's not a license to choose explicitly stupid
algorithms, data structures, and program structure. ;-)


Ok. Thanks dude.

Jul 23 '05 #8
DHOLLINGSWORTH2 wrote:
I'm sorry, but Inline functions are loaded right along with the rest of the
code. If the line before it executed, then the very next inline code has
already been loaded.
Maybe, if your architecture does speculative execution and fetch. It
would also need a deep enough pipeline to see that it needs to do that
fetch far enough in advance to get it done in time. It would also
depend on the nature of the instructions in front of it, and how long it
took the cpu to chew through them. Otherwise you're looking at a stall.

You are correct that not loading code is faster than loading and waiting for
it to execute. The argument is like saying don't push the BP because the
actual code to push the BP has to be loaded, SO DOES THE F"N CODE TO CALL
THE FUNCTION THAT IS ALREADY IN CACHE!
We're talking about the code for the function, not the code that leads
up to the function. You either have multiple calls to one set of
instructions, or just that set of instructions plonked down in the
instruction stream in multiple places. (ie you've replaced your calls
with the code)

In the non-inlined case you may get lucky and only fetch the call
instruction from main memory, then hit in the cache for the actual
function's instructions.

In the case of inlined code, you may get lucky and hit the cache for the
inlined code too - just as in the non-inlined case.

However, I would contend that it is simple probability that if there is
only one copy of the code used by all the places that reference it, that
is far more likely to be in cache than one particular instance out of
many inlined copies. Also that one copy of the function's code is less
likely to push other things out of the cache (that you may later want
back) than N copies of the same code, from different addresses.
Loop unrolling has been going on for years, and it's a lot faster loading
that same chunk of code over and over than staying in the same spot doing
all of the compares.


Loop unrolling "works" because you get to do N iterations worth of the
loop code without hitting a conditional/branch. This reduces the per
iteration "overhead" of the loop. It also helps in that speculative
execution past a conditional/branch can be problematic. So the more
work you can get done between those points the better. At some point
though you hit diminishing returns on loop unrolling. You still have to
load N times as much code the first time through the loop. Generally it
all fits in cache so subsequent iterations are free in that respect.
However, you can have too much of a good thing. At some point you're
pushing enough instructions out of cache to make room for your unrolled
loop that it comes back and bites you. IIRC most compilers limit loop
unrolling to the low single digits as far as the number of copies they make.
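As a concrete sketch of the idea (unroll factor and names invented; modern compilers do this automatically, so this is illustration, not advice):

```cpp
// Rolled version: one compare-and-branch per element.
int sumRolled(const int* a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Unrolled by 4: one compare-and-branch per four elements, at the
// cost of four copies of the loop body in the instruction stream.
// Assumes n is a multiple of 4 to keep the sketch short.
int sumUnrolled4(const int* a, int n) {
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}
```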
But this whole discussion is just barely anchored in reality. An awful
lot depends on real world parameters that vary widely and wildly from
system to system, program to program. I was just trying to point out to
the OP that when testing reveals something that makes you say "Oh
{expletive}! How'd that happen?" you've got to be ready to challenge
your assumptions. One common assumption is that inlining improves
execution speed. It may, but then again, it may not.
Jul 23 '05 #9
Jason Heyes wrote:
I have read item 26 of "Exceptional C++" that explains a way of minimising compile-time dependencies and, therefore, a way to improve compile speeds. The procedure involves replacing #include directives with forward
declarations from within header files.
I've not read "Exceptional C++", only "More Exceptional C++", so can't
comment on the specifics.

[snip]
// X.h (as before)

// Y.h
class X;
void y(X x);

// Y.cpp
#include "X.h"
void y(X x) { x.f(); }

// Z.cpp
#include "Y.h"
#include "X.h"
// (rest as before)

The example shows that it doesn't always pay to minimise compile-time dependencies. Here is the question. How do you make a judgement on what dependencies should (or should not) be removed? Any thoughts are
appreciated.


I assume that there are typos in the code above, as you can't forward
declare classes that are later passed/used by value, i.e.:

// Y.h
class X;
void y(X x);

=>

class X;
void y(X* x); // or X& or X const&

That aside, I'm a bit surprised (if that really is the case) that
classes used in _public_ interfaces (including protected) are only
forward declared. All header files should be self-contained, and not
rely on users first including e.g. the declaration of X.

IMHO, you should limit yourself to forward declare classes used in the
implementation only, e.g. private members, parameters/return values
to/from private methods. If you do that, you will only have to include
the declaration from the implementation file of the containing class
(y.cpp in the above case).

Regards // Johan

Jul 23 '05 #10
I would have to agree with you when we are talking about functions of
considerable size. I immediately thought of inline as being for functions
whose size is comparable to that of a function call, 2 to 3 times the
size of the overhead.

But yes, the bigger the inline function, the greater the chance of a
problem occurring.

Do you know how the multi-core tech works?
Do they each have a dual pipeline, or something better?
If it stalls, does each pipeline need to re-up, or do they run strictly
independent?

"Phil Staite" <ph**@nospam.com> wrote in message
news:ns********************@pcisys.net...
DHOLLINGSWORTH2 wrote:
I'm sorry, but Inline functions are loaded right along with the rest of
the code. If the line before it executed, then the very next inline code
has already been loaded.


Maybe, if your architecture does speculative execution and fetch. It
would also need a deep enough pipeline to see that it needs to do that
fetch far enough in advance to get it done in time. It would also depend
on the nature of the instructions in front of it, and how long it took the
cpu to chew through them. Otherwise you're looking at a stall.

You are correct that not loading code is faster than loading and waiting
for it to execute. The argument is like saying don't push the BP because
the actual code to push the BP has to be loaded, SO DOES THE F"N CODE TO
CALL THE FUNCTION THAT IS ALREADY IN CACHE!


We're talking about the code for the function, not the code that leads up
to the function. You either have multiple calls to one set of
instructions, or just that set of instructions plonked down in the
instruction stream in multiple places. (ie you've replaced your calls with
the code)

In the non-inlined case you may get lucky and only fetch the call
instruction from main memory, then hit in the cache for the actual
function's instructions.

In the case of inlined code, you may get lucky and hit the cache for the
inlined code too - just as in the non-inlined case.

However, I would contend that it is simple probability that if there is
only one copy of the code used by all the places that reference it, that
is far more likely to be in cache than one particular instance out of many
inlined copies. Also that one copy of the function's code is less likely
to push other things out of the cache (that you may later want back) than
N copies of the same code, from different addresses.
Loop unrolling has been going on for years, and it's a lot faster loading
that same chunk of code over and over than staying in the same spot doing
all of the compares.


Loop unrolling "works" because you get to do N iterations worth of the
loop code without hitting a conditional/branch. This reduces the per
iteration "overhead" of the loop. It also helps in that speculative
execution past a conditional/branch can be problematic. So the more work
you can get done between those points the better. At some point though
you hit diminishing returns on loop unrolling. You still have to load N
times as much code the first time through the loop. Generally it all fits
in cache so subsequent iterations are free in that respect. However, you
can have too much of a good thing. At some point you're pushing enough
instructions out of cache to make room for your unrolled loop that it
comes back and bites you. IIRC most compilers limit loop unrolling to the
low single digits as far as the number of copies they make.
But this whole discussion is just barely anchored in reality. An awful
lot depends on real world parameters that vary widely and wildly from
system to system, program to program. I was just trying to point out to
the OP that when testing reveals something that makes you say "Oh
{expletive}! How'd that happen?" you've got to be ready to challenge your
assumptions. One common assumption is that inlining improves execution
speed. It may, but then again, it may not.

Jul 23 '05 #11
Phil Staite <ph**@nospam.com> wrote in news:42**************@nospam.com:
Jason Heyes wrote:

Inline functions go away as compile-time dependencies are minimised.
Do you see this as an issue worth worrying about? Will program
performance be compromised in favour of compilation speed?
Actually, there's some evidence that inline functions can *hurt*
program performance. Consider, any reasonably fast CPU will have a
clock speed far in excess of main memory's ability to keep up. Hence
instruction cache is king in terms of performance. Cache "misses"
mean wait states while awaiting a fetch from main memory.

Now consider a reasonably heavily used, short function. (eg. accessor
functions for protected/private data come to mind) Sounds like a
great candidate for inlining, right? But inlining puts a copy of that
code (admittedly small) at N different addresses in the executable
code. Whereas a non-inlined function has one copy, at one address. So
if the cache is big enough, and the program stays local enough, the
non-inlined copy may stay in cache most or all of the time.
Meanwhile, the inlined code, being at unique addresses, must be
fetched each time it is needed from main memory.


You'll need a bigger function than simple accessors to trigger that
behaviour. On a simple enough accessor, the entire function call setup
and teardown code may be larger than the actual accessor code. So even
with your argument, inlining would make the code _smaller_ and not
bigger. This is the danger of making generalizations.......
Depending on a lot of other factors, it is not beyond the realm of
possibility that the non-inlined code is faster, since a cache hit for
the code would outweigh the penalty of a call/return. While the
penalty for a cache miss might outweigh the benefit of avoiding a
call/return.

The upshot of this little thought experiment is the same old song:
Don't ever assume X is better than Y in terms of performance. Always,
always always test to see the real story on your cpu, OS, compiler,
program, and data set... The corollary is, make it work first, then
make it work fast. Of course that's not a license to choose
explicitly stupid algorithms, data structures, and program structure.


You betcha! Measure first, then optimize.....
Jul 23 '05 #12
Phil Staite <ph**@nospam.com> wrote in
news:ns********************@pcisys.net:
DHOLLINGSWORTH2 wrote:
You are correct that not loading code is faster than loading and
waiting for it to execute. The argument is like saying don't push the
BP because the actual code to push the BP has to be loaded, SO DOES
THE F"N CODE TO CALL THE FUNCTION THAT IS ALREADY IN CACHE!


We're talking about the code for the function, not the code that leads
up to the function. You either have multiple calls to one set of
instructions, or just that set of instructions plonked down in the
instruction stream in multiple places. (ie you've replaced your calls
with the code)


However, by "plonking down" the code, you give the optimizer more to work
with....
In the non-inlined case you may get lucky and only fetch the call
instruction from main memory, then hit in the cache for the actual
function's instructions.

In the case of inlined code, you may get lucky and hit the cache for
the inlined code too - just as in the non-inlined case.

However, I would contend that it is simple probability that if there
is only one copy of the code used by all the places that reference it,
that is far more likely to be in cache than one particular instance
out of many inlined copies. Also that one copy of the function's code
is less likely to push other things out of the cache (that you may
later want back) than N copies of the same code, from different
addresses.
Except (as I mentioned in another post) if your function setup and
teardown code is larger than the actual code within the inlined function,
then you are increasing the code size at the call point and thus
increasing the probabilities of your other function being pushed out of
cache (hey... if we're talking probabilities....)

Or... if your inlined function is executed within a loop. Let's assume
something along the lines of: for (int i = 0; i < 1000000; ++i) someFn(i);

If someFn() isn't inlined, then you'd have to pay for the 1000000
call/return to someFn. If someFn is inlined, then perhaps the first 90%
of someFn works out to be a loop invariant and the optimizer pulls all of
that code out in front of the loop, and only executes the remaining 10%
of the code 1000000 times. (Of course this depends on your optimizer)
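That loop-invariant effect can be sketched like so (someFn's body is invented for illustration; whether the hoisting actually happens is entirely up to the optimizer):

```cpp
// When someFn is visible for inlining, the optimizer can see that
// 'base' does not depend on i and hoist its computation out of the
// loop; with an opaque, out-of-line call it cannot.
inline long long someFn(int i) {
    long long base = 10 * 10 * 10;   // the loop-invariant part
    return base + i;                 // only this varies per iteration
}

long long runLoop() {
    long long total = 0;
    for (int i = 0; i < 1000000; ++i)
        total += someFn(i);
    return total;
}
```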

[snip]
But this whole discussion is just barely anchored in reality. An
awful lot depends on real world parameters that vary widely and wildy
from system to system, program to program. I was just trying to point
out to the OP that when testing reveals something that makes you say
"Oh {explative}! How'd that happen?" you've got to be ready to
challenge your assumptions. One common assumption is that inlining
improves execution speed. It may, but then again, it may not.


Next common assumption... marking a function as inline doesn't
necessarily mean that it will be inlined.

As usual, beware of making sweeping generalizations.
Jul 23 '05 #13
DHOLLINGSWORTH2 wrote:
I would have to agree with you when we are talking about functions of
considerable size. I immediately thought of inline as being for functions
whose size is comparable to that of a function call, 2 to 3 times the
size of the overhead.

But yes, the bigger the inline function, the greater the chance of a
problem occurring.
Yes, the smaller the code for the function(s), the less likely you are
to run into performance problems, inlined or not. ;-) But yeah, for
simple accessors of say integral types, you may be looking at simple
register loads where the inlined code is the same size as a call/return
would be to even get at the outlined code. In those cases it makes
absolutely no sense *not* to inline.
Do you know how the multi-core tech works?
Do they each have a dual pipeline, or something better?
If it stalls, does each pipeline need to re-up, or do they run strictly
independent?


I've only read a little bit about the dual cores. The one guy's take on
it was that they still had some bottlenecks - IIRC a single cache, and
some other shared on-chip resources. :-( I like the idea though, since
we're probably stuck with about the same clock speeds we have now for
the next several years. So we're going to have to get more done with
the same GHz. Fine by me, I love multithreaded programming. And at my
day job I get to play with a 128 CPU machine. :-) Who needs dual core
when you have lots of CPUs?
Jul 23 '05 #14
>> Do you know how the multi-core tech works?
Do they each have a dual pipeline, or something better?
If it stalls, does each pipeline need to re-up, or do they run strictly
independent?


I've only read a little bit about the dual cores. The one guy's take on
it was that they still had some bottlenecks - IIRC a single cache, and
some other shared on-chip resources. :-( I like the idea though, since
we're probably stuck with about the same clock speeds we have now for the
next several years. So we're going to have to get more done with the same
GHz. Fine by me, I love multithreaded programming. And at my day job I
get to play with a 128 CPU machine. :-) Who needs dual core when you have
lots of CPUs?


My concern is this:
We already have a memory bus bottleneck, and with multiple cores accessing
the same RAM, that will tend to make the bottleneck worse. They are going
to need loads of internal cache to keep them all running smoothly.

And I'm willing to bet you'll see a lot more problems like the inline one.
Jul 23 '05 #15
