
low-level, ugly pointer arithmetics


Hi!

I'm trying to squeeze a few clock cycles from a tight
loop that profiling shows to be a bottleneck in my program.
I'm at a point where the only thing that matters is
execution speed, not style (within the loop, obviously).

The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}

Now... I'm trying to save a few cycles by doing something
along the lines of

if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

the trouble is that I'm working with a pointer to int
and I want to store a double and, later on, advance the
pointer not by sizeof(int) bytes, but by sizeof(double)
bytes. On my system sizeof(int) is 4, sizeof(double) is 8
and portability is not an issue.

I tried

*((reinterpret_cast<double*>(hashtable_ptr))++) = some_double0;

but I got an error:

"the result of this cast cannot be used as an lvalue".

Why's that? I really need to force the compiler to treat
this pointer as a double* for a moment... I realize I can
have two pointers to the same hashtable entry, one an int*
and one a double*, but I need to save every clock cycle I
can as this loop is executed trillions of times.

What's the usual procedure to iterate a pointer through
members of varying types? Perhaps it would be easier with
a char*, but that means advancing by 4 and 8 which,
I suppose, would be slower than plain ++? Or does it boil
down to moving by 4 or 8 offsets at the assembly level too?

thanks in advance,
- J.

PS. Are pointers to different datatypes guaranteed to
be of the same size? If not, then perhaps I need
an assert here and there...
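
Something like this is roughly what I mean by the assert (just a sketch):

#include <cassert>

// Sketch of the sanity check: bail out early if the size assumptions
// this code relies on ever stop holding on a new machine.
void check_size_assumptions()
{
    assert(sizeof(int) == 4);
    assert(sizeof(double) == 8);
    assert(sizeof(int*) == sizeof(double*));
}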
Sep 7 '06 #1
Jacek Dziedzic wrote:
Hi!

I'm trying to squeeze a few clock cycles from a tight
loop that profiling shows to be a bottleneck in my program.
I'm at a point where the only thing that matters is
execution speed, not style (within the loop, obviously).

The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}

Now... I'm trying to save a few cycles by doing something
along the lines of

if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}
Accessing a double through a pointer to int causes undefined behavior.
In any case, unless you have a 10+ year old compiler it is almost
certain the compiler has done this optimization already (or a better
one even). If you're at this level of optimization then you should
probably look at the disassembly of the code to see what the compiler
generates and/or write your own assembly routine. Many processors have
vector instructions or similar stuff that could help you here.

Regards,
Bart.

Sep 7 '06 #2
<snip>

Oh and one more thing. Since you just seem to be copying members of a
POD structure then memcpy() is also acceptable, and it is probably
implemented optimally for your system as well.
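
For instance, something along these lines (just a sketch; hash_t, hashtable,
index and the some_* values are the names from your original snippet):

#include <cstring>

// Sketch: build the new entry locally, then copy it into the table in one call.
// The compiler is free to turn the memcpy of a 32-byte POD into a few plain stores.
hash_t new_entry;
new_entry.id   = some_int;
new_entry.next = some_int2;
new_entry.d0   = some_double0;
new_entry.d1   = some_double1;
new_entry.d2   = some_double2;
std::memcpy(&hashtable[index], &new_entry, sizeof new_entry);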

Regards,
Bart.

Sep 7 '06 #3
Bart wrote:
[snip]
Accessing a double through a pointer to int causes undefined behavior.
But doesn't a cast alleviate this? I thought the compiler
would understand that I want to treat the binary representation
of a pointer to int like it was a pointer to double, so that
it would store the value correctly and then increment it
correctly. Won't this work?
In any case, unless you have a 10+ year old compiler it is almost
certain the compiler has done this optimization already (or a better
one even). If you're at this level of optimization then you should
probably look at the disassembly of the code to see what the compiler
generates and/or write your own assembly routine.
Yes and no. The compiler is a month-old Intel compiler,
tuned to this particular architecture (Itanium 2), so you'd
expect it to be extremely aggressive, more so since this
architecture relies heavily on the compiler doing the
optimizations.

However, even with -O3 and other aggressive optimization
options I was able to outsmart the compiler in two places
by merely converting

array[index][0]=double0;
array[index][1]=double1;
array[index][2]=double2;

into

array_ptr = array[index];
*(array_ptr++)=...;
*(array_ptr++)=...;
*(array_ptr )=...;

as proven by profiler output. Writing my own assembly
routine is out of the question -- this IA-64 assembly output
is absolutely unreadable (at least to me, I only have
experience with x86 assembly) and looks like it is
quite heavily optimized already. Still, experience of the
past three days shows that translating indexing into
pointer arithmetic somehow makes the compiler perform
better still.
Many processors have
vector instructions or similar stuff that could help you here.
Yes, this processor in particular. My teammate managed
to vectorize the pow() and sqrt() operations which were
the previous bottleneck. What appears as the next bottleneck
is cache penalties for accessing elements of a large
(~1e6 doubles) array in an almost random order, many many
times. Hence I am trying to code a hashtable that would
store the values of most recently used elements in a
smaller table that would fit within the L2 cache.
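
Roughly what I have in mind is something like this (only a sketch;
recompute_d0/d1/d2 are placeholders for the expensive accesses into the
big array, and hash_t/hashtable/HASH_MAX are as in my first post):

// Sketch of the software cache: a direct-mapped table of recently used
// entries, keyed on the element index.
inline const hash_t& cached_lookup(int element)
{
    hash_t& slot = hashtable[element % HASH_MAX];
    if (slot.id != element) {              // miss: refill this slot
        slot.id = element;
        slot.d0 = recompute_d0(element);   // placeholders for the expensive
        slot.d1 = recompute_d1(element);   // accesses into the large array
        slot.d2 = recompute_d2(element);
    }
    return slot;                           // hit, or freshly filled entry
}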

Anyway, is there _really_ no way to access elements of
a struct using a single, advancing pointer? I suspect
some controller-programming C gurus would know a way?

TIA,
- J.
Sep 7 '06 #4
"Jacek Dziedzic" <jacek@no_spam.tygrys.no_spam.netschrieb im Newsbeitrag
news:3f***************************@news.chello.pl. ..
>
Hi!

I'm trying to squeeze a few clock cycles from a tight
loop that profiling shows to be a bottleneck in my program.
I'm at a point where the only thing that matters is
execution speed, not style (within the loop, obviously).

The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}

Now... I'm trying to save a few cycles by doing something
along the lines of

if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

the trouble is that I'm working with a pointer to int
and I want to store a double and, later on, advance the
pointer not by sizeof(int) bytes, but by sizeof(double)
bytes. On my system sizeof(int) is 4, sizeof(double) is 8
and portability is not an issue.

I tried

*((reinterpret_cast<double*>(hashtable_ptr))++) = some_double0;

but I got an error:

"the result of this cast cannot be used as an lvalue".

Why's that?
The result of a cast is an rvalue, not an lvalue. Think of it as a temporary
variable. Then your statement becomes something like

double* temp = reinterpret_cast<double*>(hashtable_ptr);
*temp = some_double0;
temp++;

The problem is the final increment. It increments the temporary, not the
hashtable_ptr. To do that, you have to add an extra level of indirection

*((*reinterpret_cast<double**>(&hashtable_ptr))++) = some_double0;

But before you try such nasty code, you should try something more readable:

if (...)
{
hash_t* ptr = &hashtable[index];
ptr->id = some_int;
ptr->next = some_int2;
ptr->d0 = some_double0;
...
}

If some_int, some_int2 etc. are local variables, which are computed as part
of the function, it might also improve speed if you replace all those
variables with a single, local instance of a hash_t struct. For most
compilers, code like

hash_t data;
data.id = ...
...

should be as fast as

int id, next;
double d0, d1, ...;
id = ...
...

But you could then copy the structure as a whole, like

if (...) hashtable[index] = data;

or even use memcpy, which is still much more readable than your pointer
tricks. Such tricks often even have negative effects on speed. Measure such
"improvement" very carefully and do it for every update of your compiler.
Even a compiler will understand (and optimize) simple code much better than
tricky casts and pointer arithmetic.
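
Even something as crude as this is enough to catch a regression (a rough
sketch; variant_a and variant_b would be hypothetical functions wrapping the
two loop bodies you want to compare):

#include <ctime>

// Rough micro-benchmark sketch: CPU time for many repetitions of one variant.
template <typename F>
double seconds_for(F f, long iterations)
{
    std::clock_t start = std::clock();
    for (long i = 0; i < iterations; ++i)
        f();
    return double(std::clock() - start) / CLOCKS_PER_SEC;
}

Compare seconds_for(variant_a, N) against seconds_for(variant_b, N) with N
large enough to dominate the timer's resolution.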
I really need to force the compiler to treat
this pointer as a double* for a moment...
When you force a compiler to do something the way you want, you also force
it not to do something it could do better than you can.
I realize I can
have two pointers to the same hashtable entry, one an int*
and one a double*, but I need to save every clock cycle I
can as this loop is executed trillions of times.

What's the usual procedure to iterate a pointer through
members of varying types? Perhaps it would be easier with
a char*, but that means advancing by 4 and 8 which,
I suppose, would be slower than plain ++? Or does it boil
down to moving by 4 or 8 offsets at the assembly level too?
That depends on the hardware your program is running on. Test it! And
remember, such tests are only valid for a single version of a compiler. You
have to test it again for each and every combination of hardware, operating
system and compiler. And the results may still depend on your code.

Heinz

Sep 7 '06 #5
Bart wrote:
<snip>

Oh and one more thing. Since you just seem to be copying members of a
POD structure then memcpy() is also acceptable, and it is probably
implemented optimally for your system as well.
Again, yes and no. The two things are that:
a) the source values that need to be stored into members
of the hash_t struct do not lie at subsequent addresses,
and if things work as expected they are in registers.
b) even though I believe memcpy() to be optimized to a single
cycle, the structure is so small (32 bytes) that I doubt
it would perform well, even if it is inlined.
Unfortunately this is not a case of copying multiple
entries of an array.

I guess b) may be disputed, but a) makes this a no-no.

thanks anyway,
- J.
Sep 7 '06 #6
Jacek Dziedzic wrote:
Bart wrote:
[snip]
Accessing a double through a pointer to int causes undefined behavior.

But doesn't a cast alleviate this? I thought the compiler
would understand that I want to treat the binary representation
of a pointer to int like it was a pointer to double, so that
it would store the value correctly and then increment it
correctly. Won't this work?
The cast only shuts up the compiler. It may work on your
implementation, but I was giving a general answer.
Anyway, is there _really_ no way to access elements of
a struct using a single, advancing pointer? I suspect
some controller-programming C gurus would know a way?
You can try using a char pointer that gets incremented by sizeof(int)
and sizeof(double). That would be pretty much equivalent to an
advancing pointer, although I can't say whether it's going to yield
faster code. Try it and see.
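
Something along these lines, for example (a sketch only; the same aliasing
caveat applies, and it assumes there is no padding between the members, as
on your system):

// Sketch: walk the entry with a char*, casting at each member and stepping
// by that member's size. Assumes no padding between the struct members.
char* p = reinterpret_cast<char*>(&hashtable[index].id);
*reinterpret_cast<int*>(p)    = some_int;     p += sizeof(int);
*reinterpret_cast<int*>(p)    = some_int2;    p += sizeof(int);
*reinterpret_cast<double*>(p) = some_double0; p += sizeof(double);
*reinterpret_cast<double*>(p) = some_double1; p += sizeof(double);
*reinterpret_cast<double*>(p) = some_double2;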

Regards,
Bart.

Sep 7 '06 #7
Heinz Ozwirk wrote:
>
The result of a cast is an rvalue, not an lvalue. Think of it as a
temporary variable. Then your statement becomes something like

double* temp = reinterpret_cast<double*>(hashtable_ptr);
*temp = some_double0;
temp++;

The problem is the final increment. It increments the temporary, not the
hashtable_ptr. To do that, you have to add an extra level of indirection
Right!
>
*((*reinterpret_cast<double**>(&hashtable_ptr))++) = some_double0;
Yuck :)
But before you try such nasty code, you should try something more readable:

if (...)
{
hash_t* ptr = &hashtable[index];
ptr->id = some_int;
ptr->next = some_int2;
ptr->d0 = some_double0;
...
}
Yes, you were right! My incrementing a pointer combined with
heavy casting only made matters worse (speed-wise).
Your idea is slightly better (in terms of speed) than the
original code I posted.
If some_int, some_int2 etc. are local variables, which are computed as
part of the function, it might also improve speed if you replace all
those variables with a single, local instance of a hash_t struct. For
most compilers, code like

hash_t data;
data.id = ...
...

should be as fast as

int id, next;
double d0, d1, ...;
id = ...
...

But you could then copy the structure as a whole, like

if (...) hashtable[index] = data;

or even use memcpy, which is still much more readable than your pointer
tricks.
Unfortunately, the source part of the data copied is
in registers, so I guess memcpy is out of the question,
especially since this struct is rather small (32 bytes)
and I don't work on contiguous batches of hash_t's
in one iteration.
Such tricks often even have negative effects on speed. Measure
such "improvement" very carefully and do it for every update of your
compiler. Even a compiler will understand (and optimize) simple code
much better than tricky casts and pointer arithmetic.
It turns out you were right!
[snip]
thanks a lot,
- J.
Sep 7 '06 #8

Jacek Dziedzic wrote:
Hi!

I'm trying to squeeze a few clock cycles from a tight
loop that profiling shows to be a bottleneck in my program.
I'm at a point where the only thing that matters is
execution speed, not style (within the loop, obviously).

The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}
The compiler should do a good job of eliminating the common
subexpression hashtable[index], reducing it to a cached pointer value.
If you're worried that that doesn't happen, you can try doing that
reduction manually:

{
hash_t *tmp = &hashtable[index];
tmp->id = some_int;
tmp->next = some_int2;
...
}

That's about the best you can do.
if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}
If the processor has an indexed addressing mode with displacement for
structure access, then doing this kind of thing won't help at all. All
it means is that you can use indirect addressing to fetch the data,
with no displacement.

In some instruction sets, it might even be the same instruction, with a
value of zero in the displacement field. So all you are doing is adding
the overhead of incrementing the base address.

Sep 7 '06 #9
Jacek Dziedzic wrote:
Now... I'm trying to save a few cycles by doing something
along the lines of
(...)
bytes. On my system sizeof(int) is 4, sizeof(double) is 8
and portability is not an issue.
Save a few cycles and sacrifice portability? Write it in assembler. It will be
safer than ugly code; at least everyone who takes a look at the code will
notice that it is not portable.

--
Salu2
Sep 8 '06 #10
"Jacek Dziedzic" <jacek@no_spam.tygrys.no_spam.netwrote in message
news:3f***************************@news.chello.pl. ..
The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}

Now... I'm trying to save a few cycles by doing something
along the lines of

if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

the trouble is that I'm working with a pointer to int
and I want to store a double and, later on, advance the
pointer not by sizeof(int) bytes, but by sizeof(double)
bytes.
Do you have any evidence that successively incrementing hashtable_ptr will
be significantly faster than doing direct (constant offset) indexing? In
other words:

hash_t* hptr = &hashtable[index]; // or, equivalently, hashtable+index
hptr->id = some_int;
hptr->next = some_int2;
hptr->d0 = some_double0;

and so on.

On most computers this will generate successive instructions that put hptr
in a register and specify appropriate offsets as part of each instruction,
and the processor will probably add those offsets to the register at least
as quickly as you can recompute an appropriate value for hashtable_ptr.
Plus you don't have to use reinterpret_cast, which is almost never portable,
and you don't have to hope that the compiler won't put padding into your
structure that you didn't expect.
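
If you do want to rely on a particular layout anyway, a cheap check like this
(a sketch; it merely encodes the layout the original code already assumes)
makes that assumption explicit:

#include <cassert>
#include <cstddef>

// Sketch: verify there is no unexpected padding in hash_t before relying on it.
void check_layout()
{
    assert(offsetof(hash_t, next) == sizeof(int));
    assert(offsetof(hash_t, d0)   == 2 * sizeof(int));
    assert(sizeof(hash_t)         == 2 * sizeof(int) + 3 * sizeof(double));
}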

The main difference between my suggestion and your original version is that
I'm computing the address of hashtable[index] once, then reusing that
address. In fact, most optimizing compilers will do that for me, so it is
possible that the change will have no effect. Indeed, on many computers,
the execution time of the program fragment will depend more than anything
else on the number of memory references you make, which means that there may
not actually be anything you can do to speed up the program.
Sep 8 '06 #11
Jacek Dziedzic posted:
if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

You're probably going the wrong way about this, but nonetheless...

char *p = (char*)&hashtable[index].id;

* (int*)p = some_int; p += sizeof some_int;
* (int*)p = some_int2; p += sizeof some_int2;
*(double*)p = some_double0; p += sizeof some_double0;

--

Frederick Gotham
Sep 8 '06 #12
Heinz Ozwirk posted:
The result of a cast is an rvalue, not an lvalue.

Yes, unless you cast to a reference type.

--

Frederick Gotham
Sep 8 '06 #13
Kaz Kylheku wrote:
>
If the processor has an indexed addressing mode with displacement for
structure access, then doing this kind of thing won't help at all. All
it means is that you can use indirect addressing to fetch the data,
with no displacement.

In some instruction sets, it might even be the same instruction, with a
value of zero in the displacement field. So all you are doing is adding
the overhead of incrementing the base address.
Yes, you were right, I only got the overhead.

It's weird then that the same (or similar) trick worked for
me a few hundred lines earlier; it's got to be a compiler quirk.

thanks,
- J.
Sep 8 '06 #14
Julián Albo wrote:
Jacek Dziedzic wrote:

> Now... I'm trying to save a few cycles by doing something
along the lines of

(...)
>>bytes. On my system sizeof(int) is 4, sizeof(double) is 8
and portability is not an issue.

Save a few cycles and sacrifice portability?
I think you're a little hasty in your conclusions. I don't
"sacrifice" portability, it's just not needed here. The program
is destined to run on a massively parallel machine, and
basically only on _this_ one machine, at least for the next
five years. The few cycles may mean days of computational
time of several tens of nodes saved, as this loop iterates
_trillions_ of times. Sure, perhaps in a few years this
computer will have become outdated and the program
will have to run on another machine with sizeof(int)==32
and sizeof(double)==32, but this will be caught by an
assert in the init module that will ask somebody to fix
this. Fixing this would, in fact, be as easy as changing
an offset here or there in a clearly marked place.
Write it in assembler. It will be safer than ugly code
Recoding it in another assembly language in case of
a change of architecture won't be easy.
Furthermore I have to admit I can't code IA-64 assembly.
at least everyone who takes a look at the code will
notice that it is not portable.
Well, I think the two lines

// Warning: the following function is not portable.
// mind the offsets marked in *)

do the same job.

- J.
Sep 8 '06 #15
Andrew Koenig wrote:
[snip]
Yes, you were right.

thanks,
- J.
Sep 8 '06 #16
Frederick Gotham wrote:
Jacek Dziedzic posted:

>>if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

You're probably going the wrong way about this, but nonetheless...

char *p = (char*)&hashtable[index].id;

* (int*)p = some_int; p += sizeof some_int;
* (int*)p = some_int2; p += sizeof some_int2;
*(double*)p = some_double0; p += sizeof some_double0;
Thanks. That's the way I tried and, as others have warned me,
I only got the overhead.

- J.
Sep 8 '06 #17
Jacek Dziedzic wrote:
>Save a few cycles and sacrifice portability?
I think you're a little hasty in your conclusions. I don't
"sacrifice" portability, it's just not needed here. The program
is destined to run on a massively parallel machine, and
basically only on _this_ one machine, at least for the next
five years.
Then tell me in 2012 if you sacrificed something or not.
The few cycles may mean days of computational
time of several tens of nodes saved, as this loop iterates
_trillions_ of times.
I think you're a little hasty in your conclusions. I did not say that the
few cycles implied a little gain.
Write it in assembler. It will be safer than ugly code
Recoding it in another assembly language in case of
a change of architecture won't be easy.
I doubt that recoding obfuscated nonportable C++ will be.

--
Salu2
Sep 8 '06 #18
Jacek Dziedzic wrote:
>
Hi!

I'm trying to squeeze a few clock cycles from a tight
loop that profiling shows to be a bottleneck in my program.
I'm at a point where the only thing that matters is
execution speed, not style (within the loop, obviously).

The loop deals with an array:

// this is aligned at cache line boundaries
struct hash_t {
int id;
int next;
double d0;
double d1;
double d2;
} hashtable[HASH_MAX];

within the loop, whenever there is a hashtable miss I
need to store new values into hashtable[index].
Originally this looked like

if(...) {
hashtable[index].id=some_int;
hashtable[index].next=some_int2;
hashtable[index].d0=some_double0;
hashtable[index].d1=some_double1;
hashtable[index].d2=some_double2;
}

Now... I'm trying to save a few cycles by doing something
along the lines of

if(...) {
// point to the first member
int *hashtable_ptr = reinterpret_cast<int*>(&(hashtable[index].id));
*(hashtable_ptr++) = some_int; // store into id and move on
*(hashtable_ptr++) = some_int2; // store into next and move on
*(hashtable_ptr++) = some_double0; // << --- trouble
// ...
}

the trouble is that I'm working with a pointer to int
and I want to store a double and, later on, advance the
pointer not by sizeof(int) bytes, but by sizeof(double)
bytes. On my system sizeof(int) is 4, sizeof(double) is 8
and portability is not an issue.

I tried

*((reinterpret_cast<double*>(hashtable_ptr))++) = some_double0;

but I got an error:

"the result of this cast cannot be used as an lvalue".

Why's that? I really need to force the compiler to treat
this pointer as a double* for a moment... I realize I can
have two pointers to the same hashtable entry, one an int*
and one a double*, but I need to save every clock cycle I
can as this loop is executed trillions of times.

What's the usual procedure to iterate a pointer through
members of varying types? Perhaps it would be easier with
a char*, but that means advancing by 4 and 8 which,
I suppose, would be slower than plain ++? Or does it boil
down to moving by 4 or 8 offsets at the assembly level too?

thanks in advance,
- J.

PS. Are pointers to different datatypes guaranteed to
be of the same size? If not, then perhaps I need
an assert here and there...
1. Try keeping everything in units of "struct hash_t".
Fill in an instance before hashing. If you need to
insert into the table, just copy the item:
struct hash_t new_item;
// ...
hashtable[index] = new_item;
This has the benefit of letting the compiler figure
out the best way to copy the data. The best method
may be item by item, block/string copy, or DMA,
to cite a few. Worst case, you could call an
implementation-specific function:
implementation_memcpy(&hashtable[index],
&new_item,
sizeof(new_item));

2. Are all the items in "struct hash_t" needed to make
the hash table work?
Perhaps just the key and a pointer to data located
elsewhere. Thus only a small amount of data (the
key and pointer) is moved around in the table
(see the sketch after this list).
Most computer time is spent searching and sorting,
not on dereferencing to get at the value of a <key, value>
pair.

3. Remove operations not needed. Does the hash table
need to be used as often?
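
A sketch of what suggestion 2 could look like (the member names here are made
up, not taken from the original struct; HASH_MAX is from the original post):

// Sketch for suggestion 2: keep only the key and a pointer in the table,
// so each entry is small and many of them fit in a cache line. The three
// doubles live elsewhere and are reached through the pointer only on a hit.
struct small_hash_t {
    int           id;      // the key that is searched on
    const double* values;  // points at d0..d2 stored outside the table
};
small_hash_t small_table[HASH_MAX];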

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.comeaucomputing.com/learn/faq/
Other sites:
http://www.josuttis.com -- C++ STL Library book
http://www.sgi.com/tech/stl -- Standard Template Library

Sep 9 '06 #19
