
The strange speed of a program!

hi all,
I have a program whose speed is very strange to me. It is mainly used to
calculate an output image so from four images s0, s1, s2, s3, where
so = (s0-s2)^2 + (s1-s3)^2. I compile it with gcc (no optimization). The
code between /***********/ is the initialization code. What surprises me
a lot is that the run with initialization (io==1) is much faster than the
run without initialization (io!=1). The initialization code should take
some time, so it should slow the program down. But the result confuses
me. Why?

Code is listed below
==================================================================
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    clock_t start, end;
    double cpu_time_used;
    short *s0, *s1, *s2, *s3;
    int *so;
    unsigned int i, j;
    unsigned int times;
    unsigned int length;
    int io;

    if( argc<4 )
    {
        fprintf( stderr, "USAGE: %s times width (1 initialize, otherwize noinitialize)\n", argv[0] );
        exit(1);
    }
    else
    {
        times = atoi( argv[1] );
        length = atoi( argv[2] );
        length = length*length;
        io = atoi( argv[3] );
    }
    s0 = (short *)malloc( length*sizeof(short) );
    s1 = (short *)malloc( length*sizeof(short) );
    s2 = (short *)malloc( length*sizeof(short) );
    s3 = (short *)malloc( length*sizeof(short) );
    s3 = (short *)malloc( length*sizeof(short) );
    so = (int *)malloc( length*sizeof(int) );
    start = clock();
    for( i=0; i<times; ++i )
    {
        /**************************************************/
        if( io==1 )
        {
            for( j=0; j<length; ++j )
            {
                s0[j] = i+j;
                s1[j] = length-1-j;
                s2[j] = 2*j;
                s3[j] = 3*j;
            }
        }
        /**************************************************/
        for( j=0; j<length; ++j )
        {
            int tmp1, tmp2;
            tmp1 = s0[j]-s2[j];
            tmp1 = tmp1*tmp1;
            tmp2 = s1[j]-s3[j];
            tmp2 = tmp2*tmp2;
            so[j] = tmp1+tmp2;
        }
    }
    end = clock();
    cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
    printf("CPU time: %f sec\n", cpu_time_used);
    free( s0 );
    free( s1 );
    free( s2 );
    free( s3 );
    free( so );
    return 0;
}

Nov 14 '05 #1
bear wrote:
hi all,
I have a program whose speed is very strange to me. What surprises
me a lot is that the run with initialization (io==1) is much faster
than the run without initialization (io!=1). The initialization code
should take some time, so it should slow the program down.
But the result confuses me. Why?
Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
clock_t start,end;
double cpu_time_used;
short *s0,*s1,*s2,*s3;
int *so;
unsigned int i,j;
unsigned int times;
unsigned int length;
int io;
if( argc<4 )
{
fprintf( stderr,"USAGE: %s times width (1 initialize, otherwize
noinitialize)\n",argv[0] );
exit(1);
}
else
{
times = atoi( argv[1] );
length = atoi( argv[2] );
atoi causes undefined behaviour if the number is bigger than
can fit in an int
length = length*length;
You should check that length * length will not overflow, before
doing this.
io = atoi( argv[3] );
}
s0 = (short *)malloc( length*sizeof(short) );
Lose the cast (search this newsgroup for 'malloc' to learn
more about why):

s0 = malloc( length * sizeof *s0 );
s1 = (short *)malloc( length*sizeof(short) );
s2 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
You just leaked the first s3 malloc's results.
You should also check that all of these mallocs succeeded.
so = (int *)malloc( length*sizeof(int) );


Rest of the code looks OK.
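
For the length*length check mentioned above, one possible shape (just a
sketch; checked_square is an illustrative name, not something from the
original program):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Refuse widths whose square would wrap around UINT_MAX. */
static unsigned int checked_square(unsigned int length)
{
    if (length != 0 && length > UINT_MAX / length) {
        fprintf(stderr, "width too large\n");
        exit(1);
    }
    return length * length;
}

int main(void)
{
    printf("%u\n", checked_square(1024));  /* prints 1048576 */
    return 0;
}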

Nov 14 '05 #2
Thanks for your advice.

I compile basic.c using gcc-3.3.5 with no optimization. My CPU is
an Athlon XP, running Debian.

./basic 50 1024 0 #no init
results in CPU time: 9.690000 sec
./basic 50 1024 1 #init
results in CPU time: 3.280000 sec
Old Wolf wrote:
bear wrote:
hi all,
I have a program whose speed is very strange to me. What surprises
me a lot is that the run with initialization (io==1) is much faster
than the run without initialization (io!=1). The initialization code
should take some time, so it should slow the program down.
But the result confuses me. Why?
Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.


Do the calculations on random numbers run more slowly than on small
numbers? I think they use the same instructions and the same operand
width. Also, can these overflow exceptions cause a slowdown?
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
clock_t start,end;
double cpu_time_used;
short *s0,*s1,*s2,*s3;
int *so;
unsigned int i,j;
unsigned int times;
unsigned int length;
int io;
if( argc<4 )
{
fprintf( stderr,"USAGE: %s times width (1 initialize, otherwize
noinitialize)\n",argv[0] );
exit(1);
}
else
{
times = atoi( argv[1] );
length = atoi( argv[2] );
atoi causes undefined behaviour if the number is bigger than
can fit in an int
length = length*length;


You should check that length * length will not overflow, before
doing this.
io = atoi( argv[3] );
}
s0 = (short *)malloc( length*sizeof(short) );


Lose the cast (search this newsgroup for 'malloc' to learn
more about why):


I searched but did not find any clue about why this cast is incorrect,
so I just left it there.

s0 = malloc( length * sizeof *s0 );
s1 = (short *)malloc( length*sizeof(short) );
s2 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );


You just leaked the first s3 malloc's results.
You should also check that all of these mallocs succeeded.

I have corrected this.

so = (int *)malloc( length*sizeof(int) );


Rest of the code looks OK.


Nov 14 '05 #3
bear wrote:
Old Wolf wrote:
bear wrote:
.... snip ...
io = atoi( argv[3] );
}
s0 = (short *)malloc( length*sizeof(short) );


Lose the cast (search this newsgroup for 'malloc' to learn
more about why):


I searched but did not find any clue about why this cast is incorrect,
so I just left it there.


In general ALL casts, other than of arguments to variadic
functions, are suspect and should be avoided. Unnecessary casts
only serve to suppress error messages, by announcing "I know what I
am doing, so don't complain". The usual form for a malloc call is:

    if (!(ptr = malloc(number * sizeof *ptr))) {
        /* handle the "out of memory" condition */
    }
    else {
        /* successful allocation, do whatever with ptr */
    }

which secures space for an array of number items of whatever type
ptr has been declared to point to. The declarations and values of
ptr and number (the latter may be 1, and thus omitted) control all
the action, and no error messages are suppressed. A very common
error that can be suppressed by a silly cast is failure to #include
<stdlib.h>.
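
Applied to one of the arrays in the program from this thread, the idiom
might look like this (a sketch only, not the original poster's code):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t length = 1024u * 1024u;
    short *s0;

    /* No cast, sizeof *ptr, and the result is checked. */
    if (!(s0 = malloc(length * sizeof *s0))) {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    /* ... fill and use s0 here ... */
    free(s0);
    return 0;
}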

Make sure you read the first three references below.

--
Some useful references about C:
<http://www.ungerhu.com/jxh/clc.welcome.txt>
<http://www.eskimo.com/~scs/C-faq/top.html>
<http://benpfaff.org/writings/clc/off-topic.html>
<http://anubis.dkuug.dk/jtc1/sc22/wg14/www/docs/n869/> (C99)
<http://www.dinkumware.com/refxc.html> (C library)
<http://gcc.gnu.org/onlinedocs/> (GNU docs)
Nov 14 '05 #4
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Thanks for your advice. I compile basic.c using gcc-3.3.5 with no
optimization. My CPU is an Athlon XP, running Debian.
./basic 50 1024 0 #no init
results in CPU time: 9.690000 sec
./basic 50 1024 1 #init
results in CPU time: 3.280000 sec
gcc 3.3.4 (Slackware)
gcc -O3 -static

The same binary was run on 4 different processors; my results are:
./a.out 50 1024 [01]

init:                          0           1
Pentium MMX (200MHz)          10.860000   37.870000
Pentium II (400MHz)            1.330000    7.090000
mobile Athlon XP (1400MHz)     6.950000    4.060000
Pentium4 (2.80GHz)             9.290000    0.710000

(Look at the last result - I couldn't believe it myself!)

It looks as if the problem is not C (or gcc) specific (although there
might be some interesting morals for C programmers).
Old Wolf wrote:
bear wrote:
Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.
Do the calculations on random numbers run more slowly than on small
numbers? I think they use the same instructions and the same operand
width. Also, can these overflow exceptions cause a slowdown?
Interesting idea, but my first results didn't confirm it (i.e. that
overflows might cause a slow-down); I'll investigate this further.

There's a big difference when you pre-initialize the allocated
memory before the big loop. I used this code:

void *randmalloc(size_t s)
{
    size_t i;
    char *pc = malloc(s);
    if (pc)
        for (i=0; i<s; ++i)
            pc[i] = rand();
    return pc;
}

#define malloc randmalloc

(Okay, okay, I know it's not legal to redefine malloc, but it's
quicker than modifying the source.)
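
A conforming variant of the same idea is to give the wrapper its own
name and call it explicitly, instead of redefining malloc (a sketch
only; malloc_randfill is a made-up name):

#include <stdio.h>
#include <stdlib.h>

/* Fill freshly allocated storage with pseudo-random bytes. */
void *malloc_randfill(size_t s)
{
    size_t i;
    unsigned char *pc = malloc(s);
    if (pc)
        for (i = 0; i < s; ++i)
            pc[i] = (unsigned char)rand();
    return pc;
}

int main(void)
{
    unsigned char *p = malloc_randfill(16);
    if (p) {
        printf("first byte: %d\n", p[0]);
        free(p);
    }
    return 0;
}

In the test program the allocations would then read, for example,
s0 = malloc_randfill( length * sizeof *s0 );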
s0 = (short *)malloc( length*sizeof(short) );


Lose the cast (search this newsgroup for 'malloc' to learn
more about why):

I searched but did not find any clue about why this cast is incorrect,
so I just left it there.


It's not incorrect per se, but it is malpractice.
See FAQ 7.7.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #5


S.Tobias wrote:
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Thanks for your advice.

I compile basic.c using gcc-3.3.5 with no optimization. My CPU is
an Athlon XP, running Debian.

./basic 50 1024 0 #no init
results in CPU time: 9.690000 sec
./basic 50 1024 1 #init
results in CPU time: 3.280000 sec


gcc 3.3.4 (Slackware)
gcc -O3 -static

The same binary was run on 4 different processors, my results are:
./a.out 50 1024 [01]

init: 0 1
Pentium MMX (200MHz) 10.860000 37.870000
Pentium II (400MHz) 1.330000 7.090000
mobile Athlon XP (1400MHz) 6.950000 4.060000
Pentium4 (2.80GHz) 9.290000 0.710000

(Look at the last result - I couldn't believe it myself!)

It looks as if the problem is not C (or gcc) specific (although there
might be some interesting morals for C programmers).
Old Wolf wrote:
bear wrote: Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.
Do the calculations on random numbers run more slowly than on small
numbers? I think they use the same instructions and the same operand
width. Also, can these overflow exceptions cause a slowdown?


Interesting idea, but my first results didn't confirm this (ie.
that overflows might cause slow-down), but I'll yet investigate this.

There's big difference when you pre-initialize the allocated
memory before the big loop. I used this code:

void *randmalloc(size_t s)
{
size_t i;
char *pc = malloc(s);
if (pc)
for (i=0; i<s; ++i)
pc[i] = rand();
return pc;
}

#define malloc randmalloc

(Okay, okay, I know it's not legal to redefine malloc, but it's
quicker than modifying the source.)

I changed the code to
==============================================
s0 = malloc( length*sizeof(short) );
s1 = malloc( length*sizeof(short) );
s2 = malloc( length*sizeof(short) );
s3 = malloc( length*sizeof(short) );
so = malloc( length*sizeof(int) );
for( j=0; j<length; ++j )
{
    s0[j] = 0;
    s1[j] = 0;
    s2[j] = 0;
    s3[j] = 0;
}
===============================================
to zero the four input images. The speed seems explainable now.
On my Athlon XP:
with init:    4.020000 sec
without init: 2.190000 sec
It seems that it is the overflow that slows down the speed. Am I right?


> s0 = (short *)malloc( length*sizeof(short) );

Lose the cast (search this newsgroup for 'malloc' to learn
more about why):

I searched but did not find any clue about why this cast is incorrect,
so I just left it there.


It's not incorrect per se, but it is malpractice.
See FAQ 7.7.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`


Nov 14 '05 #6


In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
S.Tobias wrote:
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Do the calculations on random numbers run more slowly than on small
numbers? I think they use the same instructions and the same operand
width. Also, can these overflow exceptions cause a slowdown?
Interesting idea, but my first results didn't confirm this (ie.
that overflows might cause slow-down), but I'll yet investigate this.

There's a big difference when you pre-initialize the allocated
memory before the big loop. I used this code:
[snip code]

I changed the code to
==============================================

[snip code] ===============================================
to zero the four input images. The speed seems explainable now.
On my Athlon XP:
with init:    4.020000 sec
without init: 2.190000 sec
It seems that it is the overflow that slows down the speed. Am I right?


I don't know. It might be that, or it might be a few things together.
I tried putting in random values and it worked "as it should", too.
The results seem to depend strongly on the `length' parameter
(array size), so the processor cache is a suspect too. Finally, my
results (snipped) showed a large diversity across different
architectures (all PC-compatible).

This is too off-topic for comp.lang.c, where I read it, and I guess
the gcc people won't be interested in this subject either (as I have
shown, the problem is most probably not specific to gcc). I'm not
going to discuss this in c.l.c. any longer.

Please make a summary of what has been said so far, show your
(corrected) code and the results (mine too), and let's move this
discussion somewhere else. I think the first group to try would be
comp.programming (unless someone can suggest a better one).

I'm very interested in finding the solution to the problem myself.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #8
[Newsgroups restricted to clc]

In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:

[snippage]

short *s0,*s1,*s2,*s3;
allocation: s0 = (short *)malloc( length*sizeof(short) );
s1 = (short *)malloc( length*sizeof(short) ); [...]

initialization of s0[n]: for( j=0; j<length; ++j )
{
s0[j] = i+j;
s1[j] = length-1-j; [...]

use: for( j=0; j<length; ++j )
{
int tmp1,tmp2;
tmp1 = s0[j]-s2[j];


If the initialization step is skipped, then reading from uninitialized
memory technically might invoke UB. (But this, of course, defeats
the purpose of the question, which was not quite topical in clc anyway.)

Question to others:
In C99 an "indeterminate value" is either an unspecified value or a trap
representation. I think C99 no longer states explicitly that accessing
indeterminate values automatically invokes UB. So, if on some
implementation integers have no trap representations (i.e. every bit
pattern corresponds to some valid value), is accessing uninitialized
integer objects then defined behaviour (yielding only unspecified
values)?
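
For comparison, a minimal sketch (not from the thread) of the case that
is clearly defined: reading the uninitialized storage through unsigned
char, which has no trap representations, yields unspecified values and,
at least by the common clc reading, no UB. Whether the same holds when
the storage is read as int is exactly the question above.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1024, i;
    unsigned long sum = 0;
    unsigned char *p = malloc(n);

    if (!p)
        return 1;
    for (i = 0; i < n; ++i)
        sum += p[i];            /* unspecified values, but no trap reps */
    printf("checksum of uninitialized bytes: %lu\n", sum);
    free(p);
    return 0;
}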

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #9
On Wed, 08 Jun 2005 10:18:34 +0000, S.Tobias wrote:
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Thanks for your advice.
I compile basic.c using gcc-3.3.5 with no optimization. My CPU is
an Athlon XP, running Debian.

./basic 50 1024 0 #no init
results in CPU time: 9.690000 sec
./basic 50 1024 1 #init
results in CPU time: 3.280000 sec


gcc 3.3.4 (Slackware)
gcc -O3 -static

The same binary was run on 4 different processors, my results are:


The operating system and particularly the VM strategies it uses could be a
big factor here. Were you running the identical OS on these 4
processors?
./a.out 50 1024 [01]

init: 0 1
Pentium MMX (200MHz) 10.860000 37.870000
Pentium II (400MHz) 1.330000 7.090000
mobile Athlon XP (1400MHz) 6.950000 4.060000
Pentium4 (2.80GHz) 9.290000 0.710000

(Look at the last result - I couldn't believe it myself!)
The last is what I would expect for a VM system that uses copy-on-write.
It is common for such VM systems to set up a single memory page filled
with zero bytes. When memory is first allocated, all the virtual pages
are initially mapped to that single zero-filled page as copy-on-write.
That means that if you don't write to the memory, you will always be
reading zeros from a single page in physical memory, no matter how large
your data structure is. So all reads come (very likely) from the Level 1
cache and are very fast. If you write to the memory first, then each page
in your virtual address space is mapped to its own page in physical
memory, and suddenly large data structures don't fit in the processor
cache and access gets a lot slower.
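
A rough way to see that effect in isolation (a sketch only, not part of
the original test; the behaviour and the numbers depend entirely on the
OS and the allocator):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double sum_time(const unsigned char *p, size_t n, unsigned long *out)
{
    clock_t t0, t1;
    unsigned long sum = 0;
    size_t i;

    t0 = clock();
    for (i = 0; i < n; ++i)
        sum += p[i];
    t1 = clock();
    *out = sum;                      /* keep the loop from being optimized away */
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    size_t n = 128u * 1024 * 1024;   /* 128 MB */
    unsigned long sink;
    unsigned char *p = malloc(n);

    if (!p)
        return 1;
    /* Pages never written may all map to one shared zero page. */
    printf("never written: %f s\n", sum_time(p, n, &sink));
    /* Touch every page so each one gets its own physical frame. */
    memset(p, 1, n);
    printf("after memset:  %f s\n", sum_time(p, n, &sink));
    free(p);
    return 0;
}

On a copy-on-write system the first pass is typically much faster than
the second, even though both walk the same amount of address space.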

That doesn't explain why


It looks as if the problem is not C (or gcc) specific (although there
might be some interesting morals for C programmers).
Old Wolf wrote:
> bear wrote: > Can you post the actual timings you observed?
> The only thing that comes to mind is, the operations in the
> io==1 version are all on small numbers, so maybe they don't
> take as long as the io==0 version which are all on random numbers
> which could be large and/or negative. Also, the large number
> operations might be causing integer overflow exceptions.
Do the calculations on random numbers run more slowly than on small
numbers? I think they use the same instructions and the same operand
width. Also, can these overflow exceptions cause a slowdown?


Interesting idea, but my first results didn't confirm this (ie.
that overflows might cause slow-down), but I'll yet investigate this.

There's big difference when you pre-initialize the allocated
memory before the big loop. I used this code:

void *randmalloc(size_t s)
{
size_t i;
char *pc = malloc(s);
if (pc)
for (i=0; i<s; ++i)
pc[i] = rand();
return pc;
}

#define malloc randmalloc

(Okay, okay, I know it's not legal to redefine malloc, but it's
quicker than modifying the source.)
> > s0 = (short *)malloc( length*sizeof(short) );
>
> Lose the cast (search this newsgroup for 'malloc' to learn
> more about why):

I searched but did not find any clue about why this cast is incorrect,
so I just left it there.


It's not incorrect per se, but it is malpractice.
See FAQ 7.7.


Nov 14 '05 #10
On Sat, 11 Jun 2005 18:41:56 +0100, Lawrence Kirby wrote:

Please ignore this last posting which is incomplete and incorrect. It was
an old working copy that was posted by accident.

Lawrence

Nov 14 '05 #11
