
The strange speed of a program!

hi all,
I have a program whose speed is strange to me. It is mainly used to
calculate an output image so from four images s0, s1, s2, s3, where
so[j] = (s0[j]-s2[j])^2 + (s1[j]-s3[j])^2. I compile it with gcc (no
optimization). The code between /***********/ is the initialization
code. What surprises me a lot is that the version with initialization
(io==1) is much faster than the one without initialization (io!=1).
The initialization code should take some extra time and slow the
program down, but the result confuses me. Why?

Code is listed below
======================================================================
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    clock_t start, end;
    double cpu_time_used;
    short *s0, *s1, *s2, *s3;
    int *so;
    unsigned int i, j;
    unsigned int times;
    unsigned int length;
    int io;

    if( argc<4 )
    {
        fprintf( stderr, "USAGE: %s times width io (1 initialize, otherwise no initialize)\n", argv[0] );
        exit(1);
    }
    else
    {
        times = atoi( argv[1] );
        length = atoi( argv[2] );
        length = length*length;
        io = atoi( argv[3] );
    }

    s0 = (short *)malloc( length*sizeof(short) );
    s1 = (short *)malloc( length*sizeof(short) );
    s2 = (short *)malloc( length*sizeof(short) );
    s3 = (short *)malloc( length*sizeof(short) );
    s3 = (short *)malloc( length*sizeof(short) );
    so = (int *)malloc( length*sizeof(int) );

    start = clock();
    for( i=0; i<times; ++i )
    {
        /*****************************************************/
        /* initialization code */
        if( io==1 )
        {
            for( j=0; j<length; ++j )
            {
                s0[j] = i+j;
                s1[j] = length-1-j;
                s2[j] = 2*j;
                s3[j] = 3*j;
            }
        }
        /*****************************************************/
        /* so[j] = (s0[j]-s2[j])^2 + (s1[j]-s3[j])^2 */
        for( j=0; j<length; ++j )
        {
            int tmp1,tmp2;
            tmp1 = s0[j]-s2[j];
            tmp1 = tmp1*tmp1;
            tmp2 = s1[j]-s3[j];
            tmp2 = tmp2*tmp2;
            so[j] = tmp1+tmp2;
        }
    }
    end = clock();

    cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
    printf("CPU time: %f sec\n", cpu_time_used);

    free( s0 );
    free( s1 );
    free( s2 );
    free( s3 );
    free( so );
    return 0;
}

Nov 14 '05 #1
bear wrote:
hi all,
I have a program whose speed is strange to me. What surprises
me a lot is that the version with initialization (io==1) is much
faster than the one without initialization (io!=1). The
initialization code should take some extra time and slow the
program down, but the result confuses me. Why?
Can you post the actual timings you observed?
The only thing that comes to mind is that the operations in the
io==1 version are all on small numbers, so maybe they don't take
as long as those in the io==0 version, which operate on random
(uninitialized) values that could be large and/or negative. Also,
the large-number operations might be causing integer overflow
exceptions.
[snip code]
times = atoi( argv[1] );
length = atoi( argv[2] );
atoi causes undefined behaviour if the number is bigger than
can fit in an int
length = length*length;
You should check that length * length will not overflow, before
doing this.
io = atoi( argv[3] );
}
s0 = (short *)malloc( length*sizeof(short) );
Lose the cast (search this newsgroup for 'malloc' to learn
more about why):

s0 = malloc( length * sizeof *s0 );
s1 = (short *)malloc( length*sizeof(short) );
s2 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
You just leaked the first s3 malloc's results.
You should also check that all of these mallocs succeeded.
so = (int *)malloc( length*sizeof(int) );


Rest of the code looks OK.
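
For illustration only, here is a sketch of how the argument handling and
the first allocation could be hardened: strtoul instead of atoi, an
explicit check that width*width cannot overflow, and a checked malloc.
(The helper name parse_uint is my own invention, not anything from the
original program.)

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse one command-line argument as an unsigned int; exit on garbage. */
static unsigned int parse_uint(const char *s)
{
    char *end;
    unsigned long v;

    errno = 0;
    v = strtoul(s, &end, 10);
    if (errno != 0 || *end != '\0' || v > UINT_MAX)
    {
        fprintf(stderr, "bad number: %s\n", s);
        exit(EXIT_FAILURE);
    }
    return (unsigned int)v;
}

int main(int argc, char *argv[])
{
    unsigned int times, width, length;
    short *s0;

    if (argc < 3)
    {
        fprintf(stderr, "USAGE: %s times width\n", argv[0]);
        return EXIT_FAILURE;
    }
    times = parse_uint(argv[1]);
    width = parse_uint(argv[2]);
    if (width != 0 && width > UINT_MAX / width)   /* would width*width overflow? */
    {
        fprintf(stderr, "width too large\n");
        return EXIT_FAILURE;
    }
    length = width * width;

    s0 = malloc(length * sizeof *s0);             /* no cast, result checked */
    if (s0 == NULL)
    {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    printf("times=%u length=%u\n", times, length);
    free(s0);
    return 0;
}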

Nov 14 '05 #2
Thanks for your advice.

I compiled basic.c using gcc 3.3.5 with no optimization. My CPU is an
Athlon XP, running Debian.

./basic 50 1024 0   # no init
results in CPU time: 9.690000 sec
./basic 50 1024 1   # init
results in CPU time: 3.280000 sec
Old Wolf wrote:
bear wrote:
[snip]
Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.


Does calculation on random numbers run more slowly than on small
numbers? I think they use the same instruction and the same operand
width. Also, can these overflow exceptions cause a slowdown?
[snip code]

s0 = (short *)malloc( length*sizeof(short) );

Lose the cast (search this newsgroup for 'malloc' to learn
more about why):


I searched but did not find any clue about why this cast is incorrect,
so I just left it there.

s0 = malloc( length * sizeof *s0 );
s1 = (short *)malloc( length*sizeof(short) );
s2 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );
s3 = (short *)malloc( length*sizeof(short) );


You just leaked the first s3 malloc's results.
You should also check that all of these mallocs succeeded.

I have corrected this

so = (int *)malloc( length*sizeof(int) );


Rest of the code looks OK.


Nov 14 '05 #3
bear wrote:
Old Wolf wrote:
bear wrote:
.... snip ...
io = atoi( argv[3] );
}
s0 = (short *)malloc( length*sizeof(short) );


Lose the cast (search this newsgroup for 'malloc' to learn
more about why):


I search but do not find any clue on why this cast is incorrect.
So I just leave it there.


In general ALL casts, other than to arguments to variadic
functions, are suspect and should be avoided. Unnecessary casts
only serve to suppress error messages, by announcing "I know what I
am doing, so don't complain". The usual form for a malloc call is:

if (!(ptr = malloc(number * sizeof *ptr))) {
/* handle the "out of memory" condition */
}
else {
/* successful allocation, do whatever with ptr */
}

which secures space for an array of number items of whatever type
ptr has been declared to point to. The declarations and values of
ptr and number (the latter may be 1, and thus omitted) control all
the action, and no error messages are suppressed. A very common
error that can be suppressed by a silly cast is failure to #include
<stdlib.h>.
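
Applied to the arrays in the posted program, the same idiom might look
roughly like this (a sketch only; the helper name allocate_images is
mine, and on failure it simply reports the error rather than freeing
whatever was already allocated):

#include <stdio.h>
#include <stdlib.h>

/* Sketch: the posted program's five allocations in the cast-free,
   checked form.  length is the number of elements per image. */
static int allocate_images(unsigned int length,
                           short **s0, short **s1, short **s2, short **s3,
                           int **so)
{
    if (!(*s0 = malloc(length * sizeof **s0)) ||
        !(*s1 = malloc(length * sizeof **s1)) ||
        !(*s2 = malloc(length * sizeof **s2)) ||
        !(*s3 = malloc(length * sizeof **s3)) ||
        !(*so = malloc(length * sizeof **so)))
    {
        /* handle the "out of memory" condition (earlier allocations,
           if any, are leaked here; a real program would free them) */
        fprintf(stderr, "out of memory\n");
        return -1;
    }
    return 0;   /* successful allocation, caller may use the arrays */
}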

Make sure you read the first three references below.

--
Some useful references about C:
<http://www.ungerhu.com/jxh/clc.welcome.txt>
<http://www.eskimo.com/~scs/C-faq/top.html>
<http://benpfaff.org/writings/clc/off-topic.html>
<http://anubis.dkuug.dk/jtc1/sc22/wg14/www/docs/n869/> (C99)
<http://www.dinkumware.com/refxc.html> (C library)
<http://gcc.gnu.org/onlinedocs/> (GNU docs)
Nov 14 '05 #4
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Thanks for your advice. I compiled basic.c using gcc 3.3.5 with no
optimization. My CPU is an Athlon XP, running Debian.
./basic 50 1024 0   # no init
results in CPU time: 9.690000 sec
./basic 50 1024 1   # init
results in CPU time: 3.280000 sec
gcc 3.3.4 (Slackware)
gcc -O3 -static

The same binary was run on 4 different processors, my results are:
./a.out 50 1024 [01]

init:                            0          1
Pentium MMX (200MHz)        10.860000  37.870000
Pentium II (400MHz)          1.330000   7.090000
mobile Athlon XP (1400MHz)   6.950000   4.060000
Pentium4 (2.80GHz)           9.290000   0.710000

(Look at the last result - I couldn't believe it myself!)

It looks as if the problem is not C (or gcc) specific (although there
might be some interesting morals for C programmers).
Old Wolf wrote:
bear wrote:
Can you post the actual timings you observed?
The only thing that comes to mind is, the operations in the
io==1 version are all on small numbers, so maybe they don't
take as long as the io==0 version which are all on random numbers
which could be large and/or negative. Also, the large number
operations might be causing integer overflow exceptions.
Does calculation on random numbers run more slowly than on small
numbers? I think they use the same instruction and the same operand
width. Also, can these overflow exceptions cause a slowdown?
Interesting idea, but my first results didn't confirm it (i.e. that
overflows might cause a slow-down); I'll investigate this further.

There's a big difference when you pre-initialize the allocated
memory before the big loop. I used this code:

void *randmalloc(size_t s)
{
    size_t i;
    char *pc = malloc(s);
    if (pc)
        for (i = 0; i < s; ++i)
            pc[i] = rand();
    return pc;
}

#define malloc randmalloc

(Okay, okay, I know it's not legal to redefine malloc, but it's
quicker than modifying the source.)
s0 = (short *)malloc( length*sizeof(short) );


Lose the cast (search this newsgroup for 'malloc' to learn
more about why):

I searched but did not find any clue about why this cast is incorrect,
so I just left it there.


It's not incorrect per se, but it is malpractice.
See FAQ 7.7.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #5


S.Tobias wrote:
[snip timings]
There's a big difference when you pre-initialize the allocated
memory before the big loop. I used this code:
[snip code]

I changed the code to
==============================================
s0 = malloc( length*sizeof(short) );
s1 = malloc( length*sizeof(short) );
s2 = malloc( length*sizeof(short) );
s3 = malloc( length*sizeof(short) );
so = malloc( length*sizeof(int) );
for( j=0; j<length; ++j )
{
    s0[j] = 0;
    s1[j] = 0;
    s2[j] = 0;
    s3[j] = 0;
}
==============================================
to zero the four input images. It seems the speed is explainable now.
On my Athlon XP:
with init:    4.020000 sec
without init: 2.190000 sec
It seems that overflow is what slows the program down. Am I right?
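
One way to test just the operand-magnitude/overflow idea in isolation
would be something like the sketch below: it fills the arrays with fixed
small or large values and does the squaring in unsigned arithmetic,
because signed overflow is undefined behaviour in C. (time_kernel is my
own name; compile it without optimization, as in the original test.)
If both lines report similar times, operand magnitude and wrap-around
probably aren't the culprit.

#include <stdio.h>
#include <time.h>

#define N    (1024u * 1024u)
#define REPS 50

/* Time the (s0-s2)^2 + (s1-s3)^2 kernel for a given fill pattern.
   The squares use unsigned arithmetic so that wrap-around is well
   defined instead of being undefined behaviour. */
static double time_kernel(short f0, short f1, short f2, short f3)
{
    static short s0[N], s1[N], s2[N], s3[N];
    static unsigned so[N];
    clock_t start, end;
    unsigned int i, j;

    for (j = 0; j < N; ++j)
    {
        s0[j] = f0; s1[j] = f1; s2[j] = f2; s3[j] = f3;
    }
    start = clock();
    for (i = 0; i < REPS; ++i)
        for (j = 0; j < N; ++j)
        {
            unsigned tmp1 = (unsigned)(s0[j] - s2[j]);
            unsigned tmp2 = (unsigned)(s1[j] - s3[j]);
            so[j] = tmp1 * tmp1 + tmp2 * tmp2;
        }
    end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("small operands: %f sec\n", time_kernel(1, 2, 3, 4));
    printf("large operands: %f sec\n",
           time_kernel(32767, -32767, -32767, 32767));
    return 0;
}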




Nov 14 '05 #6


In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
S.Tobias wrote:
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Does calculation on random numbers run more slowly than on small
numbers? I think they use the same instruction and the same operand
width. Also, can these overflow exceptions cause a slowdown?
Interesting idea, but my first results didn't confirm it (i.e. that
overflows might cause a slow-down); I'll investigate this further.

There's a big difference when you pre-initialize the allocated
memory before the big loop. I used this code:
[snip code]

I changed the code to
==============================================
[snip code]
==============================================
to zero the four input images. It seems the speed is explainable now.
On my Athlon XP:
with init:    4.020000 sec
without init: 2.190000 sec
It seems that overflow is what slows the program down. Am I right?


I don't know. It might be it, it might be a few things together.
I tried putting random values and it worked "as it should", too.
The results seem to depend strongly on the `length' parameter
(array size) - processor cache is a suspect too. Finally, my
results (snipped) showed a large diversity on different architectures
(all PC-compatible).
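
One way to probe the cache suspicion is a sketch like the one below: it
reuses the posted kernel, always with the arrays initialized, and simply
varies the array size, printing the time per element so that cache
boundaries show up as jumps. (Nothing here is from the original program
beyond the kernel itself.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sketch: run the (s0-s2)^2+(s1-s3)^2 kernel over increasing sizes. */
int main(void)
{
    unsigned int width;

    for (width = 64; width <= 2048; width *= 2)
    {
        unsigned int length = width * width;
        unsigned int i, j, reps = 50;
        short *s0 = malloc(length * sizeof *s0);
        short *s1 = malloc(length * sizeof *s1);
        short *s2 = malloc(length * sizeof *s2);
        short *s3 = malloc(length * sizeof *s3);
        int   *so = malloc(length * sizeof *so);
        clock_t start, end;
        double secs;

        if (!s0 || !s1 || !s2 || !s3 || !so)
        {
            fprintf(stderr, "out of memory\n");
            return EXIT_FAILURE;
        }
        for (j = 0; j < length; ++j)   /* initialize, as in the io==1 case */
        {
            s0[j] = (short)j;       s1[j] = (short)(length - 1 - j);
            s2[j] = (short)(2 * j); s3[j] = (short)(3 * j);
        }
        start = clock();
        for (i = 0; i < reps; ++i)
            for (j = 0; j < length; ++j)
            {
                int tmp1 = s0[j] - s2[j];
                int tmp2 = s1[j] - s3[j];
                so[j] = tmp1 * tmp1 + tmp2 * tmp2;
            }
        end = clock();
        secs = (double)(end - start) / CLOCKS_PER_SEC;
        printf("width %4u: %8.2f ns/element\n",
               width, secs / reps / length * 1e9);
        free(s0); free(s1); free(s2); free(s3); free(so);
    }
    return 0;
}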

This is too off-topic in comp.lang.c, where I read it, and I guess
gcc people won't be interested in this subject either (as I have shown,
the problem is - most probably - not specific to gcc). I'm not
going to discuss this in c.l.c. any longer.

Please, make a summary of what has been said until now, show your
(corrected) code, the results (mine too), and let's move this
discussion somewhere else. I think the best group to try first
would be comp.programming (unless someone can suggest a better group).

I'm very interested in finding the solution to the problem myself.

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #8
[Newsgroups restricted to clc]

In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:

[snippage]

short *s0,*s1,*s2,*s3;

allocation:
s0 = (short *)malloc( length*sizeof(short) );
s1 = (short *)malloc( length*sizeof(short) ); [...]

initialization of s0[n]:
for( j=0; j<length; ++j )
{
    s0[j] = i+j;
    s1[j] = length-1-j; [...]

use:
for( j=0; j<length; ++j )
{
    int tmp1,tmp2;
    tmp1 = s0[j]-s2[j];


If the initialization step is skipped, then reading from uninitialized
memory technically might invoke UB. (But this, of course, defeats
the purpose of the question, which was not quite topical in clc anyway.)
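
(If one wanted the no-init variant without ever reading indeterminate
values, one portable option - just a sketch, not what the OP did - is
calloc, which returns zero-filled storage:)

#include <stdio.h>
#include <stdlib.h>

/* Sketch: with calloc the images start zero-filled, so even the
   io != 1 path only ever reads defined values. */
int main(void)
{
    size_t length = 1024u * 1024u;
    short *s0 = calloc(length, sizeof *s0);
    short *s2 = calloc(length, sizeof *s2);

    if (!s0 || !s2)
    {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    printf("first difference: %d\n", s0[0] - s2[0]);   /* defined: 0 */
    free(s0);
    free(s2);
    return 0;
}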

Question to others:
In C99 "indetermin ate value" is an unspecified value or a trap
representation. I think that C99 doesn't mention explicitly any
more, that accessing indeterminate values automatically raises UB.
So, if on an implementation, say, integers don't have trap values
(ie. every bit representation corresponds to some valid value),
is accessing of uninitialized integer objects defined behaviour
then (yielding only unspecified values)?

--
Stan Tobias
mailx `echo si***@FamOuS.BedBuG.pAlS.INVALID | sed s/[[:upper:]]//g`
Nov 14 '05 #9
On Wed, 08 Jun 2005 10:18:34 +0000, S.Tobias wrote:
In comp.lang.c bear <xg***@mails.tsinghua.edu.cn> wrote:
Thanks for your advice.
I compiled basic.c using gcc 3.3.5 with no optimization. My CPU is an
Athlon XP, running Debian.

./basic 50 1024 0 #no init
results in CPU time: 9.690000 sec
./basic 50 1024 1 #init
results in CPU time: 3.280000 sec


gcc 3.3.4 (Slackware)
gcc -O3 -static

The same binary was run on 4 different processors, my results are:


The operating system and particularly the VM strategies it uses could be a
big factor here. Were you running the identical OS on these 4
processors?
./a.out 50 1024 [01]

init:                            0          1
Pentium MMX (200MHz)        10.860000  37.870000
Pentium II (400MHz)          1.330000   7.090000
mobile Athlon XP (1400MHz)   6.950000   4.060000
Pentium4 (2.80GHz)           9.290000   0.710000

(Look at the last result - I couldn't believe it myself!)
The last is what I would expect for a VM system that uses copy-on-write.
It is common for such VM systems to set up a single memory page filled
with zero bytes. When memory is first allocated, all the virtual pages
are initially mapped to that single zero-filled page as copy-on-write.
That means that if you don't write to the memory you will always be
reading zeros from a single page in physical memory, no matter how large
your data structure is. So all reads come from (very likely) Level 1
cache and are very fast. If you write to the memory first, then each
page in your virtual memory is mapped to its own page in physical
memory, and suddenly large data structures don't fit in the processor
cache and access gets a lot slower.
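
A rough way to observe this is sketched below (behaviour depends on the
OS and C library; nothing here is guaranteed by the C standard, and
read_pass is my own helper name): time a read-only pass over a large
malloc'd block before and after it has been written.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define NBYTES (64u * 1024u * 1024u)   /* 64 MB: larger than any CPU cache */

/* Read every byte once and time it.  unsigned char is used so that
   reading never-written storage at worst yields unspecified values. */
static double read_pass(const unsigned char *p, size_t n, unsigned long *sum)
{
    clock_t start, end;
    size_t i;

    *sum = 0;
    start = clock();
    for (i = 0; i < n; ++i)
        *sum += p[i];
    end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    unsigned char *buf = malloc(NBYTES);
    unsigned long sum;
    double secs;

    if (!buf)
    {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    /* Pass 1: pages never written; on a copy-on-write zero-page VM they
       may all be backed by one physical page that stays in cache. */
    secs = read_pass(buf, NBYTES, &sum);
    printf("before writing: %f sec (sum %lu)\n", secs, sum);

    memset(buf, 1, NBYTES);   /* now every page gets its own physical frame */

    /* Pass 2: the same reads now walk 64 MB of distinct physical memory. */
    secs = read_pass(buf, NBYTES, &sum);
    printf("after writing:  %f sec (sum %lu)\n", secs, sum);

    free(buf);
    return 0;
}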

That doesn't explain why


[snip quoted text]
Nov 14 '05 #10
