
Same program in C and in C#. C# is faster than C. How Come ?

c
Hi everyone,

My cousin and I were talking about C and C#. I love C and he loves C#,
and we were going back and forth: C is ...blah blah... C# is blah blah... etc.

So we decided to write a program that calculates the factorial of 10
ten million times and prints the result to a file named log.txt.

I wrote something like this:

#include <stdio.h>

unsigned int fib(int n);

int main(void)
{
    FILE *fp;
    unsigned int loop;

    if ((fp = fopen("log.txt", "a")) != NULL) {
        for (loop = 1; loop <= 10000000; loop++) {
            fprintf(fp, "%u\n", fib(10));
        }
        fclose(fp);
    }
    return 0;
}

unsigned int fib(int n)
{
    if (n != 1)
        return n * fib(n - 1);
    else
        return 1;
}
He did the same thing in C#, and we both have the same laptop, a Dell
Inspiron 6000.

I ran my program and it took 18 seconds to finish. His program took 7
seconds. Wow.

Then I asked him to run my program on his laptop (it's the same
hardware, but I wanted to check). I ran it there and got the same time.

How come?!

The next day I tried some optimization: I unrolled the loop and wrote
something like this:

for (loop = 1; loop <= 1000000; loop++)
{
    fprintf(fp, "%u\n %u\n %u\n %u\n %u\n",
            fib(10), fib(10), fib(10), fib(10), fib(10));
    fprintf(fp, "%u\n %u\n %u\n %u\n %u\n",
            fib(10), fib(10), fib(10), fib(10), fib(10));
}

But his program is still faster than mine.

Then I tried my program under Slackware 12, and it took 3.8 seconds to
finish. Wow, I won the challenge!

Anyway, he wants me to beat him under Windows XP. Please help me out,
guys.
Dec 12 '07
Tor Rustad <to********@hotmail.com> writes:
>On 13 Dec, 03:42, Keith Thompson <ks...@mib.org> wrote:
>[...]
>>I don't believe the apparent slowdown is significant; it's probably
>>within the margin of error of my measurement. But it suggests that the
>>performance of the program is dominated by writing 153 megabytes (!)
>>of output.
>
>Keith, can you explain how writing 7 bytes 10 million times generates
>a 153 MB file on your system?
>
>Did you run the program twice? ;-)
Yes, I did -- without noticing that the "log.txt" file is opened in
append mode. (Apparently when you're writing 3628800 ten million
times, it's important not to overwrite the previous ten million
occurrences of 3628800.)

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Dec 13 '07 #31
Tor Rustad <to********@hotmail.com> writes:
[...]
>Regarding the 7-byte fprintf(): odd plus odd is even, even plus odd is
>odd. So over 10 million iterations, that's at least 5 million
>non-optimal (unaligned) memory accesses...
>
>Perhaps the bottleneck on most systems is the disk subsystem, but
>the least we can do is make sure that every I/O accesses aligned
>memory, and change the buffer from 7 bytes to something bigger
>(e.g. 64 KB).

The output statement is
    fprintf(fp, "%u\n", fib(10));
where fib(10) is 3628800, so that's 7 digits *plus a newline*. If a
newline is written as a single character, that's 8 bytes per call.
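As a quick check (an added illustration, not part of the original post),
snprintf() with a NULL buffer reports how many characters the conversion
would produce, confirming the 8 bytes per call:

#include <stdio.h>

int main(void)
{
    /* snprintf(NULL, 0, ...) returns the length without writing anything (C99). */
    int len = snprintf(NULL, 0, "%u\n", 3628800u);

    printf("each fprintf() call writes %d bytes\n", len);   /* prints 8 */
    return 0;
}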

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Dec 13 '07 #32
Keith Thompson wrote:
>Tor Rustad <to********@hotmail.com> writes:
>[...]
>>Regarding the 7-byte fprintf(): odd plus odd is even, even plus odd is
>>odd. So over 10 million iterations, that's at least 5 million
>>non-optimal (unaligned) memory accesses...
>>
>>Perhaps the bottleneck on most systems is the disk subsystem, but
>>the least we can do is make sure that every I/O accesses aligned
>>memory, and change the buffer from 7 bytes to something bigger
>>(e.g. 64 KB).
>
>The output statement is
>    fprintf(fp, "%u\n", fib(10));
>where fib(10) is 3628800, so that's 7 digits *plus a newline*. If a
>newline is written as a single character, that's 8 bytes per call.
Oooops, right... I didn't notice the newline! *red face*

However, a 1-byte newline is a UNIX thing; on Windows it's usually two
bytes, so my point still holds. ;-)
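As an aside (an added note, not from the original posts): the two-byte
newline only shows up when the stream is opened in text mode; in binary
mode '\n' stays a single byte even on Windows.

#include <stdio.h>

int main(void)
{
    FILE *text = fopen("text.txt", "w");   /* text mode: '\n' becomes "\r\n" on Windows */
    FILE *bin  = fopen("bin.txt", "wb");   /* binary mode: bytes written exactly as given */

    if (text != NULL)
        fprintf(text, "%u\n", 3628800u);   /* 9 bytes on disk on Windows, 8 on UNIX */
    if (bin != NULL)
        fprintf(bin, "%u\n", 3628800u);    /* 8 bytes on disk everywhere */

    if (text) fclose(text);
    if (bin)  fclose(bin);
    return 0;
}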
I did a benchmark; the best I/O result came from the "Standard C IO no
buffering" case, where each fwrite() call wrote a 64 KB buffer.
Re-running the same benchmark under Linux gave quite a surprise (see
below)!
On Windows XP I got:

printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 7.03
DiskIO 11.11 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 4.36
DiskIO 17.93 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 2.53
DiskIO 30.86 Mb/s
Under Linux:

$ gcc challenge.c stdio_c.c -O3
$ time ./a.out
printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 2.74
DiskIO 28.51 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.27
DiskIO 289.42 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.26
DiskIO 300.55 Mb/s

real 0m7.834s
user 0m2.528s
sys 0m0.756s
300.55 Mb/s is too good; the theoretical peak should
have been about 150 Mb/s. I don't know why Linux is this
blazingly fast! Does it have anything to do with the dual
core CPU??? *strange*
/*-------------- listing 'challenge.c' ------------------------*/

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <assert.h>

#define FILE_NAME "log.txt"
#define IO_BLOCK_SIZE (64*1024)

unsigned int fac(int n)
{
    if (n != 1)
        return n * fac(n - 1);
    else
        return 1;
}

/* make_io_buf: from number 'u', fill a buffer with this value 'count' times
 * return length of buffer
 */
static size_t make_io_buf(char *buf, size_t max_buf, size_t *count,
                          unsigned u)
{
    int n, len;
    char u_buf[32];

    n = sprintf(u_buf, "%u\n", u);
    assert(n == 8);

    /* fill up the buffer */
    for (len = 0; len + n <= (int)max_buf; len += n) {
        strcpy(&buf[len], u_buf);
        *count = *count + 1;
    }
    assert((size_t)len <= max_buf);
    assert((size_t)(len + n) > max_buf);

    return len;
}

static void print_diff(clock_t start, clock_t stop, size_t wr_cnt)
{
    double s = (double)(stop - start) / CLOCKS_PER_SEC,
           Mb = (double)wr_cnt / 1024000.0;

    printf("Written %lu pages of ca. size %d (%lu bytes)\n",
           (unsigned long)(wr_cnt / IO_BLOCK_SIZE), IO_BLOCK_SIZE,
           (unsigned long)wr_cnt);
    printf("CPU time %.2f\n", s);
    printf("DiskIO %.2f Mb/s\n", Mb / s);
}

extern int stdio_op(const char *fname, const unsigned char *buf,
                    size_t buf_len);
extern int stdio_c(const char *fname, const unsigned char *buf,
                   size_t buf_len);
extern int stdio_nobuf(const char *fname, const unsigned char *buf,
                       size_t buf_len);

int main(void)
{
    static unsigned char buffer[IO_BLOCK_SIZE];
    size_t buf_len = 0, count = 0, wr_cnt = 0;
    clock_t start, stop;

    printf("printing '%u' 10 million times\n", fac(10));

    printf("------------------------------\nStandard C IO by OP\n");
    start = clock();
    wr_cnt = stdio_op(FILE_NAME, NULL, 0);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    printf("------------------------------\nStandard C IO\n");
    start = clock();
    buf_len = make_io_buf((char *)buffer, sizeof buffer, &count, fac(10));
    wr_cnt = stdio_c(FILE_NAME, buffer, buf_len);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    printf("------------------------------\nStandard C IO no buffering\n");
    start = clock();
    buf_len = make_io_buf((char *)buffer, sizeof buffer, &count, fac(10));
    wr_cnt = stdio_nobuf(FILE_NAME, buffer, buf_len);
    stop = clock();
    print_diff(start, stop, wr_cnt);

    remove(FILE_NAME);

    return 0;
}

/*-------------- listing stdio_c.c ------------------------*/

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

extern unsigned int fac(int n);   /* defined in challenge.c */

int stdio_op(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    unsigned int loop, n = 0;

    if ((fp = fopen("log.txt", "a")) != NULL) {
        for (loop = 1; loop <= 10000000; loop++) {
            n += fprintf(fp, "%u\n", fac(10));
        }
        fclose(fp);
    }
    return n;
}

int stdio_c(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    size_t loop, count = buf_len / 8;
    int wr_cnt = 0;

    if ((fp = fopen(fname, "w+b")) != NULL) {

        for (loop = 0; loop <= 10000000; loop += count) {
            wr_cnt += fwrite(buf, 1, buf_len, fp);
        }
        fclose(fp);
    }
    return wr_cnt;
}

int stdio_nobuf(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    size_t loop, count = buf_len / 8;
    int wr_cnt = 0;

    if ((fp = fopen(fname, "w+b")) != NULL) {

        if (0 == setvbuf(fp, NULL, _IONBF, 0))
            puts("Buffering turned off");

        for (loop = 0; loop <= 10000000; loop += count) {
            wr_cnt += fwrite(buf, 1, buf_len, fp);
        }
        fclose(fp);
    }
    return wr_cnt;
}
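For comparison (an added sketch, not one of Tor's test cases; stdio_bigbuf
is a hypothetical extra function), much of the gap between the OP's version
and the fwrite() versions can also be closed by keeping the fprintf() loop
and simply handing stdio a 64 KB buffer via setvbuf():

#include <stdio.h>

extern unsigned int fac(int n);   /* defined in challenge.c */

/* Hypothetical extra test case: the OP's loop, but with a 64 KB stdio buffer.
 * buf and buf_len are unused; the signature just mirrors the other cases. */
int stdio_bigbuf(const char *fname, const unsigned char *buf, size_t buf_len)
{
    FILE *fp;
    unsigned int loop, n = 0;
    static char big[64 * 1024];   /* must stay valid until fclose() */

    if ((fp = fopen(fname, "w")) != NULL) {
        if (setvbuf(fp, big, _IOFBF, sizeof big) != 0)
            puts("setvbuf failed, using default buffering");

        for (loop = 1; loop <= 10000000; loop++)
            n += fprintf(fp, "%u\n", fac(10));

        fclose(fp);
    }
    return n;
}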

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Dec 14 '07 #33
"Tor Rustad" <to********@hot mail.comwrote in message
news:o8******** *************@t elenor.com...
>Standard C IO no buffering
>Buffering turned off
>Written 1221 pages of ca. size 65536 (80019456 bytes)
>CPU time 0.26
>DiskIO 300.55 Mb/s
>...
>300.55 Mb/s is too good; the theoretical peak should
>have been about 150 Mb/s. I don't know why Linux is this
>blazingly fast! Does it have anything to do with the dual
>core CPU??? *strange*
The speed you're measuring is how fast the fwrite() call returns, not the
disk write performance. Most likely, fwrite() returns after putting the
data into a buffer but before the hardware actually acknowledges it's been
written to disk. Modern OSes do all sorts of things like that to improve
performance.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Dec 14 '07 #34
"c" <al******@gmail .comwrote in message
news:da******** *************** ***********@c4g 2000hsg.googleg roups.com...
>So we decided to write a program that calculates the factorial of 10
>ten million times and prints the result to a file named log.txt.
Your code ran in some 30 seconds on my (slowish) machine, and in about 1
second when I took out the fprintf() and only assigned the result of fib().
As has been mentioned, 95% of your run time is spent printing a text file.

I'm surprised the results differ by as little as 7 seconds versus 18
seconds. If both run under Windows, then C# likely has more streamlined
access to low-level file I/O.

If you are testing the code generation of C and C#, make the test fairer
by timing the algorithm only; file I/O speed is more a test of the
libraries and of access to the OS.
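Following that suggestion, here is a minimal sketch (an editorial addition,
not code from the thread) that times only the computation with clock() and
keeps the results in a running sum so the compiler cannot discard the calls:

#include <stdio.h>
#include <time.h>

/* The OP's recursive factorial (named fib in the original post). */
static unsigned int fib(int n)
{
    if (n != 1)
        return n * fib(n - 1);
    else
        return 1;
}

int main(void)
{
    unsigned int loop, sum = 0;
    clock_t start = clock();

    for (loop = 1; loop <= 10000000; loop++)
        sum += fib(10);   /* use the result so the loop isn't optimised away */

    printf("sum = %u, CPU time %.2f s\n",
           sum, (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}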

Also, you haven't given the source of the C# code; that might give extra
clues.

Bart

Dec 14 '07 #35
Stephen Sprunk wrote:
"Tor Rustad" <to********@hot mail.comwrote in message
news:o8******** *************@t elenor.com...
>Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.26
DiskIO 300.55 Mb/s
...
>300.55 Mb/s is too good, the theoretical peek should
have been 150 Mb/s, I don't know why Linux is this
blitzing fast! Does it have anything to do with dual
core CPU??? *strange*

The speed you're measuring is how fast the fwrite() call returns, not
the disk write performance. Most likely, fwrite() returns after putting
the data into a buffer but before the hardware actually acknowledges
it's been written to disk. Modern OSes do all sorts of things like that
to improve performance.
Nope, the clock was started *before* fopen(), and stopped *after*
fclose(). If Linux is using lazy commit, i.e. allowing a file to be
closed before flushing system I/O buffers, that would be highly
interesting/bad, particularly for those working with critical data.

I added another test case, this time using low-level POSIX I/O functions
and sync'ing the file *before* closing it; the result was sky-high I/O,
434 Mb/s! So file writes through the I/O subsystem are more than 10 times
faster on Linux than under Windows XP; the tests were done with identical
HW and C source.

*amazing*
$ ./a.out
printing '3628800' 10 million times
------------------------------
Standard C IO by OP
Written 1220 pages of ca. size 65536 (80000000 bytes)
CPU time 2.80
DiskIO 27.90 Mb/s
------------------------------
Standard C IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.28
DiskIO 279.09 Mb/s
------------------------------
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.24
DiskIO 325.60 Mb/s
------------------------------
Low-level Linux IO
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.18
DiskIO 434.13 Mb/s
/*------- listing lowio_linux.c -----------------------*/
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>      /* for puts() */
#include <stdlib.h>

int lowio_linux(const char *fname, const unsigned char *buf, size_t buf_len)
{
    unsigned int loop, n = 0, chunk = buf_len / 8;
    int fd, flags = O_WRONLY | O_CREAT | O_SYNC;
    ssize_t rc;

    fd = open(fname, flags, S_IRUSR | S_IWUSR);
    if (fd != -1)
    {
        for (loop = 0; loop <= 10000000; loop += chunk)
        {
            rc = write(fd, buf, buf_len);
            if (rc == -1)
                puts("write error"), exit(EXIT_FAILURE);
            n += rc;
        }
        rc = fsync(fd);
        if (rc == -1)
            puts("fsync error"), exit(EXIT_FAILURE);
        close(fd);
    }

    return n;
}

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Dec 14 '07 #36
Tor Rustad wrote:

[...]
>I added another test case, this time using low-level POSIX I/O functions
>and sync'ing the file *before* closing it; the result was sky-high I/O,
>434 Mb/s! So file writes through the I/O subsystem are more than 10 times
>faster on Linux than under Windows XP; the tests were done with identical
>HW and C source.
>
>*amazing*

I knew something was *wrong*. Grrr... these numbers are not valid! Why?

What basic mistake did I make?
Hint: what does clock() measure, and where does the program spend its time?

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Dec 14 '07 #37
"Tor Rustad" <to********@hot mail.comwrote in message
news:8u******** *************@t elenor.com...
Stephen Sprunk wrote:
>"Tor Rustad" <to********@hot mail.comwrote in message
news:o8******* **************@ telenor.com...
>>Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.26
DiskIO 300.55 Mb/s
...
>>300.55 Mb/s is too good, the theoretical peek should
have been 150 Mb/s, I don't know why Linux is this
blitzing fast! Does it have anything to do with dual
core CPU??? *strange*

The speed you're measuring is how fast the fwrite() call returns, not the
disk write performance. Most likely, fwrite() returns after putting the
data into a buffer but before the hardware actually acknowledges it's
been written to disk. Modern OSes do all sorts of things like that to
improve performance.

Nope, the clock was started *before* fopen(), and stopped *after*
fclose().
fclose() just closes the FILE* (and related fd); it does _not_ guarantee
that the data is actually physically on the disk. There may be some
OS-specific function that will give you the indication you're wrongly
assuming you're getting, but it's not on by default.
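For reference (an added sketch, not from Stephen's post): on POSIX systems
the usual way to be sure the data has at least left the OS's buffers before
stopping the timer is to fflush() the stdio stream and then fsync() the
underlying descriptor; on Windows the rough equivalents are _commit() or
FlushFileBuffers(). Even then, as noted below, the drive's own cache may
still hold the data.

#include <stdio.h>
#include <unistd.h>     /* fsync(), fileno() -- POSIX, not standard C */

/* Flush stdio's user-space buffer, then ask the kernel to push the data
 * towards the device.  Returns 0 on success, -1 on failure. */
static int flush_to_disk(FILE *fp)
{
    if (fflush(fp) != 0)            /* stdio buffer -> kernel */
        return -1;
    return fsync(fileno(fp));       /* kernel buffers -> disk (or drive cache) */
}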
>If Linux is using lazy commit, i.e. allowing a file to be closed
>before flushing system I/O buffers, that would be highly
>interesting/bad, particularly for those working with critical data.

It's completely normal. That's why one should always shut down machines
cleanly instead of pulling the plug -- and why data (and filesystems) tend
to get corrupted when the power goes out. Even the disks lie to the OS
about when data is written; as soon as the data is in the drive's cache, it
tells the OS it's done writing so that the OS can reuse the buffer(s). Only
top-of-the-line controllers with battery-backed caches are immune from these
sorts of problems (and even then, you have to boot the machine again and let
it finish writing before removing the disk from the system).
>I added another test case, this time using low-level POSIX I/O
>functions and sync'ing the file *before* closing it; the result was
>sky-high I/O, 434 Mb/s! So file writes through the I/O subsystem are
>more than 10 times faster on Linux than under Windows XP; the tests
>were done with identical HW and C source.
>
>*amazing*
It's not so amazing when you realize you're measuring the speed of the OS's
I/O system, not the hardware's.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Dec 14 '07 #38
Stephen Sprunk wrote:
"Tor Rustad" <to********@hot mail.comwrote in message
news:8u******** *************@t elenor.com...
>Stephen Sprunk wrote:
>>"Tor Rustad" <to********@hot mail.comwrote in message
news:o8****** *************** @telenor.com...
Standard C IO no buffering
Buffering turned off
Written 1221 pages of ca. size 65536 (80019456 bytes)
CPU time 0.26
DiskIO 300.55 Mb/s
...
300.55 Mb/s is too good, the theoretical peek should
have been 150 Mb/s, I don't know why Linux is this
blitzing fast! Does it have anything to do with dual
core CPU??? *strange*

The speed you're measuring is how fast the fwrite() call returns, not
the disk write performance. Most likely, fwrite() returns after
putting the data into a buffer but before the hardware actually
acknowledge s it's been written to disk. Modern OSes do all sorts of
things like that to improve performance.

Nope, the clock was started *before* fopen(), and stopped *after*
fclose().

fclose() just closes the FILE* (and related fd); it does _not_ guarantee
that the data is actually physically on the disk. There may be some
OS-specific function that will give you the indication you're wrongly
assuming you're getting, but it's not on by default.
>If Linux is using lazy commit, i.e. allowing a file to be e.g. closed,
before flushing system IO buffers, that would be highly
interesting/bad, particularly for those working with critical data.

It's completely normal. That's why one should always shut down machines
cleanly instead of pulling the plug -- and why data (and filesystems)
tend to get corrupted when the power goes out. Even the disks lie to
the OS about when data is written; as soon as the data is in the drive's
cache, it tells the OS it's done writing so that the OS can reuse the
buffer(s). Only top-of-the-line controllers with battery-backed caches
are immune from these sorts of problems (and even then, you have to boot
the machine again and let it finish writing before removing the disk
from the system).
Some years ago, I was told that lazy commit was the reason we didn't
run *any* production systems on Linux; now this OS is allowed in *some*
cases. If an OS doesn't flush system buffers when a file is closed, or
performs fflush() asynchronously, then it is impossible to write a
bullet-proof C program with file I/O in that environment.

The C standard is silent here: data only has to be transferred to the
host environment. Depending on QoI, the data may be written to
persistent storage before fclose() returns, or at program exit().

Just consider this: a DB program starts a transaction with "begin work",
then performs lots of updates before "commit work"... if you can't trust
that the data has been saved to disk, then you risk losing the
transaction in case of e.g. a power outage or HW failure (e.g. a disk
controller failure).

If a disk controller lies, the flaw is limited to that unit. If an OS
lies, that is an important thing to know about, and I don't think I
would like to use such an OS for work-related production systems.

>>I added another test case, this time using low-level POSIX I/O
>>functions and sync'ing the file *before* closing it; the result was
>>sky-high I/O, 434 Mb/s! So file writes through the I/O subsystem are
>>more than 10 times faster on Linux than under Windows XP; the tests
>>were done with identical HW and C source.
>>
>>*amazing*
>
>It's not so amazing when you realize you're measuring the speed of the
>OS's I/O system, not the hardware's.
Well, in the benchmark I ran, the result couldn't be explained by the
disk cache alone, since that is somewhere in the range of 8 MB - 32 MB,
while the measured I/O peak was >150 Mb/s too high.

Also, since I did an explicit call to fsync() in the last test case, the
data should have been committed to disk before the stop timer was set.
A clear sign that something was wrong was that fsync()'ing boosted the
performance by another 100 Mb/s!

I think the simple answer here is that the clock() implementation on
Linux measured processor time used in *user* space only, ignoring the
(significant) time spent in system calls. :)
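One way to check that hypothesis (an added sketch using POSIX times(), not
something posted in the thread): times() reports user and system CPU time
separately, so wrapping a test case in it shows whether the system-call
time is really being dropped by clock():

#include <stdio.h>
#include <sys/times.h>   /* times(), struct tms -- POSIX */
#include <unistd.h>      /* sysconf() */

int main(void)
{
    struct tms t0, t1;
    long ticks = sysconf(_SC_CLK_TCK);   /* clock ticks per second */

    times(&t0);
    /* ... run one of the I/O test cases here ... */
    times(&t1);

    printf("user CPU: %.2f s, system CPU: %.2f s\n",
           (double)(t1.tms_utime - t0.tms_utime) / ticks,
           (double)(t1.tms_stime - t0.tms_stime) / ticks);
    return 0;
}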

--
Tor <bw****@wvtqvm.vw | tr i-za-h a-z>
Dec 14 '07 #39
Tor wrote:
) Some years ago, I was told that lazy commit was the reason we didn't
) run *any* production systems on Linux; now this OS is allowed in *some*
) cases. If an OS doesn't flush system buffers when a file is closed, or
) performs fflush() asynchronously, then it is impossible to write a
) bullet-proof C program with file I/O in that environment.

You can actually turn that off in Linux, you know.

) I think the simple answer here is that the clock() implementation on
) Linux measured processor time used in *user* space only, ignoring the
) (significant) time spent in system calls. :)

Hmm, not quite. The clock() function is supposed to only measure processor
time used, so time spent waiting for I/O to finish will not be counted.
(After all, during that time, other tasks can get their share of CPU time.)
It does count CPU time spent in the system call though, AFAIK.
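Since the challenge is really about elapsed time, the other fix (an added
sketch; clock_gettime() is POSIX, not standard C) is to measure wall-clock
time, which does include the time spent waiting for I/O:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    double elapsed;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... run the benchmark here ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    elapsed = (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("wall-clock time: %.2f s\n", elapsed);
    return 0;
}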
SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Dec 15 '07 #40
