Bytes | Software Development & Data Engineering Community

Local tables or linked local database?

I have a split front end/back end system. However, I create a number of
local tables to carry out certain operations. There is a tendency for
the front end to bloat, so I have set 'compact on close'.

I think I have read in some threads (although I cannot find them
now) that others place such tables in a local, linked database. I
could do this, but I am interested to know what the advantages would be.
And the disadvantages, if any.

Any observations on the relative merits of local tables vs a local
database would be welcome.

Jim

Oct 25 '06 #1
I think the advantages are that you don't need compact on close, and
really, temp junk data that you plan to throw out is better placed
elsewhere.

The idea here is that a split system means that your "mde" is your
executable file that you distribute. It contains code. Presumably, you could
issue an update, and those temp files would NOT be overwritten.

There is some sample code at Tony's site that does create a temp mdb file,
so obviously some of us do view the extra effort of creating a temp mdb
file as an advantage. (And you can simply delete the file and not worry about
bloat after you are done.)

I don't think it is a huge deal, nor a hard and fast rule, but I would as a
rule use a temp external mdb file for that working type of stuff.
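The approach can be sketched in a few lines of DAO code. This is a minimal sketch, not Tony's actual sample; the path, database name, and table name (MyAppWork.mdb, tblWork) are illustrative:

```vba
' Create a scratch mdb in the user's temp folder, build a working table
' in it, and link it into the front end. Delete the file when done --
' no front-end bloat, no compact on close needed.
Sub CreateTempMdb()
    Dim strTemp As String
    Dim db As DAO.Database

    strTemp = Environ("TEMP") & "\MyAppWork.mdb"
    If Dir(strTemp) <> "" Then Kill strTemp   ' throw away last session's junk

    Set db = DBEngine.CreateDatabase(strTemp, dbLangGeneral)
    db.Execute "CREATE TABLE tblWork (ID LONG, ItemText TEXT(50))"
    db.Close

    ' Link it so the front end can use it like a local table
    DoCmd.TransferDatabase acLink, "Microsoft Access", strTemp, _
        acTable, "tblWork", "tblWork"
End Sub
```

Because the temp file lives outside the distributed mde, issuing an update to the front end never touches the working data.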
--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
pl*****************@msn.com
Oct 25 '06 #2
Access does a miserable job of mitigating the amount of data that it
passes back and forth to tables when it runs any operation. You will
probably get better performance having the local tables in the database
itself. The only issue that I have ever had with compact on close is
one of Windows permissions - for example, if you have special
permissions set on the database file, those will be gone on the
compacted version.

Oct 25 '06 #3
"Pachydermitis" <pr*******@gmail.com> wrote in
news:11*****************@i3g2000cwc.googlegroups.com:

> Access does a miserable job of mitigating the amount of data that
> it passes back and forth to tables when it runs any operation.
> You will probably get better performance having the local tables
> in the database itself.

Hogwash. Access is extremely efficient in drawing data from linked
tables.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Oct 26 '06 #4
I would not say 'extremely', but it does an acceptable job of 'drawing'
or retrieving data. The moment you ask Access to manipulate that data
in any way - say queries, continuous forms, reports, etc. - the data
management of the interface - not necessarily the Jet engine - is just
terrible. Plug one of those lovely Fluke network analysis toys into
your network and you will see what I mean.

Nov 2 '06 #5
"Pachydermitis" <pr*******@gmail.com> wrote in
news:11*****************@i42g2000cwa.googlegroups.com:

> I would not say 'extremely', but it does an acceptable job of 'drawing'
> or retrieving data. The moment you ask Access to manipulate that data
> in any way - say queries, continuous forms, reports, etc. - the data
> management of the interface - not necessarily the Jet engine - is just
> terrible.
Perhaps YOU find this to be just terrible. If you do you could say, "I find
it to be just terrible." But do not say, "The moment YOU ask ...."; you
have no knowledge of what happens when I ask Access to manipulate the data.

I am happy to report that I do not find it to be just terrible. In fact I
find it to be just peachy!

--
Lyle Fairfield

from http://msdn.microsoft.com/library/de...l=/library/en-us/dnmdac/html/data_mdacroadmap.asp

Obsolete Data Access Technologies
Obsolete technologies are technologies that have not been enhanced or
updated in several product releases and that will be excluded from future
product releases. Do not use these technologies when you write new
applications. When you modify existing applications that are written using
these technologies, consider migrating those applications to ADO.NET.
The following components are considered obsolete:
....
Data Access Objects (DAO): DAO provides access to JET (Access) databases.
This API can be used from Microsoft Visual Basic®, Microsoft Visual C++®,
and scripting languages. It was included with Microsoft Office 2000 and
Office XP. DAO 3.6 is the final version of this technology. It will not be
available on the 64-bit Windows operating system.
.....
Nov 2 '06 #6
"Pachydermitis" <pr*******@gmail.com> wrote in
news:11*****************@i42g2000cwa.googlegroups.com:

> David W. Fenton wrote:
>> "Pachydermitis" <pr*******@gmail.com> wrote in
>> news:11****************@i3g2000cwc.googlegroups.com:
>>> Access does a miserable job of mitigating the amount of data
>>> that it passes back and forth to tables when it runs any
>>> operation. You will probably get better performance having the
>>> local tables in the database itself.
>>
>> Hogwash. Access is extremely efficient in drawing data from
>> linked tables.
>
> I would not say 'extremely', but it does an acceptable job of
> 'drawing' or retrieving data.

It does *way* more than acceptable. It retrieves only the minimum
amount of data needed, and if you've properly indexed your tables,
this is a very small amount of data.

The only time it bogs down is when you do something stupid, like
select on an expression or sort or select on an unindexed field.
That's not a flaw of Jet, it's pilot error.

> The moment you ask Access to manipulate that data
> in any way - say queries, continuous forms, reports, etc. - the data
> management of the interface - not necessarily the Jet engine - is
> just terrible.

Hogwash.

> Plug one of those lovely Fluke network analysis toys into
> your network and you will see what I mean.

In comparison to *what*? What other program manipulates data across
a network with such efficiency without the help of a process on the
other end of the connection?

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Nov 3 '06 #7
Wow, I can't believe I have sparked such a response. David Fenton, I
have seen your posts and admire your knowledge and contribution to the
Access community. I completely agree with what you say about poor data
structure and database design being the root of most users'
performance issues. Please understand that I continue this discussion
only with the greatest respect.

The question began with a user asking if they should use internal or
linked tables. Are you saying that linked is as fast and efficient as
internal? For example, if I have a query joining two linked tables,
will anyone say that it is just as fast for Access to analyze each
record for a match across a network as it is internally? If so, build
two tables with 300k records each, link them, join them, time it.

Out of all the forms and reports you have created, how many only use a
single table and have no query or SQL recordsources? Not many, I
imagine, or you probably have issues with normalization. Have you ever
analyzed the amount of data Access retrieves when you open a form on a
complex dataset - even a single-record form? If not, take those 300k
record tables and write two queries: one that limits your data to a
record and one that allows the form to do so in the default manner, and
time it. How good a job does Access do mitigating the amount of
data it passes back and forth for that single record - or is it
faster when you manually limit the data? If you are surprised, wait
until you try this on a continuous form.

To add one more brand to the fire, most people use linked tables when
they have a multiuser setup. Both of the timed sequences you ran
earlier, run them again with 10 users trying to get data from that
backend. If you give those users phones you will discover a whole new
realm of data transfer. And yes, I know about the subdatasheets issue.

In comparison to what? You have used passthrough queries. That in
itself deprecates the methods Jet uses to manipulate queried data. Set
up those two tables on a SQL Server or Oracle backend - even MSDE or SQL
Express - heck, MySQL or PostgreSQL. Use passthroughs. Time your data
transfers. Time them with multiple users. Time them with a real-life
network load - with a network full of people sending emails, printing,
reading files, viewing security cams - and tell everyone how Access
fared in the performance test.

Like you, I love Access and think that it is a great tool. I have been
using it for many years as a cost-effective solution to solve some very
complex issues. My posts come from my experience. I have tested many
of the scenarios I mentioned above, and have come to the conclusions
that I wrote earlier. I understand that you also speak from a wealth
of experience, so please quantify your remarks. "Hogwash" does
nothing for less experienced users trying to learn from these posts.
Pachydermitis

Nov 6 '06 #8
"Pachydermitis" <pr*******@gmail.com> wrote in
news:11*****************@f16g2000cwb.googlegroups.com:

> The question began with a user asking if they should use internal
> or linked tables. Are you saying that linked is as fast and
> efficient as internal?
Of course not. I'm only disputing overly broad statements that claim
that Jet is pulling scads of data across the network.

> For example if I have a query joining two linked tables,
> will anyone say that it is just as fast for access to analyze each
> record for a match across a network as it is internally?

Access doesn't analyze each record, no matter if they are stored in
a local table or in a linked table on the other end of the LAN
(unless you're joining on unindexed fields, but you seem to agree
with me that that would constitute bad design, i.e., pilot error,
and not be due to any Jet inefficiencies).
> If so build
> two tables with 300k records each, link them, join them, time it.

The time will be longer for the data that comes through the
smaller pipe. Since a LAN has a smaller pipe than the data bus on
the local PC (which may be reading the MDB out of RAM), of course it
will be faster.

However, the amount of data retrieved to do the exact same join WILL
BE EXACTLY THE SAME. First the query will be optimized using
metadata about the tables. That will be read locally or retrieved
remotely. Once the query is optimized, index pages will be retrieved
for the joined fields, the indexes will be joined and then the data
pages for the matching records will be retrieved.

The process will be PRECISELY THE SAME regardless of whether the
data is on a local hard drive or on a file server.
> Out of all the forms and reports you have created, how many only
> use a single table and have no query or sql recordsources? Not
> many I imagine or you probably have issues with normalization.
For the most part, I very seldom have forms bound to anything but a
single table, but I'd never use anything but a SQL rowsource.
> Have you ever
> analyzed the amount of data Access retrieves when you open a form
> on a complex dataset - even a single record form?
Yes. I have. It's very little data even with a very large data set
because Jet is efficient in optimizing SQL.

What I do *not* do, except in very small apps, is bind a form to all
the records in a table (I consider small to be <10K records or so).
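The pattern amounts to loading the form empty (or narrow) and filling its recordsource from the user's criteria. A minimal sketch, assuming hypothetical names (tblCustomers, LastName, txtSearch, cmdSearch):

```vba
' Bind the form to a small filtered set instead of the whole table.
' With an index on LastName, Jet fetches only the matching pages.
Private Sub cmdSearch_Click()
    Me.RecordSource = "SELECT * FROM tblCustomers " & _
                      "WHERE LastName LIKE '" & Me.txtSearch & "*'"
End Sub
```

This is the "name search" style described below: the user narrows the set first, and the form never drags the full table across the wire.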
> If not take those 300k
> record tables and write two queries, one that limits your data to
> a record and one that allows the form to do so in the default
> manner, and time it. How good of a job does Access do mitigating
> the amount of data it passes back and forth for that single record
> - or is it faster when you manually limit the data? If you are
> surprised, wait until you try this on a continuous form.
What are you babbling about? Who in their right mind would populate
a form with a recordsource populated from a join of two 300K-record
tables? That's insane.

I can make it worse:

Sort the resultset.

But that, again, is just STUPIDITY -- PILOT ERROR. A user can't use
that many records. A user can only operate on a small set of records
at a time, so my apps only operate on small sets of records. I have
an app whose main table has 250K records and has a child table
displayed in a subform of 650K records. The user enters data to load
small groups of records at a time (name search), and the results
come back instantaneously. It's in use by 10-12 simultaneous users
all the time, and the network runs just fine.
> To add one more brand to the fire, most people use linked tables
> when they have a multiuser setup. Both of the timed sequences you
> ran earlier, run them again with 10 users trying to get data from
> that backend. If you give those users phones you will discover a
> whole new realm of data transfer. And yes I know about the
> subdatasheets issue.
But do you know anything at all about efficient application design?
Do you know about problems with CREATE/DELETE collisions on the LDB
file on the back end?
> In comparison to what? You have used passthrough queries.
No, I haven't. I don't do server applications -- my apps are ALL
JET.
> That in
> itself deprecates the methods jet uses to manipulate queried data.
No, it doesn't.
> Set
> up those two tables on a sql or oracle backend - even msde or sql
> express - heck mysql or postgre. Use passthroughs. Time your
> data transfers. Time them with multiple users. Time them with a
> real life network load - with a network full of people sending
> emails, printing, reading files, viewing security cams, and tell
> everyone how Access fared in the performance test.
I can design a scenario where a server database will be very poor if
I want to.

You can design a scenario where Jet will look poor.

So what?
> Like you, I love Access and think that it is a great tool. I have
> been using it for many years as a cost effective solution to solve
> some very complex issues. My posts come from my experience. I
> have tested many of the scenarios I mentioned above, and have come
> to the conclusions that I wrote earlier. I understand that you
> also speak from a wealth of experience, so please quantify your
> remarks. "Hogwash" does nothing for less experienced users trying
> to learn from these posts.
What you wrote was HOGWASH.

It was misleading at best, because it lacked context and nuance.

And what you write above indicates that YOU DON'T UNDERSTAND HOW JET
WORKS.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Nov 7 '06 #9
> so my apps only operate on small sets of records

That is what I am saying; Access does a lousy job of managing the
records - forcing developers to do it.

> Who in their right mind would populate. . .

I guess this means that you are in your right mind and would never
trust Access, in its legendary efficiency, to manage the data for you?

> I'm only disputing overly broad statements that claim
> that Jet is pulling scads of data across the network.

Reread the post; no one said that. I said Access does a miserable job,
and further clarified in my second post. Based on your above
statement, you agree - at least about the interface.

> even with a very large data set
> because Jet is efficient in optimizing SQL.

Try this: limit a table's recordset in a query, then join that query to
another table. Say table1 has 200k records and its limited query now has
10 records. Table2 has 200k records. Joining 10 to 200k is much
faster than joining 200k to 200k.
Put a function in the second query so you can watch the records.
Rather than limiting the compared records to the first query, Access
optimizes the join for us so that it has to join every record in both
tables. More robust engines allow this type of optimization - often a
must when your recordsets are in the millions.
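The experiment described above can be sketched in DAO. This is a rough sketch, not a benchmark harness; table, field, and query names (table1, table2, ID, SomeKey, qryLimited) are hypothetical:

```vba
' Time a pre-filtered join against a full join of two large tables.
Sub CompareJoinTimes()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim t As Single

    Set db = CurrentDb

    ' Build the limiting query first (recreate it each run).
    On Error Resume Next
    db.QueryDefs.Delete "qryLimited"
    On Error GoTo 0
    db.CreateQueryDef "qryLimited", _
        "SELECT * FROM table1 WHERE SomeKey = 42"

    ' Join the small filtered set to table2.
    t = Timer
    Set rs = db.OpenRecordset( _
        "SELECT t2.* FROM qryLimited AS q " & _
        "INNER JOIN table2 AS t2 ON q.ID = t2.ID")
    If Not rs.EOF Then rs.MoveLast   ' force full retrieval
    rs.Close
    Debug.Print "Limited join: " & (Timer - t) & " sec"

    ' Join the two full tables.
    t = Timer
    Set rs = db.OpenRecordset( _
        "SELECT t2.* FROM table1 " & _
        "INNER JOIN table2 AS t2 ON table1.ID = t2.ID")
    If Not rs.EOF Then rs.MoveLast
    rs.Close
    Debug.Print "Full join: " & (Timer - t) & " sec"
End Sub
```

The MoveLast calls force both recordsets to be fully populated, so the timings reflect actual data retrieval rather than lazy fetching.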
> but I'd never use anything but a SQL rowsource.

Guess what: Access compiles its queries, and using SQL as a rowsource is
generally frowned upon as poor for performance.
http://support.microsoft.com/kb/209126

> You can design a scenario where Jet will look poor.

If you ever have to build an application that functions at the
enterprise level, you will find out that I did not design a scenario
for Jet to look poor; that is life. Jet is not that robust and
probably not designed to be.

Well, what a disappointment; who would have ever thought that you would
be such a pompous jerk. I take back what I said earlier about
respecting you. You don't even have the decency to be polite.

As you say, you only work in Jet. Maybe you should get more experience
before you talk about things you don't know about. You are like a guy
on a moped shouting to everyone that he's the fastest thing on the
road. So keep shouting - to anyone foolish enough to listen - or (like
me) foolish enough to answer you.
P
Nov 7 '06 #10
