Bytes | Software Development & Data Engineering Community

How to achieve scalability

I have a fairly large table (700,000 rows or so) that I'd like to run a
process against. However, the procedure we have right now was designed for
tables of more like 20,000 rows and isn't able to handle it. Access will
always crash before it can complete.

Some background: the procedure is used in the process of data cleanup and
is designed to process company names into a standardized form, so that we
can use it to confirm data across various datasets. The procedure takes
input like, for example, "The Yummy and Tasty Waffle Corporation" or
"Yummy & Tasty Waffle, Incorporated" and turns both into "TASTYWAFFLE".
We can then sort, link, and filter, etc. on this field along with others
to see if there are duplicates or check if companies that have different
IDs are in fact the same company.

Specifically, another procedure takes a specified table and field and
creates a new field, filling it with the contents of the original field.
That procedure then passes a DAO recordset and the name of the new field
to the main procedure which then performs 11 operations to arrive at the
"TASTYWAFFLE" stage. Filling the new field is clearly double work and
I'll be trimming that part out.

Now for my specific questions: Currently, the procedure takes the whole
contents of the field at once, and uses a lot of InStr, Mid, Left, and
Right functions to perform all of the operations, then moves on to the
next row. It seems more direct to just read character by character until
I have a complete word (i.e. I hit a space or other delimiter), process
that bit, then move on to the next part of the field. Which of these
approaches is more efficient? Also, where could I go to find some
guidelines on writing the most scalable VBA code? I know Access has
limitations, but I'd like to be limited by those and not by our own
inefficiencies.

Thanks in advance,

Carlos
Jul 2 '08 #1
On Jul 1, 8:45 pm, "Carlos Nunes-Ueno" <sulla...@athotmaildot.com>
wrote:

<snips>
The functions you name are old and stale; they were always
inefficient. My guess is that Regular Expressions would solve your
problem and be hundreds, possibly thousands of times faster than
straight VBA code. Of course, those who don't know Regular Expressions
may disagree. No doubt, initially Regular Expressions can be
bewildering. But a little work and a modicum of patience can result in
the tedious being made simple, and the impossible, only challenging.
Jul 2 '08 #2
Thanks, Lyle. I didn't know that regular expressions were available at all
in VBA, but after some checking around I see the reference for the VBS
regular expressions library.

So, basically, you're recommending that I step through the recordset as I
do now, but using regular expression objects to perform the operations
instead of the VBA functions? Or is there an even more efficient way?

Thanks,

Carlos

lyle fairfield <ly************@gmail.com> wrote in
news:21**********************************@25g2000hsx.googlegroups.com:

<snips>
Jul 2 '08 #3
On Wed, 2 Jul 2008 00:45:11 +0000 (UTC), "Carlos Nunes-Ueno"
<su******@athotmaildot.com> wrote:

Was that a real example? It seems difficult to come up with consistent
rules that convert "The Yummy and Tasty Waffle Corporation" into
"TASTYWAFFLE". Something like "Take the 4th and 5th word, and omit the
spaces"? Can you tell us what kinds of rules you're applying for the
conversion?

I have had good success with the Ratcliff/Obershelp algorithm that
returns a similarity (a number between 0 and 1) between two strings. I
checked and for your two company names the similarity is 0.75. Using
some cutoff value you can narrow down the most similar companies and
bunch them up that way.
We recently implemented this algorithm as a .Net assembly in SQL
Server 2005, and it is very fast. 10,000 comparisons in way less than
1 second.
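For anyone who wants to experiment with a Ratcliff/Obershelp-style score without the .NET assembly, here is a minimal sketch, in Python purely for illustration: the standard-library difflib.SequenceMatcher uses a closely related matching-blocks algorithm, though it is not the exact simil() implementation Tom describes, so its scores may differ slightly.

```python
# Ratcliff/Obershelp-style similarity via Python's standard library.
# SequenceMatcher.ratio() returns 2*M/T, where M is the total length of
# matched blocks and T the combined length of both strings.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score for two strings (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [
    ("The Yummy and Tasty Waffle Corporation",
     "Yummy & Tasty Waffle, Incorporated"),
    ("Acme Ltd", "ACME Limited"),
]
for a, b in pairs:
    print(f"{similarity(a, b):.2f}  {a!r} vs {b!r}")
```

With 700,000 names you would still want to avoid scoring every possible pair; a pre-computed short-name field, as discussed elsewhere in the thread, narrows the candidate set first.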

I'm not at all convinced RegEx is the ticket here.

-Tom.

<snips>
Jul 2 '08 #4
Tom, that sounds fascinating. Where did you come across this and how
did you apply it? Is it literally string A, string B, and here's the
answer xyz?

I have managed to do something similar to what Carlos has done, in
this case for product lists being imported into a database in a
structured way. The fastest way with pure VBA that I could find was to
simply use a 'dictionary' for the terms, and make sure that the
dictionary terms were unique to a specific data source. In this way
source A provides data in, say, a CSV; you do some column mapping so
that the 'import' routine knows which columns to place where, then
load the data into a 'staging area'. You then run the first set of
transformations that map the terms from the dictionary for that data
source against the terms in the staging area that came from that data
source. Where you go from there would depend on what you want as an
output.

This method processes approx 25k records in about 8 mins on a
Pentium M 1.2GHz. Just for the record, it takes a while to build the
dictionary for a specific data source - that's the annoying part. I
could probably make the routine faster if I spent some time eyeballing
the code, but what you have there could be used much more efficiently
indeed.
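The per-source dictionary pass described above might look something like the following sketch (Python for illustration; the term list, source names, and staging layout are invented examples, not the actual routine):

```python
# Sketch of a per-source term dictionary: each data source gets its own
# mapping of raw terms to canonical terms, applied in a staging pass.
source_a_terms = {
    "corp": "corporation",
    "inc": "incorporated",
    "wffl": "waffle",
}

def apply_dictionary(raw: str, terms: dict[str, str]) -> str:
    """Rewrite each whitespace-separated token via the source's dictionary."""
    return " ".join(terms.get(tok.lower(), tok.lower()) for tok in raw.split())

# Rows loaded into the staging area from source A:
staged = ["Yummy Tasty Wffl Corp", "Tasty Wffl Inc"]
cleaned = [apply_dictionary(row, source_a_terms) for row in staged]
print(cleaned)
```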

Please spill the beans!

Cheers

The Frog
Jul 2 '08 #5
Hi again,

Found a VBA (VB) implementation that should do the trick:

http://www.planetsourcecode.com/vb/s...=9353&lngWId=1

Wish I had known of this sooner. This is so useful!

Cheers

The Frog
Jul 2 '08 #6
On Jul 2, 1:02 am, Tom van Stiphout <no.spam.tom7...@cox.net> wrote:

<snips>
I'm not so familiar with running .Net in SQL Server. Does ".Net
assembly in SQL Server 2005" imply the function is converted to
assembly language? Or is it compiled to machine code? P-Code? Or just
interpreted script? Will any of these impact much on string functions
anyway?

Will the Ratcliff/Obershelp algorithm deal with such directions as:
If you find "Yummy" and "Tasty" in the string return "YummyTasty"?

An air code RegExp that does that with both strings provided by the
OP, runs a million iterations in 7 seconds on my computer. Of course,
that's just P-Code.

If I were doing this I'd probably explore getting big strings with
ADO's GetString, or maybe XML record set saves, and applying the rules
to those big strings that included many records.

Sample Code (done over coffee and with the connivance of allergy
induced sleeplessness - we have met the pollen and it has won!)

Private Declare Function GetTickCount& Lib "kernel32" ()

Private Sub Whatever()
    Dim Iterator&
    Dim RegExp As Object
    Dim ThousandthsofaSecond&
    Dim WhatComesOut$
    Dim WhatGoesIn$(0 To 1)
    WhatGoesIn(0) = "The Yummy and Tasty Waffle Corporation"
    WhatGoesIn(1) = "Yummy & Tasty Waffle, Incorporated"
    ThousandthsofaSecond = GetTickCount()
    Set RegExp = CreateObject("VBScript.RegExp")
    With RegExp
        .Global = True
        .IgnoreCase = True
        .Pattern = ".*yummy.*tasty.*"
        For Iterator = 0 To 999999
            WhatComesOut = .Replace(WhatGoesIn(Iterator Mod 2), _
                "YUMMYTASTY")
        Next Iterator
    End With
    Debug.Print _
        "(" & WhatGoesIn(0) _
        & "||" _
        & WhatGoesIn(1) & ")" _
        & "->" _
        & WhatComesOut & vbNewLine _
        & Iterator _
        & " times in " _
        & (GetTickCount() - ThousandthsofaSecond) / 1000 _
        & " seconds"
End Sub

(The Yummy and Tasty Waffle Corporation||Yummy & Tasty Waffle, Incorporated)->YUMMYTASTY
1000000 times in 7.176 seconds


Jul 2 '08 #7
My bad. I meant to write "YUMMYTASTYWAFFLE". I wanted the example to
demonstrate what we need to do, which is to take out common words like
"corporation", "inc", "and", and "the". We also strip out any
non-alphanumeric characters, so symbols like the "&", and punctuation and
spaces go away. Another step substitutes common abbreviations for certain
words.
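Those rules (substitute abbreviations, drop common words, strip non-alphanumerics) can be sketched roughly as follows. This uses Python's re module for illustration rather than VBScript.RegExp, and the stop-word and abbreviation lists are invented examples, not the actual 11 operations:

```python
# Sketch of the name-normalisation rules: expand abbreviations, remove
# common words, strip everything non-alphanumeric, upper-case the rest.
import re

STOP_WORDS = r"\b(the|and|corporation|corp|incorporated|inc|company|co|ltd)\b"
ABBREVIATIONS = {r"\bintl\b": "international", r"\bmfg\b": "manufacturing"}

def normalise(name: str) -> str:
    s = name.lower()
    for pat, full in ABBREVIATIONS.items():   # expand common abbreviations
        s = re.sub(pat, full, s)
    s = re.sub(STOP_WORDS, "", s)             # drop common words
    s = re.sub(r"[^a-z0-9]", "", s)           # strip punctuation and spaces
    return s.upper()

print(normalise("The Yummy and Tasty Waffle Corporation"))  # YUMMYTASTYWAFFLE
print(normalise("Yummy & Tasty Waffle, Incorporated"))      # YUMMYTASTYWAFFLE
```

Note the stop-word alternation lists longer words first ("incorporated" before "inc") so a prefix never matches ahead of the full word.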

Tom van Stiphout <no*************@cox.net> wrote in
news:ek********************************@4ax.com:

<snips>
Jul 2 '08 #8
Seven seconds would be much, much better than never, which is what I've got
going on now.

I've taken a look at GetString and XML recordset saves, and it's clear
to me that those would be great for getting the data out in an
easy-to-process format, but I'm unsure about how the data would get back
into the table. What I'm trying to achieve is a new converted-name field
for each row in the table.
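However the names end up being transformed, the write-back step Carlos asks about can stay simple: read each row's key and name, compute the converted name, and update the new field keyed on the primary key. A rough sketch of that pattern, using sqlite3 as a stand-in for the Access table and a placeholder transform (all table and column names are invented):

```python
# Read rows, transform the name, and write the result back keyed on the
# primary key. sqlite3 stands in for an Access/DAO recordset here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT, clean_name TEXT)")
conn.executemany(
    "INSERT INTO companies (id, name) VALUES (?, ?)",
    [(1, "The Yummy and Tasty Waffle Corporation"),
     (2, "Yummy & Tasty Waffle, Incorporated")])

def transform(name: str) -> str:
    # placeholder for the real 11-step normalisation
    return "".join(c for c in name.upper() if c.isalnum())

rows = conn.execute("SELECT id, name FROM companies").fetchall()
conn.executemany(
    "UPDATE companies SET clean_name = ? WHERE id = ?",
    [(transform(name), rid) for rid, name in rows])
print(conn.execute("SELECT clean_name FROM companies ORDER BY id").fetchall())
```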

Thanks,

Carlos
Jul 2 '08 #9
"Carlos Nunes-Ueno" <su******@athotmaildot.com> wrote:

<snips>
One of the things I would do in a similar situation is to keep a table of all such
words which weren't in the common-words table, along with their associated "master"
record. That allows the users to go down the looong list of words and easily look
for misspellings such as ymmy and ummy.
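Tony's side-table idea can be sketched as follows (Python for illustration; the record IDs and word lists are invented). Sorting the surviving words puts near-misses like "ymmy" right next to "yummy", which is what makes the manual scan practical:

```python
# Collect every word that survives stop-word removal into a side table,
# keyed back to the record(s) it came from, for human review.
from collections import defaultdict

COMMON = {"the", "and", "corporation", "incorporated", "inc"}

records = {101: "The Ymmy and Tasty Waffle Corporation",
           102: "Yummy Tasty Waffle Inc"}

word_index: dict[str, list[int]] = defaultdict(list)
for rec_id, name in records.items():
    for word in name.lower().split():
        if word not in COMMON:
            word_index[word].append(rec_id)

for word in sorted(word_index):   # misspellings sort next to their neighbours
    print(word, word_index[word])
```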

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Tony's Microsoft Access Blog - http://msmvps.com/blogs/access/
Jul 3 '08 #10
On Wed, 2 Jul 2008 00:40:42 -0700 (PDT), The Frog
<Mr************@googlemail.com> wrote:

The algorithm is in the public domain. Google for it. And brush up on
your C knowledge.
?simil("The Yummy and Tasty Waffle Corporation", "Yummy & Tasty Waffle, Incorporated")
=0.75

-Tom.

<snips>
Jul 3 '08 #11
Salad, that's a nice function. What about a combination of this
function for producing 'short names', combined with a join / union to
a table with the master list of these names, and using the SIMIL() score
as a threshold for deciding if the two entities are actually the same?
It should also be possible to then just have a version of the query
that shows entities that are below the threshold and need manual
attention.

Place the results of both queries into 'staging areas' if necessary,
so that they can be reviewed by human eyes, then when accepted /
approved append / update them as necessary.

Just a passing thought. No query like the one the OP needs will ever
be 100% reliable, it seems; however, by stepping through the problem
and letting the 'critical' decisions remain in human hands you can at
least minimise the risks as much as possible.
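A rough sketch of that combination: normalise to a short name first, then score the short names and split the results into auto-accepted matches and a manual-review queue. The 0.8 cutoff and the difflib scorer here are invented stand-ins for SIMIL(), chosen only to illustrate the shape of the approach:

```python
# Short-name normalisation plus a similarity cutoff: matches at or above
# the threshold are auto-accepted; the rest go to a manual-review queue.
from difflib import SequenceMatcher

def short_name(name: str) -> str:
    return "".join(c for c in name.upper() if c.isalnum())

def match(candidate: str, master: str, threshold: float = 0.8):
    score = SequenceMatcher(None, short_name(candidate),
                            short_name(master)).ratio()
    verdict = "auto-accept" if score >= threshold else "manual review"
    return verdict, round(score, 2)

print(match("Yummy Tasty Waffle Inc", "Yummy & Tasty Waffle, Incorporated"))
print(match("Acme Explosives", "Yummy & Tasty Waffle, Incorporated"))
```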

Cheers

The Frog
Jul 3 '08 #12
The Frog wrote:
<snips>
Yes, I'd most likely want to update the table with a shortname field.
Simply to speed things up. I have a nice Soundex function I wrote for
my own purposes. This Simil() function, mentioned by Tom, is most
likely a more advanced derivative of Soundex. I think adding a Soundex
field and a field for Simil() would be beneficial as well.
<snips>
least minimise the risks as much as possible.
I think I'd cry if someone said "Salad, I need you to find me all the
duplicates in this 700,000 row table!" I might muster a squeak of "Yes,
Boss!" or morph "Yes, Boss" into Simils() of 4-letter dirty words
muttered sotto voce, or perhaps start handing out my resume while working
on this task of drudgery.
Cheers

The Frog
Jul 3 '08 #13
On Jul 3, 3:11 am, The Frog <Mr.Frog.to....@googlemail.com> wrote:
No query like the one the OP needs will ever
be 100% reliable it seems,
Why?
Jul 3 '08 #14
I forgot to mention (because I didn't know :-)) that it seems one can
create and use Regular Expressions in T-SQL with

sp_OACreate

Creates an instance of the OLE object on an instance of Microsoft® SQL
Server™.
Syntax

sp_OACreate progid, | clsid,
objecttoken OUTPUT
[ , context ]

just as one can create a Regular Expression in VBA with late binding.

This may be an example of the best thing about contributing to a usenet
group; we (Me) can learn things.

I'll try to experiment with this a bit over the next couple of days.

Tom van Stiphout <no*************@cox.net> wrote in
news:42********************************@4ax.com:

<snips>
Jul 3 '08 #15
Tom van Stiphout <no*************@cox.net> wrote in
news:vs********************************@4ax.com:
The algorithm is in the public domain. Google for it. And brush up
on your C knowledge.
I have a VBA version of Simil. I don't recall where I got it.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Jul 3 '08 #16
A first try:

ALTER FUNCTION [dbo].[RemoveCommonWords]
(
    @varInput varchar(8000),
    @varCommonWords varchar(255),
    @varReplaceWith varchar(255)
)
RETURNS varchar(8000)
AS
BEGIN
    Declare @regExp int
    Declare @retVal int
    Declare @varNoCommonWords varchar(8000)

    Execute @retVal = sp_OACreate 'VBScript.RegExp', @regExp OUT
    Execute @retVal = sp_OASetProperty @regExp, 'Global', 1
    Execute @retVal = sp_OASetProperty @regExp, 'Pattern', @varCommonWords
    Execute @retVal = sp_OAMethod @regExp, 'Replace', @varNoCommonWords OUT,
        @varInput, @varReplaceWith
    Execute @retVal = sp_OADestroy @regExp

    Return @varNoCommonWords
END

Needs error handling ... tomorrow ...
ARGOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOS ... now

Actually, I suppose this will replace anything with anything (hence it needs
to be renamed), but its advantage is that it takes Regular Expression
syntax, as in:

SELECT dbo.RemoveCommonWords(
    'I probably would not want to sort the query with 700K names on this calculated columnThebes and Husband',
    '\W|\s|(\b)(The|And|Or|Corp|Corporation|Inc|St)(\b)',
    '')

lyle fairfield <ly******@yah00.ca> wrote in
news:Xn************************@216.221.81.119:
<snips>
Jul 3 '08 #17
OK, here is a quick test on the Simil() function done in VBA as
follows:

Debug.Print "Start time: " & Now()
For i = 1 To 700000
    score = Simil("The Yummy and Tasty Waffle Corporation", _
                  "TASTYWAFFLE")
Next
Debug.Print "Finish time: " & Now()
Debug.Print score

and the results are:
Start time: 4/07/2008 9:19:55 AM
Finish time: 4/07/2008 9:21:13 AM
0.448979591836735

I would love to run a test with this using Salad's ShortName function
and see if the score is higher. Unfortunately I am working in A97 at
the moment and don't have the option to work on a newer version to test
this (no Split function). Any takers for the test? I posted a link to
the VBA implementation of the Simil() function as the fifth post in
this thread (2nd July 08).

This was done on a Pentium M 1.2Ghz laptop with nothing else running
at the time (or not that I am aware of anyway).

To answer Lyle's "Why?": the difference between probability and
certainty. While ambiguity exists you can only ever assign a
probability that one thing is another (or equal to another), and
as such there is always a margin of error (a chance they are not the
same). As a result you cannot guarantee 100% effectiveness for any
query of this type where a probabilistic determination is involved.
The best you can do is manage / minimise the risks by setting your P
cutoff threshold high enough for the result of the query to be
considered significant enough to be trusted.

Cheers

The Frog
Jul 4 '08 #18
And the winner is.......RegExp!

Interesting to know. Thanks for running the test, Lyle. I just don't
have access to the right gear right now.

Cheers

The Frog
Jul 7 '08 #19
