Bytes | Software Development & Data Engineering Community
Refactor a buffered class...

Hello,

i'm looking for the behaviour below and i wrote a piece of code which works,
but it looks odd to me. can someone help me refactor it?

i would like to walk across a list of items in series of N (N=3 below).
each sequence has an explicit end mark (here it is '.'), may be any length,
and is composed of words.

for: s = "this . is a . test to . check if it . works . well . it looks . like ."
the output should be (if grouping by 3) like:

=this .
=this . is a .
=this . is a . test to .
=is a . test to . check if it .
=test to . check if it . works .
=check if it . works . well .
=works . well . it looks .
=well . it looks . like .

my piece of code :

import sys

class MyBuffer:
    def __init__(self):
        self.acc = []
        self.sentries = [0, ]
    def append(self, item):
        self.acc.append(item)
    def addSentry(self):
        self.sentries.append(len(self.acc))
        print >>sys.stderr, "\t", self.sentries
    def checkSentry(self, size, keepFirst):
        n = len(self.sentries) - 1
        if keepFirst and n < size:
            return self.acc
        if n % size == 0:
            result = self.acc
            first = self.sentries[1]
            self.acc = self.acc[first:]
            self.sentries = [x - first for x in self.sentries]
            self.sentries.pop(0)
            return result

s = "this . is a . test to . check if it . works . well . it looks . like ."
l = s.split()
print l

mb = MyBuffer()
n = 0
for x in l:
    mb.append(x)
    if x == '.':
        # end of something
        print "+", n
        n += 1
        mb.addSentry()
        current = mb.checkSentry(3, True) # GROUPING BY 3
        if current:
            print "=>", current
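A minimal sketch of the same grouping behaviour, written as a modern Python 3 generator (the names `group_chunks` and `window` are illustrative, not from the code above):

```python
def group_chunks(words, n=3, sentry="."):
    """Yield a growing, then sliding, window of n sentry-terminated chunks."""
    buf = []        # flat list of words seen so far
    sentries = 0    # number of sentry markers currently counted
    for word in words:
        buf.append(word)
        if word == sentry:
            sentries += 1
            if sentries > n:
                # drop the oldest chunk: everything up to the first sentry
                del buf[:buf.index(sentry) + 1]
            yield " ".join(buf)

s = "this . is a . test to . check if it . works . well . it looks . like ."
for window in group_chunks(s.split()):
    print("=" + window)
```

Run as-is, this prints the same eight "=..." lines as the expected output above.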

Sep 6 '06 #1
lh*****@yahoo.fr wrote:
> [full quote of the original post snipped]
If you just need to 'walk across a list of items', then your buffer class and
helper function seem unnecessarily complex. A generator would do the trick,
something like:
>>> def chunker(s, chunk_size=3, sentry="."):
...     buffer = []
...     sentry_count = 0
...     for item in s:
...         buffer.append(item)
...         if item == sentry:
...             sentry_count += 1
...             if sentry_count > chunk_size:
...                 del buffer[:buffer.index(sentry)+1]
...             yield buffer
...

>>> s = "this . is a . test to . check if it . works . well . it looks. like ."
>>> for p in chunker(s.split()): print " ".join(p)
...
this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks. like .
>>>
HTH
Michael

Sep 6 '06 #2

Michael Spencer a écrit :
If you just need to 'walk across a list of items', then your buffer class and
helper function seem unnecessarily complex. A generator would do the trick,
something like:
actually, for the example i have used only one sentry condition, but they
are more numerous and complex; also i need to work on a huge amount of
data (each word is a line with many features read from a file)

but generators are interesting stuff that i'm going to look closer.

thanks.

Sep 6 '06 #3

Here is another version,

import sys

class ChunkeredBuffer:
    def __init__(self):
        self.buffer = []
        self.sentries = []
    def append(self, item):
        self.buffer.append(item)
    def chunk(self, chunkSize, keepFirst = False):
        self.sentries.append(len(self.buffer))
        forget = self.sentries[:-chunkSize]
        if not keepFirst and len(self.sentries) < chunkSize:
            return
        if forget != []:
            last = forget[-1]
            self.buffer = self.buffer[last:]
            self.sentries = [x - last for x in self.sentries[1:]]
        print >>sys.stderr, self.sentries, len(self.sentries), forget
        return self.buffer

but i was wondering how i could add the last items if needed:

it looks . like .
like .

to the previous:

this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks like .

to have:

this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks like .
it looks . like .
like .

Sep 6 '06 #4

oops
to have:

this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks like .
well . it looks like .
it looks like .

Sep 6 '06 #5
lh*****@yahoo.fr wrote:
actually, for the example i have used only one sentry condition, but they
are more numerous and complex; also i need to work on a huge amount of
data (each word is a line with many features read from a file)
An open (text) file is a line-based iterator that can be fed directly to
'chunker'. As for different sentry conditions, I imagine they can be coded in
either model. How much is a 'huge amount' of data?
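To illustrate that point with a hedged Python 3 sketch (the END marker, the io.StringIO stand-in for an open file, and the chunk size are all invented for the example): any line iterator can drive the same generator, because iterating a file yields its lines one by one.

```python
import io

def chunker(lines, chunk_size=3, sentry="END\n"):
    # same generator shape as above, but items are lines and the sentry is a line
    buffer = []
    sentry_count = 0
    for item in lines:
        buffer.append(item)
        if item == sentry:
            sentry_count += 1
            if sentry_count > chunk_size:
                del buffer[:buffer.index(sentry) + 1]
            yield list(buffer)  # copy, since buffer is mutated in place

# io.StringIO stands in for open("data.txt"); each record ends in an END line
f = io.StringIO("a\nb\nEND\nc\nEND\nd\nEND\ne\nEND\n")
for group in chunker(f, chunk_size=2):
    print(group)
```

The same call works unchanged with `open(path)` in place of the StringIO object.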
oops
to have:

this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks like .
well . it looks like .
it looks like .
Here's a small update to the generator that allows optional handling of the head
and the tail:

def chunker(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
    buffer = []
    sentry_count = 0

    for item in s:
        buffer.append(item)
        if item == sentry:
            sentry_count += 1
            if sentry_count < chunk_size:
                if keep_first:
                    yield buffer
            else:
                yield buffer
                del buffer[:buffer.index(sentry)+1]

    if keep_last:
        while buffer:
            yield buffer
            del buffer[:buffer.index(sentry)+1]

>>> for p in chunker(s.split(), keep_first=True, keep_last=True): print " ".join(p)
...
this .
this . is a .
this . is a . test to .
is a . test to . check if it .
test to . check if it . works .
check if it . works . well .
works . well . it looks like .
well . it looks like .
it looks like .
>>>
Sep 6 '06 #6
Michael Spencer wrote:
Here's a small update to the generator that allows optional handling of the head
and the tail:

def chunker(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
    buffer=[]
    ...

And here's a (probably) more efficient version, using a deque as a
buffer:

from itertools import islice
from collections import deque

def chunker(seq, sentry='.', chunk_size=3, keep_first=False,
            keep_last=False):
    def format_chunks(chunks):
        return [' '.join(chunk) for chunk in chunks]
    iterchunks = itersplit(seq, sentry)
    buf = deque()
    for chunk in islice(iterchunks, chunk_size-1):
        buf.append(chunk)
        if keep_first:
            yield format_chunks(buf)
    for chunk in iterchunks:
        buf.append(chunk)
        yield format_chunks(buf)
        buf.popleft()
    if keep_last:
        while buf:
            yield format_chunks(buf)
            buf.popleft()

def itersplit(seq, sentry='.'):
    # split the iterable `seq` on each `sentry`
    buf = []
    for x in seq:
        if x != sentry:
            buf.append(x)
        else:
            yield buf
            buf = []
    if buf:
        yield buf

s = " this . is a . test to . check if it . works . well . it looks . like ."
for p in chunker(s.split(), keep_last=True, keep_first=True):
    print p
George
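As a general note on the deque choice (this is background, not a claim about the benchmark that follows): `collections.deque.popleft()` removes from the left end in O(1), while deleting from the front of a list shifts every remaining element and costs O(n). A tiny Python 3 illustration:

```python
from collections import deque

buf = deque(['chunk1', 'chunk2', 'chunk3'])
buf.popleft()      # O(1): drop the oldest chunk without shifting the rest
print(buf)         # deque(['chunk2', 'chunk3'])

lst = ['chunk1', 'chunk2', 'chunk3']
del lst[:1]        # O(n): the remaining items are all shifted down
print(lst)         # ['chunk2', 'chunk3']
```

For small windows the constant factors dominate, which is why the two approaches are benchmarked below rather than decided by asymptotics alone.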

Sep 7 '06 #7
George Sakkis wrote:
> Michael Spencer wrote:
>> Here's a small update to the generator that allows optional handling of
>> the head and the tail:
>>
>> def chunker(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
>>     buffer=[]
>> ...
>
> And here's a (probably) more efficient version, using a deque as a
> buffer:

Perhaps the deque-based solution is more efficient under some conditions, but
it's significantly slower for all the cases I tested:

Here are some typical results:

Using George's deque buffer:
>>> time_chunkers(chunkerGS, groups=1000, words_per_group=1000, chunk_size=300)
'get_chunks(...) 30 iterations, 16.70msec per call'
>>> time_chunkers(chunkerGS, groups=1000, words_per_group=1000, chunk_size=30)
'get_chunks(...) 35 iterations, 14.56msec per call'
>>> time_chunkers(chunkerGS, groups=1000, words_per_group=1000, chunk_size=3)
'get_chunks(...) 35 iterations, 14.41msec per call'

Using the list buffer:
>>> time_chunkers(chunker, groups=1000, words_per_group=1000, chunk_size=300)
'get_chunks(...) 85 iterations, 5.88msec per call'
>>> time_chunkers(chunker, groups=1000, words_per_group=1000, chunk_size=30)
'get_chunks(...) 85 iterations, 5.89msec per call'
>>> time_chunkers(chunker, groups=1000, words_per_group=1000, chunk_size=3)
'get_chunks(...) 83 iterations, 6.03msec per call'
>>>
Test functions follow:

def make_seq(groups = 1000, words_per_group = 3, word_length = 76, sentry = "."):
    """Make a sequence of test input for chunker
    >>> make_seq(groups = 5, words_per_group=5, word_length = 2, sentry="%")
    ['WW', 'WW', 'WW', 'WW', 'WW', '%', 'WW', 'WW', 'WW', 'WW', 'WW', '%',
     'WW', 'WW', 'WW', 'WW', 'WW', '%', 'WW', 'WW', 'WW', 'WW', 'WW', '%',
     'WW', 'WW', 'WW', 'WW', 'WW', '%']
    """
    word = "W"*word_length
    group = [word]*words_per_group+[sentry]
    return group*groups

def time_chunkers(chunk_func, groups = 1000, words_per_group=10, chunk_size=3):
    """Test harness for chunker functions"""
    seq = make_seq(groups)
    def get_chunks(chunk_func, seq):
        return list(chunk_func(seq))
    return timefunc(get_chunks, chunk_func, seq)

def _get_timer():
    import sys
    import time
    if sys.platform == "win32":
        return time.clock
    else:
        return time.time

def timefunc(func, *args, **kwds):
    timer = _get_timer()
    count, totaltime = 0, 0
    while totaltime < 0.5:
        t1 = timer()
        res = func(*args, **kwds)
        t2 = timer()
        totaltime += (t2-t1)
        count += 1
    if count > 1000:
        unit = "usec"
        timeper = totaltime * 1000000 / count
    else:
        unit = "msec"
        timeper = totaltime * 1000 / count
    return "%s(...) %s iterations, %.2f%s per call" % \
        (func.__name__, count, timeper, unit)

Sep 7 '06 #8
lh*****@yahoo.fr writes:
for: s = "this . is a . test to . check if it . works . well . it looks . like ."
the output should be (if grouping by 3) like:

=this .
=this . is a .
I don't understand, you mean you have all the items in advance?
Can't you do something like this? I got bleary trying to figure out
the question, so I'm sorry if I didn't grok it correctly.

def f(n, items):
    t = len(items)
    for i in xrange(-(n-1), t-n):
        print items[max(i,0):max(i+n,0)]

s = 'this . is a . test to . check if it . works . well . it looks . like .'
f(3, s.split('.'))
['this ']
['this ', ' is a ']
['this ', ' is a ', ' test to ']
[' is a ', ' test to ', ' check if it ']
[' test to ', ' check if it ', ' works ']
[' check if it ', ' works ', ' well ']
[' works ', ' well ', ' it looks ']
[' well ', ' it looks ', ' like ']
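The slicing idea above also works in Python 3 (`xrange` becomes `range`; the names below are mine). Note that `split('.')` leaves a trailing empty string, which the `t - n` upper bound quietly keeps out of every window:

```python
def windows(n, items):
    # yield slices of up to n consecutive items, growing in from the left edge
    t = len(items)
    for i in range(-(n - 1), t - n):
        yield items[max(i, 0):max(i + n, 0)]

s = 'this . is a . test to . check if it . works . well . it looks . like .'
for w in windows(3, s.split('.')):
    print(w)
```

This assumes, as the post notes, that all the items are available in advance; the generator solutions earlier in the thread avoid that requirement.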
Sep 7 '06 #9
Michael Spencer wrote:
> George Sakkis wrote:
>> Michael Spencer wrote:
>>> Here's a small update to the generator that allows optional handling of
>>> the head and the tail:
>>>
>>> def chunker(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
>>>     buffer=[]
>>> ...
>>
>> And here's a (probably) more efficient version, using a deque as a
>> buffer:
>
> Perhaps the deque-based solution is more efficient under some conditions, but
> it's significantly slower for all the cases I tested:
First off, if you're going to profile something, better use the
standard timeit module rather than a quick, dirty and most likely buggy
handmade function. From "Dive into python": "The most important thing
you need to know about optimizing Python code is that you shouldn't
write your own timing function.". As it turns out, none of chunk_size,
words_per_group and word_length are taken into account in your tests;
they all have their default values. By the way, word_length is
irrelevant since the input is already a sequence, not a big string to
be split.

Second, the output of the two functions is different, so you're not
comparing apples to apples:
>>> s = " this . is a . test to . check if it . works . well . it looks . like ."
>>> for c in chunkerMS(s.split(), keep_last=True, keep_first=True): print c
...
['this', '.']
['this', '.', 'is', 'a', '.']
['this', '.', 'is', 'a', '.', 'test', 'to', '.']
['is', 'a', '.', 'test', 'to', '.', 'check', 'if', 'it', '.']
['test', 'to', '.', 'check', 'if', 'it', '.', 'works', '.']
['check', 'if', 'it', '.', 'works', '.', 'well', '.']
['works', '.', 'well', '.', 'it', 'looks', '.']
['well', '.', 'it', 'looks', '.', 'like', '.']
['it', 'looks', '.', 'like', '.']
['like', '.']
>>> for c in chunkerGS(s.split(), keep_last=True, keep_first=True): print c
...
['this']
['this', 'is a']
['this', 'is a', 'test to']
['is a', 'test to', 'check if it']
['test to', 'check if it', 'works']
['check if it', 'works', 'well']
['works', 'well', 'it looks']
['well', 'it looks', 'like']
['it looks', 'like']
['like']

Third, and most important for the measured difference, is that the
performance hit in my function came from joining the words of each
group (['check', 'if', 'it'] -> 'check if it') every time it is
yielded. If the groups are left unjoined as in Michael's version, the
results are quite different:

* chunk_size=3
chunkerGS: 0.98 seconds
chunkerMS: 1.04 seconds
* chunk_size=30
chunkerGS: 0.98 seconds
chunkerMS: 1.17 seconds
* chunk_size=300
chunkerGS: 0.98 seconds
chunkerMS: 2.44 seconds

As expected, the deque solution is not affected by chunk_size. Here is
the full test:

# chunkers.py
from itertools import islice
from collections import deque

def chunkerMS(s, chunk_size=3, sentry=".", keep_first = False,
              keep_last = False):
    buffer = []
    sentry_count = 0
    for item in s:
        buffer.append(item)
        if item == sentry:
            sentry_count += 1
            if sentry_count < chunk_size:
                if keep_first:
                    yield buffer
            else:
                yield buffer
                del buffer[:buffer.index(sentry)+1]
    if keep_last:
        while buffer:
            yield buffer
            del buffer[:buffer.index(sentry)+1]

def chunkerGS(seq, sentry='.', chunk_size=3, keep_first=False,
              keep_last=False):
    buf = deque()
    iterchunks = itersplit(seq, sentry)
    for chunk in islice(iterchunks, chunk_size-1):
        buf.append(chunk)
        if keep_first:
            yield buf
    for chunk in iterchunks:
        buf.append(chunk)
        yield buf
        buf.popleft()
    if keep_last:
        while buf:
            yield buf
            buf.popleft()

def itersplit(seq, sentry='.'):
    # split the iterable `seq` on each `sentry`
    buf = []
    for x in seq:
        if x != sentry:
            buf.append(x)
        else:
            yield buf
            buf = []
    if buf:
        yield buf

def make_seq(groups=1000, words_per_group=3, sentry='.'):
    group = ['w']*words_per_group
    group.append(sentry)
    return group*groups

if __name__ == '__main__':
    import timeit
    for chunk_size in 3, 30, 300:
        print "* chunk_size=%d" % chunk_size
        for func in chunkerGS, chunkerMS:
            name = func.__name__
            t = timeit.Timer(
                "for p in %s(s, chunk_size=%d, keep_last=True,"
                "keep_first=True): pass" %
                (name, chunk_size),
                "from chunkers import make_seq,chunkerGS,chunkerMS;"
                "s=make_seq(groups=5000, words_per_group=500)")
            print "%s: %.2f seconds" % (name, min(t.repeat(3, 1)))

Sep 7 '06 #10
George Sakkis wrote:
> Michael Spencer wrote:
>> George Sakkis wrote:
>>> Michael Spencer wrote:
>>>> def chunker(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
>>>>     buffer=[]
>>>> ...
>>> And here's a (probably) more efficient version, using a deque as a
>>> buffer:
>> Perhaps the deque-based solution is more efficient under some conditions, but
>> it's significantly slower for all the cases I tested:
> As it turns out, none of chunk_size,
> words_per_group and word_length are taken into account in your tests;
> they all have their default values.
Hello George

Yep, you're right, the test was broken. chunkerGS beats chunkerMS handily, in
some cases, in particular for large chunk_size.

Second, the output of the two functions is different, so you're not
comparing apples to apples:
And, to be fair, neither meets the OP spec for joined output
Third, and most important for the measured difference, is that the
performance hit in my function came from joining the words of each
group (['check', 'if', 'it'] -> 'check if it') every time it is
yielded. If the groups are left unjoined as in Michael's version, the
results are quite different:

Second and Third are basically the same point i.e., join dominates the
comparison. But your function *needs* an extra join to get the OP's specified
output.

I think the two versions below each give the 'correct' output wrt to the OP's
single test case. I measure chunkerMS2 to be faster than chunkerGS2 across all
chunk sizes, but this is all about the joins.

I conclude that chunkerGS's deque beats chunkerMS's list for large
chunk_size (~>100). But for joined output, chunkerMS2 beats chunkerGS2
because it does less joining.
if you're going to profile something, better use the
standard timeit module
....
OT: I will when timeit grows a capability for testing live objects rather than
'small code snippets'. Requiring source code input and passing arguments by
string substitution makes it too painful for interactive work. The need to
specify the number of repeats is an additional annoyance.
Cheers

Michael
#Revised functions with joined output, per OP spec

def chunkerMS2(s, chunk_size=3, sentry=".", keep_first = False, keep_last = False):
    buffer = []
    sentry_count = 0

    for item in s:
        buffer.append(item)
        if item == sentry:
            sentry_count += 1
            if sentry_count < chunk_size:
                if keep_first:
                    yield " ".join(buffer)
            else:
                yield " ".join(buffer)
                del buffer[:buffer.index(sentry)+1]

    if keep_last:
        while buffer:
            yield " ".join(buffer)
            del buffer[:buffer.index(sentry)+1]

def chunkerGS2(seq, sentry='.', chunk_size=3, keep_first=False,
               keep_last=False):
    def format_chunks(chunks):
        return " . ".join(' '.join(chunk) for chunk in chunks) + " ."
    iterchunks = itersplit(seq, sentry)
    buf = deque()
    for chunk in islice(iterchunks, chunk_size-1):
        buf.append(chunk)
        if keep_first:
            yield format_chunks(buf)
    for chunk in iterchunks:
        buf.append(chunk)
        yield format_chunks(buf)
        buf.popleft()
    if keep_last:
        while buf:
            yield format_chunks(buf)
            buf.popleft()


Sep 7 '06 #11
Michael Spencer wrote:
> I think the two versions below each give the 'correct' output wrt to the OP's
> single test case. I measure chunkerMS2 to be faster than chunkerGS2 across all
> chunk sizes, but this is all about the joins.
>
> I conclude that chunkerGS's deque beats chunkerMS's list for large
> chunk_size (~>100). But for joined output, chunkerMS2 beats chunkerGS2
> because it does less joining.
Although I speculate that the OP is really concerned about the chunking
algorithm rather than an exact output format, chunkerGS2 can do better
even when the chunks must be joined. Joining each chunk can (and
should) be done only once, not every time the chunk is yielded.
chunkerGS3 outperforms chunkerMS2 even more than the original versions:

* chunk_size=3
chunkerGS3: 1.17 seconds
chunkerMS2: 1.56 seconds
* chunk_size=30
chunkerGS3: 1.26 seconds
chunkerMS2: 6.35 seconds
* chunk_size=300
chunkerGS3: 2.20 seconds
chunkerMS2: 54.51 seconds
def chunkerGS3(seq, sentry='.', chunk_size=3, keep_first=False,
               keep_last=False):
    iterchunks = itersplit(seq, sentry)
    buf = deque()
    join = ' '.join
    def append(chunk):
        chunk.append(sentry)
        buf.append(join(chunk))
    for chunk in islice(iterchunks, chunk_size-1):
        append(chunk)
        if keep_first:
            yield join(buf)
    for chunk in iterchunks:
        append(chunk)
        yield join(buf)
        buf.popleft()
    if keep_last:
        while buf:
            yield join(buf)
            buf.popleft()

if you're going to profile something, better use the
standard timeit module
...
OT: I will when timeit grows a capability for testing live objects rather than
'small code snippets'. Requiring source code input and passing arguments by
string substitution makes it too painful for interactive work. The need to
specify the number of repeats is an additional annoyance.
timeit is indeed somewhat cumbersome, but having a robust bug-free
timing function is worth the inconvenience IMO.

Best,
George
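One middle ground on the timeit complaint, for what it's worth: timeit.Timer also accepts a zero-argument callable (since Python 2.6), which avoids the source-string substitution entirely. A hedged Python 3 sketch with an invented stand-in workload:

```python
import timeit

def workload():
    # stand-in for list(chunker(seq)); any zero-argument callable works here
    return sum(range(1000))

t = timeit.Timer(workload)            # pass the live object, no code strings
best = min(t.repeat(repeat=3, number=100))
print("best of 3 runs of 100 calls: %.6f seconds" % best)
```

Arguments can be bound with a lambda or functools.partial, e.g. `timeit.Timer(lambda: list(chunkerGS(seq)))`, though the extra call adds a small constant overhead to each measurement.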

Sep 7 '06 #12
thanks a lot to all, it helped me to learn a lot!

(i finally use the generator trick, it is great...)

best regards.

Sep 14 '06 #13

This thread has been closed and replies have been disabled. Please start a new discussion.
