Bytes | Software Development & Data Engineering Community

Efficient Text File Copy

Hi everyone,

What I want to do is copy a simple plain text file to
another file, omitting duplicate items.

The way I thought of doing this involved copying all the items into
an array, then looping through that array looking for duplicates,
removing them, and then writing to another file.

This seems a very long and drawn out way of doing this to me, and I
also do not know what the initial size of the array will need to be.

Could anyone suggest an effective, efficient way of doing this?

Thanks
Mick
Nov 14 '05 #1

"Materialised" <ma**********@privacy.net> wrote in message
news:bv************@ID-220437.news.uni-berlin.de...
| Hi everyone,
|
| What I want to do is copy a simple plain text file to
| another file, omitting duplicate items.
|
| The way I thought of doing this involved copying all the items into
| an array, then looping through that array looking for duplicates,
| removing them, and then writing to another file.
|
| This seems a very long and drawn out way of doing this to me, and I
| also do not know what the initial size of the array will need to be.
|
| Could anyone suggest an effective, efficient way of doing this?

That depends, because there are a few different ways
that one could approach this. For example:
std::set<>, std::map<>, std::vector<>, std::unique()
are all available for you to come up with an algorithm.

Can you tell us what constitutes an 'item' in the file?

Can you show us a few lines of the file format,
and what the format of the new file should look like?

Cheers.
Chris Val
Nov 14 '05 #2
Chris ( Val ) wrote:
"Materialised" <ma**********@privacy.net> wrote in message
news:bv************@ID-220437.news.uni-berlin.de...
| Hi everyone,
|
| What I want to do is copy a simple plain text file to
| another file, omitting duplicate items.
|
| The way I thought of doing this involved copying all the items into
| an array, then looping through that array looking for duplicates,
| removing them, and then writing to another file.
|
| This seems a very long and drawn out way of doing this to me, and I
| also do not know what the initial size of the array will need to be.
|
| Could anyone suggest an effective, efficient way of doing this?

That depends, because there are a few different ways
that one could approach this. For example:
std::set<>, std::map<>, std::vector<>, std::unique()
are all available for you to come up with an algorithm.

Can you tell us what constitutes an 'item' in the file?

Can you show us a few lines of the file format,
and what the format of the new file should look like?

Cheers.
Chris Val


What he said. Also helpful would be information about the order of the
records: must it be preserved, and are they sorted?

Nov 14 '05 #3
Chris ( Val ) wrote:
"Materialised" <ma**********@privacy.net> wrote in message
news:bv************@ID-220437.news.uni-berlin.de...
| Hi everyone,
|
| What I want to do is copy a simple plain text file to
| another file, omitting duplicate items.
|
| The way I thought of doing this involved copying all the items into
| an array, then looping through that array looking for duplicates,
| removing them, and then writing to another file.
|
| This seems a very long and drawn out way of doing this to me, and I
| also do not know what the initial size of the array will need to be.
|
| Could anyone suggest an effective, efficient way of doing this?

That depends, because there are a few different ways
that one could approach this. For example:
std::set<>, std::map<>, std::vector<>, std::unique()
are all available for you to come up with an algorithm.

Can you tell us what constitutes an 'item' in the file?

Can you show us a few lines of the file format,
and what the format of the new file should look like?

Cheers.
Chris Val

The file is simply a list of items; it could be anything. Let's just
say for this example it's a shopping list, in the format

eggs
milk
bread
carrots
Jam
...
.....
...... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.
Nov 14 '05 #4
"Materialised" <ma**********@privacy.net> wrote:
The file is simply a list of items; it could be anything. Let's just
say for this example it's a shopping list, in the format

eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


If you're allowed to make a system() call to sort, then you probably
can also use the system() command "uniq" to do the actual work of
eliminating duplicates.

If you have to do the work of uniq yourself, have a look at the free
source code for a version of the uniq program for ideas.

On Windows/DOS/Linux/Unix type systems:
system("sort < input.file | uniq > output.file");

Of course, you must have some version of uniq installed...

C:\>uniq --version
uniq (textutils) 2.0.21
Written by Richard Stallman and David MacKenzie.

Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

--
Simon.
Nov 14 '05 #5

"Materialised" <ma**********@privacy.net> wrote in message
news:bv************@ID-220437.news.uni-berlin.de...
| Chris ( Val ) wrote:
| > "Materialised" <ma**********@privacy.net> wrote in message
| > news:bv************@ID-220437.news.uni-berlin.de...

[snip]

| > Can you tell us what constitutes an 'item' in the file?
| >
| > Can you show us a few lines of the file format,
| > and what the format of the new file should look like?
| >
| > Cheers.
| > Chris Val
| >
| >
| The file is simply a list of items; it could be anything. Let's just
| say for this example it's a shopping list, in the format
|
| eggs
| milk
| bread
| carrots
| Jam
| ..
| ....
| ..... etc
|
| There is also no requirement on the order of items, as the program will
| make a system() call to sort at the end.

# include <iostream>
# include <fstream>
# include <string>
# include <vector>
# include <algorithm>
# include <iterator>
# include <cstdlib>

int main()
{
  std::ifstream InFile( "Copies.txt" );
  std::ofstream OutFile( "NewFile.txt" );

  if( !InFile || !OutFile )
    return EXIT_FAILURE;

  std::vector<std::string> V;

  std::copy( std::istream_iterator<std::string>( InFile ),
             std::istream_iterator<std::string>(),
             std::back_inserter( V ) );

  std::sort( V.begin(), V.end() );

  std::unique_copy( V.begin(), V.end(),
                    std::ostream_iterator<std::string>( OutFile, "\n" ) );

  return 0;
}

-- Copies.txt --
eggs
milk
bread
carrots
Jam
eggs
milk
bread
carrots
Jam
eggs
milk
bread
carrots
Jam
eggs
milk
bread
carrots
Jam

-- NewFile.txt --
Jam
bread
carrots
eggs
milk

Cheers.
Chris Val


Nov 14 '05 #6
Materialised wrote:
....

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


- on unix systems "sort -u" removes dupes. In fact, if you want the
most efficient way to do this i.e. sorting and removing dupes, it's best
to do it at the same time since you need to compare each entry to an
adjacent entry (hence easily identifying dupes).

Nov 14 '05 #7
On Sat, 31 Jan 2004 01:36:59 +0000, Materialised
<ma**********@privacy.net> wrote:
Chris ( Val ) wrote:
"Materialised" <ma**********@privacy.net> wrote in message
news:bv************@ID-220437.news.uni-berlin.de...
| Hi everyone,
|
| What I want to do is copy a simple plain text file to
| another file, omitting duplicate items.
|
| The way I thought of doing this involved copying all the items into
| an array, then looping through that array looking for duplicates,
| removing them, and then writing to another file.
|
| This seems a very long and drawn out way of doing this to me, and I
| also do not know what the initial size of the array will need to be.
|
| Could anyone suggest an effective, efficient way of doing this?

That depends, because there are a few different ways
that one could approach this. For example:
std::set<>, std::map<>, std::vector<>, std::unique()
are all available for you to come up with an algorithm.

Can you tell us what constitutes an 'item' in the file?

Can you show us a few lines of the file format,
and what the format of the new file should look like?

The file is simply a list of items; it could be anything. Let's just
say for this example it's a shopping list, in the format

eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


I think a more efficient method, if you expect lots of duplicates, and
have enough memory to hold all items, would be:

Insert each word in a binary tree (if you require sorted output) or a
hash table (if you do not). When you find an item that already exists,
just ignore it.

This requires a single pass over the input data and a single output
pass.

- Sev

Nov 14 '05 #8
On 30 Jan 2004 21:14:09 EST, Gianni Mariani <gi*******@mariani.ws>
wrote:
Materialised wrote:
...

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


- on unix systems "sort -u" removes dupes. In fact, if you want the
most efficient way to do this i.e. sorting and removing dupes, it's best
to do it at the same time since you need to compare each entry to an
adjacent entry (hence easily identifying dupes).


If you're gonna assume unix, it is even better to end the sort command
with
sort-command -o outputfile
instead of
sort-command > outputfile

to avoid a bunch of extra overhead. I think this option was provided
because the sort command has to detect EOF on its input before it can
even begin to write output (for obvious reasons), so the usual
concurrency benefits of piping are lost (putting a sort command into a
pipeline is more of a notational convenience than an actual pipeline).
With -o, sort itself writes the final output, so the disk space
required is only once again the size of sort's input (for the output
file), rather than the twice again it would be (sort's temporary
storage plus the actual output file) using regular I/O redirection.

BTW, I'm not sure if the OP wanted a C or a C++ solution; Since he
cross-posted to both the C and Learn C/C++ groups, I'd guess C. That
would make using system() the path of least resistance. If C++ is an
option, I'd think any reasonable STL-based solution (such as using
std::set) would run more efficiently than any way of using system();
plus, using std::set would allow for the creation of a custom
comparison function that could be used to tweak the sort criteria.
-leor

Leor Zolman
BD Software
le**@bdsoft.com
www.bdsoft.com -- On-Site Training in C/C++, Java, Perl & Unix
C++ users: Download BD Software's free STL Error Message
Decryptor at www.bdsoft.com/tools/stlfilt.html
Nov 14 '05 #9
Materialised wrote:
The file is simply a list of items; it could be anything. Let's just
say for this example it's a shopping list, in the format

eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


The UNIX [GNU] sort command has the option -u [--unique] to do this.

Nov 14 '05 #10

"Chris ( Val )" <ch******@bigpond.com.au> wrote in message
news:bv************@ID-110726.news.uni-berlin.de...

[snip]

My apologies to all - I didn't realise this
was cross posted.

Cheers.
Chris Val
Nov 14 '05 #11
Materialised wrote:
Hi everyone,

What I want to do is copy a simple plain text file to
another file, omitting duplicate items.

The way I thought of doing this involved copying all the items into
an array, then looping through that array looking for duplicates,
removing them, and then writing to another file.

This seems a very long and drawn out way of doing this to me, and I
also do not know what the initial size of the array will need to be.

Could anyone suggest an effective, efficient way of doing this?


Since you say elsethread that you do not require the output to retain the
ordering given by the input, I suggest a hash table. Make sure you can
handle collisions. K&R2 provides a rudimentary hash table example on p145.

For a hashing algorithm, either use K&R's (p144!) or Chris Torek's (with the
33 multiplier rather than 31).

Pseudocode:

if input file opened okay
while lines remain
read a line
hash it
make sure the hash value is within bucket range
if line does not already exist in that bucket
add line to bucket
endif
endwhile
close input file
endif
if output file opened okay
for each bucket
for each line in this bucket
write line to output file
endfor
endfor
close output file
endif

Turning this into C code is left as an exercise.
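One possible fleshing-out of that pseudocode, as a sketch only: the bucket count, the chained buckets, and the multiply-by-31 hash are all my assumptions here, and malloc() failure handling is omitted for brevity.

```c
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 1024

/* Chained hash table entry: one node per distinct line. */
struct entry {
    char *line;
    struct entry *next;
};

static struct entry *bucket[NBUCKETS];

/* Multiplicative string hash (K&R style), reduced to the bucket range. */
static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* Return 1 if line was already in the table, 0 if it was added now. */
static int seen(const char *line)
{
    unsigned h = hash(line);
    struct entry *e;

    for (e = bucket[h]; e != NULL; e = e->next)
        if (strcmp(e->line, line) == 0)
            return 1;               /* duplicate */

    e = malloc(sizeof *e);          /* error handling omitted */
    e->line = malloc(strlen(line) + 1);
    strcpy(e->line, line);
    e->next = bucket[h];            /* push onto this bucket's chain */
    bucket[h] = e;
    return 0;
}
```

Copying each input line to the output only when seen() returns 0 gives the duplicate-free file; that variant preserves first-occurrence order, which is harmless here since the output gets sorted afterwards anyway.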

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 14 '05 #12

"Materialised" <ma**********@privacy.net> wrote in message
eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.

Sounds like homework. The question is: what is your tutor looking for?
Is he looking for basic C skills (in which case he has chosen a bad
question), knowledge of C++ libraries, knowledge of algorithms, or
practical skills in knocking up programs quickly?

If the first, then read the lines into a huge array, then step through it
looking for duplicates. This algorithm is O(N^2), which is why it is a bad
choice for teaching basic C.

The trick is to store the items in some structure where they can be
accessed easily. This is either a hash table or a balanced search tree
(e.g. a red-black tree). The C++ STL provides a balanced tree
(std::set), so if the tutor is looking for effective use of the STL
you should use this.

If the problem is knowledge of the algorithm, then the STL is
cheating. A hash table is probably the easiest structure to implement
yourself.

If the problem is to come up with a solution, then you want to take the
system tools route. This is the best answer, but cheating if the object is
to understand how to program rather than come up with the goods quickly.
Nov 14 '05 #13
Richard Heathfield wrote:
.... snip ...
For a hashing algorithm, either use K&R's (p144!) or Chris Torek's
(with the 33 multiplier rather than 31).


Why do you recommend this? I have used 31 and 37 in the past.
This is to hash strings consisting (in the main) of ASCII chars.
Frankly I took Kernighan & Pike's word for it, also noting that 31
and 37 are prime, while 33 is not.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Nov 14 '05 #14
CBFalconer wrote:
Richard Heathfield wrote:
... snip ...

For a hashing algorithm, either use K&R's (p144!) or Chris Torek's
(with the 33 multiplier rather than 31).


Why do you recommend this?


Why do I recommend Chris Torek's hash? Because Chris says it works well and
I had no reason to disbelieve him; in my experience, he's right - it does
work well.
I have used 31
So do K&R (and, as you can see above, I recommended that, too).
and 37 in the past.
This is to hash strings consisting (in the main) of ASCII chars.
Frankly I took Kernighan & Pike's word for it, also noting that 31
and 37 are prime, while 33 is not.


Hmmm.

Kernighan and Pike vs Tisdale? K&P.
Kernighan and Pike vs Malbrain? K&P.

But Kernighan and Pike vs Torek is less clear-cut. :-)

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 14 '05 #15
Malcolm wrote:
"Materialised" <ma**********@privacy.net> wrote in message
eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.

Sounds like homework.

Incorrect
The question is, what is your tutor looking for. Is he

What tutor? Who mentioned a tutor, there you go assuming again.
Nov 14 '05 #16
Materialised wrote:
Malcolm wrote:
"Materialised" <ma**********@privacy.net> wrote in message
eggs
milk
bread
carrots
Jam
..
....
..... etc

There is also no requirement on the order of items, as the program will
make a system() call to sort at the end.


Sounds like homework.

Incorrect


Actually, it *does* sound like homework. It doesn't have to /be/ homework to
/sound/ like homework.
The question is, what is your tutor looking for. Is he

What tutor? Who mentioned a tutor, there you go assuming again.


If you post in alt.comp.lang.learn.c-c++, it's not a terribly unreasonable
assumption, even if it turns out to be an inaccurate one.

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Nov 14 '05 #17
>> Richard Heathfield wrote:
For a hashing algorithm, either use K&R's (p144!) or Chris Torek's
(with the 33 multiplier rather than 31).

Note, it is not really "my" hash (I got it from James Gosling
many years ago -- this is the same James Gosling who is behind
Java, incidentally, but he was at CMU at the time). It uses the
remarkably simple recurrence:

unsigned int h;
...
h = 0;
for (all bytes in the string)
h = h * 33 + this_byte;

or more concretely, for a C-style string pointed to by "cp":

for (h = 0; (c = *cp++) != 0;)
h = h * 33 + c;
CBFalconer wrote:
Why do you recommend this?

In article <news:bv**********@hercules.btinternet.com>
Richard Heathfield <bi****@eton.powernet.co.uk> writes:
Why do I recommend Chris Torek's hash? Because Chris says it
works well and I had no reason to disbelieve him; in my experience,
he's right - it does work well.


I think he meant "why 33" (instead of a prime number, like the
"more obvious" 31 and 37). I wonder the same thing. Gosling used
33, and I tried 31 and 37, and when others were doing larger-scale
experiments (for what eventually became the "Berkeley DB" library)
I suggested trying even more variations. Different datasets had
different outcomes, but on average 33 worked better than 31.

Note that this simple hash is less effective than either a CRC or
a strong cryptographic hash (both of those distribute the input
bits much better), but this one is very fast to compute. As it
turns out, for in-memory hashing, the distribution of the hash is
often less critical than the time required to compute it.
Multiplication by 31 and 33 are both quite quick on most CPUs, and
the additional bucket-search effort that occurs after this "less
effective" hash tends to use up less than the "saved time" as
compared to a more effective hash function.
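For reference, the recurrence above filled out into a self-contained function; the cast through unsigned char is my addition, to keep the arithmetic well defined on platforms where plain char is signed:

```c
/* Gosling's multiply-by-33 string hash, as described above. */
unsigned int hash33(const char *cp)
{
    unsigned int h;
    int c;

    for (h = 0; (c = (unsigned char)*cp++) != 0;)
        h = h * 33 + c;
    return h;
}
```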
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #18
Chris Torek wrote:
Richard Heathfield <bi****@eton.powernet.co.uk> writes:
CBFalconer wrote:
Richard Heathfield wrote:
For a hashing algorithm, either use K&R's (p144!) or Chris
Torek's (with the 33 multiplier rather than 31).
Note, it is not really "my" hash (I got it from James Gosling
many years ago -- this is the same James Gosling who is behind
Java, incidentally, but he was at CMU at the time). It uses the
remarkably simple recurrence:

unsigned int h;
...
h = 0;
for (all bytes in the string)
h = h * 33 + this_byte;

or more concretely, for a C-style string pointed to by "cp":

for (h = 0; (c = *cp++) != 0;)
h = h * 33 + c;
Why do you recommend this?

Why do I recommend Chris Torek's hash? Because Chris says it
works well and I had no reason to disbelieve him; in my
experience, he's right - it does work well.


I think he meant "why 33" (instead of a prime number, like the
"more obvious" 31 and 37). I wonder the same thing. Gosling used
33, and I tried 31 and 37, and when others were doing larger-scale
experiments (for what eventually became the "Berkeley DB" library)
I suggested trying even more variations. Different datasets had
different outcomes, but on average 33 worked better than 31.

Note that this simple hash is less effective than either a CRC or
a strong cryptographic hash (both of those distribute the input
bits much better), but this one is very fast to compute. As it
turns out, for in-memory hashing, the distribution of the hash is
often less critical than the time required to compute it.
Multiplication by 31 and 33 are both quite quick on most CPUs, and
the additional bucket-search effort that occurs after this "less
effective" hash tends to use up less than the "saved time" as
compared to a more effective hash function.


All of 31, 33, and 37 make intuitive sense to me. 31 and 33 will
obviously be faster on machines without multiplication.

I have the obvious testbed in hashlib, which was originally built
to test some other hash functions, and then expanded. I shall
have to build a driver to run through those and list efficiencies
Real Soon Now. I could also implement the multiplications with
shift and add, to make a total of 5 useful hashes. I don't expect
the differences to be earth shaking. The present hashlib
verification suite includes the awful hash function 1. That's one.
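The shift-and-add forms mentioned above are easy to write down, since 33 = 32 + 1 and 31 = 32 - 1; a quick sketch (the function names are mine):

```c
/* h * 33 == (h << 5) + h, so the multiply becomes a shift and an add. */
unsigned int hash33_shift(const char *cp)
{
    unsigned int h = 0;
    int c;

    while ((c = (unsigned char)*cp++) != 0)
        h = (h << 5) + h + c;       /* h * 33 + c */
    return h;
}

/* h * 31 == (h << 5) - h, a shift and a subtract. */
unsigned int hash31_shift(const char *cp)
{
    unsigned int h = 0;
    int c;

    while ((c = (unsigned char)*cp++) != 0)
        h = (h << 5) - h + c;       /* h * 31 + c */
    return h;
}
```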

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #19
CBFalconer <cb********@yahoo.com> wrote in message news:<40***************@yahoo.com>...
Why do I recommend Chris Torek's hash? Because Chris says it
works well and I had no reason to disbelieve him; in my
experience, he's right - it does work well.


While the function works well for strings, it doesn't work well
for numbers (for example, indexing on record numbers), in my
experience. Berkeley DB ended up using this:

/*
 * Fowler/Noll/Vo hash
 *
 * The basis of the hash algorithm was taken from an idea sent by email to
 * the IEEE Posix P1003.2 mailing list from Phong Vo (kp*@research.att.com)
 * and Glenn Fowler (gs*@research.att.com). Landon Curt Noll
 * (ch****@toad.com) later improved on their algorithm.
 *
 * The magic is in the interesting relationship between the special prime
 * 16777619 (2^24 + 403) and 2^32 and 2^8.
 *
 * This hash produces the fewest collisions of any function that we've seen
 * so far, and works well on both numbers and strings.
 *
 * PUBLIC: u_int32_t __ham_func5 __P((DB *, const void *, u_int32_t));
 */
u_int32_t
__ham_func5(dbp, key, len)
	DB *dbp;
	const void *key;
	u_int32_t len;
{
	const u_int8_t *k, *e;
	u_int32_t h;

	if (dbp != NULL)
		COMPQUIET(dbp, NULL);

	k = key;
	e = k + len;
	for (h = 0; k < e; ++k) {
		h *= 16777619;
		h ^= *k;
	}
	return (h);
}

There are a variety of hash functions Berkeley DB has tried
at various times -- the ones we've liked best we've kept:

http://www.opensource.apple.com/darw...sh/hash_func.c

Regards,
--keith

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Keith Bostic bo****@sleepycat.com
Sleepycat Software Inc. keithbosticim (ymsgid)
118 Tower Rd. +1-781-259-3139
Lincoln, MA 01773 http://www.sleepycat.com
Nov 14 '05 #20
