
file access - design thoughts.

Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?

Jay

May 20 '07 #1
22 1289
J.*******@gmail.com wrote:
Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?
If portability isn't an issue, you will probably be better off using your
operating system's means of memory-mapping the file and simply using the
result as an array.
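
A minimal sketch of the memory-mapping approach Ian suggests, assuming a
POSIX system (mmap, fstat, and friends); the file name and the newline
count below are illustrative only, not something from the thread:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("input.txt", O_RDONLY);   /* hypothetical file name */
    struct stat st;
    char *data;
    off_t i;
    long lines = 0;

    if (fd == -1 || fstat(fd, &st) == -1) {
        perror("open/fstat");
        return EXIT_FAILURE;
    }

    /* Map the whole file read-only; the mapping behaves like a char array. */
    data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Walk the "array" directly, e.g. count newlines. */
    for (i = 0; i < st.st_size; i++)
        if (data[i] == '\n')
            lines++;
    printf("%ld lines\n", lines);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}

Splitting words is then a matter of scanning data[0..st.st_size-1] for the
separator character, with no line-length guesswork at all.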

--
Ian Collins.
May 20 '07 #2

<J.*******@gmail.com> wrote in message
news:11*********************@w5g2000hsg.googlegroups.com...
Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?
Google for ggets and Chuck Falconer.
Chuck, a regular in this group, has kindly provided exactly what you want. I
believe it is entirely rights-free, and it certainly costs nothing.
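
For readers who don't want to chase the download just to follow the thread,
here is a hedged stand-in for what a ggets-style call does: read a whole
line of arbitrary length into a malloc'd buffer. The read_line() name and
interface are inventions for this example; the real ggets differs in detail.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative ggets-like reader: returns a malloc'd line without the
   trailing '\n', or NULL on EOF/allocation failure. Caller frees it. */
char *read_line(FILE *fp)
{
    size_t cap = 64, len = 0;
    char *buf = malloc(cap);

    if (buf == NULL)
        return NULL;

    while (fgets(buf + len, (int)(cap - len), fp) != NULL) {
        len += strlen(buf + len);
        if (len > 0 && buf[len - 1] == '\n') {  /* got a complete line */
            buf[len - 1] = '\0';
            return buf;
        }
        /* Line longer than the buffer: double it and read some more. */
        {
            char *tmp = realloc(buf, cap *= 2);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
        }
    }
    if (len > 0)        /* final line had no terminating '\n' */
        return buf;
    free(buf);
    return NULL;
}

Called in a loop -- while ((line = read_line(fp)) != NULL) { ...; free(line); }
-- it gives line-at-a-time input without guessing a maximum length.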

May 20 '07 #3
J.*******@gmail.com wrote:
>
Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?
get_line uses getc to read lines, and realloc to size the array.
It truncates lines at (size_t)-1 bytes.

http://www.mindspring.com/~pfilandr/...ine/get_line.c
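
In the same spirit, a compressed sketch of the technique pete describes --
getc to read characters, realloc to grow the buffer. This illustrates the
idea only; it is not the code behind the link:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative getc/realloc line reader: returns a malloc'd string
   (no '\n'), or NULL at end of file or on allocation failure. */
char *get_line_sketch(FILE *fp)
{
    size_t cap = 16, len = 0;
    char *buf = malloc(cap);
    int c;

    if (buf == NULL)
        return NULL;
    while ((c = getc(fp)) != EOF && c != '\n') {
        if (len + 1 >= cap) {               /* keep room for the terminator */
            char *tmp = realloc(buf, cap *= 2);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
        }
        buf[len++] = (char)c;
    }
    if (c == EOF && len == 0) {
        free(buf);
        return NULL;
    }
    buf[len] = '\0';
    return buf;
}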

--
pete
May 20 '07 #4
J.*******@gmail.com wrote:
Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.
Your problem is well known to compiler writers; they call it lexical
analysis. fgets() is not normally used in hand-written lexers, but
getchar() and ungetc() are, because you analyze a single character at a time.
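
As a tiny illustration of that getchar()/ungetc() pattern, a scanner can
look one character ahead and push it back if it is not ready to consume it;
the peek() helper below is a hypothetical name, not a standard function:

#include <stdio.h>

/* Look at the next character on stdin without consuming it. */
static int peek(void)
{
    int c = getchar();

    if (c != EOF)
        ungetc(c, stdin);   /* one character of pushback is guaranteed */
    return c;
}

int main(void)
{
    if (peek() == '#')      /* e.g. decide how to lex the next token */
        printf("next token starts a comment\n");
    return 0;
}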
Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?
Yes, there is a standard tool for this, called lex. Here is a lex
specification that prints all the words in a file:

$ cat ex_lex.l
/*
Compile:
flex ex_lex.l
cc lex.yy.c -ll

Run:
./a.out < ex_lex.l
*/

%{
%}
%%

[^ \n\t]+ { printf("Word: '%s', len = %d\n", yytext, yyleng); }
.|\n ; /* ignore anything else */

%%

int main(void)
{
    yylex();
}

--
Tor <torust [at] online [dot] no>
May 20 '07 #5
Malcolm McLean wrote:
<J.*******@gmail.com> wrote in message
>Hi, I'm hoping to get some advice on this design query to see if I'm
heading in the right direction.

I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?

Google for ggets and Chuck Falconer.
Chuck, a regular in this group, has kindly provided exactly what you want.
I believe it is entirely rights-free, and it certainly costs nothing.
try: <http://cbfalconer.home.att.net/download/>
--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net
--
Posted via a free Usenet account from http://www.teranews.com

May 21 '07 #6
Tor Rustad wrote:
J.*******@gmail.com wrote:
.... SNIP ...
>>
I would like to read a file of many lines and then search for a
particular character to break up the sentence into separate words.
Once done I will sort them.

I thought of using fgets, but the line could be very long and I
won't know its length or how many characters could be in a line.

Your problem is well known to compiler writers; they call it lexical
analysis. fgets() is not normally used in hand-written lexers, but
getchar() and ungetc() are, because you analyze a single character at a time.
>Using fgetc seems the better bet: put each character into an array
so I can traverse the array.

However, if I put each character into an array, I still won't know how
large to make the array.

Is there a better strategy?

Yes, there is a standard tool for this, called lex. Here is a lex
specification that prints all the words in a file:

$ cat ex_lex.l
/*
Compile:
flex ex_lex.l
cc lex.yy.c -ll

Run:
./a.out < ex_lex.l
*/

%{
%}
%%

[^ \n\t]+ { printf("Word: '%s', len = %d\n", yytext, yyleng); }
.|\n ; /* ignore anything else */

%%

int main(void)
{
    yylex();
}
Why all that tortuous mess to copy text?

#include <stdio.h>

int main(void) {
    int ch;

    while (EOF != (ch = getchar())) putchar(ch);
    return 0;
}

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

May 22 '07 #7
CBFalconer wrote:

[...]
Why all that tortuous mess to copy text?

A hand-written lexer in C is easy, but writing a lex spec is easier.

Try again, this time printing each *word* and its length on a separate
line of the output. Please follow your own bogus advice, by using ggets().

--
Tor <torust [at] online [dot] no>

May 22 '07 #8

"Tor Rustad" <to****@online.nowrote in message
news:ir*********************@telenor.com...
CBFalconer wrote:

[...]
>Why all that tortuous mess to copy text?


A hand-written lexer in C is easy, but writing a lex spec is easier.

Try again, this time printing each *word* and its length on a separate
line of the output. Please follow your own bogus advice, by using ggets().
Why is ggets() a problem for this task?
Granted, lex / flex is a dedicated lexical analyser and in its element with
token generation, but most people don't have the inclination to learn yet
another language for such a simple spec.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

May 23 '07 #9
Malcolm McLean wrote:
>
"Tor Rustad" <to****@online.nowrote in message
news:ir*********************@telenor.com...
>CBFalconer wrote:

[...]
>>Why all that tortuous mess to copy text?


A hand-written lexer in C is easy, but writing a lex spec is easier.

Try again, this time printing each *word* and its length on a separate
line of the output. Please follow your own bogus advice, by using ggets().
Why is ggets() a problem for this task?
OP needed to split the file content into words, not lines! Hence, this is
what he wanted in C:

int lexer(char *word, size_t *wlen)
{
    int t;

    while (1)
    {
        t = getchar();

        /* Eat white space */
        if (t == ' ' || t == '\t' || t == '\n')
            ;
        else if (t == EOF)
            return EOF;

        /* etc... */
    }
}

Adding logic to handle lines just complicates matters. Hence, using fgets(),
ggets(), getline(), etc. was all bad advice.

Granted, lex / flex is a dedicated lexical analyser and in its element with
token generation, but most people don't have the inclination to learn yet
another language for such a simple spec.
OP asked if there was a better strategy; well, there is: he can ignore the
line issue and look into using a dedicated tool for such problems.

Since I had already done the complete lex skeleton, all he needed to do was
really to add that C sorting code. That hardly qualifies as learning a new
language. <g>
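
To make that concrete, here is a hedged end-to-end sketch in plain C: a
completed version of the whitespace-skipping reader sketched above, feeding
a dynamically grown array of words, followed by the sorting step via qsort.
All the names (next_word, cmp) are illustrative, and the per-word length is
capped at 256 bytes purely for brevity:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_WORD 256   /* illustrative cap on word length */

/* Read the next whitespace-delimited word from fp.
   Returns its length, or EOF when the input is exhausted. */
static int next_word(FILE *fp, char *word)
{
    int c, len = 0;

    while ((c = getc(fp)) == ' ' || c == '\t' || c == '\n')
        ;                                   /* eat white space */
    if (c == EOF)
        return EOF;
    do {
        if (len < MAX_WORD - 1)
            word[len++] = (char)c;
    } while ((c = getc(fp)) != EOF && c != ' ' && c != '\t' && c != '\n');
    word[len] = '\0';
    return len;
}

/* qsort comparator for an array of char pointers. */
static int cmp(const void *a, const void *b)
{
    return strcmp(*(char *const *)a, *(char *const *)b);
}

int main(void)
{
    char word[MAX_WORD], **words = NULL;
    size_t n = 0, cap = 0, i;

    while (next_word(stdin, word) != EOF) {
        if (n == cap) {                     /* grow the pointer array */
            char **tmp = realloc(words, (cap = cap ? cap * 2 : 64) * sizeof *words);
            if (tmp == NULL)
                return EXIT_FAILURE;
            words = tmp;
        }
        words[n] = malloc(strlen(word) + 1);
        if (words[n] == NULL)
            return EXIT_FAILURE;
        strcpy(words[n++], word);
    }
    qsort(words, n, sizeof *words, cmp);
    for (i = 0; i < n; i++)
        puts(words[i]);
    return 0;
}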

--
Tor <torust [at] online [dot] no>

May 24 '07 #10

"Tor Rustad" <to****@online.nowrote in message
news:Pp*********************@telenor.com...
Malcolm McLean wrote:
>>
"Tor Rustad" <to****@online.nowrote in message
news:ir*********************@telenor.com...
>>CBFalconer wrote:

[...]

Why all that tortuous mess to copy text?
A hand-written lexer in C is easy, but writing a lex spec is easier.

Try again, this time printing each *word* and its length on a separate
line of the output. Please follow your own bogus advice, by using ggets().
Why is ggets() a problem for this task?

OP needed to split the file content into words, not lines! Hence, this is
what he wanted in C:

int lexer(char *word, size_t *wlen)
{
    int t;

    while (1)
    {
        t = getchar();

        /* Eat white space */
        if (t == ' ' || t == '\t' || t == '\n')
            ;
        else if (t == EOF)
            return EOF;

        /* etc... */
    }
}

Adding logic to handle lines just complicates matters. Hence, using
fgets(), ggets(), getline(), etc. was all bad advice.

>Granted, lex / flex is a dedicated lexical analyser and in its element
with token generation, but most people don't have the inclination to learn
yet another language for such a simple spec.

OP asked if there was a better strategy; well, there is: he can ignore the
line issue and look into using a dedicated tool for such problems.

Since I had already done the complete lex skeleton, all he needed to do was
really to add that C sorting code. That hardly qualifies as learning a new
language. <g>
You are right in a sense. You can produce very compact programs that do
manipulations on streams.
However, you are limited to fairly simple manipulations before the strategy
begins to break down, and you tie IO very closely to logic, which is
undesirable for all sorts of reasons. Functions (in the mathematical sense)
should be separated from the procedures - a procedure being something which
does things.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm


May 24 '07 #11
Malcolm McLean wrote:

[...]
>You are right in a sense. You can produce very compact programs that do
manipulations on streams.
However, you are limited to fairly simple manipulations before the strategy
begins to break down, and you tie IO very closely to logic, which is
undesirable for all sorts of reasons.
As I have already pointed out, regular expressions (via lex) are a *very*
useful abstraction when doing lexical analysis. For the problem at hand,
all the OP needed was to get the next input character until EOF; he didn't
even need a lookahead.

So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...

Functions (in the mathematical sense) should be separated from the
procedures - a procedure being something which does things.
???

--
Tor <torust [at] online [dot] no>

May 24 '07 #12

"Tor Rustad" <to****@online.nowrote in message
news:G_*********************@telenor.com...
Malcolm McLean wrote:

[...]
>You are right in a sense. You can produce very compact programs that do
manipulations on streams.
However, you are limited to fairly simple manipulations before the strategy
begins to break down, and you tie IO very closely to logic, which is
undesirable for all sorts of reasons.

As I have already pointed out, regular expressions (via lex) are a *very*
useful abstraction when doing lexical analysis. For the problem at hand,
all the OP needed was to get the next input character until EOF; he didn't
even need a lookahead.

So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...
But the stream can return an error state, because it does IO, and the
program cannot control things outside of itself. Which has important
implications.
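
A small sketch of the distinction being made here: when getchar() returns
EOF, the stream may have reached end-of-file or hit a read error, and
feof()/ferror() let the caller tell the two apart:

#include <stdio.h>

int main(void)
{
    int c;

    while ((c = getchar()) != EOF)
        putchar(c);

    if (ferror(stdin)) {          /* EOF was caused by a read error */
        perror("stdin");
        return 1;
    }
    /* feof(stdin) is true here: genuine end of input */
    return 0;
}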
>
>Functions (in the mathematical sense) should be separated from the
procedures - a procedure being something which does things.

???
Think "pure functions" and "model-view architecture".
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
May 25 '07 #13
Malcolm McLean wrote, On 25/05/07 05:39:
>
"Tor Rustad" <to****@online.nowrote in message
news:G_*********************@telenor.com...
>Malcolm McLean wrote:

[...]
>>You are right in a sense. You can produce very compact programs that do
manipulations on streams.
However, you are limited to fairly simple manipulations before the strategy
begins to break down, and you tie IO very closely to logic, which is
undesirable for all sorts of reasons.

As I have already pointed out, regular expressions (via lex) are a *very*
useful abstraction when doing lexical analysis. For the problem at hand,
all the OP needed was to get the next input character until EOF; he didn't
even need a lookahead.

So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...
But the stream can return an error state, because it does IO, and the
program cannot control things outside of itself. Which has important
implications.
So?
>>Functions (in the mathematical sense) should be separated from the
procedures - a procedure being something which does things.

???
Think "pure functions" and "model-view architecture".
Think "there is more than one approach to SW design".
--
Flash Gordon
May 25 '07 #14
Malcolm McLean wrote:
"Tor Rustad" <to****@online.nowrote in message
>>
So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...
But the stream can return an error state, because it does IO, and the
program cannot control things outside of itself. Which has important
implications.
In a lex scanner, you don't need to operate on stdin/stdout; they are just
the default streams that yyin and yyout use, and they can easily be changed.
The caller is also supposed to provide a yywrap() function, which is a
callback at EOF.

If there is an I/O error, this can of course be handled elsewhere. Why not
read a lex/flex tutorial first, before wasting more time here?
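
For reference, a hedged sketch of the two hooks mentioned, using the usual
flex interface: pointing yyin at a file instead of stdin, and supplying a
yywrap() that tells the scanner there is no more input. The file name is
illustrative:

#include <stdio.h>

/* Declarations provided by the flex-generated scanner. */
extern FILE *yyin;
int yylex(void);

/* Called by the scanner at end of file; returning 1 means "no more input". */
int yywrap(void)
{
    return 1;
}

int main(void)
{
    yyin = fopen("words.txt", "r");   /* illustrative file name */
    if (yyin == NULL) {
        perror("words.txt");
        return 1;
    }
    yylex();
    fclose(yyin);
    return 0;
}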

>>Functions (in the mathematical sense) should be separated from the
procedures - a procedure being something which does things.

???
Think "pure functions" and "model-view architecture".
FYI, there is no difference between functions and procedures in C, and
a mathematical function is something that can very much do things too:

y = f(x)

--
Tor <torust [at] online [dot] no>
May 25 '07 #15

"Tor Rustad" <to****@online.nowrote in message
news:Uq*********************@telenor.com...
Malcolm McLean wrote:
>"Tor Rustad" <to****@online.nowrote in message
>>>
So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...
But the stream can return an error state, because it does IO, and the
program cannot control things outside of itself. Which has important
implications.

In a lex scanner, you don't need to operate on stdin/stdout; they are just
the default streams that yyin and yyout use, and they can easily be changed.
The caller is also supposed to provide a yywrap() function, which is a
callback at EOF.

If there is an I/O error, this can of course be handled elsewhere. Why not
read a lex/flex tutorial first, before wasting more time here?

>>>Functions (in the mathematical sense) should be separated from the
procedures - a procedure being something which does things.

???
Think "pure functions" and "model-view architecture".

FYI, there is no difference between functions and procedures in C, and
a mathematical function is something that can very much do things too:

y = f(x)
In C it is conventional to use the word "function" for "subroutine". exit()
is called a "function", but it isn't one in the more generally accepted usage.
Language is like that: words shift in meaning and people aren't entirely
logical.

In C you can say that a procedure is a void function. This is useful to
compiler writers, because
"x = function()" is legal whilst
"x = procedure()" is not.
However for our purposes the distinction between

int foo();
and
void bar(int *ret)

doesn't really matter. It is just a case of how you lay out the code when
calling what is effectively the same routine.

A function in mathematics is a mapping of inputs to outputs. It follows that
a mathematical function can't do things.

As Flash pointed out, there is more than one approach to software
engineering. Separating IO from logic is one of the basic themes that crops
up time after time, though it is given different names.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

May 26 '07 #16
Malcolm McLean wrote:
>
"Tor Rustad" <to****@online.nowrote in message
news:Uq*********************@telenor.com...
>Malcolm McLean wrote:
>>"Tor Rustad" <to****@online.nowrote in message

So calling getchar() or fgetc() just says, "Hey, give me that next
character". Simple, and guess what, it does not break down...

But the stream can return an error state, because it does IO, and the
program cannot control things outside of itself. Which has important
implications.

In a lex scanner, you don't need to operate on stdin/stdout; they are just
the default streams that yyin and yyout use, and they can easily be changed.
The caller is also supposed to provide a yywrap() function, which is a
callback at EOF.

If there is an I/O error, this can of course be handled elsewhere. Why not
read a lex/flex tutorial first, before wasting more time here?
<snip>
As Flash pointed out, there is more than one approach to software
engineering. Separating IO from logic is one of the basic themes that
crops up time after time, though it is given different names.
Using Lex you *are* separating the IO from the processing. Lex is only
used to define processing.

In any case, another design theme that comes up time and again is: keep
it simple. A simple loop grabbing data a bit at a time from the input and
processing it as you grab it *is* keeping it simple. Doing this I have
before now simplified some code so much that it was then provably fast
enough to put straight in the interrupt service routine, thus allowing
things to be simplified further because there was no need to synchronise
between the interrupt routine and a background task, thus speeding it
up, thus...

Moving the processing to the IO *greatly* simplified the entire design
and implementation. It also breached other "design rules" in order to
simplify it greatly.

There is more than one approach and separating the IO from the logic is
not always the correct approach.
--
Flash Gordon
May 26 '07 #17
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
In any case, another design theme that comes up time and again which is
keep it simple.
Moving the processing to the IO *greatly* simplified the entire design and
implementation. It also breached other "design rules" in order to simplify
it greatly.

There is more than one approach and separating the IO from the logic is
not always the correct approach.
I agree with you. Often there are two contradictory rules, neither of which
you want to break, but dogmatically insisting on one over the other is a
silly way of proceeding.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
May 26 '07 #18
Malcolm McLean wrote:
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>In any case, another design theme that comes up time and again which
is keep it simple.
Moving the processing to the IO *greatly* simplified the entire design
and implementation. It also breached other "design rules" in order to
simplify it greatly.

There is more than one approach and separating the IO from the logic
is not always the correct approach.
I agree with you. Often there are two contradictory rules, neither of
which you want to break, but dogmatically insisting on one over the
other is a silly way of proceeding.
So why were you criticising a nice simple solution for breaking a rule
by not separating the IO? Tom's solution was nice and simple.
--
Flash Gordon
May 26 '07 #19
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:cf************@news.flash-gordon.me.uk...
Malcolm McLean wrote:
>"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>>In any case, another design theme that comes up time and again which is
keep it simple.
Moving the processing to the IO *greatly* simplified the entire design
and implementation. It also breached other "design rules" in order to
simplify it greatly.

There is more than one approach and separating the IO from the logic is
not always the correct approach.
I agree with you. Often there are two contradictory rules, neither of
which you want to break, but dogmatically insisting on one over the other
is a silly way of proceeding.

So why were you criticising a nice simple solution for breaking a rule by
not separating the IO? Tom's solution was nice and simple.
Tom:
>>Adding logic to handle lines just complicates matters. Hence, using
fgets(), ggets(), getline(), etc. was all bad advice.
I was pointing out that it isn't as straightforward as that. Yes, you
complicate things, and break the KISS rule, by using ggets. On the other
hand you couple IO very tightly to logic, breaking the "procedures must be
separate from functions" rule.

However, you managed to persuade me that I wasn't entirely right after all
in insisting on IO separate from logic. That's one of the characteristics
of intelligent people: they modify their opinions after listening to other
intelligent people.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

May 27 '07 #20
Malcolm McLean wrote:

<snip>
However, you managed to persuade me that I wasn't entirely right after
all in insisting on IO separate from logic. That's one of the
characteristics of intelligent people: they modify their opinions after
listening to other intelligent people.
Indeed it is.
--
Flash Gordon
May 27 '07 #21
Flash Gordon <sp**@flash-gordon.me.uk> writes:
Malcolm McLean wrote:

<snip>
>However, you managed to persuade me that I wasn't entirely right
after all in insisting on IO separate from logic. That's one of the
characteristics of intelligent people: they modify their opinions
after listening to other intelligent people.

Indeed it is.
Well, I used to think so, but ...

8-)}

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
May 27 '07 #22
Keith Thompson wrote:
Flash Gordon <sp**@flash-gordon.me.uk> writes:
>Malcolm McLean wrote:

<snip>
>>However, you managed to persuade me that I wasn't entirely right
after all in insisting on IO separate from logic. That's one of the
characteristics of intelligent people: they modify their opinions
after listening to other intelligent people.

Indeed it is.

Well, I used to think so, but ...

Hmm.. since I didn't persuade Malcolm, it follows that one of us isn't
intelligent! :-/

Who is Tom? :-P

--
Tor <torust [at] online [dot] no>

May 27 '07 #23
