
Conversion to adp

Hi

I am converting my Access front-end/back-end MDB app to an ADP. Are there any
pitfalls I should be aware of?

Thanks

Regards
Nov 13 '05
"Alan Webb" <kn*******@hotS PAMmail.com> wrote in
news:mo******** ************@co mcast.com:
As Steve points
out, there are cases where writing your own replication code ends
up being cheaper/better than what Microsoft provides in Access
without code.


Having done both (i.e., rolling my own and using Jet replication), I
strongly dispute this assertion of Steve's.

The number of problems is astronomical, even when you have very
limited synchronization scenarios. It's complex enough even when you
have a master/slave relationship between two or more dbs, where data
is updated/added/deleted only in the master (think about the
deletion problem and how you propagate the deletion of a record that
no longer exists at the time of the synchronization).

The problem could be much more easily solved with Terminal Server,
or with a browser-based application, but either of those requires
constant Internet access.

I could engineer an all-Jet replication scenario using indirect
replication over dialup networking or over a VPN over the Internet
(i.e., not using Internet replication, which has a host of basic
requirements that make it extremely prone to fall over), but I
wouldn't want to have to do it.

I don't have any clients who need to update data and synch with the
mother ship while still in the field, so I no longer do that kind of
thing (it used to be one of my specializations). Nowadays I'm just
supporting travellers who need to take data with them and update it
while on the road, but don't need to re-synchronize with the mother
ship until back in the office. That's *very* easy to do, and can be
done safely with simple direct replication (and some of my clients
do it themselves, via the Access UI, to save the money on
programming that it would cost them).

But it's the requirement for getting and sending updates while in
the field that makes this very hard. If a client required it, I'd
definitely make a VPN a prerequisite to building it, as the
alternative (Internet replication) requires that the client run IIS
and an open FTP server. That's just no longer safe these days, and
it was never ever very stable.

My bet is that, given the cost of engineering this with SQL Server
(which lacks some of the flexibility of Jet replication), the
requirement for synch from the field could be easily dropped. If
that's the case, then the Terminal Server solution starts to look
attractive, as you've saved money you can now throw at the Internet
access costs, instead, while you'll be saving an enormous amount of
money on administrative costs in the long run, perhaps enough to pay
for the in-the-field Internet access costs.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #11
"John" <Jo**@nospam.in fovis.co.uk> wrote:
I am converting my access front-end/backend mdb app to adp. Are there any
pitfalls I should be aware of?


I can understand why some apps would want their data to be stored in SQL Server. But
there's no good reason to spend the extra time converting the FE MDB to an ADP.
Leave the SQL Server tables as linked tables.

Tony
--
Tony Toews, Microsoft Access MVP
Please respond only in the newsgroups so that others can
read the entire thread of messages.
Microsoft Access Links, Hints, Tips & Accounting Systems at
http://www.granite.ab.ca/accsmstr.htm
Nov 13 '05 #12
On Mon, 25 Apr 2005 03:57:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:10**************************@4ax.com:
Note that Access replication may or may not be the best way to do
replication. Sometimes, it's better to implement a replication
scheme at the application level. One way to do this is to
implement a record ID generation system that includes a machine
identifier so there can't be collisions between IDs generated on
different machines, and add fields for a current and previous
record revision ID so you can tell if a record in the central
database is the same revision that a remote system made a change
to while off-line, and allow the change to post if so.


This is an enormously difficult task, even if it is only one-way
between a mere two copies of the data file.

I've done it. It's complicated (think about how deletions are
propagated; think about order of inserts and referential integrity).

If you're trying to have multiple dbs in the field, all being
updated, and you want those changes pushed up to the server, and you
also want to make the changes made by people on the servers to be
pulled down to the db in the field, it becomes a hugely complicated
task, unless a few conditions are met:

1. no records are edited in more than one location.

2. each person in the field has their own dataset that they work on
alone, and no other people actually edit that data (though they may
view it and analyze it).

But even then, you have to solve the PK issue. You have three
choices:

1. use a natural key, and run the risk of the same natural key being
used in two different copies of the database.

2. pre-allocate blocks of surrogate keys for each copy of the
database.

3. include a source db identifier in a compound PK in every table.
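
A minimal sketch of option 3 in Jet DDL run from VBA -- the Orders table
and field names here are purely illustrative, not from any app in this
thread:

Sub CreateOrdersWithCompoundPK()
    ' Option 3: every table carries a source-database identifier as part
    ' of its primary key, so rows created in different replicas can
    ' never collide. Table and field names are hypothetical.
    Dim db As DAO.Database
    Set db = CurrentDb()
    db.Execute _
        "CREATE TABLE Orders (" & _
        "SourceDbID LONG NOT NULL, " & _
        "OrderID LONG NOT NULL, " & _
        "CustomerName TEXT(50), " & _
        "CONSTRAINT pkOrders PRIMARY KEY (SourceDbID, OrderID))", _
        dbFailOnError
End Sub

Option 2 amounts to the same idea folded into a single column: each
replica is handed its own numeric range (say, replica 1 assigns keys
1-999999, replica 2 assigns 1000000-1999999) and only ever draws keys
from its own block.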

But if everyone's working on the same datasets, it becomes nearly
impossible to program from scratch.

Keep in mind that, theoretically speaking, there is a form of
heterogeneous replication where the main mother ship is a SQL Server
db and the laptops have Jet dbs that synchronize with the SQL
Server. However, like pure SQL Server replication itself, the whole
scenario is much more limited than the capabilities of pure Jet
replication, and the rules are much more strict.

I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.


All the problems you've described are real, but I've found they can often be
managed by limiting the scope of the replication features to what's important
for the real-world requirements of the app.

Here are some ideas that should help in most applications I can think of:

1. Don't try to resolve replication conflicts. Just let the loser either
abort the replication or continue and lose the conflicting changes. The user
can then abort, copy down the important information, run the replication
again, and manually enter the changes as required.

2. Don't try to replicate every table, just the ones that will really need to
be updated in the field. If the user needs a new lookup value, they can make
a comment in the notes, and fix it later when they're back on the LAN.

3. Don't allow deletions in disconnected mode. Just allow a status change to
something like "Inactive" - your application may work this way anyway.

Nov 13 '05 #13
David,
Ok. With the project I worked on there were only a half-dozen replicas.
Even with that I spent a fair amount of administrative time chasing down
replication errors that caused synchronization to fail. We used the
replication setup available through the Access UI. Even with my time
working out synchronization kinks this was still faster than having a clerk
rekey data collected by temps sent around the company to inventory equipment
that may or may not be in-scope for Y2K.

--
Alan Webb
kn*******@SPAMhotmail.com
"It's not IT, it's IS"

"David W. Fenton" <dX********@bwa y.net.invalid> wrote in message
news:Xn******** *************** **********@24.1 68.128.90...
"Alan Webb" <kn*******@hotS PAMmail.com> wrote in
news:mo******** ************@co mcast.com:
As Steve points
out, there are cases where writing your own replication code ends
up being cheaper/better than what Microsoft provides in Access
without code.


Having done both (i.e., rolling my own and using Jet replication), I
strongly dispute this assertion of Steve's.

The number of problems is astronomical, even when you have very
limited synchronization scenarios. It's even complex enough when you
have a master/slave relationship between two or more dbs, where data
is updated/added/deleted only in the master (think about the
deletion problem and how you propagate the deletion of a record that
no longer exists at the time of the synchronization ).

The problem could be much more easily solved with Terminal Server,
or with a browser-based application, but either of those requires
constant Internet access.

I could engineer an all-Jet replication scenario using indirect
replication over dialup networking or over a VPN over the Internet
(i.e., not using Internet replication, which has a host of basic
requirements that makes it extremely prone to fall over), but I
wouldn't want to have to do it.

I don't have any clients who need to update data and synch with the
mother ship while still in the field, so I no longer do that kind of
thing (it used to be one of my specializations ). Nowadays I'm just
supporting travellers who need to take data with them and update it
while on the road, but don't need to re-synchronize with the mother
ship until back in the office. That's *very* easy to do, and can be
done safely with simple direct replication (and some of my clients
do it themselves, via the Access UI, to save the money on
programming that it would cost them).

But it's the requirement for getting and sending updates while in
the field that makes this very hard. If a client required it, I'd
definitely make a VPN a prerequisite to building it, as the
alternative (Internet replication) requires that the client run IIS
and an open FTP server. That's just no longer safe these days, and
it was never ever very stable.

My bet is that given the cost of engineering this with SQL Server
(which lacks some of the flexibility of Jet replication), that the
requirement for synch from the field could be easily dropped. If
that's the case, then the Terminal Server solution starts to look
attractive, as you've saved money you can now throw at the Internet
access costs, instead, while you'll be saving an enormous amount of
money on administrative costs in the long run, perhaps enough to pay
for the in-the-field Internet access costs.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc

Nov 13 '05 #14
Steve Jorgensen <no****@nospam.nospam> wrote in
news:f9****************************@4ax.com:
On Mon, 25 Apr 2005 03:57:55 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:10**************************@4ax.com:
Note that Access replication may or may not be the best way to
do replication. Sometimes, it's better to implement a
replication scheme at the application level. One way to do this
is to implement a record ID generation system that includes a
machine identifier so there can't be collisions between IDs
generated on different machines, and add fields for a current
and previous record revision ID so you can tell if a record in
the central database is the same revision that a remote system
made a change to while off-line, and allow the change to post if
so.
This is an enormously difficult task, even if it is only one-way
between a mere two copies of the data file.

I've done it. It's complicated (think about how deletions are
propagated; think about order of inserts and referential
integrity).

If you're trying to have multiple dbs in the field, all being
updated, and you want those changes pushed up to the server, and
you also want to make the changes made by people on the servers to
be pulled down to the db in the field, it becomes a hugely
complicated task, unless a few conditions are met:

1. no records are edited in more than one location.

2. each person in the field has their own dataset that they work
on alone, and no other people actually edit that data (though they
may view it and analyze it).

But even then, you have to solve the PK issue. You have three
choices:

1. use a natural key, and run the risk of the same natural key
being used in two different copies of the database.

2. pre-allocate blocks of surrogate keys for each copy of the
database.

3. include a source db identifier in a compound PK in every table.

But if everyone's working on the same datasets, it becomes nearly
impossible to program from scratch.

Keep in mind that, theoretically speaking, there is a form of
heterogeneous replication where the main mother ship is a SQL
Server db and the laptops have Jet dbs that synchronize with the
SQL Server. However, like pure SQL Server replication itself, the
whole scenario is much more limited than the capabilities of pure
Jet replication, and the rules are much more strict.

I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.


All the problems you've described are real, but I've found they
can often be managed by limiting the scope of the replication
features to what's important for the real-world requirements of
the app.

Here are some ideas that should help in most applications I can
think of:

1. Don't try to resolve replication conflicts. Just let the
loser either abort the replication or continue and lose the
conflicting changes. The user can then abort, copy down the
important information, run the replication again, and manually
enter the changes as required.


Steve, that's an absolutely ludicrous proposal. I'd call that data
corruption, because the end result is that you don't know if the
multiple copies of the data file have anything like the same data
set in them.
2. Don't try to replicate every table, just the ones that will
really need to be updated in the field. If the user needs a new
lookup value, they can make a comment in the notes, and fix it
later when they're back on the LAN.
I don't believe in manual processes for things that can be
automated.
3. Don't allow deletions in disconnected mode. Just allow a
status change to something like "Inactive" - your application may
work this way anyway.


You still have to propagate that field value like other edits, and
the order of it can make a difference to any number of operations.

I am sorry to say that I don't think you have any credibility on
this issue based on the quality of your response. I would never
dream of accepting money for such a slap-dash system as you
describe.

And Jet replication offers much, much more than what you're
offering, without requiring massive amounts of coding (and testing
to see if it works).

I find your attitude towards data here quite inconsistent with your
detail-oriented approach to coding. Why are you running all these
tests on your code and wanting the compiler to catch as many
problems as possible, and then implementing systems like you
describe above, where the data isn't reliable?

Looks like a huge inconsistency to me, as though you are a code geek
who doesn't really care about data integrity.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #15
"Alan Webb" <kn*******@hotS PAMmail.com> wrote in
news:sZ******** ************@co mcast.com:
With the project I worked on there was only a half-dozen replicas.
Even with that I spent a fair amount of administrative time
chasing down replication errors that caused synchronization to
fail. . . .
Well, perhaps your schema was poorly designed for replication.
Here's an example of a schema design that will cause replication
errors:

Have a self-join on a table enforced with RI. Say you have a group
of records in a Company table, each record representing a branch
office. You want one of those records to be the master record (the
main office), so you store its PK in a field within the table,
limiting the values in it with RI.

This will fail every time you create a master record, because the
value in the field can't be propagated until the record itself has
been inserted.

(this is a real-world report -- I learned)
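
Roughly, the shape of that schema would be the following -- the table
and field names are my own illustration, not the original design:

Sub CreateCompanySelfJoin()
    ' Each branch-office row points at the "main office" row in the
    ' same table, with the self-join enforced by referential integrity.
    Dim db As DAO.Database
    Set db = CurrentDb()
    db.Execute _
        "CREATE TABLE Company (" & _
        "CompanyID LONG NOT NULL CONSTRAINT pkCompany PRIMARY KEY, " & _
        "CompanyName TEXT(100), " & _
        "MainOfficeID LONG, " & _
        "CONSTRAINT fkMainOffice FOREIGN KEY (MainOfficeID) " & _
        "REFERENCES Company (CompanyID))", _
        dbFailOnError
    ' The failure described above: when a master record is created and
    ' MainOfficeID points at it, that field value can't be propagated to
    ' another replica until the record itself has been inserted there.
End Sub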
. . . We used the replication setup available through the Access
UI. Even with my time working out synchronization kinks this was
still faster than having a clerk rekey data collected by temps sent
around the company to inventory equipment that may or may not be
in-scope for Y2K.


Well, I couldn't agree more about that. My understanding was that
the replication scenario under consideration was a choice between
SQL Server/MSDE or rolling your own with code. If someone were
contemplating only using the Access UI and was not trying to do the
synch over dialup networking (or any other narrow bandwidth pipe),
it would be fine.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 13 '05 #16
On Wed, 27 Apr 2005 00:55:32 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

....
I don't think replication is the answer here.

I think the problem needs to be completely re-thought from the
ground up.
All the problems you've described are real, but I've found they
can often be managed by limiting the scope of the replication
features to what's important for the real-world requirements of
the app.

Here are some ideas that should help in most applications I can
think of:

1. Don't try to resolve replication conflicts. Just let the
loser either abort the replication or continue and lose the
conflicting changes. The user can then abort, copy down the
important information, run the replication again, and manually
enter the changes as required.


Steve, that's an absolutely ludicrous proposal. I'd call that data
corruption, because the end result is that you don't know if the
multiple copies of the data file have anything like the same data
set in them.


Well, it depends on what the scope of replication is, doesn't it? In most
applications, we end up talking about one or a few master tables where the
odds are that the number of collisions per replication attempt is somewhere
between zero and two, with zero being more common. Under these conditions, it's
pretty easy for the user to figure out the likely consequences of refreshing
from the main copy, then re-applying the edits manually when a conflict was
detected.
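
As a rough sketch of how that check might be coded, using the
current/previous revision-ID fields mentioned earlier in the thread
(the table and field names are assumptions, not anyone's actual app):

Function TryPostFieldEdit(rsCentral As DAO.Recordset, _
                         rsField As DAO.Recordset) As Boolean
    ' Post the field edit only if the central row is still the revision
    ' the laptop started from; otherwise report a conflict and let the
    ' user decide what to do (idea 1 above). Field names are hypothetical.
    If rsCentral!RevisionID = rsField!PrevRevisionID Then
        rsCentral.Edit
        rsCentral!CustomerName = rsField!CustomerName
        rsCentral!RevisionID = rsField!RevisionID  ' stamp the new revision
        rsCentral.Update
        TryPostFieldEdit = True
    Else
        TryPostFieldEdit = False  ' collision -- user re-applies manually
    End If
End Function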
2. Don't try to replicate every table, just the ones that will
really need to be updated in the field. If the user needs a new
lookup value, they can make a comment in the notes, and fix it
later when they're back on the LAN.


I don't believe in manual processes for things that can be
automated.


That's the kind of anal-retentive thinking I used to be so well known for.
Yes, automating things is good where the benefit is high enough and/or the
cost is low enough.

If we can provide substantial benefit by implementing limited replication at a
low cost, that amounts to a high ROI. If, after implementing, we find that
there would be a benefit to improving the replication that's higher than the
benefit of working on something else, then we can proceed to do it.
3. Don't allow deletions in disconnected mode. Just allow a
status change to something like "Inactive" - your application may
work this way anyway.


You still have to propagate that field value like other edits, and
the order of it can make a difference to any number of operations.


Again, the complications are in proportion to the number of tables involved
and the number of conflicts that would reasonably arise. Keep those numbers
small, and the complications are minor.
I am sorry to say that I don't think you have any credibility on
this issue based on the quality of your response. I would never
dream of accepting money for such a slap-dash system as you
describe.
I come to my opinion based on having recently written one slap-dash system
that's working very well for a lot of paying customers. The trick is to be
very clear about the use cases and the spec to ensure that the slap-dash
solution employed happens to do what's needed.

The system I'm referring to didn't have to do with replication per se, but
something similar. We needed to have certain kinds of data pushed back and
forth between 2 systems that were written by different authors, having
different schemas and with about a 40% overlap in data coverage.

After several short iterations of reviewing the spec. and trying to optimize
cost/benefit, we figured out that the vast majority of duplicate user effort
we wanted to remove could be achieved by replicating data from just one table
in each direction (2 tables in all). It was further determined that it was OK
to simply fail the update if lookup tables were out of sync in one direction,
and describe the problems so the user could fix them, and to allow the sync
with problems in the other direction, informing the user what values were
replaced with what defaults.
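
To illustrate that second rule (this is not the shipping code -- the
table, field, and default value here are invented), substituting a
default for an unknown lookup value during the sync could be as simple
as:

Function ResolveCategory(ByVal strCategory As String, _
                        ByRef strNotes As String) As String
    ' If the incoming row carries a category the receiving system
    ' doesn't know, fall back to a default and record the substitution
    ' so the user can be told afterward.
    Dim varHit As Variant
    varHit = DLookup("CategoryName", "tblCategory", _
        "CategoryName = '" & Replace(strCategory, "'", "''") & "'")
    If IsNull(varHit) Then
        strNotes = strNotes & "Category '" & strCategory & _
            "' not found; substituted 'General'." & vbCrLf
        ResolveCategory = "General"  ' assumed default value
    Else
        ResolveCategory = strCategory
    End If
End Function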

In production, this simple scheme has been a huge hit with the users, and we
can now legitimately claim to have an integrated suite of products.

I don't see why a similar process could not be used to develop a limited,
relatively stupid off-line capability that is nevertheless highly useful, and
tells the user enough so they can deal with problems that occur.
And Jet replication offers much, much more than what you're
offering, without requiring massive amounts of coding (and testing
to see if it works).
Well, you may have a point there since you know much more about that than I
do. It has always been my impression that by trying to do so much, the Access
replication system leaves me unable to predict the side effects and
consequences of employing it. This could well be a symptom of my lack of
knowledge, and I'm interested in anything you would like to say on the
subject.
I find your attitude towards data here quite inconsistent with your
detail-oriented approach to coding. Why are you running all these
tests on your code and wanting the compiler to catch as many
problems as possible, and then implementing systems like you
describe above, where the data isn't reliable?
I agree that the tone of my post seems unlike me, but it's because I'm seeing
that anal retentivity misapplied costs more than it gains - for both me and
for the customer paying the bills. That does not mean ignoring essential
issues such as data integrity, but it does mean doing analysis to figure out
which issues really are not essential, so we can table them.

Less is more. A smaller spec is a smaller bill, and if the majority of the
value is delivered with a trivial spec, that amounts to a high ROI and a happy
customer. If, after delivery, there seems to be more value to be gained from
a more powerful implementation, and that's higher than the value of other
features in the queue, then we can absolutely start on the improvement.

With regard to testing, I find that doing more testing often leads to doing
less coding, because the tests can provide confidence that the code we
write really does cover the use cases. At the company I'm working for now,
besides the programmer tests, we have a full-time test engineer who's good at
breaking things and uses a tool to run automated regression tests at the GUI
level. I believe that if my code can get past him, chances are, it'll work
right for the customer - period.
Looks like a huge inconsistency to me, as though you are a code geek
who doesn't really care about data integrity.


I hope what I've said above shows that I have no lack of concern for data
integrity. I simply assert that when scope can be sufficiently contained,
maintaining data integrity doesn't have to require complex code.
Nov 13 '05 #17
