
Very large arrays and .NET classes

This is kind of a question about C# and kind of one about the framework.
Hopefully, there's an answer in there somewhere. :)

I'm curious about the status of 32-bit vs 64-bit in C# and the framework
classes. The specific example I'm running into is with respect to byte
arrays and the BitConverter class. In C# you can create arrays larger than
2^32, using the overloaded methods that take 64-bit parameters. But as near
as I can tell, the BitConverter class can only address up to a 32-bit offset
within the array.

I see similar issues in other areas. The actual framework classes (of which
the Array class itself is one, if I understand things correctly, and thus an
exception to this generality) don't all seem to provide full 64-bit support,
even though the C# language does (through specific overloads to .NET classes
that form built-in language elements).

I suppose one workaround in this example would be to copy the interesting
parts of the array to a smaller one that can be indexed by BitConverter with
its 32-bit parameters. But not every situation is resolvable with such a
simple workaround. For example, if one is displaying an array in a
scrollable control and wants to set the scrollbar to something in the same
order of magnitude as the array length itself, this is not possible because
the scrollbar controls use only 32-bit values.
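
(Concretely, the kind of helper I have in mind for the first case is something
like this sketch; the name is mine, not a framework method:)

    // Read an Int64 sitting at a 64-bit offset by copying just those 8 bytes
    // into a small temporary buffer first.
    static long ReadInt64At(byte[] data, long offset)
    {
        byte[] temp = new byte[8];
        Array.Copy(data, offset, temp, 0, 8);    // Array.Copy does have 64-bit overloads
        return BitConverter.ToInt64(temp, 0);    // ...but BitConverter only takes an int offset
    }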

Am I missing something? Is there a general paradigm that addresses these
sorts of gaps between things that can be 64-bit and things that cannot? Or
is this just par for the course with respect to being in a transition period
between the "old" 32-bit world and the "new" 64-bit world?

Having already made it through the transitions from 8-bit to 16-bit, and
from 16-bit to 32-bit, I guess I was sort of hoping we'd have learned our
lesson and gotten a little better at this. But I'm worried that's not the
case. I'm hopeful someone can reassure me. :)

Thanks,
Pete
Sep 29 '06 #1
Hi Peter,

Peter Duniho wrote:
> This is kind of a question about C# and kind of one about the framework.
> Hopefully, there's an answer in there somewhere. :)
>
> I'm curious about the status of 32-bit vs 64-bit in C# and the framework
> classes. The specific example I'm running into is with respect to byte
> arrays and the BitConverter class. In C# you can create arrays larger than
> 2^32, using the overloaded methods that take 64-bit parameters. But as near
> as I can tell, the BitConverter class can only address up to a 32-bit offset
> within the array.
A single BitConverter call can only address a 32-bit offset.
Use a second level of indexing to cover the 64-bit range:
byte[][] data;
something like: data = new byte[blockCount][]; // a jagged array, one byte[] block per outer index
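A rough sketch of that idea (the block size and count are arbitrary,
illustrative values, and this assumes a 64-bit process with enough memory):

    // Split one logical 64-bit position into an outer index that picks a block
    // and an inner 32-bit offset within that block.
    const int BlockSize = 1 << 30;                 // 1 GB per block
    int blockCount = 8;                            // 8 GB of logical space
    byte[][] data = new byte[blockCount][];
    for (int i = 0; i < blockCount; i++)
        data[i] = new byte[BlockSize];

    long pos = 5000000000L;                        // some 64-bit position
    long value = BitConverter.ToInt64(
        data[(int)(pos / BlockSize)],              // which block
        (int)(pos % BlockSize));                   // 32-bit offset inside it
    // (caveat: this simple form breaks if the 8 bytes straddle a block boundary)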
> I see similar issues in other areas. The actual framework classes (of which
> the Array class itself is one, if I understand things correctly, and thus an
> exception to this generality) don't all seem to provide full 64-bit support,
> even though the C# language does (through specific overloads to .NET classes
> that form built-in language elements).
>
> I suppose one workaround in this example would be to copy the interesting
> parts of the array to a smaller one that can be indexed by BitConverter with
> its 32-bit parameters. But not every situation is resolvable with such a
> simple workaround. For example, if one is displaying an array in a
> scrollable control and wants to set the scrollbar to something in the same
> order of magnitude as the array length itself, this is not possible because
> the scrollbar controls use only 32-bit values.
OK.
But how do you imagine a UI that works with more than a million
lines? I cannot imagine such a thing.
A reasonable solution is to divide the data into:
1. MOST SIGNIFICANT
2. LESS SIGNIFICANT
Then you can provide a useful UI.

with regards
Marcin
Sep 29 '06 #2
The current versions of the CLR (on all platforms, 32- and 64-bit) limit the size of
any single object to ~2 GB anyway. That means you won't be able to
create an array larger than ~2 GB, whatever the language.
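
(You can see the cap directly with a fragment like the one below; this assumes
a machine with enough virtual memory that the failure really is the per-object
limit rather than exhausted RAM:)

    // Even in a 64-bit process, a single CLR object tops out at roughly 2 GB,
    // so this throws OutOfMemoryException no matter how much RAM is installed.
    try
    {
        byte[] huge = new byte[int.MaxValue];      // ~2 GB of elements plus object overhead
        Console.WriteLine(huge.LongLength);
    }
    catch (OutOfMemoryException)
    {
        Console.WriteLine("Hit the ~2 GB per-object limit.");
    }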

Willy.

"Peter Duniho" <Np*********@Nn OwSlPiAnMk.comw rote in message
news:12******** *****@corp.supe rnews.com...
| This is kind of a question about C# and kind of one about the framework.
| Hopefully, there's an answer in there somewhere. :)
|
| I'm curious about the status of 32-bit vs 64-bit in C# and the framework
| classes. The specific example I'm running into is with respect to byte
| arrays and the BitConverter class. In C# you can create arrays larger
than
| 2^32, using the overloaded methods that take 64-bit parameters. But as
near
| as I can tell, the BitConverter class can only address up to a 32-bit
offset
| within the array.
|
| I see similar issues in other areas. The actual framework classes (of
which
| the Array class itself is one, if I understand things correctly, and thus
an
| exception to this generality) don't all seem to provide full 64-bit
support,
| even though the C# language does (through specific overloads to .NET
classes
| that form built-in language elements).
|
| I suppose one workaround in this example would be to copy the interesting
| parts of the array to a smaller one that can be indexed by BitConverter
with
| its 32-bit parameters. But not every situation is resolvable with such a
| simple workaround. For example, if one is displaying an array in a
| scrollable control and wants to set the scrollbar to something in the same
| order of magnitude as the array length itself, this is not possible
because
| the scrollbar controls use only 32-bit values.
|
| Am I missing something? Is there a general paradigm that addresses these
| sorts of gaps between things that can be 64-bit and things that cannot?
Or
| is this just par for the course with respect to being in a transition
period
| between the "old" 32-bit world and the "new" 64-bit world?
|
| Having already made it through the transitions from 8-bit to 16-bit, and
| from 16-bit to 32-bit, I guess I was sort of hoping we'd have learned our
| lesson and gotten a little better at this. But I'm worried that's not the
| case. I'm hopeful someone can reassure me. :)
|
| Thanks,
| Pete
|
|
Sep 29 '06 #3
"Marcin Grzębski" <mg************ ***@taxussi.com .plwrote in message
news:ef******** **@news.superme dia.pl...
> A single BitConverter call can only address a 32-bit offset.
> Use a second level of indexing to cover the 64-bit range:
> byte[][] data;
> something like: data = new byte[blockCount][]; // a jagged array, one byte[] block per outer index
Thanks. I'm not entirely convinced that's better than just temporarily
copying the interesting bytes when needed, since it requires a global change
to the data structure, rather than a local workaround.
> OK.
> But how do you imagine a UI that works with more than a million
> lines? I cannot imagine such a thing.
Seems to me it works the same as the UI that works with less than a million
lines. Except there's more lines.
> A reasonable solution is to divide the data into:
> 1. MOST SIGNIFICANT
> 2. LESS SIGNIFICANT
> Then you can provide a useful UI.
The data doesn't usefully break down that way. It's true that I could
create two scrollable controls, one to allow the user to navigate 32-bit
"pages" and the other to allow the user to navigate within those "pages".
But that seems to me to be at least as sloppy a workaround as changing an
entire data structure globally just to address some one- or
two-lines-of-code problem. The user interface ought to reflect to the
*user* the abstract view of the data that exists within the data, not some
arbitrary view of the data dictated by limitations of the underlying
architecture. The user should not have to concern himself with the
underlying architecture at all.

I admit that it does appear in this case that there may be no way to
insulate the user from those issues, but I don't agree that that's a
desirable solution.
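
(For the scrollbar specifically, the best I can come up with is scaling: let
each scrollbar tick stand for some number of bytes and map back to a 64-bit
offset on scroll. A rough sketch, with illustrative names and values, assuming
a standard WinForms scrollbar:)

    // One scrollbar tick represents 'bytesPerTick' bytes of the underlying data.
    long dataLength = 10L * 1024 * 1024 * 1024;    // e.g. a 10 GB stream
    int tickCount = 1 << 20;                       // chosen resolution, well within Int32
    long bytesPerTick = Math.Max(1L, dataLength / tickCount);

    scrollBar.Minimum = 0;
    scrollBar.Maximum = (int)((dataLength + bytesPerTick - 1) / bytesPerTick);

    // In the Scroll event handler, recover the 64-bit offset:
    long offset = (long)scrollBar.Value * bytesPerTick;

At least that keeps a single scrollbar in front of the user, even if it costs
some granularity.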

I guess what I'm hearing is that, no... .NET does not successfully address
the transition from 32-bit to 64-bit code, and that for the time being I
need to consider it 32-bit-only, even though some portions of it imply
64-bit capability.

Thank you very much for the answers.

Pete
Sep 29 '06 #4
I think that what Marcin meant is that if you are even close to reaching
the limit of the scrollbar, your UI badly needs a change.

If you present a list with a million items to the user, it's like saying
"I don't want you to use this program".

Oct 1 '06 #5
"Göran Andersson" <gu***@guffa.co mwrote in message
news:%2******** ********@TK2MSF TNGP03.phx.gbl. ..
> I think that what Marcin meant is that if you are even close to reaching
> the limit of the scrollbar, your UI badly needs a change.
That really depends on what data your UI is presenting.
> If you present a list with a million items to the user, it's like saying
> "I don't want you to use this program".
If your user has what is essentially a single list of a million items and
you don't present that list that way to the user, it's like saying "I want
you, the user, to conform your idea of your data to something I can code".

For example, suppose the user interface is for a view onto a stream of bytes
(a very large file, for example). Why should the user be expected to
mentally imagine his data as separate pages of portions of the file, when in
fact the entire file is one long stream of bytes? How do you avoid having
the user interface impose artificial, arbitrary boundaries on the data?
Suppose you have broken the user's data into 2GB units, and the user wants
to view a portion of that data that straddles the arbitrarily assigned
boundary at 2GB? One solution to that is to have a sliding 2GB window onto
the data, but then you're no longer able to present 64-bits worth of address
space to the user (or you have to add yet another layer of paging). In
either case, these are not user-friendly answers to the question.

As another example, consider a video file that has, say, 4 million fields
(about 18 hours of 30fps interlaced video). You can store that on modern
hardware. Why should a user expect to have trouble viewing such data on
modern hardware?

I am not suggesting the user will wind up reviewing each and every byte of a
10GB file or every frame of an 18 hour stream of video. But if one is to
create an application to allow the user to do anything with that data and
wants to present a unit-oriented view onto that data, the user interface
will inherently need to support a "document" of the same order of magnitude
as the units in the data. IMHO, it's a bit arrogant for a person to assume
that there is absolutely no reason a user would ever want a UI that can deal
with a large number of units (whatever those units may be).

I do appreciate the feedback, but I frankly think people are spending too
many cycles second-guessing my needs, and not enough actually answering the
question I asked.

That said, I do believe I've gotten enough feedback to understand that .NET
is still basically a 32-bit API, and that it's not a good idea to expect
64-bit support in the near future. Any application that involves itself
with 64-bit data will have to superimpose its own solution on top of the
32-bit environment .NET presents, just as has always been necessary in
32-bit Windows.

And to those who have offered feedback along those lines, I thank you.

Pete
Oct 1 '06 #6
GS
Are you working on large matrix transformations? An ordinary business
application would not need large arrays in memory; it would use database
manipulation instead.
If you do need large arrays, you will need the 64-bit CLR. You will
also need a 64-bit OS and a PC with lots of real RAM.

It is also likely you will be among the few leaders using 64-bit computing on
the PC. Translation: work, research, and possible bugs/unexpected features.

"Peter Duniho" <Np*********@Nn OwSlPiAnMk.comw rote in message
news:12******** *****@corp.supe rnews.com...
This is kind of a question about C# and kind of one about the framework.
Hopefully, there's an answer in there somewhere. :)

I'm curious about the status of 32-bit vs 64-bit in C# and the framework
classes. The specific example I'm running into is with respect to byte
arrays and the BitConverter class. In C# you can create arrays larger
than
2^32, using the overloaded methods that take 64-bit parameters. But as
near
as I can tell, the BitConverter class can only address up to a 32-bit
offset
within the array.

I see similar issues in other areas. The actual framework classes (of
which
the Array class itself is one, if I understand things correctly, and thus
an
exception to this generality) don't all seem to provide full 64-bit
support,
even though the C# language does (through specific overloads to .NET
classes
that form built-in language elements).

I suppose one workaround in this example would be to copy the interesting
parts of the array to a smaller one that can be indexed by BitConverter
with
its 32-bit parameters. But not every situation is resolvable with such a
simple workaround. For example, if one is displaying an array in a
scrollable control and wants to set the scrollbar to something in the same
order of magnitude as the array length itself, this is not possible
because
the scrollbar controls use only 32-bit values.

Am I missing something? Is there a general paradigm that addresses these
sorts of gaps between things that can be 64-bit and things that cannot?
Or
is this just par for the course with respect to being in a transition
period
between the "old" 32-bit world and the "new" 64-bit world?

Having already made it through the transitions from 8-bit to 16-bit, and
from 16-bit to 32-bit, I guess I was sort of hoping we'd have learned our
lesson and gotten a little better at this. But I'm worried that's not the
case. I'm hopeful someone can reassure me. :)

Thanks,
Pete


Oct 2 '06 #7
Peter Duniho wrote:
"Göran Andersson" <gu***@guffa.co mwrote in message
news:%2******** ********@TK2MSF TNGP03.phx.gbl. ..
>I think that what Marcin meant is that if you are even close to reaching
the limit of the scrollbar, your UI badly needs a change.

That really depends on what data your UI is presenting.
>If you present a list with a million items to the user, it's like saying
"I don't want you to use this program".

If your user has what is essentially a single list of a million items and
you don't present that list that way to the user, it's like saying "I want
you, the user, to conform your idea of your data to something I can code".
If you present the data as a horrifically long list, it's like saying
"This is what you get, as I am too lazy to create a user interface that
is usable". ;)
> For example, suppose the user interface is for a view onto a stream of bytes
> (a very large file, for example). Why should the user be expected to
> mentally imagine his data as separate pages of portions of the file, when in
> fact the entire file is one long stream of bytes?
Just because you don't display all the data at once, it doesn't need to
be separated into pages.
> How do you avoid having
> the user interface impose artificial, arbitrary boundaries on the data?
Eh.... just don't?
> Suppose you have broken the user's data into 2GB units, and the user wants
> to view a portion of that data that straddles the arbitrarily assigned
> boundary at 2GB? One solution to that is to have a sliding 2GB window onto
> the data, but then you're no longer able to present 64-bits worth of address
> space to the user (or you have to add yet another layer of paging). In
> either case, these are not user-friendly answers to the question.
Of course there is. There is no reason to display more data than fits on
the screen at once, as the user can't see it anyway. That doesn't mean
that you have to use large sliding windows or layered paging.
> As another example, consider a video file that has, say, 4 million fields
> (about 18 hours of 30fps interlaced video). You can store that on modern
> hardware. Why should a user expect to have trouble viewing such data on
> modern hardware?
And you think that displaying it all at once is not troublesome?
> I am not suggesting the user will wind up reviewing each and every byte of a
> 10GB file or every frame of an 18 hour stream of video. But if one is to
> create an application to allow the user to do anything with that data and
> wants to present a unit-oriented view onto that data, the user interface
> will inherently need to support a "document" of the same order of magnitude
> as the units in the data. IMHO, it's a bit arrogant for a person to assume
> that there is absolutely no reason a user would ever want a UI that can deal
> with a large number of units (whatever those units may be).
Of course the user interface should be able to handle a large number of
units, but there is no reason that a single list should handle it all,
as the user can't handle it all anyway.
> I do appreciate the feedback, but I frankly think people are spending too
> many cycles second-guessing my needs, and not enough actually answering the
> question I asked.
Of course I have to second guess your needs, as you haven't specified them.

It's quite common in message boards to make assumptions about what the
OP is really needing, or what the OP really should have asked, as many
people don't know what to ask for, what information to provide, or
sometimes even to ask a question...
> That said, I do believe I've gotten enough feedback to understand that .NET
> is still basically a 32-bit API, and that it's not a good idea to expect
> 64-bit support in the near future. Any application that involves itself
> with 64-bit data will have to superimpose its own solution on top of the
> 32-bit environment .NET presents, just as has always been necessary in
> 32-bit Windows.
>
> And to those who have offered feedback along those lines, I thank you.
>
> Pete

Oct 3 '06 #8
"Göran Andersson" <gu***@guffa.co mwrote in message
news:e0******** *****@TK2MSFTNG P05.phx.gbl...
> If you present the data as a horrifically long list, it's like saying "This
> is what you get, as I am too lazy to create a user interface that is
> usable". ;)
It's not like saying that at all.
[...]
>> How do you avoid having the user interface impose artificial, arbitrary
>> boundaries on the data?
>
> Eh.... just don't?
You are saying that the user should not see the data as the single
contiguous collection of units that it is, but that one should also not
break up the data into smaller collections of units.

Surely as someone accustomed to writing software, you understand the logical
contradiction here. Right?
[...]
> Of course there is. There is no reason to display more data than fits on
> the screen at once, as the user can't see it anyway. That doesn't mean
> that you have to use large sliding windows or layered paging.
What does it mean then? What other UI do you propose for the purpose of
presenting to the user data which is inherently a single contiguous
collection of millions of units? Assuming that the user is to have complete
and immediate access to any portion of his data, and that this access should
conform closely to the user's own mental idea of the data, what user
interface that doesn't involve "large sliding windows or layered paging" do
you suggest?
>> As another example, consider a video file that has, say, 4 million fields
>> (about 18 hours of 30fps interlaced video). You can store that on modern
>> hardware. Why should a user expect to have trouble viewing such data on
>> modern hardware?
>
> And you think that displaying it all at once is not troublesome?
You need to define what you mean by "displaying it all at once". I am not
suggesting that every unit of data should be present on the screen
simultaneously.

As far as the question of allowing the user direct access "all at once" to
the entire data stream, no...it is not at all troublesome. It is in fact
what the user typically expects. Every single one of the commonly used
video editing programs does exactly this, and no one in the industry seems
to think it's a problem.
> Of course the user interface should be able to handle a large number of
> units, but there is no reason that a single list should handle it all, as
> the user can't handle it all anyway.
First, you underestimate the user as well as the nature of a single-list
paradigm for a user interface. Second, how many lists do you suggest? How
do you suggest that the user be forced to navigate amongst these lists? And
how do you suggest implementing a multiple list scenario in which artificial
boundaries are not imposed on the data?

You keep making statements about what should NOT happen, but you have yet to
suggest what SHOULD happen, and your claims of what should not happen
contradict each other.
> Of course I have to second guess your needs, as you haven't specified
> them.
Baloney. You have no need to second-guess my needs, as I'm not asking a
question about those needs. The question I asked was quite specific, and
it's arrogant and insulting of you to make your own assumptions about what
help I need.

And frankly, it seems to me that you are more interested in your own ego
than in actually helping. A person who wants to help would suggest an
alternative, rather than invest all of their time denigrating the other
person's ideas. Maybe that makes you feel better about yourself, but it's
not helpful to anyone else and least of all to me.
> It's quite common in message boards to make assumptions about what the OP
> is really needing, or what the OP really should have asked, as many people
> don't know what to ask for, what information to provide, or sometimes even
> to ask a question...
Thankfully, it is NOT "quite common", and especially not when the original
question was very clear and to the point. Only people who cannot understand
that they don't have the big picture, nor are they invited to have the big
picture, insist on making assumptions, and as a result offer completely
inadequate, misleading, and insulting advice.

Pete
Oct 4 '06 #9
On assumptions:
IMO it is very natural and reasonable to start trying to make (and
state) assumptions about what you are trying to do, as your proposed
solution is not, itself, "normal" practice. Therefore a better solution
may be in order, hence the assumptions about what you are trying to do.

On data size:
I agree that it is entirely natural to let the UI provide access to
this data, but that doesn't mean you have to load it all at once. I
would be upset if my player loaded an entire DVD into RAM before it
started playing... Perhaps you need to look at virtual mode for the
lists? And keep the data on disk, loading a chunk into a buffer as
needed, and handling the logical offset in the wrapper.
The object model (API) just has to provide access to the data; it
doesn't *necessarily* have to have it all "on hand". Lazy loading and
seamless chunk switching could be your friend here.
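
Something along these lines is what I mean; the class name, chunk size, and so
on are illustrative, not from any particular library:

    using System;
    using System.IO;

    // The UI asks for a byte at any 64-bit offset; the wrapper pages fixed-size
    // chunks in from disk on demand, so only one small buffer is ever in memory.
    class ChunkedByteSource : IDisposable
    {
        const int ChunkSize = 1 << 20;             // 1 MB per chunk (arbitrary choice)
        readonly FileStream stream;
        readonly byte[] buffer = new byte[ChunkSize];
        long bufferStart = -1;                     // file offset of the cached chunk

        public ChunkedByteSource(string path)
        {
            stream = new FileStream(path, FileMode.Open, FileAccess.Read);
        }

        public long Length { get { return stream.Length; } }

        public byte this[long offset]
        {
            get
            {
                if (offset < 0 || offset >= stream.Length)
                    throw new ArgumentOutOfRangeException("offset");

                long chunkStart = offset - (offset % ChunkSize);
                if (chunkStart != bufferStart)
                {
                    stream.Seek(chunkStart, SeekOrigin.Begin);
                    stream.Read(buffer, 0, ChunkSize);   // a real version would loop until full
                    bufferStart = chunkStart;
                }
                return buffer[offset - chunkStart];
            }
        }

        public void Dispose() { stream.Dispose(); }
    }

Pair that with a ListView in virtual mode (handle RetrieveVirtualItem) and the
UI never has to hold more than what's on screen.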

Marc

Oct 4 '06 #10

