Hi,
consider the attached code.
Serializing the multi-dimensional array takes about 36s
vs. 0.36s for the single-dimensional array.
Initializing the multi-dimensional array takes about 4s
vs. 0.3s for the single-dimensional array.
(I know initializing is not necessary in this simple example,
but in my application it was necessary to frequently
re-initialize an array)
Are there any workarounds other than using single-dimensional arrays,
storing the array bounds in additional fields (in the actual code the
arrays are not always zero-based), and doing the index calculations in code?
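For reference, the workaround described above (single-dimensional storage, bounds kept in extra fields, index arithmetic done by hand) might look like the following sketch. The class and member names are made up for illustration:

```csharp
// Hypothetical sketch of a flattened replacement for a
// non-zero-based byte[x0..x1, y0..y1, z0..z1] array.
class FlatArray3D
{
    private readonly byte[] data;
    private readonly int lowX, lowY, lowZ;   // lower bounds of each dimension
    private readonly int lenY, lenZ;         // inner lengths needed for the index math

    public FlatArray3D(int lowX, int highX, int lowY, int highY, int lowZ, int highZ)
    {
        this.lowX = lowX; this.lowY = lowY; this.lowZ = lowZ;
        int lenX = highX - lowX + 1;
        lenY = highY - lowY + 1;
        lenZ = highZ - lowZ + 1;
        data = new byte[lenX * lenY * lenZ];
    }

    // Indexer does the row-major index calculation in code.
    public byte this[int x, int y, int z]
    {
        get { return data[((x - lowX) * lenY + (y - lowY)) * lenZ + (z - lowZ)]; }
        set { data[((x - lowX) * lenY + (y - lowY)) * lenZ + (z - lowZ)] = value; }
    }
}
```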
TIA,
Henrik
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

byte[, ,] multiDimArray = new byte[1000, 1000, 64];
byte[] singleDimArray = new byte[64000000];
DateTime start = DateTime.Now;
using (Stream stream = File.Open(@"d:\test.tst", FileMode.OpenOrCreate))
{
    BinaryFormatter formatter = new BinaryFormatter();
    formatter.Serialize(stream, multiDimArray);
}
Console.WriteLine("Serialize multi dim " + (DateTime.Now - start).TotalSeconds);
start = DateTime.Now;
using (Stream stream = File.Open(@"d:\test.tst", FileMode.OpenOrCreate))
{
    BinaryFormatter formatter = new BinaryFormatter();
    formatter.Serialize(stream, singleDimArray);
}
Console.WriteLine("Serialize single dim " + (DateTime.Now - start).TotalSeconds);
start = DateTime.Now;
for (int i = multiDimArray.GetLowerBound(0); i <= multiDimArray.GetUpperBound(0); ++i)
    for (int j = multiDimArray.GetLowerBound(1); j <= multiDimArray.GetUpperBound(1); ++j)
        for (int k = multiDimArray.GetLowerBound(2); k <= multiDimArray.GetUpperBound(2); ++k)
            multiDimArray[i, j, k] = 0;
Console.WriteLine("Init multi dim " + (DateTime.Now - start).TotalSeconds);
start = DateTime.Now;
for (int i = 0; i < singleDimArray.Length; ++i)
    singleDimArray[i] = 0;
Console.WriteLine("Init single dim " + (DateTime.Now - start).TotalSeconds);
Henrik,
I'm not sure about the serialization itself, but I noticed that you are
always doing the multi-dimensional array first. Remember that spinning up
objects takes time. When I ran your code, I got similar numbers, but when I
switched the calls around to run the single array serialization first I
got these numbers:
Serialize single dim 3.1087183
Serialize multi dim 18.9803655
True, the multi is still higher, but not 10x higher.
Also, in your code to initialize the arrays, you make repeated calls
to GetLowerBound and GetUpperBound. These take time, lots of time. I
changed your code to store them in temporary variables first:
int a = multiDimArray.GetLowerBound(0), x = multiDimArray.GetUpperBound(0);
int b = multiDimArray.GetLowerBound(1), y = multiDimArray.GetUpperBound(1);
int c = multiDimArray.GetLowerBound(2), z = multiDimArray.GetUpperBound(2);
for (int i = a; i <= x; ++i)
    for (int j = b; j <= y; ++j)
        for (int k = c; k <= z; ++k)
            multiDimArray[i, j, k] = 0;
Console.WriteLine("Init multi dim " + (DateTime.Now - start).TotalSeconds);
And I got this for the times:
Init multi dim 0.5155161
Init single dim 0.8904369
Now the multi is faster.
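For what it's worth, another option for the bulk re-initialization is Array.Clear, which accepts multi-dimensional arrays as well and zeroes the whole block in one call. A sketch (not benchmarked here):

```csharp
using System;

// Array.Clear treats any array as one linear block of elements,
// so both layouts can be zeroed without per-element loops.
byte[, ,] multiDimArray = new byte[1000, 1000, 64];
byte[] singleDimArray = new byte[64000000];

Array.Clear(multiDimArray, 0, multiDimArray.Length);   // clears all 64,000,000 elements
Array.Clear(singleDimArray, 0, singleDimArray.Length);
```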
Hope any of this helps,
L. Lee Saunders http://oldschooldotnet.blogspot.com
Thank you, Lee.
You are right, storing the array bounds in temporaries
does speed things up in the multi dim case.
I didn't think of this, because for single dim it actually
hurts performance.
By storing the bounds in variables, I could improve the performance
of my application.
There is still the issue with (de)serialization.
And the difference is more like 100x, not 10x.
My little test program may not be optimal, but I don't see
much difference when serializing the single dim array first.
Anyway, I worked around this by changing this huge array to
one-dimensional and this dramatically reduced the time for
opening and saving documents in the application.
Thanks again.
Henrik
Henrik Schmid wrote:
byte[, ,] multiDimArray = new byte[1000, 1000, 64];
You can also use byte[][][], which is called a 'jagged' array. A jagged
array is much faster. It requires slightly different code to work with,
but that won't be rocket science. See: http://dotnetperls.com/Content/Jagged-Array.aspx
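A sketch of the allocation for the same 1000 x 1000 x 64 buffer, since with a jagged array each inner array has to be created individually:

```csharp
// Jagged-array version: an array of arrays of arrays,
// each level allocated separately.
byte[][][] jagged = new byte[1000][][];
for (int i = 0; i < jagged.Length; i++)
{
    jagged[i] = new byte[1000][];
    for (int j = 0; j < jagged[i].Length; j++)
        jagged[i][j] = new byte[64];
}

// Element access uses chained indexers instead of [i, j, k]:
jagged[5][10][3] = 1;
```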
FB
--
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------
Hi,
thanks for the reply.
Actually, the first thing I tried was a "semi-jagged" array: byte[,][]
which was even slower.
Now I tried byte[][][], which is a bit faster (factor 4) than multi dim,
but still slower than single dim (factor 20).
Given that in my real application the first two dimensions are not zero-based,
I would still have to do some index calculation, so I can as well use a
single dim array and have the full performance.
Maybe some future framework or compiler version can apply similar
optimizations to multi dim arrays.
Thanks anyway.
Henrik