
Performance in csharp, scientific simulation

I was asked this summer to write a Monte Carlo code to simulate
magnetic nanoparticles. For the non-physicists: basically it is a
simulation where most of the time is spent looping over each
pair in an array of 500 or so particles in order to calculate the
interaction potential. I wrote what I have so far in C# because I
wanted to learn it and thought it would give me some good experience. I
am now beginning to get the impression that .NET and managed code in
general lag far behind performance-wise. Eventually I will probably
have to port to unmanaged C++, because the simulation code will need
to run on Linux as well.
My question is this:
What can I do in my C# code right now to speed up the performance?
The main method, which is run hundreds of times a second, basically
involves calculating a vector dot product (using my own vector class)
and an exponential. Would marking the method unsafe speed anything up?
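Schematically, the hot loop looks like this (not my exact code; the Position
property, the - overload and the form of the potential are just stand-ins):

    // Schematic of the expensive part: an O(n^2) pair loop over ~500 particles.
    // Each pair needs a dot product (via my Vector class) and an exponential.
    double TotalEnergy(Nanoparticle[] particles)
    {
        double energy = 0.0;
        for (int i = 0; i < particles.Length; i++)
        {
            for (int j = i + 1; j < particles.Length; j++)
            {
                Vector r = particles[i].Position - particles[j].Position;
                double r2 = r * r;         // overloaded * does the dot product
                energy += Math.Exp(-r2);   // stand-in for the real interaction potential
            }
        }
        return energy;
    }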

Nov 17 '05 #1
I believe that before jumping to conclusions you should profile your application.
Aside from memory use, you can find out which methods account for most of the
time and calls, and check whether the code in them is optimal.

As you might know, ideal performance is achieved by fetching the result by
argument. Simple iteration is not always the best approach. This includes
names :-)

HTH
Alex

"Michael Gorbach" <mg******@gmail.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
I was asked this summer to write a Monte Carlo code to simulate
magnetic nanoparticles. For the non-physicists: basically it is a
simulation where most of the time is spent looping over each
pair in an array of 500 or so particles in order to calculate the
interaction potential. I wrote what I have so far in C# because I
wanted to learn it and thought it would give me some good experience. I
am now beginning to get the impression that .NET and managed code in
general lag far behind performance-wise. Eventually I will probably
have to port to unmanaged C++, because the simulation code will need
to run on Linux as well.
My question is this:
What can I do in my C# code right now to speed up the performance?
The main method, which is run hundreds of times a second, basically
involves calculating a vector dot product (using my own vector class)
and an exponential. Would marking the method unsafe speed anything up?

Nov 17 '05 #2
How do I go about doing this application profiling? I'm more or less new
to serious programming, so any help would be appreciated.
Also, what do you mean by "fetching result by argument"?

Are there any references on performance you could suggest?

Nov 17 '05 #4
Michael, in most testing scenarios the performance of well-written (and
I stress "well-written") C# and C++ is comparable. Both are eventually
run as machine code. In C#, where intensive math
computations are being made, performance can be increased by the
judicious and careful use of pointer-based arithmetic. Alex's
comment about profiling is right on the money, especially if this is a
new language you are just learning.

Compuware has an excellent freeware profiler (a "Community" edition),
and there are others. These can help you tremendously.

Nov 17 '05 #6
I would suggest having a look at math books dealing with optimal
algorithms.

My comment was about the most efficient computation, which is not always
achievable: y = f(x), where x is the argument and f is some function. If you can
deliver y for any given x, for example from a precomputed table, you probably
won't be able to create anything faster in terms of speed. But you will pay
with memory use.
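
For instance, a crude table for exp(-x) could look like the sketch below (my own
naming and range, purely for illustration; check that the accuracy is good enough
for your simulation before using it):

    using System;

    // Sketch of "fetching the result by argument": precompute exp(-x) on a grid
    // and interpolate. Range, size and names are made up for illustration.
    class ExpTable
    {
        const double Max = 20.0;                  // assumes arguments fall in [0, Max)
        const int Size = 1 << 16;
        static readonly double Step = Max / Size;
        static readonly double[] Table = new double[Size + 1];

        static ExpTable()
        {
            for (int i = 0; i <= Size; i++)
                Table[i] = Math.Exp(-i * Step);
        }

        public static double ExpNeg(double x)
        {
            if (x < 0.0 || x >= Max)
                return Math.Exp(-x);              // fall back outside the tabulated range
            double pos = x / Step;
            int i = (int)pos;
            double frac = pos - i;
            // linear interpolation between the two nearest grid points
            return Table[i] + (Table[i + 1] - Table[i]) * frac;
        }
    }

In the inner loop you would then call ExpTable.ExpNeg(r2) instead of Math.Exp(-r2).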

That's why I suggest profiling. You can start with the free MS CLR Profiler,
which you can download at
http://www.microsoft.com/downloads/d...DisplayLang=en
Its source code is quite good at demonstrating some common optimization
techniques, by the way.

HTH
Alex

"Michael Gorbach" <mg******@gmail.com> wrote in message
news:11*********************@z14g2000cwz.googlegroups.com...
How do I go about doing this application profiling? I'm more or less new
to serious programming, so any help would be appreciated.
Also, what do you mean by "fetching result by argument"?

Are there any references on performance you could suggest?

Nov 17 '05 #8
Last I checked, CLR Profiler does *not* profile speed, only memory
usage. If the simulation is using fixed arrays, as I suspect, it
won't give any useful results.

Besides, the question was which is faster for this task -- C# or C++?
Just profiling the C# version evidently won't answer that question.

On Fri, 22 Jul 2005 22:11:56 -0400, "AlexS"
<sa***********@SPAMsympaticoPLEASE.ca> wrote:
I would suggest having a look at math books dealing with optimal
algorithms.

My comment was about the most efficient computation, which is not always
achievable: y = f(x), where x is the argument and f is some function. If you can
deliver y for any given x, for example from a precomputed table, you probably
won't be able to create anything faster in terms of speed. But you will pay
with memory use.

That's why I suggest profiling. You can start with the free MS CLR Profiler,
which you can download at
http://www.microsoft.com/downloads/d...DisplayLang=en
Its source code is quite good at demonstrating some common optimization
techniques, by the way.

HTH
Alex

--
http://www.kynosarges.de
Nov 17 '05 #10
If I understand you correctly, you already have a working C# program,
and you don't do any tricky stuff that might be hard to port.

So the solution is very simple: get a C++ compiler, convert your
program to C++ (or even plain C if you can turn your vector into a
simple struct), and compare the execution times for both program
versions. There you have your result -- no profiler needed.
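
Timing the managed side is trivial, for example (a sketch; the method name is just
whatever your top-level loop happens to be called, and on .NET 2.0 you could use
System.Diagnostics.Stopwatch instead):

    // crude wall-clock timing of one full simulation sweep
    DateTime start = DateTime.Now;
    RunSimulationSweep();              // hypothetical name for your main loop
    TimeSpan elapsed = DateTime.Now - start;
    Console.WriteLine("Elapsed: {0} ms", elapsed.TotalMilliseconds);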

Whether the C++ version is faster will depend mostly on how good the
C++ compiler's optimizer is. Numerical code can be optimized very
well, but .NET doesn't do any of those optimizations. This can lead
to substantial performance benefits for unmanaged code -- assuming, of
course, that your C++ compiler actually does such optimizations.

As for speeding up the C# code...

Merely *marking* a method unsafe does nothing at all. The keyword
only *allows* you to use pointer operations, which *might* be faster,
but even that isn't guaranteed.

Using pointers to address array elements might speed up C#, but if
you're iterating over an array, the range checks are already optimized
away by the JIT compiler. Also, make sure that C# overflow checking
is disabled -- it is controlled by the /checked compiler option and is off by default.
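
For illustration, a pointer-based dot product could look like the sketch below
(compile with /unsafe; whether it actually beats the plain indexed loop is
something you have to measure):

    // Sketch only: pointer-based dot product over two arrays of equal length.
    static unsafe double Dot(double[] a, double[] b)
    {
        double sum = 0.0;
        fixed (double* pa = a, pb = b)
        {
            int n = a.Length;          // assumes b.Length == a.Length
            for (int i = 0; i < n; i++)
                sum += pa[i] * pb[i];
        }
        return sum;
    }

For a 3-element vector the win, if any, will be small; profile before and after.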

Microsoft has a free C/C++ compiler download somewhere on MSDN but
since your code must run on Linux you'll probably use gcc anyway.
--
http://www.kynosarges.de
Nov 17 '05 #12
In message <11**********************@f14g2000cwb.googlegroups.com>,
Michael Gorbach <mg******@gmail.com> writes
Eventually I will probably
have to port to unmanaged C++, because the simulation code will need
to run on Linux as well.


Not necessarily; look at http://www.mono-project.com

--
Steve Walker
Nov 17 '05 #14
Thanks everyone for the great responses. I love this newsgroup!
Steve, yes, I do know about Mono and I will use it if I don't port, but
its speed is questionable. At best it will run as fast as Microsoft
.NET; at worst there will be a performance hit.
I will take a look at both profilers that have been suggested. Thanks
for the references.
My question is for Peter: what exactly is pointer-based arithmetic, and
where can I find algorithms, books, or other help on the subject? I think
this may be the best short-term solution to my problem. Also, I may
think about using a table to look up the exponential function values.

Nov 17 '05 #15
In message <11**********************@z14g2000cwz.googlegroups.com>,
Michael Gorbach <mg******@gmail.com> writes
Thanks everyone for the great responses. I love this newsgroup!
Steve, yes, I do know about Mono and I will use it if I don't port, but
its speed is questionable. At best it will run as fast as Microsoft
.NET; at worst there will be a performance hit.


Sniffing round the net, Mono currently appears to be a little slower
than Microsoft's implementation, but I'd expect the gap to close.

--
Steve Walker
Nov 17 '05 #16
Michael Gorbach <mg******@gmail.com> wrote:
Thanks everyone for the great responses. I love this newsgroup!
Steve, yes, I do know about Mono and I will use it if I don't port, but
its speed is questionable. At best it will run as fast as Microsoft
.NET; at worst there will be a performance hit.


That depends on what you do with it. When some colleagues were
investigating performance comparisons, they found that for many things
.NET was faster than Mono, but for some others Mono was faster than
.NET.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #17
Dov
I am learning C# and I am writing scientific software. What I have
found so far is that when the data is in arrays, the differences between
C# and the equivalent C++/C code are very small.

I tried (1) a Monte Carlo simulation, and (2) a routine translated from C
that solves a system of linear equations (simq from the CEPHES
package), benchmarked on a large system with the coefficients
initialized by a random number generator. The C/C++ and C# versions ended
with the same result and took nearly the same time.

However, when I translated part of the Stepanov benchmark (a C++
benchmark that measures the abstraction penalty), I did find a major
performance hit. I suspect that the C# optimizer is not yet
developed enough to deal with high levels of abstraction. Of
course, I am a C# beginner and perhaps I can learn to translate better.

I think that if you are a little careful in the performance-critical
parts of your code, you can achieve nearly C speeds.

Dov

Michael Gorbach wrote:
I was asked this summer to write a Monte Carlo code to simulate
magnetic nanoparticles. For the non-physicists: basically it is a
simulation where most of the time is spent looping over each
pair in an array of 500 or so particles in order to calculate the
interaction potential. I wrote what I have so far in C# because I
wanted to learn it and thought it would give me some good experience. I
am now beginning to get the impression that .NET and managed code in
general lag far behind performance-wise. Eventually I will probably
have to port to unmanaged C++, because the simulation code will need
to run on Linux as well.
My question is this:
What can I do in my C# code right now to speed up the performance?
The main method, which is run hundreds of times a second, basically
involves calculating a vector dot product (using my own vector class)
and an exponential. Would marking the method unsafe speed anything up?


Nov 17 '05 #18
My particles are stored in a class Nanoparticle, which contains a
member of a Vector class I created. The dot product is done by
the * operator, which I overloaded for the Vector class. It simply uses
a for loop over the coordinates.
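Trimmed down, the relevant part looks roughly like this (the field name here is
just representative):

    public class Vector
    {
        public double[] Coordinates = new double[3];   // x, y, z

        // dot product via the overloaded * operator, looping over the coordinates
        public static double operator *(Vector a, Vector b)
        {
            double sum = 0.0;
            for (int i = 0; i < a.Coordinates.Length; i++)
                sum += a.Coordinates[i] * b.Coordinates[i];
            return sum;
        }
    }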

Nov 17 '05 #20
And that is probably the source of your slowdown - you have created at
least two levels of abstraction above your particles: (1) the Vector
class, and (2) the overloaded Vector operations (e.g. the * operator).

Dov

Nov 17 '05 #21
What is the simplest way to do that without creating this slowdown?

Nov 17 '05 #22
Where performance is critical, try to use built-in data structures. For
example, if you have 3D particles, use two-dimensional jagged arrays
(e.g. double[][]) to store the coordinates. The size of the first
dimension is the number of particles, and the size of the second dimension is
3 (for x, y, z).
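
A minimal sketch of what I mean (names and the potential are just placeholders):

    // positions[i] holds {x, y, z} for particle i
    static double TotalEnergy(double[][] positions)
    {
        double energy = 0.0;
        for (int i = 0; i < positions.Length; i++)
        {
            double[] pi = positions[i];
            for (int j = i + 1; j < positions.Length; j++)
            {
                double[] pj = positions[j];
                // separation vector and its dot product with itself, written out
                // inline instead of going through a Vector class and an overloaded operator
                double dx = pi[0] - pj[0];
                double dy = pi[1] - pj[1];
                double dz = pi[2] - pj[2];
                double r2 = dx * dx + dy * dy + dz * dz;
                energy += Math.Exp(-r2);   // placeholder for the real interaction potential
            }
        }
        return energy;
    }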

Dov

Michael Gorbach wrote:
What is the simplest way to do that without creating this slowdown?


Nov 17 '05 #23
