Bytes | Software Development & Data Engineering Community

Computational Complexity

Given two implementations of an algorithm how do I determine the relative
computational complexity of each?

Are there tools that can determine the relative performance of two
algorithms or implementations?

Joris van Lier


Jun 27 '08 #1
"Joris van Lier" <wh*****@hotmail.com> wrote in message
news:#2**************@TK2MSFTNGP05.phx.gbl...
Given two implementations of an algorithm how do I determine the relative
computational complexity of each?

Are there tools that can determine the relative performance of two
algorithms or implementations?

Joris van Lier
Let me explain this a bit more:

I'm working on a process where multiple copies of data may exist:
new data is supplied to the process along with a location to store it in,
the process stores it in the designated location, and then
replaces all copies with links to the new location.

For the purpose of this example I'll assume that the storage is the
filesystem, the data is contained in files, all files are of equal
length, a location is a path, and all locations where data can be stored are
known beforehand.
With my limited math skills, and assuming the program runs as a single thread,
I tried to work this out as below:

D = data unit (file)
L = number of locations
F = average number of files in a location
B = average number of bytes in a file

O(fx(D)) = L * F * B

is this correct?
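To make the counting concrete, here is a minimal sketch of the scan I have in mind (the `Location` type and helper names are made up for illustration, not part of my actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Location:
    files: dict = field(default_factory=dict)  # path -> file contents (bytes)

def count_scan_work(locations):
    """Total byte-level work for a pass that touches every byte of
    every file in every location: proportional to L * F * B."""
    ops = 0
    for loc in locations:                    # L iterations
        for data in loc.files.values():      # ~F iterations per location
            ops += len(data)                 # ~B operations per file
    return ops

# Two locations (L=2), two files each (F=2), four bytes per file (B=4):
locs = [Location({"a": b"1234", "b": b"5678"}),
        Location({"c": b"abcd", "d": b"efgh"})]
print(count_scan_work(locs))  # 2 * 2 * 4 = 16
```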

How do I account for multiple concurrent threads of execution?

--
Joris van Lier
Jun 27 '08 #2
On Jun 9, 5:03 am, "Joris van Lier" <whiz...@hotmail.com> wrote:
> "Joris van Lier" <whiz...@hotmail.com> wrote in message news:#2**************@TK2MSFTNGP05.phx.gbl...
> Given two implementations of an algorithm how do I determine the relative
> computational complexity of each?
>
> [snip]
>
> D = data unit (file)
> L = number of locations
> F = average number of files in a location
> B = average number of bytes in a file
>
> O(fx(D)) = L * F * B
>
> is this correct?
>
> How do I account for multiple concurrent threads of execution?
It looks like you were working through the calculations for
determining space complexity, but you are asking about runtime
complexity, right? What is it that you want to do to the data?
The only way to work out how long your process will take to run
is to understand what it needs to do and how it will do it. Can you
give us a better description?

Jun 27 '08 #3
On Jun 9, 3:15 am, "Joris van Lier" <whiz...@hotmail.com> wrote:
> Given two implementations of an algorithm how do I determine the relative
> computational complexity of each?

You do a Big-Oh analysis of the algorithms.

> Are there tools that can determine the relative performance of two
> algorithms or implementations?
Well, you can use a profiler to assist in determining the runtime
differences of the algorithms. However, you must hold the problem
size constant to be able to make a reasonable comparison between the
algorithms. That comparison is only valid for the specific problem
size chosen. Given enough information you may be able to make some
general conclusions about all problem sizes, but they won't be as
reliable as doing the Big-Oh analysis.
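To sketch what "hold the problem size constant" looks like in practice, here is one possible comparison of two search implementations at a single fixed n (the function names are made up; this is an illustration, not a prescription for any particular profiler):

```python
import timeit

def linear_search(data, target):
    """O(n): scan every element until the target is found."""
    for i, x in enumerate(data):
        if x == target:
            return i
    return -1

def binary_search(data, target):
    """O(log n): halve the search range each step (data must be sorted)."""
    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(data) and data[lo] == target else -1

n = 100_000                      # hold the problem size constant
data = list(range(n))
t_linear = timeit.timeit(lambda: linear_search(data, n - 1), number=50)
t_binary = timeit.timeit(lambda: binary_search(data, n - 1), number=50)
print(t_binary < t_linear)       # True at this problem size
```

Remember the caveat above: this tells you which is faster at n = 100,000 only; the Big-Oh analysis is what generalizes to all problem sizes.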
Jun 27 '08 #4
"Brian Gideon" <br*********@yahoo.com> wrote in message
news:be**********************************@d1g2000hsg.googlegroups.com...

Hi Brian, thanks for responding
On Jun 9, 3:15 am, "Joris van Lier" <whiz...@hotmail.com> wrote:
>> Given two implementations of an algorithm how do I determine the relative
>> computational complexity of each?
>
> You do a Big-Oh analysis of the algorithms.

I know, but... how?

>> Are there tools that can determine the relative performance of two
>> algorithms or implementations?
>
> Well, you can use a profiler to assist in determining the runtime
> differences of the algorithms. However, you must hold the problem
> size constant to be able to make a reasonable comparison between the
> algorithms. That comparison is only valid for the specific problem
> size chosen. Given enough information you may be able to make some
> general conclusions about all problem sizes, but they won't be as
> reliable as doing the Big-Oh analysis.
My first hunch was to measure, but then I would have to make
really sure that the environment is stable in terms of performance and load.
Other processes running at the same time might influence the tests, either
negatively by reducing the available resources or positively by causing data
used by my process to be cached, and the measurement would then not reflect
the real world in which the process will be operating.
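One common way to damp out that kind of interference (just a sketch of the idea, not tied to any specific tool) is to repeat each measurement several times after a warm-up pass and compare medians rather than single runs:

```python
import statistics
import time

def measure(fn, warmup=3, runs=15):
    """Return the median wall-clock time of fn() over several runs.
    Warm-up runs prime any caches so both candidates benefit equally,
    and taking the median damps out spikes from other processes."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

t = measure(lambda: sum(i * i for i in range(10_000)))
print(t > 0)  # True
```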

Can you recommend any sources for learning Big-Oh analysis for
non-mathematicians?

Joris van Lier

Jun 27 '08 #5
On Jun 9, 12:52 pm, "Joris van Lier" <whiz...@hotmail.com> wrote:
> Can you recommend any sources for learning Big-Oh analysis for
> non-mathematicians?
>
> Joris van Lier
Google around a bit and take your time. It can be quite confusing and
complex, so start out simple. Avoid the math in the middle and just
concentrate on the Big-Oh notation of concrete examples. Use the
complexities of known sub-algorithms to help derive the Big-Oh for the
main algorithm. Understand the differences between best, average, and
worst case scenarios. Understand what coding patterns yield these
common complexities: O(1), O(n), O(n^2), O(log(n)), O(n*log(n)). Once
you understand those it gets easier to combine them.
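The coding patterns behind those common complexities could be sketched roughly like this (toy examples for illustration only):

```python
def constant(items):
    """O(1): the work does not depend on the input size."""
    return items[0]

def linear(items):
    """O(n): one pass over the input."""
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    """O(n^2): a nested pass over the input for every element."""
    return sum(x * y for x in items for y in items)

def logarithmic(n):
    """O(log n): the problem is halved at every step."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linearithmic(items):
    """O(n log n): e.g. any comparison-based sort."""
    return sorted(items)

print(logarithmic(1024))  # 10
```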

Jun 27 '08 #6

