I've posted a simple Matrix class on my website as a small-footprint
package for doing basic calculations on matrices up to about 10x10 in
size (no theoretical limit, but performance on inverse is exponential).
Includes:
 trace
 transpose
 conjugate
 determinant
 inverse
 eigenvectors/values (for symmetric matrices)
 addition and multiplication (with constant or other matrix)
Matrices are easily built from formatted strings, as in:
m = Matrix( """1 2 3 4
5 11 20 3
2 7 11 1
0 5 3 1""")
Pretty much a no-strings-attached license, just don't hassle me about
little things like warranty, support, merchantability, accuracy, etc.
See it at http://www.geocities.com/ptmcg/pytho...html#matrix_py
-- Paul
Do you calculate the matrix inverse when you don't need to?
"Paul McGuire" <pt***@austin.rr.com> wrote:
>I've posted a simple Matrix class on my website as a small-footprint package for doing basic calculations on matrices up to about 10x10 in size (no theoretical limit, but performance on inverse is exponential).
Includes: trace, transpose, conjugate, determinant, inverse, eigenvectors/values (for symmetric matrices), addition and multiplication (with constant or other matrix)
Matrices are easily built from formatted strings, as in:
m = Matrix( """1 2 3 4
5 11 20 3
2 7 11 1
0 5 3 1""")
Pretty much a no-strings-attached license, just don't hassle me about little things like warranty, support, merchantability, accuracy, etc. See it at http://www.geocities.com/ptmcg/pytho...html#matrix_py
-- Paul

Regards,
Casey
On Jan 23, 4:05 pm, Casey Hawthorne <caseyhHAMMER_T...@istar.ca>
wrote:
Do you calculate the matrix inverse when you don't need to?
No, the inversion is only calculated on the first call to inverse(),
and memoized so that subsequent calls return the cached value
immediately. Since the Matrix class is mutable, the cache is
invalidated if any of the matrix elements are updated. So after
changing a matrix element, the inversion would be recalculated at the
next call to inverse().
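The invalidate-on-write scheme described here can be sketched roughly as below. This is an illustrative reconstruction with hypothetical names (the package's actual decorator is called cacheValue and may differ in detail):

```python
def cache_value(method):
    """Cache a no-argument method's result on the instance until invalidated."""
    name = "_cached_" + method.__name__
    def wrapper(self):
        if name not in self._cache:
            self._cache[name] = method(self)
        return self._cache[name]
    return wrapper

class Matrix:
    def __init__(self, rows):
        self._rows = [list(r) for r in rows]
        self._cache = {}

    def __setitem__(self, idx, value):
        # Any element update invalidates every cached result.
        i, j = idx
        self._rows[i][j] = value
        self._cache.clear()

    @cache_value
    def trace(self):
        return sum(self._rows[i][i] for i in range(len(self._rows)))

m = Matrix([[1, 2], [3, 4]])
print(m.trace())   # 5, computed and cached
m[0, 0] = 10       # write invalidates the cache
print(m.trace())   # 14, recomputed on the next call
```

The same pattern applies to inverse(): the first call does the work, later calls hit the cache, and any element assignment clears it.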
Hope that clears things up. You can also do your own experiments,
including adding verbose logging to the memoizing decorators -- an
example is included in the source code. Also the footprint is quite
small, just one Python source file, about 600-700 lines of code, and
all Python, so 100% portable.
-- Paul
Paul McGuire wrote:
I've posted a simple Matrix class on my website as a small-footprint
package for doing basic calculations on matrices up to about 10x10 in
size (no theoretical limit, but performance on inverse is exponential).
Why is that? A simple and robust LU decomposition should be no more than O(n**3).
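For reference, an O(n**3) inverse needs nothing fancy. Here is a generic Gauss-Jordan elimination sketch with partial pivoting (the same complexity class as the LU route suggested above); this is not code from the package, and the 1e-12 singularity threshold is an arbitrary choice for illustration:

```python
def invert(a):
    """Invert a square matrix (list of lists of floats) in O(n**3) time."""
    n = len(a)
    # Augment with the identity matrix: [A | I].
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining pivot into place.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly so)")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half is now A^-1.
    return [row[n:] for row in aug]

inv = invert([[4.0, 7.0], [2.0, 6.0]])
# approximately [[0.6, -0.7], [-0.2, 0.4]]
```

Each of the n pivot steps touches an n x 2n augmented matrix, giving the cubic bound.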

Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 Umberto Eco
On Jan 23, 6:59 pm, Robert Kern <robert.k...@gmail.com> wrote:
Paul McGuire wrote:
I've posted a simple Matrix class on my website as a small-footprint
package for doing basic calculations on matrices up to about 10x10 in
size (no theoretical limit, but performance on inverse is exponential).
Why is that? A simple and robust LU decomposition should be no more than O(n**3).

Robert Kern
Well "3" is an exponent isn't it? :)
In truth, in my laziness, I didn't *actually* test the performance.
But after measuring inversion times for nxn matrices for n=2 to 12, I
get these results (run on Python 2.4.2, on a 2GHz CPU):
n    seconds            ln(seconds)
2    0.000411225449045  -7.79636895604
3    0.00102247632031   -6.88552782893
4    0.00437541642862   -5.43175358002
5    0.0146999129778    -4.21991370509
6    0.0507813143849    -2.98022681913
7    0.143077961026     -1.94436561528
8    0.39962257773      -0.917234732978
9    1.14412558021      -0.134640659841
10   3.01953516439       1.10510290046
11   8.76039971561       2.17024153354
12   21.8032182861       3.0820575867
Plotting n vs. ln(seconds) gives a fairly straight line of slope about
1.09, and exp(1.09) = 2.97, so your big-O estimate seems to line up
nicely with the experimental data -- I couldn't have fudged it any
better.
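The slope quoted above can be reproduced with a quick least-squares fit over the measured timings (rounded here to three figures); this is a standalone check, not code from the package:

```python
import math

# Paul's measured inversion times for n = 2..12 (seconds, rounded).
ns = list(range(2, 13))
times = [0.000411, 0.00102, 0.00438, 0.0147, 0.0508, 0.143,
         0.400, 1.14, 3.02, 8.76, 21.8]

# Least-squares slope of ln(t) against n.
ys = [math.log(t) for t in times]
mean_x = sum(ns) / len(ns)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ns, ys))
         / sum((x - mean_x) ** 2 for x in ns))
print(slope)  # roughly 1.1, i.e. t grows like exp(1.1*n), about 3**n
```

A straight line on this semi-log plot means exponential growth in n; a power law like n**3 would instead come out straight on a log-log plot.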
-- Paul
At Tuesday 23/1/2007 22:33, Paul McGuire wrote:
>On Jan 23, 6:59 pm, Robert Kern <robert.k...@gmail.com> wrote:
Paul McGuire wrote:
I've posted a simple Matrix class on my website as a small-footprint
package for doing basic calculations on matrices up to about 10x10 in
size (no theoretical limit, but performance on inverse is exponential).
Why is that? A simple and robust LU decomposition should be no
more than O(n**3).
Well "3" is an exponent isn't it? :)
But constant!
x**2 is a "power" (or quadratic, or polynomial) function. 2**x is an
"exponential" function. They're quite different.
>In truth, in my laziness, I didn't *actually* test the performance. But after measuring inversion times for nxn matrices for n=2 to 12, I get these results (run on Python 2.4.2, on a 2GHz CPU):
n    seconds            ln(seconds)
2    0.000411225449045  -7.79636895604
3    0.00102247632031   -6.88552782893
4    0.00437541642862   -5.43175358002
5    0.0146999129778    -4.21991370509
6    0.0507813143849    -2.98022681913
7    0.143077961026     -1.94436561528
8    0.39962257773      -0.917234732978
9    1.14412558021      -0.134640659841
10   3.01953516439       1.10510290046
11   8.76039971561       2.17024153354
12   21.8032182861       3.0820575867
Plotting n vs. ln(seconds) gives a fairly straight line of slope about 1.09, and exp(1.09) = 2.97, so your big-O estimate seems to line up nicely with the experimental data -- I couldn't have fudged it any better.
Nope, such a semi-log plot shows that time grows exponentially, like
t ∝ 3**n, and that's really bad! :(
The points should be aligned on a log-log plot to be a power function.
As Robert Kern stated before, this problem should be not worse than
O(n**3) -- how have you implemented it?

Gabriel Genellina
Softlab SRL
> The points should be aligned on a log-log plot to be a power function.
> As Robert Kern stated before, this problem should be not worse than
> O(n**3) -- how have you implemented it?
Sure enough, the complete equation is t = 5e-05*exp(1.1n), or t = 5e-05
x 3**n.
As for the implementation, it's pretty much brute force. Here is the
code itself -- the cacheValue decorator memoizes the calculated inverse
for the given Matrix.
    @cacheValue
    def det(self):
        "Function to return the determinant of the matrix."
        if self.isSquare():
            if self.numRows() > 2:
                multiplier = 1
                firstRow = self[1]
                tmp = self._rows[1:]
                rangelentmp = range(len(tmp))
                col = 0
                detsum = 0
                for val in firstRow:
                    if val:
                        #~ tmp2 = Matrix([ RowVector(t[0:col]+t[col+1:]) for t in tmp ])
                        tmp2 = self.getCachedMatrix([ RowVector(t[0:col]+t[col+1:]) for t in tmp ])
                        detsum += ( multiplier * val * tmp2.det() )
                    multiplier = -multiplier
                    col += 1
                return detsum
            if self.numRows() == 2:
                return self[1][1]*self[2][2] - self[1][2]*self[2][1]
            if self.numRows() == 1:
                return self[1][1]
            if self.numRows() == 0:
                return 0
        else:
            raise MatrixException("can only compute det for square matrices")

    def cofactor(self,i,j):
        i -= 1
        j -= 1
        #~ tmp = Matrix([ RowVector(r[:i]+r[i+1:]) for r in (self._rows[:j]+self._rows[j+1:]) ])
        tmp = self.getCachedMatrix([ RowVector(r[:i]+r[i+1:]) for r in (self._rows[:j]+self._rows[j+1:]) ])
        if (i+j)%2:
            return -tmp.det()
        else:
            return tmp.det()
        #~ return (-1) ** (i+j) * tmp.det()

    @cacheValue
    def inverse(self):
        if self.isSquare():
            if self.det() != 0:
                ret = Matrix( [ RowVector( [ self.cofactor(i,j) for j in self.colrange() ] )
                                for i in self.rowrange() ] )
                ret *= (1.0/self.det())
                return ret
            else:
                raise MatrixException("cannot compute inverse for singular matrices")
        else:
            raise MatrixException("can only compute inverse for square matrices")
At Wednesday 24/1/2007 02:40, Paul McGuire wrote:
> The points should be aligned on a log-log plot to be a power function.
> As Robert Kern stated before, this problem should be not worse than
> O(n**3) -- how have you implemented it?
> Sure enough, the complete equation is t = 5e-05*exp(1.1n), or t = 5e-05 x 3**n.
So either reimplement it better, or place a notice in BIG letters to
your users about the terrible time and memory penalties of using inverse().

Gabriel Genellina
Softlab SRL
On Jan 24, 11:21 am, Gabriel Genellina <gagsl...@yahoo.com.ar> wrote:
At Wednesday 24/1/2007 02:40, Paul McGuire wrote:
> The points should be aligned on a log-log plot to be a power function.
> As Robert Kern stated before, this problem should be not worse than
> O(n**3) -- how have you implemented it?
> Sure enough, the complete equation is t = 5e-05*exp(1.1n), or t = 5e-05
> x 3**n.
> So either reimplement it better, or place a notice in BIG letters to
> your users about the terrible time and memory penalties of using inverse().
Well, really, the "terrible time and memory penalties" aren't all that
terrible for matrices up to, oh, 10x10 or so, but you wouldn't want to
use this on anything much larger than that. Hey, guess what?! That's
exactly what I said in my original post!
And the purpose/motivation for "reimplementing it better" would be
what, exactly? So I can charge double for it? As it was, I felt a
little guilty for posting a solution to such a likely homework
assignment, but now there's room for a student to implement a kick-ass
inverter routine in place of the crappy-but-useful-for-small-matrices
brute-force one I've written.
Cubs win!
-- Paul
Paul McGuire wrote:
And the purpose/motivation for "reimplementing it better" would be
what, exactly? So I can charge double for it?
So you can have accurate results, and you get a good linear solver out of the
process. The method you use is bad in terms of accuracy as well as efficiency.

Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 Umberto Eco
On Jan 24, 1:47 pm, Robert Kern <robert.k...@gmail.com> wrote:
Paul McGuire wrote:
And the purpose/motivation for "reimplementing it better" would be
what, exactly? So I can charge double for it?
So you can have accurate results, and you get a good linear solver out of the
process. The method you use is bad in terms of accuracy as well as efficiency.
Dang, I thought I was testing the results sufficiently! What is the
accuracy problem? In my test cases, I've randomly created test
matrices, inverted, then multiplied, then compared to the identity
matrix, with the only failures being when I start with a singular
matrix, which shouldn't invert anyway.
One of the main reasons I wrote this was as a simple linear solver for
small order problems, so I would like this to at least be able to do
_that_. Efficiency aside, I thought I was at least getting the right
answer...
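The round-trip test described above amounts to checking that A times its computed inverse stays close to the identity. A minimal standalone version (illustrative helper names and fixed sample data rather than the package's random matrices):

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def max_identity_error(a, a_inv):
    """Worst-case deviation of a @ a_inv from the identity matrix."""
    prod = matmul(a, a_inv)
    n = len(prod)
    return max(abs(prod[i][j] - (1.0 if i == j else 0.0))
               for i in range(n) for j in range(n))

a = [[4.0, 7.0], [2.0, 6.0]]
a_inv = [[0.6, -0.7], [-0.2, 0.4]]   # the known inverse of a
print(max_identity_error(a, a_inv))  # tiny, near machine epsilon
```

For well-conditioned random matrices this error stays near machine epsilon, which is why the test passes; it is the ill-conditioned cases discussed below that expose the accuracy problem.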
-- Paul
"Paul McGuire" <pt***@austin.rr.com> writes:
Dang, I thought I was testing the results sufficiently! What is the
accuracy problem? In my test cases, I've randomly created test
matrices, inverted, then multiplied, then compared to the identity
matrix, with the only failures being when I start with a singular
matrix, which shouldn't invert anyway.
There's a lot of ill-conditioned matrices that you won't hit at
random, that aren't singular, but that are nonetheless very stressful
for numerical inversion. The Hilbert matrix a[i,j]=1/(i+j+1) is a
well-known example.
If you're taking exponential time to invert matrices (sounds like
you're recursively using Cramer's Rule or something) that doesn't
begin to be reasonable even for very small systems, in terms of
accuracy as well as speed. It's a pure math construct that's not of
much practical value in numerics.
You might look at the Numerical Recipes books for clear descriptions
of how to do this stuff in the real world. Maybe the experts here
will jump on me for recommending those books since I think the serious
numerics crowd scoffs at them (they were written by scientists rather
than numerical analysts) but at least from my uneducated perspective,
I found them very readable and well-motivated.
Paul McGuire wrote:
On Jan 24, 1:47 pm, Robert Kern <robert.k...@gmail.com> wrote:
>Paul McGuire wrote:
>>And the purpose/motivation for "reimplementing it better" would be what, exactly? So I can charge double for it?
So you can have accurate results, and you get a good linear solver out of the process. The method you use is bad in terms of accuracy as well as efficiency.
Dang, I thought I was testing the results sufficiently! What is the
accuracy problem? In my test cases, I've randomly created test
matrices, inverted, then multiplied, then compared to the identity
matrix, with the only failures being when I start with a singular
matrix, which shouldn't invert anyway.
Ill-conditioned matrices. You should grab a copy of _Matrix Computations_ by
Gene H. Golub and Charles F. Van Loan.
For example, try the Hilbert matrix n=6.
H_ij = 1 / (i + j - 1) -- http://en.wikipedia.org/wiki/Hilbert_matrix
While all numerical solvers have issues with ill-conditioned matrices, your
method runs into them faster.
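A quick way to see why the Hilbert matrix is so hostile: its determinant collapses toward zero as n grows, and a det()-based cofactor inverse divides by exactly that quantity. A standalone sketch using exact Fractions (not the package's code):

```python
from fractions import Fraction

def hilbert(n):
    """The n x n Hilbert matrix H_ij = 1/(i+j-1), built exactly."""
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def det(m):
    """Recursive cofactor expansion; exact because entries are Fractions."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:]
                for row in m[1:]]) for c in range(len(m)))

print(float(det(hilbert(6))))  # roughly 5e-18: nearly singular
```

The matrix passes a naive det() != 0 test, yet in floating point almost all significance is lost long before the division happens.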

Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 Umberto Eco
Paul Rubin wrote:
You might look at the Numerical Recipes books for clear descriptions
of how to do this stuff in the real world. Maybe the experts here
will jump on me for recommending those books since I think the serious
numerics crowd scoffs at them (they were written by scientists rather
than numerical analysts) but at least from my uneducated perspective,
I found them very readable and well-motivated.
As a scientist (well, former scientist) and programmer, I scoff at them for
their code. The text itself is decent if you need a book that covers a lot of
ground. For any one area, though, there are usually better books. _Matrix
Computations_, which I mentioned elsewhere, is difficult to beat for this area.

Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 Umberto Eco
On Jan 24, 3:18 pm, Robert Kern <robert.k...@gmail.com> wrote:
Ill-conditioned matrices. You should grab a copy of _Matrix Computations_ by
Gene H. Golub and Charles F. Van Loan.
For example, try the Hilbert matrix n=6.
H_ij = 1 / (i + j - 1)
Sure enough, this gets ugly at n=6.
Thanks for the reference to Matrix Computations; I'll try to track down
a copy next time I'm at Half-Price Books.
Meanwhile, it's back to the day job...
-- Paul