Hello all. I wrote C code to invert n×n matrices, where 1 < n < 39 and the matrices are guaranteed to be invertible. I haven't touched linear algebra since my sophomore year of undergrad, but I believe my algorithm implements Gauss-Jordan elimination: I append an identity matrix to the right of the matrix I want to invert, use row operations to zero out the entries below the diagonal and then the entries above it, and finally divide each row by its remaining diagonal value so the left block becomes the identity. The right block is then the inverse.

Here's where I hit a wall: the code works fine until I try to invert a 10x10 matrix or larger. I replicated my algorithm step by step in Microsoft Excel (painful) to see what the intermediate values should be. Some of the values in the appended identity block get divided repeatedly, often by very large numbers (on the order of 10^17), and it looks as if C loses precision in the stored results past a certain point. My Excel replication reproduces my code's (wrong) output, while Excel's MINVERSE function produces the correct inverse.

I thought that holding the values in `long double` would put me in good shape, but such is not the case.

Question: Is this problem attributable to limits on how very large and very small numbers are represented in floating point? If so, is there a better inversion algorithm that won't bump against those representation limits? Thank you in advance for any good thoughts on this.

P.S. The three linear algebra products I ultimately need (with I the identity matrix) are:

a. X^T Y^-1 X

b. I^T Y^-1 X

c. I^T Y^-1 I