LumisROB wrote:

> On Sat, 24 Sep 2005 10:38:58 -0400, Kai-Uwe Bux <jk********@gmx.net>
> wrote:
>
>> I am asking since using some simple-minded allocation strategy, 2^31
>> doubles would take on the order of 10GB of memory. And even if you can
>> host that many entries, computations will take forever. Thus, I am
>> worried that some "basic solution" might not be good enough for you.
>
> The maximum dimension is a matrix of the type double m[10^5][10^5], that
> is, 10^10 elements of double. Unfortunately the matrix has to be dense.
>
> The most serious calculation that I have to do is a dot product between
> two rows. However, I have to make many non-sequential accesses to the
> elements.

Now that's a lot of entries. You might run into serious trouble here. There
are different limitations that you might face. For one, allocation of a
single block of that size is probably not supported. However, there might
be an even worse problem: the total size of the heap might be insufficient.
Also, if your compiler thinks that pointers are 32 bits, then there will be
no way to address 10^10 points in memory since 2^32 < 10^10. May I ask,
what platform will be running this code?

As for dense matrix implementations, there is a simple-minded strategy that
avoids allocating rows*cols doubles in one big chunk:

#include <cstddef>

template < typename T >
struct matrix {

  typedef std::size_t size_type;
  typedef T value_type;

private:

  size_type rows;
  size_type cols;

  typedef T* RowVector;
  RowVector* data;

public:

  matrix ( size_type r, size_type c, value_type t = value_type() )
    : rows ( r )
    , cols ( c )
    , data ( new RowVector [ this->rows ] )
  {
    size_type rows_constructed = 0;
    try {
      while ( rows_constructed < rows ) {
        this->data[ rows_constructed ] = new value_type [ this->cols ];
        for ( size_type j = 0; j < this->cols; ++ j ) {
          this->data[rows_constructed][j] = t;
        }
        ++ rows_constructed;
      }
    }
    catch ( ... ) {
      while ( rows_constructed > 0 ) {
        -- rows_constructed;
        delete [] this->data[ rows_constructed ];
      }
      delete [] this->data;
      throw;
    }
  }

  ~matrix ( void ) {
    for ( size_type i = 0; i < rows; ++ i ) {
      delete [] this->data[i];
    }
    delete [] this->data;
  }

  size_type row_size ( void ) const {
    return ( this->rows );
  }

  size_type col_size ( void ) const {
    return ( this->cols );
  }

  T & operator() ( size_type r, size_type c ) {
    return ( this->data[r][c] );
  }

  T const & operator() ( size_type r, size_type c ) const {
    return ( this->data[r][c] );
  }

}; // class matrix<T>

This is untested code, and I left out the copy constructor and the
assignment operator. The price for allocating several blocks is a more
complicated constructor, since cleaning up after a partial failure is more
difficult. Also, you are using a little more memory. Note that using
vectors will incur additional overhead, as every vector stores its size.

Another idea would be to view the big matrix as a block matrix. But I am
not sure whether that would help circumvent the limitations that you are
running into.

Best

Kai-Uwe Bux