namespace Eigen {

/** \page TutorialSparse Tutorial page 9 - Sparse Matrix
    \ingroup Tutorial

\li \b Previous: \ref TutorialGeometry
\li \b Next: TODO

\b Table \b of \b contents \n
  - \ref TutorialSparseIntro
  - \ref TutorialSparseFilling
  - \ref TutorialSparseFeatureSet
  - \ref TutorialSparseDirectSolvers
<hr>

\section TutorialSparseIntro Sparse matrix representations

In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only nonzero coefficients. Such a matrix is called a sparse matrix.

\b Declaring \b sparse \b matrices \b and \b vectors \n
The SparseMatrix class is the main sparse matrix representation of Eigen's sparse module; it offers high performance, low memory usage, and compatibility with most sparse linear algebra packages. These advantages come at the cost of some loss of flexibility, particularly during the assembly of the sparse matrix; consequently, a variant called DynamicSparseMatrix is offered which is tailored for low-level sparse matrix assembly. Both of them can be either row major or column major:

\code
#include <Eigen/Sparse>
SparseMatrix<std::complex<float> > m1(1000,2000);        // declare a 1000x2000 col-major compressed sparse matrix of complex<float>
SparseMatrix<double,RowMajor> m2(1000,2000);             // declare a 1000x2000 row-major compressed sparse matrix of double
DynamicSparseMatrix<std::complex<float> > m3(1000,2000); // declare a 1000x2000 col-major dynamic sparse matrix of complex<float>
DynamicSparseMatrix<double,RowMajor> m4(1000,2000);      // declare a 1000x2000 row-major dynamic sparse matrix of double
\endcode

Although a sparse matrix could also be used to represent a sparse vector, for that purpose it is better to use the specialized SparseVector class:
\code
SparseVector<std::complex<float> > v1(1000); // declare a column sparse vector of complex<float> of size 1000
SparseVector<double,RowMajor> v2(1000);      // declare a row sparse vector of double of size 1000
\endcode
As with dense vectors, the size of a sparse vector denotes its dimension and not the number of nonzero coefficients. At the time of allocation, both sparse matrices and sparse vectors do not have any nonzero coefficients: they correspond to the "all zeros" matrix or vector.
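
For instance, the following minimal sketch shows freshly allocated objects (the dimensions are purely illustrative):
\code
#include <Eigen/Sparse>
using namespace Eigen;

SparseMatrix<double> m(1000,2000);  // 1000x2000 matrix, but no coefficient is stored yet
SparseVector<double> v(1000);       // sparse vector of dimension 1000, also empty
// m.nonZeros() and v.nonZeros() both return 0: the objects represent the "all zeros" matrix and vector
\endcode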


\b Overview \b of \b the \b internal \b sparse \b storage \n
In order to get the most out of Eigen's sparse objects, it is important to have a rough idea of the way they are represented internally. The SparseMatrix class implements the widely-used Compressed Column (or Row) Storage scheme. It consists of three compact arrays: one for the coefficient values, and two for the indices of the nonzero entries. However, the indices are \em not stored as an explicit list of (row, column) pairs; instead, the beginning of each column (or row) is encoded as a pointer index. For instance, let \c m be a column-major sparse matrix. Then its nonzero coefficients are sequentially stored in memory in column-major order (\em values). A second array of integers stores the respective row index of each coefficient (\em inner \em indices). Finally, a third array of integers stores the index in the previous arrays of the first element of each column (\em outer \em indices); it contains one entry per column plus a final sentinel entry equal to the total number of nonzeros.

Here is an example, with the matrix:
<table class="manual">
<tr><td>0</td><td>3</td><td>0</td><td>0</td><td>0</td></tr>
<tr><td>22</td><td>0</td><td>0</td><td>0</td><td>17</td></tr>
<tr><td>7</td><td>5</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>0</td><td>14</td><td>0</td><td>8</td></tr>
</table>

and its internal representation using the Compressed Column Storage format:
<table class="manual">
<tr><td>Values:</td>        <td>22</td><td>7</td><td>3</td><td>5</td><td>14</td><td>1</td><td>17</td><td>8</td></tr>
<tr><td>Inner indices:</td> <td> 1</td><td>2</td><td>0</td><td>2</td><td> 4</td><td>2</td><td> 1</td><td>4</td></tr>
</table>
Outer indices:<table class="manual"><tr><td>0</td><td>2</td><td>4</td><td>5</td><td>6</td><td>\em 8 </td></tr></table>
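
To see how the three arrays relate, here is a small standalone sketch, written in plain C++ and independent of Eigen's API, which reconstructs the coefficients of the above 5x5 matrix from its compressed-column arrays:
\code
#include <cstdio>

int main()
{
  const int    cols     = 5;
  const double values[] = {22, 7, 3, 5, 14, 1, 17, 8};  // nonzero coefficients, column by column
  const int    inner[]  = { 1, 2, 0, 2,  4, 2,  1, 4};  // row index of each stored coefficient
  const int    outer[]  = { 0, 2, 4, 5,  6, 8};         // start of each column, plus the final sentinel (= number of nonzeros)

  for (int j = 0; j < cols; ++j)                        // loop over the columns
    for (int k = outer[j]; k < outer[j+1]; ++k)         // loop over the nonzeros of column j
      std::printf("(%d,%d) = %g\n", inner[k], j, values[k]);
  return 0;
}
\endcode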

As you might guess, here the storage order is even more important than with dense matrices. We will therefore often make a clear difference between the \em inner and \em outer dimensions. For instance, it is efficient to loop over the coefficients of an \em inner \em vector (e.g., a column of a column-major matrix), but completely inefficient to do the same for an \em outer \em vector (e.g., a row of a column-major matrix).

The SparseVector class implements the same compressed storage scheme but, of course, without any outer index buffer.

Since all nonzero coefficients of such a matrix are sequentially stored in memory, inserting a new nonzero near the "beginning" of the matrix can be extremely costly. As described below (\ref TutorialSparseFilling), one strategy is to fill nonzero coefficients in order. In cases where this is not possible, Eigen's sparse module also provides a DynamicSparseMatrix class which allows efficient random insertion. DynamicSparseMatrix is essentially implemented as an array of SparseVector, where the values and inner-indices arrays have been split into multiple small and resizable arrays. Assuming the number of nonzeros per inner vector is relatively small, this modification allows for very fast random insertion at the cost of a slight memory overhead (due to extra memory preallocated by each inner vector to avoid an expensive memory reallocation at every insertion) and a loss of compatibility with other sparse libraries used by some of our high-level solvers. Once complete, a DynamicSparseMatrix can be converted to a SparseMatrix to permit usage of these sparse libraries.

To summarize, it is recommended to use SparseMatrix whenever possible, and reserve the use of DynamicSparseMatrix to assemble a sparse matrix in cases when a SparseMatrix is not flexible enough. The respective pros/cons of both representations are summarized in the following table:

<table class="manual">
<tr><td></td> <td>SparseMatrix</td><td>DynamicSparseMatrix</td></tr>
<tr><td>memory efficiency</td><td>***</td><td>**</td></tr>
<tr><td>sorted insertion</td><td>***</td><td>***</td></tr>
<tr><td>random insertion \n in sorted inner vector</td><td>**</td><td>**</td></tr>
<tr><td>sorted insertion \n in random inner vector</td><td>-</td><td>***</td></tr>
<tr><td>random insertion</td><td>-</td><td>**</td></tr>
<tr><td>coeff wise unary operators</td><td>***</td><td>***</td></tr>
<tr><td>coeff wise binary operators</td><td>***</td><td>***</td></tr>
<tr><td>matrix products</td><td>***</td><td>**(*)</td></tr>
<tr><td>transpose</td><td>**</td><td>***</td></tr>
<tr><td>redux</td><td>***</td><td>**</td></tr>
<tr><td>*= scalar</td><td>***</td><td>**</td></tr>
<tr><td>Compatibility with high-level solvers \n (TAUCS, Cholmod, SuperLU, UmfPack)</td><td>***</td><td>-</td></tr>
</table>


\b Matrix \b and \b vector \b properties \n

Here \c mat and \c vec represent any sparse-matrix and sparse-vector type, respectively.

<table class="manual">
<tr><td>Standard \n dimensions</td><td>\code
mat.rows()
mat.cols()\endcode</td>
<td>\code
vec.size() \endcode</td>
</tr>
<tr><td>Sizes along the \n inner/outer dimensions</td><td>\code
mat.innerSize()
mat.outerSize()\endcode</td>
<td></td>
</tr>
<tr><td>Number of \n nonzero coefficients</td><td>\code
mat.nonZeros() \endcode</td>
<td>\code
vec.nonZeros() \endcode</td></tr>
</table>
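
For instance, for the 5x5 example matrix of the previous section stored as a column-major SparseMatrix (a minimal sketch, assuming \c mat has been filled with its 8 nonzeros):
\code
SparseMatrix<double> mat(5,5);  // column-major by default
// ... fill mat with the 8 nonzero coefficients of the example above ...
mat.rows();       // 5
mat.cols();       // 5
mat.innerSize();  // 5: the inner dimension is the number of rows for a column-major matrix
mat.outerSize();  // 5: the outer dimension is the number of columns for a column-major matrix
mat.nonZeros();   // 8
\endcode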


\b Iterating \b over \b the \b nonzero \b coefficients \n

Iterating over the coefficients of a sparse matrix can only be done along the storage order, i.e., inner vector by inner vector. Here is an example:
<table class="manual">
<tr><td>
\code
SparseMatrixType mat(rows,cols);
for (int k=0; k<mat.outerSize(); ++k)
  for (SparseMatrixType::InnerIterator it(mat,k); it; ++it)
  {
    it.value();
    it.row();   // row index
    it.col();   // col index (here it is equal to k)
    it.index(); // inner index, here it is equal to it.row()
  }
\endcode
</td><td>
\code
SparseVector<double> vec(size);
for (SparseVector<double>::InnerIterator it(vec); it; ++it)
{
  it.value(); // == vec[ it.index() ]
  it.index();
}
\endcode
</td></tr>
</table>

If the type of the sparse matrix or vector depends on a template parameter, then the \c typename keyword is
required to indicate that \c InnerIterator denotes a type; see \ref TopicTemplateKeyword for details.
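
For instance, a function template summing the coefficients of any sparse matrix type could be written as follows (a minimal sketch; the function name is purely illustrative):
\code
template<typename SparseMatrixType>
typename SparseMatrixType::Scalar sumOfCoeffs(const SparseMatrixType& mat)
{
  typename SparseMatrixType::Scalar sum = 0;  // 'typename' is required: Scalar is a dependent type
  for (int k=0; k<mat.outerSize(); ++k)
    for (typename SparseMatrixType::InnerIterator it(mat,k); it; ++it)  // 'typename' is required here too
      sum += it.value();
  return sum;
}
\endcode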


\section TutorialSparseFilling Filling a sparse matrix

Because of the special storage scheme of a SparseMatrix, adding new nonzero entries can have consequences for performance. For instance, the cost of a purely random insertion into a SparseMatrix is O(nnz), where nnz is the current number of nonzero coefficients. In order to cover all use cases with best efficiency, Eigen provides various mechanisms, from the easiest but slowest, to the fastest but most restrictive.

If you don't have any prior knowledge about the order in which your matrix will be filled, then the best choice is to use a DynamicSparseMatrix. With a DynamicSparseMatrix, you can add or modify any coefficient at any time using the coeffRef(row,col) method. Here is an example:
\code
DynamicSparseMatrix<float> aux(1000,1000);
aux.reserve(estimated_number_of_non_zero); // optional
for (...)
  for each j                       // the j can be random
    for each i interacting with j  // the i can be random
      aux.coeffRef(i,j) += foo(i,j);
\endcode
Then the DynamicSparseMatrix object can be converted to a compact SparseMatrix to be used, e.g., by one of our supported solvers:
\code
SparseMatrix<float> mat(aux);
\endcode

In order to optimize this process, instead of the generic coeffRef(i,j) method one can also use:
  - \code m.insert(i,j) = value; \endcode which assumes the coefficient of coordinate (i,j) does not already exist (otherwise this is a programming error and your program will stop).
  - \code m.insertBack(i,j) = value; \endcode which, in addition to the requirements of insert(), also assumes that the coefficient of coordinate (i,j) will be inserted at the end of the target inner-vector. More precisely, if the matrix m is column major, then the row index of the last nonzero coefficient of the j-th column must be smaller than i.


The SparseMatrix class also supports random insertion via the insert() method. However, it should only be used when the inserted coefficient is nearly the last one of the compact storage array. In practice, this means it should be used only to perform random (or sorted) insertion into the current inner-vector while filling the inner-vectors in increasing order. Moreover, with a SparseMatrix an insertion session must be closed by a call to finalize() before any use of the matrix. Here is an example for a column major matrix:

\code
SparseMatrix<float> mat(1000,1000);
mat.reserve(estimated_number_of_non_zero); // optional
for each j                       // should be in increasing order for performance reasons
  for each i interacting with j  // the i can be random
    mat.insert(i,j) = foo(i,j);
mat.finalize();                  // optional for a DynamicSparseMatrix
\endcode

Finally, the fastest way to fill a SparseMatrix object is to insert the elements in purely increasing order (increasing inner index per outer index, and increasing outer index) using the insertBack() function:

\code
SparseMatrix<float> mat(1000,1000);
mat.reserve(estimated_number_of_non_zero); // optional
for(int j=0; j<1000; ++j)
{
  mat.startVec(j);               // optional for a DynamicSparseMatrix
  for each i interacting with j  // with increasing i
    mat.insertBack(i,j) = foo(i,j);
}
mat.finalize();                  // optional for a DynamicSparseMatrix
\endcode
Note that there is also an insertBackByOuterInner(Index outer, Index inner) function which allows one to write code that is agnostic to the storage order, as illustrated below.
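
As an illustration, here is a sketch of the previous fill loop written in a storage-order-agnostic way; the pattern of inner indices is arbitrary, the only requirement being that they are inserted in increasing order within each outer vector:
\code
SparseMatrix<float,RowMajor> mat(1000,1000);  // the same code works for a column-major matrix
mat.reserve(estimated_number_of_non_zero);    // optional
for (int outer=0; outer<mat.outerSize(); ++outer)
{
  mat.startVec(outer);
  for (int inner=0; inner<mat.innerSize(); inner+=7)             // any increasing sequence of inner indices
    mat.insertBackByOuterInner(outer,inner) = foo(outer,inner);  // foo() as in the previous snippets
}
mat.finalize();
\endcode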

\section TutorialSparseFeatureSet Supported operators and functions

In the following \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented. Moreover, not every combination is allowed; for instance, it is not possible to add two sparse matrices having two different storage orders. On the other hand, it is perfectly fine to evaluate a sparse matrix or expression to a matrix having a different storage order:
\code
SparseMatrixType sm1, sm2, sm3;
sm3 = sm1.transpose() + sm2;                    // invalid, because transpose() changes the storage order
sm3 = SparseMatrixType(sm1.transpose()) + sm2;  // correct, because the temporary is explicitly evaluated to SparseMatrixType's storage order
\endcode

Here are some examples of supported operations:
\code
sm1 *= 0.5;
sm4 = sm1 + sm2 + sm3;        // only if sm1, sm2 and sm3 have the same storage order
sm3 = sm1 * sm2;
dv3 = sm1 * dv2;
dm3 = sm1 * dm2;
dm3 = dm2 * sm1;
sm3 = sm1.cwiseProduct(sm2);  // only if sm1 and sm2 have the same storage order
dv2 = sm1.triangularView<Upper>().solve(dv2);
\endcode

The product of a sparse \em symmetric matrix A with a dense matrix (or vector) d can be optimized by specifying the symmetry of A using selfadjointView:
\code
res = A.selfadjointView<>() * d;       // if all coefficients of A are stored
res = A.selfadjointView<Upper>() * d;  // if only the upper part of A is stored
res = A.selfadjointView<Lower>() * d;  // if only the lower part of A is stored
\endcode


\section TutorialSparseDirectSolvers Using the direct solvers

To solve a sparse problem you currently have to use one or several of the following "unsupported" modules:
- \ref SparseExtra_Module
  - \b solvers: SparseLLT<SparseMatrixType>, SparseLDLT<SparseMatrixType> (\#include <Eigen/SparseExtra>)
  - \b notes: built-in basic LLT and LDLT solvers
- \ref CholmodSupport_Module
  - \b solver: SparseLLT<SparseMatrixType, Cholmod> (\#include <Eigen/CholmodSupport>)
  - \b notes: LLT solving using Cholmod, requires a SparseMatrix object (recommended for symmetric/selfadjoint problems)
- \ref UmfPackSupport_Module
  - \b solver: SparseLU<SparseMatrixType, UmfPack> (\#include <Eigen/UmfPackSupport>)
  - \b notes: LU solving using UmfPack, requires a SparseMatrix object (recommended for square matrices)
- \ref SuperLUSupport_Module
  - \b solver: SparseLU<SparseMatrixType, SuperLU> (\#include <Eigen/SuperLUSupport>)
  - \b notes: LU solving using SuperLU, requires a SparseMatrix object (recommended for square matrices)
- \ref TaucsSupport_Module
  - \b solver: SparseLLT<SparseMatrixType, Taucs> (\#include <Eigen/TaucsSupport>)
  - \b notes: LLT solving using Taucs, requires a SparseMatrix object (not recommended)

\warning Those modules are currently considered to be unsupported because 1) they are not documented, and 2) their API is likely to change in the future.

Here is a typical example:
\code
#include <Eigen/UmfPackSupport>
// ...
SparseMatrix<double> A;
// fill A
VectorXd b, x;
// fill b
// solve Ax = b using UmfPack:
SparseLU<SparseMatrix<double>,UmfPack> lu_of_A(A);
if(!lu_of_A.succeeded()) {
  // decomposition failed
  return;
}
if(!lu_of_A.solve(b,&x)) {
  // solving failed
  return;
}
\endcode

See also the class SparseLLT, class SparseLU, and class SparseLDLT.

\li \b Next: TODO

*/

}