namespace Eigen {

/** \page TutorialSparse Tutorial - Getting started with the sparse module
    \ingroup Tutorial

<div class="eimainmenu">\ref index "Overview"
  | \ref TutorialCore "Core features"
  | \ref TutorialGeometry "Geometry"
  | \ref TutorialAdvancedLinearAlgebra "Advanced linear algebra"
  | \b Sparse \b matrix
</div>

\b Table \b of \b contents \n
  - \ref TutorialSparseIntro
  - \ref TutorialSparseFilling
  - \ref TutorialSparseFeatureSet
  - \ref TutorialSparseDirectSolvers
<hr>

\section TutorialSparseIntro Sparse matrix representations

In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. Both in terms of memory consumption and performance, it is fundamental to use an adequate representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.


\b Declaring \b sparse \b matrices \b and \b vectors \n
The SparseMatrix class is the main sparse matrix representation of Eigen's sparse module; it offers high performance, low memory usage, and compatibility with most sparse linear algebra packages. Because of its limited flexibility, we also provide a DynamicSparseMatrix variant tailored for low-level sparse matrix assembly. Both of them can be either row major or column major:

\code
#include <Eigen/Sparse>
SparseMatrix<std::complex<float> > m1(1000,2000);        // declares a 1000x2000 col-major compressed sparse matrix of complex<float>
SparseMatrix<double,RowMajor> m2(1000,2000);             // declares a 1000x2000 row-major compressed sparse matrix of double
DynamicSparseMatrix<std::complex<float> > m3(1000,2000); // declares a 1000x2000 col-major dynamic sparse matrix of complex<float>
DynamicSparseMatrix<double,RowMajor> m4(1000,2000);      // declares a 1000x2000 row-major dynamic sparse matrix of double
\endcode

Although a sparse matrix could also be used to represent a sparse vector, for that purpose it is better to use the specialized SparseVector class:
\code
SparseVector<std::complex<float> > v1(1000); // declares a column sparse vector of complex<float> of size 1000
SparseVector<double,RowMajor> v2(1000);      // declares a row sparse vector of double of size 1000
\endcode
Note that here the size of a vector denotes its dimension and not the number of nonzero coefficients, which is initially zero (as for sparse matrices).
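
To make this concrete, here is a minimal sketch using only the size() and nonZeros() functions listed later in this tutorial:
\code
SparseVector<double> v(1000); // a sparse vector of dimension 1000
int dim = v.size();           // 1000: the dimension of the vector
int nnz = v.nonZeros();       // 0: no coefficient has been stored yet
\endcode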


\b Overview \b of \b the \b internal \b sparse \b storage \n
In order to get the best of Eigen's sparse objects, it is important to have a rough idea of the way they are internally stored. The SparseMatrix class implements the common and generic Compressed Column/Row Storage scheme. It consists of three compact arrays storing the values with their respective inner coordinates, and pointer indices to the beginning of each outer vector. For instance, let \c m be a column-major sparse matrix. Then its nonzero coefficients are sequentially stored in memory in column-major order (\em values). A second array of integers stores the respective row index of each coefficient (\em inner \em indices). Finally, a third array of integers, whose length is the number of columns plus one, stores the index in the previous arrays of the first element of each column (\em outer \em indices).

Here is an example, with the matrix:
<table>
<tr><td>0</td><td>3</td><td>0</td><td>0</td><td>0</td></tr>
<tr><td>22</td><td>0</td><td>0</td><td>0</td><td>17</td></tr>
<tr><td>7</td><td>5</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>0</td><td>14</td><td>0</td><td>8</td></tr>
</table>

and its internal representation using the Compressed Column Storage format:
<table>
<tr><td>Values:</td>        <td>22</td><td>7</td><td>3</td><td>5</td><td>14</td><td>1</td><td>17</td><td>8</td></tr>
<tr><td>Inner indices:</td> <td> 1</td><td>2</td><td>0</td><td>2</td><td> 4</td><td>2</td><td> 1</td><td>4</td></tr>
</table>
Outer indices:<table><tr><td>0</td><td>2</td><td>4</td><td>5</td><td>6</td><td>\em 8 </td></tr></table>

As you can guess, here the storage order matters even more than for dense matrices. We will therefore often make a clear distinction between the \em inner and the \em outer dimensions. For instance, it is easy to loop over the coefficients of an \em inner \em vector (e.g., a column of a column-major matrix), but completely inefficient to do the same for an \em outer \em vector (e.g., a row of a column-major matrix).
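
To make this distinction concrete, here is a small sketch based on the matrices declared above and on the innerSize()/outerSize() functions listed further below:
\code
SparseMatrix<std::complex<float> > m1(1000,2000); // column-major: the inner vectors are the columns
SparseMatrix<double,RowMajor> m2(1000,2000);      // row-major: the inner vectors are the rows
// m1.innerSize() == 1000 (length of a column),  m1.outerSize() == 2000 (number of columns)
// m2.innerSize() == 2000 (length of a row),     m2.outerSize() == 1000 (number of rows)
\endcode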

The SparseVector class implements the same compressed storage scheme but, of course, without any outer index buffer.

Since all nonzero coefficients of such a matrix are sequentially stored in memory, random insertion of new nonzeros can be extremely costly. To overcome this limitation, Eigen's sparse module provides a DynamicSparseMatrix class which is basically implemented as an array of SparseVector. In other words, a DynamicSparseMatrix is a SparseMatrix where the values and inner-indices arrays have been split into multiple small and resizable arrays. Assuming the number of nonzeros per inner vector is relatively low, this slight modification allows for very fast random insertion at the cost of a slight memory overhead and a loss of compatibility with other sparse libraries used by some of our high-level solvers. Note that the main memory overhead comes from the extra memory preallocated by each inner vector to avoid an expensive memory reallocation at every insertion.

To summarize, it is recommended to use a SparseMatrix whenever possible, and to reserve the use of DynamicSparseMatrix for matrix assembly purposes when a SparseMatrix is not flexible enough. The respective pros/cons of both representations are summarized in the following table:

<table>
<tr><td></td> <td>SparseMatrix</td><td>DynamicSparseMatrix</td></tr>
<tr><td>memory usage</td><td>***</td><td>**</td></tr>
<tr><td>sorted insertion</td><td>***</td><td>***</td></tr>
<tr><td>random insertion \n in sorted inner vector</td><td>**</td><td>**</td></tr>
<tr><td>sorted insertion \n in random inner vector</td><td>-</td><td>***</td></tr>
<tr><td>random insertion</td><td>-</td><td>**</td></tr>
<tr><td>coeff wise unary operators</td><td>***</td><td>***</td></tr>
<tr><td>coeff wise binary operators</td><td>***</td><td>***</td></tr>
<tr><td>matrix products</td><td>***</td><td>**(*)</td></tr>
<tr><td>transpose</td><td>**</td><td>***</td></tr>
<tr><td>redux</td><td>***</td><td>**</td></tr>
<tr><td>*= scalar</td><td>***</td><td>**</td></tr>
<tr><td>Compatibility with high-level solvers \n (TAUCS, Cholmod, SuperLU, UmfPack)</td><td>***</td><td>-</td></tr>
</table>


\b Matrix \b and \b vector \b properties \n

Here mat and vec represent an arbitrary sparse matrix and sparse vector type, respectively.

<table>
<tr><td>Standard \n dimensions</td><td>\code
mat.rows()
mat.cols()\endcode</td>
<td>\code
vec.size() \endcode</td>
</tr>
<tr><td>Sizes along the \n inner/outer dimensions</td><td>\code
mat.innerSize()
mat.outerSize()\endcode</td>
<td></td>
</tr>
<tr><td>Number of non \n zero coefficients</td><td>\code
mat.nonZeros() \endcode</td>
<td>\code
vec.nonZeros() \endcode</td></tr>
</table>


\b Iterating \b over \b the \b nonzero \b coefficients \n

Iterating over the coefficients of a sparse matrix can only be done in the same order as the storage order. Here is an example:
<table>
<tr><td>
\code
SparseMatrixType mat(rows,cols);
for (int k=0; k<mat.outerSize(); ++k)
  for (SparseMatrixType::InnerIterator it(mat,k); it; ++it)
  {
    it.value();
    it.row();   // row index
    it.col();   // col index (here it is equal to k)
    it.index(); // inner index, here it is equal to it.row()
  }
\endcode
</td><td>
\code
SparseVector<double> vec(size);
for (SparseVector<double>::InnerIterator it(vec); it; ++it)
{
  it.value(); // == vec[ it.index() ]
  it.index();
}
\endcode
</td></tr>
</table>
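
As a usage example, this iterator interface is all that is needed to implement simple reductions by hand; here is a sketch, reusing only the API shown above, that sums all the stored coefficients of a column-major sparse matrix:
\code
SparseMatrix<double> mat(rows,cols);
// ... fill mat ...
double sum = 0;
for (int k=0; k<mat.outerSize(); ++k)                           // loop over the columns (outer dimension)
  for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it) // loop over the nonzeros of column k
    sum += it.value();
\endcode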


\section TutorialSparseFilling Filling a sparse matrix

A DynamicSparseMatrix object can be set and updated just like any dense matrix using the coeffRef(row,col) method. If the coefficient is not stored yet, then it will be inserted into the matrix. Here is an example:
\code
DynamicSparseMatrix<float> aux(1000,1000);
for (...)
  for each i
    for each j interacting with i
      aux.coeffRef(i,j) += foo(o1,o2);
SparseMatrix<float> mat(aux); // convert the DynamicSparseMatrix to a SparseMatrix
\endcode

Sometimes, however, we simply want to set all the coefficients of a matrix before using it through standard matrix operations (addition, product, etc.). In that case it is faster to use the low-level startFill()/fill()/fillrand()/endFill() interface. Even though this interface is available for both sparse matrix types, their respective restrictions slightly differ from one representation to the other. In all cases, a call to startFill() sets the matrix to zero, and the fill*() functions will fail if the coefficient already exists.

As a first difference, for a SparseMatrix the fill*() functions can only be called inside a startFill()/endFill() pair, and no other member functions are allowed during the filling process, i.e., until endFill() has been called. On the other hand, a DynamicSparseMatrix is always in a stable state, and its startFill()/endFill() functions are provided for compatibility purposes only.

Another difference is that the fill*() functions must be called with increasing outer indices for a SparseMatrix, while they can be random for a DynamicSparseMatrix.

Finally, the fill() function assumes the coefficients are inserted in sorted order within each inner vector, while the fillrand() variant allows random insertions (the outer indices must still be sorted for a SparseMatrix).

Some examples:

1 - If you can set the coefficients in exactly the same order as the storage order, then the matrix can be filled directly and very efficiently. Here is an example initializing a random, row-major sparse matrix:
\code
SparseMatrix<double,RowMajor> m(rows,cols);
m.startFill(rows*cols*percent_of_non_zero); // estimate of the number of nonzeros (optional)
for (int i=0; i<rows; ++i)
  for (int j=0; j<cols; ++j)
    if (rand() < RAND_MAX*percent_of_non_zero) // keep this coefficient with probability percent_of_non_zero
      m.fill(i,j) = rand();
m.endFill();
\endcode

2 - If you can set each outer vector in a consistent order, but do not have sorted data for each inner vector, then you can use fillrand() instead of fill():
\code
SparseMatrix<double,RowMajor> m(rows,cols);
m.startFill(rows*cols*percent_of_non_zero); // estimate of the number of nonzeros (optional)
for (int i=0; i<rows; ++i)
  for (int k=0; k<cols*percent_of_non_zero; ++k)
    m.fillrand(i, rand()%cols) = rand();     // insert at a random column of row i
m.endFill();
\endcode
The fillrand() function performs a sorted insertion into an array sequentially stored in memory and requires a copy of all coefficients stored after its target position. This method is therefore reserved for matrices having only a few elements per row/column (up to 50) and works better if the insertions are almost sorted.

3 - Finally, if none of the above solutions is practical for you, then you have to use a RandomSetter which temporarily wraps the matrix into a more flexible hash map allowing completely random accesses:
\code
SparseMatrix<double,RowMajor> m(rows,cols);
{
  RandomSetter<SparseMatrix<double,RowMajor> > setter(m);
  for (int k=0; k<cols*rows*percent_of_non_zero; ++k)
    setter(rand()%rows, rand()%cols) = rand();
}
\endcode
The matrix \c m is set at the destruction of the setter, hence the use of a nested block. This imposed syntax has the advantage of emphasizing the critical section during which m is not valid and cannot be used.


\section TutorialSparseFeatureSet Supported operators and functions

In the following, \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented. Moreover, not all combinations are always possible. For instance, it is not possible to add two sparse matrices having different storage orders. On the other hand, it is perfectly fine to evaluate a sparse matrix/expression to a matrix having a different storage order:
\code
SparseMatrixType sm1, sm2, sm3;
sm3 = sm1.transpose() + sm2;                    // invalid
sm3 = SparseMatrixType(sm1.transpose()) + sm2;  // correct
\endcode

Here are some examples of the supported operations:
\code
sm1 *= 0.5;
sm4 = sm1 + sm2 + sm3;   // only if sm1, sm2 and sm3 have the same storage order
sm3 = sm1 * sm2;
dv3 = sm1 * dv2;
dm3 = sm1 * dm2;
dm3 = dm2 * sm1;
sm3 = sm1.cwise() * sm2; // only if sm1 and sm2 have the same storage order
dv2 = sm1.marked<UpperTriangular>().solveTriangular(dv2);
\endcode

The product of a sparse matrix A times a dense matrix/vector dv, with A symmetric, can be optimized by telling Eigen about this symmetry:
\code
res = A.marked<SelfAdjoint>() * dv;                  // if all coefficients of A are stored
res = A.marked<SelfAdjoint|UpperTriangular>() * dv;  // if only the upper part of A is stored
res = A.marked<SelfAdjoint|LowerTriangular>() * dv;  // if only the lower part of A is stored
\endcode


\section TutorialSparseDirectSolvers Using the direct solvers

TODO

\subsection TutorialSparseDirectSolvers_LLT LLT
Cholmod, Taucs.
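
While this part of the tutorial is still to be written, here is a minimal, non-authoritative sketch of the intended usage. It assumes the sparse module's SparseLLT class and its Cholmod backend selector; check the reference documentation for the exact names and for how to enable the backend:
\code
// Assumed API: SparseLLT<MatrixType,Backend> with an in-place solve.
SparseMatrix<double> A(n,n);                    // symmetric positive definite, filled elsewhere
VectorXd b(n);                                  // right-hand side
SparseLLT<SparseMatrix<double>,Cholmod> llt(A); // compute the Cholesky factorization of A
llt.solveInPlace(b);                            // b now contains the solution of A x = b
\endcode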

\subsection TutorialSparseDirectSolvers_LDLT LDLT


\subsection TutorialSparseDirectSolvers_LU LU
SuperLU, UmfPack.

*/

}