namespace Eigen {

/** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions

This page explains how to solve linear systems, compute various decompositions such as LU,
QR, %SVD, eigendecompositions... After reading this page, don't miss our
\link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions.

\eigenAutoToc

\section TutorialLinAlgBasicSolve Basic linear solving

\b The \b problem: You have a system of equations that you have written as a single matrix equation
    \f[ Ax \: = \: b \f]
where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.

\b The \b solution: You can choose between various decompositions, depending on what your matrix \a A looks like,
and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases
and is a good compromise:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
</tr>
</table>

In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
matrix is of type Matrix3f, this line could have been replaced by:
\code
ColPivHouseholderQR<Matrix3f> dec(A);
Vector3f x = dec.solve(b);
\endcode

Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
depending on your matrix and the trade-off you want to make:
<table class="manual">
    <tr>
        <th>Decomposition</th>
        <th>Method</th>
        <th>Requirements<br/>on the matrix</th>
        <th>Speed<br/> (small-to-medium)</th>
        <th>Speed<br/> (large)</th>
        <th>Accuracy</th>
    </tr>
    <tr>
        <td>PartialPivLU</td>
        <td>partialPivLu()</td>
        <td>Invertible</td>
        <td>++</td>
        <td>++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>FullPivLU</td>
        <td>fullPivLu()</td>
        <td>None</td>
        <td>-</td>
        <td>- -</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>HouseholderQR</td>
        <td>householderQr()</td>
        <td>None</td>
        <td>++</td>
        <td>++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>ColPivHouseholderQR</td>
        <td>colPivHouseholderQr()</td>
        <td>None</td>
        <td>+</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>FullPivHouseholderQR</td>
        <td>fullPivHouseholderQr()</td>
        <td>None</td>
        <td>-</td>
        <td>- -</td>
        <td>+++</td>
    </tr>
    <tr class="alt">
        <td>CompleteOrthogonalDecomposition</td>
        <td>completeOrthogonalDecomposition()</td>
        <td>None</td>
        <td>+</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>LLT</td>
        <td>llt()</td>
        <td>Positive definite</td>
        <td>+++</td>
        <td>+++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>LDLT</td>
        <td>ldlt()</td>
        <td>Positive or negative<br/> semidefinite</td>
        <td>+++</td>
        <td>+</td>
        <td>++</td>
    </tr>
    <tr>
        <td>BDCSVD</td>
        <td>bdcSvd()</td>
        <td>None</td>
        <td>-</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr class="alt">
        <td>JacobiSVD</td>
        <td>jacobiSvd()</td>
        <td>None</td>
        <td>-</td>
        <td>- - -</td>
        <td>+++</td>
    </tr>
</table>
To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink.

All of these decompositions offer a solve() method that works as in the above example.

For example, if your matrix is positive definite, the above table says that a very good
choice is the LLT or LDLT decomposition. Here's an example, which also demonstrates that using a general
matrix (not just a vector) as the right-hand side is possible:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveLDLT.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
</tr>
</table>

For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing all the decompositions
supported by Eigen, see our special page on \ref TopicLinearAlgebraDecompositions "this topic".

\section TutorialLinAlgSolutionExists Checking if a solution really exists

Only you know what error margin you want to allow for a solution to be considered valid.
So Eigen lets you do this computation for yourself, if you want to, as in this example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExComputeSolveError.cpp </td>
  <td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
</tr>
</table>

\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors

You need an eigendecomposition here; see the available decompositions on \ref TopicLinearAlgebraDecompositions "this page".
Make sure to check if your matrix is self-adjoint, as is often the case in these problems. Here's an example using
SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.

The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is
very rare. The call to info() is to check for this possibility.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
  <td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
</tr>
</table>

\section TutorialLinAlgInverse Computing inverse and determinant

First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
is invertible.

However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.

While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4) this
allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgInverseDeterminant.cpp </td>
  <td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
</tr>
</table>

\section TutorialLinAlgLeastsquares Least squares solving

The most accurate method for least squares solving is an SVD decomposition.
Eigen provides two implementations.
The recommended one is the BDCSVD class, which scales well for large problems
and automatically falls back to the JacobiSVD class for smaller problems.
For both classes, the solve() method does least-squares solving.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSVDSolve.cpp </td>
  <td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr>
</table>

Other methods, potentially faster but less reliable, are a Cholesky decomposition of the
normal matrix or a QR decomposition. Our page on \link LeastSquares least squares solving \endlink
has more details.


\section TutorialLinAlgSeparateComputation Separating the computation from the construction

In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
There are however situations where you might want to separate these two things, for example if you don't know,
at the time of the construction, the matrix that you will want to decompose; or if you want to reuse an existing
decomposition object.

What makes this possible is that:
\li all decompositions have a default constructor,
\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
    on an already-computed decomposition, reinitializing it.

For example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgComputeTwice.cpp </td>
  <td>\verbinclude TutorialLinAlgComputeTwice.out </td>
</tr>
</table>

Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
passing the size to the decomposition constructor, as in this example:
\code
HouseholderQR<MatrixXf> qr(50,50);
MatrixXf A = MatrixXf::Random(50,50);
qr.compute(A); // no dynamic memory allocation
\endcode

\section TutorialLinAlgRankRevealing Rank-revealing decompositions

Certain decompositions are rank-revealing, i.e. are able to compute the rank of a matrix. These are typically
also the decompositions that behave best in the face of a non-full-rank matrix (which in the square case means a
singular matrix). On \ref TopicLinearAlgebraDecompositions "this table" you can see for all our decompositions
whether they are rank-revealing or not.

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the
case with FullPivLU:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgRankRevealing.cpp </td>
  <td>\verbinclude TutorialLinAlgRankRevealing.out </td>
</tr>
</table>

Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know what the right threshold is for your application. You can set this by calling setThreshold()
on your decomposition object before calling rank() or any other method that needs to use such a threshold.
The decomposition itself, i.e. the compute() method, is independent of the threshold. You don't need to recompute the
decomposition after you've changed the threshold.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSetThreshold.cpp </td>
  <td>\verbinclude TutorialLinAlgSetThreshold.out </td>
</tr>
</table>

*/

}