Manipulating and solving sparse problems involves various modules which are summarized below:
Module | Header file | Contents |
---|---|---|
SparseCore | #include <Eigen/SparseCore> | SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers) |
SparseCholesky | #include <Eigen/SparseCholesky> | Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems |
IterativeLinearSolvers | #include <Eigen/IterativeLinearSolvers> | Iterative solvers to solve large general linear square problems (including self-adjoint positive definite problems) |
| #include <Eigen/Sparse> | Includes all the above modules |
In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero. In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.
The SparseMatrix class
The class SparseMatrix is the main sparse matrix representation of Eigen's sparse module; it offers high performance and low memory usage. It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme. It consists of four compact arrays:
- Values: stores the coefficient values of the non-zeros.
- InnerIndices: stores the row (resp. column) indices of the non-zeros.
- OuterStarts: stores, for each column (resp. row), the index of the first non-zero in the previous two arrays.
- InnerNNZs: stores the number of non-zeros of each column (resp. row).

The word inner refers to an inner vector, that is, a column for a column-major matrix or a row for a row-major matrix. The word outer refers to the other direction.

This storage scheme is best explained with an example. The following matrix
0 | 3 | 0 | 0 | 0 |
22 | 0 | 0 | 0 | 17 |
7 | 5 | 0 | 1 | 0 |
0 | 0 | 0 | 0 | 0 |
0 | 0 | 14 | 0 | 8 |
and one of its possible sparse, column-major representations:
Values: | 22 | 7 | _ | 3 | 5 | 14 | _ | _ | 1 | _ | 17 | 8 |
InnerIndices: | 1 | 2 | _ | 0 | 2 | 4 | _ | _ | 2 | _ | 1 | 4 |
OuterStarts: | 0 | 3 | 5 | 8 | 10 | 12 |
InnerNNZs: | 2 | 2 | 1 | 1 | 2 |
Currently the elements of a given inner vector are guaranteed to be always sorted by increasing inner indices. The "_" entries indicate available free space to quickly insert new elements. Assuming no reallocation is needed, the insertion of a random element is therefore O(nnz_j), where nnz_j is the number of non-zeros of the respective inner vector. On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient, since it only requires incrementing the respective InnerNNZs entry, which is an O(1) operation.
The case where no empty space is available is a special case, referred to as the compressed mode. It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS). Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function. In this case, one can remark that the InnerNNZs array is redundant with OuterStarts because of the equality InnerNNZs[j] = OuterStarts[j+1] - OuterStarts[j]. Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer.
It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs.
The results of Eigen's operations always produce compressed sparse matrices. On the other hand, the insertion of a new element into a SparseMatrix converts the latter to the uncompressed mode.
Here is the previous matrix represented in compressed mode:
Values: | 22 | 7 | 3 | 5 | 14 | 1 | 17 | 8 |
InnerIndices: | 1 | 2 | 0 | 2 | 4 | 2 | 1 | 4 |
OuterStarts: | 0 | 2 | 4 | 5 | 6 | 8 |
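To make the mapping between the matrix and these arrays concrete, the following minimal sketch assembles the example matrix, compresses it, and prints the raw buffers through the valuePtr(), innerIndexPtr(), and outerIndexPtr() accessors; the output should match the Values, InnerIndices, and OuterStarts tables above:

```cpp
#include <Eigen/Sparse>
#include <iostream>

int main()
{
  // Assemble the 5x5 example matrix, column by column.
  Eigen::SparseMatrix<double> mat(5,5);
  mat.insert(1,0) = 22; mat.insert(2,0) = 7;
  mat.insert(0,1) = 3;  mat.insert(2,1) = 5;
  mat.insert(4,2) = 14;
  mat.insert(2,3) = 1;
  mat.insert(1,4) = 17; mat.insert(4,4) = 8;
  mat.makeCompressed(); // switch to the compressed (CCS) form

  // Values and InnerIndices hold one entry per stored non-zero...
  for (int k = 0; k < mat.nonZeros(); ++k)
    std::cout << mat.valuePtr()[k] << " (row " << mat.innerIndexPtr()[k] << ")\n";
  // ...while OuterStarts holds outerSize()+1 entries in compressed mode.
  for (int j = 0; j <= mat.outerSize(); ++j)
    std::cout << mat.outerIndexPtr()[j] << ' ';
  std::cout << '\n';
  return 0;
}
```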
A SparseVector is a special case of a SparseMatrix where only the Values and InnerIndices arrays are stored. There is no notion of compressed/uncompressed mode for a SparseVector.
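For instance, a SparseVector can be filled and queried as in the following minimal sketch:

```cpp
#include <Eigen/Sparse>

int main()
{
  Eigen::SparseVector<double> vec(1000);
  vec.insert(3) = 1.5;                 // insert a new non-zero entry
  vec.coeffRef(3) += 0.5;              // read-write access to an existing entry
  return vec.nonZeros() == 1 ? 0 : 1;  // a single coefficient is stored
}
```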
Before describing each individual class, let's start with the following typical example: solving the Laplace equation on a regular 2D grid using a finite difference scheme and Dirichlet boundary conditions. Such a problem can be mathematically expressed as a linear problem of the form $Ax = b$, where $x$ is the vector of $m$ unknowns (in our case, the values of the pixels), $b$ is the right hand side vector resulting from the boundary conditions, and $A$ is an $m \times m$ matrix containing only a few non-zero elements resulting from the discretization of the Laplacian operator.
```cpp
#include <Eigen/Sparse>
#include <vector>

typedef Eigen::SparseMatrix<double> SpMat; // declares a column-major sparse matrix type of double
typedef Eigen::Triplet<double> T;

void buildProblem(std::vector<T>& coefficients, Eigen::VectorXd& b, int n);
void saveAsBitmap(const Eigen::VectorXd& x, int n, const char* filename);

int main(int argc, char** argv)
{
  int n = 300;  // size of the image
  int m = n*n;  // number of unknowns (=number of pixels)

  // Assembly:
  std::vector<T> coefficients;  // list of non-zeros coefficients
  Eigen::VectorXd b(m);         // the right hand side vector resulting from the constraints
  buildProblem(coefficients, b, n);

  SpMat A(m,m);
  A.setFromTriplets(coefficients.begin(), coefficients.end());

  // Solving:
  Eigen::SimplicialCholesky<SpMat> chol(A);  // performs a Cholesky factorization of A
  Eigen::VectorXd x = chol.solve(b);         // use the factorization to solve for the given right hand side

  // Export the result to a file:
  saveAsBitmap(x, n, argv[1]);

  return 0;
}
```
In this example, we start by defining a column-major sparse matrix type of double, SparseMatrix<double>, and a triplet list of the same scalar type, Triplet<double>. A triplet is a simple object representing a non-zero entry as the triplet (row index, column index, value).
In the main function, we declare a list coefficients of triplets (as a std::vector) and the right hand side vector b, which are filled by the buildProblem function. The raw and flat list of non-zero entries is then converted to a true SparseMatrix object A. Note that the elements of the list do not have to be sorted, and possible duplicate entries will be summed up.
The last step consists of effectively solving the assembled problem. Since the resulting matrix A is symmetric by construction, we can perform a direct Cholesky factorization via the SimplicialLDLT class, which behaves like its LDLT counterpart for dense objects. The resulting vector x contains the pixel values as a 1D array, which is saved to a jpeg file by the saveAsBitmap function.
Describing the buildProblem and save functions is out of the scope of this tutorial. They are given here for the curious and for reproducibility purposes.
Matrix and vector properties
The SparseMatrix and SparseVector classes take three template arguments: the scalar type (e.g., double), the storage order (ColMajor or RowMajor, the default is ColMajor), and the inner index type (default is int).
As with dense Matrix objects, constructors take the size of the object. Here are some examples:
```cpp
SparseMatrix<std::complex<float> > mat(1000,2000);  // declares a 1000x2000 column-major compressed sparse matrix of complex<float>
SparseMatrix<double,RowMajor> mat(1000,2000);       // declares a 1000x2000 row-major compressed sparse matrix of double
SparseVector<std::complex<float> > vec(1000);       // declares a column sparse vector of complex<float> of size 1000
SparseVector<double,RowMajor> vec(1000);            // declares a row sparse vector of double of size 1000
```
In the rest of the tutorial, mat and vec represent any sparse-matrix and sparse-vector objects, respectively.
The dimensions of a matrix can be queried using the following functions:
Standard dimensions | mat.rows() mat.cols() | vec.size() |
Sizes along the inner/outer dimensions | mat.innerSize() mat.outerSize() | |
Number of non zero coefficients | mat.nonZeros() | vec.nonZeros() |
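For instance, on a freshly constructed column-major matrix these accessors behave as follows (an illustrative sketch; the comments show the expected values):

```cpp
SparseMatrix<double> mat(1000,2000); // column-major by default
mat.rows();      // 1000
mat.cols();      // 2000
mat.innerSize(); // 1000: the inner dimension is the number of rows here
mat.outerSize(); // 2000: the outer dimension is the number of columns here
mat.nonZeros();  // 0: no coefficient has been inserted yet
```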
Iterating over the nonzero coefficients
Random access to the elements of a sparse object can be done through the coeffRef(i,j) function. However, this function involves a quite expensive binary search. In most cases, one only wants to iterate over the non-zero elements. This is achieved by a standard loop over the outer dimension, and then iterating over the non-zeros of the current inner vector via an InnerIterator. Thus, the non-zero entries have to be visited in the same order as the storage order. Here is an example:
```cpp
SparseMatrix<double> mat(rows,cols);
for (int k=0; k<mat.outerSize(); ++k)
  for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
  {
    it.value();
    it.row();   // row index
    it.col();   // col index (here it is equal to k)
    it.index(); // inner index, here it is equal to it.row()
  }
```

```cpp
SparseVector<double> vec(size);
for (SparseVector<double>::InnerIterator it(vec); it; ++it)
{
  it.value(); // == vec[ it.index() ]
  it.index();
}
```
For a writable expression, the referenced value can be modified using the valueRef() function. If the type of the sparse matrix or vector depends on a template parameter, then the typename keyword is required to indicate that InnerIterator denotes a type; see The template and typename keywords in C++ for details.
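For example, a generic function that rescales every stored coefficient could be sketched as follows (the function name scaleNonZeros is ours; note the typename before InnerIterator and the use of valueRef() for write access):

```cpp
#include <Eigen/Sparse>

// Multiply every stored coefficient of a sparse matrix by a scalar.
template<typename SparseMatrixType>
void scaleNonZeros(SparseMatrixType& mat, typename SparseMatrixType::Scalar s)
{
  for (int k = 0; k < mat.outerSize(); ++k)
    for (typename SparseMatrixType::InnerIterator it(mat,k); it; ++it)
      it.valueRef() *= s; // valueRef() returns a writable reference
}
```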
Because of the special storage scheme of a SparseMatrix, special care has to be taken when adding new non-zero entries. For instance, the cost of a single purely random insertion into a SparseMatrix is O(nnz), where nnz is the current number of non-zero coefficients.
The simplest way to create a sparse matrix while guaranteeing good performance is thus to first build a list of so-called triplets, and then convert it to a SparseMatrix.
Here is a typical usage example:
```cpp
typedef Eigen::Triplet<double> T;
std::vector<T> tripletList;
tripletList.reserve(estimation_of_entries);
for(...)
{
  // ...
  tripletList.push_back(T(i,j,v_ij));
}
SparseMatrixType mat(rows,cols);
mat.setFromTriplets(tripletList.begin(), tripletList.end());
// mat is ready to go!
```
The std::vector of triplets might contain the elements in arbitrary order, and might even contain duplicated elements that will be summed up by setFromTriplets(). See the SparseMatrix::setFromTriplets() function and class Triplet for more details.
In some cases, however, slightly higher performance and lower memory consumption can be reached by directly inserting the non-zeros into the destination matrix. A typical scenario of this approach is illustrated below:
```cpp
1: SparseMatrix<double> mat(rows,cols);         // default is column major
2: mat.reserve(VectorXi::Constant(cols,6));
3: for each i,j such that v_ij != 0
4:   mat.insert(i,j) = v_ij;                    // alternative: mat.coeffRef(i,j) += v_ij;
5: mat.makeCompressed();                        // optional
```
- Line 2 reserves room for a given number of non-zeros per inner vector. This estimate can also be provided per inner vector, via an object returning the expected number of non-zeros of the j-th inner vector (e.g., a VectorXi or std::vector<int>). If only a rough estimate of the number of non-zeros per inner vector can be obtained, it is highly recommended to overestimate it rather than the opposite. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
- Line 4 performs a sorted insertion. The ideal case is when the j-th column is not full and contains non-zeros whose inner indices are smaller than i. In this case, this operation boils down to a trivial O(1) operation.
- When calling insert(i,j), the element (i,j) must not already exist; otherwise, use the coeffRef(i,j) method that allows to, e.g., accumulate values. This method first performs a binary search and finally calls insert(i,j) if the element does not already exist. It is more flexible than insert() but also more costly.

Solving linear problems

Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries. They are summarized in the following table:
Class | Module | Solver kind | Matrix kind | Features related to performance | Dependencies, License | Notes |
---|---|---|---|---|---|---|
SimplicialLLT | SparseCholesky | Direct LLt factorization | SPD | Fill-in reducing | built-in, LGPL | SimplicialLDLT is often preferable |
SimplicialLDLT | SparseCholesky | Direct LDLt factorization | SPD | Fill-in reducing | built-in, LGPL | Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.) |
ConjugateGradient | IterativeLinearSolvers | Classic iterative CG | SPD | Preconditionning | built-in, LGPL | Recommended for large symmetric problems (e.g., 3D Poisson eq.) |
BiCGSTAB | IterativeLinearSolvers | Iterative stabilized bi-conjugate gradient | Square | Preconditionning | built-in, LGPL | Might not always converge |
PastixLLT PastixLDLT PastixLU | PaStiXSupport | Direct LLt, LDLt, LU factorizations | SPD SPD Square | Fill-in reducing, Leverage fast dense algebra, Multithreading | Requires the PaStiX package, CeCILL-C | optimized for tough problems and symmetric patterns |
CholmodSupernodalLLT | CholmodSupport | Direct LLt factorization | SPD | Fill-in reducing, Leverage fast dense algebra | Requires the SuiteSparse package, GPL | |
UmfPackLU | UmfPackSupport | Direct LU factorization | Square | Fill-in reducing, Leverage fast dense algebra | Requires the SuiteSparse package, GPL | |
SuperLU | SuperLUSupport | Direct LU factorization | Square | Fill-in reducing, Leverage fast dense algebra | Requires the SuperLU library, (BSD-like) |
Here, SPD means symmetric positive definite.
All these solvers follow the same general concept. Here is a typical and general example:
```cpp
#include <Eigen/RequiredModuleName>
// ...
SparseMatrix<double> A;
// fill A
VectorXd b, x;
// fill b
// solve Ax = b
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
if(solver.info()!=Success) {
  // decomposition failed
  return;
}
x = solver.solve(b);
if(solver.info()!=Success) {
  // solving failed
  return;
}
// solve for another right hand side:
x1 = solver.solve(b1);
```
For SPD solvers, a second optional template argument allows specifying which triangular part has to be used, e.g.:
```cpp
#include <Eigen/IterativeLinearSolvers>

ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
x = solver.compute(A).solve(b);
```
In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
In the case where multiple problems with the same sparsity pattern have to be solved, the "compute" step can be decomposed as follows:
```cpp
SolverClassName<SparseMatrix<double> > solver;
solver.analyzePattern(A);  // for this step the numerical values of A are not used
solver.factorize(A);
x1 = solver.solve(b1);
x2 = solver.solve(b2);
...
A = ...;                   // modify the values of the nonzeros of A, the nonzeros pattern must stay unchanged
solver.factorize(A);
x1 = solver.solve(b1);
x2 = solver.solve(b2);
...
```
The compute() method is equivalent to calling both analyzePattern() and factorize().
Finally, each solver provides some specific features, such as determinant, access to the factors, controls of the iterations, and so on. More details are available in the documentation of the respective classes.
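As an example of such solver-specific controls, an iterative solver like ConjugateGradient exposes the iteration count and tolerance; here is a small sketch assuming A, b, and x are declared as in the previous snippets:

```cpp
Eigen::ConjugateGradient<Eigen::SparseMatrix<double> > cg;
cg.setMaxIterations(500);  // stop after at most 500 iterations
cg.setTolerance(1e-8);     // relative residual threshold
cg.compute(A);
x = cg.solve(b);
std::cout << "#iterations: "     << cg.iterations() << "\n"
          << "estimated error: " << cg.error()      << std::endl;
```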
Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices. In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented. In the following, sm denotes a sparse matrix, sv a sparse vector, dm a dense matrix, and dv a dense vector.
Sparse expressions support most of the unary and binary coefficient wise operations:
```cpp
sm1.real() sm1.imag() -sm1 0.5*sm1
sm1+sm2 sm1-sm2 sm1.cwiseProduct(sm2)
```
However, a strong restriction is that the storage orders must match. For instance, in the following example:
```cpp
sm4 = sm1 + sm2 + sm3;
```
sm1, sm2, and sm3 must all be row-major or all column-major. On the other hand, there is no restriction on the target matrix sm4. For instance, this means that for computing $A^T + A$, the matrix $A^T$ must be evaluated into a temporary matrix of compatible storage order:
```cpp
SparseMatrix<double> A, B;
B = SparseMatrix<double>(A.transpose()) + A;
```
Binary coefficient wise operators can also mix sparse and dense expressions:
```cpp
sm2 = sm1.cwiseProduct(dm1);
dm2 = sm1 + dm1;
```
Sparse expressions also support transposition:
```cpp
sm1 = sm2.transpose();
sm1 = sm2.adjoint();
```
However, there is no transposeInPlace() method.
Eigen supports various kinds of sparse matrix products, which are summarized below:
sparse-dense products:
```cpp
dv2 = sm1 * dv1;
dm2 = dm1 * sm1.adjoint();
dm2 = 2. * sm1 * dm1;
```
sparse-sparse products:
```cpp
sm3 = sm1 * sm2;
sm3 = 4 * sm1.adjoint() * sm2;
```
A second form prunes, on the fly, numerical zeros or values smaller than a given threshold:
```cpp
sm3 = (sm1 * sm2).pruned();             // removes numerical zeros
sm3 = (sm1 * sm2).pruned(ref);          // removes elements much smaller than ref
sm3 = (sm1 * sm2).pruned(ref,epsilon);  // removes elements smaller than ref*epsilon
```
sparse-permutation products:
```cpp
PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = P * sm1;
sm2 = sm1 * P.inverse();
sm2 = sm1.transpose() * P;
```
Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right hand side:
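For instance, a triangular solve against a dense vector might look like the following sketch (assuming sm1 stores the relevant triangular part and dv1 is a compatible dense vector):

```cpp
dv2 = sm1.triangularView<Lower>().solve(dv1);   // solve using the lower triangle of sm1
sm1.triangularView<Upper>().solveInPlace(dv2);  // in-place variant using the upper triangle
```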
The selfadjointView() function permits various operations:
```cpp
PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = A.selfadjointView<Upper>().twistedBy(P);                           // compute P S P' from the upper triangular part of A, and make it a full matrix
sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P);  // compute P S P' from the lower triangular part of A, and then only compute the lower part
```
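It also enables products that read only one stored triangular half; a minimal sketch (sm1, dv1, dv2 as above, with only one triangle of sm1 stored):

```cpp
dv2 = sm1.selfadjointView<Upper>() * dv1;  // product using only the stored upper triangular part
dv2 = sm1.selfadjointView<Lower>() * dv1;  // same, when only the lower part is stored
```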
Sparse matrices do not yet support the addressing of arbitrary sub-matrices. Currently, one can only reference a set of contiguous inner vectors, i.e., a set of contiguous rows for a row-major matrix, or a set of contiguous columns for a column-major matrix:
```cpp
sm1.innerVector(j);       // returns an expression of the j-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
sm1.innerVectors(j, nb);  // returns an expression of the nb columns (resp. rows) starting from the j-th column (resp. row)
                          // of the matrix if sm1 is col-major (resp. row-major)
sm1.middleRows(j, nb);    // for row major matrices only, get a range of nb rows
sm1.middleCols(j, nb);    // for column major matrices only, get a range of nb columns
```