Krylov subspace methods and CG
The Krylov subspace
The basis vectors of the Krylov subspace $\mathcal{K}_k(A, b) = \text{span}\{b, Ab, A^2 b, \ldots, A^{k-1} b\}$ are formed by repeated application of a matrix $A$ to a vector $b$.
Krylov subspace methods
Krylov subspace methods are an important family of iterative algorithms for solving $Ax = b$. Let us suppose that $A$ is an invertible $n \times n$ matrix, and our only knowledge of $A$ is its matrix-vector product with an arbitrary vector $v$. Repeated application of $A$, $n$ times on an initial vector $b$, gives us a sequence of vectors $b, Ab, A^2 b, \ldots, A^n b$. Since we are in $n$-dimensional space, we are guaranteed that these $n + 1$ vectors are linearly dependent, and therefore

$$c_0 b + c_1 A b + c_2 A^2 b + \cdots + c_n A^n b = 0$$
for some set of coefficients $c_i$ that are not all zero. Let us now pick the smallest index $k$ such that $c_k \neq 0$, and multiply the above equation by $\frac{1}{c_k} A^{-(k+1)}$, giving

$$A^{-1} b = -\frac{1}{c_k} \left( c_{k+1} b + c_{k+2} A b + \cdots + c_n A^{n-k-1} b \right).$$
This implies that $x = A^{-1} b$ can be found using only repeated applications of $A$, and it also motivates the search for solutions from the Krylov subspace $\mathcal{K}_k(A, b)$.
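As a concrete illustration of this idea, here is a minimal NumPy sketch (the small random symmetric positive definite test matrix and the seed are arbitrary choices for demonstration) that builds the Krylov basis using only matrix-vector products and checks that the exact solution lies in its span:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
# A small symmetric positive definite test matrix (chosen only for illustration)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)

# Build the Krylov basis {b, Ab, A^2 b, ..., A^(n-1) b} using only matrix-vector products
K = np.empty((n, n))
K[:, 0] = b
for k in range(1, n):
    K[:, k] = A @ K[:, k - 1]

# The exact solution x = A^{-1} b should be a linear combination of the basis vectors
x_exact = np.linalg.solve(A, b)
coeffs, *_ = np.linalg.lstsq(K, x_exact, rcond=None)
print(np.allclose(K @ coeffs, x_exact))  # True: x lies in the Krylov subspace
```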
For each iteration $k$ of a Krylov subspace method, we choose the "best" linear combination of the Krylov basis vectors $b, Ab, \ldots, A^{k-1} b$ to form an improved solution $x_k$. Different methods give various definitions of "best", for example:
The residual $r_k = b - A x_k$ is orthogonal to the Krylov subspace $\mathcal{K}_k(A, b)$ (Conjugate Gradients).
The residual $r_k$ has minimum norm for $x_k$ in $\mathcal{K}_k(A, b)$ (GMRES and MINRES).
The residual $r_k$ is orthogonal to a different space, $\mathcal{K}_k(A^T, b)$ (BiConjugate Gradients).
Conjugate Gradient Method
Here we will give a brief summary of the CG method; for more details you can consult the text by Golub and Van Loan (Chapter 10).
The CG method applies to symmetric positive definite matrices $A$, and is based on minimising the function

$$\phi(x) = \frac{1}{2} x^T A x - x^T b.$$
If we set $x$ to the solution of $Ax = b$, that is $x = A^{-1} b$, then the value of $\phi$ is at its minimum $\phi(A^{-1} b) = -\frac{1}{2} b^T A^{-1} b$, showing that solving $Ax = b$ and minimising $\phi$ are equivalent.
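This equivalence can be checked numerically. The following sketch (assuming NumPy and SciPy, with a small random symmetric positive definite matrix chosen only for illustration) minimises $\phi$ with a general-purpose optimiser and compares the result with the direct solution of $Ax = b$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # symmetric positive definite test matrix
b = rng.standard_normal(n)

def phi(x):
    # phi(x) = 1/2 x^T A x - x^T b
    return 0.5 * x @ A @ x - x @ b

x_min = minimize(phi, np.zeros(n)).x   # minimiser of phi
x_solve = np.linalg.solve(A, b)        # solution of Ax = b
print(np.allclose(x_min, x_solve, atol=1e-4))  # True (up to optimiser tolerance)
```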
At each iteration $k$ of CG we are concerned with the residual, defined as $r_k = b - A x_k$. If the residual is nonzero, then at each step we wish to find a positive $\alpha_k$ such that $\phi(x_{k-1} + \alpha_k p_k) < \phi(x_{k-1})$, where $p_k$ is the search direction at step $k$. For the classical steepest descent optimisation algorithm the search direction would be the residual, $p_k = r_{k-1}$; however, steepest descent can suffer from convergence problems, so instead we aim to find a set of search directions $p_k$ such that $p_k^T r_{k-1} \neq 0$ (i.e. at each step we are guaranteed to reduce $\phi$), and such that the search directions are linearly independent. The latter condition guarantees that the method will converge in at most $n$ steps, where $n$ is the size of the square matrix $A$.
It can be shown that the best set of search directions can be achieved by setting

$$\beta_k = -\frac{p_{k-1}^T A r_{k-1}}{p_{k-1}^T A p_{k-1}}, \qquad p_k = r_{k-1} + \beta_k p_{k-1}, \qquad \alpha_k = \frac{p_k^T r_{k-1}}{p_k^T A p_k}, \qquad x_k = x_{k-1} + \alpha_k p_k.$$
This ensures that the search direction $p_k$ is the closest vector to $r_{k-1}$ that is also A-conjugate to $p_1, \ldots, p_{k-1}$, i.e. $p_k^T A p_i = 0$ for $i = 1, \ldots, k-1$, which gives the algorithm its name. After $k$ iterations the sequence of residuals $r_0, r_1, \ldots, r_{k-1}$ forms a set of mutually orthogonal vectors that span the Krylov subspace $\mathcal{K}_k(A, b)$.
Directly using the above equations in an iterative algorithm results in the standard CG algorithm. A more efficient algorithm can be derived from this by computing the residuals recursively via $r_k = r_{k-1} - \alpha_k A p_k$, leading to the final algorithm given below (reproduced from Wikipedia):

$r_0 = b - A x_0$, $p_0 = r_0$, $k = 0$

repeat until convergence

$\alpha_k = \frac{r_k^T r_k}{p_k^T A p_k}$
$x_{k+1} = x_k + \alpha_k p_k$
$r_{k+1} = r_k - \alpha_k A p_k$
if $\|r_{k+1}\|$ is sufficiently small, exit loop, end if
$\beta_k = \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k}$
$p_{k+1} = r_{k+1} + \beta_k p_k$
$k = k + 1$

end repeat.

The result is $x_{k+1}$.
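For reference, a minimal Python translation of this algorithm might look like the sketch below (assuming NumPy; the test matrix, tolerance, and iteration limit are arbitrary choices, and in practice you would normally use scipy.sparse.linalg.cg instead):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=None):
    """Minimal (unpreconditioned) CG sketch following the algorithm above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                     # r_0 = b - A x_0
    p = r.copy()                      # p_0 = r_0
    max_iter = n if max_iter is None else max_iter
    for k in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)    # alpha_k = r_k^T r_k / (p_k^T A p_k)
        x = x + alpha * p             # x_{k+1} = x_k + alpha_k p_k
        r_new = r - alpha * Ap        # r_{k+1} = r_k - alpha_k A p_k
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x                  # converged: ||r|| < tol * ||b||
        beta = (r_new @ r_new) / (r @ r)   # beta_k = r_{k+1}^T r_{k+1} / (r_k^T r_k)
        p = r_new + beta * p               # p_{k+1} = r_{k+1} + beta_k p_k
        r = r_new
    return x

# Quick check on a small random symmetric positive definite system
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 10))
A = B @ B.T + 10 * np.eye(10)
b = rng.standard_normal(10)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```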
Preconditioning
The CG method works well (i.e. converges quickly) if the condition number of the matrix $A$ is low. The condition number gives a measure of how much the solution $x$ changes in response to a small change in the input $b$; it is a property of the matrix itself, and so can vary from problem to problem. In order to keep the number of iterations small for iterative solvers, it is therefore often necessary to use a preconditioner, which is a method of transforming what might be a difficult problem with a poorly conditioned $A$ into a well conditioned problem that is easier to solve.
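The effect of the condition number on CG can be seen directly. The sketch below (a toy example: the diagonal test matrices, their sizes, and their spectra are arbitrary choices) counts CG iterations for a well conditioned and a poorly conditioned system using SciPy's solver:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def cg_iterations(A, b):
    """Run SciPy's CG and count iterations via the callback."""
    counts = []
    x, info = cg(A, b, callback=lambda xk: counts.append(1))
    return len(counts)

n = 200
b = np.ones(n)
A_good = diags(np.linspace(1.0, 2.0, n))    # condition number ~ 2
A_bad = diags(np.logspace(0.0, 4.0, n))     # condition number ~ 1e4

print("well conditioned:  ", cg_iterations(A_good, b), "iterations")
print("poorly conditioned:", cg_iterations(A_bad, b), "iterations")
```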
Consider the case of preconditioning for the CG method. We start from the standard problem $Ax = b$, and we wish to solve an equivalent transformed problem given by

$$\tilde{A} \tilde{x} = \tilde{b},$$

where $\tilde{A} = C^{-1} A C^{-1}$, $\tilde{x} = C x$, $\tilde{b} = C^{-1} b$, and $C$ is a symmetric positive definite matrix.
We then simply apply the standard CG method as given above to this transformed problem. This leads to an algorithm which is then simplified by rewriting it in terms of the untransformed quantities $x_k = C^{-1} \tilde{x}_k$, $r_k = C \tilde{r}_k$, and $p_k = C^{-1} \tilde{p}_k$. Finally we define the matrix $M = C^2$, which is known as the preconditioner, leading to the final preconditioned CG algorithm given below (reproduced and edited from Wikipedia):
$r_0 = b - A x_0$, $z_0 = M^{-1} r_0$, $p_0 = z_0$, $k = 0$

repeat until convergence

$\alpha_k = \frac{r_k^T z_k}{p_k^T A p_k}$
$x_{k+1} = x_k + \alpha_k p_k$
$r_{k+1} = r_k - \alpha_k A p_k$
if $\|r_{k+1}\|$ is sufficiently small, exit loop, end if
$z_{k+1} = M^{-1} r_{k+1}$
$\beta_k = \frac{r_{k+1}^T z_{k+1}}{r_k^T z_k}$
$p_{k+1} = z_{k+1} + \beta_k p_k$
$k = k + 1$

end repeat.

The result is $x_{k+1}$.
The key point to note here is that the preconditioner is used by inverting $M$ (i.e. solving a system of the form $M z = r$ at each iteration), so this matrix must be "easy" to solve in some fashion, and must also result in a transformed problem with better conditioning.
Termination: The CG algorithm is normally run until convergence to a given tolerance, which is based on the norm of the input vector $b$. In the algorithm above we iterate until the residual norm $\|r_{k+1}\|$ is less than some fraction (set by the user) of the norm of $b$.
Which preconditioner to choose for a given problem is often highly problem-specific, but some useful general-purpose preconditioners exist, such as the incomplete Cholesky preconditioner for preconditioned CG (see Chapter 10.3.2 of the Golub & Van Loan text cited below). Chapter 3 of the Barrett et al. text, also cited below, contains descriptions of a few more commonly used preconditioners.
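SciPy does not provide an incomplete Cholesky routine, but a closely related option is an incomplete LU preconditioner built with scipy.sparse.linalg.spilu. The sketch below (the 1D Laplacian-style test matrix and the drop tolerance are arbitrary choices for illustration) wraps the incomplete factorisation in a LinearOperator and passes it to the CG solver as the preconditioner M:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spilu, LinearOperator

# A simple symmetric positive definite test matrix (a 1D Laplacian), chosen only for illustration
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorisation of A, wrapped as a preconditioner M;
# applying M^{-1} to a vector means solving with the incomplete factors
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

def cg_iterations(M=None):
    """Count CG iterations, with or without the preconditioner."""
    counts = []
    x, info = cg(A, b, M=M, callback=lambda xk: counts.append(1))
    return len(counts), info

print("no preconditioner: ", cg_iterations())
print("ILU preconditioner:", cg_iterations(M=M))
```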
Other Reading
Golub, G. H., & Van Loan, C. F. (1996). Matrix Computations (3rd ed.). Johns Hopkins University Press. Chapter 10.
Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., ... & Van der Vorst, H. (1994). Templates for the solution of linear systems: building blocks for iterative methods. Society for Industrial and Applied Mathematics.
Python: scipy.sparse.linalg
Once again the best resource for Python is the scipy.sparse.linalg documentation. The available iterative solvers in SciPy are:
bicg (BIConjugate Gradient)
- Applicable to non-symmetric problems.
- Requires the matrix-vector product of $A$ and its transpose $A^T$.

qmr (Quasi-Minimal Residual)
- Applicable to non-symmetric $A$.
- Designed as an improvement of BiCG; avoids one of the two failure situations of BiCG.
- Computational costs slightly higher than BiCG; still requires the transpose $A^T$.

cgs (Conjugate Gradient Squared)
- Applicable to non-symmetric $A$.
- Often converges twice as fast as BiCG, but convergence is often irregular and the method can diverge if the starting guess is close to the solution.
- Unlike BiCG, the two matrix-vector products cannot be parallelized.

bicgstab (BIConjugate Gradient STABilized)
- Applicable to non-symmetric $A$.
- Computational cost similar to BiCG, but does not require the transpose of $A$.
- Similar convergence speed to CGS, but avoids the irregular convergence properties of that method.

cg (Conjugate Gradient)
- Applicable only to symmetric positive definite $A$.
- Speed of convergence depends on the condition number of $A$.

gmres (Generalized Minimal RESidual)
- Applicable to non-symmetric $A$.
- Best convergence properties, but each additional iteration becomes increasingly expensive, with large storage costs.
- To limit the increasing cost with additional iterations, it is necessary to periodically restart the method. When to do this is highly dependent on the properties of $A$.
- Requires only matrix-vector products with $A$.

lgmres (LGMRES)
- Modification to GMRES that uses alternating residual vectors to improve convergence.
- It is possible to supply the algorithm with "guess" vectors used to augment the Krylov subspace, which is useful if you are solving several very similar matrices one after another.
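These solvers all share a similar calling convention, taking the matrix (or a LinearOperator) and the right-hand side and returning the solution together with an info flag. A brief sketch (the sparse test system and the choice of solvers here are arbitrary):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, bicgstab, gmres

# An arbitrary sparse test system (symmetric positive definite, so all three solvers apply)
n = 500
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

for name, solver in [("cg", cg), ("bicgstab", bicgstab), ("gmres", gmres)]:
    x, info = solver(A, b)          # info == 0 indicates successful convergence
    residual = np.linalg.norm(b - A @ x)
    print(f"{name}: info={info}, residual norm={residual:.2e}")
```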
Problems
Comparing solvers
For this problem we are going to compare many of the different types of solvers, both direct and iterative, that we've looked at thus far.
Note: We will be using the Finite Difference matrix based on the two-dimensional finite difference approximation of the Poisson problem that we developed in the previous exercise.
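If you need to rebuild that matrix, one common construction is sketched below using Kronecker products of the 1D second-difference matrix; note this is only an assumed form, and your matrix from the previous exercise may differ in sign convention, scaling by the grid spacing, or boundary treatment:

```python
import numpy as np
from scipy.sparse import diags, kron, identity

def poisson_2d(n):
    """Sparse 2D finite difference Laplacian on an n x n interior grid (Dirichlet boundaries)."""
    # 1D second-difference matrix
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    I = identity(n, format="csr")
    # 2D operator via Kronecker products: A = I kron T + T kron I
    return kron(I, T) + kron(T, I)

A = poisson_2d(32)          # (32*32) x (32*32) sparse matrix
b = np.ones(A.shape[0])     # an example right-hand side
```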
For each value of $N$, try the following:
1. Solve the linear systems by computing the inverse of $A$ (see scipy.linalg.inv) and record the time this takes on a graph of solution time versus $N$. (Omit this method for the largest values of $N$, and note that it may take a while for large $N$.)
2. Solve the linear systems using the LU and Cholesky decompositions. Plot the time this takes on the same graph.
3. Now solve the systems iteratively using a conjugate gradient solver (you can use the one in scipy.sparse.linalg, or you can code up your own). How many iterations are needed for each problem? Explain the results for the first right-hand side. For the second right-hand side, what is the relationship between the number of iterations and $N$? How long do the computations take?
4. Repeat using the scipy.sparse.linalg BICGSTAB and GMRES solvers.