Jacobi and Relaxation Methods
Iterative Methods
Previously we have discussed direct linear algebra solvers based on decompositions of the original matrix $A$. The computational effort required to compute these decompositions is $\mathcal{O}(n^3)$, where $n$ is the number of rows of the square matrix. They are therefore unsuitable for the large, sparse systems of equations that are typically encountered in scientific applications. An alternative class of linear algebra solvers are the iterative methods, which produce a series of approximate solutions $x_k$ to the problem $Ax = b$. The performance of each algorithm is then measured by how quickly, or in how many iterations, the approximations converge to within a set tolerance of the true solution $x$.
Jacobi Method
The Jacobi method is the simplest of the iterative methods, and relies on the fact that the matrix $A$ is diagonally dominant. Starting from the problem definition:

$$Ax = b$$
we decompose $A$ into $A = L + D + U$, where $L$ is the strictly lower triangular part, $D$ is the diagonal, and $U$ is the strictly upper triangular part.
We then assume that we have an initial guess at the solution, $x_0$, and try to find a new estimate $x_1$. Assuming that the diagonal $D$ dominates over $L$ and $U$, a sensible choice would be to insert $x_0$ and the unknown $x_1$ into the equation like so:

$$L x_0 + D x_1 + U x_0 = b$$
we can rearrange to get an equation for $x_1$, namely $D x_1 = b - (L + U) x_0$. This is easily solved as we can take the inverse of the diagonal matrix $D$ by simply inverting each diagonal element individually:

$$x_1 = D^{-1} \left( b - (L + U) x_0 \right)$$
Thus we end up with the general Jacobi iteration:

$$x_{k+1} = D^{-1} \left( b - (L + U) x_k \right)$$
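As an illustration, the sketch below implements this iteration with dense NumPy arrays (the tolerance, iteration cap and zero starting guess are arbitrary choices; for large sparse systems you would use scipy.sparse instead):

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-8, max_iter=10000):
        # Solve A x = b with the Jacobi iteration x_{k+1} = D^{-1} (b - (L + U) x_k)
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float).ravel()
        x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float).ravel()
        D = np.diag(A)                 # diagonal entries of A
        R = A - np.diagflat(D)         # the remainder L + U (off-diagonal part)
        for k in range(max_iter):
            x_new = (b - R @ x) / D    # D is inverted element by element
            if np.linalg.norm(x_new - x) < tol * np.linalg.norm(x_new):
                return x_new, k + 1
            x = x_new
        raise RuntimeError("Jacobi iteration did not converge")

For example, jacobi(np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0])) returns the approximate solution together with the number of iterations used.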
Relaxation Methods
The Jacobi method is an example of a relaxation method, where the matrix $A$ is split into a dominant part $M$ (which is easy to solve), and the remainder $N$. That is,

$$A = M - N$$
The resulting iteration $M x_{k+1} = N x_k + b$ can be rearranged in terms of the residual $r_k = b - A x_k$ to give the update equation

$$x_{k+1} = x_k + M^{-1} r_k$$
For the Jacobi method $M = D$ and $N = -(L + U)$. Other relaxation methods include Gauss-Seidel, where $M = L + D$ and $N = -U$, and successive over-relaxation (SOR), where $M = \frac{1}{\omega} D + L$ and $N = \left( \frac{1}{\omega} - 1 \right) D - U$, where $\omega$ is the relaxation parameter that lies within the range $0 < \omega < 2$.
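To make the splittings concrete, here is a small sketch that forms $M$ and $N$ for each method from a dense NumPy array (for sparse matrices the same idea works with scipy.sparse.tril and scipy.sparse.triu); the function name and the default omega are arbitrary choices:

    import numpy as np

    def splitting(A, method="jacobi", omega=1.0):
        # Return (M, N) such that A = M - N for the chosen relaxation method
        D = np.diagflat(np.diag(A))    # diagonal part of A
        L = np.tril(A, k=-1)           # strictly lower triangular part
        if method == "jacobi":
            M = D
        elif method == "gauss-seidel":
            M = L + D
        elif method == "sor":
            M = D / omega + L
        else:
            raise ValueError(f"unknown method: {method}")
        return M, M - A                # N = M - A by construction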
For any relaxation method to converge we need $\rho(M^{-1} N) < 1$, where $\rho$ is the spectral radius of the iteration matrix $M^{-1} N$, defined as the largest eigenvalue in absolute value of a given matrix $A$:

$$\rho(A) = \max_i |\lambda_i|,$$

where $\lambda_i$ are the eigenvalues of $A$.
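For small systems this criterion can be checked directly by forming the iteration matrix $M^{-1} N$ and computing its eigenvalues, as in this sketch (dense linear algebra only, so not suitable for large sparse problems):

    import numpy as np

    def spectral_radius(M, N):
        # Spectral radius of the iteration matrix G = M^{-1} N
        G = np.linalg.solve(M, N)
        return np.max(np.abs(np.linalg.eigvals(G)))

Together with the splitting sketch above, spectral_radius(*splitting(A, "sor", omega=1.2)) < 1 would indicate that SOR with that choice of $\omega$ converges for the matrix A.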
For the SOR method, the relaxation parameter $\omega$ is generally chosen to minimise $\rho(M^{-1} N)$, so that the speed of convergence is maximised. In some cases this optimal $\omega$ is known, for example for the finite difference discretisation of the Poisson equation. However, in many cases sophisticated eigenvalue analysis is required to determine the optimal $\omega$.
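For reference, a classical result (covered in Golub & Van Loan, Chapter 10, for consistently ordered matrices) expresses the optimal parameter in terms of the spectral radius $\rho_J$ of the Jacobi iteration matrix:

$$\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho_J^2}}.$$

For the standard five-point discretisation of the Poisson equation with grid spacing $h$, $\rho_J = \cos(\pi h)$, so this reduces to $\omega_{\mathrm{opt}} = 2 / (1 + \sin(\pi h))$.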
Other Reading
Golub, G. H. & Van Loan, C. F. Matrix Computations, 3rd Ed. (Johns Hopkins University Press, 1996). Chapter 10
Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., ... & Van der Vorst, H. (1994). Templates for the solution of linear systems: building blocks for iterative methods. Society for Industrial and Applied Mathematics.
Problems
2D Poisson Jacobi Relaxation
This exercise involves the manipulation and solution of the linear system resulting from the finite difference solution to Poisson's equation in two dimensions. Let $A$ be a sparse symmetric positive definite matrix of dimension $(N-1)^2 \times (N-1)^2$, created using scipy.sparse (for a given $N$) by the function buildA as follows:
import numpy as np
import scipy.sparse as sp

def buildA(N):
    # Assemble the five-point finite difference Laplacian on the
    # (N-1) x (N-1) interior grid of the unit square, scaled by 1/dx^2
    dx = 1 / N
    nvar = (N - 1)**2
    e1 = np.ones((nvar), dtype=float)
    # e2 and e3 zero the entries that would otherwise couple points
    # across grid-row boundaries in the 1D numbering
    e2 = np.copy(e1)
    e2[::N-1] = 0
    e3 = np.copy(e1)
    e3[N-2::N-1] = 0
    A = sp.spdiags(
        (-e1, -e3, 4*e1, -e2, -e1),
        (-(N-1), -1, 0, 1, N-1),
        nvar, nvar
    )
    A = A / dx**2
    return A
and let $f_1$ and $f_2$ be the vectors defined in buildf1 and buildf2:
def buildf1(N):
    x = np.arange(0, 1, 1/N).reshape(N, 1)
    y = x.T
    f = np.dot(np.sin(np.pi*x), np.sin(np.pi*y))
    return f[1:, 1:].reshape(-1, 1)
def buildf2(N):
    x = np.arange(0, 1, 1/N).reshape(N, 1)
    y = x.T
    f = np.dot(np.maximum(x, 1-x), np.maximum(y, 1-y))
    return f[1:, 1:].reshape(-1, 1)
We will consider manipulation of the matrix $A$ and solution of the linear systems $A x = f$. The solution to this linear system corresponds to a finite difference solution to Poisson's equation $-\nabla^2 u = f$ on the unit square with zero Dirichlet boundary conditions, where $f$ is either $f_1$ or $f_2$. PDEs of this type occur (usually with some additional reaction and/or convection terms) very frequently in mathematical modelling of physiological processes, and even in image analysis.
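Before writing any iterative solver it can be useful to have a direct reference solution to compare against; a minimal sketch using the buildA and buildf1 functions above (the choice N = 8 is arbitrary):

    import numpy as np
    import scipy.sparse.linalg as spla

    N = 8
    A = buildA(N).tocsr()                  # CSR format for the sparse direct solver
    f = buildf1(N).ravel()                 # flatten the right-hand side to a vector
    x_ref = spla.spsolve(A, f)             # direct sparse solve, used as a reference
    print(np.linalg.norm(A @ x_ref - f))   # residual should be near machine precision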
Write a function to solve a linear system using the Jacobi method. In terms of $N$, how many iterations does it take to converge? (Try several increasing values of $N$.)
2D Poisson SOR Relaxation
Write a function to solve a linear system using the SOR method. For a given $N$ and right-hand side, determine numerically the best choice of the relaxation parameter $\omega$ to 2 decimal places and compare this with theory. Hint: use
scipy.optimize.minimize_scalar
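As a sketch of how the hint can be used: minimize_scalar performs a bounded one-dimensional search, so one option is to minimise the spectral radius of the SOR iteration matrix as a function of $\omega$ (shown below for a small grid so that dense eigenvalues are cheap; in the exercise itself you would instead minimise the measured iteration count of your SOR solver). The choice N = 8 and the search bounds are arbitrary:

    import numpy as np
    from scipy.optimize import minimize_scalar

    N = 8                                   # small grid so dense eigenvalues are affordable
    A = buildA(N).toarray()
    D = np.diagflat(np.diag(A))
    L = np.tril(A, k=-1)

    def rho(omega):
        # spectral radius of the SOR iteration matrix M^{-1} N, with M = D/omega + L
        M = D / omega + L
        G = np.linalg.solve(M, M - A)
        return np.max(np.abs(np.linalg.eigvals(G)))

    res = minimize_scalar(rho, bounds=(0.01, 1.99), method="bounded")
    print(f"omega minimising the spectral radius: {res.x:.2f}")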