This material has been adapted from material by Martin Robinson from the "Scientific Computing" module of the SABS R³ Center for Doctoral Training.

Creative Commons License

This course material was developed as part of UNIVERSE-HPC, which is funded through the SPF ExCALIBUR programme under grant number EP/W035731/1.

Jacobi and Relaxation Methods

Iterative Methods

Previously we discussed direct linear algebra solvers based on decompositions of the original matrix $A$. The computational effort required to compute these decompositions is $\mathcal{O}(n^3)$, where $n$ is the number of rows of the square matrix, so they are unsuitable for the large, sparse systems of equations typically encountered in scientific applications. An alternative class of linear algebra solvers are the iterative methods, which produce a series of approximate solutions $\mathbf{x}_k$ to the problem $A\mathbf{x} = \mathbf{b}$. The performance of each algorithm is then measured by how quickly, i.e. in how few iterations $k$, the approximation $\mathbf{x}_k$ converges to within a set tolerance of the true solution $\mathbf{x}$.

Jacobi Method

The Jacobi method is the simplest of the iterative methods; it is guaranteed to converge when the matrix is diagonally dominant. Starting from the problem definition:

$$A\mathbf{x} = \mathbf{b}$$

we decompose $A$ into $A = L + D + U$, where $L$ is strictly lower triangular, $D$ is diagonal, and $U$ is strictly upper triangular:

$$A\mathbf{x} = L\mathbf{x} + D\mathbf{x} + U\mathbf{x} = \mathbf{b}$$

We then assume that we have an initial guess at the solution, $\mathbf{x}_0$, and try to find a new estimate $\mathbf{x}_1$. Assuming that the diagonal $D$ dominates over $L$ and $U$, a sensible choice is to insert the known $\mathbf{x}_0$ and the unknown $\mathbf{x}_1$ into the equation like so:

$$L\mathbf{x}_0 + D\mathbf{x}_1 + U\mathbf{x}_0 = \mathbf{b}$$

We can rearrange this to get an equation for $\mathbf{x}_1$. It is easily solved, since the inverse of the diagonal matrix $D$ is obtained by simply inverting each diagonal element individually:

$$D\mathbf{x}_1 = \mathbf{b} - (L+U)\mathbf{x}_0$$

Thus we end up with the general Jacobi iteration:

$$\mathbf{x}_{k+1} = D^{-1}(\mathbf{b} - (L+U)\mathbf{x}_k)$$
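
As a concrete illustration, here is a minimal sketch of this iteration in Python using scipy.sparse. The function name `jacobi`, its signature, and the relative-residual convergence test are our own choices, not part of the original material; it assumes $A$ is a scipy sparse matrix and $\mathbf{b}$ is a 1-D NumPy array.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=10_000):
    """Sketch of the Jacobi iteration x_{k+1} = D^{-1} (b - (L+U) x_k).

    A: scipy sparse matrix (any format supporting A @ x and A.diagonal())
    b: 1-D NumPy array
    """
    x = np.zeros_like(b)
    D = A.diagonal()                 # diagonal entries of A as a 1-D array
    bnorm = np.linalg.norm(b)
    for k in range(max_iter):
        r = b - A @ x                # residual r_k = b - A x_k
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k
        # since (L + U) x_k = A x_k - D x_k, the Jacobi update is
        # equivalent to x_{k+1} = x_k + D^{-1} r_k
        x = x + r / D
    return x, max_iter
```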

Relaxation Methods

The Jacobi method is an example of a relaxation method, in which the matrix $A$ is split into a dominant part $M$ (which is easy to solve) and a remainder $N$; that is, $A = M - N$, giving the iteration

$$M\mathbf{x}_{k+1} = N\mathbf{x}_k + \mathbf{b}$$

$$\mathbf{x}_{k+1} = M^{-1}N\mathbf{x}_k + M^{-1}\mathbf{b}$$

This can be rearranged in terms of the residual $\mathbf{r}_k = \mathbf{b} - A\mathbf{x}_k$ to give the update equation

$$\mathbf{x}_{k+1} = \mathbf{x}_k + M^{-1}\mathbf{r}_k$$

For the Jacobi method $M = D$ and $N = -(L + U)$. Other relaxation methods include Gauss-Seidel, where $M = D + L$ and $N = -U$, and successive over-relaxation (SOR), where $M = \frac{1}{\omega} D + L$ and $N = -\left(\frac{\omega - 1}{\omega} D + U\right)$, where $\omega$ is a relaxation parameter in the range $0 < \omega < 2$.
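
In each case the update $\mathbf{x}_{k+1} = \mathbf{x}_k + M^{-1}\mathbf{r}_k$ only requires solving a system with $M$. For Gauss-Seidel, $M = D + L$ is lower triangular, so applying $M^{-1}$ amounts to a sparse triangular solve. A sketch under these assumptions (the helper names `relaxation` and `gauss_seidel_solver` are ours):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def relaxation(A, b, M_solve, tol=1e-8, max_iter=10_000):
    """Generic relaxation x_{k+1} = x_k + M^{-1} r_k.

    M_solve(r) should return M^{-1} r for the chosen splitting A = M - N.
    """
    x = np.zeros_like(b)
    bnorm = np.linalg.norm(b)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k
        x = x + M_solve(r)
    return x, max_iter

def gauss_seidel_solver(A):
    """Return M_solve for Gauss-Seidel, where M = D + L is lower triangular."""
    M = sp.tril(A, k=0).tocsr()   # lower triangle of A, including the diagonal
    return lambda r: spla.spsolve_triangular(M, r, lower=True)
```

For SOR one would instead build $M = \frac{1}{\omega} D + L$, which is again lower triangular, so the same triangular-solve approach applies.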

For any relaxation method to converge we need $\rho(M^{-1}N) < 1$, where $\rho(\cdot)$ is the spectral radius, defined as the largest eigenvalue magnitude of a given matrix $G$:

$$\rho(G) = \max\{|\lambda| : \lambda \in \lambda(G)\}$$
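
This criterion can be checked numerically for a given splitting. For example, for Jacobi we have $M^{-1}N = D^{-1}(D - A) = I - D^{-1}A$, and its largest-magnitude eigenvalue can be estimated with scipy's ARPACK wrapper. A sketch (the function name is ours):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi_spectral_radius(A):
    """Estimate rho(M^{-1} N) for the Jacobi splitting, M^{-1} N = I - D^{-1} A."""
    n = A.shape[0]
    Dinv = sp.diags(1.0 / A.diagonal())
    G = sp.identity(n) - Dinv @ A
    # largest-magnitude eigenvalue via Arnoldi iteration; for very small
    # matrices use dense np.linalg.eigvals instead (ARPACK needs k < n - 1)
    lam = spla.eigs(G, k=1, which='LM', return_eigenvectors=False)
    return abs(lam[0])
```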

For the SOR method, the relaxation parameter $\omega$ is generally chosen to minimise $\rho(M^{-1}N)$, so that the speed of convergence is maximised. In some cases this optimal $\omega$ is known; for example, for the standard finite difference discretisation of the Poisson equation on a uniform grid of spacing $h$, the classical result is $\omega_{\text{opt}} = 2/(1 + \sin(\pi h))$. In many other cases, however, sophisticated eigenvalue analysis is required to determine the optimal $\omega$.

Other Reading

  • Golub, G. H. & Van Loan, C. F. Matrix Computations, 3rd Ed. (Johns Hopkins University Press, 1996). Chapter 10

  • Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., ... & Van der Vorst, H. (1994). Templates for the solution of linear systems: building blocks for iterative methods. Society for Industrial and Applied Mathematics.

Problems

2D Poisson Jacobi Relaxation

This exercise involves the manipulation and solution of the linear system resulting from the finite difference solution to Poisson's equation in two dimensions. Let $A$ be a sparse symmetric positive definite matrix of dimension $(N-1)^2 \times (N-1)^2$ created using scipy.sparse (for a given $N$) by the function buildA as follows:

```python
import numpy as np
import scipy.sparse as sp

def buildA(N):
    dx = 1 / N
    nvar = (N - 1)**2
    e1 = np.ones((nvar), dtype=float)
    e2 = np.copy(e1)
    e2[::N-1] = 0       # zero couplings across grid-row boundaries (superdiagonal)
    e3 = np.copy(e1)
    e3[N-2::N-1] = 0    # zero couplings across grid-row boundaries (subdiagonal)
    # five-point Laplacian stencil (-1, -1, 4, -1, -1) on the five diagonals
    A = sp.spdiags(
        (-e1, -e3, 4*e1, -e2, -e1),
        (-(N-1), -1, 0, 1, N-1),
        nvar, nvar
    )
    A = A / dx**2
    return A
```

and let $\mathbf{f}_1$ and $\mathbf{f}_2$ be the vectors defined by buildf1 and buildf2:

```python
def buildf1(N):
    # f(x, y) = sin(pi x) sin(pi y) sampled on the grid, interior points only
    x = np.arange(0, 1, 1/N).reshape(N, 1)
    y = x.T
    f = np.dot(np.sin(np.pi*x), np.sin(np.pi*y))
    return f[1:, 1:].reshape(-1, 1)

def buildf2(N):
    # f(x, y) = max(x, 1-x) max(y, 1-y) sampled on the grid, interior points only
    x = np.arange(0, 1, 1/N).reshape(N, 1)
    y = x.T
    f = np.dot(np.maximum(x, 1-x), np.maximum(y, 1-y))
    return f[1:, 1:].reshape(-1, 1)
```

We will consider manipulation of the matrix $A$ and solution of the linear systems $A\mathbf{U}_i = \mathbf{f}_i$. The solution to this linear system corresponds to a finite difference solution to Poisson's equation $-\nabla^2 u = f$ on the unit square with zero Dirichlet boundary conditions, where $f$ is either $\sin(\pi x)\sin(\pi y)$ or $\max(x, 1-x)\max(y, 1-y)$. PDEs of this type occur (usually with some additional reaction and/or convection terms) very frequently in mathematical modelling of physiological processes, and even in image analysis.

  • Write a function to solve a linear system using the Jacobi method. In terms of $N$, how many iterations does it take to converge? (Try $N = 4, 8, 16, 32, 64$; a test driver sketch is given below.)
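
One way to check an implementation is the driver below, which reuses the jacobi sketch from earlier on this page (an assumption of ours, not part of the exercise; note that b is flattened to a 1-D array to match that sketch):

```python
for N in (4, 8, 16, 32, 64):
    A = buildA(N).tocsr()        # CSR for fast matrix-vector products
    b = buildf1(N).ravel()       # flatten the (nvar, 1) column to 1-D
    x, iters = jacobi(A, b)
    print(f"N = {N:3d}: converged in {iters} iterations")
```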

2D Poisson SOR Relaxation

  • Write a function to solve a linear system using the SOR method. For $N = 64$ and right-hand side $\mathbf{f}_2$, determine numerically the best choice of the relaxation parameter to 2 decimal places and compare this with theory. Hint: use scipy.optimize.minimize_scalar, as sketched below.
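
The hint can be applied roughly as follows (a sketch; iterations_for is a hypothetical helper, assumed to run your SOR solver for $N = 64$ with right-hand side $\mathbf{f}_2$ and return the iteration count for a given $\omega$):

```python
from scipy.optimize import minimize_scalar

# iterations_for(omega): hypothetical helper returning the number of SOR
# iterations needed to converge for a given relaxation parameter omega
res = minimize_scalar(
    iterations_for,
    bounds=(1.0, 2.0),
    method='bounded',
    options={'xatol': 1e-2},   # resolve omega to about 2 decimal places
)
print(f"best omega ~ {res.x:.2f}")
```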