
This material has been adapted from material by Martin Robinson from the "Scientific Computing" module of the SABS R³ Center for Doctoral Training.

This course material was developed as part of UNIVERSE-HPC, which is funded through the SPF ExCALIBUR programme under grant number EP/W035731/1.

Creative Commons License

Sparse Matrices

Why sparse matrices

Taking advantage of any special structure in the matrix of interest is always of great importance when designing a linear algebra algorithm/solver. Thus far we have discussed special structures such as symmetric or positive definite matrices, but one of the most common matrix structures in scientific computing is that of a sparse matrix, i.e. a matrix containing many zero elements. Since zeros can be ignored in many computations, for example in multiplication or addition, a sparse matrix uses a special data structure so that only the non-zero elements of the matrix are actually stored and used in computation. Note also that a sparse matrix can itself be symmetric or positive definite, and it is often necessary to take all these properties into account when designing your algorithm.

The most obvious benefit of a sparse matrix is low memory usage. A dense matrix must store every element of an $n \times n$ matrix, and thus its memory requirements scale as $\mathcal{O}(n^2)$; for double-precision numbers this is $8n^2$ bytes. (A table of the memory requirements of dense matrices of increasing size can be found in the excellent scipy-lectures.)
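As a rough illustration, here is a minimal sketch (the matrix size n is an arbitrary illustrative choice, and an identity matrix stands in for a generic sparse matrix) comparing the storage cost of a dense matrix with its sparse equivalent:

```python
import numpy as np
import scipy.sparse as sp

n = 5_000

# Dense: all n^2 elements are stored, even though most are zero.
dense = np.eye(n)
print(f"dense:  {dense.nbytes / 1e6:.1f} MB")  # 8 * n^2 bytes = 200.0 MB

# Sparse (CSR format): only the n non-zero diagonal entries are stored,
# plus the integer index arrays that locate them.
sparse = sp.eye(n, format="csr")
mem = sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes
print(f"sparse: {mem / 1e6:.2f} MB")           # a fraction of a megabyte
```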

A sparse matrix stores only its non-zero elements, and in many different applications this represents a huge memory saving, since matrices are often very sparse, holding only a few non-zero elements. Some typical applications are:

  • solution of partial differential equations (PDEs), such as the finite difference method illustrated below

  • applications of graphs or networks (e.g. electrical networks, website links), where non-zero elements of the matrix represent edges between nodes
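To make the PDE application concrete, the following sketch (the grid size n and uniform spacing h are illustrative assumptions) builds the tridiagonal matrix produced by a second-order central finite difference discretisation of $-u''(x)$ on the unit interval:

```python
import scipy.sparse as sp

n = 5              # number of interior grid points (illustrative)
h = 1.0 / (n + 1)  # uniform grid spacing on [0, 1]

# Central second-difference stencil [-1, 2, -1] / h^2 placed on the
# sub-, main and super-diagonals; scalars are broadcast along each diagonal.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)) / h**2

print(A.toarray())
```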

Note that while a sparse matrix has obvious benefits in terms of matrix multiplication, where the zero elements can simply be ignored, direct solver algorithms such as $LU$ decomposition for the problem $Ax = b$, where $A$ is sparse, need considerably more thought: the zeros in $A$ can have propagating effects, and there is no guarantee that the decomposition of $A$ or its inverse will itself be sparse; there can be a significant amount of what is known as fill-in (i.e. non-zero elements where there were zeros in the original matrix). This fact motivates a separate class of iterative (as opposed to direct) solvers that rely only on the multiplication of $A$ with a vector, ignoring the internal sparsity structure of $A$ and taking advantage only of the increased speed of the matrix-vector multiplication itself. These iterative solvers will be covered in the following chapter; in this chapter we will focus on the practical requirements of constructing and using sparse matrices using the scipy.sparse library.
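As a first taste of the library, this sketch (with arbitrary illustrative values) builds a small matrix in COOrdinate format and applies it to a vector, which is the single operation the iterative solvers of the next chapter require:

```python
import numpy as np
import scipy.sparse as sp

# Non-zero entries of a 3x3 matrix, given as (row, column, value) triplets.
row = np.array([0, 1, 2, 2])
col = np.array([0, 1, 0, 2])
data = np.array([4.0, 5.0, 7.0, 9.0])
A = sp.coo_matrix((data, (row, col)), shape=(3, 3))

x = np.ones(3)
y = A @ x  # sparse matrix-vector product: zero elements are never touched
print(y)   # [ 4.  5. 16.]
```

Only the four stored triplets take part in the product, regardless of how large the matrix's nominal shape is.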