We implement the multigrid and full multigrid schemes using the Haskell parallel array library Repa. Whilst this file is a literate Haskell program, it omits some preliminaries. The full code can be found on GitHub at multiGrid.
Repa (short for ‘regular parallel arrays’) is a Haskell library for computing with arrays efficiently and functionally. It allows parallel computation to be expressed with a single primitive (computeP). This is possible because we only use immutable arrays, so calculations are deterministic without any need to worry about locks, mutual exclusion, race conditions, etc.
The multigrid and full multigrid schemes are used for approximating solutions of partial differential equations but they are strategies rather than solvers. They can be used to drastically improve more basic solvers provided convergence and error analysis are considered for the basic solver.
We only consider linear partial differential equations, and the Poisson equation in particular. Such equations can be summarised in the form

Lu = f

Here, L is a linear operator (such as the Laplace operator, or higher or lower order partial derivatives, or any linear combination of these), f is given, and u is the function we want to solve for; both are defined on a region Ω. The multigrid scheme starts with a fine grid with spacing h on which we want to obtain an approximate solution by finite difference methods. The scheme involves the use of a coarser grid (e.g. with spacing 2h) and, recursively, a stack of coarser and coarser grids to apply error correction on approximations. The full multigrid scheme also uses coarser grids to improve initial approximations used by multigrid.
Assume we have a basic method to approximate solutions which we call ‘approximately solve’. Then the scheme for coarse grid correction of errors is
Take an initial guess u0 on the fine grid (spacing h) and use it to approximately solve Lu = f, obtaining a new approximation v.
We want to estimate the errors e = u − v.
We do not have to calculate them, but the errors satisfy Le = Lu − Lv = f − Lv.
The negatives of these values are called the residuals (r = Lv − f) and these we can calculate.
Move to the coarse grid and (recursively) solve Le = −r.
(This is a problem of the same form as we started with, but it requires solving with restrictions of L and r for the coarse grid.)
We use zeros as the initial guess for the errors, and this recursion results in an approximation for the coarse errors e′.
Interpolate the coarse errors (e′) into the fine grid to get e, and form a corrected guess v + e.
Approximately solve Lu = f again (on the fine grid), but now starting with v + e to get a final approximation.
So a basic requirement of the multigrid scheme is to be able to move values to a coarser grid (restriction) and from a coarser grid to a finer grid (interpolation). We will write functions for these below. First we give a quick summary of Repa and operations we will be using, which experienced Repa users may prefer to skip.
Repa array types have an extent (shape) and an indication of representation, as well as a content type. For example, Array U DIM2 Double is the type of a two dimensional array of Doubles. The U indicates a ‘manifest’ representation, which means that the contents are fully calculated rather than delayed. A delayed representation is expressed with D rather than U. The only other representation type we use explicitly is (TR PC5), which is the type of a partitioned array resulting from a stencil mapping operation. Non-manifest arrays are useful for enabling the compiler to combine (fuse) operations to optimise after inlining code. Some operations require the argument array to be manifest, so to make a delayed array manifest we can use computeP, which employs parallel evaluation on the array elements. The type of computeP requires that it is used within a monad. This is to prevent us accidentally writing nested calls of computeP.

The extent DIM2 abbreviates Z :. Int :. Int (where Z represents a base shape for the empty zero dimensional array). The integers give the sizes in each dimension. We can have higher dimensions (adding more with :.) but will only be using DIM2 here. Values of an extent type are used to express the sizes of an array in each dimension and also as indexes into an array, where indexing starts at 0.
We only use a few other Repa operations:

fromListUnboxed takes an extent and a list of values to create a new manifest array.

traverse takes 3 arguments to produce a new delayed array. The first is the (old) array. The second is a function which produces the extent of the new array when given the extent of the old array. The third is a function to calculate any item in the new array [when supplied with an operation (get) to retrieve values from the old array and an index in the new array for the item to calculate].

szipWith is analogous to zipWith for lists, taking a binary operation to combine two arrays. (There is also a zipWith for arrays, but the szipWith version is tuned for partitioned arrays.)

smap is analogous to map for lists, mapping an operation over an array. (There is also a map for arrays, but the smap version is tuned for partitioned arrays.)

mapStencil2 takes a boundary handling option (we only use BoundClamp here), a stencil, and a two dimensional array. It maps (convolves) the stencil over the array to produce a new (partitioned) array.
There is a convenient way of writing down stencils which makes them easy to visualize. We will see examples later, but this format requires the pragmas
> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE QuasiQuotes #-}
To interpolate from coarse to fine we want a (linear) mapping that is full rank, e.g. the identity plus averaging of neighbours.
We can restrict by injection or take some weighting of neighbours as well. A common requirement is that this is also linear and a constant times the transpose of the interpolation when these are expressed as matrix operations.
We will be using DIM2
grids and work with odd extents so that border points remain on the border when moving between grids. A common method for restriction is to take a weighted average of fine neighbours with each coarse grid point and we can easily express this as a stencil mapping. We calculate one sixteenth of the results from mapping this stencil
> restStencil :: Stencil DIM2 Double
> restStencil = [stencil2| 1 2 1
> 2 4 2
> 1 2 1 |]
That is, we map this stencil, then divide by 16, and take the coarse array of items where both the row and column indices are even. Taking the coarse grid items can be done with an array traversal
> {-# INLINE coarsen #-}
> coarsen :: Array U DIM2 Double -> Array D DIM2 Double
> coarsen !arr
> = traverse arr -- i+1 and j+1 to deal with odd extents for arr correctly
> (\ (e :. i :. j) -> (e :. (i+1) `div` 2 :. (j+1) `div` 2))
> (\ get (e :. i :. j) -> get (e :. 2*i :. 2*j))
Here the second argument for traverse – the function to calculate the new extent from the old – adds one before the division to ensure that odd extents are dealt with appropriately.
The INLINE pragma and bang pattern (!) on the argument are needed for good optimisation.
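To see which points coarsening keeps, here is a hedged pure-list sketch (the names coarsen1D and coarsen2D are ours and are not part of the literate program): keeping the even-indexed points halves the grid in each dimension, and the resulting length matches the (n+1) `div` 2 extent calculation above.

```haskell
-- List-based analogue of coarsen (illustration only, not Repa code):
-- keep the points at even indices, halving the grid in each dimension.
coarsen1D :: [a] -> [a]
coarsen1D xs = [x | (i, x) <- zip [0 :: Int ..] xs, even i]

-- Apply it to the rows (keeping even rows), then within each row
-- (keeping even columns).
coarsen2D :: [[a]] -> [[a]]
coarsen2D = map coarsen1D . coarsen1D
```

For example, coarsening a 5-point row keeps indices 0, 2 and 4, giving 3 points, which is (5+1) `div` 2 as in the traverse above.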
Coarsening after mapping the above stencil works well with odd extents but is not so good for even extents. For completeness we allow for even extents as well and in this case it is more appropriate to calculate one quarter of the results from mapping this stencil
> sum4Stencil :: Stencil DIM2 Double
> sum4Stencil = [stencil2| 0 0 0
> 0 1 1
> 0 1 1 |]
We treat mixed even and odd extents this way as well so we can express restriction for all cases as
> {-# INLINE restrict #-}
> restrict :: Array U DIM2 Double -> Array D DIM2 Double
> restrict !arr
> | odd n && odd m
> = coarsen
> $ smap (/16)
> $ mapStencil2 BoundClamp restStencil arr
> | otherwise
> = coarsen
> $ smap (/4)
> $ mapStencil2 BoundClamp sum4Stencil arr
> where _ :. m :. n = extent arr
For interpolation in the case of odd extents we want to distribute coarse to fine values according to the “give-to” stencil
1/4 1/2 1/4
1/2 1 1/2
1/4 1/2 1/4
This means that the fine value at the centre becomes the coarse value at the corresponding position (if you picture the coarse grid spaced out to overlay the fine grid). The fine values around this central value inherit proportions of the coarse value as indicated by the give-to stencil (along with proportions from other neighbouring coarse values).
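The one-dimensional analogue of this give-to stencil is 1/2 1 1/2: coarse values land unchanged on the even fine positions, and each odd fine position receives half of each of its two coarse neighbours. A hedged list sketch (interpolate1D is our name, not part of the program):

```haskell
-- 1-D analogue of the "give-to" interpolation (illustration only):
-- coarse points map to even fine positions; each odd fine position
-- gets the average of its two coarse neighbours (stencil 1/2 1 1/2).
interpolate1D :: [Double] -> [Double]
interpolate1D (x : y : rest) = x : (x + y) / 2 : interpolate1D (y : rest)
interpolate1D xs             = xs   -- empty or single point: unchanged
```

A coarse grid of n points interpolates to a fine grid of 2n−1 points, which is why odd extents are preserved.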
For the even extent case we simply make four copies of each coarse value using a traverse
> {-# INLINE inject4 #-}
> inject4 :: Source r a => Array r DIM2 a -> Array D DIM2 a
> inject4 !arr
> = traverse arr -- mod 2s deal with odd extents
> (\ (e :. i :. j) -> (e :. 2*i - (i `mod` 2) :. 2*j - (j `mod` 2)))
> (\get (e :. i :. j) -> get(e :. i `div` 2 :. j `div` 2))
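For even extents the effect of inject4 is easy to picture with a hedged list sketch (inject4' is our name): every coarse value is copied into a 2 by 2 block. (The Repa version above additionally uses the mod 2 adjustments to handle odd extents.)

```haskell
-- List analogue of inject4 for even extents (illustration only):
-- duplicate every value into a 2x2 block by doubling each row's
-- elements and then doubling each row.
inject4' :: [[a]] -> [[a]]
inject4' = concatMap (replicate 2 . concatMap (replicate 2))
```

So a 2 by 2 array of values a b / c d becomes the 4 by 4 array with rows aabb, aabb, ccdd, ccdd.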
Again, we just use the latter version to cover cases with mixed even and odd extents. There is no primitive for “give-to” stencils but it is easy enough to define what we want with a traverse.
> {-# INLINE interpolate #-}
> interpolate :: Array U DIM2 Double -> Array D DIM2 Double
> interpolate !arr
> | odd m && odd n
> = traverse arr
> (\ (e :. i :. j) -> (e :. 2*i - (i `mod` 2) :. 2*j - (j `mod` 2)))
> (\get (e :. i :. j) -> case () of
> _ | even i && even j -> get(e :. i `div` 2 :. j `div` 2)
> | even i -> (0.5)*(get(e :. i `div` 2 :. j `div` 2)
> + get(e :. i `div` 2 :. (j `div` 2)+1))
> -- odd i
> | even j -> (0.5)*(get(e :. i `div` 2 :. j `div` 2)
> + get(e :. (i `div` 2)+1 :. j `div` 2))
> -- odd i and j
> | otherwise -> (0.25)*(get(e :. i `div` 2 :. j `div` 2)
> + get(e :. i `div` 2 :. (j `div` 2)+1)
> + get(e :. (i `div` 2)+1 :. j `div` 2)
> + get(e :. (i `div` 2)+1 :. (j `div` 2)+1))
> )
> | otherwise = inject4 arr
> where _ :. n :. m = extent arr
The uses of mod 2 in the new size calculation are there to ensure that odd extents remain odd.
As an aside, we found that the above interpolation for odd extents can be implemented with a stencil convolution, using sum4Stencil (which we defined above) after applying inject4, and then dividing by 4. So we could define (for odd extents)
interpolate' :: Monad m
=> Array U DIM2 Double -> m (Array (TR PC5) DIM2 Double)
interpolate' arr =
do fineArr <- computeP $ inject4 arr
return $ smap (/4)
$ mapStencil2 BoundClamp sum4Stencil fineArr
We have to make the intermediate fineArr manifest (using computeP) before mapping the stencil over it, which is why a monad is used. The final array is not manifest but a structured array resulting from the stencil map. By not making this manifest, we allow the computation to be combined with other computations, improving inlining optimisation opportunities.
To illustrate the effect of interpolate', suppose the coarse array content is just
a b c
d e f
g h i
Then the injected array content will be
a a b b c c
a a b b c c
d d e e f f
d d e e f f
g g h h i i
g g h h i i
After mapping the stencil and dividing by 4 we have (assuming top left is (0,0))
at (1,1) (a+b+d+e)/4 (odd row, odd column)
at (2,1) (d+e+d+e)/4 = (d+e)/2 (even row, odd column)
at (1,2) (b+b+e+e)/4 = (b+e)/2 (odd row, even column)
at (2,2) (e+e+e+e)/4 = e (even row, even column)
This even handles the bottom and right boundaries as in the original interpolation.
Slightly more generally, after inject4 and for any w x y z
then convolution with the stencil
0 0 0
0 w x
0 y z
will produce the same as interpolating with the “give-to” stencil
z (z+y) y
(z+x) (z+y+x+w) (y+w)
x (x+w) w
Conversely after inject4, the give-to stencil has to have this form for a stencil convolution to produce the same results.
Since the stencil version does require making an intermediate array manifest it is not clear at face value which is more efficient, so we will stick with interpolate.
We note that for even extents, the combination of an interpolation followed by a restriction is an identity (inject4 followed by coarsen). For odd extents, the combination preserves internal values but not values on the border. This will not be a problem if boundary conditions of the problem are used to update the borders of the array.
The problem to solve for is

Lu = f

where L is a linear operator. This linear operator could be represented as a matrix, and applied by matrix multiplication, but this is not the most efficient way of implementing a solver. In the multigrid scheme described above we need knowledge of L to implement an approximate solver, and also to calculate residuals. Both of these operations can be implemented directly, with some mathematical analysis of the linear operator using finite differences. We will therefore be using an approximator and a residual calculator as parameters, rather than a direct representation of L.
Let us assume that the approximate solver is a single step improvement which we want to iterate a few times. We can write down the iterator (iterateSolver)
> {-# INLINE iterateSolver #-}
> iterateSolver !opA !steps !arrInit
> = go steps arrInit
> where
> go 0 !arr = return arr
> go n !arr
> = do arr' <- computeP $ opA arr
> go (n - 1) arr'
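Stripped of the monad and the computeP forcing, iterateSolver is just n-fold application of a step function. A hedged pure sketch (names ours), using Newton's method for a square root as a stand-in "single step improvement":

```haskell
-- Pure analogue of iterateSolver (illustration only): apply a
-- single-step improvement n times. The Repa version forces each
-- intermediate array with computeP inside a monad.
iterateSolver' :: Int -> (a -> a) -> a -> a
iterateSolver' n step x = iterate step x !! n

-- A stand-in solver step: one Newton iteration towards sqrt 2.
newtonStep :: Double -> Double
newtonStep x = (x + 2 / x) / 2
```

A handful of iterations from a rough starting value converges to machine precision, just as a few Jacobi sweeps improve an initial grid guess (though far more slowly).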
In the definition of multiGrid
we make use of a function to create an array of zeros of a given shape (extent)
> {-# INLINE zeroArray #-}
> zeroArray :: Shape sh => sh -> Array U sh Double
> zeroArray !sh = fromListUnboxed sh $ replicate (size sh) 0.0
and a function to determine when we have the coarsest grid (the base case for odd extents will be 3 by 3)
> {-# INLINE coarsest #-}
> coarsest :: Array U DIM2 Double -> Bool
> coarsest !arr = m<4 where (Z :. _ :. m) = extent arr
We also use some global parameters to indicate the number of iteration steps (to simplify expressions by passing fewer arguments)
> steps1,steps2 :: Int
> steps1 = 5 -- number of iterations for first step in multiGrid
> steps2 = 5 -- number of iterations for last step in multiGrid
To write down a first version of multiGrid, we temporarily ignore boundary conditions. We will also assume that the grid spacing is the same in each dimension and use just one parameter (h) as the grid spacing.
{- version ignoring boundary conditions -}
multiGrid approxOp residualOp h f uInit
= if coarsest uInit
then
do computeP $ approxOp h f uInit
else
do v <- iterateSolver (approxOp h f) steps1 uInit
r <- computeP $ residualOp h f v -- calculate fine residuals
r' <- computeP $ restrict r -- move to coarser grid
err <- multiGrid approxOp residualOp (2*h) r' $ zeroArray (extent r')
vC <- computeP $ szipWith (+) v
$ interpolate err -- correct with errors on fine grid
iterateSolver (approxOp h f) steps2 vC -- solve again with improved approximation
The parameters are

approxOp – an approximate solver to be iterated.
residualOp – a residual calculator.
h – the grid spacing.
f – a representation for the (negative of the) function on the right hand side of the equation.
uInit – an array representing a suitable first guess to start from.

Note that both approxOp and residualOp need to be passed h and f as well as an array when they are used. Also, the recursive call of multiGrid requires the two function parameters to work at each coarseness of grid. The grid spacing doubles and the array of residuals has to be converted to the coarser grid for the recursive call.
Next we consider how we deal with the boundary conditions. We will only deal with the Dirichlet type of boundary conditions here.
For Lu = f as above, Dirichlet boundary conditions have the form u = g for some function g given on the boundary ∂Ω of Ω. We will take the standard approach of representing the boundary conditions using a mask array (with 0’s indicating boundary positions and 1’s at non-boundary positions) along with a boundary values array which has 0’s at non-boundary positions. This will enable us to adapt these two arrays for different coarsenesses of grid.
We are assuming we are working in just two dimensions, so Ω is a subset of the plane. We can thus use two dimensional arrays to represent f, the boundary mask, and the boundary values.
We can now define multiGrid
with boundary conditions as:
multiGrid approxOp residualOp h f boundMask boundValues uInit
= if coarsest uInit
then
do computeP $ approxOp h f boundMask boundValues uInit
else
do v <- iterateSolver (approxOp h f boundMask boundValues) steps1 uInit
r <- computeP $ residualOp h f boundMask v -- calculate fine residuals
boundMask' <- computeP $ coarsen boundMask -- move to coarser grid
r' <- computeP $ szipWith (*) boundMask' $ restrict r
let zeros = zeroArray (extent r')
err <- multiGrid approxOp residualOp (2*h) r' boundMask' zeros zeros
vC <- computeP $ szipWith (+) v
$ szipWith (*) boundMask
$ interpolate err -- correct with errors on fine grid
iterateSolver (approxOp h f boundMask boundValues) steps2 vC
In the base case (when the grid size is 3 by 3) there is only one value at the centre point which needs to be evaluated (assuming the surrounding 8 points are given by the boundary conditions). This will be solved exactly with a single step of the approximate solver. The recursive call uses zeros both for the boundValues (redundantly) and for the initial guess. We note that the residual calculation makes use of the mask but not the boundary values, as these should be zero for residuals. The restriction of residuals for the coarse grid also requires applying the mask adapted for the coarse grid. The interpolation of errors uses the (fine version of the) mask, but the boundary values will already be set in v, which is added to the errors.
Before looking at specific examples, we can now write down the algorithm for the full multigrid scheme which extends multigrid as follows.
Before calling multiGrid we precalculate the initial guess using a coarser grid and interpolate the result. The precalculation on the coarser grid involves the same full multigrid process recursively on coarser and coarser grids.
fullMG approxOp residualOp h f boundMask boundValues
= if coarsest boundValues
then do computeP $ approxOp h f boundMask boundValues boundValues
-- an approximation with 1 interior point will be exact using boundValues as initial
else do
-- recursively precalculate for the coarser level
f' <- computeP $ restrict f
boundMask' <- computeP $ coarsen boundMask
boundValues' <- computeP $ coarsen boundValues
v' <- fullMG approxOp residualOp (2*h) f' boundMask' boundValues'
-- move to finer level
v <- computeP $ szipWith (+) boundValues
$ szipWith (*) boundMask
$ interpolate v'
-- solve for finer level
multiGrid approxOp residualOp h f boundMask boundValues v
Note that in the base case we need an initial guess for the coarsest initial array. We simply use the boundValues array for this. The interpolation of v' (to form v) requires resetting the boundary values.
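Before moving on to the specialised versions, the whole scheme can be illustrated with a hedged one-dimensional sketch in plain Haskell (lists instead of Repa arrays; all names here are ours, and unlike the article's arrays, f here is the right hand side itself rather than its negative, so the residual f − Lv is used directly as the coarse right hand side). It solves u'' = f on [0,1] with zero boundary values, using weighted Jacobi (omega = 2/3, a standard smoother choice) with full-weighting restriction and linear interpolation:

```haskell
import Data.List (zip4)

type Grid = [Double]  -- all grid points including the two boundary ends

-- One plain Jacobi sweep: u_i <- (u_{i-1} + u_{i+1} - h^2 f_i) / 2
jacobiSweep :: Double -> Grid -> Grid -> Grid
jacobiSweep h f u =
  head u : [ (l + r - h * h * fi) / 2
           | (l, r, fi) <- zip3 u (drop 2 u) (tail f) ] ++ [last u]

-- Damped Jacobi (omega = 2/3), which smooths high frequency errors
smooth :: Double -> Grid -> Grid -> Grid
smooth h f u = zipWith (\o n -> o + (2 / 3) * (n - o)) u (jacobiSweep h f u)

-- Residual r = f - u'' (zero at the boundary)
residual :: Double -> Grid -> Grid -> Grid
residual h f u =
  0 : [ fi - (l - 2 * c + r) / (h * h)
      | (l, c, r, fi) <- zip4 u (tail u) (drop 2 u) (tail f) ] ++ [0]

-- Full-weighting restriction onto every second point
restrict1D :: Grid -> Grid
restrict1D v =
  head v : [ (v !! (2*j-1) + 2 * v !! (2*j) + v !! (2*j+1)) / 4
           | j <- [1 .. (length v - 1) `div` 2 - 1] ] ++ [last v]

-- Linear interpolation back to the fine grid
interp1D :: Grid -> Grid
interp1D (x : y : rest) = x : (x + y) / 2 : interp1D (y : rest)
interp1D xs             = xs

-- One V-cycle of coarse grid correction
vcycle :: Double -> Grid -> Grid -> Grid
vcycle h f u
  | length u <= 3 = jacobiSweep h f u        -- one interior point: exact
  | otherwise =
      let v  = iterate (smooth h f) u !! 3   -- pre-smooth
          r' = restrict1D (residual h f v)   -- coarse residuals
          e' = vcycle (2 * h) r' (map (const 0) r')
          v' = zipWith (+) v (interp1D e')   -- correct on the fine grid
      in iterate (smooth h f) v' !! 3        -- post-smooth
```

Starting from zeros, a few V-cycles on even a modest grid should bring the approximation within discretisation error of the true solution, which is the behaviour the 2-D Repa version above aims for.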
The previous (higher order) versions of multiGrid and fullMG, which take approxOp and residualOp as arguments, can be substantially improved by importing the operations from a module and specialising the definitions for the imported functions. Thus we define a module which imports a separate module (defining approxOp and residualOp) and which exports
> multiGrid :: Monad m =>
> Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> m (Array U DIM2 Double)
> multiGrid !h !f !boundMask !boundValues !uInit
> = if coarsest uInit
> then
> do computeP $ approxOp h f boundMask boundValues uInit
> else
> do v <- iterateSolver (approxOp h f boundMask boundValues) steps1 uInit
> r <- computeP $ residualOp h f boundMask v
> boundMask' <- computeP $ coarsen boundMask
> r' <- computeP $ szipWith (*) boundMask' $ restrict r
> let zeros = zeroArray (extent r')
> err <- multiGrid (2*h) r' boundMask' zeros zeros
> vC <- computeP $ szipWith (+) v
> $ szipWith (*) boundMask
> $ interpolate err
> iterateSolver (approxOp h f boundMask boundValues) steps2 vC
and the main full multigrid operation
> fullMG :: Monad m =>
> Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> m (Array U DIM2 Double)
> fullMG !h !f !boundMask !boundValues
> = if coarsest boundValues
> then do computeP $ approxOp h f boundMask boundValues boundValues
> else do
> f' <- computeP $ restrict f
> boundMask' <- computeP $ coarsen boundMask
> boundValues' <- computeP $ coarsen boundValues
> v' <- fullMG (2*h) f' boundMask' boundValues'
> v <- computeP $ szipWith (+) boundValues
> $ szipWith (*) boundMask
> $ interpolate v'
> multiGrid h f boundMask boundValues v
These versions perform significantly better when optimised.
For the two dimensional case, where Ω is a region of the plane, Poisson’s equation has the form

∇²u(x,y) = f(x,y)

for (x,y) ∈ Ω, subject to the Dirichlet boundary condition u(x,y) = g(x,y) for (x,y) ∈ ∂Ω.
So the linear operator here is the Laplace operator ∇² = ∂²/∂x² + ∂²/∂y².
We need to make use of finite difference analysis to implement the approximation step and the residual calculation for Poisson’s equation.
Finite difference analysis with the central difference operator leads to the following (five point difference) formula approximating Poisson’s equation

u(i−1,j) + u(i+1,j) + u(i,j−1) + u(i,j+1) − 4 u(i,j) = h² f(i,j)

Rewriting the above as

u(i,j) = ( u(i−1,j) + u(i+1,j) + u(i,j−1) + u(i,j+1) − h² f(i,j) ) / 4

gives us an algorithm (approxOp) for the approximating iteration step. This is known as the Jacobi method. We map a suitable stencil to add the four neighbours of each item, then we add corresponding elements of the array f (which stores the negative of the right hand side) multiplied by h², then we divide by 4 and reset the boundary with the mask and values.
> {-# INLINE approxOp #-}
> approxOp :: Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array (TR PC5) DIM2 Double
> approxOp !h !f !boundMask !boundValues !arr
> = szipWith (+) boundValues
> $ szipWith (*) boundMask
> $ smap (/4)
> $ szipWith (+) (R.map (*hSq) f )
> $ mapStencil2 (BoundConst 0)
> [stencil2| 0 1 0
> 1 0 1
> 0 1 0 |] arr
> where hSq = h*h
We have chosen to leave the resulting array in a non-manifest form so that inlining can combine this operation with any subsequent operations to optimise the combination.
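To see the arithmetic of this Jacobi step in isolation, here is a hedged list-of-lists sketch (jacobiStep2D is our name, not the Repa code). It mirrors approxOp's stencil-plus-f step with the BoundConst 0 behaviour, leaving out the subsequent mask and boundary reset:

```haskell
-- A pure analogue of one Jacobi step (illustration only):
-- u_ij <- (sum of 4 neighbours + h^2 * f_ij) / 4, where, as in the
-- article, the f array stores the NEGATIVE of the right hand side.
-- Out-of-range neighbours count as 0 (like BoundConst 0); the caller
-- is expected to reset the boundary afterwards.
jacobiStep2D :: Double -> [[Double]] -> [[Double]] -> [[Double]]
jacobiStep2D h f u =
  [ [ ( at (i-1) j + at (i+1) j + at i (j-1) + at i (j+1)
      + h * h * (f !! i !! j) ) / 4
    | j <- [0 .. cols - 1] ]
  | i <- [0 .. rows - 1] ]
  where
    rows = length u
    cols = length (head u)
    at i j
      | i < 0 || j < 0 || i >= rows || j >= cols = 0
      | otherwise                                = u !! i !! j
```

For a 3 by 3 grid with f zero, the centre value becomes the plain average of its four neighbours, exactly the formula above.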
To calculate residuals r = Lv − f from an estimate v, we can use the same five point formula to approximate ∇²v. Rearranging, we have

r(i,j) = ( v(i−1,j) + v(i+1,j) + v(i,j−1) + v(i,j+1) − 4 v(i,j) ) / h² − f(i,j)

This leads us to implement residualOp with a five point stencil that adds four neighbours and subtracts four times the middle item. After this we divide all items by h², then add the array f (which stores the negative of the right hand side) and reset the boundary.
> {-# INLINE residualOp #-}
> residualOp :: Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array (TR PC5) DIM2 Double
> residualOp !h !f !boundMask !v
> = szipWith (*) boundMask
> $ szipWith (+) f
> $ smap (*hFactor)
> $ mapStencil2 (BoundConst 0)
> [stencil2| 0 1 0
> 1 -4 1
> 0 1 0 |] v
> where hFactor = 1/(h*h)
As noted earlier, the boundary is reset simply with the mask. The boundary values should all be zero for residuals, so we can drop any resetting of boundary values as the mask will have done that already.
The above two functions (approxOp and residualOp) are put in a module PoissonOps to be imported by the module MultiGrid_Poisson.
This example is taken from (Iserles 2009 p.162). On the unit square [0,1] × [0,1] we solve

∇²u = x² + y²

The Dirichlet boundary conditions are

u(x,0) = 0, u(x,1) = x²/2, u(0,y) = sin(πy)

and

u(1,y) = e^π sin(πy) + y²/2
We define the following to calculate values from the number of grids to be used.
> fineGridSpaces:: Int -> Int
> fineGridSpaces gridStackSize
> = 2^gridStackSize
> fineGridShape:: Int -> DIM2
> fineGridShape gridStackSize
> = (Z :. n :. n) where n = 1+fineGridSpaces gridStackSize
We will set a global value for the sides of the area (the unit square) and also set a stack size of 9 grids, so the finest will have 2^9+1 by 2^9+1 (i.e. 513 by 513) grid points
> distance :: Double
> distance = 1.0 -- length of sides for square area
> gridStackSize :: Int
> gridStackSize = 9 -- number of grids to be used including finest and coarsest
> intervals :: Int
> intervals = fineGridSpaces gridStackSize
> shapeInit :: DIM2
> shapeInit = fineGridShape gridStackSize
> hInit :: Double
> hInit = distance / fromIntegral intervals -- initial finest grid spacing
The initial arrays are defined using
> boundMask :: Array U DIM2 Double
> boundMask =
> fromListUnboxed shapeInit $ concat
> [edgeRow,
> take ((intervals-1)*(intervals+1)) (cycle mainRow),
> edgeRow
> ]
> where edgeRow = replicate (intervals+1) 0.0
> mainRow = 0.0: replicate (intervals-1) 1.0 Prelude.++ [0.0]
> coordList :: [Double]
> coordList = Prelude.map ((hInit*) . fromIntegral) [0..intervals]
> boundValues :: Array U DIM2 Double
> boundValues =
> fromListUnboxed shapeInit $ concat
> [Prelude.map (\j -> sin (pi*j)) coordList,
> concat $ Prelude.map mainRow $ tail $ init coordList,
> Prelude.map (\j -> exp pi * sin (pi*j) + 0.5*j^2) coordList
> ]
> where mainRow i = replicate intervals 0.0 Prelude.++ [0.5*i^2]
> fInit :: Array U DIM2 Double
> fInit = -- negative of RHS of Poisson Equation
> fromListUnboxed shapeInit $ concat $ Prelude.map row coordList
> where row i = Prelude.map (item i) coordList
> item i j = -(i^2 + j^2)
> uInit :: Array U DIM2 Double
> uInit = boundValues
We are now in a position to calculate
> test1 = multiGrid hInit fInit boundMask boundValues uInit
and
> test2 = fullMG hInit fInit boundMask boundValues
as well as comparing with iterations of the basic approximation operation on its own
> solverTest :: Monad m => Int -> m(Array U DIM2 Double)
> solverTest n = iterateSolver (approxOp hInit fInit boundMask boundValues) n uInit
We note that this particular example of Poisson’s equation does have a known exact solution

u(x,y) = e^(πx) sin(πy) + x²y²/2

So we can calculate an array with values for the exact solution and look at the errors. We just look at the maximum error here.
> exact :: Array U DIM2 Double
> exact =
> fromListUnboxed shapeInit $
> concat $ Prelude.map row coordList
> where row i = Prelude.map (item i) coordList
> item i j = exp (pi*i) * sin (pi*j) + 0.5*i^2*j^2
> maxError :: Monad m => m(Array U DIM2 Double) -> m (Double)
> maxError test =
> do ans <- test
> err :: Array U DIM2 Double
> <- computeP
> $ R.map abs
> $ R.zipWith (-) ans exact
> foldAllS max 0.0 err
The Jacobi method used for the approximation step is known to converge very slowly, but fullMG significantly improves convergence rates. The literature analysing multigrid and full multigrid shows that the errors are reduced rapidly if the approximation method smooths the high frequency errors quickly (error components with a wavelength less than half the grid size). Unfortunately the Jacobi method does not do this. Gauss-Seidel and SOR methods do smooth high frequency errors, but their usual implementation relies on in-place updating of arrays, which is problematic for parallel evaluation. However, although multiGrid with the Jacobi approxOp does not reduce errors rapidly, fullMG does (even with only gridStackSize=7 and steps1=steps2=3). This shows that the improved initial guess provided by fullMG outweighs the slow reduction of errors in multiGrid.
Clearly this code is designed to be run on multi-cores to use the parallelism. However, even using a single processor, we can see the impressive performance of Repa with optimisations on some simple tests of the Poisson example (running on a 2.13 GHz Intel Core 2 Duo).
| Grid Stack | Fine Grid | CPU Time (ms) | Max Error |
|---|---|---|---|
| 6 | 65×65 | 22 | 2.1187627917047536e-3 |
| 7 | 129×129 | 83 | 5.321669730857792e-4 |
| 8 | 257×257 | 276 | 1.3324309721163274e-4 |
| 9 | 513×513 | 1417 | 3.332619984242058e-5 |
| 10 | 1025×1025 | 6017 | 8.332744698691386e-6 |
| 11 | 2049×2049 | 26181 | 2.0832828990791086e-6 |
So increasing the grid stack size by 1 roughly quadruples the size of the fine grid, increases runtime by a similar factor, and improves the error by a factor of 4 as well.
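The factor-of-4 claim can be checked directly from the Max Error column of the table. A small hedged snippet (not part of the program), consistent with the second order accuracy of the five point scheme (error proportional to h²):

```haskell
-- The max-error column from the fullMG table above; successive ratios
-- should be close to 4 since each step halves the grid spacing h and
-- the error scales with h^2.
maxErrors :: [Double]
maxErrors =
  [ 2.1187627917047536e-3, 5.321669730857792e-4, 1.3324309721163274e-4
  , 3.332619984242058e-5, 8.332744698691386e-6, 2.0832828990791086e-6 ]

errorRatios :: [Double]
errorRatios = zipWith (/) maxErrors (tail maxErrors)
```

All of the ratios come out very close to 4, supporting the observation above.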
As a comparison, multiGrid alone (with the Jacobi approxOp) has high errors, and increasing the grid stack size does not help.
| Grid Stack | CPU Time (ms) | Max Error |
|---|---|---|
| 6 | 14 | 1.116289597967647 |
| 7 | 49 | 1.1682431240072333 |
| 8 | 213 | 1.1912130219418806 |
| 9 | 1118 | 1.2018233367410431 |
| 10 | 4566 | 1.206879281001907 |
| 11 | 20696 | 1.209340584664126 |
To see the problems with the basic Jacobi method on its own, we note that after 400 iteration steps (with a grid stack size of 6) the max error is 5.587133822631921. Increasing the grid stack size just makes this worse.
As we had observed in previous work (Repa Laplace and SOR), the large amount of inline optimisation seems to trigger a GHC compiler bug warning. This is resolved by adding the compiler option -fsimpl-tick-factor=1000 (along with -O2).
We also tried two variations on approxOp.

The first was to add an extra relaxation step which weights the old value with the new value at each step, using a weighting ω between 0 and 1. This slightly increases runtime, but does not improve errors for fullMG. Its purpose is to improve smoothing of errors, but as we have seen, although this may be significant for multiGrid, it is not for fullMG. The improved starting points are much more significant for the latter. Note that successive over-relaxation (SOR) with ω between 1 and 2 is not suitable for use with the Jacobi method in general, as this is likely to diverge.
omega :: Double
omega = 2/3
{-# INLINE approxOp #-}
approxOp :: Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array (TR PC5) DIM2 Double
approxOp !h !f !boundMask !boundValues !arr
= szipWith weightedSum arr -- extra relaxation step
$ szipWith (+) boundValues
$ szipWith (*) boundMask
$ smap (/4)
$ szipWith (+) (R.map (*hSq) f )
$ mapStencil2 (BoundConst 0)
[stencil2| 0 1 0
1 0 1
0 1 0 |] arr
where hSq = h*h
weightedSum old new = (1-omega) * old + omega * new
The second variation is based on the ‘modified 9 point scheme’ discussed in (Iserles 2009 p.162). This has slower runtimes than the 5 point Jacobi scheme as it involves two stencil mappings (one across f as well as one across u). It had no noticeable improvement on error reduction for fullMG.
-- modified 9 point scheme
approxOp :: Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array U DIM2 Double
-> Array (TR PC5) DIM2 Double
approxOp !h !f !boundMask !boundValues !arr
= R.szipWith (+) boundValues
$ R.szipWith (*) boundMask
$ R.smap (/20)
$ R.szipWith (+)
(R.map (*hFactor)
$ mapStencil2 (BoundConst 0)
[stencil2| 0 1 0
1 8 1
0 1 0 |] f )
$ mapStencil2 (BoundConst 0)
[stencil2| 1 4 1
4 0 4
1 4 1 |] arr
where hFactor = h*h/2
This algorithm was obtained from rearranging the formula

4( u(i−1,j) + u(i+1,j) + u(i,j−1) + u(i,j+1) ) + u(i−1,j−1) + u(i−1,j+1) + u(i+1,j−1) + u(i+1,j+1) − 20 u(i,j) = (h²/2)( 8 f(i,j) + f(i−1,j) + f(i+1,j) + f(i,j−1) + f(i,j+1) )
The important point to note is that this code is ready to run using parallel processing. Results using multi-core processors will be posted in a separate blog.
Iserles (2009) gives some analysis of the multigrid method and this book also provided the example used (and the modified 9 point scheme). There are numerous other finite methods books covering multigrid and some devoted to it. Online there are some tutorial slides by Briggs part1 and part2 and some by Stüben amongst many others. A paper by Lippmeier, Keller, and Peyton Jones discussing Repa design and optimisation for stencil convolution in Haskell can be found here.
Iserles, Arieh. 2009. A First Course in Numerical Analysis of Differential Equations. Second edition. CUP.
In a previous blog (Repa Laplace and SOR) I used Repa to implement a Laplace solver using the red-black scheme. The explanation of alternating stencils probably needed a diagram, so here it is.
This diagram illustrates the shapes of the stencils for adding neighbours of red and black cells. It shows that two different stencils are needed (for odd and even rows) and these are swapped over for red and black.
(The diagram was produced with Haskell Diagrams)
This describes the result of some experiments to enhance an existing elegant (Haskell parallel array) Laplace solver by using successive over-relaxation (SOR) techniques. The starting point was a Laplace solver based on stencil convolution, using the Haskell Repa library (Regular Parallel Arrays), as reported in a paper by Lippmeier, Keller and Peyton Jones. (There is also a 2011 published paper by the first two authors (Lippmeier and Keller 2011).)
The Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0

is usually abbreviated to simply ∇²u = 0, where u is a scalar potential (such as temperature) over a smooth surface.
For numerical solutions we will restrict attention to a rectangular surface with some fixed boundary values for u. (The boundary is not necessarily the border of the rectangle, as we can allow for fixed values at internal regions as well.) The problem is made discrete for finite difference methods by imposing a grid of points over the surface, and here we will assume this is spaced evenly (hx in the x direction and hy in the y direction). Approximating solutions numerically amounts to approximating a fixed point for the grid values satisfying the boundary values and also, for non-boundary points (i,j), satisfying

u(i,j) = ( hy²( u(i−1,j) + u(i+1,j) ) + hx²( u(i,j−1) + u(i,j+1) ) ) / ( 2(hx² + hy²) )

If we also assume hx = hy then this simplifies to

u(i,j) = ( u(i−1,j) + u(i+1,j) + u(i,j−1) + u(i,j+1) ) / 4
Iterating to find this fixed point from some suitable starting values is called relaxation. The simplest (Jacobi) method involves iterating the calculation of new values \( u^{(n+1)}_{i,j} \) from previous values \( u^{(n)}_{i,j} \) until the iterations are close enough to converging.
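As a concrete (if naive) illustration of one Jacobi step, here is a list-based sketch in plain Haskell, treating out-of-range neighbours as 0 in the same spirit as the border directive used in the Repa code later (jacobiStep is an illustrative name, not part of the solver code):

```haskell
-- A minimal list-based sketch of one Jacobi relaxation step.
-- Illustrative only; the Repa version below is the real implementation.
type Grid = [[Double]]

jacobiStep :: Grid -> Grid
jacobiStep g =
  [ [ avgNbs i j | j <- [0 .. cols - 1] ] | i <- [0 .. rows - 1] ]
  where
    rows = length g
    cols = length (head g)
    -- Out-of-range neighbours count as 0.
    at i j
      | i < 0 || i >= rows || j < 0 || j >= cols = 0
      | otherwise = g !! i !! j
    -- Average of the north, south, west, and east neighbours.
    avgNbs i j = (at (i-1) j + at (i+1) j + at i (j-1) + at i (j+1)) / 4
```

Boundary handling (masks and fixed values) is ignored here; it is dealt with separately in the Repa implementation.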
This method, whilst very simple, can be extremely slow to converge. One improvement is given by the Gauss-Seidel method, which assumes the next set of values in each iteration is calculated row by row in a scan across the columns, allowing some new values to be used earlier during the same iteration step (changing \( u^{(n)} \) to \( u^{(n+1)} \) for already-calculated values in the previous column and in the previous row).
Unfortunately, this assumption about the order of evaluation of array elements interferes with opportunities to use parallel computation of the array elements (as we wish to do using Repa array primitives).
Successive over-relaxation is a method used to speed up convergence of the iterations by using a weighted sum of the previous iteration with the new one at each relaxation step. Using \( \omega \) as the weight we calculate \( u'_{i,j} \) first (substituting \( u' \) for \( u^{(n+1)} \) in the above equation), then calculate
\[ u^{(n+1)}_{i,j} = (1 - \omega)\, u^{(n)}_{i,j} + \omega\, u'_{i,j} \]
Values of \( \omega \) less than 1 will slow down the convergence (under-relaxation). A value of \( \omega \) between 1 and 2 should speed up convergence. In fact the optimal value for \( \omega \) will vary for each iteration, but we will work with constant values only here. It is also important to note that starting with values of \( \omega \) above 1 will generally cause oscillations (divergence) if just the Jacobi scheme is used, so successive over-relaxation relies on a speed-up of the basic iteration such as that provided by the Gauss-Seidel scheme.
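The weighted update itself is a one-liner; as a standalone sketch (with omega passed explicitly, mirroring the weightedSum function used later in the solver):

```haskell
-- Successive over-relaxation update: blend the previous value with the
-- newly computed one. omega = 1 recovers the plain update; 1 < omega < 2
-- over-relaxes; omega < 1 under-relaxes.
sorUpdate :: Double -> Double -> Double -> Double
sorUpdate omega old new = (1 - omega) * old + omega * new
```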
In the paper a stencil mapping technique is used to get fast performance using the Repa array primitives. The code uses the Repa library version 3
The main relaxation step is expressed as
relaxLaplace arr
  = computeP
  $ szipWith (+) arrBoundValue   -- boundary setting
  $ szipWith (*) arrBoundMask    -- boundary clearing
  $ smap (/ 4)
  $ mapStencil2 (BoundConst 0)
      [stencil2| 0 1 0
                 1 0 1
                 0 1 0 |] arr
We will be taking this as a starting point, so we review details of this (working from the bottom up).
A stencil is described in a quoted form to express which neighbours are to be added (using zeros and ones), with the element being scanned assumed to be the middle item. Thus this stencil describes adding the north, west, east, and south neighbours for each item. The mapStencil2 is a Repa array primitive which applies the stencil to all elements of the previous iteration array (arr). It is also supplied with a border directive (BoundConst 0) which just says that any stencil item arising from indices outside the array is to be treated as 0. This stencil mapping will result in a non-manifest array (in fact, a partitioned array split up into border parts and non-border parts to aid later parallel computation). After the stencil map there is a mapping across all the results to divide by 4 using smap (which improves over the basic Repa array map for partitioned arrays). These calculations implement the Jacobi scheme.
The subsequent two szipWith operations are used to reset the fixed boundary values. (The primitive szipWith again improves over the basic Repa array zipWith by catering explicitly for partitioned arrays.) More specifically, szipWith (*) is used with an array (arrBoundMask) which has zero for boundary positions and 1 for other positions, thus setting the boundary items to 0. After this, szipWith (+) is used with an array (arrBoundValue) which has the fixed initial values for the boundary items and zero for all other items. Thus the addition reinstates the initial boundary values.
These array calculations are all delayed, so a final computeP is used to force the parallel evaluation of items to produce a manifest array result. This technique allows a single pass, with inlining and fusion optimisations of the code, for the calculation of each element. Use of computeP requires the whole calculation to be part of a monad computation. This is a type constraint for the primitive to exclude the possibility of nested calls of computeP overlapping.
A significant advantage of this stencil implementation is that programming directly with array indices is avoided (these are built into the primitives once and for all and are implemented to avoid array bound checks being repeated unnecessarily).
Unfortunately, though, this implementation is using the Jacobi scheme for the iteration steps which is known to have very slow convergence.
We would like to improve on this by using successive over-relaxation, but as pointed out earlier, this will not work with the Jacobi scheme. Furthermore, the Gauss-Seidel improvement will not help because it is based on retrieving values from an array still being updated. This is not compatible with functional array primitives and stencils and prevents simple exploitation of parallel array calculations.
To the rescue – the red-black scheme for calculating iterations.
This scheme is well known and based on the observation that on a red-black checkerboard, for any red square, the north, west, east, and south neighbours are all black and conversely neighbours of black squares are all red. This leads to the simple idea of calculating updates to all the red squares first, then using these updated values to calculate the new black square values.
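This neighbour-colour property is easy to check directly. The following is a small standalone Haskell sketch (the function names here are illustrative, not part of the solver):

```haskell
-- Colour a cell red when i+j is even, black otherwise.
isRed :: (Int, Int) -> Bool
isRed (i, j) = even (i + j)

-- North, west, east, and south neighbours of a cell.
neighbours :: (Int, Int) -> [(Int, Int)]
neighbours (i, j) = [(i - 1, j), (i, j - 1), (i, j + 1), (i + 1, j)]

-- Every neighbour of a cell has the opposite colour.
oppositeColours :: (Int, Int) -> Bool
oppositeColours c = all (\n -> isRed n /= isRed c) (neighbours c)
```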
We split the original array into a red array and a black array with simple traverse operations. We assume the original array (arr) has an even number of columns, so the numbers of red and black cells are the same. As a convention we will take the (0,0) item to be red, so we want, for the red array (r):
r(i,j) = arr(i, 2*j + (i `mod` 2))
where
arr has i <- 0..n-1, j <- 0..2m-1
r has i <- 0..n-1, j <- 0..m-1
The traverse operation from the Repa library takes as arguments the original array, a mapping to express the change in shape, and a lookup function to calculate values in the new array (when given a get operation for the original array and a coordinate (i,j) in the new array).
> projectRed :: Array U DIM2 Double -> Array D DIM2 Double
> projectRed arr =
> traverse arr
> (\ (e :. i :. j) -> (e :. i :. (j `div` 2)))
> (\get (e :. i :. j) -> get (e :. i :. 2*j + (i `mod` 2)))
Here (and throughout) we have restricted the type to work with two-dimensional arrays of Double, although a more general type is possible. Notice also that the argument array is assumed to be manifest (U) and the result is a delayed array (D).
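As a quick sanity check on the index arithmetic (a standalone sketch with illustrative names, separate from the Repa code): the source cell selected for red position (i,j) always has even parity, and the black counterpart (used below) always has odd parity.

```haskell
-- The cells in the original array that red/black cell (i, j) project from.
redSource, blackSource :: (Int, Int) -> (Int, Int)
redSource   (i, j) = (i, 2 * j + (i `mod` 2))
blackSource (i, j) = (i, 2 * j + ((i + 1) `mod` 2))

-- Red sources land on even-parity cells, black sources on odd-parity cells.
sourceParityOK :: (Int, Int) -> Bool
sourceParityOK ij =
  even (fst (redSource ij) + snd (redSource ij))
    && odd (fst (blackSource ij) + snd (blackSource ij))
```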
Similarly, for the black array (b) we want

b(i,j) = arr(i, 2*j + ((i+1) `mod` 2))

with the same extent (shape) as for r, hence
> projectBlack :: Array U DIM2 Double -> Array D DIM2 Double
> projectBlack arr =
> traverse arr
> (\ (e :. i :. j) -> (e :. i :. (j `div` 2)))
> (\ get (e :. i :. j) -> get(e :. i :. 2*j + ((i+1) `mod` 2)))
We can also use these same functions to set up boundary mask and value arrays separately for red and black from the starting array (arrInit), initial mask (arrBoundMask), and boundary values (arrBoundValue):
do redBoundMask <- computeP $ projectRed arrBoundMask
blackBoundMask <- computeP $ projectBlack arrBoundMask
redBoundValue <- computeP $ projectRed arrBoundValue
blackBoundValue <- computeP $ projectBlack arrBoundValue
redInit <- computeP $ projectRed arrInit
blackInit <- computeP $ projectBlack arrInit
This is part of a monad computation, with each step using a computeP (parallel array computation) to create manifest versions of each of the arrays we need before beginning the iterations. These calculations are independent, so the sequencing is arbitrary.
At the end of the iterations we will reverse the split into red and black by combining, using the traverse2 operation from the Repa library. We need
arr(i,j) = r(i, j `div` 2) when even(i+j)
= b(i, j `div` 2) otherwise
where
arr has i <- 0..n-1 , j <- 0..2m-1
r has i <- 0..n-1 , j <- 0..m-1
b has i <- 0..n-1 , j <- 0..m-1
The traverse2 operation takes two arrays (here, of the same extent), a mapping to express the new extent (when given the two extents of the argument arrays), and a function to calculate values in the new array (when given the respective get operations for the original arrays (here get1 and get2) and a coordinate (i,j) in the new array).
> combineRB r b =
> traverse2 r b
> (\ (e :. i :. j) _ -> (e :. i :. 2*j))
> (\ get1 get2 (e :. i :. j) ->
> (if even(i+j) then get1 else get2) (e :. i :. j `div` 2)
> )
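To see that this recombination really inverts the two projections, here is a list-based sketch of the same index arithmetic (splitRed, splitBlack, and recombine are illustrative stand-ins for the Repa versions); recombining the two halves recovers the original grid:

```haskell
-- List-based versions of the red/black split and the recombine,
-- for checking the index arithmetic (the Repa versions use traverse
-- and traverse2). Assumes an even number of columns.
splitRed, splitBlack :: [[a]] -> [[a]]
splitRed arr =
  [ [ row !! (2 * j + i `mod` 2) | j <- [0 .. length row `div` 2 - 1] ]
  | (i, row) <- zip [0 :: Int ..] arr ]
splitBlack arr =
  [ [ row !! (2 * j + (i + 1) `mod` 2) | j <- [0 .. length row `div` 2 - 1] ]
  | (i, row) <- zip [0 :: Int ..] arr ]

-- Interleave the two halves back: red cells where i+j is even,
-- black cells otherwise.
recombine :: [[a]] -> [[a]] -> [[a]]
recombine r b =
  [ [ (if even (i + j) then rrow else brow) !! (j `div` 2)
    | j <- [0 .. 2 * length rrow - 1] ]
  | (i, rrow, brow) <- zip3 [0 :: Int ..] r b ]
```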
We use stencils as in the original, but when we consider a stencil on black cells which are to be combined as neighbours of the red cell r(i,j) we need the following shape, where b(i,j) corresponds to the middle item
0 1 0
1 1 0
0 1 0
BUT this is only when processing EVEN rows from the black array. For odd rows we need a different shape
0 1 0
0 1 1
0 1 0
That is, we need to apply one of two different stencils depending on whether the row is even or odd. Similarly, in processing the red array to combine red neighbours of b(i,j), we need the same shaped stencils, but the first shape above is used on odd rows of red cells and the second shape on even rows. We define and name the stencils as leftSt and rightSt:
> leftSt :: Stencil DIM2 Double
> leftSt = [stencil2| 0 1 0
> 1 1 0
> 0 1 0 |]
> rightSt :: Stencil DIM2 Double
> rightSt = [stencil2| 0 1 0
> 0 1 1
> 0 1 0 |]
Critically, we will need an efficient way to apply alternate stencils as we map across an array. At first, it may seem that we might have to rebuild a version of the primitive mapStencil2 to accommodate this, but that would get us into some complex array representation handling which is built into that primitive. On reflection, though, the combination of lazy evaluation and smart inlining/fusion optimisation by the compiler should allow us to simply apply both stencils to all elements and then choose the results we actually want. The laziness should ensure that the unwanted stencil applications are not actually calculated, and the administration of choosing should be simplified by compiler optimisations.
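The laziness argument can be illustrated with plain Haskell lists: selecting rows alternately from two results never forces the rows that are not chosen. In the sketch below (illustrative names only), the unselected rows are error thunks, yet consuming the result raises no error because those thunks are never evaluated.

```haskell
-- Select even-indexed elements from the first list and odd-indexed
-- elements from the second.
altRowsList :: [a] -> [a] -> [a]
altRowsList xs ys =
  [ if even i then x else y | (i, (x, y)) <- zip [0 :: Int ..] (zip xs ys) ]

-- The unselected elements are errors; laziness means they are never
-- touched when we consume the result.
demo :: [Int]
demo = altRowsList [10, error "not needed", 30]
                   [error "not needed", 20, error "not needed"]
```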
To apply our stencils we will use the Repa array primitive mapStencil2, which takes as arguments a border handling directive, a stencil, and a (manifest) array, to produce a resulting (delayed) array:
mapStencil2 :: Boundary Double
-> Stencil DIM2 Double
-> Array U DIM2 Double
-> Array D DIM2 Double
The choice of stencil will depend on the position (actually the evenness of the row). As we cannot refer to the indices when using the stencil mapping operation, we are led to using traverse2 after mapping both stencils, to select results from the two arrays produced. Our alternate stencil mapping operation will have a similar type to mapStencil2 except that it expects two stencils rather than one:
altMapStencil2 :: Boundary Double
-> Stencil DIM2 Double
-> Stencil DIM2 Double
-> Array U DIM2 Double
-> Array D DIM2 Double
altMapStencil2 !bd !s1 !s2 !arr
= traverse2 (mapStencil2 bd s1 arr) (mapStencil2 bd s2 arr)
(\ e _ -> e)
(\ get1 get2 (e :. i :. j) ->
(if even i then get1 else get2) (e :. i :. j)
)
This function needs to be inlined (with a pragma) and the bang annotations on arguments are there to improve optimisation opportunities.
The main relaxLaplace iteration step involves the following (where r and b are the previous red and black arrays):
r1 = smap (/4)
     $ altMapStencil2 (BoundConst 0) leftSt rightSt b
r2 = szipWith weightedSum r r1
     where weightedSum old new = (1-omega)*old + omega*new
r' = szipWith (+) redBoundValue    -- boundary resetting
     $ szipWith (*) redBoundMask r2 -- boundary clearing
This is combined in the following monad computation (where r and b are the old arrays). The monad is necessary because we want to use computeP to ensure that the final returned arrays are manifest.
do r' <- computeP
$ relaxStep r b redBoundValue redBoundMask leftSt rightSt
b' <- computeP
$ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
...
which uses
relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2
= szipWith (+) boundValue
$ szipWith (*) boundMask
$ szipWith weightedSum arrOld
$ smap (/4)
$ altMapStencil2 (BoundConst 0) stencil1 stencil2 arrNbs
weightedSum !old !new = (1-omega)*old + omega*new
The first argument of relaxStep is the old (red or black) array we are calculating an update for, and the second argument is the neighbour array to use the stencils on. The old array is needed for the over-relaxation step with weightedSum.
It is also worth pointing out that the over-relaxation step is done before the two boundary resetting steps, but it could just as easily be done after the boundary resetting, as this makes no difference to the final array produced.
Finally, the main function (solveLaplace) sets up the initial arrays and passes them to the function with the main loop (iterateLaplace):
> solveLaplace ::
>   Monad m
>   => Int                  -- ^ Number of iterations to use.
>   -> Double               -- ^ Weight for over-relaxing (>0.0 and <2.0).
>   -> Array U DIM2 Double  -- ^ Boundary value mask.
>   -> Array U DIM2 Double  -- ^ Boundary values.
>   -> Array U DIM2 Double  -- ^ Initial state. Should have an even number of columns.
>   -> m (Array U DIM2 Double)
>
> solveLaplace !steps !omega !arrBoundMask !arrBoundValue !arrInit =
> do redBoundMask    <- computeP $ projectRed   arrBoundMask
>    blackBoundMask  <- computeP $ projectBlack arrBoundMask
>    redBoundValue   <- computeP $ projectRed   arrBoundValue
>    blackBoundValue <- computeP $ projectBlack arrBoundValue
>    redInit         <- computeP $ projectRed   arrInit
>    blackInit       <- computeP $ projectBlack arrInit
>    iterateLaplace steps omega redInit blackInit
>                   redBoundValue blackBoundValue redBoundMask blackBoundMask
  where
    iterateLaplace !steps !omega !redInit !blackInit
                   !redBoundValue !blackBoundValue !redBoundMask !blackBoundMask
      = go steps redInit blackInit
      where
        go 0 !r !b = computeP $ combineRB r b -- return final combined array
        go n !r !b
          = do r' <- computeP
                     $ relaxStep r b redBoundValue redBoundMask leftSt rightSt
               b' <- computeP
                     $ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
               go (n - 1) r' b'
        {-# INLINE relaxStep #-}
        relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2
          = szipWith (+) boundValue
          $ szipWith (*) boundMask
          $ szipWith weightedSum arrOld
          $ smap (/4)
          $ altMapStencil2 (BoundConst 0) stencil1 stencil2 arrNbs
        {-# INLINE weightedSum #-}
        weightedSum !old !new = (1-omega)*old + omega*new
    {-# INLINE iterateLaplace #-}
The number of calculations in an iteration of red-black is comparable to the original stencil implementation (with just the weighting operations added in). Although there are two arrays to update, they are half the size of the original, so we would expect optimised performance to be only fractionally slower than the original. The speed-up in progress towards convergence can be dramatic (one iteration per 8 of the original for our test examples). So, in principle, this would be a big improvement. Unfortunately, optimisation did not seem to be achieving the same speed-ups as the original, and the code was roughly 12 times slower.
After much experimentation it looked as though the inner traverse2 operation of altMapStencil2 was inhibiting the optimisations (fusions with the stencil mapping code and with subsequent maps and zips).
A better performance was achieved by separating the stencil mapping from the traverse2 for alternate row selection, and delaying the alternate row selection until after all the other operations. The new version drops altMapStencil2 and instead simply uses altRows:
> altRows :: forall r1 r2 a . (Source r1 a, Source r2 a)
> => Array r1 DIM2 a -> Array r2 DIM2 a -> Array D DIM2 a
>
> altRows !arr1 !arr2 = -- assumes argument arrays with the same shape
> traverse2 arr1 arr2
> (\ e _ -> e)
> (\ get1 get2 e@(_ :. i :. _) ->
> if even i then get1 e else get2 e
> )
>
> {-# INLINE altRows #-}
The function relaxStep in the following revised version of iterateLaplace has altRows done last, with mapStencil2 applied first (using a different stencil in each alternative):
> iterateLaplace ::
> Monad m
> => Int
> -> Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> Array U DIM2 Double
> -> m (Array U DIM2 Double)
>
> iterateLaplace !steps !omega !redInit !blackInit
> !redBoundValue !blackBoundValue !redBoundMask !blackBoundMask
> = go steps redInit blackInit
> where
> go 0 !r !b = computeP $ combineRB r b -- return final combined array
> go n !r !b
> = do r' <- computeP
> $ relaxStep r b redBoundValue redBoundMask leftSt rightSt
> b' <- computeP
> $ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
> go (n - 1) r' b'
>
> {-# INLINE relaxStep #-}
> relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2
> = altRows (f stencil1) (f stencil2)
> where
> {-# INLINE f #-}
> f s = szipWith (+) boundValue
> $ szipWith (*) boundMask
> $ szipWith weightedSum arrOld
> $ smap (/4)
> $ mapStencil2 (BoundConst 0) s arrNbs
>
> {-# INLINE weightedSum #-}
> weightedSum !old !new = (1-omega)*old + omega*new
>
> {-# INLINE iterateLaplace #-}
This now seems to be only a little slower to run than the original stencil solution (about 4 times slower, so about 3 times faster than the previous red-black version). Thus, with an approximately 8-fold speed-up in convergence, this does give an overall improvement.
In order to compile this version, it was necessary to use not just the ghc -O2 flag, but also -fsimpl-tick-factor=1000. The need for this flag was indicated by a ghc compiler bug message.
All the code above with birdfeet (>) in this (incomplete) literate Haskell document is essentially that in the module RedBlackStencilOpt.hs (which has some extra preliminaries, imports, and argument checking in functions, elided here for clarity). This code can be found here, along with the module RedBlackStencil.hs which contains the first version. These can both be loaded by the wrapper newWrapper.hs, which is just an adaptation of the original wrapper to allow for passing the extra parameter.
There are many numerical methods books covering the relevant background mathematics, but online there are also lecture notes. In particular, notes on Computational Numerical Analysis of Partial Differential Equations by J. M. McDonough and on Numerical Solution of Laplace Equation by G. E. Urroz. T. Kadin has some tutorial notes with a diagram for the red-black scheme (and an implementation using Fortran 90). There is a useful Repa tutorial online, and more examples on parallel array fusion are reported in (Lippmeier et al. 2012).
Lippmeier, Ben, and Gabriele Keller. 2011. “Efficient Parallel Stencil Convolution in Haskell.” In Proceedings of the 4th ACM Symposium on Haskell, 59–70. Haskell ’11. New York, NY, USA: ACM. doi:10.1145/2034675.2034684. http://doi.acm.org/10.1145/2034675.2034684.
Lippmeier, Ben, Manuel Chakravarty, Gabriele Keller, and Simon Peyton Jones. 2012. “Guiding Parallel Array Fusion with Indexed Types.” SIGPLAN Not. 47 (12) (September): 25–36. doi:10.1145/2430532.2364511. http://doi.acm.org/10.1145/2430532.2364511.