# Preconditioning Overview
* Krylov methods need preconditioning to be efficient (or even effective!).
  * Reduces the total number of linear iterations
  * Each linear iteration in MOOSE includes a residual evaluation (`computeQpResidual`)
  * In exact arithmetic, Krylov methods converge in at most as many linear iterations as there are unknowns in the system
* Even though the Jacobian is never formed, JFNK methods still require preconditioning.
* MOOSE's automatic (without user intervention) preconditioning is fairly minimal.
* Many options exist for implementing improved preconditioning in MOOSE.

[](---)
# Preconditioned JFNK
* Using right preconditioning, solve

$$$ \mathbf{J} (\vec{u}_i) \mathbf{M}^{-1} (\mathbf{M}\delta \vec{u}_{i+1}) = -\vec{R}(\vec{u}_i) $$$

* $$\mathbf{M}$$ symbolically represents the preconditioning matrix or process
* Inside GMRES, we only apply the action of $$\mathbf{M}^{-1}$$ on a vector
* Recall the *unpreconditioned* JFNK approximation ($$\mathbf{M}^{-1} = \mathbf{I}$$):

$$$ \mathbf{J} (\vec{u}_i) \vec{v} \approx \frac{\vec{R}(\vec{u}_i + \epsilon \vec{v}) - \vec{R}(\vec{u}_i)}{\epsilon} $$$

* Compare to the right-preconditioned, matrix-free version:

$$$ \mathbf{J} (\vec{u}_i) \mathbf{M}^{-1}\vec{v} \approx \frac{\vec{R}(\vec{u}_i + \epsilon \mathbf{M}^{-1}\vec{v}) - \vec{R}(\vec{u}_i)}{\epsilon} $$$


[](??? * Linear system is multiplied by Identity in a specific way)
[](??? * The goal is to come up with a partial inverse of J that is close enough to make the problem easier to solve)

[](---)
# Preconditioning Matrix vs Process
* On the previous slide $$\mathbf{M}$$ represented the "Preconditioning Matrix".
* The action of $$\mathbf{M}^{-1}$$ on a vector represents the "Preconditioner" or "Preconditioning Process".
* In MOOSE the "matrix to build" and the "process to apply" with that matrix are separated.
* There are several different ways to build preconditioning matrices:
    * Default: Block Diagonal Preconditioning
    * Single Matrix Preconditioner (SMP)
    * Finite Difference Preconditioner (FDP)
    * Physics Based Preconditioner (PBP)
    * Field Split Preconditioner
* After selecting how to build a preconditioning matrix, you can then use solver options to select how to apply the preconditioner.


[](---)
# Solve Type
* The default `solve_type` for MOOSE is "Preconditioned JFNK".
* An alternative `solve_type` can be set through either the `[Executioner]` block or a `[Preconditioning/*]` sub-block (a minimal sketch follows this list).
* Valid options include:
  * `PJFNK` (default)
  * `JFNK`
  * `NEWTON`
  * `FD` (Finite Difference)
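A minimal sketch of setting an alternative `solve_type` in the `[Executioner]` block; the executioner `type` and any other parameters shown here are illustrative and depend on your problem:

```text
[Executioner]
  type = Steady          # illustrative; use whatever executioner your problem needs
  solve_type = 'NEWTON'  # requires accurate Jacobian contributions from your Kernels
[]
```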

[](---)
# PETSc Preconditioning Options
* For specifying the preconditioning process we use solver options directly (i.e. PETSc options).
* Currently the options for preconditioning with PETSc are exposed to the applications.
* This will change in the future... there will be more generic ways of specifying preconditioning parameters.
* The best place to learn about all of the preconditioning options with PETSc is the user manual.
* We use the command-line syntax, but provide places to enter it into the input file, as sketched below.
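For instance, options that would be passed to PETSc on the command line map onto `petsc_options` (flag-style options) and the `petsc_options_iname`/`petsc_options_value` pairs (key/value options). A minimal sketch, with illustrative choices:

```text
[Executioner]
...
petsc_options       = '-snes_ksp_ew'
petsc_options_iname = '-pc_type'
petsc_options_value = 'lu'
...
[]
```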

[](.center[)
[http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
[](])

[](---)
# PETSc Specific Executioner Options

[](.table-tight[)

`petsc_options` | Description
:- | :-
`-snes_ksp_ew` | Variable linear solve tolerance, useful for transient solves
`-help` | Show PETSc options during the solve

[](])
[](---)
[](.table-tight[)


`petsc_options_iname` | `petsc_options_value` | Description
:-                    | :-                    | :-
`-pc_type`            | `ilu`                 | Default for serial
                      | `bjacobi`             | Default for parallel with `-sub_pc_type ilu`
                      | `asm`                 | Additive Schwarz with `-sub_pc_type ilu`
                      | `lu`                  | Full LU, serial only
                      | `gamg`                | Geometric-Algebraic MultiGrid
                      | `hypre`               | Hypre, usually used with `boomeramg`
`-sub_pc_type`        | `ilu, lu, hypre`      | Can be used with `bjacobi` or `asm`
`-pc_hypre_type`      | `boomeramg`           | Algebraic Multigrid
`-ksp_gmres_restart`  | # (default = 30)      | Number of Krylov vectors to store
[](])
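As a hedged illustration of combining entries from this table (Additive Schwarz with ILU sub-solves and a larger GMRES restart); the specific values are illustrative, not recommendations:

```text
[Executioner]
...
petsc_options_iname = '-pc_type -sub_pc_type -ksp_gmres_restart'
petsc_options_value = 'asm      ilu          100'
...
[]
```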

[](---)
# Default Preconditioning Matrix

* Consider the fully coupled system of equations:
$$$
  \begin{align*} -\nabla \cdot k(T,s) \nabla T  &= 0 \\ - \nabla \cdot D(T,s) \nabla s  &= 0 \end{align*}
$$$

* Fully coupled Jacobian approximation
$$$
  \begin{equation*}
    \mathbf{J}(T,s) =
    \begin{bmatrix}
      \frac{\partial (R_T)_i}{\partial T_j} & \frac{\partial (R_T)_i}{\partial s_j} \\
      \frac{\partial (R_s)_i}{\partial T_j} & \frac{\partial (R_s)_i}{\partial s_j}
    \end{bmatrix}
    \approx
    \begin{bmatrix}
      \frac{\partial (R_T)_i}{\partial T_j} & \mathbf{0} \\
      \mathbf{0} & \frac{\partial (R_s)_i}{\partial s_j} \\
    \end{bmatrix}
  \end{equation*}
$$$

* For our example:
$$$
  \begin{equation*}
    \mathbf{M} \equiv
    \begin{bmatrix}
      (k(T,s) \nabla \phi_j, \nabla \psi_i) & \mathbf{0} \\
      \mathbf{0} & (D(T,s) \nabla \phi_j, \nabla\psi_i)
    \end{bmatrix} \approx \mathbf{J}(T,s)
  \end{equation*}
$$$

* This simple approach of discarding the off-diagonal blocks is how MOOSE preconditions when using the default `solve_type` with no `[Preconditioning]` block specified.

[](---)
# The Preconditioning Block

[](.left-40[)

```text
[Preconditioning]
  active = 'my_prec'

  [./my_prec]
    type = SMP
    #SMP Options Go Here!
    #Override PETSc Options Here!
  [../]

  [./other_prec]
    type = PBP
    #PBP Options Go Here!
    #Override PETSc Options Here!
  [../]
[]
```

[](])
[](.right-60[)

* The `Preconditioning` block allows you to define which type of preconditioning matrix to build and what process to apply.
* You can define multiple blocks with different names, allowing you to quickly switch out preconditioning options.
* Each sub-block takes a `type` parameter to specify the type of preconditioning matrix.
* Within the sub-blocks you can also provide other options specific to that type of preconditioning matrix.
* You can also override PETSc options here.
* Only one block can be active at a time.

[](])

[](---)
# Single Matrix Preconditioning (SMP)

* Single Matrix Preconditioner (SMP) builds one preconditioning matrix.
* You enable SMP with: `type = SMP`
* You specify which blocks of the matrix to use with:
```text
off_diag_row    = 's'
off_diag_column = 'T'
```
* Which would produce an $$\mathbf{M}$$ like this:
    $$$
    \mathbf{M} \equiv
    \begin{bmatrix}
      \left(k(T,s) \nabla \phi_j, \nabla \psi_i\right) & \mathbf{0}
      \\[3pt]
      \left(\frac{\partial D(T,s)}{\partial T_j} \nabla s, \nabla \psi_i\right) & \left(D(T,s) \nabla \phi_j, \nabla\psi_i\right)
      \end{bmatrix} \approx \mathbf{J}
     $$$
* In order for this to work you must provide a `computeQpOffDiagJacobian()` function in your Kernels that computes the required partial derivatives.
* To use *all* off-diagonal blocks, set `full = true` (see the sketch below).
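A minimal sketch of an SMP block that uses every off-diagonal entry (the sub-block name `smp_full` is arbitrary):

```text
[Preconditioning]
  [./smp_full]
    type = SMP
    full = true   # all off-diagonal blocks; requires computeQpOffDiagJacobian()
  [../]
[]
```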

[](---)
# Finite Difference Preconditioning (FDP)

* The Finite Difference Preconditioner (FDP) allows you to form a "Numerical Jacobian" by doing direct finite differences of your residual statements.
* This is extremely slow and inefficient, but is a great debugging tool because it allows you to form a nearly perfect preconditioner.
* You specify it by using: `type = FDP` (a complete block sketch appears at the end of this slide).
* You can use the same options for specifying off-diagonal blocks as SMP.
* Since FDP builds a nearly exact Jacobian, it can be useful to solve with that matrix directly instead of using JFNK.
* The finite differencing is sensitive to the differencing parameter, which can be specified using:
```text
petsc_options_iname = '-mat_fd_coloring_err -mat_fd_type'
petsc_options_value = '1e-6                 ds'
```
* *NOTE:* FDP currently works in serial only! This might change in the future, but FDP will always be meant for debugging purposes!
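Putting those pieces together, a hedged FDP sketch for debugging (the sub-block name `fdp_debug` is arbitrary):

```text
[Preconditioning]
  [./fdp_debug]
    type = FDP
    # off_diag_row / off_diag_column can be set just as with SMP
    petsc_options_iname = '-mat_fd_coloring_err -mat_fd_type'
    petsc_options_value = '1e-6                 ds'
  [../]
[]
```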

[](---)
# Examples

* Default Preconditioning Matrix, Preconditioned JFNK, monitor linear solver, variable linear solver tolerance.
* Use Hypre with algebraic multigrid and store 101 Krylov vectors.
```text
[Executioner]
...
petsc_options       = '-snes_ksp_ew -ksp_monitor'
petsc_options_iname = '-pc_type -pc_hypre_type -ksp_gmres_restart'
petsc_options_value = 'hypre    boomeramg      101'
...
[]
```

* Single Matrix Preconditioner, Fill in the (forced, diffused) block, Preconditioned JFNK, Full inverse with LU
```text
  [./SMP_jfnk]
    type = SMP

    off_diag_row    = 'forced'
    off_diag_column = 'diffused'

    petsc_options_iname = '-pc_type'
    petsc_options_value = 'lu'
  [../]
```

[Example 11](/wiki/MooseExamples/Example_11)

[](---)
# Physics Based Preconditioning

* Physics based preconditioning is an advanced concept used to more efficiently solve using JFNK.
* The idea is to create a preconditioning process that targets each physics individually.
* In this way you can create a more effective preconditioner while also maintaining efficiency.
* In MOOSE there is a `PhysicsBasedPreconditioner` object.
* This object allows you to specify, on the fly from the input file, both the preconditioning matrix to build and the operations to perform on its different blocks.

[](---)
# What the PBP Does

* The PBP applies the preconditioner by sweeping through the block rows of a preconditioning matrix (usually an approximation of the true Jacobian) in a block Gauss-Seidel fashion, approximately inverting each diagonal block in turn.

$$$
  \vec{R}(u,v) =
  \begin{bmatrix}
    \vec{r}_u
    \\
    \vec{r}_v
  \end{bmatrix}
$$$

$$$
\mathbf{M} \equiv
\begin{bmatrix}
  \frac{\partial (R_u)_i}{\partial u_j} & \mathbf{0} \\
  \frac{\partial (R_v)_i}{\partial u_j} & \frac{\partial (R_v)_i}{\partial v_j}
\end{bmatrix} \approx \mathbf{J}
$$$

$$$
\mathbf{M} \vec{q} = \vec{p} \quad \Rightarrow \quad
\begin{cases}
\frac{\partial (R_u)_i}{\partial u_j} \vec{q}_u = \vec{p}_u \\[6pt]
\frac{\partial (R_v)_i}{\partial v_j} \vec{q}_v = \vec{p}_v - \frac{\partial (R_v)_i}{\partial u_j} \vec{q}_u
\end{cases}
$$$

[](---)
# Using the PBP

[](.left-40[)
```text
[Variables]
...
[]

[Preconditioning]
  active = 'myPBP'

  [./myPBP]
    type = PBP
    solve_order = 'u v'
    preconditioner  = 'ILU AMG'
    off_diag_row    = 'v'
    off_diag_column = 'u'
  [../]
[]
```

[](])
[](.right-60[)

* Set up a PBP object for a two variable system (consisting of variables "u" and "v").
* Use ILU for the "u" block and AMG for "v".
* Use the lower off-diagonal (v, u) block.
* When using `type = PBP`, MOOSE sets `solve_type = JFNK` automatically.

[](])

[](---)
# Applying PBP

* Applying these ideas to a coupled thermo-mechanics problem

[](image:80 width:650px)
[image:80]


[Example 12](/wiki/MooseExamples/Example_12)