$$$$\int_{\Omega} f(\vec{x}) \;\text{d}\vec{x} = \sum_e \int_{\Omega_e} f(\vec{x}) \;\text{d}\vec{x}$$$$
$$$$\sum_e \int_{\Omega_e} f(\vec{x}) \;\text{d}\vec{x} = \sum_e \int_{\hat{\Omega}_e} f(\vec{\xi}) \left|\mathcal{J}_e\right| \;\text{d}\vec{\xi}$$$$
$$$$\sum_e \int_{\hat{\Omega}_e} f(\vec{\xi}) \left|\mathcal{J}_e\right| \;\text{d}\vec{\xi} \approx \sum_e \sum_{q} w_{q} f( \vec{x}_{q}) \left|\mathcal{J}_e(\vec{x}_{q})\right|$$$$
$$$\vec{x}_{q}$$$ is the spatial location of the $$$q$$$th quadrature point and $$$w_{q}$$$ is its associated weight.
MOOSE handles multiplication by the Jacobian determinant and the quadrature weight automatically; your Kernel
is only responsible for computing the $$$f(\vec{x}_{q})$$$ part of the integrand.
In certain common situations (for example, when the integrand is a polynomial of low enough degree for the chosen quadrature rule), the quadrature approximation is exact!
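As a small illustration of the element-by-element quadrature sum above, the sketch below integrates a cubic over a single 1D "element" by mapping to the reference element $$$[-1,1]$$$; the 2-point Gauss rule is exact for polynomials up to degree 3 (the numbers and helper names here are illustrative, not MOOSE code):

```python
import numpy as np

# 2-point Gauss-Legendre rule on the reference element [-1, 1]:
# exact for polynomials up to degree 2*2 - 1 = 3.
xi_q = np.array([-1.0/np.sqrt(3.0), 1.0/np.sqrt(3.0)])
w_q  = np.array([1.0, 1.0])

def integrate(f, a, b):
    """Integrate f over the physical element [a, b] by mapping to [-1, 1]."""
    jac = (b - a) / 2.0               # |J_e| for the affine map
    x_q = a + (xi_q + 1.0) * jac      # physical quadrature point locations
    return np.sum(w_q * f(x_q) * jac) # sum_q w_q f(x_q) |J_e|

# Cubic integrand: the 2-point rule reproduces the exact integral.
val = integrate(lambda x: x**3, 2.0, 5.0)
exact = (5.0**4 - 2.0**4) / 4.0       # = 152.25
```

Summing `integrate` over a partition of the domain reproduces the element sum $$$\sum_e$$$ in the formulas above.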
Note that sampling $$$u_h$$$ at the quadrature points yields: $$$$\begin{aligned} u(\vec{x}_{q}) &\approx u_h(\vec{x}_{q}) = \sum_j u_j \phi_j(\vec{x}_{q}) \\ \nabla u (\vec{x}_{q}) &\approx \nabla u_h(\vec{x}_{q}) = \sum_j u_j \nabla \phi_j(\vec{x}_{q})\end{aligned}$$$$
And our weak form becomes:
$$$$\begin{aligned} R_i(u_h) &= \sum_e \sum_{q} w_{q} \left|\mathcal{J}_e\right|\left[ \nabla\psi_i\cdot k \nabla u_h + \psi_i \left(\vec\beta\cdot \nabla u_h \right) - \psi_i f \right](\vec{x}_{q}) \\ &- \sum_f \sum_{q_{face}} w_{q_{face}} \left|\mathcal{J}_f\right| \left[\psi_i k \nabla u_h \cdot \vec{n} \right](\vec x_{q_{face}}) \end{aligned}$$$$
Kernels
must provide each of the terms in square brackets (evaluated at $$$\vec{x}_{q}$$$ or $$$\vec x_{q_{face}}$$$ as necessary). We now have a nonlinear system of equations, $$$$R_i(u_h)=0, \qquad i=1,\ldots, N$$$$ to solve for the coefficients $$$u_j, j=1,\dots,N$$$.
Because Newton's method has good convergence properties, we use it to solve this system of nonlinear equations.
This leads to the following form of Newton's Method: $$$$\begin{aligned} \mathbf{J}(\vec{u}_n) \delta\vec{u}_{n+1} &= -\vec{R}(\vec{u}_n) \\ \vec{u}_{n+1} &= \vec{u}_n + \delta\vec{u}_{n+1}\end{aligned}$$$$
Where $$$\mathbf{J}(\vec{u}_n)$$$ is the Jacobian matrix evaluated at the current iterate: $$$$J_{ij}(\vec{u}_n) = \frac{\partial R_i(\vec{u}_n)}{\partial u_j}$$$$
Note that: $$$$\frac{\partial u_h}{\partial u_j} = \sum_k\frac{\partial }{\partial u_j}\left(u_k \phi_k\right) = \phi_j \qquad \frac{\partial \left(\nabla u_h\right)}{\partial u_j} = \sum_k \frac{\partial }{\partial u_j}\left(u_k \nabla \phi_k\right) = \nabla \phi_j$$$$
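The Newton update above can be sketched on a small hand-built system; the residual and Jacobian below are illustrative stand-ins, not MOOSE code:

```python
import numpy as np

def residual(u):
    # A small nonlinear system R(u) = 0 with root u = (sqrt(2), sqrt(2)).
    return np.array([u[0]**2 + u[1]**2 - 4.0, u[0] - u[1]])

def jacobian(u):
    # J_ij = dR_i/du_j, evaluated at the current iterate.
    return np.array([[2.0*u[0], 2.0*u[1]],
                     [1.0,      -1.0]])

u = np.array([1.0, 2.0])                  # initial guess u_0
for _ in range(20):
    R = residual(u)
    if np.linalg.norm(R) < 1e-12:
        break
    du = np.linalg.solve(jacobian(u), -R) # J(u_n) du_{n+1} = -R(u_n)
    u = u + du                            # u_{n+1} = u_n + du_{n+1}
```

Each pass of the loop is one Newton iteration: a linear solve for the update followed by the additive correction.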
Consider the convection-diffusion equation with nonlinear $$$k$$$, $$$\vec{\beta}$$$, and $$$f$$$: $$$$\begin{aligned}- \nabla\cdot k\nabla u + \vec{\beta} \cdot \nabla u = f\end{aligned}$$$$
The $$$i^{th}$$$ component of the residual vector is: $$$$\begin{aligned} R_i(u_h) = \left(\nabla\psi_i, k\nabla u_h \right) - \langle\psi_i, k\nabla u_h\cdot \hat{n} \rangle + \left(\psi_i, \vec{\beta} \cdot \nabla u_h\right) - \left(\psi_i, f\right)\end{aligned}$$$$
$$$$\begin{aligned} J_{ij}(u_h) &= \left(\nabla\psi_i, \frac{\partial k}{\partial u_j}\nabla u_h \right) + \left(\nabla\psi_i, k \nabla \phi_j \right) - \left \langle\psi_i, \frac{\partial k}{\partial u_j}\nabla u_h\cdot \hat{n} \right\rangle \\&- \left \langle\psi_i, k\nabla \phi_j\cdot \hat{n} \right\rangle + \left(\psi_i, \frac{\partial \vec{\beta}}{\partial u_j} \cdot\nabla u_h\right) + \left(\psi_i, \vec{\beta} \cdot \nabla \phi_j\right) - \left(\psi_i, \frac{\partial f}{\partial u_j}\right)\end{aligned}$$$$
Note that even for this "simple" equation, the Jacobian entries are nontrivial: they depend on the partial derivatives of $$$k$$$, $$$\vec{\beta}$$$, and $$$f$$$, which may be difficult or time-consuming to compute analytically.
In a multiphysics setting with many coupled equations and complicated material properties, the Jacobian might be extremely difficult to determine.
On the previous slide, the term $$$\frac{\partial f}{\partial u_j}$$$ was used, where $$$f$$$ was a nonlinear forcing function.
The chain rule allows us to write this term as $$$$\begin{aligned} \frac{\partial f}{\partial u_j} &= \frac{\partial f}{\partial u_h} \frac{\partial u_h}{\partial u_j} \\ &=\frac{\partial f}{\partial u_h} \phi_j\end{aligned}$$$$
If a functional form of $$$f$$$ is known, e.g. $$$f(u) = \sin(u)$$$, this formula implies that its Jacobian contribution is given by $$$$\frac{\partial f}{\partial u_j} = \cos(u_h) \phi_j$$$$
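The chain-rule result can be checked numerically: perturbing the single coefficient $$$u_j$$$ and differencing $$$f(u_h)$$$ recovers $$$\cos(u_h)\,\phi_j$$$. The basis values and coefficients below are made-up numbers for illustration:

```python
import numpy as np

# Hypothetical basis function values at one quadrature point.
phi = np.array([0.25, 0.5, 0.25])
u   = np.array([0.3, 0.7, 1.1])    # dof coefficients u_j
u_h = np.dot(u, phi)               # u_h(x_q) = sum_j u_j phi_j(x_q)

j, eps = 1, 1e-7
# Finite-difference derivative of f(u_h) = sin(u_h) w.r.t. u_j ...
fd = (np.sin(np.dot(u + eps*np.eye(3)[j], phi)) - np.sin(u_h)) / eps
# ... matches the chain-rule result cos(u_h) * phi_j.
chain_rule = np.cos(u_h) * phi[j]
```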
$$$$\vec{\rho}_k \equiv \mathbf{A}\vec{x}_k - \vec{b}$$$$
is the linear residual at the $$$k$$$th iterate $$$\vec{x}_k$$$ of the linear solve.
MOOSE prints the norm of this vector, $$$\|\vec{\rho}_k\|$$$, at each linear iteration if you set print_linear_residuals = true in the Outputs block.
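For example, a minimal Outputs block enabling this might look like the following (the exodus setting is just an illustrative companion option):

```
[Outputs]
  exodus = true
  print_linear_residuals = true
[]
```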
The "nonlinear residual" printed by MOOSE is $$$\|\vec{R}(\vec{u}_n)\|$$$.
By iterate $$$k$$$, the Krylov method has constructed the subspace
$$$$\mathcal{K}_k = \text{span}\{ \vec{b}, \mathbf{A}\vec{b}, \mathbf{A}^2\vec{b}, \ldots, \mathbf{A}^{k-1}\vec{b}\}$$$$
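As a sketch of why growing the subspace helps: the best approximation to the solution within $$$\mathcal{K}_k$$$ (in the least-squares sense, which is what GMRES computes) has a residual norm that shrinks as $$$k$$$ grows. The matrix below is a random well-conditioned test case, not anything MOOSE-specific:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6.0*np.eye(6)  # well-conditioned test matrix
b = rng.standard_normal(6)

def krylov_basis(A, b, k):
    # Columns span K_k = span{b, Ab, A^2 b, ..., A^{k-1} b}.
    return np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])

def best_residual(k):
    # Minimize ||A x - b|| over x in K_k (what GMRES does at iterate k).
    K = krylov_basis(A, b, k)
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return np.linalg.norm(A @ (K @ y) - b)

res = [best_residual(k) for k in range(1, 7)]  # shrinks toward zero
```

(Production Krylov solvers build an orthonormal basis for $$$\mathcal{K}_k$$$ rather than using the raw power vectors, which become numerically ill-conditioned.)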
Different Krylov methods produce the $$$\vec{x}_k$$$ iterates in different ways:
$$$\mathbf{J}$$$ is never explicitly needed to construct the subspace; only the action of $$$\mathbf{J}$$$ on a vector is required.
This action can be approximated by: $$$$\mathbf{J}\vec{v} \approx \frac{\vec{R}(\vec{u} + \epsilon\vec{v}) - \vec{R}(\vec{u})}{\epsilon}$$$$
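The finite-difference action above can be sketched directly, reusing an illustrative residual (not MOOSE code) and comparing against the analytic Jacobian-vector product:

```python
import numpy as np

def residual(u):
    # Illustrative nonlinear residual with root u = (sqrt(2), sqrt(2)).
    return np.array([u[0]**2 + u[1]**2 - 4.0, u[0] - u[1]])

def jac_vec_fd(u, v, eps=1e-7):
    # Matrix-free action J(u) v, approximated by differencing R:
    # the Jacobian matrix is never formed or stored.
    return (residual(u + eps*v) - residual(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.5, -0.25])
exact  = np.array([[2.0*u[0], 2.0*u[1]],
                   [1.0,      -1.0]]) @ v   # analytic J(u) v for comparison
approx = jac_vec_fd(u, v)
```

Each Krylov iteration then needs only one extra residual evaluation, $$$\vec{R}(\vec{u} + \epsilon\vec{v})$$$, in place of a matrix-vector product.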
This form has many advantages: