Conjugate Gradient algorithm

Need: $A$ symmetric positive definite;
Cost: 1 matrix-vector product per step;
Storage: fixed, independent of the number of steps.

The CG method minimizes the $A$-norm of the error,
$$x_k = \arg\min_{x \in \mathcal{K}_k(A,b)} \|x - x^*\|_A^2,$$
where $x^*$ is the true solution and $\|z\|_A^2 = z^T A z$.
Krylov subspaces

The $k$th Krylov subspace associated with the matrix $A$ and the vector $b$ is
$$\mathcal{K}_k(A, b) = \mathrm{span}\{b, Ab, \dots, A^{k-1}b\}.$$
Iterative methods which seek the solution in a Krylov subspace are called Krylov subspace iterative methods.
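To make the definition concrete, here is a minimal numpy sketch (the matrix $A$ and vector $b$ below are made up for the example) that assembles the vectors $b, Ab, \dots, A^{k-1}b$ by repeated matrix-vector products:

```python
import numpy as np

def krylov_basis(A, b, k):
    """Columns b, Ab, ..., A^(k-1) b spanning K_k(A, b)."""
    V = np.empty((len(b), k))
    v = b.copy()
    for j in range(k):
        V[:, j] = v
        v = A @ v          # next power: A^(j+1) b
    return V

# Example: a small SPD matrix and right-hand side
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(krylov_basis(A, b, 2))
```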
At each step, minimize over $\alpha$:
$$\|x_{k-1} + \alpha p_{k-1} - x^*\|_A^2.$$
Solution:
$$\alpha_k = \frac{\|r_{k-1}\|^2}{p_{k-1}^T A p_{k-1}}.$$
New update:
$$x_k = x_{k-1} + \alpha_k p_{k-1}.$$
Search directions: start with
$$p_0 = r_0 = b - Ax_0.$$
Iteratively, the new search direction is chosen $A$-conjugate to the previous ones:
$$p_k = r_k + \beta_k p_{k-1}, \qquad r_k = b - Ax_k,$$
where $\beta_k$ is found by writing
$$p_k^T A p_j = 0, \qquad 0 \le j \le k-1,$$
which gives
$$\beta_k = \frac{\|r_k\|^2}{\|r_{k-1}\|^2}.$$
Algorithm (CG)

Initialize: $x_0 = 0$; $r_0 = b - Ax_0$; $p_0 = r_0$;
for $k = 1, 2, \dots$ until stopping criterion is satisfied
  $\alpha_k = \|r_{k-1}\|^2 / (p_{k-1}^T A p_{k-1})$;
  $x_k = x_{k-1} + \alpha_k p_{k-1}$;
  $r_k = r_{k-1} - \alpha_k A p_{k-1}$;
  $\beta_k = \|r_k\|^2 / \|r_{k-1}\|^2$;
  $p_k = r_k + \beta_k p_{k-1}$;
end
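The algorithm translates almost line by line into numpy; the sketch below is one possible transcription (the tolerance-based stopping rule is an assumption, since the slide leaves the stopping criterion open):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=None):
    """Conjugate Gradient for symmetric positive definite A."""
    n = len(b)
    maxiter = maxiter if maxiter is not None else n
    x = np.zeros(n)              # x_0 = 0
    r = b - A @ x                # r_0 = b - A x_0
    p = r.copy()                 # p_0 = r_0
    rho = r @ r                  # ||r_0||^2
    for _ in range(maxiter):
        if np.sqrt(rho) < tol:   # stopping criterion (assumed)
            break
        Ap = A @ p
        alpha = rho / (p @ Ap)   # alpha_k = ||r_{k-1}||^2 / p^T A p
        x += alpha * p           # x_k = x_{k-1} + alpha_k p_{k-1}
        r -= alpha * Ap          # r_k = r_{k-1} - alpha_k A p_{k-1}
        rho_new = r @ r
        p = r + (rho_new / rho) * p   # p_k = r_k + beta_k p_{k-1}
        rho = rho_new
    return x
```

Note the single matrix-vector product `A @ p` per iteration, as promised.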
CGLS Method

Conjugate Gradient method for Least Squares (CGLS)

Need: $A$ can be rectangular (non-square);
Cost: 2 matrix-vector products (one with $A$, one with $A^T$) per step;
Storage: fixed, independent of the number of steps.

Mathematically equivalent to applying CG to the normal equations $A^T A x = A^T b$ without actually forming them.
CGLS minimization problem

The $k$th iterate solves the minimization problem
$$x_k = \arg\min_{x \in \mathcal{K}_k(A^T A,\, A^T b)} \|b - Ax\|.$$
Equivalently, the $k$th iterate $x_k$ of the CGLS method (with $x_0 = 0$) is characterized by
$$\Phi(x_k) = \min_{x \in \mathcal{K}_k(A^T A,\, A^T b)} \Phi(x),
\qquad \Phi(x) := \tfrac{1}{2} x^T A^T A x - x^T A^T b.$$
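The two characterizations agree because $\Phi$ differs from the squared residual norm only by a constant:
$$\tfrac{1}{2}\|Ax - b\|^2 = \tfrac{1}{2}x^T A^T A x - x^T A^T b + \tfrac{1}{2}\|b\|^2 = \Phi(x) + \tfrac{1}{2}\|b\|^2,$$
so minimizing $\Phi$ over $\mathcal{K}_k(A^T A, A^T b)$ is the same as minimizing $\|b - Ax\|$ there.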
Determination of the minimizer

Perform sequential linear searches along $A^T A$-conjugate directions $p_0, p_1, \dots, p_{k-1}$ that span $\mathcal{K}_k(A^T A, A^T b)$. Determine $x_k$ from $x_{k-1}$ and $p_{k-1}$ according to
$$x_k := x_{k-1} + \alpha_{k-1} p_{k-1},$$
where $\alpha_{k-1} \in \mathbb{R}$ solves
$$\min_{\alpha \in \mathbb{R}} \Phi(x_{k-1} + \alpha p_{k-1}).$$
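Setting the derivative of this one-dimensional quadratic to zero gives $\alpha_{k-1}$ in closed form:
$$\frac{d}{d\alpha}\Phi(x_{k-1} + \alpha p_{k-1})
= \alpha\, \|Ap_{k-1}\|^2 - p_{k-1}^T r_{k-1} = 0
\quad\Longrightarrow\quad
\alpha_{k-1} = \frac{p_{k-1}^T r_{k-1}}{\|Ap_{k-1}\|^2},$$
with $r_{k-1} = A^T b - A^T A x_{k-1}$; by the standard CG identity $p_{k-1}^T r_{k-1} = \|r_{k-1}\|^2$, this is the update $\alpha_k = \|r_{k-1}\|^2 / \|t_{k-1}\|^2$ used in the algorithm below.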
Residual error

Introduce the residual error associated with $x_k$:
$$r_k := A^T b - A^T A x_k.$$
Then set
$$p_k := r_k + \beta_{k-1} p_{k-1}.$$
Choose $\beta_{k-1}$ so that $p_k$ is $A^T A$-conjugate to the previous search directions:
$$p_k^T A^T A p_j = 0, \qquad 0 \le j \le k-1.$$
Discrepancy

The discrepancy associated with $x_k$ is
$$d_k = b - Ax_k.$$
Then $r_k = A^T d_k$. It was shown by Hestenes and Stiefel that
$$\|d_{k+1}\| \le \|d_k\|, \qquad \|x_{k+1}\| \ge \|x_k\|,$$
i.e., the discrepancy decreases monotonically while the norm of the iterate increases monotonically.
Algorithm (CGLS)

Initialize: $x_0 := 0$; $d_0 = b$; $r_0 = A^T b$; $p_0 = r_0$; $t_0 = Ap_0$;
for $k = 1, 2, \dots$ until stopping criterion is satisfied
  $\alpha_k = \|r_{k-1}\|^2 / \|t_{k-1}\|^2$;
  $x_k = x_{k-1} + \alpha_k p_{k-1}$;
  $d_k = d_{k-1} - \alpha_k t_{k-1}$;
  $r_k = A^T d_k$;
  $\beta_k = \|r_k\|^2 / \|r_{k-1}\|^2$;
  $p_k = r_k + \beta_k p_{k-1}$;
  $t_k = Ap_k$;
end
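A numpy transcription of the CGLS algorithm (the fixed iteration cap is an assumption for the sketch; the stopping criterion proper is discussed below):

```python
import numpy as np

def cgls(A, b, maxiter=100):
    """CGLS: CG applied to A^T A x = A^T b without forming A^T A."""
    x = np.zeros(A.shape[1])     # x_0 = 0
    d = b.copy()                 # d_0 = b        (discrepancy)
    r = A.T @ d                  # r_0 = A^T b
    p = r.copy()                 # p_0 = r_0
    t = A @ p                    # t_0 = A p_0
    rho = r @ r
    for _ in range(maxiter):
        alpha = rho / (t @ t)    # alpha_k = ||r_{k-1}||^2 / ||t_{k-1}||^2
        x += alpha * p           # x_k = x_{k-1} + alpha_k p_{k-1}
        d -= alpha * t           # d_k = d_{k-1} - alpha_k t_{k-1}
        r = A.T @ d              # r_k = A^T d_k
        rho_new = r @ r
        p = r + (rho_new / rho) * p   # p_k = r_k + beta_k p_{k-1}
        t = A @ p                # t_k = A p_k
        rho = rho_new
    return x
```

Each iteration costs exactly two matrix-vector products, `A.T @ d` and `A @ p`.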
Example: a Toy Problem

An invertible $2 \times 2$ matrix $A$, noisy data
$$y_j = Ax^* + \varepsilon_j, \qquad j = 1, 2.$$
Preimages:
$$x_j = A^{-1} y_j, \qquad j = 1, 2.$$
[Figure: the matrix $A$ maps the true solution $x^*$ to $y^* = Ax^*$; the noisy data points $y_1, y_2$ lie close to $y^*$, but their preimages $x_1 = A^{-1}y_1$ and $x_2 = A^{-1}y_2$ are far apart.]
Solution by iterative methods: semiconvergence

Write
$$b = Ax^* + e, \qquad e \sim \mathcal{N}(0, \sigma^2 I),$$
generate a sample of data vectors $b_1, b_2, \dots, b_n$, and solve with CG.
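A small experiment (dimensions, spectrum, and noise level are all chosen arbitrarily for this sketch) makes the semiconvergence visible: the error $\|x_k - x^*\|$ first decreases and then grows as the iteration begins to fit the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Build an ill-conditioned SPD test matrix via a random orthogonal basis
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                  # decaying eigenvalues
A = (Q * s) @ Q.T
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # e ~ N(0, sigma^2 I)

# Run CG, recording the error at every iterate
x = np.zeros(n); r = b.copy(); p = r.copy(); rho = r @ r
errors = []
for k in range(60):
    Ap = A @ p
    alpha = rho / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rho_new = r @ r
    p = r + (rho_new / rho) * p
    rho = rho_new
    errors.append(np.linalg.norm(x - x_true))
print("best iterate:", int(np.argmin(errors)))   # stop near this index
```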
When should one stop iterating?

Let $b = Ax^* + \varepsilon$ be the noisy data. Approximate information about the noise: $\|\varepsilon\| \le \eta$, where $\eta > 0$ is known. Write
$$\|Ax^* - b\| = \|\varepsilon\| \le \eta.$$
Morozov discrepancy principle: any solution satisfying
$$\|Ax - b\| \le \tau\eta,$$
with $\tau \ge 1$ a safety factor, is reasonable.
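In code, the discrepancy principle becomes a stopping test on $\|d_k\|$ inside the CGLS loop; the sketch below is one way to wire it in ($\tau = 1.2$ is an arbitrary choice of safety factor):

```python
import numpy as np

def cgls_morozov(A, b, eta, tau=1.2, maxiter=500):
    """CGLS stopped once ||A x_k - b|| <= tau * eta (Morozov)."""
    x = np.zeros(A.shape[1])
    d = b.copy(); r = A.T @ d; p = r.copy(); t = A @ p
    rho = r @ r
    for _ in range(maxiter):
        if np.linalg.norm(d) <= tau * eta:   # discrepancy at noise level: stop
            break
        alpha = rho / (t @ t)
        x += alpha * p
        d -= alpha * t
        r = A.T @ d
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        t = A @ p
        rho = rho_new
    return x
```

Since $\|d_k\|$ decreases monotonically (Hestenes-Stiefel), the threshold is crossed exactly once, which makes the stopping index well defined.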
Example: numerical differentiation

Let $f : [0, 1] \to \mathbb{R}$ be a differentiable function, $f(0) = 0$.

Data = $f(t_j)$ + noise, $t_j = \frac{j}{n}$, $j = 1, 2, \dots, n$.

Problem: estimate $f'(t_j)$.

Direct numerical differentiation by, e.g., a finite difference formula does not work: the noise takes over. Where is the inverse problem?
[Figure: the data, noise level 5%.]
[Figure: solution by finite difference; the noise dominates the estimate.]
Formulation as an inverse problem

Denote $g(t) = f'(t)$. Then
$$f(t) = \int_0^t g(\tau)\, d\tau.$$
Linear model:
$$\text{Data} = y_j = f(t_j) + e_j = \int_0^{t_j} g(\tau)\, d\tau + e_j,$$
where $e_j$ is the noise.
Discretization

Write
$$\int_0^{t_j} g(\tau)\, d\tau \approx \frac{1}{n} \sum_{k=1}^{j} g(t_k).$$
By denoting $g(t_k) = x_k$,
$$y = Ax + e,$$
where
$$A = \frac{1}{n}
\begin{pmatrix}
1 & & & \\
1 & 1 & & \\
\vdots & & \ddots & \\
1 & 1 & \cdots & 1
\end{pmatrix}.$$
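Putting the pieces together: the sketch below builds the discretized integration matrix and recovers the derivative with the discrepancy-stopped CGLS routine from above (the test function and noise level are assumptions for the illustration; note that $f(0) = 0$ holds):

```python
import numpy as np

n = 100
t = np.arange(1, n + 1) / n
A = np.tril(np.ones((n, n))) / n          # A[j, k] = 1/n for k <= j

f = np.sin(2 * np.pi * t)                 # test function with f(0) = 0
rng = np.random.default_rng(1)
sigma = 0.05 * np.max(np.abs(f))          # 5% noise level
y = f + sigma * rng.standard_normal(n)

eta = sigma * np.sqrt(n)                  # E ||e|| is about sigma * sqrt(n)
x = cgls_morozov(A, y, eta)               # reuses the routine above
# x approximates g(t_j) = f'(t_j); here f'(t) = 2*pi*cos(2*pi*t)
```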