Conjugate Gradient Method


1  Conjugate Gradient Method

Tsung-Ming Huang
Department of Mathematics, National Taiwan Normal University
October 10, 2011

2  Outline

1. Steepest Descent Method
2. Conjugate Gradient Method
3. Convergence of the CG-method

3  A Variational Problem, Steepest Descent Method (Gradient Method)

Let $A \in \mathbb{R}^{n \times n}$ be a large and sparse symmetric positive definite (s.p.d.) matrix. Consider the linear system
$$Ax = b$$
and the functional $F : \mathbb{R}^n \to \mathbb{R}$ with
$$F(x) = \tfrac{1}{2} x^T A x - b^T x. \qquad (1)$$
Then it holds:

4  Theorem 1

For a vector $x^*$ the following statements are equivalent:
(i) $F(x^*) < F(x)$ for all $x \neq x^*$,
(ii) $Ax^* = b$.

Proof: By assumption there exists $z_0 = A^{-1} b$, and $F(x)$ can be rewritten as
$$F(x) = \tfrac{1}{2}(x - z_0)^T A (x - z_0) - \tfrac{1}{2} z_0^T A z_0. \qquad (2)$$
Since $A$ is positive definite, $F(x)$ has a minimum at $x = z_0$ and only at $x = z_0$, which proves the assertion.

The solution of the linear system $Ax = b$ is therefore equal to the solution of the minimization problem
$$\min_x F(x) \equiv \min_x \Big( \tfrac{1}{2} x^T A x - b^T x \Big).$$
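
As a quick numerical illustration of Theorem 1 and of identity (2), here is a minimal sketch assuming NumPy and a small randomly generated s.p.d. test matrix (the data are hypothetical, not taken from the lecture):

```python
import numpy as np

# Numerical illustration of Theorem 1 and identity (2) on hypothetical data.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)              # s.p.d. matrix
b = rng.standard_normal(5)

F = lambda x: 0.5 * x @ A @ x - b @ x
z0 = np.linalg.solve(A, b)               # z0 = A^{-1} b

x = rng.standard_normal(5)
# identity (2): F(x) = 1/2 (x - z0)^T A (x - z0) - 1/2 z0^T A z0
lhs = F(x)
rhs = 0.5 * (x - z0) @ A @ (x - z0) - 0.5 * z0 @ A @ z0
print(lhs, rhs)                          # agree to rounding
# F is strictly smaller at z0 than at any perturbed point tried:
print(F(z0) < min(F(z0 + 0.1 * rng.standard_normal(5)) for _ in range(100)))
```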

5  Steepest Descent Method

Let $x_k$ be an approximation to the exact solution $x^*$ and let $p_k$ be a search direction. We want to find an $\alpha_k$ such that
$$F(x_k + \alpha_k p_k) < F(x_k).$$
Set $x_{k+1} := x_k + \alpha_k p_k$. This leads to the basic problem: given $x$ and $p \neq 0$, find $\alpha^*$ such that
$$\Phi(\alpha^*) = F(x + \alpha^* p) = \min_{\alpha \in \mathbb{R}} F(x + \alpha p).$$

6  Solution

Since
$$F(x + \alpha p) = \tfrac{1}{2}(x + \alpha p)^T A (x + \alpha p) - b^T(x + \alpha p) = \tfrac{1}{2}\alpha^2\, p^T A p + \alpha\,(p^T A x - p^T b) + F(x),$$
it follows that if we take
$$\alpha^* = \frac{(b - Ax)^T p}{p^T A p} = \frac{r^T p}{p^T A p}, \qquad (3)$$
where $r = b - Ax = -\operatorname{grad} F(x)$ is the residual, then $x + \alpha^* p$ is the minimal solution. Moreover,
$$F(x + \alpha^* p) = F(x) - \frac{1}{2}\,\frac{(r^T p)^2}{p^T A p}.$$
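
The optimal step length (3) can be checked numerically; the following sketch (NumPy assumed, hypothetical data) compares $\alpha^*$ with a grid minimizer of $\Phi(\alpha) = F(x + \alpha p)$ and verifies the stated decrease of $F$:

```python
import numpy as np

# Sanity check of the optimal step length (3) on hypothetical data.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)              # s.p.d.
b = rng.standard_normal(6)
x = rng.standard_normal(6)               # current iterate
p = rng.standard_normal(6)               # search direction

F = lambda v: 0.5 * v @ A @ v - b @ v
r = b - A @ x                            # residual = -grad F(x)
alpha_star = (r @ p) / (p @ A @ p)       # formula (3)

alphas = np.linspace(alpha_star - 1, alpha_star + 1, 2001)
phi = [F(x + a * p) for a in alphas]
print(alpha_star, alphas[int(np.argmin(phi))])          # should be close
# F decreases by (r^T p)^2 / (2 p^T A p):
print(F(x) - F(x + alpha_star * p), (r @ p) ** 2 / (2 * p @ A @ p))
```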

7  Lemma 2

Let
$$x_{k+1} = x_k + \frac{r_k^T p_k}{p_k^T A p_k}\, p_k, \qquad r_k = b - A x_k, \qquad (4)$$
so that
$$F(x_{k+1}) = F(x_k) - \frac{1}{2}\,\frac{(r_k^T p_k)^2}{p_k^T A p_k}, \qquad k = 0, 1, 2, \ldots \qquad (5)$$
Then it holds
$$r_{k+1}^T p_k = 0. \qquad (6)$$

Proof: Since
$$\frac{d}{d\alpha} F(x_k + \alpha p_k) = \operatorname{grad} F(x_k + \alpha p_k)^T p_k,$$
it follows that $\operatorname{grad} F(x_k + \alpha_{k+1} p_k)^T p_k = 0$, where $\alpha_{k+1} = \frac{r_k^T p_k}{p_k^T A p_k}$. Thus
$$(b - A x_{k+1})^T p_k = r_{k+1}^T p_k = 0,$$
hence (6) holds.

8  How to choose the search direction $p_k$?

Let $\Phi : \mathbb{R}^n \to \mathbb{R}$ be differentiable at $x$. Then it holds
$$\frac{\Phi(x + \varepsilon p) - \Phi(x)}{\varepsilon} = \Phi'(x)^T p + O(\varepsilon).$$
Neglecting the $O(\varepsilon)$ term, the right hand side is smallest over all $p$ with $\|p\| = 1$ at $p = -\Phi'(x)/\|\Phi'(x)\|$, i.e., in the direction of steepest descent. Hence it suggests choosing
$$p_k = -\operatorname{grad} F(x_k) = b - A x_k = r_k.$$

9  Algorithm: Gradient Method

1: Given $x_0$, set $k = 0$.
2: Compute $r_k = b - A x_k$ and $\alpha_k = \frac{r_k^T r_k}{r_k^T A r_k}$;
3: repeat
4:   Compute $x_{k+1} = x_k + \alpha_k r_k$ and set $k := k + 1$;
5:   Compute $r_k = b - A x_k$ and $\alpha_k = \frac{r_k^T r_k}{r_k^T A r_k}$;
6: until $r_k = 0$

Cost in each step: one matrix-vector product (only $A r_k$ has to be formed; $A x_k$ need not be recomputed if the residual is updated as $r_{k+1} = r_k - \alpha_k A r_k$).

To prove the convergence of the Gradient Method, we need the Kantorovich (Kontorowitsch) inequality: let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n > 0$, $\alpha_i \geq 0$, $\sum_{i=1}^n \alpha_i = 1$. Then it holds
$$\Big( \sum_{i=1}^n \alpha_i \lambda_i \Big)\Big( \sum_{j=1}^n \alpha_j \lambda_j^{-1} \Big) \leq \frac{(\lambda_1 + \lambda_n)^2}{4 \lambda_1 \lambda_n} = \frac{1}{4}\Big( \sqrt{\tfrac{\lambda_1}{\lambda_n}} + \sqrt{\tfrac{\lambda_n}{\lambda_1}} \Big)^2.$$
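
A minimal NumPy sketch of the gradient-method loop above, on a hypothetical s.p.d. test problem; the exact test $r_k = 0$ is replaced by a tolerance, and the residual is updated as $r_{k+1} = r_k - \alpha_k A r_k$ so that only one matrix-vector product is formed per step:

```python
import numpy as np

def gradient_method(A, b, x0, tol=1e-10, maxit=10000):
    """Steepest descent for s.p.d. A (a sketch of the algorithm above).

    The exact termination test r_k = 0 is replaced by ||r_k|| <= tol.
    """
    x = x0.astype(float).copy()
    r = b - A @ x
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol:
            break
        Ar = A @ r                      # the one matrix-vector product per step
        alpha = (r @ r) / (r @ Ar)
        x = x + alpha * r               # p_k = r_k (steepest descent direction)
        r = r - alpha * Ar              # equivalent to r = b - A x
    return x

# hypothetical test problem
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = gradient_method(A, b, np.zeros(20))
print(np.linalg.norm(A @ x - b))        # small residual
```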

10  Theorem 3

If $x_k$, $x_{k-1}$ are two consecutive approximations of the Gradient Method for solving $Ax = b$ and $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n > 0$ are the eigenvalues of $A$, then it holds
$$F(x_k) + \tfrac{1}{2} b^T A^{-1} b \leq \Big( \frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} \Big)^2 \Big[ F(x_{k-1}) + \tfrac{1}{2} b^T A^{-1} b \Big], \qquad (7a)$$
i.e.,
$$\| x_k - x^* \|_A \leq \Big( \frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} \Big) \| x_{k-1} - x^* \|_A, \qquad (7b)$$
where $\|x\|_A = \sqrt{x^T A x}$. Thus the Gradient Method is convergent.

11  Conjugate Gradient Method

It is favorable to choose the search directions $\{p_i\}$ to be mutually A-conjugate, where $A$ is symmetric positive definite.

Definition 4: Two vectors $p$ and $q$ are called A-conjugate (A-orthogonal) if $p^T A q = 0$.

Remark 1: Let $A$ be symmetric positive definite. Then there exists a unique s.p.d. $B$ such that $B^2 = A$. Denote $B = A^{1/2}$. Then $p^T A q = (A^{1/2} p)^T (A^{1/2} q)$.
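
Remark 1 can be illustrated directly, assuming NumPy and forming $A^{1/2}$ from the eigendecomposition of a small hypothetical s.p.d. matrix:

```python
import numpy as np

# Illustration of Remark 1 on hypothetical data: form A^{1/2} from the
# eigendecomposition of the s.p.d. matrix A and check
# p^T A q == (A^{1/2} p)^T (A^{1/2} q).
rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
w, V = np.linalg.eigh(A)                 # A = V diag(w) V^T with w > 0
A_half = V @ np.diag(np.sqrt(w)) @ V.T   # the unique s.p.d. square root

p, q = rng.standard_normal(5), rng.standard_normal(5)
print(p @ A @ q, (A_half @ p) @ (A_half @ q))   # agree to rounding
```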

12  Lemma 5

Let $p_0, \ldots, p_r \neq 0$ be pairwise A-conjugate. Then they are linearly independent.

Proof: From $0 = \sum_{j=0}^r c_j p_j$ it follows that
$$0 = p_k^T A \sum_{j=0}^r c_j p_j = \sum_{j=0}^r c_j\, p_k^T A p_j = c_k\, p_k^T A p_k,$$
so $c_k = 0$ for $k = 0, \ldots, r$.

13  Theorem 6

Let $A$ be s.p.d. and $p_0, \ldots, p_{n-1}$ be nonzero pairwise A-conjugate vectors. Then
$$A^{-1} = \sum_{j=0}^{n-1} \frac{p_j p_j^T}{p_j^T A p_j}. \qquad (8)$$

Remark 2: For $A = I$ take $U = (p_0, \ldots, p_{n-1})$ with $p_i^T p_i = 1$ and $p_i^T p_j = 0$ for $i \neq j$. Then $U U^T = I$ and
$$I = U U^T = (p_0, \ldots, p_{n-1}) \begin{pmatrix} p_0^T \\ \vdots \\ p_{n-1}^T \end{pmatrix} = p_0 p_0^T + \cdots + p_{n-1} p_{n-1}^T.$$

14  Proof of Theorem 6

Since the vectors
$$\tilde{p}_j = \frac{A^{1/2} p_j}{\sqrt{p_j^T A p_j}}, \qquad j = 0, 1, \ldots, n-1,$$
are orthonormal, we have
$$I = \tilde{p}_0 \tilde{p}_0^T + \cdots + \tilde{p}_{n-1} \tilde{p}_{n-1}^T = \sum_{j=0}^{n-1} \frac{A^{1/2} p_j p_j^T A^{1/2}}{p_j^T A p_j} = A^{1/2} \Big( \sum_{j=0}^{n-1} \frac{p_j p_j^T}{p_j^T A p_j} \Big) A^{1/2}.$$
Thus
$$A^{-1} = A^{-1/2}\, I\, A^{-1/2} = \sum_{j=0}^{n-1} \frac{p_j p_j^T}{p_j^T A p_j}.$$

Remark 3: Let $Ax = b$ and let $x_0$ be an arbitrary vector. Then from $x - x_0 = A^{-1}(b - A x_0)$ and (8) it follows that
$$x = x_0 + \sum_{j=0}^{n-1} \frac{p_j^T (b - A x_0)}{p_j^T A p_j}\, p_j. \qquad (9)$$
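
A small check of the expansion (8), assuming NumPy; the eigenvectors of an s.p.d. matrix are used as one convenient (hypothetical) choice of nonzero pairwise A-conjugate directions:

```python
import numpy as np

# Check of the expansion (8) on hypothetical data.  The eigenvectors of an
# s.p.d. matrix are pairwise A-conjugate, so they can serve as p_0, ..., p_{n-1}.
rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
_, P = np.linalg.eigh(A)                 # columns are A-conjugate directions

S = sum(np.outer(p, p) / (p @ A @ p) for p in P.T)
print(np.linalg.norm(S - np.linalg.inv(A)))     # ~ machine precision
```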

15  Theorem 7

Let $A$ be s.p.d. and let $p_0, \ldots, p_{n-1} \in \mathbb{R}^n \setminus \{0\}$ be pairwise A-orthogonal. Given $x_0$, let $r_0 = b - A x_0$. For $k = 0, \ldots, n-1$, let
$$\alpha_k = \frac{p_k^T r_k}{p_k^T A p_k}, \qquad (10)$$
$$x_{k+1} = x_k + \alpha_k p_k, \qquad (11)$$
$$r_{k+1} = r_k - \alpha_k A p_k. \qquad (12)$$
Then the following statements hold:
(i) $r_k = b - A x_k$ (by induction).
(ii) $x_{k+1}$ minimizes $F(x)$ on the line $x = x_k + \alpha p_k$, $\alpha \in \mathbb{R}$.
(iii) $x_n = A^{-1} b = x^*$.
(iv) $x_k$ minimizes $F(x)$ on the affine subspace $x_0 + S_k$, where $S_k = \operatorname{span}\{p_0, \ldots, p_{k-1}\}$.

16  Proof

(i): By induction, using (11) and (12).
(ii): From (3) and (i).
(iii): It is enough to show that $x_k$ coincides with the partial sum in (9),
$$x_k = x_0 + \sum_{j=0}^{k-1} \frac{p_j^T (b - A x_0)}{p_j^T A p_j}\, p_j;$$
then $x_n = x^*$ follows from (9). From (10) and (11) we have
$$x_k = x_0 + \sum_{j=0}^{k-1} \alpha_j p_j = x_0 + \sum_{j=0}^{k-1} \frac{p_j^T (b - A x_j)}{p_j^T A p_j}\, p_j.$$
It remains to show that
$$p_j^T (b - A x_j) = p_j^T (b - A x_0). \qquad (13)$$
From $x_k - x_0 = \sum_{j=0}^{k-1} \alpha_j p_j$ we obtain
$$p_k^T A x_k - p_k^T A x_0 = \sum_{j=0}^{k-1} \alpha_j\, p_k^T A p_j = 0,$$
and the same argument applies to every index $j < k$. So (13) holds.

17

(iv): From (12) and (10) it follows that
$$p_k^T r_{k+1} = p_k^T r_k - \alpha_k\, p_k^T A p_k = 0.$$
From (11), (12) and the fact that $r_{k+s} - r_{k+s+1} = \alpha_{k+s} A p_{k+s}$ and $p_k$ are A-orthogonal (for $s \geq 1$), it follows that
$$p_k^T r_{k+1} = p_k^T r_{k+2} = \cdots = p_k^T r_n = 0.$$
Hence we have
$$p_i^T r_k = 0, \qquad i = 0, \ldots, k-1, \quad k = 1, 2, \ldots, n \quad (\text{i.e., } i < k). \qquad (14)$$
We now consider $F(x)$ on $x_0 + S_k$:
$$F\Big(x_0 + \sum_{j=0}^{k-1} \xi_j p_j\Big) = \varphi(\xi_0, \ldots, \xi_{k-1}).$$

18

$F(x)$ is minimal on $x_0 + S_k$ if and only if all derivatives $\frac{\partial \varphi}{\partial \xi_s}$ vanish at $x$. But
$$\frac{\partial \varphi}{\partial \xi_s} = \Big[\operatorname{grad} F\Big(x_0 + \sum_{j=0}^{k-1} \xi_j p_j\Big)\Big]^T p_s, \qquad s = 0, 1, \ldots, k-1.$$
If $x = x_k$, then $\operatorname{grad} F(x) = -r_k$. From (14) it follows that
$$\frac{\partial \varphi}{\partial \xi_s}(x_k) = 0, \qquad s = 0, 1, \ldots, k-1.$$

19  Remark 4

The following conditions are equivalent:
(i) $p_i^T A p_j = 0$, $i \neq j$,
(ii) $p_i^T r_k = 0$, $i < k$,
(iii) $r_i^T r_j = 0$, $i \neq j$.

Proof of (iii): using $p_i = r_i + \beta_{i-1} p_{i-1}$, for $i < k$,
$$p_i^T r_k = 0 \;\Rightarrow\; (r_i^T + \beta_{i-1} p_{i-1}^T)\, r_k = 0 \;\Rightarrow\; r_i^T r_k = 0 \;\Rightarrow\; r_i^T r_j = 0, \quad i \neq j.$$

20  Remark 5

It holds
$$\operatorname{span}\{p_0, p_1, \ldots, p_k\} = \operatorname{span}\{r_0, r_1, \ldots, r_k\} = \operatorname{span}\{r_0, A r_0, \ldots, A^k r_0\}.$$
Since $p_1 = r_1 + \beta_0 p_0 = r_1 + \beta_0 r_0$ and $r_1 = r_0 - \alpha_0 A r_0$, by induction we have
$$r_2 = r_1 - \alpha_1 A p_1 = r_1 - \alpha_1 A (r_1 + \beta_0 r_0) = r_0 - \alpha_0 A r_0 - \alpha_1 A (r_0 - \alpha_0 A r_0 + \beta_0 r_0),$$
and so on.

21  Method of Conjugate Directions

Input: $A$ s.p.d., $b$ and $x_0 \in \mathbb{R}^n$. Given $p_0, \ldots, p_{n-1} \in \mathbb{R}^n \setminus \{0\}$ pairwise A-orthogonal.
1: Compute $r_0 = b - A x_0$;
2: for $k = 0, \ldots, n-1$ do
3:   Compute $\alpha_k = \frac{p_k^T r_k}{p_k^T A p_k}$ and $x_{k+1} = x_k + \alpha_k p_k$;
4:   Compute $r_{k+1} = r_k - \alpha_k A p_k = b - A x_{k+1}$.
5: end for

From Theorem 7 we get $x_n = A^{-1} b$.
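
A minimal NumPy sketch of the method of conjugate directions above, using the eigenvectors of a hypothetical s.p.d. test matrix as the prescribed A-orthogonal directions; by Theorem 7 the result after $n$ steps agrees with $A^{-1} b$ up to rounding:

```python
import numpy as np

def conjugate_directions(A, b, x0, P):
    """Method of conjugate directions (sketch of the algorithm above).

    P is an n x n array whose columns p_0, ..., p_{n-1} are nonzero and
    pairwise A-orthogonal; after n steps x_n = A^{-1} b (Theorem 7).
    """
    x = x0.astype(float).copy()
    r = b - A @ x
    for k in range(A.shape[0]):
        p = P[:, k]
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)       # formula (10)
        x = x + alpha * p                # formula (11)
        r = r - alpha * Ap               # formula (12), = b - A x_{k+1}
    return x

# hypothetical test: eigenvectors of A are pairwise A-orthogonal
rng = np.random.default_rng(5)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
b = rng.standard_normal(8)
_, P = np.linalg.eigh(A)
x = conjugate_directions(A, b, np.zeros(8), P)
print(np.linalg.norm(A @ x - b))         # ~ machine precision
```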

22  Practical Implementation

Algorithm: Conjugate Gradient Method (CG-method)
Input: s.p.d. $A$, $b \in \mathbb{R}^n$, $x_0 \in \mathbb{R}^n$ and $r_0 = b - A x_0 = p_0$.
1: Set $k = 0$.
2: repeat
3:   Compute $\alpha_k = \frac{p_k^T r_k}{p_k^T A p_k}$;
4:   Compute $x_{k+1} = x_k + \alpha_k p_k$;
5:   Compute $r_{k+1} = r_k - \alpha_k A p_k = b - A x_{k+1}$;
6:   Compute $\beta_k = -\frac{r_{k+1}^T A p_k}{p_k^T A p_k}$;
7:   Compute $p_{k+1} = r_{k+1} + \beta_k p_k$;
8:   Set $k = k + 1$;
9: until $r_k = 0$
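
The following is a minimal NumPy sketch of the CG algorithm above on a hypothetical test problem, with the exact termination test $r_k = 0$ replaced by a tolerance. In exact arithmetic $\beta_k$ equals $r_{k+1}^T r_{k+1} / (r_k^T r_k)$, which is the form usually implemented; the sketch keeps the formula of Line 6.

```python
import numpy as np

def cg(A, b, x0, tol=1e-12, maxit=None):
    """Conjugate Gradient method, a direct transcription of the algorithm
    above (exact test r_k = 0 replaced by a tolerance)."""
    n = len(b)
    maxit = n if maxit is None else maxit
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)            # Line 3
        x = x + alpha * p                     # Line 4
        r = r - alpha * Ap                    # Line 5
        beta = -(r @ Ap) / (p @ Ap)           # Line 6
        p = r + beta * p                      # Line 7
    return x

# hypothetical test problem
rng = np.random.default_rng(6)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = cg(A, b, np.zeros(50))
print(np.linalg.norm(A @ x - b))
```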

23  Theorem 8

For the CG-method the following hold:
(i) If $k$ steps of the CG-method are executable, i.e., $r_i \neq 0$ for $i = 0, \ldots, k$, then $p_i \neq 0$ for $i \leq k$ and $p_i^T A p_j = 0$ for $i, j \leq k$, $i \neq j$.
(ii) The CG-method breaks down after $N$ steps with $r_N = 0$ and $N \leq n$.
(iii) $x_N = A^{-1} b$.

24  Proof

(i): By induction on $k$; the statement is trivial for $k = 0$. Suppose that (i) holds up to $k$ and $r_{k+1} \neq 0$. Then $p_{k+1}$ is well-defined. We want to verify that $p_{k+1}^T A p_j = 0$ for $j = 0, 1, \ldots, k$. From Lines 6 and 7 of the CG Algorithm we have
$$p_{k+1}^T A p_k = r_{k+1}^T A p_k + \beta_k\, p_k^T A p_k = 0.$$
Let $j < k$; from Line 7 we have
$$p_{k+1}^T A p_j = r_{k+1}^T A p_j + \beta_k\, p_k^T A p_j = r_{k+1}^T A p_j.$$
It is enough to show that
$$A p_j \in \operatorname{span}\{p_0, \ldots, p_{j+1}\}, \qquad j < k. \qquad (15)$$
Then the assertion follows from the relation $p_i^T r_j = 0$, $i < j \leq k+1$, which was proved in (14).

25

Claim (15): For $r_j \neq 0$ it holds that $\alpha_j \neq 0$. Line 5 shows that
$$A p_j = \frac{1}{\alpha_j}(r_j - r_{j+1}) \in \operatorname{span}\{r_0, \ldots, r_{j+1}\}.$$
Line 7 shows that $\operatorname{span}\{r_0, \ldots, r_{j+1}\} = \operatorname{span}\{p_0, \ldots, p_{j+1}\}$ with $r_0 = p_0$, so (15) holds.

(ii): Since $p_0, \ldots, p_{k+1}$ are nonzero and mutually A-orthogonal, they are linearly independent. Hence there exists an $N \leq n$ with $r_N = 0$, and it follows that $x_N = A^{-1} b$.

Advantages:
1. Breakdown in finitely many steps.
2. Low cost in each step: one matrix-vector product.

26  Convergence of the CG-method

Consider the A-norm, with $A$ s.p.d.,
$$\|x\|_A = (x^T A x)^{1/2}.$$
Let $x^* = A^{-1} b$. Then from (2) we have
$$F(x) - F(x^*) = \tfrac{1}{2}(x - x^*)^T A (x - x^*) = \tfrac{1}{2}\|x - x^*\|_A^2.$$
Let $x_k$ be the $k$-th iterate of the CG-method. By Theorem 7, $x_k$ minimizes the functional $F$ on $x_0 + \operatorname{span}\{p_0, \ldots, p_{k-1}\}$. Hence it holds
$$\|x_k - x^*\|_A \leq \|y - x^*\|_A, \qquad \forall\, y \in x_0 + \operatorname{span}\{p_0, \ldots, p_{k-1}\}. \qquad (16)$$

27

From Lines 5 and 7 of the CG Algorithm it is easily seen that each $p_j$ and $r_j$ can be written as a linear combination of $r_0, A r_0, \ldots, A^j r_0$. If $y \in x_0 + \operatorname{span}\{p_0, \ldots, p_{k-1}\}$, then
$$y = x_0 + c_1 r_0 + c_2 A r_0 + \cdots + c_k A^{k-1} r_0 = x_0 + \tilde{P}_{k-1}(A) r_0,$$
where $\tilde{P}_{k-1}$ is a polynomial of degree $k-1$. But $r_0 = b - A x_0 = A(x^* - x_0)$, thus
$$y - x^* = (x_0 - x^*) + \tilde{P}_{k-1}(A) A (x^* - x_0) = \big[ I - A \tilde{P}_{k-1}(A) \big](x_0 - x^*) = P_k(A)(x_0 - x^*), \qquad (17)$$
where
$$\deg P_k \leq k \quad \text{and} \quad P_k(0) = 1. \qquad (18)$$

28

Conversely, if $P_k$ is a polynomial of degree $k$ satisfying (18), then
$$x^* + P_k(A)(x_0 - x^*) \in x_0 + S_k.$$
Hence (16) means: if $P_k$ is a polynomial of degree $k$ with $P_k(0) = 1$, then
$$\|x_k - x^*\|_A \leq \|P_k(A)(x_0 - x^*)\|_A.$$

Lemma 9: Let $A$ be s.p.d. It holds for every polynomial $Q_k$ of degree $k$ that
$$\max_{x \neq 0} \frac{\|Q_k(A)x\|_A}{\|x\|_A} = \rho(Q_k(A)) = \max\{ |Q_k(\lambda)| : \lambda \text{ an eigenvalue of } A \}. \qquad (19)$$
(Proof: see Appendix.)

29

From (19) we have
$$\|x_k - x^*\|_A \leq \rho(P_k(A))\, \|x_0 - x^*\|_A, \qquad (20)$$
where $\deg P_k \leq k$ and $P_k(0) = 1$.

Replacement problem for (20): for $0 < a < b$,
$$\min\big\{ \max\{ |P_k(\lambda)| : a \leq \lambda \leq b \} : \deg P_k \leq k,\ P_k(0) = 1 \big\}. \qquad (21)$$
We use the Chebyshev polynomials of the first kind for the solution. They are defined by
$$T_0(t) = 1, \quad T_1(t) = t, \quad T_{k+1}(t) = 2t\, T_k(t) - T_{k-1}(t).$$
It holds $T_k(\cos\varphi) = \cos(k\varphi)$, by using $\cos((k+1)\varphi) + \cos((k-1)\varphi) = 2\cos\varphi\,\cos k\varphi$. In particular,
$$T_k\big(\cos\tfrac{j\pi}{k}\big) = \cos(j\pi) = (-1)^j, \qquad j = 0, \ldots, k,$$

30

i.e., $T_k$ attains the maximal absolute value one at $k+1$ points in $[-1, 1]$ with alternating sign. In addition (Exercise!), we have
$$T_k(t) = \tfrac{1}{2}\Big[ \big(t + \sqrt{t^2 - 1}\big)^k + \big(t - \sqrt{t^2 - 1}\big)^k \Big]. \qquad (22)$$

Lemma 10: The solution of problem (21) is given by
$$Q_k(t) = T_k\Big(\frac{2t - a - b}{b - a}\Big) \Big/ T_k\Big(\frac{a + b}{a - b}\Big),$$
i.e., for all $P_k$ of degree $k$ with $P_k(0) = 1$ it holds
$$\max_{\lambda \in [a,b]} |Q_k(\lambda)| \leq \max_{\lambda \in [a,b]} |P_k(\lambda)|.$$
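
The closed form (22) can be checked against the three-term recursion numerically; the sketch below (NumPy assumed, hypothetical sample points $t$ inside and outside $[-1, 1]$) evaluates both:

```python
import numpy as np

# Check of the closed form (22) against the three-term recursion.
def T_rec(k, t):
    Tkm1, Tk = np.ones_like(t), t        # T_0 and T_1
    if k == 0:
        return Tkm1
    for _ in range(k - 1):
        Tkm1, Tk = Tk, 2 * t * Tk - Tkm1
    return Tk

def T_closed(k, t):
    t = t.astype(complex)                # sqrt(t^2 - 1) is imaginary on [-1, 1]
    s = np.sqrt(t ** 2 - 1)
    return (0.5 * ((t + s) ** k + (t - s) ** k)).real

t = np.linspace(-2, 2, 9)                # hypothetical sample points
print(T_rec(5, t))
print(T_closed(5, t))                    # agree to rounding
```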

31

Proof: $Q_k(0) = 1$. If $t$ runs through the interval $[a, b]$, then $(2t - a - b)/(b - a)$ runs through the interval $[-1, 1]$. Hence, in $[a, b]$, $Q_k(t)$ attains $k+1$ extrema with alternating sign and absolute value
$$\delta = \Big| T_k\Big(\frac{a + b}{a - b}\Big) \Big|^{-1}.$$
If there were a $P_k$ with $\max\{|P_k(\lambda)| : \lambda \in [a, b]\} < \delta$, then $Q_k - P_k$ would have the same sign as $Q_k$ at these extremal points, so $Q_k - P_k$ changes sign at $k+1$ points. Hence $Q_k - P_k$ has $k$ roots in $[a, b]$ and, in addition, a root at zero (since $Q_k(0) = P_k(0) = 1$). This contradicts $\deg(Q_k - P_k) \leq k$.

32  Lemma 11

It holds
$$\delta = \Big| T_k\Big(\frac{b + a}{a - b}\Big) \Big|^{-1} = \frac{1}{T_k\big(\frac{b + a}{b - a}\big)} = \frac{2c^k}{1 + c^{2k}} \leq 2c^k,$$
where $c = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}$ and $\kappa = b/a$. (Proof: see Appendix.)

Theorem 12: The CG-method satisfies the following error estimate:
$$\|x_k - x^*\|_A \leq 2c^k\, \|x_0 - x^*\|_A,$$
where $c = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}$, $\kappa = \lambda_1/\lambda_n$, and $\lambda_1 \geq \cdots \geq \lambda_n > 0$ are the eigenvalues of $A$. (Proof: see Appendix.)
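
A numerical illustration of the bound in Theorem 12, assuming NumPy and a hypothetical s.p.d. test problem: the A-norm error of CG is printed next to $2c^k \|x_0 - x^*\|_A$:

```python
import numpy as np

# A-norm error of CG versus the bound 2 c^k ||x_0 - x*||_A of Theorem 12,
# with c = (sqrt(kappa) - 1) / (sqrt(kappa) + 1) and hypothetical data.
rng = np.random.default_rng(7)
M = rng.standard_normal((100, 100))
A = M @ M.T + 10 * np.eye(100)
b = rng.standard_normal(100)
x_star = np.linalg.solve(A, b)
anorm = lambda v: np.sqrt(v @ A @ v)

lam = np.linalg.eigvalsh(A)
kappa = lam[-1] / lam[0]
c = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

x = np.zeros(100)
r = b - A @ x
p = r.copy()
e0 = anorm(x - x_star)
for k in range(1, 31):
    Ap = A @ p
    alpha = (p @ r) / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    beta = -(r @ Ap) / (p @ Ap)
    p = r + beta * p
    print(k, anorm(x - x_star), 2 * c**k * e0)   # error stays below the bound
```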

33  Remark 6

To compare with the Gradient Method (see (7b)): let $x_k^G$ be the $k$-th iterate of the Gradient Method. Then
$$\|x_k^G - x^*\|_A \leq \Big(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\Big)^k \|x_0 - x^*\|_A.$$
But
$$\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} = \frac{\kappa - 1}{\kappa + 1} > \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} = c,$$
because in general $\sqrt{\kappa} \ll \kappa$. Therefore the CG-method is much better than the Gradient Method.
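
The comparison in Remark 6 can be observed numerically; this sketch (NumPy, hypothetical data) runs the Gradient Method and CG side by side on the same system and prints the A-norm errors:

```python
import numpy as np

# Side-by-side comparison suggested by Remark 6: A-norm errors of the
# gradient method and of CG on the same hypothetical s.p.d. system.
rng = np.random.default_rng(8)
M = rng.standard_normal((200, 200))
A = M @ M.T + np.eye(200)                # fairly ill-conditioned s.p.d.
b = rng.standard_normal(200)
x_star = np.linalg.solve(A, b)
anorm = lambda v: np.sqrt(v @ A @ v)

xg = np.zeros(200)                       # gradient method state
rg = b.copy()
xc = np.zeros(200)                       # CG state
rc = b.copy()
pc = rc.copy()
for k in range(1, 51):
    Ar = A @ rg                          # one gradient step
    a = (rg @ rg) / (rg @ Ar)
    xg += a * rg
    rg -= a * Ar
    Ap = A @ pc                          # one CG step
    a = (pc @ rc) / (pc @ Ap)
    xc += a * pc
    rc -= a * Ap
    beta = -(rc @ Ap) / (pc @ Ap)
    pc = rc + beta * pc
    if k % 10 == 0:
        print(k, anorm(xg - x_star), anorm(xc - x_star))
```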

34  Appendix

Proof of Lemma 9: With $z := A^{1/2} x$,
$$\frac{\|Q_k(A)x\|_A^2}{\|x\|_A^2} = \frac{x^T Q_k(A)\, A\, Q_k(A)\, x}{x^T A x} = \frac{(A^{1/2}x)^T Q_k(A) Q_k(A) (A^{1/2}x)}{(A^{1/2}x)^T (A^{1/2}x)} = \frac{z^T Q_k(A)^2 z}{z^T z} \leq \rho\big(Q_k(A)^2\big) = \rho^2\big(Q_k(A)\big).$$
Equality holds for a suitable $x$, hence the first equality in (19) is shown. The second equality holds by the fact that $Q_k(\lambda)$ is an eigenvalue of $Q_k(A)$ whenever $\lambda$ is an eigenvalue of $A$.

35

Proof of Lemma 11: For $t = \frac{b + a}{b - a} = \frac{\kappa + 1}{\kappa - 1}$, we compute
$$t + \sqrt{t^2 - 1} = \frac{\sqrt{\kappa} + 1}{\sqrt{\kappa} - 1} = c^{-1} \qquad \text{and} \qquad t - \sqrt{t^2 - 1} = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} = c.$$
Hence from (22) it follows that
$$\delta = \frac{2}{c^{-k} + c^k} = \frac{2c^k}{1 + c^{2k}} \leq 2c^k.$$

36

Proof of Theorem 12: From (20) we have
$$\|x_k - x^*\|_A \leq \rho(P_k(A))\, \|x_0 - x^*\|_A \leq \max\{|P_k(\lambda)| : \lambda_n \leq \lambda \leq \lambda_1\}\, \|x_0 - x^*\|_A,$$
for all $P_k$ of degree $k$ with $P_k(0) = 1$. From Lemma 10 and Lemma 11 it follows that
$$\|x_k - x^*\|_A \leq \max\{|Q_k(\lambda)| : \lambda_n \leq \lambda \leq \lambda_1\}\, \|x_0 - x^*\|_A \leq 2c^k\, \|x_0 - x^*\|_A.$$

More information