Lecture 6 Positive Definite Matrices


Linear Algebra, Lecture 6: Positive Definite Matrices. Prof. Chun-Hung Liu, Dept. of Electrical and Computer Engineering, National Chiao Tung University, Spring 2017.

Outline (the material of this lecture is based on Chapter 6 of Strang's book): Minima, Maxima, and Saddle Points; Tests for Positive Definiteness; Singular Value Decomposition.

Minima, Maxima, and Saddle Points. The signs of the eigenvalues are often crucial. For stability in differential equations, we needed negative eigenvalues so that e^{λt} would decay. The new and highly important problem is to recognize a minimum point. Here are two examples: does either F(x, y) or f(x, y) have a minimum at the point x = y = 0? Remark 1. The zero-order terms F(0, 0) = 7 and f(0, 0) = 0 have no effect on the answer. They simply raise or lower the graphs of F and f. Remark 2. The linear terms give a necessary condition: to have any chance of a minimum, the first derivatives must vanish at x = y = 0.

Minima, Maxima, and Saddle Points. Thus (x, y) = (0, 0) is a stationary point for both functions. Remark 3. The second derivatives at (0, 0) are decisive: these second derivatives 4, 4, 2 contain the answer. Since they are the same for F and f, they must contain the same answer: F has a minimum if and only if f has a minimum.

Minima, Maxima, and Saddle Points. Remark 4. The higher-degree terms in F have no effect on the question of a local minimum, but they can prevent it from being a global minimum. Every quadratic form f = ax^2 + 2bxy + cy^2 has a stationary point at the origin, where ∂f/∂x = ∂f/∂y = 0. A local minimum would also be a global minimum; the surface z = f(x, y) would then be shaped like a bowl, resting on the origin (Figure 6.1).

Minima, Maxima, and Saddle Points. If the stationary point of F is at x = α, y = β, the only change would be to use the second derivatives at (α, β): this f(x, y) behaves near (0, 0) in the same way that F(x, y) behaves near (α, β). The third derivatives are drawn into the problem when the second derivatives fail to give a definite decision. That happens when the quadratic part is singular. For a true minimum, f is allowed to vanish only at x = y = 0. When f(x, y) is strictly positive at all other points (the bowl goes up), it is called positive definite.

Minima, Maxima, and Saddle Points. Definite versus Indefinite: Bowl versus Saddle. What conditions on a, b, and c ensure that the quadratic f(x, y) = ax^2 + 2bxy + cy^2 is positive definite? One necessary condition is easy: (i) If ax^2 + 2bxy + cy^2 is positive definite, then necessarily a > 0 (look at x = 1, y = 0). (ii) If f(x, y) is positive definite, then necessarily c > 0 (look at x = 0, y = 1).

Minima, Maxima, and Saddle Points. We now want a necessary and sufficient condition for positive definiteness. The simplest technique is to complete the square: f = ax^2 + 2bxy + cy^2 = a(x + (b/a)y)^2 + ((ac − b^2)/a)y^2.  (2) The last requirement for positive definiteness is that the coefficient of the second square must be positive: (iii) If ax^2 + 2bxy + cy^2 stays positive, then necessarily ac > b^2. Test for a minimum: The conditions a > 0 and ac > b^2 are just right. They guarantee c > 0. The right side of (2) is positive, and we have found a minimum.
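To make the 2 by 2 test concrete, here is a minimal numerical sketch (the coefficient values are made-up examples, not from the lecture); it applies a > 0 and ac > b^2 and cross-checks against the eigenvalues of the matrix [a b; b c]:

```python
import numpy as np

def is_positive_definite_2x2(a, b, c):
    """Test f(x, y) = a*x^2 + 2*b*x*y + c*y^2 using a > 0 and a*c > b^2."""
    return a > 0 and a * c > b * b

print(is_positive_definite_2x2(1, 1, 3))   # True: x^2 + 2xy + 3y^2 is a bowl
print(is_positive_definite_2x2(1, 2, 1))   # False: ac - b^2 = -3 < 0, a saddle

# Cross-check: the symmetric matrix [[a, b], [b, c]] should have positive eigenvalues.
A = np.array([[1.0, 1.0], [1.0, 3.0]])
print(np.all(np.linalg.eigvalsh(A) > 0))   # True
```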

Minima, Maxima, and Saddle Points. Test for a maximum: Since f has a maximum whenever −f has a minimum, we just reverse the signs of a, b, and c. This actually leaves ac > b^2 unchanged: the quadratic form is negative definite if and only if a < 0 and ac > b^2. Singular case ac = b^2: The second term in equation (2) disappears, leaving only the first square, which is either positive semidefinite (when a > 0) or negative semidefinite (when a < 0). The prefix semi allows the possibility that f can equal zero, as it will at the point x = b, y = −a. Saddle point ac < b^2: In one dimension, F(x) has a minimum or a maximum, or F'' = 0. In two dimensions, a very important possibility still remains: the combination ac − b^2 may be negative.

Minima, Maxima, and Saddle Points. It is useful to consider two special cases. In the first, b = 1 dominates a = c = 0. In the second, a = 1 and c = −1 have opposite signs. The saddles 2xy and x^2 − y^2 are practically the same. These quadratic forms are indefinite, because they can take either sign. So we have a stationary point that is neither a maximum nor a minimum: it is called a saddle point. Higher Dimensions: Linear Algebra. A quadratic f(x, y) comes directly from a symmetric 2 by 2 matrix: ax^2 + 2bxy + cy^2 = [x y] [a b; b c] [x y]^T.

Minima, Maxima, and Saddle Points. For any symmetric matrix A, the product x^T Ax is a pure quadratic form f(x_1, ..., x_n), the sum of a_ij x_i x_j over all i and j. The function is zero at x = (0, ..., 0), and its first derivatives are zero. The tangent is flat; this is a stationary point. We have to decide if x = 0 is a minimum, a maximum, or a saddle point of the function f = x^T Ax.

Minima, Maxima, and Saddle Points At a stationary point all first derivatives are zero. A is the second derivative matrix with entries a ij = @ 2 F/@x i @x j. This automatically equals a ji = @ 2 F/@x j @x i, so A is symmetric. Then F has a minimum when the pure quadratic x T Ax is positive definite. These second-order terms control F near the stationary point: At a stationary point, grad F =(@F/@x 1,...,@F/@x n ) zeros. is a vector of 2017/6/8 Lecture 6: Positive Definite Matrices 12

Tests for Positive Definiteness. Which symmetric matrices have the property that x^T Ax > 0 for all nonzero vectors x? The previous section began with some hints about the signs of eigenvalues, but that gave way to the tests on a, b, c for A = [a b; b c]. From those conditions, both eigenvalues are positive: their product λ_1 λ_2 is the determinant ac − b^2 > 0, so the eigenvalues are either both positive or both negative. They must be positive because their sum is the trace a + c > 0. Looking at a and ac − b^2, it is even possible to spot the appearance of the pivots: those coefficients a and (ac − b^2)/a are the pivots of a 2 by 2 matrix.

Tests for Positive Definiteness. For larger matrices the pivots still give a simple test for positive definiteness: x^T Ax stays positive when n independent squares are multiplied by positive pivots. The natural generalization will involve all n of the upper left submatrices A_1, A_2, ..., A_n of A. Here is the main theorem on positive definiteness: each of the following conditions is a necessary and sufficient test for the real symmetric matrix A to be positive definite. (I) x^T Ax > 0 for all nonzero real vectors x. (II) All the eigenvalues of A satisfy λ_i > 0. (III) All the upper left submatrices A_k have positive determinants. (IV) All the pivots (without row exchanges) satisfy d_k > 0.

Tests for Positive Definiteness Proof. Condition I defines a positive definite matrix. Our first step shows that each eigenvalue will be positive: A positive definite matrix has positive eigenvalues, since. x T Ax > 0 If all i > 0, we have to prove x T Ax > 0 for every vector x (not just the eigenvectors). Since symmetric matrices have a full set of orthonormal eigenvectors, any x is a combination c 1 x 1 + + c n x n. Then Because of the orthogonality, and the normalization, x T i x j =0 x T i x i =0 If every i > 0, then equation (2) shows that x T Ax > 0. Thus condition II implies condition I. 2017/6/8 Lecture 6: Positive Definite Matrices 15

Tests for Positive Definiteness If condition I holds, so does condition III: The determinant of A is the product of the eigenvalues. The trick is to look at all nonzero vectors whose last n-k components are zero: Thus A k is positive definite. Its eigenvalues must be positive. Its determinant is their product, so all upper left determinants are positive. If condition III holds, so does condition IV: If the determinants are all positive, so are the pivots. If condition IV holds, so does condition I: We are given positive pivots, and must deduce that x T Ax > 0. 2017/6/8 Lecture 6: Positive Definite Matrices 16

Tests for Positive Definiteness Those positive pivots in D multiply perfect squares to make Thus condition IV implies condition I. x T Ax positive. 2017/6/8 Lecture 6: Positive Definite Matrices 17

Tests for Positive Definiteness Positive Definite Matrices and Least Squares: The rectangular matrix will be R and the least-squares problem will be Rx = b. It has m equations with m n (square systems are included). The least-square choice bx is the solution of R T Rbx = R T b. That matrix A = R T R is not only symmetric but positive definite provided that the n columns of R are linearly independent: The key is to recognize x T Ax as x T R T Rx =(Rx) T (Rx). This squared length krxk 2 is positive (unless x = 0), because R has independent columns. (If x is nonzero then Rx is nonzero.). Thus x T R T Rx > 0 and R T R is positive definite. 2017/6/8 Lecture 6: Positive Definite Matrices 18

Tests for Positive Definiteness It remains to find an R for which. A = R T R This Cholesky decomposition has the pivots split evenly between L and. L T A third possibility is R = Q p Q T, the symmetric positive definite square root of A. 2017/6/8 Lecture 6: Positive Definite Matrices 19

Tests for Positive Definiteness. Semidefinite Matrices: The diagonalization A = QΛQ^T leads to x^T Ax = x^T QΛQ^T x = y^T Λ y, with y = Q^T x. If A has rank r, there are r nonzero λ's and r perfect squares in y^T Λ y = λ_1 y_1^2 + ··· + λ_r y_r^2.


Tests for Positive Definiteness. Ellipsoids in n Dimensions: The equation to consider is x^T Ax = 1. If A is the identity matrix, this simplifies to x_1^2 + x_2^2 + ··· + x_n^2 = 1, the equation of the unit sphere in R^n. If A = 4I, the sphere gets smaller: the equation changes to 4x_1^2 + ··· + 4x_n^2 = 1. The important step is to go from the identity matrix to a diagonal matrix.

Tests for Positive Definiteness. To locate the ellipse we compute λ_1 = 1 and λ_2 = 9. The unit eigenvectors are (1, 1)/√2 and (1, −1)/√2.

Tests for Positive Definiteness. The way to see the ellipse properly is to rewrite x^T Ax = 1 as a sum of squares: the eigenvalues 1 and 9 are outside the squares, and the eigenvectors are inside. Any ellipsoid x^T Ax = 1 can be simplified in the same way. The key step is to diagonalize A = QΛQ^T. Algebraically, the change to y = Q^T x produces a sum of squares: x^T Ax = λ_1 y_1^2 + ··· + λ_n y_n^2 = 1. The major axis has y_1 = 1/√λ_1 along the eigenvector with the smallest eigenvalue. The other axes are along the other eigenvectors; their lengths are 1/√λ_2, ..., 1/√λ_n.

Tests for Positive Definiteness. The change from x to y = Q^T x rotates the axes of the space to match the axes of the ellipsoid. It follows that λ_1 y_1^2 + ··· + λ_n y_n^2 = 1 is the equation of an ellipsoid. Its axes have lengths 1/√λ_1, ..., 1/√λ_n from the center. In the original x-space they point along the eigenvectors of A.
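A minimal sketch of this recipe, assuming the slide's 2 by 2 example is A = [[5, 4], [4, 5]] (which has eigenvalues 1 and 9 and the unit eigenvectors quoted above; the matrix itself is an assumption):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])               # assumed example: the ellipse 5x^2 + 8xy + 5y^2 = 1

lam, Q = np.linalg.eigh(A)               # eigenvalues in ascending order
print(lam)                               # [1. 9.]
print(1.0 / np.sqrt(lam))                # semi-axis lengths 1/sqrt(lambda): [1.  0.333...]
print(Q)                                 # columns: axis directions (unit eigenvectors)
```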

Tests for Positive Definiteness. The Generalized Eigenvalue Problem: In some engineering problems Ax = λx is replaced by Ax = λMx. There are two matrices rather than one. An example is the motion of two unequal masses in a line of springs: when the masses were equal, m_1 = m_2 = 1, this was the old system u'' + Au = 0. Now it is Mu'' + Au = 0, with a mass matrix M. The eigenvalue problem arises when we look for exponential solutions e^{iωt} x.

Tests for Positive Definiteness. We work out det(A − λM) with m_1 = 1 and m_2 = 2. The underlying theory is easier to explain if M is split into R^T R (M is assumed to be positive definite). Then the substitution y = Rx changes Ax = λMx into AR^{-1}y = λR^T y. Writing C for R^{-1}, and multiplying through by (R^T)^{-1} = C^T, this becomes a standard eigenvalue problem for the single symmetric matrix C^T AC: (C^T AC)y = λy.
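A small sketch of this reduction, using the slide's masses m_1 = 1, m_2 = 2 together with an assumed 2 by 2 stiffness matrix A (the stiffness values below are illustrative, not from the lecture):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])              # assumed stiffness matrix
M = np.diag([1.0, 2.0])                  # mass matrix with m1 = 1, m2 = 2

R = np.linalg.cholesky(M).T              # M = R^T R
C = np.linalg.inv(R)                     # C = R^{-1}
lam, y = np.linalg.eigh(C.T @ A @ C)     # standard problem (C^T A C) y = lam y
x = C @ y                                # columns of x solve A x = lam M x

print(lam)
print(np.allclose(A @ x, (M @ x) * lam))   # True
```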

Tests for Positive Definiteness. The properties of C^T AC lead directly to the properties of Ax = λMx, when A = A^T and M is positive definite: A and M are being simultaneously diagonalized. If S has the eigenvectors x_j in its columns, then S^T AS = Λ and S^T MS = I. The main point is easy to summarize: as long as M is positive definite, the generalized eigenvalue problem Ax = λMx behaves exactly like Ax = λx.

Singular Value Decomposition (SVD). The SVD is closely associated with the eigenvalue-eigenvector factorization QΛQ^T of a positive definite matrix. But now we allow the Q on the left and the Q^T on the right to be any two orthogonal matrices U and V^T, not necessarily transposes of each other. Then every matrix will split into A = UΣV^T. The diagonal (but rectangular) matrix Σ has entries coming from the eigenvalues of A^T A, not of A. Those positive entries, called σ_1, ..., σ_r, are the singular values of A (σ_i^2 is an eigenvalue of A^T A). They fill the first r places on the main diagonal of Σ when A has rank r; the rest of Σ is zero. Singular Value Decomposition: Any m by n matrix A can be factored into A = UΣV^T = (orthogonal)(diagonal)(orthogonal).
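A minimal numpy sketch, on an arbitrary 2 by 3 example, verifying the factorization and the connection to the eigenvalues of A A^T:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 2.0]])           # arbitrary 2-by-3 example

U, s, Vt = np.linalg.svd(A)               # s holds the singular values sigma_1 >= sigma_2
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))     # True: A = U Sigma V^T

# The squares sigma_i^2 are the eigenvalues of A A^T (the nonzero ones of A^T A).
print(np.sort(s**2), np.sort(np.linalg.eigvalsh(A @ A.T)))
```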


Singular Value Decomposition. Application of the SVD: The SVD is terrific for numerically stable computations, because U and V are orthogonal matrices: they never change the length of a vector. Since ‖Ux‖^2 = x^T U^T Ux = ‖x‖^2, multiplication by U cannot destroy the scaling. Image processing: Suppose a satellite takes a picture and wants to send it to Earth. The picture may contain 1000 by 1000 pixels, a million little squares, each with a definite color. We can code the colors and send back 1,000,000 numbers. It is better to find the essential information inside the 1000 by 1000 matrix, and send only that. Suppose we know the SVD. The key is in the singular values (in Σ). Typically, some σ's are significant and the others are extremely small. If we keep 20 and throw away 980, then we send only the corresponding 20 columns of U and V. We can do the matrix multiplication as columns times rows: A = UΣV^T = σ_1 u_1 v_1^T + ··· + σ_r u_r v_r^T, so any matrix is the sum of r matrices of rank 1. If only 20 terms are kept, we send 20 times 2000 numbers instead of a million.
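A small sketch of the mechanics of keeping 20 terms (a random matrix stands in for the picture; a real image would have rapidly decaying singular values, which is what makes the truncation worthwhile):

```python
import numpy as np

rng = np.random.default_rng(1)
picture = rng.standard_normal((1000, 1000))        # stand-in for the pixel matrix

U, s, Vt = np.linalg.svd(picture, full_matrices=False)

k = 20
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # sum of 20 rank-1 pieces sigma_i u_i v_i^T

numbers_sent = k * 1000 + k * 1000 + k             # 20 columns of U, 20 columns of V, 20 sigmas
print(numbers_sent)                                # 40020 instead of 1,000,000
print(approx.shape)                                # (1000, 1000)
```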

Singular Value Decomposition The effective rank: The rank of a matrix is the number of independent rows, and the number of independent columns. Consider the following: The first has rank 1, although roundoff error will probably produce a second pivot. Both pivots will be small; how many do we ignore? The second has one small pivot, but we cannot pretend that its row is insignificant. The third has two pivots and its rank is 2, but its effective rank ought to be 1. We go to a more stable measure of rank. The first step is to use AA T or A T A, which are symmetric but share the same rank as A. Their eigenvalues the singular values squared are not misleading. Based on the accuracy of the data, we decide on a tolerance like 10 6 and count the singular values above it that is the effective rank. 2017/6/8 Lecture 6: Positive Definite Matrices 34

Singular Value Decomposition Polar decomposition: Every nonzero complex number z is a positive number r times a number e i on the unit circle: z = re i. The SVD extends this polar factorization to matrices of any size: For proof we just insert V T V = I into the middle of the SVD: The factor S = V V T is symmetric and semidefinite (because is). The factor Q = UV T is an orthogonal matrix (because QQ T = VU T UV T = I ). 2017/6/8 Lecture 6: Positive Definite Matrices 35

Singular Value Decomposition Least Squares: For a rectangular system Ax = b, the least-squares solution comes from the normal equations A T Abx = A T b. If A has dependent columns then A T A is not invertible and bx is not determined. We have to choose a particular solution of A T Abx = A T b, and we choose the shortest. 2017/6/8 Lecture 6: Positive Definite Matrices 36

Singular Value Decomposition That minimum length solution will be called. x + 2017/6/8 Lecture 6: Positive Definite Matrices 37

Singular Value Decomposition Based on this example, we know and for any diagonal matrix : + x + Now we find x + in the general case. We claim that the shortest solution x + is always in the row space of A. Remember that any vector bx can be split into a row space component x r and a nullspace component: bx = x r + x n 2017/6/8 Lecture 6: Positive Definite Matrices 38

Singular Value Decomposition There are three important points about that splitting: The fundamental theorem of linear algebra was in Figure 3.4. Every p in the column space comes from one and only one vector x r in the row space. All we are doing is to choose that vector, x + = x r, as the best solution to Ax = b. 2017/6/8 Lecture 6: Positive Definite Matrices 39


Singular Value Decomposition The row space of A is the column space of. Here is a formula for : A + A + Proof. Multiplication by the orthogonal matrix U T leaves lengths unchanged: Introduce the new unknown y = V T x = V, which has the same length as x. Then, minimizing kax bk is the same as minimizing k y U T bk. Now is diagonal and we know the best y +. It is y + = + U T b so the best x + is Vy + : 1 x Vy + is in the row space, and A T Ax + = A T b + from the SVD. 2017/6/8 Lecture 6: Positive Definite Matrices 41

Minimum Principles. Our goal here is to find the minimum principle that is equivalent to Ax = b, and the minimization equivalent to Ax = λx. The first step is straightforward: we want to find the parabola P(x) whose minimum occurs when Ax = b. If A is just a scalar, that is easy to do: P(x) = (1/2)Ax^2 − bx has slope Ax − b, which vanishes at x = A^{-1}b. This point x = A^{-1}b will be a minimum if A is positive; then the parabola P(x) opens upward (Figure 6.4). In general, if A is symmetric positive definite, then P(x) = (1/2)x^T Ax − x^T b reaches its minimum at the point where Ax = b. Proof. Suppose Ax = b. For any vector y, we show that P(y) ≥ P(x).


Minimum Principles. The difference P(y) − P(x) = (1/2)(y − x)^T A (y − x) can't be negative, since A is positive definite, and it is zero only if y − x = 0. At all other points P(y) is larger than P(x), so the minimum occurs at x.
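A quick numerical sketch on a made-up 2 by 2 example: the solution of Ax = b gives a smaller P than random nearby points.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # example positive definite matrix
b = np.array([1.0, 2.0])

def P(x):
    """P(x) = 1/2 x^T A x - x^T b."""
    return 0.5 * x @ A @ x - x @ b

x_star = np.linalg.solve(A, b)           # the minimizer solves Ax = b
rng = np.random.default_rng(2)
trials = [P(x_star + rng.standard_normal(2)) for _ in range(5)]
print(P(x_star), min(trials) > P(x_star))   # the perturbed values are all larger
```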

Minimum Principles. Minimizing with Constraints: Many applications add extra equations Cx = d on top of the minimization problem. These equations are constraints. We minimize P(x) subject to the extra requirement Cx = d. Usually x can't satisfy n equations Ax = b and also ℓ extra constraints Cx = d, so we need ℓ more unknowns. Those new unknowns y_1, ..., y_ℓ are called Lagrange multipliers. They build the constraint into a function L(x, y) = P(x) + y^T(Cx − d). When we set the derivatives of L to zero, we have n + ℓ equations for the n + ℓ unknowns x and y.


Minimum Principles. Figure 6.5 shows what problem the linear algebra has solved, if the constraint keeps x on the line 2x_1 − x_2 = 5. We are looking for the closest point to (0, 0) on this line. The solution is x = (2, −1). Least Squares Again: The best x̂ is the vector that minimizes the squared error E^2 = ‖Ax − b‖^2. Compare with P(x) = (1/2)x^T Ax − x^T b at the start of the section, which led to Ax = b.
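A minimal sketch of that constrained problem: minimizing (1/2)‖x‖^2 (so A = I and b = 0, an assumed reading of Figure 6.5) subject to 2x_1 − x_2 = 5, solved through the Lagrange-multiplier block system.

```python
import numpy as np

# Closest point to the origin on the line 2*x1 - x2 = 5:
# minimize 1/2 x^T x subject to Cx = d via the block system [[A, C^T], [C, 0]] [x; y] = [b; d].
A = np.eye(2)                            # P(x) = 1/2 ||x||^2, so A = I and b = 0
b = np.zeros(2)
C = np.array([[2.0, -1.0]])
d = np.array([5.0])

KKT = np.block([[A, C.T],
                [C, np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([b, d]))
print(sol[:2])                           # [ 2. -1.] -> the point (2, -1)
```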

Minimum Principles. The minimizing equation changes from Ax = b into the normal equations A^T A x̂ = A^T b.

Matrix Factorizations.