
Quadratic forms

1. Symmetric matrices

An $n \times n$ matrix $A = (a_{ij})_{i,j=1}^n$ with entries in $\mathbb{R}$ is called symmetric if $A = A^T$, that is, if $a_{ij} = a_{ji}$ for all $1 \le i, j \le n$. We denote by $S_n(\mathbb{R})$ the set of all $n \times n$ symmetric matrices with coefficients in $\mathbb{R}$.

Remark 1.1. Note that if $B$ is an $n \times n$ matrix with entries in $\mathbb{R}$ (not necessarily symmetric), then for any vectors $x, y \in \mathbb{R}^n$ (thought of as column vectors) we have $y \cdot Bx = y^T B x = x^T B^T y$. Here we think of the dot-product $y \cdot Bx$ as a $1 \times 1$ matrix.

We summarize the properties of symmetric matrices in the following statement:

Theorem 1.2. Let $n \ge 1$ be an integer and let $A \in S_n(\mathbb{R})$ be a symmetric matrix. Then the following hold:

(1) For any vectors $x, y \in \mathbb{R}^n$ we have $Ax \cdot y = x \cdot Ay$.

(2) The characteristic polynomial $p_A(t)$ of $A$ factors completely over $\mathbb{R}$, that is, there exist (not necessarily distinct) $\lambda_1, \dots, \lambda_n \in \mathbb{R}$ such that $p_A(t) = (t - \lambda_1) \cdots (t - \lambda_n)$.

(3) If $\lambda \ne \lambda'$ are distinct eigenvalues of $A$ and $x, x' \in \mathbb{R}^n$ are nonzero eigenvectors corresponding to $\lambda, \lambda'$ respectively (that is, $Ax = \lambda x$ and $Ax' = \lambda' x'$), then $x \cdot x' = 0$.

(4) If $\lambda$ is an eigenvalue of $A$ with multiplicity $k \ge 1$ in $p_A(t)$, then there exist $k$ linearly independent eigenvectors of $A$ corresponding to $\lambda$, and the eigenspace $E_\lambda = \{x \in \mathbb{R}^n \mid Ax = \lambda x\}$ has dimension $k$.

(5) There exists an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.

(6) Let $p_A(t) = (t - \lambda_1) \cdots (t - \lambda_n)$ be the factorization of $p_A(t)$ given by part (2) above, and let $v_1, \dots, v_n \in \mathbb{R}^n$ be an orthonormal basis of $\mathbb{R}^n$ such that $Av_i = \lambda_i v_i$ (such a basis exists by part (5)). Let $C = [v_1 \dots v_n]$ be the $n \times n$ matrix with columns $v_1, \dots, v_n$. Then $C \in O(n)$ and
$$C^{-1} A C = \mathrm{Diag}(\lambda_1, \dots, \lambda_n).$$
Here
$$\mathrm{Diag}(\lambda_1, \dots, \lambda_n) = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix}.$$

Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
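As an illustration of part (6), here is a minimal numerical sketch (not part of the original notes) using NumPy's `np.linalg.eigh`, which is designed for symmetric matrices. The test matrix is the one that reappears in Example 3.1(a) below.

```python
import numpy as np

# A small symmetric matrix (the one from Example 3.1(a) below).
A = np.array([[1.0, 3.0],
              [3.0, 1.0]])

# eigh is specialized to symmetric matrices: it returns real eigenvalues
# (in ascending order) and an orthonormal basis of eigenvectors.
eigvals, C = np.linalg.eigh(A)

# C is orthogonal: C^T C = I, so C^{-1} = C^T.
assert np.allclose(C.T @ C, np.eye(2))

# C^{-1} A C = Diag(lambda_1, ..., lambda_n), as in Theorem 1.2(6).
assert np.allclose(C.T @ A @ C, np.diag(eigvals))

print(eigvals)  # [-2.  4.]
```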

2. Quadratic forms

Definition 2.1 (Quadratic form). Let $V$ be a vector space over $\mathbb{R}$ of finite dimension $n \ge 1$. A quadratic form on $V$ is a bilinear map $Q : V \times V \to \mathbb{R}$ such that $Q$ is symmetric, that is, $Q(v, w) = Q(w, v)$ for all $v, w \in V$. Saying that $Q$ is bilinear means that
$$Q(c_1 v_1 + c_2 v_2, w) = c_1 Q(v_1, w) + c_2 Q(v_2, w)$$
and
$$Q(v, c_1 w_1 + c_2 w_2) = c_1 Q(v, w_1) + c_2 Q(v, w_2)$$
for all $c_1, c_2 \in \mathbb{R}$ and $v_1, v_2, v, w, w_1, w_2 \in V$.

To every quadratic form $Q : V \times V \to \mathbb{R}$ we also associate the quadratic function $q : V \to \mathbb{R}$ defined as $q(v) := Q(v, v)$ for $v \in V$. We will see in Proposition 2.3 below that the quadratic form $Q$ can be uniquely recovered from $q$.

Example 2.2. We have:

(1) The standard dot-product $\cdot : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, $(x_1, \dots, x_n) \cdot (y_1, \dots, y_n) = x_1 y_1 + \dots + x_n y_n$, is a quadratic form on $\mathbb{R}^n$.

(2) Let $A$ be an $n \times n$ matrix with entries in $\mathbb{R}$. Define $Q_A : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ as $Q_A(x, y) := y^T A x = y \cdot Ax = \sum_{i,j=1}^n a_{ij} y_i x_j$. Then $Q_A$ is bilinear, and it is a quadratic form on $\mathbb{R}^n$ whenever $A$ is symmetric.

(3) Let $A$ be an $n \times n$ matrix with entries in $\mathbb{R}$ and let $B = \frac{1}{2}(A + A^T)$ (so that $B = B^T$ and $B$ is symmetric). Then $Q_A$ and $Q_B$ have the same quadratic function, that is, $Q_A(x, x) = Q_B(x, x)$ for all $x \in \mathbb{R}^n$.

(4) Let $V$ be a vector space with a basis $E = e_1, \dots, e_n$ and let $\lambda_1, \dots, \lambda_n \in \mathbb{R}$ be real numbers. Define $Q : V \times V \to \mathbb{R}$ as $Q(x, y) = \sum_{i=1}^n \lambda_i x_i y_i$, where $x = \sum_{i=1}^n x_i e_i$ and $y = \sum_{i=1}^n y_i e_i$ are arbitrary vectors in $V$. Then $Q$ is a quadratic form on $V$.

(5) Let $V$ be a vector space with a basis $E = e_1, \dots, e_n$ and let $A \in S_n(\mathbb{R})$ be a symmetric matrix. Define $Q_{A,E} : V \times V \to \mathbb{R}$ as $Q_{A,E}(x, y) := \sum_{i,j=1}^n a_{ij} y_i x_j$, where $x = \sum_{i=1}^n x_i e_i$ and $y = \sum_{i=1}^n y_i e_i$ are arbitrary vectors in $V$. Then $Q_{A,E}$ is a quadratic form on $V$.

Proposition 2.3. The following hold:

(1) For any quadratic form $Q$ on $\mathbb{R}^n$ there exists a unique symmetric matrix $A \in S_n(\mathbb{R})$ such that $Q = Q_A$.
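To make Example 2.2(3) concrete, here is a brief NumPy check (an illustrative sketch, not from the notes): the bilinear forms $Q_A$ and $Q_B$ differ in general when $A$ is not symmetric, but their quadratic functions coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# A non-symmetric matrix A and its symmetric part B = (A + A^T)/2.
A = rng.standard_normal((3, 3))
B = 0.5 * (A + A.T)

# Q_M(x, y) = y^T M x, as in Example 2.2(2).
def Q(M, x, y):
    return y @ M @ x

x = rng.standard_normal(3)
y = rng.standard_normal(3)

# The bilinear forms differ in general ...
print(np.isclose(Q(A, x, y), Q(B, x, y)))  # typically False
# ... but the quadratic functions q(x) = Q(x, x) always agree.
assert np.isclose(Q(A, x, x), Q(B, x, x))
```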

(2) Let $V$ be a vector space of dimension $n$ with a basis $E = e_1, \dots, e_n$. Let $Q : V \times V \to \mathbb{R}$ be a quadratic form on $V$. Then there exists a unique symmetric matrix $A = (a_{ij})_{ij} \in S_n(\mathbb{R})$ such that $Q = Q_{A,E}$. Moreover, $a_{ij} = Q(e_i, e_j)$ for all $1 \le i, j \le n$.

(3) Let $V$, $Q$, $E$ and $A$ be as in (2) above. Then for $i = 1, \dots, n$ we have $a_{ii} = Q(e_i, e_i)$, and for any $1 \le i < j \le n$ we have
$$a_{ij} = \frac{1}{2}\big(Q(e_i + e_j, e_i + e_j) - Q(e_i, e_i) - Q(e_j, e_j)\big).$$

Thus part (3) of Proposition 2.3 shows that if $Q : V \times V \to \mathbb{R}$ is a quadratic form, then we can recover $Q$ from knowing the values $Q(x, x)$ for all $x \in V$.

Proposition 2.3 describes all quadratic forms on $V$ in terms of their associated symmetric matrices, once we choose a basis $E$ of $V$. It is useful to know how these matrices change when we change the basis of $V$.

Proposition 2.4. Let $V$ be a vector space of dimension $n$ and let $Q : V \times V \to \mathbb{R}$ be a quadratic form on $V$. Let $E = e_1, \dots, e_n$ and $E' = e'_1, \dots, e'_n$ be two bases of $V$. Let $A, A'$ be symmetric matrices such that $Q = Q_{A,E} = Q_{A',E'}$. Then $A' = C^T A C$, where $C = (c_{ij})_{ij}$ is the transition matrix from $E$ to $E'$, that is, $e'_j = \sum_{i=1}^n c_{ij} e_i$.

Together with Theorem 1.2, Proposition 2.4 implies:

Corollary 2.5. Let $V$ be a vector space of dimension $n$ and let $Q : V \times V \to \mathbb{R}$ be a quadratic form on $V$. Then there exist a basis $E = e_1, \dots, e_n$ and real numbers $\lambda_1, \dots, \lambda_n$ such that $Q = Q_{A,E}$ with $A = \mathrm{Diag}(\lambda_1, \dots, \lambda_n)$. This means that $Q(x, y) = \sum_{i=1}^n \lambda_i x_i y_i$, where $x = \sum_{i=1}^n x_i e_i$ and $y = \sum_{i=1}^n y_i e_i$ are arbitrary vectors in $V$.

The values $\lambda_1, \dots, \lambda_n$ in Corollary 2.5 are not uniquely determined by $Q$. However, it turns out that the number $n_+$ of $i \in \{1, \dots, n\}$ such that $\lambda_i > 0$, the number $n_-$ of $i \in \{1, \dots, n\}$ such that $\lambda_i < 0$, and the number $n_0$ of $i \in \{1, \dots, n\}$ such that $\lambda_i = 0$ are uniquely determined by $Q$. The triple $(n_0, n_+, n_-)$ is called the signature of $Q$. Note that by definition $n_0 + n_+ + n_- = n$.

Proposition 2.6. Let $V$ be a vector space over $\mathbb{R}$ of finite dimension $n \ge 1$ and let $Q$ be a quadratic form on $V$. Then the following are equivalent:

(1) The form $Q$ is positive-definite, that is, $Q(x, x) > 0$ for every $x \in V$, $x \ne 0$.

(2) We have $n_+ = n$ and $n_0 = n_- = 0$.

Remark 2.7. Thus we see that if $Q$ is a quadratic form on $V$ with $Q = Q_{A,E}$ for some basis $E$ of $V$ and a symmetric matrix $A \in S_n(\mathbb{R})$ with eigenvalues $\lambda_1, \dots, \lambda_n \in \mathbb{R}$, then $Q$ is positive-definite if and only if $\lambda_i > 0$ for $i = 1, \dots, n$ (in which case we also have $\det(A) = \lambda_1 \cdots \lambda_n > 0$).
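The signature of Corollary 2.5 and the positive-definiteness test of Proposition 2.6 are easy to compute numerically. Below is a minimal sketch (the helper names are my own, not from the notes) that reads the signature off the eigenvalues of the associated symmetric matrix.

```python
import numpy as np

def signature(A, tol=1e-10):
    """Signature (n0, n+, n-) of the quadratic form Q_A for symmetric A.

    By Corollary 2.5 the counts of zero, positive and negative lambda_i
    are determined by Q, so we may read them off the eigenvalues of A.
    """
    eigvals = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    n0 = int(np.sum(np.abs(eigvals) < tol))
    n_plus = int(np.sum(eigvals > tol))
    n_minus = int(np.sum(eigvals < -tol))
    return n0, n_plus, n_minus

def is_positive_definite(A):
    # Proposition 2.6: Q_A is positive-definite iff n+ = n (so n0 = n- = 0).
    _, n_plus, _ = signature(A)
    return n_plus == A.shape[0]

print(signature(np.array([[1.0, 3.0], [3.0, 1.0]])))  # (0, 1, 1)
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 7.0]])))  # True
```

The two test matrices are those of Example 3.1 below, and the printed values match the signatures computed there by hand.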

Proposition 2.6 implies that if $Q$ is a positive-definite quadratic form on a vector space $V$ of finite dimension $n \ge 1$, then for every nonzero linear subspace $W$ of $V$ the restriction of $Q$ to $W$, $Q : W \times W \to \mathbb{R}$, is again a positive-definite quadratic form.

3. The case of dimension 2

Let $V$ be a vector space over $\mathbb{R}$ of dimension 2. Let $E = e_1, e_2$ be a basis of $V$ and let $Q : V \times V \to \mathbb{R}$ be a quadratic form on $V$. Then $Q = Q_{A,E}$ where
$$A = \begin{pmatrix} E & F \\ F & G \end{pmatrix}$$
with $E = Q(e_1, e_1)$, $F = Q(e_1, e_2) = Q(e_2, e_1)$ and $G = Q(e_2, e_2)$. For $x = x_1 e_1 + x_2 e_2 \in V$ and $y = y_1 e_1 + y_2 e_2 \in V$ we have
$$Q(x, y) = \sum_{i,j=1}^2 a_{ij} y_i x_j = E x_1 y_1 + F x_1 y_2 + F x_2 y_1 + G x_2 y_2$$
and $Q(x, x) = E x_1^2 + 2F x_1 x_2 + G x_2^2$.

Example 3.1. (a) Let $Q : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ be given by
$$Q((x_1, x_2), (y_1, y_2)) = x_1 y_1 + 3 x_1 y_2 + 3 x_2 y_1 + x_2 y_2.$$
Thus for the standard unit basis $E = e_1, e_2$ of $\mathbb{R}^2$ we have $Q = Q_{A,E}$ where
$$A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix},$$
so that $E = G = 1$, $F = 3$. The characteristic polynomial of $A$ is $\lambda^2 - 2\lambda - 8 = (\lambda - 4)(\lambda + 2)$, with roots $\lambda_1 = -2$, $\lambda_2 = 4$. Thus the signature of $Q$ is $n_+ = n_- = 1$, $n_0 = 0$, and the quadratic form $Q$ is not positive-definite.

(b) Let $Q : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ be given by
$$Q((x_1, x_2), (y_1, y_2)) = x_1 y_1 + 2 x_1 y_2 + 2 x_2 y_1 + 7 x_2 y_2.$$
Thus for the standard unit basis $E = e_1, e_2$ of $\mathbb{R}^2$ we have $Q = Q_{A,E}$ where
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 7 \end{pmatrix},$$
so that $E = 1$, $F = 2$, $G = 7$. The characteristic polynomial of $A$ is $\lambda^2 - 8\lambda + 3$, with roots $\lambda_1 = 4 - \sqrt{13}$, $\lambda_2 = 4 + \sqrt{13}$. Since $\lambda_1 > 0$ and $\lambda_2 > 0$, the signature of $Q$ is $n_+ = 2$, $n_0 = n_- = 0$, and the quadratic form $Q$ is positive-definite.
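The polarization identity of Proposition 2.3(3) can be checked on Example 3.1(a): given only the quadratic function $q(x) = x_1^2 + 6 x_1 x_2 + x_2^2$, the sketch below (illustrative, not part of the notes) recovers $E$, $F$ and $G$.

```python
import numpy as np

# q is the quadratic function of Example 3.1(a): q(x) = Q(x, x).
def q(x):
    x1, x2 = x
    return x1**2 + 6.0 * x1 * x2 + x2**2

# Recover the symmetric matrix entries from q alone, using
# Proposition 2.3(3): a_ii = q(e_i) and, for i < j,
# a_ij = (q(e_i + e_j) - q(e_i) - q(e_j)) / 2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E = q(e1)
G = q(e2)
F = 0.5 * (q(e1 + e2) - q(e1) - q(e2))

print(E, F, G)  # 1.0 3.0 1.0, matching A = [[1, 3], [3, 1]]
```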

3.1. Riemannian metrics. A Riemannian metric $g$ on $\mathbb{R}^n$ is a function that, to every $p \in \mathbb{R}^n$, associates a positive-definite quadratic form $g_p$ on the tangent space $T_p\mathbb{R}^n$. Thus for every $p \in \mathbb{R}^n$, $g_p$ has the form $g_p = Q_{A(p)}$ for some symmetric $n \times n$ matrix $A(p) = (g_{ij}(p))_{ij}$, so that
$$g_p\big((x_1, \dots, x_n)_p, (y_1, \dots, y_n)_p\big) = \sum_{i,j=1}^n g_{ij}(p) y_i x_j$$
and
$$g_p\big((x_1, \dots, x_n)_p, (x_1, \dots, x_n)_p\big) = \sum_{i,j=1}^n g_{ij}(p) x_i x_j.$$
Note that $g_{ij}(p) = g_p((e_i)_p, (e_j)_p)$. We think of $g_p$ as an inner product on $T_p\mathbb{R}^n$.

In the case $n = 2$ the matrix $A(p)$ is
$$A(p) = \begin{pmatrix} E(p) & F(p) \\ F(p) & G(p) \end{pmatrix},$$
so that
$$g_p\big((x_1, x_2)_p, (y_1, y_2)_p\big) = E(p) x_1 y_1 + F(p) x_1 y_2 + F(p) x_2 y_1 + G(p) x_2 y_2$$
and $g_p\big((x_1, x_2)_p, (x_1, x_2)_p\big) = E(p) x_1^2 + 2 F(p) x_1 x_2 + G(p) x_2^2$.

A Riemannian metric $g$ on $\mathbb{R}^n$ is smooth if the functions $g_{ij} : \mathbb{R}^n \to \mathbb{R}$ are smooth for all $1 \le i, j \le n$.

Let $g$ be a smooth Riemannian metric on $\mathbb{R}^n$. If $v_p \in T_p\mathbb{R}^n$, we put $\|v_p\|_g := \sqrt{g_p(v_p, v_p)}$ and call this number the $g$-norm of $v_p$. For a smooth curve $\gamma : [a, b] \to \mathbb{R}^n$, $\gamma(t) = (x_1(t), \dots, x_n(t))$, we define the length of $\gamma$ with respect to $g$ as
$$s := \int_a^b \|\gamma'(t)\|_g \, dt = \int_a^b \sqrt{\sum_{i,j=1}^n g_{ij}(\gamma(t)) \, \frac{dx_i}{dt} \frac{dx_j}{dt}} \, dt.$$
For this reason, the Riemannian metric $g$ is often symbolically denoted as $ds^2 = \sum_{i,j=1}^n g_{ij} \, dx_i \, dx_j$.
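As a closing illustration (not from the notes), the length integral can be approximated numerically. The sketch below assumes one specific sample metric, the hyperbolic metric $ds^2 = (dx^2 + dy^2)/y^2$ on the upper half-plane, and computes the $g$-length of a vertical segment, which by hand is exactly $\int_1^e dt/t = 1$.

```python
import numpy as np

# An assumed sample metric for illustration: E(p) = G(p) = 1/y^2, F(p) = 0,
# i.e. ds^2 = (dx^2 + dy^2) / y^2 on the upper half-plane {y > 0}.
def A(p):
    x, y = p
    return np.array([[1.0 / y**2, 0.0],
                     [0.0, 1.0 / y**2]])

def curve_length(gamma, dgamma, a, b, num=10_000):
    """Approximate s = integral of ||gamma'(t)||_g dt by the trapezoid rule."""
    ts = np.linspace(a, b, num)
    speeds = [np.sqrt(dgamma(t) @ A(gamma(t)) @ dgamma(t)) for t in ts]
    return np.trapz(speeds, ts)

# Vertical segment gamma(t) = (0, t) from y = 1 to y = e; its g-length is
# the integral of 1/t over [1, e], i.e. exactly 1.
gamma = lambda t: np.array([0.0, t])
dgamma = lambda t: np.array([0.0, 1.0])
print(curve_length(gamma, dgamma, 1.0, np.e))  # ~1.0
```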