
ELLIPSES

Problem: Find the points on the locus

$$Q(x, y) = 865x^2 - 294xy + 585y^2 = 1450$$

closest to, and farthest from, the origin.

Answer. This is a Lagrange multiplier problem: we want to extremize $f(x, y) = x^2 + y^2$ subject to the constraint $Q(x, y) = 1450$. To do this we look for points on the locus where the gradient of $f + \lambda Q$ is zero; this amounts to solving the system

$$\nabla(f + \lambda Q) = \bigl(2x + \lambda(1730x - 294y),\ 2y + \lambda(-294x + 1170y)\bigr) = (0, 0)$$

together with the constraint equation: that gives us three equations in three unknowns ($x$, $y$, and $\lambda$). But if we stare at this for a minute, we start to suspect that it might be easier to deal algebraically with the related problem of finding the maximum of the function $Q$ on the unit circle $f(x, y) = 1$: that question requires us to find the points where the gradient of $Q + \lambda f$ vanishes, which is pretty much what we would get if we multiplied our original problem by $\lambda^{-1}$. After eliminating superfluous factors of two, the new problem gives us the system

$$\lambda x + 865x - 147y = 0, \qquad \lambda y - 147x + 585y = 0,$$

which can be rewritten in matrix notation as

$$(A + \lambda \mathbf{1})\,\mathbf{x} = \mathbf{0},$$

where $\mathbf{x}$ denotes the vector $(x, y)$, $\mathbf{1}$ denotes the two-by-two identity matrix (with ones down the diagonal and zeros elsewhere), and

$$A = \begin{bmatrix} 865 & -147 \\ -147 & 585 \end{bmatrix}.$$

Now it is useful to recall that a square matrix is invertible (i.e., has an inverse matrix, in the sense that the matrix product of the matrix and its inverse equals the identity matrix) if and only if its determinant is nonzero. [This is related to the geometric interpretation of determinants as volumes: if the determinant vanishes, the linear transformation defined by the (square) matrix squashes a rectangle flat.] So if the determinant of the matrix $A + \lambda \mathbf{1}$ is not zero, there can be no nontrivial solutions to our system of equations, because we can then multiply our equation on the left by the inverse matrix $(A + \lambda \mathbf{1})^{-1}$ to obtain (only) the trivial solution

$$(A + \lambda \mathbf{1})^{-1}(A + \lambda \mathbf{1})\,\mathbf{x} = \mathbf{1}\,\mathbf{x} = \mathbf{x} = \mathbf{0}.$$
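The three-equation Lagrange system above can also be handed directly to a computer algebra system. The following sketch (my addition, not part of the original notes, assuming SymPy is available) solves the system and recovers the extreme distances:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2
Q = 865*x**2 - 294*x*y + 585*y**2

# grad(f + lam*Q) = 0 together with the constraint Q = 1450
system = [sp.diff(f + lam*Q, x), sp.diff(f + lam*Q, y), Q - 1450]
solutions = sp.solve(system, [x, y, lam], dict=True)

# At each critical point, print the coordinates and the distance to the origin.
for s in solutions:
    dist = sp.simplify(sp.sqrt(f.subs({x: s[x], y: s[y]})))
    print(s[x], s[y], dist)
```

The distances that come out are 5/3 (farthest) and 5/4 (closest), matching the answer derived below by hand.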
Thus in order for a nontrivial solution to exist, $\lambda$ must satisfy the quadratic equation $\det(A + \lambda \mathbf{1}) = 0$, i.e.

$$\det \begin{bmatrix} 865 + \lambda & -147 \\ -147 & 585 + \lambda \end{bmatrix} = (\lambda + 865)(\lambda + 585) - (-147)^2 = 0,$$

which multiplies out to

$$\lambda^2 + 1450\lambda + [865 \cdot 585 - 147^2] = \lambda^2 + 1450\lambda + 484{,}416 = 0.$$
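As a quick numerical cross-check (a NumPy sketch, my addition, not part of the original notes): for a $2 \times 2$ matrix the characteristic polynomial of $A + \lambda\mathbf{1}$ is $\lambda^2 + \operatorname{tr}(A)\lambda + \det(A)$, and its roots are the negatives of the eigenvalues of $A$.

```python
import numpy as np

A = np.array([[865.0, -147.0], [-147.0, 585.0]])

# det(A + lam*I) = lam^2 + tr(A)*lam + det(A) for a 2x2 matrix
coeffs = [1.0, np.trace(A), np.linalg.det(A)]   # [1, 1450, 484416]
print(np.sort(np.roots(coeffs)))                # roots -928 and -522

# The eigenvalues of A itself are the negatives of those roots.
print(np.sort(np.linalg.eigvalsh(A)))           # 522 and 928
```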

According to the quadratic formula, then,

$$\lambda = \tfrac{1}{2}\bigl[-1450 \pm \sqrt{1450^2 - 4 \cdot 484{,}416}\,\bigr];$$

but

$$1450^2 - 4 \cdot 484{,}416 = 2{,}102{,}500 - 1{,}937{,}664 = 164{,}836 = 406^2,$$

so

$$\lambda = \tfrac{1}{2}[-1450 \pm 406] = -522 \ (= -9 \cdot 58) \quad \text{or} \quad -928 \ (= -16 \cdot 58).$$

Substituting the first of these values into our original matrix equation gives us

$$\begin{bmatrix} 865 - 522 & -147 \\ -147 & 585 - 522 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \mathbf{0},$$

but the matrix factors as

$$\begin{bmatrix} 343 & -147 \\ -147 & 63 \end{bmatrix} = \begin{bmatrix} 7 \cdot 49 & -3 \cdot 49 \\ -7 \cdot 21 & 3 \cdot 21 \end{bmatrix}$$

and thus kills the vector $(3, 7)$, as well as its normalized multiple $\mathbf{x}_+ = \tfrac{1}{\sqrt{58}}(3, 7)$. Similarly, substituting in the second root yields the matrix

$$\begin{bmatrix} 865 - 928 & -147 \\ -147 & 585 - 928 \end{bmatrix} = -\begin{bmatrix} 63 & 147 \\ 147 & 343 \end{bmatrix},$$

which admits a similar factorization, and consequently kills the normalized vector $\mathbf{x}_- = \tfrac{1}{\sqrt{58}}(-7, 3)$.

We can use these to answer the original question, about points on the curve $Q = 1450$ at greatest and least distance from the origin, by noting that the function $Q$ is quadratic, in the sense that for any real number $t$ we have $Q(tx, ty) = t^2 Q(x, y)$. The points on the locus $Q = 1450$ where $f$ is extremal are just multiples $t\,\mathbf{x}_\pm$ of the points on the unit circle where $Q$ is extremal, by the argument above (about inverting $\lambda$); so all we need to do is find the right scaling factors $t$. It is not hard to calculate that

$$Q(\mathbf{x}_+) = 58 \cdot 9, \qquad Q(\mathbf{x}_-) = 58 \cdot 16,$$

from which it follows easily that $\tfrac{5}{3}\mathbf{x}_+$ and $\tfrac{5}{4}\mathbf{x}_-$ lie on the locus $Q = 1450 = 25 \cdot 58$: they are the extreme points we sought. Note, by the way, that these vectors (like $\mathbf{x}_+$ and $\mathbf{x}_-$) are perpendicular: their dot product is

$$\frac{25}{12 \cdot 58}\,\bigl(3 \cdot (-7) + 7 \cdot 3\bigr) = 0.$$

In other words, the locus $Q = 1450$ is an ellipse, with the first vector above as its semimajor axis and the second as its semiminor axis. It can be obtained from the standard ellipse $9X^2 + 16Y^2 = 25$ by applying the rotation matrix

$$[\mathbf{x}_+ \ \mathbf{x}_-] = \frac{1}{\sqrt{58}} \begin{bmatrix} 3 & -7 \\ 7 & 3 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} =: R(\theta)$$
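The whole computation is easy to verify numerically. The following NumPy sketch (my addition, not part of the original notes) checks the eigenvectors, the rescaled extreme points, and their orthogonality:

```python
import numpy as np

A = np.array([[865.0, -147.0], [-147.0, 585.0]])
Q = lambda v: v @ A @ v                           # the quadratic form

x_plus  = np.array([3.0, 7.0]) / np.sqrt(58.0)    # unit eigenvector, eigenvalue 522 = 9*58
x_minus = np.array([-7.0, 3.0]) / np.sqrt(58.0)   # unit eigenvector, eigenvalue 928 = 16*58
assert np.allclose(A @ x_plus, 522 * x_plus)
assert np.allclose(A @ x_minus, 928 * x_minus)

# Rescale onto the locus Q = 1450 = 25*58.
far  = (5.0 / 3.0) * x_plus    # farthest point from the origin (distance 5/3)
near = (5.0 / 4.0) * x_minus   # closest point (distance 5/4)

print(Q(far), Q(near))         # both equal 1450
print(far @ near)              # 0: the principal axes are perpendicular
```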

through the angle satisfying $\tan\theta = \sin\theta/\cos\theta = 7/3$. [Note that $R(\theta)\,\mathbf{e}_1 = \mathbf{x}_+$ and $R(\theta)\,\mathbf{e}_2 = \mathbf{x}_-$, where $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$ are the standard unit vectors.] In fact, writing this out gives

$$Q(x, y) = 9(3x + 7y)^2 + 16(-7x + 3y)^2,$$

which equals

$$9(9x^2 + 42xy + 49y^2) + 16(49x^2 - 42xy + 9y^2) = [9 \cdot 9 + 16 \cdot 49]\,x^2 + [42 \cdot (9 - 16)]\,xy + [9 \cdot 49 + 16 \cdot 9]\,y^2 = 865x^2 - 294xy + 585y^2.$$

2. For clarity, here is another example, this time with smaller numbers:

Problem: Find the principal axes (i.e., the semimajor and semiminor axes) of the ellipse $Q(x, y) = 23x^2 + 14xy + 23y^2 = 17$.

Solution: We need to find the eigenvectors of the matrix

$$B = \begin{bmatrix} 23 & 7 \\ 7 & 23 \end{bmatrix};$$

these are the (nontrivial) vectors $\mathbf{v}_\pm$ satisfying the eigenvalue equation $(B - \lambda_\pm \mathbf{1})\,\mathbf{v}_\pm = \mathbf{0}$, written this time with a minus sign, so that the $\lambda_\pm$ are the eigenvalues of $B$. ["Eigen" comes from German, where it signifies something like "proper" or "characteristic"; it has become standard in mathematics (and in quantum mechanics) in this and related contexts.] Thus we need to solve the quadratic equation

$$\det \begin{bmatrix} 23 - \lambda & 7 \\ 7 & 23 - \lambda \end{bmatrix} = (23 - \lambda)^2 - 7^2 = \lambda^2 - 46\lambda + [529 - 49] = \lambda^2 - 46\lambda + 480 = 0.$$

This has solutions

$$\lambda_\pm = \tfrac{1}{2}\bigl[46 \mp \sqrt{46^2 - 4 \cdot 480}\,\bigr];$$

but the term under the square root sign equals $2116 - 1920 = 196 = 14^2$, so $\lambda_+ = 16$ and $\lambda_- = 30$. It follows that

$$B - \lambda_+ \mathbf{1} = \begin{bmatrix} 23 - 16 & 7 \\ 7 & 23 - 16 \end{bmatrix} = \begin{bmatrix} 7 & 7 \\ 7 & 7 \end{bmatrix},$$

which kills the normalized vector $\mathbf{v}_+ = \tfrac{1}{\sqrt{2}}(1, -1)$, while

$$B - \lambda_- \mathbf{1} = \begin{bmatrix} 23 - 30 & 7 \\ 7 & 23 - 30 \end{bmatrix} = \begin{bmatrix} -7 & 7 \\ 7 & -7 \end{bmatrix}$$

kills the vector $\mathbf{v}_- = \tfrac{1}{\sqrt{2}}(1, 1)$. In this case the rotation matrix is

$$R(\phi) = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},$$

defined by a 45-degree (clockwise) rotation, since $\tan\phi = -1$, and the original equation can be rewritten as

$$Q(x, y) = \lambda_+ (\mathbf{v}_+ \cdot \mathbf{x})^2 + \lambda_- (\mathbf{v}_- \cdot \mathbf{x})^2 = 8(x - y)^2 + 15(x + y)^2.$$

In general the normalized eigenvectors satisfy $Q(\mathbf{v}_\pm) = \lambda_\pm$, so by rescaling (as in the previous problem) we find that the principal axes of the ellipse $Q = 17$ are defined by the vectors

$$\sqrt{\tfrac{17}{16}}\,\mathbf{v}_+ = \frac{\sqrt{34}}{8}(1, -1) \qquad \text{and} \qquad \sqrt{\tfrac{17}{30}}\,\mathbf{v}_- = \frac{\sqrt{1020}}{60}(1, 1).$$
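Here too the arithmetic can be sanity-checked numerically; a NumPy sketch (my addition, not part of the original notes):

```python
import numpy as np

B = np.array([[23.0, 7.0], [7.0, 23.0]])
print(np.sort(np.linalg.eigvalsh(B)))        # eigenvalues 16 and 30

# The diagonalized form Q(x, y) = 8*(x - y)**2 + 15*(x + y)**2
# agrees with 23x^2 + 14xy + 23y^2 at sample points.
for x, y in [(1.0, 0.0), (0.3, -1.2), (2.0, 2.0)]:
    assert np.isclose(23*x**2 + 14*x*y + 23*y**2,
                      8*(x - y)**2 + 15*(x + y)**2)

# Semi-axis vectors of the ellipse Q = 17.
a = np.sqrt(34.0) / 8 * np.array([1.0, -1.0])
b = np.sqrt(1020.0) / 60 * np.array([1.0, 1.0])
print(a @ B @ a, b @ B @ b)                  # both equal 17
```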

3. In general, the $n$-dimensional Lagrange multiplier problem for a quadratic function

$$\mathbf{x} \mapsto (A\mathbf{x}) \cdot \mathbf{x} : \mathbb{R}^n \to \mathbb{R}$$

defined by a symmetric $n \times n$ matrix $A$ will have $n$ nontrivial (normalized, mutually orthogonal) eigenvectors $\mathbf{v}_1, \dots, \mathbf{v}_n$ satisfying the eigenvalue equation $A\mathbf{v}_k = \lambda_k \mathbf{v}_k$, and the formula above generalizes to

$$Q(\mathbf{x}) = \sum_{k=1}^{n} \lambda_k (\mathbf{v}_k \cdot \mathbf{x})^2.$$

Astronomers use this, for example, to study elliptical galaxies (which, being three-dimensional objects, have three principal axes).

I don't want to give the (false) impression that the quadratic equations for such problems always work out so neatly; they don't. The problems above were cooked up using the identity

$$A(ax + by)^2 + B(-bx + ay)^2 = Ex^2 + 2Fxy + Gy^2,$$

where

$$E = a^2 A + b^2 B, \qquad F = ab(A - B), \qquad G = b^2 A + a^2 B;$$

for example, in problem 1, $A = 9$, $B = 16$, $a = 3$, and $b = 7$. If you chase through the quadratic formula in this generality, you find that for problems of this sort,

$$\lambda_\pm = (a^2 + b^2) \cdot \{A \text{ or } B\}.$$

4. The condition that the matrix $A$ be symmetric is important. The symmetry condition [that the coefficient $A_{ik}$ of the matrix equals the coefficient $A_{ki}$, with the order of the indices reversed] implies that for any two vectors $\mathbf{v}$ and $\mathbf{v}'$ we have

$$(A\mathbf{v}') \cdot \mathbf{v} = \sum_{i,k=1}^{n} A_{ik} v'_k v_i = \sum_{i,k=1}^{n} A_{ki} v_i v'_k = (A\mathbf{v}) \cdot \mathbf{v}'.$$

But now if $\mathbf{v}$ and $\mathbf{v}'$ are eigenvectors of a symmetric matrix $A$, with associated eigenvalues $\lambda$ and $\lambda'$ which are distinct, i.e. $\lambda \neq \lambda'$, then $\mathbf{v}$ and $\mathbf{v}'$ must be orthogonal. For $A\mathbf{v} = \lambda\mathbf{v}$ and $A\mathbf{v}' = \lambda'\mathbf{v}'$, so on one hand

$$(A\mathbf{v}') \cdot \mathbf{v} = (\lambda'\mathbf{v}') \cdot \mathbf{v} = \lambda'\,\mathbf{v}' \cdot \mathbf{v};$$

while on the other hand, the symmetry of $A$ implies that

$$(A\mathbf{v}') \cdot \mathbf{v} = (A\mathbf{v}) \cdot \mathbf{v}' = (\lambda\mathbf{v}) \cdot \mathbf{v}' = \lambda\,\mathbf{v} \cdot \mathbf{v}'.$$

Thus these two quantities are equal, i.e.

$$\lambda'\,\mathbf{v}' \cdot \mathbf{v} = \lambda\,\mathbf{v} \cdot \mathbf{v}';$$

but the dot product itself is symmetric [$\mathbf{v} \cdot \mathbf{v}' = \mathbf{v}' \cdot \mathbf{v}$], so

$$(\lambda' - \lambda)\,\mathbf{v} \cdot \mathbf{v}' = 0.$$

But $\lambda$ and $\lambda'$ are distinct, so their difference is nonzero, and we conclude that $\mathbf{v} \cdot \mathbf{v}' = 0$, which is to say that the eigenvectors $\mathbf{v}$ and $\mathbf{v}'$ are perpendicular.
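Both facts, the spectral decomposition of the quadratic form and the orthogonality of the eigenvectors of a symmetric matrix, are easy to illustrate numerically; a NumPy sketch (my addition, not part of the original notes) using a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                      # a random symmetric 4x4 matrix

lam, V = np.linalg.eigh(A)             # columns of V are orthonormal eigenvectors

# Eigenvectors of a symmetric matrix are mutually orthogonal.
assert np.allclose(V.T @ V, np.eye(4))

# Q(x) = sum_k lam_k * (v_k . x)^2 reproduces x . A x.
x = rng.normal(size=4)
spectral = sum(lam[k] * (V[:, k] @ x) ** 2 for k in range(4))
print(x @ A @ x, spectral)             # the two values agree
```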
This has applications in quantum mechanics: in that theory, observable physical quantities are supposed to be represented by (things very much like) symmetric matrices, and the result of a measurement of the physical quantity in question is thought to be an eigenvalue of that matrix. A state of the physical system is interpreted as a vector, and to say that a measurement of a physical quantity $A$ in the state $\mathbf{v}$ yields the result $\lambda$ is interpreted as saying that the state $\mathbf{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$: in other words, $A\mathbf{v} = \lambda\mathbf{v}$. The fact that eigenvectors corresponding to distinct eigenvalues are orthogonal is a kind of quantum-mechanical analog of the law of the excluded middle in logic: there is a certain amount of indeterminacy in quantum mechanics (an experiment might yield $\lambda$ for a measurement, or it might yield $\lambda'$, but it can't yield both), yet the experiment yields a pure state, in which the value of the measured quantity is well-defined. This is a difficult and important notion, which lies at the heart of quantum mechanics, and it's really because I thought you might be interested in this, rather than because of the considerable intrinsic mathematical beauty of the theory of principal axes, that I have written up these notes.