Math. H110 Answers for Final Examination, December 21, 2000, 3:56 pm


This is a CLOSED BOOK EXAM to which you may bring NO textbooks and ONE sheet of notes. Solve problem 1 first, and then as many subsequent problems as you can, in three hours. If a problem seems too hard, come back to it after trying another. Complete solutions earn far more credit than partial or defective solutions, so take your time and take pains to get them right. On each page you hand in for a grade you must put your name and the number(s) of the problem(s) solved thereon to get credit.

1: What makes matrix notation so valuable, namely the ability to represent so many things by matrices, can also make it confusing; the notation does not say what the matrix represents. A real matrix L may represent, among other things,
0: a change of coordinates (basis) in a real vector space,
1: a linear map 𝓛 from one real vector space to another,
2: a linear map 𝓛 from one Euclidean vector space to another,
3: a linear map 𝓛 of a real space to itself,
4: a linear map 𝓛 from a real vector space to its dual space,
5: a quadratic form ƒ(𝐱) for vectors 𝐱 in a non-Euclidean real space, or
6: a quadratic form ƒ(𝐱) for vectors 𝐱 in an Euclidean space.
Changes of coordinates in the relevant space(s) affect L differently in each case; for each case explain how, and describe (without proof) as many as you can of the attributes of L unaltered by such changes.

Solution: Let the nonsingular matrices E and F represent changes of coordinates, and let L′ be the matrix that represents, after changes of coordinates, whatever L represented before.

1.0: If B is one basis for a vector space, BL a second, and (BL)E = BL′ a third, the third could have been reached directly from the first basis by using L′ := LE. All that L and L′ need have in common are their dimension and the nonvanishing of their determinants.
1.1: If 𝓛 maps a vector space with basis B to another space with basis C, then 𝓛 is represented by the matrix L := C^{-1}𝓛B because Cy = 𝐲 = 𝓛𝐱 = 𝓛Bx for column vectors x and y that satisfy y = Lx. Changing bases from B to BE and from C to CF changes L to L′ := F^{-1}LE. This is an Equivalence; it has to preserve only dimensions and rank.

1.2: If 𝓛 maps one Euclidean space with orthonormal basis B to another with orthonormal basis C, changes to new orthonormal bases BE and CF that preserve the root-sum-squares formula for length must be accomplished by orthogonal matrices: E^T = E^{-1} and F^T = F^{-1}. Such changes of bases change the matrix L := C^{-1}𝓛B that represents 𝓛 to L′ := F^{-1}LE. This is an Orthogonal Equivalence; it has to preserve only dimensions and singular values.

1.3: If 𝓛 maps a vector space with basis B to itself, 𝓛 is represented by the matrix L := B^{-1}𝓛B. Changing to a new basis BE changes L to L′ := E^{-1}LE. This Similarity need preserve only the Jordan Normal form, in which the order of Jordan blocks is immaterial. Because L is real, complex eigenvalues and eigenvectors, if any, come in complex conjugate pairs, which can be exhibited without any complex arithmetic by the real Jordan Normal form with 2-by-2 real blocks on the diagonal instead of 1-by-1 complex eigenvalues.

Prof. W. Kahan Page 1/5
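The invariants named in cases 1.2 and 1.3 are easy to confirm numerically. The sketch below (not part of the exam; it assumes numpy is available) checks that a similarity E^{-1}LE leaves the eigenvalues of L unchanged, while an orthogonal equivalence F^T L E leaves the singular values unchanged:

```python
# Spot-check of cases 1.2 and 1.3: similarity preserves eigenvalues,
# orthogonal equivalence preserves singular values.
import numpy as np

rng = np.random.default_rng(0)
n = 4
L = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))          # a generic invertible change of basis

# Case 1.3: L' := E^{-1} L E has the same (multi)set of eigenvalues as L.
Lp = np.linalg.inv(E) @ L @ E
assert np.allclose(np.sort_complex(np.linalg.eigvals(L)),
                   np.sort_complex(np.linalg.eigvals(Lp)))

# Case 1.2: with orthogonal E and F, L' := F^{-1} L E = F^T L E has the
# same singular values as L.
F, _ = np.linalg.qr(rng.standard_normal((n, n)))
Eo, _ = np.linalg.qr(rng.standard_normal((n, n)))
Lq = F.T @ L @ Eo
assert np.allclose(np.linalg.svd(Lq, compute_uv=False),
                   np.linalg.svd(L, compute_uv=False))
```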

1.4: If 𝓛 maps a real vector space with basis B to its dual space, whose dual basis is B^{-1}, the scalar value of the bilinear form 𝓛𝐱𝐲, acting linearly upon each vector 𝐱 = Bx and 𝐲 = By, is obtained from their column vectors of coordinates and a matrix L that represents 𝓛 thus: 𝓛𝐱𝐲 = (Lx)^T y. To obtain L given 𝓛, substitute various unit column vectors for x and y. Changing to a new basis BE changes x to x′ := E^{-1}x, y to y′ := E^{-1}y, and L to L′ := E^T LE, so that 𝓛𝐱𝐲 = (L′x′)^T y′ too. This Congruence relation between L and L′ is an equivalence, so it preserves the dimension and rank of L as well as the Signature of its symmetric part (L^T + L)/2; see 1.5.

1.5: The quadratic form ƒ(𝐱) is obtained, for vectors 𝐱 = Bx in a real space with basis B, from the column x that represents 𝐱 and a matrix L that represents ƒ thus: ƒ(𝐱) = x^T Lx. Only the symmetric part (L^T + L)/2 of L matters here, so we might as well assume they are the same. To obtain L given ƒ, proceed as in 1.4 from the symmetric bilinear form defined by 𝓛𝐱𝐲 := (ƒ(𝐱+𝐲) − ƒ(𝐱−𝐲))/4. Changing to a new basis BE changes L to L′ := E^T LE and preserves its dimension, symmetry and signature (the numbers of L's positive, negative and zero eigenvalues; see 1.6).

1.6: Continuing from 1.5, if the space is Euclidean the change from one orthonormal basis B to another, BE, requires that E be orthogonal: E^T = E^{-1}. Now the congruence L′ = E^T LE is an Orthogonal Similarity that preserves the eigenvalues, all real, of the symmetric matrix L.

2: Suppose that F = LCR^T in which L, C and R all have the same rank and C is square and invertible. Explain why L† = (L^T L)^{-1} L^T, R† = (R^T R)^{-1} R^T and F† = (R†)^T C^{-1} L†.

Explanation: Not (A·B)† = B†·A†, which is often false; try ((uv^T)(wx^T))†. The formulas for L† and R† follow from the observation that L, C and R all have the same number of columns, and that number is their rank, so the inverses (…)^{-1} exist. Let G := (R†)^T C^{-1} L†.
Since C, R†R = I and L†L = I all have the same dimensions, GF = (R†)^T C^{-1} L† LCR^T = (R†)^T R^T = RR† = (GF)^T. Similarly FG = LL† = (FG)^T. Then FGF = LL†·LCR^T = LCR^T = F, and similarly GFG = G. Therefore G satisfies all four equations that determine the Moore-Penrose Pseudo-inverse F† of F uniquely.
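A quick numerical check of Problem 2 (a sketch assuming numpy; not part of the exam): build F = LCR^T from full-column-rank L, R and invertible C, form G from the stated formulas, and compare it with the pseudo-inverse computed directly.

```python
# Verify F† = (R†)^T C^{-1} L† for F = L C R^T with L, R of full column
# rank r and C invertible r-by-r.
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 5, 3
L = rng.standard_normal((m, r))               # full column rank (generically)
R = rng.standard_normal((n, r))
C = rng.standard_normal((r, r)) + 3 * np.eye(r)   # invertible (generically)
F = L @ C @ R.T

Ld = np.linalg.inv(L.T @ L) @ L.T             # L† = (L^T L)^{-1} L^T
Rd = np.linalg.inv(R.T @ R) @ R.T             # R† = (R^T R)^{-1} R^T
G = Rd.T @ np.linalg.inv(C) @ Ld              # candidate for F†

assert np.allclose(G, np.linalg.pinv(F))      # matches the Moore-Penrose F†
assert np.allclose(F @ G @ F, F)              # one of the four Penrose equations
```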

3: Given constants µ_0, µ_1, µ_2 and µ_3 ≠ 0, set x_{n+1} := µ_0·x_n + µ_1·x_{n-1} + µ_2·x_{n-2} + µ_3·x_{n-3} for n = 3, 4, 5, … in turn. Show for all these n that, regardless of x_0, x_1, x_2 and x_3,

det( [ x_{n+3}  x_{n+2}  x_{n+1}  x_n
       x_{n+2}  x_{n+1}  x_n      x_{n-1}
       x_{n+1}  x_n      x_{n-1}  x_{n-2}
       x_n      x_{n-1}  x_{n-2}  x_{n-3} ] ) / (−µ_3)^n   is independent of n.

Proof: Let X_{n+3} be the matrix in the determinant, and set

M := [ µ_0  1  0  0
       µ_1  0  1  0
       µ_2  0  0  1
       µ_3  0  0  0 ].

Then it is easy to verify that X_{n+3} = X_{n+2}·M = X_3·M^n. Therefore det(X_{n+3}) = det(X_3)·det(M^n) = det(X_3)·(−µ_3)^n.

4: An Unreduced Upper-Hessenberg Matrix is a square matrix with no zeros on the first subdiagonal and zeros everywhere below it. The Jordan Normal Form of a Derogatory matrix has at least two different Jordan blocks with the same eigenvalue. Are there any derogatory unreduced Hessenberg matrices? Justify your answer.

Answer: No. Consider any derogatory N-by-N matrix B. It must have at least one eigenvalue β for which B − βI (which has the same rank as the Jordan Normal form to which it is Similar) has rank less than N−1, because two or more Jordan blocks of B − βI have zeros on their diagonals. But the first subdiagonal of an unreduced N-by-N upper Hessenberg matrix H is the diagonal of an (N−1)-by-(N−1) submatrix with nonzero determinant, which implies rank(H − βI) ≥ N−1 for every scalar β.

5: In a vector space, a set is called Connected if every two of its members are joined by some continuous path consisting entirely of members of that set. In the space of all real N-by-N matrices the orthogonal matrices do not form a connected set; prove this, and also prove that the N-by-N complex unitary matrices do form a connected set.

Proof: Since det(…) is a continuous function of its argument, and since every real orthogonal matrix has determinant +1 or else −1, the two kinds of orthogonal matrices cannot form one connected set.
Now let P = P*^{-1} be any N-by-N unitary matrix; it has a Schur decomposition P = QUQ* in which Q is unitary and U is upper-triangular. However, it is easy to verify that U* = U^{-1} must be simultaneously upper- and lower-triangular, and is therefore a diagonal matrix which can be written U = exp(ıH) for some real diagonal H and ı = √(−1). For 0 ≤ µ ≤ 1 set P(µ) := Q·exp(ıµH)·Q* to describe a continuous path from P(0) = I to P(1) = P. Given any two unitary N-by-N matrices B and C, set P := B*C to obtain a continuous path B·P(µ) from B = B·P(0) to C = B·P(1). End of proof. (More generally, the invertible matrices form a connected subset among N-by-N complex matrices, but not among real. Can you see why?)
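The path construction in the proof can be illustrated numerically. The sketch below (assuming numpy; it builds a unitary P directly in the form P = Q·exp(ıH)·Q* rather than computing a Schur decomposition) walks P(µ) = Q·exp(ıµH)·Q* from I to P and confirms that every point on the path is unitary:

```python
# Illustrate Problem 5: a continuous path of unitary matrices from I to P.
import numpy as np

rng = np.random.default_rng(2)
n = 4
# A random unitary Q, from the QR factorization of a complex Gaussian matrix.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(Z)
H = rng.uniform(-np.pi, np.pi, n)          # real diagonal of phases
P = Q @ np.diag(np.exp(1j * H)) @ Q.conj().T   # a unitary matrix

def path(mu):                              # P(µ) := Q exp(ıµH) Q*
    return Q @ np.diag(np.exp(1j * mu * H)) @ Q.conj().T

assert np.allclose(path(0.0), np.eye(n))   # path starts at I
assert np.allclose(path(1.0), P)           # path ends at P
for mu in np.linspace(0, 1, 11):           # every sampled point is unitary
    Pmu = path(mu)
    assert np.allclose(Pmu.conj().T @ Pmu, np.eye(n))
```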

6: The Schur Complement of the scalar β in the complex square matrix B := [ β  r* ; c  F ] is E := F − c·r*/β. Here r* is the complex conjugate transpose of r. Now, assuming that Re{z*Bz/z*z} ≥ µ > 0 for every nonzero complex vector z of the appropriate dimension, prove that the same is true with E in place of B. (Hint: Try a real non-symmetric matrix B and real z first.) What does that assumed inequality imply about the diagonal elements of the upper-triangular factor U in the factorization B = L·U with a unit-lower-triangular L?

Proof: For any scalar π and nonzero vector z of the appropriate dimension, define
ƒ(π, z) := [π*, z*]·B·[π; z] / (|π|² + z*z) = (β|π|² + π*r*z + πz*c + z*Fz)/(|π|² + z*z),
noting that Re{ƒ} ≥ µ no matter how π and z are chosen. In particular Re{β} ≥ µ > 0. Consequently
z*Ez/z*z = (z*Fz − z*c·r*z/β)/z*z = (1 + |π|²/z*z)·ƒ(π, z) − (βπ + r*z)(βπ* + z*c)/(β·z*z).
Now set π := −r*z/β or −(z*c/β)* to find that z*Ez/z*z = (1 + |π|²/z*z)·ƒ(π, z), so its real part can be no less than µ, as claimed. This inequality implies the same inequality for every diagonal element of the upper-triangular factor U, the first of which is β and the rest of which are the upper-left corner elements of successive Schur complements. Compare the class notes on Diagonal Prominence.

7: P and Q are Orthogonal Projectors from an Euclidean space into itself; this means that P^T = P = P² and Q^T = Q = Q². The norm ‖…‖ is the biggest singular value. Prove that
a) ‖P − Q‖ ≤ 1.
b) If Rank(P) ≠ Rank(Q) then ‖P − Q‖ = 1, but not conversely.
c) If ‖P − Q‖ < 1 then Rank(P) = Rank(Q) and no nonzero vector in Range(Q) is orthogonal to Range(P).
d) The converse of (c). (This is harder.)

Proof 7(a): Since P − Q is real symmetric, its singular values are the magnitudes of its eigenvalues. All the eigenvalues of positive semidefinite P are zeros or ones, and likewise for Q; therefore, by Weyl's inequalities for eigenvalues of sums of symmetric matrices, no eigenvalue of P − Q can exceed 1, nor fall below −1. Therefore ‖P − Q‖ ≤ 1.
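Parts (a) and (b) can be spot-checked numerically. The sketch below (assuming numpy) builds orthogonal projectors from orthonormal bases of random subspaces, confirms the norm bound of (a), and confirms that mismatched ranks force ‖P − Q‖ = 1 as in (b):

```python
# Spot-check of 7(a) and 7(b) for orthogonal projectors in R^6.
import numpy as np

rng = np.random.default_rng(3)
n = 6

def projector(r):
    # Orthogonal projector of rank r: B has orthonormal columns spanning
    # a random r-dimensional subspace, and P = B B^T projects onto it.
    B, _ = np.linalg.qr(rng.standard_normal((n, r)))
    return B @ B.T

P, Q = projector(3), projector(3)
assert np.linalg.norm(P - Q, 2) <= 1 + 1e-12          # part (a): ‖P−Q‖ ≤ 1

P2, Q2 = projector(4), projector(2)                   # ranks differ, so
assert np.isclose(np.linalg.norm(P2 - Q2, 2), 1.0)    # part (b): ‖P−Q‖ = 1
```

The rank-4/rank-2 case hits equality every time, exactly as the proof of (b) predicts: Range(P2) and Nullspace(Q2) have dimensions summing to 8 > 6, so they intersect nontrivially.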
7(b): If, say, Rank(P) > Rank(Q), then Range(P) and Nullspace(Q) must have a nonzero intersection, since their dimensions add up to more than the dimension of the space; therefore Qx = o and Px = x for some nonzero x in that intersection, and then ‖x‖ ≥ ‖P − Q‖·‖x‖ ≥ ‖(P − Q)x‖ = ‖x‖ > 0, and consequently ‖P − Q‖ = 1. The converse is untrue because ‖P − Q‖ = 1 for the projectors
P = [ 1 0 ; 0 0 ] and Q = [ 0 0 ; 0 1 ],
which have the same rank 1.

Proof 7(c): Let q = Qq ≠ o be any nonzero vector in Range(Q). Now, Pq = P^T q = o if q is orthogonal to Range(P), and then ‖P − Q‖·‖q‖ ≥ ‖(P − Q)q‖ = ‖q‖ > 0, so ‖P − Q‖ = 1; therefore if ‖P − Q‖ < 1 no nonzero q in Range(Q) can be orthogonal to Range(P), and vice versa (swapping Q and P), and Rank(P) = Rank(Q) because of part (b).

7(d): Conversely, suppose ‖P − Q‖ = 1. Then (P − Q)x = ±x ≠ o for at least one eigenvector x. There are now two cases to consider.

In the first case (P − Q)x = −x, and then Px = o and x = Qx ≠ o because 0 ≤ ‖Px‖² = x^T Px = x^T Qx − x^T x ≤ x^T x − x^T x = 0; this x must be in Range(Q) and orthogonal to Range(P).

In the second case (P − Q)x = +x, and then similarly Qx = o and x = Px ≠ o; this means that Range(P) and Nullspace(Q) have a nonzero intersection. This can occur if Range(Q) is a proper subspace of Range(P), in which case Rank(P) > Rank(Q), and then no nonzero vector in Range(Q) need be orthogonal to Range(P). But otherwise, when r := Rank(P) = Rank(Q) too in the second case, then also n := Nullity(P) = Nullity(Q), and then
0 = n + r − Dim(Range(P)) − Dim(Nullspace(Q))    ( since n + r = Dim(whole space) )
< n + r − Dim(Range(P)) − Dim(Nullspace(Q)) + Dim( Range(P) ∩ Nullspace(Q) )
= n + r − Dim( Range(P) + Nullspace(Q) )
= Dim( (Range(P) + Nullspace(Q))^⊥ )
= Dim( Range(P)^⊥ ∩ Nullspace(Q)^⊥ )
= Dim( Nullspace(P) ∩ Range(Q) ).
Therefore some nonzero vector in Range(Q) is orthogonal to Range(P), as claimed.
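Returning to Problem 6, its claim can also be spot-checked numerically. The sketch below (assuming numpy) builds a complex matrix B whose field of values has real part at least µ, forms the Schur complement E of the (1,1) entry, and confirms that the smallest eigenvalue of E's Hermitian part (i.e. the minimum of Re{z*Ez/z*z}) is still at least µ:

```python
# Spot-check of Problem 6: the Schur complement inherits the lower bound
# µ on the real part of the field of values.
import numpy as np

rng = np.random.default_rng(4)
n, mu = 5, 0.3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = A @ A.conj().T                 # Hermitian positive semidefinite part
K = A - A.conj().T                 # skew-Hermitian part (z*Kz is imaginary)
B = mu * np.eye(n) + S + K         # hence Re{z*Bz/z*z} >= mu for all z != o

beta, r_star, c, F = B[0, 0], B[0, 1:], B[1:, 0], B[1:, 1:]
E = F - np.outer(c, r_star) / beta           # Schur complement of beta in B

herm = (E + E.conj().T) / 2                  # Hermitian part of E
assert np.min(np.linalg.eigvalsh(herm)) >= mu - 1e-9
```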