ECE 275A Homework # 3 Due Thursday 10/27/2016


Reading: In addition to the lecture material presented in class, students are to read and study the following:

A. The material in Section 4.11 of Moon & Stirling and Sections 8.1-8.9 of Kay. Pay particular attention to the material in Kay Section 8.7 on Recursive (aka Sequential) Least Squares (RLS) and Kay Section 8.6 on Order-Recursive Least Squares (ORLS). (See problems (8) and (9) below.)

B. The Gram-Schmidt orthogonalization process and its relationship to the QR factorization, described in Sections 2.15 and 5.3.1-5.3.4 of Moon & Stirling.

C. The Singular Value Decomposition (SVD), described in Sections 7.1-7.5 of Moon & Stirling. Note that the notation used in lecture corresponds to the notation used on page 371 of Moon & Stirling by taking S = Σ_1 and 0 = Σ_2. A good concise source of information on the SVD can also be found in the paper "The Singular Value Decomposition: Its Computation and Some Applications," V.C. Klema & A.J. Laub, IEEE Transactions on Automatic Control, Vol. AC-25, No. 2, pp. 164-176, April 1980, which can be downloaded from IEEE Xplore. IEEE Xplore is accessible for free (along with many other useful databases such as INSPEC, JSTOR, and MathSciNet) at http://libraries.ucsd.edu/sage/databases.html if you have UCSD network privileges (and proxy server permission if you work from home). You should understand the proof of the derivation of the SVD, as you could be asked on an exam to reproduce it.

D. The Total Least Squares (TLS) algorithm described in Section 7.7 of Moon & Stirling. This is a very important algorithm, as it allows one to do a least-squares fit even when the explanatory regression variables are not perfectly known. Many (most?) people don't know of the existence of such so-called errors-in-variables (EIV) algorithms. (See problem (10) below.)
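(Aside, not part of the assigned reading: the block notation S = Σ_1 in item C can be made concrete with a few lines of numerical linear algebra. The sketch below builds the partitioned SVD of a small rank-deficient matrix and assembles the pseudoinverse from the nonzero singular values; the example matrix and the variable names are arbitrary illustrations, not anything taken from the texts.)

```python
import numpy as np

# A small rank-2 example: the third column is the sum of the first two, so A has
# exactly one zero singular value and the block called S (= Sigma_1) is 2 x 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

U, s, Vh = np.linalg.svd(A)            # full SVD: A = U diag(s) Vh
r = int(np.sum(s > 1e-12))             # numerical rank = number of nonzero singular values
S = np.diag(s[:r])                     # the r x r invertible block S = Sigma_1

# Pseudoinverse built from the nonzero singular values only:
# A^+ = V_r S^{-1} U_r^H, where U_r and V_r hold the first r singular vectors.
A_plus = Vh[:r, :].conj().T @ np.linalg.inv(S) @ U[:, :r].conj().T

print(np.allclose(A_plus, np.linalg.pinv(A)))   # True: agrees with numpy's pinv
```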

Homework:

1. Let A be a linear operator A : X → Y, where X and Y are complex finite-dimensional Hilbert spaces with dim(X) = n and dim(Y) = m.

(a) Prove that Ax = 0 if and only if ⟨y, Ax⟩ = 0 for all y. Note that a variety of equivalent statements follow from this, such as My = 0 if and only if ⟨x, My⟩ = 0 for all x, which can be further particularized by taking M = A^*.

(b) Prove that A is onto iff A^* is 1-1, and that A is 1-1 iff A^* is onto. You can use the fact derived in class that r(A) = r(A^*).

(c) Prove that N(A^*) = R(A)^⊥.

(d) Prove that N(A) = N(A^*A). Use this fact to prove that A^*A is an invertible operator if and only if A is one-to-one, and that AA^* is invertible if and only if A is onto.

(e) Prove that a (general, nonorthogonal) projection operator P : X → X is unique and is idempotent.

(f) Prove that an orthogonal projection operator P : X → X is self-adjoint, P = P^*, where P^* is the adjoint operator of P.

(g) Prove that if n = m and A is full rank, then A^+ = A^{-1}.

(h) Let A : X → Y be represented as an m × n matrix, where X has a hermitian positive-definite inner-product weighting matrix Ω and Y has an inner-product weighting matrix W.
    i. Prove that when r(A) = m, A^+ is independent of W.
    ii. Prove that when r(A) = n, A^+ is independent of Ω.

(i)
    i. Prove that any two particular least-squares solutions x_p and x_p' to the linear inverse problem Ax = y must orthogonally project to the same minimum-norm least-squares solution, P_{R(A^*)} x_p = P_{R(A^*)} x_p'. (This justifies the claim made in class that the minimum-norm least-squares solution is uniquely determined from the orthogonal projection of any particular least-squares solution onto N(A)^⊥ = R(A^*).)
    ii. Prove that the set of all (particular) least-squares solutions is equal to the linear variety (translated subspace) x̂ + N(A) ≜ {x̂} + N(A).
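(Aside: none of the parts of Problem 1 can be proved numerically, but the identities can be sanity-checked on a random unweighted example before writing the proofs. The sketch below checks the part (d) and part (i) claims for an arbitrary real 3 × 5 matrix; the dimensions, tolerances, and variable names are illustrative choices only.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical wide real matrix (underdetermined system), chosen only for illustration.
A = rng.standard_normal((3, 5))
y = rng.standard_normal(3)

# Part (d) flavor: N(A^H A) = N(A).  Build a basis for N(A^H A) and check that A kills it too.
_, s, Vh = np.linalg.svd(A.T @ A)
null_basis = Vh[np.sum(s > 1e-10):].T        # columns span N(A^H A)
print(np.allclose(A @ null_basis, 0.0, atol=1e-6))   # True (loose tol: forming A^H A squares the conditioning)

# Part (i) flavor: any two particular least-squares solutions project onto R(A^*)
# to the same minimum-norm least-squares solution.
x_min = np.linalg.pinv(A) @ y                 # minimum-norm least-squares solution
x_p1 = x_min + null_basis @ rng.standard_normal(null_basis.shape[1])
x_p2 = x_min + null_basis @ rng.standard_normal(null_basis.shape[1])
P_row = np.linalg.pinv(A) @ A                 # orthogonal projector onto R(A^*) = N(A)^perp
print(np.allclose(P_row @ x_p1, P_row @ x_p2))   # True
print(np.allclose(P_row @ x_p1, x_min))          # True
```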

2. Let X = C^{n×n} be the Hilbert space of n × n complex matrices with Frobenius inner product ⟨X_1, X_2⟩ = tr(X_1^H X_2). Let V be the set of symmetric (not hermitian) n × n complex matrices, V = V^T, V ∈ X = C^{n×n}. Finally, define the mapping P(·) : X → X by P(X) ≜ (X + X^T)/2.

(a) Prove that V is a (Hilbert) subspace of X and give its dimension. Prove that the set of hermitian matrices is not a subspace.

(b) Prove that P(·) is the orthogonal projection of X onto the subspace V. Assume at the outset that you know nothing about P(·) other than its definition. [1]

(c) Determine the orthogonal projector of X onto V^⊥ and categorize the elements of the subspace V^⊥.

[1] Note that, as a projection operation, the domain and codomain of P are both X. For the claim to be true, what must be the range of P?

3. Let X = Sym(C, n) ⊂ C^{n×n} be the vector space of symmetric n × n complex matrices [2] with Frobenius inner product ⟨X_1, X_2⟩ = tr(X_1^H X_2). Let Y = C^{n×m} be the Hilbert space of n × m complex matrices with inner product ⟨Y_1, Y_2⟩ = tr(Y_1^H Y_2). Finally, for a given full row-rank n × m matrix A, rank(A) = n, define the mapping A(·) : X → Y by A(X) ≜ XA.

[2] In the previous problem, the set of symmetric complex matrices was proven to be a Hilbert subspace. Here we treat this set as a Hilbert space in its own right.

(a) Prove that the least-squares solution to the inverse problem Y = A(X) is necessarily unique and is the solution to the (matrix) Lyapunov equation

    XM + M^T X = Λ    (1)

with

    M ≜ AA^H  and  Λ ≜ YA^H + (YA^H)^T.    (2)

(b) The Matlab Controls Toolbox provides a numerical solver for the Lyapunov equation (1). Mathematically, it can be readily shown that a unique solution to the Lyapunov equation is theoretically given by

    X = ∫_0^∞ e^{-M^T t} Λ e^{-M t} dt    (3)

provided that the real parts of the eigenvalues of M are all strictly greater than zero. [3]

[3] This is a sufficient condition to ensure that the integral on the right-hand side of equation (3) exists and is finite.

    i. For a square matrix A, heuristically show from the Taylor series expansion of e^{At} that (d/dt) e^{At} = A e^{At} = e^{At} A and that (e^{At})^T = e^{A^T t}.

    ii. Justify the claim that the unique solution to (1) and (2) is given by (3). [4]

[4] You can interchange the order of differentiation and integration in (3) provided that the integral exists. Also, the standard product rules for differentiation of matrix functions hold, provided that you remember that in general matrices do not commute, so the order of terms must be preserved.

Comment: Most of Problems 2 and 3 were given on a previous ECE275A midterm.
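(Aside: the claim in Problem 3(b) can be sanity-checked numerically before attempting the analytic justification asked for in part ii. The sketch below approximates the integral (3) by a truncated trapezoidal rule, using a small real example in place of the complex setting; the test matrices, the truncation horizon, and the variable names are arbitrary choices made only for illustration.)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical small real test case (the problem is posed over C; a real example
# suffices for a numerical check).  A is n x m with full row rank n.
n, m = 3, 5
A = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

M   = A @ A.T                       # M = A A^H (real case): symmetric positive definite
Lam = Y @ A.T + (Y @ A.T).T         # Lambda = Y A^H + (Y A^H)^T

# Approximate X = integral_0^inf exp(-M^T t) Lambda exp(-M t) dt by a truncated
# trapezoidal rule; the integrand decays like exp(-2*lambda_min(M)*t), so truncation is safe.
T = 20.0 / np.linalg.eigvalsh(M).min()
ts = np.linspace(0.0, T, 8001)
vals = np.array([expm(-M.T * t) @ Lam @ expm(-M * t) for t in ts])
dt = ts[1] - ts[0]
X = 0.5 * dt * (vals[0] + vals[-1]) + dt * vals[1:-1].sum(axis=0)

# X should (approximately) satisfy the Lyapunov equation (1): X M + M^T X = Lambda.
print(np.max(np.abs(X @ M + M.T @ X - Lam)))    # small; limited only by the quadrature error
```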
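(Aside: the rank-one expansion in part (g) can be checked numerically against the four Moore-Penrose conditions recalled at the start of Problem 5. The sketch below does this for an arbitrary rank-deficient complex matrix under unweighted inner products; the test matrix and variable names are illustrative only, and the check is of course not a substitute for the proof.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rank-deficient complex test matrix (rank 2 by construction).
m, n, r = 5, 4, 2
A = (rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))) @ \
    (rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n)))

# Pseudoinverse assembled from the rank-one expansion of Problem 5(g):
# A^+ = sum_i (1/sigma_i) v_i u_i^H over the nonzero singular values.
U, s, Vh = np.linalg.svd(A)
M = sum((1.0 / s[i]) * np.outer(Vh[i].conj(), U[:, i].conj()) for i in range(r))

print(np.allclose(M, np.linalg.pinv(A)))         # agrees with numpy's pinv

# The four Moore-Penrose conditions (unweighted/Cartesian inner products).
print(np.allclose(A @ M @ A, A))                 # (1) A M A = A
print(np.allclose(M @ A @ M, M))                 # (2) M A M = M
print(np.allclose((A @ M).conj().T, A @ M))      # (3) A M is Hermitian
print(np.allclose((M @ A).conj().T, M @ A))      # (4) M A is Hermitian
```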

6. For each of the following matrices: a) determine the SVD using geometric thinking and pencil and paper only; b) give orthogonal bases and projection operators for each of the four fundamental subspaces; c) give the pseudoinverse. Assume that the standard ("Cartesian") inner product ⟨x, y⟩ = x^T y holds on all spaces.

    A = ( 1 0 ),   B = ( 0 ),   C = ( 0 2 0 ),   D = ( 0 3 0 ).
        ( 1 0 )        ( 2 )        ( 0 1 0 )
        ( 0 1 )                     ( 1 0 0 )

7. Kay Problem 8.13.

8. Directed reading assignment. Based on your reading of Moon Section 4.11 and Kay Section 8.7, describe (do not derive), in words and mathematically, the Recursive (aka Sequential) Least Squares (RLS) algorithm, clearly defining all your terms.

9. Directed reading assignment. Based on your reading of Section 8.6 of Kay, describe (do not derive), in words and mathematically, the Order-Recursive Least Squares (ORLS) algorithm, clearly defining all your terms.

10. Directed reading assignment. Based on your reading of Section 7.7 of Moon, describe (do not derive) in words what the Total Least Squares (TLS) algorithm is, clearly stating all the mathematical steps involved in obtaining a TLS solution.

Comment: The RLS, ORLS, and TLS algorithms which you are to describe in problems 8-10 are often taught at greater length in the course ECE251B, Adaptive Filtering.
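(Aside, as background for Problem 10 and reading item D: one classical way to obtain a basic single-output TLS solution is from the SVD of the augmented data matrix [A b]. The sketch below is offered only as an illustration of that generic construction, not as a restatement of the development in Moon Section 7.7; the function name, the test data, and the noise levels are arbitrary.)

```python
import numpy as np

def tls_solve(A, b):
    """Basic single-right-hand-side TLS sketch: perturb both A and b minimally
    (in the Frobenius sense) so that b + db lies in the range of A + dA."""
    C = np.column_stack([A, b])            # augmented data matrix [A  b]
    _, _, Vh = np.linalg.svd(C)
    v = Vh[-1]                             # right singular vector of the smallest singular value
    if np.isclose(v[-1], 0.0):
        raise ValueError("generic TLS solution does not exist (last component of v is zero)")
    return -v[:-1] / v[-1]                 # x such that [x; -1] spans the null direction of [A b] + perturbation

# Hypothetical demo: noise in both the regressors and the observations (errors in variables).
rng = np.random.default_rng(2)
x_true = np.array([1.0, -2.0])
A_clean = rng.standard_normal((50, 2))
b = A_clean @ x_true + 0.01 * rng.standard_normal(50)
A = A_clean + 0.01 * rng.standard_normal(A_clean.shape)

print("TLS estimate:", tls_solve(A, b))
print("LS  estimate:", np.linalg.lstsq(A, b, rcond=None)[0])
```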