UCSD ECE269 Handout #8 Prof. Young-Han Kim Wednesday, February 7, 2018 Homework Set #4 (Due: Wednesday, February 21, 2018)
1. Almost orthonormal basis. Let $u_1, u_2, \ldots, u_n$ form an orthonormal basis for an inner product space $V$ and let $v_1, v_2, \ldots, v_n$ be a set of vectors in $V$ such that
$$\|u_j - v_j\| < \frac{1}{\sqrt{n}}, \quad j = 1, 2, \ldots, n.$$
Show that $v_1, v_2, \ldots, v_n$ form a basis for $V$.

2. Matrix inversion lemmas. Let $A \in F^{n \times n}$, $B \in F^{n \times k}$, $C \in F^{k \times n}$, and $D \in F^{k \times k}$. Suppose that $A$, $D$, and $D - CA^{-1}B$ are invertible.
(a) Show that $A^{-1}B(D - CA^{-1}B)^{-1} = (A - BD^{-1}C)^{-1}BD^{-1}$.
(b) Show that $(A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1}$.
(c) Similarly, suppose that $A$, $D$, and $D + CA^{-1}B$ are invertible. Show that
$$(A + BD^{-1}C)^{-1} = A^{-1} - A^{-1}B(D + CA^{-1}B)^{-1}CA^{-1},$$
which is sometimes referred to as the Woodbury matrix identity.
(d) Let $\tilde{A} \in F^{n \times n}$ be identical to $A$ except that the $(i,j)$-th entry differs by $\delta$, i.e., $\tilde{A} = A + \delta e_i e_j^T$. Show that
$$\tilde{A}^{-1} = A^{-1} - \frac{1}{1/\delta + (A^{-1})_{ji}}\, f_i g_j^T,$$
where $f_i$ is the $i$-th column of $A^{-1}$ and $g_j^T$ is its $j$-th row. (Hint: Consider the block matrix
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}$$
and its inverse.)

3. Moore–Penrose pseudoinverse. A pseudoinverse of $A \in R^{m \times n}$ is defined as a matrix $A^+ \in R^{n \times m}$ that satisfies
$$AA^+A = A, \qquad A^+AA^+ = A^+,$$
and $AA^+$ and $A^+A$ are symmetric.
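As a side note, the four defining conditions above can be checked numerically. This is only an illustrative Python sketch (the programming assignment below uses Julia), with a random matrix of our choosing:

```python
import numpy as np

# Check the four defining conditions of the Moore-Penrose pseudoinverse
# numerically, using numpy's pinv on a random 5x3 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Ap = np.linalg.pinv(A)

assert np.allclose(A @ Ap @ A, A)            # A A^+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)          # A^+ A A^+ = A^+
assert np.allclose((A @ Ap).T, A @ Ap)       # A A^+ is symmetric
assert np.allclose((Ap @ A).T, Ap @ A)       # A^+ A is symmetric
```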
(a) Show that $A^+$ is unique.
(b) Show that $(A^T A)^{-1} A^T$ is the pseudoinverse and a left inverse of a full-rank tall matrix $A$.
(c) Show that $A^T (A A^T)^{-1}$ is the pseudoinverse and a right inverse of a full-rank fat matrix $A$.
(d) Show that $A^{-1}$ is the pseudoinverse of a full-rank square matrix $A$.
(e) Show that $A$ is the pseudoinverse of itself for a projection matrix $A$ (cf. Question 4 in Homework Set #3).
(f) Show that $(A^T)^+ = (A^+)^T$.
(g) Show that $(A A^T)^+ = (A^+)^T A^+$ and $(A^T A)^+ = A^+ (A^+)^T$.
(h) Suppose that $A$ has a rank decomposition $A = BC$, for example, $B = Q \in R^{m \times r}$ and $C = R \in R^{r \times n}$ as in the QR decomposition. Find $A^+$ in terms of $B$ and $C$.
(i) Show that $\mathcal{R}(A^+) = \mathcal{R}(A^T)$ and $\mathcal{N}(A^+) = \mathcal{N}(A^T)$.
(j) Show that $P = A A^+$ and $Q = A^+ A$ are projection matrices.
(k) Show that $y = Px$ and $z = Qx$ are the projections of $x$ onto $\mathcal{R}(A)$ and $\mathcal{R}(A^T)$, respectively, where $P$ and $Q$ are defined as in 3(j).
(l) Show that $A^+ = \lim_{\delta \to 0} (A^T A + \delta I)^{-1} A^T = \lim_{\delta \to 0} A^T (A A^T + \delta I)^{-1}$.
(m) Show that $\hat{x} = A^+ b$ is a least-squares solution to the linear equation $Ax = b$, i.e., $\|A\hat{x} - b\| \le \|Ax - b\|$ for every other $x$.
(n) Show that $\hat{x} = A^+ b$ is the least-norm solution to the linear equation $Ax = b$, i.e., $\|\hat{x}\| \le \|x\|$ for every other solution $x$, provided that a solution exists.

4. Least squares for an alternative inner product.
(a) Let $A \in R^{m \times n}$ be full-rank and tall. Show that $\langle x, y \rangle_A = \langle Ax, Ay \rangle = x^T A^T A y$ is a valid inner product.
(b) Let $B \in R^{n \times k}$ be full-rank and tall. Find the unique solution to the following least-squares problem
$$\text{minimize} \quad \|y - Bx\|_A,$$
where $\|v\|_A = \sqrt{\langle v, v \rangle_A}$. Your answer should be in terms of $y$ and $B$.

5. Recursive least squares. The least-squares problem for $y = Ax$ can be viewed as finding the best fit for noisy observations $y_1, y_2, \ldots, y_m$ from linear measurements $\tilde{a}_1^T x, \tilde{a}_2^T x, \ldots, \tilde{a}_m^T x$, where $\tilde{a}_1^T, \tilde{a}_2^T, \ldots, \tilde{a}_m^T$ are rows of the measurement matrix $A$. We know that if $A$ is full-rank and tall,
$$\hat{x}_m = (A^T A)^{-1} A^T y = \left( \sum_{i=1}^m \tilde{a}_i \tilde{a}_i^T \right)^{-1} \sum_{i=1}^m y_i \tilde{a}_i$$
is the least-squares solution.
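As a quick numerical sanity check (a Python sketch for illustration only; the random data are ours), the matrix form and the outer-product form of the least-squares solution coincide, since $A^T A = \sum_i \tilde{a}_i \tilde{a}_i^T$ and $A^T y = \sum_i y_i \tilde{a}_i$:

```python
import numpy as np

# Compare (A^T A)^{-1} A^T y with the outer-product form of the solution.
rng = np.random.default_rng(0)
m, n = 10, 3
A = rng.standard_normal((m, n))          # rows are the measurement vectors a_i^T
y = rng.standard_normal(m)

x_hat = np.linalg.inv(A.T @ A) @ A.T @ y
S = sum(np.outer(A[i], A[i]) for i in range(m))   # sum_i a_i a_i^T
r = sum(y[i] * A[i] for i in range(m))            # sum_i y_i a_i
assert np.allclose(x_hat, np.linalg.inv(S) @ r)
assert np.allclose(x_hat, np.linalg.lstsq(A, y, rcond=None)[0])
```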
Now suppose that there is an additional measurement $\tilde{a}_{m+1}^T x$, which results in a new observation $y_{m+1}$. The new least-squares solution can be found from scratch as
$$\hat{x}_{m+1} = \left( \sum_{i=1}^{m+1} \tilde{a}_i \tilde{a}_i^T \right)^{-1} \sum_{i=1}^{m+1} y_i \tilde{a}_i,$$
which is computationally inefficient. This problem explores a low-complexity alternative that can compute the new solution $\hat{x}_{m+1}$ based on the old one $\hat{x}_m$, and can incorporate subsequent measurement outcomes recursively.
(a) Let $P_m = \left( \sum_{i=1}^m \tilde{a}_i \tilde{a}_i^T \right)^{-1}$. Using Problem 2, show that
$$P_{m+1} = \left[ P_m^{-1} + \tilde{a}_{m+1} \tilde{a}_{m+1}^T \right]^{-1} = P_m - \frac{P_m \tilde{a}_{m+1} \tilde{a}_{m+1}^T P_m}{1 + \tilde{a}_{m+1}^T P_m \tilde{a}_{m+1}}.$$
(b) Show that the solution $\hat{x}_{m+1}$ at the $(m+1)$-st iteration can be obtained as
$$\hat{x}_{m+1} = \hat{x}_m + \epsilon_{m+1} q_{m+1},$$
where
$$q_{m+1} = P_{m+1} \tilde{a}_{m+1}, \qquad \epsilon_{m+1} = y_{m+1} - \tilde{a}_{m+1}^T \hat{x}_m.$$
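The update equations above can be verified against a from-scratch solve. This is an illustrative Python sketch with random data (the programming assignment below asks for Julia); note that part (a) is exactly the matrix inversion lemma of Problem 2 applied to $P_m^{-1} + \tilde{a}_{m+1}\tilde{a}_{m+1}^T$:

```python
import numpy as np

# Verify the rank-one update of P_m and the recursive solution update
# against recomputing the least-squares solution on the augmented data.
rng = np.random.default_rng(2)
m, n = 20, 4
A = rng.standard_normal((m, n))            # rows are a_i^T
y = rng.standard_normal(m)

Pm = np.linalg.inv(A.T @ A)
xm = Pm @ A.T @ y                          # x_m

a_new = rng.standard_normal(n)             # a_{m+1}
y_new = 0.3                                # y_{m+1} (arbitrary value)

# (a) P_{m+1} = P_m - P_m a a^T P_m / (1 + a^T P_m a)
Pa = Pm @ a_new
P_next = Pm - np.outer(Pa, Pa) / (1.0 + a_new @ Pa)

# (b) x_{m+1} = x_m + eps * q, with q = P_{m+1} a, eps = y_{m+1} - a^T x_m
q = P_next @ a_new
eps = y_new - a_new @ xm
x_next = xm + eps * q

# Compare with solving from scratch on the augmented system
A2 = np.vstack([A, a_new])
y2 = np.append(y, y_new)
assert np.allclose(P_next, np.linalg.inv(A2.T @ A2))
assert np.allclose(x_next, np.linalg.solve(A2.T @ A2, A2.T @ y2))
```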
Programming Assignment

Write down your code as clearly as possible and add suitable comments.

1. Least-squares polynomial approximation. Consider a function $\phi: [a, b] \to R$. Suppose that we wish to approximate $\phi(t)$ by a polynomial $p(t)$ of degree $n$, using evaluations of the function
$$y_1 = \phi(t_1), \quad y_2 = \phi(t_2), \quad \ldots, \quad y_m = \phi(t_m)$$
at distinct values $t_1, t_2, \ldots, t_m \in [a, b]$. The goodness of the approximation is measured by the squared error
$$J = \sum_{i=1}^m (y_i - p(t_i))^2,$$
and the goal is to find the polynomial $p(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2 + \cdots + \alpha_n t^n$ that minimizes $J$. Let
$$A = \begin{bmatrix} 1 & t_1 & t_1^2 & \cdots & t_1^n \\ 1 & t_2 & t_2^2 & \cdots & t_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & t_m & t_m^2 & \cdots & t_m^n \end{bmatrix} \in R^{m \times (n+1)},$$
$x = (\alpha_0, \alpha_1, \alpha_2, \ldots, \alpha_n) \in R^{n+1}$, and $y = (y_1, y_2, \ldots, y_m) \in R^m$. Then the problem is equivalent to finding the least-squares solution to
$$y = Ax. \tag{1}$$
The matrix $A$ is a Vandermonde matrix and is full-rank (provided that $t_1, t_2, \ldots, t_m$ are distinct). Depending on $m$ and $n$, there are three possibilities. If $m > n+1$ (i.e., $A$ is tall), in general there is no solution to (1), and we fit a large number of data points $(t_1, y_1), \ldots, (t_m, y_m)$ to a low-degree polynomial. If $m = n+1$, $A$ is invertible and there is a unique polynomial passing through all the data points. If $m < n+1$, there are infinitely many polynomials passing through all the data points.
(a) Write a Julia function lspoly(t,y,n) that takes as input vectors y and t, each of length m, and a positive integer n, and outputs the coefficients $\alpha_0, \ldots, \alpha_n$ of a polynomial $p$ of degree $n$, such that the squared error $\sum_{i=1}^m (y_i - p(t_i))^2$ is minimized. For $m < n+1$, your function should output the polynomial $p$ that passes through all the data points such that $\sum_{j=0}^n \alpha_j^2$ is minimized.
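For illustration only, here is a Python sketch of the linear algebra behind part (a) (the function to be submitted must be in Julia). The name lspoly mirrors the assignment; using the pseudoinverse covers both cases, since it returns the least-squares solution when $m \ge n+1$ and the minimum-norm interpolant (minimizing $\sum_j \alpha_j^2$) when $m < n+1$:

```python
import numpy as np

# Python sketch of lspoly: fit a degree-n polynomial by least squares.
def lspoly(t, y, n):
    t = np.asarray(t, dtype=float)
    A = np.vander(t, n + 1, increasing=True)   # columns 1, t, t^2, ..., t^n
    return np.linalg.pinv(A) @ np.asarray(y, dtype=float)

# Example: recover a degree-2 polynomial exactly from 5 noiseless samples.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 1.0 + 2.0 * t - 3.0 * t**2
alpha = lspoly(t, y, 2)
assert np.allclose(alpha, [1.0, 2.0, -3.0])
```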
(b) Using the function in part (a), we now fit a polynomial to $\phi(t) = \sin t$ on $[0, 2\pi]$. For this purpose, choose 45 points uniformly at random on $[0, 2\pi]$, using the seed 1234 for the random number generator. The seed can be set by the command srand(1234). We then divide this data into three subsets. The first 15 points will be used as the training set, the next 15 points as test set #1, and the last 15 points as test set #2 (to be used in a subsequent problem). Find a polynomial $p_n$ of degree at most $n = 3$, such that the error $\sum_i (\sin t_i - p_n(t_i))^2$ over the training set is minimized, and plot the training set data points, the fitted polynomial $p_n$, and the function $\phi$ together. Repeat the experiment for $n = 5, 6, 7, 10, 12$. Plot the relative training error
$$\frac{\sum_i (\sin t_i - p_n(t_i))^2}{\sum_i (\sin t_i)^2} \tag{2}$$
as a function of $n$. For plotting, you can choose a few more values of $n$ other than the ones mentioned above. (Hint: the training error can be written as $\|y - Ax\| / \|y\|$.) Comment on the plot.
(c) For each $n$, using the polynomial $p_n$ found by training in part (b), compute the relative test error
$$\frac{\sum_{t_i \in \text{test set #1}} (\sin t_i - p_n(t_i))^2}{\sum_{t_i \in \text{test set #1}} (\sin t_i)^2}. \tag{3}$$
Plot the training and test errors as functions of $n$ on the same plot. What do you observe?
(d) Repeat parts (b) and (c) for $\phi(t) = t \ln t$ on $t \in [0.25, 1.75]$, for $n = 3, 5, 6, 7, 10, 12$.

2. Regularization. As noted in Problem 1, if $m \le n+1$, then there is at least one polynomial passing through all the points. This sounds like a desirable result in theory, but in practice it usually means that our model is too complex for the amount of data that we have; in other words, we are drawing strong conclusions based on insufficient evidence. Even for over-determined problems, our model might try to approximate the training data too closely, so that performance on new test data suffers as a consequence of trying to improve performance on the training data.
This phenomenon is called over-fitting, and a popular way to mitigate it is to regularize the solution by penalizing models that are too complex. In this problem, we use the Tychonov regularization
$$\underset{x \in R^{n+1}}{\text{minimize}} \quad \|y - Ax\|^2 + \lambda \|x\|^2 \tag{4}$$
to trade off the goodness of the fit and the complexity of the solution.
(a) Write a Julia function rlspoly(t,y,n,lambda) that takes as input vectors y and t, each of length m, a positive integer n, and a regularization parameter lambda, and outputs
the coefficients $\alpha_0, \ldots, \alpha_n$ of a polynomial $p$ of degree $n$, which is the solution to the Tychonov-regularized least-squares polynomial fitting problem.
(b) Fit $\phi(t) = \sin t$ using the training set defined in Problem 1(b), by the Tychonov-regularized least-squares solution of degree 15. Find the relative error over test set #1 for lambda = [0.1, 0.2, 0.5, 0.7, 1.0, 2.0, 5.0, 5.5]. On the same figure, plot the relative error over test set #1 and the relative error over the training set, as functions of lambda (you can use more lambda values if necessary). Comment on the plots. For the best lambda (as judged by the relative error over test set #1), plot the training set data points, the test set data points, and the fitted polynomial $q_n$. Also report this best lambda (call it $\lambda_{\sin}$).
(c) Repeat part (b) for $\phi(t) = t \ln t$ on $[0.25, 1.75]$, with $n = 17$ and lambda = [1.0, 2.0, 2.3, 2.5, 2.7, 3.5, 4.0, 4.5]. Let $\lambda_{\text{entropy}}$ be the optimal lambda that you obtain.

3. Recursive least-squares. Sometimes we would like to update an already-existing model to take into account new observations. Performing the whole computation again to add a single extra data point is not very economical. In this scenario, we can update our model recursively to reflect the changes, rather than rebuilding it from scratch.
(a) Write a Julia function lspolyupdate(tnew, ynew, xold, P) that takes as input scalars tnew and ynew, a vector of coefficients xold of length $n+1$, and a matrix P of size $(n+1) \times (n+1)$, and outputs the updated coefficient vector xnew and the updated matrix Pnew in accordance with Problem 5.
(b) Starting from the polynomial $p_n$ (for $n = 15$) obtained by solving the (non-regularized) least-squares problem in Problem 1(b) and applying the function lspolyupdate() recursively, compute the polynomial $p_n$ of degree $n = 15$ that solves the least-squares problem over all the data points in the training set and test set #1 combined.
We can consider this as an alternative way (in contrast to regularization) of utilizing the data in test set #1. Now, on the same figure, plot the training set data points, the test set #1 data points, the test set #2 data points, the polynomial $p_n$, and the polynomial $q_n$ obtained in Problem 2(b) (for lambda = $\lambda_{\sin}$). Comment on the plots. Compare the relative errors of $q_n$ and $p_n$ over test set #2. Which approach is more effective in this case: adding more training data, or regularizing the solution?
(c) Repeat part (b) for $\phi(t) = t \ln t$ on $[0.25, 1.75]$ with $n = 17$.
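For illustration, a Python sketch of the closed-form solve behind a function like rlspoly (the assignment requires Julia): the regularized problem (4) has the solution $x = (A^T A + \lambda I)^{-1} A^T y$, which reduces to ordinary least squares as $\lambda \to 0$ and shrinks the coefficients as $\lambda$ grows. The test data here are our own choice, not the assignment's random training set:

```python
import numpy as np

# Python sketch of rlspoly: Tychonov-regularized polynomial fitting via
# the normal equations (A^T A + lam I) x = A^T y.
def rlspoly(t, y, n, lam):
    A = np.vander(np.asarray(t, dtype=float), n + 1, increasing=True)
    return np.linalg.solve(A.T @ A + lam * np.eye(n + 1),
                           A.T @ np.asarray(y, dtype=float))

t = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * t)

# With lam = 0 this reduces to the ordinary least-squares fit...
x0 = rlspoly(t, y, 4, 0.0)
x_ls, *_ = np.linalg.lstsq(np.vander(t, 5, increasing=True), y, rcond=None)
assert np.allclose(x0, x_ls)

# ...and a positive lam shrinks the coefficient vector toward zero.
assert np.linalg.norm(rlspoly(t, y, 4, 10.0)) < np.linalg.norm(x0)
```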
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationMATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants.
MATH 323 Linear Algebra Lecture 6: Matrix algebra (continued). Determinants. Elementary matrices Theorem 1 Any elementary row operation σ on matrices with n rows can be simulated as left multiplication
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationRecall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning:
Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u 2 1 + u 2 2 = u 2. Geometric Meaning: u v = u v cos θ. u θ v 1 Reason: The opposite side is given by u v. u v 2 =
More informationMath 110, Spring 2015: Midterm Solutions
Math 11, Spring 215: Midterm Solutions These are not intended as model answers ; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make
More informationDesigning Information Devices and Systems I Spring 2018 Homework 5
EECS 16A Designing Information Devices and Systems I Spring 2018 Homework All problems on this homework are practice, however all concepts covered here are fair game for the exam. 1. Sports Rank Every
More informationCheng Soon Ong & Christian Walder. Canberra February June 2017
Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2017 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 141 Part III
More informationEXAM. Exam 1. Math 5316, Fall December 2, 2012
EXAM Exam Math 536, Fall 22 December 2, 22 Write all of your answers on separate sheets of paper. You can keep the exam questions. This is a takehome exam, to be worked individually. You can use your notes.
More information8 The SVD Applied to Signal and Image Deblurring
8 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an
More informationMath/CS 466/666: Homework Solutions for Chapter 3
Math/CS 466/666: Homework Solutions for Chapter 3 31 Can all matrices A R n n be factored A LU? Why or why not? Consider the matrix A ] 0 1 1 0 Claim that this matrix can not be factored A LU For contradiction,
More informationLinear Algebra, Summer 2011, pt. 3
Linear Algebra, Summer 011, pt. 3 September 0, 011 Contents 1 Orthogonality. 1 1.1 The length of a vector....................... 1. Orthogonal vectors......................... 3 1.3 Orthogonal Subspaces.......................
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationInverses. Stephen Boyd. EE103 Stanford University. October 28, 2017
Inverses Stephen Boyd EE103 Stanford University October 28, 2017 Outline Left and right inverses Inverse Solving linear equations Examples Pseudo-inverse Left and right inverses 2 Left inverses a number
More informationLinear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form
Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =
More information(a) Compute the projections of vectors [1,0,0] and [0,1,0] onto the line spanned by a Solution: The projection matrix is P = 1
6 [3] 3. Consider the plane S defined by 2u 3v+w = 0, and recall that the normal to this plane is the vector a = [2, 3,1]. (a) Compute the projections of vectors [1,0,0] and [0,1,0] onto the line spanned
More informationMath /Foundations of Algebra/Fall 2017 Foundations of the Foundations: Proofs
Math 4030-001/Foundations of Algebra/Fall 017 Foundations of the Foundations: Proofs A proof is a demonstration of the truth of a mathematical statement. We already know what a mathematical statement is.
More information