
EE127A, L. El Ghaoui. 3/27/09.
YOUR NAME HERE: SOLUTIONS
YOUR SID HERE: 42

Midterm 1 Solutions

The exam is open notes, but access to the Internet is not allowed. The maximum grade is 20. When asked to prove something, do not merely take an example; provide a rigorous proof with clear steps. Also, some parts are more difficult than others and you are not expected to finish the exam.

1. (2 points) Show that the Frobenius norm of a matrix A depends only on its singular values. Precisely, show that

    ||A||_F = ||σ||_2,

where σ := (σ_1, ..., σ_r) is the vector formed with the singular values of A, and r is the rank of A.

Solution: We present two solutions. Assume A ∈ R^{m×n}.

a) By re-writing this in terms of the squared Frobenius norm,

    ||A||_F^2 = tr(A^T A) = sum_{i=1}^n λ_i(A^T A) = sum_{i=1}^r λ_i(A^T A) = sum_{i=1}^r σ_i^2 = ||σ||_2^2,

the sum going to r because only the first r eigenvalues of A^T A are non-zero.

b) We can use the SVD of A = U Σ V^T, with Σ = diag(σ_1, ..., σ_r, 0, ..., 0), directly. We have

    ||A||_F^2 = ||U Σ V^T||_F^2 = tr(V Σ U^T U Σ V^T) = tr(V Σ^2 V^T) = tr(Σ^2 V^T V) = tr Σ^2 = sum_{i=1}^r σ_i^2,

which proves the result.
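As a quick numerical sanity check of this identity (a minimal MATLAB sketch on an arbitrary test matrix, not part of the required proof):

    A = randn(5, 3);                    % arbitrary test matrix
    s = svd(A);                         % singular values of A
    disp(norm(A, 'fro') - norm(s, 2))   % difference should be on the order of machine precision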

2. (8 points) We are given a matrix A ∈ R^{m×n}. We consider a matrix least-squares problem

    min_X ||AX - I_m||_F,    (1)

where the variable is X ∈ R^{n×m}, and I_m is the m×m identity matrix.

(a) Show that the problem can be reduced to a number of ordinary least-squares problems, each involving one column of the matrix X. Make sure you define precisely the form of these least-squares problems.

Solution: Let x_j ∈ R^n denote the j-th column of X, so that X = [x_1, ..., x_m]. We have AX - I_m = [Ax_1 - e_1, ..., Ax_m - e_m], where e_i is the i-th basis vector in R^m. Since the squared Frobenius norm of a matrix is the sum of squares of the Euclidean norms of its columns, the objective function of our problem is

    ||AX - I_m||_F^2 = ||[Ax_1 - e_1, ..., Ax_m - e_m]||_F^2 = sum_{i=1}^m ||Ax_i - e_i||_2^2.

Note that the vector x_i appears only once, in the i-th term of the sum. Hence, to minimize the above objective function, we can simply minimize the i-th term with respect to x_i, independently of the other terms. That is:

    min_{X = [x_1, ..., x_m]} ||AX - I_m||_F^2 = min_{x_1, ..., x_m} sum_{i=1}^m ||Ax_i - e_i||_2^2 = sum_{i=1}^m min_{x_i} ||Ax_i - e_i||_2^2.

Hence, to solve our problem we can simply solve m ordinary least-squares problems, of the form

    min_x ||Ax - e_i||_2,   i = 1, ..., m.

If x_i^* is an optimal solution for the above, then X^* = [x_1^*, ..., x_m^*] is optimal for our original problem.

(b) Show that, when A is full column rank, then the optimal solution is unique, and given by X^* = (A^T A)^{-1} A^T.

Solution: When A is full column rank, the solution to the least-squares problem min_x ||Ax - y||_2 is unique, and given by x^* = (A^T A)^{-1} A^T y. Applying this to y = e_i, i = 1, ..., m, we obtain that the optimal X is unique, and given by X^* = [x_1^*, ..., x_m^*], with x_i^* = (A^T A)^{-1} A^T e_i, i = 1, ..., m. That is:

    X^* = [(A^T A)^{-1} A^T e_1, ..., (A^T A)^{-1} A^T e_m] = (A^T A)^{-1} A^T [e_1, ..., e_m] = (A^T A)^{-1} A^T I_m = (A^T A)^{-1} A^T.

Another method is to simply take derivatives in the matrix problem. Indeed, the objective of Eq. (1) is convex in X, and the problem is unconstrained. Taking the matrix derivative of ||AX - I_m||_F^2, and setting it equal to zero, we have 2 A^T A X - 2 A^T I_m = 0. When A is full column rank, A^T A is full rank and invertible, so we have X = (A^T A)^{-1} A^T.
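As an optional numerical illustration (a minimal MATLAB sketch with arbitrary dimensions; the variable names are ours), the closed-form solution can be compared against a perturbed candidate:

    m = 8; n = 4;                           % m > n, so a random A has full column rank
    A = randn(m, n);
    Xstar = (A'*A) \ A';                    % closed-form optimal solution of problem (1)
    obj = @(X) norm(A*X - eye(m), 'fro');   % objective of problem (1)
    disp(obj(Xstar))                        % optimal value
    disp(obj(Xstar + 1e-2*randn(n, m)))     % any perturbation should give a larger value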

(c) Show that in general, a solution is X^* = A†, the pseudo-inverse of A. Hint: use the SVD of A, and exploit the fact that the Frobenius norm of a matrix is the Euclidean norm of the vector formed with its singular values.

Solution: We know that if x_i^* is optimal for the ordinary least-squares problem min_x ||Ax - e_i||_2, then X^* := [x_1^*, ..., x_m^*] is optimal for the original problem. Since x_i^* = A† e_i is optimal for the above problem, we obtain that X^* = [x_1^*, ..., x_m^*] = A† [e_1, ..., e_m] = A† is optimal for the matrix problem.
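Again as an optional check (a minimal MATLAB sketch on a deliberately rank-deficient A; the names are ours), the pseudo-inverse indeed attains the smallest residual:

    m = 6; n = 4;
    A = randn(m, 2) * randn(2, n);             % a rank-2 matrix, so A is not full column rank
    Xdag = pinv(A);                             % candidate optimal solution X^* = A†
    disp(norm(A*Xdag - eye(m), 'fro'))          % optimal residual
    disp(norm(A*(Xdag + 1e-2*randn(n, m)) - eye(m), 'fro'))   % perturbations should not do better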

(d) Assume that we would like to form an estimate X of the pseudo-inverse of a matrix A that is as sparse as possible, while maintaining a good accuracy, expressed in terms of the objective function of problem (1). How would you formulate the corresponding trade-off? How would you use CVX to plot a trade-off curve?

Solution: Ideally, to enforce sparsity, we would like to solve a problem of the form

    min_X ||AX - I_m||_F + λ card(X),    (2)

where card(X) denotes the cardinality (number of non-zeros) of X. This is, however, wildly non-convex, so we relax Eq. (2) to the l1-regularized problem

    min_X ||AX - I_m||_F + λ sum_{i,j} |X_ij|.    (3)

We can then increase or decrease λ to look at the trade-off between the error ||AX - I_m||_F and the sparsity of X. (Note that if A is full row rank, so that AX = I_m is solvable, it may make sense to minimize sum_{i,j} |X_ij| subject to AX = I_m.) CVX code is below, and in Fig. 1 we plot the error ||AX - I_m||_F versus the sparsity of X.

    m = 20; n = 10; A = randn(m, n);
    lambdas = [0 0.01 0.05 0.1 0.5 1];    % regularization weights to sweep (representative values)
    cvx_quiet(true);
    Xs = cell(numel(lambdas), 1);
    for i = 1:numel(lambdas)
        lambda = lambdas(i);
        cvx_begin
            variable X(n, m);
            minimize(norm(A*X - eye(m), 'fro') + lambda * sum(sum(abs(X))));
        cvx_end
        Xs{i} = X .* (abs(X) > 1e-8);     % zero out numerically negligible entries
    end
    splevel = zeros(numel(lambdas), 1);
    errlevel = zeros(numel(lambdas), 1);
    for i = 1:numel(lambdas)
        splevel(i) = (n * m - nnz(Xs{i})) / (m * n);
        errlevel(i) = norm(A*Xs{i} - eye(m), 'fro');
    end
    figure; plot(splevel, errlevel);
    xlabel('Proportion zero entries in $X$', ...
        'Interpreter', 'latex', 'fontsize', 20);
    h = ylabel('$\|AX - I_m\|_F$');
    set(h, 'Interpreter', 'latex', 'fontsize', 20);

Figure 1: Trade-offs for sparsity in terms of reconstruction error (||AX - I_m||_F versus the proportion of zero entries in X).

3. (3 points) Let p_0, p_1, ..., p_m be a collection of (m + 1) points in R^n. Consider the set of points closer (in Euclidean norm) to p_0 than to the remaining points p_1, ..., p_m:

    P = { x ∈ R^n : ||x - p_0||_2 ≤ ||x - p_i||_2, i = 1, ..., m }.

Show that the set P is a polyhedron, and provide a representation of it in terms of the problem's data, as P = { x : Ax ≤ b }, where A is a matrix and b a vector, which you will determine.

Solution: There are (at least) two equally reasonable ways to solve this problem. The first is to simply re-write the constraints and re-arrange symbols to give ourselves polyhedral inequalities. That is, we note that ||x - p_0||_2 ≤ ||x - p_i||_2 iff ||x - p_0||_2^2 ≤ ||x - p_i||_2^2,

and the second inequality can be re-written as

    x^T x - 2 p_0^T x + p_0^T p_0 ≤ x^T x - 2 p_i^T x + p_i^T p_i,

that is,

    2 (p_i - p_0)^T x ≤ p_i^T p_i - p_0^T p_0,

which is the same as

    (p_i - p_0)^T x ≤ (1/2) (p_i - p_0)^T (p_i + p_0).

Evidently, then, if we set A ∈ R^{m×n} and b ∈ R^m as

    A = [ (p_1 - p_0)^T ; (p_2 - p_0)^T ; ... ; (p_m - p_0)^T ],
    b = (1/2) [ (p_1 - p_0)^T (p_1 + p_0) ; (p_2 - p_0)^T (p_2 + p_0) ; ... ; (p_m - p_0)^T (p_m + p_0) ],    (4)

we can write the set as P = { x : Ax ≤ b }.

A more geometric way of deriving the above is as follows. First, we recognize that we will be intersecting a series of convex sets, so if we can describe the set { x : ||x - p_0||_2 ≤ ||x - p_i||_2 } by a linear inequality, we can simply stack the inequalities. Now, the set of points closer to the point p_0 than to the point p_i is a half-space, and the normal to the half-space is p_i - p_0. All that remains is to find the point on the segment halfway between p_0 and p_i, because this is the point that gives the inequality defining our half-space. Clearly, this point is (p_i + p_0)/2. See Fig. 2 for a picture. Thus, since our normal is p_i - p_0 and every point x of the half-space is closer to p_0 than to p_i, all x in the half-space satisfy

    (p_i - p_0)^T x ≤ (p_i - p_0)^T (p_i + p_0)/2,

which gives the same linear inequalities as the matrix A and vector b in Eq. (4).

Figure 2: Half-space separation between p_0 and p_1; the separating hyperplane has normal p_1 - p_0 and passes through (p_0 + p_1)/2.
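A minimal MATLAB sketch (our own variable names, arbitrary random data) that builds A and b as in Eq. (4) and checks that p_0 itself belongs to P:

    n = 2; m = 4;
    p0  = randn(n, 1);
    pts = randn(n, m);                        % columns are the points p_1, ..., p_m
    A = (pts - repmat(p0, 1, m))';            % i-th row is (p_i - p_0)'
    b = 0.5 * sum((pts - repmat(p0, 1, m)) .* (pts + repmat(p0, 1, m)), 1)';  % b_i = (p_i - p_0)'*(p_i + p_0)/2
    disp(all(A*p0 <= b))                      % p_0 lies in P, so this prints 1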

4. (3 points) We consider a production planning problem. The variables x_j ∈ R, j = 1, ..., n, in the problem are activity levels (for example, production levels for different products manufactured by a company). These activities consume m resources, which are limited. Activity j consumes A_ij x_j of resource i. (Ordinarily we have A_ij ≥ 0, that is, activity j consumes resource i, but we allow the possibility that A_ij < 0, which means that activity j actually generates resource i as a by-product.) The total resource consumption is additive, so the total amount of resource i consumed is c_i = sum_{j=1}^n A_ij x_j. Each resource consumption is limited: for every resource i, we must have c_i ≤ c_i^max, where the c_i^max are given. Activity j generates revenue r_j(x_j), where r_j is a function of the form

    r_j(u) = p_j u                            if u ≤ q_j,
             p_j q_j + p_j^disc (u - q_j)     otherwise,

where q_j > 0, p_j^disc > 0 are given parameters. The above revenue model says that above a certain level of activity, the revenue is discounted, and not as high as for lower activity levels. Show how to formulate the problem of maximizing revenue under the resource constraints as an LP.

Solution: We let our optimization variables be r and x, where r_j corresponds to the revenue r_j(x_j) above. We want to maximize sum_j r_j (our revenue) subject to the constraint that r_j ≤ min{ p_j x_j, p_j q_j + p_j^disc (x_j - q_j) }, which is to say, r_j is less than both terms. If we define the matrices and vectors

    P = diag(p_1, ..., p_n),   P^disc = diag(p_1^disc, ..., p_n^disc),   q = (q_1, ..., q_n),   c^max = (c_1^max, ..., c_m^max),

we can re-write the problem as

    max_{x, r}  1^T r
    subject to  Ax ≤ c^max,
                r ≤ Px,
                r ≤ Pq + P^disc (x - q).
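In the same spirit as the CVX snippet of problem 2(d), here is a minimal sketch of this LP in CVX; the data below (Adata, cmax, p, pdisc, q) are arbitrary placeholders of our own, not part of the exam:

    n = 5; m = 3;
    Adata = abs(randn(m, n));           % resource-consumption matrix (nonnegative here for simplicity)
    cmax  = 10 * ones(m, 1);            % resource limits
    p     = 2 + rand(n, 1);             % base prices
    pdisc = 0.5 * p;                    % discounted prices, smaller than p
    q     = ones(n, 1);                 % quantity thresholds
    cvx_begin
        variables x(n) r(n)
        maximize( sum(r) )
        subject to
            Adata * x <= cmax;
            r <= diag(p) * x;
            r <= diag(p) * q + diag(pdisc) * (x - q);
    cvx_end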

7 Since Q, Q is invertible, and there is a unique solution to Q = c. Thus = Q 1 c is the unique optimal point, and p = 1 2 ( ) T Q c T = 1 2 ct Q 1 c c T Q 1 c = 1 2 ct Q 1 c. Assume from now on that Q is not invertible. (b) Assume further that Q is diagonal: Q = diag(λ 1,..., λ n ), with λ 1... λ r > λ r+1 =... = λ n =, where r is the rank of Q (1 r < n). Solve the problem in that case. (You will distinguish between two cases.) Solution: Partition the variable in two parts: = (y, z) with y R r and z R n r. Likewise partition c in two parts: c = (a, b) with a R r and b R n r. Since Q = diag(λ, ), with Λ = diag(λ 1,..., λ r ), we have f() = 1 2 T Q c T = 1 2 yt Λy a T y b T z = f 1 (y) + f 2 (z), where f 1 (y) = 1 2 yt Λy a T y and f 2 (z) = b T z. The above is a sum of two functions f 1, f 2 of independent variables y and z. Thus our problem can be solved by imizing f 1 over y and f 2 over z independently: f 1(y) + f 2 (z) = =(y,z) y f 1 (y) + z f 2 (z). The imization over y is eactly the same as in the previous part, since the matri Λ is positive definite. That is, the optimal y is unique, and given by y = Λ 1 a. For the imization with respect to z, we distinguish two cases. Either the vector b =, in which case the optimal value of z f 2 (z) is zero, and any z is optimal. Or, b, in which case the corresponding optimal value is and the problem is not attained. To summarize: If b =, then the optimal set is the subspace X opt = { (Λ 1 a, z) : z R n r}, and the optimal value is 1 2 at Λ 1 a. If b, the optimal value is, and the optimal set is empty (the optimal value is not attained). Note that the condition b = can simply be epressed as c R(Q), where R(Q) is the range of Q. (c) Now we do not assume that Q is diagonal anymore. Under what conditions (on Q, c) is the optimal value finite? 7

(c) Now we do not assume that Q is diagonal anymore. Under what conditions (on Q, c) is the optimal value finite?

Solution: Let Q = U^T diag(Λ, 0) U be an eigenvalue decomposition of the symmetric matrix Q, with U orthogonal and Λ = diag(λ_1, ..., λ_r) ≻ 0. Our problem reads

    min_x (1/2) x^T U^T diag(Λ, 0) U x - c^T x = min_x̃ (1/2) x̃^T diag(Λ, 0) x̃ - c̃^T x̃,

where x̃ := Ux, and c̃ := Uc. This change of variable allows us to go back to the diagonal case, which we solved in the previous part. Partition c̃ = (ã, b̃), with ã ∈ R^r the component of c̃ along the range of diag(Λ, 0), and b̃ ∈ R^{n-r} the component of c̃ orthogonal to that range. We obtain that if b̃ ≠ 0, then the optimal value is -∞ and the optimal set is empty. This corresponds to the case when c is not in the range of Q. We conclude that the optimal value is finite if and only if c is in the range of Q.

(d) Determine the optimal value and optimal set. Be as specific as you can.

Solution: We present two solutions.

a) Assume that c is in the range of Q, so that the optimal value is finite. If that is the case, then c is also in the range of Q^{1/2}, because the ranges of Q and its symmetric square root are identical (both matrices share the same system of eigenvectors, and the range depends only on the eigenvectors). Let d be such that c = Q^{1/2} d. We can write our objective function f as

    f(x) = (1/2) x^T Q x - c^T x = (1/2) x^T Q x - d^T Q^{1/2} x = (1/2) ||Q^{1/2} x - d||_2^2 - (1/2) ||d||_2^2.

Minimizing f amounts to solving the ordinary least-squares problem

    min_x ||Q^{1/2} x - d||_2.

The solution set is

    X_opt = (Q^{1/2})† d + N(Q^{1/2}),

where N(Q^{1/2}) is the nullspace of Q^{1/2}, which, as noted before, is the same as the nullspace of Q itself. We can write the vector (Q^{1/2})† d in terms of Q, c as follows: (Q^{1/2})† d = Q† c. Indeed, with the eigenvalue decomposition Q = U^T diag(Λ, 0) U as above,

    (Q^{1/2})† d = U^T diag(Λ^{-1/2}, 0) U d = [ U^T diag(Λ^{-1}, 0) U ] [ U^T diag(Λ^{1/2}, 0) U d ] = Q† c,

since U^T diag(Λ^{-1}, 0) U = Q† and U^T diag(Λ^{1/2}, 0) U d = Q^{1/2} d = c. Hence, if c ∈ R(Q), we have

    X_opt = Q† c + N(Q),

where N(Q) is the nullspace of Q. The optimal value is the value of the objective evaluated at any of the points in the optimal set. For example,

    p^* = (1/2) (Q† c)^T Q (Q† c) - c^T Q† c = -(1/2) c^T Q† c,

where we have exploited the fact that Q† Q Q† = Q†.

b) Here is an alternative solution. We take the same notation as in the previous part. Assume that b̃ = 0, that is, that c is in the range of Q, so that the optimal value is finite. Then the optimal set, in the space of x̃-variables, is

    X̃_opt = { (Λ^{-1} ã, z) : z ∈ R^{n-r} }.

For every x̃ ∈ X̃_opt, we can write x̃ = x̃_0 + (0, z), where

    x̃_0 := (Λ^{-1} ã, 0) = diag(Λ^{-1}, 0) (ã, b̃) = diag(Λ^{-1}, 0) c̃.

With c̃ = Uc, the corresponding point in the original variables is

    x_0 := U^T x̃_0 = U^T diag(Λ^{-1}, 0) U c = Q† c.

When z spans R^{n-r}, the vector (0, z) spans the nullspace of diag(Λ, 0), so that the vector U^T (0, z) spans the nullspace of Q. Hence, if c ∈ R(Q), the optimal set is

    X_opt = Q† c + N(Q),

where N(Q) is the nullspace of Q.
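As a final optional check of part (d) (a minimal MATLAB sketch on random data of our own choosing):

    n = 5;
    B = randn(n, 3);  Q = B * B';                 % a positive semidefinite matrix of rank 3 (almost surely)
    c = Q * randn(n, 1);                          % choose c in the range of Q, so the optimal value is finite
    x0 = pinv(Q) * c;                             % the particular optimal point Q†c
    Nb = null(Q);                                 % orthonormal basis of the nullspace of Q
    f  = @(x) 0.5 * x' * Q * x - c' * x;
    xr = x0 + Nb * randn(size(Nb, 2), 1);         % another point of the optimal set Q†c + N(Q)
    disp([f(x0), f(xr), -0.5 * c' * pinv(Q) * c]) % all three values coincide
    disp(norm(Q * x0 - c))                        % the optimality condition Q x = c holds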
