Sample ECE275A Midterm Exam Questions
- Sheila Thompson
- 5 years ago
The questions given below are actual problems taken from exams given in the past few years. Solutions to these problems will NOT be provided. These problems and old homework problems have been recycled on midterm and final exams in the recent past, so you should study the questions given below as well as all past homework solutions carefully. You should also study the solutions given to the sample undergraduate midterm.

1. Geometry Induced by a Linear Operator A and the Pseudoinverse.

Let A be an m × n complex matrix and consider the linear inverse problem y = Ax. Let Ω and W be positive-definite weighting matrices which define inner products on the domain and codomain of A, respectively. Let A^+ denote the pseudoinverse of A.

(a) Prove that N(A^*) = R(A)^⊥ and R(A^*) = N(A)^⊥.
(b) State the four Moore-Penrose pseudoinverse conditions and prove that, as a consequence of these conditions, P = AA^+, I − P, Q = A^+A, and I − Q are orthogonal projection operators. Give the domain and range of each of these operators.
(c) Derive an expression for the adjoint operator A^* in terms of Ω, W, and A.
(d) Derive closed-form expressions for A^+ (in terms of Ω, W, and A) valid for the two distinct cases when A is 1-to-1 and when it is onto, respectively. Justify every key step in your derivations.

2. The Singular Value Decomposition (SVD).

Perform the following numerical computations. Assume that A maps between real spaces using the standard (unweighted) inner product.

(a) Determine the SVD of the matrix A = ( ).
(b) Give the dimension and an orthonormal basis for each of the four fundamental subspaces of the real matrix A.
(c) Construct the orthogonal projection operators onto each of the four fundamental subspaces (with respect to the standard inner product).
(d) Construct the pseudoinverse A^+ (with respect to the standard inner product).

3. Constructing the Pseudoinverse.
Let the matrix A shown below denote a linear mapping between two complex weighted inner product spaces as follows. Let the domain be C^3 with inner product weighting matrix Ω = ( ).
Let the codomain be C^4 with inner product weighting matrix W = ( ). Let the mapping between the complex spaces C^3 and C^4 be A = ( ). Construct the pseudoinverse A^+, fully showing all of its components. Justify every step.

4. Orthogonal Projection Operator onto the Space of Symmetric Matrices

Let X = C^{n×n} be the Hilbert space of n × n complex matrices X with Frobenius inner product ⟨X_1, X_2⟩ = tr(X_1^H X_2). Let V be the set of symmetric (not Hermitian) n × n complex matrices,

    V = { V : V = V^T, V ∈ X = C^{n×n} }.

Finally, define the mapping P(·) : X → X by

    P(X) = (X + X^T) / 2.

(a) Prove that V is a (Hilbert) subspace of X and give its dimension. Prove that the set of Hermitian matrices is not a subspace.
(b) Prove that P(·) is the orthogonal projection of X onto the subspace V. Please note that at the outset we know nothing about P(·) other than its definition.

5. Geometric Approach to Solving Linear Inverse Problems I

Let X = Sym(C, n) ⊂ C^{n×n} be the vector space of symmetric n × n complex matrices¹ with Frobenius inner product ⟨X_1, X_2⟩ = tr(X_1^H X_2). Let Y = C^{n×m} be the Hilbert space of n × m complex matrices with inner product ⟨Y_1, Y_2⟩ = tr(Y_1^H Y_2). Finally, for a given full row-rank n × m matrix A, define the mapping A(·) : X → Y by

    A(X) = XA,    rank(A) = n.

¹ In the previous problem, the set of symmetric, complex matrices was proven to be a Hilbert subspace. Here, we treat this set as a Hilbert space in its own right.
(a) Prove that the least-squares solution to the inverse problem Y = A(X) is the necessarily unique solution to the (matrix) Lyapunov equation

    XM + M^T X = Λ,    (1)

where M ≜ AA^H and Λ ≜ YA^H + (YA^H)^T.
(b) The Matlab Controls Toolbox provides a numerical solver for the Lyapunov equation (1). Mathematically, it can be readily shown that a unique solution to the Lyapunov equation is theoretically given by

    X = ∫_0^∞ e^{−M^T t} Λ e^{−M t} dt    (2)

provided that the real parts of the eigenvalues of M are all strictly greater than zero. Given this fact, justify the claim that the unique solution to (1) is given by equation (2).

6. Geometric Approach to Solving Linear Inverse Problems II

Let X belong to the space 𝒳 of complex m × n matrices and Y to the space 𝒴 of complex p × q matrices. On each of these two spaces define the (unweighted) Frobenius inner product ⟨X_1, X_2⟩ = trace(X_1^H X_2), with the associated (induced) Frobenius norm of a matrix, ‖X‖_F = (trace(X^H X))^{1/2}. We want to find a minimum-norm least-squares solution to the linear inverse problem Y = A(X), where

    A(X) = CXB,

for a fixed complex p × m matrix C and a fixed complex n × q matrix B. It is a fact that the operator A is 1-to-1 iff both rank(C) = m and rank(B) = n. Alternatively, A is onto iff both rank(C) = p and rank(B) = q. In general, of course, A might be rank-deficient, and hence neither 1-to-1 nor onto.

(a) Determine the adjoint operator A^*(Y).
(b) Derive the normal equations (i.e., the algebraic condition that a least-squares solution X must satisfy) and the algebraic condition for a solution X to be minimum norm.
(c) For the operator A 1-to-1, derive a closed-form expression for the solution and for the pseudoinverse A^+(Y). Justify every key step in your derivation.
(d) For A 1-to-1, give a closed-form expression for the orthogonal projector onto R(A).
(e) For A onto, derive a closed-form expression for the solution and for the pseudoinverse A^+(Y). Justify every key step in your solution.
(f) For A onto, give a closed-form expression for the orthogonal projector onto R(A^*).
(g) Express the operator pseudoinverse expressions derived in parts (c) and (e) in terms of the matrix pseudoinverses C^+ and B^+ (assuming the standard inner products on the domains and codomains of the matrices C and B). Based on these expressions, conjecture the form of the operator pseudoinverse A^+ for the case when A is neither 1-to-1 nor onto, and then describe in words (a proof is not necessary) how you would prove that your conjectured form is indeed the correct form for the pseudoinverse.

7. Geometric Approach to Solving Linear Inverse Problems III.

Let X = R^{m×n} be the space of real m × n matrices X, m and n arbitrary, with inner product ⟨X_1, X_2⟩ = tr(X_1^T X_2). Let Y = RV^m be the space of finite-variance real random vectors y(ω) ∈ R^m with inner product

    ⟨y_1, y_2⟩ = E{ y_1^T y_2 }.

Let A : X → Y be the linear mapping

    A(X) = Xb(ω),

where b is a specified real n-dimensional random vector with covariance E{ bb^T } = I.

(a) Show that A is one-to-one.
(b) Find the least-squares solution to the inverse problem y = A(X) and give the least-squares estimate of y.

8. Geometric Approach to Solving Linear Inverse Problems IV.

Let Y belong to the Hilbert space of complex p × q matrices, 𝒴 = C^{p×q}, with weighted Frobenius inner product ⟨Y_1, Y_2⟩ = trace(Y_1^H W Y_2), where W is a positive-definite Hermitian weighting matrix. The space 𝒴 is pq-dimensional. Let the complex vector c = (c_1, …, c_n)^T belong to the Hilbert space X = C^n with the standard inner product ⟨c_1, c_2⟩ = c_1^H c_2 and dimension n > pq. Consider the following linear mapping between X and 𝒴, which is onto but NOT one-to-one, with X_i ∈ 𝒴 for i = 1, …, n:

    Y = c_1 X_1 + ⋯ + c_n X_n.    (3)

(a) (i) Is the system (3) solvable for all Y? (ii) What is the dimension of the range of the mapping? (iii) What is the dimension of the null space? (iv) Is the null space trivial or nontrivial? (v) What are the solution possibilities of the inverse problem given by (3)?
(b) Determine the adjoint operator for the mapping shown in (3).
(c) Determine the pseudoinverse solution to the inverse problem (3). (If you were unable to solve part (b), show in detail how you would use the result of part (b) to solve part (c).)

9. Real Vector Derivatives and Coordinate Transformations.

Let x = (x, y)^T be the standard cartesian coordinates in the plane R^2. An alternative curvilinear coordinate system in R^2 is given by the parabolic coordinates ξ = (ξ, η)^T. The relationship between these two coordinate representations is

    x = ξη  and  y = (η^2 − ξ^2) / 2.

(a) Determine the Jacobian J_{xξ} = ∂x/∂ξ, the metric tensor Ω_ξ, the cogradient ∂/∂ξ, and the gradient ∇_ξ. Are the parabolic coordinates orthogonal? Are they orthonormal? (Explain your answers.)
(b) Let ξ̂ denote the coordinate system obtained by normalizing the parabolic-coordinate canonical basis vectors e_1 = (1, 0)^T and e_2 = (0, 1)^T to produce unit vectors ε_1, ε_2 for the new ξ̂-system. Find the cogradient ∂/∂ξ̂ and the gradient ∇_ξ̂ in the ξ̂-system.²

10. Vector Derivatives and Regularized Least-Squares.

One way to regularize an ill-posed linear inverse problem y = Ax is to find a solution which minimizes the regularized least-squares cost function

    l(x) = ‖y − Ax‖²_W + γ‖x‖²_Ω,    (4)

for a given regularization parameter γ > 0.³

(a) Assuming that all the quantities in (4) are real, minimize the cost function (4) by finding a stationary point x̂ of l(x) and showing that the corresponding Hessian is positive definite. Use the chain rule in your derivation and make it clear where it was used.
(b) Assuming that all the quantities in (4) are complex, minimize the cost function (4) by finding a stationary point x̂ of l(x) and showing that the corresponding Hessian is positive definite. Did you need to use the chain rule in your derivation?
(c) Assume that A is one-to-one. Show that in the limit γ → 0 we obtain x̂ = A^+ y for the solution to part (b) above.
(d) Assume that A is onto. Show that in the limit γ → 0 we obtain x̂ = A^+ y for the solution to part (b) above.
(As suggested by parts (c) and (d), the solutions to the regularized least-squares minimization problem form a family of solutions which includes the pseudoinverse solution as a special case (this is true even when A is rank-deficient). Thus regularized least-squares provides a generalization of the pseudoinverse approach.)

² Hint: Directly exploit the fact that in the new system Ω_ξ̂ = I.
³ This problem also arises in other contexts. For instance, this is the loss function which is solved to obtain the so-called Leaky LMS adaptive filtering algorithm, where x denotes the vector of tap-filter weights.
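The limits in parts (c) and (d) of the regularized least-squares problem above are easy to check numerically. Below is a minimal sketch (not part of the exam) for the real, one-to-one case, taking W = I and Ω = I and an illustrative random matrix A, and assuming the standard stationary-point formula x̂ = (A^T W A + γΩ)^{-1} A^T W y; as γ → 0 the regularized solution approaches the pseudoinverse solution A^+ y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative tall matrix (one-to-one case) with W = I, Omega = I.
A = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

def x_reg(A, y, gamma):
    # Stationary point of l(x) = ||y - Ax||^2 + gamma ||x||^2:
    # solve (A^T A + gamma I) x = A^T y.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

x_pinv = np.linalg.pinv(A) @ y
for gamma in [1e-2, 1e-6, 1e-10]:
    # Distance to the pseudoinverse solution shrinks as gamma -> 0.
    print(gamma, np.linalg.norm(x_reg(A, y, gamma) - x_pinv))
```

The same experiment with a wide full row-rank A (the onto case of part (d)) behaves identically, which is the point of the parenthetical remark above.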
11. Complex Vector Derivatives and Constrained Optimization I

A so-called Minimum Variance Distortionless Response (MVDR) filter is an FIR filter,

    y_k = Σ_{j=0}^{l} h̄_j x_{k−j} = h^H X_k,   h = (h_0, …, h_l)^T,   X_k = (x_k, …, x_{k−l})^T,    (5)

designed to pass without distortion a desired signal at a known frequency ω_0,

    H(e^{jω_0}) = 1,   H(z) = h̄_0 + h̄_1 z^{−1} + ⋯ + h̄_l z^{−l},    (6)

while blocking noise and jamming signals located at other frequencies. Such filters can be designed for signals distributed in time or in space, where in the latter case the procedure is known as the MVDR beamformer.⁴ It is assumed that x_k is a zero-mean, finite-variance stationary random sequence. The l + 1 optimal filter coefficients h are found by minimizing the variance (signal power) of the filter output, E{ |y_k|² }, subject to the constraint (6). This forces the filter to attenuate signals at all frequencies other than ω_0 while passing signals at ω_0 without distortion.

(a) Show that the MVDR optimization problem can be stated as

    min_h h^H Ω h  subject to  g(h) = 1 − φ(ω_0)^H h = 0,    (7)

clearly stating the definition of the (l+1) × (l+1) Hermitian matrix Ω and the (l+1)-dimensional vector function φ(ω_0).
(b) The theory of Lagrange multipliers is well-posed when the objective function and constraints are real-valued functions of real unknown variables.⁵ Note that a vector of p complex equality constraint conditions, g(h) = 0 ∈ C^p, corresponds to 2p real equality constraints, Re g(h) = 0 and Im g(h) = 0. Thus, a well-defined Lagrangian is given by

    L = h^H Ω h + λ_R^T Re g(h) + λ_I^T Im g(h),

for real-valued p-dimensional Lagrange multiplier vectors λ_R and λ_I. Define the complex Lagrange multiplier vector λ by λ = λ_R + jλ_I and show that the Lagrangian given above can be equivalently written as

    L = h^H Ω h + Re( λ^H g(h) ).    (8)

⁴ See S. Haykin, Adaptive Filter Theory, 3rd Edition. Note that signals and filter coefficients are assumed to be complex.
⁵ The unknown variables here can be taken to be the 2(l+1) real-valued real and imaginary parts of the l+1 filter coefficients h_k, k = 0, …, l. The real-valued constraints obviously correspond to the two scalar (p = 1) conditions Re g(h) = 0 and Im g(h) = 0.
(c) Using the properties of complex vector derivatives, determine the MVDR filter coefficients by finding a stationary point of the Lagrangian (8). Assume that Ω is full rank.
(d) Again determine the MVDR filter coefficients, this time without taking derivatives, by solving problem (7) using the geometric approach. Assume that Ω is full rank.

12. Complex Vector Derivatives and Constrained Optimization II

Let ẑ ∈ C^n be the pseudoinverse solution to the linear inverse problem

    s = A z̄ ∈ C^m,    (9)

where all spaces are complex Hilbert spaces with the standard inner product (identity metric tensor). Assume that A has full column rank. Let z̄ denote the complex conjugate of z.

(a) Let z = x + jy ∈ C^n for x, y ∈ R^n, and let l(z) = l(x, y) ∈ R be a real-valued loss function of z which is to be minimized subject to a vector of constraints on the values of z: g(z) = g(x, y) = 0 ∈ C^p. The theory of Lagrange multipliers is well-posed when the objective function and constraints are real-valued functions of real unknown variables. Thus a well-defined Lagrangian is given by

    L = l(x, y) + λ_R^T Re g(x, y) + λ_I^T Im g(x, y),

for real-valued p-dimensional Lagrange multiplier vectors λ_R and λ_I. Define the complex Lagrange multiplier vector λ by λ = λ_R + jλ_I and show that the Lagrangian given above can be equivalently written as

    L = l(z) + Re( λ^H g(z) ).    (10)

(b) Find the constrained least-squares solution ẑ_c which provides a least-squares solution to the inverse problem (9) subject to the constraint B z̄ = 0 ∈ C^p, assuming that B has full row rank. Justify the existence of every inverse you take.
(c) (i) Show that the constrained least-squares solution ẑ_c is the projection of the unconstrained least-squares solution ẑ onto the null space of B by specifically determining the projection operator. (Be sure to prove that it is indeed a projection operator onto the null space of B.) (ii) Is this projection an orthogonal projection? (Explain your answer.)
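For intuition on parts (b) and (c) of Problem 12, here is a hedged numerical sketch using real matrices (so the conjugations drop out) and hypothetical random data. It uses the Lagrange-multiplier form of the constrained solution, ẑ_c = [I − (AᵀA)⁻¹Bᵀ(B(AᵀA)⁻¹Bᵀ)⁻¹B] ẑ, which is one standard expression consistent with the full-column-rank and full-row-rank assumptions (it is a sketch, not the graded derivation), and cross-checks it by restricting the least-squares problem to a basis for N(B).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: A is m x n with full column rank, B is p x n with full row rank.
m, n, p = 8, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))
s = rng.standard_normal(m)

# Unconstrained least-squares solution.
z_hat = np.linalg.lstsq(A, s, rcond=None)[0]

# Lagrange-multiplier solution: z_c = P z_hat with
# P = I - (A^T A)^{-1} B^T (B (A^T A)^{-1} B^T)^{-1} B.
G = np.linalg.inv(A.T @ A)                 # exists since A has full column rank
K = G @ B.T @ np.linalg.inv(B @ G @ B.T)   # B G B^T invertible since B has full row rank
P = np.eye(n) - K @ B
z_c = P @ z_hat

# Cross-check by parametrizing the null space of B directly.
_, _, Vt = np.linalg.svd(B)
N = Vt[p:].T                               # columns span N(B)
z_ref = N @ np.linalg.lstsq(A @ N, s, rcond=None)[0]

print(np.linalg.norm(B @ z_c))             # ~0: constraint satisfied
print(np.linalg.norm(z_c - z_ref))         # ~0: matches the reduced problem
print(np.linalg.norm(P @ P - P))           # ~0: P is idempotent
```

Note that P here is idempotent with range N(B) but is generally not Hermitian, which is exactly the question posed in part (c)(ii).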
13. Complex Vector Derivatives, Regularized Optimization, and the Leaky LMS Algorithm.

In the lecture supplement on complex derivatives we adaptively learned a complex parameter vector of FIR filter coefficients c ∈ C^n by attempting to minimize the instantaneous quadratic error

    ˆl(c) = |e_k|²,   e_k = y_k − c^H x_k,    (11)

at each sample instant via gradient descent on ˆl,

    ĉ_{k+1} = ĉ_k − α ∇_c ˆl(ĉ_k),    (12)

where y_k ∈ C, x_k ∈ C^n, and α > 0.⁶ This results in the LMS Algorithm. Unfortunately, the basic LMS Algorithm can perform poorly in nonstationary environments because it tends to persistently remember learned values of c. (This can be seen by noting that if e_k ≈ 0 for k ≥ j we have ĉ_k ≈ ĉ_j for all k ≥ j.) As a consequence ĉ_k might not be able to adapt quickly enough to accommodate changes in the statistical behavior of the signals y_k and x_k. The Leaky LMS Algorithm attempts to rectify this behavior by penalizing the size of c_k by the addition of the weighting term γ‖c‖² to the right-hand side (RHS) of equation (11) for γ > 0:

    ˆl(c) = |e_k|² + γ‖c‖².    (13)

(a) Derive the Leaky LMS Algorithm from equations (12) and (13). Write the RHS of the algorithm in two ways: first in terms of ĉ_k and e_k, and then in terms of ĉ_k, x_k, and y_k.
(b) Show that the resulting algorithm forgets the value of ĉ_j for k ≥ j when e_k = 0 for k ≥ j and α and γ are chosen appropriately.⁷ What happens to the leakiness if we set γ = 0? What happens if we set the product αγ = 1?

⁶ For simplicity, it is assumed that the parameter space is Euclidean so that Ω_c = I and that α is constant.
⁷ The parameter γ is called a forgetting factor and allows learned values of c to leak to zero when the filtering error is zero.
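A quick simulation of Problem 13 (a sketch, not the exam solution). It assumes the update that follows from (12)-(13) under the usual complex-gradient convention, ĉ_{k+1} = (1 − αγ)ĉ_k + α x_k ē_k, with hypothetical filter length and step sizes; once e_k = 0, the estimate decays geometrically by the factor (1 − αγ) per step, which is the "forgetting" behavior of part (b).

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4                                  # hypothetical filter length
c_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
alpha, gamma = 0.05, 0.01              # illustrative step size and leak factor

# Adaptation phase: noiseless data y_k = c_true^H x_k.
c_hat = np.zeros(n, dtype=complex)
for _ in range(2000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = np.vdot(c_true, x)             # np.vdot conjugates its first argument: c^H x
    e = y - np.vdot(c_hat, x)          # e_k = y_k - c_hat^H x_k
    # Leaky LMS update: c <- (1 - alpha*gamma) c + alpha x conj(e)
    c_hat = (1 - alpha * gamma) * c_hat + alpha * x * np.conj(e)

err = np.linalg.norm(c_hat - c_true)   # small, but biased away from c_true by the leak

# Forgetting phase: zero input gives e_k = 0, so the weights decay geometrically.
c_before = c_hat.copy()
for _ in range(100):
    c_hat = (1 - alpha * gamma) * c_hat   # e_k = 0 term drops out

print(err, np.linalg.norm(c_hat) / np.linalg.norm(c_before))
```

Setting γ = 0 removes the (1 − αγ) contraction entirely (ordinary LMS, no forgetting), while αγ = 1 zeroes the memory of ĉ_k in a single step, matching the limiting cases asked about in part (b).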
More informationExample Linear Algebra Competency Test
Example Linear Algebra Competency Test The 4 questions below are a combination of True or False, multiple choice, fill in the blank, and computations involving matrices and vectors. In the latter case,
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationTHE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR
THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal
More informationMay 9, 2014 MATH 408 MIDTERM EXAM OUTLINE. Sample Questions
May 9, 24 MATH 48 MIDTERM EXAM OUTLINE This exam will consist of two parts and each part will have multipart questions. Each of the 6 questions is worth 5 points for a total of points. The two part of
More informationLinear Algebra using Dirac Notation: Pt. 2
Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018
More informationProposition 42. Let M be an m n matrix. Then (32) N (M M)=N (M) (33) R(MM )=R(M)
RODICA D. COSTIN. Singular Value Decomposition.1. Rectangular matrices. For rectangular matrices M the notions of eigenvalue/vector cannot be defined. However, the products MM and/or M M (which are square,
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationMath 307 Learning Goals. March 23, 2010
Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent
More information18.06 Professor Johnson Quiz 1 October 3, 2007
18.6 Professor Johnson Quiz 1 October 3, 7 SOLUTIONS 1 3 pts.) A given circuit network directed graph) which has an m n incidence matrix A rows = edges, columns = nodes) and a conductance matrix C [diagonal
More informationMath 302 Outcome Statements Winter 2013
Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More informationMultiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30: 06, 37 375 DOI 0.98/FIL67M Published by Faculty of Sciences Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Multiplicative Perturbation Bounds of the Group
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationECE 275A Homework 6 Solutions
ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationThe University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.
The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationA Quick Tour of Linear Algebra and Optimization for Machine Learning
A Quick Tour of Linear Algebra and Optimization for Machine Learning Masoud Farivar January 8, 2015 1 / 28 Outline of Part I: Review of Basic Linear Algebra Matrices and Vectors Matrix Multiplication Operators
More information(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =
. (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)
More informationLeast Sparsity of p-norm based Optimization Problems with p > 1
Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from
More informationDEPARTMENT OF MATHEMATICS
DEPARTMENT OF MATHEMATICS. Points: 4+7+4 Ma 322 Solved First Exam February 7, 207 With supplements You are given an augmented matrix of a linear system of equations. Here t is a parameter: 0 4 4 t 0 3
More information15 Singular Value Decomposition
15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More information2018 Fall 2210Q Section 013 Midterm Exam II Solution
08 Fall 0Q Section 0 Midterm Exam II Solution True or False questions points 0 0 points) ) Let A be an n n matrix. If the equation Ax b has at least one solution for each b R n, then the solution is unique
More informationMath 224, Fall 2007 Exam 3 Thursday, December 6, 2007
Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank
More informationComputation. For QDA we need to calculate: Lets first consider the case that
Computation For QDA we need to calculate: δ (x) = 1 2 log( Σ ) 1 2 (x µ ) Σ 1 (x µ ) + log(π ) Lets first consider the case that Σ = I,. This is the case where each distribution is spherical, around the
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationReview of Linear Algebra
Review of Linear Algebra Dr Gerhard Roth COMP 40A Winter 05 Version Linear algebra Is an important area of mathematics It is the basis of computer vision Is very widely taught, and there are many resources
More informationRecall the convention that, for us, all vectors are column vectors.
Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists
More informationMATH36001 Generalized Inverses and the SVD 2015
MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications
More informationLinear Algebra (Review) Volker Tresp 2018
Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c
More information