
EECS 16B Designing Information Devices and Systems II
Fall 2016  Murat Arcak and Michel Maharbiz  Homework 11

This homework is due November 14, 2016, at Noon.

1. Homework process and study group

(a) Who else did you work with on this homework? List names and student IDs. (In case of a homework party, you can also just describe the group.)

(b) How long did you spend working on this homework? How did you approach it?

2. Lagrange interpolation by polynomials

Given n distinct points and the corresponding evaluations of a function f(x), namely (x_i, f(x_i)) for 0 ≤ i ≤ n - 1, the Lagrange interpolating polynomial is the polynomial of least degree that passes through all of the given points. It is given by

P_n(x) = \sum_{i=0}^{n-1} f(x_i) L_i(x),

where

L_i(x) = \prod_{j=0, j \ne i}^{n-1} \frac{x - x_j}{x_i - x_j} = \frac{x - x_0}{x_i - x_0} \cdots \frac{x - x_{i-1}}{x_i - x_{i-1}} \cdot \frac{x - x_{i+1}}{x_i - x_{i+1}} \cdots \frac{x - x_{n-1}}{x_i - x_{n-1}}.

Here is an example: for the two data points (x_0, f(x_0)) = (0, 4) and (x_1, f(x_1)) = (-1, -3), we have

L_0(x) = \frac{x - x_1}{x_0 - x_1} = \frac{x - (-1)}{0 - (-1)} = x + 1

and

L_1(x) = \frac{x - x_0}{x_1 - x_0} = \frac{x - 0}{(-1) - 0} = -x.

Then

P_2(x) = f(x_0) L_0(x) + f(x_1) L_1(x) = 4(x + 1) + (-3)(-x) = 7x + 4.

We can sketch these equations on the 2D plane as follows (figure omitted in this transcription).

(a) Given three data points, (2, 3), (0, -1), and (-1, -6), find a polynomial f(x) = ax^2 + bx + c fitting the three points. Do this by solving a system of linear equations for the unknowns a, b, c. Is this polynomial unique?

(b) Like the monomial basis {1, x, x^2, x^3, ...}, the set {L_i(x)} is a new basis for the subspace of polynomials of degree n - 1 or lower. P_n(x) is the sum of these scaled basis polynomials. Find the L_i(x) corresponding to the three sample points in (a). Show your steps.
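The worked two-point example above can be checked numerically. Below is a short sketch (in Python with numpy; the tooling choice is an assumption, not part of the assignment) that builds the Lagrange basis polynomials for the sample points (0, 4) and (-1, -3) and verifies that the interpolant agrees with 7x + 4 on a test grid.

import numpy as np

def lagrange_basis(xs, i, x):
    # Evaluate L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j) at the point(s) x.
    factors = [(x - xs[j]) / (xs[i] - xs[j]) for j in range(len(xs)) if j != i]
    return np.prod(factors, axis=0)

def lagrange_interp(xs, ys, x):
    # Evaluate P_n(x) = sum_i f(x_i) L_i(x).
    return sum(ys[i] * lagrange_basis(xs, i, x) for i in range(len(xs)))

xs = np.array([0.0, -1.0])   # sample locations x_0, x_1 from the example above
ys = np.array([4.0, -3.0])   # sample values f(x_0), f(x_1)
grid = np.linspace(-2.0, 2.0, 9)
print(np.allclose(lagrange_interp(xs, ys, grid), 7 * grid + 4))   # prints True

The same two functions also handle the three points in part (a); only xs and ys change.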

(c) Find the Lagrange polynomial P_n(x) for the three points in (a). Compare the result to the answer in (a). Are they different from each other? Why or why not?

(d) Sketch P_n(x) and each f(x_i) L_i(x) on the 2D plane.

(e) Show that the Lagrange interpolating polynomial must pass through all of the given points; in other words, show that P_n(x_i) = f(x_i) for all x_i. Do this in general, not just for the example above.

3. The vector space of polynomials

A polynomial of degree at most n in a single variable can be written as

p(x) = p_0 + p_1 x + p_2 x^2 + ... + p_n x^n,

where we assume that the coefficients p_0, p_1, ..., p_n are real. Let P_n be the vector space of all polynomials of degree at most n.

(a) Consider the representation of p ∈ P_n as the vector of its coefficients in R^{n+1}:

p = [p_0  p_1  ...  p_n]^T.

Show that the set B_n = {1, x, x^2, ..., x^n} forms a basis of P_n by showing the following.

- Every element of P_n can be expressed as a linear combination of elements of B_n.
- No element of B_n can be expressed as a linear combination of the other elements of B_n.

(Hint: Use the aspect of the fundamental theorem of algebra which says that a nonzero polynomial of degree n has at most n roots, and use a proof by contradiction.)

(b) Suppose that the coefficients p_0, ..., p_n of p are unknown. To determine the coefficients, we evaluate p at n + 1 points x_0, ..., x_n. Suppose that p(x_i) = y_i for 0 ≤ i ≤ n. Find a matrix V, in terms of the x_i, such that

V \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_n \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{bmatrix}.
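Part (b) asks for an evaluation matrix V. As a small numerical illustration (Python with numpy, an assumed tool rather than part of the assignment), the sketch below uses one natural candidate for V, the matrix whose (i, j) entry is x_i^j, and applies it to the two-point example from problem 2: solving the resulting system recovers the coefficients 4 and 7 of 7x + 4. Treat this particular choice of V as an assumption to check against your own derivation.

import numpy as np

xs = np.array([0.0, -1.0])   # evaluation points x_0, x_1 (reusing the problem 2 example)
ys = np.array([4.0, -3.0])   # observed values y_0 = p(x_0), y_1 = p(x_1)

# Candidate evaluation matrix with entries V[i, j] = xs[i] ** j (an illustrative assumption).
V = np.vander(xs, N=len(xs), increasing=True)

coeffs = np.linalg.solve(V, ys)   # recover [p_0, p_1]
print(coeffs)                     # prints [4. 7.], i.e. p(x) = 4 + 7x, matching 7x + 4 above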

(c) For the case where n = 2, compute the determinant of V and show that it is equal to

det(V) = \prod_{0 \le i < j \le n} (x_j - x_i).

Conclude that if x_0, ..., x_n are distinct, then we can uniquely recover the coefficients p_0, ..., p_n of p. This holds for general n > 2 as well, but consider only the case n = 2 for now.

(d) (optional) Argue using Lagrange interpolation that such matrices V must indeed always be invertible if the x_i are distinct.

(e) We can define an inner product on P_n by setting

\langle p, q \rangle = \int_{-1}^{1} p(x) q(x) \, dx.

Show that this satisfies the following properties of a real inner product. (We would have to put a complex conjugate on p if we wanted a complex inner product.)

- \langle p, p \rangle \ge 0, with equality if and only if p = 0.
- For all a ∈ R, \langle ap, q \rangle = a \langle p, q \rangle.
- \langle p, q \rangle = \langle q, p \rangle.

(f) Now that we have an inner product on P_n, we can consider orthonormality. If B = {b_0, b_1, ..., b_n} is a basis for P_n, we say that it is an orthonormal basis if

- \langle b_i, b_j \rangle = 0 whenever i ≠ j, and
- \langle b_i, b_i \rangle = 1.

We can also compute projections. For any p, u ∈ P_n with u ≠ 0, the projection of p onto u is

proj_u p = \frac{\langle p, u \rangle}{\langle u, u \rangle} u.

Consider the case where n = 2. From part (a), we have the basis {1, x, x^2} for P_2. Convert this into an orthonormal basis using the Gram-Schmidt process. (A numerical sketch of this process appears after part (g) below.)

(g) (optional) An alternative inner product could be placed on real polynomials if we simply represented them by the sequence of their evaluations at 0, 1, ..., n and adopted the standard Euclidean inner product on sequences of real numbers. Can you give an example of an orthonormal basis with this alternative inner product?
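Here is the numerical sketch referenced in part (f): a Gram-Schmidt pass over {1, x, x^2} under the inner product defined in part (e), written in Python with numpy (an assumed tool, not part of the assignment). It only verifies the orthonormality conditions numerically; the closed-form orthonormal basis is still yours to derive.

import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q):
    # <p, q> = integral over [-1, 1] of p(x) q(x) dx, computed from the antiderivative.
    F = (p * q).integ()
    return F(1.0) - F(-1.0)

def gram_schmidt(polys):
    # Orthonormalize a list of polynomials with respect to inner().
    ortho = []
    for p in polys:
        v = p
        for q in ortho:
            v = v - q * inner(v, q)          # q is already normalized, so <q, q> = 1
        norm = float(np.sqrt(inner(v, v)))
        ortho.append(v * (1.0 / norm))
    return ortho

basis = [Polynomial([1.0]), Polynomial([0.0, 1.0]), Polynomial([0.0, 0.0, 1.0])]  # 1, x, x^2
q0, q1, q2 = gram_schmidt(basis)
# The matrix of pairwise inner products should be (numerically) the 3x3 identity.
print(np.round([[inner(a, b) for b in (q0, q1, q2)] for a in (q0, q1, q2)], 6))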

4. Sampling a continuous-time control system to get a discrete-time control system

The goal of this problem is to help us understand how, given a linear continuous-time system

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t),

we can sample it every T seconds and get a discrete-time form of the control system. The discretization of the state equations is a sampled discrete time-invariant system given by

x_d(k + 1) = A_d x_d(k) + B_d u_d(k)    (1)
y_d(k) = C_d x_d(k).    (2)

Here, x_d(k) denotes x(kT); this is a snapshot of the state. Similarly, the output y_d(k) is a snapshot of y(kT). The relationship between the discrete-time input u_d(k) and the actual input applied to the physical continuous-time system is that u(t) = u_d(k) for all t ∈ [kT, (k + 1)T). While it is clear from the above that the discrete-time state and the continuous-time state have the same dimensions, and similarly for the control inputs, what is not clear is what the relationship should be between the matrices A, B and the matrices A_d, B_d. By contrast, it is immediately clear that C_d = C.

(a) Argue intuitively why, if the continuous-time system is stable, the corresponding discrete-time system should be stable too. Similarly, argue intuitively why, if the discrete-time system is unstable, the continuous-time system should also be unstable.

(b) Consider the scalar case where A and B are just constants. What are the new constants A_d and B_d? (HINT: Think about solving this one step at a time. Every time a new control is applied, this is a simple differential equation with a new constant input. How does \dot{x}(t) = λ x(t) + u evolve with time if it starts at x(0)? Notice that x(0) e^{λt} + (u/λ)(e^{λt} - 1) seems to solve this differential equation.)

(c) Consider now the case where A is a diagonal matrix and B is some general matrix. What are the new matrices A_d and B_d?

(d) Consider the case where A is a diagonalizable matrix. Use a change of coordinates to figure out the new matrices A_d and B_d.

(e) Consider a general diagonal matrix A with distinct eigenvalues and a vector B = b that consists of all 1s. Is the pair (A, b) necessarily controllable? Prove that it must be, or show a case where it isn't. (HINT: Polynomials.)

(f) Now consider a 2 × 2 diagonal matrix A that has the same eigenvalue repeated twice and a vector B = b. Is it ever possible for the pair (A, b) to be controllable? Show such a case or prove that it cannot exist.

(g) Now consider the case of complex eigenvalues for a diagonal matrix A (with all the eigenvalues distinct) and a vector B = b that consists of all 1s. Can you find a case in which (A, b) is controllable but (A_d, B_d) is not controllable? What has to be true about the sampling period T, in relation to the eigenvalues, for this to happen?
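As a numerical companion to the hint in part (b), here is a minimal sketch (Python with numpy; the tooling and the values λ = -0.5, u = 2, T = 0.1, x(0) = 1 are illustrative assumptions, not taken from the assignment). It integrates \dot{x} = λx + u over one sampling interval with the input held constant, using small forward-Euler steps, and compares the result against the candidate expression x(0) e^{λT} + (u/λ)(e^{λT} - 1) from the hint.

import numpy as np

lam, u, T, x0 = -0.5, 2.0, 0.1, 1.0   # illustrative values (assumptions, not from the problem)
dt = 1e-5                             # small step size for forward-Euler integration

# Integrate xdot = lam * x + u over one sampling interval [0, T] with constant input u.
x = x0
for _ in range(int(round(T / dt))):
    x = x + dt * (lam * x + u)

# Candidate closed-form value at t = T, taken from the hint in part (b).
x_hint = x0 * np.exp(lam * T) + (u / lam) * (np.exp(lam * T) - 1.0)

print(x, x_hint)   # the two values agree to several decimal places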

5. Aliasing intuition in continuous time

The concept of aliasing is intuitively about having a signal of interest whose samples look identical to those of a different signal of interest, creating an ambiguity as to which signal is actually present. While the concept of aliasing is quite general, it is easiest to understand in the context of sinusoidal signals.

(a) Consider the two signals

x_1(t) = a cos(2π f_0 t + φ)

and

x_2(t) = a cos(2π (f_0 + m f_s) t + φ),

where f_s = 1/T_s. Are these two signals the same or different when viewed as functions of continuous time t?

(b) Consider the two signals from the previous part. Both will be sampled with the sampling interval T_s. What will be the corresponding discrete-time signals x_{d,1}[n] and x_{d,2}[n]? (The [n] refers to the nth sample taken; this is the sample taken at real time n T_s.) Are they the same or different?

(c) What is the sinusoid a cos(ωt + φ) that has the smallest ω ≥ 0 but still agrees at all of its samples (taken every T_s seconds) with x_1(t) above?

Contributors: Ioannis Konstantakopoulos.
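As a closing numerical check for problem 5, here is a small sketch (Python with numpy; the tooling and the values a = 1, φ = 0.7, f_0 = 3, f_s = 10, m = 2 are illustrative assumptions, with m taken to be an integer). It evaluates x_1 and x_2 both on a fine time grid and at the sample instants n T_s, so the continuous-time waveforms can be compared against their samples.

import numpy as np

a, phi = 1.0, 0.7               # amplitude and phase (illustrative assumptions)
f0, fs, m = 3.0, 10.0, 2        # f_0, f_s = 1/T_s, and an integer m (illustrative assumptions)
Ts = 1.0 / fs

def x1(t):
    return a * np.cos(2 * np.pi * f0 * t + phi)

def x2(t):
    return a * np.cos(2 * np.pi * (f0 + m * fs) * t + phi)

t_fine = np.linspace(0.0, 1.0, 2001)   # fine time grid over one second
t_samp = np.arange(11) * Ts            # sample instants n * T_s for n = 0, ..., 10

print(np.allclose(x1(t_fine), x2(t_fine)))   # False: the continuous-time waveforms differ
print(np.allclose(x1(t_samp), x2(t_samp)))   # True: the samples coincide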