Exercise 8: Frequency Response of MIMO Systems


8.1 Singular Value Decomposition (SVD)

The Singular Value Decomposition plays a central role in MIMO frequency response analysis. Let's recall some concepts from the course "Lineare Algebra I/II".

8.1.1 Preliminary Definitions

The induced norm \|A\| of a matrix that describes a linear function like

    y = A u    (8.1)

is defined as

    \|A\| = \max_{u \neq 0} \frac{\|y\|}{\|u\|} = \max_{\|u\| = 1} \|y\|.    (8.2)

Remark. In the course "Lineare Algebra I/II" you learned that there are many different norms, so the expression \|A\| may look too generic. However, in this course we will always use the Euclidean norm \|A\| = \|A\|_2. This norm is defined as

    \|A\|_2 = \sqrt{\mu_{\max}}, \qquad \mu_{\max} = \text{maximal eigenvalue of } A^* A,    (8.3)

where A^* is the conjugate transpose or Hermitian transpose of the matrix A. This is defined as

    A^* = (\operatorname{conj}(A))^T.    (8.4)

Example 1. Let

    A = \begin{pmatrix} 1 & i \\ 1+i & 2i \end{pmatrix}.

Then it holds

    A^* = (\operatorname{conj}(A))^T = \begin{pmatrix} 1 & -i \\ 1-i & -2i \end{pmatrix}^T = \begin{pmatrix} 1 & 1-i \\ -i & -2i \end{pmatrix}.

Remark. If we deal with A ∈ R^{n×m}, of course A^* = A^T holds.

One can list a few useful properties of the Euclidean norm:

(i) \|A\|_2 = \|A^T\|_2.    (8.5)
    Remember: A^T A is always a square matrix!

(ii) If A is orthogonal: \|A\|_2 = 1.    (8.6)

(iii) If A is symmetric: \|A\|_2 = \max_i |\lambda_i|.    (8.7)

(iv) If A is invertible: \|A^{-1}\|_2 = \frac{1}{\sqrt{\mu_{\min}}}, \qquad \mu_{\min} = \text{minimal eigenvalue of } A^* A.    (8.8)

(v) If A is invertible and symmetric: \|A^{-1}\|_2 = \frac{1}{\min_i |\lambda_i|},    (8.9)

where λ_i are the eigenvalues of A.

In order to define the SVD we have to go a step further. Let's consider a matrix A and the linear function (8.1). It holds

    \|A\|_2^2 = \max_{\|u\|=1} \|y\|^2 = \max_{\|u\|=1} (A u)^* (A u) = \max_{\|u\|=1} u^* A^* A u = \max_i \mu_i(A^* A) = \max_i \sigma_i^2,    (8.10)

where σ_i are the singular values of the matrix A. They are defined as

    \sigma_i = \sqrt{\mu_i},    (8.11)

where μ_i are the eigenvalues of A^* A. Combining (8.2) and (8.10) one gets

    \sigma_{\min}(A) \le \frac{\|y\|}{\|u\|} \le \sigma_{\max}(A).    (8.12)

8.1.2 Singular Value Decomposition

Our goal is to write a general matrix A ∈ C^{p×m} as a product of three matrices U, Σ and V. It holds

    A = U \Sigma V^*, \qquad U \in \mathbb{C}^{p \times p}, \ \Sigma \in \mathbb{R}^{p \times m}, \ V \in \mathbb{C}^{m \times m}.    (8.13)

Remark. U and V are orthogonal (unitary in the complex case), Σ is a rectangular diagonal matrix.
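Remark. Relations (8.3), (8.10) and (8.11), i.e. the fact that the induced 2-norm equals the largest singular value, are easy to check numerically. A minimal MATLAB sketch, here simply reusing the matrix of Example 1 (any matrix works):

    % Check: norm(A,2) = sqrt(max eigenvalue of A'*A) = max singular value.
    % In MATLAB, A' is the conjugate transpose A*.
    A = [1 1i; 1+1i 2i];
    disp(sqrt(max(real(eig(A'*A)))))   % sqrt(mu_max)
    disp(norm(A, 2))                   % induced Euclidean norm of A
    disp(max(svd(A)))                  % largest singular value: same value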

Figure 1: Illustration of the singular values (the unit vectors u and their images y = A u).

Recipe (Kochrezept). Let A ∈ C^{p×m} be given:

(I) Compute all the eigenvalues and eigenvectors of the matrix A^* A ∈ C^{m×m} and sort them as

    \mu_1 \ge \mu_2 \ge \dots \ge \mu_r > \mu_{r+1} = \dots = \mu_m = 0.    (8.14)

(II) Compute an orthonormal basis from the eigenvectors v_i and write it in a matrix as

    V = (v_1 \ \dots \ v_m) \in \mathbb{C}^{m \times m}.    (8.15)

(III) We have already found the singular values: they are defined as

    \sigma_i = \sqrt{\mu_i} \qquad \text{for } i = 1, \dots, \min\{p, m\}.    (8.16)

We can then write Σ as

    \Sigma = \begin{pmatrix} \sigma_1 & & & 0 & \cdots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \sigma_p & 0 & \cdots & 0 \end{pmatrix} \in \mathbb{R}^{p \times m}, \qquad p < m,    (8.17)

or

    \Sigma = \begin{pmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_m \\ 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix} \in \mathbb{R}^{p \times m}, \qquad p > m.    (8.18)

(IV) One finds u_1, ..., u_r from

    u_i = \frac{1}{\sigma_i} A v_i \qquad \text{for all } i = 1, \dots, r \ (\text{i.e. for } \sigma_i \neq 0).    (8.19)

(V) If r < p one has to complete the basis u_1, ..., u_r (with additional orthonormal vectors) to obtain an orthonormal basis of C^p, with U = (u_1 ... u_p) orthogonal (unitary).

(VI) If you followed the previous steps, you can write A = U Σ V^*.

Motivation for the computation of Σ, U and V:

    A^* A = (U \Sigma V^*)^* (U \Sigma V^*) = V \Sigma^* U^* U \Sigma V^* = V \Sigma^* \Sigma V^* = V \Sigma^2 V^*.    (8.20)

This is nothing else than the diagonalization of the matrix A^* A: the columns of V are the eigenvectors of A^* A and the σ_i^2 its eigenvalues. For U:

    A A^* = (U \Sigma V^*)(U \Sigma V^*)^* = U \Sigma V^* V \Sigma^* U^* = U \Sigma \Sigma^* U^* = U \Sigma^2 U^*.    (8.21)

This is nothing else than the diagonalization of the matrix A A^*: the columns of U are the eigenvectors of A A^* and the σ_i^2 its eigenvalues.

Remark. In order to derive the previous two equations I used that U^{-1} = U^* (because U is orthogonal/unitary), V^{-1} = V^* (because V is orthogonal/unitary), and (A^* A)^* = A^* (A^*)^* = A^* A, i.e. the matrix A^* A is Hermitian (symmetric).

Remark. Since the matrix A^* A is always Hermitian and positive semidefinite, the singular values are always real (nonnegative) numbers.

Remark. The MATLAB command for the singular value decomposition is [U,S,V] = svd(A). One can compute A^T with transpose(A) and A^* with conj(transpose(A)); the two are equivalent for real matrices.

Remark. Although I have reported detailed information about the calculation of U and V, this won't be relevant for the exam. It is however good and useful to know the reasons behind this topic.
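Remark. The recipe can be followed step by step in MATLAB and compared against the built-in svd command. The sketch below uses an arbitrary 3×2 example matrix, chosen only for illustration:

    % Recipe (I)-(IV) versus the built-in svd().
    A = [1 2; 3 4; 5 6];                       % arbitrary 3x2 example, p > m
    [Vr, D]   = eig(A'*A);                     % (I) eigenvalues/eigenvectors of A'*A
    [mu, idx] = sort(real(diag(D)), 'descend');
    Vr  = Vr(:, idx);                          % (II) sorted orthonormal basis
    sig = sqrt(mu);                            % (III) singular values
    Ur  = (A*Vr) ./ sig';                      % (IV) u_i = A*v_i / sigma_i
    [U, S, V] = svd(A);
    disp(diag(S)'); disp(sig')                 % same singular values
    disp(norm(A - Ur*diag(sig)*Vr'))           % ~0: thin reconstruction of A
    % (V) would complete Ur to a square 3x3 U; it is not needed to rebuild A.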

Example 2. Let u be a unit vector, \|u\| = 1, of the form

    u = \begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix}.

The matrix M is given as

    M = \begin{pmatrix} 2 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}.

We know that the product of M and u defines a linear function:

    y = M u = \begin{pmatrix} 2 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix} = \begin{pmatrix} 2\cos(x) \\ \tfrac{1}{2}\sin(x) \end{pmatrix}.

We need the maximum of \|y\|. In order to avoid square roots, one can use the fact that the x that maximizes \|y\| also maximizes \|y\|^2:

    \|y\|^2 = 4\cos^2(x) + \tfrac{1}{4}\sin^2(x)

has its extrema where

    \frac{d\|y\|^2}{dx} = -8\cos(x)\sin(x) + \tfrac{1}{2}\sin(x)\cos(x) \stackrel{!}{=} 0 \qquad \Rightarrow \qquad x \in \left\{0, \tfrac{\pi}{2}, \pi, \tfrac{3\pi}{2}\right\}.

Inserting back one gets the maximal and minimal \|y\|:

    \|y\|_{\max} = 2 \ (\text{at } x = 0, \pi), \qquad \|y\|_{\min} = \tfrac{1}{2} \ (\text{at } x = \tfrac{\pi}{2}, \tfrac{3\pi}{2}).

The singular values can be calculated with M^* M:

    M^* M = M^T M = \begin{pmatrix} 4 & 0 \\ 0 & \tfrac{1}{4} \end{pmatrix} \qquad \Rightarrow \qquad \lambda_i = \left\{4, \tfrac{1}{4}\right\} \qquad \Rightarrow \qquad \sigma_i = \left\{2, \tfrac{1}{2}\right\}.

As stated before, one can see that \|y\| ∈ [σ_min, σ_max]. The matrix U has the eigenvectors of M M^* as columns and the matrix V has the eigenvectors of M^* M as columns. In this case M^* M = M M^*, hence the two matrices are equal. Since this product is a diagonal matrix, one should recall from the theory that the eigenvectors are easy to determine: they are nothing else than the standard basis vectors. This means

    U = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} 2 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}, \qquad V = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Interpretation: referring to Figure 2, the maximal amplification occurs for an excitation along v_1 = V(:,1) and has direction u_1 = U(:,1), i.e. the vector u is doubled (σ_max = 2). The minimal amplification occurs along v_2 = V(:,2) with direction u_2 = U(:,2), i.e. the vector u is halved (σ_min = 1/2).
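Remark. Example 2 can be reproduced with a small brute-force sweep in MATLAB, only as a sanity check of the values found above:

    % Gain of y = M*u over all unit vectors u = (cos x, sin x)'.
    M = [2 0; 0 1/2];
    x = linspace(0, 2*pi, 1000);
    g = zeros(size(x));
    for k = 1:numel(x)
        g(k) = norm(M*[cos(x(k)); sin(x(k))]);
    end
    fprintf('max gain %.4f, min gain %.4f\n', max(g), min(g))   % 2 and 0.5
    disp(svd(M)')                                               % [2 0.5]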

Figure 2: Illustration of the singular value decomposition (unit vectors u and their images y = M u).

Example 3. Let

    A = \begin{pmatrix} 3 & 0 \\ 0 & 3 \\ \sqrt{3} & 2 \end{pmatrix}

be given. Question: find the singular values of A and write down the matrix Σ.

Solution. Let's compute A^T A:

    A^T A = \begin{pmatrix} 3 & 0 & \sqrt{3} \\ 0 & 3 & 2 \end{pmatrix} \begin{pmatrix} 3 & 0 \\ 0 & 3 \\ \sqrt{3} & 2 \end{pmatrix} = \begin{pmatrix} 12 & 2\sqrt{3} \\ 2\sqrt{3} & 13 \end{pmatrix}.

The eigenvalues follow from

    \det(A^T A - \lambda I) = \lambda^2 - 25\lambda + 144 = (\lambda - 16)(\lambda - 9) = 0 \qquad \Rightarrow \qquad \lambda_1 = 16, \quad \lambda_2 = 9.

The singular values are

    \sigma_1 = 4, \qquad \sigma_2 = 3.

We then write

    \Sigma = \begin{pmatrix} 4 & 0 \\ 0 & 3 \\ 0 & 0 \end{pmatrix}.
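Remark. A quick MATLAB check of Example 3:

    % Example 3: eigenvalues of A'*A and singular values of A.
    A = [3 0; 0 3; sqrt(3) 2];
    disp(eig(A'*A)')    % 9 and 16
    disp(svd(A)')       % 4 and 3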

Example 4. A transfer function G(s) is given as

    G(s) = \begin{pmatrix} \dfrac{1}{s+3} & \dfrac{s+1}{s+3} \\[2mm] \dfrac{s+1}{s+3} & \dfrac{1}{s+3} \end{pmatrix}.

Find the singular values of G(s) at ω = 1 rad/s.

Solution. The transfer function G(s) evaluated at ω = 1 rad/s has the form

    G(j) = \begin{pmatrix} \dfrac{1}{j+3} & \dfrac{j+1}{j+3} \\[2mm] \dfrac{j+1}{j+3} & \dfrac{1}{j+3} \end{pmatrix}.

In order to calculate the singular values, we have to compute the eigenvalues of H = G^* G:

    H = G^* G = \begin{pmatrix} \dfrac{1}{3-j} & \dfrac{1-j}{3-j} \\[2mm] \dfrac{1-j}{3-j} & \dfrac{1}{3-j} \end{pmatrix} \begin{pmatrix} \dfrac{1}{j+3} & \dfrac{j+1}{j+3} \\[2mm] \dfrac{j+1}{j+3} & \dfrac{1}{j+3} \end{pmatrix} = \begin{pmatrix} \tfrac{3}{10} & \tfrac{2}{10} \\[1mm] \tfrac{2}{10} & \tfrac{3}{10} \end{pmatrix}.

For the eigenvalues it holds

    \det(H - \lambda I) = \left(\tfrac{3}{10} - \lambda\right)^2 - \left(\tfrac{2}{10}\right)^2 = \lambda^2 - \tfrac{6}{10}\lambda + \tfrac{5}{100} = \left(\lambda - \tfrac{1}{10}\right)\left(\lambda - \tfrac{5}{10}\right) = 0.

It follows

    \lambda_1 = \tfrac{1}{10}, \quad \lambda_2 = \tfrac{1}{2} \qquad \Rightarrow \qquad \sigma_1 = \tfrac{1}{\sqrt{10}} \approx 0.3162, \quad \sigma_2 = \tfrac{1}{\sqrt{2}} \approx 0.7071.
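Remark. The result can be verified in MATLAB by evaluating the matrix directly at s = j (or, assuming the Control System Toolbox is available, via tf and evalfr):

    % Example 4: singular values of G(s) at omega = 1 rad/s.
    s = 1i;
    G = [ 1/(s+3)      (s+1)/(s+3);
          (s+1)/(s+3)  1/(s+3)     ];
    disp(svd(G)')                     % approx. [0.7071 0.3162]
    disp(sqrt(real(eig(G'*G)))')      % same values via H = G'*G
    % With the Control System Toolbox one could instead write:
    %   s = tf('s'); Gtf = [1/(s+3) (s+1)/(s+3); (s+1)/(s+3) 1/(s+3)];
    %   svd(evalfr(Gtf, 1i))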

8.2 Frequency Responses

As we learned for SISO systems, if one excites a system with a harmonic signal

    u(t) = h(t)\cos(\omega t),    (8.22)

the response after a long time is again a harmonic function with the same frequency ω:

    y_\infty(t) = |P(j\omega)| \cos\big(\omega t + \angle P(j\omega)\big).    (8.23)

One can generalize this and apply it to MIMO systems. With the assumption p = m, i.e. an equal number of inputs and outputs, one excites the system with

    u(t) = h(t) \begin{pmatrix} \mu_1 \cos(\omega t + \varphi_1) \\ \vdots \\ \mu_m \cos(\omega t + \varphi_m) \end{pmatrix}    (8.24)

and gets

    y_\infty(t) = \begin{pmatrix} \nu_1 \cos(\omega t + \psi_1) \\ \vdots \\ \nu_m \cos(\omega t + \psi_m) \end{pmatrix}.    (8.25)

Let's define two diagonal matrices and two vectors:

    \Phi = \operatorname{diag}(\varphi_1, \dots, \varphi_m) \in \mathbb{R}^{m \times m}, \qquad \Psi = \operatorname{diag}(\psi_1, \dots, \psi_m) \in \mathbb{R}^{m \times m},    (8.26)

    \mu = (\mu_1 \ \dots \ \mu_m)^T, \qquad \nu = (\nu_1 \ \dots \ \nu_m)^T.    (8.27)

With these one can compute the Laplace transforms of the two signals as

    U(s) = e^{\Phi \frac{s}{\omega}}\, \mu\, \frac{s}{s^2 + \omega^2}    (8.28)

and

    Y(s) = e^{\Psi \frac{s}{\omega}}\, \nu\, \frac{s}{s^2 + \omega^2}.    (8.29)

With the general equation for a system one gets

    Y(s) = P(s)\, U(s)
    e^{\Psi \frac{s}{\omega}}\, \nu\, \frac{s}{s^2+\omega^2} = P(s)\, e^{\Phi \frac{s}{\omega}}\, \mu\, \frac{s}{s^2+\omega^2}.

Evaluating at s = jω and cancelling the common terms yields

    e^{\Psi \frac{j\omega}{\omega}}\, \nu = P(j\omega)\, e^{\Phi \frac{j\omega}{\omega}}\, \mu \qquad \Rightarrow \qquad e^{\Psi j}\, \nu = P(j\omega)\, e^{\Phi j}\, \mu.    (8.30)

We then recall the induced norm (8.2) for the matrix of a linear transformation y = A u. Here it holds

    \|P(j\omega)\| = \max_{\|e^{\Phi j}\mu\| \neq 0} \frac{\|e^{\Psi j}\nu\|}{\|e^{\Phi j}\mu\|} = \max_{\|e^{\Phi j}\mu\| = 1} \|e^{\Psi j}\nu\|.    (8.31)

Since

    \|e^{\Phi j}\mu\| = \|\mu\|    (8.32)

and

    \|e^{\Psi j}\nu\| = \|\nu\|,    (8.33)

one gets

    \|P(j\omega)\| = \max_{\|\mu\| \neq 0} \frac{\|\nu\|}{\|\mu\|} = \max_{\|\mu\| = 1} \|\nu\|.    (8.34)

Here one should get a feeling for why we introduced the singular value decomposition. From the theory we have learned, it is clear that

    \sigma_{\min}(P(j\omega)) \le \|\nu\| \le \sigma_{\max}(P(j\omega)) \qquad \text{if } \|\mu\| = 1,    (8.35)

and

    \sigma_{\min}(P(j\omega)) \le \frac{\|\nu\|}{\|\mu\|} \le \sigma_{\max}(P(j\omega)),    (8.36)

with σ_i the singular values of P(jω). These are worst-case ranges, and it is important to notice that there is no exact formula of the form ν = f(μ).

8.2.1 Maximal and minimal Gain

You are given a singular value decomposition

    P(j\omega) = U \Sigma V^*.    (8.37)

One can read several pieces of information out of this decomposition: the maximal/minimal gain is reached with an excitation in the direction of the column vectors of V; the response of the system then lies in the direction of the corresponding column vectors of U. Let's look at an example and try to understand how to use this information.

Example 5. We consider a system with m = 2 inputs and p = 3 outputs. We are given its singular value decomposition P(j5) = U Σ V^* at ω = 5 rad/s, with

    \Sigma = \begin{pmatrix} 0.467 & 0 \\ 0 & 0.063 \\ 0 & 0 \end{pmatrix}

and unitary matrices V ∈ C^{2×2} and U ∈ C^{3×3}. For the maximal singular value σ_max = 0.467 the relevant singular vectors are the first columns V(:,1) and U(:,1); writing their entries as magnitude and phase gives the excitation and response directions used below.

The maximal gain is then reached with the excitation

    u(t) = \begin{pmatrix} |v_{11}| \cos(5t + \angle v_{11}) \\ |v_{21}| \cos(5t + \angle v_{21}) \end{pmatrix},

where v_{11}, v_{21} are the entries of V(:,1). The response of the system is then

    y_\infty(t) = \sigma_{\max} \begin{pmatrix} |u_{11}| \cos(5t + \angle u_{11}) \\ |u_{21}| \cos(5t + \angle u_{21}) \\ |u_{31}| \cos(5t + \angle u_{31}) \end{pmatrix} = 0.467 \begin{pmatrix} |u_{11}| \cos(5t + \angle u_{11}) \\ |u_{21}| \cos(5t + \angle u_{21}) \\ |u_{31}| \cos(5t + \angle u_{31}) \end{pmatrix},

with u_{11}, u_{21}, u_{31} the entries of U(:,1). Since the three signals y_1(t), y_2(t) and y_3(t) are not in phase, the maximal gain is never reached exactly: one can show that

    \max_t \|y(t)\| \approx 0.460 < 0.467 = \sigma_{\max}.

The reason for this difference lies in the phase differences between y_1(t), y_2(t) and y_3(t). The same analysis can be carried out for σ_min.

8.2.2 Robustness and Disturbance Rejection

Let's redefine the matrix norm for transfer function matrices as

    \|G(s)\|_\infty = \max_\omega \Big( \max_i \ \sigma_i\big(G(j\omega)\big) \Big).    (8.38)

Remark. It holds

    \|G_1(s)\, G_2(s)\|_\infty \le \|G_1(s)\|_\infty\, \|G_2(s)\|_\infty.    (8.39)

As we did for SISO systems, we can summarize some important indicators for robustness and noise amplification:

                            SISO                              MIMO
    Robustness              μ = min_ω |1 + L(jω)|             μ = min_ω σ_min(I + L(jω))
    Noise amplification     ‖S‖_∞ = max_ω |S(jω)|             ‖S‖_∞ = max_ω σ_max(S(jω))
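Remark. In MATLAB the MIMO indicators of the table can be evaluated with a frequency sweep. The loop gain L(s) below is a hypothetical example, chosen only to illustrate the commands (Control System Toolbox assumed):

    % Robustness margin and noise amplification for an assumed 2x2 loop gain.
    s = tf('s');
    L = [1/(s+1) 2/(s+3); 0 1/(s+2)];          % hypothetical loop gain L(s)
    S = feedback(eye(2), L);                   % sensitivity S = (I + L)^(-1)
    w = logspace(-2, 3, 400);
    mu = inf; Smax = 0;
    for k = 1:numel(w)
        Lj = freqresp(L, w(k));  Sj = freqresp(S, w(k));
        mu   = min(mu,   min(svd(eye(2) + Lj(:,:,1))));   % min_w sigma_min(I + L(jw))
        Smax = max(Smax, max(svd(Sj(:,:,1))));            % max_w sigma_max(S(jw))
    end
    fprintf('mu = %.3f, ||S||_inf (sweep) = %.3f\n', mu, Smax)
    fprintf('||S||_inf (norm command) = %.3f\n', norm(S, inf))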

Example 6. Given the MIMO system

    P(s) = \begin{pmatrix} \dfrac{1}{s+3} & \dfrac{1}{s+1} \\[2mm] \dfrac{1}{s+1} & \dfrac{3}{s+2} \end{pmatrix}.

Starting at t = 0, the system is excited with the following input signal:

    u(t) = \begin{pmatrix} \cos(t) \\ \mu \cos(t + \varphi) \end{pmatrix}.

Find the parameters ϕ and μ such that in steady state the output signal

    y_\infty(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}

has y_1(t) equal to zero.

Solution. For a system excited with a harmonic input signal

    u(t) = \begin{pmatrix} \mu_1 \cos(\omega t + \varphi_1) \\ \mu_2 \cos(\omega t + \varphi_2) \end{pmatrix},

the output signal y(t), after a transient phase, will also be harmonic and hence have the form

    y_\infty(t) = \begin{pmatrix} \nu_1 \cos(\omega t + \psi_1) \\ \nu_2 \cos(\omega t + \psi_2) \end{pmatrix}.

As we have learned, it holds

    e^{\Psi j}\, \nu = P(j\omega)\, e^{\Phi j}\, \mu.

For the first component one gets

    e^{\psi_1 j}\, \nu_1 = P_{11}(j\omega)\, e^{\varphi_1 j}\, \mu_1 + P_{12}(j\omega)\, e^{\varphi_2 j}\, \mu_2.

For y_1(t) = 0 to hold we must have ν_1 = 0. In the given case, some parameters can be directly read off from the signals:

    \mu_1 = 1, \qquad \varphi_1 = 0, \qquad \omega = 1.

With the given transfer functions one gets

    0 = \frac{1}{j+3} + \frac{\mu}{j+1}\, e^{\varphi j}
    0 = \frac{3-j}{10} + \mu\, \frac{1-j}{2}\, e^{\varphi j}
    0 = \frac{3-j}{10} + \mu\, \frac{1-j}{2}\, (\cos\varphi + j\sin\varphi)
    0 = \frac{3}{10} - \frac{1}{10} j + \frac{\mu}{2}\big[(\cos\varphi + \sin\varphi) + j(\sin\varphi - \cos\varphi)\big].

Splitting into real and imaginary parts, one gets two equations:

    \frac{\mu}{2}(\cos\varphi + \sin\varphi) + \frac{3}{10} = 0
    \frac{\mu}{2}(\sin\varphi - \cos\varphi) - \frac{1}{10} = 0.

Adding and subtracting the two equations one reaches two simpler equations:

    \mu \sin\varphi + \frac{1}{5} = 0
    \mu \cos\varphi + \frac{2}{5} = 0.

One of the solutions (they repeat with period 2π) reads

    \mu = \frac{1}{\sqrt{5}}, \qquad \varphi = \arctan\left(\tfrac{1}{2}\right) + \pi.
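Remark. The solution can be checked numerically: with the computed μ and ϕ the phasor of the first output component must vanish. A minimal MATLAB check:

    % Example 6: first steady-state output component at omega = 1 rad/s.
    mu  = 1/sqrt(5);
    phi = atan(1/2) + pi;
    r   = 1/(1i+3) + mu*exp(1i*phi)/(1i+1);   % P11(j)*1 + P12(j)*mu*e^(j*phi)
    disp(abs(r))                              % ~1e-16, i.e. y_1 = 0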

Example 7. A linear time-invariant MIMO system with a 2×2 transfer function matrix P(s) is excited with the signal

    u(t) = \begin{pmatrix} \mu_1 \cos(\omega t + \varphi_1) \\ \mu_2 \cos(\omega t + \varphi_2) \end{pmatrix}.

Because we bought a cheap signal generator, we cannot know the constants μ_i and ϕ_i exactly. A friend of yours just found out from some measurements that the excitation frequency is ω = 1 rad/s. The cheap generator cannot produce signals with a magnitude of μ bigger than 10, i.e.

    \sqrt{\mu_1^2 + \mu_2^2} \le 10,

and it always works at maximal power, i.e. at \|\mu\| = 10. Choose all possible responses of the system after infinite time:

    y_1(t) = \begin{pmatrix} 5 \sin(t + 0.4) \\ \cos(t) \end{pmatrix}, \qquad
    y_2(t) = \begin{pmatrix} 5 \sin(t + 0.4) \\ \cos(2t) \end{pmatrix}, \qquad
    y_3(t) = \begin{pmatrix} \sin(t + 0.54) \\ \sin(t + 0.459) \end{pmatrix},

    y_4(t) = \begin{pmatrix} 19 \cos(t + 0.4) \\ \cos(t + 4) \end{pmatrix}, \qquad
    y_5(t) = \begin{pmatrix} 5 \cos(t + 0.4) \\ 5 \cos(t + 1) \end{pmatrix}, \qquad
    y_6(t) = \begin{pmatrix} 10 \sin(t + 4) \\ 11 \sin(t + 3.4) \end{pmatrix}.

Solution. The possible responses are y_1(t), y_5(t) and y_6(t).

Explanation. We have to compute the singular values of the matrix P(j·1). These are

    \sigma_{\max} = 1.8305, \qquad \sigma_{\min} = 0.3863.

With what we have learned it follows

    10\,\sigma_{\min} = 3.863 \le \|\nu\| \le 18.305 = 10\,\sigma_{\max}.

The first response has \|\nu\| = \sqrt{5^2 + 1^2} = \sqrt{26} \approx 5.1, which is in this range.
The second response also has \|\nu\| = \sqrt{26}, but the frequency of its second element changes, and that is not possible for linear systems.
The third response has \|\nu\| = \sqrt{2} \approx 1.4, which is too small to be in the range.
The fourth response has \|\nu\| = \sqrt{19^2 + 1^2} = \sqrt{362} \approx 19.0, which is too big to be in the range.
The fifth response has \|\nu\| = \sqrt{5^2 + 5^2} = \sqrt{50} \approx 7.1, which is in the range.
The sixth response has \|\nu\| = \sqrt{10^2 + 11^2} = \sqrt{221} \approx 14.9, which is in the range.
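Remark. The range test can be scripted. In the sketch below the candidate amplitude vectors are the ones read off from the options above (these readings are the assumption; the frequency consistency of y_2(t) still has to be checked separately):

    % Example 7: which candidate outputs are compatible with the gain bounds?
    smin = 0.3863;  smax = 1.8305;            % singular values of P(j*1)
    lo = 10*smin;   hi = 10*smax;             % admissible range for ||nu||
    amp = {[5 1], [5 1], [1 1], [19 1], [5 5], [10 11]};
    for k = 1:numel(amp)
        nu = norm(amp{k});
        fprintf('y_%d: ||nu|| = %6.2f, in range: %d\n', k, nu, nu >= lo && nu <= hi)
    end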

Example 8. A 3×2 linear time-invariant MIMO system is excited with the input

    u(t) = \begin{pmatrix} 3 \sin(30 t) \\ 4 \cos(30 t) \end{pmatrix}.

You have forgotten your PC and you don't know the transfer function of the system. Before coming to school, however, you saved the MATLAB plot of the singular values of the system on your phone (see Figure 3). Choose all the possible responses of the system.

Figure 3: Singular values behaviour (MATLAB plot of σ_i(P(jω)) over frequency).

    y_1(t) = \begin{pmatrix} 0.5 \sin(30t + 0.34) \\ 0.5 \cos(30t) \\ 0.5 \cos(30t + 1.2) \end{pmatrix}, \qquad
    y_2(t) = \begin{pmatrix} 4 \sin(30t + 0.34) \\ 3 \cos(30t) \\ 2 \cos(30t + 1.2) \end{pmatrix}, \qquad
    y_3(t) = \begin{pmatrix} 0.1 \sin(30t + 0.34) \\ 0.1 \cos(30t) \\ 0.1 \cos(30t + 1.2) \end{pmatrix},

    y_4(t) = \begin{pmatrix} 0 \\ 4 \cos(30t) \\ 2 \cos(30t + 1.2) \end{pmatrix}, \qquad
    y_5(t) = \begin{pmatrix} 2 \cos(30t + 0.43) \\ 2 \cos(30t + 0.4) \\ 2 \cos(30t + 0.5) \end{pmatrix}.

Solution. The possible responses are y_1(t), y_4(t) and y_5(t).

Explanation. From the given input one can read

    \|\mu\| = \sqrt{3^2 + 4^2} = 5.

From the plot one can read, at ω = 30 rad/s, σ_min = 0.1 and σ_max = 1. It follows

    5\,\sigma_{\min} = 0.5 \le \|\nu\| \le 5 = 5\,\sigma_{\max}.

The first response has \|\nu\| = \sqrt{3 \cdot 0.5^2} = \sqrt{0.75} \approx 0.87, which is in the range.
The second response has \|\nu\| = \sqrt{4^2 + 3^2 + 2^2} = \sqrt{29} \approx 5.4, which is too big to be in the range.
The third response has \|\nu\| = \sqrt{3 \cdot 0.1^2} = \sqrt{0.03} \approx 0.17, which is too small to be in the range.
The fourth response has \|\nu\| = \sqrt{4^2 + 2^2} = \sqrt{20} \approx 4.5, which is in the range.
The fifth response has \|\nu\| = \sqrt{3 \cdot 2^2} = \sqrt{12} \approx 3.5, which is in the range.
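Remark. A plot like Figure 3 is generated with the sigma command. Since the true transfer function is unknown here, the system P below is purely hypothetical and only illustrates how such a plot is produced and read (Control System Toolbox assumed):

    % Example 8: producing and reading a singular-value plot.
    s = tf('s');
    P = [1/(s+1) 2/(s+10); 3/(s+5) 1/(s+2); 1/(s+20) 1/(s+1)];   % hypothetical 3x2 system
    sigma(P); grid on                  % sigma_i(P(jw)) over frequency
    sv = sigma(P, 30);                 % singular values at omega = 30 rad/s
    fprintf('%.3f <= ||nu|| <= %.3f\n', 5*min(sv), 5*max(sv))    % for ||mu|| = 5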