Lecture 4: Analysis of MIMO Systems


Norms

The concept of a norm will be extremely useful for evaluating signals and systems quantitatively during this course. In the following, we present vector norms and matrix norms.

1 Vector Norms

Definition 1. A norm on a linear space (V, F) is a function \|\cdot\| : V \to \mathbb{R}_+ such that

(a) \|v_1 + v_2\| \le \|v_1\| + \|v_2\| for all v_1, v_2 \in V (triangle inequality),
(b) \|\alpha v\| = |\alpha| \, \|v\| for all v \in V, \alpha \in F,
(c) \|v\| = 0 \iff v = 0.

Remark. Norms are always non-negative: by definition, a norm maps into \mathbb{R}_+.

Considering x \in \mathbb{C}^n, i.e. V = \mathbb{C}^n, one can define the p-norm as

    \|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}, \quad p = 1, 2, \ldots

The most common example is the case p = 2, i.e. the Euclidean norm (shortest distance between two points):

    \|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}.

Another important example is the infinity norm (largest element of the vector):

    \|x\|_\infty = \max_i |x_i|.

Example 1. You are given the vector

    x = (1 \ 2 \ 3)^T.

Compute the 1-, 2- and \infty-norms of x.

Solution. It holds

    \|x\|_1 = 1 + 2 + 3 = 6,
    \|x\|_2 = \sqrt{1 + 4 + 9} = \sqrt{14},
    \|x\|_\infty = \max\{1, 2, 3\} = 3.
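The three norms of Example 1 can be checked numerically; a minimal sketch using numpy (array contents as in the example):

```python
import numpy as np

# Vector norms of x = (1, 2, 3)^T, as in Example 1.
x = np.array([1.0, 2.0, 3.0])

one_norm = np.linalg.norm(x, 1)       # sum of absolute values: 6
two_norm = np.linalg.norm(x, 2)       # Euclidean norm: sqrt(14)
inf_norm = np.linalg.norm(x, np.inf)  # largest absolute element: 3

print(one_norm, two_norm, inf_norm)
```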

2 Matrix Norms

In addition to the axioms defined for norms, matrix norms fulfill the multiplicative property

    \|A B\| \le \|A\| \cdot \|B\|.

Considering the linear space (V, F) with V = \mathbb{C}^{m \times n}, and assuming a matrix A \in \mathbb{C}^{m \times n} is given, one can define the Frobenius norm (Euclidean matrix norm) as

    \|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2},

where a_{ij} are the elements of A. This can also be written as

    \|A\|_F = \sqrt{\operatorname{tr}(A^* A)},

where tr is the trace of the matrix (i.e. the sum of its eigenvalues, or of its diagonal elements) and A^* is the Hermitian transpose of A (complex conjugate transpose), i.e.

    A^* = (\operatorname{conj}(A))^T.

Example 2. Let

    A = \begin{pmatrix} i & 0 \\ 1+i & -i \end{pmatrix}.

Then it holds

    A^* = (\operatorname{conj}(A))^T = \begin{pmatrix} -i & 0 \\ 1-i & i \end{pmatrix}^T = \begin{pmatrix} -i & 1-i \\ 0 & i \end{pmatrix}.

The maximum matrix norm is the largest element of the matrix and is defined as

    \|A\|_{\max} = \max_{i=1,\ldots,m} \ \max_{j=1,\ldots,n} |a_{ij}|.

2.1 Matrix Norms as Induced Norms

Matrix norms can always be defined as induced norms.

Definition 2. Let A \in \mathbb{C}^{m \times n}. Then, the induced p-norm of the matrix A can be written as

    \|A\|_p = \sup_{x \in \mathbb{C}^n, \, x \ne 0} \frac{\|Ax\|_p}{\|x\|_p} = \sup_{x \in \mathbb{C}^n, \, \|x\|_p = 1} \|Ax\|_p.

Remark. At this point, one could ask what the difference between sup and max is. A maximum is the largest number within a set. A sup is a number that bounds a set from above; it may or may not be part of the set itself (0 is not part of the set of negative numbers, but it is its sup, because it is the least upper bound). If the sup is part of the set, it is also the max.
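The two ways of writing the Frobenius norm, and the defining inequality of the induced 2-norm, can be verified numerically; a sketch with a hypothetical real matrix A:

```python
import numpy as np

# Frobenius norm: elementwise formula vs. trace formula tr(A* A).
A = np.array([[1.0, 2.0], [3.0, 4.0]])

fro_direct = np.sqrt((np.abs(A) ** 2).sum())
fro_trace = np.sqrt(np.trace(A.conj().T @ A).real)

# Induced 2-norm (largest singular value): no input direction is
# amplified by more than this factor.
induced_2 = np.linalg.norm(A, 2)
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert np.linalg.norm(A @ x) <= induced_2 * np.linalg.norm(x) + 1e-12

print(fro_direct, fro_trace, induced_2)
```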

Figure 1: Interpretation of the induced norm: a system G with input u and output y.

The definition of the induced norm is interesting because one can interpret it as in Figure 1. In fact, using the induced norm

    \|G\|_p = \sup_{u \ne 0} \frac{\|Gu\|_p}{\|u\|_p},

one can quantify the maximum gain (amplification) of the output vector over all possible input directions at a given frequency. This turns out to be extremely useful for evaluating system interconnections. Referring to Figure 2 and using the multiplicative property of norms, it holds

    \|y\|_p = \|G_2 G_1 u\|_p \le \|G_2\|_p \|G_1 u\|_p \le \|G_2\|_p \|G_1\|_p \|u\|_p \quad \Rightarrow \quad \frac{\|y\|_p}{\|u\|_p} \le \|G_2\|_p \|G_1\|_p.

In words: the input-output gain of a series of systems is upper bounded by the product of the induced matrix norms.

Figure 2: Series interconnection u \to G_1 \to w \to G_2 \to y.

2.2 Properties of the Euclidean Norm

We can list a few useful properties of the Euclidean norm, intended as induced 2-norm:

(i) If A is square (i.e. m = n), the norm is

    \|A\|_2 = \sqrt{\mu_{\max}},

where \mu_{\max} is the maximal eigenvalue of A^* A.

(ii) If A is orthogonal:

    \|A\|_2 = 1.

Note that this is the case because for orthogonal matrices A^* A = 1, i.e. all singular values have magnitude 1.

(iii) If A is symmetric (i.e. A^T = A):

    \|A\|_2 = \max_i |\lambda_i|,

where \lambda_i are the eigenvalues of A.

(iv) If A is invertible:

    \|A^{-1}\|_2 = \frac{1}{\sqrt{\mu_{\min}}},

where \mu_{\min} is the minimal eigenvalue of A^* A.

(v) If A is invertible and symmetric:

    \|A^{-1}\|_2 = \frac{1}{\min_i |\lambda_i|}.

Remark. Remember: the matrix A^* A is always a square matrix.

3 Signal Norms

The norms we have seen so far are spatial measures. Temporal norms (signal norms) take into account the variability of signals in time/frequency. Let

    e(t) = (e_1(t) \ \ldots \ e_n(t))^T, \quad e_i(t) \in \mathbb{C}, \ i = 1, \ldots, n.

The p-norm is defined as

    \|e(t)\|_p = \left( \sum_{i=1}^{n} \int_{-\infty}^{\infty} |e_i(\tau)|^p \, d\tau \right)^{1/p}.

A special case of this is the two-norm, also called Euclidean norm, integral square error, or energy of a signal:

    \|e(t)\|_2 = \sqrt{\sum_{i=1}^{n} \int_{-\infty}^{\infty} |e_i(\tau)|^2 \, d\tau}.

The infinity norm is defined as

    \|e(t)\|_\infty = \sup_\tau \left( \max_i |e_i(\tau)| \right).

4 System Norms

Considering linear, time-invariant, causal systems of the form depicted in Figure 1, one can write the relation

    y(t) = G * u(t) = \int_{-\infty}^{\infty} G(t - \tau) u(\tau) \, d\tau.

The two-norm of the transfer function \hat{G} reads

    \|\hat{G}(s)\|_2 = \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \|\hat{G}(j\omega)\|_F^2 \, d\omega}
                    = \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_{i,j} |g_{ij}(j\omega)|^2 \, d\omega}
                    = \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \operatorname{tr}\left( \hat{G}(j\omega)^* \hat{G}(j\omega) \right) d\omega}.

Remark. Note that this norm is a measure of the combination of the system gains in all directions, over all frequencies. It is not an induced norm and does not respect the multiplicative property.

The infinity norm is

    \|\hat{G}(s)\|_\infty = \sup_\omega \sigma_{\max}\left( \hat{G}(j\omega) \right).

Remark. This norm is a measure of the peak of the maximum singular value, i.e. the biggest amplification the system may produce at any frequency, for any input direction (worst-case scenario). It is an induced norm and respects the multiplicative property.

Singular Value Decomposition (SVD)

The Singular Value Decomposition plays a central role in MIMO frequency response analysis. Let us recall some concepts from the course Lineare Algebra I/II.

1 Preliminary Definitions

The induced 2-norm \|A\|_2 of a matrix that describes a linear function

    y = A u

is defined as

    \|A\|_2 = \max_{u \ne 0} \frac{\|y\|_2}{\|u\|_2} = \max_{\|u\|_2 = 1} \|y\|_2.

Recall the property \|A\|_2 = \sqrt{\mu_{\max}(A^* A)} from the previous chapter, and notice that if A \in \mathbb{R}^{n \times m} it holds

    A^* = A^T.

In order to define the SVD we have to go a step further. Consider a matrix A and the linear function y = A u. It holds

    \|A\|_2 = \max_{\|u\|_2 = 1} \|y\|_2 = \max_{\|u\|_2 = 1} \sqrt{(A u)^* (A u)} = \max_{\|u\|_2 = 1} \sqrt{u^* A^* A u} = \sqrt{\max_i \mu_i(A^* A)} = \max_i \sigma_i,

where \sigma_i are the singular values of the matrix A. They are defined as

    \sigma_i = \sqrt{\mu_i},

where \mu_i are the eigenvalues of A^* A. Combining the two previous relations, one gets

    \sigma_{\min}(A) \le \frac{\|y\|_2}{\|u\|_2} \le \sigma_{\max}(A).
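The infinity norm can be estimated numerically by gridding the frequency axis and taking the largest maximum singular value; a sketch with a hypothetical stable 2×2 transfer matrix (a grid gives an estimate, not a guaranteed bound):

```python
import numpy as np

# Hypothetical transfer matrix G(s), evaluated on a frequency grid.
def G(s):
    return np.array([[1 / (s + 1), 1 / (s + 2)],
                     [0.0, 2 / (s + 3)]])

omegas = np.logspace(-2, 2, 2000)
# ||G||_inf ~ sup over omega of the maximum singular value of G(j omega)
peak = max(np.linalg.norm(G(1j * w), 2) for w in omegas)
print(peak)  # for this lowpass G the peak is attained at low frequency
```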

Figure 3: Illustration of the singular values: the unit circle of inputs u is mapped by y = A u onto an ellipse with semi-axes \sigma_{\max} and \sigma_{\min}.

2 Singular Value Decomposition

Our goal is to write a general matrix A \in \mathbb{C}^{p \times m} as a product of three matrices U, \Sigma and V:

    A = U \Sigma V^*, \quad U \in \mathbb{C}^{p \times p}, \ \Sigma \in \mathbb{R}^{p \times m}, \ V \in \mathbb{C}^{m \times m}.

Remark. U and V are unitary (orthogonal in the real case), \Sigma is a diagonal matrix.

Kochrezept: Let A \in \mathbb{C}^{p \times m} be given.

(I) Compute all the eigenvalues and eigenvectors of the matrix

    A^* A \in \mathbb{C}^{m \times m}

and sort them as

    \mu_1 \ge \mu_2 \ge \ldots \ge \mu_r > \mu_{r+1} = \ldots = \mu_m = 0.

(II) Compute an orthonormal basis from the eigenvectors v_i and write it in a matrix as

    V = (v_1 \ \ldots \ v_m) \in \mathbb{C}^{m \times m}.

(III) We have already found the singular values: they are defined as

    \sigma_i = \sqrt{\mu_i} \quad \text{for } i = 1, \ldots, \min\{p, m\}.

By ordering them from the biggest to the smallest, we can then write \Sigma as

    \Sigma = \begin{pmatrix} \sigma_1 & & & 0 & \ldots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \sigma_p & 0 & \ldots & 0 \end{pmatrix} \in \mathbb{R}^{p \times m}, \quad p < m,

    \Sigma = \begin{pmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_m \\ 0 & \ldots & 0 \\ \vdots & & \vdots \\ 0 & \ldots & 0 \end{pmatrix} \in \mathbb{R}^{p \times m}, \quad p > m.

(IV) One finds u_1, \ldots, u_r from

    u_i = \frac{1}{\sigma_i} A v_i \quad \text{for all } i = 1, \ldots, r \ (\text{i.e. for } \sigma_i \ne 0).

(V) If r < p one has to complete the basis u_1, \ldots, u_r (with an orthonormal basis from Gram-Schmidt) to obtain p orthonormal vectors, such that U is unitary.

(VI) If you followed the previous steps, you can write

    A = U \Sigma V^*.

Motivation for the computation of \Sigma, U and V:

    A^* A = (U \Sigma V^*)^* (U \Sigma V^*) = V \Sigma^* U^* U \Sigma V^* = V \Sigma^* \Sigma V^* = V \Sigma^2 V^*.

This is nothing else than the diagonalization of the matrix A^* A: the columns of V are the eigenvectors of A^* A, and the \sigma_i^2 its eigenvalues. For U:

    A A^* = (U \Sigma V^*)(U \Sigma V^*)^* = U \Sigma V^* V \Sigma^* U^* = U \Sigma \Sigma^* U^* = U \Sigma^2 U^*.

This is nothing else than the diagonalization of the matrix A A^*: the columns of U are the eigenvectors of A A^*, and the \sigma_i^2 its eigenvalues.

Remark. In order to derive the previous two equations we used that:
- the matrix A^* A is Hermitian, i.e. (A^* A)^* = A^* (A^*)^* = A^* A,
- U^* U = 1 (because U is unitary),
- V^* V = 1 (because V is unitary).
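The Kochrezept can be sketched directly in numpy and checked against the built-in SVD; the matrix A below is a hypothetical full-rank example (step V, the Gram-Schmidt completion, is not needed here because r = m):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 2.0]])  # p = 3, m = 2

# (I) eigenvalues/eigenvectors of A* A, sorted in decreasing order
mu, V = np.linalg.eigh(A.T @ A)        # eigh returns ascending order
order = np.argsort(mu)[::-1]
mu, V = mu[order], V[:, order]

# (III) singular values
sigma = np.sqrt(mu)

# (IV) u_i = A v_i / sigma_i for sigma_i != 0
U_cols = A @ V / sigma                 # column i divided by sigma_i

# Check: A v_i = sigma_i u_i, and the sigma_i agree with numpy's SVD
assert np.allclose(A @ V, U_cols * sigma)
assert np.allclose(sigma, np.linalg.svd(A, compute_uv=False))
print(sigma)
```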

Remark. Since the matrix A^* A is always Hermitian and positive semidefinite, the singular values are always real numbers.

Remark. The Matlab command for the singular value decomposition is [U,S,V] = svd(A). One can write A^T as transpose(A) and A^* as conj(transpose(A)). The two are equivalent for real matrices.

3 Interpretation

Considering the system depicted in Figure 1, one can rewrite the system G (at a given frequency) as

    G = U \Sigma V^*.

The matrix V is unitary and contains the input directions of the system. The matrix U is unitary as well and contains the output directions of the system (an unfortunate clash of notation with the input u). It holds

    G = U \Sigma V^* \iff G V = U \Sigma \iff G v_i = \sigma_i u_i, \quad i = 1, 2, \ldots,

which is similar to an eigenvalue equation. This can be rewritten as

    \frac{1}{\sigma_i} G v_i = u_i.

For a unit-norm input, i.e. \|u\|_2 = 1, one has

    y_1 = \sigma_1 u_1, \quad \ldots, \quad y_m = \sigma_m u_m,

where \sigma_m is the last (and hence the smallest) singular value. This can be interpreted using Figure 3, with the circle read as a unit circle.

Example 3. Let u be

    u = \begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix}, \quad \text{with } \|u\|_2 = 1.

The matrix M is given as

    M = \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}.

We know that the product of M and u defines a linear function

    y = M u = \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} \begin{pmatrix} \cos(x) \\ \sin(x) \end{pmatrix} = \begin{pmatrix} 2 \cos(x) \\ \frac{1}{2} \sin(x) \end{pmatrix}.

We look for the maximum of \|y\|_2. In order to avoid square roots, one can use the fact that the x that maximizes \|y\|_2 also maximizes \|y\|_2^2:

    \|y\|_2^2 = 4 \cos^2(x) + \frac{1}{4} \sin^2(x),

    \frac{d \|y\|_2^2}{dx} = -8 \cos(x) \sin(x) + \frac{1}{2} \sin(x) \cos(x) \overset{!}{=} 0 \quad \Rightarrow \quad x \in \left\{ 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2} \right\}.

Inserting back, one gets the maximal and minimal \|y\|_2:

    \|y\|_{2,\max} = 2, \quad \|y\|_{2,\min} = \frac{1}{2}.

The singular values can be calculated with M^* M:

    M^* M = \begin{pmatrix} 4 & 0 \\ 0 & \frac{1}{4} \end{pmatrix} \quad \Rightarrow \quad \lambda_i \in \left\{ 4, \frac{1}{4} \right\} \quad \Rightarrow \quad \sigma_i \in \left\{ 2, \frac{1}{2} \right\}.

As stated before, one can see that \|y\|_2 \in [\sigma_{\min}, \sigma_{\max}]. The matrix U has the eigenvectors of M M^* as columns and the matrix V has the eigenvectors of M^* M as columns. In this case M^* M = M M^*, hence the two matrices are equal. Since this product is a diagonal matrix, one should recall from the theory that the eigenvectors are easy to determine: they are nothing else than the standard basis vectors. This means

    U = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}, \quad V = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Interpretation: Referring to Figure 4, one can see that the maximal amplification occurs at v_1 = V(:,1) and has direction u_1 = U(:,1), i.e. the vector u_1 is doubled (\sigma_{\max} = 2). The minimal amplification occurs at v_2 = V(:,2) and has direction u_2 = U(:,2), i.e. the vector u_2 is halved (\sigma_{\min} = \frac{1}{2}).
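Example 3 can be verified numerically by sweeping the unit circle; a minimal sketch:

```python
import numpy as np

# M maps the unit circle onto an ellipse with semi-axes 2 and 1/2.
M = np.diag([2.0, 0.5])
theta = np.linspace(0, 2 * np.pi, 3601)
circle = np.vstack([np.cos(theta), np.sin(theta)])  # unit-norm inputs
gains = np.linalg.norm(M @ circle, axis=0)

sigma = np.linalg.svd(M, compute_uv=False)
print(gains.max(), gains.min(), sigma)  # 2.0, 0.5, [2.0, 0.5]
```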

Figure 4: Illustration of the singular value decomposition: the unit circle of inputs is mapped onto an ellipse with semi-axes \sigma_{\max} = 2 and \sigma_{\min} = \frac{1}{2}.

Example 4. Let

    A = \begin{pmatrix} 3 & 0 \\ 0 & 3 \\ \sqrt{7} & 0 \end{pmatrix}

be given. Question: find the singular values of A and write down the matrix \Sigma.

Solution. Let us compute A^T A:

    A^T A = \begin{pmatrix} 3 & 0 & \sqrt{7} \\ 0 & 3 & 0 \end{pmatrix} \begin{pmatrix} 3 & 0 \\ 0 & 3 \\ \sqrt{7} & 0 \end{pmatrix} = \begin{pmatrix} 16 & 0 \\ 0 & 9 \end{pmatrix}.

One can see easily that the eigenvalues are

    \lambda_1 = 16, \quad \lambda_2 = 9.

The singular values are

    \sigma_1 = 4, \quad \sigma_2 = 3.

One writes in this case

    \Sigma = \begin{pmatrix} 4 & 0 \\ 0 & 3 \\ 0 & 0 \end{pmatrix}.

Example 5. A transfer function G(s) is given as

    G(s) = \begin{pmatrix} \frac{1}{s+3} & \frac{s+1}{s+3} \\ \frac{s+1}{s+3} & \frac{1}{s+3} \end{pmatrix}.

Find the singular values of G(s) at \omega = 1 rad/s.

Solution. The transfer function G(s) evaluated at \omega = 1 rad/s has the form

    G(j) = \begin{pmatrix} \frac{1}{j+3} & \frac{j+1}{j+3} \\ \frac{j+1}{j+3} & \frac{1}{j+3} \end{pmatrix}.

In order to calculate the singular values, we have to compute the eigenvalues of H = G^* G:

    H = G(j)^* G(j) = \begin{pmatrix} \frac{3}{10} & \frac{1}{5} \\ \frac{1}{5} & \frac{3}{10} \end{pmatrix}.

For the eigenvalues it holds

    \det(H - \lambda I) = \left( \frac{3}{10} - \lambda \right)^2 - \frac{1}{25} = \lambda^2 - \frac{6}{10} \lambda + \frac{5}{100} = \left( \lambda - \frac{5}{10} \right) \left( \lambda - \frac{1}{10} \right) = 0.

It follows

    \lambda_1 = 0.5, \quad \lambda_2 = 0.1,

and so

    \sigma_1 = \sqrt{0.5} \approx 0.7071, \quad \sigma_2 = \sqrt{0.1} \approx 0.3162.
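The result of Example 5 is easy to confirm with numpy:

```python
import numpy as np

# G(s) of Example 5 evaluated at s = j (omega = 1 rad/s).
s = 1j
G = np.array([[1 / (s + 3), (s + 1) / (s + 3)],
              [(s + 1) / (s + 3), 1 / (s + 3)]])

sigma = np.linalg.svd(G, compute_uv=False)
print(sigma)  # [0.7071..., 0.3162...]
```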

Example 6. Let

    A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & j \end{pmatrix}.

Find the singular values of the two matrices.

Solution. Let us begin with matrix A. It holds

    H = A^* A = \begin{pmatrix} 1 & 2 \\ 2 & 5 \end{pmatrix}.

In order to find the eigenvalues of H we compute

    \det(H - \lambda I) = \det \begin{pmatrix} 1 - \lambda & 2 \\ 2 & 5 - \lambda \end{pmatrix} = (1 - \lambda)(5 - \lambda) - 4 = \lambda^2 - 6\lambda + 1.

This means that the eigenvalues are

    \lambda_1 = 3 + 2\sqrt{2}, \quad \lambda_2 = 3 - 2\sqrt{2}.

The singular values are then

    \sigma_1 = \sqrt{3 + 2\sqrt{2}} \approx 2.414, \quad \sigma_2 = \sqrt{3 - 2\sqrt{2}} \approx 0.414.

Let us look at matrix B. It holds

    F = B^* B = \begin{pmatrix} 1 \\ -j \end{pmatrix} \begin{pmatrix} 1 & j \end{pmatrix} = \begin{pmatrix} 1 & j \\ -j & 1 \end{pmatrix}.

In order to find the eigenvalues of F we compute

    \det(F - \lambda I) = \det \begin{pmatrix} 1 - \lambda & j \\ -j & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2 - 1 = \lambda(\lambda - 2).

This means that the eigenvalues are

    \lambda_1 = 0, \quad \lambda_2 = 2.

The singular values are then

    \sigma_1 = \sqrt{2}, \quad \sigma_2 = 0.

4 Directions of Poles and Zeros

4.1 Directions of zeros

Assume a system G(s) has a zero at s = z. Then it must hold

    G(z) u_z = 0, \quad y_z^* G(z) = 0,

where u_z is the input zero direction and y_z is the output zero direction. Furthermore, it holds

    \|u_z\|_2 = 1, \quad \|y_z\|_2 = 1.

4.2 Directions of poles

Assume a system G(s) has a pole at s = p. Then it must hold

    G(p) u_p = \infty, \quad y_p^* G(p) = \infty,

where u_p is the input pole direction and y_p is the output pole direction. Furthermore, it holds

    \|u_p\|_2 = 1, \quad \|y_p\|_2 = 1.

Remark. In both cases, if the zero/pole makes the calculation infeasible, one considers feasible perturbations, i.e. z + \varepsilon, p + \varepsilon.

5 Frequency Responses

As we learned for SISO systems, if one excites a system with a harmonic signal

    u(t) = h(t) \cos(\omega t)

(h(t) denotes the unit step), the response after a long time is still a harmonic function with the same frequency \omega:

    y_\infty(t) = |P(j\omega)| \cos(\omega t + \angle P(j\omega)).

One can generalize this and apply it to MIMO systems. With the assumption p = m, i.e. an equal number of inputs and outputs, one excites the system with

    u(t) = h(t) \begin{pmatrix} \mu_1 \cos(\omega t + \phi_1) \\ \vdots \\ \mu_m \cos(\omega t + \phi_m) \end{pmatrix}

and gets

    y_\infty(t) = \begin{pmatrix} \nu_1 \cos(\omega t + \psi_1) \\ \vdots \\ \nu_m \cos(\omega t + \psi_m) \end{pmatrix}.

Let us define two diagonal matrices and two vectors:

    \Phi = \operatorname{diag}(\phi_1, \ldots, \phi_m) \in \mathbb{R}^{m \times m}, \quad \Psi = \operatorname{diag}(\psi_1, \ldots, \psi_m) \in \mathbb{R}^{m \times m},

    \mu = (\mu_1 \ \ldots \ \mu_m)^T, \quad \nu = (\nu_1 \ \ldots \ \nu_m)^T.

With these one can compute the Laplace transforms of the two signals as

    U(s) = e^{\Phi \frac{s}{\omega}} \mu \frac{s}{s^2 + \omega^2}

and

    Y(s) = e^{\Psi \frac{s}{\omega}} \nu \frac{s}{s^2 + \omega^2}.

With the general equation for a system one gets

    Y(s) = P(s) U(s) \quad \Rightarrow \quad e^{\Psi \frac{s}{\omega}} \nu \frac{s}{s^2 + \omega^2} = P(s) \, e^{\Phi \frac{s}{\omega}} \mu \frac{s}{s^2 + \omega^2},

and evaluating at s = j\omega:

    e^{\Psi j} \nu = P(j\omega) \, e^{\Phi j} \mu.

We then recall the induced norm for the matrix of a linear transformation y = A u. Here it holds

    \|P(j\omega)\|_2 = \max_{e^{\Phi j} \mu \ne 0} \frac{\|e^{\Psi j} \nu\|_2}{\|e^{\Phi j} \mu\|_2}.

Since

    \|e^{\Phi j} \mu\|_2 = \|\mu\|_2

and

    \|e^{\Psi j} \nu\|_2 = \|\nu\|_2,

one gets

    \|P(j\omega)\|_2 = \max_{\mu \ne 0} \frac{\|\nu\|_2}{\|\mu\|_2}.

Here one should get a feeling for why we introduced the singular value decomposition. From the theory we have learned, it is clear that

    \sigma_{\min}(P(j\omega)) \le \frac{\|\nu\|_2}{\|\mu\|_2} \le \sigma_{\max}(P(j\omega))

and, if \|\mu\|_2 = 1,

    \sigma_{\min}(P(j\omega)) \le \|\nu\|_2 \le \sigma_{\max}(P(j\omega)),

with \sigma_i the singular values of P(j\omega). These are worst-case ranges, and it is important to notice that there is no exact formula \nu = f(\mu).

5.1 Maximal and minimal Gain

You are given a singular value decomposition

    P(j\omega) = U \Sigma V^*.

One can read several pieces of information out of this decomposition: the maximal/minimal gain is reached with an excitation in the direction of the column vectors of V, and the response of the system will then be in the direction of the column vectors of U. Let us look at an example and try to understand how to use this information.

Example 7. We consider a system with m = 2 inputs and p = 3 outputs. We are given its singular value decomposition at \omega = 5 rad/s:

    \Sigma = \begin{pmatrix} 0.4670 & 0 \\ 0 & 0.1630 \\ 0 & 0 \end{pmatrix}, \quad V = \begin{pmatrix} 0.2908 & 0.9568 \\ 0.9443 - 0.1540j & -0.2870 + 0.0469j \end{pmatrix},

and (only the first column of U is needed in the following)

    U(:,1) = \begin{pmatrix} -0.0496 - 0.1680j \\ 0.0146 - 0.9159j \\ 0.0349 - 0.3593j \end{pmatrix}.

For the singular value \sigma_{\max} = 0.4670 the relevant directions are V(:,1) and U(:,1). In magnitude/phase form,

    |V(:,1)| = \begin{pmatrix} 0.2908 \\ 0.9568 \end{pmatrix}, \quad \angle V(:,1) = \begin{pmatrix} 0 \\ -0.1616 \end{pmatrix},

    |U(:,1)| = \begin{pmatrix} 0.1752 \\ 0.9160 \\ 0.3609 \end{pmatrix}, \quad \angle U(:,1) = \begin{pmatrix} -1.8580 \\ -1.5548 \\ -1.4740 \end{pmatrix}.

The maximal gain is then reached with

    u(t) = \begin{pmatrix} 0.2908 \cos(5t) \\ 0.9568 \cos(5t - 0.1616) \end{pmatrix}.

The response of the system is then

    y(t) = \sigma_{\max} \begin{pmatrix} 0.1752 \cos(5t - 1.8580) \\ 0.9160 \cos(5t - 1.5548) \\ 0.3609 \cos(5t - 1.4740) \end{pmatrix} = 0.4670 \begin{pmatrix} 0.1752 \cos(5t - 1.8580) \\ 0.9160 \cos(5t - 1.5548) \\ 0.3609 \cos(5t - 1.4740) \end{pmatrix}.

Since the three signals y_1(t), y_2(t) and y_3(t) are not in phase, the maximal gain is never exactly reached. One can show that

    \max_t \|y(t)\|_2 = 0.460 < 0.467 = \sigma_{\max}.

The reason for this difference lies in the phase deviation between y_1(t), y_2(t) and y_3(t). The same analysis can be carried out for \sigma_{\min}.
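The mechanics of Example 7 can be reproduced for any complex gain matrix: excite along the first right singular vector, form the time signals from the output phasor, and compare the peak of \|y(t)\|_2 with \sigma_{\max}. A sketch with hypothetical random numbers (m = 2, p = 3):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

U, S, Vh = np.linalg.svd(P)
v1 = Vh.conj().T[:, 0]          # input direction for sigma_max (unit norm)
y_phasor = P @ v1               # equals sigma_max * U[:, 0]

w = 5.0                         # excitation frequency in rad/s
t = np.linspace(0, 2 * np.pi / w, 2001)
y = np.abs(y_phasor)[:, None] * np.cos(w * t + np.angle(y_phasor)[:, None])

peak = np.linalg.norm(y, axis=0).max()
print(peak, S[0])  # peak <= sigma_max; equality needs all outputs in phase
```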

Example 8. Given the MIMO system

    P(s) = \begin{pmatrix} \frac{1}{s+3} & \frac{1}{s+1} \\ \frac{1}{s+1} & \frac{3}{s+1} \end{pmatrix}.

Starting at t = 0, the system is excited with the following input signal:

    u(t) = \begin{pmatrix} \cos(t) \\ \mu_2 \cos(t + \varphi_2) \end{pmatrix}.

Find the parameters \varphi_2 and \mu_2 such that in steady-state conditions the output signal

    y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix}

has y_1(t) equal to zero.

Solution. For a system excited using a harmonic input signal

    u(t) = \begin{pmatrix} \mu_1 \cos(\omega t + \varphi_1) \\ \mu_2 \cos(\omega t + \varphi_2) \end{pmatrix},

the output signal y(t), after a transient phase, will also be a harmonic signal and hence have the form

    y(t) = \begin{pmatrix} \nu_1 \cos(\omega t + \psi_1) \\ \nu_2 \cos(\omega t + \psi_2) \end{pmatrix}.

As we have learned, it holds

    \begin{pmatrix} e^{\psi_1 j} & 0 \\ 0 & e^{\psi_2 j} \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix} = \begin{pmatrix} P_{11}(j\omega) & P_{12}(j\omega) \\ P_{21}(j\omega) & P_{22}(j\omega) \end{pmatrix} \begin{pmatrix} e^{\varphi_1 j} & 0 \\ 0 & e^{\varphi_2 j} \end{pmatrix} \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}.

For the first component one gets

    e^{\psi_1 j} \nu_1 = P_{11}(j\omega) e^{\varphi_1 j} \mu_1 + P_{12}(j\omega) e^{\varphi_2 j} \mu_2.

For y_1(t) = 0 to hold, we must have \nu_1 = 0. In the given case, some parameters can be read off directly from the signals:

    \mu_1 = 1, \quad \varphi_1 = 0, \quad \omega = 1.

With the given transfer functions, one gets

    0 = \frac{1}{j+3} + \frac{\mu_2}{j+1} e^{\varphi_2 j}
      = \frac{3-j}{10} + \mu_2 \frac{1-j}{2} \left( \cos(\varphi_2) + j \sin(\varphi_2) \right)
      = \frac{3}{10} + \frac{\mu_2}{2} \left( \cos(\varphi_2) + \sin(\varphi_2) \right) + j \left( -\frac{1}{10} + \frac{\mu_2}{2} \left( \sin(\varphi_2) - \cos(\varphi_2) \right) \right).

Splitting the real and the imaginary part, one gets two easily solvable equations:

    \frac{\mu_2}{2} \left( \cos(\varphi_2) + \sin(\varphi_2) \right) + \frac{3}{10} = 0,

    \frac{\mu_2}{2} \left( \sin(\varphi_2) - \cos(\varphi_2) \right) - \frac{1}{10} = 0.

Adding and subtracting the two equations, one reaches two better equations:

    \mu_2 \sin(\varphi_2) + \frac{1}{5} = 0,

    \mu_2 \cos(\varphi_2) + \frac{2}{5} = 0.

One of the solutions (they repeat with periodicity) reads

    \mu_2 = \frac{\sqrt{5}}{5}, \quad \varphi_2 = \arctan\left( \frac{1}{2} \right) + \pi.

Example 9. A linear time-invariant MIMO system with transfer function

    P(s) = \begin{pmatrix} \frac{1}{s+1} & \frac{s+2}{s+10} \\ \frac{s+2}{s+1} & \frac{1}{s+1} \end{pmatrix}

is excited with the signal

    u(t) = \begin{pmatrix} \mu_1 \cos(\omega t + \varphi_1) \\ \mu_2 \cos(\omega t + \varphi_2) \end{pmatrix}.

Because we bought a cheap signal generator, we cannot know the constants \mu_{1,2} and \varphi_{1,2} exactly. A friend of yours just found out with some measurements that the excitation frequency is \omega = 1 rad/s. The cheap generator cannot produce signals with a magnitude of \mu bigger than 10, i.e. \sqrt{\mu_1^2 + \mu_2^2} \le 10, and it always works at maximal power, i.e. at \|\mu\|_2 = 10. Choose all possible responses of the system after infinite time:

    y_1(t) = \begin{pmatrix} 5 \sin(t + 0.4) \\ \cos(t) \end{pmatrix}, \quad
    y_2(t) = \begin{pmatrix} 5 \sin(t + 0.4) \\ \cos(2t) \end{pmatrix}, \quad
    y_3(t) = \begin{pmatrix} \sin(t + 0.54) \\ \sin(t + 0.459) \end{pmatrix},

    y_4(t) = \begin{pmatrix} 19 \cos(t + 0.4) \\ \cos(t + 1.4) \end{pmatrix}, \quad
    y_5(t) = \begin{pmatrix} 5 \cos(t + 0.4) \\ 5 \cos(t) \end{pmatrix}, \quad
    y_6(t) = \begin{pmatrix} 10 \sin(t + 1.4) \\ \sin(t + 3.4) \end{pmatrix}.

Solution. The possible responses are y_1(t), y_5(t) and y_6(t).

Explanation. We have to compute the singular values of the matrix P(j). These are

    \sigma_{\max} = 1.8305, \quad \sigma_{\min} = 0.3863.

With what we have learned, it follows

    10 \cdot \sigma_{\min} = 3.863 \le \|\nu\|_2 \le 18.305 = 10 \cdot \sigma_{\max}.

- The first response has \|\nu\|_2 = \sqrt{26} \approx 5.10, which is in this range.
- The second response also has \|\nu\|_2 = \sqrt{26}, but the frequency of its second element changes, and that is not possible for a linear time-invariant system.
- The third response has \|\nu\|_2 = \sqrt{2}, which is too small to be in the range.
- The fourth response has \|\nu\|_2 = \sqrt{362} \approx 19.03, which is too big to be in the range.
- The fifth response has \|\nu\|_2 = \sqrt{50} \approx 7.07, which is in the range.
- The sixth response has \|\nu\|_2 = \sqrt{101} \approx 10.05, which is in the range.
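The range check used in the explanation above is plain arithmetic; a sketch (amplitude pairs read off the candidate responses, \sigma values as stated in the text):

```python
import numpy as np

sigma_min, sigma_max = 0.3863, 1.8305
lo, hi = 10 * sigma_min, 10 * sigma_max   # ||mu|| = 10

# Amplitudes of the candidate responses (y2 is excluded separately,
# because its second component changes frequency).
amplitudes = {
    "y1": [5.0, 1.0],
    "y3": [1.0, 1.0],
    "y4": [19.0, 1.0],
    "y5": [5.0, 5.0],
    "y6": [10.0, 1.0],
}
in_range = {k: lo <= np.linalg.norm(v) <= hi for k, v in amplitudes.items()}
print(in_range)  # y1, y5, y6 pass; y3 too small, y4 too big
```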

Example 10. A linear time-invariant MIMO system with m = 2 inputs and p = 3 outputs is excited with the input

    u(t) = \begin{pmatrix} 3 \sin(30 t) \\ 4 \cos(30 t) \end{pmatrix}.

You forgot your PC and you do not know the transfer function of the system. Before coming to school, however, you saved the Matlab plot of the singular values of the system on your phone (see Figure 5). Choose all the possible responses of the system.

Figure 5: Singular values behaviour.

    y_1(t) = \begin{pmatrix} 0.5 \sin(30t + 0.34) \\ 0.5 \cos(30t) \\ 0.5 \cos(30t + 1.4) \end{pmatrix}, \quad
    y_2(t) = \begin{pmatrix} 4 \sin(30t + 0.34) \\ 3 \cos(30t) \\ 2 \cos(30t + 1.4) \end{pmatrix}, \quad
    y_3(t) = \begin{pmatrix} 0.1 \sin(30t + 0.34) \\ 0.1 \cos(30t) \\ 0.1 \cos(30t + 1.4) \end{pmatrix},

    y_4(t) = \begin{pmatrix} 0 \\ 4 \cos(30t) \\ 2 \cos(30t + 1.4) \end{pmatrix}, \quad
    y_5(t) = \begin{pmatrix} \cos(30t + 0.43) \\ \cos(30t + 0.4) \\ \cos(30t + 0.5) \end{pmatrix}.

Solution. The possible responses are y_1(t), y_4(t) and y_5(t).

Explanation. From the given input one can read

    \|\mu\|_2 = \sqrt{3^2 + 4^2} = 5.

From the plot one can read, at \omega = 30 rad/s,

    \sigma_{\min} = 0.1, \quad \sigma_{\max} = 1.

It follows

    5 \cdot \sigma_{\min} = 0.5 \le \|\nu\|_2 \le 5 = 5 \cdot \sigma_{\max}.

- The first response has \|\nu\|_2 = \sqrt{0.75} \approx 0.87, which is in the range.
- The second response has \|\nu\|_2 = \sqrt{29} \approx 5.39, which is too big to be in the range.
- The third response has \|\nu\|_2 = \sqrt{0.03} \approx 0.17, which is too small to be in the range.
- The fourth response has \|\nu\|_2 = \sqrt{20} \approx 4.47, which is in the range.
- The fifth response has \|\nu\|_2 = \sqrt{3} \approx 1.73, which is in the range.
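The same arithmetic settles Example 10; a sketch:

```python
import numpy as np

mu_norm = np.linalg.norm([3.0, 4.0])     # = 5
lo, hi = mu_norm * 0.1, mu_norm * 1.0    # 0.5 <= ||nu|| <= 5 at omega = 30

amplitudes = {
    "y1": [0.5, 0.5, 0.5],
    "y2": [4.0, 3.0, 2.0],
    "y3": [0.1, 0.1, 0.1],
    "y4": [0.0, 4.0, 2.0],
    "y5": [1.0, 1.0, 1.0],
}
in_range = {k: lo <= np.linalg.norm(v) <= hi for k, v in amplitudes.items()}
print(in_range)  # y1, y4, y5 pass; y2 too big, y3 too small
```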

References

[1] Lineare Algebra I/II lecture notes, ETH Zürich.

[2] Karl Johan Åström and Richard M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, Princeton and Oxford, 2009.

[3] Sigurd Skogestad and Ian Postlethwaite. Multivariable Feedback Control: Analysis and Design. John Wiley & Sons, New York, 2001.