Lecture 4: Analysis of MIMO Systems
- Nathan Davidson
1 Norms

The concept of a norm will be extremely useful for evaluating signals and systems quantitatively throughout this course. In the following we present vector norms and matrix norms.

1.1 Vector Norms

Definition 1. A norm on a linear space (V, F) is a function ‖·‖ : V → R₊ such that
(a) ‖v₁ + v₂‖ ≤ ‖v₁‖ + ‖v₂‖ for all v₁, v₂ ∈ V (triangle inequality),
(b) ‖α v‖ = |α| ‖v‖ for all v ∈ V, α ∈ F,
(c) ‖v‖ = 0 ⇔ v = 0.

Remark. Norms are always non-negative: by definition a norm maps into R₊.

Considering x ∈ Cⁿ, i.e. V = Cⁿ, one can define the p-norm as

  ‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p},  p = 1, 2, …

The most common example is the case p = 2, i.e. the Euclidean norm (shortest distance between two points):

  ‖x‖₂ = √( Σ_{i=1}^n |x_i|² ).

Another important example is the infinity norm (largest element of the vector):

  ‖x‖_∞ = max_i |x_i|.

Example 1. You are given the vector x = (1 2 3)ᵀ. Compute the ‖x‖₁, ‖x‖₂ and ‖x‖_∞ norms of x.

Solution. It holds
  ‖x‖₁ = 1 + 2 + 3 = 6,
  ‖x‖₂ = √(1 + 4 + 9) = √14 ≈ 3.742,
  ‖x‖_∞ = max{1, 2, 3} = 3.
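The three vector norms above are easy to evaluate numerically; a minimal sketch in plain Python, using the vector of Example 1:

```python
import math

def p_norm(x, p):
    # p-norm of a vector: (sum |x_i|^p)^(1/p)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def inf_norm(x):
    # infinity norm: the largest element magnitude
    return max(abs(xi) for xi in x)

x = [1, 2, 3]
print(p_norm(x, 1))   # 1-norm: 6.0
print(p_norm(x, 2))   # Euclidean norm: sqrt(14) ~ 3.742
print(inf_norm(x))    # infinity norm: 3
```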
1.2 Matrix Norms

In addition to the norm axioms above, matrix norms fulfill the submultiplicative property

  ‖A B‖ ≤ ‖A‖ ‖B‖.

Considering the linear space (V, F) with V = C^{m×n}, and a given matrix A ∈ C^{m×n}, one can define the Frobenius norm (Euclidean matrix norm) as

  ‖A‖_F = √( Σ_{i=1}^m Σ_{j=1}^n |a_ij|² ),

where a_ij are the elements of A. This can also be written as

  ‖A‖_F = √( tr(A* A) ),

where tr is the trace (i.e. the sum of the eigenvalues, or of the diagonal elements) of the matrix and A* is the Hermitian transpose of A (complex conjugate transpose), i.e.

  A* = (conj(A))ᵀ.

Example 2. For instance, for

  A = ( 1+i  i ; 2  1−i )

it holds

  A* = (conj(A))ᵀ = ( 1−i  2 ; −i  1+i ).

The max matrix norm is the largest element (in magnitude) of the matrix and is defined as

  ‖A‖_max = max_{i=1,…,m; j=1,…,n} |a_ij|.

Matrix Norms as Induced Norms

Matrix norms can also be defined as induced norms.

Definition 2. Let A ∈ C^{m×n}. Then the induced p-norm of the matrix A can be written as

  ‖A‖_p = sup_{x ∈ Cⁿ, x ≠ 0} ‖A x‖_p / ‖x‖_p = sup_{x ∈ Cⁿ, ‖x‖_p = 1} ‖A x‖_p.

Remark. At this point one could ask what the difference between sup and max is. A maximum is the largest number within a set. A sup is the least upper bound of a set: it may or may not be part of the set itself (0 is not part of the set of negative numbers, but it is its sup, because it is the least upper bound). If the sup is part of the set, it is also the max.
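The two Frobenius-norm formulas (element-wise sum vs. trace of A*A) can be cross-checked numerically; a small Python sketch with an arbitrary 2×2 complex matrix (chosen here for illustration, not the one from Example 2):

```python
import math

A = [[1 + 1j, 2], [0, 1 - 1j]]  # arbitrary 2x2 complex matrix

# Frobenius norm as the root of the sum of squared element magnitudes
frob_sum = math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

# Hermitian transpose A* = (conj(A))^T
Astar = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

# trace of A* A (real and non-negative by construction)
tr = sum(sum(Astar[i][k] * A[k][i] for k in range(2)) for i in range(2))
frob_tr = math.sqrt(tr.real)

print(frob_sum, frob_tr)  # both equal sqrt(8) ~ 2.828
```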
Figure 1: Interpretation of the induced norm (u → G → y).

The definition of the induced norm is interesting because it can be interpreted as in Figure 1. In fact, using the induced norm

  ‖G‖_p = sup_{u ≠ 0} ‖G u‖_p / ‖u‖_p,

one can quantify the maximum gain (amplification) of the output vector over all possible input directions at a given frequency. This turns out to be extremely useful for evaluating system interconnections. Referring to Figure 2 and using the submultiplicative property for norms, it holds

  ‖y‖_p = ‖G₂ G₁ u‖_p ≤ ‖G₂‖_p ‖G₁‖_p ‖u‖_p  ⇒  ‖y‖_p / ‖u‖_p ≤ ‖G₂‖_p ‖G₁‖_p.

In words: the input-output gain of a series of systems is upper bounded by the product of the induced matrix norms.

Figure 2: Series interconnection u → G₁ → w → G₂ → y.

Properties of the Euclidean Norm

We can list a few useful properties of the Euclidean norm, intended as induced 2-norm:
(i) If A is square (i.e. m = n), the norm is given by
  ‖A‖₂ = √μ_max,  with μ_max the maximal eigenvalue of A* A.
(ii) If A is orthogonal:
  ‖A‖₂ = 1.
Note that this is the case because orthogonal matrices always have eigenvalues with magnitude 1.
(iii) If A is symmetric (i.e. A = Aᵀ):
  ‖A‖₂ = max_i |λ_i|,
where λ_i are the eigenvalues of A.
4 (iv If A is invertible: A µmin (9 minimal eigenvalue of A A (v If A is invertible and symmetric: A Remark Remember: the matrix A A is always a square matrix 3 Signal Norms min i ( λ i (0 The norms we have seen so far are space measures Temporal norms (signal norms, take into account the variability of signals in time/frequency Let The p-norm is defined as e(t ( e (t e n (t, ei (t C, i,, n ( e(t p ( i p n e i (τ p dτ ( A special case of this is the two-norm, also called euclidean norm, integral square error, energy of a signal: ( e(t n e i (τ dτ (3 The infinity norm is defined as 4 System Norms e(t sup τ i ( max i e i (τ (4 Considering linear, time-invariant, causal systems of the form depicted in Figure, one can write the relation y(t G u(t G(t τu(τdτ The two-norm for the transfer function Ĝ reads ( Ĝ(s π Ĝ(jω dω ( π ( g ij dω π (Ĝ tr (jωĝ(jω dω i,j 4 (5 (6
Remark. Note that this norm is a measure of the combination of the system gains in all directions, over all frequencies. It is not an induced norm, as it does not respect the submultiplicative property.

The infinity norm is

  ‖Ĝ(s)‖_∞ = sup_ω ‖Ĝ(jω)‖₂ = sup_ω σ_max(Ĝ(jω)).

Remark. This norm is a measure of the peak of the maximum singular value, i.e. the biggest amplification the system may produce at any frequency, for any input direction (worst-case scenario). It is an induced norm and respects the submultiplicative property.

2 Singular Value Decomposition (SVD)

The singular value decomposition plays a central role in MIMO frequency response analysis. Let us recall some concepts from the course Lineare Algebra I/II [1]:

2.1 Preliminary Definitions

The induced norm ‖A‖₂ of a matrix describing a linear function

  y = A u

is defined as

  ‖A‖₂ = max_{u ≠ 0} ‖y‖₂/‖u‖₂ = max_{‖u‖₂ = 1} ‖y‖₂.

Recall the first property of the Euclidean norm above, and notice that if A ∈ R^{n×m} it holds A* = Aᵀ. In order to define the SVD we go a step further. Consider a matrix A and the linear function y = A u. It holds

  ‖A‖₂² = max_{‖u‖₂=1} ‖y‖₂² = max_{‖u‖₂=1} (A u)*(A u) = max_{‖u‖₂=1} u* A* A u = max_i μ_i(A* A) = max_i σ_i²,

where σ_i are the singular values of the matrix A. They are defined as

  σ_i = √μ_i,

where μ_i are the eigenvalues of A* A. Combining the two relations above one gets

  σ_min(A) ≤ ‖y‖₂/‖u‖₂ ≤ σ_max(A).
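The two-sided bound σ_min(A) ≤ ‖Au‖₂/‖u‖₂ ≤ σ_max(A) is easy to probe numerically; a NumPy sketch with a random matrix and random input directions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# singular values, returned in descending order
s = np.linalg.svd(A, compute_uv=False)
s_max, s_min = s[0], s[-1]

# every input direction's gain lies between sigma_min and sigma_max
for _ in range(1000):
    u = rng.standard_normal(3)
    gain = np.linalg.norm(A @ u) / np.linalg.norm(u)
    assert s_min - 1e-12 <= gain <= s_max + 1e-12

print(s_min, s_max)
```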
Figure 3: Illustration of the singular values (the unit circle of inputs u is mapped by A to an ellipse of outputs y).

2.2 Singular Value Decomposition

Our goal is to write a general matrix A ∈ C^{p×m} as a product of three matrices U, Σ and V. It holds

  A = U Σ V*,  with U ∈ C^{p×p}, Σ ∈ R^{p×m}, V ∈ C^{m×m}.

Remark. U and V are unitary (orthogonal in the real case), Σ is a rectangular diagonal matrix.

Kochrezept: let A ∈ C^{p×m} be given.

(I) Compute all the eigenvalues and eigenvectors of the matrix

  A* A ∈ C^{m×m}

and sort them as

  μ₁ ≥ μ₂ ≥ … ≥ μ_r > μ_{r+1} = … = μ_m = 0.

(II) Compute an orthonormal basis from the eigenvectors v_i and write it in a matrix as

  V = ( v₁ … v_m ) ∈ C^{m×m}.

(III) We have already found the singular values: they are defined as

  σ_i = √μ_i  for i = 1, …, min{p, m}.

Ordering them from the biggest to the smallest, for p < m one writes Σ as

  Σ = ( diag(σ₁, …, σ_p)  0 ) ∈ R^{p×m},  p < m,
and for p > m as

  Σ = ( diag(σ₁, …, σ_m) ; 0 ) ∈ R^{p×m},  p > m

(zero rows below the diagonal block).

(IV) One finds u₁, …, u_r from

  u_i = (1/σ_i) A v_i  for all i = 1, …, r (i.e. for σ_i ≠ 0).

(V) If r < p one has to complete the basis u₁, …, u_r (with an orthonormal basis, e.g. from Gram–Schmidt) to obtain a full orthonormal basis, with U unitary.

(VI) If you followed the previous steps, you can write

  A = U Σ V*.

Motivation for the computation of Σ, U and V:

  A* A = (U Σ V*)* (U Σ V*) = V Σᵀ U* U Σ V* = V ΣᵀΣ V*.

This is nothing else than the diagonalization of the matrix A* A: the columns of V are the eigenvectors of A* A and the σ_i² its eigenvalues. For U:

  A A* = (U Σ V*) (U Σ V*)* = U Σ V* V Σᵀ U* = U ΣΣᵀ U*.

This is nothing else than the diagonalization of the matrix A A*: the columns of U are the eigenvectors of A A* and the σ_i² its eigenvalues.

Remark. In order to derive the previous two equations we used that:
- the matrix A* A is Hermitian, i.e. (A* A)* = A* A,
- U* U = I (because U is unitary),
- V* V = I (because V is unitary).
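The Kochrezept can be cross-checked against a library SVD; a NumPy sketch with a small test matrix, verifying both that the singular values equal the square roots of the eigenvalues of A*A and that the three factors reproduce A:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])

# library SVD: A = U @ diag(s) @ Vh, with Vh = V*
U, s, Vh = np.linalg.svd(A)

# steps (I)/(III) of the recipe: singular values are the square roots
# of the eigenvalues of A* A, sorted in decreasing order
mu = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
sigma = np.sqrt(mu)

print(np.allclose(s, sigma))                  # True
print(np.allclose(U @ np.diag(s) @ Vh, A))    # True: A = U Sigma V*
```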
Remark. Since the matrix A* A is always Hermitian and positive semidefinite, the singular values are always real (non-negative) numbers.

Remark. The Matlab command for the singular value decomposition is [U,S,V] = svd(A). One can write Aᵀ as transpose(A) and A* as conj(transpose(A)). The two are equivalent for real matrices.

3 Interpretation

Considering the system depicted in Figure 1, one can rewrite the system G as

  G = U Σ V*.

The matrix V is unitary and contains the input directions of the system. The matrix U is unitary as well and contains the output directions of the system (an unfortunate notation clash with the input u). It holds

  G = U Σ V*  ⇔  G V = U Σ  ⇔  G v_i = σ_i u_i,  i = 1, 2, …,

which is similar to an eigenvalue equation. This can be rewritten as

  (1/σ_i) G v_i = u_i.

Exciting the system with the unit-norm input directions, one has

  y₁ = σ₁ u₁, …, y_m = σ_m u_m,

where σ_m is the last (and hence the smallest) singular value. This can be interpreted using Figure 3, interpreting the circle as a unit circle.

Example 3. Let u be

  u = ( cos(x) ; sin(x) ),  with ‖u‖₂ = 1.

The matrix M is given as

  M = ( 2  0 ; 0  1/2 ).

We know that the product of M and u defines a linear function

  y = M u = ( 2  0 ; 0  1/2 ) ( cos(x) ; sin(x) ) = ( 2 cos(x) ; (1/2) sin(x) ).
We need the maximum of ‖y‖₂. In order to avoid square roots, one can use the fact that the x that maximizes ‖y‖₂ also maximizes ‖y‖₂²:

  ‖y‖₂² = 4 cos²(x) + (1/4) sin²(x),

  d(‖y‖₂²)/dx = −8 cos(x) sin(x) + (1/2) sin(x) cos(x) ≐ 0
  ⇒ x_max ∈ { 0, π/2, π, 3π/2 }.

Inserting back one gets

  ‖y‖₂,max = 2 (at x = 0, π),  ‖y‖₂,min = 1/2 (at x = π/2, 3π/2).

The singular values can be calculated with M* M:

  M* M = ( 4  0 ; 0  1/4 )  ⇒  λ_i ∈ {4, 1/4}  ⇒  σ_i ∈ {2, 1/2}.

As stated before, one can see that ‖y‖₂ ∈ [σ_min, σ_max]. The matrix U has the eigenvectors of M M* as columns and the matrix V has the eigenvectors of M* M as columns. In this case M* M = M M*, hence the two matrices are equal. Since their product is a diagonal matrix, one should recall from the theory that the eigenvectors are easy to determine: they are nothing else than the standard basis vectors. This means

  U = ( 1  0 ; 0  1 ),  Σ = ( 2  0 ; 0  1/2 ),  V = ( 1  0 ; 0  1 ).

Interpretation: referring to Figure 4, one can see that the maximal amplification occurs for the input direction v₁ = V(:,1) and has output direction u₁ = U(:,1), i.e. the vector u is doubled (σ_max = 2). The minimal amplification occurs for v₂ = V(:,2) with output direction u₂ = U(:,2), i.e. the vector u is halved (σ_min = 1/2).
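The gains found in Example 3 can be reproduced by scanning the unit circle of input directions; a minimal Python sketch:

```python
import math

# M = diag(2, 1/2) acting on unit vectors u = (cos x, sin x)
def gain(x):
    y = (2.0 * math.cos(x), 0.5 * math.sin(x))
    return math.hypot(y[0], y[1])

xs = [2 * math.pi * k / 100000 for k in range(100000)]
gains = [gain(x) for x in xs]

print(max(gains))  # 2.0: sigma_max, reached along the first basis vector
print(min(gains))  # 0.5: sigma_min, reached along the second basis vector
```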
Figure 4: Illustration of the singular value decomposition (the unit circle of inputs u is mapped to an ellipse of outputs y = M u with semi-axes σ_max and σ_min).

Example 4. Let

  A = ( 3  0 ; 0  4 )

be given. Question: find the singular values of A and write down the matrix Σ.

Solution. Let us compute Aᵀ A:

  Aᵀ A = ( 9  0 ; 0  16 ).

One can see easily that the eigenvalues are

  λ₁ = 16,  λ₂ = 9.

The singular values are

  σ₁ = 4,  σ₂ = 3.

One writes in this case

  Σ = ( 4  0 ; 0  3 ).
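This kind of hand calculation can be checked in one line with a library SVD (shown here in NumPy rather than Matlab's svd):

```python
import numpy as np

A = np.array([[3.0, 0.0], [0.0, 4.0]])

# singular values are returned sorted from biggest to smallest
sigma = np.linalg.svd(A, compute_uv=False)
print(sigma)  # [4. 3.]  ->  Sigma = diag(4, 3)
```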
Example 5. A transfer function G(s) is given as

  G(s) = ( 1/(s+3)  (s+1)/(s+3) ; (s+1)/(s+3)  1/(s+3) ).

Find the singular values of G(s) at ω = 1 rad/s.

Solution. The transfer function G(s) evaluated at ω = 1 rad/s has the form

  G(j) = ( 1/(j+3)  (j+1)/(j+3) ; (j+1)/(j+3)  1/(j+3) ).

In order to calculate the singular values, we have to compute the eigenvalues of H = G*(j) G(j):

  H = G*(j) G(j) = ( 0.3  0.2 ; 0.2  0.3 ).

For the eigenvalues it holds

  det(H − λ I) = det( 0.3−λ  0.2 ; 0.2  0.3−λ ) = (0.3 − λ)² − 0.04 = λ² − 0.6 λ + 0.05 = 0.

It follows

  λ₁ = 0.5,  λ₂ = 0.1,

and so

  σ₁ = √0.5 ≈ 0.707,  σ₂ = √0.1 ≈ 0.316.
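Example 5's singular values at ω = 1 rad/s can be cross-checked numerically; a NumPy sketch:

```python
import numpy as np

j = 1j
# G(s) evaluated at s = j (omega = 1 rad/s)
G = np.array([[1 / (j + 3), (j + 1) / (j + 3)],
              [(j + 1) / (j + 3), 1 / (j + 3)]])

H = G.conj().T @ G                    # H = G* G, Hermitian
lam = np.sort(np.linalg.eigvalsh(H))  # eigenvalues, ascending
sigma = np.sqrt(lam)

print(lam)    # ~ [0.1, 0.5]
print(sigma)  # ~ [0.316, 0.707]
```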
Example 6. Let

  A = ( 1  2 ; 0  1 ),  B = ( 1  j )

be given. Find the singular values of the two matrices.

Solution. Let us begin with matrix A. It holds

  H = A* A = ( 1  2 ; 2  5 ).

In order to find the eigenvalues of H we compute

  det(H − λ I) = det( 1−λ  2 ; 2  5−λ ) = (1 − λ)(5 − λ) − 4 = λ² − 6 λ + 1 = 0.

This means that the eigenvalues are

  λ₁ = 3 + 2√2,  λ₂ = 3 − 2√2.

The singular values are then

  σ₁ = √(3 + 2√2) = 1 + √2 ≈ 2.414,  σ₂ = √(3 − 2√2) = √2 − 1 ≈ 0.414.

Let us look at matrix B. It holds

  F = B* B = ( 1 ; −j ) ( 1  j ) = ( 1  j ; −j  1 ).

In order to find the eigenvalues of F we compute

  det(F − λ I) = det( 1−λ  j ; −j  1−λ ) = (1 − λ)² − 1 = λ² − 2 λ = λ (λ − 2).
This means that the eigenvalues are

  λ₁ = 0,  λ₂ = 2.

The singular values are then

  σ₁ = 0,  σ₂ = √2.

4 Directions of poles and zeros

4.1 Directions of zeros

Assume a system G(s) has a zero at s = z. Then it must hold

  G(z) u_z = 0,  y_z* G(z) = 0,

where u_z is the input zero direction and y_z is the output zero direction. Furthermore, it holds

  ‖u_z‖₂ = 1,  ‖y_z‖₂ = 1.

4.2 Directions of poles

Assume a system G(s) has a pole at s = p. Then it must hold

  G(p) u_p = ∞,  y_p* G(p) = ∞,

where u_p is the input pole direction and y_p is the output pole direction. Furthermore, it holds

  ‖u_p‖₂ = 1,  ‖y_p‖₂ = 1.

Remark. In both cases, if the zero/pole makes the calculation unfeasible, one considers feasible variations, i.e. z + ε, p + ε.

5 Frequency Responses

As we learned for SISO systems, if one excites a system with a harmonic signal

  u(t) = h(t) cos(ω t)

(h(t) the unit step), the response after a long time is still a harmonic function with the same frequency ω:

  y_∞(t) = |P(jω)| cos( ω t + ∠P(jω) ).

One can generalize this and apply it to MIMO systems. With the assumption p = m, i.e. an equal number of inputs and outputs, one excites the system with

  u(t) = h(t) ( μ₁ cos(ω t + φ₁) ; … ; μ_m cos(ω t + φ_m) )
and gets

  y_∞(t) = ( ν₁ cos(ω t + ψ₁) ; … ; ν_m cos(ω t + ψ_m) ).

Let us define two diagonal matrices and two vectors

  Φ = diag(φ₁, …, φ_m) ∈ R^{m×m},  Ψ = diag(ψ₁, …, ψ_m) ∈ R^{m×m},
  μ = ( μ₁ … μ_m )ᵀ,  ν = ( ν₁ … ν_m )ᵀ.

With these one can compute the Laplace transforms of the two signals as

  U(s) = e^{Φ s/ω} μ · s/(s² + ω²)  and  Y(s) = e^{Ψ s/ω} ν · s/(s² + ω²).

With the general equation for a system one gets

  Y(s) = P(s) U(s)
  ⇒ e^{Ψ s/ω} ν · s/(s² + ω²) = P(s) e^{Φ s/ω} μ · s/(s² + ω²)
  ⇒ e^{Ψ s/ω} ν = P(s) e^{Φ s/ω} μ,

and, evaluated at s = jω,

  e^{Ψ j} ν = P(jω) e^{Φ j} μ.

We then recall the induced norm for the matrix of a linear transformation y = A u. Here it holds

  ‖P(jω)‖₂ = max_{e^{Φj} μ ≠ 0} ‖e^{Ψj} ν‖₂ / ‖e^{Φj} μ‖₂.

Since

  ‖e^{Φj} μ‖₂ = ‖μ‖₂  and  ‖e^{Ψj} ν‖₂ = ‖ν‖₂

(the diagonal matrices e^{Φj} and e^{Ψj} are unitary), one gets

  ‖P(jω)‖₂ = max_{μ ≠ 0} ‖ν‖₂ / ‖μ‖₂.

Here one should get a feeling for why we introduced the singular value decomposition. From the theory we have learned, it is clear that for ‖μ‖₂ = 1

  σ_min(P(jω)) ≤ ‖ν‖₂ ≤ σ_max(P(jω)),
and for general ‖μ‖₂

  σ_min(P(jω)) ‖μ‖₂ ≤ ‖ν‖₂ ≤ σ_max(P(jω)) ‖μ‖₂,

with σ_i the singular values of P(jω). These are worst-case ranges, and it is important to notice that there is no exact formula ν = f(μ).

5.1 Maximal and minimal Gain

You are given a singular value decomposition

  P(jω*) = U Σ V*.

One can read several pieces of information out of this decomposition: the maximal/minimal gain will be reached with an excitation in the direction of the column vectors of V. The response of the system will then be in the direction of the column vectors of U. Let us look at an example and try to understand how to use this information.

Example 7. We consider a system with m = 2 inputs and p = 3 outputs. We are given its singular value decomposition at ω = 5 rad/s: the singular values are σ_max = 0.467 and σ_min = 0.063, and the complex matrices V ∈ C^{2×2} and U ∈ C^{3×3} contain the corresponding input and output directions. For the singular value σ_max = 0.467 the relevant directions are the first columns, v₁ = V(:,1) and u₁ = U(:,1). The maximal gain is then reached with an excitation along v₁, i.e.

  u(t) = ( |v₁₁| cos(5 t + ∠v₁₁) ; |v₂₁| cos(5 t + ∠v₂₁) ),

and the response of the system is then

  y(t) = σ_max ( |u₁₁| cos(5 t + ∠u₁₁) ; |u₂₁| cos(5 t + ∠u₂₁) ; |u₃₁| cos(5 t + ∠u₃₁) ).

Since the three signals y₁(t), y₂(t) and y₃(t) are not in phase, the maximal gain will never be exactly reached. One can show that

  max_t ‖y(t)‖₂ = 0.460 < 0.467 = σ_max.

The reason for this difference lies in the phase deviation between y₁(t), y₂(t) and y₃(t). The same analysis can be carried out for σ_min.
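The claim of Section 5.1 — that the largest gain is attained for an input along V(:,1), with the output along U(:,1) — holds for any matrix and can be checked numerically; a NumPy sketch with a random complex 3×2 "frequency response":

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

U, s, Vh = np.linalg.svd(P)   # P = U @ diag-ish(s) @ Vh, Vh = V*
v1 = Vh[0].conj()             # first input direction V(:,1)
u1 = U[:, 0]                  # first output direction U(:,1)

# excitation along v1 is amplified by sigma_max, in the direction u1
print(np.allclose(P @ v1, s[0] * u1))          # True
# and no input direction does better than sigma_max
print(np.linalg.norm(P @ v1) <= s[0] + 1e-12)  # True
```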
Example 8. Given the MIMO system

  P(s) = ( 1/(s+3)  1/(s+1) ; 1/(s+1)  3/(s+1) ).

Starting at t = 0, the system is excited with the following input signal:

  u(t) = ( cos(t) ; μ₂ cos(t + φ₂) ).

Find the parameters φ₂ and μ₂ such that, in steady-state conditions, the output signal

  y(t) = ( y₁(t) ; y₂(t) )

has y₁(t) equal to zero.

Solution. For a system excited with a harmonic input signal

  u(t) = ( μ₁ cos(ω t + φ₁) ; μ₂ cos(ω t + φ₂) ),

the output signal y(t), after a transient phase, will also be a harmonic signal and hence have the form

  y(t) = ( ν₁ cos(ω t + ψ₁) ; ν₂ cos(ω t + ψ₂) ).

As we have learned, it holds

  e^{Ψj} ν = P(jω) e^{Φj} μ,

i.e.

  ( e^{jψ₁} ν₁ ; e^{jψ₂} ν₂ ) = ( P₁₁(jω)  P₁₂(jω) ; P₂₁(jω)  P₂₂(jω) ) ( e^{jφ₁} μ₁ ; e^{jφ₂} μ₂ ).

For the first component one gets

  e^{jψ₁} ν₁ = P₁₁(jω) e^{jφ₁} μ₁ + P₁₂(jω) e^{jφ₂} μ₂.

For y₁(t) = 0 to hold we must have ν₁ = 0. In the given case, some parameters can be read off directly from the signals:

  μ₁ = 1,  φ₁ = 0,  ω = 1 rad/s.

With the given transfer functions one gets

  0 = 1/(j+3) + (1/(j+1)) μ₂ ( cos(φ₂) + j sin(φ₂) ),

and, multiplying by (j+3)(j+1),

  0 = (1 + j) + (3 + j) μ₂ ( cos(φ₂) + j sin(φ₂) ).
Splitting into real and imaginary parts, one gets two equations that are easily solvable:

  1 + μ₂ ( 3 cos(φ₂) − sin(φ₂) ) = 0,
  1 + μ₂ ( cos(φ₂) + 3 sin(φ₂) ) = 0.

Subtracting and adding the two equations one can reach two better equations:

  μ₂ sin(φ₂) = −1/5,  μ₂ cos(φ₂) = −2/5.

One of the solutions (the others follow by periodicity) reads

  μ₂ = √5/5 = 1/√5,  φ₂ = arctan(1/2) + π.

Example 9. A linear time-invariant MIMO system with a 2×2 transfer function P(s) is excited with the signal

  u(t) = ( μ₁ cos(ω t + φ₁) ; μ₂ cos(ω t + φ₂) ).

Because we bought a cheap signal generator, we cannot know the constants μ₁,₂ and φ₁,₂ exactly. A friend of yours found out with some measurements that the excitation frequency is ω = 1 rad/s. The cheap generator cannot produce signals with a magnitude of μ bigger than 10, i.e.

  ‖μ‖₂ = √(μ₁² + μ₂²) ≤ 10,

and it always works at maximal power, i.e. at ‖μ‖₂ = 10. Choose all possible responses of the system after infinite time:

  y⁽¹⁾(t) = ( 5 sin(t + 0.4) ; cos(t) )
  y⁽²⁾(t) = ( 5 sin(t + 0.4) ; cos(2 t) )
  y⁽³⁾(t) = ( sin(t) ; sin(t) )
  y⁽⁴⁾(t) = ( 10 cos(t + 0.4) ; 6 cos(t) )
  y⁽⁵⁾(t) = ( 5 cos(t + 0.4) ; 5 cos(t) )
  y⁽⁶⁾(t) = ( 5 sin(t + 1.4) ; sin(t + 3.4) )
Solution. The possible responses are the first, the fifth and the sixth.

Explanation. We have to compute the singular values of the matrix P(j). These are

  σ_max = 0.8305,  σ_min = 0.3863.

With what we have learned it follows

  10 σ_min = 3.863 ≤ ‖ν‖₂ ≤ 8.305 = 10 σ_max.

The first response has ‖ν‖₂ = √26 ≈ 5.1, which is in this range. The second response also has ‖ν‖₂ = √26, but the frequency of its second element changes, and that is not possible for a linear time-invariant system. The third response has ‖ν‖₂ = √2 ≈ 1.4, which is too small to be in the range. The fourth response has ‖ν‖₂ = √136 ≈ 11.7, which is too big to be in the range. The fifth response has ‖ν‖₂ = √50 ≈ 7.1, which is in the range. The sixth response has ‖ν‖₂ = √26 ≈ 5.1, which is in the range.
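The reasoning of this example can be packaged into a small helper that applies the bound ‖μ‖₂ σ_min ≤ ‖ν‖₂ ≤ ‖μ‖₂ σ_max; a sketch (the function name and structure are my own, not from the lecture):

```python
import math

def response_feasible(nu_amplitudes, mu_norm, s_min, s_max):
    # a steady-state response amplitude vector nu is only feasible if
    # its norm lies in the worst-case gain range [mu*s_min, mu*s_max]
    nu = math.sqrt(sum(a * a for a in nu_amplitudes))
    return mu_norm * s_min <= nu <= mu_norm * s_max

# Example 9 numbers: ||mu|| = 10, sigma in [0.3863, 0.8305]
print(response_feasible((5, 1), 10, 0.3863, 0.8305))   # True  (||nu|| ~ 5.1)
print(response_feasible((1, 1), 10, 0.3863, 0.8305))   # False (too small)
print(response_feasible((10, 6), 10, 0.3863, 0.8305))  # False (too big)
```

Note that the helper checks amplitudes only; a change of frequency, as in the second response above, rules an answer out regardless of its amplitude.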
Example 10. A linear time-invariant MIMO system with 2 inputs and 3 outputs is excited with the input

  u(t) = ( 3 sin(30 t) ; 4 cos(30 t) ).

You forgot your PC and you don't know the transfer function of the system. Before coming to school, however, you saved the Matlab plot of the singular values of the system on your phone (see Figure 5). Choose all the possible responses of the system.

Figure 5: Singular values of the system versus frequency.

  y⁽¹⁾(t) = ( 0.5 sin(30 t) ; 0.5 cos(30 t) ; 0.5 cos(30 t + 0.4) )
  y⁽²⁾(t) = ( 2 sin(30 t) ; 3 cos(30 t) ; 4 cos(30 t + 0.4) )
  y⁽³⁾(t) = ( 0.1 sin(30 t) ; 0.1 cos(30 t) ; 0.1 cos(30 t + 0.4) )
  y⁽⁴⁾(t) = ( 0 ; 4 cos(30 t) ; 2 cos(30 t + 0.4) )
  y⁽⁵⁾(t) = ( cos(30 t) ; cos(30 t + 0.4) ; cos(30 t + 0.5) )
Solution. The possible responses are the first, the fourth and the fifth.

Explanation. From the given input one can read

  ‖μ‖₂ = √(3² + 4²) = 5.

From the plot one can read, at ω = 30 rad/s, σ_min = 0.1 and σ_max = 1. It follows

  5 σ_min = 0.5 ≤ ‖ν‖₂ ≤ 5 = 5 σ_max.

The first response has ‖ν‖₂ = √0.75 ≈ 0.87, which is in the range. The second response has ‖ν‖₂ = √29 ≈ 5.4, which is too big to be in the range. The third response has ‖ν‖₂ = √0.03 ≈ 0.17, which is too small to be in the range. The fourth response has ‖ν‖₂ = √20 ≈ 4.5, which is in the range. The fifth response has ‖ν‖₂ = √3 ≈ 1.7, which is in the range.
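The phasor relation e^{jΨ} ν = P(jω) e^{jΦ} μ used throughout this section can be exercised end-to-end on a concrete plant; a NumPy sketch (this 2×2 P(s) is an illustrative choice of mine, not a plant from the examples above):

```python
import numpy as np

def P(s):
    # illustrative 2x2 transfer matrix
    return np.array([[1 / (s + 1), 1 / (s + 2)],
                     [2 / (s + 3), 1 / (s + 1)]])

w = 1.0
mu = np.array([1.0, 2.0])   # input amplitudes
phi = np.array([0.0, 0.5])  # input phases

# complex input phasors e^{j phi} mu mapped to output phasors e^{j psi} nu
u_ph = mu * np.exp(1j * phi)
y_ph = P(1j * w) @ u_ph

nu = np.abs(y_ph)      # output amplitudes
psi = np.angle(y_ph)   # output phases
print(nu, psi)

# the output amplitude vector respects the singular-value bounds
s = np.linalg.svd(P(1j * w), compute_uv=False)
n = np.linalg.norm(nu)
print(s[-1] * np.linalg.norm(mu) <= n <= s[0] * np.linalg.norm(mu))  # True
```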
References
[1] Lineare Algebra I/II lecture notes, ETH Zurich.
[2] Karl Johan Åström, Richard M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, Princeton and Oxford, 2009.
[3] Sigurd Skogestad, Ian Postlethwaite. Multivariable Feedback Control: Analysis and Design. John Wiley and Sons, New York, 2001.
Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationSection 3.9. Matrix Norm
3.9. Matrix Norm 1 Section 3.9. Matrix Norm Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix
More information12.4 Known Channel (Water-Filling Solution)
ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity
More informationIdentification Methods for Structural Systems
Prof. Dr. Eleni Chatzi System Stability Fundamentals Overview System Stability Assume given a dynamic system with input u(t) and output x(t). The stability property of a dynamic system can be defined from
More informationHomework 1. Yuan Yao. September 18, 2011
Homework 1 Yuan Yao September 18, 2011 1. Singular Value Decomposition: The goal of this exercise is to refresh your memory about the singular value decomposition and matrix norms. A good reference to
More informationExtreme Values and Positive/ Negative Definite Matrix Conditions
Extreme Values and Positive/ Negative Definite Matrix Conditions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 8, 016 Outline 1
More informationMath Fall Final Exam
Math 104 - Fall 2008 - Final Exam Name: Student ID: Signature: Instructions: Print your name and student ID number, write your signature to indicate that you accept the honor code. During the test, you
More informationInformation Retrieval
Introduction to Information CS276: Information and Web Search Christopher Manning and Pandu Nayak Lecture 13: Latent Semantic Indexing Ch. 18 Today s topic Latent Semantic Indexing Term-document matrices
More information21 Linear State-Space Representations
ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may
More informationDeep Learning Book Notes Chapter 2: Linear Algebra
Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic
More information4.2. ORTHOGONALITY 161
4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance
More informationBackground Mathematics (2/2) 1. David Barber
Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and
More informationCS168: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA)
CS68: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA) Tim Roughgarden & Gregory Valiant April 0, 05 Introduction. Lecture Goal Principal components analysis
More informationLecture 6. Numerical methods. Approximation of functions
Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation
More informationGATE EE Topic wise Questions SIGNALS & SYSTEMS
www.gatehelp.com GATE EE Topic wise Questions YEAR 010 ONE MARK Question. 1 For the system /( s + 1), the approximate time taken for a step response to reach 98% of the final value is (A) 1 s (B) s (C)
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally
More informationThe University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.
The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationNOTES ON LINEAR ALGEBRA CLASS HANDOUT
NOTES ON LINEAR ALGEBRA CLASS HANDOUT ANTHONY S. MAIDA CONTENTS 1. Introduction 2 2. Basis Vectors 2 3. Linear Transformations 2 3.1. Example: Rotation Transformation 3 4. Matrix Multiplication and Function
More informationEE363 homework 7 solutions
EE363 Prof. S. Boyd EE363 homework 7 solutions 1. Gain margin for a linear quadratic regulator. Let K be the optimal state feedback gain for the LQR problem with system ẋ = Ax + Bu, state cost matrix Q,
More informationLinear Algebra Review. Fei-Fei Li
Linear Algebra Review Fei-Fei Li 1 / 51 Vectors Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationInner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n
Inner products and Norms Inner product of 2 vectors Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y 2 + + x n y n in R n Notation: (x, y) or y T x For complex vectors (x, y) = x 1 ȳ 1 + x 2
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationDiscrete and continuous dynamic systems
Discrete and continuous dynamic systems Bounded input bounded output (BIBO) and asymptotic stability Continuous and discrete time linear time-invariant systems Katalin Hangos University of Pannonia Faculty
More informationLECTURE NOTE #10 PROF. ALAN YUILLE
LECTURE NOTE #10 PROF. ALAN YUILLE 1. Principle Component Analysis (PCA) One way to deal with the curse of dimensionality is to project data down onto a space of low dimensions, see figure (1). Figure
More informationComputational math: Assignment 1
Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange
More informationROBUST STABILITY AND PERFORMANCE ANALYSIS* [8 # ]
ROBUST STABILITY AND PERFORMANCE ANALYSIS* [8 # ] General control configuration with uncertainty [8.1] For our robustness analysis we use a system representation in which the uncertain perturbations are
More informationAPPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of
CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,
More informationSpring, 2012 CIS 515. Fundamentals of Linear Algebra and Optimization Jean Gallier
Spring 0 CIS 55 Fundamentals of Linear Algebra and Optimization Jean Gallier Homework 5 & 6 + Project 3 & 4 Note: Problems B and B6 are for extra credit April 7 0; Due May 7 0 Problem B (0 pts) Let A be
More informationSome solutions of the written exam of January 27th, 2014
TEORIA DEI SISTEMI Systems Theory) Prof. C. Manes, Prof. A. Germani Some solutions of the written exam of January 7th, 0 Problem. Consider a feedback control system with unit feedback gain, with the following
More informationAndrea Zanchettin Automatic Control AUTOMATIC CONTROL. Andrea M. Zanchettin, PhD Spring Semester, Linear systems (frequency domain)
1 AUTOMATIC CONTROL Andrea M. Zanchettin, PhD Spring Semester, 2018 Linear systems (frequency domain) 2 Motivations Consider an LTI system Thanks to the Lagrange s formula we can compute the motion of
More informationEigenvectors and Hermitian Operators
7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding
More informationM.A.P. Matrix Algebra Procedures. by Mary Donovan, Adrienne Copeland, & Patrick Curry
M.A.P. Matrix Algebra Procedures by Mary Donovan, Adrienne Copeland, & Patrick Curry This document provides an easy to follow background and review of basic matrix definitions and algebra. Because population
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More informationMATH 612 Computational methods for equation solving and function minimization Week # 2
MATH 612 Computational methods for equation solving and function minimization Week # 2 Instructor: Francisco-Javier Pancho Sayas Spring 2014 University of Delaware FJS MATH 612 1 / 38 Plan for this week
More informationMath for ML: review. CS 1675 Introduction to ML. Administration. Lecture 2. Milos Hauskrecht 5329 Sennott Square, x4-8845
CS 75 Introduction to ML Lecture Math for ML: review Milos Hauskrecht milos@cs.pitt.edu 5 Sennott Square, x4-45 people.cs.pitt.edu/~milos/courses/cs75/ Administration Instructor: Prof. Milos Hauskrecht
More information6.241 Dynamic Systems and Control
6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May
More informationLecture 1: Systems of linear equations and their solutions
Lecture 1: Systems of linear equations and their solutions Course overview Topics to be covered this semester: Systems of linear equations and Gaussian elimination: Solving linear equations and applications
More information