
Lectures on Dynamic Systems and Control
Mohammed Dahleh, Munther A. Dahleh, George Verghese
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Chapter 6

Balanced Realization

6.1 Introduction

One popular approach for obtaining a minimal realization is known as Balanced Realization. In this approach, a new state-space description is obtained so that the reachability and observability gramians are diagonalized. This defines a new set of invariant parameters known as Hankel singular values. This approach plays a major role in model reduction, which will be highlighted in this chapter.

6.2 Balanced Realization

Let us start with a system $G$ with minimal realization
$$G : \left[\begin{array}{c|c} A & B \\ \hline C & \end{array}\right].$$
As we have seen in an earlier lecture, the controllability gramian $P$ and the observability gramian $Q$ are obtained as solutions to the following Lyapunov equations:
$$AP + PA' + BB' = 0, \qquad A'Q + QA + C'C = 0.$$
$P$ and $Q$ are symmetric, and since the realization is minimal they are also positive definite. The eigenvalues of the product of the controllability and observability gramians play an important role in system theory and control. We define the Hankel singular values, $\sigma_i$, as the square roots of the eigenvalues of $PQ$:
$$\sigma_i \triangleq \left(\lambda_i(PQ)\right)^{1/2}.$$
We would like to obtain a coordinate transformation, $T$, that results in a realization for which the controllability and observability gramians are equal and diagonal. The diagonal entries of the transformed controllability and observability gramians will be the Hankel singular values. With the coordinate transformation $T$, the new system realization is given by
$$\hat{G} : \left[\begin{array}{c|c} T^{-1}AT & T^{-1}B \\ \hline CT & \end{array}\right] = \left[\begin{array}{c|c} \hat{A} & \hat{B} \\ \hline \hat{C} & \end{array}\right],$$

and the Lyapunov equations in the new coordinates are given by
$$\hat{A}(T^{-1}PT^{-\prime}) + (T^{-1}PT^{-\prime})\hat{A}' + \hat{B}\hat{B}' = 0,$$
$$\hat{A}'(T'QT) + (T'QT)\hat{A} + \hat{C}'\hat{C} = 0.$$
Therefore the controllability and observability gramians in the new coordinate system are given by
$$\hat{P} = T^{-1}PT^{-\prime}, \qquad \hat{Q} = T'QT.$$
We are looking for a transformation $T$ such that
$$\hat{P} = \hat{Q} = \Sigma = \begin{pmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{pmatrix}.$$
We have the relation
$$(T^{-1}PT^{-\prime})(T'QT) = T^{-1}PQT = \Sigma^2. \tag{6.1}$$
Since $Q = Q'$ and is positive definite, we can factor it as $Q = R'R$, where $R$ is an invertible matrix. We can write equation (6.1) as $T^{-1}PR'RT = \Sigma^2$, which is equivalent to
$$(RT)^{-1}RPR'(RT) = \Sigma^2. \tag{6.2}$$
Equation (6.2) means that $RPR'$ is similar to $\Sigma^2$ and is positive definite. Therefore, there exists an orthogonal transformation $U$, $U'U = I$, such that
$$RPR' = U\Sigma^2U'. \tag{6.3}$$
By setting $(RT)\Sigma^{-1/2}U' = I$, that is $RT = U\Sigma^{1/2}$, we arrive at a definition for $T$ and $T^{-1}$ as
$$T = R^{-1}U\Sigma^{1/2}, \qquad T^{-1} = \Sigma^{-1/2}U'R.$$
With this transformation it follows that
$$\hat{P} = (\Sigma^{-1/2}U'R)P(R'U\Sigma^{-1/2}) = (\Sigma^{-1/2}U')(U\Sigma^2U')(U\Sigma^{-1/2}) = \Sigma$$
and
$$\hat{Q} = (R^{-1}U\Sigma^{1/2})'(R'R)(R^{-1}U\Sigma^{1/2}) = (\Sigma^{1/2}U')(R^{-\prime}R'RR^{-1})(U\Sigma^{1/2}) = \Sigma.$$
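The construction above translates directly into a few lines of numerical code. The following is a minimal sketch using NumPy/SciPy; the system matrices are an arbitrary stable, minimal example chosen for illustration, not taken from the notes.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, eigh

# An arbitrary stable, minimal (controllable and observable) example.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 1.0]])

# Gramians: A P + P A' + B B' = 0 and A' Q + Q A + C' C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of P Q.
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]

# Factor Q = R' R, diagonalize R P R' = U Sigma^2 U' (U orthogonal),
# and set T = R^{-1} U Sigma^{1/2}.
R = cholesky(Q)                  # upper triangular, Q = R' R
w, U = eigh(R @ P @ R.T)         # eigenvalues of R P R' are Sigma^2
w, U = w[::-1], U[:, ::-1]       # sort in decreasing order
T = np.linalg.solve(R, U) @ np.diag(w ** 0.25)
Tinv = np.linalg.inv(T)

# The transformed gramians are equal, diagonal, and carry the HSVs.
P_hat = Tinv @ P @ Tinv.T
Q_hat = T.T @ Q @ T
assert np.allclose(P_hat, Q_hat)
assert np.allclose(np.diag(P_hat), hsv)
```

Note that `solve_continuous_lyapunov(A, W)` solves $AX + XA' = W$, so the negated products $-BB'$ and $-C'C$ are passed on the right-hand side.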

6.3 Model Reduction by Balanced Truncation

Suppose we start with a system
$$G : \left[\begin{array}{c|c} A & B \\ \hline C & \end{array}\right],$$
where $A$ is asymptotically stable. Suppose $T$ is the transformation that converts the above realization to a balanced realization
$$\hat{G} : \left[\begin{array}{c|c} \hat{A} & \hat{B} \\ \hline \hat{C} & \end{array}\right],$$
with $\hat{P} = \hat{Q} = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$. In many applications it may be beneficial to consider only the subsystem of $\hat{G}$ that corresponds to the Hankel singular values that are larger than a certain small constant. For that reason, suppose we partition $\Sigma$ as
$$\Sigma = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix},$$
where $\Sigma_2$ contains the small Hankel singular values. We can partition the realization of $\hat{G}$ accordingly as
$$\hat{G} : \left[\begin{array}{cc|c} \hat{A}_{11} & \hat{A}_{12} & \hat{B}_1 \\ \hat{A}_{21} & \hat{A}_{22} & \hat{B}_2 \\ \hline \hat{C}_1 & \hat{C}_2 & \end{array}\right].$$
Recall that the following Lyapunov equations hold:
$$\hat{A}\Sigma + \Sigma\hat{A}' + \hat{B}\hat{B}' = 0, \qquad \hat{A}'\Sigma + \Sigma\hat{A} + \hat{C}'\hat{C} = 0,$$
which can be expanded as
$$\begin{pmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{pmatrix}\begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix} + \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix}\begin{pmatrix} \hat{A}_{11}' & \hat{A}_{21}' \\ \hat{A}_{12}' & \hat{A}_{22}' \end{pmatrix} + \begin{pmatrix} \hat{B}_1\hat{B}_1' & \hat{B}_1\hat{B}_2' \\ \hat{B}_2\hat{B}_1' & \hat{B}_2\hat{B}_2' \end{pmatrix} = 0,$$
$$\begin{pmatrix} \hat{A}_{11}' & \hat{A}_{21}' \\ \hat{A}_{12}' & \hat{A}_{22}' \end{pmatrix}\begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix} + \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix}\begin{pmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{pmatrix} + \begin{pmatrix} \hat{C}_1'\hat{C}_1 & \hat{C}_1'\hat{C}_2 \\ \hat{C}_2'\hat{C}_1 & \hat{C}_2'\hat{C}_2 \end{pmatrix} = 0.$$
From the above two matrix equations we get the following set of equations:
$$\hat{A}_{11}\Sigma_1 + \Sigma_1\hat{A}_{11}' + \hat{B}_1\hat{B}_1' = 0, \tag{6.4}$$
$$\hat{A}_{21}\Sigma_1 + \Sigma_2\hat{A}_{12}' + \hat{B}_2\hat{B}_1' = 0, \tag{6.5}$$
$$\hat{A}_{22}\Sigma_2 + \Sigma_2\hat{A}_{22}' + \hat{B}_2\hat{B}_2' = 0, \tag{6.6}$$
$$\hat{A}_{11}'\Sigma_1 + \Sigma_1\hat{A}_{11} + \hat{C}_1'\hat{C}_1 = 0, \tag{6.7}$$

$$\hat{A}_{12}'\Sigma_1 + \Sigma_2\hat{A}_{21} + \hat{C}_2'\hat{C}_1 = 0, \tag{6.8}$$
$$\hat{A}_{22}'\Sigma_2 + \Sigma_2\hat{A}_{22} + \hat{C}_2'\hat{C}_2 = 0. \tag{6.9}$$
From this decomposition we can extract two subsystems:
$$G_1 : \left[\begin{array}{c|c} \hat{A}_{11} & \hat{B}_1 \\ \hline \hat{C}_1 & \end{array}\right], \qquad G_2 : \left[\begin{array}{c|c} \hat{A}_{22} & \hat{B}_2 \\ \hline \hat{C}_2 & \end{array}\right].$$

Theorem 6.1 Suppose $G$ is an asymptotically stable system. If $\Sigma_1$ and $\Sigma_2$ do not have any common diagonal elements, then $G_1$ and $G_2$ are asymptotically stable.

Proof: Let us show that the subsystem
$$G_1 : \left[\begin{array}{c|c} \hat{A}_{11} & \hat{B}_1 \\ \hline \hat{C}_1 & \end{array}\right]$$
is asymptotically stable. Since $\hat{A}_{11}$ satisfies the Lyapunov equation
$$\hat{A}_{11}\Sigma_1 + \Sigma_1\hat{A}_{11}' + \hat{B}_1\hat{B}_1' = 0,$$
it immediately follows that all the eigenvalues of $\hat{A}_{11}$ must lie in the closed left half of the complex plane; that is, $\mathrm{Re}\,\lambda_i(\hat{A}_{11}) \le 0$. In order to show asymptotic stability we must show that $\hat{A}_{11}$ has no purely imaginary eigenvalues. Suppose $j\omega$ is an eigenvalue of $\hat{A}_{11}$, and let $v$ be an eigenvector associated with $j\omega$, so that $(\hat{A}_{11} - j\omega I)v = 0$. Assume that the kernel of $(\hat{A}_{11} - j\omega I)$ is one-dimensional. The general case, where there may be several independent eigenvectors associated with $j\omega$, can be handled by a slight modification of the argument presented here. Equation (6.7) can be written as
$$(\hat{A}_{11}' + j\omega I)\Sigma_1 + \Sigma_1(\hat{A}_{11} - j\omega I) + \hat{C}_1'\hat{C}_1 = 0.$$
By multiplying the above equation by $v$ on the right and $v^*$ on the left, and noting that $v^*(\hat{A}_{11}' + j\omega I) = ((\hat{A}_{11} - j\omega I)v)^* = 0$, we get
$$v^*(\hat{A}_{11}' + j\omega I)\Sigma_1 v + v^*\Sigma_1(\hat{A}_{11} - j\omega I)v + v^*\hat{C}_1'\hat{C}_1 v = 0,$$
which implies that $(\hat{C}_1 v)^*(\hat{C}_1 v) = 0$, and this in turn implies that
$$\hat{C}_1 v = 0. \tag{6.10}$$
Again from equation (6.7) we get
$$(\hat{A}_{11}' + j\omega I)\Sigma_1 v + \Sigma_1(\hat{A}_{11} - j\omega I)v + \hat{C}_1'\hat{C}_1 v = 0,$$
which implies that
$$(\hat{A}_{11}' + j\omega I)\Sigma_1 v = 0. \tag{6.11}$$
Now we multiply equation (6.4), written as
$$(\hat{A}_{11} - j\omega I)\Sigma_1 + \Sigma_1(\hat{A}_{11}' + j\omega I) + \hat{B}_1\hat{B}_1' = 0,$$
from the right by $\Sigma_1 v$ and from the left by $v^*\Sigma_1$ to get
$$v^*\Sigma_1(\hat{A}_{11} - j\omega I)\Sigma_1^2 v + v^*\Sigma_1^2(\hat{A}_{11}' + j\omega I)\Sigma_1 v + v^*\Sigma_1\hat{B}_1\hat{B}_1'\Sigma_1 v = 0.$$

The first two terms vanish by equation (6.11), since $v^*\Sigma_1(\hat{A}_{11} - j\omega I) = ((\hat{A}_{11}' + j\omega I)\Sigma_1 v)^* = 0$. This implies that $(\hat{B}_1'\Sigma_1 v)^*(\hat{B}_1'\Sigma_1 v) = 0$, and hence $\hat{B}_1'\Sigma_1 v = 0$. By multiplying equation (6.4) on the right by $\Sigma_1 v$ we get
$$(\hat{A}_{11} - j\omega I)\Sigma_1^2 v + \Sigma_1(\hat{A}_{11}' + j\omega I)\Sigma_1 v + \hat{B}_1\hat{B}_1'\Sigma_1 v = 0,$$
and hence
$$(\hat{A}_{11} - j\omega I)\Sigma_1^2 v = 0. \tag{6.12}$$
Since the kernel of $(\hat{A}_{11} - j\omega I)$ is one-dimensional, and both $v$ and $\Sigma_1^2 v$ belong to it, it follows that $\Sigma_1^2 v = \hat{\sigma}^2 v$, where $\hat{\sigma}^2$ is one of the diagonal elements in $\Sigma_1^2$. Now multiply equation (6.5) on the right by $\Sigma_1 v$ and equation (6.8) on the right by $v$; using $\hat{B}_1'\Sigma_1 v = 0$ and $\hat{C}_1 v = 0$ we get
$$\hat{\sigma}^2\hat{A}_{21}v + \Sigma_2\hat{A}_{12}'\Sigma_1 v = 0 \tag{6.13}$$
and
$$\hat{A}_{12}'\Sigma_1 v + \Sigma_2\hat{A}_{21}v = 0. \tag{6.14}$$
From equations (6.13) and (6.14) we get
$$-\Sigma_2^2\hat{A}_{21}v + \hat{\sigma}^2\hat{A}_{21}v = 0,$$
which can be written as
$$(\hat{\sigma}^2 I - \Sigma_2^2)\hat{A}_{21}v = 0.$$
Since by assumption $\Sigma_1$ and $\Sigma_2$ have no common diagonal elements, $\hat{\sigma}^2 I$ and $\Sigma_2^2$ have no common eigenvalues, so $\hat{\sigma}^2 I - \Sigma_2^2$ is invertible and hence $\hat{A}_{21}v = 0$. We now have $(\hat{A}_{11} - j\omega I)v = 0$ and $\hat{A}_{21}v = 0$, which can be written as
$$\begin{pmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{pmatrix}\begin{pmatrix} v \\ 0 \end{pmatrix} = j\omega\begin{pmatrix} v \\ 0 \end{pmatrix}.$$
This statement implies that $j\omega$ is an eigenvalue of $\hat{A}$, which contradicts the assumption of the theorem stating that $G$ is asymptotically stable. The proof for $G_2$ is analogous.
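The truncation step, and the stability guarantee of Theorem 6.1, can be checked numerically. The sketch below balances a randomly generated stable system (all matrices and dimensions here are illustrative choices, not from the notes), keeps the $k$ states with the largest Hankel singular values, and verifies that the truncated $\hat{A}_{11}$ is Hurwitz; a generic random system has distinct Hankel singular values, so the hypothesis of the theorem holds almost surely.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, eigh

rng = np.random.default_rng(0)
n, m, p, k = 6, 2, 2, 3        # full order, inputs, outputs, reduced order

# Random stable A: shift a random matrix well into the left half plane.
M = rng.standard_normal((n, n))
A = M - (np.abs(np.linalg.eigvals(M).real).max() + 1.0) * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Balance: gramians from the Lyapunov equations, T = R^{-1} U Sigma^{1/2}.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
R = cholesky(Q)                              # Q = R' R
w, U = eigh(R @ P @ R.T)                     # R P R' = U Sigma^2 U'
w, U = w[::-1], U[:, ::-1]                   # decreasing HSVs
T = np.linalg.solve(R, U) @ np.diag(w ** 0.25)

A_hat = np.linalg.solve(T, A @ T)            # T^{-1} A T
B_hat = np.linalg.solve(T, B)                # T^{-1} B
C_hat = C @ T                                # C T

# Balanced truncation: keep the leading k balanced states.
A11, B1, C1 = A_hat[:k, :k], B_hat[:k, :], C_hat[:, :k]

# Theorem 6.1: the truncated subsystem is asymptotically stable.
assert np.linalg.eigvals(A11).real.max() < 0
```

In practice $k$ would be chosen by inspecting the decay of the Hankel singular values (the diagonal of the balanced gramian) rather than fixed in advance.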

MIT OpenCourseWare
http://ocw.mit.edu

6.241J / 16.338J Dynamic Systems and Control
Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms