Lectures on Dynamic Systems and Control
Mohammed Dahleh, Munther A. Dahleh, George Verghese
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Chapter 6: Balanced Realization

6.1 Introduction

One popular approach for obtaining a minimal realization is known as Balanced Realization. In this approach, a new state-space description is obtained so that the reachability and observability Gramians are diagonalized. This defines a new set of invariant parameters known as the Hankel singular values. Balanced realization plays a major role in model reduction, which will be highlighted in this chapter.

6.2 Balanced Realization

Let us start with a system $G$ with minimal realization $(A, B, C)$. As we have seen in an earlier lecture, the controllability Gramian $P$ and the observability Gramian $Q$ are obtained as solutions of the following Lyapunov equations:
\[
AP + PA' + BB' = 0, \qquad A'Q + QA + C'C = 0.
\]
$P$ and $Q$ are symmetric, and since the realization is minimal they are also positive definite. The eigenvalues of the product of the controllability and observability Gramians play an important role in system theory and control. We define the Hankel singular values, $\sigma_i$, as the square roots of the eigenvalues of $PQ$:
\[
\sigma_i \triangleq \left( \lambda_i(PQ) \right)^{1/2}.
\]
We would like to obtain a coordinate transformation, $T$, that results in a realization for which the controllability and observability Gramians are equal and diagonal. The diagonal entries of the transformed controllability and observability Gramians will be the Hankel singular values. With the coordinate transformation $T$, the new system realization is given by
\[
\hat{G} : (T^{-1}AT,\; T^{-1}B,\; CT).
\]
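The two Lyapunov equations and the Hankel singular values can be computed numerically. The following is a minimal sketch using SciPy's Lyapunov solver; the matrices A, B, C are an illustrative two-state stable system, not taken from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical minimal, stable 2-state realization (illustrative only).
A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 1.0]])

# Controllability Gramian P solves A P + P A' + B B' = 0.
# SciPy's solver handles A X + X A^H = Y, so we pass Y = -B B'.
P = solve_continuous_lyapunov(A, -B @ B.T)

# Observability Gramian Q solves A' Q + Q A + C' C = 0.
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of P Q,
# sorted in decreasing order.
hsv = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
print(hsv)
```

Since the realization is minimal, both Gramians come out positive definite and the eigenvalues of $PQ$ are real and positive, so the square roots are well defined.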

The Lyapunov equations in the new coordinates are given by
\[
\hat{A}\,(T^{-1}P(T')^{-1}) + (T^{-1}P(T')^{-1})\,\hat{A}' + \hat{B}\hat{B}' = 0,
\]
\[
\hat{A}'\,(T'QT) + (T'QT)\,\hat{A} + \hat{C}'\hat{C} = 0.
\]
Therefore the controllability and observability Gramians in the new coordinate system are given by
\[
\hat{P} = T^{-1}P(T')^{-1}, \qquad \hat{Q} = T'QT.
\]
We are looking for a transformation $T$ such that
\[
\hat{P} = \hat{Q} = \Sigma = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix}.
\]
We have the relation
\[
(T^{-1}P(T')^{-1})(T'QT) = T^{-1}PQ\,T = \Sigma^2. \tag{6.1}
\]
Since $Q = Q'$ and is positive definite, we can factor it as $Q = R'R$, where $R$ is an invertible matrix. We can write equation 6.1 as $T^{-1}PR'RT = \Sigma^2$, which is equivalent to
\[
(RT)^{-1}\,RPR'\,(RT) = \Sigma^2. \tag{6.2}
\]
Equation 6.2 means that $RPR'$ is similar to $\Sigma^2$; it is also symmetric and positive definite. Therefore, there exists an orthogonal transformation $U$, $U'U = I$, such that
\[
RPR' = U\Sigma^2U'. \tag{6.3}
\]
By setting $(RT)^{-1}U\Sigma^{1/2} = I$, we arrive at a definition for $T$ and $T^{-1}$ as
\[
T = R^{-1}U\Sigma^{1/2}, \qquad T^{-1} = \Sigma^{-1/2}U'R.
\]
With this transformation it follows that
\[
\hat{P} = T^{-1}P(T')^{-1} = (\Sigma^{-1/2}U'R)\,P\,(R'U\Sigma^{-1/2}) = \Sigma^{-1/2}U'(U\Sigma^2U')U\Sigma^{-1/2} = \Sigma
\]
and
\[
\hat{Q} = T'QT = (\Sigma^{1/2}U'(R')^{-1})(R'R)(R^{-1}U\Sigma^{1/2}) = \Sigma^{1/2}U'U\Sigma^{1/2} = \Sigma.
\]
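The construction $Q = R'R$, $RPR' = U\Sigma^2U'$, $T = R^{-1}U\Sigma^{1/2}$ can be sketched directly in code. This is an illustrative sketch, reusing the same hypothetical two-state system as before; it verifies that both transformed Gramians equal $\Sigma$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

# Hypothetical minimal, stable 2-state realization (illustrative only).
A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A' + B B' = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Q + Q A + C' C = 0

R = cholesky(Q, lower=False)        # Cholesky factor: Q = R' R, R invertible
# R P R' is symmetric positive definite and similar to Sigma^2;
# eigendecompose it as U Sigma^2 U' with U orthogonal.
s2, U = np.linalg.eigh(R @ P @ R.T)
s2, U = s2[::-1], U[:, ::-1]        # sort Hankel singular values descending
Sigma_half = np.diag(s2 ** 0.25)    # Sigma^{1/2} = diag(sigma_i^{1/2})

T = np.linalg.inv(R) @ U @ Sigma_half          # T = R^{-1} U Sigma^{1/2}
Tinv = np.linalg.inv(Sigma_half) @ U.T @ R     # T^{-1} = Sigma^{-1/2} U' R

# In the new coordinates both Gramians equal Sigma = diag(sigma_1, sigma_2).
P_bal = Tinv @ P @ Tinv.T
Q_bal = T.T @ Q @ T
print(np.diag(P_bal), np.diag(Q_bal))
```

Any square-root factor of $Q$ works in place of the Cholesky factor; Cholesky is used here simply because it is cheap and guaranteed for positive definite $Q$.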

6.3 Model Reduction by Balanced Truncation

Suppose we start with a system $G : (A, B, C)$ where $A$ is asymptotically stable, and suppose $T$ is the transformation that converts this realization to a balanced realization $\hat{G} : (\hat{A}, \hat{B}, \hat{C})$ with $\hat{P} = \hat{Q} = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$. In many applications it may be beneficial to consider only the subsystem of $\hat{G}$ that corresponds to the Hankel singular values that are larger than a certain small constant. For that reason, suppose we partition $\Sigma$ as
\[
\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix},
\]
where $\Sigma_2$ contains the small Hankel singular values. We can partition the realization of $\hat{G}$ accordingly as
\[
\hat{G} \triangleq \left[ \begin{array}{cc|c} \hat{A}_{11} & \hat{A}_{12} & \hat{B}_1 \\ \hat{A}_{21} & \hat{A}_{22} & \hat{B}_2 \\ \hline \hat{C}_1 & \hat{C}_2 & \end{array} \right].
\]
Recall that the following Lyapunov equations hold:
\[
\hat{A}\Sigma + \Sigma\hat{A}' + \hat{B}\hat{B}' = 0, \qquad \hat{A}'\Sigma + \Sigma\hat{A} + \hat{C}'\hat{C} = 0,
\]
which can be expanded as
\[
\begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}
\begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
+ \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
\begin{bmatrix} \hat{A}_{11}' & \hat{A}_{21}' \\ \hat{A}_{12}' & \hat{A}_{22}' \end{bmatrix}
+ \begin{bmatrix} \hat{B}_1\hat{B}_1' & \hat{B}_1\hat{B}_2' \\ \hat{B}_2\hat{B}_1' & \hat{B}_2\hat{B}_2' \end{bmatrix} = 0,
\]
\[
\begin{bmatrix} \hat{A}_{11}' & \hat{A}_{21}' \\ \hat{A}_{12}' & \hat{A}_{22}' \end{bmatrix}
\begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
+ \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}
\begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}
+ \begin{bmatrix} \hat{C}_1'\hat{C}_1 & \hat{C}_1'\hat{C}_2 \\ \hat{C}_2'\hat{C}_1 & \hat{C}_2'\hat{C}_2 \end{bmatrix} = 0.
\]
From the above two matrix equations we get the following set of equations:
\[
\hat{A}_{11}\Sigma_1 + \Sigma_1\hat{A}_{11}' + \hat{B}_1\hat{B}_1' = 0, \tag{6.4}
\]
\[
\hat{A}_{21}\Sigma_1 + \Sigma_2\hat{A}_{12}' + \hat{B}_2\hat{B}_1' = 0, \tag{6.5}
\]
\[
\hat{A}_{22}\Sigma_2 + \Sigma_2\hat{A}_{22}' + \hat{B}_2\hat{B}_2' = 0, \tag{6.6}
\]
\[
\hat{A}_{11}'\Sigma_1 + \Sigma_1\hat{A}_{11} + \hat{C}_1'\hat{C}_1 = 0, \tag{6.7}
\]
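Balanced truncation itself is then just balancing followed by keeping the leading block. The sketch below packages the construction of Section 6.2 into a helper and truncates a hypothetical three-state stable system to order $k$; the function names `balance` and `truncate` and the example matrices are illustrative assumptions, not from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

def balance(A, B, C):
    """Return the balancing transformation T, its inverse, and the
    Hankel singular values, assuming (A, B, C) is minimal and stable."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A' + B B' = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Q + Q A + C' C = 0
    R = cholesky(Q, lower=False)                  # Q = R' R
    s2, U = np.linalg.eigh(R @ P @ R.T)           # R P R' = U Sigma^2 U'
    s2, U = s2[::-1], U[:, ::-1]                  # descending order
    S_half = np.diag(s2 ** 0.25)                  # Sigma^{1/2}
    T = np.linalg.inv(R) @ U @ S_half
    return T, np.linalg.inv(T), np.sqrt(s2)

def truncate(A, B, C, k):
    """Balanced truncation to order k: the subsystem (A11, B1, C1)."""
    T, Tinv, hsv = balance(A, B, C)
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:k, :k], Bb[:k], Cb[:, :k], hsv

# Hypothetical 3-state stable system (illustrative only).
A = np.diag([-1.0, -2.0, -5.0]); A[1, 0] = 1.0
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[0.2, 0.5, 1.0]])

A1, B1, C1, hsv = truncate(A, B, C, k=2)
print(hsv)   # sigma_1 >= sigma_2 >= sigma_3
```

The choice of $k$ corresponds exactly to the partition of $\Sigma$ above: the retained block carries $\Sigma_1$, the discarded block carries the small values in $\Sigma_2$.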

\[
\hat{A}_{12}'\Sigma_1 + \Sigma_2\hat{A}_{21} + \hat{C}_2'\hat{C}_1 = 0, \tag{6.8}
\]
\[
\hat{A}_{22}'\Sigma_2 + \Sigma_2\hat{A}_{22} + \hat{C}_2'\hat{C}_2 = 0. \tag{6.9}
\]
From this decomposition we can extract two subsystems:
\[
G_1 : (\hat{A}_{11},\; \hat{B}_1,\; \hat{C}_1), \qquad G_2 : (\hat{A}_{22},\; \hat{B}_2,\; \hat{C}_2).
\]

Theorem 6.1 $\hat{G}$ is an asymptotically stable system. If $\Sigma_1$ and $\Sigma_2$ do not have any common diagonal elements, then $G_1$ and $G_2$ are asymptotically stable.

Proof: Let us show that the subsystem $G_1 : (\hat{A}_{11}, \hat{B}_1, \hat{C}_1)$ is asymptotically stable. Since $\Sigma_1$ satisfies the Lyapunov equation
\[
\hat{A}_{11}\Sigma_1 + \Sigma_1\hat{A}_{11}' + \hat{B}_1\hat{B}_1' = 0,
\]
it immediately follows that all the eigenvalues of $\hat{A}_{11}$ must be in the closed left half of the complex plane, that is, $\mathrm{Re}\,\lambda_i(\hat{A}_{11}) \le 0$. In order to show asymptotic stability we must show that $\hat{A}_{11}$ has no purely imaginary eigenvalues. Suppose $j\omega$ is an eigenvalue of $\hat{A}_{11}$, and let $v$ be an eigenvector associated with $j\omega$, $(\hat{A}_{11} - j\omega I)v = 0$. Assume that the kernel of $(\hat{A}_{11} - j\omega I)$ is one-dimensional; the general case, where there may be several independent eigenvectors associated with $j\omega$, can be handled by a slight modification of the argument. Equation 6.7 can be written as
\[
(\hat{A}_{11} - j\omega I)^*\Sigma_1 + \Sigma_1(\hat{A}_{11} - j\omega I) + \hat{C}_1'\hat{C}_1 = 0.
\]
By multiplying the above equation by $v$ on the right and $v^*$ on the left we get
\[
v^*(\hat{A}_{11} - j\omega I)^*\Sigma_1 v + v^*\Sigma_1(\hat{A}_{11} - j\omega I)v + v^*\hat{C}_1'\hat{C}_1 v = 0,
\]
which implies that $(\hat{C}_1 v)^*(\hat{C}_1 v) = 0$, and this in turn implies that
\[
\hat{C}_1 v = 0. \tag{6.10}
\]
Again from equation 6.7 we get
\[
(\hat{A}_{11} - j\omega I)^*\Sigma_1 v + \Sigma_1(\hat{A}_{11} - j\omega I)v + \hat{C}_1'\hat{C}_1 v = 0,
\]
which implies that
\[
(\hat{A}_{11} - j\omega I)^*\Sigma_1 v = 0. \tag{6.11}
\]
Now, writing equation 6.4 as $(\hat{A}_{11} - j\omega I)\Sigma_1 + \Sigma_1(\hat{A}_{11} - j\omega I)^* + \hat{B}_1\hat{B}_1' = 0$ and multiplying it from the right by $\Sigma_1 v$ and from the left by $v^*\Sigma_1$, we get
\[
v^*\Sigma_1(\hat{A}_{11} - j\omega I)\Sigma_1^2 v + v^*\Sigma_1^2(\hat{A}_{11} - j\omega I)^*\Sigma_1 v + v^*\Sigma_1\hat{B}_1\hat{B}_1'\Sigma_1 v = 0,
\]
where the first two terms vanish by equation 6.11 and its conjugate transpose.

This implies that $(\hat{B}_1'\Sigma_1 v)^*(\hat{B}_1'\Sigma_1 v) = 0$, and hence $\hat{B}_1'\Sigma_1 v = 0$. By multiplying equation 6.4 on the right by $\Sigma_1 v$ we get
\[
(\hat{A}_{11} - j\omega I)\Sigma_1^2 v + \Sigma_1(\hat{A}_{11} - j\omega I)^*\Sigma_1 v + \hat{B}_1\hat{B}_1'\Sigma_1 v = 0,
\]
and hence
\[
(\hat{A}_{11} - j\omega I)\Sigma_1^2 v = 0. \tag{6.12}
\]
Since the kernel of $(\hat{A}_{11} - j\omega I)$ is one-dimensional, and both $v$ and $\Sigma_1^2 v$ belong to it, it follows that $\Sigma_1^2 v = \hat{\sigma}^2 v$, where $\hat{\sigma}^2$ is one of the diagonal elements of $\Sigma_1^2$. Now multiply equation 6.5 on the right by $\Sigma_1 v$ and equation 6.8 on the right by $v$; using $\hat{B}_1'\Sigma_1 v = 0$ and $\hat{C}_1 v = 0$ we get
\[
\hat{\sigma}^2\hat{A}_{21}v + \Sigma_2\hat{A}_{12}'\Sigma_1 v = 0 \tag{6.13}
\]
and
\[
\hat{A}_{12}'\Sigma_1 v + \Sigma_2\hat{A}_{21}v = 0. \tag{6.14}
\]
From equations 6.13 and 6.14 we get
\[
-\Sigma_2^2\hat{A}_{21}v + \hat{\sigma}^2\hat{A}_{21}v = 0,
\]
which can be written as
\[
(-\Sigma_2^2 + \hat{\sigma}^2 I)\hat{A}_{21}v = 0.
\]
Since by assumption $\Sigma_1$ and $\Sigma_2$ have no common diagonal elements, $\hat{\sigma}^2 I$ and $\Sigma_2^2$ have no common eigenvalues, so $(-\Sigma_2^2 + \hat{\sigma}^2 I)$ is invertible and hence $\hat{A}_{21}v = 0$. We now have $(\hat{A}_{11} - j\omega I)v = 0$ and $\hat{A}_{21}v = 0$, which can be written as
\[
\begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}
\begin{bmatrix} v \\ 0 \end{bmatrix} = j\omega \begin{bmatrix} v \\ 0 \end{bmatrix}.
\]
This statement implies that $j\omega$ is an eigenvalue of $\hat{A}$, which contradicts the first part of the theorem: $\hat{A}$ is similar to $A$ and is therefore asymptotically stable. An identical argument establishes the asymptotic stability of $G_2$.
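Theorem 6.1 is easy to illustrate numerically: after balancing, the truncated block $\hat{A}_{11}$ should itself be asymptotically stable whenever $\Sigma_1$ and $\Sigma_2$ share no diagonal elements. The sketch below checks this on the same hypothetical three-state system used earlier (all matrices are illustrative assumptions); the distinct Hankel singular values of this example satisfy the theorem's hypothesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky

# Hypothetical 3-state stable system (illustrative only).
A = np.diag([-1.0, -2.0, -5.0]); A[1, 0] = 1.0
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[0.2, 0.5, 1.0]])

# Balance as in Section 6.2.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
R = cholesky(Q, lower=False)
s2, U = np.linalg.eigh(R @ P @ R.T)
s2, U = s2[::-1], U[:, ::-1]
S_half = np.diag(s2 ** 0.25)
T = np.linalg.inv(R) @ U @ S_half
Ab = np.linalg.inv(T) @ A @ T        # balanced A-matrix, similar to A

k = 2                                # keep the two largest sigma_i
A11 = Ab[:k, :k]
# By Theorem 6.1 the real parts should all be strictly negative.
print(np.linalg.eigvals(A11).real)
```

Note that the theorem can fail without the hypothesis: if a Hankel singular value is repeated across the partition boundary, the truncated block is only guaranteed to be marginally stable.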