Introduction to MVC


Definition---Properness
A system $G(s)$ is proper if all its elements $\{g_{ij}(s)\}$ are proper, and strictly proper if all its elements are strictly proper.

Definition---Causality
A system $G(s)$ is causal if all its elements are causal, and noncausal if all its elements are noncausal.

Definition---Poles
The eigenvalues $\lambda_i$, $i = 1, \dots, n$, of the system $G(s)$ are called the poles of the system. The pole polynomial $\pi(s)$ is defined as:

    $\pi(s) = \prod_{i=1}^{n} (s - \lambda_i)$,

where $\pi(s)$ is the least common denominator of all non-identically-zero minors of all orders of $G(s)$.

Example:

    $G(s) = \dfrac{1}{(s+1)(s+2)(s-1)} \begin{bmatrix} (s-1)(s+2) & 0 & (s-1)^2 \\ -(s+1)(s+2) & (s-1)(s+1) & (s-1)(s+1) \end{bmatrix}$

The minors of order 2:

    $G_{1,2} = \dfrac{1}{(s+1)(s+2)}$;  $G_{1,3} = \dfrac{2}{(s+1)(s+2)}$;  $G_{2,3} = -\dfrac{s-1}{(s+1)(s+2)^2}$

The least common denominator of the minors of all orders:

    $\pi(s) = (s+1)(s+2)^2(s-1)$

Definition---Zeros
If the rank of $G(z)$ is less than the normal rank, $z$ is a zero of the system. The zero polynomial $Z(s)$ is the greatest common divisor of the numerators of all order-$r$ minors of $G(s)$, where $r$ is the normal rank of $G(s)$, provided that these minors have all been adjusted in such a way as to have the pole polynomial $\pi(s)$ as their denominator. For the example above:

    $G_{1,2}(s) = \dfrac{(s-1)(s+2)}{\pi(s)}$;  $G_{1,3}(s) = \dfrac{2(s-1)(s+2)}{\pi(s)}$;  $G_{2,3}(s) = -\dfrac{(s-1)^2}{\pi(s)}$

So, $Z(s) = (s-1)$. According to this definition, the zeros of a square $G(s)$ are the roots of $\det\{G(s)\} = 0$.

Notice that, in a MIMO system, there may be no inverse response to indicate the presence of an RHP-zero.
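The minor-based pole/zero computation can be reproduced symbolically. The following is a minimal SymPy sketch, assuming the 2x3 example transfer matrix given above:

```python
import sympy as sp
from functools import reduce

s = sp.symbols('s')
d = (s + 1) * (s + 2) * (s - 1)
G = sp.Matrix([[(s - 1) * (s + 2), 0, (s - 1) ** 2],
               [-(s + 1) * (s + 2), (s - 1) * (s + 1), (s - 1) * (s + 1)]]) / d

# Order-2 minors: one per choice of 2 of the 3 columns
minors = [sp.cancel(G.extract([0, 1], list(cols)).det())
          for cols in ([0, 1], [0, 2], [1, 2])]

# Pole polynomial: least common denominator of all non-zero minors of all orders
elements = [sp.cancel(e) for e in G if e != 0]          # order-1 minors
pi = reduce(sp.lcm, [sp.denom(m) for m in elements + minors])
print(sp.factor(pi))     # = (s-1)(s+1)(s+2)^2

# Zero polynomial: gcd of the order-2 minor numerators, rescaled to denominator pi
nums = [sp.cancel(m * pi) for m in minors]
Z = reduce(sp.gcd, nums)
print(sp.factor(Z))      # = s - 1
```

This reproduces $\pi(s) = (s+1)(s+2)^2(s-1)$ and $Z(s) = s-1$ from the example.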

For example, in the MIMO system

    $G(s) = \dfrac{1}{(0.2s+1)(s+1)} \begin{bmatrix} 1 & 1 \\ 1+2s & 2 \end{bmatrix}$

there is an RHP-zero at $s = 0.5$: $\det G(0.5) = 0$ and $\underline{\sigma}\{G(0.5)\} = 0$. The SVD of $G(0.5)$ is

    $G(0.5) = \dfrac{1}{1.65} \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} = \underbrace{\begin{bmatrix} 0.45 & -0.89 \\ 0.89 & 0.45 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 1.92 & 0 \\ 0 & 0 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.71 & 0.71 \\ -0.71 & 0.71 \end{bmatrix}}_{V^H}$

Transfer functions for closed-loop MIMO systems

1. Cascade rule. For the cascade (series) interconnection of $G_1(s)$ and $G_2(s)$ in Figure (a), the overall transfer function matrix is $G = G_2 G_1$.

2. Feedback rule. With reference to the positive feedback system in Figure (b):

    $v = (I - L)^{-1} u$, where $L = G_2 G_1$ is the transfer function around the loop.

3. Push-through rule.

    $G_1 (I - G_2 G_1)^{-1} = (I - G_1 G_2)^{-1} G_1$

(and similarly with $+$ in place of $-$).
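The RHP-zero example above can be checked numerically. A short NumPy sketch, assuming the plant $G(s) = [[1, 1], [1+2s, 2]] / ((0.2s+1)(s+1))$ from the example:

```python
import numpy as np

# Plant from the example: G(s) = [[1, 1], [1 + 2s, 2]] / ((0.2 s + 1)(s + 1))
def G(s):
    return np.array([[1.0, 1.0], [1.0 + 2.0 * s, 2.0]]) / ((0.2 * s + 1.0) * (s + 1.0))

G05 = G(0.5)
U, S, Vh = np.linalg.svd(G05)
print(abs(np.linalg.det(G05)) < 1e-12)   # True: G drops rank at the zero s = 0.5
print(np.round(S, 2))                    # [1.92 0.  ]
```

The smallest singular value vanishes at $s = 0.5$ even though no single element of $G(s)$ shows an inverse response.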

4. MIMO rule.
(1) Start from the output and write down the blocks as you move backward, by the most direct path, toward the input.
(2) When exiting from a feedback loop, include a term $(I - L)^{-1}$ for positive feedback (or $(I + L)^{-1}$ for negative feedback). Notice that $L$ is evaluated against the signal flow starting at the point of exit from the loop.

Example: Consider the following block diagram:

    $z = \left[ P_{11} + P_{12} K (I - P_{22} K)^{-1} P_{21} \right] w$

5. Consider the closed-loop system. The following relationships are useful:

    $(I + L)^{-1} + L (I + L)^{-1} = S + T = I$
    $G (I + KG)^{-1} = (I + GK)^{-1} G$
    $GK (I + GK)^{-1} = G (I + KG)^{-1} K = (I + GK)^{-1} GK$
    $T = L (I + L)^{-1} = (I + L)^{-1} L = (I + L^{-1})^{-1}$
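The closed-loop identities in item 5 are purely algebraic, so they can be spot-checked on arbitrary gain matrices. A small NumPy sketch ($G$ and $K$ are random stand-ins, not a specific plant):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2))    # stand-in plant gain
K = rng.standard_normal((2, 2))    # stand-in controller gain
I = np.eye(2)

L = G @ K                          # loop transfer function
S = np.linalg.inv(I + L)           # sensitivity
T = L @ np.linalg.inv(I + L)       # complementary sensitivity
assert np.allclose(S + T, I)

# Push-through: G (I + KG)^-1 = (I + GK)^-1 G
assert np.allclose(G @ np.linalg.inv(I + K @ G), np.linalg.inv(I + G @ K) @ G)

# GK (I + GK)^-1 = G (I + KG)^-1 K
assert np.allclose(L @ np.linalg.inv(I + L), G @ np.linalg.inv(I + K @ G) @ K)
print("closed-loop identities verified")
```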

Singular Values and Matrix Norms

1. Vector norms
A real-valued function $\|\cdot\|$ defined on a vector space $X$ is said to be a norm on $X$ if it satisfies the following properties:
(1) $\|x\| \ge 0$;
(2) $\|x\| = 0$ only if $x = 0$;
(3) $\|\alpha x\| = |\alpha| \, \|x\|$, for any scalar $\alpha$;
(4) $\|x + y\| \le \|x\| + \|y\|$;
for any $x \in X$ and $y \in X$.

The vector $p$-norm of $x \in X$ is defined as:

    $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$, for $1 \le p \le \infty$.

In particular, for $p = 1, 2, \infty$ we have:

    $\|x\|_1 = \sum_{i=1}^{n} |x_i|$;  $\|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}$;  $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$.

Hölder inequality:

    $|x^T y| \le \|x\|_p \, \|y\|_q$, $\dfrac{1}{p} + \dfrac{1}{q} = 1$

Notice that:
(1) $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$;
(2) $\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty$;
(3) $\|x\|_\infty \le \|x\|_1 \le n\,\|x\|_\infty$.
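The $p$-norm formulas and the equivalence bounds above can be verified directly; a quick NumPy sketch on a concrete vector:

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])
n = len(x)

norm1 = np.abs(x).sum()            # ||x||_1
norm2 = np.sqrt((x ** 2).sum())    # ||x||_2
norminf = np.abs(x).max()          # ||x||_inf
print(norm1, norm2, norminf)       # 19.0 13.0 12.0

# Equivalence bounds (1)-(3) from the notes
assert norm2 <= norm1 <= np.sqrt(n) * norm2
assert norminf <= norm2 <= np.sqrt(n) * norminf
assert norminf <= norm1 <= n * norminf

# Hoelder inequality with p = q = 2 (Cauchy-Schwarz)
y = np.array([1.0, 2.0, 2.0])
assert abs(x @ y) <= norm2 * np.sqrt((y ** 2).sum())
```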

2. Matrix norms
A matrix norm is defined as a real-valued function that satisfies the following:
(1) $\|A\| \ge 0$ for all $A \in \mathbb{R}^{m \times n}$, with equality only if $A = 0$;
(2) $\|\alpha A\| = |\alpha| \, \|A\|$ for all $\alpha \in \mathbb{R}$, $A \in \mathbb{R}^{m \times n}$;
(3) $\|A + B\| \le \|A\| + \|B\|$ for all $A, B \in \mathbb{R}^{m \times n}$.

Let $A = [a_{ij}] \in \mathbb{R}^{m \times n}$. The vector-induced $p$-norm is defined as:

    $\|A\|_p = \sup_{x \ne 0} \dfrac{\|Ax\|_p}{\|x\|_p}$

In particular:

    $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$ (column sum);
    $\|A\|_2 = \sqrt{\lambda_{\max}(A^* A)}$;
    $\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$ (row sum).

Another popular matrix norm is the Frobenius norm, i.e.:

    $\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2 \right)^{1/2}$

The $p$-norms have the following important properties for every $A \in \mathbb{R}^{m \times n}$:
(1) $\|Ax\|_p \le \|A\|_p \, \|x\|_p$, $p = 1, 2, \infty$;
(2) $\|A\|_2 \le \|A\|_F \le \sqrt{n}\,\|A\|_2$;
(3) $\max_{i,j} |a_{ij}| \le \|A\|_2 \le \sqrt{mn}\, \max_{i,j} |a_{ij}|$;
(4) $\dfrac{1}{\sqrt{n}}\,\|A\|_\infty \le \|A\|_2 \le \sqrt{m}\,\|A\|_\infty$;
(5) $\dfrac{1}{\sqrt{m}}\,\|A\|_1 \le \|A\|_2 \le \sqrt{n}\,\|A\|_1$.
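A short NumPy check that the column-sum, row-sum, and eigenvalue formulas agree with the induced 1-, infinity-, and 2-norms (the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])
m, n = A.shape

col_sum = np.abs(A).sum(axis=0).max()                  # max column sum
row_sum = np.abs(A).sum(axis=1).max()                  # max row sum
spec = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())      # sqrt(lambda_max(A^T A))
frob = np.sqrt((A ** 2).sum())

assert np.isclose(col_sum, np.linalg.norm(A, 1))       # induced 1-norm
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))  # induced inf-norm
assert np.isclose(spec, np.linalg.norm(A, 2))          # induced 2-norm
assert np.isclose(frob, np.linalg.norm(A, 'fro'))

# Property (2): ||A||_2 <= ||A||_F <= sqrt(n) ||A||_2
assert spec <= frob <= np.sqrt(n) * spec + 1e-12
print(col_sum, row_sum, round(spec, 4), round(frob, 4))
```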

Lemma 1: Let $x \in \mathbb{F}^n$ and $y \in \mathbb{F}^m$.
(1) Suppose $n \ge m$. Then $\|x\| = \|y\|$ iff there is a matrix $U \in \mathbb{F}^{n \times m}$ such that $x = Uy$ and $U^* U = I$;
(2) Suppose $n = m$. Then $|x^* y| \le \|x\| \, \|y\|$. Moreover, equality holds iff $x = \alpha y$ for some $\alpha \in \mathbb{F}$, or $y = 0$;
(3) $\|x\| \le \|y\|$ iff there is a matrix $\Delta \in \mathbb{F}^{n \times m}$ with $\|\Delta\| \le 1$ such that $x = \Delta y$. Furthermore, $\|x\| < \|y\|$ iff $\|\Delta\| < 1$;
(4) $\|Ux\| = \|x\|$ for any appropriately dimensioned unitary matrix $U$.

Lemma 2: Let $A$ and $B$ be any appropriately dimensioned matrices.
(1) $\rho(A) \le \|A\|$ (for the $1$-, $2$-, $\infty$-, and $F$-norms);
(2) $\|AB\| \le \|A\| \, \|B\|$; $\|A\|_2 \le \|A\|_F$;
(3) $\|UAV\|_2 = \|A\|_2$ and $\|UAV\|_F = \|A\|_F$, for any unitary matrices $U$ and $V$;
(4) $\|AB\|_F \le \|A\|_2 \, \|B\|_F$, and $\|AB\|_F \le \|B\|_2 \, \|A\|_F$.

Lemma 3: Let $A$ be a block-partitioned matrix with

    $A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mq} \end{bmatrix} = [A_{ij}]$

Then, for any induced matrix $p$-norm,

    $\|A\|_p \le \left\| \begin{bmatrix} \|A_{11}\|_p & \|A_{12}\|_p & \cdots & \|A_{1q}\|_p \\ \|A_{21}\|_p & \|A_{22}\|_p & \cdots & \|A_{2q}\|_p \\ \vdots & \vdots & \ddots & \vdots \\ \|A_{m1}\|_p & \|A_{m2}\|_p & \cdots & \|A_{mq}\|_p \end{bmatrix} \right\|_p$

Further, the equality holds if the $F$-norm is used.
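Several of the items in Lemma 2 can be spot-checked numerically. A NumPy sketch on random matrices (a check on examples, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Lemma 2(1): the spectral radius is below each of the listed norms
rho = np.abs(np.linalg.eigvals(A)).max()
for p in (1, 2, np.inf, 'fro'):
    assert rho <= np.linalg.norm(A, p) + 1e-12
    # Lemma 2(2): submultiplicativity
    assert np.linalg.norm(A @ B, p) <= np.linalg.norm(A, p) * np.linalg.norm(B, p) + 1e-12

# Lemma 2(3): the 2- and F-norms are invariant under unitary transformations
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
assert np.isclose(np.linalg.norm(Q @ A @ Q.T, 2), np.linalg.norm(A, 2))
assert np.isclose(np.linalg.norm(Q @ A @ Q.T, 'fro'), np.linalg.norm(A, 'fro'))
print("Lemma 2 spot-checks pass")
```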


Singular Value Decomposition

Consider a fixed frequency $\omega$ where $G(j\omega)$ is an $l \times m$ complex matrix. Denote $G(j\omega)$ as $G$ for simplicity. Any matrix $G$ may be decomposed into

    $G = U \Sigma V^H$,

where $\Sigma$ is an $l \times m$ matrix with $k = \min\{l, m\}$ non-negative singular values $\sigma_i$ arranged in descending order along its main diagonal; the other entries are zero. The singular values are the positive square roots of the eigenvalues of $G^H G$, where $G^H$ is the conjugate transpose of $G$. That is:

    $\sigma_i(G) = \sqrt{\lambda_i(G^H G)}$

Let $y = G d$, and let $\{v_1, \dots, v_m\}$ be orthonormal eigenvectors of $G^H G$ with eigenvalues $\lambda_i$. Expanding the input as $d = \sum_i \alpha_i v_i$ gives

    $\|y\|^2 = d^H G^H G d = \sum_i |\alpha_i|^2 \lambda_i$,  $\|d\|^2 = \sum_i |\alpha_i|^2$.

It follows that the extreme values of $\|Gd\| / \|d\|$ are $\sqrt{\lambda_i(G^H G)}$, attained in the directions of the eigenvectors of $G^H G$; these extreme gains are the singular values of $G$. Since

    $\underline{\sigma} = \min_d \dfrac{\|y\|}{\|d\|} \le \dfrac{\|y\|}{\|d\|} \le \max_d \dfrac{\|y\|}{\|d\|} = \bar{\sigma}$,

we have $\underline{\sigma} \le \sqrt{\lambda_i} \le \bar{\sigma}$. The singular values can therefore be considered as the extreme gains of the MIMO system, locally maximal with respect to the direction of the inputs.

For example, consider the gain matrix at a specific frequency:

    $G = \begin{bmatrix} 5 & 4 \\ 3 & 2 \end{bmatrix}$

The gains with respect to the direction of $d$ are given in the following figure: [figure: $\|Gd\|/\|d\|$ versus input direction]. Typical singular values with respect to frequency are as shown in the following figure: [figure: $\bar{\sigma}(G(j\omega))$ and $\underline{\sigma}(G(j\omega))$ versus frequency].

Theorem 1: Let $A \in \mathbb{F}^{m \times n}$. There exist unitary matrices

    $U = [u_1, u_2, \dots, u_m] \in \mathbb{F}^{m \times m}$;  $V = [v_1, v_2, \dots, v_n] \in \mathbb{F}^{n \times n}$

such that

    $A = U \Sigma V^*$,  $\Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$,

where

    $\Sigma_1 = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_p \end{bmatrix}$;  $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$;  $p = \min\{m, n\}$.

[Proof] Let $\sigma = \|A\|_2$, and assume $m \ge n$. From the definition of $\|A\|_2$, i.e.:

    $\|A\|_2 = \sup_{z \ne 0} \dfrac{\|Az\|}{\|z\|}$,

there exists a vector $z \in \mathbb{F}^n$ such that

    $\|Az\| = \sigma \|z\|$.

By Lemma 1 ($\|x\| = \|y\|$ iff there is a matrix $U \in \mathbb{F}^{m \times n}$ such that $x = Uy$ and $U^* U = I$), there is a matrix $\tilde{U} \in \mathbb{F}^{m \times n}$ such that $\tilde{U}^* \tilde{U} = I$ and

    $Az = \sigma \tilde{U} z$.

Let:

    $x = \dfrac{z}{\|z\|} \in \mathbb{F}^n$ and $y = \dfrac{\tilde{U} z}{\|\tilde{U} z\|} \in \mathbb{F}^m$.

We have:

    $Ax = \dfrac{Az}{\|z\|} = \dfrac{\sigma \tilde{U} z}{\|z\|} = \dfrac{\sigma \tilde{U} z}{\|\tilde{U} z\|} = \sigma y$,

since $\|\tilde{U} z\| = \|z\|$. Let $U = [y, U_1] \in \mathbb{F}^{m \times m}$; $V = [x, V_1] \in \mathbb{F}^{n \times n}$ be unitary. Thus,

    $A_1 = U^* A V = [y, U_1]^* [Ax, A V_1] = \begin{bmatrix} y^* A x & y^* A V_1 \\ U_1^* A x & U_1^* A V_1 \end{bmatrix} = \begin{bmatrix} \sigma & w^* \\ 0 & B \end{bmatrix}$

Since

    $\left\| A_1 \begin{bmatrix} \sigma \\ w \end{bmatrix} \right\|^2 \ge (\sigma^2 + w^* w)^2$,

we have $\|A_1\|^2 \ge \sigma^2 + w^* w$.

and σ = A = A, we conclude that w = 0. An obvious induction argument gives: * U AV = Σ. The σ i is the i-th eigen value of A, and u and v are i-th left singular vector and j-th right i j singular vectors, resectively. It is obvious to see: Av = σ u and A u = σ v * i i i i i i or, A Av = σ v and AA u = σ u * * i i i i i i Hence, σ i is an eigen value of * * AA or A A, u i is an eigen vector of * AA, and is an eigen vector of singular values are often used: * A A. The following notations for σ ( A) = σ ( A) = σ = max Ax max x = σ ( A) = σ ( A) = σ = min Ax min x = Lemma : Suose A and D are square matrices. () σ ( A + ) σ ( A) σ ( A); () σ ( A ) σ ( A) σ ( ); (3) σ ( A ) = if A is invertible; σ ( A)

Lemma 3:
(1) $\underline{\sigma}(A) \le |\lambda_i(A)| \le \bar{\sigma}(A)$ for every eigenvalue $\lambda_i(A)$;
(2) $|\lambda_i(A)| \le \|A\|$ for any induced norm;
(3) Let $\kappa(A) = \dfrac{\bar{\sigma}(A)}{\underline{\sigma}(A)}$ (the condition number of $A$), and let $Ax = b$.

If $A(x + \delta x) = b + \delta b$, then:

    $\dfrac{\|\delta x\|}{\|x\|} \le \kappa(A) \, \dfrac{\|\delta b\|}{\|b\|}$

(4) Let $(A + \delta A)(x + \delta x) = b$. Then:

    $\dfrac{\|\delta x\|}{\|x\|} \le \dfrac{\kappa(A) \, \dfrac{\|\delta A\|}{\|A\|}}{1 - \kappa(A) \, \dfrac{\|\delta A\|}{\|A\|}}$

Thus, when $\kappa(A)$ is large and matrix $A$ is almost singular, a very small change in $A$ can make the possible range of $\delta x$ large.
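A small NumPy illustration of item (3) and of the closing remark, using a nearly singular matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])     # almost singular
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)                     # x = [1, 1]

kappa = np.linalg.cond(A)                     # sigma_max / sigma_min
db = np.array([0.0, 1e-4])                    # tiny perturbation of b
dx = np.linalg.solve(A, b + db) - x           # large change in the solution

rel_dx = np.linalg.norm(dx) / np.linalg.norm(x)
bound = kappa * np.linalg.norm(db) / np.linalg.norm(b)
print(round(kappa, 1), round(rel_dx, 3), round(bound, 3))
assert rel_dx <= bound
```

A perturbation of $10^{-4}$ in one component of $b$ changes the solution by roughly 100%, consistent with the bound $\kappa(A)\,\|\delta b\|/\|b\|$.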

Applications of SVD


Problem of small singular values: Very small singular values in a multivariable system are analogous to very small gains in a conventional SISO system. They require very large controller gains and result in excessively large control actions. The typical presence of constraints on the manipulated variables and of noise in the sensors makes this difficult even for a SISO system. In the context of a MIMO system, the additional complications presented by hidden loops and interactions make the problem even more severe. A general rule of thumb for judging a small singular value is the magnitude of the noise in the signal: singular values equal to or less than the magnitude of the sensor noise should be considered degenerate.

Problem of large singular values: Large singular values are not as serious a problem as small ones. They require very small controller gains, which result in small controller outputs that can easily become lost in the resolution of the manipulator. The symptomatic behaviors are cyclic responses that never settle down to a reasonable steady state. A general rule of thumb concerning large singular values is that all singular values equal to or greater than the reciprocal of the valve resolution should be avoided.

Determining Good Sensor Locations

Consider, for example, the ethanol-water distillation column shown in Figure 3. Assume that the first-level objective is to control two column temperatures by manipulating D and Q. The basic concern is to determine which of the 1225 possible combinations of sensor locations is best from the point of view of column control.

The 50 x 2 gain matrix is as shown in Figure 4.

The SVD result is given in Figure 5 and plotted in Figure 6. The largest elements in $u_1$ and $u_2$ occur on stages 8 and 3. On the other hand, if we use $|u_1| - |u_2|$ as the criterion, as shown in Figure 7, the largest differences suggest that stage 8 and stage 3 are good choices.


Selection of Proper Manipulated Variables

For example, in the design of a control system for a distillation column, four manipulated variables are typically considered:

    D: distillate flow rate
    L: reflux flow rate
    B: bottoms flow rate
    Q: steam rate to the reboiler

Two out of the four have to be used to control levels (i.e., the accumulator and the column base). Thus, only the remaining two can be manipulated for the compositions. To choose two out of the four, SVD provides a straightforward method to compare the steady-state behavior of the various first-level control strategies. In this example, DQ is the best first-level control scheme: it has a much better condition number than the others, and the second singular value of the DQ scheme is much stronger than for the other schemes.
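The comparison by condition number can be sketched numerically. The gain matrices below are illustrative stand-ins (hypothetical numbers, not the actual column data), chosen so that one candidate pairing has nearly collinear gain directions:

```python
import numpy as np

# Hypothetical steady-state gain matrices for two candidate pairings
G_DQ = np.array([[0.8, -0.5], [0.4, 0.9]])       # well-conditioned
G_LB = np.array([[1.0, -0.95], [0.98, -0.94]])   # nearly collinear columns

for name, Gm in (("DQ", G_DQ), ("LB", G_LB)):
    sv = np.linalg.svd(Gm, compute_uv=False)
    print(name, "singular values:", np.round(sv, 3),
          "condition number:", round(sv[0] / sv[1], 1))

# The better pairing has the smaller condition number and the
# stronger second singular value
s_dq = np.linalg.svd(G_DQ, compute_uv=False)
s_lb = np.linalg.svd(G_LB, compute_uv=False)
assert s_dq[0] / s_dq[1] < s_lb[0] / s_lb[1]
assert s_dq[1] > s_lb[1]
```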

Application to Feedback Properties


Use of the minimum singular value of the plant: The minimum singular value of the plant, evaluated as a function of frequency, is a useful measure for evaluating the feasibility of achieving acceptable control. In general, we want $\underline{\sigma}$ as large as possible.

Singular values for performance: In general, it is reasonable to require that the gain $\|e(\omega)\| / \|r(\omega)\|$ remain small for any direction of $r(\omega)$, including the worst-case direction, which gives a gain of $\bar{\sigma}(S(j\omega))$. Let $1 / |w_P(j\omega)|$ represent the maximum allowable magnitude of $\|e(\omega)\| / \|r(\omega)\|$ at each frequency. This results in the following performance requirement:

    $\bar{\sigma}(S(j\omega)) < \dfrac{1}{|w_P(j\omega)|}, \ \forall \omega \iff \bar{\sigma}\{w_P S(j\omega)\} < 1, \ \forall \omega \iff \|w_P S\|_\infty < 1$

A typical weight function is given as:

    $w_P(s) = \dfrac{s / M + \omega_B}{s + \omega_B A}$

which means $1 / |w_P|$ equals $A$ at low frequencies, equals $M$ at high frequencies, and the asymptote crosses $1$ at frequency $\omega_B$ (the bandwidth frequency).
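The asymptotic behavior of the weight can be confirmed numerically; a sketch with illustrative values for $M$, $A$, and $\omega_B$:

```python
import numpy as np

# Performance weight w_P(s) = (s/M + wB) / (s + wB*A), illustrative values
M, A, wB = 2.0, 1e-4, 1.0

def wP(s):
    return (s / M + wB) / (s + wB * A)

lo = 1 / abs(wP(1j * 1e-6))    # 1/|w_P| at low frequency  -> A
hi = 1 / abs(wP(1j * 1e6))     # 1/|w_P| at high frequency -> M
mid = 1 / abs(wP(1j * wB))     # near 1 around the bandwidth wB
print(lo, hi, round(mid, 3))

assert np.isclose(lo, A, rtol=1e-2)
assert np.isclose(hi, M, rtol=1e-2)
assert 0.5 < mid < 2.0
```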