An Introduction to Linear Matrix Inequalities
Raktim Bhattacharya, Aerospace Engineering, Texas A&M University


Linear Matrix Inequalities: What are they?

- Inequalities involving matrix variables, where the matrix variables appear linearly
- Represent convex sets described by polynomial inequalities
- A critical tool in post-modern control theory

Standard Form

\[ F(x) := F_0 + x_1 F_1 + \cdots + x_n F_n > 0, \qquad x := \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad F_i \in \mathbb{S}^m \ (m \times m \text{ symmetric matrices}). \]

Think of $F(x) : \mathbb{R}^n \to \mathbb{S}^m$. Example:

\[ \begin{bmatrix} 1 & x \\ x & 1 \end{bmatrix} > 0 \quad \Longleftrightarrow \quad \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + x \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} > 0. \]
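
To make this concrete, here is a minimal sketch, assuming the cvxpy package, that poses the standard-form LMI above as a feasibility SDP using the $2 \times 2$ example data; the tolerance eps is my own device for approximating the strict inequality.

```python
# A minimal sketch, assuming cvxpy, of the standard-form LMI F(x) > 0
# posed as a feasibility SDP; F0, F1 come from the 2x2 example above.
import cvxpy as cp
import numpy as np

F0 = np.eye(2)
F1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])

x = cp.Variable()
eps = 1e-6  # solvers handle non-strict inequalities; approximate F(x) > 0 by F(x) >= eps*I

prob = cp.Problem(cp.Minimize(0), [F0 + x * F1 >> eps * np.eye(2)])
prob.solve()
print(prob.status, x.value)  # feasible; any |x| < 1 satisfies the LMI
```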

Positive Definiteness: Polynomial Constraints as a Linear Matrix Inequality

$F > 0$ denotes a positive definite matrix:

\[ F > 0 \iff x^T F x > 0, \quad \forall x \neq 0. \]

Equivalently, $F > 0$ iff all leading principal minors of $F$ are positive. For

\[ F = \begin{bmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{bmatrix}, \]

this reads

\[ F > 0 \iff F_{11} > 0, \quad \begin{vmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{vmatrix} > 0, \quad \begin{vmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{vmatrix} > 0, \]

i.e., a set of polynomial constraints expressed as a single linear matrix inequality.
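
A small numpy spot-check of this leading-minor test; the $3 \times 3$ matrix here is my own illustrative example, chosen to be positive definite.

```python
# Check Sylvester's criterion numerically: F > 0 iff every leading
# principal minor has positive determinant.
import numpy as np

F = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

minors = [np.linalg.det(F[:k, :k]) for k in range(1, 4)]
print(minors)                      # approximately [2.0, 3.0, 4.0]
print(all(m > 0 for m in minors))  # True, so F > 0 by Sylvester's criterion
```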

Definiteness

- Positive semidefinite: $F \geq 0$ iff all principal minors are $\geq 0$, not just the leading ones.
- Negative definite: $F < 0$ iff every odd leading principal minor is $< 0$ and every even leading principal minor is $> 0$; they alternate in sign, starting with $< 0$.
- Negative semidefinite: $F \leq 0$ iff every odd principal minor is $\leq 0$ and every even principal minor is $\geq 0$.

Note that $F > 0 \iff -F < 0$ and $F \geq 0 \iff -F \leq 0$.

Reference: Matrix Analysis, Roger Horn and Charles Johnson.

Example 1

\[ y > 0, \quad y - x^2 > 0 \quad \Longleftrightarrow \quad \begin{bmatrix} y & x \\ x & 1 \end{bmatrix} > 0. \]

The LMI written as $\begin{bmatrix} y & x \\ x & 1 \end{bmatrix} > 0$ is in general form. We can write it in standard form as

\[ \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} + y \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + x \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} > 0. \]

The general form saves notation and may lead to more efficient computation.

Example 2

\[ x_1^2 + x_2^2 < 1 \quad \Longleftrightarrow \quad \begin{bmatrix} 1 & 0 & x_1 \\ 0 & 1 & x_2 \\ x_1 & x_2 & 1 \end{bmatrix} > 0. \]

The leading principal minors are

\[ 1 > 0, \qquad \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1 > 0, \qquad \begin{vmatrix} 1 & 0 & x_1 \\ 0 & 1 & x_2 \\ x_1 & x_2 & 1 \end{vmatrix} = 1 \cdot \begin{vmatrix} 1 & x_2 \\ x_2 & 1 \end{vmatrix} + x_1 \begin{vmatrix} 0 & 1 \\ x_1 & x_2 \end{vmatrix} > 0. \]

The last inequality simplifies to $1 - (x_1^2 + x_2^2) > 0$.
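
As a quick numerical sanity check (a numpy sketch), the $3 \times 3$ matrix above is positive definite exactly when the point lies in the open unit disk.

```python
# The LMI of Example 2 holds iff x1^2 + x2^2 < 1.
import numpy as np

def lmi_holds(x1, x2):
    M = np.array([[1.0, 0.0, x1],
                  [0.0, 1.0, x2],
                  [x1,  x2,  1.0]])
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

print(lmi_holds(0.3, 0.4))   # True:  0.09 + 0.16 < 1
print(lmi_holds(0.8, 0.7))   # False: 0.64 + 0.49 > 1
```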

Eigenvalue Minimization

Let $A_i \in \mathbb{S}^n$, $i = 0, 1, \ldots, n$, and let $A(x) := A_0 + A_1 x_1 + \cdots + A_n x_n$. Find $x := [x_1 \ x_2 \ \cdots \ x_n]^T$ that minimizes

\[ J(x) := \lambda_{\max}\big(A(x)\big). \]

How do we solve this problem?

Eigenvalue Minimization (contd.)

Recall that for $M \in \mathbb{S}^n$,

\[ \lambda_{\max}(M) \leq t \iff M - tI \leq 0. \]

(Linear algebra result: Matrix Analysis, R. Horn and C. R. Johnson.)

The optimization problem is therefore

\[ \min_{x,t} \ t \quad \text{such that} \quad A(x) - tI \leq 0. \]
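
A sketch of this SDP, assuming cvxpy; the randomly generated symmetric matrices stand in for the problem data $A_i$.

```python
# Eigenvalue minimization: minimize t subject to A(x) - t*I <= 0.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
sym = lambda B: (B + B.T) / 2
A0 = sym(rng.standard_normal((m, m)))
A = [sym(rng.standard_normal((m, m))) for _ in range(n)]

x = cp.Variable(n)
t = cp.Variable()
Ax = A0 + sum(x[i] * A[i] for i in range(n))

prob = cp.Problem(cp.Minimize(t), [Ax << t * np.eye(m)])
prob.solve()
print(t.value, np.linalg.eigvalsh(Ax.value).max())  # the two should agree
```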

Matrix Norm Minimization

Let $A_i \in \mathbb{R}^{p \times q}$, $i = 0, 1, \ldots, n$, and let $A(x) := A_0 + A_1 x_1 + \cdots + A_n x_n$. Find $x := [x_1 \ x_2 \ \cdots \ x_n]^T$ that minimizes

\[ J(x) := \| A(x) \|_2. \]

How do we solve this problem?

Matrix Norm Minimization (contd.)

Recall $\|A\|_2 := \sqrt{\lambda_{\max}(A^T A)}$. For $t > 0$,

\[ \|A(x)\|_2 \leq t \iff A(x)^T A(x) - t^2 I \leq 0 \iff \begin{bmatrix} tI & A(x) \\ A(x)^T & tI \end{bmatrix} \geq 0, \]

where the last equivalence is a Schur complement argument (formalized below). The optimization problem is therefore

\[ \min_{x,t} \ t \quad \text{subject to} \quad \begin{bmatrix} tI & A(x) \\ A(x)^T & tI \end{bmatrix} \geq 0. \]
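
A sketch of this SDP, assuming cvxpy; random rectangular matrices stand in for the data $A_i$, and the block LMI is assembled with cp.bmat.

```python
# Norm minimization via the Schur-complement LMI above.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 3, 2, 2
A0 = rng.standard_normal((p, q))
A = [rng.standard_normal((p, q)) for _ in range(n)]

x = cp.Variable(n)
t = cp.Variable()
Ax = A0 + sum(x[i] * A[i] for i in range(n))

M = cp.bmat([[t * np.eye(p), Ax],
             [Ax.T, t * np.eye(q)]])
prob = cp.Problem(cp.Minimize(t), [M >> 0])
prob.solve()
print(t.value, np.linalg.norm(Ax.value, 2))  # the two should agree
```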

Important Inequalities

Generalized Square Inequalities

Lemma: For arbitrary scalars $x, y$ and $\delta > 0$, we have

\[ \left( \sqrt{\delta}\, x - \frac{y}{\sqrt{\delta}} \right)^2 = \delta x^2 + \frac{1}{\delta} y^2 - 2xy \geq 0, \]

which implies

\[ 2xy \leq \delta x^2 + \frac{1}{\delta} y^2. \]

Generalized Square Inequalities: Restriction-Free Inequalities

Lemma: Let $X, Y \in \mathbb{R}^{m \times n}$, $F \in \mathbb{S}^m$ with $F > 0$, and let $\delta > 0$ be a scalar. Then

\[ X^T F Y + Y^T F X \leq \delta X^T F X + \delta^{-1} Y^T F Y. \]

When $X = x$ and $Y = y$ are vectors,

\[ 2 x^T F y \leq \delta x^T F x + \delta^{-1} y^T F y. \]

Proof: completion of squares:

\[ \left( \sqrt{\delta}\, X - \tfrac{1}{\sqrt{\delta}}\, Y \right)^T F \left( \sqrt{\delta}\, X - \tfrac{1}{\sqrt{\delta}}\, Y \right) \geq 0. \]
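
A numerical spot-check of this matrix inequality, a numpy sketch with random data; the claim is that rhs - lhs is positive semidefinite.

```python
# Spot-check: X'FY + Y'FX <= delta*X'FX + (1/delta)*Y'FY in the PSD order.
import numpy as np

rng = np.random.default_rng(2)
m, n, delta = 4, 3, 0.7
X = rng.standard_normal((m, n))
Y = rng.standard_normal((m, n))
B = rng.standard_normal((m, m))
F = B @ B.T + np.eye(m)  # symmetric positive definite

lhs = X.T @ F @ Y + Y.T @ F @ X
rhs = delta * X.T @ F @ X + (1 / delta) * Y.T @ F @ Y
print(np.all(np.linalg.eigvalsh(rhs - lhs) >= -1e-9))  # True
```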

Generalized Square Inequalities: Inequalities with Restrictions

Let $\mathcal{F} := \{ F : F \in \mathbb{R}^{n \times n},\ F^T F \leq I \}$.

Lemma: Let $X \in \mathbb{R}^{m \times n}$ and $Y \in \mathbb{R}^{n \times m}$. Then for arbitrary $\delta > 0$,

\[ X F Y + Y^T F^T X^T \leq \delta X X^T + \delta^{-1} Y^T Y, \quad \forall F \in \mathcal{F}. \]

Proof (completion of squares): start with

\[ \left( \sqrt{\delta}\, X^T - \tfrac{1}{\sqrt{\delta}}\, F Y \right)^T \left( \sqrt{\delta}\, X^T - \tfrac{1}{\sqrt{\delta}}\, F Y \right) \geq 0, \]

expand, and use $F^T F \leq I$ to bound $Y^T F^T F Y \leq Y^T Y$.

Schur Complements

Very useful for identifying convex sets. Let

\[ \begin{bmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{bmatrix}, \qquad Q(x) \in \mathbb{S}^{m_1}, \quad R(x) \in \mathbb{S}^{m_2}, \]

where $Q(x)$, $R(x)$, $S(x)$ are affine functions of $x$. Then

\[ \begin{bmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{bmatrix} > 0 \iff Q(x) > 0, \quad R(x) - S^T(x) Q(x)^{-1} S(x) > 0. \]

Generalizing to the semidefinite case,

\[ \begin{bmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{bmatrix} \geq 0 \iff Q(x) \geq 0, \quad S^T(x)\big(I - Q(x) Q^{\dagger}(x)\big) = 0, \quad R(x) - S^T(x) Q^{\dagger}(x) S(x) \geq 0, \]

where $Q^{\dagger}(x)$ is the pseudo-inverse. This generalization is used when $Q(x)$ is positive semidefinite but singular.

Schur Complement Lemma

Let

\[ A := \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}. \]

For symmetric $A$, define

\[ S_{ch}(A_{11}) := A_{22} - A_{21} A_{11}^{-1} A_{12}, \qquad S_{ch}(A_{22}) := A_{11} - A_{12} A_{22}^{-1} A_{21}. \]

Then

\[ A > 0 \iff A_{11} > 0,\ S_{ch}(A_{11}) > 0 \iff A_{22} > 0,\ S_{ch}(A_{22}) > 0. \]
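
A numpy sketch verifying the lemma on a randomly generated positive definite matrix partitioned into $2 \times 2$ blocks.

```python
# Schur complement lemma check: A > 0 iff A11 > 0 and Sch(A11) > 0.
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)          # A > 0 by construction
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

Sch_A11 = A22 - A21 @ np.linalg.inv(A11) @ A12
pd = lambda M: np.all(np.linalg.eigvalsh(M) > 0)
print(pd(A), pd(A11), pd(Sch_A11))  # all True, as the lemma predicts
```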

Example 1

\[ x_1^2 + x_2^2 < 1 \iff 1 - x^T x > 0 \iff \begin{bmatrix} I & x \\ x^T & 1 \end{bmatrix} > 0. \]

Here $R(x) = 1$ and $Q(x) = I > 0$.

Example 2

\[ \|x\|_P < 1 \iff 1 - x^T P x > 0, \]

or

\[ 1 - x^T P x = 1 - (\sqrt{P}\, x)^T (\sqrt{P}\, x) > 0, \]

where $\sqrt{P}$ is the matrix square root. By the Schur complement lemma,

\[ \begin{bmatrix} P^{-1} & x \\ x^T & 1 \end{bmatrix} > 0 \quad \text{or} \quad \begin{bmatrix} I & \sqrt{P}\, x \\ (\sqrt{P}\, x)^T & 1 \end{bmatrix} > 0. \]

LMIs are not unique

If $F$ is positive definite, then a congruence transformation of $F$ is also positive definite:

\[ F > 0 \iff x^T F x > 0,\ \forall x \neq 0 \iff y^T M^T F M y > 0,\ \forall y \neq 0 \ \text{and nonsingular } M \iff M^T F M > 0. \]

This implies that rearranging matrix elements does not change the feasible set:

\[ \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} > 0 \iff \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} > 0 \iff \begin{bmatrix} R & S^T \\ S & Q \end{bmatrix} > 0. \]
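
A numpy sketch of this block swap: the permutation $M = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}$ is a congruence transformation, so definiteness is preserved.

```python
# Congruence with the block-swap permutation preserves positive definiteness.
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)                    # plays the role of [[Q, S], [S', R]] > 0
M = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.eye(2), np.zeros((2, 2))]])
swapped = M.T @ A @ M                      # equals [[R, S'], [S, Q]]
print(np.allclose(swapped[:2, :2], A[2:, 2:]))       # True: blocks swapped
print(np.all(np.linalg.eigvalsh(swapped) > 0))       # True: still > 0
```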

Variable Elimination Lemma

Lemma: For arbitrary nonzero vectors $x, y \in \mathbb{R}^n$,

\[ \max_{F \in \mathcal{F}} \ (x^T F y)^2 = (x^T x)(y^T y). \]

Proof: From the Cauchy-Schwarz inequality,

\[ |x^T F y| \leq \sqrt{x^T x} \sqrt{y^T F^T F y} \leq \sqrt{x^T x} \sqrt{y^T y}. \]

Therefore, for arbitrary $x, y$,

\[ (x^T F y)^2 \leq (x^T x)(y^T y). \]

Next we show that equality is attained.

Variable Elimination Lemma (contd.)

Let

\[ F_0 = \frac{x y^T}{\sqrt{x^T x}\, \sqrt{y^T y}}. \]

Therefore,

\[ F_0^T F_0 = \frac{y x^T x y^T}{(x^T x)(y^T y)} = \frac{y y^T}{y^T y}. \]

We can show that

\[ \sigma_{\max}(F_0^T F_0) = \sigma_{\max}(F_0 F_0^T) = 1 \implies F_0^T F_0 \leq I, \]

and thus $F_0 \in \mathcal{F}$.

Variable Elimination Lemma (contd.)

Therefore,

\[ (x^T F_0 y)^2 = \left( \frac{x^T x\, y^T y}{\sqrt{x^T x}\, \sqrt{y^T y}} \right)^2 = (x^T x)(y^T y). \]

Variable Elimination Lemma (contd.)

Lemma: Let $X \in \mathbb{R}^{m \times n}$, $Y \in \mathbb{R}^{n \times m}$, and $Q \in \mathbb{S}^m$. Then

\[ Q + X F Y + Y^T F^T X^T < 0, \quad \forall F \in \mathcal{F}, \]

iff there exists $\delta > 0$ such that

\[ Q + \delta X X^T + \frac{1}{\delta} Y^T Y < 0. \]

Proof (sufficiency):

\[ Q + X F Y + Y^T F^T X^T \leq Q + \delta X X^T + \frac{1}{\delta} Y^T Y < 0, \]

where the first inequality is the previous lemma.

Variable Elimination Lemma (contd.)

Proof (necessity): Suppose

\[ Q + X F Y + Y^T F^T X^T < 0, \quad \forall F \in \mathcal{F} \]

is true. Then for arbitrary nonzero $x$,

\[ x^T (Q + X F Y + Y^T F^T X^T) x < 0, \]

or

\[ x^T Q x + 2 x^T X F Y x < 0. \]

Using the previous lemma,

\[ \max_{F \in \mathcal{F}} \ (x^T X F Y x) = \sqrt{(x^T X X^T x)(x^T Y^T Y x)}, \]

which gives

\[ x^T Q x + 2 \sqrt{(x^T X X^T x)(x^T Y^T Y x)} < 0. \]

Variable Elimination Lemma (contd.)

Therefore,

\[ x^T Q x + 2 \sqrt{(x^T X X^T x)(x^T Y^T Y x)} < 0 \]

implies $x^T Q x < 0$ and

\[ \underbrace{(x^T Q x)^2}_{b^2} - 4 \underbrace{(x^T X X^T x)}_{a} \underbrace{(x^T Y^T Y x)}_{c} > 0, \]

i.e., $b^2 - 4ac > 0$.

Variable Elimination Lemma (contd.)

So the quadratic equation $a\delta^2 + b\delta + c = 0$ has real roots

\[ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. \]

Recall

\[ a := x^T X X^T x > 0, \qquad b := x^T Q x < 0, \qquad c := x^T Y^T Y x > 0. \]

This implies $-\frac{b}{2a} > 0$, so there is at least one positive root.

Variable Elimination Lemma (contd.)

Therefore there exists $\delta > 0$ such that $a\delta^2 + b\delta + c < 0$. Dividing by $\delta$ we get

\[ a\delta + b + \frac{c}{\delta} < 0, \]

or

\[ x^T Q x + \delta\, x^T X X^T x + \frac{1}{\delta}\, x^T Y^T Y x < 0, \]

or

\[ x^T \Big( Q + \delta X X^T + \frac{1}{\delta} Y^T Y \Big) x < 0, \]

hence

\[ Q + \delta X X^T + \frac{1}{\delta} Y^T Y < 0. \]
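
With the lemma proved, here is a Monte Carlo sketch of the sufficiency direction with hypothetical data: once the $\delta$-inequality holds, the uncertain inequality should hold for every sampled $F$ with $F^T F \leq I$.

```python
# Sampled check of: Q + delta*X X' + (1/delta) Y'Y < 0  =>
#                   Q + X F Y + (X F Y)' < 0 for all F'F <= I.
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 2
X = 0.2 * rng.standard_normal((m, n))
Y = 0.2 * rng.standard_normal((n, m))
Q = -np.eye(m)
delta = 1.0

max_eig = lambda M: np.linalg.eigvalsh(M).max()
print(max_eig(Q + delta * X @ X.T + (1 / delta) * Y.T @ Y) < 0)  # premise

worst = -np.inf
for _ in range(2000):
    F = rng.standard_normal((n, n))
    F /= max(1.0, np.linalg.norm(F, 2))          # scale into F'F <= I
    worst = max(worst, max_eig(Q + X @ F @ Y + (X @ F @ Y).T))
print(worst < 0)  # True whenever the premise above holds
```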

Elimination of Variables in a Partitioned Matrix

Lemma: Let

\[ Z = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{12}^T & Z_{22} \end{bmatrix}, \qquad Z_{11} \in \mathbb{R}^{n \times n}, \]

be symmetric. Then there exists $X = X^T$ such that

\[ \begin{bmatrix} Z_{11} - X & Z_{12} & X \\ Z_{12}^T & Z_{22} & 0 \\ X & 0 & -X \end{bmatrix} < 0 \iff Z < 0. \]

Proof: Apply the Schur complement lemma:

\[ \begin{bmatrix} Z_{11} - X & Z_{12} & X \\ Z_{12}^T & Z_{22} & 0 \\ X & 0 & -X \end{bmatrix} < 0 \iff -X < 0, \quad S_{ch}(-X) < 0. \]

Elimination of Variables in a Partitioned Matrix (contd.)

\[ 0 > S_{ch}(-X) = \begin{bmatrix} Z_{11} - X & Z_{12} \\ Z_{12}^T & Z_{22} \end{bmatrix} - \begin{bmatrix} X \\ 0 \end{bmatrix} (-X)^{-1} \begin{bmatrix} X & 0 \end{bmatrix} = \begin{bmatrix} Z_{11} - X & Z_{12} \\ Z_{12}^T & Z_{22} \end{bmatrix} + \begin{bmatrix} X & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{12}^T & Z_{22} \end{bmatrix} = Z. \]

Elimination of Variables in a Partitioned Matrix (contd.)

Lemma:

\[ \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{12}^T & Z_{22} & Z_{23} + X^T \\ Z_{13}^T & Z_{23}^T + X & Z_{33} \end{bmatrix} < 0 \iff \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{12}^T & Z_{22} \end{bmatrix} < 0 \ \text{and}\ \begin{bmatrix} Z_{11} & Z_{13} \\ Z_{13}^T & Z_{33} \end{bmatrix} < 0, \]

with $X = Z_{13}^T Z_{11}^{-1} Z_{12} - Z_{23}^T$.

Proof (necessity): apply the rules for negative definiteness to the principal submatrices.

Proof (sufficiency): the following are true from the Schur complement lemma:

\[ Z_{11} < 0, \qquad Z_{22} - Z_{12}^T Z_{11}^{-1} Z_{12} < 0, \qquad Z_{33} - Z_{13}^T Z_{11}^{-1} Z_{13} < 0. \]

Elimination of Variables in a Partitioned Matrix (contd.)

Look at the Schur complement of $Z_{11}$ in

\[ \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{12}^T & Z_{22} & Z_{23} + X^T \\ Z_{13}^T & Z_{23}^T + X & Z_{33} \end{bmatrix}: \]

\[ \begin{bmatrix} Z_{22} & Z_{23} + X^T \\ Z_{23}^T + X & Z_{33} \end{bmatrix} - \begin{bmatrix} Z_{12}^T \\ Z_{13}^T \end{bmatrix} Z_{11}^{-1} \begin{bmatrix} Z_{12} & Z_{13} \end{bmatrix} = \begin{bmatrix} Z_{22} - Z_{12}^T Z_{11}^{-1} Z_{12} & Z_{23} + X^T - Z_{12}^T Z_{11}^{-1} Z_{13} \\ Z_{23}^T + X - Z_{13}^T Z_{11}^{-1} Z_{12} & Z_{33} - Z_{13}^T Z_{11}^{-1} Z_{13} \end{bmatrix} < 0. \]

With the chosen $X$ the off-diagonal blocks vanish, leaving a block-diagonal matrix whose diagonal blocks are negative definite. Also $Z_{11} < 0$.

Elimination of Variables: Projection Lemma

Definition: Let $A \in \mathbb{R}^{m \times n}$. Then $M_a$ is a left orthogonal complement of $A$ if it satisfies

\[ M_a A = 0, \qquad \text{rank}(M_a) = m - \text{rank}(A). \]

Definition: Let $A \in \mathbb{R}^{m \times n}$. Then $N_a$ is a right orthogonal complement of $A$ if it satisfies

\[ A N_a = 0, \qquad \text{rank}(N_a) = n - \text{rank}(A). \]

Elimination of Variables: Projection Lemma (contd.)

Lemma: Let $P$, $Q$, and $H = H^T$ be matrices of appropriate dimensions, and let $N_p$, $N_q$ be right orthogonal complements of $P$, $Q$ respectively. Then there exists $X$ such that

\[ H + P^T X^T Q + Q^T X P < 0 \iff N_p^T H N_p < 0 \ \text{and}\ N_q^T H N_q < 0. \]

Proof (necessity): multiply on the right by $N_p$ (or $N_q$) and on the left by its transpose; the $X$-dependent terms vanish since $P N_p = 0$ and $Q N_q = 0$.

Proof (sufficiency): a little more involved; use kernel bases of $P$, $Q$, followed by the Schur complement lemma.

Elimination of Variables: Reciprocal Projection Lemma

Lemma: Let $P$ be any given positive definite matrix. The following statements are equivalent:

1. $\Psi + S + S^T < 0$.
2. The LMI problem

\[ \begin{bmatrix} \Psi + P - (W + W^T) & S^T + W^T \\ S + W & -P \end{bmatrix} < 0 \]

is feasible with respect to $W$.

Proof: Apply the projection lemma with respect to the general variable $W$. Let

\[ X = \begin{bmatrix} \Psi + P & S^T \\ S & -P \end{bmatrix}, \qquad Y = \begin{bmatrix} I_n & 0 \end{bmatrix}, \qquad Z = \begin{bmatrix} -I_n & I_n \end{bmatrix}. \]

Elimination of Variables: Reciprocal Projection Lemma (contd.)

Right orthogonal complements of $Y$, $Z$ are

\[ N_y = \begin{bmatrix} 0 \\ P^{-1} \end{bmatrix}, \qquad N_z = \begin{bmatrix} I_n \\ I_n \end{bmatrix}. \]

Verify that $Y N_y = 0$ and $Z N_z = 0$. We can show

\[ N_y^T X N_y = -P^{-1}, \qquad N_z^T X N_z = \Psi + S^T + S. \]

Now apply the projection lemma.

Elimination of Variables: Reciprocal Projection Lemma (contd.)

With $N_y^T X N_y = -P^{-1}$ and $N_z^T X N_z = \Psi + S^T + S$, the expression $X + Y^T W^T Z + Z^T W Y$ evaluates to

\[ X + Y^T W^T Z + Z^T W Y = \begin{bmatrix} \Psi + P - (W + W^T) & S^T + W^T \\ S + W & -P \end{bmatrix}. \]

Therefore, by the projection lemma,

\[ N_y^T X N_y < 0 \ \text{and}\ N_z^T X N_z < 0 \implies \exists\, W : \begin{bmatrix} \Psi + P - (W + W^T) & S^T + W^T \\ S + W & -P \end{bmatrix} < 0. \]

Since $N_y^T X N_y = -P^{-1} < 0$ always holds, feasibility of the LMI is equivalent to $\Psi + S + S^T < 0$.
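
A sketch of the lemma in action, assuming cvxpy: with data constructed so that $\Psi + S + S^T = -I < 0$, the LMI in $W$ should come back feasible.

```python
# Reciprocal projection lemma: feasibility of the LMI in W.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(7)
n = 3
S = rng.standard_normal((n, n))
Psi = -(S + S.T) - np.eye(n)   # ensures Psi + S + S' = -I < 0
P = np.eye(n)                  # any given positive definite P

W = cp.Variable((n, n))
M = cp.bmat([[Psi + P - (W + W.T), S.T + W.T],
             [S + W, -P]])
prob = cp.Problem(cp.Minimize(0), [M << -1e-6 * np.eye(2 * n)])
prob.solve()
print(prob.status)  # "optimal" => feasible, as the lemma asserts
```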

Trace of Matrices in LMIs

Lemma: Let $A(x) \in \mathbb{S}^m$ be a matrix function of $x \in \mathbb{R}^n$, and let $\gamma \in \mathbb{R}$, $\gamma > 0$. The following statements are equivalent:

1. There exists $x \in \mathbb{R}^n$ such that $\text{tr}\, A(x) < \gamma$.
2. There exist $x \in \mathbb{R}^n$ and $Z \in \mathbb{S}^m$ such that $A(x) < Z$ and $\text{tr}\, Z < \gamma$.

Proof: Homework problem.
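
A sketch, assuming cvxpy and random symmetric data, that compares the two formulations as feasibility problems; the small tolerances approximate the strict inequalities.

```python
# Feasibility of tr A(x) < gamma matches feasibility of {A(x) < Z, tr Z < gamma}.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(8)
m, n, gamma = 3, 2, 1.0
sym = lambda B: (B + B.T) / 2
A0 = sym(rng.standard_normal((m, m)))
A = [sym(rng.standard_normal((m, m))) for _ in range(n)]

x = cp.Variable(n)
Z = cp.Variable((m, m), symmetric=True)
Ax = A0 + sum(x[i] * A[i] for i in range(n))

p1 = cp.Problem(cp.Minimize(0), [cp.trace(Ax) <= gamma - 1e-6])
p2 = cp.Problem(cp.Minimize(0), [Ax << Z - 1e-6 * np.eye(m),
                                 cp.trace(Z) <= gamma - 1e-6])
p1.solve(); p2.solve()
print(p1.status, p2.status)  # both report a feasible point
```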