Lecture Note 5: Semidefinite Programming for Stability Analysis


ECE7850: Hybrid Systems: Theory and Applications
Lecture Note 5: Semidefinite Programming for Stability Analysis
Wei Zhang, Assistant Professor, Department of Electrical and Computer Engineering, Ohio State University, Columbus, Ohio, USA
Spring 2017
Lecture 5 (ECE7850 Sp17) Wei Zhang (OSU) 1 / 39

Outline
- Motivation
- Some Linear Algebra
- A Tiny Bit of Optimization and Duality
- Linear Matrix Inequalities
- Semidefinite Programming Problems
- Some Examples
- Conclusion

Motivation
- Converse Lyapunov function theorems are not constructive
- Basic idea for Lyapunov function synthesis:
  - Select a Lyapunov function structure (e.g. quadratic, polynomial, piecewise quadratic, ...)
  - Parameterize the Lyapunov function candidates
  - Find parameter values that satisfy the Lyapunov conditions
- Many Lyapunov synthesis problems can be formulated as semidefinite programming (SDP) problems
- SDP has broad applications in many other fields as well

Real Symmetric Matrices
- S^n: the set of n × n real symmetric matrices
- All eigenvalues are real
- There exists a full set of orthogonal eigenvectors
- Spectral decomposition: if A ∈ S^n, then A = QΛQ^T, where Λ is diagonal and Q is orthogonal
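The spectral decomposition above is easy to check numerically. A minimal sketch (not from the slides; the matrix is a random symmetric example):

```python
import numpy as np

# Verify the spectral decomposition A = Q Λ Q^T for a random
# real symmetric matrix A ∈ S^4.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                 # symmetrize -> A is in S^4

eigvals, Q = np.linalg.eigh(A)    # eigh is specialized for symmetric matrices
Lam = np.diag(eigvals)

# Eigenvalues are real (eigh returns real floats) and Q is orthogonal
assert np.allclose(Q.T @ Q, np.eye(4))
# Reconstruction: A = Q Λ Q^T
assert np.allclose(A, Q @ Lam @ Q.T)
```

Note `numpy.linalg.eigh` exploits symmetry and returns real eigenvalues sorted in ascending order, unlike the general `eig`.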

Positive Semidefinite Matrices I
- A ∈ S^n is called positive semidefinite (p.s.d.), denoted A ⪰ 0, if x^T A x ≥ 0 for all x ∈ R^n
- A ∈ S^n is called positive definite (p.d.), denoted A ≻ 0, if x^T A x > 0 for all nonzero x ∈ R^n
- S^n_+: the set of all p.s.d. (symmetric) matrices
- S^n_++: the set of all p.d. (symmetric) matrices
- p.s.d. or p.d. matrices can also be defined for non-symmetric matrices, e.g. A = [1 1; -1 1] satisfies x^T A x = x_1^2 + x_2^2 > 0. We assume p.s.d. and p.d. matrices are symmetric unless otherwise noted.
- Notation: A ⪰ B (resp. A ≻ B) means A − B ∈ S^n_+ (resp. A − B ∈ S^n_++)

Positive Semidefinite Matrices II
- Other equivalent characterizations of a symmetric p.s.d. matrix A:
  - All 2^n − 1 principal minors of A are nonnegative
  - All eigenvalues of A are nonnegative
  - There exists a factorization A = B^T B
- Other equivalent characterizations of a p.d. matrix A:
  - All n leading principal minors of A are positive
  - All eigenvalues of A are strictly positive
  - There exists a factorization A = B^T B with B square and nonsingular
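These characterizations can be spot-checked numerically. A hedged sketch with a made-up B (so A = B^T B is p.s.d. by construction):

```python
import numpy as np

# A = B^T B is p.s.d. by the factorization characterization.
B = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
A = B.T @ B

# All eigenvalues are nonnegative (small tolerance for rounding)
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)

# A p.d. matrix admits A = L L^T with L square and nonsingular;
# numpy's Cholesky factorization produces exactly such an L.
Apd = A + 1e-3 * np.eye(2)        # small shift guarantees p.d.
L = np.linalg.cholesky(Apd)
assert np.allclose(L @ L.T, Apd)
```

`numpy.linalg.cholesky` raises `LinAlgError` when its argument is not positive definite, which makes it a convenient practical p.d. test.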

Positive Semidefinite Matrices III
Useful facts:
- If T is nonsingular, then A ⪰ 0 ⟺ T^T A T ⪰ 0, and A ≻ 0 ⟺ T^T A T ≻ 0
- Inner product on R^{m×n}: ⟨A, B⟩ := tr(A^T B) = A • B
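A small numeric sketch of both facts, with arbitrarily chosen matrices:

```python
import numpy as np

# Congruence T^T A T preserves definiteness when T is nonsingular.
A = np.array([[2.0, 1.0], [1.0, 2.0]])    # p.d. (eigenvalues 1 and 3)
T = np.array([[1.0, 3.0], [0.0, 2.0]])    # nonsingular (det = 2)
assert np.all(np.linalg.eigvalsh(T.T @ A @ T) > 0)

# Trace inner product: <A, B> = tr(A^T B) = sum_ij A_ij B_ij
B = np.array([[1.0, 0.0], [0.0, 4.0]])
inner = np.trace(A.T @ B)
assert np.isclose(inner, np.sum(A * B))   # elementwise-sum identity
```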

Positive Semidefinite Matrices IV
- For A, B ∈ S^n_+, tr(AB) ≥ 0
- For any symmetric A ∈ S^n: λ_min(A) ≥ µ ⟺ A ⪰ µI, and λ_max(A) ≤ β ⟺ A ⪯ βI

Convex Set
Recall from Lecture 3:
- Convex set: a set S is convex if x_1, x_2 ∈ S ⟹ αx_1 + (1 − α)x_2 ∈ S for all α ∈ [0, 1]
- Convex combination of x_1, ..., x_k: α_1 x_1 + α_2 x_2 + ... + α_k x_k with α_i ≥ 0 and Σ_i α_i = 1
- Convex hull co{S}: the set of all convex combinations of points in S

Convex Cone
- A set S is called a cone if x ∈ S ⟹ λx ∈ S for all λ > 0
- Conic combination of x_1 and x_2: x = α_1 x_1 + α_2 x_2 with α_1, α_2 ≥ 0
- Convex cone:
  1. a cone that is convex
  2. equivalently, a set that contains all conic combinations of its points

Positive Semidefinite Cone
- The set of positive semidefinite matrices S^n_+ is a convex cone, referred to as the positive semidefinite (PSD) cone
- Recall that if A, B ∈ S^n_+, then tr(AB) ≥ 0. This indicates that the cone S^n_+ is acute.

Operations that Preserve Convexity I
- The intersection of a (possibly infinite) family of convex sets is convex
  - e.g. a polyhedron {x : Ax ⪯ b} is an intersection of finitely many halfspaces
  - e.g. the PSD cone is the intersection ∩_{x ∈ R^n} {A ∈ S^n : x^T A x ≥ 0}

Operations that Preserve Convexity II
- Affine mapping f : R^n → R^m (i.e. f(x) = Ax + b):
  - The image f(X) = {f(x) : x ∈ X} is convex whenever X ⊆ R^n is convex
    - e.g. the ellipsoid E_1 = {x ∈ R^n : (x − x_c)^T P (x − x_c) ≤ 1}, or equivalently E_2 = {x_c + Au : ||u||_2 ≤ 1}
  - The preimage f^{-1}(Y) = {x ∈ R^n : f(x) ∈ Y} is convex whenever Y ⊆ R^m is convex
    - e.g. {x : Ax ⪯ b} = f^{-1}(R^m_+) with f(x) = b − Ax, where R^m_+ is the nonnegative orthant

Convex Function I
Consider a finite-dimensional vector space X and let D ⊆ X be convex.
Definition 1 (Convex Function). A function f : D → R is called convex if
  f(αx_1 + (1 − α)x_2) ≤ αf(x_1) + (1 − α)f(x_2), for all x_1, x_2 ∈ D and α ∈ [0, 1]
f : D → R is called strictly convex if
  f(αx_1 + (1 − α)x_2) < αf(x_1) + (1 − α)f(x_2), for all x_1 ≠ x_2 in D and α ∈ (0, 1)
f : D → R is called concave if −f is convex.

Convex Function II
How can one check that a function is convex?
- Directly use the definition
- First-order condition: if f is differentiable over an open set containing D, then f is convex over D iff
  f(z) ≥ f(x) + ∇f(x)^T (z − x), for all x, z ∈ D
- Second-order condition: if f is twice differentiable over an open set containing D, then f is convex over D iff ∇²f(x) ⪰ 0 for all x ∈ D
- Many other conditions and tricks; see [BV04]
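Both conditions can be sketched on a quadratic f(x) = x^T Q x, whose gradient is 2Qx and Hessian is 2Q (the Q below is an arbitrary p.d. example, not from the slides):

```python
import numpy as np

# Second-order condition: f(x) = x^T Q x is convex iff the Hessian 2Q >= 0.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
hessian = 2 * Q
assert np.all(np.linalg.eigvalsh(hessian) >= 0)       # convex

# First-order condition spot check at two arbitrary points:
# f(z) >= f(x) + grad f(x)^T (z - x)
f = lambda x: x @ Q @ x
grad = lambda x: 2 * Q @ x
x, z = np.array([1.0, -1.0]), np.array([0.5, 2.0])
assert f(z) >= f(x) + grad(x) @ (z - x) - 1e-12
```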

Convex Function III
Examples of convex functions:
- Affine functions are both convex and concave
  - e.g. f(x) = a^T x + b, for x ∈ R^n
  - e.g. f(X) = tr(A^T X) + c = Σ_{i=1}^m Σ_{j=1}^n A_ij X_ij + c, for X ∈ R^{m×n}
- All norms are convex
  - e.g. on R^n: f(x) = ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}; f(x) = ||x||_∞ = max_k |x_k|
  - e.g. on R^{m×n}: f(X) = ||X||_2 = σ_max(X)

Convex Function IV
Lemma 1 (Jensen's Inequality). If f : D → R is convex, then for any x_1, ..., x_m ∈ D and α_1, ..., α_m ≥ 0 with Σ_{l=1}^m α_l = 1, we have
  f(α_1 x_1 + ... + α_m x_m) ≤ α_1 f(x_1) + ... + α_m f(x_m)
- α_k can be viewed as the probability that x_k occurs
- The probabilistic version of Jensen's inequality reads f(E(X)) ≤ E(f(X)) for any random variable X taking values in D
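A numeric spot check of Jensen's inequality for the convex function f(x) = x², with an arbitrary probability vector α (all values made up for illustration):

```python
import numpy as np

f = lambda x: x ** 2
x = np.array([-1.0, 0.5, 3.0])
alpha = np.array([0.2, 0.5, 0.3])   # alpha_i >= 0, sums to 1

lhs = f(alpha @ x)                  # f(E[X]) = f(sum_k alpha_k x_k)
rhs = alpha @ f(x)                  # E[f(X)] = sum_k alpha_k f(x_k)
assert lhs <= rhs                   # Jensen: f(E[X]) <= E[f(X)]
```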

Nonlinear Optimization and Duality I
Nonlinear optimization problem:
  minimize: f_0(x)
  subject to: f_i(x) ≤ 0, i = 1, ..., m
              h_i(x) = 0, i = 1, ..., q
- decision variable x ∈ R^n, domain D; referred to as the primal problem
- optimal value p*
- called a convex optimization problem if f_0, ..., f_m are convex and h_1, ..., h_q are affine
- convex optimization problems can typically be solved efficiently

Nonlinear Optimization and Duality II
Associated Lagrangian L : D × R^m × R^q → R:
  L(x, λ, ν) = f_0(x) + Σ_{i=1}^m λ_i f_i(x) + Σ_{i=1}^q ν_i h_i(x)
- a weighted sum of the objective and constraint functions
- λ_i: Lagrange multiplier associated with f_i(x) ≤ 0
- ν_i: Lagrange multiplier associated with h_i(x) = 0

Nonlinear Optimization and Duality III
Lagrange dual function g : R^m × R^q → R:
  g(λ, ν) = inf_{x ∈ D} L(x, λ, ν) = inf_{x ∈ D} { f_0(x) + Σ_{i=1}^m λ_i f_i(x) + Σ_{i=1}^q ν_i h_i(x) }
- g is concave; it can be −∞ for some (λ, ν)
- Lower bound property: if λ ⪰ 0 (elementwise), then g(λ, ν) ≤ p*
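A worked sketch of the lower-bound property on a toy problem (not from the slides): minimize x² subject to 1 − x ≤ 0, so p* = 1 at x* = 1. The Lagrangian L(x, λ) = x² + λ(1 − x) is minimized over x at x = λ/2, giving g(λ) = λ − λ²/4.

```python
import numpy as np

g = lambda lam: lam - lam ** 2 / 4   # dual function of min x^2 s.t. x >= 1
p_star = 1.0

# g(lam) <= p* for every lam >= 0 (weak duality / lower bound property)
lams = np.linspace(0.0, 10.0, 1001)
assert np.all(g(lams) <= p_star + 1e-12)

# The best lower bound is attained at lam = 2, where g(2) = p* = 1
assert np.isclose(g(2.0), p_star)
```

Note that g is concave (a downward parabola) even though nothing about the primal was used beyond its Lagrangian, matching the slide's claim.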

Nonlinear Optimization and Duality IV
Lagrange dual problem:
  maximize: g(λ, ν)
  subject to: λ ⪰ 0
- finds the best lower bound on p* obtainable from the Lagrange dual function
- a convex optimization problem even when the primal is nonconvex
- optimal value denoted d*
- (λ, ν) is called dual feasible if λ ⪰ 0 and (λ, ν) ∈ dom(g)
- often simplified by making the implicit constraint (λ, ν) ∈ dom(g) explicit
Example: linear program and its dual
  minimize: c^T x                 maximize: −b^T ν
  subject to: Ax = b              subject to: A^T ν + c ⪰ 0
              x ⪰ 0
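The LP duality pair can be checked numerically with `scipy.optimize.linprog`. A sketch with an arbitrary tiny instance, written in the equivalent textbook form max b^T y s.t. A^T y ⪯ c (substitute y = −ν to recover the form above):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

# Primal: min c^T x  s.t. Ax = b, x >= 0   (optimum x = (1,0,0), p* = 1)
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Dual: max b^T y  s.t. A^T y <= c; linprog only minimizes, so flip the sign
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

assert primal.status == 0 and dual.status == 0
assert np.isclose(primal.fun, -dual.fun)   # strong duality: p* = d*
```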

Nonlinear Optimization and Duality V
Duality theorems:
- Weak duality: d* ≤ p*
  - always holds (for convex and nonconvex problems)
  - can be used to find nontrivial lower bounds for difficult problems
- Strong duality: d* = p*
  - not true in general, but typically holds for convex problems
  - conditions that guarantee strong duality for convex problems are called constraint qualifications
  - Slater's constraint qualification: the primal is strictly feasible

Linear Matrix Inequalities I
Standard form: given symmetric matrices F_0, ..., F_m ∈ S^n,
  F(x) = F_0 + x_1 F_1 + ... + x_m F_m ⪰ 0
is called a linear matrix inequality (LMI) in x = (x_1, ..., x_m)^T ∈ R^m
- The function F(x) is affine in x
- The constraint set {x ∈ R^m : F(x) ⪰ 0} is nonlinear but convex
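A minimal sketch of evaluating an LMI in standard form; the matrices F_i below are arbitrary symmetric examples, and feasibility of a point x is tested via the smallest eigenvalue of F(x):

```python
import numpy as np

F0 = np.eye(2)
F1 = np.array([[1.0, 0.0], [0.0, -1.0]])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def F(x):
    """Affine matrix-valued map F(x) = F0 + x1*F1 + x2*F2."""
    return F0 + x[0] * F1 + x[1] * F2

# x is LMI-feasible iff the smallest eigenvalue of F(x) is >= 0
assert np.linalg.eigvalsh(F([0.0, 0.0])).min() >= 0    # x = (0,0): F = I, feasible
assert np.linalg.eigvalsh(F([2.0, 0.0])).min() < 0     # x = (2,0): F = diag(3,-1), infeasible
```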

Linear Matrix Inequalities II
Example 1 (LMI in standard form). Characterize the constraint set of
  F(x) = [ x_1 + x_2   x_2 + 1 ]
         [ x_2 + 1     x_3     ] ⪰ 0

Linear Matrix Inequalities III
General linear matrix inequalities:
- Let X be a finite-dimensional real vector space
- Let F : X → S^n be an affine mapping from X to the n × n symmetric matrices
- Then F(X) ⪰ 0 is also called an LMI, in the variable X ∈ X
- Translation to standard form: choose a basis X_1, ..., X_m of X and write X = x_1 X_1 + ... + x_m X_m for any X ∈ X. For a given affine mapping F : X → S^n, define F̂ : R^m → S^n by
  F̂(x) := F(X) = F(0) + Σ_{i=1}^m x_i [F(X_i) − F(0)]
  where x is the coordinate vector of X with respect to the basis X_1, ..., X_m

Linear Matrix Inequalities IV
Example 2. Find conditions on a matrix P ensuring that V(x) = x^T P x is a Lyapunov function for the linear system ẋ = Ax.
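The standard answer to Example 2 is the pair of LMIs P ≻ 0 and A^T P + PA ≺ 0. For a fixed stable A these can be checked numerically by solving the Lyapunov equation A^T P + PA = −Q for a chosen Q ≻ 0; a sketch with an arbitrary Hurwitz A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2 (Hurwitz)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing
# a = A^T and q = -I yields A^T P + P A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

assert np.all(np.linalg.eigvalsh(P) > 0)                  # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)    # A^T P + P A < 0
```

So V(x) = x^T P x with this P certifies stability of ẋ = Ax; the LMI formulation becomes essential when A itself contains unknowns.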

Schur Complement Lemma I
Lemma 2 (Schur Complement Lemma). Define
  M = [ A    B ]
      [ B^T  C ]
The following three sets of inequalities are equivalent:
  1. M ≻ 0
  2. A ≻ 0 and C − B^T A^{-1} B ≻ 0
  3. C ≻ 0 and A − B C^{-1} B^T ≻ 0
Proof: The lemma follows immediately from the congruence identities
  [ I            0 ] [ A    B ] [ I   −A^{-1}B ]   [ A   0                ]
  [ −B^T A^{-1}  I ] [ B^T  C ] [ 0   I        ] = [ 0   C − B^T A^{-1} B ]

  [ I   −BC^{-1} ] [ A    B ] [ I           0 ]   [ A − BC^{-1}B^T   0 ]
  [ 0   I        ] [ B^T  C ] [ −C^{-1}B^T  I ] = [ 0                C ]
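A numeric spot check of the lemma on a made-up block matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[2.0]])
M = np.block([[A, B], [B.T, C]])          # M = [[A, B], [B^T, C]]

pd = lambda X: bool(np.all(np.linalg.eigvalsh(X) > 0))
schur_A = C - B.T @ np.linalg.solve(A, B)  # C - B^T A^{-1} B
schur_C = A - B @ np.linalg.solve(C, B.T)  # A - B C^{-1} B^T

# All three conditions of the lemma agree:
assert pd(M) == (pd(A) and pd(schur_A)) == (pd(C) and pd(schur_C))
assert pd(M)                               # and this instance is in fact p.d.
```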

Schur Complement Lemma II
- The proof of the Schur complement lemma also reveals more general relations between the numbers of negative, zero, and positive eigenvalues of
  - M versus A and C − B^T A^{-1} B
  - M versus C and A − B C^{-1} B^T
- The Schur complement lemma is very useful for transforming nonlinear (quadratic or bilinear) matrix inequalities into linear ones

Semidefinite Programming I
Semidefinite programming (SDP) problem: optimization with a linear objective, a linear matrix inequality constraint, and linear equality constraints:
  minimize: c^T x
  subject to: F_0 + x_1 F_1 + ... + x_m F_m ⪰ 0        (1)
              Ax = b
- The linear equality constraint in (1) can be eliminated, so essentially an SDP optimizes a linear function subject to only LMI constraints
- SDP is a particular class of convex optimization problem; a globally optimal solution can be found efficiently
- Optimizing a nonlinear but convex cost function subject to LMI constraints is also a convex optimization problem that can often be solved efficiently

Semidefinite Programming II
Standard forms of SDP in a matrix variable:
SDP standard primal form:
  min_{X ∈ S^n}: f_p(X) = C • X
  subject to: A_i • X = b_i, i = 1, ..., m        (2)
              X ⪰ 0
SDP dual form:
  max_{y ∈ R^m}: f_d(y) = b^T y
  subject to: Σ_{i=1}^m y_i A_i ⪯ C        (3)
- One can derive the dual from the primal using either the standard Lagrange duality method or more specialized Fenchel duality results
- The dual form (3) is equivalent to (1) (after eliminating the equality constraint Ax = b in (1))

Semidefinite Programming III
- SDP weak duality: f_p(X) ≥ f_d(y) for any primal-feasible X and dual-feasible y
- SDP strong duality: f_p(X*) = f_d(y*) holds under Slater's condition
- Many control and optimization problems can be formulated as, or translated into, SDP problems
- Various computationally difficult optimization problems can be effectively approximated by SDP problems (SDP relaxation, ...)
- We will see some examples after introducing an important technique: the S-procedure

S-Procedure I
- Many stability/engineering problems require certifying that a given function is sign-definite over a certain subset of the space
- Mathematically, the condition can be stated as follows:
  g_0(x) ≥ 0 on {x ∈ R^n : g_1(x) ≥ 0, ..., g_m(x) ≥ 0}        (4)
- Given functions g_0, ..., g_m, we want to know whether this condition holds. Sometimes we may also want to find a g_0 satisfying the condition for given g_1, ..., g_m.
- Conservative but useful sufficient condition: there exist pointwise-nonnegative functions s_i(x) such that
  g_0(x) − Σ_i s_i(x) g_i(x) ≥ 0, for all x ∈ R^n
- This is the so-called generalized S-procedure

S-Procedure II
Now consider an important special case: g_i(x) = x^T G_i x, i = 0, 1, ..., m, are quadratic forms.
- Requirement (4) becomes: for all x ∈ R^n,
  x^T G_1 x ≥ 0, ..., x^T G_m x ≥ 0 ⟹ x^T G_0 x ≥ 0
- Sufficient condition (S-procedure): there exist α_1, ..., α_m ≥ 0 with
  G_0 ⪰ α_1 G_1 + ... + α_m G_m
- The S-procedure is lossless if m = 1 and there exists x̂ such that x̂^T G_1 x̂ > 0 (a constraint qualification)
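A sketch of the quadratic S-procedure with m = 1 on made-up matrices: exhibiting α ≥ 0 with G_0 − αG_1 ⪰ 0 certifies the implication x^T G_1 x ≥ 0 ⟹ x^T G_0 x ≥ 0, which the code also cross-checks by sampling.

```python
import numpy as np

G0 = np.array([[2.0, 0.0], [0.0, -1.0]])   # claim: x1^2 >= x2^2  =>  2*x1^2 >= x2^2
G1 = np.array([[1.0, 0.0], [0.0, -1.0]])

# S-procedure certificate: G0 - alpha*G1 >= 0 with alpha = 1 (it equals diag(1, 0))
alpha = 1.0
assert np.all(np.linalg.eigvalsh(G0 - alpha * G1) >= -1e-12)

# Consequence check by random sampling: whenever x^T G1 x >= 0, x^T G0 x >= 0
rng = np.random.default_rng(1)
for x in rng.standard_normal((1000, 2)):
    if x @ G1 @ x >= 0:
        assert x @ G0 @ x >= -1e-12
```

Searching for such an α is itself an LMI feasibility problem in α, which is how the S-procedure plugs into SDP solvers.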

Some Examples I
Example 3 (Eigenvalue optimization). Given symmetric matrices A_0, A_1, ..., A_m, let S(w) = A_0 + Σ_i w_i A_i. Find weights {w_i}_{i=1}^m that minimize λ_max(S(w)).
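A sketch of Example 3 on a tiny made-up instance. The proper route is the SDP reformulation min t s.t. S(w) ⪯ tI; here, for illustration only, λ_max is minimized directly with a general-purpose derivative-free solver:

```python
import numpy as np
from scipy.optimize import minimize

A0 = np.diag([1.0, -1.0])
A1 = np.diag([-1.0, 1.0])

def lam_max(w):
    """Largest eigenvalue of S(w) = A0 + w1*A1."""
    return np.linalg.eigvalsh(A0 + w[0] * A1).max()

res = minimize(lam_max, x0=[0.0], method="Nelder-Mead")

# For this instance S(w) = diag(1 - w, w - 1), so lambda_max = |1 - w|,
# minimized at w = 1 with optimal value 0.
assert np.isclose(res.x[0], 1.0, atol=1e-3)
assert res.fun < 1e-3
```

Nelder-Mead is used because λ_max is nonsmooth; the SDP formulation avoids this issue entirely.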

Some Examples II
Example 4 (Ellipsoid inequality). Given R ∈ S^n_++, the set E = {x ∈ R^n : (x − x_c)^T R (x − x_c) ≤ 1} is an ellipsoid with center x_c. Find the point in E that is closest to the origin.
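A sketch of Example 4 with made-up data: R = I and x_c = (2, 0), so E is the unit ball centered at (2, 0) and the closest point to the origin is (1, 0). Solved here with a generic constrained solver rather than the conic reformulation:

```python
import numpy as np
from scipy.optimize import minimize

R = np.eye(2)
xc = np.array([2.0, 0.0])

obj = lambda x: x @ x    # squared distance to the origin
con = {"type": "ineq",   # SLSQP convention: fun(x) >= 0 means feasible
       "fun": lambda x: 1.0 - (x - xc) @ R @ (x - xc)}

res = minimize(obj, x0=xc, constraints=[con], method="SLSQP")
assert np.allclose(res.x, [1.0, 0.0], atol=1e-3)
```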

Some Examples III
Example 5 (Linear feedback control gain design). Given a linear control system ẋ = Ax + Bu with linear state feedback u = Kx, find K that stabilizes the system.
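A numeric sketch for Example 5 with arbitrary (A, B). The LMI route uses the change of variables Y = P^{-1}, W = KY; here, as a simpler stand-in, the gain is computed by pole placement and stability of the closed loop is then verified directly:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [2.0, -1.0]])    # open-loop eigenvalues 1, -2: unstable
B = np.array([[0.0], [1.0]])

# place_poles returns K such that eig(A - B K) are the requested poles,
# i.e. it uses the u = -Kx sign convention.
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
Acl = A - B @ K

assert np.all(np.linalg.eigvals(Acl).real < 0)   # closed loop is stable
```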

Some Examples IV
Example 6 (Robust stability). Given the system ẋ = Ax + u with uncertain feedback u = g(x), suppose all we know is that the feedback law satisfies ||g(x)||_2 ≤ β||x||_2. Find a Lyapunov function V(x) = x^T P x that certifies exponential stability.

Concluding Remarks
- Basic knowledge of optimization and Lagrange duality
- Linear matrix inequalities impose convex constraints
- Semidefinite programming problem: optimize a linear cost subject to LMI constraints
- SDP has broad applications in various engineering fields: signal processing, networking, communications, control, machine learning, big data, ...
- Further reading: [BV04; Boy+94; VB00]
- Next lecture: stability analysis for hybrid systems

References
[Boy+94] Stephen Boyd et al. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
[VB00] Jeremy G. VanAntwerp and Richard D. Braatz. "A tutorial on linear and bilinear matrix inequalities". In: Journal of Process Control 10.4 (2000).
[BV04] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.