
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications

Lecture 4

Andre Tkacenko
Signal Processing Research Group, Jet Propulsion Laboratory
April 12, 2012

Outline

1. Special Vector/Matrix Operators
   - Hadamard Product
   - Kronecker Product
   - Vectorization
2. The Schur Complement
   - Introduction
   - Role in Linear Matrix Inequalities
   - Applications in Control Theory

Special Vector/Matrix Operators: Hadamard Product

The Hadamard or Entrywise Product

If $A, B \in \mathbb{F}^{m \times n}$, then the Hadamard product (or entrywise product) is the matrix $A \circ B \in \mathbb{F}^{m \times n}$ whose elements are given as follows:
$$[A \circ B]_{k,l} \triangleq [A]_{k,l} [B]_{k,l}, \quad 1 \leq k \leq m, \; 1 \leq l \leq n.$$

Properties:
- $A \circ B = B \circ A$.
- $A \circ (B \circ C) = (A \circ B) \circ C$.
- $A \circ (B + C) = (A \circ B) + (A \circ C)$.
- If $C \in \mathbb{F}^{m \times m}$, then $C \circ I_m = \mathrm{diag}(C)$.
- If $x \in \mathbb{F}^m$ and $y \in \mathbb{F}^n$, then $x^* (A \circ B) y = \mathrm{tr}\left( \mathrm{diag}(x^*) \, A \, \mathrm{diag}(y) \, B^T \right)$.
- (Schur product theorem:) If $A, B \in \mathbb{F}^{m \times m}$ and $A, B \succeq 0$, then $A \circ B \succeq 0$. Also, we have $\det(A \circ B) \geq \det(A) \det(B)$.
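These identities are easy to spot-check numerically. Below is a minimal sketch (assuming NumPy, which is not part of the lecture) in which `*` on arrays is exactly the entrywise product:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A, B, C = rng.standard_normal((3, m, n))

# Commutativity, associativity, and distributivity of the Hadamard product.
assert np.allclose(A * B, B * A)
assert np.allclose(A * (B * C), (A * B) * C)
assert np.allclose(A * (B + C), A * B + A * C)

# Schur product theorem: the Hadamard product of two PSD matrices is PSD.
G, H = rng.standard_normal((2, m, m))
P, Q = G @ G.T, H @ H.T                      # PSD by construction
assert np.linalg.eigvalsh(P * Q).min() >= -1e-10
# Determinant inequality: det(P ∘ Q) >= det(P) det(Q), up to round-off.
assert np.linalg.det(P * Q) >= np.linalg.det(P) * np.linalg.det(Q) - 1e-12
```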

Special Vector/Matrix Operators: Kronecker Product

The Kronecker Product

The Kronecker product is a generalization of the outer product which maps two matrices of arbitrary size to a block matrix. Specifically, if $A$ is an $m \times n$ matrix and $B$ is a $p \times q$ matrix, then the Kronecker product $A \otimes B$ is an $mp \times nq$ block matrix given by
$$A \otimes B \triangleq \begin{bmatrix} [A]_{1,1} B & \cdots & [A]_{1,n} B \\ \vdots & \ddots & \vdots \\ [A]_{m,1} B & \cdots & [A]_{m,n} B \end{bmatrix}.$$

Basic Properties:
- $A \otimes (B + C) = (A \otimes B) + (A \otimes C)$.
- $(A + B) \otimes C = (A \otimes C) + (B \otimes C)$.
- $(\alpha A) \otimes B = A \otimes (\alpha B) = \alpha (A \otimes B)$.
- $(A \otimes B) \otimes C = A \otimes (B \otimes C)$.
- $(A \otimes B)^* = A^* \otimes B^*$, $(A \otimes B)^T = A^T \otimes B^T$, and $\overline{A \otimes B} = \overline{A} \otimes \overline{B}$.
- There exist permutation matrices $P \in \mathbb{F}^{mp \times mp}$ and $Q \in \mathbb{F}^{nq \times nq}$ such that $(A \otimes B) = P (B \otimes A) Q$.
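NumPy's np.kron implements exactly this block construction; a short sketch (NumPy assumed, as before) checking a few of the listed properties on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((4, 5))

# Bilinearity and behavior under conjugate transposition.
assert np.allclose(np.kron(A, B + C), np.kron(A, B) + np.kron(A, C))
assert np.allclose(np.kron(2.0 * A, B), 2.0 * np.kron(A, B))
assert np.allclose(np.kron(A, B).conj().T,
                   np.kron(A.conj().T, B.conj().T))
print(np.kron(A, B).shape)   # (mp, nq) = (8, 15)
```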

Special Vector/Matrix Operators: Kronecker Product

Advanced Properties of the Kronecker Product

Products & Inverses:
- (Mixed-Product Property:) $(A \otimes B)(C \otimes D) = AC \otimes BD$.
- By the mixed-product property, it follows that $(A \otimes B)$ is invertible if and only if $A$ and $B$ are invertible, in which case we have $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$.

Spectrum: Suppose $A \in \mathbb{F}^{m \times m}$ and $B \in \mathbb{F}^{n \times n}$. If $\lambda_1, \ldots, \lambda_m$ and $\mu_1, \ldots, \mu_n$ denote the eigenvalues of $A$ and $B$, respectively, then the $mn$ eigenvalues of $(A \otimes B)$ are given by
$$\lambda_k \mu_l, \quad 1 \leq k \leq m, \; 1 \leq l \leq n.$$
From this, it follows that
$$\mathrm{tr}(A \otimes B) = \mathrm{tr}(A) \, \mathrm{tr}(B), \qquad \det(A \otimes B) = (\det(A))^n (\det(B))^m.$$

Singular Values: Suppose $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{p \times q}$ have $\rho_A$ and $\rho_B$ nonzero singular values given by $\sigma_{A,1}, \ldots, \sigma_{A,\rho_A}$ and $\sigma_{B,1}, \ldots, \sigma_{B,\rho_B}$, respectively. Then, $(A \otimes B)$ has $\rho_A \rho_B$ nonzero singular values given by
$$\sigma_{A,k} \, \sigma_{B,l}, \quad 1 \leq k \leq \rho_A, \; 1 \leq l \leq \rho_B.$$
From this it follows that $\mathrm{rank}(A \otimes B) = \mathrm{rank}(A) \, \mathrm{rank}(B)$.
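A quick numerical check of the mixed-product property and the spectral facts (a sketch assuming NumPy; symmetric matrices are used so the eigenvalues are real and can be sorted reliably for comparison):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 4
A = rng.standard_normal((m, m)); A = A + A.T   # symmetric => real spectrum
B = rng.standard_normal((n, n)); B = B + B.T
C = rng.standard_normal((m, m))
D = rng.standard_normal((n, n))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = AC ⊗ BD.
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Eigenvalues of A ⊗ B are all pairwise products λ_k μ_l.
lam, mu = np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)
assert np.allclose(np.sort(np.linalg.eigvalsh(np.kron(A, B))),
                   np.sort(np.outer(lam, mu).ravel()))

# Trace and determinant identities follow from the spectrum.
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** n * np.linalg.det(B) ** m)
```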

Special Vector/Matrix Operators: Vectorization

Vectorization and the Vec Operator

In many cases, it is convenient to stack the columns of a matrix together to form a column vector. This process of converting a matrix into a vector is called vectorization and is carried out using the vec operator. Specifically, if $A$ is an $m \times n$ matrix with the column representation
$$A = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix},$$
where $a_k$ is an $m \times 1$ vector for all $k$, then $\mathrm{vec}(A)$ is the $mn \times 1$ vector given by
$$\mathrm{vec}(A) \triangleq \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}.$$
Equivalently, we have
$$[\mathrm{vec}(A)]_k = [A]_{((k-1) \bmod m) + 1, \, \lfloor (k-1)/m \rfloor + 1}, \quad 1 \leq k \leq mn.$$
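In NumPy (assumed here), vec is simply a column-major (Fortran-order) flatten, since it stacks the columns on top of each other; a minimal sketch:

```python
import numpy as np

def vec(A: np.ndarray) -> np.ndarray:
    """Stack the columns of A into a single vector (column-major flatten)."""
    return A.reshape(-1, order="F")

A = np.arange(6).reshape(2, 3)   # columns are [0, 3], [1, 4], [2, 5]
print(vec(A))                    # [0 3 1 4 2 5]

# Check the 1-based index formula using 0-based indices k = 0, ..., mn-1:
m, n = A.shape
assert all(vec(A)[k] == A[k % m, k // m] for k in range(m * n))
```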

Special Vector/Matrix Operators: Vectorization

Properties of the Vec Operator

Compatibility with Kronecker products: If $A \in \mathbb{F}^{m \times n}$, $B \in \mathbb{F}^{n \times p}$, and $C \in \mathbb{F}^{p \times q}$, then we have
$$\mathrm{vec}(ABC) = \left( C^T \otimes A \right) \mathrm{vec}(B) = \left( I_q \otimes AB \right) \mathrm{vec}(C) = \left( C^T B^T \otimes I_m \right) \mathrm{vec}(A).$$

Compatibility with Hadamard products: If $A, B \in \mathbb{F}^{m \times n}$, then we have
$$\mathrm{vec}(A \circ B) = \mathrm{vec}(A) \circ \mathrm{vec}(B).$$

Compatibility with inner products: If $A, B \in \mathbb{F}^{m \times n}$, then we have
$$\mathrm{tr}\left( A^* B \right) = (\mathrm{vec}(A))^* \, \mathrm{vec}(B).$$
In other words, for the standard inner products $\langle x, y \rangle = y^* x$ for vectors and $\langle X, Y \rangle = \mathrm{tr}\left( Y^* X \right)$ for matrices, we have $\langle X, Y \rangle = \langle \mathrm{vec}(X), \mathrm{vec}(Y) \rangle$.
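These identities are the workhorse for turning matrix equations into ordinary linear systems. The sketch below (NumPy assumed, with the vec helper from above) verifies them on random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((p, q))
vec = lambda M: M.reshape(-1, order="F")

# vec(ABC) = (C^T ⊗ A) vec(B) = (I_q ⊗ AB) vec(C) = (C^T B^T ⊗ I_m) vec(A).
lhs = vec(A @ B @ C)
assert np.allclose(lhs, np.kron(C.T, A) @ vec(B))
assert np.allclose(lhs, np.kron(np.eye(q), A @ B) @ vec(C))
assert np.allclose(lhs, np.kron(C.T @ B.T, np.eye(m)) @ vec(A))

# Inner-product compatibility: tr(X^* Y) = vec(X)^* vec(Y).
X, Y = rng.standard_normal((2, m, n))
assert np.isclose(np.trace(X.conj().T @ Y), vec(X).conj() @ vec(Y))
```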

Special Vector/Matrix Operators: Vectorization

Solving Lyapunov Equations Using the Vec Operator

In control theory, the stability of certain linear systems can be ascertained by solving a type of equation known as a Lyapunov equation. This equation takes one of the following forms:
$$A X + X A^* + Q = 0, \quad \text{(continuous-time Lyapunov equation)}$$
$$A X A^* - X + Q = 0. \quad \text{(discrete-time Lyapunov equation)}$$
Here, all matrices are $n \times n$, and the stability of either the continuous- or discrete-time system can be determined from the value of $X$. Specifically, in either case, if there exist $X \succ 0$ and $Q \succ 0$ such that the corresponding Lyapunov equation holds, then the linear system evolving according to the state matrix $A$ is globally asymptotically stable.

The Lyapunov equation can be solved in closed form using the vec operator. Specifically, we have
$$\mathrm{vec}(X) = -\left( (I_n \otimes A) + (\overline{A} \otimes I_n) \right)^{-1} \mathrm{vec}(Q), \quad \text{(continuous-time case)}$$
$$\mathrm{vec}(X) = \left( I_{n^2} - (\overline{A} \otimes A) \right)^{-1} \mathrm{vec}(Q). \quad \text{(discrete-time case)}$$
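As a sanity check, the sketch below solves a continuous-time Lyapunov equation via the Kronecker/vec formula and compares it against SciPy's dedicated solver (NumPy/SciPy assumed; the diagonal shift is just a convenient way to make the random $A$ stable):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted to be Hurwitz
Q = np.eye(n)
vec = lambda M: M.reshape(-1, order="F")

# Kronecker/vec solution of A X + X A^* + Q = 0 (A real here, so conj(A) = A):
K = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))
X = np.linalg.solve(K, -vec(Q)).reshape(n, n, order="F")

# Cross-check against SciPy, which solves A X + X A^H = Q (note the sign).
assert np.allclose(X, solve_continuous_lyapunov(A, -Q))
assert np.allclose(A @ X + X @ A.conj().T + Q, np.zeros((n, n)))
print("X > 0:", np.linalg.eigvalsh((X + X.T) / 2).min() > 0)  # stable => True
```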

The Schur Complement: Introduction

The Schur Complement: Formal Definition

Recall that if $A \in \mathbb{F}^{m \times m}$, $B \in \mathbb{F}^{m \times n}$, $C \in \mathbb{F}^{n \times m}$, and $D \in \mathbb{F}^{n \times n}$ are such that $A$ and $D$ are invertible, and if we define the $(m + n) \times (m + n)$ matrix $M$ in block form as follows:
$$M \triangleq \begin{bmatrix} A & B \\ C & D \end{bmatrix},$$
then the $n \times n$ matrix $S_{A;M}$ defined as
$$S_{A;M} \triangleq D - C A^{-1} B$$
is said to be the Schur complement of $A$ in $M$. Similarly, the $m \times m$ matrix $S_{D;M}$ defined as
$$S_{D;M} \triangleq A - B D^{-1} C$$
is the Schur complement of $D$ in $M$.

Schur complements arise in the study of block matrix inversion (i.e., when deriving an expression for $M^{-1}$), and play a role in the matrix inversion lemma.
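For instance, the lower-right block of $M^{-1}$ is exactly $S_{A;M}^{-1}$, which the following sketch verifies numerically (NumPy assumed; the diagonal shifts just keep the random blocks well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 2
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))
D = rng.standard_normal((n, n)) + n * np.eye(n)

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.solve(A, B)        # Schur complement of A in M

# Block inversion: the lower-right n x n block of M^{-1} equals S^{-1}.
assert np.allclose(np.linalg.inv(M)[m:, m:], np.linalg.inv(S))
```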

The Schur Complement: Introduction

The Hermitian Case

Suppose that $A \in \mathbb{H}^m$, $B \in \mathbb{C}^{m \times n}$, and $C \in \mathbb{H}^n$ are such that $A$ is invertible. If we define the block matrix $X$ as
$$X \triangleq \begin{bmatrix} A & B \\ B^* & C \end{bmatrix},$$
and define the $n \times n$ matrix $S$ as
$$S \triangleq C - B^* A^{-1} B,$$
then we have the following results:
- $X \succ 0$ if and only if $A \succ 0$ and $S \succ 0$.
- If $A \succ 0$, then $X \succeq 0$ if and only if $S \succeq 0$.

These results can be easily shown by using the following decomposition: for $v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$,
$$v^* X v = u^* Y u, \quad \text{where} \quad Y \triangleq \begin{bmatrix} A & 0_{m \times n} \\ 0_{n \times m} & S \end{bmatrix}, \quad u \triangleq \begin{bmatrix} I_m & A^{-1} B \\ 0_{n \times m} & I_n \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix},$$
which follows from the congruence
$$X = \begin{bmatrix} I_m & 0_{m \times n} \\ B^* A^{-1} & I_n \end{bmatrix} \begin{bmatrix} A & 0_{m \times n} \\ 0_{n \times m} & S \end{bmatrix} \begin{bmatrix} I_m & A^{-1} B \\ 0_{n \times m} & I_n \end{bmatrix}.$$
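A quick randomized check of the second result (NumPy assumed): for $A \succ 0$, the block matrix is PSD exactly when the Schur complement is.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 3, 2
is_psd = lambda M: np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -1e-10

G = rng.standard_normal((m, m))
A = G @ G.T + 0.1 * np.eye(m)            # A > 0
B = rng.standard_normal((m, n))

for shift in (0.0, 5.0):                 # vary C; the equivalence must hold either way
    H = rng.standard_normal((n, n))
    C = H @ H.T - np.eye(n) + shift * np.eye(n)
    X = np.block([[A, B], [B.conj().T, C]])
    S = C - B.conj().T @ np.linalg.solve(A, B)
    assert is_psd(X) == is_psd(S)        # X ⪰ 0 ⇔ S ⪰ 0 when A > 0
```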

The Schur Complement: Introduction

Case of a Singular Block Term

If the matrix $A$ from above is singular, then in general, it is not simple to relate the definiteness of $X$ to that of its blocks, as the Schur complement of $A$ in $X$ is not formally defined in this case. However, if $\mathcal{R}(B) \subseteq \mathcal{R}(A)$, then the following decomposition holds, based on a generalized version of the Schur complement in which $A^{\#}$ denotes the (Moore-Penrose) pseudoinverse of $A$:
$$X = \begin{bmatrix} I_m & 0_{m \times n} \\ B^* A^{\#} & I_n \end{bmatrix} \begin{bmatrix} A & 0_{m \times n} \\ 0_{n \times m} & C - B^* A^{\#} B \end{bmatrix} \begin{bmatrix} I_m & A^{\#} B \\ 0_{n \times m} & I_n \end{bmatrix}.$$
It can be shown that
$$\mathcal{R}(B) \subseteq \mathcal{R}(A) \iff \left( I_m - A A^{\#} \right) B = 0.$$
Hence, we have the following result:
$$X \succeq 0 \iff A \succeq 0, \; \left( I_m - A A^{\#} \right) B = 0, \; \text{and} \; C - B^* A^{\#} B \succeq 0.$$
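The sketch below builds a rank-deficient PSD $A$ with $\mathcal{R}(B) \subseteq \mathcal{R}(A)$ by construction, then checks the range condition and the equivalence (NumPy assumed; np.linalg.pinv plays the role of $A^{\#}$):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 4, 2, 2
is_psd = lambda M: np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -1e-10

U = rng.standard_normal((m, r))
A = U @ U.T                               # PSD with rank r < m (singular)
B = A @ rng.standard_normal((m, n))       # forces R(B) ⊆ R(A)
C = np.eye(n)

A_pinv = np.linalg.pinv(A)                # A# = Moore-Penrose pseudoinverse
assert np.allclose((np.eye(m) - A @ A_pinv) @ B, 0)   # range condition holds

X = np.block([[A, B], [B.conj().T, C]])
S = C - B.conj().T @ A_pinv @ B           # generalized Schur complement
assert is_psd(X) == is_psd(S)             # equivalence, since A ⪰ 0 here
```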

The Schur Complement: Role in Linear Matrix Inequalities

Expressing Constraints as Linear Matrix Inequalities

Often, constraints that one may encounter when formulating an optimization problem can be expressed in terms of linear matrix inequalities (LMIs). This can be extremely useful for posing the problem as a semidefinite program (SDP), if possible, which is a standard-form convex optimization problem.

Examples:
- Suppose $A \in \mathbb{F}^{m \times n}$, $b \in \mathbb{F}^{m \times 1}$, $x \in \mathbb{F}^{n \times 1}$, and $\gamma > 0$, and we would like to enforce the vector norm constraint
$$\| A x - b \|_2 \leq \gamma \iff \gamma^2 - (Ax - b)^* (Ax - b) \geq 0.$$
This can be expressed as an LMI as shown below:
$$\begin{bmatrix} I_m & (Ax - b) \\ (Ax - b)^* & \gamma^2 \end{bmatrix} \succeq 0.$$
- Suppose $X \in \mathbb{F}^{m \times n}$ and $\gamma > 0$, and we would like to enforce the matrix norm constraint
$$\| X \|_2 = \sqrt{\lambda_{\max}\left( X^* X \right)} \leq \gamma.$$
As we have $\lambda_{\max}\left( X^* X \right) \leq \gamma^2$ if and only if $X^* X \preceq \gamma^2 I_n$, this leads to the LMI given below:
$$\begin{bmatrix} I_m & X \\ X^* & \gamma^2 I_n \end{bmatrix} \succeq 0.$$
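A randomized check of the first equivalence (NumPy assumed): the $(m+1) \times (m+1)$ block matrix is PSD exactly when the residual norm is at most $\gamma$.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, gamma = 4, 3, 2.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
is_psd = lambda M: np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -1e-10

for _ in range(100):
    x = rng.standard_normal(n)
    r = (A @ x - b)[:, None]                     # residual as an m x 1 column
    lmi = np.block([[np.eye(m), r], [r.T, [[gamma ** 2]]]])
    # Schur complement of I_m: gamma^2 - r^T r >= 0  ⇔  ||Ax - b||_2 <= gamma.
    assert (np.linalg.norm(A @ x - b) <= gamma) == is_psd(lmi)
```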

The Schur Complement: Applications in Control Theory

Algebraic Riccati Equations

Examples of equations which arise frequently in the study of control theory are algebraic Riccati equations (AREs), which determine the solution to the infinite-horizon, time-invariant linear-quadratic regulator (LQR) and linear-quadratic-Gaussian (LQG) control problems. These equations come in the following variants:
$$A^* X + X A - X B R^{-1} B^* X + Q = 0, \quad \text{(continuous-time ARE (CARE))}$$
$$X = A^* X A + Q - \left( A^* X B \right) \left( R + B^* X B \right)^{-1} \left( B^* X A \right). \quad \text{(discrete-time ARE (DARE))}$$
Here, $A \in \mathbb{F}^{m \times m}$, $B \in \mathbb{F}^{m \times n}$, $Q \in \mathbb{H}^m$, and $R \in \mathbb{H}^n$ are the problem data, and $X \in \mathbb{H}^m$ is the variable to be solved for. Typically we assume $R \succ 0$, and we seek a solution for which $X \succeq 0$. To find such a solution, it is common to relax the equality to an inequality. For the DARE, one such relaxation is given below:
$$X \preceq A^* X A + Q - \left( A^* X B \right) \left( R + B^* X B \right)^{-1} \left( B^* X A \right), \quad X \succeq 0.$$
This can be shown to lead to the following LMI:
$$\begin{bmatrix} R + B^* X B & B^* X A & 0_{n \times m} \\ A^* X B & A^* X A + Q - X & 0_{m \times m} \\ 0_{m \times n} & 0_{m \times m} & X \end{bmatrix} \succeq 0.$$
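To close the loop numerically, the sketch below solves a DARE with SciPy's solve_discrete_are (a real SciPy routine; the random problem data are just for illustration) and confirms that the resulting $X$ satisfies both the equation and the block LMI above:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(9)
m, n = 3, 2
A = rng.standard_normal((m, m)) / 2.0     # scaled down to keep A stable
B = rng.standard_normal((m, n))
Q, R = np.eye(m), np.eye(n)

# SciPy solves A^T X A - X - (A^T X B)(R + B^T X B)^{-1}(B^T X A) + Q = 0.
X = solve_discrete_are(A, B, Q, R)
gain = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
assert np.allclose(A.T @ X @ A - X - A.T @ X @ B @ gain + Q, 0)

# At the exact solution the 3 x 3 block LMI holds (its Schur-complement
# block is zero) and X ⪰ 0.
lmi = np.block([
    [R + B.T @ X @ B,  B.T @ X @ A,          np.zeros((n, m))],
    [A.T @ X @ B,      A.T @ X @ A + Q - X,  np.zeros((m, m))],
    [np.zeros((m, n)), np.zeros((m, m)),     X],
])
assert np.linalg.eigvalsh((lmi + lmi.T) / 2).min() >= -1e-8
```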