An Introduction to Linear Matrix Inequalities. Raktim Bhattacharya Aerospace Engineering, Texas A&M University
2 Linear Matrix Inequalities
What are they?
- Inequalities involving matrix variables
- Matrix variables appear linearly
- Represent convex sets described by polynomial inequalities
- Critical tool in post-modern control theory
AERO 632, Instructor: Raktim Bhattacharya 2 / 38
3 Standard Form
F(x) := F_0 + x_1 F_1 + ... + x_n F_n > 0,
where x := [x_1, x_2, ..., x_n]^T and each F_i ∈ S^m is an m × m symmetric matrix.
Think of F(x) : R^n → S^m.
Example:
[1 x; x 1] > 0  ⟺  [1 0; 0 1] + x [0 1; 1 0] > 0.
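As a quick numerical sketch (not part of the slides), the example above can be checked by evaluating F(x) = F_0 + x F_1 and testing whether all eigenvalues are positive; the matrices F0 and F1 below are the standard-form terms of the example LMI.

```python
import numpy as np

# Sketch (assumed setup): the example LMI [1 x; x 1] > 0 written in
# standard form F(x) = F_0 + x*F_1, checked via eigenvalues.
F0 = np.eye(2)                       # constant term F_0
F1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])          # coefficient of x

def lmi_feasible(x):
    """True iff F(x) = F0 + x*F1 is positive definite."""
    return bool(np.all(np.linalg.eigvalsh(F0 + x * F1) > 0))
```

For this example the feasible set is the open interval (-1, 1): `lmi_feasible(0.5)` holds while `lmi_feasible(1.5)` does not.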
4 Positive Definiteness
F > 0 denotes a positive definite matrix:
F > 0  ⟺  x^T F x > 0 for all x ≠ 0.
Sylvester's criterion: F > 0  ⟺  all leading principal minors of F are positive.
For
F = [F_11 F_12 F_13; F_21 F_22 F_23; F_31 F_32 F_33],
this gives the polynomial constraints
F_11 > 0,  det [F_11 F_12; F_21 F_22] > 0,  det [F_11 F_12 F_13; F_21 F_22 F_23; F_31 F_32 F_33] > 0.
Positive definiteness thus encodes a family of polynomial constraints as a linear matrix inequality.
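A minimal sketch of Sylvester's criterion, checked against the eigenvalue test on an assumed example matrix (the tridiagonal matrix below is not from the slides):

```python
import numpy as np

# Sketch: Sylvester's criterion vs. the eigenvalue test for positive
# definiteness, on an assumed 3x3 example.
def leading_principal_minors(F):
    """Determinants of the k x k upper-left submatrices, k = 1..n."""
    n = F.shape[0]
    return [np.linalg.det(F[:k, :k]) for k in range(1, n + 1)]

F = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

minors = leading_principal_minors(F)                 # 2, 3, 4
by_sylvester = all(m > 0 for m in minors)
by_eigenvalues = bool(np.all(np.linalg.eigvalsh(F) > 0))
```

Both tests agree that this F is positive definite.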
5 Definiteness
- Positive semi-definite: F ⪰ 0 iff all principal minors (not just the leading ones) are ≥ 0.
- Negative definite: F < 0 iff the leading principal minors alternate in sign, starting negative: every odd leading principal minor is < 0 and every even one is > 0.
- Negative semi-definite: F ⪯ 0 iff every odd principal minor is ≤ 0 and every even principal minor is ≥ 0.
Reference: Matrix Analysis, Roger Horn.
6 Example 1
y > 0,  y - x^2 > 0  ⟺  [y x; x 1] > 0.
The LMI [y x; x 1] > 0 is in general form. We can write it in standard form as
[0 0; 0 1] + y [1 0; 0 0] + x [0 1; 1 0] > 0.
General form saves notation and may lead to more efficient computation.
7 Example 2
x_1^2 + x_2^2 < 1  ⟺  [1 x_1 x_2; x_1 1 0; x_2 0 1] > 0.
The leading principal minors are
1 > 0,
det [1 x_1; x_1 1] = 1 - x_1^2 > 0,
det [1 x_1 x_2; x_1 1 0; x_2 0 1] = 1 - x_1^2 - x_2^2 > 0.
The last inequality simplifies to 1 - (x_1^2 + x_2^2) > 0.
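A quick numeric sanity check (assumed sample points, not from the slides) that the 3x3 LMI carves out exactly the open unit disk:

```python
import numpy as np

# Sketch: the 3x3 LMI of Example 2 vs. the inequality x1^2 + x2^2 < 1.
def lmi_unit_disk(x1, x2):
    M = np.array([[1.0,  x1,  x2],
                  [ x1, 1.0, 0.0],
                  [ x2, 0.0, 1.0]])
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

def disk(x1, x2):
    return x1**2 + x2**2 < 1.0
```

The two tests agree on points inside and outside the disk.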
8 Eigenvalue Minimization
Let A_i ∈ S^n, i = 0, 1, ..., n, and define A(x) := A_0 + A_1 x_1 + ... + A_n x_n.
Find x := [x_1 x_2 ... x_n]^T that minimizes
J(x) := min_x λ_max(A(x)).
How to solve this problem?
9 Eigenvalue Minimization (contd.)
Recall that for M ∈ S^n,
λ_max(M) ≤ t  ⟺  M - tI ⪯ 0.
(Linear algebra result: Matrix Analysis, R. Horn and C. R. Johnson.)
The optimization problem is therefore
min_{x,t} t  such that  A(x) - tI ⪯ 0.
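The epigraph equivalence above can be spot-checked numerically; the diagonal matrix below is an assumed example with a known largest eigenvalue:

```python
import numpy as np

# Sketch: lambda_max(M) <= t  <=>  M - t*I is negative semidefinite,
# checked on an assumed diagonal example (lambda_max = 3).
M = np.diag([1.0, 3.0, -2.0])

def epigraph_holds(t, tol=1e-12):
    """True iff M - t*I is negative semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(M - t * np.eye(3)) <= tol))
```

Minimizing t subject to this constraint recovers λ_max(A(x)) at each fixed x; an SDP solver searches over x and t jointly.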
10 Matrix Norm Minimization
Let A_i ∈ R^{p×q}, i = 0, 1, ..., n, and define A(x) := A_0 + A_1 x_1 + ... + A_n x_n.
Find x := [x_1 x_2 ... x_n]^T that minimizes
J(x) := min_x ||A(x)||_2.
How to solve this problem?
11 Matrix Norm Minimization (contd.)
Recall ||A||_2 := sqrt(λ_max(A^T A)). Hence
||A(x)||_2 ≤ t  ⟺  A(x)^T A(x) ⪯ t^2 I,
or, by the Schur complement,
[tI A(x); A(x)^T tI] ⪰ 0.
The optimization problem is therefore
min_{x,t} t  subject to  [tI A(x); A(x)^T tI] ⪰ 0.
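The norm identity and the Schur-complement block can be verified numerically; the matrix A below is an assumed example:

```python
import numpy as np

# Sketch: ||A||_2 = sqrt(lambda_max(A^T A)), and the block [tI A; A^T tI]
# is PSD exactly when t >= ||A||_2 (for t >= 0).
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
spec_norm = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())

def schur_block_psd(t, tol=1e-9):
    n, m = A.shape
    block = np.block([[t * np.eye(n), A],
                      [A.T, t * np.eye(m)]])
    return bool(np.all(np.linalg.eigvalsh(block) >= -tol))
```

The block's eigenvalues are t ± σ_i(A), so the smallest one crosses zero precisely at t = ||A||_2.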
12 Important Inequalities
13 Generalized Square Inequalities
Lemma: For arbitrary scalars x, y and δ > 0,
0 ≤ (sqrt(δ) x - y/sqrt(δ))^2 = δ x^2 + δ^{-1} y^2 - 2xy,
which implies
2xy ≤ δ x^2 + δ^{-1} y^2.
14 Generalized Square Inequalities: Restriction-Free Inequalities
Lemma: Let X, Y ∈ R^{m×n}, F ∈ S^m with F > 0, and let δ > 0 be a scalar. Then
X^T F Y + Y^T F X ⪯ δ X^T F X + δ^{-1} Y^T F Y.
When X = x and Y = y are vectors,
2 x^T F y ≤ δ x^T F x + δ^{-1} y^T F y.
Proof: completion of squares,
(sqrt(δ) X - δ^{-1/2} Y)^T F (sqrt(δ) X - δ^{-1/2} Y) ⪰ 0.
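A numeric spot-check of the matrix inequality on assumed sample matrices (the gap between the two sides should be positive semidefinite, since it is a completed square):

```python
import numpy as np

# Sketch: check  X^T F Y + Y^T F X  <=  delta X^T F X + (1/delta) Y^T F Y
# by verifying the difference of the two sides is PSD.
F = np.array([[2.0, 0.0],
              [0.0, 1.0]])           # F in S^2, F > 0
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Y = np.array([[0.0, 1.0],
              [1.0, 0.0]])
delta = 0.5

gap = (delta * X.T @ F @ X + (1.0 / delta) * Y.T @ F @ Y
       - X.T @ F @ Y - Y.T @ F @ X)
gap_is_psd = bool(np.all(np.linalg.eigvalsh(gap) >= -1e-9))
```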
15 Generalized Square Inequalities: Inequalities with Restrictions
Let 𝔽 = {F : F ∈ R^{n×n}, F^T F ⪯ I}.
Lemma: Let X ∈ R^{m×n} and Y ∈ R^{n×m}. Then for arbitrary δ > 0,
X F Y + Y^T F^T X^T ⪯ δ X X^T + δ^{-1} Y^T Y,  for all F ∈ 𝔽.
Proof: completion of squares. Start with
(sqrt(δ) X^T - δ^{-1/2} F Y)^T (sqrt(δ) X^T - δ^{-1/2} F Y) ⪰ 0
and use F^T F ⪯ I.
16 Schur Complements
Very useful for identifying convex sets. Let
[Q(x) S(x); S^T(x) R(x)],  Q(x) ∈ S^{m_1}, R(x) ∈ S^{m_2},
where Q(x), R(x), S(x) are affine functions of x. Then
[Q(x) S(x); S^T(x) R(x)] > 0  ⟺  Q(x) > 0 and R(x) - S^T(x) Q(x)^{-1} S(x) > 0.
Generalizing to the semidefinite case,
[Q(x) S(x); S^T(x) R(x)] ⪰ 0  ⟺  Q(x) ⪰ 0,  S^T(x)(I - Q(x) Q^+(x)) = 0,  R(x) - S^T(x) Q^+(x) S(x) ⪰ 0,
where Q^+(x) is the pseudo-inverse. This generalization is used when Q(x) is positive semidefinite but singular.
17 Schur Complement Lemma
Let
A := [A_11 A_12; A_21 A_22].
For symmetric A, define the Schur complements
S_ch(A_11) := A_22 - A_21 A_11^{-1} A_12,
S_ch(A_22) := A_11 - A_12 A_22^{-1} A_21.
Then
A > 0  ⟺  A_11 > 0 and S_ch(A_11) > 0  ⟺  A_22 > 0 and S_ch(A_22) > 0.
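The lemma is easy to verify on an assumed 3x3 positive definite matrix, partitioned with A_11 of size 2x2:

```python
import numpy as np

# Sketch: A > 0  <=>  A11 > 0 and Sch(A11) > 0, on an assumed example.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

sch_A11 = A22 - A21 @ np.linalg.inv(A11) @ A12   # Schur complement of A11

def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(M) > 0))
```

Here `is_pd(A)`, `is_pd(A11)`, and `is_pd(sch_A11)` all agree, as the lemma requires.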
18 Example 1
x_1^2 + x_2^2 < 1  ⟺  1 - x^T x > 0  ⟺  [I x; x^T 1] > 0.
Here Q(x) = I > 0 and R(x) = 1, so the Schur complement condition is 1 - x^T x > 0.
19 Example 2
x^T P x < 1  ⟺  1 - x^T P x > 0,
or
1 - x^T P x = 1 - (sqrt(P) x)^T (sqrt(P) x) > 0,
where sqrt(P) is the matrix square root. By the Schur complement,
[P^{-1} x; x^T 1] > 0,  or equivalently  [I sqrt(P) x; (sqrt(P) x)^T 1] > 0.
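A numeric check that the block LMI describes the interior of the ellipsoid, for an assumed P > 0 and sample points:

```python
import numpy as np

# Sketch: for P > 0, membership x^T P x < 1 matches positive definiteness
# of the block matrix [P^{-1} x; x^T 1].
P = np.array([[2.0, 0.0],
              [0.0, 0.5]])

def inside_by_lmi(x):
    x = np.asarray(x, dtype=float).reshape(2, 1)
    block = np.block([[np.linalg.inv(P), x],
                      [x.T, np.ones((1, 1))]])
    return bool(np.all(np.linalg.eigvalsh(block) > 0))

def inside_directly(x):
    x = np.asarray(x, dtype=float)
    return float(x @ P @ x) < 1.0
```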
20 LMIs are not unique
If F is positive definite, then any congruence transformation of F is also positive definite:
F > 0  ⟺  x^T F x > 0 for all x ≠ 0  ⟺  y^T M^T F M y > 0 for all y ≠ 0 and nonsingular M  ⟺  M^T F M > 0.
Consequently, a symmetric rearrangement of matrix blocks does not change the feasible set:
[Q S; S^T R] > 0  ⟺  [0 I; I 0] [Q S; S^T R] [0 I; I 0] > 0  ⟺  [R S^T; S Q] > 0.
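Both facts can be spot-checked numerically; F and M below are assumed examples (M is nonsingular but not orthogonal):

```python
import numpy as np

# Sketch: congruence preserves definiteness, and a symmetric block swap
# is a congruence with the permutation T = [0 I; I 0] (scalar blocks here).
F = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # F > 0
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])            # nonsingular

congruent = M.T @ F @ M               # still positive definite
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
swapped = T @ F @ T                   # [R S^T; S Q] with Q=2, R=3, S=1
```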
21 Variable Elimination Lemma
Lemma: For arbitrary nonzero vectors x, y ∈ R^n,
max_{F ∈ 𝔽: F^T F ⪯ I} (x^T F y)^2 = (x^T x)(y^T y).
Proof: From the Schwarz inequality,
(x^T F y)^2 ≤ (x^T x)(y^T F^T F y) ≤ (x^T x)(y^T y).
Therefore for arbitrary x, y we have
(x^T F y)^2 ≤ (x^T x)(y^T y).
Next, show that equality is attained.
22 Variable Elimination Lemma (contd.)
Let
F_0 = x y^T / sqrt((x^T x)(y^T y)).
Then
F_0^T F_0 = y x^T x y^T / ((x^T x)(y^T y)) = y y^T / (y^T y).
We can show that
σ_max(F_0^T F_0) = σ_max(F_0 F_0^T) = 1,
so F_0^T F_0 ⪯ I, and thus F_0 ∈ 𝔽.
23 Variable Elimination Lemma (contd.)
Therefore,
(x^T F_0 y)^2 = ( x^T x y^T y / sqrt((x^T x)(y^T y)) )^2 = (x^T x)(y^T y),
so the bound is attained.
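The maximizer construction can be verified numerically on assumed sample vectors:

```python
import numpy as np

# Sketch: F_0 = x y^T / sqrt((x^T x)(y^T y)) attains the Schwarz bound
# (x^T F y)^2 = (x^T x)(y^T y) and has largest singular value 1.
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

F0 = np.outer(x, y) / np.sqrt((x @ x) * (y @ y))
attained = (x @ F0 @ y) ** 2                          # = (x^T x)(y^T y)
sigma_max = np.linalg.svd(F0, compute_uv=False).max() # = 1, so F0 in F
```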
24 Variable Elimination Lemma (contd.)
Lemma: Let X ∈ R^{m×n}, Y ∈ R^{n×m}, and Q ∈ S^m. Then
Q + X F Y + Y^T F^T X^T < 0 for all F ∈ 𝔽
iff there exists δ > 0 such that
Q + δ X X^T + δ^{-1} Y^T Y < 0.
Proof (sufficiency): By the previous lemma,
Q + X F Y + Y^T F^T X^T ⪯ Q + δ X X^T + δ^{-1} Y^T Y < 0.
25 Variable Elimination Lemma (contd.)
Proof (necessity): Suppose Q + X F Y + Y^T F^T X^T < 0 for all F ∈ 𝔽. Then for arbitrary nonzero x,
x^T (Q + X F Y + Y^T F^T X^T) x < 0,
or
x^T Q x + 2 x^T X F Y x < 0.
Using the previous lemma,
max_{F ∈ 𝔽} (x^T X F Y x) = sqrt((x^T X X^T x)(x^T Y^T Y x)),
which implies
x^T Q x + 2 sqrt((x^T X X^T x)(x^T Y^T Y x)) < 0.
26 Variable Elimination Lemma (contd.)
Therefore,
x^T Q x + 2 sqrt((x^T X X^T x)(x^T Y^T Y x)) < 0
implies
x^T Q x - 2 sqrt((x^T X X^T x)(x^T Y^T Y x)) < 0  and  x^T Q x < 0.
Multiplying the two inequalities gives
(x^T Q x)^2 - 4 (x^T X X^T x)(x^T Y^T Y x) > 0.
With b := x^T Q x, a := x^T X X^T x, c := x^T Y^T Y x, this reads b^2 - 4ac > 0.
27 Variable Elimination Lemma (contd.)
So the quadratic equation a δ^2 + b δ + c = 0 has real roots
δ = (-b ± sqrt(b^2 - 4ac)) / (2a).
Recall
a := x^T X X^T x > 0,  b := x^T Q x < 0,  c := x^T Y^T Y x > 0.
This implies -b/(2a) > 0, so there is at least one positive root.
28 Variable Elimination Lemma (contd.)
Therefore there exists δ > 0 such that a δ^2 + b δ + c < 0. Dividing by δ,
a δ + b + c/δ < 0,
or
x^T Q x + δ x^T X X^T x + δ^{-1} x^T Y^T Y x < 0,
or
x^T (Q + δ X X^T + δ^{-1} Y^T Y) x < 0,
or
Q + δ X X^T + δ^{-1} Y^T Y < 0.
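A numeric spot-check of the lemma on assumed data: the certificate Q + XX^T + Y^T Y < 0 (δ = 1) should make the uncertain inequality hold for every F with F^T F ⪯ I; rotation matrices are tested as representative extreme points.

```python
import numpy as np

# Sketch: with the certificate negative definite, Q + XFY + Y^T F^T X^T
# stays negative definite for all F with ||F|| <= 1 (rotations tested).
Q = -5.0 * np.eye(2)
X = np.eye(2)
Y = np.array([[1.0, 1.0],
              [0.0, 1.0]])

certificate = Q + X @ X.T + Y.T @ Y          # delta = 1 in the lemma
cert_nd = bool(np.all(np.linalg.eigvalsh(certificate) < 0))

def uncertain_nd(theta):
    """Check Q + X F Y + Y^T F^T X^T < 0 for a rotation F (F^T F = I)."""
    c, s = np.cos(theta), np.sin(theta)
    Fm = np.array([[c, -s], [s, c]])
    M = Q + X @ Fm @ Y + Y.T @ Fm.T @ X.T
    return bool(np.all(np.linalg.eigvalsh(M) < 0))
```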
29 Elimination of Variables in a Partitioned Matrix
Lemma: Let
Z = [Z_11 Z_12; Z_12^T Z_22],  Z_11 ∈ R^{n×n},
be symmetric. Then there exists X = X^T such that
[Z_11 - X, Z_12, X; Z_12^T, Z_22, 0; X, 0, -X] < 0  ⟺  Z < 0.
Proof: Apply the Schur complement lemma:
[Z_11 - X, Z_12, X; Z_12^T, Z_22, 0; X, 0, -X] < 0  ⟺  -X < 0 and S_ch(-X) < 0.
30 Elimination of Variables in a Partitioned Matrix (contd.)
0 > S_ch(-X)
  = [Z_11 - X, Z_12; Z_12^T, Z_22] - [X; 0] (-X)^{-1} [X 0]
  = [Z_11 - X, Z_12; Z_12^T, Z_22] + [X 0; 0 0]
  = [Z_11 Z_12; Z_12^T Z_22]
  = Z.
31 Elimination of Variables in a Partitioned Matrix (contd.)
Lemma: There exists X such that
[Z_11, Z_12, Z_13; Z_12^T, Z_22, Z_23 + X^T; Z_13^T, Z_23^T + X, Z_33] < 0
⟺
[Z_11 Z_12; Z_12^T Z_22] < 0  and  [Z_11 Z_13; Z_13^T Z_33] < 0,
with X = Z_13^T Z_11^{-1} Z_12 - Z_23^T.
Proof (necessity): apply the rules for negative definiteness of principal submatrices.
Proof (sufficiency): the following are true from the Schur complement lemma:
Z_11 < 0,
Z_22 - Z_12^T Z_11^{-1} Z_12 < 0,
Z_33 - Z_13^T Z_11^{-1} Z_13 < 0.
32 Elimination of Variables in a Partitioned Matrix (contd.)
Look at the Schur complement of Z_11 in
[Z_11, Z_12, Z_13; Z_12^T, Z_22, Z_23 + X^T; Z_13^T, Z_23^T + X, Z_33]:
[Z_22, Z_23 + X^T; Z_23^T + X, Z_33] - [Z_12^T; Z_13^T] Z_11^{-1} [Z_12 Z_13]
= [Z_22 - Z_12^T Z_11^{-1} Z_12,  Z_23 + X^T - Z_12^T Z_11^{-1} Z_13;  Z_23^T + X - Z_13^T Z_11^{-1} Z_12,  Z_33 - Z_13^T Z_11^{-1} Z_13] < 0.
With X = Z_13^T Z_11^{-1} Z_12 - Z_23^T the off-diagonal blocks vanish, and both diagonal blocks are negative definite by the previous slide. Also Z_11 < 0.
33 Elimination of Variables: Projection Lemma
Definition: Let A ∈ R^{m×n}. Then M_a is a left orthogonal complement of A if it satisfies
M_a A = 0,  rank(M_a) = m - rank(A).
Definition: Let A ∈ R^{m×n}. Then N_a is a right orthogonal complement of A if it satisfies
A N_a = 0,  rank(N_a) = n - rank(A).
34 Elimination of Variables: Projection Lemma (contd.)
Lemma: Let P, Q, and H = H^T be matrices of appropriate dimensions, and let N_p, N_q be right orthogonal complements of P, Q respectively. Then there exists X such that
H + P^T X^T Q + Q^T X P < 0
⟺
N_p^T H N_p < 0  and  N_q^T H N_q < 0.
Proof: Necessity: multiply on the left and right by N_p^T and N_p (or N_q^T and N_q). Sufficiency: a little more involved; use a basis for the kernels of P and Q, followed by the Schur complement lemma.
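A small scalar-variable instance of the projection lemma, with an assumed H and row vectors P, Q (not from the slides); both projected conditions hold, and a particular X closes the full inequality:

```python
import numpy as np

# Sketch: projection lemma on a 2x2 instance. N_p, N_q are computed as
# right orthogonal complements via the SVD.
H = np.array([[-1.0, 5.0],
              [5.0, -2.0]])
P = np.array([[1.0, 0.0]])
Q = np.array([[0.0, 1.0]])

def right_complement(A, tol=1e-12):
    """Columns span the null space of A (right orthogonal complement)."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

Np, Nq = right_complement(P), right_complement(Q)
proj_p = Np.T @ H @ Np        # picks out H[1,1] = -2 < 0
proj_q = Nq.T @ H @ Nq        # picks out H[0,0] = -1 < 0

X = np.array([[-5.0]])        # this choice cancels the off-diagonal 5
closed = H + P.T @ X.T @ Q + Q.T @ X @ P
```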
35 Elimination of Variables: Reciprocal Projection Lemma
Lemma: Let P be any given positive definite matrix. The following statements are equivalent:
1. Ψ + S + S^T < 0.
2. The LMI problem
[Ψ + P - (W + W^T), S^T + W^T; S + W, -P] < 0
is feasible with respect to W.
Proof: Apply the projection lemma with respect to the general variable W. Let
X = [Ψ + P, S^T; S, -P],  Y = [I_n 0],  Z = [I_n -I_n].
36 Elimination of Variables: Reciprocal Projection Lemma (contd.)
With
X = [Ψ + P, S^T; S, -P],  Y = [I_n 0],  Z = [I_n -I_n],
right orthogonal complements of Y, Z are
N_y = [0; P^{-1}],  N_z = [I_n; I_n].
Verify that Y N_y = 0 and Z N_z = 0. We can show
N_y^T X N_y = -P^{-1},  N_z^T X N_z = Ψ + S^T + S.
Apply the projection lemma.
37 Elimination of Variables: Reciprocal Projection Lemma (contd.)
Since N_y^T X N_y = -P^{-1} < 0 always holds (P > 0), the projection lemma leaves only N_z^T X N_z = Ψ + S^T + S < 0 as a condition. The expression with the free variable W is
X + Y^T W^T Z + Z^T W Y = [Ψ + P - (W + W^T), S^T + W^T; S + W, -P]
(after the sign change W → -W, which is harmless since W is free). Therefore
N_y^T X N_y < 0 and N_z^T X N_z < 0  ⟺  [Ψ + P - (W + W^T), S^T + W^T; S + W, -P] < 0 is feasible in W.
38 Trace of Matrices in LMIs
Lemma: Let A(x) ∈ S^m be a matrix function of x ∈ R^n, and let γ ∈ R, γ > 0. The following statements are equivalent:
1. There exists x ∈ R^n such that tr A(x) < γ.
2. There exist x ∈ R^n and Z ∈ S^m such that A(x) < Z and tr Z < γ.
Proof: homework problem.
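One direction of the homework is easy to illustrate numerically: if tr A < γ, the slack variable Z = A + εI with ε > 0 small satisfies both conditions (an assumed example below, not the slides' solution):

```python
import numpy as np

# Sketch: slack-variable direction of the trace lemma. tr A = 3 < gamma,
# and Z = A + eps*I gives A < Z with tr Z = 3.2 < gamma.
A = np.diag([1.0, 2.0])
gamma = 4.0
eps = 0.1
Z = A + eps * np.eye(2)

strict_order = bool(np.all(np.linalg.eigvalsh(Z - A) > 0))   # A < Z
trace_ok = float(np.trace(Z)) < gamma
```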
More informationEcon Slides from Lecture 8
Econ 205 Sobel Econ 205 - Slides from Lecture 8 Joel Sobel September 1, 2010 Computational Facts 1. det AB = det BA = det A det B 2. If D is a diagonal matrix, then det D is equal to the product of its
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More information1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?
. Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in
More informationConvex Functions and Optimization
Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized
More informationPaul Schrimpf. October 18, UBC Economics 526. Unconstrained optimization. Paul Schrimpf. Notation and definitions. First order conditions
Unconstrained UBC Economics 526 October 18, 2013 .1.2.3.4.5 Section 1 Unconstrained problem x U R n F : U R. max F (x) x U Definition F = max x U F (x) is the maximum of F on U if F (x) F for all x U and
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationE2 212: Matrix Theory (Fall 2010) Solutions to Test - 1
E2 212: Matrix Theory (Fall 2010) s to Test - 1 1. Let X = [x 1, x 2,..., x n ] R m n be a tall matrix. Let S R(X), and let P be an orthogonal projector onto S. (a) If X is full rank, show that P can be
More informationLinear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions
Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given
More informationLecture Note 1: Background
ECE5463: Introduction to Robotics Lecture Note 1: Background Prof. Wei Zhang Department of Electrical and Computer Engineering Ohio State University Columbus, Ohio, USA Spring 2018 Lecture 1 (ECE5463 Sp18)
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationChapter 6 Inner product spaces
Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y
More informationSymmetric Matrices and Eigendecomposition
Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions
More informationA Little Necessary Matrix Algebra for Doctoral Studies in Business & Economics. Matrix Algebra
A Little Necessary Matrix Algebra for Doctoral Studies in Business & Economics James J. Cochran Department of Marketing & Analysis Louisiana Tech University Jcochran@cab.latech.edu Matrix Algebra Matrix
More information5 More on Linear Algebra
14.102, Math for Economists Fall 2004 Lecture Notes, 9/23/2004 These notes are primarily based on those written by George Marios Angeletos for the Harvard Math Camp in 1999 and 2000, and updated by Stavros
More information1 Series Solutions Near Regular Singular Points
1 Series Solutions Near Regular Singular Points All of the work here will be directed toward finding series solutions of a second order linear homogeneous ordinary differential equation: P xy + Qxy + Rxy
More informationMatrices A brief introduction
Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 44 Definitions Definition A matrix is a set of N real or complex
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More information1 Linear Algebra Problems
Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and
More informationInner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n
Inner products and Norms Inner product of 2 vectors Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y 2 + + x n y n in R n Notation: (x, y) or y T x For complex vectors (x, y) = x 1 ȳ 1 + x 2
More information