Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 2


Instructor: Farid Alizadeh        Scribe: Xuan Li        9/17/2001

1 Overview

We survey the basic notions of cones and cone-LP and give several examples, mostly related to semidefinite programming.

2 Program Formulations

The linear and semidefinite programming problems are formulated as follows.

2.1 Standard Form Linear Programming

Let c ∈ R^n and b ∈ R^m, and let A ∈ R^{m×n} have rows a_i ∈ R^n, i = 1, ..., m.

    min   c^T x
    s.t.  a_i^T x = b_i,   i = 1, ..., m        (1)
          x ≥ 0

2.2 Semidefinite Programming

Here, instead of the vectors a_i, we use symmetric matrices A_i ∈ S^{n×n} (the set of n×n symmetric matrices), i = 1, ..., m, and C ∈ S^{n×n} and X ∈ S^{n×n} instead of c and x. The matrix X is required to be positive semidefinite. The inner product is defined as

    A • B = Σ_{i,j} A_{ij} B_{ij} = Tr(AB^T) = Tr(AB) = Tr(BA).

The second equality follows from the definition of the matrix product and the trace, and the last one comes from the observation that, even though the matrix product is not commutative (AB ≠ BA in general), Tr(AB) = Tr(BA) always holds; for symmetric A and B the diagonal entries of AB and BA even coincide, so their traces are certainly equal.
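As a quick numerical companion (my addition, not part of the original notes), the following numpy sketch verifies these identities for randomly generated symmetric matrices; the helper name `inner` and the random instances are purely illustrative.

```python
import numpy as np

def inner(A, B):
    """Frobenius inner product A . B = sum_ij A_ij B_ij."""
    return np.sum(A * B)

rng = np.random.default_rng(0)
n = 4

# Two random symmetric matrices A and B.
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
N = rng.standard_normal((n, n))
B = (N + N.T) / 2

# A . B = Tr(A B^T) = Tr(AB) = Tr(BA) for symmetric A, B.
vals = [inner(A, B),
        np.trace(A @ B.T),
        np.trace(A @ B),
        np.trace(B @ A)]
print(vals)
assert np.allclose(vals, vals[0])
```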

The standard form of semidefinite programming is:

    min   C • X
    s.t.  A_i • X = b_i,   i = 1, ..., m
          X ⪰ 0

3 Some Notations and Definitions

cone: A set K is called a cone if αx ∈ K for each x ∈ K and for each α ≥ 0.

convex cone: A convex cone K is a cone with the additional property that x + y ∈ K for each x, y ∈ K.

pointed cone: A pointed cone K is a cone with the property that K ∩ (−K) = {0}.

open set: A set S is open if for every point s ∈ S there is some positive number ε_s such that B(s, ε_s) = {x : ‖x − s‖ < ε_s} ⊆ S.

closed set: A set S is closed if its complement S^c is open.

interior of a set: The interior of a set S is defined as Int(S) := ∪_{T ⊆ S, T open} T.

closure of a set: The closure of a set S is defined as cl(S) := ∩_{T ⊇ S, T closed} T.

boundary of a set: The boundary of a set S is defined as Bd(S) := cl(S) ∩ (Int(S))^c.
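As a small illustration of the first three definitions (my addition, with made-up data), the sketch below checks the cone, convex-cone and pointedness properties numerically for the non-negative orthant R^n_+; the helper `in_orthant` is hypothetical.

```python
import numpy as np

def in_orthant(x, tol=1e-12):
    """Membership test for the non-negative orthant R^n_+."""
    return np.all(x >= -tol)

rng = np.random.default_rng(1)
n = 3

for _ in range(1000):
    x = np.abs(rng.standard_normal(n))   # a point of R^n_+
    y = np.abs(rng.standard_normal(n))   # another point of R^n_+
    alpha = rng.uniform(0, 10)

    assert in_orthant(alpha * x)         # cone: closed under non-negative scaling
    assert in_orthant(x + y)             # convex cone: closed under addition

# Pointedness: the only vector with both x and -x in the orthant is 0.
x = np.array([0.7, 0.0, 1.3])
print(in_orthant(x) and in_orthant(-x))  # False, since x != 0
```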

Remark 1 There are some basic facts which can be easily seen from the definitions above:

1. An open set in R^n is not open in R^m for n < m;
2. similarly, the boundary and the interior of a set are not the same in R^n as in R^m;
3. as a result, one talks about an open set with respect to the topology induced by the vector space spanned by the set S;
4. similarly, we speak of the relative interior and relative boundary of a set, which are understood with respect to the topology of the space spanned by the set;
5. a closed set in R^n is also closed in R^m.

Consider the half-closed interval [a, b) = {x : a ≤ x < b} in R^1. The interior of [a, b) in R^1 is the open interval (a, b), and the boundary of [a, b) is {a, b}. But (a, b) is not open in R^2, since for any x ∈ (a, b) we cannot find some ε > 0 such that B(x, ε) ⊆ (a, b). The interior of [a, b) in R^2 is empty, and the boundary of [a, b) in R^2 is [a, b]. However, the relative interior of [a, b) in R^n is again (a, b) and the relative boundary is {a, b}.

Definition 1 (Proper Cone) A proper cone K ⊆ R^n is a closed, pointed, convex and full-dimensional cone (i.e. dim(K) = n). A full-dimensional cone is a cone which contains n linearly independent vectors.

Theorem 1 Every proper cone K induces a partial order, defined as follows: for x, y ∈ R^n,

    x ⪰_K y  ⟺  x − y ∈ K,
    x ≻_K y  ⟺  x − y ∈ Int(K).

Proof: First note that x ⪰_K x, since x − x = 0 ∈ K. Secondly, if x ⪰_K y and y ⪰_K x, then x − y ∈ K and y − x ∈ K; since K is a proper cone, and thus a pointed cone, we get x = y. Finally, if x ⪰_K y and y ⪰_K z, then x − z = (x − y) + (y − z) ∈ K because K is a convex cone, i.e., x ⪰_K z.

4 The Standard Cone Linear Programming Problem (K-LP)

    min   c^T x
    s.t.  a_i^T x = b_i,   i = 1, ..., m
          x ⪰_K 0

where c ∈ R^n and b ∈ R^m, and A ∈ R^{m×n} has rows a_i ∈ R^n, i = 1, ..., m.
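To make the partial order of Theorem 1 concrete (a sketch of my own, not from the notes), take K to be the cone of positive semidefinite matrices: X ⪰_K Y simply means that X − Y is positive semidefinite, which can be checked via eigenvalues.

```python
import numpy as np

def psd_geq(X, Y, tol=1e-9):
    """Partial order induced by the PSD cone: X >=_K Y iff X - Y is PSD."""
    eigs = np.linalg.eigvalsh(X - Y)   # eigenvalues of the symmetric matrix X - Y
    return np.all(eigs >= -tol)

X = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Y = np.array([[1.0, 0.0],
              [0.0, 0.5]])

print(psd_geq(X, X))   # True: reflexivity, X - X = 0 lies in K
print(psd_geq(X, Y))   # True here: X - Y has non-negative eigenvalues
print(psd_geq(Y, X))   # False: the order is only partial
```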

Observe that every convex optimization problem

    min   f(x)
    s.t.  x ∈ C,

where C is a convex set and f(x) is convex over C, can be turned into a cone-LP. First turn the problem into one with a linear objective, and then turn that into a cone-LP:

    min   z
    s.t.  f(x) − z ≤ 0
          x ∈ C.

Since the set C′ = {(z, x) : x ∈ C and f(x) − z ≤ 0} is convex, our problem is now equivalent to the cone-LP

    min   z
    s.t.  x_0 = 1
          (x_0, z, x) ⪰_K 0

where K = {(x_0, z, x) : (z, x) ∈ x_0 C′ and x_0 ≥ 0}.

[Figure: the convex set C′, embedded in a plane and turned into a cone.]

Definition 2 (Dual Cone) The dual cone K* of a proper cone K is the set {z : z^T x ≥ 0 for all x ∈ K}. It is easy to prove that if K is proper, so is K*.
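For a finitely generated cone K = {Σ_i α_i v_i : α_i ≥ 0}, membership in the dual cone reduces to finitely many inequalities: z ∈ K* iff z^T v_i ≥ 0 for every generator v_i. The sketch below (my own illustration, with made-up data) uses this to test a few vectors against a small polyhedral cone in R^2.

```python
import numpy as np

def in_dual(z, generators, tol=1e-12):
    """z lies in K* iff z^T v >= 0 for every generator v of K."""
    return all(np.dot(z, v) >= -tol for v in generators)

# K is the cone generated by (1, 0) and (1, 1) in R^2.
gens = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]

print(in_dual(np.array([0.0, 1.0]), gens))   # True:  non-negative against both generators
print(in_dual(np.array([1.0, -2.0]), gens))  # False: negative against (1, 1)
```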

Example 1 (Half line) Let R_+ = {x : x ≥ 0}. The dual cone R_+* is exactly R_+.

Example 2 (Non-negative orthant) Let R^n_+ = {x : x_k ≥ 0 for k = 1, ..., n}. The dual cone equals R^n_+; that is, the non-negative orthant is self-dual.

We recall that

Lemma 1 A symmetric matrix X is positive semidefinite if it satisfies any one of the following equivalent conditions:
(1) a^T X a ≥ 0 for all a ∈ R^n;
(2) there exists A ∈ R^{n×n} such that AA^T = X;
(3) all eigenvalues of X are non-negative.

Example 3 (The semidefinite cone) Let P^{n×n} = {X ∈ S^{n×n} : X is positive semidefinite}. We are now interested in (P^{n×n})*.

On one side, let Z ∈ (P^{n×n})*, i.e., Z • X ≥ 0 for all X ⪰ 0. By Lemma 1 every X ⪰ 0 can be written as X = AA^T, and conversely AA^T ⪰ 0 for every A ∈ R^{n×n}; hence

    Z • X = Tr(ZX) = Tr(ZAA^T) = Tr(A^T Z A) ≥ 0   for all A ∈ R^{n×n}.

(Since X is symmetric, from linear algebra X can be written as X = QΛQ^T, where QQ^T = I, that is Q is an orthogonal matrix, and Λ is diagonal with the eigenvalues of X on its diagonal. Writing Q = [q_1, ..., q_n] and Λ = diag(λ_1, ..., λ_n), q_i is the eigenvector corresponding to λ_i, and q_i^T X q_i = λ_i.) Let us choose A_i = p_i ∈ R^n, where p_i is the eigenvector of Z corresponding to its eigenvalue γ_i, normalized so that p_i^T p_i = 1. Then

    0 ≤ Tr(A_i^T Z A_i) = p_i^T Z p_i = γ_i.

So all the eigenvalues of Z are non-negative, i.e., Z ∈ P^{n×n}, and therefore (P^{n×n})* ⊆ P^{n×n}.

On the other hand, for every Y ∈ P^{n×n} there exists B ∈ R^{n×n} such that Y = BB^T. For every X ∈ P^{n×n}, X = AA^T, we have

    Y • X = Tr(YX) = Tr(BB^T AA^T) = Tr(A^T BB^T A) = Tr[(B^T A)^T (B^T A)] ≥ 0,

i.e., Y ∈ (P^{n×n})*, and therefore P^{n×n} ⊆ (P^{n×n})*. In conclusion, (P^{n×n})* = P^{n×n}.
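The sketch below (an illustration I added, not from the notes) checks positive semidefiniteness via condition (3) of Lemma 1, generates PSD matrices via condition (2), and samples random pairs to illustrate the self-duality Z • X ≥ 0.

```python
import numpy as np

def is_psd(X, tol=1e-9):
    """Condition (3) of Lemma 1: all eigenvalues are non-negative."""
    return np.all(np.linalg.eigvalsh(X) >= -tol)

def random_psd(n, rng):
    """Condition (2) of Lemma 1: AA^T is always positive semidefinite."""
    A = rng.standard_normal((n, n))
    return A @ A.T

rng = np.random.default_rng(2)
n = 4
for _ in range(1000):
    Z = random_psd(n, rng)
    X = random_psd(n, rng)
    assert is_psd(Z) and is_psd(X)
    # Self-duality: the inner product of two PSD matrices is non-negative.
    assert np.sum(Z * X) >= -1e-9
```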

Example 4 (The second order cone) Let Q = {(x_0, x) : x_0 ≥ ‖x‖}. Q is a proper cone. What is Q*?

On one side, if z = (z_0, z) ∈ Q, then for every (x_0, x) ∈ Q,

    (z_0, z)^T (x_0, x) = z_0 x_0 + z^T x ≥ ‖z‖ ‖x‖ + z^T x ≥ −z^T x + z^T x = 0,

i.e., Q ⊆ Q*. The inequalities come from the Cauchy–Schwarz inequality, |z^T x| ≤ ‖z‖ ‖x‖.

On the other side, we note that e = (1, 0) ∈ Q. For each element z = (z_0, z) ∈ Q* we must have z^T e = z_0 ≥ 0. We also note that each vector of the form x = (‖z‖, −z) belongs to Q, for all z ∈ R^n. Thus, in particular, for z = (z_0, z) ∈ Q*,

    z^T x = z_0 ‖z‖ − ‖z‖^2 ≥ 0.

If ‖z‖ > 0, dividing by ‖z‖ gives z_0 ≥ ‖z‖; if ‖z‖ = 0, then z_0 ≥ 0 = ‖z‖ by the previous observation. In either case z_0 ≥ ‖z‖, i.e., Q* ⊆ Q. Therefore, Q* = Q.

Definition 3 An extreme ray of a proper cone K is a half line αx = {αx : α ≥ 0}, for some x ∈ K, such that for each a ∈ αx, if a = b + c with b, c ∈ K, then b, c ∈ αx.

Example 5 (Extreme rays of the second order cone) Let Q be the second order cone. The vectors x = (‖x‖, x) define the extreme rays of Q. This is fairly easy to prove.

Example 6 (Extreme rays of the semidefinite cone) Let P^{n×n} be the semidefinite cone. The positive semidefinite matrices qq^T of rank 1 form the extreme rays of P^{n×n}. Here is the proof. Any positive semidefinite matrix X can be written in the form X = Σ_i λ_i p_i p_i^T (see the previous lecture for how to get this from the spectral decomposition of X). This shows that all extreme rays must be among matrices of the form qq^T. Now we must show that each qq^T generates an extreme ray. Let qq^T = X + Y, where X, Y ⪰ 0. Suppose {q_1 = q, q_2, ..., q_n} is an orthogonal set of vectors in R^n. Then, multiplying from the left by q_i^T and from the right by q_i, we see that q_i^T X q_i + q_i^T Y q_i = 0 for i = 2, ..., n; since the two summands are both non-negative and add up to zero, they are both zero. Thus q_i^T X q_i = q_i^T Y q_i = 0 for i = 2, ..., n. Thus both X and Y have rank at most one (their null spaces have dimension at least n − 1), and we might as well write qq^T = xx^T + yy^T. But the right hand side is a rank-2 matrix unless x and y are proportional, which proves that they are proportional to q. Thus, qq^T generates an extreme ray for each vector q ∈ R^n.
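As a small numerical companion to Example 6 (my addition), the spectral decomposition writes any PSD matrix as a non-negative combination of the rank-one extreme rays q_i q_i^T:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# A random positive semidefinite matrix.
A = rng.standard_normal((n, n))
X = A @ A.T

# Spectral decomposition X = sum_i lambda_i q_i q_i^T with lambda_i >= 0.
lam, Q = np.linalg.eigh(X)
recon = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(n))

print(np.all(lam >= -1e-9))        # eigenvalues are non-negative
print(np.allclose(recon, X))       # X is a non-negative combination of rank-1 matrices q q^T
print(np.linalg.matrix_rank(np.outer(Q[:, 0], Q[:, 0])))  # each q q^T has rank 1
```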

4.1 An Example of a Cone Which Is Not Self-Dual

In the examples above, we note that all of the cones were self-dual. But there are cones that are not self-dual. Let F be the set of functions F : R → R with the following properties:

1. F is right continuous,
2. F is non-decreasing (i.e. if x > y then F(x) ≥ F(y)), and
3. F has bounded variation, that is, F(x) → α > −∞ as x → −∞ and F(x) → β < +∞ as x → +∞.

First observe that functions in F are almost like probability distribution functions, except that their range is the interval [α, β] rather than [0, 1]. Second, the set F itself is a convex cone, and in fact a pointed cone, in the space of real-valued functions on R.

Now we define a particular kind of moment cone. First, let us define

    u_x = (1, x, x^2, ..., x^n)^T.

The moment cone is defined as

    M_{n+1} = { c = ∫ u_x dF(x) : F ∈ F },

that is, M_{n+1} consists of vectors c where, for each j = 0, ..., n, c_j is the j-th moment of a distribution times a non-negative constant.
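To make the definition concrete (an illustration of mine, not in the notes): for a discrete F placing mass β_j at points a_j, the moment vector is just c = Σ_j β_j u_{a_j}. The sketch also checks that n + 1 vectors u_{a_1}, ..., u_{a_{n+1}} with distinct nodes are linearly independent, their determinant being the Vandermonde determinant Π_{i>j}(a_i − a_j), a fact used in the proof of Lemma 2 below. The helper `u` and the data are made up.

```python
import numpy as np

def u(x, n):
    """The vector u_x = (1, x, x^2, ..., x^n)."""
    return np.array([x**j for j in range(n + 1)])

n = 3
nodes = np.array([-1.0, 0.5, 2.0, 3.0])     # n+1 distinct points a_1, ..., a_{n+1}
weights = np.array([0.2, 1.0, 0.5, 0.3])    # non-negative masses beta_j

# Moment vector of the discrete measure sum_j beta_j * delta_{a_j}.
c = sum(w * u(a, n) for w, a in zip(weights, nodes))
print(c)

# Vandermonde: det[u_{a_1}, ..., u_{a_{n+1}}] = prod_{i>j} (a_i - a_j) != 0.
U = np.column_stack([u(a, n) for a in nodes])
vdm = np.prod([nodes[i] - nodes[j] for i in range(n + 1) for j in range(i)])
print(np.isclose(np.linalg.det(U), vdm))    # True, and nonzero for distinct nodes
```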

Lemma 2 M_{n+1} is a proper cone.

Proof: Let us examine the properties we need to prove.

M_{n+1} is a cone: if c ∈ M_{n+1} and α ≥ 0, then αc ∈ M_{n+1}. To see this, observe that there exists F ∈ F such that c = ∫ u_x dF(x). Now if F is right-continuous, non-decreasing and of bounded variation, then all these properties also hold for αF for each α ≥ 0, and thus αF ∈ F. Therefore, αc = ∫ u_x d(αF(x)) ∈ M_{n+1}.

M_{n+1} is a convex cone: if c and d are in M_{n+1}, then c + d ∈ M_{n+1}. Indeed, if c = ∫ u_x dF_1(x) ∈ M_{n+1} and d = ∫ u_x dF_2(x) ∈ M_{n+1}, then

    c + d = ∫ u_x d[F_1(x) + F_2(x)] ∈ M_{n+1}.

M_{n+1} is a pointed cone: if c and −c are in M_{n+1}, then c = 0. If c = ∫ u_x dF_1(x) ∈ M_{n+1} and −c ∈ M_{n+1}, then −c = ∫ u_x dF_2(x) for some F_2 ∈ F, and

    c + (−c) = 0 = ∫ u_x d[F_1(x) + F_2(x)].

In particular, looking at the first coordinate, ∫ d[F_1(x) + F_2(x)] = 0. Since F_1 + F_2 ∈ F is non-decreasing, this forces F_1(x) + F_2(x) to be constant, and since both F_1 and F_2 are non-decreasing, each of them is constant. Hence dF_1 = dF_2 = 0, which means c = 0, i.e., M_{n+1} ∩ (−M_{n+1}) = {0}.

M_{n+1} is full-dimensional. Let

    F_a(x) = 0 if x < a,   F_a(x) = 1 if x ≥ a.

Obviously F_a ∈ F, and u_a = ∫ u_x dF_a(x) ∈ M_{n+1} for all a ∈ R. Choose n + 1 distinct points a_1, ..., a_{n+1}; then

    det[u_{a_1}, ..., u_{a_{n+1}}] = Π_{i>j} (a_i − a_j) ≠ 0.

Thus M_{n+1} is a full-dimensional cone. (The determinant above is the well-known Vandermonde determinant.)

In addition, we need to show that M_{n+1} is closed. This will be taken up in future lectures.

Example 7 (Extreme rays of M_{n+1}) The extreme rays of M_{n+1} are the half lines αu_x for x ∈ R. If c ∈ M_{n+1}, c can be written as

    c = α_1 u_{x_1} + α_2 u_{x_2} + ... + α_{n+1} u_{x_{n+1}},   α_i ≥ 0 for i = 1, ..., n + 1.

There is a one-to-one correspondence between c ∈ M_{n+1} and

    H = α_1 u_{x_1} u_{x_1}^T + α_2 u_{x_2} u_{x_2}^T + ... + α_{n+1} u_{x_{n+1}} u_{x_{n+1}}^T.

Such a matrix is called a Hankel matrix. In general, Hankel matrices are those matrices H such that H_{ij} = h_{i+j}, that is, the entries are constant along all opposite diagonals (anti-diagonals). A vector c ∈ R^{2n+1} is in the moment cone if and only if the Hankel matrix H_{ij} = c_{i+j} is positive semidefinite. Again, these assertions will be proved in future lectures.
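The sketch below (my own illustration of the last assertion, not a proof) builds the Hankel matrix H_{ij} = c_{i+j} from the moment vector of a discrete measure and checks that it is positive semidefinite; indices are 0-based and c ∈ R^{2n+1}.

```python
import numpy as np

def hankel_from_moments(c):
    """Hankel matrix H with H[i, j] = c[i + j] for a moment vector c in R^{2n+1}."""
    m = (len(c) + 1) // 2          # H is m x m with m = n + 1
    return np.array([[c[i + j] for j in range(m)] for i in range(m)])

n = 3
nodes = np.array([-1.0, 0.5, 2.0])
weights = np.array([0.4, 1.0, 0.7])              # non-negative masses beta_j

# Moments c_k = sum_j beta_j * a_j^k for k = 0, ..., 2n.
c = np.array([np.sum(weights * nodes**k) for k in range(2 * n + 1)])

H = hankel_from_moments(c)
print(np.all(np.linalg.eigvalsh(H) >= -1e-9))    # True: H is positive semidefinite
```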

Now we examine M*_{n+1}. Let us first consider the cone defined as follows:

    P_{n+1} = {p = (p_0, ..., p_n) : p_0 + p_1 x + p_2 x^2 + ... + p_n x^n = p(x) ≥ 0 for all x}.

Lemma 3 Every non-negative polynomial is a sum of square polynomials.

Proof: First, it is well known that p(x) can be factored as

    p(x) = c { Π_{j=1}^{k} (x − α_j − iβ_j)(x − α_j + iβ_j) } { Π_{j=k+1}^{n} (x − γ_j) },

where i = √−1 and c ≥ 0. We first claim that the degree n must be even; otherwise p(x) → −∞ as x → −∞ or as x → +∞, and p cannot be non-negative. Consequently the number of real roots is also even, say 2l, since the complex roots come in conjugate pairs. Moreover, since p(x) ≥ 0, all the real roots must have even multiplicity, because otherwise in the neighborhood of a root with odd multiplicity there is some t such that p(t) < 0. Thus, we can write

    p(x) = c { Π_{j=1}^{k} (x − α_j − iβ_j)(x − α_j + iβ_j) } { Π_{j=1}^{l} (x − γ_j)^2 }.

On the other hand, for each pair of conjugate complex roots we have

    (x − α − iβ)(x − α + iβ) = (x − α)^2 + β^2.

Therefore the product expression for p(x) is a product of square polynomials and sums of square polynomials, which, after expanding, yields a sum of square polynomials.

This means that the extreme rays of the cone of non-negative polynomials must be among the polynomials that are squares, q^2(x). Thus, the coefficient vectors of extreme rays are of the form q ∗ q = q^2, where a ∗ b denotes the convolution of the vectors a and b; that is, for a, b ∈ R^{n+1}, a ∗ b ∈ R^{2n+1} is defined as

    a ∗ b = (a_0 b_0, a_0 b_1 + a_1 b_0, ..., a_0 b_k + a_1 b_{k−1} + ... + a_k b_0, ..., a_n b_n)^T,

and q^2 = q ∗ q. Now, not all square polynomials are extreme rays. In particular, if a square polynomial has non-real roots, then it can be written as a sum of two square polynomials, as shown above. Thus, the extreme rays are among those square polynomials with only real roots.
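A quick sketch (my addition) of the coefficient-vector convolution: np.convolve computes exactly a ∗ b, and (q ∗ q)^T u_x = q(x)^2, so square polynomials pair non-negatively with the vectors u_x. The polynomial chosen is illustrative.

```python
import numpy as np

def u(x, m):
    """u_x = (1, x, ..., x^m)."""
    return np.array([x**j for j in range(m + 1)])

# Coefficients of q(x) = 1 - 2x + x^2, in increasing degree.
q = np.array([1.0, -2.0, 1.0])
n = len(q) - 1

# Convolution q * q gives the coefficients of q(x)^2 (a vector in R^{2n+1}).
q2 = np.convolve(q, q)

for x in [-1.0, 0.3, 2.5]:
    lhs = q2 @ u(x, 2 * n)                  # (q * q)^T u_x
    rhs = np.polyval(q[::-1], x) ** 2       # q(x)^2  (polyval takes decreasing degree)
    assert np.isclose(lhs, rhs) and lhs >= 0
print(q2)
```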

We now argue that these polynomials are indeed extreme rays. Suppose p(x) = Π_j (x − γ_j)^{2k_j} is a square polynomial with distinct real roots γ_j which is not an extreme ray. Then p(x) = q(x) + r(x), and since both q and r are non-negative, we must have q(x) ≤ p(x). This means that the degree of q(x) is at most as large as the degree of p. Furthermore, since 0 ≤ q(x) ≤ p(x) and p(γ_j) = 0, each γ_j is also a root of q(x). But if for some γ_j the multiplicity in p is 2k and the multiplicity in q is 2m with m < k, then in some neighborhood of γ_j we would have q(x) > p(x), because (x − γ_j)^{2m} > (x − γ_j)^{2k} in some neighborhood of γ_j when m < k; therefore k ≤ m for each root. Since the degree of p is larger than or equal to the degree of q, it follows that k = m for each root. Thus q(x) = αp(x) for some constant α. We have proved:

Corollary 1 p is an extreme ray of P_{n+1} if p = q^2 and q(x) has only real roots.

We now show that P_{n+1} ⊆ M*_{n+1}. By Lemma 3, every p ∈ P_{n+1} can be written as a sum of squares, p = Σ_i p_i^2. Note that for c = Σ_{j=1}^{n+1} β_j u_{x_j} ∈ M_{n+1} with β_j ≥ 0,

    p^T c = (Σ_i p_i^2)^T (Σ_{j=1}^{n+1} β_j u_{x_j}) = Σ_{i,j} β_j [(p_i^2)^T u_{x_j}] ≥ 0,

since β_j ≥ 0 and (p_i^2)^T u_{x_j} = [p_i(x_j)]^2 ≥ 0.

Later in the course we will prove that in fact P_{n+1} = M*_{n+1}.
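A final numerical sanity check of the containment P_{n+1} ⊆ M*_{n+1} (my own illustration, with a made-up polynomial): pair a non-negative polynomial with moment vectors of random discrete measures and confirm p^T c ≥ 0.

```python
import numpy as np

def u(x, m):
    """u_x = (1, x, ..., x^m)."""
    return np.array([x**j for j in range(m + 1)])

# p(x) = (x^2 - 1)^2 = 1 - 2x^2 + x^4 >= 0 for all x; coefficients in increasing degree.
p = np.array([1.0, 0.0, -2.0, 0.0, 1.0])
deg = len(p) - 1

rng = np.random.default_rng(4)
for _ in range(1000):
    nodes = rng.uniform(-3, 3, size=5)           # support points x_j
    betas = rng.uniform(0, 1, size=5)            # non-negative weights beta_j
    c = sum(b * u(x, deg) for b, x in zip(betas, nodes))   # c in the moment cone
    assert p @ c >= -1e-9                        # p^T c = sum_j beta_j p(x_j) >= 0
```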
