Mathematical Optimisation, Chpt 2: Linear Equations and inequalities


Mathematical Optimisation, Chpt 2: Linear Equations and Inequalities
Peter J.C. Dickinson
version: 09/04/18, Monday 5th February 2018

Table of Contents
1 Introduction
2 Gauss-elimination [MO, 2.1]
3 Orthogonal projection, Least Squares [MO, 2.2]
4 Linear Inequalities [MO, 2.4]
5 Integer Solutions of Linear Equations [MO, 2.3]

Organization and Material
- Lectures (hoorcolleges) and 3 exercise classes for motivation, geometric illustration, and proofs.
- Script: Mathematical Optimization, cited as [MO, ?].
- Sheets of the course (on Blackboard).
- Mark based on written exam.

Chapter 1: Real vector spaces. A collection of facts from Analysis and Linear Algebra; self-instruction.
Chapter 2: Linear equations, linear inequalities.
- Gauss elimination and applications: Gauss elimination provides constructive proofs of main theorems in matrix theory.
- Fourier-Motzkin elimination: this method provides a constructive proof of the Farkas Lemma (which is strong LP duality in disguise).
- Least squares approximation, Fourier approximation.
- Integer solutions of linear equations (discrete optimization).
Chapter 3: Linear programs and applications. LP duality, sensitivity analysis, matrix games.

Chapter 4: Convex analysis. Properties of convex sets and convex functions; applications in optimization.
Chapter 5: Unconstrained optimization. Optimality conditions; algorithms: descent methods, Newton's method, Gauss-Newton method, Quasi-Newton methods, minimization of nondifferentiable functions.

Chapter 2. Linear equations, inequalities. We start with some definitions.
Definitions in matrix theory: M = (m_ij) is said to be
- lower triangular if m_ij = 0 for i < j,
- upper triangular if m_ij = 0 for i > j.
P = (p_ij) ∈ R^{m×m} is a permutation matrix if p_ij ∈ {0, 1} and each row and each column of P contains exactly one coefficient 1. Note that P^T P = I, implying that P^{-1} = P^T.

The set of symmetric matrices in R^{n×n} is denoted by S^n. Q ∈ S^n is called
- Positive Semidefinite (denoted Q ⪰ O, or Q ∈ PSD_n) if x^T Q x ≥ 0 for all x ∈ R^n,
- Positive Definite (denoted Q ≻ O) if x^T Q x > 0 for all x ∈ R^n \ {0}.

Table of Contents
1 Introduction
2 Gauss-elimination [MO, 2.1]: Gauss-elimination (for solving Ax = b); Explicit Gauss Algorithm; Implications of the Gauss algorithm; Gauss-Algorithm for symmetric A
3 Orthogonal projection, Least Squares [MO, 2.2]
4 Linear Inequalities [MO, 2.4]
5 Integer Solutions of Linear Equations [MO, 2.3]

Gauss-elimination (for solving Ax = b). In this Section 2.1 we briefly survey the Gauss algorithm and use it to give constructive proofs of important theorems in matrix theory.
Motivation: a simple example shows that successive elimination of variables is equivalent to the Gauss algorithm.

General Idea: Eliminating x_1, x_2, ... is equivalent to transforming Ax = b, i.e. the augmented matrix (A | b), into triangular normal form (Ã | b̃) with the same solution set, and then solving Ãx = b̃ recursively:

  ( a_11  a_12  ...  a_1n | b_1 )
  ( a_21  a_22  ...  a_2n | b_2 )
  (  :                 :  |  :  )
  ( a_m1  a_m2  ...  a_mn | b_m )

Transformation into the form (Ã | b̃): an echelon matrix with pivot entries ã_{1j_1}, ã_{2j_2}, ..., ã_{rj_r} ≠ 0 in columns j_1 < j_2 < ... < j_r, zeros below each pivot, and zero rows r+1, ..., m with right-hand sides b̃_{r+1}, ..., b̃_m.

This Gauss elimination uses 2 types of row operations:
(G1) (i, j)-pivot: for all k > i, add λ_k = -a_kj / a_ij times row i to row k.
(G2) Interchange row i with row k.
The matrix forms of these operations are:
[MO, Ex. 2.3] The matrix form of (G1), (A | b) → (Ã | b̃), is given by (Ã | b̃) = M (A | b) with a nonsingular lower triangular M ∈ R^{m×m}.
[MO, Ex. 2.4] The matrix form of (G2), (A | b) → (Ã | b̃), is given by (Ã | b̃) = P (A | b) with a permutation matrix P ∈ R^{m×m}.

Explicit Gauss Algorithm
1  Set i = 1 and j = 1.
2  While i ≤ m and j ≤ n do
3    If there is k ≥ i such that a_kj ≠ 0 then
4      Interchange row i and row k. (G2)
5      Apply (i, j)-pivot. (G1)
6      Update i ← i + 1 and j ← j + 1.
7    else
8      Update j ← j + 1.
9    end if
10 end while
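A minimal Python sketch of the algorithm above (an illustration, not the script's reference code); it reduces the augmented matrix (A | b) to triangular normal form in place:

import numpy as np

def gauss_eliminate(Ab):
    """Reduce the augmented matrix Ab = (A | b) to triangular normal form."""
    Ab = np.array(Ab, dtype=float)
    m, n = Ab.shape[0], Ab.shape[1] - 1
    i, j = 0, 0
    while i < m and j < n:
        k = next((k for k in range(i, m) if Ab[k, j] != 0), None)
        if k is not None:
            Ab[[i, k]] = Ab[[k, i]]                       # (G2) row interchange
            for r in range(i + 1, m):                     # (G1) (i, j)-pivot
                Ab[r] -= (Ab[r, j] / Ab[i, j]) * Ab[i]
            i, j = i + 1, j + 1
        else:
            j += 1
    return Ab

print(gauss_eliminate([[1., 2., 3.], [2., 4., 5.]]))      # zero row with nonzero rhs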

Implications of the Gauss algorithm
Lemma 2.1. The set of lower triangular matrices is closed under addition, multiplication and inversion (provided square and nonsingular).
Theorem 2.2 ([MO, Thm. 2.1]). For every A ∈ R^{m×n} there exist an (m×m)-permutation matrix P and an invertible lower triangular matrix M ∈ R^{m×m} such that U = MPA is upper triangular.
Corollary 2.3 ([MO, Cor. 2.1], LU-factorization). For A ∈ R^{m×n} there exist an (m×m)-permutation matrix P, an invertible lower triangular L ∈ R^{m×m} and an upper triangular U ∈ R^{m×n} such that LU = PA.
Remark: Solve Ax = b by using the decomposition PA = LU! (How?)
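One answer to the "(How?)": with PA = LU, solve Ly = Pb by forward substitution, then Ux = y by back substitution. A short sketch using SciPy's LU routines, assuming a square nonsingular A (hypothetical data):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2., 1.], [4., 5.]])
b = np.array([3., 6.])
lu, piv = lu_factor(A)               # computes the factorization PA = LU
x = lu_solve((lu, piv), b)           # forward + back substitution
assert np.allclose(A @ x, b)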

Corollary 2.4 ([MO, Cor. 2.2], Gale's Theorem). Exactly one of the following statements is true for A ∈ R^{m×n}, b ∈ R^m:
(a) The system Ax = b has a solution x ∈ R^n.
(b) There exists y ∈ R^m such that y^T A = 0^T and y^T b ≠ 0.
Remark: In the normal form A → Ã, the number r of nonzero rows gives the dimension of the space spanned by the rows of A. This equals the dimension of the space spanned by the columns of A.

Modified Gauss-Algorithm for symmetric A. Perform the same row and column operations:
1  Set i = 1.
2  While i ≤ n do
3    If a_kk ≠ 0 for some k ≥ i then
4      Apply (G2'): interchange row i and row k, and interchange column i and column k: A ← PAP^T.
5      Apply (G1'): for all j > i add λ_j = -a_ij / a_ii times row i to row j, and λ_j times column i to column j: A ← MAM^T.
6      Update i ← i + 1.
7    Else if a_ik ≠ 0 for some k > i then
8      Add row k to row i and add column k to column i: A ← BAB^T.
9    Else
10     Update i ← i + 1.
11   End if
12 End while.

Implications of the symmetric Gauss algorithm
Note: the symmetric Gauss algorithm destroys the solution set of Ax = b! But it is useful for obtaining the following results.
Theorem 2.5 ([MO, Thm. 2.2]). Let A ∈ R^{n×n} be symmetric. Then QAQ^T = D = diag(d_1, ..., d_n) for some nonsingular Q ∈ R^{n×n} and d ∈ R^n.
Look out: the d_i's are in general not the eigenvalues of A.
Recall: Q ∈ PSD_n is positive semidefinite (notation: Q ⪰ O) if x^T Q x ≥ 0 for all x ∈ R^n.

Corollary 2.6 ([MO, Cor. 2.3]). Let A ∈ S^n and let Q ∈ R^{n×n} be nonsingular with QAQ^T = diag(d_1, ..., d_n). Then
(a) A ⪰ O ⟺ d_i ≥ 0 for i = 1, ..., n;
(b) A ≻ O ⟺ d_i > 0 for i = 1, ..., n.
Implication: checking A ⪰ O can be done by the Gauss algorithm.
Corollary 2.7 ([MO, Cor. 2.4]). Let A ∈ S^n. Then
(a) A ⪰ O ⟺ A = BB^T for some B ∈ R^{n×m};
(b) A ≻ O ⟺ A = BB^T for some nonsingular B ∈ R^{n×n}.
Complexity of the Gauss algorithm: the number of ±, ·, / flops (floating point operations) needed to solve Ax = b, where A ∈ R^{n×n}, is at most n^3.
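An illustrative sketch of this implication (not the script's algorithm verbatim): for A ≻ O it suffices to run symmetric elimination without interchanges and check that every pivot d_i is positive.

import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Check A > O by symmetric Gauss elimination: all pivots must be positive."""
    A = np.array(A, dtype=float)
    for i in range(A.shape[0]):
        if A[i, i] <= tol:
            return False
        for j in range(i + 1, A.shape[0]):
            A[j, i:] -= (A[j, i] / A[i, i]) * A[i, i:]   # A <- M A M^T (lower part)
    return True

print(is_positive_definite([[2., 1.], [1., 2.]]))        # True: pivots 2 and 3/2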

Table of Contents
1 Introduction
2 Gauss-elimination [MO, 2.1]
3 Orthogonal projection, Least Squares [MO, 2.2]: Projection and equivalent condition; Constructing a solution; Gram Matrix; Gram-Schmidt Algorithm; Eigenvalues
4 Linear Inequalities [MO, 2.4]
5 Integer Solutions of Linear Equations [MO, 2.3]

Projection and equivalent condition. In this section we see that the orthogonal projection problem is solved by solving a system of linear equations, and we present some more results from matrix theory.
Assumption: V is a linear vector space over R with inner product ⟨x, y⟩ and (induced) norm ‖x‖ = √⟨x, x⟩.
Minimization Problem: Given x ∈ V and a subspace W ⊆ V, find x̂ ∈ W such that
  ‖x − x̂‖ = min_{y ∈ W} ‖x − y‖.   (1)
The vector x̂ is called the projection of x onto W.
Lemma 2.8 ([MO, Lm. 2.1]). x̂ ∈ W is the (unique) solution to (1) ⟺ ⟨x − x̂, w⟩ = 0 for all w ∈ W.

Constructing a solution via Lemma 2.8. Let a_1, ..., a_m be a basis of W, i.e. a_1, ..., a_m are linearly independent and W = span{a_1, ..., a_m}. Write x̂ := Σ_{i=1}^m z_i a_i. Then ⟨x − x̂, w⟩ = 0 for all w ∈ W is equivalent to
  ⟨x − Σ_{i=1}^m z_i a_i, a_j⟩ = 0,  j = 1, ..., m,
or
  Σ_{i=1}^m ⟨a_i, a_j⟩ z_i = ⟨x, a_j⟩,  j = 1, ..., m.

Gram Matrix. Defining the Gram matrix G = (⟨a_i, a_j⟩)_{i,j} ∈ S^m and the vector b = (⟨x, a_i⟩)_i ∈ R^m, this leads to the linear equation (for z):
  (2.16)  Gz = b, with solution ẑ = G^{-1} b.
Ex. 2.1. Prove that the Gram matrix is positive definite, and thus nonsingular (under our assumption).
Summary: Let W = span{a_1, ..., a_m}. Then the solution x̂ of the minimization problem (1) is given by x̂ := Σ_{i=1}^m ẑ_i a_i, where ẑ is computed as the solution of G ẑ = b.

Special case 1: V = R^n, ⟨x, y⟩ = x^T y, and a_1, ..., a_m a basis of W. Then with A := [a_1, ..., a_m],
  ẑ = argmin_z ‖x − Az‖ = (A^T A)^{-1} A^T x,
  x̂ = argmin_y {‖x − y‖ : y ∈ A R^m} = Aẑ = A (A^T A)^{-1} A^T x.
Special case 2: V = R^n, ⟨x, y⟩ = x^T y, a_1, ..., a_m ∈ R^n linearly independent, and W = {w ∈ R^n | a_i^T w = 0, i = 1, ..., m}. Then the projection of x onto W is
  x̂ = argmin_y {‖x − y‖ : a_i^T y = 0 ∀i} = x − A (A^T A)^{-1} A^T x.
Special case 3: W = span{a_1, ..., a_m} with {a_i} an orthonormal basis, i.e. (⟨a_i, a_j⟩)_{i,j} = I. Then the projection of x onto W is given by
  ẑ = (⟨a_j, x⟩)_j ∈ R^m ("Fourier coefficients"),
  x̂ = argmin_y {‖x − y‖ : y ∈ W} = Σ_{j=1}^m ẑ_j a_j.
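A small numerical sketch of special case 1 (hypothetical data): NumPy's least-squares routine returns the same ẑ as the Gram-matrix formula.

import numpy as np

A = np.array([[1., 0.], [1., 1.], [0., 1.]])   # columns a_1, a_2 span W
x = np.array([1., 2., 3.])
z_hat = np.linalg.solve(A.T @ A, A.T @ x)      # G z = b with G = A^T A
x_hat = A @ z_hat                              # projection of x onto W
z_lsq, *_ = np.linalg.lstsq(A, x, rcond=None)  # same z via least squares
assert np.allclose(z_hat, z_lsq)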

Gram-Schmidt Algorithm. Given a subspace W with basis a_1, ..., a_m, construct an orthonormal basis c_1, ..., c_m for W, i.e. (⟨c_i, c_j⟩)_{i,j} = I.
Start with b_1 := a_1 and c_1 = b_1 / ‖b_1‖. For k = 2, ..., m let
  b_k = a_k − Σ_{i=1}^{k-1} ⟨c_i, a_k⟩ c_i  and  c_k = b_k / ‖b_k‖.
Gram-Schmidt in matrix form: with W ⊆ V := R^n, put A = [a_1, ..., a_m]^T, B = [b_1, ..., b_m]^T, C = [c_1, ..., c_m]^T. Then the Gram-Schmidt steps are equivalent to:
- add a multiple of row j < k to row k (for B),
- multiply row k by a scalar (for C).
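A compact sketch of the procedure (illustrative; assumes the rows of A are linearly independent):

import numpy as np

def gram_schmidt(A):
    """Orthonormalise the rows a_1, ..., a_m of A."""
    C = []
    for a in A:
        b = a - sum(np.dot(c, a) * c for c in C)   # b_k = a_k - sum <c_i, a_k> c_i
        C.append(b / np.linalg.norm(b))            # c_k = b_k / ||b_k||
    return np.array(C)

C = gram_schmidt(np.array([[1., 1., 0.], [1., 0., 1.]]))
assert np.allclose(C @ C.T, np.eye(2))             # rows are orthonormal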

Matrix form of Gram-Schmidt: given A ∈ R^{m×n} with full row rank, there is a decomposition C = LA with a lower triangular nonsingular matrix L (l_ii = 1) such that the rows c_j of C are orthogonal, i.e. ⟨c_i, c_j⟩ = 0 for i ≠ j.
A corollary of this fact:
Lemma 2.9 ([MO, Prop. 2.1], Hadamard's inequality). Let A ∈ R^{m×n} with rows a_i^T. Then
  0 ≤ det(AA^T) ≤ Π_{i=1}^m a_i^T a_i.

Eigenvalues. Definition: λ ∈ C is an eigenvalue of A ∈ R^{n×n} if there is an (eigenvector) 0 ≠ x ∈ C^n with Ax = λx.
The results above (together with the Theorem of Weierstrass) allow a proof of:
Theorem 2.10 ([MO, Thm. 2.3], Spectral theorem for symmetric matrices). Let A ∈ S^n. Then there exist an orthogonal matrix Q ∈ R^{n×n} (i.e. Q^T Q = I) and eigenvalues λ_1, ..., λ_n ∈ R such that Q^T A Q = D = diag(λ_1, ..., λ_n).
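A quick numerical illustration of the spectral theorem (hypothetical matrix), using NumPy's symmetric eigensolver:

import numpy as np

A = np.array([[2., 1.], [1., 2.]])
lam, Q = np.linalg.eigh(A)                     # A symmetric: Q is orthogonal
assert np.allclose(Q.T @ A @ Q, np.diag(lam))  # Q^T A Q = diag(lambda)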

Table of Contents
1 Introduction
2 Gauss-elimination [MO, 2.1]
3 Orthogonal projection, Least Squares [MO, 2.2]
4 Linear Inequalities [MO, 2.4]: Fourier-Motzkin Algorithm; Solvability of linear systems; Application: Markov chains
5 Integer Solutions of Linear Equations [MO, 2.3]

Linear Inequalities Ax ≤ b. In this Section we learn:
- Finding a solution to Ax ≤ b.
- What the Gauss algorithm is for linear equations, the Fourier-Motzkin algorithm is for linear inequalities.
- The Fourier-Motzkin algorithm leads to a constructive proof of the Farkas Lemma (the basis for strong duality in linear programming).

Fourier-Motzkin algorithm for solving Ax ≤ b. Eliminate x_1. Order the rows so that
  a_r1 x_1 + Σ_{j=2}^n a_rj x_j ≤ b_r,  r = 1, ..., k  (with a_r1 > 0),
  a_s1 x_1 + Σ_{j=2}^n a_sj x_j ≤ b_s,  s = k+1, ..., l  (with a_s1 < 0),
  Σ_{j=2}^n a_tj x_j ≤ b_t,  t = l+1, ..., m.
Divide by a_r1 and by −a_s1, leading to (for r and s)
  x_1 + Σ_{j=2}^n a'_rj x_j ≤ b'_r,  r = 1, ..., k,
  −x_1 + Σ_{j=2}^n a'_sj x_j ≤ b'_s,  s = k+1, ..., l.

These two sets of inequalities have a solution x_1 iff
  Σ_{j=2}^n a'_sj x_j − b'_s ≤ x_1 ≤ b'_r − Σ_{j=2}^n a'_rj x_j  for all r = 1, ..., k, s = k+1, ..., l,
or equivalently:
  Σ_{j=2}^n (a'_sj + a'_rj) x_j ≤ b'_s + b'_r  for all r = 1, ..., k, s = k+1, ..., l.
Remark (explosion of the number of inequalities): before, the number was k + (l − k) + (m − l) = m; now the number is k · (l − k) + (m − l).

So Ax ≤ b has a solution x = (x_1, ..., x_n) if and only if there is a solution x' = (x_2, ..., x_n) of
  Σ_{j=2}^n (a'_sj + a'_rj) x_j ≤ b'_r + b'_s,  r = 1, ..., k; s = k+1, ..., l,
  Σ_{j=2}^n a_tj x_j ≤ b_t,  t = l+1, ..., m.
In matrix form: Ax ≤ b has a solution x = (x_1, ..., x_n) if and only if the transformed system A'x' ≤ b', or (0 A')x ≤ b', has a solution.
Remark: any row of (0 A' | b') is a nonnegative combination of rows of (A | b): any row is of the form y^T (A | b), y ≥ 0.
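A minimal sketch of one elimination step (an illustration, with function name of my choosing): given Ax ≤ b, it returns the system in the variables x_2, ..., x_n.

import numpy as np

def fm_eliminate(A, b):
    """One Fourier-Motzkin step: eliminate variable x_1 from Ax <= b."""
    pos = [i for i in range(len(b)) if A[i, 0] > 0]
    neg = [i for i in range(len(b)) if A[i, 0] < 0]
    zer = [i for i in range(len(b)) if A[i, 0] == 0]
    rows = [A[r, 1:] / A[r, 0] + A[s, 1:] / -A[s, 0] for r in pos for s in neg]
    rhs = [b[r] / A[r, 0] + b[s] / -A[s, 0] for r in pos for s in neg]
    rows += [A[t, 1:] for t in zer]
    rhs += [b[t] for t in zer]
    return np.array(rows), np.array(rhs)

# e.g. x_1 <= 2 and -x_1 + x_2 <= 0 combine to x_2 <= 2:
print(fm_eliminate(np.array([[1., 0.], [-1., 1.]]), np.array([2., 0.])))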

By eliminating x_1, x_2, ..., x_n in this way we finally obtain an equivalent system Ã^(n) x ≤ b̃ where Ã^(n) = 0, which is solvable iff 0 ≤ b̃_i for all i.
Theorem 2.11 (Projection Theorem, [MO, Thm. 2.5]). Let P = {x ∈ R^n | Ax ≤ b}. Then for all k = 1, ..., n, the projection
  P^(k) = {(x_{k+1}, ..., x_n) | (x_1, ..., x_k, x_{k+1}, ..., x_n) ∈ P for suitable x_1, ..., x_k ∈ R}
is the solution set of a linear system A^(k) x^(k) ≤ b^(k) in the n − k variables x^(k) = (x_{k+1}, ..., x_n).
In principle, linear inequalities can be solved by FM. However this might be inefficient! (Why?)

Solvability of linear systems
Theorem 2.12 (Farkas Lemma, [MO, Thm. 2.6]). Exactly one of the following statements is true:
(I) Ax ≤ b has a solution x ∈ R^n.
(II) There exists y ∈ R^m such that y^T A = 0^T, y^T b < 0 and y ≥ 0.
Ex. 2.2. Let A ∈ R^{m×n}, C ∈ R^{k×n}, b ∈ R^m, c ∈ R^k. Then precisely one of the alternatives is valid:
(I) There is a solution x of Ax ≤ b, Cx = c.
(II) There is a solution μ ∈ R^m, μ ≥ 0, λ ∈ R^k of
  A^T μ + C^T λ = 0  and  b^T μ + c^T λ = −1.

Corollary 2.13 (Gordan, [MO, Cor. 2.5]). Given A ∈ R^{m×n}, exactly one of the following alternatives is true:
(I) Ax = 0, x ≥ 0 has a solution x ≠ 0.
(II) y^T A < 0^T has a solution y.
Remark: As we shall see in Chapter 3, the Farkas Lemma in the following form is the strong duality of LP in disguise.
Corollary 2.14 (Farkas, implied inequalities, [MO, Cor. 2.6]). Let A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n, z ∈ R. Assume that Ax ≤ b is feasible. Then the following are equivalent:
(a) Ax ≤ b ⟹ c^T x ≤ z.
(b) y^T A = c^T, y^T b ≤ z, y ≥ 0 has a solution y.

Application: Markov chains (existence of a steady state)
Def. A vector π ∈ R^n_+ with 1^T π := Σ_{i=1}^n π_i = 1 is called a probability distribution on {1, ..., n}. A matrix P = (p_ij) where each row P_i is a probability distribution is called a stochastic matrix, i.e. P ∈ R^{n×n}_+ and P1 = 1.
In a stochastic process (individuals in n possible states):
- π_i is the proportion of the population in state i,
- p_ij is the probability of a transition from state i to state j.
So the transition step k → k+1 is: π^(k+1) = P^T π^(k).
A probability distribution π is called a steady state if π = P^T π.
As a corollary of Gordan's result: each stochastic matrix P has a steady state π.
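A small numerical sketch (hypothetical transition matrix): a steady state is an eigenvector of P^T for eigenvalue 1, normalised to a distribution.

import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])                       # rows are distributions
lam, V = np.linalg.eig(P.T)
v = np.real(V[:, np.argmin(np.abs(lam - 1))])    # eigenvector for eigenvalue 1
pi = v / v.sum()
print(pi)                                        # [0.8 0.2]; indeed P^T pi = pi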

Table of Contents
1 Introduction
2 Gauss-elimination [MO, 2.1]
3 Orthogonal projection, Least Squares [MO, 2.2]
4 Linear Inequalities [MO, 2.4]
5 Integer Solutions of Linear Equations [MO, 2.3]: Two variables; Lattices

Integer Solutions of Linear Equations
Example: The equation 3x_1 − 2x_2 = 1 has the solution x = (1, 1) ∈ Z^2. But the equation 6x_1 − 2x_2 = 1 does not allow an integer solution x.
Key remark: Let a_1, a_2 ∈ Z and let a_1 x_1 + a_2 x_2 = b have a solution x_1, x_2 ∈ Z. Then b = λc with λ ∈ Z, c = gcd(a_1, a_2). Here gcd(a_1, a_2) denotes the greatest common divisor of a_1, a_2.
Lemma 2.15 (Euclid's Algorithm, [MO, Lm. 2.2]). Let c = gcd(a_1, a_2). Then
  L(a_1, a_2) := {a_1 λ_1 + a_2 λ_2 | λ_1, λ_2 ∈ Z} = {cλ | λ ∈ Z} =: L(c).
(The proof of) this result allows us to solve (if possible) a_1 x_1 + a_2 x_2 = b in Z.

Algorithm to solve a_1 x_1 + a_2 x_2 = b (in Z):
- Compute c = gcd(a_1, a_2).
- If λ := b/c ∉ Z, no integer solution exists.
- If λ := b/c ∈ Z, compute solutions λ_1, λ_2 ∈ Z of λ_1 a_1 + λ_2 a_2 = c. Then (λ_1 λ) a_1 + (λ_2 λ) a_2 = b.
General problem: Given a_1, ..., a_n, b ∈ Z^m, find x = (x_1, ..., x_n) ∈ Z^n such that
  (∗)  a_1 x_1 + a_2 x_2 + ... + a_n x_n = b, or equivalently Ax = b where A := [a_1, ..., a_n].
Def. We introduce the lattice generated by a_1, ..., a_n:
  L = L(a_1, ..., a_n) = {Σ_{j=1}^n a_j λ_j | λ_j ∈ Z} ⊆ R^m.
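A sketch of the two-variable case via the extended Euclidean algorithm (illustrative code; the function names are mine):

def ext_gcd(a, b):
    """Return (g, u, v) with g = gcd(a, b) and g = u*a + v*b."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def solve_two_var(a1, a2, b):
    g, l1, l2 = ext_gcd(a1, a2)
    if b % g != 0:
        return None                  # lambda = b/c is not integral: no solution
    lam = b // g
    return l1 * lam, l2 * lam        # a particular solution (x_1, x_2)

x1, x2 = solve_two_var(3, -2, 1)
print(x1, x2, 3 * x1 - 2 * x2)       # a particular solution; last value is 1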

Assumption 1: rank A = m (m ≤ n); w.l.o.g. a_1, ..., a_m are linearly independent.
To solve the problem: find C = [c_1, ..., c_m] ∈ Z^{m×m} such that
  (∗∗)  L(c_1, ..., c_m) = L(a_1, ..., a_n).
Then (∗) has a solution x ∈ Z^n iff λ := C^{-1} b ∈ Z^m.
Bad news: as in the case of one equation, in general L(a_1, ..., a_m) ≠ L(a_1, ..., a_n).
Lemma 2.16 ([MO, Lm. 2.3]). Let c_1, ..., c_m ∈ L(a_1, ..., a_n). Then L(c_1, ..., c_m) = L(a_1, ..., a_n) if and only if for all j = 1, ..., n the system Cλ = a_j has an integral solution.
Last step: find such c_i's.

Main Result: the algorithm Lattice Basis
INIT: C = [c_1, ..., c_m] = [a_1, ..., a_m];
ITER: Compute C^{-1};
  If C^{-1} a_j ∈ Z^m for j = 1, ..., n, then stop;
  If λ = C^{-1} a_j ∉ Z^m for some j, then
    let a_j = Cλ = Σ_{i=1}^m λ_i c_i and compute c = Σ_{i=1}^m (λ_i − [λ_i]) c_i = a_j − Σ_{i=1}^m [λ_i] c_i;
    let k be the largest index i such that λ_i ∉ Z;
    update C by replacing c_k with c in column k;
  next iteration.
The algorithm stops after at most K = log_2(det[a_1, ..., a_m]) steps with a matrix C satisfying (∗∗).

As an exercise: explain how an integer solution x of (∗) can be constructed with the help of the results from the algorithm above.
Complexity: From the algorithm above we see that solving integer systems of equations is polynomial. Under additional inequalities (such as x ≥ 0) the problem becomes NP-hard.
Theorem 2.17 ([MO, Thm. 2.4]). Let A ∈ Z^{m×n} and b ∈ Z^m be given. Then exactly one of the following statements is true:
(a) There exists some x ∈ Z^n such that Ax = b.
(b) There exists some y ∈ R^m such that y^T A ∈ Z^n and y^T b ∉ Z.

Mathematical Optimisation, Chpt 3: Linear Optimisation/Programming
Peter J.C. Dickinson
version: 09/04/18, Wednesday 21st February 2018

Table of Contents
1 Introduction
2 Recap
3 Primal and dual problems [MO, ]
4 Shadow prices [MO, 3.1.3]
5 Matrix Games [MO, 3.1.4]
6 Algorithms

What do you recall from the first year course?

Table of Contents
1 Introduction
2 Recap
3 Primal and dual problems [MO, ]: Definitions; Weak and strong duality; Complementarity; Equivalent LPs
4 Shadow prices [MO, 3.1.3]
5 Matrix Games [MO, 3.1.4]
6 Algorithms

Linear Optimisation. Given fixed parameters A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n:
  max_{x ∈ R^n} c^T x  s.t. Ax ≤ b.   (LP_p)
- F_p := {x ∈ R^n | Ax ≤ b} is the feasible set of (LP_p),
- z*_p := max_{x ∈ F_p} c^T x is the optimal value of (LP_p),
- x* ∈ F_p is an optimal solution of (LP_p) if c^T x* = z*_p.

Dual Problem. Consider the problem max_{x ∈ R^n} c^T x s.t. Ax ≤ b. The dual problem to this is of the following form, for some (1), (2), (3):
  (1)_y b^T y  s.t. A^T y (2) c,  y ∈ (3).
What should be filled in for (1), (2), (3)?
(1) (a) min  (b) max
(2) (a) ≤  (b) ≥  (c) =
(3) (a) R^m  (b) R^m_+  (c) −(R^m_+)
N.B. y ∈ R^m_+ iff y ∈ R^m and y_i ≥ 0 for all i; y ∈ −(R^m_+) iff y ∈ R^m and y_i ≤ 0 for all i.

Lagrangian Function (not directly needed for exam)
Theorem 3.1 (Min-Max Theorem). For L : X × Y → R we have
  max_{x ∈ X} min_{y ∈ Y} L(x; y) ≤ min_{y ∈ Y} max_{x ∈ X} L(x; y).
Proof. For all x ∈ X and y ∈ Y we have min_{y' ∈ Y} L(x; y') ≤ L(x; y) ≤ max_{x' ∈ X} L(x'; y). Hence for every y ∈ Y,
  max_{x ∈ X} min_{y' ∈ Y} L(x; y') ≤ max_{x ∈ X} L(x; y),
and taking the minimum over y ∈ Y gives
  max_{x ∈ X} min_{y ∈ Y} L(x; y) ≤ min_{y ∈ Y} max_{x ∈ X} L(x; y).

Lagrangian Function (not directly needed for exam)
Example 1. Defining L : R^n × R^m_+ → R as
  L(x; y) = c^T x + y^T (b − Ax) = b^T y + x^T (c − A^T y),
we have
  max_x {c^T x : Ax ≤ b} = max_{x ∈ R^n} min_{y ∈ R^m_+} L(x; y),
  min_{y ∈ R^m_+} max_{x ∈ R^n} L(x; y) = min_y {b^T y : A^T y = c, y ∈ R^m_+}.

Primal and dual problems. Given fixed parameters A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n:
  max_{x ∈ R^n} c^T x  s.t. Ax ≤ b,   (LP_p)
  min_{y ∈ R^m} b^T y  s.t. A^T y = c, y ≥ 0.   (LP_d)
- F_p := {x ∈ R^n | Ax ≤ b} is the feasible set of (LP_p),
- F_d := {y ∈ R^m | A^T y = c, y ≥ 0} is the feasible set of (LP_d),
- z*_p := max_{x ∈ F_p} c^T x is the optimal value of (LP_p),
- z*_d := min_{y ∈ F_d} b^T y is the optimal value of (LP_d),
- x* ∈ F_p is an optimal solution of (LP_p) if c^T x* = z*_p,
- y* ∈ F_d is an optimal solution of (LP_d) if b^T y* = z*_d.
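A small numerical check of this pair (hypothetical data; scipy.optimize.linprog minimises, so the primal is passed as −c, and the default x ≥ 0 bounds must be lifted since the primal variable is free):

import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 1.], [1., 0.]])
b = np.array([4., 3.])
c = np.array([2., 1.])
p = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None))   # primal (LP_p)
d = linprog(b, A_eq=A.T, b_eq=c, bounds=(0, None))     # dual (LP_d)
print(-p.fun, d.fun)                                   # both 7.0: strong duality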

Weak and strong duality
Lemma 3.2 (Weak duality, [MO, Lm 3.1]). For all (x, y) ∈ F_p × F_d we have c^T x ≤ b^T y, and thus z*_p ≤ z*_d.
Corollary 3.3 ([MO, Lm 3.1]). If (x, y) ∈ F_p × F_d with c^T x = b^T y, then x and y are optimal solutions of (LP_p) and (LP_d) respectively.
Theorem 3.4 (Strong Duality, [MO, Thm 3.1]).
- Primal feasible, dual feasible: z*_p = z*_d ∈ R.
- Primal feasible, dual infeasible: z*_p = z*_d = +∞.
- Primal infeasible, dual feasible: z*_p = z*_d = −∞.
- Primal infeasible, dual infeasible: −∞ = z*_p < z*_d = +∞.
If both (LP_p) and (LP_d) are feasible then there exist optimal solutions (x, y) ∈ F_p × F_d (satisfying c^T x = b^T y).

Complementarity. (x, y) ∈ R^n × R^m are optimal solutions of (LP_p) and (LP_d) respectively if and only if they solve the following system:
  Ax ≤ b (1),  A^T y = c (2),  c^T x − b^T y = 0 (3),  y ≥ 0 (4).
Note that for x, y solving this system we have
  0 = b^T y − c^T x = y^T (b − Ax) = Σ_{i=1}^m y_i (b − Ax)_i,  with y_i ≥ 0 and (b − Ax)_i ≥ 0.
Theorem 3.5 (Complementarity condition, [MO, Eq (3.12)]). If (1), (2), (4) hold then
  (3) ⟺ y^T (b − Ax) = 0 ⟺ y_i (b_i − [Ax]_i) = 0 for all i.

Equivalent LPs, example. What is the dual to max_x c^T x s.t. Ax ≤ b, x ≥ 0?
(a) min_y b^T y s.t. A^T y = c, y ≥ 0;
(b) min_y b^T y s.t. A^T y ≥ c, y ≥ 0;
(c) min_y b^T y s.t. A^T y = c;
(d) min_y b^T y s.t. A^T y ≥ c;
(e) Other.

Equivalent LPs, [MO, 3.1.2]
Example 2. Primal: max_x c^T x s.t. Ax ≤ b, x ∈ R^n. Dual: min_y b^T y s.t. y ≥ 0, A^T y = c.
Rules for primal dual pairs:
  Primal problem (max)   | Dual problem (min)
  Free variable          | Equality constraint
  Nonnegative variable   | ≥ constraint
  Equality constraint    | Free variable
  ≤ constraint           | Nonnegative variable

Example 2 (continued). Primal: max_x c^T x s.t. Ax ≤ b, x ≥ 0. Dual: min_y b^T y s.t. y ≥ 0, A^T y ≥ c. (Same rules table as above.)

Example 2 (continued). Primal: max_x c^T x s.t. Ax = b, x ≥ 0. Dual: min_y b^T y s.t. y ∈ R^m, A^T y ≥ c.

Example 2 (continued). Primal: max_x c^T x s.t. Ax = b, x ∈ R^n. Dual: min_y b^T y s.t. y ∈ R^m, A^T y = c.

Table of Contents
1 Introduction
2 Recap
3 Primal and dual problems [MO, ]
4 Shadow prices [MO, 3.1.3]: Problem; Shadow prices
5 Matrix Games [MO, 3.1.4]
6 Algorithms

Shadow prices, [MO, 3.1.3]. A factory makes products P_1, ..., P_n from resources R_1, ..., R_m:
- x_i: amount of P_i you choose to produce,
- c_i: profit per unit of P_i,
- a_ji: units of R_j required per unit of P_i,
- b_j: units of R_j available.
  max_x c^T x  s.t. Ax ≤ b, x ≥ 0.
Assume the problem is feasible and the optimal value, z*_0, is finite.

How much would you pay to get more resources?

The dual problem is min_y b^T y s.t. A^T y ≥ c, y ≥ 0. Let y* be the optimal solution of the dual problem.

Shadow prices, [MO, 3.1.3]. How much would you pay to get more resources? Perturb the resources by t:
  max_x c^T x  s.t. Ax ≤ b + t, x ≥ 0,
  min_y (b + t)^T y  s.t. A^T y ≥ c, y ≥ 0.

Lemma 3.6. Letting z*_t be their common optimal value, we have z*_t ≤ z*_0 + t^T y*.
Corollary 3.7 (Shadow Price). You obtain higher profit only if the price per unit for R_j is smaller than y*_j.
NB: If y* is the unique dual optimal solution, then there is ε > 0 such that z*_t = z*_0 + t^T y* for all t ∈ [−ε, ε]^m.

Table of Contents
1 Introduction
2 Recap
3 Primal and dual problems [MO, ]
4 Shadow prices [MO, 3.1.3]
5 Matrix Games [MO, 3.1.4]: Pure strategy; Mixed strategies; Nash Equilibrium
6 Algorithms

Matrix Games, [MO, 3.1.4]. A matrix game is an example of a non-cooperative two player game, with payout matrix A ∈ R^{m×n}. We have two players, R (rows) and C (columns). The players make one move simultaneously: R chooses row i and C chooses column j, and R wins a_ij from C.
Example 3. A = ( 1 4 ).

Pure strategy: moves are deterministic. There may be no Nash equilibrium, i.e. no stable public strategies.
Example 3 (continued): R chooses 1 → C chooses ...; C chooses 1 → R chooses 2; C would never choose 3.

Mixed strategies, [MO, 3.1.4]. Game:
- R chooses row i with probability x_i; x_i ≥ 0, Σ_i x_i = 1.
- C chooses column j with probability y_j; y_j ≥ 0, Σ_j y_j = 1.
- The expected payment to R from C is x^T A y.
Given y: R plays a solution of max_x x^T A y (= max_i (Ay)_i).
Given x: C plays a solution of min_y x^T A y (= min_j (A^T x)_j).
Example 4. Given y = (1/3, 1/3, 1/3), so that Ay = (0, 2)^T: R plays x = (0, 1), since
  max_x x^T A y = max_x (x_1, x_2)(0, 2)^T = 2.

Mixed strategies, [MO, 3.1.4]. Example 4 (continued). Given x = (7/10, 3/10) and y = (1/2, 1/2, 0):
  Given y: max_x x^T A y = max_x (x_1, x_2)(1/2, 1/2)^T = 1/2,
  Given x: min_y x^T A y = min_y (1/2, 1/2, 4)(y_1, y_2, y_3)^T = 1/2.

Nash Equilibrium, [MO, 3.1.4]. Assume your opponent plays the best possible strategy against you. For R: max_x min_y x^T A y. For C: min_y max_x x^T A y.
Lemma 3.8. max_{x ∈ F} min_{y ∈ G} f(x, y) ≤ min_{y ∈ G} max_{x ∈ F} f(x, y).
Theorem 3.9 (minmax-theorem, [MO, Thm 3.2]). There exist feasible x*, y* such that
  x*^T A y* = max_x x^T A y* = min_y x*^T A y = max_x min_y x^T A y = min_y max_x x^T A y.
x*, y* are a Nash equilibrium of the mixed strategy matrix game. We say a game is fair if x*^T A y* = 0.
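R's equilibrium strategy can be computed as an LP: maximise v subject to v ≤ (A^T x)_j for every column j and x in the simplex. A sketch with a hypothetical payoff matrix (matching pennies, a fair game):

import numpy as np
from scipy.optimize import linprog

A = np.array([[1., -1.], [-1., 1.]])
m, n = A.shape
c = np.r_[np.zeros(m), -1.0]                 # variables (x, v); minimise -v
A_ub = np.c_[-A.T, np.ones((n, 1))]          # v - (A^T x)_j <= 0 for all j
b_ub = np.zeros(n)
A_eq = [[1.0] * m + [0.0]]                   # sum_i x_i = 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]    # x >= 0, v free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], res.x[-1])                  # x* = [0.5 0.5], value 0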

Table of Contents
1 Introduction
2 Recap
3 Primal and dual problems [MO, ]
4 Shadow prices [MO, 3.1.3]
5 Matrix Games [MO, 3.1.4]
6 Algorithms: Simplex Algorithm; Interior Point Method

Simplex Algorithm. F_p = {x ∈ R^n | Ax ≤ b} is a polyhedron. In the simplex method we move from vertex to vertex of F_p, continually improving c^T x until we can improve it no more, and are thus at an optimum. Alternatively, in the dual simplex method we move from vertex to vertex of F_d. The two are mathematically equivalent, but in practice the dual simplex method tends to work better.
We need to choose which vertex to travel to. Methods are available which work well in practice, although there always seem to be some nasty examples remaining (exponential worst case).

Interior Point Method. Consider the following system in the variables (x, y, s) ∈ R^n × R^m × R^m:
  Ax + s = b  (m equalities),
  A^T y = c  (n equalities),
  y_i s_i = ε for all i  (m equalities),
  y, s ≥ 0.
(x, y) are optimal solutions to (LP_p) and (LP_d) respectively iff there exists s such that (x, y, s) is a solution to this system with ε = 0. Using Newton's method, we attempt to solve the system (excluding the inequalities) with ε > 0, decreasing ε towards zero as we go.
The interior point method has better worst case behaviour (polynomial). Comparison in practice is a matter of debate.


Mathematical Optimisation, Chpt 4: Convexity
Peter J.C. Dickinson, p.j.c.dickinson@utwente.nl
version: 09/04/18, Monday 5th March 2018

Table of Contents
1 Introduction
2 Convex sets [MO, 4.1]
3 Convex functions [MO, 4.2]
4 Reduction to R [MO, 4.2.1]
5 n variables [MO, 4.2.2]

Table of Contents
1 Introduction
2 Convex sets [MO, 4.1]: Recall; Convex sets; Hyperplanes and halfspaces; Ellipsoid method; Convex cones
3 Convex functions [MO, 4.2]
4 Reduction to R [MO, 4.2.1]
5 n variables [MO, 4.2.2]

Recall
Definition 4.1.
- A ⊆ R^n is closed if for all convergent sequences {x_i | i ∈ N} ⊆ A we have lim_{i→∞} x_i ∈ A.
- A ⊆ R^n is bounded if there is R ∈ R such that ‖x‖_2 ≤ R for all x ∈ A.
- A ⊆ R^n is compact if it is closed and bounded.
- f : R^n → R is a continuous function if f(c) = lim_{x→c} f(x) for all c ∈ R^n.
Lemma 4.2. If A is a compact set and {x_i | i ∈ N} ⊆ A, then there is a convergent subsequence {x_{i_j} | j ∈ N}, i.e. i_{j+1} > i_j ∈ N for all j ∈ N, and lim_{j→∞} x_{i_j} is well defined.
Ex. 4.1. Show that if f : R^n → R is a continuous function and A ⊆ R^n is a nonempty compact set, then the minimum and maximum of f over A are attained.

Convex sets
Definition 4.3. A ⊆ R^n is a convex set if x, y ∈ A, θ ∈ [0, 1] ⟹ (1 − θ)x + θy ∈ A.
NB: If a set is not convex then it is called a nonconvex set. There is no such thing as a concave set!
Ex. 4.2. Show that the intersection of (a) closed sets is closed; (b) convex sets is convex; (c) a bounded set with any other sets is bounded.
Corollary 4.4. The intersection of compact sets is compact.

[Figure: six example subsets of R², labelled (a)-(f), with their defining inequalities.]

Which of the six example sets are convex? (a) Nonconvex, (b) Convex, (c) Convex, (d) Convex, (e) Convex, (f) Nonconvex.

Hyperplanes and halfspaces
Definition 4.5.
- H ⊆ R^n is a hyperplane if there exist a ∈ R^n \ {0} and α ∈ R such that H = {x ∈ R^n : a^T x = α}.
- Ĥ ⊆ R^n is a closed halfspace if there exist a ∈ R^n \ {0} and α ∈ R such that Ĥ = {x ∈ R^n : a^T x ≤ α}.
Ex. 4.3. Prove that hyperplanes and closed halfspaces are closed convex sets.
Corollary 4.6. The intersection of (infinitely many) halfspaces and hyperplanes is a closed convex set.
Ex. 4.4. Show that the feasible sets F_p and F_d from the previous lecture (slide 9) are closed convex sets.


Hyperplanes and halfspaces
Definition 4.7. The hyperplane H = {x ∈ R^n | a^T x = α} (with a ∈ R^n \ {0}, α ∈ R) is a separating hyperplane w.r.t. A ⊆ R^n and c ∈ R^n \ A if a^T x ≤ α < a^T c for all x ∈ A.
Theorem 4.8 ([MO, Lm 4.1 & Cor 4.1]). For A ⊆ R^n, the following are equivalent:
1 A is an intersection of closed halfspaces.
2 For all c ∈ R^n \ A there exists a separating hyperplane w.r.t. A and c.
3 A is a closed convex set.
Ex. 4.6. Prove (2) ⟹ (1) ⟹ (3).

Proof of (3) ⟹ (2). Let A ⊆ R^n be a closed convex set and let c ∈ R^n \ A. We will show there is (a, α) ∈ R^n × R s.t. a^T x ≤ α < a^T c for all x ∈ A.
1 Let b ∈ argmin_z {‖c − z‖_2 : z ∈ A}.
2 Let a = c − b ∈ R^n \ {0}, α = a^T b.
3 a^T c = a^T (a + b) = ‖a‖_2² + α > α.
4 Assume for the sake of contradiction that there is y ∈ A such that β = a^T y − α > 0.
5 For θ ∈ [0, 1] we have x_θ = θy + (1 − θ)b ∈ A, and
  ‖b − c‖_2² ≤ ‖x_θ − c‖_2² = ‖b − c‖_2² − 2θβ + θ² ‖y − b‖_2² < ‖b − c‖_2²
for θ > 0 small enough, a contradiction.


Supporting hyperplanes
Definition 4.9. The hyperplane H = {x ∈ R^n | a^T x = α} (with a ∈ R^n \ {0}, α ∈ R) is a supporting hyperplane of A ⊆ R^n at c ∈ A if a^T x ≤ α = a^T c for all x ∈ A.
Theorem 4.10 ([MO, Thm 4.2]). For a closed convex set A and c ∈ A we have that c ∈ bd(A) if and only if there is a supporting hyperplane of A at c.

Ellipsoid method. For a compact convex set A consider the problem min c^T x s.t. x ∈ A. If we can check whether y ∈ A efficiently, then this problem can be solved efficiently using the ellipsoid method.

Definition 4.11. K ⊆ R^n is a cone if R_{++} K ⊆ K.
[Figure: the six example subsets of R² from before, labelled (a)-(f).]

Which of the six example sets are cones? (a) Cone, (b) Cone, (c) Not cone, (d) Cone, (e) Not cone, (f) Not cone.

Convex cones
Definition 4.12. K ⊆ R^n is a cone if R_{++} K ⊆ K.
Theorem 4.13. K ⊆ R^n is a convex cone iff x, y ∈ K, λ_1, λ_2 > 0 ⟹ λ_1 x + λ_2 y ∈ K.
Some closed convex cones:
- Nonnegative vectors: R^n_+ = {x ∈ R^n | x ≥ 0},
- Symmetric matrices: S^n = {X ∈ R^{n×n} | X = X^T},
- Positive semidefinite cone: PSD_n = {X ∈ S^n | v^T X v ≥ 0 ∀v ∈ R^n} = {X ∈ S^n | X ⪰ 0}.
e.g. Semidefinite Optimisation: max c^T x s.t. B − Σ_{i=1}^n A_i x_i ⪰ 0.

Table of Contents
1 Introduction
2 Convex sets [MO, 4.1]
3 Convex functions [MO, 4.2]: Epigraph; Formal definition; Examples; Convex Hull
4 Reduction to R [MO, 4.2.1]
5 n variables [MO, 4.2.2]

Epigraphs
Definition 4.14. For a function f : A → R, the epigraph epi(f) ⊆ A × R is given by epi(f) = {(x, z) ∈ A × R | z ≥ f(x)}.
e.g. f(x) = x sin(x). [Figure: the graph {(x, z) | z = f(x)} and its epigraph epi(f).]
Theorem 4.15 ([MO, Ex 4.6]). f is a convex function if and only if epi(f) is a convex set.
Corollary 4.16. f : A → R convex ⟹ {x ∈ A | f(x) ≤ a} is convex for all a ∈ R.


Formal definition
Definition 4.17. A function f : A → R is a convex function if A is a convex set and for all x, y ∈ A, θ ∈ [0, 1] we have f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y).
f : A → R is strictly convex if A is a convex set and for all x, y ∈ A, θ ∈ (0, 1) with x ≠ y we have f(θx + (1 − θ)y) < θf(x) + (1 − θ)f(y).
Ex. 4.7. Prove Theorem 4.15.

Examples
- Affine functions (i.e. f(x) = a^T x + α) are convex.
- f(x) = x², x⁴, exp(x), |x| are convex on R.
- f(x) = x³ − x is convex on R_+, but not on [−1, 1].
- f(x) = ax² + bx + c is convex on R if and only if a ≥ 0.
- f(x) = x² if x ∈ [−1, 1), f(x) = a if x = 1: convex iff a ≥ 1.
- f(x) = ‖x‖ is convex on R^n for any (semi)norm.

Convex Hull
Definition 4.18. For r ∈ N, a_1, ..., a_r ∈ R^n, θ_1, ..., θ_r ≥ 0 with Σ_{i=1}^r θ_i = 1, we say that v = Σ_{i=1}^r θ_i a_i is a convex combination of the a_i's.
Definition 4.19. The convex hull of A is the set of all convex combinations of vectors in A:
  conv A = {Σ_{i=1}^r θ_i a_i | r ∈ N, a_1, ..., a_r ∈ A, θ_1, ..., θ_r ≥ 0, Σ_{i=1}^r θ_i = 1}.
Theorem 4.20. conv A is the smallest convex set containing A.

Important auxiliary result
Ex. 4.8. Show for A ⊆ R^n and f : A → R that
(a) A is convex if and only if for all r ∈ N, a_1, ..., a_r ∈ A, θ_1, ..., θ_r ≥ 0 with Σ_{i=1}^r θ_i = 1 we have Σ_{i=1}^r θ_i a_i ∈ A.
(b) f is convex if and only if for all r ∈ N, a_1, ..., a_r ∈ A, θ_1, ..., θ_r ≥ 0 with Σ_{i=1}^r θ_i = 1 we have Σ_{i=1}^r θ_i a_i ∈ A and f(Σ_{i=1}^r θ_i a_i) ≤ Σ_{i=1}^r θ_i f(a_i).
Corollary 4.21 ([MO, Thm 4.3]). Let A = conv{a_i : i ∈ I} ⊆ R^n and consider a convex function f : A → R. Then max_{x ∈ A} f(x) = max_{i ∈ I} f(a_i).

Table of Contents
1 Introduction
2 Convex sets [MO, 4.1]
3 Convex functions [MO, 4.2]
4 Reduction to R [MO, 4.2.1]: Reduction to R; Differentiation; Second Derivative; Continuity
5 n variables [MO, 4.2.2]

Reduction to R
Lemma 4.22 ([MO, Lm 4.2]). f : A → R is convex if and only if for every x_0 ∈ A and d ∈ R^n, the function p_{x_0,d}(t) := f(x_0 + td) is a convex function of t on the interval L = A_d(x_0) = {t ∈ R | x_0 + td ∈ A}.
Ex. 4.9. Prove Lemma 4.22.
e.g. For A ∈ S^n, consider f(x) = x^T A x + a^T x + α. We have p_{x_0,d}(t) = t² d^T A d + 2t d^T (A x_0 + a) + f(x_0), which is convex for fixed x_0, d ∈ R^n iff d^T A d ≥ 0. Therefore f is convex iff d^T A d ≥ 0 for all x_0, d ∈ R^n, or equivalently A ⪰ 0.
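A numerical companion to this example (hypothetical matrix): by the equivalence above, convexity of the quadratic reduces to A ⪰ 0, which we can test via eigenvalues.

import numpy as np

A = np.array([[2., -1.], [-1., 2.]])              # symmetric matrix of the quadratic
convex = np.all(np.linalg.eigvalsh(A) >= -1e-12)  # A PSD <=> f convex
print(convex)                                     # True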

Common Error
Corollary 4.23. For a convex function f : R^n → R we have that for every x_0 ∈ R^n and i ∈ {1, ..., n}, the function p_{x_0,e_i}(t) = f(x_0 + t e_i) is a convex function of t ∈ R.
The converse fails! e.g. consider the function f : R² → R given by f(x) = x_1 x_2. For every x_0 ∈ R^n and i ∈ {1, 2}, the function p_{x_0,e_i}(t) is a linear function, and thus convex. However
  (1/2) f(1, −1) + (1/2) f(−1, 1) = −1 < 0 = f((1/2)(1, −1) + (1/2)(−1, 1)),
and thus f is not convex on R².


Left and right derivatives
Lemma 4.24 ([MO, Lm 4.3]). For convex f : (a, b) → R and x_0 ∈ (a, b), the difference quotient
  φ_{x_0}(t) := (f(x_0 + t) − f(x_0)) / t,  t ∈ (a − x_0, b − x_0) \ {0},
is monotonically increasing in t. Thus
  f'_−(x_0) := lim_{t→0−} φ_{x_0}(t) ≤ lim_{t→0+} φ_{x_0}(t) =: f'_+(x_0),
and f'_−(x_0), f'_+(x_0) ∈ R.
Remark: A convex function f : (a, b) → R need not be differentiable at all x ∈ (a, b); e.g. for f(x) = |x| we have f'_−(0) = −1 < 1 = f'_+(0).

Subderivative
Theorem 4.25 ([MO, Eq (4.1)]). Let f : (a, b) → R be convex and differentiable at x ∈ (a, b). Then f(y) ≥ f(x) + f'(x)(y − x) for all y ∈ (a, b).
Definition 4.26. For f : (a, b) → R, we call d_x ∈ R a subderivative of f at x ∈ (a, b) if f(y) ≥ f(x) + d_x (y − x) for all y ∈ (a, b). The set of all subderivatives of f at x is the subdifferential, denoted by ∂f(x).
Ex. Show that if f : (a, b) → R is convex then for all x ∈ (a, b) we have ∂f(x) = {d ∈ R | f'_−(x) ≤ d ≤ f'_+(x)}.

Subderivative
Theorem 4.27 ([MO, Thm 4.6]). Let f : (a, b) → R. Then f is convex ⟺ ∂f(x) ≠ ∅ for all x ∈ (a, b).
Ex. Prove Theorem 4.27.
Corollary 4.28. Let f ∈ C¹. Then f is convex ⟺ f(y) ≥ f(x) + f'(x)(y − x) for all x, y.

Second Derivative
Theorem 4.29 ([MO, Thm 4.7]). Let f : (a, b) → R be differentiable. Then f is convex ⟺ f'(x) is monotonically increasing on (a, b).
Corollary 4.30 ([MO, Cor 4.2]). Let f : (a, b) → R be twice differentiable. Then f is convex ⟺ f''(x) ≥ 0 for all x ∈ (a, b).

Lipschitz Continuity
Definition 4.31. f : R^n → R is Lipschitz continuous at x_0 if there are ε, L > 0 such that |f(x) − f(x_0)| ≤ L ‖x − x_0‖ for all x ∈ U_ε(x_0), where U_ε(x_0) := {x ∈ R^n | ‖x − x_0‖ ≤ ε}.
Theorem 4.32. Let f : (a, b) → R be convex. Then f is Lipschitz continuous at all x_0 ∈ (a, b).
e.g. A convex function f : A → R need not be continuous at the boundary of A, e.g. the following function on [0, 1]: f(x) = 0 if x ∈ [0, 1), f(x) = 1 if x = 1.

Table of Contents
1 Introduction
2 Convex sets [MO, 4.1]
3 Convex functions [MO, 4.2]
4 Reduction to R [MO, 4.2.1]
5 n variables [MO, 4.2.2]: Subgradient; Differentiability and continuity

Subgradient
Definition 4.33 ([MO, Eq (4.3)]). For f : A → R, we call d_x ∈ R^n a subgradient of f at x ∈ A if f(y) ≥ f(x) + d_x^T (y − x) for all y ∈ A. The set of all subgradients of f at x is the subdifferential, denoted by ∂f(x).
Theorem 4.34 ([MO, Thm 4.8]). For f : A → R with A an open convex set, we have f is convex ⟺ ∂f(x) ≠ ∅ for all x ∈ A.
Theorem 4.35. For f : A → R convex, x ∈ A and d ∈ ∂f(x) we have {y ∈ A | f(y) ≤ f(x)} ⊆ {y | d^T y ≤ d^T x}.
Ex. Show that if f : A → R is convex with A open, then ∂f(x) is a nonempty convex compact set for all x ∈ A.

Differentiability and continuity
Corollary 4.36 (from Corollary 4.28, [MO, Thm 4.8]). Let f ∈ C¹. Then f is convex ⟺ f(y) ≥ f(x) + ∇f(x)^T (y − x) for all x, y.
Corollary 4.37 (from Corollary 4.30, [MO, Cor 4.3]). Let f : A → R be twice differentiable, with A convex. Then f is convex ⟺ ∇²f(x) ⪰ 0 for all x ∈ A.
Theorem 4.38 ([MO, Thm 4.9]). Let f : A → R be convex. Then f is continuous on int A. In fact f is Lipschitz continuous at all x_0 ∈ int A.

Mathematical Optimisation, Chpt 5: Unconstrained Optimisation
Peter J.C. Dickinson, p.j.c.dickinson@utwente.nl
version: 09/04/18, Wednesday 7th March 2018

Table of Contents
1 Introduction [MO, 5.1]: Definitions; Descent methods; Necessary Conditions and Sufficient Conditions; Solving necessary conditions
2 Descent Methods [MO, 5.2 & 5.4]
3 Minimisation of f without derivatives [MO, 5.8]
4 Method of conjugate directions [MO, 5.3]
5 Newton's method [MO, 5.5-7]

115 Intro Desc. Methods No derivative Conj. Dir. Newton's meth. Minimisation problem: Given $F \subseteq \mathbb{R}^n$ and $f : F \to \mathbb{R}$, $\min_{x \in F} f(x)$ (P). (P) is an unconstrained program if $F$ is open, e.g., $F = \mathbb{R}^n$. Def. $\bar{x} \in F$ is a global minimiser of $f$ (over $F$) if $f(\bar{x}) \leq f(x)$ for all $x \in F$. $\bar{x} \in F$ is a local minimiser of $f$ if there is an $\varepsilon > 0$ such that $f(\bar{x}) \leq f(x)$ for all $x \in F$ with $\|x - \bar{x}\| \leq \varepsilon$, and a strict local minimiser if with an $\varepsilon > 0$: $f(\bar{x}) < f(x)$ for all $x \in F$, $x \neq \bar{x}$, $\|x - \bar{x}\| \leq \varepsilon$. Rem. In nonlinear (nonconvex) optimisation we usually mean: find a local minimiser. Global minimisation is more difficult. Peter J.C. Dickinson MO18, Chpt 5: Unconstrained Optimisation 3/49
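A small example (not from the slides) separating the notions: for $f(x) = (\max(|x| - 1, 0))^2$ every point of $[-1, 1]$ is a global minimiser, yet none is strict, since nearby points attain the same value:

```python
# Illustration only: global minimisers that are not strict local minimisers.
def f(x):
    return max(abs(x) - 1.0, 0.0) ** 2

print(f(0.3), f(0.9))   # 0.0 0.0 : both minimisers, neither is strict
print(f(1.5))           # 0.25   : f grows away from [-1, 1]
```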

116 Intro Desc. Methods No derivative Conj. Dir. Newton's meth. CONCEPTUAL ALGORITHM: Choose $x_0 \in \mathbb{R}^n$. Iterate, step $k$: given $x_k \in \mathbb{R}^n$, find a new point $x_{k+1}$ with $f(x_{k+1}) < f(x_k)$. We want: $x_k \to \bar{x}$ with $\bar{x}$ a local minimiser. Definition Let $x_k \to \bar{x}$ for $k \to \infty$. The sequence $(x_k)$ is: linearly convergent if with a constant $0 \leq C < 1$ and some $K \in \mathbb{N}$: $\|x_{k+1} - \bar{x}\| \leq C \|x_k - \bar{x}\|$ for all $k \geq K$ ($C$ is called the convergence factor); quadratically convergent if with a constant $c \geq 0$: $\|x_{k+1} - \bar{x}\| \leq c \|x_k - \bar{x}\|^2$ for all $k \in \mathbb{N}$; superlinearly convergent if $\lim_{k \to \infty} \|x_{k+1} - \bar{x}\| / \|x_k - \bar{x}\| = 0$. Peter J.C. Dickinson MO18, Chpt 5: Unconstrained Optimisation 4/49
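These rates can be estimated from iterates. The sketch below (my own illustration; the two step maps are assumptions chosen for the example) compares Newton's method for $x^2 = 2$, which converges quadratically, with a small-stepsize fixed-point iteration, which converges only linearly:

```python
# Illustration only: read off convergence rates from error sequences.
import math

root = math.sqrt(2)

def errors(step, x0, k):
    """Run k iterations of x <- step(x) and record the errors |x_k - root|."""
    x, errs = x0, []
    for _ in range(k):
        x = step(x)
        errs.append(abs(x - root))
    return errs

newton = errors(lambda x: x / 2 + 1 / x, 1.0, 5)          # x - f(x)/f'(x) for f(x) = x^2 - 2
linear = errors(lambda x: x - 0.1 * (x * x - 2), 1.0, 5)  # damped fixed-point iteration

# e_{k+1}/e_k tends to a constant C < 1 for the linear method and to 0 for
# Newton's method (superlinear); for Newton, e_{k+1}/e_k^2 stays bounded.
print([e2 / e1 for e1, e2 in zip(linear, linear[1:])])
print([e2 / e1 for e1, e2 in zip(newton, newton[1:])])
```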

117 Intro Desc. Methods No derivative Conj. Dir. Newton's meth. Geometry of $\min f(x)$: For $f \in C^1(\mathbb{R}^n, \mathbb{R})$ consider the level set $N_\alpha = \{x \mid f(x) = \alpha\}$ (some $\alpha \in \mathbb{R}$) and a point $\bar{x} \in N_\alpha$ with $\nabla f(\bar{x}) \neq 0$. Then: in a neighbourhood of $\bar{x}$ the solution set $N_\alpha$ is a $C^1$-manifold of dimension $n - 1$, and at $\bar{x}$ we have $\nabla f(\bar{x}) \perp N_\alpha$, i.e., $\nabla f(\bar{x})$ is perpendicular to $N_\alpha$ and points into the direction of maximal increase of $f$. Notation: In this Chapter 5 of the sheets the gradient $\nabla f(x)$ is always a column vector! Peter J.C. Dickinson MO18, Chpt 5: Unconstrained Optimisation 5/49
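A numerical illustration of $\nabla f(\bar{x}) \perp N_\alpha$ (example of my own choosing, not from the slides): on the level set $x_1^2 + 2 x_2^2 = 3$ of $f(x) = x_1^2 + 2 x_2^2$, the tangent of the parametrised level curve is orthogonal to the gradient at every point:

```python
# Illustration only: the gradient (2*x1, 4*x2) is orthogonal to the tangent of
# the level curve t -> (sqrt(3) cos t, sqrt(3/2) sin t) of x1^2 + 2*x2^2 = 3.
import math

a, b = math.sqrt(3), math.sqrt(1.5)
for t in [0.3, 1.1, 2.5]:
    x1, x2 = a * math.cos(t), b * math.sin(t)
    tangent = (-a * math.sin(t), b * math.cos(t))   # derivative of the parametrisation
    grad = (2 * x1, 4 * x2)
    print(round(grad[0] * tangent[0] + grad[1] * tangent[1], 12))  # 0.0 each time
```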

118 Intro Desc. Methods No derivative Conj. Dir. Newton's meth. Example: $f(x_1, x_2) = (x_1^2 + x_2^2)\,((x_1 - 5)^2 + (x_2 - 1)^2)\,((x_1 - 2)^2 + (x_2 - 3)^2 + 1)$. Two global minima and three strict local minima. Peter J.C. Dickinson MO18, Chpt 5: Unconstrained Optimisation 6/49
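Assuming the reconstruction of $f$ above is correct, a multistart local search recovers distinct local minimisers; the starts that converge to $(0, 0)$ and $(5, 1)$ reach $f = 0$, the global minimum value. A sketch using scipy (the starting points are arbitrary choices):

```python
# Illustration only: multistart local minimisation of the example function.
import numpy as np
from scipy.optimize import minimize

def f(v):
    x1, x2 = v
    return ((x1**2 + x2**2)
            * ((x1 - 5)**2 + (x2 - 1)**2)
            * ((x1 - 2)**2 + (x2 - 3)**2 + 1))

found = []
for start in [(-1.0, -1.0), (6.0, 1.0), (2.0, 3.0), (1.0, 2.0), (4.0, 3.0)]:
    res = minimize(f, np.array(start))  # default BFGS with numerical gradients
    if res.success and not any(np.linalg.norm(res.x - p) < 1e-3 for p in found):
        found.append(res.x)

for p in found:
    print(np.round(p, 3), round(f(p), 6))  # (0, 0) and (5, 1) attain f = 0
```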
