Linear Algebra Review: Linear Independence, Affine Independence, and Subspaces (IE418 Integer Programming)

Linear Algebra Review: Linear Independence
IE418: Integer Programming
Department of Industrial and Systems Engineering, Lehigh University
21st March 2005

A finite collection of vectors x_1, ..., x_k ∈ R^n is linearly independent if the unique solution to ∑_{i=1}^k λ_i x_i = 0 is λ_i = 0, i = 1, 2, ..., k. Otherwise, the vectors are linearly dependent.
Let A be a square matrix. Then the following statements are equivalent:
The matrix A is invertible.
The matrix A^T is invertible.
The determinant of A is nonzero.
The rows of A are linearly independent.
The columns of A are linearly independent.
For every vector b, the system Ax = b has a unique solution.

Linear Algebra Review: Affine Independence

A finite collection of vectors x_1, ..., x_k ∈ R^n is affinely independent if the unique solution to ∑_{i=1}^k α_i x_i = 0, ∑_{i=1}^k α_i = 0 is α_i = 0, i = 1, 2, ..., k.
Linear independence implies affine independence, but not vice versa.
Affine independence is essentially a coordinate-free version of linear independence.
The following statements are equivalent:
1. x_1, ..., x_k ∈ R^n are affinely independent.
2. x_2 − x_1, ..., x_k − x_1 are linearly independent.
3. (x_1, 1), ..., (x_k, 1) ∈ R^{n+1} are linearly independent.

Linear Algebra Review: Subspaces

A nonempty subset H ⊆ R^n is called a subspace if αx + γy ∈ H for all x, y ∈ H and all α, γ ∈ R.
A linear combination of a collection of vectors x_1, ..., x_k ∈ R^n is any vector y ∈ R^n such that y = ∑_{i=1}^k λ_i x_i for some λ ∈ R^k.
The span of a collection of vectors x_1, ..., x_k ∈ R^n is the set of all linear combinations of those vectors.
Given a subspace H ⊆ R^n, a collection of linearly independent vectors whose span is H is called a basis of H. The number of vectors in the basis is the dimension of the subspace.
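(A quick computational check, added to these notes and not part of the original slides.) Both notions reduce to a rank computation: vectors are linearly independent when the matrix having them as rows has rank k, and affinely independent when the same holds after appending a 1 to each vector (statement 3 above). The helper names below are my own. In Python with NumPy:

import numpy as np

def linearly_independent(vectors):
    """Rows are linearly independent iff the matrix they form has full row rank."""
    X = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(X) == X.shape[0]

def affinely_independent(vectors):
    """x_1,...,x_k are affinely independent iff (x_1,1),...,(x_k,1) are linearly independent."""
    X = np.array(vectors, dtype=float)
    ones = np.ones((X.shape[0], 1))
    return np.linalg.matrix_rank(np.hstack([X, ones])) == X.shape[0]

v = [(1, 1), (2, 2), (1, 0)]
print(linearly_independent(v))   # False: (2, 2) = 2 * (1, 1)
print(affinely_independent(v))   # True: the three points are not collinear

The example exhibits vectors that are affinely independent but not linearly independent, which is exactly the "not vice versa" in the statement above.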

Linear Algebra Review: Subspaces and Bases

A given subspace has an infinite number of bases.
Each basis has the same number of vectors in it.
If S and T are subspaces such that S ⊆ T ⊆ R^n, then a basis of S can be extended to a basis of T.
The span of the columns of a matrix A is a subspace called the column space or the range, denoted range(A).
The span of the rows of a matrix A is a subspace called the row space.

Linear Algebra Review: Rank and Nullity

The dimensions of the column space and the row space are always equal. We call this number rank(A).
rank(A) ≤ min{m, n}. If rank(A) = min{m, n}, then A is said to have full rank.
The set {x ∈ R^n : Ax = 0} is called the nullspace of A (denoted null(A)) and has dimension n − rank(A).

Some Properties of Subspaces

The following are equivalent:
1. H ⊆ R^n is a subspace.
2. There is an m × n matrix A such that H = {x ∈ R^n : Ax = 0}.
3. There is a k × n matrix B such that H = {x ∈ R^n : x = uB, u ∈ R^k}.
If {x ∈ R^n : Ax = b} is nonempty, the maximum number of affinely independent solutions of Ax = b is n + 1 − rank(A).
If H ⊆ R^n is a subspace, then {x ∈ R^n : x^T y = 0 for all y ∈ H} is a subspace called the orthogonal subspace, denoted H^⊥.
If H = {x ∈ R^n : Ax = 0} with A ∈ R^{m×n}, then H^⊥ = {x ∈ R^n : x = A^T u, u ∈ R^m}.

Convex Sets

A set S ⊆ R^n is convex if for all x, y ∈ S and all λ ∈ [0, 1], we have λx + (1 − λ)y ∈ S.
Let x_1, ..., x_k ∈ R^n and λ ∈ R^k with λ ≥ 0 and λ^T e = 1 be given (e being the vector of all ones). Then
1. The vector ∑_{i=1}^k λ_i x_i is said to be a convex combination of x_1, ..., x_k.
2. The convex hull of x_1, ..., x_k is the set of all convex combinations of these vectors, denoted conv(x_1, ..., x_k).
The convex hull of two points is a line segment.
A set is convex if and only if for any two points in the set, the line segment joining those two points lies entirely in the set.
All polyhedra are convex.
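(Again an added illustration, not from the slides.) The rank-nullity relationship dim null(A) = n − rank(A), and the fact that vectors of the form A^T u are orthogonal to null(A), can be checked numerically; scipy.linalg.null_space returns an orthonormal basis of the nullspace:

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.],
              [1., 3., 1., 3.]])     # third row = row1 + row2, so rank(A) = 2
m, n = A.shape

r = np.linalg.matrix_rank(A)
N = null_space(A)                    # columns form an orthonormal basis of null(A)
print(r, N.shape[1], n - r)          # 2, 2, 2: nullity equals n - rank(A)

# Any vector A^T u lies in the row space and is orthogonal to null(A):
u = np.array([1., -2., 0.5])
print(np.allclose(N.T @ (A.T @ u), 0))   # True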

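(Added illustration.) Deciding whether a given point lies in conv(x_1, ..., x_k) amounts to asking whether it is a convex combination of the x_i, which is a small feasibility linear program. A sketch using scipy.optimize.linprog, with a helper name of my own choosing:

import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, generators):
    """Return True if `point` is a convex combination of the rows of `generators`."""
    X = np.array(generators, dtype=float)    # k x n
    y = np.array(point, dtype=float)
    k = X.shape[0]
    # Find lambda >= 0 with lambda^T e = 1 and sum_i lambda_i x_i = y (zero objective).
    A_eq = np.vstack([X.T, np.ones((1, k))])
    b_eq = np.concatenate([y, [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(in_convex_hull((0.25, 0.75), square))   # True: inside the unit square
print(in_convex_hull((1.5, 0.5), square))     # False: outside the hull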
Polyhedra, Hyperplanes, and Half-spaces

A polyhedron is a set of the form {x ∈ R^n : Ax ≤ b} = {x ∈ R^n : a_i x ≤ b_i, i ∈ M}, where A ∈ R^{m×n} and b ∈ R^m.
A polyhedron P ⊆ R^n is bounded if there exists a constant K such that |x_i| < K for all x ∈ P and all i ∈ {1, ..., n}.
A bounded polyhedron is called a polytope.
Let a ∈ R^n and b ∈ R be given. The set {x ∈ R^n : a^T x = b} is called a hyperplane. The set {x ∈ R^n : a^T x ≤ b} is called a half-space.

Dimension of Polyhedra

A polyhedron P is of dimension k, denoted dim(P) = k, if the maximum number of affinely independent points in P is k + 1.
A polyhedron P ⊆ R^n is full-dimensional if dim(P) = n.
Let M = {1, ..., m}, let M^= = {i ∈ M : a_i x = b_i for all x ∈ P} (the equality set), and let M^≤ = M \ M^= (the inequality set). Let (A^=, b^=) and (A^≤, b^≤) be the corresponding rows of (A, b).
2.4 If P ⊆ R^n is nonempty, then dim(P) + rank(A^=, b^=) = n.

Dimension and Rank

x ∈ P is called an inner point of P if a_i x < b_i for all i ∈ M^≤.
x ∈ P is called an interior point of P if a_i x < b_i for all i ∈ M.
Every nonempty polyhedron has an inner point.
A polyhedron has an interior point if and only if it is full-dimensional.
2.4 If P ⊆ R^n is nonempty, then dim(P) + rank(A^=, b^=) = n.

Example

POLLY ⊆ R^5 is the set of points satisfying:
x_1 − 2x_2 + x_3 − x_4 + 2x_5 ≤ 3
x_1 − x_5 ≤ 0
−x_1 + x_5 ≤ 0
2x_2 − x_3 + x_4 ≤ 2
−4x_2 + 2x_3 − 2x_4 ≤ −4
3x_1 − x_2 ≤ 2
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, x_4 ≥ 0, x_5 ≥ 0
What is dim(POLLY)?
Consider the points (1, 1, 0, 0, 1), (0, 1, 0, 0, 0), (1, 2, 2, 0, 1), (0, 0, 0, 2, 0), (1, 0, 0, 2, 1).
Which are in POLLY?
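(Added illustration, not part of the slides.) Membership is just a matter of evaluating the constraints. The sketch below encodes POLLY in the form Ax ≤ b, writing each bound x_i ≥ 0 as −x_i ≤ 0, and tests the five candidate points:

import numpy as np

# POLLY as A x <= b (the last five rows are -x_i <= 0).
A = np.vstack([
    np.array([[ 1, -2,  1, -1,  2],
              [ 1,  0,  0,  0, -1],
              [-1,  0,  0,  0,  1],
              [ 0,  2, -1,  1,  0],
              [ 0, -4,  2, -2,  0],
              [ 3, -1,  0,  0,  0]], dtype=float),
    -np.eye(5),
])
b = np.array([3, 0, 0, 2, -4, 2, 0, 0, 0, 0, 0], dtype=float)

points = [(1, 1, 0, 0, 1), (0, 1, 0, 0, 0), (1, 2, 2, 0, 1),
          (0, 0, 0, 2, 0), (1, 0, 0, 2, 1)]

for p in points:
    x = np.array(p, dtype=float)
    print(p, "in POLLY:", bool(np.all(A @ x <= b + 1e-9)))
# All but (1, 0, 0, 2, 1) are in POLLY; that point violates 3x1 - x2 <= 2.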

Figuring dim(p OLLY ) Figuring dim(p OLLY ) Are the points in P OLLY affinely independent? 02 rank B6 @ 4 1 0 1 0 1 1 2 0 0 0 2 0 0 0 0 2 1 0 1 0 1 1 1 1 31 = 4 7C 5A Implies that (1, 1, 0, 0, 1), (0, 1, 0, 0, 0), (1, 2, 2, 0, 1), (0, 0, 0, 2, 0) are affinely independent. dim(p OLLY ) 3 By 2.4, we now know dim(p OLLY ) = 3, 4, or 5 All points in P OLLY satisfy the following inequalities with equality: x 1 x 5 0 x 1 + x 5 0 2x 2 x 3 + x 4 2 4x 2 + 2x 3 2x 2 4 So rank(a =, b = ) 2 dim(p ) = 3 02 rank B6 @ 4 1 0 0 0 1 0 1 0 0 0 1 0 0 2 1 1 0 2 0 4 2 2 0 4 31 7C 5A = 2 Valid Inequalities and Faces More on Faces The inequality denoted by (π, π 0 ) is called a valid inequality for P if πx π 0 x P. Note that (π, π 0 ) is a valid inequality if and only if P lies in the half-space {x R n πx π 0 }. If (π, π 0 ) is a valid inequality for P and F = {x P πx = π 0 }, F is called a face of P and we say that (π, π 0 ) represents or defines F. A face is said to be proper if F and F P. Note that a face has multiple representations. The face represented by (π, π 0 ) is nonempty if and only if max{πx x P } = π 0. If the face F is nonempty, we say it supports P. Note that the set of optimal solutions to an LP is always a face of the feasible region. 3.1 Let P be a polyhedron with equality set M =. If F = {x P π T x = π 0 } is nonempty, then F is a polyhedron. Let M = F M =, M F = M \ M = F. Then F = {x a T i x = b i i M = F, at i x b i i M f } We get the polyhedron F by taking some of the inequalities of P and making them equalities The number of distinct faces of P is finite.

Facets

A face F is said to be a facet of P if dim(F) = dim(P) − 1.
In fact, facets are all we need to describe polyhedra.
3.2 If F is a facet of P, then in any description of P, there exists some inequality representing F. (By setting that inequality to equality, we get F.)
3.3 and 3.4 Every inequality that represents a face that is not a facet is unnecessary in the description of P.

Example, cont.

Consider the face
F = {x ∈ POLLY : 2x_1 + 10x_2 − 5x_3 + 5x_4 − 3x_5 = 10}.
Is it proper? Is F = POLLY? Is F = ∅?
Points: (0, 2, 2, 0, 0), (0, 1, 0, 0, 0), (0, 0, 0, 2, 0).
Are they in F? (Don't forget, they must also be in P.)
Are they affinely independent? Yes! So dim(F) ≥ 2.
Is dim(F) ≤ 2? (Yes!)
dim(POLLY) = 3, so F is a facet of POLLY.

Facet Representation

Remember 3.2: if F is a facet of POLLY, then there is some inequality a_k x ≤ b_k, k ∈ M, representing F.
Which inequality in the inequality set of POLLY represents F?
For x ∈ POLLY, 2x_1 + 10x_2 − 5x_3 + 5x_4 − 3x_5 = 10 holds if and only if x_1 = 0.
So F is represented by the inequality x_1 ≥ 0.

Polyhedra: A Fundamental Representation Theorem

Putting together what we have seen so far, we can say the following:
(3.5) Every full-dimensional polyhedron P has a unique (up to scalar multiplication) representation that consists of one inequality representing each facet of P. If dim(P) = n − k with k > 0, then P is described by a maximal set of linearly independent rows of (A^=, b^=), as well as one inequality representing each facet of P.
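(Added illustration, re-encoding POLLY exactly as in the earlier membership sketch.) The facet claim can be checked numerically: the three listed points lie in POLLY, satisfy the valid inequality at equality, and are affinely independent, so dim(F) ≥ 2; since F is a proper face of the 3-dimensional POLLY, dim(F) = 2 and F is a facet:

import numpy as np

A = np.vstack([
    np.array([[ 1, -2,  1, -1,  2],
              [ 1,  0,  0,  0, -1],
              [-1,  0,  0,  0,  1],
              [ 0,  2, -1,  1,  0],
              [ 0, -4,  2, -2,  0],
              [ 3, -1,  0,  0,  0]], dtype=float),
    -np.eye(5),
])
b = np.array([3, 0, 0, 2, -4, 2, 0, 0, 0, 0, 0], dtype=float)

pi, pi0 = np.array([2, 10, -5, 5, -3], dtype=float), 10.0
pts = np.array([[0, 2, 2, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 2, 0]], dtype=float)

print(np.all(pts @ A.T <= b + 1e-9))         # True: all three points are in POLLY
print(np.allclose(pts @ pi, pi0))            # True: all three satisfy pi x = pi0, so they are in F
aug = np.hstack([pts, np.ones((3, 1))])
print(np.linalg.matrix_rank(aug) == 3)       # True: affinely independent, so dim(F) >= 2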

Polyhedra: A Useful Facet-Proving Theorem

Put another way, if a facet F of P is represented by (π, π_0), then the set of all representations of F is obtained by taking scalar multiples of (π, π_0) plus linear combinations of the rows of the equality set of P.
We can use this to actually prove that an inequality represents a facet!
(3.6) Let M^= index the equality set of P ⊆ R^n, with corresponding rows (A^=, b^=), and let F = {x ∈ P : π^T x = π_0} be a proper face of P. The following statements are equivalent:
1. F is a facet of P.
2. If λ^T x = λ_0 for all x ∈ F, then there exist α ∈ R and u ∈ R^{|M^=|} such that (λ, λ_0) = (απ + u^T A^=, απ_0 + u^T b^=).
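(Added illustration, not from the slides.) For the facet F of POLLY found above, the two representations (−1, 0, 0, 0, 0 | 0), that is x_1 ≥ 0 written as −x_1 ≤ 0, and (2, 10, −5, 5, −3 | 10) should, by this theorem, differ only by a scalar multiple plus a linear combination of the rows of (A^=, b^=). A least-squares solve confirms that such α and u exist:

import numpy as np

# Two representations of the same facet F of POLLY, in (pi | pi0) form with pi x <= pi0:
facet_rep = np.array([-1, 0, 0, 0, 0, 0], dtype=float)       # x1 >= 0, written as -x1 <= 0
other_rep = np.array([ 2, 10, -5, 5, -3, 10], dtype=float)   # 2x1 + 10x2 - 5x3 + 5x4 - 3x5 <= 10

# Rows of the equality set (A=, b=) of POLLY:
eq_rows = np.array([[ 1,  0,  0,  0, -1,  0],
                    [-1,  0,  0,  0,  1,  0],
                    [ 0,  2, -1,  1,  0,  2],
                    [ 0, -4,  2, -2,  0, -4]], dtype=float)

# Look for alpha and u with other_rep = alpha * facet_rep + u^T (A=, b=).
B = np.vstack([facet_rep, eq_rows]).T          # 6 x 5 system: columns are facet_rep and the equality rows
coeffs = np.linalg.lstsq(B, other_rep, rcond=None)[0]
print(np.allclose(B @ coeffs, other_rep))      # True: the system is consistent, as Theorem 3.6 predicts
print(coeffs[0])                               # alpha = 1 (uniquely determined); the u part is not unique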