ORIE 6300 Mathematical Programming I, August 25, 2016. Recitation 1.


Lecturer: Calvin Wylie. Scribe: Mateo Díaz. (Based on previous notes of Chaoxu Tong.)

1 Linear Algebra Review

1.1 Independence, Spanning, and Dimension

Definition 1. A (usually infinite) set of vectors S is a vector space if for all x, y ∈ S and λ ∈ R: (a) x + y ∈ S, (b) λx ∈ S, and (c) 0 ∈ S.

Definition 2. A set of vectors x_1, ..., x_k is said to be linearly dependent if there exists a vector λ ≠ 0 such that ∑_{i=1}^k λ_i x_i = 0. Otherwise, the set is linearly independent.

Claim 1. If S is linearly dependent and S ⊆ T, then T is also linearly dependent. If T is linearly independent and S ⊆ T, then S is also linearly independent.

Proof: If T = S, we are done. Otherwise, WLOG T = {x_1, ..., x_k} and S = {x_1, ..., x_l} where l < k. Since S is linearly dependent, there exists λ ∈ R^l, λ ≠ 0, such that ∑_{i=1}^l λ_i x_i = 0. Letting λ_{l+1} = ... = λ_k = 0 gives ∑_{i=1}^k λ_i x_i = 0. Thus, T is linearly dependent. The second statement is just the contrapositive of the first.

Definition 3. A set S is said to span a vector space V if every element of V can be written as a linear combination of vectors in S.

Definition 4. For a set of vectors T, span(T) is the set of all vectors that can be expressed as linear combinations of vectors in T.

Claim 2. For any set T, span(T) is a vector space.

Proof: 0 is trivially in span(T). Suppose x and y are linear combinations of vectors in T. Then x + y and λx are also linear combinations of vectors in T. Thus, span(T) is a vector space.

Fact 1. span(T) is the largest vector space that T spans.

Definition 5. A set of linearly independent vectors S is a basis for a subspace V if S ⊆ V and S spans V.

Example 1. The standard basis for R^n is the set {e_1, ..., e_n}, where e_i is the vector of zeros with a 1 in the i-th position.

Claim 3. If S = {x_1, ..., x_k} is linearly dependent, then there exists j such that x_j is a linear combination of x_1, ..., x_{j-1}.

Proof: Since S is linearly dependent, there exists λ ≠ 0 such that ∑_{i=1}^k λ_i x_i = 0. Let j be the largest index such that λ_j ≠ 0. This implies that ∑_{i=1}^j λ_i x_i = 0. Since λ_j ≠ 0, we can divide and obtain x_j = -∑_{i=1}^{j-1} (λ_i / λ_j) x_i. Thus, x_j is a linear combination of x_1, ..., x_{j-1}.

Claim 4. Let S and T be linearly independent sets in a vector space V, with S a basis for V, |S| = n, and |T| = k. Then k ≤ n.

Proof: Assume k > n. Since S and T are linearly independent, no vector in S or T is zero. Let S = {x_1, ..., x_n} and T = {y_1, ..., y_k}. Since S spans V, y_1 is a linear combination of {x_1, ..., x_n}, which means the set {y_1, x_1, ..., x_n} is linearly dependent. By the previous claim, some vector in this ordered set is a linear combination of the vectors before it. It cannot be y_1, so it is some x_j. Let S_1 be {y_1, x_1, ..., x_n} with x_j removed. Since S spans V and x_j is a linear combination of elements of S_1, the set S_1 spans V as well. Now consider the set {y_2} ∪ S_1. Again, this set is linearly dependent, since S_1 spans V. Ordering its elements as {y_1, y_2, x_1, ..., x_n} (with x_j removed), we can apply the previous claim again, and the element that is a linear combination of its predecessors must be some x_l, since {y_1, y_2} are linearly independent. So let S_2 = ({y_2} ∪ S_1) \ {x_l}. Again, S_2 spans V. We can continue this process, adding elements of T one at a time, always preserving the property that each S_i spans V. However, since k > n, we eventually reach the set S_n = {y_1, ..., y_n}, which spans V. But y_{n+1} is in V, which means it is a linear combination of {y_1, ..., y_n}. This is a contradiction, since T is linearly independent. Thus, k ≤ n.

Corollary 5. All bases for a vector space V have the same cardinality.

Definition 6. The dimension of V is the size of any basis of V.

1.2 Matrices, Rank, and Invertibility

Definition 7. For a matrix A ∈ R^{m×n}, the row (column) space of A is the vector space spanned by its row (column) vectors. A has row (column) rank k if a basis of its row (column) space has size k.
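Definitions 2 and 6 can be checked computationally. Below is a minimal Python sketch (the example vectors are illustrative, not from the notes) that tests linear independence by row-reducing the given vectors and counting pivots; fractions.Fraction keeps the arithmetic exact, so there are no floating-point tolerance issues.

```python
from fractions import Fraction

def is_independent(vectors):
    """Check linear independence by row-reducing the vectors
    and counting nonzero pivot rows (exact rational arithmetic)."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    pivots = 0
    n_cols = len(rows[0]) if rows else 0
    for col in range(n_cols):
        # find a row at or below position `pivots` with a nonzero entry here
        pivot_row = next((r for r in range(pivots, len(rows))
                          if rows[r][col] != 0), None)
        if pivot_row is None:
            continue
        rows[pivots], rows[pivot_row] = rows[pivot_row], rows[pivots]
        # eliminate this column from all later rows
        for r in range(pivots + 1, len(rows)):
            factor = rows[r][col] / rows[pivots][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivots])]
        pivots += 1
    # the vectors are independent iff every one of them yields a pivot
    return pivots == len(rows)

# the standard basis e_1, e_2, e_3 of R^3 is independent
print(is_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
# {x, 2x} is dependent: 2*x + (-1)*(2x) = 0
print(is_independent([[1, 2, 3], [2, 4, 6]]))             # False
```

By Claim 4, any three vectors in R^2 are dependent, which the same function confirms.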
Claim 6. For any matrix A, the row rank and the column rank of A are equal.

Proof: Consider a matrix A ∈ R^{m×n}. Assume its row rank is k, and that {y^1, ..., y^k} is a basis for the row space. Then the i-th row r_i = (a_{i1}, ..., a_{in}) can be expressed as ∑_{r=1}^k λ_{ir} y^r. Looking at the j-th coordinate of this sum, we have a_{ij} = ∑_{r=1}^k λ_{ir} y^r_j. Since this holds for every i, the j-th column c_j = (a_{1j}, ..., a_{mj}) can be expressed as ∑_{r=1}^k y^r_j z^r, where z^r = (λ_{1r}, ..., λ_{mr}). This means every column of A is a linear combination of the same k vectors, so the column space can have dimension no larger than k. Hence the column rank of A is at most the row rank of A. Applying this argument to the columns of A gives the other inequality, that the row rank of A is at most the column rank of A. Thus, the row rank and the column rank are the same, and we can refer to both simply as the rank of A.
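Claim 6 is easy to sanity-check numerically: computing the rank of a matrix and of its transpose by Gaussian elimination (a small sketch with an illustrative matrix; exact arithmetic via fractions.Fraction) gives the same number.

```python
from fractions import Fraction

def rank(A):
    """Rank of A, computed by Gaussian elimination with exact arithmetic."""
    rows = [[Fraction(x) for x in row] for row in A]
    pivots = 0
    for col in range(len(rows[0])):
        pivot_row = next((r for r in range(pivots, len(rows))
                          if rows[r][col] != 0), None)
        if pivot_row is None:
            continue
        rows[pivots], rows[pivot_row] = rows[pivot_row], rows[pivots]
        for r in range(pivots + 1, len(rows)):
            factor = rows[r][col] / rows[pivots][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivots])]
        pivots += 1
    return pivots

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [2, 4, 6],   # = 2 * row 1, so the row rank is 2
     [0, 1, 1]]
print(rank(A), rank(transpose(A)))  # 2 2
```

The row rank of A equals the column rank of A, i.e. the row rank of its transpose, exactly as Claim 6 asserts.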

Fact 2 rank(a) min{m,n} Definition 8 (Matrix multiplication) Given matrix A R m r, B R r n, we have that C R m n is the product of A and B, denoted AB = C if C is the matrix where C ij = r k=1 a ikb kj. c 21 = a 21 a 22 a 23 a 24 b 11 b 21 b 31 b 41 Matrix multiplication is associative and distributive, but not commutative. Definition 9 A matrix A R n n has inverse B if AB = BA = I n, where 1 0 0 0 1 I n =.... 0 1 A matrix is invertible or nonsingular if it has an inverse. Otherwise it is singular. The inverse of a matrix A is denoted A 1. Claim 7 If A R n n has an inverse, it is unique. Proof: is Assume A has inverses B and C. Then consider D = BAC. Associating one way, this Associating the other way, this is D = B(AC) = B(I n ) = B. D = (BA)C = (I n )C = C. This implies that B = C, which implies that the inverse is unique. Fact 3 If AB = I n then BA = I n. Claim 8 If A, B invertible, then AB is invertible. Proof: The inverse of AB is B 1 A 1 since B 1 A 1 AB = B 1 I n B = B 1 B = I n. Claim 9 A matrix A R n n is invertible iff it has rank n. 1-3

Proof: (⇒) Assume A has rank k < n and inverse A^{-1}. Since A does not have rank n, its columns a_1, ..., a_n are dependent. Thus, there is a λ ≠ 0 such that ∑_{i=1}^n λ_i a_i = 0, or in matrix notation, Aλ = 0. Now consider A^{-1}Aλ. Associating one way, we have (A^{-1}A)λ = I_n λ = λ, but associating the other way, A^{-1}(Aλ) = A^{-1} 0 = 0. This is a contradiction, since λ ≠ 0. Thus, an invertible A must have rank n.

(⇐) Assume A has rank n. Then the columns of A span R^n, so we can write any vector in R^n as a linear combination of the columns of A. In particular, for each j we can write e_j as ∑_{i=1}^n λ^j_i a_i. Letting B be the matrix whose columns are λ^1, ..., λ^n, we see that AB = I_n. Thus, A is invertible.

1.3 Solving Systems of Equations

Given a matrix A ∈ R^{m×n} and a (column) vector b ∈ R^m, it is often useful to solve for a vector x ∈ R^n satisfying Ax = b. A system of equations can have no solutions, a unique solution, or infinitely many solutions.

Fact 4. The system Ax = b has no solution if and only if b is not in the column space of A.

Fact 5. If the system Ax = b has at least one solution and rank(A) < n, then it has infinitely many solutions.

Definition 10. A matrix A ∈ R^{m×n} with m ≤ n is said to have full rank if rank(A) = m.

Claim 10. If A has full rank, then the system Ax = b always has a solution.

Proof: Since A has full rank, it has column rank m, which means we can find m linearly independent columns of A. WLOG, let those be the first m columns. Then the matrix B consisting of those m columns is invertible, and if we left-multiply by B^{-1}, the first m columns of B^{-1}A form the identity matrix. Thus, if y = B^{-1}b, then one solution to Ax = b is the vector x = (y_1, ..., y_m, 0, ..., 0), since this x satisfies B^{-1}Ax = B^{-1}b.

So we can sometimes guarantee that a solution to a system of equations exists. But how can we actually find such a solution?
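Before turning to an elimination method, note that the proof of Claim 10 is constructive and can be sketched directly: pick m independent columns B of A, solve By = b, and pad the remaining coordinates with zeros. A minimal sketch, assuming (as in the WLOG step) that the first m columns are the independent ones; the example A and b are illustrative.

```python
from fractions import Fraction

def solve_square(B, b):
    """Solve B y = b for an invertible square B by reducing [B | b]."""
    n = len(B)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(B, b)]
    for col in range(n):
        p = next(r for r in range(col, n) if M[r][col] != 0)  # pivot row
        M[col], M[p] = M[p], M[col]
        M[col] = [x / M[col][col] for x in M[col]]            # scale pivot to 1
        for r in range(n):
            if r != col and M[r][col] != 0:                   # clear the column
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [row[n] for row in M]

# A is 2x3 with full (row) rank; its first two columns form an invertible B
A = [[1, 1, 2],
     [0, 1, 3]]
b = [5, 7]
B = [row[:2] for row in A]
y = solve_square(B, b)
x = y + [Fraction(0)]  # pad the non-basic coordinate with 0, as in the proof
print([int(v) for v in x])  # [-2, 7, 0]
```

Plugging x back in confirms Ax = b, so a solution exists even though A is not square.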
Example 2. Find the solutions of the system Ax = b, where

A = [  1  2  3 ]        b = [  9 ]
    [ -2  1 -1 ]            [ -8 ]
    [ -3  0  1 ]            [ -3 ]

In order to determine the set of solutions x, we reduce the problem using elementary row operations.

Definition 11. The elementary row operations are:

(1) Scaling a row by some nonzero constant, e.g. scaling row 1 by 2:

[ 2 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

(2) Interchanging two rows, e.g. swapping rows 1 and 2:

[ 0 1 0 ]
[ 1 0 0 ]
[ 0 0 1 ]

(3) Adding some multiple of one row to another, e.g. adding 2 times row 2 to row 1:

[ 1 2 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

Each elementary row operation corresponds to left-multiplying by a certain square matrix, known as an elementary matrix; the three matrices displayed above perform the three operations listed.

Fact 6. All elementary matrices are invertible.

Using this fact, we can apply elementary row operations to our matrix and right-hand-side vector to simplify the problem. Since elementary matrices are invertible, the solution set is preserved.

Claim 11. Let E be an invertible matrix. Then x satisfies Ax = b iff x satisfies EAx = Eb.

Proof: If Ax = b, left-multiplying by E gives EAx = Eb. If EAx = Eb, left-multiplying by E^{-1} gives Ax = b.

Now we can define the Gauss-Jordan elimination method for solving systems of equations. This method applies elementary row operations to the matrix and the right-hand-side vector to reduce the problem to a simpler form.

Example 3. Solve Ax = b for the previously defined A and b. Start by augmenting the matrix with the RHS vector:

[  1  2  3 |  9 ]
[ -2  1 -1 | -8 ]
[ -3  0  1 | -3 ]

Since the first element of the first row is 1, eliminate all entries in the first column under that element (add 2 times row 1 to row 2, and 3 times row 1 to row 3):

[ 1 2  3 |  9 ]
[ 0 5  5 | 10 ]
[ 0 6 10 | 24 ]

The second element of the second row is nonzero, but not 1, so scale that row by 1/5:

[ 1 2  3 |  9 ]
[ 0 1  1 |  2 ]
[ 0 6 10 | 24 ]

Eliminate the rest of the second column below the pivot (subtract 6 times row 2 from row 3):

[ 1 2 3 |  9 ]
[ 0 1 1 |  2 ]
[ 0 0 4 | 12 ]

Scale the third row so that its leading nonzero entry is 1:

[ 1 2 3 | 9 ]
[ 0 1 1 | 2 ]
[ 0 0 1 | 3 ]

Eliminate the entries above the diagonal, first in the second column (subtract 2 times row 2 from row 1):

[ 1 0 1 | 5 ]
[ 0 1 1 | 2 ]
[ 0 0 1 | 3 ]

and then in the third column (subtract row 3 from rows 1 and 2):

[ 1 0 0 |  2 ]
[ 0 1 0 | -1 ]
[ 0 0 1 |  3 ]

Thus, the unique solution to the original problem is x = (2, -1, 3).

Note that in this example the leading elements of the rows were never zero, so we could always scale them to 1. If one of these elements were zero, we would either use the interchange operation to swap in a row with a nonzero entry in that position, or else continue to the next column if the current column had only zeros in the active rows.
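The procedure traced above can be automated. Below is a minimal Gauss-Jordan sketch in Python (exact arithmetic via fractions.Fraction, not part of the original notes); the comments mark where each of the three elementary row operations of Definition 11 is applied. The entries of A and b are those inferred from the intermediate tableaux of Example 3, and the code recovers the same solution.

```python
from fractions import Fraction

def gauss_jordan_solve(A, b):
    """Reduce the augmented matrix [A | b] to reduced row echelon form
    (assuming A is square and invertible) and read off the solution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]         # operation (2): interchange rows
        M[col] = [x / M[col][col] for x in M[col]]  # operation (1): scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:         # operation (3): add a row multiple
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [row[n] for row in M]

# the system of Examples 2 and 3
A = [[1, 2, 3],
     [-2, 1, -1],
     [-3, 0, 1]]
b = [9, -8, -3]
x = gauss_jordan_solve(A, b)
print([int(v) for v in x])  # [2, -1, 3]
```

Unlike the hand computation, this sketch clears each column above and below the pivot in one pass rather than doing a separate backward sweep; by Claim 11 the solution set is unchanged either way, since every step is left-multiplication by an invertible elementary matrix.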