Linear programming: theory, algorithms and applications


1 Linear programming: theory, algorithms and applications

Department of Differential Equations, Budapest, 2014/2015 Fall

2 Vector spaces

A nonempty set $L$ equipped with addition and multiplication by scalars is a linear space over the scalar field $\Gamma$ if for any $x, y, z \in L$ and $\lambda, \mu \in \Gamma$:

$$x + y \in L \qquad (1)$$
$$x + y = y + x \qquad (2)$$
$$(x + y) + z = x + (y + z) \qquad (3)$$

there exists a so-called neutral or zero element $0 \in L$ for which

$$x + 0 = x, \qquad (4)$$

and for every $x \in L$ there is a negative $-x \in L$ with

$$(-x) + x = 0. \qquad (5)$$

Furthermore,

$$\lambda x \in L \qquad (6)$$
$$(\lambda + \mu) x = \lambda x + \mu x \qquad (7)$$
$$\lambda (x + y) = \lambda x + \lambda y \qquad (8)$$
$$(\lambda \mu) x = \lambda (\mu x). \qquad (9)$$

If $1$ is the multiplicative unit of $\Gamma$, then

$$1 \cdot x = x. \qquad (10)$$

3 Vector spaces (ctd.)

The set of ordered $m$-tuples is
$$L = \Gamma^m = \{x \mid x = (x_1, x_2, \ldots, x_m), \text{ where } x_i \in \Gamma\}.$$
In most of the applications $\Gamma = \mathbb{R}$. Elements of $\mathbb{R}^m$ will be rendered as column vectors; for an $x \in \mathbb{R}^m$, $x^T$ is a row vector, the transpose of the vector $x$.

A nonempty subset $L' \subseteq L$ is a subspace of the linear space $L$ if $L'$ itself fulfills the above axioms of a linear space.

Proposition. Let $L$ be a linear space over the scalar field $\Gamma$. A nonempty subset $L' \subseteq L$ is a subspace of $L$ if and only if for any $x, y \in L'$ and $\lambda \in \Gamma$: $x + y \in L'$ and $\lambda x \in L'$.

Let $L$ be a vector space and $a_1, a_2, \ldots, a_k \in L$. Consider the set
$$V = \{v : v = v_1 a_1 + v_2 a_2 + \cdots + v_k a_k, \text{ where } v_1, v_2, \ldots, v_k \in \mathbb{R}\},$$
which consists of all the linear combinations of the vectors $a_1, a_2, \ldots, a_k$.

Proposition. $V \subseteq L$ is a linear subspace. $V$ is called the subspace generated by the vectors $a_1, a_2, \ldots, a_k$ and is denoted by $\mathcal{L}(\{a_1, a_2, \ldots, a_k\})$.

Let us define the Minkowski sum of the vector sets $L_1, L_2 \subseteq L$ as
$$V = L_1 + L_2 = \{v \in L : v = l_1 + l_2, \text{ where } l_1 \in L_1,\ l_2 \in L_2\}.$$

4 Exercises

1. Let $a, b \in \mathbb{R}$. Consider the set $F$ of all continuous functions $f : [a, b] \to \mathbb{R}$. The sum of two such functions $f, g \in F$ is defined by $(f + g)(x) := f(x) + g(x)$ for all $x \in [a, b]$. The multiplication of a function $f$ by a scalar $u \in \mathbb{R}$ is defined by $(u f)(x) := u\, f(x)$ for all $x \in [a, b]$. Prove that the set $F$ is a vector space with the given operations.

2. Let $L$ be a vector space and $L_1, L_2, \ldots, L_k \subseteq L$ its subspaces. Prove that (i) $V = L_1 + L_2 \subseteq L$ is a subspace; (ii) $L_1 + L_2 + \cdots + L_k$ is a subspace; and (iii) $L_1 \cap L_2 \cap \cdots \cap L_k$ is a subspace.

3. Let $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be a set of vectors. Prove that (i) $\mathcal{L}(a_1, \ldots, a_j, \ldots, a_n) = \mathcal{L}(a_1, \ldots, \lambda a_j, \ldots, a_n)$, where $\lambda \in \mathbb{R}$, $\lambda \neq 0$, and (ii) $\mathcal{L}(a_1, \ldots, a_j, \ldots, a_n) = \mathcal{L}(a_1, \ldots, a_j + a_k, \ldots, a_n)$ for all $1 \leq k \leq n$.

5 Scalar product

Definition. Let the vector space $L$ be given over the scalar field $\Gamma$. Consider a mapping $\langle \cdot, \cdot \rangle : L \times L \to \Gamma$ which assigns to each ordered pair of elements of $L$ a number from $\Gamma$. The mapping $\langle \cdot, \cdot \rangle$ is called a scalar product (dot product) if it satisfies the conditions below:

$$\langle a, b \rangle = \langle b, a \rangle$$
$$\langle a, b + c \rangle = \langle a, b \rangle + \langle a, c \rangle$$
$$\lambda \langle a, b \rangle = \langle \lambda a, b \rangle$$
$$\langle a, a \rangle \geq 0$$
$$\langle a, a \rangle = 0 \text{ if and only if } a = 0,$$

where $a, b, c$ are any members of the vector space, $\lambda \in \Gamma$ and $0 \in L$ is the null element of the vector space.

Let $a, b \in \mathbb{R}^m$. Their scalar product is defined by
$$\langle a, b \rangle = a_1 b_1 + a_2 b_2 + \cdots + a_m b_m = a^T b. \qquad (11)$$

Fact. On the space $\mathbb{R}^m$ the mapping given by (11) is in fact a scalar product.

Hadamard product. The vector
$$a \circ b := (a_1 b_1, a_2 b_2, \ldots, a_m b_m)^T$$
is the so-called Hadamard product of $a, b \in \mathbb{R}^m$, whose coordinates are the products of the corresponding coordinates of $a$ and $b$.
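Both products are one-liners in numpy; the snippet below is my illustration (the notes themselves contain no code) and just checks the two definitions on a small pair of vectors.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

dot = a @ b        # scalar product <a, b> = a^T b = 4 - 2 + 6 = 8
hadamard = a * b   # Hadamard product: coordinatewise products

print(dot)         # 8.0
print(hadamard)    # [ 4. -2.  6.]
```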

6 Cauchy–Schwarz–Bunyakovsky inequality

Theorem. Let $L$ be a vector space over the scalar field $\Gamma$. For any two vectors $a, b \in L$
$$\langle a, b \rangle^2 \leq \langle a, a \rangle\, \langle b, b \rangle, \qquad (12)$$
and equality holds if and only if $a = \lambda b$ for some $\lambda \in \Gamma$.

Proof. Using the fourth axiom of the scalar product,
$$\langle a - \lambda b,\ a - \lambda b \rangle \geq 0,$$
and equality holds when $a = \lambda b$. After some trivial alterations,
$$\lambda^2 \langle b, b \rangle - \lambda\, (2 \langle a, b \rangle) + \langle a, a \rangle \geq 0 \quad \text{for all } \lambda \in \Gamma.$$
But we know that the nonnegativity of a second degree expression is equivalent to its discriminant being nonpositive, that is,
$$4 \langle a, b \rangle^2 - 4 \langle a, a \rangle \langle b, b \rangle \leq 0.$$
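A quick numeric sanity check of (12) and of the equality case, again my own illustration with arbitrary test vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), rng.standard_normal(5)

# <a, b>^2 <= <a, a> <b, b>
assert (a @ b) ** 2 <= (a @ a) * (b @ b) + 1e-12

# equality holds when a is a scalar multiple of b
a = 3.0 * b
assert abs((a @ b) ** 2 - (a @ a) * (b @ b)) < 1e-9
```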

7 Norm, Euclidean spaces

Definition. Let a vector space $L$ be given over the scalar field $\Gamma$. A mapping $\|\cdot\| : L \to \mathbb{R}$ is a (vector) norm if the following axioms hold:

$$\|a\| \geq 0, \quad \text{and } \|a\| = 0 \text{ if and only if } a = 0,$$
$$\|\lambda a\| = |\lambda|\, \|a\| \quad \text{for all } \lambda \in \Gamma,$$
$$\|a + b\| \leq \|a\| + \|b\|,$$

and equality holds in the last inequality if and only if $a = \lambda b$ for some $\lambda \in \Gamma$.

We can define a vector norm from a scalar product by
$$\|a\| := \sqrt{\langle a, a \rangle};$$
it is easily seen that the above requirements are indeed fulfilled. A norm defined this way is called a Euclidean norm, and vector spaces equipped with a Euclidean norm are called Euclidean spaces.

Given a norm we can define a metric over the vector space by
$$\mathrm{dis}(a, b) := \|a - b\|. \qquad (13)$$

8 Metric, metric spaces

Proposition. Let a vector space $L$ be given over the scalar field $\Gamma$. The above mentioned metric has the following properties:

$$\mathrm{dis}(a, b) = \mathrm{dis}(b, a),$$
$$\mathrm{dis}(a, b) \geq 0, \quad \mathrm{dis}(a, b) = 0 \iff a = b,$$
$$\mathrm{dis}(a, b) + \mathrm{dis}(b, c) \geq \mathrm{dis}(a, c),$$

where $a, b, c \in L$ are arbitrary vectors.

Remark. When a mapping with the above specified properties is given on the pairs of elements of a set, it is called a metric, and we speak about a metric space.

9 Exercises

1. Consider the set $F$ of continuous functions defined on $[a, b]$. For $f, g \in F$ define the mapping, taking its values in $\mathbb{R}$,
$$\langle f, g \rangle := \int_a^b f(x)\, g(x)\, dx. \qquad (14)$$
Prove that (14) is a scalar product on the usual vector space structure given on $F$.

2. Let $F$ be the set defined in the previous exercise. (i) For any $f, g \in F$ prove the following inequality:
$$\left( \int_a^b f(x)\, g(x)\, dx \right)^2 \leq \left( \int_a^b f^2(x)\, dx \right) \left( \int_a^b g^2(x)\, dx \right). \qquad (15)$$
(ii) What is the sufficient and necessary condition of equality in (15)?

10 Generating systems

Let $a_j \in L$ be vectors and $J = \{1, 2, \ldots, n\}$ the set of their indices. We say that a vector $b \in L$ is a linear combination of the $a_j$ if
$$b = \sum_{j=1}^n u_j a_j, \quad \text{where } u_j \in \mathbb{R} \text{ for all } j \in J.$$

Definition. For $J_G \subseteq J$, $\{a_i \mid i \in J_G\}$ is a generating system for $\{a_j \mid j \in J\}$ if every $a_j$, $j \in \bar{J}_G$ (where $\bar{J}_G = J \setminus J_G$), is a linear combination of $\{a_i \mid i \in J_G\}$.

Proposition. $\mathcal{L}(\{a_j \mid j \in J_G\}) \subseteq L$ is a subspace.

Let $J_G \subseteq J$ be the set of indices of the generating system. Then
$$a_j = \sum_{i \in J_G} t_{ij}\, a_i, \qquad j \in \bar{J}_G,$$
where $t_{ij}$ is the coefficient of $a_i$, $i \in J_G$, in the representation of $a_j$, $j \in \bar{J}_G$.

Generating table: the rows are labelled by the generating vectors $a_i$, $i \in J_G$, the columns by the vectors $a_j$, $j \in \bar{J}_G$, and the entry in row $i$ and column $j$ is the coefficient $t_{ij}$.

11 Generating systems (example)

Let the vectors $a_1^T = (1, 0, 3)$, $a_2^T = (0, 1, 1)$, $a_3^T = (1, 0, 1)$, $a_4^T = (0, 2, 1)$, $a_5^T = (1, 1, 1)$ and $b^T = (3, 5, 5)$ be given. The set of indices is $J = \{1, 2, 3, 4, 5\}$.

$J_G = \{1, 2, 3, 5\}$ is a generating system, because $a_4 = a_2 - a_3 + a_5$, and $b$ is also a linear combination of $a_1, a_2, a_3, a_5$. The representation is not unique: both $a_4$ and $b$ admit other expressions in terms of the generating vectors. This can be rendered in tabular form: the tables $T$ and $\hat{T}$ have rows $a_1, a_2, a_3, a_5$ and columns $a_4, b$, listing the coefficients of the two pairs of representations. [The numeric entries of $T$ and $\hat{T}$, and the exact alternative representations, are not recoverable from the transcription.]

12 Example (ctd.), exercises

It was already mentioned that a generating system can be chosen in many ways. If $J_G = \{3, 4, 5\}$ in the previous example, then we get the corresponding table expressing the remaining vectors $a_1, a_2$ and $b$ in terms of $a_3, a_4, a_5$. [Its entries are not recoverable from the transcription.]

1. Exercise. Let the set of vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be given. Furthermore, let $J_G$ denote the index set of a generating system and $\bar{J}_G = J \setminus J_G$ the set of the indices not belonging to it. Suppose that $\bar{J}_G \neq \emptyset$ and let $\hat{J}$ be an arbitrary nonempty subset of $\bar{J}_G$. Check that the vectors given by the index set $J_G \cup \hat{J}$ also form a generating system of the vectors $a_1, a_2, \ldots, a_n$.

2. Exercise. Let $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be given. Denote by $J_G$ and $J_G'$ two differing index sets of generating systems. Prove that if $|J_G| < |J_G'|$, then there exists an index $k \in J_G' \setminus J_G$ so that $J_G' \setminus \{k\}$ is also a generating system of the given system of vectors.

13 Pivoting

Theorem. Let the system of vectors $\{a_1, a_2, \ldots, a_n\} \subseteq L$ be given, where $L$ is a vector space over the field $\Gamma$. If $t_{rs} \neq 0$, then the vector $a_r$ contained in the generating system $J_G$ can be swapped with a vector $a_s$ ($s \in \bar{J}_G$) not contained there, in the following way:

1. $t'_{ij} = t_{ij} - \dfrac{t_{is}\, t_{rj}}{t_{rs}}$, for $i \in J_G'$, $i \neq s$ and $j \in \bar{J}_G'$, $j \neq r$;
2. $t'_{sj} = \dfrac{t_{rj}}{t_{rs}}$, for $j \in \bar{J}_G'$, $j \neq r$;
3. $t'_{ir} = -\dfrac{t_{is}}{t_{rs}}$, for $i \in J_G'$, $i \neq s$;
4. $t'_{sr} = \dfrac{1}{t_{rs}}$;

where $J_G' = (J_G \setminus \{r\}) \cup \{s\}$ will be the set of indices of the new generating system and the $t'_{ij}$ are the new coefficients.

Proof. The opening generating table has rows $i \in J_G$ and columns $j \in \bar{J}_G$ with entries $t_{ij}$; the pivot element $t_{rs}$ sits in row $r$ and column $s$.

14 Pivoting (ctd.)

Let us pivot on the position $t_{rs} \neq 0$. As a result we get the new table, with rows indexed by $J_G' = (J_G \setminus \{r\}) \cup \{s\}$ and columns by $\bar{J}_G' = (\bar{J}_G \setminus \{s\}) \cup \{r\}$.

Let us start from the representation of the vector $a_s$:
$$a_s = t_{rs}\, a_r + \sum_{i \in J_G \setminus \{r\}} t_{is}\, a_i,$$
and now let us express $a_r$:
$$a_r = \frac{1}{t_{rs}}\, a_s + \sum_{i \in J_G \setminus \{r\}} \left( -\frac{t_{is}}{t_{rs}} \right) a_i.$$
This way we verified rules 3 and 4. Starting from $a_j$, $j \in \bar{J}_G$, $j \neq s$, we can check rules 1 and 2 similarly.
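The four update rules amount to one rank-one update of the tableau. The sketch below is my own numpy rendering (not from the notes); it addresses the pivot row and column by position, so after the pivot, row $r$ of the array belongs to $a_s$ and column $s$ to $a_r$, and the caller keeps track of the swapped labels.

```python
import numpy as np

def pivot(T, r, s):
    """One pivot of a short tableau T on the nonzero entry (r, s),
    implementing rules 1-4 above; returns the new tableau."""
    assert T[r, s] != 0
    Tn = T - np.outer(T[:, s], T[r, :]) / T[r, s]  # rule 1: t'_ij = t_ij - t_is t_rj / t_rs
    Tn[r, :] = T[r, :] / T[r, s]                   # rule 2: t'_sj = t_rj / t_rs
    Tn[:, s] = -T[:, s] / T[r, s]                  # rule 3: t'_ir = -t_is / t_rs
    Tn[r, s] = 1.0 / T[r, s]                       # rule 4: t'_sr = 1 / t_rs
    return Tn

# pivoting twice on the same position swaps a_r and a_s back,
# so it restores the original tableau
T = np.array([[2.0, 1.0], [3.0, -1.0]])
assert np.allclose(pivot(pivot(T, 0, 1), 0, 1), T)
```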

15 The computational complexity of pivoting

Lemma. Let the pivoting tableau $T$ be given in short form, where $|J_G| = m$ and $|\bar{J}_G| = n - m$. Performing a pivot operation takes at most $(n - m - 1)(m - 1)$ additions and $2m(n - m) - n + 1$ multiplications.

Proof. Computing a column of the short tableau takes at most $2(m - 1) + 1 = 2m - 1$ multiplications and $m - 1$ additions. This computation is applied to $(n - m - 1)$ columns. Determining the vector $p_i^i$ takes $m$ multiplications. Hence pivoting on the short tableau takes no more than $(2m - 1)(n - m - 1) + m = 2m(n - m) - n + 1$ multiplications and $(m - 1)(n - m - 1)$ additions.

The short pivot tableau $T$, as the family of its columns, can be rendered in the form $T = [t_1 \mid t_2 \mid \cdots \mid t_{n-m}]$. Let $t_{ij} \neq 0$ ($i \in J_G$, $j \in \bar{J}_G$) be the pivot element. To understand pivoting, consider the matrix $P^i \in \mathbb{R}^{m \times m}$ whose $i$th column is
$$(p_i^i)^T = \left( -\frac{t_{1j}}{t_{ij}},\ -\frac{t_{2j}}{t_{ij}},\ \ldots,\ -\frac{t_{i-1,j}}{t_{ij}},\ \frac{1}{t_{ij}},\ -\frac{t_{i+1,j}}{t_{ij}},\ \ldots,\ -\frac{t_{mj}}{t_{ij}} \right),$$
while the other columns are the appropriate unit vectors. Then the $k$th column of the new short pivot tableau $T'$ has the form
$$t'_k = \begin{cases} P^i\, t_k, & \text{if } k \neq j, \\ p_i^i, & \text{if } k = j. \end{cases} \qquad (16)$$
Furthermore, the index set of the new generating system is $J_G' = (J_G \setminus \{i\}) \cup \{j\}$, and the indices of the other vectors are in $\bar{J}_G' = (\bar{J}_G \setminus \{j\}) \cup \{i\}$.

16 An exercise on pivoting

Exercise. Let the following vectors be given: $a_1 = (1, 1, 1)$, $a_2 = (2, 1, 1)$, $a_3 = (-1, 6, 4)$, $a_4 = (-2, 1, 2)$, $a_5 = (-6, 3, 4)$, $a_6 = (1, 2, 1)$, $a_7 = (-1, 1, 0)$.

a) Compute the vector $b = 3 a_1 + a_2 - 2 a_4 + a_5 - a_7$.

b) Determine which of the two given short pivot tableaux corresponds to the vector system $\{a_1, a_2, \ldots, a_7\}$. Explain your answer. [$T_1$ is a short pivot tableau with columns $a_5, a_7, a_6$ and four basis rows; $T_2$ has columns $a_2, a_3, a_5$ and four basis rows; their numeric entries are not recoverable from the transcription.]

17 Exercise (ctd.)

[$T_3$ is a short pivot tableau with columns $a_3, a_4, a_5, a_6, a_7$ and two basis rows $a_1, a_2$; its numeric entries are not recoverable from the transcription.]

c) How many pivot elements are in the pivot tableau $T_1$? Explain your answer.

d) Using the short pivot tableau $T_2$ ($T_1$), determine the value of the pivot elements $t_{12}$ and $t_{43}$ (similarly $t_{35}$ and $t_{17}$).

e) Compute the pivot matrix corresponding to the pivot tableau $T_2$ with $t_{72}$ as the pivot element, and perform the pivoting on the short tableau.

18 Linear independence

Definition. The vector system $\{a_j \mid j \in J\}$, consisting of at least two elements, is linearly independent if there is no $a_r$, $r \in J$, which is a linear combination of the others ($\{a_j \mid j \in J \setminus \{r\}\}$). The vector system $\{a_j \mid j \in J\}$ is linearly dependent if the above definition is not fulfilled.

Lemma. The vector system $\{a_j \mid j \in J\}$ is linearly independent iff every $b$ which is a linear combination of the system has a unique representation.

Lemma. The vector system $\{a_1, a_2, \ldots, a_n\}$ is linearly independent iff
$$\sum_{j=1}^n \lambda_j a_j = 0$$
can happen only with all $\lambda_j = 0$.

19 Testing linear independence

An algorithm for deciding linear independence.

Input data: the vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ and the index set $J = \{1, 2, \ldots, n\}$; the index set $I = \{\hat{1}, \hat{2}, \ldots, \hat{m}\}$ of the unit vectors of $\mathbb{R}^m$; the short pivot tableau $T = A = [a_1 \mid a_2 \mid \cdots \mid a_n]$.

Begin
  $J_G := I$ and $\bar{J}_G := J$;
  while $J \not\subseteq J_G$ do
    if $I \cap J_G \neq \emptyset$ then
      if $t_{\hat{i}j} = 0$ for all $\hat{i} \in I \cap J_G$ and $j \in J \cap \bar{J}_G$ then
        stop: $\{a_1, a_2, \ldots, a_n\}$ is linearly dependent;
      else
        pivoting: $J_G := (J_G \setminus \{\hat{i}\}) \cup \{j\}$ and $\bar{J}_G := (\bar{J}_G \setminus \{j\}) \cup \{\hat{i}\}$, where $t_{\hat{i}j} \neq 0$ ($\hat{i} \in I \cap J_G$, $j \in J \cap \bar{J}_G$);
      endif
    else
      stop: $\{a_1, a_2, \ldots, a_n\}$ is linearly dependent;
    endif
  endwhile
  $\{a_1, a_2, \ldots, a_n\}$ is linearly independent;
end.
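A direct transcription of this loop into Python might look as follows. This is my sketch: the pivoting is done with plain row elimination instead of the short-tableau bookkeeping, which does not change the accept/reject decision.

```python
import numpy as np

def is_independent(A, eps=1e-9):
    """Decide linear independence of the columns of A (m-by-n) by repeatedly
    pivoting an outside column a_j into the basis on a row still occupied
    by a unit vector, as in the algorithm above."""
    T = np.array(A, dtype=float)
    m, n = T.shape
    unit_rows = set(range(m))      # rows whose basis vector is still e_i
    outside = set(range(n))        # indices j with a_j not yet in the basis
    while outside:
        if not unit_rows:
            return False           # basis is full, remaining a_j depend on it
        candidates = [(i, j) for i in unit_rows for j in outside
                      if abs(T[i, j]) > eps]
        if not candidates:
            return False           # t_ij = 0 for all candidates: dependent
        i, j = candidates[0]
        T[i, :] /= T[i, j]         # pivot on (i, j) ...
        for r in range(m):         # ... eliminating column j from other rows
            if r != i:
                T[r, :] -= T[r, j] * T[i, :]
        unit_rows.remove(i)
        outside.remove(j)
    return True

print(is_independent(np.array([[1, 0], [0, 1], [1, 1]])))           # True
print(is_independent(np.array([[1, 2, 3], [2, 4, 6], [0, 0, 1]])))  # False
```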

20 Complexity of testing linear independence

Lemma. Deciding whether a set of vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ is linearly independent takes at most $\min\{m, n\}$ steps, and the number of computational operations is $O(m \cdot n \cdot \min\{m, n\})$.

Proof. The previous algorithm is obviously finite, because the cardinality of the index set $J \cap \bar{J}_G$ diminishes by one in every step, so the algorithm terminates in at most $n$ steps. On the other hand, the cardinality of the index set $I \cap J_G$ decreases by one in every step as well, therefore the number of steps is also at most $m$. If $r$ denotes the number of steps of the algorithm, then $r \leq \min\{m, n\}$. In order to estimate the number of arithmetical operations, let us notice that in every iteration exactly one pivoting is performed, whose cost is $O(m \cdot n)$. Putting this together, the number of operations of the previous algorithm is $O(m \cdot n \cdot \min\{m, n\})$.

21 Example

Let us decide the linear independence of the columns $a_1, a_2, a_3, a_4$ of a matrix $A \in \mathbb{R}^{5 \times 4}$. [The entries of $A$ are not recoverable from the transcription.] Take the tableau consisting of the vectors $a_1, a_2, a_3, a_4$ and $e_1, e_2, e_3, e_4, e_5$.

Now $J \cap \bar{J}_G = \{1, 2, 3, 4\}$ and $I \cap J_G = \{\hat{1}, \hat{2}, \hat{3}, \hat{4}, \hat{5}\}$. We can exchange the vectors $e_2$ and $a_1$ using the element $t_{21} = 1$.

22 Example (ctd.)

[After the first pivot the tableau has columns $a_1, \ldots, a_4, e_1, \ldots, e_5$ and basis rows $e_1, a_1, e_3, e_4, e_5$; the numeric entries are not recoverable from the transcription.]

Now $J \cap \bar{J}_G = \{2, 3, 4\}$ and $I \cap J_G = \{\hat{1}, \hat{3}, \hat{4}, \hat{5}\}$. A possible pivot position in conformity with the rules of the algorithm is $t_{12} = 1$.

[After this pivot the basis rows are $a_2, a_1, e_3, e_4, e_5$.]

After pivoting, on the new tableau $J \cap \bar{J}_G = \{3, 4\}$ and $I \cap J_G = \{\hat{3}, \hat{4}, \hat{5}\}$. Another possible pivot position is $t_{33} = 2$.

23 Example (ctd.)

[After the third pivot the basis rows are $a_2, a_1, a_3, e_4, e_5$.]

Finally $J \cap \bar{J}_G = \{4\}$ and $I \cap J_G = \{\hat{4}, \hat{5}\}$. But $t_{44} = 0$ and $t_{54} = 0$, therefore the vectors $\{a_1, a_2, a_3, a_4\}$ are linearly dependent; reading off the column of $a_4$ we get $a_4 = a_1 + a_2 + a_3$.

24 Steinitz's theorem, 1913

Theorem. Let $\{a_j \mid j \in J\}$ be a system of vectors, and denote by $J_F$ an arbitrary linearly independent and by $J_G$ an arbitrary generating index set of the given system, respectively. Then $|J_F| \leq |J_G|$ holds.

Proof. We proceed indirectly. We may assume that $J = J_G \cup J_F$: if $J_G \cup J_F \subseteq J$ but $J_G \cup J_F \neq J$, then $J_G$ is obviously still a generating system of the vectors belonging to $J_G \cup J_F$. Now suppose that $|J_F| > |J_G|$.

Consider the tableau representing the vectors $\{a_j \mid j \in \bar{J}_G\}$ with the generating vectors $\{a_i \mid i \in J_G\}$, with the rows split into $J_G \cap J_F$ and $J_G \setminus J_F$ and the columns split into $J_F \setminus J_G$ and the rest.

25 Proof of Steinitz's theorem

Let us perform a pivot on a position $t_{ij} \neq 0$ with $i \in J_G \setminus J_F$ and $j \in \bar{J}_G \cap J_F$, and let us repeat this as long as possible. When the procedure has ended, $t_{ij} = 0$ holds for all $i \in J_G \setminus J_F$ and $j \in J_F \setminus J_G$. In this case

$$J_F \setminus J_G \neq \emptyset, \qquad (17)$$

because if it were empty, then all the independent vectors would be part of the generating ones, which is impossible because of the assumption $|J_F| > |J_G|$. Moreover,

$$J_G \cap J_F \neq \emptyset, \qquad (18)$$

because otherwise every row of the tableau would belong to $J_G \setminus J_F$, and then, taking (17) into account, the zero vector would be an element of the system of the independent vectors: for $j \in J_F \setminus J_G$ we would get
$$a_j = \sum_{i \in J_G \setminus J_F} 0 \cdot a_i = 0.$$

26 Proof of Steinitz's theorem (ctd.); basis

Using (17) and (18) we get that the vectors whose indices are contained in $J_F \setminus J_G$ are represented by the vectors whose indices make up $J_G \cap J_F$, which contradicts the independence of the vectors whose indices belong to $J_F$.

Let $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be vectors and $J = \{1, 2, \ldots, n\}$ a set of indices.

Basis: $\{a_i \mid i \in J_B\}$, where $J_B \subseteq J$ is linearly independent and generates $\{a_j \mid j \in J\}$.

Basis theorem. If $J_B, J_B' \subseteq J$ are two arbitrary index sets of bases of the vectors $\{a_j \mid j \in J\}$, then $|J_B| = |J_B'|$.

Now we are in the position of being able to define the rank of a system of vectors as the cardinality of any of its bases:
$$\mathrm{rank}(a_1, a_2, \ldots, a_n) = |J_B|.$$

Exercises.
1. Let $J_F, J_F' \subseteq J$ be two differing index sets of linearly independent vectors. Prove that if $|J_F| < |J_F'|$, then there exists an index $k \in J_F' \setminus J_F$ such that $J_F \cup \{k\}$ is also a linearly independent subset of the given system of vectors.
2. For any given $J_1 \subseteq J$ and $J_2 \subseteq J$: $\mathrm{rank}(J_1 \cup J_2) \leq \mathrm{rank}(J_1) + \mathrm{rank}(J_2)$.
3. Let a system of vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be given with its set of indices $J$. For any $J_1 \subseteq J$ and $J_2 \subseteq J$ the inequality
$$\mathrm{rank}(J_1 \cup J_2) + \mathrm{rank}(J_1 \cap J_2) \leq \mathrm{rank}(J_1) + \mathrm{rank}(J_2)$$
holds.

27 Transformation of the bases

Lemma. Let $J_B$ and $J_B'$ be two differing index sets of bases of the vectors $\{a_j \mid j \in J\} \subseteq \mathbb{R}^m$. We can change the basis $J_B$ by one pivoting to another basis $J_B''$ for which
$$|J_B'' \cap J_B'| = |J_B \cap J_B'| + 1.$$

Proof. From the fact that $J_B$ and $J_B'$ are two differing index sets of bases we conclude that $|J_B \cap J_B'| \leq |J_B| - 1$, and therefore $|J_B' \setminus J_B| \geq 1$. Now we show that there exist $k \in J_B \setminus J_B'$ and $l \in J_B' \setminus J_B$ for which $t_{kl} \neq 0$. Proof by contradiction: suppose that $t_{kl} = 0$ for every $k \in J_B \setminus J_B'$ and $l \in J_B' \setminus J_B$.

Case 1. If $J_B \cap J_B' = \emptyset$, then $a_l = 0$ for $l \in J_B' \setminus J_B$, i.e. the zero vector would belong to the basis $\{a_j : j \in J_B'\}$, which is a contradiction.

Case 2. If $J_B \cap J_B' \neq \emptyset$ and we have the above zero block, then any vector $a_l$, $l \in J_B' \setminus J_B$, can be expressed as a linear combination of the vectors $a_i$, $i \in J_B \cap J_B'$, which is a contradiction.

Pivot on an element $t_{kl} \neq 0$, where $k \in J_B \setminus J_B'$ and $l \in J_B' \setminus J_B$. Therefore we have the new basis $J_B'' = (J_B \setminus \{k\}) \cup \{l\}$, and so we get $|J_B'' \cap J_B'| = |J_B \cap J_B'| + 1$.

28 Example

An easy consequence of the previous lemma is that bases, and therefore basis tableaux as well, can be transformed into one another. Consider the system of vectors $\{a_1, a_2, \ldots, a_8\} \subseteq \mathbb{R}^3$, presented with the short basis tableau of $J_B = \{1, 2, 3\}$; we would like to get it from the short pivot tableau of $J_B' = \{4, 7, 8\}$. The transformation takes three pivots; the pivot elements, set in heavy type on the slides, are $t_{38}$, $t_{14}$ and $t_{27}$, respectively. Of the four tableaux, only the one belonging to the basis $\{4, 7, 8\}$ partially survives the transcription (the row of $a_8$ is lost):

         a_1    a_5    a_6    a_2    a_3
  a_4    4/3  -11/3   -4/3    1/3     2
  a_7    1/3   -2/3   -1/3    1/3     0
  a_8     .      .      .      .      .

29 Lemma

Lemma. Let $J_B$ and $J_B'$ be differing index sets of two bases of the vectors $\{a_j \mid j \in J\}$, and let $\hat{J} \subseteq J$ be arbitrary. Then
$$\mathcal{L}(\hat{t}^{(i)} \mid i \in J_B) = \mathcal{L}(\hat{t}'^{(i)} \mid i \in J_B'),$$
where $\hat{t}^{(i)}$ and $\hat{t}'^{(i)}$ denote the rows of the two basis tableaux restricted to the columns $\hat{J}$, i.e. $(\hat{t}^{(i)})_j = t_{ij}$ and $(\hat{t}'^{(i)})_j = t'_{ij}$ for $j \in \hat{J}$.

30 Matrix rank theorem

Theorem. For an arbitrary matrix $A = (a_1, a_2, \ldots, a_n) = (a^{(1)}, a^{(2)}, \ldots, a^{(m)})^T$,
$$\mathrm{rank}(a_1, a_2, \ldots, a_n) = \mathrm{rank}(a^{(1)}, a^{(2)}, \ldots, a^{(m)}).$$

Proof. The matrix $A$ is the short pivot tableau of the vectors $a_1, a_2, \ldots, a_n, e_1, \ldots, e_m$. Pivot until the basis has the form $J_B \cup I_B$ with $J_B \subseteq J$ maximal; then the rows belonging to $I_B$ vanish on the columns of $J$. Then $\mathcal{L}(a^{(1)}, a^{(2)}, \ldots, a^{(m)}) = \mathcal{L}(\hat{t}^{(k)} \mid k \in J_B)$, thus
$$\mathrm{rank}(a^{(1)}, \ldots, a^{(m)}) = \dim \mathcal{L}(a^{(1)}, \ldots, a^{(m)}) = \dim \mathcal{L}(\hat{t}^{(k)} \mid k \in J_B) = |J_B|.$$
On the other hand, $\mathrm{rank}(a_1, a_2, \ldots, a_n) = |J_B|$.
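In numpy terms the theorem says that a matrix and its transpose always have the same rank; a one-line check (my illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)

# column rank of A = row rank of A (= column rank of A^T)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```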

31 Matrix rank: an example

Let $a_1, a_2, a_3, a_4, a_5$ be the column vectors of a matrix $A \in \mathbb{R}^{3 \times 5}$. [The three tableaux of this slide, over the columns $a_1, \ldots, a_5, e_1, e_2, e_3$ with basis rows $e_1, e_2, e_3$, then $a_1, a_2, e_3$, then $a_1, a_2, a_3$, are not recoverable from the transcription.]

Let $\hat{a}^{(1)} = (0, 0, 1, 1, 3)$ and let us define the vectors $\hat{a}^{(2)}$ and $\hat{a}^{(3)}$ similarly. Then $\mathcal{L}(a^{(1)}, a^{(2)}, a^{(3)}) = \mathcal{L}(\hat{a}^{(1)}, \hat{a}^{(2)}, \hat{a}^{(3)})$, thus $\mathrm{rank}(a^{(1)}, a^{(2)}, a^{(3)}) = 3$. On the other hand, the vectors $\{a_1, a_2, a_3\}$ form a basis of the vectors $\{a_1, a_2, a_3, a_4, a_5\}$, so $\mathrm{rank}(a_1, a_2, a_3, a_4, a_5) = 3$.

32 Determining the basis of $J'$

Let us consider the vectors $a_1, a_2, \ldots, a_n \in \mathbb{R}^m$ and their index set $J$. Denote by $J' \subseteq J$ the index set of the subsystem of vectors whose rank we want to compute. Let $J_B \subseteq J$ be the index set of a known basis of the given set of vectors and $\bar{J}_B := J \setminus J_B$. Consider $J'_B = J' \cap J_B$ and $\bar{J}'_B = J' \cap \bar{J}_B$. Obviously $J' = J'_B \cup \bar{J}'_B$. Take the tableau corresponding to the basis $J_B$, with the rows split into $J'_B$ and $J_B \setminus J'_B$ and the columns into $\bar{J}'_B$ and $\bar{J}_B \setminus \bar{J}'_B$.

If all the elements of the submatrix with rows $J_B \setminus J'_B$ and columns $\bar{J}'_B$ are zero, then $\mathrm{rank}(J') = |J'_B|$; otherwise, pivoting on some nonzero element $t_{ij}$ of this submatrix, we can increase the size of the set $J'_B$. We can continue this way as long as there exists an element $t_{ij} \neq 0$ with $i \in J_B \setminus J'_B$ and $j \in \bar{J}'_B$. This simple procedure ends with no more than $\min\{|J_B \setminus J'_B|, |\bar{J}'_B|\}$ pivotings.

33 Exercises

Exercise 1. Let the index sets $J_B$ and $J_B'$ belong to two different bases of the vectors $\{a_j \mid j \in J\} \subseteq \mathbb{R}^m$. Prove that one can be transformed into the other with at most $m$ pivotings.

Exercise 2. We have the vectors $a_1, a_2, \ldots, a_n \in \mathbb{R}^m$ and their index set $J$. For a basis $J_B \subseteq J$ the vector system $\{a_i : i \in \bar{J}_B\}$ is called a co-basis of the vectors $\{a_j : j \in J\}$. Suppose we have two differing co-bases $\bar{J}_{B_1}$ and $\bar{J}_{B_2}$ of the given system of vectors. Then (i) for any $s \in \bar{J}_{B_1} \setminus \bar{J}_{B_2}$ there is an $r \in \bar{J}_{B_2} \setminus \bar{J}_{B_1}$ such that $(\bar{J}_{B_1} \setminus \{s\}) \cup \{r\}$ is a co-basis, and (ii) for any $r \in \bar{J}_{B_2} \setminus \bar{J}_{B_1}$ there is an $s \in \bar{J}_{B_1} \setminus \bar{J}_{B_2}$ such that $(\bar{J}_{B_2} \setminus \{r\}) \cup \{s\}$ is a co-basis.

34 Orthogonality

The vectors $a, b \in \mathbb{R}^m$ are called orthogonal if $a^T b = 0$.

Example. Let us consider a basis tableau of the vectors $a_1, a_2, \ldots, a_5 \in \mathbb{R}^3$ over the columns $a_1, \ldots, a_5, e_1, e_2, e_3$, with basis $J_B = \{1, 2, 3\}$. We can pick up quite simply the row vectors $t^{(i)}$, $i \in J_B$, and the (extended) column vectors $t_j$, $j \in \bar{J}_B = J_N$, respectively. [The numeric entries of the tableau and of the vectors $t^{(1)}, t^{(2)}, t^{(3)}, t_4, t_5$ are not recoverable from the transcription.] It is easy to see that for all $i \in J_B$ and for all $j \in J_N$ the equation $t_j^T\, t^{(i)} = 0$ holds.
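The example is easy to replay with numpy on any invertible basis. The numbers below are mine, since the slide's tableau was lost; the point is that every tableau row is orthogonal to every nonbasic column once that column is extended with $-1$ at its own position.

```python
import numpy as np

A = np.array([[1., 2., 0., 2., 1.],     # columns a_1..a_5, basis J_B = {1,2,3}
              [0., 1., 1., 1., -1.],
              [1., 0., 1., 3., 2.]])
T = np.linalg.solve(A[:, :3], A)        # basis tableau: T[:, :3] = I

for j in (3, 4):                        # nonbasic columns a_4, a_5
    t_j = np.zeros(5)
    t_j[:3] = T[:, j]                   # the column of a_j ...
    t_j[j] = -1.0                       # ... extended with -1 at position j
    assert np.allclose(T @ t_j, 0.0)    # every row t^(i) is orthogonal to t_j
```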

35 Orthogonality theorem

Orthogonality theorem. For any set of vectors $\{a_j \mid j \in J\}$ and corresponding bases $B$ and $B'$ we have
$$t^{(i)T}\, t'_j = 0, \qquad i \in J_B,\ j \in \bar{J}_{B'},$$
where $J_B$ and $J_{B'}$ are the appropriate index sets.

Proof. First we show that the statement is true inside one tableau (i.e. for the pivot tableau belonging to the basis $B$):
$$t^{(i)T}\, t_j = 0, \qquad i \in J_B,\ j \in \bar{J}_B.$$
Indeed, the row $t^{(i)}$ carries a $1$ in the basis position $i$ and the value $t_{ij}$ in position $j$, while the extended column vector $t_j$ carries the value $t_{ij}$ in position $i$ and $-1$ in position $j$, hence
$$t^{(i)T}\, t_j = 1 \cdot t_{ij} + (-1) \cdot t_{ij} = 0.$$
From this it follows that $t_j$ is orthogonal to the row space of the basis tableau of $B$, which is the same as the row space of the pivot tableau of the basis $B'$. Taking into account the fact that basis tableaux can be transformed into one another, we get what we wanted.

36 Composition property

Corollary. For the given vectors $a_1, a_2, \ldots, a_n, e_1, \ldots, e_m \in \mathbb{R}^m$ and for any basis of the form $J_B \cup I_B$, with $J_B \subseteq J = \{1, 2, \ldots, n\}$ and $I_B \subseteq I = \{\hat{1}, \ldots, \hat{m}\}$, we have
$$t_{kj} = y^{(k)T} a_j, \qquad k \in J_B \cup I_B,\ j \in J,$$
where $y^{(k)}$ is the part of the row $t^{(k)}$ lying under the unit-vector columns $I$.

Proof. We apply the orthogonality theorem to the row $t^{(k)}$ of the given basis tableau and to the column $t'_j$ taken from the tableau of the unit-vector basis, in which the column of $a_j$ is $a_j$ itself, extended with $-1$ at position $j$. Then
$$0 = (t'_j)^T\, t^{(k)} = -t_{kj} + y^{(k)T} a_j.$$

37 A Farkas–Minty-type theorem

Lemma. Let $\{a_1, a_2, \ldots, a_n, b\} \subseteq \mathbb{R}^m$ be given vectors and let $\{e_1, e_2, \ldots, e_m\} \subseteq \mathbb{R}^m$ be the canonical basis of $\mathbb{R}^m$. Of the two tableaux below, one and just one is valid:

Tableau 1: $b$ is outside the basis and, in its column, the entries in the rows of the basic unit vectors ($I_B$) are all zero, i.e. $b$ is a linear combination of the basis vectors $a_i$, $i \in J_B$, alone;

Tableau 2: $b$ is in the basis and, in its row, the entries in the columns of all the vectors $a_j$, $j \in \bar{J}_B \cap J$, are zero;

where $B$ and $B'$ are two (possibly differing) bases of $\{a_1, a_2, \ldots, a_n, b, e_1, e_2, \ldots, e_m\}$.

38 Proof of the Farkas–Minty-type lemma

Both tableaux cannot be valid at the same time. Consider the extended column $t_b$ of Tableau 1 and the row $t'^{(b)}$ of Tableau 2: the only position where both have a nonzero entry is that of $b$ itself, so $t'^{(b)T}\, t_b = \pm 1 \neq 0$, which contradicts the orthogonality theorem.

One of the two tableaux is valid: let us swap basis vectors $e_i$ with vectors $a_j$ as long as it is possible.

Case 1. If, when this is no longer possible, all the entries of the column of $b$ in the rows of the remaining unit vectors are zero, we get Tableau 1.

Case 2. If some such entry is nonzero, then pivoting on this element brings $b$ into the basis; since no further $a_j$ could be pivoted in, the new row of $b$ vanishes on the columns of the vectors $a_j$, and we get Tableau 2.

39 Linear systems of equations

Definition. Let a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $b \in \mathbb{R}^m$ be given. The equation $Ax = b$ is called solvable if there exists a vector $s \in \mathbb{R}^n$ for which
$$s_1 a_1 + s_2 a_2 + \cdots + s_n a_n = b$$
holds, where $a_i$ is the $i$th column vector of the matrix $A$. The vector $x = s$ is a solution of the equation.

Exercise. Verify that for any matrix $A \in \mathbb{R}^{m \times n}$,
$$M = \{s \in \mathbb{R}^n \mid As = 0\}$$
is a linear subspace.

Rouché–Kronecker–Capelli lemma. Of the following two linear systems of equations, exactly one is solvable:
$$Ax = b \quad (E_1) \qquad\qquad y^T A = 0,\ y^T b = 1 \quad (E_2)$$

40 Proof of the Rouché–Kronecker–Capelli lemma

We prove indirectly that both cannot be true at the same time. Multiply $Ax = b$ by $y^T$ from the left; then
$$0 = 0^T x = (y^T A)\, x = y^T (A x) = y^T b = 1,$$
a contradiction.

We apply the Farkas–Minty-type lemma to the column vectors of the matrix $A$ and the vector $b$, respectively.

Tableau 1: it gives a solution of $Ax = b$.

Tableau 2: the vector $y$ is read from the row of the vector $b$, in the positions indexed by $I$. The vector $y$ solves the system of equations $E_2$, which can be proven with the composition property.

41 Linear systems of equations: pivot tableau

We can characterize the solvability of $Ax = b$ in the following way as well.

Exercise. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The following statements are equivalent:
(i) $Ax = b$ is solvable;
(ii) $b \in \mathcal{L}(a_1, a_2, \ldots, a_n)$, where $a_i$ is the $i$th column vector of the matrix $A$;
(iii) $\mathrm{rank}(a_1, a_2, \ldots, a_n) = \mathrm{rank}(a_1, a_2, \ldots, a_n, b)$.

We put the data belonging to the equation $Ax = b$ into a short pivot tableau this way:

        a_1    a_2   ...   a_n  |  b
  e_1   a_11   a_12  ...  a_1n  |  b_1
  e_2   a_21   a_22  ...  a_2n  |  b_2
  ...                           |  ...
  e_m   a_m1   a_m2  ...  a_mn  |  b_m
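Criterion (iii) is the easiest to try out in code; a small illustration of mine using numpy's rank routine:

```python
import numpy as np

def is_solvable(A, b):
    """Ax = b is solvable iff appending b to the columns of A
    does not increase the rank (criterion (iii))."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

A = np.array([[1., 2.],
              [2., 4.]])
print(is_solvable(A, np.array([1., 2.])))   # True:  b = a_1
print(is_solvable(A, np.array([1., 0.])))   # False: b is outside L(a_1, a_2)
```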

42 Algorithm: the Gauss–Jordan elimination

Input data: $m, n \in \mathbb{N}$; index set $J = \{1, 2, \ldots, n\}$; $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.

Begin
  $k := 1$ and $J_B := \emptyset$;
  while $k \leq m$ do
    if $a_{kj} = 0$ for all $j \in J$ and $b_k = 0$ then
      delete the $k$th row of the pivot tableau; let $m := m - 1$;
    else if $a_{kj} = 0$ for all $j \in J$ and $b_k \neq 0$ then
      stop: there is no $\hat{x} \in \mathbb{R}^n$ with $A\hat{x} = b$;
    else
      choose $l \in J$ with $a_{kl} \neq 0$; pivot on the element $(k, l)$ and $J_B := J_B \cup \{l\}$;
      $k := k + 1$;
    endif
  endwhile
  stop: there exists $x \in \mathbb{R}^n$ with $Ax = b$;
End.
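A compact Python version of this loop, as a sketch of mine: it works on the tableau $[A \mid b]$ in place and returns one solution with the nonbasic variables set to zero.

```python
import numpy as np

def gauss_jordan(A, b, eps=1e-9):
    """Gauss-Jordan elimination following the algorithm above.
    Returns a solution x of Ax = b, or None if the system is unsolvable."""
    T = np.column_stack([A, b]).astype(float)
    n = A.shape[1]
    basis = {}                                      # row k -> its pivot column l
    k = 0
    while k < T.shape[0]:
        if np.all(np.abs(T[k, :n]) < eps):
            if abs(T[k, n]) < eps:
                T = np.delete(T, k, axis=0)         # 0 = 0: delete the k-th row
                continue
            return None                             # 0 = b_k != 0: unsolvable
        l = int(np.argmax(np.abs(T[k, :n]) > eps))  # first l with a_kl != 0
        T[k] /= T[k, l]                             # pivot on (k, l)
        for r in range(T.shape[0]):
            if r != k:
                T[r] -= T[r, l] * T[k]
        basis[k] = l
        k += 1
    x = np.zeros(n)
    for k, l in basis.items():
        x[l] = T[k, n]                              # basic variables; the rest stay 0
    return x

A = np.array([[1., 1., 1.],
              [2., 2., 2.],                         # redundant row, gets deleted
              [1., 0., 1.]])
b = np.array([3., 6., 2.])
x = gauss_jordan(A, b)
print(x, np.allclose(A @ x, b))                     # [2. 1. 0.] True
```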

43 Example: solution of a linear system of equations

Solve the following system of equations, where $x_1, x_2, \ldots, x_6 \geq 0$:
$$\begin{aligned} x_1 - x_2 + x_3 + 2x_4 + x_5 &= 0 \\ x_3 + 3x_4 + 3x_5 + x_6 &= 5 \\ 2x_1 - 2x_2 + x_3 + x_4 - x_5 - x_6 &= -5 \end{aligned}$$
Note that the third equation equals twice the first minus the second, so the Gauss–Jordan elimination deletes one row along the way. [The sequence of short pivot tableaux over the columns $x_1, \ldots, x_6, b$ shown on this slide is not recoverable from the transcription.]

44 Complexity of the Gauss–Jordan elimination

Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The linear system of equations $Ax = b$ can be solved by the Gauss–Jordan elimination method with $O(m)$ iterations, using $O(m^2 n)$ arithmetical operations.

Definition. Let $A \in \mathbb{R}^{m \times m}$.
1. The matrix $A$ is called singular if its columns are linearly dependent; a nonsingular matrix $A$ is called regular.
2. The matrix $A$ is called invertible if there exists a matrix $B \in \mathbb{R}^{m \times m}$ for which $AB = BA = I$ holds, where $I \in \mathbb{R}^{m \times m}$ is the identity matrix. The matrix $B$ is the inverse of the matrix $A$.
3. The determinant of the matrix $A$ is the number
$$\det(A) = \sum_{\sigma \in S_m} (-1)^{i(\sigma)}\, a_{1\sigma(1)}\, a_{2\sigma(2)} \cdots a_{m\sigma(m)},$$
where $S_m$ is the set of the permutations of $\{1, 2, \ldots, m\}$, $\sigma$ is a given permutation and $i(\sigma)$ is the number of its inversions.
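The permutation formula can be coded directly; it is exponential in $m$, so the snippet below (mine, for illustration) is only a way to see the definition at work on a tiny matrix.

```python
import numpy as np
from itertools import permutations

def det_by_definition(A):
    """Determinant straight from the permutation formula: the sum over all
    sigma in S_m of (-1)^{i(sigma)} a_{1 sigma(1)} ... a_{m sigma(m)}."""
    m = len(A)
    total = 0.0
    for sigma in permutations(range(m)):
        inv = sum(1 for i in range(m) for j in range(i + 1, m)
                  if sigma[i] > sigma[j])        # i(sigma): number of inversions
        prod = 1.0
        for i in range(m):
            prod *= A[i][sigma[i]]
        total += (-1) ** inv * prod
    return total

A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
print(det_by_definition(A), np.linalg.det(A))    # 3.0  3.0 (up to rounding)
```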

45 Matrices, determinants

4. If we delete the $i$th row and $j$th column of the matrix $A$, then the resulting matrix is denoted by $A_{ij}$. The number $C_{ij} = (-1)^{i+j} \det(A_{ij})$ is the signed minor belonging to the element $a_{ij}$ of the matrix $A$. Let
$$M = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1m} \\ C_{21} & C_{22} & \cdots & C_{2m} \\ \vdots & & & \vdots \\ C_{m1} & C_{m2} & \cdots & C_{mm} \end{pmatrix}$$
be the matrix made up from the signed minors. The matrix $\mathrm{adj}(A) = M^T$ is called the adjugate of the matrix $A$.

Definition. A regular submatrix $A_B \in \mathbb{R}^{m \times m}$ of the matrix $A \in \mathbb{R}^{m \times n}$ is called a basis.

Definition. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ be given and suppose that $\mathrm{rank}(A) = m$. The solution $x_B = A_B^{-1} b$, $x_N = 0$ of the linear system of equations $Ax = b$ is called a basic solution.

46 Exercises

1. Let $A \in \mathbb{R}^{m \times m}$. The following statements are equivalent:
(i) the matrix $A$ is regular; (ii) the matrix $A$ is invertible; (iii) $\det(A) \neq 0$; (iv) the system of equations $Ax = b$ is solvable for all $b \in \mathbb{R}^m$.

2. Let $A \in \mathbb{R}^{m \times m}$ and let $C_{ij}$ denote the signed minor corresponding to $a_{ij}$. Prove the following statements:
(a) $\det(A) = \sum_{j=1}^m a_{lj} C_{lj}$ and $\det(A) = \sum_{i=1}^m a_{ik} C_{ik}$ hold for all $1 \leq l, k \leq m$.
(b) Let $1 \leq l, k \leq m$, $l \neq k$, be fixed indices. Then $\sum_{j=1}^m a_{lj} C_{kj} = 0$.

3. Let $A \in \mathbb{R}^{m \times m}$ be a regular matrix. Prove the following statements:
(a) $A^{-1} = \dfrac{\mathrm{adj}(A)}{\det(A)}$.
(b) (Cramer's rule) Let $b \in \mathbb{R}^m$ be an arbitrary vector. Then $Ax = b$ is solvable and
$$x_i = \frac{\det(A_i)}{\det(A)} \quad \text{holds for all } i = 1, 2, \ldots, m,$$
where the matrix $A_i \in \mathbb{R}^{m \times m}$ is gained by substituting the vector $b$ for the $i$th column $a_i$ of the matrix $A$.
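Cramer's rule from exercise 3(b) is a few lines of numpy (my sketch; it needs $m$ determinant evaluations, so it is illustrative rather than efficient):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve a regular system by Cramer's rule: x_i = det(A_i)/det(A),
    where A_i is A with its i-th column replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])
print(cramer_solve(A, b), np.linalg.solve(A, b))   # both [0.8 1.4]
```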

47 Size of solutions

Definition. Let $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $m, n \in \mathbb{N}$. We denote by $d(A, b, m, n)$ the data describing the linear system of equations $Ax = b$, and by $\ell(d(A, b, m, n))$ the corresponding binary memory size. An upper bound for $\ell(d(A, b, m, n))$ is
$$L := \sum_{i=1}^m \sum_{j=1}^n \left( \log_2(|a_{ij}| + 1) + 1 \right) + \sum_{i=1}^m \left( \log_2(|b_i| + 1) + 1 \right).$$

Lemma. Let the linear system of equations $Ax = b$ be given by its data $d(A, b, m, n)$.
(i) If $C$ is a $k \times k$ submatrix of the matrix $A$, then $|\det(C)| \leq 2^L$.
(ii) Suppose that $\mathrm{rank}(A) = m$. In this case, for any coordinate $x_j$ of a basic solution $x$ of the linear system $Ax = b$ corresponding to a basis $B$, the following is true: $x_j = 0$ or $2^{-L} \leq |x_j| \leq 2^L$.

48 Proof of the lemma

Proof. First we check (i). Using the definition of the determinant we get
$$|\det(C)| = \left| \sum_{\sigma \in S_k} (-1)^{i(\sigma)} c_{1\sigma(1)} \cdots c_{k\sigma(k)} \right| \leq \sum_{\sigma \in S_k} |c_{1\sigma(1)}| \cdots |c_{k\sigma(k)}| \leq \prod_{i=1}^k \prod_{j=1}^k (1 + |c_{ij}|) \leq \prod_{i=1}^m \prod_{j=1}^n (1 + |a_{ij}|) \leq 2^L,$$
where $S_k$ is the index set of the $k$th order permutations and $i(\sigma)$ is the number of inversions of the permutation $\sigma$. The first inequality is trivial. The second one is true because the sum resulting from expanding the product on the right-hand side contains all terms of the left-hand side, and every term is greater than or equal to zero. The third is a rough upper estimate, and the fourth is obvious from the definition of $L$.

(ii) For the basic solution we have $x_N = 0$, so the system of equations reduces to $A_B x_B = b$. Then
$$x_B = A_B^{-1} b = \frac{\mathrm{adj}(A_B)\, b}{\det(A_B)}, \qquad \text{therefore } x_i = \frac{\det(A_i)}{\det(A_B)},\ i \in I_B,$$
where the matrix $A_i$ is obtained from $A_B$ by replacing its $i$th column with the vector $b$. If $x_i \neq 0$, then
$$|x_i| = \frac{|\det(A_i)|}{|\det(A_B)|} \leq |\det(A_i)| \leq 2^L,$$
where we use that $1 \leq |\det(A_B)|$, a consequence of the fact that the elements of $A_B$ are integers. The second estimate is true because of (i).

49 Examination of systems of linear inequalities

Let $\{a_1, \ldots, a_n, b\} \subseteq \mathbb{R}^m$, $\{e_1, \ldots, e_m\} \subseteq \mathbb{R}^m$ be vectors and $J = \{1, 2, \ldots, n\}$, $I = \{\hat{1}, \hat{2}, \ldots, \hat{m}\}$ their index sets.

Farkas–Minty-type lemma. Of the following two tableaux, exactly one occurs:

Tableau 1: $b$ is outside the basis; in its column the entries in the rows of $I_B$ are zero and the entries in the rows of $J_B$ are nonnegative (so $b$ is a nonnegative linear combination of the basis vectors $a_i$, $i \in J_B$);

Tableau 2: $b$ is in the basis; in its row the entries in the columns of the vectors $a_j$, $j \in \bar{J}_B \cap J$, are nonpositive.

Proof. We prove indirectly that both cannot be valid. Consider the extended column $t_b$ of Tableau 1 and the row $t'^{(b)}$ of Tableau 2. Their product collects one term $-1$ from the position of $b$, and otherwise only products of nonpositive row entries with nonnegative column entries, so
$$t'^{(b)T}\, t_b = -1 + (\text{nonpositive terms}) < 0,$$
which contradicts the orthogonality theorem.

50 Proof of the Farkas–Minty-type theorem

One of them holds: let us change as many basis vectors $e_i$ for vectors $a_j$ as possible. When this procedure terminates, either the vector $b$ can also be entered into the basis (a special case of Tableau 2), or we get a tableau in which the column of $b$ vanishes in the rows of the remaining unit vectors.

If, moreover, the column of $b$ contains only nonnegative numbers in the rows corresponding to the indices of $J_B$, then we got the first case. Otherwise we can pivot on some element $t_{rb} < 0$, $r \in J_B$: the vector $b$ enters the basis and $a_r$ leaves it.

If the vector $b$ is in the basis and its row contains only nonpositive numbers in the columns of $\bar{J}_B$, then we got the second case. Otherwise we can pivot on some element $t_{bs} > 0$, $s \in \bar{J}_B$: the vector $b$ leaves the basis and $a_s$ enters it.

This procedure ends with one of the two cases, if it ends at all. (If not, then we say that the procedure is cycling: since we have a finite number of bases, at least one occurs infinitely many times.)

51 Criss-cross algorithm for the Farkas–Minty problem

Input data: the pivot tableau presented on the previous slide.

Begin
  counter := 0;
  while counter = 0 do
    $K_b := \{i \in J_B : t_{ib} < 0\}$;
    if $K_b = \emptyset$ then
      counter := 1;
    else
      $r := \min\{i : i \in K_b\}$ and pivot on the element $t_{rb} < 0$ ($b$ enters the basis, $a_r$ leaves);
      $J_B := J_B \setminus \{r\}$ and $\bar{J}_B := J \setminus J_B$;
      $K^b := \{j \in \bar{J}_B : t_{bj} > 0\}$;
      if $K^b = \emptyset$ then
        counter := 2;
      else
        $s := \min\{j : j \in K^b\}$ and pivot on the element $t_{bs} > 0$ ($b$ leaves the basis, $a_s$ enters);
        $J_B := J_B \cup \{s\}$;
      endif
    endif
  endwhile
  cases
    counter = 1: we get Tableau 1;
    counter = 2: we get Tableau 2;
  endcases
End.
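A runnable sketch of this loop is below. It is my own rendering with one simplifying assumption, clearly not part of the notes: the first $m$ columns of $A$ already form a basis, so the unit vectors can all be pivoted out beforehand. The double pivot "$b$ enters on row $r$, then $a_s$ replaces $b$" is carried out as a single pivot of the tableau on $(r, s)$, which yields the same basis and hence the same tableau.

```python
import numpy as np

def criss_cross_farkas(A, b, eps=1e-9):
    """Decide between Ax = b, x >= 0 (Tableau 1) and
    y^T A <= 0, y^T b = 1 (Tableau 2) with criss-cross pivoting.
    Assumes the first m columns of A are a basis."""
    m, n = A.shape
    B = list(range(m))                     # indices of the basic vectors a_i
    T = np.linalg.solve(A[:, :m], np.column_stack([A, b]))
    Y = np.linalg.inv(A[:, :m])            # rows of A_B^{-1}, for the certificate
    while True:
        neg = [i for i in range(m) if T[i, n] < -eps]      # K_b: t_ib < 0
        if not neg:
            x = np.zeros(n)
            x[B] = T[:, n]
            return "x", x                  # Tableau 1: feasible solution
        r = min(neg, key=lambda i: B[i])   # minimal-index rule; b enters on row r
        row_b = T[r, :n] / T[r, n]         # row of b once it is in the basis
        pos = [j for j in range(n) if row_b[j] > eps]      # K^b: t_bj > 0
        if not pos:
            return "y", Y[r] / T[r, n]     # Tableau 2: y^T A <= 0, y^T b = 1
        s = min(pos)                       # minimal-index rule; a_s replaces b
        piv = T[r, s]                      # net effect: one pivot of T on (r, s)
        t_row, y_row, col = T[r] / piv, Y[r] / piv, T[:, s].copy()
        T -= np.outer(col, t_row)
        Y -= np.outer(col, y_row)
        T[r], Y[r], B[r] = t_row, y_row, s

A = np.array([[1., 0., 2.],
              [0., 1., 1.]])
print(criss_cross_farkas(A, np.array([3., 2.])))   # ('x', [3. 2. 0.])
print(criss_cross_farkas(A, np.array([-1., 1.])))  # ('y', [-1.  0.]) certificate
```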

52 Criss-cross algorithm: finiteness

Indirect proof: suppose the algorithm is cycling on a given system of vectors. Of all such instances, take one of minimal size. Because of the minimality, during the cycle all variables enter and later leave the basis. Consider the steps when the vector $a_n$ enters and when it leaves the basis, and the two corresponding tableaux.

Let us substitute the vector $a_n$ by $-a_n$. In the previous tableaux the row, respectively the column, of the vector $a_n$ turns into its negative, and we get two tableaux of which we have already demonstrated, in the first half of our proof, that they cannot occur simultaneously. This is a contradiction, which shows that our algorithm cannot cycle.

53 Farkas lemma

Farkas lemma. Of the following two systems of linear inequalities, exactly one is solvable:
$$Ax = b,\ x \geq 0 \qquad\qquad y^T A \leq 0,\ y^T b = 1.$$

Proof. Both cannot be solvable at the same time (indirect proof):
$$0 \geq y^T A\, x = y^T b = 1.$$

One of them is solvable: apply the previous theorem to the system of vectors $\{a_1, a_2, \ldots, a_n, b\}$.

Tableau 1: $\sum_{i \in J_B} t_{ib}\, a_i = b$, where $t_{ib} \geq 0$. Then $x_i = t_{ib}$, $i \in J_B$, and $x_j = 0$, $j \in \bar{J}_B$, solves the first system.

Tableau 2: in the full pivot tableau, let $y_{\hat{i}} = t_{b\hat{i}}$, where $\hat{i} \in I$ (the row of $b$ under the unit-vector columns). The composition property guarantees that $y$ solves the second system.

54 Theorems of alternatives

Exercise. Prove that, if $\mathrm{rank}(a_1, a_2, \ldots, a_n) = \mathrm{rank}(a_1, a_2, \ldots, a_n, b)$, then of the two tableaux below exactly one can occur. [The block figures of this slide are not recoverable from the transcription.]

55 Theorems of alternatives (ctd.)

Goldman's theorem. Let $A \in \mathbb{R}^{m \times n}$ be a matrix and $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$ vectors. Then exactly one of the given two systems of inequalities is solvable:
(a) $Ax = b$, $c^T x = 1$, $x \geq 0$;
(b) $y^T A \leq c^T$ [a further condition relating $y$ and $b$ appears to have been lost in the transcription].

Exercise. Let $A \in \mathbb{R}^{m \times n}$ be a matrix, $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$ vectors and $y_0 \in \mathbb{R}$. Verify that exactly one of the following two systems is solvable:
(a) $Ax = 0$, $x \geq 0$, $e^T x = 1$;
(b) $A^T y + e\, y_0 \geq 0$, $y_0 < 0$,
where $e = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$.

Exercise. Let $B \in \mathbb{R}^{m \times n}$, $C \in \mathbb{R}^{k \times n}$, $D \in \mathbb{R}^{l \times n}$ and $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, $z \in \mathbb{R}^k$, $u \in \mathbb{R}^l$. Prove that the following two statements are equivalent:
(a) $\nexists\, x$: $Bx \geq 0$, $Bx \neq 0$, $Cx \geq 0$, $Dx = 0$;
(b) $\exists\, (y, z, u)$: $B^T y + C^T z + D^T u = 0$, $y > 0$, $z \geq 0$.

56 Hyperplane, halfspace, cone, finitely generated cone

Definition. Let $a \in \mathbb{R}^n$, $a \neq 0$, be a given vector and $\beta \in \mathbb{R}$ a given number.
(affine) hyperplane: $H = \{x \in \mathbb{R}^n \mid a^T x = \beta\}$;
open (affine) halfspace: $F^{>} = \{x \in \mathbb{R}^n \mid a^T x > \beta\}$;
closed (affine) halfspace: $F^{\geq} = \{x \in \mathbb{R}^n \mid a^T x \geq \beta\}$.

Let $C \subseteq \mathbb{R}^n$, $C \neq \emptyset$, be a set and $\lambda, \mu$ nonnegative real numbers.
$C$ is a cone if $a \in C$ implies $\lambda a \in C$ as well.
$C$ is a convex cone if $a, b \in C$ implies $\lambda a + \mu b \in C$ as well.
$C$ is a polyhedral cone if there exists a matrix $A \in \mathbb{R}^{m \times n}$ for which $C = \{x \in \mathbb{R}^n \mid Ax \leq 0\}$.

Exercise. A cone is polyhedral if and only if it is the intersection of finitely many halfspaces.

Definition. Let $a_1, a_2, \ldots, a_n \in \mathbb{R}^m$ be given vectors and $J = \{1, 2, \ldots, n\}$ the set of indices.
finitely generated cone:
$$C(a_1, a_2, \ldots, a_n) = \Big\{ b \in \mathbb{R}^m \,\Big|\, b = \sum_{j=1}^n x_j a_j,\ x_j \geq 0,\ j \in J \Big\} = \{b \in \mathbb{R}^m \mid Ax = b,\ x \geq 0 \text{ is solvable}\};$$
polar of a finitely generated cone:
$$C^*(a_1, a_2, \ldots, a_n) = \{y \in \mathbb{R}^m \mid y^T a_j \leq 0,\ j \in J\} = \{y \in \mathbb{R}^m \mid y^T A \leq 0\}.$$

57 Pivot tableaux

Let a finite set of vectors $A = \{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be given, and consider pivot tableaux similar to those already seen in the proof of the Farkas lemma, i.e. such that (i) $J_B$ is maximal and (ii) there is an $r \in J_B$ for which $t_{rj} \geq 0$ holds for all $j \in \bar{J}_B$. Using these tableaux we can define the vectors $y_r, u_s \in \mathbb{R}^m$ in the following way:
$$y_{r\hat{i}} = t_{r\hat{i}},\ \hat{i} \in I,\ r \in J_B \qquad \text{and} \qquad u_{s\hat{i}} = t_{s\hat{i}},\ \hat{i} \in I,\ s \in I_B$$
(the rows of the tableau under the unit-vector columns). Furthermore, we define
$$U_1 = \{\pm u_s \mid s \in I_B\} \quad \text{and} \quad U_2 = \{-y_r \mid r \in J_B \text{ fulfills condition (ii)}\}.$$

Exercise. Prove that the cone generated by $U_1$ is a subspace, and that the cone generated by $U_2$ does not contain a line.

Now let us introduce the set
$$Y = \begin{cases} U_1 \cup U_2, & \text{if } U_1 \neq \emptyset \text{ or } U_2 \neq \emptyset, \\ \{0\}, & \text{otherwise.} \end{cases}$$

58 Weyl's theorem

Weyl's theorem. Let $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be given vectors. Then there exist vectors $\{y_1, y_2, \ldots, y_k\} \subseteq \mathbb{R}^m$ for which
$$C(a_1, a_2, \ldots, a_n) = C^*(y_1, y_2, \ldots, y_k).$$

Proof. We have to prove the following two statements:
1. for all $b \in C(a_1, a_2, \ldots, a_n)$ it is also true that $b \in C^*(y_1, y_2, \ldots, y_k)$;
2. for all $b \in C^*(y_1, y_2, \ldots, y_k)$ it is also true that $b \in C(a_1, a_2, \ldots, a_n)$.

1. Suppose $b \in C(a_1, a_2, \ldots, a_n)$, i.e. the system $Ax = b$, $x \geq 0$ is solvable. Because of the earlier construction of the sets $U_1$ and $U_2$, and using the composition property, we get that for all indices $i$ and $j$ the relation $y_i^T a_j \leq 0$ holds. Then
$$y_i^T b = \sum_{j=1}^n (y_i^T a_j)\, x_j \leq 0$$
for every index $i$, therefore $b \in C^*(y_1, y_2, \ldots, y_k)$. From this we get
$$C(a_1, a_2, \ldots, a_n) \subseteq C^*(y_1, y_2, \ldots, y_k).$$

2. Now we verify, in contrapositive form, that $b \notin C(a_1, a_2, \ldots, a_n)$ entails $b \notin C^*(y_1, y_2, \ldots, y_k)$. For $b \notin C(a_1, a_2, \ldots, a_n)$ the system $Ax = b$, $x \geq 0$ is not solvable. Then, taking the Farkas lemma into account, $y^T A \leq 0$, $y^T b = 1$ is solvable. According to the definition of the vectors $y_l$, it is obvious that some $y_i$ solves this second system, i.e. $y_i^T a_j \leq 0$ for all $j \in J$ and $y_i^T b = 1 > 0$. This means exactly that $b \notin C^*(y_1, y_2, \ldots, y_k)$.

59 Minkowski's theorem

Minkowski's theorem. Let $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ be a given set of vectors. Then there exists a set of vectors $\{y_1, y_2, \ldots, y_k\} \subseteq \mathbb{R}^m$ for which
$$C^*(a_1, a_2, \ldots, a_n) = C(y_1, y_2, \ldots, y_k).$$

Proof. We have to prove the two inclusions $C^*(a_1, \ldots, a_n) \subseteq C(y_1, \ldots, y_k)$ and $C(y_1, \ldots, y_k) \subseteq C^*(a_1, \ldots, a_n)$.

The second statement is proven exactly the same way as the first statement in Weyl's theorem. Let $z \in C(y_1, y_2, \ldots, y_k)$. Then the system $z = \sum_{i=1}^k y_i t_i$, $t_i \geq 0$, is solvable. Because of the construction of the sets $U_1$ and $U_2$, and using the composition property, we get that for all indices $i$ and $j$ the estimate $y_i^T a_j \leq 0$ holds. Therefore
$$z^T a_j = \sum_{i=1}^k t_i\, y_i^T a_j \leq 0 \quad \text{for all } j,$$
which means exactly that $z \in C^*(a_1, a_2, \ldots, a_n)$.

As to the first statement, we prove it in the contrapositive form: for all $z \notin C(y_1, y_2, \ldots, y_k)$ it follows that $z \notin C^*(a_1, a_2, \ldots, a_n)$. Now $z \notin C(y_1, y_2, \ldots, y_k)$ means that $z = \sum_{i=1}^k y_i t_i$, $t_i \geq 0$, has no solution, and therefore the Farkas lemma guarantees the existence of a vector $b \in \mathbb{R}^m$ for which $b^T y_i \leq 0$ for all $i$ and $b^T z = 1$. This means that $b \in C^*(y_1, y_2, \ldots, y_k)$ according to the definition of the polar cone.

60 Farkas theorem

By Weyl's theorem, applied with the vectors $y_1, y_2, \ldots, y_k$ constructed for $a_1, a_2, \ldots, a_n$, we get that $b \in C^*(y_1, \ldots, y_k) = C(a_1, \ldots, a_n)$, so $Ax = b$, $x \geq 0$ is solvable. Then the Farkas lemma says that $u^T A \leq 0$, $u^T b = 1$ is not solvable. On the other side, we saw that the construction of the vector $b$ gives $z^T b = 1$. Therefore $z^T A \leq 0$ cannot be true, which means exactly that $z \notin C^*(a_1, a_2, \ldots, a_n)$.

Farkas theorem, 1898. For an arbitrary system of vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ it is true that
$$C(a_1, a_2, \ldots, a_n) = C^{**}(a_1, a_2, \ldots, a_n).$$

Proof. Apply the theorems of Weyl and Minkowski:
$$C(a_1, \ldots, a_n) = C^*(y_1, \ldots, y_k) = (C(y_1, \ldots, y_k))^* = (C^*(a_1, \ldots, a_n))^* = C^{**}(a_1, \ldots, a_n).$$

Corollary. A convex cone in $\mathbb{R}^m$ is an intersection cone (an intersection of finitely many homogeneous halfspaces) if and only if it is finitely generated.

61 Minkowski sum of sets

Definition. Let $A, B \subseteq \mathbb{R}^m$ be nonempty sets. The set
$$A + B = \{a + b \mid a \in A,\ b \in B\}$$
is called the Minkowski sum of the given sets.

Exercises.

1. Let $\mathcal{K} := \{C \subseteq \mathbb{R}^n \mid C \text{ is a finitely generated cone}\}$. Prove that for all $C_1, C_2 \in \mathcal{K}$ the following statements are true: (1) $C_1 + C_2 \in \mathcal{K}$, (2) $C_1 \cap C_2 \in \mathcal{K}$, (3) $C_1^* \in \mathcal{K}$, (4) $C_1^{**} = C_1$, (5) $(C_1 + C_2)^* = C_1^* \cap C_2^*$, (6) $(C_1 \cap C_2)^* = C_1^* + C_2^*$. What should we call the structure $(\mathcal{K}, +, \cap, {}^*)$ in this case?

2. Let $A \in \mathbb{R}^{k \times n}$, $E \in \mathbb{R}^{l \times n}$, $B \in \mathbb{R}^{k \times m}$, $F \in \mathbb{R}^{l \times m}$ be matrices and $b \in \mathbb{R}^k$, $c \in \mathbb{R}^l$ vectors. Prove that
$$S = \{(x, y) \in \mathbb{R}^{n+m} \mid Ax + By = b,\ Ex + Fy \leq c,\ x \geq 0\}$$
is a convex set.

62 Convex polyhedra, polytopes

Definition. Let $P \subseteq \mathbb{R}^n$. The set $P$ is a (convex) polyhedron if
$$P = \{x \in \mathbb{R}^n \mid Ax \leq b\}$$
for some matrix $A \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$.

Exercise. Prove that any polyhedron is the intersection of a finite number of affine halfspaces.

Definition. Let $Q \subseteq \mathbb{R}^n$. The set $Q$ is called a (convex) polytope if it is the convex hull of a finite number of vectors, i.e.
$$Q = \mathrm{conv}(a_1, a_2, \ldots, a_r) = \Big\{ w \in \mathbb{R}^n \,\Big|\, w = \sum_{i=1}^r \lambda_i a_i,\ \sum_{i=1}^r \lambda_i = 1,\ \lambda_i \geq 0 \text{ for all } i \Big\},$$
where $a_1, a_2, \ldots, a_r \in \mathbb{R}^n$ are given vectors. Obviously a polytope is bounded, convex and nonempty. On the other hand, if $Q$ is a polytope, then
$$w \in Q \iff Ax = w,\ e^T x = 1,\ x \geq 0 \text{ is solvable}.$$

Definition. Let $A \subseteq \mathbb{R}^n$ be a convex set. The point $x \in A$ is called an extremal point of the set $A$ if for all $x_1, x_2 \in A$ and $0 < \lambda < 1$, from $x = \lambda x_1 + (1 - \lambda) x_2$ it follows that $x = x_1 = x_2$.

63 Feasibility problem

Consider the following system of linear inequalities:
$$Ax = b, \quad x \geq 0,$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $x \in \mathbb{R}^n$. The previous system is called a feasibility problem;
$$P := \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}$$
is the set of feasible solutions. Clearly $P$ is a polyhedron. For $x \in P$ we say that the feasible solution $x$ uses the column vector $a_j$ of the matrix $A$ if $x_j > 0$. The vector $x \in P$ is a feasible basic solution if $\{a_j \mid x_j > 0\}$ is a linearly independent set.

Proposition. Let $x \in P$. The vector $x$ is an extremal point of the polyhedron $P$ if and only if $x$ is a basic solution of $Ax = b$, $x \geq 0$.

Proof. First we verify that if the vector $x$ is not a feasible basic solution of the above system, then it will not be an extremal point of $P$. This is equivalent to saying that if $x$ is an extremal point of $P$, then it is also a feasible basic solution of the system. Suppose that $x$ is not a basic solution of the inequality system. Let us introduce the two index sets $J_+ := \{j \mid x_j > 0\}$ and $J_0 := \{j \mid x_j = 0\}$. Then we get
$$b = \sum_{j \in J} x_j a_j = \sum_{j \in J_0} x_j a_j + \sum_{j \in J_+} x_j a_j = \sum_{j \in J_+} x_j a_j,$$
according to the definition of $J_0$.

64 Characterization of feasible basic solutions

Because $x$ is not a basic solution, the vectors $\{a_j \mid j \in J_+\}$ are linearly dependent. This means that there are numbers $y_j \in \mathbb{R}$, not all zero, for which
$$\sum_{j \in J_+} y_j a_j = 0 \qquad \text{and therefore} \qquad b = \sum_{j \in J_+} (x_j + \lambda y_j)\, a_j$$
for all $\lambda \in \mathbb{R}$. Let us define a new point in the following way: $x_j(\lambda) = x_j + \lambda y_j$ if $j \in J_+$, and $x_j(\lambda) = 0$ if $j \in J_0$, for all $\lambda \in \mathbb{R}$. Since not all $y_j$, $j \in J_+$, can be zero, $x(\lambda_1) \neq x(\lambda_2)$ if $\lambda_1 \neq \lambda_2$. The number $\lambda$ can be chosen such that $x_j(\lambda) \geq 0$ for all $j \in J_+$ and $x(\lambda) \in P$.

Let $K_+ := \{j \in J_+ \mid y_j > 0\}$ and $K_- := \{j \in J_+ \mid y_j < 0\}$. We can choose $\underline{\lambda}, \overline{\lambda} \in \mathbb{R}$ such that
$$\underline{\lambda} := \begin{cases} \max_{j \in K_+} \left( -\dfrac{x_j}{y_j} \right), & \text{if } K_+ \neq \emptyset, \\ 0, & \text{if } K_+ = \emptyset, \end{cases} \qquad \overline{\lambda} := \begin{cases} \min_{j \in K_-} \left( -\dfrac{x_j}{y_j} \right), & \text{if } K_- \neq \emptyset, \\ 0, & \text{if } K_- = \emptyset. \end{cases}$$
Obviously at least one of the numbers $\underline{\lambda}$ and $\overline{\lambda}$ is not zero, because at least one of the numbers $y_j$ ($j \in J_+$) is not zero. Furthermore, it is easy to see that $\underline{\lambda} < \overline{\lambda}$. If both $\underline{\lambda}$ and $\overline{\lambda}$ are nonzero, then let $\varepsilon := \min\{|\underline{\lambda}|, |\overline{\lambda}|\}/2$, otherwise $\varepsilon := \max\{|\underline{\lambda}|, |\overline{\lambda}|\}/2$.

65 Verification of the proposition

In this case $x(\varepsilon), x(-\varepsilon) \in P$. Furthermore, $x$ can be expressed as
$$x = \frac{x(\varepsilon) + x(-\varepsilon)}{2},$$
which means that $x \in P$ cannot be an extremal point of the polyhedron $P$.

Now let us suppose that $x \in P$ is a feasible basic solution. In this case the vectors $\{a_j \mid j \in J_+\}$ are linearly independent. Let us proceed indirectly: suppose that $x \in P$ is not an extremal point of the polyhedron $P$. Then there exist points $x^1, x^2 \in P$, $x^1 \neq x^2$, and a number $0 < \lambda < 1$ such that $x = \lambda x^1 + (1 - \lambda) x^2$. This also means that for every index $j$ the relation $x_j = \lambda x^1_j + (1 - \lambda) x^2_j$ holds. Since $x_j = 0$ for $j \in J_0$, we get $x^1_j = x^2_j = 0$ for all $j \in J_0$. Hence
$$b = \sum_{j \in J_+} x^1_j\, a_j \qquad \text{and} \qquad b = \sum_{j \in J_+} x^2_j\, a_j.$$
Subtracting one from the other we get
$$\sum_{j \in J_+} (x^1_j - x^2_j)\, a_j = 0.$$
Since the set $\{a_j \mid j \in J_+\}$ contains linearly independent vectors, we conclude that $x^1_j - x^2_j = 0$ for all $j \in J_+$, which means that $x^1_j = x^2_j$ for all $j \in J_+$, i.e. $x^1 = x^2$. This contradicts our supposition; therefore $x \in P$ is an extremal point.

66 An algorithm for producing a feasible basic solution from a given feasible solution

Input data: $m, n \in \mathbb{N}$; the vector $b \in \mathbb{R}^m$; the vectors $\{a_1, a_2, \ldots, a_n\} \subseteq \mathbb{R}^m$ and the corresponding index set $J = \{1, 2, \ldots, n\}$; the index set $I = \{\hat{1}, \hat{2}, \ldots, \hat{m}\}$ of the unit vectors of $\mathbb{R}^m$; the feasible solution $\bar{x}$ and the short pivot tableau $T = [A \mid b]$.

Begin
  $J_+ := \{j \in J \mid \bar{x}_j > 0\}$;
  if $\{a_j \mid j \in J_+\}$ is linearly independent then STOP
  else
    $J_{B+} := I$;
    while $\exists\, i \in J_{B+}$ and $j \in J_+ \setminus J_{B+}$ with $t_{ij} \neq 0$ do
      $\lambda := \min\left\{ \dfrac{\bar{x}_i}{-t_{ij}} \,\middle|\, t_{ij} < 0 \right\} = \dfrac{\bar{x}_k}{-t_{kj}}$;
      $\bar{x} := \bar{x} + \lambda\, t_j$;
      $J_+ := \{j \in J \mid \bar{x}_j > 0\}$;
      if $k \neq j$ then
        pivot on the position $t_{kj}$; $J_{B+} := (J_{B+} \setminus \{k\}) \cup \{j\}$;
      endif
    endwhile
  endif
end.
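The reduction behind this algorithm (move along a dependency of the used columns until some positive coordinate hits zero) can be sketched in a few lines of numpy. This version is mine: it finds the null-space direction with an SVD instead of maintaining the pivot tableau, which keeps the code short at the price of redoing a factorization in every round.

```python
import numpy as np

def purify_to_basic(A, x, eps=1e-9):
    """Turn a feasible solution of Ax = b, x >= 0 into a feasible basic
    solution: while the used columns are dependent, step along a
    null-space direction until at least one positive coordinate hits 0."""
    x = np.array(x, dtype=float)
    while True:
        support = np.flatnonzero(x > eps)
        As = A[:, support]
        if np.linalg.matrix_rank(As) == len(support):
            return x                             # used columns independent: basic
        y = np.linalg.svd(As)[2][-1]             # As @ y = 0, y != 0
        if not np.any(y < -eps):
            y = -y                               # ensure a blocking coordinate exists
        lam = min(x[support[i]] / -y[i]
                  for i in range(len(y)) if y[i] < -eps)
        x[support] += lam * y                    # stays feasible, support shrinks
        x[np.abs(x) < eps] = 0.0

A = np.array([[1., 1., 2.],
              [0., 1., 1.]])
b = np.array([4., 2.])
xb = purify_to_basic(A, np.array([1., 1., 1.]))  # (1,1,1) is feasible, not basic
print(xb, np.allclose(A @ xb, b))
```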

67 Complexity of the algorithm

Proposition. Starting from a given solution $\bar{x} \in P$, the previous algorithm produces a basic solution $\hat{x} \in P$ in at most $O(n)$ steps, using at most $O(m n^2)$ arithmetical operations.

Proof. First we certify that the algorithm produces a feasible solution in every step. The quantity
$$\lambda := \min\left\{ \frac{\bar{x}_i}{-t_{ij}} \,\middle|\, t_{ij} < 0,\ i \in J \right\} = \frac{\bar{x}_k}{-t_{kj}}$$
is finite, because $t_{jj} = -1$ and $j \in J_+$ according to the selection rule of the algorithm, so in particular $x^+_j = \bar{x}_j - \lambda \geq 0$.

Let $x^+ = \bar{x} + \lambda\, t_j$ be the solution after some iteration. Then we have
$$A x^+ = A(\bar{x} + \lambda\, t_j) = A \bar{x} + \lambda\, A t_j = b + \lambda \cdot 0 = b.$$
We prove $x^+ \geq 0$ coordinatewise. If $i \in J$ is an index for which $t_{ij} \geq 0$, then $x^+_i = \bar{x}_i + \lambda\, t_{ij} \geq 0$, since $\bar{x}_i$, $\lambda$ and $t_{ij}$ are all nonnegative. If $t_{ij} < 0$ for some index $i \in J$, then by the definition of $\lambda$ we have $\bar{x}_i / (-t_{ij}) \geq \lambda$, i.e. $x^+_i = \bar{x}_i + \lambda\, t_{ij} \geq 0$. This proves that the new solution remains feasible.

Let $\bar{x} \in P$ be the initial vector at the beginning of the algorithm, and compose the index sets $J_+ := \{j \in J \mid \bar{x}_j > 0\}$ and $J_0 := \{j \in J \mid \bar{x}_j = 0\}$.

68 Proof of the proposition

Two cases are possible:
1. The set $\{a_j \mid j \in J_+\}$ is linearly independent.
2. The set $\{a_j \mid j \in J_+\}$ is linearly dependent.

1. Either $|J_+| = \mathrm{rank}(A)$ (we are done), or $|J_+| < \mathrm{rank}(A)$ and $\{a_j \mid j \in J_+\}$ can be augmented to a basis.

2. Let us run the algorithm, which gives a feasible solution in every step. It is finite, and at the end the vectors indexed by the actual set $J_+$ are independent.

69 Proof of the proposition (ctd.)

By the definition of
$$\lambda = \min\left\{ \frac{\bar{x}_i}{-t_{ij}} \,\middle|\, t_{ij} < 0,\ i \in J \right\}$$
there are only two possibilities: (i) $k \in J_0$ and (ii) $k \in J_+$.

In the first case $\lambda = 0$, so $x^+ = \bar{x}$, but now $k \neq j$, which means that at the pivoting the number of elements of $J_{B+}$ increases by one, and therefore the previous basis cannot recur.

In the other case the index $k$ might satisfy $k = j$ or $k \neq j$. If $k = j$, then
$$x^+_j = \bar{x}_j + \lambda\, t_{jj} = \bar{x}_j - \lambda = \bar{x}_j - \bar{x}_j = 0,$$
so the number of elements of $J_+$ decreases by one. If $k \in J_+$ and $k \neq j$, then $x^+_k = 0$, which again means that the number of elements of $J_+$ decreases; furthermore, because the pivoting took place at the position $(k, j)$, the cardinality of $J_{B+}$ remains the same but the basis changes. Neither in case (i) nor in case (ii) can a previous basis return, because the cardinalities of $J_{B+}$ and $J_+$ changed monotonically and at least one of these changes was strictly monotone. Case (i) can occur at most $\mathrm{rank}(A)$ times and case (ii) at most $n - \mathrm{rank}(A)$ times, so within $n$ iterations we produce a basic solution $\hat{x} \in P$.

Determining $\lambda$ requires at most $n$ divisions, and computing the minimum of these quotients takes $O(n)$ comparisons. During the computation of $x^+$ we carry out $n$ multiplications and $n$ additions, so one iteration without pivoting costs $O(n)$ operations. If $k \neq j$, then another pivoting happens, whose operational cost is at most $O(m n)$. Therefore the overall complexity of the algorithm is $O(m n^2)$.

70 Systems of homogeneous linear inequalities

Definition. If $|J_+| < \mathrm{rank}(A)$ and the vectors $\{a_j \mid j \in J_+\}$ are independent, then they can be complemented to a basis in more than one way. This means that there are several bases which give the same extremal solution. In this case we say that the system $Ax = b$, $x \geq 0$ is degenerate.

Consider the following homogeneous linear system of inequalities:
$$Ay = 0, \quad y \geq 0,$$
where $A \in \mathbb{R}^{m \times n}$. The solution set is denoted by $H := \{y \in \mathbb{R}^n \mid Ay = 0,\ y \geq 0\}$. It is clear that $H \neq \emptyset$ and that $H$ is a polyhedral convex cone. Further, if $x \in P$ and $y \in H$, then $x + \kappa y \in P$ for any $\kappa \geq 0$.

Exercise. Prove that $P$ is a polytope if and only if $|H| = 1$, i.e. $H = \{0\}$.

Theorem. For any $\hat{x} \in P$ there are basic solutions $x^1, x^2, \ldots, x^r \in P$, nonnegative real numbers $\lambda_1, \lambda_2, \ldots, \lambda_r$ with $\sum_{i=1}^r \lambda_i = 1$, and a vector $\hat{y} \in H$ such that
$$\hat{x} = \sum_{i=1}^r \lambda_i x^i + \hat{y}.$$

Proof. Let $\hat{x} \in P$ be given and let
$$J_+ := \{j \in J \mid \hat{x}_j > 0\} \qquad \text{and} \qquad J_0 := \{j \in J \mid \hat{x}_j = 0\}.$$
We apply mathematical induction on $|J_+|$.


Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Matrices and systems of linear equations

Matrices and systems of linear equations Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

ORIE 6300 Mathematical Programming I August 25, Recitation 1

ORIE 6300 Mathematical Programming I August 25, Recitation 1 ORIE 6300 Mathematical Programming I August 25, 2016 Lecturer: Calvin Wylie Recitation 1 Scribe: Mateo Díaz 1 Linear Algebra Review 1 1.1 Independence, Spanning, and Dimension Definition 1 A (usually infinite)

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

Chapter 3. Vector spaces

Chapter 3. Vector spaces Chapter 3. Vector spaces Lecture notes for MA1111 P. Karageorgis pete@maths.tcd.ie 1/22 Linear combinations Suppose that v 1,v 2,...,v n and v are vectors in R m. Definition 3.1 Linear combination We say

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

On the projection onto a finitely generated cone

On the projection onto a finitely generated cone Acta Cybernetica 00 (0000) 1 15. On the projection onto a finitely generated cone Miklós Ujvári Abstract In the paper we study the properties of the projection onto a finitely generated cone. We show for

More information

Chapter 1: Systems of Linear Equations

Chapter 1: Systems of Linear Equations Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

13. Systems of Linear Equations 1

13. Systems of Linear Equations 1 13. Systems of Linear Equations 1 Systems of linear equations One of the primary goals of a first course in linear algebra is to impress upon the student how powerful matrix methods are in solving systems

More information

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics Ulrich Meierfrankenfeld Department of Mathematics Michigan State University East Lansing MI 48824 meier@math.msu.edu

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Linear Algebra. Linear Algebra. Chih-Wei Yi. Dept. of Computer Science National Chiao Tung University. November 12, 2008

Linear Algebra. Linear Algebra. Chih-Wei Yi. Dept. of Computer Science National Chiao Tung University. November 12, 2008 Linear Algebra Chih-Wei Yi Dept. of Computer Science National Chiao Tung University November, 008 Section De nition and Examples Section De nition and Examples Section De nition and Examples De nition

More information

II. Determinant Functions

II. Determinant Functions Supplemental Materials for EE203001 Students II Determinant Functions Chung-Chin Lu Department of Electrical Engineering National Tsing Hua University May 22, 2003 1 Three Axioms for a Determinant Function

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved October 9, 200 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Chapter 1 Vector Spaces

Chapter 1 Vector Spaces Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

AN INTRODUCTION TO CONVEXITY

AN INTRODUCTION TO CONVEXITY AN INTRODUCTION TO CONVEXITY GEIR DAHL NOVEMBER 2010 University of Oslo, Centre of Mathematics for Applications, P.O.Box 1053, Blindern, 0316 Oslo, Norway (geird@math.uio.no) Contents 1 The basic concepts

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

3 Matrix Algebra. 3.1 Operations on matrices

3 Matrix Algebra. 3.1 Operations on matrices 3 Matrix Algebra A matrix is a rectangular array of numbers; it is of size m n if it has m rows and n columns. A 1 n matrix is a row vector; an m 1 matrix is a column vector. For example: 1 5 3 5 3 5 8

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

The Gauss-Jordan Elimination Algorithm

The Gauss-Jordan Elimination Algorithm The Gauss-Jordan Elimination Algorithm Solving Systems of Real Linear Equations A. Havens Department of Mathematics University of Massachusetts, Amherst January 24, 2018 Outline 1 Definitions Echelon Forms

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

1111: Linear Algebra I

1111: Linear Algebra I 1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Michaelmas Term 2015 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Michaelmas Term 2015 1 / 10 Row expansion of the determinant Our next goal is

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Discrete Optimization 23

Discrete Optimization 23 Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

ECON 186 Class Notes: Linear Algebra

ECON 186 Class Notes: Linear Algebra ECON 86 Class Notes: Linear Algebra Jijian Fan Jijian Fan ECON 86 / 27 Singularity and Rank As discussed previously, squareness is a necessary condition for a matrix to be nonsingular (have an inverse).

More information

7. Dimension and Structure.

7. Dimension and Structure. 7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

1. Let r, s, t, v be the homogeneous relations defined on the set M = {2, 3, 4, 5, 6} by

1. Let r, s, t, v be the homogeneous relations defined on the set M = {2, 3, 4, 5, 6} by Seminar 1 1. Which ones of the usual symbols of addition, subtraction, multiplication and division define an operation (composition law) on the numerical sets N, Z, Q, R, C? 2. Let A = {a 1, a 2, a 3 }.

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

MA106 Linear Algebra lecture notes

MA106 Linear Algebra lecture notes MA106 Linear Algebra lecture notes Lecturers: Diane Maclagan and Damiano Testa 2017-18 Term 2 Contents 1 Introduction 3 2 Matrix review 3 3 Gaussian Elimination 5 3.1 Linear equations and matrices.......................

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES ENGINEERING MATH 1 Fall 2009 VECTOR SPACES A vector space, more specifically, a real vector space (as opposed to a complex one or some even stranger ones) is any set that is closed under an operation of

More information

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides

More information

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S 1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

1 Maximal Lattice-free Convex Sets

1 Maximal Lattice-free Convex Sets 47-831: Advanced Integer Programming Lecturer: Amitabh Basu Lecture 3 Date: 03/23/2010 In this lecture, we explore the connections between lattices of R n and convex sets in R n. The structures will prove

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

MATH 106 LINEAR ALGEBRA LECTURE NOTES

MATH 106 LINEAR ALGEBRA LECTURE NOTES MATH 6 LINEAR ALGEBRA LECTURE NOTES FALL - These Lecture Notes are not in a final form being still subject of improvement Contents Systems of linear equations and matrices 5 Introduction to systems of

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

Lecture slides by Kevin Wayne

Lecture slides by Kevin Wayne LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information