Sandwich Theorem and Calculation of the Theta Function for Several Graphs

Brigham Young University, BYU ScholarsArchive, All Theses and Dissertations. Sandwich Theorem and Calculation of the Theta Function for Several Graphs. Marcia Ling Riddle, Brigham Young University - Provo. BYU ScholarsArchive Citation: Riddle, Marcia Ling, "Sandwich Theorem and Calculation of the Theta Function for Several Graphs" (2003). All Theses and Dissertations. This Thesis is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in All Theses and Dissertations by an authorized administrator of BYU ScholarsArchive. For more information, please contact scholarsarchive@byu.edu.

2 SANDWICH THEOREM AND CALCULATION OF THE THETA FUNCTION FOR SEVERAL GRAPHS by Marcia Riddle A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science Department of Mathematics Brigham Young University April 2003

3 BRIGHAM YOUNG UNIVERSITY GRADUATE COMMITTEE APPROVAL of a thesis submitted by Marcia Riddle This thesis has been read by each member of the following graduate committee and by majority vote has been found to be satisfactory. Date Wayne W. Barrett, Chair Date Rodney W. Forcade Date R. Vencil Skarda

BRIGHAM YOUNG UNIVERSITY As chair of the candidate's graduate committee, I have read the thesis of Marcia Riddle in its final form and have found that (1) its format, citations, and bibliographical style are consistent and acceptable and fulfill university and department style requirements; (2) its illustrative materials including figures, tables, and charts are in place; and (3) the final manuscript is satisfactory to the graduate committee and is ready for submission to the university library. Date Wayne W. Barrett Chair, Graduate Committee Accepted for the Department Tyler J. Jarvis Graduate Coordinator Accepted for the College G. Rex Bryce, Associate Dean College of Physical and Mathematical Sciences

ABSTRACT

SANDWICH THEOREM AND CALCULATION OF THE THETA FUNCTION FOR SEVERAL GRAPHS

Marcia Riddle, Department of Mathematics, Master of Science

This paper includes some basic ideas about the computation of a function ϑ(G), the theta number of a graph G, which is known as the Lovász number of G. ϑ(Gᶜ) lies between two hard-to-compute graph numbers: ω(G), the size of the largest clique in a graph G, and χ(G), the minimum number of colors needed to properly color the vertices of G. Lovász and Grötschel called this the Sandwich Theorem. Donald E. Knuth gives four additional definitions of ϑ, namely ϑ₁, ϑ₂, ϑ₃, ϑ₄, and proves that they are all equal. First I am going to describe the proof of the equality of ϑ, ϑ₁ and ϑ₂, and then I will show the calculation of the ϑ function for some specific graphs: Kₙ, graphs related to Kₙ, and Cₙ. This will help us understand the ϑ function, an important function in graph theory. Some of the results are calculated in different ways. This will benefit students who have a basic knowledge of graph theory and want to learn more about the theta function.

ACKNOWLEDGEMENTS

The author of this thesis would like to express appreciation for the tremendous help given by Dr. Wayne Barrett in the preparation and presentation of the key ideas contained in this thesis. The author would also like to express appreciation to her graduate committee and to the Mathematics Department. Also, she expresses appreciation to Steven Butler, who prepared the pictures and cleaned up the formatting of the final form of the thesis.

Contents

1 Terminology
1.1 STAB(G), TH(G) and QSTAB(G)
1.2 The Theta Function
2 Two Additional Functions ϑ₁(G) and ϑ₂(G)
2.1 The ϑ₁(G) function
2.2 The ϑ₂(G) function
3 ϑ₂(G) and Kₙ
3.1 ϑ₂(Kₙ) and ϑ₂(Kₙᶜ)
3.2 ϑ₂(G) for complete multipartite graphs
3.3 ϑ₂(G) for the union of two complete graphs
4 ϑ₂(G) and Cₙ
References

1 Terminology

Graph: Given a nonempty finite set V, let V⁽²⁾ be the set of all 2-element subsets of V. A graph G = (V, E) consists of two things, a nonempty finite set V called the vertices of G and a (possibly empty) subset E of V⁽²⁾ called the edges of G. Let n = |V| be the number of vertices of G. When more than one graph is under consideration, it may be useful to write V(G) and E(G) for the vertex and edge sets of G, respectively. We will use the notation ab instead of {a, b} to represent an edge. For example, let C₄ be the graph with 4 vertices, V = {1, 2, 3, 4} and E = {12, 13, 24, 34}, shown in Figure 1.

Figure 1: Graph of C₄

Subgraph: A subgraph of a graph G is a graph H such that V(H) ⊆ V(G) and E(H) ⊆ E(G).

Induced subgraph: An induced subgraph of G is a subgraph H such that every edge of G whose vertices are contained in V(H) belongs to E(H).

Complement Gᶜ of G: the graph which has V(G) as its vertex set, and in which two vertices are adjacent if and only if they are not adjacent in G. If G has n vertices, then Gᶜ can be constructed by removing from Kₙ all the edges of G. Note that the complement of a complete graph is an empty graph. An example of a graph and its complement is seen in Figure 2.

Positive definite (positive semidefinite) matrix: a real n × n symmetric matrix A which satisfies xᵀAx > 0 for all x ∈ ℝⁿ \ {0} (xᵀAx ≥ 0 for all x ∈ ℝⁿ).

Negative definite (negative semidefinite) matrix: a real n × n symmetric matrix A

Figure 2: A graph and its complement

is negative definite if −A is positive definite and is negative semidefinite if −A is positive semidefinite.

Clique: a nonempty set of pairwise adjacent vertices. Some cliques in the graph G in Figure 3 are {1, 2}, {3, 4, 5}, and {6, 7}.

Figure 3: A graph with three large cliques

Clique number: the maximum number of vertices of a clique in a graph. It is denoted by ω(G). The clique number of G in Figure 3 is 3.

Independent set: a set of pairwise nonadjacent vertices of G. This is also called a stable set. Some independent sets of the graph G in Figure 3 are: V₁ = {1, 3, 7}, V₂ = {2, 5, 6}, V₃ = {1, 4, 6}. So, S ⊆ V(G) is an independent set if and only if no two of its vertices are adjacent, or if and only if S is a clique in Gᶜ.

Independence number, α(G) = ω(Gᶜ): the maximum number of vertices in an independent set of G.

Chromatic number χ(G): the minimum number of independent sets that partition V(G). A graph G is m-partite if V(G) can be partitioned into m or fewer independent sets. The independent sets in a specified partition are partite sets. The graph G in Figure 3 has chromatic number 3 and is 3-partite.

Clique cover number: the smallest number of cliques that cover the vertices of G.
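The quantities just defined are easy to check by brute force on small graphs. The sketch below is a minimal Python illustration (the 7-vertex graph used here is hypothetical, chosen in the spirit of Figure 3 rather than copied from the thesis); it also exhibits the identity α(G) = ω(Gᶜ).

```python
from itertools import combinations

def complement(n, edges):
    """Edges of G^c: all 2-element subsets of {0,...,n-1} that are not edges of G."""
    return {frozenset(p) for p in combinations(range(n), 2)} - edges

def is_clique(s, edges):
    """A set is a clique when every pair of its vertices is an edge."""
    return all(frozenset(p) in edges for p in combinations(s, 2))

def clique_number(n, edges):
    """omega(G): size of the largest set of pairwise adjacent vertices."""
    return max(k for k in range(1, n + 1)
               if any(is_clique(s, edges) for s in combinations(range(n), k)))

# Hypothetical 7-vertex graph: a triangle {2,3,4} plus the edges 01 and 56.
n = 7
edges = {frozenset(e) for e in [(0, 1), (2, 3), (2, 4), (3, 4), (5, 6)]}

omega = clique_number(n, edges)                 # clique number of G
alpha = clique_number(n, complement(n, edges))  # independence number = omega(G^c)
print(omega, alpha)  # 3 3
```

For this graph the largest clique is the triangle, and a largest independent set picks one vertex from each of the three cliques, so both numbers are 3.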

It is denoted by χ̄(G). For example, χ̄(G) = 3 for G in Figure 3. We can also think of it as the minimum number of independent sets that partition V(Gᶜ). It is clear that χ̄(G) = χ(Gᶜ).

Perfect graph: a graph G such that each induced subgraph G′ of G satisfies ω(G′) = χ(G′). The graph G in Figure 3 is perfect.

Imperfect graph: a graph that is not perfect. The graph C₅ (shown in Figure 4) is an imperfect graph because the clique number is ω(G) = 2 but the chromatic number is χ(G) = 3. It is the only imperfect graph on n ≤ 5 vertices.

Figure 4: C₅

Dot product: The dot product of (column) vectors a and b is a·b = aᵀb.

Orthogonal vectors: the vectors a, b ∈ ℝⁿ are orthogonal if a·b = 0.

Orthogonal matrix: a square matrix Q such that QᵀQ is the identity matrix I. In other words, Q is orthogonal if and only if its columns are unit vectors perpendicular to each other.

Symmetric matrix: A matrix A is symmetric if A = Aᵀ. For any symmetric matrix A we have A = QDQᵀ where Q is orthogonal and D is a diagonal matrix.

Orthogonal labeling: an assignment of vectors a_v ∈ ℝᵈ to each vertex v of a graph G such that a_u·a_v = 0 for each pair of nonadjacent vertices u and v. In other words, whenever a_u is not perpendicular to a_v in the labeling, u and v are adjacent in the graph G.

Convex set: A set in Euclidean space ℝᵈ is a convex set if it contains all the line segments connecting any pair of its points.

Convex hull: Given a set of vectors S ⊆ ℝⁿ, the convex hull of S is the smallest

convex set containing S.

1.1 STAB(G), TH(G) and QSTAB(G)

Definition 1. The incidence vector Xˢ of a set S of vertices of a graph G is defined by

X_i^S = 1 if i ∈ S, and X_i^S = 0 if i ∉ S.

Typically, S will be a clique or independent set.

Definition 2. Given a graph G, STAB(G) is the convex hull of incidence vectors of all stable sets of G.

Example. Consider K₃ (a clique with 3 vertices): the independent sets are ∅, {1}, {2}, {3} and the corresponding incidence vectors are

[0, 0, 0]ᵀ, [1, 0, 0]ᵀ, [0, 1, 0]ᵀ, [0, 0, 1]ᵀ.

Figure 5: K₃ and STAB(K₃)

Definition 3. QSTAB(G) is the set in ℝⁿ defined by

x_i ≥ 0 for each i ∈ V,
Σ_{i∈K} x_i ≤ 1 for each clique K of G.

It suffices to consider the maximal cliques of G. In the example of K₃, these become x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0, and x₁ + x₂ + x₃ ≤ 1. It is clear that the last inequality implies x₁ + x₂ ≤ 1, corresponding to the nonmaximal clique {1, 2}. For K₃, STAB(K₃) = QSTAB(K₃). These can be seen in Figure 5.
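The stable sets of K₃ and their incidence vectors (whose convex hull is STAB(K₃)) can be enumerated directly. A minimal Python sketch, with vertices numbered 0, 1, 2 rather than 1, 2, 3:

```python
from itertools import combinations

def stable_sets(n, edges):
    """All independent (stable) sets of a graph on vertices 0..n-1."""
    out = []
    for k in range(n + 1):
        for s in combinations(range(n), k):
            # stable: no pair of chosen vertices is an edge
            if all(frozenset(p) not in edges for p in combinations(s, 2)):
                out.append(set(s))
    return out

def incidence_vector(s, n):
    return [1 if i in s else 0 for i in range(n)]

# K_3: every pair of the three vertices is an edge.
edges = {frozenset(p) for p in combinations(range(3), 2)}
ss = stable_sets(3, edges)
for s in ss:
    print(sorted(s), incidence_vector(s, 3))
```

This prints the four stable sets ∅, {0}, {1}, {2} with incidence vectors [0,0,0], [1,0,0], [0,1,0], [0,0,1], matching the example above.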

Definition 4. The cost c(u_i) of a vector u_i in an orthogonal labeling of G is defined to be 0 if u_i = 0, and otherwise

c(u_i) = u_{i1}² / ‖u_i‖² = u_{i1}² / (u_{i1}² + u_{i2}² + ⋯ + u_{id}²).

Because c(u) is a homogeneous function, we may whenever convenient take all nonzero vectors in an orthogonal labeling to be unit vectors.

Definition 5. Given a graph G = (V, E) on n vertices,

TH(G) = {x ≥ 0 : Σ_{i∈V} c(u_i)x_i ≤ 1 for all orthogonal labelings u of G}.

Notice that TH(G) is the intersection of infinitely many halfspaces, so TH(G) is a convex set.

Lemma 1. If S is a stable set in G and u is an orthogonal labeling of G, then Σ_{i∈S} c(u_i) ≤ 1.

Proof. Let S = {i₁, i₂, …, i_k}. It suffices to consider the case when {u_{i₁}, u_{i₂}, …, u_{i_k}} is a set of k orthonormal vectors. There exists a d × d orthogonal matrix Q whose first k columns are u_{i₁}, u_{i₂}, …, u_{i_k}. Since the length of the first row of Q is 1,

u_{i₁1}² + u_{i₂1}² + ⋯ + u_{i_k1}² ≤ 1,

and the left side equals c(u_{i₁}) + c(u_{i₂}) + ⋯ + c(u_{i_k}) = Σ_{i∈S} c(u_i). Hence Σ_{i∈S} c(u_i) ≤ 1.

Lemma 2. STAB(G) ⊆ TH(G) ⊆ QSTAB(G).

Proof. If S is a stable set, Xˢ is its incidence vector and {u₁, u₂, …, uₙ} is an orthogonal labeling of G, then by the previous lemma,

Σ_{i∈V} c(u_i)X_i^S = Σ_{i∈S} c(u_i) + Σ_{i∉S} c(u_i)·0 ≤ 1.

Since TH(G) contains each incidence vector, TH(G) contains the convex hull of incidence vectors of stable sets. Therefore, TH(G) ⊇ STAB(G).

For each clique K of G, define the map from V to ℝ by i ↦ X_i^K. If ij ∉ E, either i ∉ K or j ∉ K, so X_i^K X_j^K = 0. So this map is an orthogonal labeling. Therefore, if x ∈ TH(G),

1 ≥ Σ_{i∈V} c(X_i^K)x_i = Σ_{i∈V} X_i^K x_i = Σ_{i∈K} x_i + Σ_{i∉K} 0·x_i = Σ_{i∈K} x_i,

so x ∈ QSTAB(G). Hence TH(G) ⊆ QSTAB(G).

1.2 The Theta Function

Definition 6. Given a graph G, we define

ϑ(G) = max{Σ_{i∈V} x_i : x ∈ TH(G)}.

Similar definitions can be given for STAB and QSTAB:

α(G) = max{Σ_{i∈V} x_i : x ∈ STAB(G)},
κ(G) = max{Σ_{i∈V} x_i : x ∈ QSTAB(G)}.

First, we need to show that this definition of α(G) is in fact the size of the largest stable set in G. Let G be a graph with α(G) = m. Let S be an independent set of size m and let Xˢ = [x₁, x₂, …, xₙ]ᵀ be the incidence vector of S. Then Σ_{i=1}^n x_i = m. So,

α(G) = m ≤ max{Σ x_i : x ∈ STAB(G)}.

For example, consider the graph G₁ shown in Figure 6. For this graph we have α(G₁) = 2. The incidence vector of the stable set {1, 4} is x = [1, 0, 0, 1]ᵀ ∈ STAB(G₁), so we have Σ_{i=1}^4 x_i = 2. It is clear that 2 ≤ max{Σ x_i : x ∈ STAB(G₁)}. So α(G₁) ≤ max{Σ x_i : x ∈ STAB(G₁)}.

Figure 6: G₁

We now prove the reverse inequality. We know that STAB(G) is a convex set. Let {X^{S₁}, X^{S₂}, …, X^{Sₘ}} be the set of incidence vectors of the stable sets of G. Then any vector x ∈ STAB(G) is a convex combination of X^{S₁}, X^{S₂}, …, X^{Sₘ}, so

x = Σ_{i=1}^m a_i X^{S_i}, where Σ_{i=1}^m a_i = 1 and a_i ≥ 0 for i = 1, 2, …, m.

Since α(G) is the maximum number of vertices in an independent set of G, for each vector X^{S_i} we have α(G) ≥ Σ_{j=1}^n X_j^{S_i}, so

Σ_{j=1}^n x_j = Σ_{j=1}^n Σ_{i=1}^m a_i X_j^{S_i} = Σ_{i=1}^m a_i Σ_{j=1}^n X_j^{S_i} ≤ Σ_{i=1}^m a_i α(G) = α(G) Σ_{i=1}^m a_i = α(G),

so α(G) ≥ max{Σ_{i∈V} x_i : x ∈ STAB(G)}. Therefore,

α(G) = max{Σ_{i∈V} x_i : x ∈ STAB(G)}.

This shows that our two different definitions of α(G) are consistent.

We want to prove the sandwich theorem mentioned at the beginning of the paper: for any graph G, ϑ(Gᶜ) lies between the clique number ω(G) and the chromatic number χ(G).

Theorem 1 (Sandwich Theorem). ω(G) ≤ ϑ(Gᶜ) ≤ χ(G).

Proof. Since α(Gᶜ) = ω(G) and χ̄(Gᶜ) = χ(G), this is equivalent to proving that α(Gᶜ) ≤ ϑ(Gᶜ) ≤ χ̄(Gᶜ), or α(G) ≤ ϑ(G) ≤ χ̄(G).

15 Since ϑ(g) = max{ i V x i x TH(G)}, α(g) = max{ x V x i x STAB(G)}, and STAB(G) TH(G), we have α(g) ϑ(g). Similarly, since TH(G) QSTAB(G), we have ϑ(g) κ(g). Now, we need to show that κ(g) χ(g). Suppose K,, K p is a smallest set of cliques that cover the vertices of G, and let x QSTAB(G). Then Therefore, n x i = i= p j= i K j x i p = p = χ(g). j= κ(g) = max{ i V x i x QSTAB(G)} χ(g). Combining, we have α(g) ϑ(g) κ(g) χ(g), which completes the proof. For example, consider the cyclic graph C 5 in Figure 4 with vertices {, 2, 3, 4, 5}. For x QSTAB(G), x + x 2, x 2 + x 3, x 3 + x 4, x 4 + x 5, x 5 + x. Then 2(x +x 2 +x 3 +x 4 +x 5 ) 5. Hence κ(c 5 ) 5 2. Since ( 2, 2, 2, 2, 2 ) QSTAB(C 5), κ(c 5 ) = 5. But χ(c 2 5) = 3 > κ(c 5 ). Corollary. If G is a perfect graph, then ϑ(g) = α(g). Proof. By the Perfect Graph Theorem, G c is a perfect graph. Therefore, we have ω(g c ) = χ(g c ). By the Sandwich Theorem, ω(g c ) ϑ(g) χ(g c ), so ϑ(g) = ω(g c ). Since α(g) = ω(g c ), therefore ϑ(g) = α(g). 2 Two Additional Functions ϑ (G) and ϑ 2 (G) We have just shown that ϑ(g) lies between χ(g) and α(g). In this part, we are going to introduce the other two functions ϑ (G) and ϑ 2 (G). They are different ways of defining ϑ(g). This will not only help us to understand ϑ(g) but also help us to compute ϑ(g). We will prove that ϑ(g) ϑ (G) ϑ 2 (G). In [3], Donald Knuth introduced two additional functions ϑ 3, ϑ 4 in order to prove ϑ 2 ϑ. Therefore ϑ, ϑ, ϑ 2 are all equivalent. 8

2.1 The ϑ₁(G) function

If G is a graph,

ϑ₁(G) = min over all orthogonal labelings a of max_v (1/c(a_v)).

The max is ∞ if there is some v such that c(a_v) = 0.

Lemma 3. ϑ(G) ≤ ϑ₁(G).

Proof. If G is a graph, suppose x ∈ TH(G) and a is an orthogonal labeling of G. Then

ϑ(G) = max{Σ_v x_v : x ∈ TH(G)}
     = max{Σ_v (1/c(a_v)) c(a_v) x_v : x ∈ TH(G)}
     ≤ (max_v 1/c(a_v)) · max{Σ_v c(a_v) x_v : x ∈ TH(G)}.

By the definition of TH(G), Σ_v c(a_v)x_v ≤ 1. Therefore,

ϑ(G) ≤ max_v (1/c(a_v)).

Since this inequality holds for each orthogonal labeling,

ϑ(G) ≤ min_a max_v (1/c(a_v)) = ϑ₁(G).

2.2 The ϑ₂(G) function

Definition 7. Matrix A is a feasible matrix for the graph G if A is indexed by the vertices of G and

(i) A is real and symmetric,
(ii) a_ii = 1 for all i ∈ V,
(iii) a_ij = 1 whenever i and j are nonadjacent in G.

The rank one matrix A₀ = [1, 1, …, 1]ᵀ[1, 1, …, 1] is feasible for any graph G and has one nonzero eigenvalue

λ₁(A₀) = tr(A₀) = Σ_{i=1}^n 1 = n.

If A is any real symmetric matrix with eigenvalues λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ, then by the Spectral Theorem, A = QDQᵀ for some orthogonal matrix Q and diagonal matrix D = diag(λ₁, λ₂, …, λₙ). Since all the eigenvalues of A are real, by the Rayleigh-Ritz Theorem, A has the maximum eigenvalue

λ₁(A) = max{xᵀAx : ‖x‖₂ = 1, x ∈ ℝⁿ}.

Lemma 4. The set of feasible A with λ₁(A) ≤ n is compact.

Proof. Starting with λ₁(A) + λ₂(A) + ⋯ + λₙ(A) = tr(A) = n, we use the ordering of the eigenvalues to get

(n − 1)λ₁(A) + λₙ(A) ≥ n, so (n − 1)n + λₙ(A) ≥ n.

In particular, we have λₙ(A) ≥ −(n − 2)n, so that

−(n − 2)n ≤ λ_i(A) ≤ n for i = 1, …, n.

This shows that λ_i²(A) ≤ n²(n − 2)², and so

Σ_{i,j=1}^n a_ij² = Σ_{i=1}^n λ_i²(A) ≤ n³(n − 2)².

Therefore, the set is bounded. The set is clearly closed, so it is compact.

From Lemma 4 and the fact that λ₁(A) is a continuous function of the entries of A, we know that the minimum value of λ₁(A) exists.
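The claim about the rank one matrix A₀ is easy to confirm numerically; this quick sketch checks that its spectrum consists of n − 1 zeros together with the single eigenvalue tr(A₀) = n.

```python
import numpy as np

# A0 = [1,...,1]^T [1,...,1] is the all-ones matrix; it has rank one,
# so its only nonzero eigenvalue equals its trace, n.
n = 6
A0 = np.ones((n, n))
eigvals = np.sort(np.linalg.eigvalsh(A0))
print(np.allclose(eigvals[:-1], 0), round(eigvals[-1], 6))  # True 6.0
```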

Definition 8. Given a graph G,

ϑ₂(G) = min{λ₁(A) : A is a feasible matrix for G}.

Lemma 5. ϑ₁(G) ≤ ϑ₂(G).

Proof. Let A be a feasible matrix such that ϑ₂(G) = λ₁, and let B = λ₁I − A. Let λ₂ ≥ λ₃ ≥ ⋯ ≥ λₙ be the remaining eigenvalues of A. The eigenvalues of B are λ₁ − λ_i. It is clear that all the eigenvalues of B are nonnegative, so B is positive semidefinite. If

A = Q diag(λ₁, λ₂, …, λₙ) Qᵀ,

then

B = Q diag(λ₁ − λ₁, λ₁ − λ₂, …, λ₁ − λₙ) Qᵀ
  = Q diag((√(λ₁ − λ₁))², (√(λ₁ − λ₂))², …, (√(λ₁ − λₙ))²) Qᵀ
  = XᵀX,

where

X = Q diag(√(λ₁ − λ₁), √(λ₁ − λ₂), …, √(λ₁ − λₙ)) Qᵀ.

Let X = [x₁, x₂, …, xₙ] (i.e., the x_i are the columns of X), so that we have b_ij = x_iᵀx_j. Let u_i = [1, x_iᵀ]ᵀ ∈ ℝⁿ⁺¹, that is, the vector x_i ∈ ℝⁿ with an extra first coordinate equal to 1.

If i ≠ j and ij is not in E, then

u_i · u_j = 1 + x_iᵀx_j = 1 + b_ij = a_ij + b_ij,

since a_ij = 1 for nonadjacent i and j. Since B = λ₁I − A, we have b_ij = −a_ij for i ≠ j, so u_i · u_j = a_ij − a_ij = 0. Therefore, u₁, u₂, …, uₙ is an orthogonal labeling of G. Note, appealing to the definition of cost, that

c(u_i) = 1/(1 + ‖x_i‖²).

So for each i = 1, 2, …, n,

1/c(u_i) = 1 + ‖x_i‖² = 1 + x_i · x_i = 1 + b_ii = a_ii + λ₁ − a_ii = λ₁.

This implies

λ₁ = max_i (1/c(u_i)).

Hence,

ϑ₁(G) = min_a max_i (1/c(a_i)) ≤ max_i (1/c(u_i)) = λ₁ = ϑ₂(G).

This completes the proof.

In [3], Donald E. Knuth proved that ϑ₂(G) ≤ ϑ(G) by proving ϑ₂(G) ≤ ϑ₃(G) ≤ ϑ₄(G) ≤ ϑ(G). So we can conclude that ϑ(G) = ϑ₁(G) = ϑ₂(G).

3 ϑ₂(G) and Kₙ

Now we are going to use ϑ₂(G) to calculate ϑ(G) for the two simplest graphs, Kₙ and Kₙᶜ.

3.1 ϑ₂(Kₙ) and ϑ₂(Kₙᶜ)

Lemma 6. ϑ(Kₙ) = ϑ₂(Kₙ) = 1.
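The Lemma 5 construction can be carried out numerically. The sketch below does it for G = C₅, assuming the optimal feasible matrix derived later in the thesis (edge value x = (√5 − 3)/2, which gives λ₁ = √5): it forms B = λ₁I − A = XᵀX, builds the labels u_i = (1, x_i), and checks orthogonality for a nonadjacent pair and the identity 1/c(u_i) = λ₁.

```python
import numpy as np

# Feasible matrix for C_5 with every edge entry equal to x = (sqrt(5)-3)/2
# (the optimal value derived later in the thesis); nonedges stay at 1.
n = 5
x = (np.sqrt(5) - 3) / 2
A = np.ones((n, n))
for i in range(n):
    j = (i + 1) % n
    A[i, j] = A[j, i] = x          # edges of C_5 get the value x
lam1 = np.linalg.eigvalsh(A).max()  # = sqrt(5)

B = lam1 * np.eye(n) - A            # positive semidefinite
w, Q = np.linalg.eigh(B)
X = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T  # B = X^T X
U = np.vstack([np.ones(n), X])      # column i is u_i = (1, x_i)

# Nonadjacent vertices 0 and 2 get orthogonal labels, and each
# 1/c(u_i) = ||u_i||^2 = 1 + ||x_i||^2 equals lambda_1 = sqrt(5):
print(abs(U[:, 0] @ U[:, 2]) < 1e-8)            # True
print(np.allclose((U ** 2).sum(axis=0), lam1))  # True
```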

20 Consider the graph K n. We know that K n is a perfect graph. By the Sandwich Theorem, ϑ(k n ) = α(k n ) =. We also obtain this conclusion in two different ways. There are no missing edges in the graph K n, so any set of n vectors is an orthogonal labeling. Let u i = [ a, b] T for i =, 2,..., n, then c(u i ) = a/(a + b). Now consider the following, c(ui ) x i = a a + b x i, rearranging we get xi + b a. From this last statement, it is clear that if we let b = 0 then x i. It follows that TH(K n ) = {x 0 x i }. Therefore, ϑ(k n ) = max{ i V x i x TH(K n )} = max{ i V x i x i, x 0} =. We should get the same result if we look at ϑ 2 (K n ). Let A n be the feasible matrix of K n, and let λ be the maximum eigenvalue of A n, then the matrix A n has the form x 2 x (n ) x n x 2 x 2(n ) x 2n A n = x (n ) x 2(n ) x (n )n x n x 2n x (n )n Since for any graph G nλ = λ + λ + + λ λ + λ λ n = tra n = n, we have that λ. On the other hand, A n becomes I when x ij = 0 for any i j, and in that case λ =. So ϑ 2 (K n ) = min λ =. Lemma 7. ϑ 2 (K c n) = n. 3

The graph Kₙᶜ is very simple. It contains n vertices without any edges, so no two vertices are adjacent. Since Kₙᶜ is a perfect graph, by Corollary 1, ϑ(Kₙᶜ) = α(Kₙᶜ) = n.

We can also see this result very easily from the definition of the ϑ₂ function. Since the feasible matrix Aₙ of Kₙᶜ is the n × n matrix with all entries equal to 1, the rank of Aₙ is 1. Therefore, there is only one nonzero eigenvalue and it equals tr(Aₙ) = n. Hence ϑ(Kₙᶜ) = ϑ₂(Kₙᶜ) = n.

3.2 ϑ₂(G) for complete multipartite graphs

Definition 9. The union of graphs G and H with V(G) ∩ V(H) = ∅, written G ∪ H, has vertex set V(G) ∪ V(H) and edge set E(G) ∪ E(H). Similarly, the join of graphs G and H with V(G) ∩ V(H) = ∅, written G ∨ H, is obtained from G ∪ H by adding the edges {xy : x ∈ V(G), y ∈ V(H)}.

Let G = K_{n₁,n₂,…,n_k} be a complete multipartite graph; then

K_{n₁,n₂,…,n_k} = (K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k})ᶜ = K_{n₁}ᶜ ∨ K_{n₂}ᶜ ∨ ⋯ ∨ K_{n_k}ᶜ.

For example, in Figure 7 we have the union and the join of the two graphs P₃ and K₃. The bold edge in the join indicates that we join all vertices between these two sets.

Figure 7: G ∪ H and G ∨ H

Lemma 8. ϑ(K_{n₁,n₂,…,n_k}) = max{n₁, n₂, …, n_k}.

We will prove this lemma in two different ways.

Proof. For our first proof, let G = K_{n₁,n₂,…,n_k}. We show that G is a perfect graph. This follows since

ω(K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k}) = max{n₁, n₂, …, n_k} = χ(K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k}),

with a similar equality for each induced subgraph. So K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k} is a perfect graph. By the Perfect Graph Theorem, a graph is perfect if and only if its complement is perfect. Therefore, K_{n₁,n₂,…,n_k} = (K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k})ᶜ is perfect. So by Corollary 1,

ϑ(K_{n₁,n₂,…,n_k}) = α(K_{n₁,n₂,…,n_k}) = ω(K_{n₁} ∪ K_{n₂} ∪ ⋯ ∪ K_{n_k}) = max{n₁, n₂, …, n_k}.

Before we use the second method of proving the lemma, we need to state a useful result.

Theorem 2 (Interlacing Inequalities). Let A ∈ Mₙ be Hermitian and for any i ∈ N, let A(i) be the principal submatrix obtained from A by deleting the i-th row and column. Let λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ be the eigenvalues of A and µ₁ ≥ µ₂ ≥ ⋯ ≥ µₙ₋₁ be the eigenvalues of A(i). Then

λ₁ ≥ µ₁ ≥ λ₂ ≥ µ₂ ≥ λ₃ ≥ µ₃ ≥ ⋯ ≥ λₙ₋₁ ≥ µₙ₋₁ ≥ λₙ.

Corollary 2. Let A ∈ Mₙ be a Hermitian matrix and B ∈ Mₘ be any principal submatrix of A. Then

λ_k(A) ≥ λ_k(B), k = 1, 2, …, m.

Now we are ready to prove Lemma 8 by using the definition of the ϑ₂ function. First, consider an example: let G = K_{3,2,1} = (K₃ ∪ K₂ ∪ K₁)ᶜ. This graph is shown in Figure 8. Let A be a feasible matrix of K_{3,2,1}. Then A has the form

A =
[ 1 1 1 ∗ ∗ ∗ ]
[ 1 1 1 ∗ ∗ ∗ ]
[ 1 1 1 ∗ ∗ ∗ ]
[ ∗ ∗ ∗ 1 1 ∗ ]
[ ∗ ∗ ∗ 1 1 ∗ ]
[ ∗ ∗ ∗ ∗ ∗ 1 ]

Figure 8: K_{3,2,1} = K₃ᶜ ∨ K₂ᶜ ∨ K₁ᶜ

where the remaining entries are any real numbers for which A is symmetric. There are three blocks in A: A₃, A₂, A₁, the feasible matrices for K₃ᶜ, K₂ᶜ and K₁ᶜ, so each block has all entries equal to 1. The eigenvalues of these matrices are:

λ(A₃) = 3, 0, 0;  λ(A₂) = 2, 0;  λ(A₁) = 1.

Let λ₁ be the maximum eigenvalue of A. Then by Corollary 2,

ϑ₂(K_{3,2,1}) = λ₁ ≥ max{λ₁(A₃), λ₁(A₂), λ₁(A₁)} = 3.

However, if we set all remaining entries in A equal to 0, then A = A₃ ⊕ A₂ ⊕ A₁, which has eigenvalues 3, 0, 0, 2, 0, 1, and so λ₁ = 3. Therefore ϑ₂(K_{3,2,1}) = 3.

We are now ready to generalize this for the proof of Lemma 8.

Proof. For our second proof, consider the feasible matrix Aₙ of the graph G = K_{n₁,n₂,…,n_k}. Then Aₙ has diagonal blocks J_{n₁}, J_{n₂}, …, J_{n_k}, where all entries outside the blocks are real numbers which preserve symmetry and the J_{n_i} are the feasible matrices for K_{n_i}ᶜ, so all their entries equal 1. Let λ₁ be the maximum eigenvalue. By Corollary 2, λ₁(Aₙ) ≥ λ₁(J_{n_i}) for i = 1, 2, …, k. Since the eigenvalues of the matrix J_{n_i} are n_i and 0, λ₁(J_{n_i}) = n_i. Therefore

λ₁(Aₙ) ≥ max{n₁, n₂, …, n_k}.

Equality holds when all the entries of Aₙ outside the blocks equal 0, which attains the minimum value of λ₁. Hence ϑ₂(K_{n₁,n₂,…,n_k}) = max{n₁, n₂, …, n_k}.
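The K_{3,2,1} example can be checked numerically: with the free entries set to 0 the maximum eigenvalue is exactly 3, and (by interlacing, since J₃ remains a principal submatrix) any other choice of free entries can only give λ₁ ≥ 3. A sketch:

```python
import numpy as np
from scipy.linalg import block_diag

# Feasible matrix for K_{3,2,1} with all free (off-block) entries set to 0:
# the diagonal blocks are the all-ones matrices J_3, J_2, J_1.
A = block_diag(np.ones((3, 3)), np.ones((2, 2)), np.ones((1, 1)))
lam0 = np.linalg.eigvalsh(A).max()
print(round(lam0, 6))  # 3.0 = max{3, 2, 1}

# Perturb the free entries with random symmetric values; J_3 stays as a
# principal submatrix, so interlacing forces lambda_1 >= 3.
rng = np.random.default_rng(0)
S = rng.normal(size=(6, 6))
S = (S + S.T) / 2
mask = (A == 0)                      # positions of the free entries
lam = np.linalg.eigvalsh(A + mask * S).max()
print(lam >= 3 - 1e-9)               # True
```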

3.3 ϑ₂(G) for the union of two complete graphs

We have proved earlier that ϑ(Kₙ) = 1 (Lemma 6). Now we want to show the calculation of the ϑ function for the union of two complete graphs.

Lemma 9. ϑ(Kₙ ∪ Kₘ) = 2.

Proof. Let G = Kₘ ∪ Kₙ, and let A, Aₘ and Aₙ be feasible matrices of G, Kₘ, and Kₙ respectively. Let λ₁(A), λ₁(Aₘ), λ₁(Aₙ) denote the maximum eigenvalues of each matrix. Then Aₘ is an m × m symmetric matrix with all diagonal entries equal to 1 and arbitrary real off-diagonal entries; similarly, Aₙ is an n × n symmetric matrix with all diagonal entries equal to 1 and arbitrary real off-diagonal entries. Since every vertex of Kₙ is nonadjacent to every vertex of Kₘ, A has the form

A =
[ Aₙ  J  ]
[ J   Aₘ ]

where J denotes an all-ones block. Now suppose each off-diagonal entry in Aₙ and Aₘ equals x, and define B = A + (x − 1)I. Then B is the (m + n) × (m + n) symmetric matrix

B =
[ xJ  J  ]
[ J   xJ ]

Since the rank of B is at most two, B has at most two nonzero eigenvalues, so at least m + n − 2 eigenvalues are zero. The characteristic polynomial of B is

t^(m+n) − (m + n)x t^(m+n−1) + mn(x² − 1) t^(m+n−2).

Using the quadratic formula to solve for the possible nonzero eigenvalues, we find they are

t = ((m + n)x ± √((m − n)²x² + 4mn)) / 2.

Hence the eigenvalues of B are

((m + n)x ± √((m − n)²x² + 4mn)) / 2, and 0 with multiplicity m + n − 2.
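The quadratic-formula expression for the two nonzero eigenvalues of B can be verified against a direct eigenvalue computation for sample values of m, n and x:

```python
import numpy as np

# B = [[x*J_n, J], [J, x*J_m]] has rank at most 2; its two nonzero
# eigenvalues should be t = ((m+n)x +/- sqrt((m-n)^2 x^2 + 4mn)) / 2.
m, n, x = 3, 5, -0.7
B = np.block([[x * np.ones((n, n)), np.ones((n, m))],
              [np.ones((m, n)), x * np.ones((m, m))]])
eig = np.sort(np.linalg.eigvalsh(B))

disc = np.sqrt((m - n) ** 2 * x ** 2 + 4 * m * n)
roots = sorted([((m + n) * x + disc) / 2, ((m + n) * x - disc) / 2])

# smallest and largest computed eigenvalues match the formula,
# and the m+n-2 middle eigenvalues are zero:
print(np.allclose(eig[[0, -1]], roots), np.allclose(eig[1:-1], 0))  # True True
```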

It follows that the eigenvalues of A = B − (x − 1)I are λ(B) − (x − 1):

((m + n)x ± √((m − n)²x² + 4mn)) / 2 − (x − 1), and 1 − x with multiplicity m + n − 2.

When x = −1 the eigenvalues are 2 − (m + n) and 2, the latter with multiplicity m + n − 1, so the largest eigenvalue is λ₁ = 2. This shows ϑ₂(Kₙ ∪ Kₘ) ≤ 2. By the Sandwich Theorem, ϑ(Kₙ ∪ Kₘ) ≥ α(Kₙ ∪ Kₘ) = 2; therefore ϑ(Kₙ ∪ Kₘ) = 2.

4 ϑ₂(G) and Cₙ

In this part, we are going to utilize the ϑ₂(G) function to find ϑ(G) for the n-cycle Cₙ, which is an imperfect graph for each odd n greater than three. First we will illustrate how to use ϑ₂ to calculate ϑ(C₅). We first need to prove one important fact. Given a symmetric matrix A, let λ₁(A) denote the maximum eigenvalue of A.

Lemma 10. Let A_i, i = 1, 2, …, k, be symmetric matrices. Then

λ₁(Σ_{i=1}^k A_i) ≤ Σ_{i=1}^k λ₁(A_i).

Proof. Let A and B be symmetric matrices. By the Rayleigh-Ritz theorem,

λ₁(A) = max_{x≠0} xᵀAx / xᵀx,  λ₁(B) = max_{x≠0} xᵀBx / xᵀx.

Since

xᵀ(A + B)x / xᵀx = xᵀAx / xᵀx + xᵀBx / xᵀx ≤ max_{x≠0} xᵀAx / xᵀx + max_{x≠0} xᵀBx / xᵀx = λ₁(A) + λ₁(B),

we have

λ₁(A + B) = max_{x≠0} xᵀ(A + B)x / xᵀx ≤ λ₁(A) + λ₁(B).

The result follows by mathematical induction.

Notice that the Lemma is false if the matrices are not symmetric. For example, if

A = [ 0    m ]      B = [ 0    1/m ]
    [ 1/m  0 ]          [ m    0   ]

then the eigenvalues of A and B are 1 and −1, so λ₁(A) + λ₁(B) = 2. But

A + B = [ 0        m + 1/m ]
        [ m + 1/m  0       ]

and the eigenvalues of A + B are m + 1/m and −(m + 1/m). It is clear that m + 1/m > 2 for any m > 0 with m ≠ 1.

Now we compute ϑ₂(C₅). Consider the graph G = C₅ with V = {1, 2, 3, 4, 5} and edges labeled x₁, x₂, x₃, x₄, x₅. The relationship of the vertices with the edges is shown in Figure 9.

Figure 9: C₅ with edges labeled x₁, …, x₅

The feasible matrix for C₅ is

A₁ =
[ 1   x₁  1   1   x₅ ]
[ x₁  1   x₂  1   1  ]
[ 1   x₂  1   x₃  1  ]
[ 1   1   x₃  1   x₄ ]
[ x₅  1   1   x₄  1  ]

where x₁, x₂, x₃, x₄, x₅ represent real numbers. We want to show that the minimum value of λ₁ occurs when x₁ = x₂ = x₃ = x₄ = x₅. To do this, consider the cyclic permutation matrix

P =
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]
[ 1 0 0 0 0 ]

and define A₂ = PᵀA₁P, which is A₁ with the edge values cyclically shifted. We may view the action of P as rotating each vertex of the graph C₅ one position counterclockwise, or equivalently, each edge one position clockwise. This is seen in Figure 10.

Figure 10: The matrices A₁, A₂, …, A₅ as rotations of the edge labels of C₅

We similarly define A₃ = (P²)ᵀA₁P², A₄ = (P³)ᵀA₁P³ and A₅ = (P⁴)ᵀA₁P⁴; each A_k is A₁ with the edge values x₁, …, x₅ cyclically shifted k − 1 positions. Since A₁, A₂, A₃, A₄ and A₅ are similar, they have the same eigenvalues. Now let

A₀ = (1/5)(A₁ + A₂ + A₃ + A₄ + A₅),

which is the feasible matrix for C₅ with every edge entry equal to x̄ = (1/5)(x₁ + x₂ + x₃ + x₄ + x₅). By Lemma 10 we have

λ₁(A₀) = λ₁[(1/5)(A₁ + A₂ + A₃ + A₄ + A₅)]
       ≤ λ₁((1/5)A₁) + λ₁((1/5)A₂) + λ₁((1/5)A₃) + λ₁((1/5)A₄) + λ₁((1/5)A₅)
       = (1/5)[λ₁(A₁) + λ₁(A₂) + λ₁(A₃) + λ₁(A₄) + λ₁(A₅)]
       = λ₁(A₁).

Therefore, the minimum of λ₁ is attained at a matrix of the form A₀, that is, when x₁ = x₂ = x₃ = x₄ = x₅ = x. I used Maple to compute the eigenvalues of A₀ as follows:

3 + 2x, (−1/2 + √5/2)(−1 + x), (−1/2 + √5/2)(−1 + x), (−1/2 − √5/2)(−1 + x), (−1/2 − √5/2)(−1 + x).
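The conclusion of this averaging argument can be confirmed numerically: with a single edge value x, minimizing the largest eigenvalue of the feasible matrix for C₅ over a grid of x values should give √5 ≈ 2.236 near x = (√5 − 3)/2 ≈ −0.382. A sketch:

```python
import numpy as np

# Feasible matrix for C_5 with every edge entry equal to x: all-ones
# matrix J plus (x - 1) at the adjacency positions.
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

def lam1(x):
    return np.linalg.eigvalsh(np.ones((n, n)) + (x - 1) * adj).max()

xs = np.linspace(-2, 1, 3001)         # grid with step 0.001
vals = [lam1(x) for x in xs]
i = int(np.argmin(vals))
print(round(vals[i], 3), round(xs[i], 3))  # 2.236 -0.382
```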

Notice that there are only three distinct eigenvalues, namely,

λ₁ = 3 + 2x, λ₂ = (−1/2 + √5/2)(−1 + x), λ₃ = (−1/2 − √5/2)(−1 + x).

We know that ϑ₂(C₅) = min λ₁ over all feasible matrices. Figure 11 shows plots of the lines λ₁, λ₂, λ₃ as functions of x.

Figure 11: The eigenvalues λ₁, λ₂, λ₃ of A₀ as functions of x

From the graph (Figure 11) we see that the minimum of the largest eigenvalue occurs at the intersection of λ₁ and λ₃, which occurs at x = (−3 + √5)/2 and gives λ₁ = √5. Therefore,

ϑ₂(C₅) = min λ₁(A) = √5.

Since C₅ᶜ = C₅, the Sandwich Theorem says that ω(C₅) ≤ ϑ(C₅) ≤ χ(C₅), which is 2 < √5 < 3. This is our first example of the sandwich theorem in which the inequalities are strict.

Now we will look at another important property of the ϑ₂ function which enables us to compute ϑ₂(Cₙ). First, we need to introduce some related definitions and prove one lemma.

Definition 10. Let G = (V, E) be a graph with vertex set V = {1, 2, 3, …, n}. The n-by-n adjacency matrix Aₙ(G) = (a_ij) is defined by

a_ij = 1 if ij ∈ E, and a_ij = 0 otherwise.

Definition 11. The eigenvalues of a graph are the eigenvalues of its adjacency matrix.

29 Lemma. Let A n be the adjacency matrix of C n. Then the eigenvalues for A n are λ j = 2 cos 2πj, j = 0,,..., n. n Proof. Assume the vertices of C n are labeled through n. Then the adjacency matrix A n of C n is A n = Assume λ is an eigenvalue of A n and x = [x, x 2,..., x n ] T is a corresponding eigenvector. The equation Ax = λx written in components is x n + x 2 = λx x + x 3 = λx 2 x 2 + x 4 = λx 3. x n + x = λx n We can rewrite this as a single equation x k + x k+ = λx k () with boundary conditions x = x n+ and x 0 = x n. This is a second degree linear difference equation with constant coefficients. For a difference equation, the fundamental solutions have the form r k, r 0. Let x k = r k in equation (). Then we have r k + r k+ = λr k. Dividing by r k, we have r 2 λr + = 0. Let r, r 2 be the two roots of this equation. Then r + r 2 = λ, r r 2 =. 22

First assume that r₁ = r₂. Then r₁ = r₂ = 1 or r₁ = r₂ = −1. Consider r₁ = r₂ = 1; then λ = 2. One solution of equation (1) is x_k = 1ᵏ = 1. Also x_k = k·1ᵏ = k is a solution, since

x_{k−1} + x_{k+1} − λx_k = (k − 1) + (k + 1) − 2k = 0.

So if r₁ = r₂ = 1, the general solution of equation (1) is x_k = c₁ + c₂k. From the boundary conditions we have c₁ = c₁ + nc₂ and c₁ + c₂ = c₁ + (n + 1)c₂. Therefore c₂ = 0. Hence x₁ = x₂ = ⋯ = xₙ = c₁, i.e., c₁[1, 1, …, 1]ᵀ is an eigenvector corresponding to λ = 2.

Similarly, if r₁ = r₂ = −1, then λ = −2 and the solutions of equation (1) are x_k = (−1)ᵏ and x_k = k(−1)ᵏ. The general solution of equation (1) is x_k = c₁(−1)ᵏ + c₂k(−1)ᵏ. From the boundary conditions we have x = c₁[−1, 1, −1, 1, …, 1]ᵀ if n is even, and x = 0 if n is odd. So λ = −2 is an eigenvalue if and only if n is even.

Now we assume that r₁ ≠ r₂. Then the general solution of equation (1) is x_k = c₁r₁ᵏ + c₂r₂ᵏ. From the boundary conditions we have

c₁ + c₂ = c₁r₁ⁿ + c₂r₂ⁿ and c₁r₁ + c₂r₂ = c₁r₁ⁿ⁺¹ + c₂r₂ⁿ⁺¹.

Equivalently,

c₁(1 − r₁ⁿ) + c₂(1 − r₂ⁿ) = 0 and c₁r₁(1 − r₁ⁿ) + c₂r₂(1 − r₂ⁿ) = 0.   (2)

From equations (2) we have

[ 1 − r₁ⁿ       1 − r₂ⁿ      ] [ c₁ ]
[ r₁(1 − r₁ⁿ)   r₂(1 − r₂ⁿ)  ] [ c₂ ]  =  0.

Since [c₁, c₂]ᵀ is nonzero, the matrix

B = [ 1 − r₁ⁿ       1 − r₂ⁿ      ]
    [ r₁(1 − r₁ⁿ)   r₂(1 − r₂ⁿ)  ]

is singular, which means det(B) = 0. That is,

(1 − r₁ⁿ)r₂(1 − r₂ⁿ) − (1 − r₂ⁿ)r₁(1 − r₁ⁿ) = 0,

so

(1 − r₁ⁿ)(1 − r₂ⁿ)(r₂ − r₁) = 0.

For r₁ⁿ = 1, r₁ = e^{2πij/n}, j = 0, 1, …, n − 1. Then r₂ = 1/r₁ = e^{−2πij/n}, which gives possible eigenvalues

λ_j = r₁ + r₂ = e^{2πij/n} + e^{−2πij/n} = 2 cos(2πj/n), j = 0, 1, …, n − 1.

For r₂ⁿ = 1 we obtain the same result. So the only possible eigenvalues are

λ_j = 2 cos(2πj/n), for j = 0, 1, …, n − 1.

Note that j = 0 gives the solution for the case r₁ = r₂ = 1, and when n is even, j = n/2 gives the solution for r₁ = r₂ = −1. Therefore these two previous cases can be combined with this case.

Next, we need to show that the eigenvalues of A, counted with multiplicity, correspond exactly to the λ_j. Consider the matrix B = A − λI, which has −λ on the diagonal, 1 in positions (i, i ± 1), and 1 in the corner positions (1, n) and (n, 1).

If we delete the first and last rows and the first two columns of B, we obtain an upper triangular matrix with 1's on the diagonal, which is invertible. Therefore rank(B) ≥ n − 2, so nullity(B) ≤ 2. So

geometric multiplicity(λ) ≤ 2, and algebraic multiplicity(λ) ≤ 2,

since for a symmetric matrix the two multiplicities agree.

For λ₀ = 2, we consider the matrix B₁ = A − 2I, which has −2 on the diagonal. If we delete the first row and first column of B₁, we obtain the (n − 1) × (n − 1) matrix B₁′ with −2 on the diagonal and 1 on the sub- and superdiagonals. For any x ∈ ℝⁿ⁻¹,

xᵀB₁′x = −2x₁² + 2x₁x₂ − 2x₂² + 2x₂x₃ − ⋯ + 2xₙ₋₂xₙ₋₁ − 2xₙ₋₁²
       = −x₁² − (x₁ − x₂)² − (x₂ − x₃)² − ⋯ − (xₙ₋₂ − xₙ₋₁)² − xₙ₋₁² ≤ 0.

So B₁′ is negative semidefinite. If xᵀB₁′x = 0, then x₁ = 0, x₁ − x₂ = 0, x₂ − x₃ = 0, …, xₙ₋₂ − xₙ₋₁ = 0. Therefore x₁ = x₂ = ⋯ = xₙ₋₁ = 0. So if x ≠ 0, then xᵀB₁′x < 0 and B₁′ is negative definite. We know that B₁′ is invertible because −B₁′ is positive definite. So the rank of B₁ is at least n − 1. Also, since 2 is an eigenvalue of A, B₁ is a singular matrix, so n − 1 ≤ rank(B₁) ≤ n − 1. Therefore rank(B₁) = n − 1 and nullity(B₁) = 1. Therefore,

geometric multiplicity(2) = 1, and algebraic multiplicity(2) = 1.

If n is an odd number, there are (n + 1)/2 distinct values among λ_j = 2 cos(2πj/n). The value λ = 2 (j = 0) has algebraic multiplicity 1, and each of the other (n − 1)/2 distinct values has algebraic multiplicity at most 2. Since A has n eigenvalues counted with multiplicity,

n ≤ 1 + 2 · (n − 1)/2 = n,

so equality holds throughout, and each eigenvalue 2 cos(2πj/n), j = 1, …, (n − 1)/2, has multiplicity exactly 2.

If n is an even number, λ = −2 is also an eigenvalue of the matrix A, since for the eigenvector x = [1, −1, 1, −1, …, −1]ᵀ we have Ax = (−2)x. Consider the matrix B₂ = A + 2I, which has 2 on the diagonal. If we delete the first row and the first column, we obtain the (n − 1) × (n − 1) matrix B₂′ with 2 on the diagonal and 1 on the sub- and superdiagonals. By an argument similar to the case λ₀ = 2, B₂′ is positive definite. Therefore B₂′ is invertible, so the rank of B₂ is at least n − 1. On the other hand, B₂ is a singular matrix because −2 is an eigenvalue. So n − 1 ≤ rank(B₂) ≤ n − 1.

34 Therefore rank(b 2 ) = n and nullity(b 2 ) =. So we have geometric multiplicity( 2) =, and algebraic multiplicity( 2) =. So the dimension of the eigenspace of B 2 corresponding to the eigenvalue 2 is one. By a similar argument as for when n is odd, we see that 2 and 2 are eigenvalues of multiplicity one and 2 cos(2πj/n), j =, 2, (n/2) are eigenvalues of multiplicity two. Equivalently, the eigenvalues of A in all cases are λ j = 2 cos 2πj, j = 0,,..., n. n Theorem 3. For any odd number n ϑ(c n ) = { n cos(π/n) +cos(π/n) if n is odd. n if n is even. 2 Proof. Let A n be the adjacency matrix of C n and J n be a n-by-n matrix with all entries equal to. As in the case n = 5, it is sufficient to consider a feasible matrix for C n to be J n ( x)a n. We have just proved that the eigenvalues of the graph C n are 2 cos 2πj, for j = 0,,..., n. n Let u j be the corresponding eigenvector. Note that u 0 = [,,..., ] T is an eigenvector corresponding to λ 0 = 2. Then we have [J n ( x)a n ]u 0 = [n ( x)2]u 0 = (n 2 + 2x)u 0. Since all the eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal, then for j > 0 we have, u T 0 J n u j =. u T 0 u j = 27 u T 0 u j. u T 0 u j = 0,

It follows that

    [J_n - (1 - x)A_n]u_j = 0 - (1 - x)A_n u_j = -(1 - x) 2 cos(2πj/n) u_j.

So the eigenvalues of J_n - (1 - x)A_n are

    μ_0 = n - 2 + 2x,
    μ_j = -2(1 - x) cos(2πj/n), j = 1, 2, ..., n - 1.

Let λ be the maximum eigenvalue.

If n is even, we consider the graphs of μ_0 and the μ_j as functions of x, and consider the intersections of these lines. We know that all the lines of the form μ_j = -2(1 - x) cos(2πj/n) for j = 1, 2, ..., n - 1 pass through the point (1, 0). Since the line μ_0 = n - 2 + 2x has the biggest positive slope and the biggest y-intercept, it crosses each of the lines μ_1, μ_2, ..., μ_{n-1} to the left of the vertical axis. The line μ_{n/2} = -2(1 - x) cos(π) = 2 - 2x has the most negative slope, so it is the first to meet μ_0. It is easy to see this result if we consider the example of n = 6, which is illustrated in Figure 2. The distinct eigenvalues are

    μ_0 = 4 + 2x,
    μ_1 = -2(1 - x) cos(π/3),
    μ_2 = -2(1 - x) cos(2π/3),
    μ_3 = -2(1 - x) cos(π).

By graphing these eigenvalues, we can see that the maximum eigenvalue occurs at the intersection of μ_0 and μ_3, which occurs at (-1/2, 3) and gives λ = 3 = n/2.
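The eigenvalue formulas μ_0 and μ_j for J_n - (1 - x)A_n can be verified directly for any particular x. The NumPy sketch below (my addition, with an arbitrary test value of x) compares them against a numerical eigendecomposition for n = 6.

```python
import numpy as np

n = 6
x = -0.3   # an arbitrary test value; the identity holds for every x
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
J = np.ones((n, n))
M = J - (1 - x) * A

computed = np.sort(np.linalg.eigvalsh(M))
# mu_0 = n - 2 + 2x together with mu_j = -2(1 - x) cos(2*pi*j/n), j = 1..n-1
mu = [n - 2 + 2 * x] + [-2 * (1 - x) * np.cos(2 * np.pi * j / n)
                        for j in range(1, n)]
assert np.allclose(computed, np.sort(mu))
print("spectra agree:", np.round(computed, 4))
```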

Figure 2: The lines μ_0, μ_1, μ_2, μ_3 as functions of x for n = 6.

So, for any even n, we consider the intersection of the lines 2 - 2x and n - 2 + 2x, which occurs when x = 1 - n/4. Then

    λ = n - 2 + 2(1 - n/4) = n - n/2 = n/2 = α(C_n).

If n is odd and j = (n - 1)/2,

    μ_j = -2(1 - x) cos(2π((n - 1)/2)/n) = -2(1 - x) cos(π - π/n) = 2(1 - x) cos(π/n).

Similarly, we can find the maximum eigenvalue λ by finding the intersection of μ_0 and μ_j. This occurs when

    n - 2 + 2x = 2 cos(π/n) - 2x cos(π/n).

Rearranging we get

    2(1 + cos(π/n)) x = 2 cos(π/n) - n + 2,

so that the intersection will occur when

    x = (2 cos(π/n) + 2 - n) / (2(1 + cos(π/n))) = 1 - n / (2(1 + cos(π/n))).
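For even n, the claimed crossing point and resulting maximum eigenvalue can be checked numerically. This NumPy sketch (my addition) confirms, for n = 6, that at x = 1 - n/4 the two lines meet and the maximum eigenvalue of the feasible matrix equals n/2.

```python
import numpy as np

n = 6
x = 1 - n / 4   # intersection of mu_0 = n - 2 + 2x and mu_{n/2} = 2 - 2x
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
M = np.ones((n, n)) - (1 - x) * A

lam = np.max(np.linalg.eigvalsh(M))
assert np.isclose(lam, n / 2)                  # maximum eigenvalue is n/2 = alpha(C_n)
assert np.isclose(n - 2 + 2 * x, 2 - 2 * x)    # the two lines really do cross here
print("lambda =", lam)   # approximately 3 for n = 6, matching Figure 2
```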

Therefore

    λ = n - 2 + 2x = n - 2 + 2(1 - n / (2(1 + cos(π/n))))
      = n - 2n / (2(1 + cos(π/n)))
      = n - n / (1 + cos(π/n))
      = n cos(π/n) / (1 + cos(π/n)).

Hence

    ϑ(C_n) = n cos(π/n) / (1 + cos(π/n)), for any odd n.
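As a final sanity check (my addition, not part of the thesis), the closed form can be compared against a brute-force grid minimization of the maximum eigenvalue over the one-parameter family J_n - (1 - x)A_n, and against the known value ϑ(C_5) = sqrt(5) from Lovász.

```python
import numpy as np

def theta_formula(n):
    """Closed form from Theorem 3 for odd n."""
    return n * np.cos(np.pi / n) / (1 + np.cos(np.pi / n))

def max_eig(n, x):
    """Largest eigenvalue of the feasible matrix J_n - (1 - x)A_n."""
    A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    return np.max(np.linalg.eigvalsh(np.ones((n, n)) - (1 - x) * A))

for n in (5, 7, 9):
    # minimize the largest eigenvalue over a fine grid of x values
    best = min(max_eig(n, x) for x in np.linspace(-2, 0, 8001))
    assert abs(best - theta_formula(n)) < 1e-3

assert np.isclose(theta_formula(5), np.sqrt(5))   # theta(C_5) = sqrt(5)
print("theta(C_5) =", theta_formula(5))
```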

References

[1] Barrett, Wayne W., Introduction to Spectral Graph Theory, unpublished notes.

[2] Horn, Roger A. and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985.

[3] Knuth, Donald E., The Sandwich Theorem, The Electronic Journal of Combinatorics 1 (1994), #A1.

[4] Lovász, László, On the Shannon Capacity of a Graph, IEEE Transactions on Information Theory, Vol. IT-25, No. 1, January 1979.

[5] Merris, Russell, Graph Theory, John Wiley & Sons, Inc., New York, 2001.

[6] West, Douglas B., Introduction to Graph Theory, Prentice Hall, Upper Saddle River, NJ.


More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

3.3 Easy ILP problems and totally unimodular matrices

3.3 Easy ILP problems and totally unimodular matrices 3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

5 Flows and cuts in digraphs

5 Flows and cuts in digraphs 5 Flows and cuts in digraphs Recall that a digraph or network is a pair G = (V, E) where V is a set and E is a multiset of ordered pairs of elements of V, which we refer to as arcs. Note that two vertices

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

The minimum rank of matrices and the equivalence class graph

The minimum rank of matrices and the equivalence class graph Linear Algebra and its Applications 427 (2007) 161 170 wwwelseviercom/locate/laa The minimum rank of matrices and the equivalence class graph Rosário Fernandes, Cecília Perdigão Departamento de Matemática,

More information

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz

THE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz THE REAL POSITIVE DEFINITE COMPLETION PROBLEM FOR A SIMPLE CYCLE* WAYNE BARRETT**, CHARLES R JOHNSONy and PABLO TARAZAGAz Abstract We consider the question of whether a real partial positive denite matrix

More information