Expander and Derandomization


1 Expander and Derandomization

2 Many derandomization results are based on the assumption that certain random/hard objects exist. Some unconditional derandomization can be achieved using explicit constructions of pseudorandom objects.

3 Synopsis
1. Basic Linear Algebra
2. Random Walk
3. Expander Graph
4. Explicit Construction of Expander Graph
5. Reingold's Theorem

4 Basic Linear Algebra

5 Three Views
Lower case letters will denote column vectors.
Matrix = linear transformation $f : \mathbb{Q}^n \to \mathbb{Q}^m$
1. $f(u + v) = f(u) + f(v)$, $f(cu) = c f(u)$
2. the matrix corresponding to $f$ has $f(e_j)_i$ as the $(i,j)$-th entry
Interpretation of $v = Au$
1. Dynamic view: $u$ is transformed to $v$, movement in one basis
2. Static view: $u$ in the column basis is the same as $v$ in the standard basis, one point in two bases
Equation, Geometry (row picture), Algebra (column picture): linear equation, hyperplane, linear combination

6 Inner Product, Projection, Orthogonality
1. The inner product $u^* v$ measures the degree of collinearity of $u, v$.
$\|u\| = \sqrt{u^* u}$, where $u^*$ is the conjugate transpose of $u$; $\frac{u}{\|u\|}$ is the normalization of $u$.
$\cos\theta = \frac{u^* v}{\|u\|\,\|v\|}$; $\frac{u^* v}{u^* u}\, u$ is the projection of $v$ onto $u$; $u$ and $v$ are orthogonal if $u^* v = 0$.
2. Row space $\perp$ null space, column space $\perp$ left null space.
3. Basis, orthogonal/orthonormal basis.
4. Orthogonal matrix: $Q^{-1} = Q^\top$; Gram-Schmidt orthogonalization, $A = QR$.
Cauchy-Schwarz Inequality. $|\cos\theta| = \frac{|u^* v|}{\|u\|\,\|v\|} \le 1$.

7 Fixpoints for Linear Transformation
We look for fixpoints of a linear transformation $A : \mathbb{R}^n \to \mathbb{R}^n$, that is, directions with $A v_i = \lambda_i v_i$.
If there are $n$ linearly independent fixpoints $v_1, \ldots, v_n$, every $v \in \mathbb{R}^n$ is some linear combination $c_1 v_1 + \cdots + c_n v_n$. By linearity,
$Av = c_1 A v_1 + \cdots + c_n A v_n = c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n$.
If we think of $v_1, \ldots, v_n$ as a basis, the effect of the transformation $A$ is to stretch the coordinates in the directions of the axes.

8 Eigenvalue, Eigenvector, Eigenmatrix
If $A - \lambda I$ is singular, an eigenvector $x$ is such that $Ax = \lambda x$, and $\lambda$ is the eigenvalue.
1. $S = [x_1, \ldots, x_n]$ is the eigenmatrix. By definition $AS = S\Lambda$.
2. If $\lambda_1, \ldots, \lambda_n$ are different, $x_1, \ldots, x_n$ are linearly independent.
3. If $x_1, \ldots, x_n$ are linearly independent, then $A = S \Lambda S^{-1}$.
We shall write the spectrum $\lambda_1, \lambda_2, \ldots, \lambda_n$ of a matrix in the order $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$. The value $\rho(A) = |\lambda_1|$ is called the spectral radius.

9 Hermitian Matrix and Symmetric Matrix

                      real matrix                                   complex matrix
length                $\|x\| = \sqrt{\sum_{i \in [n]} x_i^2}$       $\|x\| = \sqrt{\sum_{i \in [n]} |x_i|^2}$
transpose             $A^\top$                                      $A^*$
inner product         $x^\top y = \sum_{i \in [n]} x_i y_i$         $x^* y = \sum_{i \in [n]} \bar{x}_i y_i$
orthogonality         $x^\top y = 0$                                $x^* y = 0$
symmetric/Hermitian   $A^\top = A$                                  $A^* = A$
diagonalization       $A = Q \Lambda Q^\top$                        $A = U \Lambda U^*$
orthogonal/unitary    $Q^\top Q = I$                                $U^* U = I$

Fact. If $A^* = A$, then $x^* A x = (x^* A x)^*$ is real for all complex $x$.
Fact. If $A^* = A$, the eigenvalues are real since $v^* A v = \lambda v^* v = \lambda \|v\|^2$.
Fact. If $A^* = A$, the eigenvectors of different eigenvalues are orthogonal.
Fact. $\|Ux\|_2 = \|x\|_2$ and $\|Qx\|_2 = \|x\|_2$.

10 Similarity Transformation
Similarity Transformation = Change of Basis
1. $A$ is similar to $B$ if $A = M B M^{-1}$ for some invertible $M$.
2. $v$ is an eigenvector of $A$ iff $M^{-1} v$ is an eigenvector of $B$.
$A$ and $B$ describe the same transformation using different bases.
1. The basis of $B$ consists of the column vectors of $M$.
2. A vector $x$ in the basis of $A$ is the vector $M^{-1} x$ in the basis of $B$, that is $x = M(M^{-1} x)$.
3. $B$ then transforms $M^{-1} x$ into some $y$ in the basis of $B$.
4. In the basis of $A$ the vector $Ax$ is $My$.
Fact. Similar matrices have the same eigenvalues.

11 Triangularization
Diagonalization is a special case of similarity transformation. In a diagonalization $Q$ provides an orthogonal basis.
Question. Is every matrix similar to a diagonal matrix?
Schur's Lemma. For each matrix $A$ there is a unitary matrix $U$ such that $T = U^{-1} A U$ is triangular. The eigenvalues of $A$ appear in the diagonal of $T$.
If $A$ is Hermitian, $T$ must be diagonal.

12 Spectral Theorem
Theorem. Every Hermitian matrix $A$ can be diagonalized by a unitary matrix $U$. Every symmetric matrix $A$ can be diagonalized by an orthogonal matrix $Q$.
$U^* A U = \Lambda$, $\quad Q^\top A Q = \Lambda$.
The eigenvalues are in $\Lambda$; the orthonormal eigenvectors are in $Q$/$U$.
Corollary. Every Hermitian matrix $A$ has a spectral decomposition
$A = U \Lambda U^* = \lambda_1 v_1 v_1^* + \cdots + \lambda_n v_n v_n^*$.
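As an aside, not part of the original lecture notes, the spectral decomposition is easy to check numerically: NumPy's `eigh` diagonalizes a Hermitian matrix, and summing the rank-one pieces $\lambda_i v_i v_i^*$ recovers $A$.

```python
import numpy as np

# A random Hermitian matrix A = (B + B*) / 2.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors.
lam, U = np.linalg.eigh(A)

# Reassemble A from the spectral decomposition sum_i lambda_i v_i v_i^*.
A_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i].conj()) for i in range(4))
assert np.allclose(A, A_rebuilt)
assert np.allclose(U.conj().T @ U, np.eye(4))   # U is unitary
```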

13 Diagonalization
We still need to answer the Question. What are the matrices that are similar to diagonal matrices?
A matrix $N$ is normal if $N N^* = N^* N$. (Every Hermitian matrix is normal.)
Theorem. A matrix $N$ is normal iff $T = U^{-1} N U$ is diagonal iff $N$ has a complete set of orthonormal eigenvectors.
Proof. If $N$ is normal, $T$ is normal. It follows from $T T^* = T^* T$ that the triangular matrix $T$ is diagonal. If $T$ is diagonal, it is the eigenvalue matrix of $N$, and $NU = UT$ says that the column vectors of $U$ are precisely the eigenvectors.

14 Jordan Form
What if a matrix is not diagonalizable? Every matrix is similar to a Jordan form:
$M^{-1} A M = J = \mathrm{diag}(J_1, \ldots, J_s)$, where $J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}$.
Here $s$ is the number of independent eigenvectors. The same eigenvalue $\lambda_i$ will appear in several Jordan blocks if it has several independent eigenvectors.
Theorem. $A, B$ are similar iff they have the same Jordan form.
Theorem. If an $n \times n$ diagonalizable matrix $A$ is of rank $r$, then it has $r$ nonzero eigenvalues and $n - r$ zero eigenvalues.

15 Rayleigh Quotient
Suppose $A$ is an $n \times n$ Hermitian matrix and $(\lambda_1, v_1), \ldots, (\lambda_n, v_n)$ are the eigenpairs. The Rayleigh quotient of $A$ and nonzero $x$ is defined as follows:
$R(A, x) = \frac{x^* A x}{x^* x} = \frac{\sum_{i \in [n]} \lambda_i |v_i^* x|^2}{\sum_{i \in [n]} |v_i^* x|^2}$. (1)
It is clear from (1) that if $\lambda_1 \ge \ldots \ge \lambda_n$, then $\lambda_i = \max_{x \perp v_1, \ldots, v_{i-1}} R(A, x)$, and if $\lambda_1 \le \ldots \le \lambda_n$, then $\lambda_i = \min_{x \perp v_1, \ldots, v_{i-1}} R(A, x)$.
One can use the Rayleigh quotient to derive lower bounds for $\lambda_i$.
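A small numerical illustration, added here as a sketch: for a symmetric matrix every Rayleigh quotient lies between the extreme eigenvalues, and an eigenvector attains its eigenvalue exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2                 # symmetric test matrix
lam, V = np.linalg.eigh(A)        # ascending eigenvalues, orthonormal columns

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

# Any nonzero x gives a value between the smallest and largest eigenvalue,
x = rng.normal(size=5)
assert lam[0] - 1e-9 <= rayleigh(A, x) <= lam[-1] + 1e-9
# and an eigenvector attains its eigenvalue exactly.
assert np.isclose(rayleigh(A, V[:, -1]), lam[-1])
```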

16 Positive Definite Matrix
A symmetric matrix $A$ is positive definite if $x^\top A x > 0$ for all $x \ne 0$.
Theorem. Suppose $A$ is symmetric. The following are equivalent.
1. $x^\top A x > 0$ for all $x \ne 0$.
2. $\lambda_i > 0$ for all the eigenvalues $\lambda_i$.
3. $\det A_i > 0$ for all the upper left sub-matrices $A_i$.
4. $d_i > 0$ for all the pivots $d_i$.
5. $A = R^\top R$ for some matrix $R$ with independent columns.
If we replace $>$ by $\ge$, we get positive semidefinite matrices.

17 Singular Value Decomposition
Consider an $m \times n$ matrix $A$. Both $A A^\top$ and $A^\top A$ are symmetric.
1. $A A^\top$ is positive semidefinite since $x^\top A A^\top x = \|A^\top x\|^2 \ge 0$.
2. $A A^\top = U \Sigma U^\top$, where $U$ consists of the orthonormal eigenvectors $u_1, \ldots, u_m$ and $\Sigma$ is the diagonal matrix made up from the eigenvalues $\sigma_1^2, \ldots, \sigma_r^2$ (and zeros). We assume $\sigma_1 \ge \ldots \ge \sigma_r > 0$.
3. Similarly $A^\top A = V \Sigma' V^\top$.
4. Now $A A^\top u_i = \sigma_i^2 u_i$ implies that $\sigma_i^2$ is an eigenvalue of $A^\top A$ and $A^\top u_i$ is the corresponding eigenvector. So $v_i = \frac{A^\top u_i}{\|A^\top u_i\|}$.
5. Observe that $\|A^\top u_i\|^2 = u_i^\top A A^\top u_i = \sigma_i^2 u_i^\top u_i = \sigma_i^2$.
6. Hence $A v_i = \frac{A A^\top u_i}{\|A^\top u_i\|} = \frac{\sigma_i^2 u_i}{\sigma_i} = \sigma_i u_i$. Conclude $AV = U \Sigma$.

18 Singular Value Decomposition
1. $\sigma_1, \ldots, \sigma_r$ are called the singular values of $A$.
2. $A = U \Sigma V^\top$ is the singular value decomposition, or SVD, of $A$.
Lemma. If $A$ is normal, then $\sigma_i = |\lambda_i|$ for all $i \in [n]$.
Proof. Since $A$ is normal, $A = U \Lambda U^*$. Now $A^* A = A A^* = U |\Lambda|^2 U^*$. So the spectrum of $A^* A$ and of $A A^*$ is $|\lambda_1|^2, \ldots, |\lambda_n|^2$.
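The lemma is easy to test numerically; the sketch below (an addition, using NumPy) also shows a non-normal matrix for which $\sigma_i = |\lambda_i|$ fails.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
A = (B + B.T) / 2                              # symmetric, hence normal

sigma = np.linalg.svd(A, compute_uv=False)     # singular values, descending
lam = np.linalg.eigvalsh(A)                    # real eigenvalues
assert np.allclose(sigma, np.sort(np.abs(lam))[::-1])

# For a non-normal matrix the identity fails: N below has eigenvalues 0, 0
# but singular values 1, 0.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.linalg.svd(N, compute_uv=False))
```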

19 Vector Norm
The norm of a vector is a measure of its magnitude/size/length. A norm on $F^n$ is a function $\|\cdot\| : F^n \to \mathbb{R}_{\ge 0}$ satisfying the following:
1. $\|v\| = 0$ iff $v = 0$.
2. $\|a v\| = |a| \|v\|$.
3. $\|v + w\| \le \|v\| + \|w\|$.
A vector space with a norm is called a normed vector space.
1. $L_1$-norm. $\|v\|_1 = |v_1| + \cdots + |v_n|$.
2. $L_2$-norm. $\|v\|_2 = \sqrt{|v_1|^2 + \cdots + |v_n|^2} = \sqrt{v^* v}$.
3. $L_p$-norm. $\|v\|_p = \sqrt[p]{|v_1|^p + \cdots + |v_n|^p}$.
4. $L_\infty$-norm. $\|v\|_\infty = \max\{|v_1|, \ldots, |v_n|\}$.

20 Matrix Norm
We define matrix norms compatible with vector norms. Suppose $F^n$ is a normed vector space over a field $F$. An induced matrix norm is a function $\|\cdot\| : F^{n \times n} \to \mathbb{R}_{\ge 0}$ satisfying the following properties.
1. $\|A\| = 0$ iff $A = 0$.
2. $\|a A\| = |a| \|A\|$.
3. $\|A + B\| \le \|A\| + \|B\|$.
4. $\|A B\| \le \|A\| \|B\|$.

21 Matrix Norm
A matrix norm measures the amplifying power of a matrix. Define
$\|A\| = \max_{v \ne 0} \frac{\|Av\|}{\|v\|}$.
It satisfies (1-4). Additionally $\|Ax\| \le \|A\| \|x\|$ for all $x$.
Lemma. $\rho(A) \le \|A\|$.
$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^n |A_{i,j}|$, $\quad \|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^n |A_{i,j}|$.

22 Spectral Norm
$\|A\|_2$ is called the spectral norm of $A$. $\frac{1}{\sqrt{n}} \|A\|_1 \le \|A\|_2 \le \sqrt{n} \|A\|_1$.
Lemma. $\|A\|_2 = \sigma_1$.
Corollary. If $A$ is a normal matrix, then $\|A\|_2 = |\lambda_1|$.
Let $A^\top A = V \Sigma' V^\top$ be the decomposition, let $v_1, \ldots, v_n$ be the orthonormal eigenvectors, and let $x = a_1 v_1 + \cdots + a_n v_n$. Then
$\|Ax\|_2^2 = x^\top (A^\top A x) = x^\top \Big( \sum_{i \in [n]} \sigma_i^2 a_i v_i \Big) = \sum_{i \in [n]} \sigma_i^2 a_i^2 \le \sigma_1^2 \|x\|_2^2$.
The equality holds when $x = v_1$. Therefore $\|A\|_2 = \sigma_1$.
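The three norm formulas above can be confirmed with NumPy's built-in `numpy.linalg.norm`, which implements exactly these induced norms; a small check added for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))

# Spectral norm equals the largest singular value sigma_1.
assert np.isclose(np.linalg.norm(A, 2), np.linalg.svd(A, compute_uv=False)[0])

# Induced 1-norm is the maximum absolute column sum, and the induced
# infinity-norm is the maximum absolute row sum.
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
```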

23 MIT Open Course

24 Random Walk

25 Graphs are the prime objects of study in combinatorics. The matrix representation of graphs lends itself to an algebraic treatment of these combinatorial objects. It is especially effective in the treatment of regular graphs.

26 Our digraphs admit both self-loops and parallel edges. An undirected edge is seen as two directed edges in opposite directions. In this chapter whenever we say graph, we mean undirected graph.

27 Random Walk Matrix
The reachability matrix $M$ of a digraph $G$ is defined by $M_{i,j} = 1$ if there is an edge from vertex $j$ to vertex $i$; $M_{i,j} = 0$ otherwise.
The random walk matrix $A$ of a $d$-regular digraph $G$ is $\frac{1}{d} M$.
Let $p$ be a probability distribution over the vertices of $G$ and let $A$ be the random walk matrix of $G$. Then $A^k p$ is the distribution after a $k$-step random walk.
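A minimal sketch (added, not from the lecture notes) of a random walk on the 3-regular complete graph $K_4$; the distribution $A^k p$ visibly converges to uniform:

```python
import numpy as np

n, d = 4, 3
M = np.ones((n, n)) - np.eye(n)     # reachability (adjacency) matrix of K4
A = M / d                           # random walk matrix

p = np.zeros(n); p[0] = 1.0         # walk starts at vertex 0
for k in range(5):
    print(k, np.round(p, 4))
    p = A @ p                       # distribution after one more step
# p tends to the uniform distribution (1/4, 1/4, 1/4, 1/4).
```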

28 Random Walk Matrix
Consider the following periodic graph with $dn$ vertices. The vertices are arranged in $n$ layers, each consisting of $d$ vertices. There is an edge from every vertex in the $i$-th layer to every vertex in the $j$-th layer, where $j = i + 1 \bmod n$.
Does $A^k p$ converge to a stationary state?

29 Spectral Graph Theory
In spectral graph theory graph properties are characterized by graph spectra. Suppose $G$ is a $d$-regular graph and $A$ is the random walk matrix of $G$.
1. $1$ is an eigenvalue of $A$ and its associated eigenvector is the stationary distribution vector $\mathbf{1} = (\frac{1}{n}, \ldots, \frac{1}{n})$. In other words $A \mathbf{1} = \mathbf{1}$.
2. All eigenvalues have absolute values at most $1$.
3. (∗) $G$ is disconnected iff $1$ is an eigenvalue of multiplicity at least $2$.
4. (∗) If $G$ is connected, $G$ is bipartite iff $-1$ is an eigenvalue of $A$.

30 Rate of Convergence
For a regular graph $G$ with random walk matrix $A$, we define
$\lambda_G \stackrel{\mathrm{def}}{=} \max_{p} \frac{\|Ap - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} = \max_{v \perp \mathbf{1}} \frac{\|Av\|_2}{\|v\|_2} = \max_{v \perp \mathbf{1},\, \|v\|_2 = 1} \|Av\|_2$,
where $p$ ranges over all probability distribution vectors. The two definitions are equivalent.
1. $(p - \mathbf{1}) \perp \mathbf{1}$ and $Ap - \mathbf{1} = A(p - \mathbf{1})$.
2. For each $v \perp \mathbf{1}$, $p = \alpha v + \mathbf{1}$ is a probability distribution for a sufficiently small $\alpha$.
By definition $\|Av\|_2 \le \lambda_G \|v\|_2$ for all $v \perp \mathbf{1}$.
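For a small graph $\lambda_G$ can be computed directly from the spectrum, as the Lemma on the next slide justifies. The helper below is hypothetical, written only for this illustration:

```python
import numpy as np

def lambda_G(A):
    # Second largest absolute eigenvalue of a symmetric random walk matrix,
    # i.e. the spectral expansion parameter of the regular graph.
    return np.sort(np.abs(np.linalg.eigvalsh(A)))[-2]

A = (np.ones((4, 4)) - np.eye(4)) / 3   # random walk matrix of K4
print(lambda_G(A))                       # 1/3
```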

31 Lemma. $\lambda_G = |\lambda_2|$.
Let $v_2, \ldots, v_n$ be the eigenvectors corresponding to $\lambda_2, \ldots, \lambda_n$. Given $x \perp \mathbf{1}$, let $x = c_2 v_2 + \cdots + c_n v_n$. Then
$\|Ax\|_2^2 = \|\lambda_2 c_2 v_2 + \cdots + \lambda_n c_n v_n\|_2^2 = \lambda_2^2 c_2^2 \|v_2\|_2^2 + \cdots + \lambda_n^2 c_n^2 \|v_n\|_2^2 \le \lambda_2^2 (c_2^2 \|v_2\|_2^2 + \cdots + c_n^2 \|v_n\|_2^2) = \lambda_2^2 \|x\|_2^2$.
So $\lambda_G^2 \le \lambda_2^2$. The equality holds since $\|A v_2\|_2^2 = \lambda_2^2 \|v_2\|_2^2$.

32 The spectral gap $\gamma_G$ of a graph $G$ is defined by $\gamma_G = 1 - \lambda_G$. A graph has spectral expansion $\gamma$, where $\gamma \in (0, 1)$, if $\gamma_G \ge \gamma$.
In an expander $G$, $\gamma_G$ provides a bound on the expansion ratio. In terms of random walk, $\lambda_G$ bounds the mixing time.

33 Lemma. Let $G$ be an $n$-vertex regular graph and $p$ a probability distribution over the vertices of $G$. Then $\|A^l p - \mathbf{1}\|_2 \le \lambda_G^l \|p - \mathbf{1}\|_2 < \lambda_G^l$.
The first inequality holds because
$\frac{\|A^l p - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} = \frac{\|A^l p - \mathbf{1}\|_2}{\|A^{l-1} p - \mathbf{1}\|_2} \cdots \frac{\|A p - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} \le \lambda_G^l$.
The second inequality holds because
$\|p - \mathbf{1}\|_2^2 = \|p\|_2^2 - \frac{1}{n} \le 1 - \frac{1}{n} < 1$.

34 Lemma. If $G$ is a connected $n$-vertex $d$-regular graph with self-loops at each vertex, then $\gamma_G \ge \frac{1}{12 n^2}$.
Let $u \perp \mathbf{1}$ be the unit vector such that $\lambda_G = \|Au\|_2$, and let $v = Au$. If we can prove $1 - \|v\|_2^2 \ge \frac{1}{6 n^2}$, we will get $\lambda_G = \|v\|_2 \le 1 - \frac{1}{12 n^2}$, hence the lemma.
It is easy to show
$1 - \|v\|_2^2 = \|u\|_2^2 - \|v\|_2^2 = \|u\|_2^2 - 2 \langle Au, v \rangle + \|v\|_2^2 = \sum_{i,j} A_{i,j} (u_i - v_j)^2$.
Since $u$ is a unit vector orthogonal to $\mathbf{1}$, $u_i - u_j \ge \frac{1}{\sqrt{n}}$ for some $i, j \in [n]$. Let $i \to i_1 \to \cdots \to i_k \to j$ be a shortest path. Then
$u_i - u_j = (u_i - v_i) + (v_i - u_{i_1}) + \cdots + (v_{i_k} - u_j) \le |u_i - v_i| + |v_i - u_{i_1}| + \cdots + |v_{i_k} - u_j| \le \sqrt{(u_i - v_i)^2 + (v_i - u_{i_1})^2 + \cdots + (v_{i_k} - u_j)^2} \cdot \sqrt{2D + 1}$, (2)
where $D$ is the diameter of $G$. Thus $\sum_{i,j} A_{i,j} (u_i - v_j)^2 \ge \frac{1}{d n (2D + 1)}$ by (2) and $A_{h,h}, A_{h,h+1} \ge \frac{1}{d}$. The inequality then follows from $D \le \frac{3n}{d+1}$.

35 Randomized Algorithm for Undirected Connectivity
Corollary. Let $G$ be a $d$-regular $n$-vertex graph with a self-loop on every vertex. Let $s, t$ be connected. Let $l > 24 n^2 \log n$ and let $X_l$ denote the vertex distribution after an $l$-step random walk from $s$. Then $\Pr[X_l = t] > \frac{1}{2n}$.
Graphs with self-loops are not bipartite. According to the Lemmas,
$\|A^l e_s - \mathbf{1}\|_2 < \Big(1 - \frac{1}{12 n^2}\Big)^{24 n^2 \log n} < \frac{1}{n^2}$.
It follows that $(A^l e_s)_t \ge \frac{1}{n} - \frac{1}{n^2} > \frac{1}{2n}$.
If the walk is repeated $2 n^2$ times, the error probability is reduced to below $2^{-n}$.
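The following sketch simulates the corollary's random walk; `walk_reaches` is a hypothetical helper, not the actual RL machine, and the graph is a small path with self-loops added in the spirit of the construction:

```python
import random

def walk_reaches(adj, s, t, steps):
    # One random walk of the given length from s; report whether it hits t.
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(adj[v])
    return v == t

# Path 0-1-2-3, padded with self-loops so that every vertex has degree 3.
adj = {0: [0, 0, 1], 1: [1, 0, 2], 2: [2, 1, 3], 3: [3, 3, 2]}
n = 4
hits = sum(walk_reaches(adj, 0, 3, 24 * n * n) for _ in range(1000))
print(hits, "of 1000 walks reached t")
```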

36 Randomized Algorithm for Undirected Connectivity
Theorem. UPATH (Undirected Connectivity) is in RL.
An undirected graph can be turned into a non-bipartite regular graph by introducing enough self-loops.

37 Can the random algorithm for UPATH be derandomized? Recall that $\mathrm{L} \subseteq \mathrm{RL} \subseteq \mathrm{NL}$.

38 Expander Graph

39 Expander graphs, defined by Pinsker in 1973, are sparse and well connected. They behave approximately like complete graphs. Sparsity should be understood in an asymptotic sense.
1. Fan Chung. Spectral Graph Theory. American Mathematical Society, 1997.
2. Hoory, Linial, and Wigderson. Expander Graphs and their Applications. Bulletin of the AMS, 43:439-561, 2006.

40 Well-connectedness can be characterized in a number of manners.
1. Algebraically, expanders are graphs whose second largest eigenvalue is bounded away from 1 by a constant.
2. Combinatorially, expanders are highly connected: every set of vertices of an expander has a large boundary.
3. Probabilistically, expanders are graphs in which a random walk converges to the stationary distribution quickly.

41 Algebraic Property
Intuitively, the faster a random walk converges, the better the graph is connected. According to the Lemma, the smaller $\lambda_G$ is, the faster a random walk converges to the uniform distribution.
Suppose $d \in \mathbb{N}$ and $\lambda \in (0, 1)$ are constants. A $d$-regular graph $G$ with $n$ vertices is an $(n, d, \lambda)$-graph if $\lambda_G \le \lambda$. $\{G_n\}_{n \in \mathbb{N}}$ is a $(d, \lambda)$-expander graph family if $G_n$ is an $(n, d, \lambda)$-graph for all $n \in \mathbb{N}$.

42 Probabilistic Property
In expanders random walks converge to the uniform distribution in logarithmically many steps:
$\|A^{2 \log_{1/\lambda}(n)} p - \mathbf{1}\|_2 < \lambda^{2 \log_{1/\lambda}(n)} = \frac{1}{n^2}$.
In other words, the mixing time of an expander is logarithmic. The mixing time of an $n$-vertex graph $G$ is the minimal $l$ such that for any vertex distribution $p$, $\|A^l p - \mathbf{1}\| < \frac{1}{2n}$, where $A$ is the random walk matrix of $G$.
The diameter of an $n$-vertex expander graph is $\Theta(\log n)$.

43 Combinatorial Property
Suppose $G = (V, E)$ is an $n$-vertex $d$-regular graph. Let $\bar{S}$ stand for $V \setminus S$ for $S \subseteq V$. Let $E(S, T)$ be the set of edges $i \to j$ with $i \in S$ and $j \in T$. Let $\partial S = E(S, \bar{S})$ for $|S| \le \frac{n}{2}$. The expansion constant $h_G$ of $G$ is defined as follows:
$h_G = \min_{|S| \le n/2} \frac{|\partial S|}{|S|}$.
Suppose $\rho > 0$ is a constant. An $n$-vertex $d$-regular graph $G$ is an $(n, d, \rho)$-edge expander if $h_G \ge \rho d$. In other words $G$ is an $(n, d, \rho)$-edge expander if $|\partial S| \ge \rho (d |S|)$ for all $|S| \le \frac{n}{2}$.
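For tiny graphs the expansion constant can be computed by brute force; `edge_expansion` below is a hypothetical helper that enumerates all sets $S$ with $|S| \le n/2$ (exponential time, illustration only):

```python
from itertools import combinations

def edge_expansion(adj):
    # Brute-force h_G: minimum of |E(S, S-bar)| / |S| over nonempty S
    # with |S| <= n/2, for an undirected regular graph given by
    # adjacency lists.
    n = len(adj)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            Sset = set(S)
            boundary = sum(1 for u in S for v in adj[u] if v not in Sset)
            best = min(best, boundary / size)
    return best

# K4 (3-regular): singletons give 3/1, pairs give 4/2, so h_G = 2.
adj = {u: [v for v in range(4) if v != u] for u in range(4)}
print(edge_expansion(adj))              # 2.0
```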

44 Existence of Expander
Theorem. Let $\epsilon > 0$. There exist $d = d(\epsilon)$ and $N \in \mathbb{N}$ such that for every $n > N$ there exists an $(n, d, \frac{1}{2} - \epsilon)$ edge expander.

45 Expansion and Spectral Gap
Theorem. Let $G = (V, E)$ be a finite, connected, $d$-regular graph. Then
$\frac{d \gamma_G}{2} \le h_G \le d \sqrt{2 \gamma_G}$.
1. J. Dodziuk. Difference Equations, Isoperimetric Inequality and Transience of Certain Random Walks. Trans. AMS, 1984.
2. N. Alon and V. Milman. $\lambda_1$, Isoperimetric Inequalities for Graphs, and Superconcentrators. J. Comb. Theory, 1985.
3. N. Alon. Eigenvalues and Expanders. Combinatorica, 1986.

46 $\frac{d(1 - \lambda_G)}{2} \le h_G$
Let $S$ be such that $|S| \le \frac{n}{2}$ and $\frac{|\partial S|}{|S|} = h_G$. Define $x \perp \mathbf{1}$ by
$x_i = \begin{cases} |\bar{S}|, & i \in S, \\ -|S|, & i \in \bar{S}. \end{cases}$
Then $\|x\|_2^2 = n |S| |\bar{S}|$, and
$x^\top A x = (|\bar{S}| 1_S - |S| 1_{\bar{S}})^\top A (|\bar{S}| 1_S - |S| 1_{\bar{S}}) = \frac{1}{d} \big( |\bar{S}|^2 |E(S, S)| + |S|^2 |E(\bar{S}, \bar{S})| - 2 |S| |\bar{S}| |E(S, \bar{S})| \big) = \frac{1}{d} \big( d n |S| |\bar{S}| - n^2 |E(S, \bar{S})| \big)$,
where the last equality is due to $d |S| = |E(S, S)| + |E(S, \bar{S})|$ and $d |\bar{S}| = |E(\bar{S}, \bar{S})| + |E(S, \bar{S})|$.
The Rayleigh quotient $R(A, x)$ provides a lower bound for $\lambda_G$:
$\lambda_G \ge \frac{x^\top A x}{\|x\|_2^2} = \frac{d n |S| |\bar{S}| - n^2 |E(S, \bar{S})|}{d n |S| |\bar{S}|} = 1 - \frac{1}{d} \cdot \frac{|\partial S|}{|S|} \cdot \frac{n}{|\bar{S}|} \ge 1 - \frac{2 h_G}{d}$.

47 $h_G \le d \sqrt{2 (1 - \lambda_G)}$
Let $u \perp \mathbf{1}$ be such that $Au = \lambda_2 u$. Write $u = v + w$, where $v$/$w$ is defined from $u$ by replacing the negative/positive components by $0$. Wlog, assume that the number of positive components of $u$ is at most $\frac{n}{2}$.

48 $h_G \le d \sqrt{2 (1 - \lambda_G)}$
Wlog, assume $v_1 \ge v_2 \ge \ldots \ge v_n$ (so $v_k = 0$ for $k > \frac{n}{2}$). Then
$\sum_{i,j} A_{i,j} |v_i^2 - v_j^2| = 2 \sum_{i < j} A_{i,j} \sum_{k=i}^{j-1} (v_k^2 - v_{k+1}^2) = \frac{2}{d} \sum_{k=1}^{n/2} |\partial [k]| (v_k^2 - v_{k+1}^2) \ge \frac{2 h_G}{d} \sum_{k=1}^{n/2} k (v_k^2 - v_{k+1}^2) = \frac{2 h_G}{d} \|v\|_2^2$,
where the second equality holds because $v_k = 0$ for $k > \frac{n}{2}$ and because for every fixed $k$ the term $v_k^2 - v_{k+1}^2$ appears once for every edge $i \to j$ such that $i \le k < j$.

49 $h_G \le d \sqrt{2 (1 - \lambda_G)}$
$\langle Av, v \rangle \ge \langle Av, v \rangle + \langle Aw, v \rangle = \langle Au, v \rangle = \lambda_2 \|v\|_2^2$ because $Au = \lambda_2 u$, $\langle v, w \rangle = 0$ and $\langle Aw, v \rangle \le 0$. Hence
$1 - \lambda_2 \ge 1 - \frac{\langle Av, v \rangle}{\|v\|_2^2} = \frac{\|v\|_2^2 - \langle Av, v \rangle}{\|v\|_2^2} = \frac{\sum_{i,j} A_{i,j} (v_i - v_j)^2}{2 \|v\|_2^2}$. (3)
Let $Z = \sum_{i,j} A_{i,j} (v_i + v_j)^2$. Using the Cauchy-Schwarz inequality,
$\sum_{i,j} A_{i,j} |v_i^2 - v_j^2| \le \sqrt{\sum_{i,j} A_{i,j} (v_i - v_j)^2} \cdot \sqrt{Z}$. (4)
Now $\|Av\|_2 \le \sigma_1 \|v\|_2 = \|v\|_2$ implies $\langle Av, v \rangle \le \|v\|_2^2$. Therefore
$Z = 2 \|v\|_2^2 + 2 \langle Av, v \rangle \le 4 \|v\|_2^2$. (5)
(3)+(4)+(5) and the previous inequality imply $\frac{2 h_G}{d} \le \sqrt{8 (1 - \lambda_2)}$, that is $h_G \le d \sqrt{2 (1 - \lambda_2)}$.

50 Combinatorial definition and algebraic definition are equivalent.
1. The inequality $\frac{d (1 - \lambda_G)}{2} \le h_G$ implies that if $G$ is an $(n, d, \lambda)$-expander graph, then it is an $(n, d, \frac{1 - \lambda}{2})$ edge expander.
2. The inequality $h_G \le d \sqrt{2 (1 - \lambda_G)}$ implies that if $G$ is an $(n, d, \rho)$ edge expander, then it is an $(n, d, 1 - \frac{\rho^2}{2})$-expander graph.

51 Convergence in Entropy
Rényi 2-Entropy: $H_2(p) = -2 \log(\|p\|_2)$.
Fact. If $A$ is the random walk matrix of an $(n, d, \lambda)$-expander, then $H_2(Ap) \ge H_2(p)$. The equality holds if and only if $p$ is uniform.
Proof. Let $p = \mathbf{1} + w$ such that $w \perp \mathbf{1}$. Then $\langle Aw, \mathbf{1} \rangle = w^\top A^\top \mathbf{1} = w^\top A \mathbf{1} = w^\top \mathbf{1} = 0$. Therefore
$\|Ap\|_2^2 = \|\mathbf{1}\|_2^2 + \|Aw\|_2^2 \le \|\mathbf{1}\|_2^2 + \lambda^2 \|w\|_2^2 \le \|\mathbf{1}\|_2^2 + \|w\|_2^2 = \|p\|_2^2$.
The equality holds when $p = \mathbf{1}$.
Random walks do not decrease randomness.

52 The larger the spectral gap, or equivalently the smaller $\lambda_G$, the more an expander graph behaves like a random graph. This is what the next lemma says.

53 Expander Mixing Lemma
Lemma. Let $G = (V, E)$ be an $(n, d, \lambda)$-expander graph. Let $S, T \subseteq V$. Then
$\Big| |E(S, T)| - \frac{d}{n} |S| |T| \Big| \le \lambda d \sqrt{|S| |T|}$. (6)
Notice that (6) implies
$\Big| \frac{|E(S, T)|}{d n} - \frac{|S|}{n} \frac{|T|}{n} \Big| \le \lambda$. (7)
The edge density approximates the product of the vertex densities.
1. N. Alon and F. Chung. Explicit Construction of Linear Sized Tolerant Networks. Discrete Mathematics, 1988.

54 Proof of Expander Mixing Lemma
Let $[v_1, \ldots, v_n]$ be the eigenmatrix of $G$ with $v_1 = (\frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}})$. Let $1_S = \sum_i \alpha_i v_i$ and $1_T = \sum_j \beta_j v_j$ be the characteristic vectors of $S, T$ respectively.
$|E(S, T)| = (1_S)^\top (d A) 1_T = \Big( \sum_i \alpha_i v_i \Big)^\top (d A) \Big( \sum_j \beta_j v_j \Big) = \sum_i d \lambda_i \alpha_i \beta_i$.
Since $\alpha_1 = (1_S)^\top v_1 = \frac{|S|}{\sqrt{n}}$ and $\beta_1 = (1_T)^\top v_1 = \frac{|T|}{\sqrt{n}}$,
$\Big| |E(S, T)| - \frac{d}{n} |S| |T| \Big| = \Big| \sum_{i=2}^n d \lambda_i \alpha_i \beta_i \Big| \le d \lambda \sum_{i=2}^n |\alpha_i \beta_i| \le d \lambda \|\alpha\|_2 \|\beta\|_2$.
Finally observe that $\|\alpha\|_2 \|\beta\|_2 = \|1_S\|_2 \|1_T\|_2 = \sqrt{|S| |T|}$.
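A numerical sanity check of (6), added as a sketch: the Petersen graph is a $(10, 3, 2/3)$-graph, and the inequality holds for random choices of $S$ and $T$. Here $|E(S, T)|$ counts ordered pairs, matching the convention that an undirected edge is two directed ones.

```python
import numpy as np

# Petersen graph: 10 vertices, 3-regular, lambda_G = 2/3.
edges = [(0,1),(1,2),(2,3),(3,4),(4,0),(5,7),(7,9),(9,6),(6,8),(8,5),
         (0,5),(1,6),(2,7),(3,8),(4,9)]
n, d = 10, 3
M = np.zeros((n, n))
for u, v in edges:
    M[u, v] = M[v, u] = 1
lam = np.sort(np.abs(np.linalg.eigvalsh(M / d)))[-2]    # 2/3

rng = np.random.default_rng(0)
for _ in range(100):
    S = rng.choice(n, size=4, replace=False)
    T = rng.choice(n, size=5, replace=False)
    e_st = M[np.ix_(S, T)].sum()            # |E(S,T)| as ordered pairs
    assert abs(e_st - d * len(S) * len(T) / n) <= lam * d * np.sqrt(20) + 1e-9
```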

55 Error Reduction for Random Algorithm
Suppose $A(x, r)$ is a random algorithm with error probability $1/3$. The algorithm uses $r(n)$ random bits on input $x$ with $|x| = n$.
1. We can reduce the error probability exponentially by repeating the algorithm $t(n)$ times.
2. The resulting algorithm uses $r(n) t(n)$ random bits.
The goal is to achieve the same error reduction rate using far fewer random bits (in fact $r(n) + O(t(n))$ random bits).
The key observation is that a $t$-step random walk in an expander graph looks like $t$ vertices sampled uniformly and independently. Confer inequality (7).

56 A Decomposition Lemma for Random Walk
$K_n$ is perfect from the viewpoint of random walk. No matter what distribution it starts with, a random walk reaches the uniform distribution in one step. Let $J_n$ be the $n \times n$ matrix all of whose entries are $\frac{1}{n}$, the random walk matrix of $K_n$ with self-loops.
Lemma. Suppose $G$ is an $(n, d, \lambda)$-expander and $A$ is its random walk matrix. Then $A = \gamma J_n + \lambda E$ for some $E$ such that $\|E\|_2 \le 1$.
We may think of a random walk on an expander as a convex combination of two random walks of different type: a walk with probability $\gamma$ on a complete graph, and a walk with probability $\lambda$ according to an error matrix that does not amplify the distance to the uniform distribution.

57 A Decomposition Lemma for Random Walk
We need to prove that $\|Ev\|_2 \le \|v\|_2$ for all $v$, where $E$ is defined by $E = \frac{1}{\lambda} (A - (1 - \lambda) J_n)$.
The following proof methodology should now be familiar. Decompose $v$ into $v = u + w = \alpha \mathbf{1} + w$ with $w \perp \mathbf{1}$.
1. $A \mathbf{1} = \mathbf{1}$ and $J_n \mathbf{1} = \mathbf{1}$. Consequently $E u = u$.
2. $J_n w = 0$, $w' \stackrel{\mathrm{def}}{=} A w \perp \mathbf{1}$ and $w' \perp u$. Hence $E w = \frac{1}{\lambda} w'$ and $\|w'\|_2 \le \lambda \|w\|_2$.
So $\|Ev\|_2^2 = \|u + \frac{1}{\lambda} w'\|_2^2 = \|u\|_2^2 + \frac{1}{\lambda^2} \|w'\|_2^2 \le \|u\|_2^2 + \|w\|_2^2 = \|v\|_2^2$.

58 Expander Random Walk Theorem
Theorem. Let $G$ be an $(n, d, \lambda)$ expander graph, and let $B \subseteq [n]$ satisfy $|B| \le \beta n$ for some $\beta \in (0, 1)$. Let $X_1$ be a random variable denoting the uniform distribution on $[n]$ and let $X_1, \ldots, X_k$ denote a $(k-1)$-step random walk from $X_1$. Then
$\Pr\Big[ \bigwedge_{i \in [k]} X_i \in B \Big] \le (\gamma \sqrt{\beta} + \lambda)^{k-1}$.

59 Expander Random Walk Theorem
Let $B_i$ stand for the event $X_i \in B$. We need to bound the following.
$\Pr\Big[ \bigwedge_{i \in [k]} X_i \in B \Big] = \Pr[B_1] \Pr[B_2 \mid B_1] \cdots \Pr[B_k \mid B_1 \ldots B_{k-1}]$. (8)
Seeing $B$ as a 0/1 diagonal matrix, we define the distribution $p^i$ by
$p^i = \frac{B A}{\Pr[B_i \mid B_1 \ldots B_{i-1}]} \cdots \frac{B A}{\Pr[B_2 \mid B_1]} \frac{B \mathbf{1}}{\Pr[B_1]}$.
So the probability in (8) is bounded by $\|(BA)^{k-1} B \mathbf{1}\|_1$. We'll prove
$\|(BA)^{k-1} B \mathbf{1}\|_2 \le \frac{1}{\sqrt{n}} ((1 - \lambda) \sqrt{\beta} + \lambda)^{k-1}$.

60 Expander Random Walk Theorem
Using the Lemma,
$\|BA\|_2 = \|B((1 - \lambda) J_n + \lambda E)\|_2 \le (1 - \lambda) \|B J_n\|_2 + \lambda \|B E\|_2 \le (1 - \lambda) \sqrt{\beta} + \lambda \|B\|_2 \|E\|_2 \le (1 - \lambda) \sqrt{\beta} + \lambda$.
Therefore
$\|(BA)^{k-1} B \mathbf{1}\|_2 \le ((1 - \lambda) \sqrt{\beta} + \lambda)^{k-1} \|B \mathbf{1}\|_2 = ((1 - \lambda) \sqrt{\beta} + \lambda)^{k-1} \frac{\sqrt{\beta}}{\sqrt{n}} \le \frac{1}{\sqrt{n}} ((1 - \lambda) \sqrt{\beta} + \lambda)^{k-1}$.
For $\|v\|_2 = 1$, write $J_n v = \alpha \mathbf{1}$ with $|\alpha| = |\sum_i v_i| \le \sqrt{n}$. Then $\|B J_n v\|_2 = |\alpha| \|B \mathbf{1}\|_2 \le \sqrt{n} \cdot \frac{\sqrt{\beta}}{\sqrt{n}} = \sqrt{\beta}$. Therefore $\|B J_n\|_2 = \max\{\|B J_n v\|_2 \mid \|v\|_2 = 1\} \le \sqrt{\beta}$.
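A simulation, added for illustration, comparing the empirical probability that a walk stays inside a bad set $B$ with the bound $((1-\lambda)\sqrt{\beta} + \lambda)^{k-1}$; the Petersen graph with $\lambda = 2/3$ and $B$ the outer 5-cycle ($\beta = 1/2$) is assumed:

```python
import numpy as np

edges = [(0,1),(1,2),(2,3),(3,4),(4,0),(5,7),(7,9),(9,6),(6,8),(8,5),
         (0,5),(1,6),(2,7),(3,8),(4,9)]
adj = {u: [] for u in range(10)}
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

B, beta, lam, k = {0, 1, 2, 3, 4}, 0.5, 2 / 3, 6
rng = np.random.default_rng(0)
trials, stuck = 100_000, 0
for _ in range(trials):
    x = int(rng.integers(10))            # X_1 uniform on [n]
    inside = x in B
    for _ in range(k - 1):               # k-1 further steps
        x = adj[x][rng.integers(3)]
        inside = inside and x in B
    stuck += inside
bound = ((1 - lam) * beta ** 0.5 + lam) ** (k - 1)
print(stuck / trials, "<=", bound)       # roughly 0.066 <= roughly 0.60
```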

61 Error Reduction for RP
Suppose $A(x, r)$ is a random algorithm with error probability $\beta$. Given input $x$ with $n = |x|$, let $k = r(n)$. Choose an explicit $(2^k, d, \lambda)$-graph $G = (V, E)$ with $V = \{0, 1\}^k$.
Algorithm $A'$.
1. Pick $v_0 \in_R V$.
2. Generate a random walk $v_0, \ldots, v_t$.
3. Output $\bigvee_{i=0}^t A(x, v_i)$.
By the Theorem, the error probability of $A'$ is at most $(\gamma \sqrt{\beta} + \lambda)^t$. Here $B$ is the set of $r$'s for which $A$ errs on $x$.

62 Error Reduction for BPP
Algorithm $A'$.
1. Pick $v_0 \in_R V$.
2. Generate a random walk $v_0, \ldots, v_t$.
3. Output $\mathrm{Maj}\{A(x, v_i)\}_{i \in [t]}$.
Let $K \subseteq [t]$ be a set of samples for which $A$ errs, with $|K| \ge \frac{t+1}{2}$. Assuming $\gamma \sqrt{\beta} + \lambda \le 1/16$,
$\Pr[\forall i \in K.\ v_i \in B] \le (\gamma \sqrt{\beta} + \lambda)^{\frac{t-1}{2}} \le \Big( \frac{1}{4} \Big)^{t-1}$.
By the union bound,
$\Pr[A' \text{ fails}] \le 2^t \Big( \frac{1}{4} \Big)^{t-1} = O(2^{-t})$.

63 Explicit Construction of Expander Graph

64 Explicit Construction
In some applications the expander graph is small enough to be written down in full. An expander family $\{G_n\}_{n \in \mathbb{N}}$ is mildly explicit if there is a P-time algorithm that outputs the random walk matrix of $G_n$ whenever the input is $1^n$.
In some other applications a huge expander graph must be used. An expander family $\{G_n\}_{n \in \mathbb{N}}$ is strongly explicit if there is a P-time algorithm that on input $\langle n, v, i \rangle$ outputs the index of the $i$-th neighbor of $v$.

65 We will look at several graph product operations. We then show how to use these operations to construct explicit expander graphs.
1. O. Reingold, S. Vadhan, and A. Wigderson. Entropy Waves, the Zig-Zag Graph Product, and New Constant-Degree Expanders and Extractors. FOCS, 2000.

66 Rotation Map
Suppose $G$ is an $n$-vertex $d$-degree graph. A rotation map $\hat{G}$ of $G$ is a function of type $[n] \times [d] \to [n] \times [d]$ satisfying the following: $\hat{G}(u, i) = (v, j)$ if vertex $v$ is the $i$-th neighbor of vertex $u$ and $u$ is the $j$-th neighbor of $v$. Clearly $\hat{G}$ is a permutation.
The rotation matrix $\hat{A}$ is defined by
$\hat{A}_{(v,m),(u,l)} = \begin{cases} 1, & \text{if } \hat{G}(u, l) = (v, m), \\ 0, & \text{otherwise.} \end{cases}$
Walks described in $\hat{A}$ are deterministic, meaning that no random bits are necessary.

67 Path Product
Suppose $G, G'$ are $n$-vertex graphs with degree $d$ respectively $d'$. Let $A, A'$ be their random walk matrices. The path product $G' G$ is defined by the random walk matrix $A' A$. $G' G$ is $n$-vertex $d d'$-degree.
Lemma. $\lambda_{G' G} \le \lambda_{G'} \lambda_G$.
Proof. $\lambda_{G' G} = \max_{v \perp \mathbf{1}} \frac{\|A' A v\|_2}{\|v\|_2} = \max_{v \perp \mathbf{1}} \frac{\|A' A v\|_2}{\|A v\|_2} \cdot \frac{\|A v\|_2}{\|v\|_2} \le \lambda_{G'} \lambda_G$, using the fact that $Av \perp \mathbf{1}$ whenever $v \perp \mathbf{1}$.
Lemma. $\lambda_{G^k} = (\lambda_G)^k$.
Proof. $(\lambda_G)^k$ is the second largest absolute eigenvalue of $G^k$.

68 Tensor Product
Suppose $G$ is an $n$-vertex $d$-degree graph and $G'$ is an $n'$-vertex $d'$-degree graph. The random walk matrix of the tensor product $G \otimes G'$ is
$A \otimes A' = \begin{pmatrix} a_{11} A' & a_{12} A' & \cdots & a_{1n} A' \\ a_{21} A' & a_{22} A' & \cdots & a_{2n} A' \\ \vdots & \vdots & & \vdots \\ a_{n1} A' & a_{n2} A' & \cdots & a_{nn} A' \end{pmatrix}$.
$(u, u') \to (v, v')$ in $G \otimes G'$ iff $u \to v$ in $G$ and $u' \to v'$ in $G'$. $G \otimes G'$ is $n n'$-vertex $d d'$-degree.
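The tensor product of walk matrices is `numpy.kron`; the sketch below (an addition) checks the Lemma on the next slide, $\lambda_{G \otimes G'} = \max\{\lambda_G, \lambda_{G'}\}$, on $K_4 \otimes K_3$:

```python
import numpy as np

def lam(M):
    # Second largest absolute eigenvalue of a symmetric walk matrix.
    return np.sort(np.abs(np.linalg.eigvalsh(M)))[-2]

A1 = (np.ones((4, 4)) - np.eye(4)) / 3    # K4: lambda = 1/3
A2 = (np.ones((3, 3)) - np.eye(3)) / 2    # K3: lambda = 1/2
A = np.kron(A1, A2)                       # walk matrix of the tensor product

print(lam(A1), lam(A2), lam(A))           # 1/3  1/2  1/2 = max{1/3, 1/2}
```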

69 Tensor Product
Lemma. $\lambda_{G \otimes G'} = \max\{\lambda_G, \lambda_{G'}\}$.
If $\lambda$ is an eigenvalue of $A$ and $v$ is the associated eigenvector, and $\lambda'$ is an eigenvalue of $A'$ and $v'$ is the associated eigenvector, then $(A \otimes A')(v \otimes v') = \lambda \lambda' (v \otimes v')$.

70 Zig-Zag Product
$G$ is an $n$-vertex $D$-degree graph. $H$ is a $D$-vertex $d$-degree graph. The zig-zag product $G z H$ is the $nD$-vertex $d^2$-degree graph constructed as follows:
1. The vertex set is $[n] \times [D]$.
2. For $i, j \in [d]$ and $(u, l) \in [n] \times [D]$, the $(i, j)$-th neighbor of $(u, l)$ is the vertex $(v, m) \in [n] \times [D]$ computed as follows:
2.1 Let $l'$ be the $i$-th neighbor of $l$ in $H$.
2.2 $v$ is the $l'$-th neighbor of $u$ and $u$ is the $m'$-th neighbor of $v$, that is $\hat{G}(u, l') = (v, m')$.
2.3 Let $m$ be the $j$-th neighbor of $m'$ in $H$.
Typically $d \ll D$. A $t$-step random walk uses $O(t \log d)$ rather than $O(t \log D)$ random bits.
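The three steps translate directly into code. The sketch below assumes rotation maps are given as functions; `zigzag_neighbor`, `rotG` and `rotH` are hypothetical names used only for this illustration (here $G$ is a 4-cycle with $D = 2$ and $H$ is a single edge with $d = 1$):

```python
def zigzag_neighbor(rotG, rotH, u, l, i, j):
    # (i, j)-th neighbor of (u, l) in the zig-zag product G z H, following
    # the three-step definition above.
    l2, _ = rotH(l, i)        # zig: small step inside the H-cloud of u
    v, m2 = rotG(u, l2)       # big step along an edge of G
    m, _ = rotH(m2, j)        # zag: small step inside the H-cloud of v
    return v, m

def rotG(u, l):               # 4-cycle: neighbor 0 is u+1, neighbor 1 is u-1
    return ((u + 1) % 4, 1) if l == 0 else ((u - 1) % 4, 0)

def rotH(l, i):               # K2: the unique neighbor of l
    return 1 - l, 0

print(zigzag_neighbor(rotG, rotH, 0, 0, 0, 0))   # (3, 1)
```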

71 Zig-Zag Product
We need a picture here.

72 Zig-Zag Product
Lemma. $(I_n \otimes J_D) \hat{A} (I_n \otimes J_D) = A \otimes J_D$.
Consider the left matrix. Starting from $(u, l)$, a uniformly random $l' \in [D]$ is chosen with probability $\frac{1}{D}$, then a walk from $(u, l')$ to $(v, m') = \hat{G}(u, l')$ takes place with probability $1$; and finally a uniformly random $m \in [D]$ is picked up with probability $\frac{1}{D}$. According to the right matrix the random walk from $(u, l)$ to $(v, m)$ occurs with the probability $\frac{1}{D} \cdot \frac{1}{D}$.
$\hat{A}$ is the rotation matrix of $G$, and $I_n$ is the $n \times n$ identity matrix.

73 Claim. If $\|C\|_2 \le 1$ then $\lambda_C \le 1$.
Proof. $\lambda_C = \max_{v \perp \mathbf{1}} \frac{\|Cv\|_2}{\|v\|_2} \le \max_{v \perp \mathbf{1}} \frac{\|C\|_2 \|v\|_2}{\|v\|_2} \le 1$.
Claim. $\lambda_{A+B} \le \lambda_A + \lambda_B$ for symmetric stochastic matrices $A, B$.
Proof. $\lambda_{A+B} = \max_{v \perp \mathbf{1}} \frac{\|(A+B)v\|_2}{\|v\|_2} \le \max_{v \perp \mathbf{1}} \frac{\|Av\|_2 + \|Bv\|_2}{\|v\|_2} \le \lambda_A + \lambda_B$.

74 Zig-Zag Product
Lemma. $\lambda_{G z H} \le \lambda_G + 2 \lambda_H$ and $\gamma_{G z H} \ge \gamma_G \gamma_H^2$.
Let $A$, $B$ and $M$ be the random walk matrices of $G$, $H$ and $G z H$. $\hat{A}$ is the $(nD) \times (nD)$ rotation matrix of $G$. $B = (1 - \lambda_H) J_D + \lambda_H E$ for some $E$ with $\|E\|_2 \le 1$. This is the decomposition Lemma. Now
$M = (I_n \otimes B) \hat{A} (I_n \otimes B)$
$= ((1 - \lambda_H) I_n \otimes J_D + \lambda_H I_n \otimes E) \, \hat{A} \, ((1 - \lambda_H) I_n \otimes J_D + \lambda_H I_n \otimes E)$
$= (1 - \lambda_H)^2 (I_n \otimes J_D) \hat{A} (I_n \otimes J_D) + \ldots$
$= (1 - \lambda_H)^2 (A \otimes J_D) + \ldots$,
where the last equality is due to the Lemma. Using the Lemma and the Claims, one gets
$\lambda_M \le (1 - \lambda_H)^2 \lambda_{A \otimes J_D} + 1 - (1 - \lambda_H)^2 \le \max\{\lambda_G, \lambda_{J_D}\} + 2 \lambda_H = \lambda_G + 2 \lambda_H$.

75 Zig-Zag Product
The lemma is useful when both $\lambda_G$ and $\lambda_H$ are small. If not, a different upper bound can be derived. Both upper bounds are discussed in the following paper.
1. O. Reingold, S. Vadhan, and A. Wigderson. Entropy Waves, the Zig-Zag Graph Product, and New Constant Degree Expanders and Extractors. FOCS, 2000.

76 Expander Construction I
The crucial point of the zig-zag construction is that we can use a constant-size graph to build a constant-degree graph family.
Let $H$ be a $(D^4, D, 1/8)$-graph constructed by brute force. Define
$G_1 = H^2$, $\quad G_{k+1} = G_k^2 \, z \, H$.
Fact. $G_k$ is a $(D^{4k}, D^2, 1/2)$-graph.
Proof. The base case is clear from the Lemma, and the induction step is taken care of by the previous lemma.

77 Expander Construction I
The time to access a neighbor of a vertex is given by the following recursive equation:
$\mathrm{time}(G_{k+1}) = 2 \cdot \mathrm{time}(G_k) + \mathrm{poly}(|\text{vertex name}|) = \mathrm{poly}(|G_{k+1}|)$.
The expander family is mildly explicit but not strongly explicit.

78 Expander Construction I
Both the size of the graph and the time to compute a neighbor grow exponentially in $k$. This suggests using tensor products to grow the size of the graph doubly exponentially. We will explain the idea using a variant of the zig-zag product.

79 Replacement Product
$G$ is an $n$-vertex $D$-degree graph. $H$ is a $D$-vertex $d$-degree graph. The replacement product $G R H$ is the $nD$-vertex $2d$-degree graph constructed in the following manner:
1. Every vertex $w$ of $G$ is replaced by a copy $H_w$ of $H$.
2. If $\hat{G}(u, l) = (v, m)$, place $d$ parallel edges from the $l$-th vertex of $H_u$ to the $m$-th vertex of $H_v$.
The replacement product is well known in graph theory. It is often used to reduce vertex degree without losing connectivity.

80 Replacement Product
We need a picture here.

81 Replacement Product
The rotation map on $([n] \times [D]) \times ([d] \times \{0, 1\})$ is defined by
$\widehat{G R H}(u, m, i, b) = \begin{cases} (u, \hat{H}(m, i), b), & \text{if } b = 0, \\ (\hat{G}(u, m), i, b), & \text{if } b = 1. \end{cases}$
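The case split translates directly into code; `replacement_rotation`, `rotG` and `rotH` below are hypothetical names used only for this sketch:

```python
def replacement_rotation(rotG, rotH, u, m, i, b):
    # Rotation map of the replacement product G R H on [n] x [D] x [d] x {0,1}.
    # b = 0: move inside the H-copy of u; b = 1: cross to the neighboring
    # copy along the G-edge leaving port m of u.
    if b == 0:
        m2, i2 = rotH(m, i)      # step along an edge of H_u
        return u, m2, i2, 0
    v, m2 = rotG(u, m)           # step along an edge of G
    return v, m2, i, 1
```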

82 Replacement Product
Lemma. $\lambda_{G R H} \le 1 - \frac{(1 - \lambda_G)(1 - \lambda_H)^2}{24}$, that is $\gamma_{G R H} \ge \frac{1}{24} \gamma_G \gamma_H^2$.
Let $A, B$ be the random walk matrices of $G, H$ respectively. $\hat{A}$ is the $(nD) \times (nD)$ permutation matrix corresponding to $\hat{G}$. By the Lemma, $B = \lambda_H E + \gamma_H J_D$ for some $E$ with $\|E\|_2 \le 1$. The $(nD) \times (nD)$ random walk matrix of $G R H$ is
$A R B = \frac{1}{2} \hat{A} + \frac{1}{2} (I_n \otimes B)$.
It suffices to prove $\lambda_{A R B}^3 \le 1 - \frac{\gamma_G \gamma_H^2}{8}$, which is $\lambda_{(A R B)^3} \le 1 - \frac{\gamma_G \gamma_H^2}{8}$.

83 Replacement Product
We have
$(A R B)^3 = \Big( \frac{1}{2} \hat{A} + \frac{1}{2} (I_n \otimes B) \Big)^3 = \Big( \frac{1}{2} \hat{A} + \frac{1}{2} (I_n \otimes (\lambda_H E + \gamma_H J_D)) \Big)^3 = \frac{1}{8} \Big( \hat{A} + \lambda_H (I_n \otimes E) + \gamma_H (I_n \otimes J_D) \Big)^3 = \frac{1}{8} \Big( \cdots + \gamma_H^2 (I_n \otimes J_D) \hat{A} (I_n \otimes J_D) \Big) = \frac{1}{8} \Big( \cdots + \gamma_H^2 (A \otimes J_D) \Big)$,
where the last equality is due to the Lemma. Applying the two Claims, we get
$\lambda_{(A R B)^3} \le 1 - \frac{\gamma_H^2}{8} + \frac{\gamma_H^2}{8} \lambda_{A \otimes J_D} \le 1 - \frac{\gamma_H^2}{8} + \frac{\gamma_H^2}{8} \lambda_G = 1 - \frac{\gamma_H^2}{8} \gamma_G$.

84 Expander Construction II
Theorem. There exists a strongly explicit $(4, \lambda)$-expander family for some $\lambda < 1$.
As a first step we prove that we can efficiently construct a family $\{G_k\}_k$ of graphs where each $G_k$ has $(2d)^{100k}$ vertices.
1. Let $H$ be a $((2d)^{100}, d, 0.01)$-expander graph, $G_1$ a $((2d)^{100}, 2d, 0.5)$-expander graph, and $G_2$ a $((2d)^{100 \cdot 2}, 2d, 0.5)$-expander graph.
2. For $k > 2$ define $G_k = \big( G_{\lceil (k-1)/2 \rceil} \otimes G_{\lfloor (k-1)/2 \rfloor} \big)^{50} R \, H$.
Tensor product increases graph size. Path product improves spectral expansion. Replacement product reduces degree.

85 Expander Construction II
$G_k$ is a $((2d)^{100k}, 2d, 0.98)$-expander graph.
1. Let $n_k$ be the number of vertices of $G_k$. Then $n_k = n_{\lceil (k-1)/2 \rceil} \cdot n_{\lfloor (k-1)/2 \rfloor} \cdot (2d)^{100} = (2d)^{100 \lceil (k-1)/2 \rceil} (2d)^{100 \lfloor (k-1)/2 \rfloor} (2d)^{100} = (2d)^{100k}$.
2. $G_{\lceil (k-1)/2 \rceil}, G_{\lfloor (k-1)/2 \rfloor}$ have degree $2d$; the tensor product has degree $(2d)^2$; its $50$-th power has degree $(2d)^{100}$; $G_k$ has degree $2d$.
3. $\lambda_{G_{\lceil (k-1)/2 \rceil}}, \lambda_{G_{\lfloor (k-1)/2 \rfloor}} \le 0.98$ implies $\lambda$ of the tensor product is $\le 0.98$, so $\lambda$ of its $50$-th power is $\le 0.98^{50} \le 0.5$, hence $\lambda_{G_k} \le 1 - 0.5 (0.99)^2 / 24 < 0.98$.

86 Expander Construction II
There is a $\mathrm{poly}(k)$-time algorithm that upon receiving a label $i$ of a vertex in $G_k$ and an index $j \in [2d]$ finds the $j$-th neighbor of $i$.
1. A search for a neighbor of the input vertex of $G_k$ recursively calls $50$ times on $G_{\lceil (k-1)/2 \rceil}$ and $50$ times on $G_{\lfloor (k-1)/2 \rfloor}$.
2. The depth of the recursive calls is bounded by $t = O(\log k)$.
3. The overall time is bounded by $2^{O(\log k)} = \mathrm{poly}(k)$.

87 Expander Construction II
Suppose $(2d)^{100k} < i < (2d)^{100(k+1)}$. Let $(2d)^{100(k+1)} = x i + r$ with $0 \le r < i$. Divide the $(2d)^{100(k+1)}$ vertices into $i$ classes, among which $r$ classes are of size $x + 1$ and $i - r$ classes are of size $x$. Contract every class into a mega-vertex. Add $2d$ self-loops to each of the $i - r$ mega-vertices.
This is an $(i, \, 2d(x+1), \, (2d) \cdot 0.01/(x+1))$ edge expander. Since $x + 1 \le (2d)^{100}$, we get an edge expander family of degree at most $(2d)^{101}$ and expansion ratio at least $0.01/(2d)^{99}$.

88 Reingold's Theorem

89 Theorem. UPATH $\in$ L.
1. O. Reingold. Undirected ST-Connectivity in Log-Space. STOC, 2005.

90 The Idea
The Connectivity Algorithm for a $d$-degree expander graph is easy. The diameter of an expander graph is $O(\log n)$, so an exhaustive search can be carried out in logspace.
Reingold's idea is to transform, conceptually, a graph $G$ into a graph $G'$ so that a connected component in $G$ becomes an expander in $G'$ and unconnected vertices in $G$ remain unconnected in $G'$. Moreover, finding a neighbor of a given vertex in the conceptual $G'$ can be done in $O(\log |G'|)$ space.

91 The Algorithm
1. Fix a $(d^{50}, d/2, 0.01)$-expander graph $H$ for $d = 4$.
2. Convert the input graph $G$ to a $d^{50}$-degree graph on the fly.
2.1 Add self-loops to increase degree.
2.2 Replace a large degree vertex by a cycle to decrease degree.
3. $G_0 = G$; $G_k = (G_{k-1} R H)^{50}$ is constructed on the fly.
4. Apply the Connectivity Algorithm to the expander $G_{10 \log n}$.
If $G_{k-1}$ is an $N$-vertex $d^{50}$-degree graph, $G_{k-1} R H$ is a $d^{50} N$-vertex $d$-degree graph, and $(G_{k-1} R H)^{50}$ is a $d^{50} N$-vertex $d^{50}$-degree graph. So $G_{10 \log n}$ contains $(d^{50})^{10 \log n} \cdot n = n^{1001}$ vertices.

92 The Complexity
Only paths of length $2 \log(n^{1001}) = O(\log n)$ need be considered.
1. Each vertex on a path, except $s, t$, is coded up by $50 \log d$ bits, say $x$, declaring that it is the $x$-th neighbor of the previous vertex.
2. The algorithm keeps the current vertex. When backtracking it starts from $s$ all over again to get the previous vertex.
Step 2 and Step 3 of the algorithm can be carried out on the fly using this mechanism. We cannot record all the vertices in a path; that would take $\Theta(\log^2 n)$ space.
Although Expander Construction I is only mildly explicit, computing the $i$-th neighbor of a given vertex is in logspace.

93 Lewis and Papadimitriou introduced SL as the class of problems solvable in logspace by an NTM that satisfies the following.
1. If the answer is yes, one or more computation paths accept.
2. If the answer is no, all paths reject.
3. If the machine can make a transition from configuration $C$ to configuration $D$, then it can also go from $D$ to $C$.
Theorem. UPATH is SL-complete.
Corollary. UPATH is L-complete.
Proof. Reingold's Theorem implies that L = SL.


More information

Review of similarity transformation and Singular Value Decomposition

Review of similarity transformation and Singular Value Decomposition Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm

More information

Schur s Triangularization Theorem. Math 422

Schur s Triangularization Theorem. Math 422 Schur s Triangularization Theorem Math 4 The characteristic polynomial p (t) of a square complex matrix A splits as a product of linear factors of the form (t λ) m Of course, finding these factors is a

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation Lecture Cholesky method QR decomposition Terminology Linear systems: Eigensystems: Jacobi transformations QR transformation Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition

More information

The Singular Value Decomposition and Least Squares Problems

The Singular Value Decomposition and Least Squares Problems The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving

More information

Linear Algebra using Dirac Notation: Pt. 2

Linear Algebra using Dirac Notation: Pt. 2 Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lecture 18, Friday 18 th November 2016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON Mathematics

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Expander Construction in VNC 1

Expander Construction in VNC 1 Expander Construction in VNC 1 Sam Buss joint work with Valentine Kabanets, Antonina Kolokolova & Michal Koucký Prague Workshop on Bounded Arithmetic November 2-3, 2017 Talk outline I. Combinatorial construction

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

18.06 Problem Set 10 - Solutions Due Thursday, 29 November 2007 at 4 pm in

18.06 Problem Set 10 - Solutions Due Thursday, 29 November 2007 at 4 pm in 86 Problem Set - Solutions Due Thursday, 29 November 27 at 4 pm in 2-6 Problem : (5=5+5+5) Take any matrix A of the form A = B H CB, where B has full column rank and C is Hermitian and positive-definite

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Designing Information Devices and Systems II

Designing Information Devices and Systems II EECS 16B Fall 2016 Designing Information Devices and Systems II Linear Algebra Notes Introduction In this set of notes, we will derive the linear least squares equation, study the properties symmetric

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Undirected ST-Connectivity in Log-Space. Omer Reingold. Presented by: Thang N. Dinh CISE, University of Florida, Fall, 2010

Undirected ST-Connectivity in Log-Space. Omer Reingold. Presented by: Thang N. Dinh CISE, University of Florida, Fall, 2010 Undirected ST-Connectivity in Log-Space Omer Reingold Presented by: Thang N. Dinh CISE, University of Florida, Fall, 2010 Undirected s-t connectivity(ustcon) Is t reachable from s in a undirected graph

More information