Random Walk and Expander


1 Random Walk and Expander

2 We have seen some major results based on the assumption that certain random/hard objects exist. We will see what we can achieve unconditionally using explicit constructions of pseudorandom objects.

3 Synopsis

1. Preliminary on Linear Algebra
2. Random Walk
3. Expander Graph
4. Explicit Construction of Expander Graph
5. Reingold's Theorem

4 Preliminary on Linear Algebra

5 Three Views

Matrix = linear transformation $f : \mathbb{R}^n \to \mathbb{R}^m$.
1. $f(u + v) = f(u) + f(v)$ and $f(cu) = c f(u)$.
2. The matrix corresponding to $f$ has $f(e_j)_i$ as the $(i,j)$-th entry.

Interpretation of $v = Au$.
1. Dynamic view: $u$ is transformed to $v$, a movement in one basis.
2. Static view: $u$ in the column basis is the same as $v$ in the standard basis, one point in two bases.

Equation, geometry (row picture), algebra (column picture): linear equation, hyperplane, linear combination.

6 Inner Product, Projection, Orthogonality

1. The inner product $u^T v$ measures the degree of colinearity of $u, v$: $\cos\theta = \frac{u^T v}{\|u\|\,\|v\|}$, projection $= \frac{u^T v}{u^T u}\, u$, orthogonality $u^T v = 0$.
2. Row space $\perp$ null space, column space $\perp$ left null space.
3. Basis, orthogonal/orthonormal basis.
4. Orthogonal matrix: $Q^{-1} = Q^T$. Gram-Schmidt orthogonalization, $A = QR$.

Cauchy-Schwarz Inequality. $|\cos\theta| = \frac{|u^T v|}{\|u\|\,\|v\|} \le 1$.

7 Fixpoints for Linear Transformation

We look for fixpoints of a linear transformation $A : \mathbb{R}^n \to \mathbb{R}^n$: $Av_i = \lambda_i v_i$.

If there are $n$ linearly independent fixpoints $v_1, \ldots, v_n$, every vector $v \in \mathbb{R}^n$ is some linear combination $c_1 v_1 + \ldots + c_n v_n$. By linearity, $Av = c_1 Av_1 + \ldots + c_n Av_n = c_1\lambda_1 v_1 + \ldots + c_n\lambda_n v_n$.

In this case the effect of $A$ is to move the vector with coordinates $(c_1, \ldots, c_n)$ in the basis $v_1, \ldots, v_n$ to the vector with coordinates $(c_1\lambda_1, \ldots, c_n\lambda_n)$ in the same basis.

All lower case letters will denote column vectors.

8 Eigenvalue, Eigenvector, Eigenmatrix

1. An eigenvector is a nonzero solution to $Ax = \lambda x$. In other words, $A - \lambda I$ is singular.
2. $S = [x_1, \ldots, x_n]$ is the eigenmatrix: $AS = S\Lambda$.
3. $x_1, \ldots, x_n$ are linearly independent if $\lambda_1, \ldots, \lambda_n$ are all different.
4. In case $x_1, \ldots, x_n$ are linearly independent, $A = S\Lambda S^{-1}$.
5. We shall write the spectrum $\lambda_1, \lambda_2, \ldots, \lambda_n$ of a matrix in such a way that $|\lambda_1| \ge |\lambda_2| \ge \ldots \ge |\lambda_n|$.
6. The value $\rho(A) = |\lambda_1|$ is called the spectral radius.

9 Hermitian Transpose and Symmetric Matrix

                      real matrix                                complex matrix
length                $\|x\| = \sqrt{\sum_{i\in[n]} x_i^2}$      $\|x\| = \sqrt{\sum_{i\in[n]} |x_i|^2}$
transpose             $A^T$                                      $A^H$
inner product         $x^T y = \sum_{i\in[n]} x_i y_i$           $x^H y = \sum_{i\in[n]} \overline{x_i}\, y_i$
orthogonality         $x^T y = 0$                                $x^H y = 0$
symmetric/Hermitian   $A^T = A$                                  $A^H = A$
diagonalization       $A = Q\Lambda Q^T$                         $A = U\Lambda U^H$
orthogonal/unitary    $Q^T Q = I$                                $U^H U = I$

Fact. If $A^H = A$, then $x^H A x = (x^H A x)^H$ is real for all complex $x$.
Fact. If $A^H = A$, every eigenvalue is real since $v^H A v = \lambda v^H v = \lambda \|v\|^2$.
Fact. The eigenvectors of two different eigenvalues are orthogonal.
Fact. $\|Ux\|_2 = \|x\|_2$ and $\|Qx\|_2 = \|x\|_2$.

10 Similarity Transformation

Similarity transformation = change of basis.
1. $A$ is similar to $B$ if $A = MBM^{-1}$ for some invertible $M$.
2. $A$ and $B$ have the same eigenvalues. An eigenvector $v$ of $A$ corresponds to the eigenvector $M^{-1}v$ of $B$.
3. A linear transformation exists without any basis. $A$ and $B$ describe the same transformation, the difference being that they use different bases.
3.1 The basis of $B$ consists of the column vectors of $M$.
3.2 A vector $x$ in the basis of $A$ is the vector $M^{-1}x$ in the basis of $B$, that is $x = M(M^{-1}x)$.
3.3 $B$ then transforms $M^{-1}x$ into some $y$ in the basis of $B$.
3.4 In the basis of $A$ the vector $Ax$ is $My$.

Fact. Similar matrices have the same eigenvalues.

11 Triangularization

Diagonalization is a special case of similarity transformation. In diagonalization $Q$ provides an orthogonal basis.

Question. Is every matrix similar to a diagonal matrix?

Schur's Lemma. For each matrix $A$ there is a unitary matrix $U$ such that $T = U^{-1}AU$ is triangular. The eigenvalues of $A$ appear on the diagonal of $T$.

If $A$ is Hermitian, $T$ must be diagonal.

12 Spectral Theorem

Theorem. Every Hermitian matrix $A$ can be diagonalized by a unitary matrix $U$; every symmetric matrix $A$ can be diagonalized by an orthogonal matrix $Q$:
$$U^H A U = \Lambda, \qquad Q^T A Q = \Lambda.$$
The eigenvalues are in $\Lambda$; the orthonormal eigenvectors are in $U$/$Q$.

Corollary. Every Hermitian matrix $A$ has a spectral decomposition
$$A = U\Lambda U^H = \lambda_1 v_1 v_1^H + \ldots + \lambda_n v_n v_n^H.$$
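
The theorem is easy to check numerically. Below is a minimal sketch using numpy on a randomly generated symmetric matrix (the example and the names in it are ours, not part of the slides):

```python
import numpy as np

# Build a random symmetric matrix A = (B + B^T) / 2.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

# eigh returns real eigenvalues and orthonormal eigenvectors for symmetric A.
lam, Q = np.linalg.eigh(A)

# Q is orthogonal, A = Q diag(lam) Q^T, and the spectral decomposition holds.
assert np.allclose(Q.T @ Q, np.eye(5))
assert np.allclose(A, Q @ np.diag(lam) @ Q.T)
assert np.allclose(A, sum(l * np.outer(v, v) for l, v in zip(lam, Q.T)))
```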

13 Diagonalization

We still need to answer the Question: what are the matrices that are similar to diagonal matrices via a unitary similarity? Answer: they are the normal matrices. A matrix $N$ is normal if $NN^H = N^H N$.

Theorem. A matrix $N$ is normal iff $T = U^{-1}NU$ is diagonal iff $N$ has a complete set of orthonormal eigenvectors.

Proof. If $N$ is normal, so is $T$. By examining the equality $TT^H = T^H T$, one sees that $T$ is diagonal. If $T$ is diagonal, the eigenvalues of $N$ are all in $\mathrm{diag}(T)$, and the equality $T = U^{-1}NU$ says precisely that the column vectors of $U$ are the eigenvectors.

14 Jordan Form

What if a matrix is not diagonalizable? Every matrix is similar to a Jordan form
$$M^{-1}AM = J = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{pmatrix}, \quad \text{where } J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}.$$
Here $s$ is the number of independent eigenvectors. The same eigenvalue $\lambda_i$ will appear in several Jordan blocks if it has several independent eigenvectors.

Theorem. $A, B$ are similar iff they have the same Jordan form.

Theorem. If an $n \times n$ matrix $A$ is of rank $r$, then it has at most $r$ nonzero eigenvalues and at least $n - r$ zero eigenvalues.

15 Rayleigh Quotient

Suppose $A$ is an $n \times n$ Hermitian matrix and $(\lambda_1, v_1), \ldots, (\lambda_n, v_n)$ are the eigenvalue-eigenvector pairs. Let $x$ be a nonzero $n$-dimensional vector and let $y_i = v_i^H x$ be the projection of $x$ onto the $i$-th axis $v_i$. The Rayleigh quotient of $A$ and $x$ is defined as follows:
$$R(A, x) = \frac{x^H A x}{x^H x} = \frac{\sum_{i\in[n]} \lambda_i y_i^2}{\sum_{i\in[n]} y_i^2}. \qquad (1)$$
It is clear from (1) that
if $\lambda_1 \ge \ldots \ge \lambda_n$, then $\lambda_i = \max_{x \perp v_1, \ldots, v_{i-1}} R(A, x)$, and
if $\lambda_1 \le \ldots \le \lambda_n$, then $\lambda_i = \min_{x \perp v_1, \ldots, v_{i-1}} R(A, x)$.

One can use the Rayleigh quotient to derive lower bounds for $\lambda_i$.

16 Positive Definite Matrix

A symmetric matrix $A$ is positive definite if $x^T A x > 0$ for all nonzero vectors $x$.

Theorem. Let $A$ be symmetric. The following are equivalent.
1. $x^T A x > 0$ for all nonzero vectors $x$.
2. $\lambda_i > 0$ for all eigenvalues $\lambda_i$.
3. $\det(A_i) > 0$ for all upper left sub-matrices $A_i$.
4. $d_i > 0$ for all pivots $d_i$.
5. There is a matrix $R$ with independent columns such that $A = R^T R$.

If we replace $>$ by $\ge$, we get positive semidefinite.

17 Singular Value Decomposition

Consider an $m$ by $n$ matrix $A$. Both $AA^T$ and $A^T A$ are symmetric.
1. $AA^T$ is positive semidefinite since $x^T AA^T x = \|A^T x\|^2 \ge 0$.
2. $AA^T = U\Sigma' U^T$, where $U$ consists of the orthonormal eigenvectors $u_1, \ldots, u_m$ and $\Sigma'$ is the diagonal matrix made up from the eigenvalues $\sigma_1^2, \ldots, \sigma_r^2$ (and zeros). We assume $\sigma_1 \ge \ldots \ge \sigma_r > 0$.
3. Similarly $A^T A = V\Sigma'' V^T$.
4. Now $AA^T u_i = \sigma_i^2 u_i$ implies that $\sigma_i^2$ is an eigenvalue of $A^T A$ and $A^T u_i$ is the corresponding eigenvector. So $v_i = \frac{A^T u_i}{\|A^T u_i\|}$.
5. Observe that $\|A^T u_i\| = \sigma_i \|u_i\| = \sigma_i$.
6. Hence $Av_i = \frac{AA^T u_i}{\|A^T u_i\|} = \frac{\sigma_i^2 u_i}{\sigma_i} = \sigma_i u_i$. Conclude $AV = U\Sigma$.

18 Singular Value Decomposition

1. $\sigma_1, \ldots, \sigma_r$ are called the singular values of $A$.
2. $A = U\Sigma V^T$ is the singular value decomposition, or SVD, of $A$.

Lemma. If $A$ is normal, then $\sigma_i = |\lambda_i|$ for all $i \in [n]$.

Proof. Since $A$ is normal, $A = U\Lambda U^H$. Now $A^H A = AA^H = U|\Lambda|^2 U^H$. So the spectrum of $A^H A$ (equivalently $AA^H$) is $|\lambda_1|^2, \ldots, |\lambda_n|^2$.
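
A quick numerical check of the lemma on a symmetric (hence normal) matrix, comparing the singular values with the absolute values of the eigenvalues (a sketch in numpy; the example is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                            # symmetric, hence normal

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
lam = np.linalg.eigvalsh(A)                  # eigenvalues, ascending

# For a normal matrix the singular values are |eigenvalues|.
assert np.allclose(sigma, np.sort(np.abs(lam))[::-1])
```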

19 Vector Norm

The norm of a vector is a measure of its magnitude/size/length. A norm on $F^n$ is a function $\|\cdot\| : F^n \to \mathbb{R}_{\ge 0}$ satisfying the following:
1. $\|v\| = 0$ iff $v = 0$.
2. $\|av\| = |a|\,\|v\|$.
3. $\|v + w\| \le \|v\| + \|w\|$.
A vector space with a norm is called a normed vector space.

1. $L_1$-norm. $\|v\|_1 = |v_1| + \ldots + |v_n|$.
2. $L_2$-norm. $\|v\|_2 = \sqrt{|v_1|^2 + \ldots + |v_n|^2} = \sqrt{v^T v}$.
3. $L_p$-norm. $\|v\|_p = \sqrt[p]{|v_1|^p + \ldots + |v_n|^p}$.
4. $L_\infty$-norm. $\|v\|_\infty = \max\{|v_1|, \ldots, |v_n|\}$.

20 Matrix Norm

We define matrix norms compatible with vector norms. Suppose $F^n$ is a normed vector space over a field $F$. An induced matrix norm is a function $\|\cdot\| : F^{n\times n} \to \mathbb{R}_{\ge 0}$ satisfying the following properties.
1. $\|A\| = 0$ iff $A = 0$.
2. $\|aA\| = |a|\,\|A\|$.
3. $\|A + B\| \le \|A\| + \|B\|$.
4. $\|AB\| \le \|A\|\,\|B\|$.

21 Matrix Norm

A matrix norm measures the amplifying power of a matrix. Define
$$\|A\| = \max_{v \ne 0} \frac{\|Av\|}{\|v\|}.$$
It satisfies (1-4). Additionally $\|Ax\| \le \|A\|\,\|x\|$ for all $x$.

Lemma. $\rho(A) \le \|A\|$.

$$\|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^n |A_{i,j}|, \qquad \|A\|_\infty = \max_{1\le i\le n} \sum_{j=1}^n |A_{i,j}|.$$

22 Spectral Norm

$\|A\|_2$ is called the spectral norm of $A$. It satisfies $\frac{1}{\sqrt{n}}\|A\|_1 \le \|A\|_2 \le \sqrt{n}\,\|A\|_1$.

Lemma. $\|A\|_2 = \sigma_1$.

Corollary. If $A$ is a normal matrix, then $\|A\|_2 = |\lambda_1|$.

Proof of the Lemma. Let $A^T A = V\Sigma'' V^T$ be the decomposition, let $v_1, \ldots, v_n$ be the orthonormal eigenvectors, and let $x = a_1 v_1 + \ldots + a_n v_n$. Then
$$\|Ax\|_2^2 = x^T(A^T A x) = x^T\Big(\sum_{i\in[n]} \sigma_i^2 a_i v_i\Big) = \sum_{i\in[n]} \sigma_i^2 a_i^2 \le \sigma_1^2 \|x\|_2^2.$$
The equality holds when $x = v_1$. Therefore $\|A\|_2 = \sigma_1$.
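
The lemma can be probed numerically by comparing random amplification ratios against the largest singular value (a small sketch in numpy; the example is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 7))

# Estimate max ||Av||_2 / ||v||_2 over many random directions v.
vs = rng.standard_normal((7, 100000))
ratios = np.linalg.norm(A @ vs, axis=0) / np.linalg.norm(vs, axis=0)

sigma1 = np.linalg.svd(A, compute_uv=False)[0]
assert ratios.max() <= sigma1 + 1e-12   # no direction beats sigma_1
print(ratios.max(), sigma1)             # the estimate approaches sigma_1
```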

23 Random Walk

24 Graphs are the prime objects of study in combinatorics. The matrix representation of graphs lends itself to an algebraic treatment of these combinatorial objects. It is especially effective in the treatment of regular graphs.

25 Our digraphs admit both self-loops and parallel edges. An undirected edge is seen as two directed edges in opposite directions. Whenever we say graph, we will always mean undirected graph.

26 Random Walk Matrix

The reachability matrix $M$ of a digraph $G$ is defined by $M_{i,j} = 1$ if there is an edge from vertex $j$ to vertex $i$, and $M_{i,j} = 0$ otherwise. The random walk matrix $A$ of a $d$-regular digraph $G$ is $\frac{1}{d}M$.

Let $p$ be a probability distribution over the vertices of $G$ and let $A$ be the random walk matrix of $G$. Then $A^k p$ is the distribution after a $k$-step random walk.
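
A minimal sketch of these definitions in numpy; the graph (a 4-cycle with a self-loop at every vertex, hence 3-regular) is our own toy example:

```python
import numpy as np

n, d = 4, 3
M = np.zeros((n, n))
for u in range(n):
    # self-loop plus the two cycle neighbors of u
    for v in (u, (u + 1) % n, (u - 1) % n):
        M[v, u] += 1                 # edge from u to v

A = M / d                            # random walk matrix, column-stochastic

p = np.zeros(n); p[0] = 1.0          # walk starts at vertex 0
for _ in range(10):
    p = A @ p                        # one step of the walk
print(p)                             # approaches the uniform distribution 1/n
```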

27 Random Walk Matrix

Consider the following periodic graph with $dn$ vertices. The vertices are arranged in $n$ layers, each consisting of $d$ vertices. There is an edge from every vertex in the $i$-th layer to every vertex in the $j$-th layer, where $j = i + 1 \bmod n$.

Does $A^k p$ converge to a stationary state?

28 Spectral Graph Theory

Suppose $G$ is a $d$-regular graph and $A$ is the random walk matrix of $G$. The following hold.
1. $1$ is an eigenvalue of $A$ and its associated eigenvector is the stationary distribution vector $\mathbf{1} = (\frac{1}{n}, \ldots, \frac{1}{n})^T$. I.e., $A\mathbf{1} = \mathbf{1}$.
2. All eigenvalues have absolute values at most $1$.
3. $G$ is disconnected iff $1$ is an eigenvalue of multiplicity at least $2$.
4. If $G$ is connected, $G$ is bipartite iff $-1$ is an eigenvalue of $A$.

In spectral graph theory graph properties are characterized in terms of graph spectra.

29 Rate of Convergence

For a regular graph $G$ with random walk matrix $A$, we define
$$\lambda_G \stackrel{\text{def}}{=} \max_{p} \frac{\|Ap - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} = \max_{v \perp \mathbf{1}} \frac{\|Av\|_2}{\|v\|_2} = \max_{v \perp \mathbf{1},\ \|v\|_2 = 1} \|Av\|_2,$$
where $p$ ranges over all probability distribution vectors.

The definitions are equivalent.
1. $(p - \mathbf{1}) \perp \mathbf{1}$ and $Ap - \mathbf{1} = A(p - \mathbf{1})$.
2. For each $v \perp \mathbf{1}$, $p = \alpha v + \mathbf{1}$ is a probability distribution vector for a sufficiently small $\alpha$.

By definition $\|Av\|_2 \le \lambda_G \|v\|_2$ for all $v \perp \mathbf{1}$.

30 Lemma. $\lambda_G = |\lambda_2|$.

Let $v_2, \ldots, v_n$ be the eigenvectors corresponding to $\lambda_2, \ldots, \lambda_n$. Given $x \perp \mathbf{1}$, write $x = c_2 v_2 + \ldots + c_n v_n$. Then
$$\|Ax\|_2^2 = \|\lambda_2 c_2 v_2 + \ldots + \lambda_n c_n v_n\|_2^2 = \lambda_2^2 c_2^2 \|v_2\|_2^2 + \ldots + \lambda_n^2 c_n^2 \|v_n\|_2^2 \le \lambda_2^2\big(c_2^2 \|v_2\|_2^2 + \ldots + c_n^2 \|v_n\|_2^2\big) = \lambda_2^2 \|x\|_2^2.$$
So $\lambda_G^2 \le \lambda_2^2$. The equality holds since $\|Av_2\|_2 = |\lambda_2|\,\|v_2\|_2$.
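
The lemma gives a direct way to compute $\lambda_G$ for a symmetric walk matrix: sort the eigenvalues by absolute value and take the second one. A sketch (numpy), together with a Monte Carlo check against the definition:

```python
import numpy as np

def lambda_G(A):
    """Second largest eigenvalue of symmetric A in absolute value."""
    lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return lam[1]

def lambda_G_by_definition(A, trials=100000, seed=3):
    """Estimate max over v with v ⊥ 1 of ||Av||_2 / ||v||_2 by sampling."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    vs = rng.standard_normal((n, trials))
    vs -= vs.mean(axis=0)            # project each sample onto the space ⊥ 1
    return (np.linalg.norm(A @ vs, axis=0) /
            np.linalg.norm(vs, axis=0)).max()
```

On the 4-cycle with self-loops used earlier, the two functions agree up to sampling error.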

31 The spectral gap $\gamma_G$ of a graph $G$ is defined by $\gamma_G = 1 - \lambda_G$. A graph has spectral expansion $\gamma$, where $\gamma \in (0, 1)$, if $\gamma_G \ge \gamma$.

In an expander $G$, $\gamma_G$ provides a bound on the expansion ratio. In terms of random walk, $\lambda_G$ bounds the rate of mixing.

32 Lemma. Let $G$ be an $n$-vertex regular graph and $p$ a probability distribution over the vertices of $G$. Then $\|A^\ell p - \mathbf{1}\|_2 \le \lambda_G^\ell\, \|p - \mathbf{1}\|_2 < \lambda_G^\ell$.

The first inequality holds because
$$\frac{\|A^\ell p - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} = \frac{\|A^\ell p - \mathbf{1}\|_2}{\|A^{\ell-1} p - \mathbf{1}\|_2} \cdots \frac{\|Ap - \mathbf{1}\|_2}{\|p - \mathbf{1}\|_2} \le \lambda_G^\ell.$$
The second inequality holds because $\|p - \mathbf{1}\|_2^2 = \|p\|_2^2 - \frac{1}{n} \le 1 - \frac{1}{n} < 1$.

33 Lemma. If $G$ is an $n$-vertex regular connected graph with self-loops at each vertex, then $\gamma_G \ge \frac{1}{12n^2}$.

Let $\epsilon = \frac{1}{6n^2}$. Let $u$ be a unit vector such that $u \perp \mathbf{1}$, and let $v = Au$. If we can prove $1 - \|v\|_2^2 \ge \epsilon$, we will get $\|v\|_2 \le 1 - \frac{\epsilon}{2}$.

It is easy to show that $1 - \|v\|_2^2 = \|u\|_2^2 - \|v\|_2^2 = \sum_{i,j} A_{i,j}(u_i - v_j)^2$.

Since $u$ is a unit vector orthogonal to $\mathbf{1}$, $u_i - u_j \ge \frac{1}{\sqrt{n}}$ for some $i, j \in [n]$. Now
$$\frac{1}{\sqrt{n}} \le u_i - u_j = (u_i - v_i) + (v_i - u_{i_1}) + \ldots + (v_{i_k} - u_j) \le |u_i - v_i| + |v_i - u_{i_1}| + \ldots + |v_{i_k} - u_j| \le \sqrt{2D+1} \cdot \sqrt{(u_i - v_i)^2 + (v_i - u_{i_1})^2 + \ldots + (v_{i_k} - u_j)^2},$$
where $i\, i_1 \ldots i_k\, j$ is a shortest path from $i$ to $j$ and $D$ is the diameter of $G$. The lemma then follows from $A_{h,h}, A_{h,h'} \ge \frac{1}{d}$ and $D \le \frac{3n}{d+1}$.

34 Randomized Algorithm for Undirected Connectivity

Corollary. Let $G$ be a $d$-degree $n$-vertex graph with a self-loop on every vertex. Let $s, t$ be connected. Let $\ell > 24n^2 \log n$ and let $X_\ell$ denote the vertex distribution after an $\ell$-step random walk from $s$. Then $\Pr[X_\ell = t] > \frac{1}{2n}$.

Graphs with self-loops are not bipartite. According to the Lemmas,
$$\|A^\ell e_s - \mathbf{1}\|_2 < \Big(1 - \frac{1}{12n^2}\Big)^{24n^2 \log n} < \frac{1}{n^2}.$$
It follows that $(A^\ell e_s)_t \ge \frac{1}{n} - \frac{1}{n^2} > \frac{1}{2n}$.

If the $24n^2 \log n$-step walk is repeated $2n^2$ times, the error probability is reduced to below $2^{-n}$.
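
The corollary translates directly into the random-walk connectivity test; a sketch as plain code, with the graph given by adjacency lists (our own interface, and deliberately naive):

```python
import random

def reachable_by_walk(adj, s, t, seed=None):
    """One-sided-error s-t connectivity test by random walk, for a
    regular graph with self-loops given as adjacency lists adj[u]."""
    rng = random.Random(seed)
    n = len(adj)
    steps = 24 * n * n * max(1, n.bit_length())   # ~ 24 n^2 log n
    for _ in range(2 * n * n):                    # amplification
        u = s
        for _ in range(steps):
            u = rng.choice(adj[u])
            if u == t:
                return True
    return False
```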

35 Randomized Algorithm for Undirected Connectivity

Theorem. Connectivity in undirected graphs is in RL.

An undirected graph can be turned into a non-bipartite regular graph by introducing enough self-loops.

36 Expander Graph

37 Expander graphs, defined by Pinsker in 1973, are simultaneously sparse and well connected. Sparsity should be understood in an asymptotic sense. Expander graphs behave approximately like complete graphs.

Hoory, Linial, and Wigderson. Expander Graphs and their Applications. Bulletin of the AMS, 43:439-561, 2006.

38 Well-connectedness can be characterized in a number of manners.
1. Algebraically, expanders are graphs whose second largest eigenvalue is bounded away from 1 by a constant.
2. Combinatorially, expanders are highly connected: every set of vertices of an expander has a large boundary.
3. Probabilistically, expanders are graphs in which a random walk converges to the stationary distribution as quickly as possible.

39 Algebraic Property

Intuitively, the faster a random walk converges, the better the graph is connected. According to the Lemma, the smaller $\lambda_G$ is, the faster a random walk converges to the uniform distribution.

Suppose $d \in \mathbb{N}$ and $\lambda \in (0, 1)$ are constants. A $d$-regular graph $G$ with $n$ vertices is an $(n, d, \lambda)$-graph if $\lambda_G \le \lambda$.

A family of graphs $\{G_n\}_{n\in\mathbb{N}}$ is a $(d, \lambda)$-expander graph family if $G_n$ is an $(n, d, \lambda)$-graph for every $n \in \mathbb{N}$.

40 Probabilistic Property

In expanders a random walk converges to the uniform distribution in logarithmically many steps:
$$\|A^{2\log_{1/\lambda}(n)} p - \mathbf{1}\|_2 < \lambda^{2\log_{1/\lambda}(n)} = \frac{1}{n^2}.$$
In other words, the mixing time of an expander is logarithmic. The mixing time of an $n$-vertex graph $G$ is the minimal $\ell$ such that for any vertex distribution $p$, $\|A^\ell p - \mathbf{1}\| < \frac{1}{2n}$, where $A$ is the random walk matrix of $G$.

The diameter of an $n$-vertex expander graph is $\Theta(\log n)$.

41 Combinatorial Property

Suppose $G = (V, E)$ is an $n$-vertex $d$-regular graph. Let $\overline{S}$ stand for $V \setminus S$ for $S \subseteq V$. Let $E(S, T)$ be the set of edges $i \to j$ with $i \in S$ and $j \in T$. Define $\partial S = E(S, \overline{S})$ for $|S| \le \frac{n}{2}$. The expansion constant $h_G$ of $G$ is defined as follows:
$$h_G = \min_{|S| \le n/2} \frac{|\partial S|}{|S|}.$$
Suppose $d \in \mathbb{N}$ and $\rho > 0$ are constants. We say that $G$ is an $(n, d, \rho)$-edge expander if $h_G \ge \rho d$. I.e., $G$ is an $(n, d, \rho)$-edge expander if $|\partial S| \ge \rho\, d\, |S|$ for all $|S| \le \frac{n}{2}$.
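
For small graphs $h_G$ can be computed by brute force straight from the definition (a sketch; exponential in $n$, so only for sanity checks):

```python
from itertools import combinations

def edge_expansion(adj):
    """h_G = min over nonempty S with |S| <= n/2 of |E(S, S-bar)| / |S|,
    where adj[u] lists the out-neighbors of u in a regular digraph."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            Sset = set(S)
            boundary = sum(1 for u in S for v in adj[u] if v not in Sset)
            best = min(best, boundary / k)
    return best
```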

42 Existence of Expander

Theorem. Let $\epsilon > 0$. There exist $d = d(\epsilon)$ and $N \in \mathbb{N}$ such that for every $n > N$ there exists an $(n, d, \frac{1}{2} - \epsilon)$ edge expander.

43 Expansion and Spectral Gap

Theorem. Let $G = (V, E)$ be a finite, connected, $d$-regular graph. Then
$$\frac{d(1 - \lambda_G)}{2} \le h_G \le d\sqrt{2(1 - \lambda_G)}.$$

J. Dodziuk. Difference Equations, Isoperimetric Inequality and Transience of Certain Random Walks. Trans. AMS, 1984.
N. Alon and V. D. Milman. $\lambda_1$, Isoperimetric Inequalities for Graphs, and Superconcentrators. J. Comb. Theory Ser. B, 1985.
N. Alon. Eigenvalues and Expanders. Combinatorica, 1986.

44 $\frac{d(1-\lambda_G)}{2} \le h_G$

Let $|S| \le \frac{n}{2}$ be such that $\frac{|\partial S|}{|S|} = h_G$. Define $x \perp \mathbf{1}$ by $x = |\overline{S}| \cdot 1_S - |S| \cdot 1_{\overline{S}}$. Then $\|x\|_2^2 = n|S||\overline{S}|$ and
$$x^T A x = \frac{1}{d}\Big(|\overline{S}|^2 |E(S,S)| + |S|^2 |E(\overline{S},\overline{S})| - 2|S||\overline{S}||E(S,\overline{S})|\Big) = n|S||\overline{S}| - \frac{n^2}{d}|E(S,\overline{S})|,$$
using $d|S| = |E(S,S)| + |E(S,\overline{S})|$ and $d|\overline{S}| = |E(\overline{S},\overline{S})| + |E(S,\overline{S})|$.

The Rayleigh quotient $R(A, x)$ provides a lower bound for $\lambda_G$:
$$\lambda_G \ge \frac{x^T A x}{\|x\|_2^2} = 1 - \frac{n}{d} \cdot \frac{|E(S,\overline{S})|}{|S||\overline{S}|} = 1 - \frac{1}{d} \cdot \frac{|\partial S|}{|S|} \cdot \frac{n}{|\overline{S}|} \ge 1 - \frac{2h_G}{d}.$$

45 $h_G \le d\sqrt{2(1-\lambda_G)}$

Let $u \perp \mathbf{1}$ be such that $Au = \lambda_2 u$. Write $u = v + w$, where $v$/$w$ is defined from $u$ by replacing the negative/positive components by $0$. Wlog, assume that the number of positive components of $u$ is at most $\frac{n}{2}$.

46 $h_G \le d\sqrt{2(1-\lambda_G)}$

Wlog, assume $v_1 \ge v_2 \ge \ldots \ge v_n$. Then
$$\sum_{i,j} A_{i,j}|v_i^2 - v_j^2| = 2\sum_{i<j} A_{i,j} \sum_{k=i}^{j-1}(v_k^2 - v_{k+1}^2) = \frac{2}{d}\sum_{k=1}^{n/2} |\partial [k]|\,(v_k^2 - v_{k+1}^2) \ge \frac{2h_G}{d}\sum_{k=1}^{n/2} k\,(v_k^2 - v_{k+1}^2) = \frac{2h_G}{d}\|v\|_2^2,$$
where the second equality holds because for any fixed $k$ the term $v_k^2 - v_{k+1}^2$ appears once for every edge $i \to j$ such that $i \le k < j$.

47 $h_G \le d\sqrt{2(1-\lambda_G)}$

$\langle Av, v\rangle + \langle Aw, v\rangle = \lambda_2 \|v\|_2^2$ follows from $Au = \lambda_2 u$ and $\langle v, w\rangle = 0$. Since $\langle Aw, v\rangle$ is not positive, we may derive
$$1 - \lambda_2 \ge 1 - \frac{\langle Av, v\rangle}{\|v\|_2^2} = \frac{\|v\|_2^2 - \langle Av, v\rangle}{\|v\|_2^2} = \frac{\frac{1}{2}\sum_{i,j} A_{i,j}(v_i - v_j)^2}{\|v\|_2^2}. \qquad (2)$$
Write red for $\sum_{i,j} A_{i,j}(v_i - v_j)^2$, brown for $\sum_{i,j} A_{i,j}(v_i + v_j)^2$, and blue for $\sum_{i,j} A_{i,j}|v_i^2 - v_j^2|$. Using the Cauchy-Schwarz inequality,
$$\text{blue}^2 \le \text{red} \cdot \text{brown}. \qquad (3)$$
Now $\|Av\|_2 \le \sigma_1 \|v\|_2 = \|v\|_2$ implies $\langle Av, v\rangle \le \|v\|_2^2$, hence brown $\le 4\|v\|_2^2$; together with (2),
$$\text{red} \cdot \text{brown} \le 8(1-\lambda_2)\|v\|_2^4. \qquad (4)$$
(2)+(3)+(4) and the previous inequality imply $\sqrt{8(1-\lambda_2)} \ge \frac{2h_G}{d}$.

48 The combinatorial definition and the algebraic definition are equivalent.
1. The first inequality implies that if $G$ is an $(n, d, \lambda)$-expander graph, then it is an $(n, d, \frac{1-\lambda}{2})$ edge expander.
2. The second inequality implies that if $G$ is an $(n, d, \rho)$ edge expander, then it is an $(n, d, 1 - \frac{\rho^2}{2})$-expander graph.

49 Convergence in Entropy

Rényi 2-Entropy: $H_2(p) = -2\log(\|p\|_2)$.

Fact. Let $A$ be the random walk matrix of an $(n, d, \lambda)$-expander. Then $H_2(Ap) \ge H_2(p)$. The equality holds iff $p$ is uniform.

Proof. Let $p = \mathbf{1} + w$ such that $w \perp \mathbf{1}$. Then
$$\|Ap\|_2^2 = \|\mathbf{1}\|_2^2 + \|Aw\|_2^2 \le \|\mathbf{1}\|_2^2 + \lambda^2\|w\|_2^2 \le \|\mathbf{1}\|_2^2 + \|w\|_2^2 = \|p\|_2^2.$$
The equality holds when $p = \mathbf{1}$.

50 The smaller $\lambda_G$ is, or equivalently the larger the spectral expansion is, the more an expander graph behaves like a random graph. This is what the next lemma says.

51 Expander Mixing Lemma

Lemma. Let $G = (V, E)$ be an $(n, d, \lambda)$-expander graph. Let $S, T \subseteq V$. Then
$$\Big|\, |E(S, T)| - \frac{d}{n}|S||T| \,\Big| \le \lambda d \sqrt{|S||T|}. \qquad (5)$$
Notice that (5) implies
$$\Big|\, \frac{|E(S, T)|}{dn} - \frac{|S|}{n} \cdot \frac{|T|}{n} \,\Big| \le \lambda. \qquad (6)$$
The edge density approximates the product of the vertex densities.

N. Alon and F. R. K. Chung. Explicit Construction of Linear Sized Tolerant Networks. Discrete Mathematics, 1988.

52 Proof of Expander Mixing Lemma

Let $[v_1, \ldots, v_n]$ be the eigenmatrix of $G$ with $v_1 = (\frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}})^T$. Let $1_S = \sum_i \alpha_i v_i$ and $1_T = \sum_j \beta_j v_j$ be the characteristic vectors of $S$ and $T$ respectively. Then
$$|E(S, T)| = (1_S)^T (dA)\, 1_T = \Big(\sum_i \alpha_i v_i\Big)^T (dA) \Big(\sum_j \beta_j v_j\Big) = \sum_i d\lambda_i \alpha_i \beta_i.$$
Since $\alpha_1 = (1_S)^T v_1 = \frac{|S|}{\sqrt{n}}$ and $\beta_1 = (1_T)^T v_1 = \frac{|T|}{\sqrt{n}}$,
$$\Big|\, |E(S, T)| - \frac{d}{n}|S||T| \,\Big| = \Big| \sum_{i=2}^n d\lambda_i \alpha_i \beta_i \Big| \le d\lambda \sum_{i=2}^n |\alpha_i \beta_i| \le d\lambda\, \|\alpha\|_2 \|\beta\|_2.$$
Finally observe that $\|\alpha\|_2 \|\beta\|_2 = \|1_S\|_2 \|1_T\|_2 = \sqrt{|S||T|}$.
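
Inequality (5) is easy to test empirically for a concrete regular graph, with $\lambda$ read off the spectrum (a sketch in numpy; M is the adjacency matrix of any $d$-regular undirected graph):

```python
import numpy as np

def check_mixing(M, d, trials=1000, seed=4):
    """Check |E(S,T)| - (d/n)|S||T| <= lambda*d*sqrt(|S||T|) on random S, T."""
    n = M.shape[0]
    lam = np.sort(np.abs(np.linalg.eigvalsh(M / d)))[::-1][1]
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        S = rng.random(n) < rng.random()      # random vertex subset
        T = rng.random(n) < rng.random()
        e = S @ M @ T                         # |E(S,T)| for symmetric M
        lhs = abs(e - d * S.sum() * T.sum() / n)
        assert lhs <= lam * d * np.sqrt(S.sum() * T.sum()) + 1e-9
```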

53 Error Reduction for Random Algorithm

Suppose $A(x, r)$ is a randomized algorithm with error probability $1/3$. The algorithm uses $r(n)$ random bits on input $x$ with $|x| = n$.
1. We can exponentially reduce the error probability by repeating the algorithm $t(n)$ times.
2. The resulting algorithm uses $r(n)t(n)$ random bits.

The goal is to achieve the same error reduction rate using far fewer random bits (in fact $r(n) + O(t(n))$ random bits). The key observation is that a $t$-step random walk in an expander graph looks like $t$ vertices sampled uniformly and independently. Confer inequality (6).

54 A Decomposition Lemma for Random Walk

$K_n$ is perfect from the viewpoint of random walk: no matter what distribution it starts with, a random walk reaches the uniform distribution in one step. Let $J_n$ be the $n \times n$ matrix all of whose entries are $\frac{1}{n}$, the random walk matrix of $K_n$ with self-loops.

Lemma. Suppose $G$ is an $(n, d, \lambda)$-expander and $A$ is its random walk matrix. Then $A = \gamma J_n + \lambda E$, where $\gamma = 1 - \lambda$, for some $E$ such that $\|E\|_2 \le 1$.

We may think of a random walk on an expander as a convex combination of two types of random walk:
1. a walk with probability $\gamma$ on a complete graph, and
2. a walk with probability $\lambda$ according to an error matrix that does not amplify the distance to the uniform distribution.

55 A Decomposition Lemma for Random Walk

We need to prove that $\|Ev\|_2 \le \|v\|_2$ for all $v$, where $E$ is defined by $E = \frac{1}{\lambda}(A - (1-\lambda)J_n)$. The following proof methodology should now be familiar. Decompose $v$ into $v = u + w = \alpha\mathbf{1} + w$ with $w \perp \mathbf{1}$.
1. $A\mathbf{1} = \mathbf{1}$ and $J_n\mathbf{1} = \mathbf{1}$. Consequently $Eu = u$.
2. $J_n w = 0$ and $w' \stackrel{\text{def}}{=} Aw \perp \mathbf{1}$, so $w' \perp u$. Hence $Ew = \frac{1}{\lambda}w'$ and $\|w'\|_2 \le \lambda\|w\|_2$.

So $\|Ev\|_2^2 = \|u + \frac{1}{\lambda}w'\|_2^2 = \|u\|_2^2 + \|\frac{1}{\lambda}w'\|_2^2 \le \|u\|_2^2 + \|w\|_2^2 = \|v\|_2^2$.
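
The decomposition is also easy to verify numerically: $E = (A - (1-\lambda)J_n)/\lambda$ should have spectral norm at most 1 whenever $\lambda \ge \lambda_G$ (a sketch, numpy):

```python
import numpy as np

def decompose(A, lam):
    """Return E with A = (1 - lam) * J_n + lam * E and check ||E||_2 <= 1,
    for the random walk matrix A of an (n, d, lam)-expander."""
    n = A.shape[0]
    J = np.full((n, n), 1.0 / n)
    E = (A - (1 - lam) * J) / lam
    assert np.linalg.norm(E, 2) <= 1 + 1e-9   # spectral norm of E
    return E
```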

56 Expander Random Walk Theorem

Theorem. Let $G$ be an $(n, d, \lambda)$ expander graph, and let $B \subseteq [n]$ satisfy $|B| \le \beta n$ for some $\beta \in (0, 1)$. Let $X_1$ be a random variable denoting the uniform distribution on $[n]$ and let $X_1, \ldots, X_k$ denote a $(k-1)$-step random walk from $X_1$. Then
$$\Pr\Big[\bigwedge_{i\in[k]} X_i \in B\Big] \le \big(\gamma\sqrt{\beta} + \lambda\big)^{k-1}.$$

57 Expander Random Walk Theorem

Let $B_i$ stand for the event $X_i \in B$. We need to bound the following:
$$\Pr\Big[\bigwedge_{i\in[k]} X_i \in B\Big] = \Pr[B_1] \cdot \Pr[B_2 \mid B_1] \cdots \Pr[B_k \mid B_1 \ldots B_{k-1}]. \qquad (7)$$
Seeing $B$ as a 0/1 diagonal matrix, we define the distribution $p_i$ of $X_i$ conditional on $B_1, \ldots, B_i$ by
$$p_i = \frac{BA}{\Pr[B_i \mid B_1 \ldots B_{i-1}]} \cdots \frac{BA}{\Pr[B_2 \mid B_1]} \cdot \frac{B\mathbf{1}}{\Pr[B_1]}.$$
So the probability in (7) is bounded by $\|(BA)^{k-1} B\mathbf{1}\|_1$. We will prove
$$\|(BA)^{k-1} B\mathbf{1}\|_2 \le \frac{1}{\sqrt{n}}\big((1-\lambda)\sqrt{\beta} + \lambda\big)^{k-1}.$$

58 Expander Random Walk Theorem

Using the Lemma,
$$\|BA\|_2 = \|B((1-\lambda)J_n + \lambda E)\|_2 \le (1-\lambda)\|BJ_n\|_2 + \lambda\|BE\|_2 \le (1-\lambda)\sqrt{\beta} + \lambda\|B\|_2\|E\|_2 \le (1-\lambda)\sqrt{\beta} + \lambda.$$
Therefore
$$\|(BA)^{k-1} B\mathbf{1}\|_2 \le \big((1-\lambda)\sqrt{\beta} + \lambda\big)^{k-1} \|B\mathbf{1}\|_2 \le \frac{\sqrt{\beta}}{\sqrt{n}}\big((1-\lambda)\sqrt{\beta} + \lambda\big)^{k-1} \le \frac{1}{\sqrt{n}}\big((1-\lambda)\sqrt{\beta} + \lambda\big)^{k-1}.$$
For $\|v\|_2 = 1$ write $v = \alpha\mathbf{1} + w$ with $w \perp \mathbf{1}$; then $|\alpha| \le \sqrt{n}$ and
$$\|BJ_n v\|_2 = \|\alpha B\mathbf{1}\|_2 = |\alpha| \cdot \|B\mathbf{1}\|_2 \le \sqrt{n} \cdot \frac{\sqrt{\beta}}{\sqrt{n}} = \sqrt{\beta}.$$
Therefore $\|BJ_n\|_2 = \max\{\|BJ_n v\|_2 : \|v\|_2 = 1\} \le \sqrt{\beta}$.

Since $\|x\|_1 \le \sqrt{n}\,\|x\|_2$, the probability in (7) is at most $\big((1-\lambda)\sqrt{\beta} + \lambda\big)^{k-1}$, proving the theorem.

59 Error Reduction for RP

Suppose $A(x, r)$ is a randomized algorithm with error probability $\beta$. Given input $x$ with $n = |x|$, let $k = r(n)$. Choose an explicit $(2^k, d, \lambda)$-graph $G = (V, E)$ with $V = \{0, 1\}^k$.

Algorithm $A'$:
1. Pick $v_0 \in_R V$.
2. Generate a random walk $v_0, \ldots, v_t$.
3. Output $\bigvee_{i=0}^t A(x, v_i)$.

By the Theorem, the error probability of $A'$ is at most $(\gamma\sqrt{\beta} + \lambda)^t$. Here $B$ is the set of $r$'s for which $A$ errs on $x$.
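
A sketch of the amplifier as code; the expander is abstracted as a neighbor oracle nbr(v, i) (a hypothetical interface standing in for any strongly explicit construction):

```python
import random

def amplify_rp(accepts, nbr, k, d, t, seed=None):
    """One-sided error reduction by an expander walk.
    accepts(r): run of the base RP algorithm on random string r in [2^k].
    nbr(v, i):  the i-th neighbor (i in [d]) of v in a (2^k, d, lam)-graph.
    Uses about k + t*log2(d) random bits instead of k*(t+1)."""
    rng = random.Random(seed)
    v = rng.randrange(2 ** k)        # k random bits for the start vertex
    if accepts(v):
        return True
    for _ in range(t):               # log2(d) random bits per step
        v = nbr(v, rng.randrange(d))
        if accepts(v):
            return True
    return False
```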

60 Error Reduction for BPP

Algorithm $A'$:
1. Pick $v_0 \in_R V$.
2. Generate a random walk $v_0, \ldots, v_t$.
3. Output $\mathrm{Maj}\{A(x, v_i)\}_{0 \le i \le t}$.

Let $K \subseteq \{0, \ldots, t\}$ be a set of samples for which $A$ errs, with $|K| \ge \frac{t+1}{2}$. Then
$$\Pr[\forall i \in K.\, v_i \in B] \le \big(\gamma\sqrt{\beta} + \lambda\big)^{\frac{t-1}{2}} \le \Big(\frac{1}{4}\Big)^{t-1},$$
assuming $\gamma\sqrt{\beta} + \lambda \le 1/16$. By the union bound,
$$\Pr[A' \text{ fails}] \le 2^t \Big(\frac{1}{4}\Big)^{t-1} = O(2^{-t}).$$

61 Explicit Construction of Expander Graph

62 Explicit Construction

In some applications the size of the expander graph is manageable. An expander family $\{G_n\}_{n\in\mathbb{N}}$ is mildly explicit if there is a P-time algorithm that outputs the random walk matrix of $G_n$ on input $1^n$.

In some other applications a huge expander graph must be used. An expander family $\{G_n\}_{n\in\mathbb{N}}$ is strongly explicit if there is a P-time algorithm that on input $\langle n, v, i\rangle$ outputs the index of the $i$-th neighbor of $v$.

63 We will look at several graph product operations. We then show how to use these operations to construct explicit expander graphs.

O. Reingold, S. Vadhan, and A. Wigderson. Entropy Waves, the Zig-Zag Graph Product, and New Constant-Degree Expanders and Extractors. FOCS, 2000.

64 Rotation Map

Suppose $G$ is an $n$-vertex $d$-degree graph. A rotation map $\hat{G}$ of $G$ is a function of type $[n] \times [d] \to [n] \times [d]$ satisfying the following: $\hat{G}(u, i) = (v, j)$ if vertex $v$ is the $i$-th neighbor of vertex $u$ and $u$ is the $j$-th neighbor of $v$. Clearly $\hat{G}$ is a permutation.

The rotation matrix $\hat{A}$ is defined by
$$\hat{A}_{(v,m),(u,l)} = \begin{cases} 1, & \text{if } \hat{G}(u, l) = (v, m), \\ 0, & \text{otherwise.} \end{cases}$$
Walks described in $\hat{A}$ are deterministic, meaning that no random bits are necessary.
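
A rotation map written out for a concrete toy graph, the 2-regular $n$-cycle with labels 0 (forward) and 1 (backward); the example is ours:

```python
def cycle_rotation(n):
    """Rotation map of the 2-regular n-cycle: label 0 leads to u+1,
    label 1 leads to u-1.  Returns G_hat with G_hat(u, i) = (v, j)."""
    def G_hat(u, i):
        if i == 0:
            return ((u + 1) % n, 1)   # v = u+1 sees u back along label 1
        return ((u - 1) % n, 0)
    return G_hat

# G_hat is a permutation of [n] x [2]; here it is even an involution.
G = cycle_rotation(5)
assert all(G(*G(u, i)) == (u, i) for u in range(5) for i in (0, 1))
```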

65 Path Product

Suppose $G, G'$ are $n$-vertex graphs with degree $d$ respectively $d'$. Let $A, A'$ be their random walk matrices. The path product $G'G$ is defined by the random walk matrix $A'A$. $G'G$ is $n$-vertex $dd'$-degree.

Lemma. $\lambda_{G'G} \le \lambda_{G'}\lambda_G$.

Proof. $\lambda_{G'G} = \max_{v\perp\mathbf{1}} \frac{\|A'Av\|_2}{\|v\|_2} = \max_{v\perp\mathbf{1}} \frac{\|A'Av\|_2}{\|Av\|_2} \cdot \frac{\|Av\|_2}{\|v\|_2} \le \lambda_{G'}\lambda_G$, using the fact that $Av \perp \mathbf{1}$ whenever $v \perp \mathbf{1}$.

66 Tensor Product

Suppose $G$/$G'$ is an $n$-vertex/$n'$-vertex $d$-degree/$d'$-degree graph. The random walk matrix of the tensor product $G \otimes G'$ is
$$A \otimes A' = \begin{pmatrix} a_{11}A' & a_{12}A' & \cdots & a_{1n}A' \\ a_{21}A' & a_{22}A' & \cdots & a_{2n}A' \\ \vdots & & & \vdots \\ a_{n1}A' & a_{n2}A' & \cdots & a_{nn}A' \end{pmatrix}.$$
$(u, u') \to (v, v')$ in $G \otimes G'$ iff $u \to v$ in $G$ and $u' \to v'$ in $G'$. $G \otimes G'$ is $nn'$-vertex $dd'$-degree.

67 Tensor Product

Lemma. $\lambda_{G \otimes G'} = \max\{\lambda_G, \lambda_{G'}\}$.

If $\lambda$ is an eigenvalue of $A$ and $v$ is the associated eigenvector, and $\lambda'$ is an eigenvalue of $A'$ and $v'$ is the associated eigenvector, then $(A \otimes A')(v \otimes v') = \lambda\lambda'(v \otimes v')$.

68 Zig-Zag Product

$G$ is an $n$-vertex $D$-degree graph. $H$ is a $D$-vertex $d$-degree graph. The zig-zag product $G z H$ is the $nD$-vertex $d^2$-degree graph constructed as follows:
1. The vertex set is $[n] \times [D]$.
2. For $i, j \in [d]$ and $(u, l) \in [n] \times [D]$, the $(i, j)$-th neighbor of $(u, l)$ is the vertex $(v, m) \in [n] \times [D]$ computed as follows (see the sketch below):
2.1 Let $l'$ be the $i$-th neighbor of $l$ in $H$.
2.2 Let $v$ be the $l'$-th neighbor of $u$, with $u$ the $m'$-th neighbor of $v$; i.e., $\hat{G}(u, l') = (v, m')$.
2.3 Let $m$ be the $j$-th neighbor of $m'$ in $H$.

Typically $d \ll D$. A $t$-step random walk uses $O(t \log d)$ rather than $O(t \log D)$ random bits.
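
The neighbor computation, transcribed directly from steps 2.1-2.3 in terms of the two rotation maps (a sketch; G_hat and H_hat are rotation maps as defined above):

```python
def zigzag_neighbor(G_hat, H_hat, u, l, i, j):
    """(i, j)-th neighbor of vertex (u, l) in G z H, where G_hat is the
    rotation map of the D-regular G on [n] and H_hat that of the
    d-regular H on [D]."""
    l1, _ = H_hat(l, i)       # 2.1: zig step inside the cloud of u
    v, m1 = G_hat(u, l1)      # 2.2: step along an edge of G
    m, _ = H_hat(m1, j)       # 2.3: zag step inside the cloud of v
    return (v, m)
```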

69 Zig-Zag Product

We need a picture here.

70 Zig-Zag Product

Lemma. $(I_n \otimes J_D)\hat{A}(I_n \otimes J_D) = A \otimes J_D$.

Consider the left matrix. Starting from $(u, l)$, a random neighbor $l'$ of $l$ is chosen with probability $\frac{1}{D}$; then a walk from $(u, l')$ to some $(v, m')$ with probability $1$; and finally a random neighbor $m$ of $m'$ is picked with probability $\frac{1}{D}$. According to the right matrix the random walk from $(u, l)$ to $(v, m)$ occurs with probability $\frac{1}{D} \cdot \frac{1}{D}$ per edge between $u$ and $v$.

$\hat{A}$ is the rotation matrix of $G$, and $I_n$ is the $n \times n$ identity matrix.

71 Claim. $\lambda_C \le 1$ whenever $\|C\|_2 \le 1$.

Proof. $\lambda_C = \max_{v\perp\mathbf{1}} \frac{\|Cv\|_2}{\|v\|_2} \le \max_{v\perp\mathbf{1}} \frac{\|C\|_2\|v\|_2}{\|v\|_2} \le 1$.

Claim. $\lambda_{A+B} \le \lambda_A + \lambda_B$ for symmetric stochastic matrices $A, B$.

Proof. $\lambda_{A+B} = \max_{v\perp\mathbf{1}} \frac{\|(A+B)v\|_2}{\|v\|_2} \le \max_{v\perp\mathbf{1}} \frac{\|Av\|_2 + \|Bv\|_2}{\|v\|_2} \le \lambda_A + \lambda_B$.

72 Zig-Zag Product

Lemma. $\lambda_{G z H} \le \lambda_G + 2\lambda_H$ and $\gamma_{G z H} \ge \gamma_G \gamma_H^2$.

Let $A$, $B$ and $M$ be the random walk matrices of $G$, $H$ and $G z H$. $\hat{A}$ is the $(nD) \times (nD)$ rotation matrix of $G$. $B = (1-\lambda_H)J_D + \lambda_H E$ for some $E$ with $\|E\|_2 \le 1$ by the Lemma. Now
$$M = (I_n \otimes B)\hat{A}(I_n \otimes B) = \big((1-\lambda_H)I_n \otimes J_D + \lambda_H I_n \otimes E\big)\,\hat{A}\,\big((1-\lambda_H)I_n \otimes J_D + \lambda_H I_n \otimes E\big) = (1-\lambda_H)^2 (I_n \otimes J_D)\hat{A}(I_n \otimes J_D) + \ldots = (1-\lambda_H)^2 (A \otimes J_D) + \ldots,$$
where the last equality is due to the Lemma. Using the Lemma and the Claims, one gets
$$\lambda_M \le (1-\lambda_H)^2 \lambda_{A \otimes J_D} + 1 - (1-\lambda_H)^2 \le (1-\lambda_H)^2 \max\{\lambda_G, \lambda_{J_D}\} + 2\lambda_H \le \lambda_G + 2\lambda_H.$$

73 Zig-Zag Product

The lemma is useful when both $\lambda_G$ and $\lambda_H$ are small. If not, a different upper bound can be derived. Both upper bounds are discussed in the paper by Reingold, Vadhan and Wigderson.

O. Reingold, S. Vadhan, and A. Wigderson. Entropy Waves, the Zig-Zag Graph Product, and New Constant-Degree Expanders and Extractors. FOCS, 2000.

74 Expander Construction I

The crucial point of the zig-zag construction is that we can use a constant-size graph to build a constant-degree graph family.

Let $H$ be a $(D^4, D, 1/8)$-graph constructed by brute force. Define
$$G_1 = H^2, \qquad G_{k+1} = G_k^2 \, z \, H.$$
Fact. $G_k$ is a $(D^{4k}, D^2, 1/2)$-graph.

Proof. The base case is clear from the Lemma, and the induction step is taken care of by the previous lemma.
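
The parameter bookkeeping behind the Fact can be traced mechanically, using $\lambda_{G^2} \le \lambda_G^2$ (path product) and $\lambda_{G z H} \le \lambda_G + 2\lambda_H$ (a sketch; the function is ours):

```python
def construction_one_params(D, levels):
    """Trace (vertices, degree, lambda bound) of G_k for
    G_1 = H^2 and G_{k+1} = G_k^2 z H, with H a (D^4, D, 1/8)-graph."""
    lam_H = 1 / 8
    n, deg, lam = D ** 4, D ** 2, lam_H ** 2     # G_1 = H^2
    params = [(n, deg, lam)]
    for _ in range(levels - 1):
        lam = min(lam ** 2 + 2 * lam_H, 1.0)     # squaring, then zig-zag
        n, deg = n * D ** 4, D ** 2              # vertices multiply by D^4
        params.append((n, deg, lam))
    return params

# Every level satisfies the claimed bound lambda <= 1/2.
for n, deg, lam in construction_one_params(D=6, levels=6):
    assert lam <= 0.5
```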

75 Expander Construction I

The time to get access to a neighbor of a vertex is given by the following recursive equation:
$$\mathrm{time}(G_{k+1}) = 2 \cdot \mathrm{time}(G_k) + \mathrm{poly}(|\text{vertex description}|) = \mathrm{poly}(|G_{k+1}|).$$
The expander family is mildly explicit but not strongly explicit.

76 Expander Construction I

Both the size of the graph and the time to compute a neighbor grow exponentially in $k$. This suggests using the tensor product to let the size of the graph grow doubly exponentially. We will explain the idea using a variant of the zig-zag product.

77 Replacement Product

$G$ is an $n$-vertex $D$-degree graph. $H$ is a $D$-vertex $d$-degree graph. The replacement product $G R H$ is the $nD$-vertex $2d$-degree graph constructed as follows:
1. Every vertex $w$ of $G$ is replaced by a copy $H_w$ of $H$.
2. If $\hat{G}(u, l) = (v, m)$, place $d$ parallel edges between the $l$-th vertex of $H_u$ and the $m$-th vertex of $H_v$.

The replacement product is well known in graph theory. It is often used to reduce vertex degree without losing connectivity.

78 Replacement Product

We need a picture here.

79 Replacement Product

The rotation map on $[n] \times [D] \times [d] \times \{0, 1\}$ is defined by
$$\widehat{G R H}(u, m, i, b) = \begin{cases} (u, \hat{H}(m, i), b), & \text{if } b = 0, \\ (\hat{G}(u, m), i, b), & \text{if } b = 1. \end{cases}$$
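
The same case split as code (a sketch; G_hat and H_hat are the rotation maps of G and H):

```python
def replacement_rotation(G_hat, H_hat):
    """Rotation map of G R H on [n] x [D] x ([d] x {0, 1})."""
    def GRH_hat(u, m, i, b):
        if b == 0:                    # intra-cloud edge: move inside H_u
            m2, i2 = H_hat(m, i)
            return (u, m2, i2, 0)
        v, m2 = G_hat(u, m)           # inter-cloud edge: follow G
        return (v, m2, i, 1)
    return GRH_hat
```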

80 Replacement Product

Lemma. $\lambda_{G R H} \le 1 - \frac{(1-\lambda_G)(1-\lambda_H)^2}{24}$, i.e. $\gamma_{G R H} \ge \frac{1}{24}\gamma_G\gamma_H^2$.

Let $A$, $B$ be the random walk matrices of $G$, $H$ respectively, and $\hat{A}$ the $(nD) \times (nD)$ permutation matrix corresponding to $\hat{G}$. By the Lemma, $B = \lambda_H E + \gamma_H J_D$ for some $E$ with $\|E\|_2 \le 1$. The $(nD) \times (nD)$ random walk matrix of $G R H$ is
$$A R B = \frac{1}{2}\hat{A} + \frac{1}{2}(I_n \otimes B).$$
It suffices to prove $\lambda_{A R B}^3 \le 1 - \frac{\gamma_G\gamma_H^2}{8}$, which is $\lambda_{(A R B)^3} \le 1 - \frac{\gamma_G\gamma_H^2}{8}$.

81 Replacement Product

We have
$$(A R B)^3 = \Big(\frac{1}{2}\hat{A} + \frac{1}{2}(I_n \otimes B)\Big)^3 = \Big(\frac{1}{2}\hat{A} + \frac{1}{2}\big(I_n \otimes (\lambda_H E + \gamma_H J_D)\big)\Big)^3 = \Big(\frac{1}{2}\hat{A} + \frac{\lambda_H}{2}(I_n \otimes E) + \frac{\gamma_H}{2}(I_n \otimes J_D)\Big)^3 = \frac{\gamma_H^2}{8}(I_n \otimes J_D)\hat{A}(I_n \otimes J_D) + \Big(1 - \frac{\gamma_H^2}{8}\Big)C = \frac{\gamma_H^2}{8}(A \otimes J_D) + \Big(1 - \frac{\gamma_H^2}{8}\Big)C,$$
where $C$ collects the remaining terms of the expansion and satisfies $\|C\|_2 \le 1$, and the last equality is due to the Lemma. Applying the two Claims, we get
$$\lambda_{(A R B)^3} \le 1 - \frac{\gamma_H^2}{8} + \frac{\gamma_H^2}{8}\lambda_{A \otimes J_D} \le 1 - \frac{\gamma_H^2}{8} + \frac{\gamma_H^2}{8}\lambda_G = 1 - \frac{\gamma_H^2}{8}\gamma_G.$$

82 Expander Construction II

Theorem. There exists a strongly explicit $(d, \lambda)$-expander family for some constants $d$ and $\lambda < 1$.

As a first step we prove that we can efficiently construct a family $\{G_k\}_k$ of graphs where each $G_k$ has $(2d)^{100k}$ vertices.
1. Let $H$ be a $((2d)^{100}, d, 0.01)$-expander graph, and let $G_1$ and $G_2$ be $((2d)^{100}, 2d, 0.5)$- resp. $((2d)^{100\cdot 2}, 2d, 0.5)$-expander graphs.
2. For $k > 2$ define $G_k = \big(G_{\lceil\frac{k-1}{2}\rceil} \otimes G_{\lfloor\frac{k-1}{2}\rfloor}\big)^{50} R\, H$.

The tensor product increases the graph size. The path product improves the spectral expansion. The replacement product reduces the degree.

83 Expander Construction II

We show that $G_k$ is a $((2d)^{100k}, 2d, 0.98)$-expander graph.
1. Let $n_k$ be the number of vertices of $G_k$. Then
$$n_k = n_{\lceil\frac{k-1}{2}\rceil} \cdot n_{\lfloor\frac{k-1}{2}\rfloor} \cdot (2d)^{100} = (2d)^{100\lceil\frac{k-1}{2}\rceil} (2d)^{100\lfloor\frac{k-1}{2}\rfloor} (2d)^{100} = (2d)^{100k}.$$
2. $G_{\lceil\frac{k-1}{2}\rceil}$ and $G_{\lfloor\frac{k-1}{2}\rfloor}$ have degree $2d$; $G_{\lceil\frac{k-1}{2}\rceil} \otimes G_{\lfloor\frac{k-1}{2}\rfloor}$ has degree $(2d)^2$; $\big(G_{\lceil\frac{k-1}{2}\rceil} \otimes G_{\lfloor\frac{k-1}{2}\rfloor}\big)^{50}$ has degree $(2d)^{100}$; $G_k$ has degree $2d$.
3. $\lambda_{G_{\lceil\frac{k-1}{2}\rceil}}, \lambda_{G_{\lfloor\frac{k-1}{2}\rfloor}} \le 0.98$ implies $\lambda_{G_{\lceil\frac{k-1}{2}\rceil} \otimes G_{\lfloor\frac{k-1}{2}\rfloor}} \le 0.98$, hence $\lambda_{(\cdots)^{50}} \le 0.98^{50} \le 0.5$, and finally $\lambda_{G_k} \le 1 - \frac{0.5 \cdot (0.99)^2}{24} < 0.98$.

84 Expander Construction II

There is a $\mathrm{poly}(k)$-time algorithm that upon receiving a label $i$ of a vertex in $G_k$ and an index $j \in [2d]$ finds the $j$-th neighbor of $i$.
1. A search for a neighbor of the input vertex of $G_k$ recursively calls on $G_{\lceil\frac{k-1}{2}\rceil}$ 50 times and on $G_{\lfloor\frac{k-1}{2}\rfloor}$ 50 times.
2. The depth of the recursive calls is bounded by $t = O(\log k)$.
3. The overall time is bounded by $O(100^{O(\log k)}) = \mathrm{poly}(k)$.

85 Expander Construction II

Suppose $(2d)^{100k} < i < (2d)^{100(k+1)}$. Let $(2d)^{100(k+1)} = xi + r$ with $0 \le r < i$. Divide the $(2d)^{100(k+1)}$ vertices of $G_{k+1}$ into $i$ classes, among which $r$ classes are of size $x+1$ and $i - r$ classes are of size $x$. Contract every class into a mega-vertex. Add $2d$ self-loops to each of the $i - r$ mega-vertices of size $x$. This is an $(i, 2d(x+1), \rho)$-edge expander with $\rho \ge \frac{0.01}{x+1}$.

We thus get a $\big((2d)^{101}, 0.01/(2d)^{100}\big)$ edge expander family, one graph for every number of vertices.

86 Reingold's Theorem

87 Theorem. UPATH ∈ L.

O. Reingold. Undirected ST-Connectivity in Log-Space. STOC, 2005.

88 The Idea

The connectivity algorithm for $d$-degree expander graphs is easy: the diameter of an expander graph is $O(\log n)$, so an exhaustive search can be carried out in logspace.

Reingold's idea is to transform a graph $G$ in logspace to a graph $G'$ so that a connected component in $G$ becomes an expander in $G'$ and unconnected vertices in $G$ remain unconnected in $G'$.

89 The Algorithm

1. Fix a $(d^{50}, d/2, 0.01)$-expander graph $H$ for a suitable constant $d$.
2. Convert the input graph $G$ to a $d^{50}$-degree graph on the fly.
2.1 Add self-loops to increase degree.
2.2 Replace a large degree vertex by a cycle to decrease degree.
3. $G_0 = G$; $G_k = (G_{k-1}\, R\, H)^{50}$.
4. Apply the connectivity algorithm to the expander $G_{10\log n}$.

If $G_{k-1}$ is an $N$-vertex $d^{50}$-degree graph, then $G_{k-1}\, R\, H$ is a $d^{50}N$-vertex $d$-degree graph, and $(G_{k-1}\, R\, H)^{50}$ is a $d^{50}N$-vertex $d^{50}$-degree graph. So $G_{10\log n}$ contains $(d^{50})^{10\log n} \cdot n = n^{1001}$ vertices.

90 The Complexity

A counter of length $2\log(n^{1001})$ indicates the current position in the path. We cannot record all the vertices in a path; that would take $\Omega(\log^2 n)$ bits.

Each vertex, except $s$ and $t$, is coded up by $50\log d$ bits, say $x$, declaring that it is the $x$-th neighbor of the previous vertex. The neighbors of a vertex can be ordered in a canonical way.

The algorithm keeps the current vertex. When backtracking it starts from $s$ all over again to get the previous vertex. The mechanism allows us to do Step 2 of the algorithm on the fly.

Even though Expander Construction I is only mildly explicit, computing a neighbor from a given vertex can be done in logspace.


More information

The Singular Value Decomposition and Least Squares Problems

The Singular Value Decomposition and Least Squares Problems The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving

More information

Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra) 2002 Further Mathematical Methods (Linear Algebra) 00 Solutions For Problem Sheet 0 In this Problem Sheet we calculated some left and right inverses and verified the theorems about them given in the lectures.

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

I. Multiple Choice Questions (Answer any eight)

I. Multiple Choice Questions (Answer any eight) Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

Linear Algebra using Dirac Notation: Pt. 2

Linear Algebra using Dirac Notation: Pt. 2 Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Review of similarity transformation and Singular Value Decomposition

Review of similarity transformation and Singular Value Decomposition Review of similarity transformation and Singular Value Decomposition Nasser M Abbasi Applied Mathematics Department, California State University, Fullerton July 8 7 page compiled on June 9, 5 at 9:5pm

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Problems of Eigenvalues/Eigenvectors

Problems of Eigenvalues/Eigenvectors 67 Problems of Eigenvalues/Eigenvectors Reveiw of Eigenvalues and Eigenvectors Gerschgorin s Disk Theorem Power and Inverse Power Methods Jacobi Transform for Symmetric Matrices Spectrum Decomposition

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Lecture 8: Linear Algebra Background

Lecture 8: Linear Algebra Background CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected

More information

Expander Construction in VNC 1

Expander Construction in VNC 1 Expander Construction in VNC 1 Sam Buss joint work with Valentine Kabanets, Antonina Kolokolova & Michal Koucký Prague Workshop on Bounded Arithmetic November 2-3, 2017 Talk outline I. Combinatorial construction

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

orthogonal relations between vectors and subspaces Then we study some applications in vector spaces and linear systems, including Orthonormal Basis,

orthogonal relations between vectors and subspaces Then we study some applications in vector spaces and linear systems, including Orthonormal Basis, 5 Orthogonality Goals: We use scalar products to find the length of a vector, the angle between 2 vectors, projections, orthogonal relations between vectors and subspaces Then we study some applications

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information