Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method
1 Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method. Yunshen Zhou. Advisor: Prof. Zhaojun Bai. University of California, Davis. June 15, 2017. 1 / 44
2 Overview: 1 Introduction. 2 Theory: Problem Decomposition; The Lagrange Equations; The Equivalent QEP. 3 Algorithm: Lanczos Algorithm; Projected QEP Method; Compute the Solution of CRQopt. 4 Convergence Analysis. 5 Application.
4 The Constrained Rayleigh Quotient Optimization Problem. Consider the problem (CRQopt): $\min g(v) = v^T A v$, subject to $v^T v = 1$, $C^T v = t$, where $A \in \mathbb{R}^{n \times n}$, $A = A^T$, $C \in \mathbb{R}^{n \times m}$ has full column rank, $t \in \mathbb{R}^m$, and $m \ll n$. We are interested in the case where $A$ is large and sparse and $t \neq 0$.
5 Applications. Ridge regression. Trust-region subproblems. Constrained least squares problems. Transductive learning. Graph cut.
6 Previous Work and Comments. Secular equations [Gander, Golub and von Matt, 1989]: only works for small matrices. Inner-outer iteration of the Lanczos method [Golub, Zhang and Zha, 2000]: only works for the case $t = 0$. Projected power method [Xu, Li and Schuurmans, 2009]: the convergence is slow and the shifting parameter is hard to determine. Parametric eigenvalue problem [Eriksson, Olsson and Kahl, 2011]: the eigenvalue computation in each iteration of the line search results in a huge amount of calculation.
8 Vector Decomposition. Let $v = n_0 + Pv$, where $n_0 = (C^T)^{\dagger} t$ is the minimum-norm solution of $C^T v = t$, $u = Pv$, and $P = I - CC^{\dagger}$ is the orthogonal projector onto $\mathcal{N}(C^T)$. We are interested in the case where $\|n_0\| < 1$, so that there are multiple feasible vectors.
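The decomposition is easy to check numerically. A minimal NumPy sketch (the sizes, `C`, and `t` are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
C = rng.standard_normal((n, m))      # full column rank with probability 1
t = 0.1 * rng.standard_normal(m)

# Minimum-norm solution of C^T v = t: n0 = (C^T)^+ t (lstsq returns it directly).
n0, *_ = np.linalg.lstsq(C.T, t, rcond=None)

# Orthogonal projector onto N(C^T): P = I - C C^+.
P = np.eye(n) - C @ np.linalg.pinv(C)

assert np.allclose(C.T @ n0, t)      # n0 satisfies the linear constraint
assert np.allclose(P @ n0, 0)        # n0 lies in R(C), so its projection vanishes
v = rng.standard_normal(n)
assert np.allclose(C.T @ (P @ v), 0) # u = P v always lies in N(C^T)
```

Any feasible $v$ then splits as $v = n_0 + Pv$, with the two parts orthogonal.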
9 Problem Transformation. By the vector decomposition, $u$ is the solution of the following CQ optimization problem (CQopt): $\min f(u) = u^T PAP u + 2u^T b_0$, subject to $\|u\| = \gamma$, $u \in \mathcal{N}(C^T)$, where $b_0 = PAn_0$ and $\gamma = \sqrt{1 - \|n_0\|^2}$.
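A quick sanity check of the transformation, assuming NumPy and small random data: for any feasible unit vector $v = n_0 + u$, the two objectives differ only by the constant $n_0^T A n_0$, and $\|u\| = \gamma$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, m))
t = 0.05 * rng.standard_normal(m)            # small t, so that ||n0|| < 1

n0, *_ = np.linalg.lstsq(C.T, t, rcond=None) # minimum-norm solution
P = np.eye(n) - C @ np.linalg.pinv(C)
b0 = P @ A @ n0
gamma = np.sqrt(1 - n0 @ n0)

# Build an arbitrary feasible unit vector v = n0 + u, u in N(C^T), ||u|| = gamma.
u = P @ rng.standard_normal(n)
u *= gamma / np.linalg.norm(u)
v = n0 + u

assert np.allclose(v @ v, 1) and np.allclose(C.T @ v, t)
g = v @ A @ v                                 # CRQopt objective
f = u @ (P @ A @ P) @ u + 2 * (u @ b0)        # CQopt objective
assert np.allclose(g, f + n0 @ A @ n0)        # they differ by a constant only
```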
10 The Lagrange Equations. By Lagrange multiplier theory, the Lagrange equations for CQopt are $(PAP - \lambda I)u = -b_0$, $\|u\| = \gamma$, $u \in \mathcal{N}(C^T)$.
11 The Lagrange Optimization. Since $\lambda_1 < \lambda_2$ implies $f(u_1) < f(u_2)$, solving CQopt is equivalent to solving the following Lagrange optimization problem (LGopt): $(PAP - \lambda I)u = -b_0$, $\|u\| = \gamma$, $u \in \mathcal{N}(C^T)$, $\lambda = \min$. When $\lambda = \min$, the second-order necessary condition of the Lagrange equations is satisfied. For standard trust-region subproblems, $PAP - \lambda I$ is positive semidefinite.
12 The Equivalent QEP. Returning to the Lagrange equations and assuming $\lambda \notin \lambda(PAP)$, define $z = (PAP - \lambda I)^{-2} b_0$. Then $(PAP - \lambda I)^2 z = b_0$ and $\gamma^2 = b_0^T z$, so $(PAP - \lambda I)^2 z = \frac{1}{\gamma^2} b_0 b_0^T z$, with $z \in \mathcal{N}(C^T)$.
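The substitution can be replayed numerically. A sketch, assuming NumPy, with an arbitrary shift $\lambda$ placed well outside $\lambda(PAP)$ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, m))
t = 0.05 * rng.standard_normal(m)
n0, *_ = np.linalg.lstsq(C.T, t, rcond=None)
P = np.eye(n) - C @ np.linalg.pinv(C)
M = P @ A @ P
b0 = P @ A @ n0

lam = -100.0                                  # any value outside lambda(PAP)
S = (M - lam * np.eye(n)) @ (M - lam * np.eye(n))
z = np.linalg.solve(S, b0)                    # z = (PAP - lam I)^{-2} b0
gamma2 = b0 @ z                               # gamma^2 = b0^T z

# The linear system (PAP - lam I)^2 z = b0 is exactly the QEP form above.
assert np.allclose(S @ z, np.outer(b0, b0) @ z / gamma2)
assert np.allclose(C.T @ z, 0)                # z stays in N(C^T)
```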
13 Equivalence of LGopt and QEPmin. LGopt: $(PAP - \lambda I)u = -b_0$, $\|u\| = \gamma$, $u \in \mathcal{N}(C^T)$, $\lambda = \min$. QEPmin: $(PAP - \lambda I)^2 z = \frac{1}{\gamma^2} b_0 b_0^T z$, $z \in \mathcal{N}(C^T)$, $\lambda = \min$, real. Theorem 1. If $(\lambda, z)$ is the solution of QEPmin, there exists $u$ such that $(\lambda, u)$ is the solution of LGopt. Conversely, if $(\lambda, u)$ is the solution of LGopt, there exists $z$ such that $(\lambda, z)$ is the solution of QEPmin. Therefore LGopt is equivalent to QEPmin.
14 Distribution of the Eigenvalues. Proposition 1. The smallest real eigenvalue of the QEP $(PAP - \lambda I)^2 z = \frac{1}{\gamma^2} b_0 b_0^T z$, $z \in \mathcal{N}(C^T)$, is the leftmost eigenvalue.
15 Distribution of the Eigenvalues. [Figure 1: Plot of the eigenvalues of the QEP in the complex plane. Problem size: $n = 100$, $m = 10$.]
17 Lanczos Algorithm. Given a real symmetric matrix $M$, the Lanczos process generates an orthogonal matrix $Q_k$ whose column space is the Krylov subspace $\mathcal{K}_k(M, r_0) = \mathrm{span}(r_0, Mr_0, \ldots, M^{k-1} r_0)$. The first $k$ steps of the Lanczos algorithm take the matrix form $MQ_k = Q_k T_k + \beta_{k+1} q_{k+1} e_k^T$, where $T_k$ is a symmetric tridiagonal matrix with diagonal entries $\alpha_1, \ldots, \alpha_k$ and subdiagonal entries $\beta_2, \ldots, \beta_k$.
18 Lanczos Algorithm.
Require: $M$, $r_0$. Ensure: $\alpha$, $\beta$, $Q = [q_1, q_2, \ldots, q_k]$, $k$.
1: $\beta_1 \leftarrow \|r_0\|$
2: if $\beta_1 = 0$ then stop
3: $q_1 \leftarrow r_0/\beta_1$, $q_0 \leftarrow 0$
4: for $k = 1, 2, \ldots$ do
5: &nbsp;&nbsp;$p \leftarrow M q_k$
6: &nbsp;&nbsp;$r \leftarrow p - \beta_k q_{k-1}$
7: &nbsp;&nbsp;$\alpha_k \leftarrow q_k^T p$
8: &nbsp;&nbsp;$r \leftarrow r - \alpha_k q_k$
9: &nbsp;&nbsp;$\beta_{k+1} \leftarrow \|r\|$
10: &nbsp;&nbsp;if $\beta_{k+1} = 0$ then stop
11: &nbsp;&nbsp;$q_{k+1} \leftarrow r/\beta_{k+1}$
12: end for
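A direct NumPy transcription of the algorithm above (no reorthogonalization, fixed number of steps; breakdown checks omitted for brevity):

```python
import numpy as np

def lanczos(M, r0, k):
    """Run k Lanczos steps; return alpha, beta (beta[0] = beta_1) and Q."""
    n = r0.size
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(r0)
    Q[:, 0] = r0 / beta[0]
    for j in range(k):
        p = M @ Q[:, j]
        r = p - beta[j] * Q[:, j - 1] if j > 0 else p.copy()
        alpha[j] = Q[:, j] @ p
        r -= alpha[j] * Q[:, j]
        beta[j + 1] = np.linalg.norm(r)
        Q[:, j + 1] = r / beta[j + 1]
    return alpha, beta, Q

rng = np.random.default_rng(3)
n, k = 30, 8
A = rng.standard_normal((n, n)); M = (A + A.T) / 2
alpha, beta, Q = lanczos(M, rng.standard_normal(n), k)

Qk = Q[:, :k]
Tk = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
ek = np.eye(k)[:, k - 1]
# Matrix form of the recurrence: M Q_k = Q_k T_k + beta_{k+1} q_{k+1} e_k^T.
assert np.allclose(M @ Qk, Qk @ Tk + beta[k] * np.outer(Q[:, k], ek))
assert np.allclose(Qk.T @ Qk, np.eye(k), atol=1e-8)
```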
19 Matrix-Vector Multiplication $PAPx$. Apply Lanczos to $M = PAP$. If $r_0 \in \mathcal{N}(C^T)$, then $Mq_k = PAPq_k = PAq_k$. The product $p = PAq_k$ can be computed by $PAq_k = (I - CC^{\dagger})Aq_k = Aq_k - CC^{\dagger} Aq_k = Aq_k - Cy$, where $y = \arg\min_{y \in \mathbb{R}^m} \|Cy - Aq_k\|$.
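In code, $P$ never has to be formed; one least-squares solve per step suffices. A dense NumPy sketch ($P$ is built here only to check the result; at scale the solve would be done by a sparse method such as LSQR):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 12, 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, m))
P = np.eye(n) - C @ np.linalg.pinv(C)          # only for checking

q = P @ rng.standard_normal(n)                 # a Lanczos vector in N(C^T)
q /= np.linalg.norm(q)

y, *_ = np.linalg.lstsq(C, A @ q, rcond=None)  # y = argmin ||C y - A q||
p_proj = A @ q - C @ y                         # equals P A q
assert np.allclose(p_proj, P @ (A @ q))
```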
20 Approximate QEPmin by Projection. Let $b_0 \in \mathcal{N}(C^T)$ be the starting vector of the Lanczos iteration. The projection of QEPmin is $(T_k - \lambda I)^2 y = \left( \frac{\|b_0\|^2}{\gamma^2} e_1 e_1^T - \beta_{k+1}^2 e_k e_k^T \right) y$, $y \in \mathbb{R}^k$, $\lambda = \min$, real.
21 Existence of Real Eigenvalues. The projected QEP $(T_k - \lambda I)^2 y = \left( \frac{\|b_0\|^2}{\gamma^2} e_1 e_1^T - \beta_{k+1}^2 e_k e_k^T \right) y \quad (1)$ may not have any real eigenvalues. Example: [Figure 2: Plot of the eigenvalues of QEP (1). Problem: $A = \mathrm{diag}(1, 2, 3, 4, 5)$, $C = [4, 8, 3, 1, 1]^T$, $t = 3$, $k = 3$.]
22 Existence of Real Eigenvalues. To make sure that the reduced QEP has at least one real eigenvalue, we can instead solve the following QEP (rQEPmin): $(T_k - \lambda I)^2 w = \frac{\|b_0\|^2}{\gamma^2} e_1 e_1^T w$, $w \in \mathbb{R}^k$, $\lambda = \min$, real. Let $(\mu_1^{(k)}, w)$ be the eigenpair of rQEPmin; then $(\mu_1^{(k)}, Q_k w)$ is the approximation of the eigenpair of QEPmin. Note that $Q_k w \in \mathcal{N}(C^T)$.
23 Approximation of the Eigenvalues. [Figure 3: Plot of the eigenvalues of the original QEP and the projected QEP. Problem size: $n = 100$, $m = 10$, $k = 25$.]
24 Solving rQEPmin by Linearization. Let $y = (T_k - \lambda I)w$; then rQEPmin can be written as $T_k y - \frac{\|b_0\|^2}{\gamma^2} e_1 e_1^T w = \lambda y$, $T_k w - y = \lambda w$. Let $s = \begin{bmatrix} y \\ w \end{bmatrix}$; then $\begin{bmatrix} T_k & -\frac{\|b_0\|^2}{\gamma^2} e_1 e_1^T \\ -I & T_k \end{bmatrix} s = \lambda s$.
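A sketch of the linearization, with a random symmetric tridiagonal $T_k$ and a positive constant `c` standing in for $\|b_0\|^2/\gamma^2$ (both are illustrative values, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 6
alpha = rng.standard_normal(k)
beta = np.abs(rng.standard_normal(k - 1)) + 0.1
Tk = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
c = 0.7                                       # plays the role of ||b0||^2 / gamma^2
e1 = np.eye(k)[:, 0]

# Linearized pencil: [T_k, -c e1 e1^T; -I, T_k] s = lambda s with s = [y; w].
L = np.block([[Tk, -c * np.outer(e1, e1)],
              [-np.eye(k), Tk]])
vals, vecs = np.linalg.eig(L)

# Smallest real eigenvalue; its w-part solves the quadratic problem itself.
real = np.isclose(vals.imag, 0)
i = int(np.argmin(np.where(real, vals.real, np.inf)))
lam = vals[i].real
w = vecs[k:, i].real
res = (Tk - lam * np.eye(k)) @ ((Tk - lam * np.eye(k)) @ w) - c * e1 * (e1 @ w)
assert np.linalg.norm(res) <= 1e-7
```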
25 Stopping Condition. Let $\mu_1^{(k)}$ be the smallest real eigenvalue of rQEPmin and $w$ the corresponding eigenvector. Let $\hat{w}^{(k)} = Q_k w$ and define the residual of the QEP as $r_k = \left(PAP - \mu_1^{(k)} I\right)^2 \hat{w}^{(k)} - \frac{1}{\gamma^2} b_0 b_0^T \hat{w}^{(k)}$. Stopping condition: $\|r_k\| < \varepsilon$. Observation 1: $r_k = \left[ -2\mu_1^{(k)} \beta_{k+1} q_{k+1} + \beta_{k+1}^2 q_k + \alpha_k \beta_{k+1} q_{k+1} + \beta_{k+1}(\alpha_{k+1} q_{k+1} + \beta_{k+2} q_{k+2}) \right] w_k + \beta_k \beta_{k+1} q_{k+1} w_{k-1}$.
26 Stopping Condition. Observation 2: $\|r_k\| \sim |w_k|$. [Figure 4: Plot of $\|r_k\|$ and $|w_k|$ versus $k$. Problem size: $n = 100$, $m = 10$.]
27 Compute the Solution of CRQopt. Let the approximate solution of CRQopt be $v^{(k)} = n_0 - \frac{\gamma^2}{\|b_0\|\, w_1} Q_k y$, where $w_1$ is the first component of $w$ and $y = (T_k - \mu_1^{(k)} I)w$. Then $v^{(k)}$ is the best approximation of the solution of CRQopt in $n_0 + \mathcal{K}_k(PAP, b_0)$, namely $v^{(k)} = \arg\min_{s \in n_0 + \mathcal{K}_k(PAP, b_0),\ \|s\| = 1} s^T A s$.
28 Numerical Results: Running Time. [Figure 5: Running time versus $n$ for the explicit method, the projected power method, and the projected QEP method.]
30 Convergence Analysis. Let $\hat{\theta}$ be the eigenvalue of QEPmin and suppose $\hat{\theta} \notin D$, where $D = \{\lambda : PAPy = \lambda y,\ y \in \mathcal{N}(C^T)\}$. Then: (1) the sequence $\{g(v^{(k)})\}$ is nonincreasing; (2) if $k_{\max}$ is the smallest $k$ such that $\beta_{k_{\max}+1} = 0$, then $g(v^{(k_{\max})}) = g(v)$; and (3) with $\underline{\theta} = \min(D)$ and $\overline{\theta} = \max(D)$, $0 \le g(v^{(k)}) - g(v) \le 4\,\|PAP - \hat{\theta} I\| \left( \frac{\gamma}{p_{k+1}(\eta)} \right)^2$, where $\eta = \frac{\overline{\theta} - \hat{\theta}}{\overline{\theta} - \underline{\theta}}$ and $p_k(x)$ is the $k$-th Chebyshev polynomial of the first kind.
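The bound goes to zero because $p_{k+1}(\eta)$ grows exponentially in $k$ once $\eta > 1$. A quick check of the three-term recurrence against the hyperbolic closed form (the value $\eta = 1.5$ is arbitrary):

```python
import math

def cheb(k, x):
    """Chebyshev polynomial of the first kind p_k(x) by the recurrence
    p_0 = 1, p_1 = x, p_{j+1}(x) = 2 x p_j(x) - p_{j-1}(x)."""
    p_prev, p = 1.0, x
    if k == 0:
        return p_prev
    for _ in range(k - 1):
        p_prev, p = p, 2.0 * x * p - p_prev
    return p

eta = 1.5
for k in range(8):
    # For x > 1, p_k(x) = cosh(k * arccosh(x)): exponential growth in k.
    assert math.isclose(cheb(k, eta), math.cosh(k * math.acosh(eta)))
```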
31 Chebyshev Polynomials. [Figure 6: Chebyshev polynomials $p_1(x), \ldots, p_5(x)$ of different degrees.]
32 Numerical Results: Rate of Convergence. [Figure 7: The error of the objective function versus $k$ for three different values of $\eta$. Problem size: $n = 500$, $m = 50$.]
33 Numerical Results: Tightness of the Bound. [Figure 8: The actual error and the error bound of the objective function versus $k$, for two different values of $\eta$. Problem size: $n = 500$, $m = 50$.]
35 Application: Constrained Normalized Cut. Let $i_1, \ldots, i_k$ be the indexes of the training data. The CRQ form of the constrained normalized cut can be written as $\min z^T L z$, subject to $z^T z = 1$, $C^T z = t$, where $C = [D^{\frac{1}{2}} \mathbf{1}, e_{i_1}, \ldots, e_{i_k}]$ and $t = [0, \pm\frac{1}{\sqrt{n}}, \ldots, \pm\frac{1}{\sqrt{n}}]^T$.
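A small sketch of assembling $C$ and $t$ for the constrained cut (NumPy; the graph, the labeled indexes, and their signs are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0)                    # a small weighted graph
d = W.sum(axis=1)

labeled = [1, 4]                          # indexes i_1, ..., i_k of training data
signs = [+1, -1]                          # prescribed side of the cut for each

# C = [D^{1/2} 1, e_{i_1}, ..., e_{i_k}],  t = [0, +-1/sqrt(n), ...]^T.
C = np.column_stack([np.sqrt(d)] + [np.eye(n)[:, i] for i in labeled])
t = np.array([0.0] + [s / np.sqrt(n) for s in signs])

# Any z with C^T z = t is orthogonal to D^{1/2} 1 and fixed on training pixels.
z, *_ = np.linalg.lstsq(C.T, t, rcond=None)
assert np.allclose(np.sqrt(d) @ z, 0)
assert np.allclose(z[labeled], t[1:])
```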
36 Example of Application: Image Cut. [Figure 9: Numerical results of (a) normalized cut and (b) constrained normalized cut. Problem size: $n = 41616$, $m = 20$.]
37 Example of Application: Image Cut. [Figure 10: Numerical results of (a) normalized cut, (b) constrained normalized cut 1, and (c) constrained normalized cut 2. Problem size for constrained normalized cut 1: $n = 5184$, $m = 18$; for constrained normalized cut 2: $n = 5184$, $m = 24$.]
38 Future Work. A tighter bound for the error of the objective function. Computing $v^{(k)}$ when $\hat{\theta} \in D$. Convergence analysis when the matrix-vector multiplication has error.
39 References.
[1] Anders Eriksson, Carl Olsson, and Fredrik Kahl. Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints. Journal of Mathematical Imaging and Vision, 39(1):45-61, 2011.
[2] Walter Gander, Gene H. Golub, and Urs von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114-115, 1989.
[3] Gene H. Golub, Zhenyue Zhang, and Hongyuan Zha. Large sparse symmetric eigenvalue problems with homogeneous linear constraints: the Lanczos process with inner-outer iterations. Linear Algebra and its Applications, 309(1), 2000.
[4] Linli Xu, Wenye Li, and Dale Schuurmans. Fast normalized cut with linear constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009.
[5] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 2000.
[6] Chungen Shen, Lei-Hong Zhang, and Ren-Cang Li. On the generalized Lanczos trust-region method. Technical report.
40 Thank you!
41 LSQR. Consider the least squares problem $\min_{y \in \mathbb{R}^m} \|Cy - x\|_2$. Let $B = \begin{bmatrix} 0 & C \\ C^T & 0 \end{bmatrix}$. We apply the Lanczos algorithm to the matrix $B$ with starting vector $\begin{bmatrix} u_1 \\ 0 \end{bmatrix}$; the Lanczos vectors then alternate between the forms $\begin{bmatrix} u_j \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ v_j \end{bmatrix}$, collecting the vectors $u_1, \ldots, u_k$ and $v_1, \ldots, v_k$ of the bidiagonalization.
42 LSQR. Then $U_{k+1}(\beta_1 e_1) = b$, $A V_k = U_{k+1} B_k$, and $A^T U_{k+1} = V_k B_k^T + \alpha_{k+1} v_{k+1} e_{k+1}^T$, where $B_k = \begin{bmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \beta_3 & \ddots & \\ & & \ddots & \alpha_k \\ & & & \beta_{k+1} \end{bmatrix}$ is lower bidiagonal. Then $x_k = V_k y_k$ is the approximation of the solution of the least squares problem, where $y_k$ is the solution of $\min_{y_k \in \mathbb{R}^k} \|\beta_1 e_1 - B_k y_k\|$.
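The bidiagonalization relations can be verified directly. A NumPy sketch of the Golub-Kahan process behind LSQR (dense, no reorthogonalization; `A` is the generic rectangular matrix of the slide, which plays the role of $C$ in our application):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of lower bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k + 1)                 # betas[j] = beta_{j+1}
    betas[0] = np.linalg.norm(b)
    U[:, 0] = b / betas[0]
    for j in range(k):
        v = A.T @ U[:, j] - (betas[j] * V[:, j - 1] if j > 0 else 0)
        alphas[j] = np.linalg.norm(v)
        V[:, j] = v / alphas[j]
        u = A @ V[:, j] - alphas[j] * U[:, j]
        betas[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / betas[j + 1]
    # (k+1) x k lower bidiagonal factor.
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alphas
    B[np.arange(1, k + 1), np.arange(k)] = betas[1:]
    return U, V, B, betas[0]

rng = np.random.default_rng(6)
A = rng.standard_normal((15, 5))
b = rng.standard_normal(15)
k = 4
U, V, B, beta1 = golub_kahan(A, b, k)
assert np.allclose(A @ V, U @ B)            # A V_k = U_{k+1} B_k
assert np.allclose(U.T @ U, np.eye(k + 1), atol=1e-8)
# LSQR iterate: x_k = V_k y_k with y_k = argmin || beta_1 e_1 - B_k y ||.
rhs = np.zeros(k + 1); rhs[0] = beta1
y, *_ = np.linalg.lstsq(B, rhs, rcond=None)
x_k = V @ y
```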
43 Application: Constrained Normalized Cut. Let $x$ be an indicator vector: $x_i = 1$ if $i \in A$ and $-1$ otherwise. The normalized cut can be simplified to $\mathrm{Ncut}(A, B) = \frac{y^T (D - W) y}{y^T D y}$, where $D$ is a diagonal matrix with the row sums of $W$ on the diagonal and $y$ is a linear transformation of $x$ with the property $y^T D \mathbf{1} = 0$. Set $z = D^{\frac{1}{2}} y$ and $L = D^{-\frac{1}{2}} (D - W) D^{-\frac{1}{2}}$, and rewrite the problem of minimizing the normalized cut as $\min z^T L z$, subject to $z^T z = 1$, $z^T D^{\frac{1}{2}} \mathbf{1} = 0$.
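The spectral setup can be checked on a toy graph; a NumPy sketch with random symmetric weights:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2
np.fill_diagonal(W, 0)                        # weighted adjacency of a small graph
d = W.sum(axis=1)                             # degrees (row sums)
D = np.diag(d)

# Normalized Laplacian L = D^{-1/2} (D - W) D^{-1/2}.
Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
L = Dinv_sqrt @ (D - W) @ Dinv_sqrt

z0 = np.sqrt(d) / np.linalg.norm(np.sqrt(d))  # z0 is D^{1/2} 1, normalized
assert np.allclose(L @ z0, 0)                 # the trivial eigenpair the constraint removes
assert np.all(np.linalg.eigvalsh(L) >= -1e-10)  # L is symmetric positive semidefinite
```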
44 Application: Constrained Normalized Cut. Linear constraints: the cluster information $z_i = \pm\frac{1}{\sqrt{n}}$ and the constraint $z^T D^{\frac{1}{2}} \mathbf{1} = 0$. Weight matrix: $w_{ij} = e^{-\|F(i) - F(j)\|_2^2 / \delta_F^2}$ if $\|X(i) - X(j)\|_2 < r$, and $0$ otherwise, where $F(i)$ is the brightness of pixel $i$ and $X(i)$ is the location of pixel $i$.
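A minimal sketch of the weight matrix on a tiny 1-D "image" (the brightness values, `delta_F`, and `r` are illustrative; self-weights are left at zero here, which is one common convention):

```python
import numpy as np

F = np.array([0.1, 0.12, 0.9, 0.88, 0.5])   # pixel brightness
X = np.arange(5.0)                           # pixel locations (1-D for simplicity)
delta_F, r = 0.2, 2.5

n = F.size
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and abs(X[i] - X[j]) < r:
            # Gaussian similarity in brightness, cut off beyond radius r.
            W[i, j] = np.exp(-(F[i] - F[j])**2 / delta_F**2)

assert np.allclose(W, W.T)                   # weights are symmetric
assert W[0, 4] == 0                          # pixels farther apart than r: no edge
assert W[0, 1] > W[0, 2]                     # similar brightness gives a stronger edge
```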
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationMarkov Chains and Spectral Clustering
Markov Chains and Spectral Clustering Ning Liu 1,2 and William J. Stewart 1,3 1 Department of Computer Science North Carolina State University, Raleigh, NC 27695-8206, USA. 2 nliu@ncsu.edu, 3 billy@ncsu.edu
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationMachine learning for pervasive systems Classification in high-dimensional spaces
Machine learning for pervasive systems Classification in high-dimensional spaces Department of Communications and Networking Aalto University, School of Electrical Engineering stephan.sigg@aalto.fi Version
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationSummary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University)
Summary: A Random Walks View of Spectral Segmentation, by Marina Meila (University of Washington) and Jianbo Shi (Carnegie Mellon University) The authors explain how the NCut algorithm for graph bisection
More informationKrylov Subspace Methods to Calculate PageRank
Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector
More informationMatrix Approximations
Matrix pproximations Sanjiv Kumar, Google Research, NY EECS-6898, Columbia University - Fall, 00 Sanjiv Kumar 9/4/00 EECS6898 Large Scale Machine Learning Latent Semantic Indexing (LSI) Given n documents
More informationSpectral Clustering. Spectral Clustering? Two Moons Data. Spectral Clustering Algorithm: Bipartioning. Spectral methods
Spectral Clustering Seungjin Choi Department of Computer Science POSTECH, Korea seungjin@postech.ac.kr 1 Spectral methods Spectral Clustering? Methods using eigenvectors of some matrices Involve eigen-decomposition
More informationEIGENVALUE PROBLEMS. Background on eigenvalues/ eigenvectors / decompositions. Perturbation analysis, condition numbers..
EIGENVALUE PROBLEMS Background on eigenvalues/ eigenvectors / decompositions Perturbation analysis, condition numbers.. Power method The QR algorithm Practical QR algorithms: use of Hessenberg form and
More informationLinear Dimensionality Reduction
Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Principal Component Analysis 3 Factor Analysis
More informationDomain decomposition on different levels of the Jacobi-Davidson method
hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional
More information1 Conjugate gradients
Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration
More informationSpectral Generative Models for Graphs
Spectral Generative Models for Graphs David White and Richard C. Wilson Department of Computer Science University of York Heslington, York, UK wilson@cs.york.ac.uk Abstract Generative models are well known
More informationSpectral Clustering on Handwritten Digits Database Mid-Year Pr
Spectral Clustering on Handwritten Digits Database Mid-Year Presentation Danielle dmiddle1@math.umd.edu Advisor: Kasso Okoudjou kasso@umd.edu Department of Mathematics University of Maryland- College Park
More informationPCA and LDA. Man-Wai MAK
PCA and LDA Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S.J.D. Prince,Computer
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationMTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline
More informationLecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n.
Lecture # 11 The Power Method for Eigenvalues Part II The power method find the largest (in magnitude) eigenvalue of It makes two assumptions. 1. A is diagonalizable. That is, A R n n. A = XΛX 1 for some
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationj=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.
Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u
More informationThe quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying
I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationEECS 275 Matrix Computation
EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 9 1 / 23 Overview
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationThe QR Decomposition
The QR Decomposition We have seen one major decomposition of a matrix which is A = LU (and its variants) or more generally PA = LU for a permutation matrix P. This was valid for a square matrix and aided
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationNumerical Methods I: Eigenvalues and eigenvectors
1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationFoundations of Computer Vision
Foundations of Computer Vision Wesley. E. Snyder North Carolina State University Hairong Qi University of Tennessee, Knoxville Last Edited February 8, 2017 1 3.2. A BRIEF REVIEW OF LINEAR ALGEBRA Apply
More informationTutorials in Optimization. Richard Socher
Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2
More informationA Tour of the Lanczos Algorithm and its Convergence Guarantees through the Decades
A Tour of the Lanczos Algorithm and its Convergence Guarantees through the Decades Qiaochu Yuan Department of Mathematics UC Berkeley Joint work with Prof. Ming Gu, Bo Li April 17, 2018 Qiaochu Yuan Tour
More informationLecture 4 Eigenvalue problems
Lecture 4 Eigenvalue problems Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More information