Estimating the Largest Elements of a Matrix


1 Estimating the Largest Elements of a Matrix. Samuel Relton (samrelton.com, blog.samrelton.com). Joint work with Nick Higham (nick.higham@manchester.ac.uk). May 12th, 2016.

2 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

3 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

4 The problem. We are interested in the max-norm $\|A\|_M = \max_{i,j} |a_{ij}|$.

5 The problem. We are interested in the max-norm $\|A\|_M = \max_{i,j} |a_{ij}|$. Interesting features: The norm is not consistent: $\|AB\|_M \le \|A\|_M \|B\|_M$ is not always true. It is simple to compute when $A$ is known explicitly. We use only products $Av$ for given vectors $v$.

6 Key observation. Our approach relies on a key observation: $\|A\|_M = \|A\|_{1,\infty} = \max_{x \neq 0} \|Ax\|_\infty / \|x\|_1$. We will start from the mixed subordinate norm estimator derived independently by Boyd (1974) and Tao (1975). This was used by Gu and Miranian (2004) with $A = B^T C^T$ for rank-revealing Cholesky, but was not analyzed.

7 Key observation. Our approach relies on a key observation: $\|A\|_M = \|A\|_{1,\infty} = \max_{x \neq 0} \|Ax\|_\infty / \|x\|_1$. We will start from the mixed subordinate norm estimator derived independently by Boyd (1974) and Tao (1975). This was used by Gu and Miranian (2004) with $A = B^T C^T$ for rank-revealing Cholesky, but was not analyzed. More on this later. Things I won't mention here: Computing $\max_{i,j} a_{ij}$ (without the modulus) is also possible. We focus on $A \in \mathbb{R}^{m \times n}$, but complex matrices are supported.
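
The equality above can be verified directly for a small matrix: over the 1-norm unit ball the maximum of $\|Ax\|_\infty$ is attained at a canonical unit vector $e_j$, so scanning the columns recovers the largest element. A minimal MATLAB sketch (illustrative only, not from the talk):

    A = randn(5);
    maxnorm = max(abs(A(:)));                    % ||A||_M
    mixed = 0;
    for j = 1:size(A,2)
        mixed = max(mixed, norm(A(:,j), inf));   % ||A e_j||_inf
    end
    fprintf('max-norm %.4f, (1,inf)-norm %.4f\n', maxnorm, mixed)

The two printed values agree, which is why an estimator for $\|A\|_{1,\infty}$ also estimates $\|A\|_M$.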

8 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

9 Applications - Data mining. Factor models in recommender systems (Amazon, Netflix, etc.) need $\|BC^T\|_M$ with $B$, $C$ rectangular matrices.

10 Applications - Data mining. The largest values of $BC^T$ give the strongest recommendations. For example, the rows of $B$ are user preferences for films across categories (Action, Romance, Horror, Comedy), and the rows of $C$ describe how each film (Die Hard, Love Actually, The Ring) scores in these categories. [The numeric entries of the example matrices $B$ and $C$ are not reproduced here.]

11 Applications - Data mining. In general, finding the largest $p$ entries of $A^T B$ is called the Maximum All-pairs Dot-product (MAD) search. Forming $A^T B$ is expensive and (due to fill-in) may not be possible, whereas calculating $A^T B v$ is easy. Other applications: link prediction in graphs; text analysis.
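
With sparse factors, the action of $A^T B$ and its transpose can be supplied as function handles, so the product is never formed. A hedged MATLAB sketch (the sizes and handle names are illustrative, not from the talk):

    d = 1000; m = 50; n = 60;           % hypothetical dimensions
    A = sprandn(d, m, 0.01);            % sparse factors
    B = sprandn(d, n, 0.01);
    prodfun  = @(v) A' * (B * v);       % (A^T B) v   via two sparse matvecs
    tprodfun = @(v) B' * (A * v);       % (A^T B)^T v = B^T (A v)

Each application costs two sparse matrix-vector products and avoids the fill-in of the explicit product.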

12 Applications - Analysing complex networks. What/who is the most important node of this graph? [Graph figure not reproduced here.]

13 Applications - Analysing complex networks. What/who is the most important node of this graph? Clearly node 3, since it connects everyone else!

14 Applications - Analysing complex networks. Between each pair of nodes we can define their communicability. It measures how easy it is to send a message from node $i$ to node $j$. The communicability $C(i,j)$ is a measure of the relative importance of each connection. If we use the matrix exponential $\exp(A) = \sum_k A^k / k!$, then (Estrada & Hatano, 2008) $C(i,j) = (\exp(A))_{ij}$.
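
For a small graph the communicability matrix can be computed directly with MATLAB's expm; a toy sketch (the graph is a made-up example, not one from the talk):

    % Path graph 1 - 2 - 3: the only route from node 1 to node 3 is via node 2.
    A = [0 1 0;
         1 0 1;
         0 1 0];
    C = expm(A);        % C(i,j) = (exp(A))_{ij}, the communicability
    C(1,2), C(1,3)      % the direct link 1-2 scores higher than 1-3

For large networks $\exp(A)$ is dense, so one works with products $\exp(A)v$ instead, as in the experiments later in the talk.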

15 Applications - Analysing complex networks. California road network. We want the $p \ge 1$ most important edges without computing $\exp(A)$, which is dense. [Network figure not reproduced here.]

16 Application - Optimization (CLIME Estimator). In the CLIME estimator (Constrained $\ell_1$ Inverse Matrix Estimator) we would like to solve $\min \|\Omega\|_1$ subject to $\|\Sigma_n \Omega - I\|_M \le \lambda_n$, where $\Omega \in \mathbb{R}^{p \times p}$ is the precision matrix (inverse covariance matrix).

17 Application - Optimization (CLIME Estimator). In the CLIME estimator (Constrained $\ell_1$ Inverse Matrix Estimator) we would like to solve $\min \|\Omega\|_1$ subject to $\|\Sigma_n \Omega - I\|_M \le \lambda_n$, where $\Omega \in \mathbb{R}^{p \times p}$ is the precision matrix (inverse covariance matrix). Computing $\|\Sigma_n \Omega - I\|_M$ involves a matrix product, which becomes increasingly expensive for larger $p$. Our new method only requires $(\Sigma_n \Omega - I)v$ and $(\Sigma_n \Omega - I)^T v$.
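
As a sketch, the two required actions can be supplied as function handles, so the $p \times p$ product is never formed (the matrices below are placeholders for illustration; in practice $\Sigma_n$ and $\Omega$ come from the estimation problem):

    p = 500;                                 % illustration only
    Sigma = cov(randn(2*p, p));              % a sample covariance matrix
    Omega = eye(p);                          % placeholder iterate for Omega
    afun  = @(v) Sigma*(Omega*v) - v;        % (Sigma_n Omega - I) v
    atfun = @(v) Omega'*(Sigma'*v) - v;      % (Sigma_n Omega - I)^T v

Each handle costs two matrix-vector products rather than a matrix-matrix product.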

18 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

19 Basic Algorithm - Theory. Based on an iteration for $\|A\|_{\alpha,\beta}$ by Boyd (1974) and Tao (1975). Our problem is reformulated as: maximize $F(x) := \|Ax\|_\infty$ subject to $x \in S := \{ x : \|x\|_1 \le 1 \}$.

20 Basic Algorithm - Theory. Based on an iteration for $\|A\|_{\alpha,\beta}$ by Boyd (1974) and Tao (1975). Our problem is reformulated as: maximize $F(x) := \|Ax\|_\infty$ subject to $x \in S := \{ x : \|x\|_1 \le 1 \}$. Since $F$ and $S$ are convex, there exists a subgradient $g$ at each $u$ such that $F(v) \ge F(u) + g^T(v - u)$ for all $u, v \in S$.

21 Basic Algorithm - Theory. $F(v) \ge F(u) + g^T(v - u)$, for all $u, v \in S$. Each iteration of our algorithm follows the same procedure: 1. Given a current iterate $u$, pick a subgradient $g$. 2. Declare convergence if $g$ is sufficiently small. 3. Move from $u$ to $v$, where $v$ maximizes the bound above.

22 Basic Algorithm - Theory. $F(v) \ge F(u) + g^T(v - u)$, for all $u, v \in S$. Each iteration of our algorithm follows the same procedure: 1. Given a current iterate $u$, pick a subgradient $g$. 2. Declare convergence if $g$ is sufficiently small. 3. Move from $u$ to $v$, where $v$ maximizes the bound above. After we pick a subgradient $g$, we need to find the $v \in S$ which maximizes $g^T(v - u)$ (or, equivalently, $g^T v$ subject to $\|v\|_1 \le 1$). Clearly $v = \operatorname{dual}(g)$, so it remains only to choose a subgradient.

23 Basic Algorithm - Theory. After some manipulation we find that the set of subgradients $\partial F$ contains $A^T \operatorname{dual}(Ax)$, where $\operatorname{dual}(v) = e_i$ with $|v_i| = \|v\|_\infty$. Since we know that $A^T \operatorname{dual}(Ax) \in \partial F$, we can use the following iteration...

24 Basic Algorithm.
1  $x = n^{-1}e$
2  for $k = 1, 2, \dots$
3      $y = Ax$
4      if $k > 1$
5          if $\|y\|_\infty \le \|g\|_\infty$, $\gamma = \|g\|_\infty$, quit, end
6      end
7      $g = A^T e_i$, where $|y_i| = \|y\|_\infty$ (smallest such $i$)
8      if $\|g\|_\infty \le \|y\|_\infty$, $\gamma = \|y\|_\infty$, quit, end
9      $x = e_j$, where $|g_j| = \|g\|_\infty$ (smallest such $j$)
10 end
Costs two matrix-vector products per iteration. We can also extract the $(i, j)$ position of the largest element.
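
A runnable MATLAB sketch of this pseudocode follows; it is a reimplementation for illustration (the function name is made up; the authors' reference code is at github.com/sdrelton/matrix-est-maxelts). Save as maxelt_basic.m:

    function [gamma, i, j] = maxelt_basic(A)
    %MAXELT_BASIC  Estimate ||A||_M = max |a_ij| by the Boyd/Tao-style iteration.
    %   Returns the estimate gamma <= ||A||_M and the position (i,j) found.
    [~, n] = size(A);
    x = ones(n,1)/n;                  % starting vector x = e/n
    gnorm = -inf; j = 1;
    for k = 1:n+1                     % safeguard bound on the iteration count
        y = A*x;                      % explore a column (one matvec)
        [ynorm, i] = max(abs(y));     % max returns the smallest such i
        if k > 1 && ynorm <= gnorm
            gamma = gnorm; return     % previous row maximum was optimal
        end
        g = A(i,:)';                  % g = A^T e_i, i.e. row i of A
        [gnorm, j] = max(abs(g));
        if gnorm <= ynorm
            gamma = ynorm; return     % column maximum was optimal
        end
        x = zeros(n,1); x(j) = 1;     % move to column j
    end
    gamma = gnorm;
    end

For example, maxelt_basic(randn(100)) typically matches max(abs(A(:))) after two or three iterations. For a black-box $A$, the row extraction A(i,:)' would be replaced by a product with $A^T$, giving the two matvecs per iteration quoted above.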

25-28 Basic Algorithm - Example. Here is an example of how the algorithm searches a matrix (shown over four slides; the matrix figures are not reproduced here). Note: these are the steps taken by Gaussian elimination with rook pivoting.

29 Basic Algorithm - Properties. The algorithm has some interesting convergence properties. The estimate $\gamma \le \|A\|_M$ increases monotonically until convergence. The algorithm searches rows and columns sequentially. If the elements of $A$ are i.i.d. random variables, then the expected number of iterations is less than $1 + \exp(1) \approx 3.72$ (Foster, GEPP).

30 Basic Algorithm - Properties. The algorithm has some interesting convergence properties. The estimate $\gamma \le \|A\|_M$ increases monotonically until convergence. The algorithm searches rows and columns sequentially. If the elements of $A$ are i.i.d. random variables, then the expected number of iterations is less than $1 + \exp(1) \approx 3.72$ (Foster, GEPP). Since the algorithm inspects only a few of the $m$ rows and $n$ columns, we can construct counterexamples: a class of examples where $\gamma / \|A\|_M = \epsilon$ (arbitrarily poor estimates), and a class of examples that take $n - 1$ steps for an $n \times n$ input.

31 Basic Algorithm - Counterexample. The following $6 \times 6$ matrix needs exactly 5 iterations to converge: $T =$ [matrix entries not reproduced here; its largest element is 24]. The algorithm searches rows/columns sequentially until it reaches 24. First iteration: column 1 and row 2. Second iteration: column 2 and row 3...

32 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

33 Algorithm enhancements. There are a number of issues with the basic algorithm: it only uses matrix-vector operations (level-2 BLAS); it can only explore one column/row in each iteration; it can only find the largest element (we want the $p \ge 1$ largest). We propose the following enhancements: blocking the iterates, and deflating previously found entries.

34 Blocked algorithm. We want to produce a blocked version of this algorithm, meaning that we replace the iterate $x$ by $X \in \mathbb{R}^{n \times t}$. The advantages are that the $t$ columns can communicate to give better estimates, and it allows the use of level-3 BLAS.

35 Blocked algorithm. We want to produce a blocked version of this algorithm, meaning that we replace the iterate $x$ by $X \in \mathbb{R}^{n \times t}$. The advantages are that the $t$ columns can communicate to give better estimates, and it allows the use of level-3 BLAS. There are a couple of small issues to deal with. How do we choose the $t$ starting vectors? What do we do when we see the same column/row twice?

36 Blocked Algorithm - Starting vectors. Using a matrix $X \in \mathbb{R}^{n \times t}$ as our iterate means that $t$ starting vectors are required. $X(:,1) = [1/n, \dots, 1/n]^T$. $X(i,2) = (-1)^{i+1}(1 + (i-1)/(n-1))$. The rest are random unit vectors.

37 Blocked Algorithm - Starting vectors. Using a matrix $X \in \mathbb{R}^{n \times t}$ as our iterate means that $t$ starting vectors are required. $X(:,1) = [1/n, \dots, 1/n]^T$. $X(i,2) = (-1)^{i+1}(1 + (i-1)/(n-1))$. The rest are random unit vectors. The second of these vectors is recommended by Higham to pick out large rows of a matrix. Future work: investigate how max-plus algebra, in which $a \oplus b = \max(a, b)$ and $a \otimes b = a + b$, can be used to choose starting vectors that are better than random.
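
A short MATLAB sketch of this construction (two assumptions of mine, not stated on the slide: the "random unit vectors" are taken to be random canonical vectors $e_j$, and the second column is scaled back onto the 1-norm unit ball):

    n = 100; t = 4;                         % illustrative sizes
    X = zeros(n, t);
    X(:,1) = ones(n,1)/n;                   % uniform starting vector
    i = (1:n)';
    X(:,2) = (-1).^(i+1) .* (1 + (i-1)/(n-1));
    X(:,2) = X(:,2) / norm(X(:,2), 1);      % assumed normalization
    for k = 3:t
        X(randi(n), k) = 1;                 % random canonical vector e_j
    end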

38 Blocked Algorithm - Replacing vectors. Suppose that on some iteration we have $X(:,i) = X(:,j)$ for $i \neq j$. Multiplying both by $A$ would be a waste of computation. Solution: replace $X(:,j)$ with a random unit vector which hasn't been seen in any previous iteration. This involves keeping track of all previously used unit vectors, and means that the vectors communicate to explore more of the search space.

39 Finding the $p > 1$ largest elements. To find the $p > 1$ largest elements we need to keep track of the $p$ largest elements encountered so far. Use the block algorithm with $t = \lceil \alpha p \rceil$ columns. Problem: all $t$ columns of our block method try to converge to the same element.

40 Finding the $p > 1$ largest elements. To find the $p > 1$ largest elements we need to keep track of the $p$ largest elements encountered so far. Use the block algorithm with $t = \lceil \alpha p \rceil$ columns. Problem: all $t$ columns of our block method try to converge to the same element. Solution: deflate the largest $p$ entries found so far, i.e., work with the matrix $A_p := A - \sum_{r=1}^{p} a_{i_r j_r} e_{i_r} e_{j_r}^T$. This means updating $p$ entries of $Av$ to form $A_p v$ for each given $v$, as sketched below. Note that $A_p$ changes in each iteration as new large entries are found.
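
A minimal MATLAB sketch of the deflated product (the function name is made up; save as deflated_matvec.m). The deflated entries are stored as index vectors I, J with values vals, so $A$ itself is never modified:

    function y = deflated_matvec(A, v, I, J, vals)
    %DEFLATED_MATVEC  Compute A_p*v, where A_p = A - sum_r vals(r)*e_I(r)*e_J(r)'.
    y = A*v;
    for r = 1:numel(vals)
        y(I(r)) = y(I(r)) - vals(r)*v(J(r));   % each rank-1 term touches one entry of y
    end
    end

Each deflated entry adds one multiply-add per product, which is negligible next to the matvec itself.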

41 Outline: Defining the problem; Applications; Basic algorithm and theory; Extensions and heuristics; Numerical experiments.

42 Numerical Experiments. Two main types of experiment. Random matrices: normal $N(0,1)$ distributed elements, and inverses of such matrices. Real-life application matrices: MAD search, and networks. We track a number of metrics: $\psi = \gamma / \|A\|_M \in [0, 1]$, the underestimation ratio; $\pi$, the number of iterations required. Machine details: 2x Intel i7 2.7GHz, 8GB RAM, Ubuntu 13.10, MATLAB 2014b.

43 Experiments on random matrices - p = 1. Using the block algorithm for $p = 1$ we compute the metrics over 1000 random matrices of each type. We use random matrices of the form randn(100) and inv(randn(100)). More types of matrix are considered in the paper.

44 p = 1, randn(100), 1000 runs. [Results table not reproduced here; its columns are $t$, $\psi_{\min}$, $\psi_{\mathrm{avg}}$, % exact, $\pi_{\mathrm{avg}}$, $\pi_{\max}$, % improve.]

45 p = 1, inv(randn(100)), 1000 runs. [Results table not reproduced here; same columns as the previous slide.]

46 Real-life MAD search. Recall the MAD search: find the largest element of $A^T B$. We will look at the following cases. MovieLens: m = 65,133, n = 71,567, nnz = 10,000,054; form the SVD with 150 singular values, $U \Sigma V^T = R$, then take $A = (U\Sigma)^T$, $B = V^T$. Sandia/ASIC 680k: n = 682,862, nnz = 2,638,997; find the largest elements of $A^T A$. SNAP/as-Skitter: n = 1,696,415, nnz = 11,095,298; find the largest elements of $A^T A$.

47 MAD Search - MovieLens. Parameters used: $t = 10$. Time to compute $A^T B$ and take the maximum: 88 secs. Time for new algorithm: 0.4 secs (found the largest element). Speedup: 200.

48 MAD Search - ASIC 680k. Parameters used: $t = 10$. Time to compute $A^T A$ and take the maximum: 1.6e4 secs. Time for new algorithm: 1.0 secs (found the largest element). Speedup: 16,000.

49 MAD Search - as-Skitter. Parameters used: $t = 10$. Time to compute $A^T A$ and take the maximum: 2.4e4 secs. Time for new algorithm: 3.6 secs (found the second largest element). Speedup: 6,600.

50 MAD search - Summary. Exact method (for comparison): compute $A^T A e_i$ for $i = 1:n$ and store the largest entries.

    Matrix            ψ    Exact Alg (secs)   New Alg (secs)   Speedup
    MovieLens         1    88                 0.4              200
    Sandia/ASIC 680k  1    1.6e4              1.0              16,000
    SNAP/as-Skitter   -    2.4e4              3.6              6,600

Finding the $p = 5$ largest elements using deflation gets all elements correct.

51 Real-life networks. Find the $p = 10$ strongest links in the following networks. SNAP/ca-AstroPh: n = 18,772, nnz = 396,160. SNAP/ca-CondMat: n = 23,133, nnz = 186,936. MUFC Twitter: n = 148,918, nnz = 193,032. $\exp(A)v$ is computed using expmv (Higham and Al-Mohy). Exact method (for comparison): compute $\exp(A)e_i$ for $i = 1:n$ and store the largest entries. The parameter $\alpha = 3$ is used, meaning that $t = 30$ columns are used in each iteration.

52 Real-life networks - SNAP/ca-AstroPh. Parameters used: $\alpha = 3$, $p = 10$, $t = 30$. Time for new algorithm: 1.7 secs. Speedup over computing $\exp(A)$ and taking the maximum: 140.

53 Real-life networks - SNAP/ca-CondMat. Parameters used: $\alpha = 3$, $p = 10$, $t = 30$. Time for new algorithm: 1.2 secs. Speedup over computing $\exp(A)$ and taking the maximum: 170.

54 Real-life networks - MUFC Twitter. Parameters used: $\alpha = 3$, $p = 10$, $t = 30$. Time for new algorithm: 3.2 secs. Speedup over computing $\exp(A)$ and taking the maximum: 1720.

55 Real-life networks - Summary.

    Matrix            Correct Elems   Exact Alg (secs)   New Alg (secs)   Speedup
    SNAP/ca-AstroPh   10/10           -                  1.7              140
    SNAP/ca-CondMat   10/10           -                  1.2              170
    MUFC Twitter      10/10           -                  3.2              1720

All the largest elements were found correctly, with significant speedups.

56 Recommendations. We learned from our experiments that using deflation is always preferable: it is at least as accurate and reliable (usually more so), and its performance hit is tiny compared with the level-3 BLAS operations. To increase accuracy further: try to find more of the largest elements than required, and use deflation.

57 Conclusions. Finding the largest element of a matrix is an important problem with numerous applications. Our algorithm is a black box applicable to any matrix for which $Av$ can be formed (in contrast to other, problem-specific algorithms). The algorithm performs well on real-life problems. Code is available on GitHub: github.com/sdrelton/matrix-est-maxelts

58 References.
David W. Boyd. The power method for $\ell^p$ norms. Linear Algebra Appl., 9:95-101, 1974.
Ernesto Estrada and Naomichi Hatano. Communicability in complex networks. Physical Review E, 77:036111, 2008.
Leslie V. Foster. The growth factor and efficiency of Gaussian elimination with rook pivoting. J. Comput. Appl. Math., 86:177-194, 1997.
M. Gu and L. Miranian. Strong rank revealing Cholesky factorization. Electron. Trans. Numer. Anal., 17:76-92, 2004.
Pham Dinh Tao. Convergence of a subgradient method for computing the bound norm of matrices. Linear Algebra Appl., 62:163-182, 1984.
