Computing approximate PageRank vectors by Basis Pursuit Denoising


Computing approximate PageRank vectors by Basis Pursuit Denoising
Michael Saunders, Systems Optimization Laboratory, Stanford University
Joint work with Holly Jin, LinkedIn Corp
SIAM Annual Meeting, San Diego, July 7-11, 2008

Outline
1. Introduction
2. PageRank
3. Basis Pursuit Denoising
4. Sparse PageRank (Neumann)
5. Sparse PageRank (LPdual)
6. Preconditioning
7. Summary

Abstract
The PageRank eigenvector problem involves a square system Ax = b in which x is naturally nonnegative and somewhat sparse (depending on b). We seek an approximate x that is nonnegative and extremely sparse. We experiment with an active-set optimization method designed for the dual of the BPDN problem, and find that it tends to extract the important elements of x in a greedy fashion.
Supported by the Office of Naval Research

Introduction
Many applications (statistics, signal analysis, imaging) seek sparse solutions to square or rectangular systems
    Ax ≈ b,    x sparse.
Basis Pursuit Denoising (BPDN) seeks sparsity by solving
    min ½‖Ax − b‖² + λ‖x‖_1.
Personalized PageRank eigenvectors involve square systems
    (I − αH^T)x = v,    v sparse,
where 0 < α < 1, He ≤ e, and x ≥ 0 is somewhat sparse.
Perhaps BPDN can find a very sparse approximate x.

PageRank

PageRank (Langville and Meyer 2006)
H = [small example hyperlink matrix with rows such as (1/3 1/3 1/3) and (1/2 1/2)]
    Hyperlink matrix: He ≤ e (each row sums to 1, or to 0 for a dangling page)
v = (1/n)e or e_i
    Teleportation vector or personalization vector: e^T v = 1
S = H + a v^T
    Stochastic matrix: Se = e (a marks the dangling pages)
G = αS + (1 − α)e v^T
    Google matrix, α = 0.85 say: Ge = e
eigenvector G^T x = x   ⟺   linear system (I − αH^T)x = v
In both cases, x ← x/(e^T x) (and x ≥ 0).
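
For a concrete check of the eigenvector/linear-system equivalence, here is a minimal MATLAB sketch (the 3-node H below is a made-up example, not the matrix from the slide): the dominant left eigenvector of G and the normalized solution of (I − αH^T)x = v coincide.

    % Minimal sketch: eigenvector form vs linear-system form (hypothetical H).
    H = [0 1/2 1/2; 0 0 1; 0 0 0];      % row 3 is a dangling page
    n = 3;  e = ones(n,1);  v = e/n;  alpha = 0.85;
    a = double(sum(H,2) == 0);          % dangling-page indicator
    S = H + a*v';                       % stochastic: S*e = e
    G = alpha*S + (1-alpha)*e*v';       % Google matrix: G*e = e
    [V,D] = eig(G');                    % eigenvector form: G'*x = x
    [~,k] = max(real(diag(D)));
    x1 = real(V(:,k));  x1 = x1/sum(x1);
    x2 = (eye(n) - alpha*H') \ v;       % linear-system form
    x2 = x2/sum(x2);
    norm(x1 - x2, inf)                  % agrees up to roundoff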

Web matrices H    (I − αH^T)x = v

Name               n (pages)   nnz (links)   i (v = e_i)
csstanford         9914                      4 (cs.stanford.edu)
Stanford                                     6753 (calendus.stanford.edu); also cs.stanford.edu
Stanford-Berkeley                            6753 (unknown page)

All data collected around 2001: Cleve Moler, Sep Kamvar, David Gleich.

Power method on G^T x = x
Stanford, v = e_6753 (calendus.stanford.edu)
[Plot: time (secs) vs α = 0.1 : 0.01 : 0.99]

Power method
Stanford-Berkeley, v = e_6753
[Plot: time (secs) vs α = 0.1 : 0.01 : 0.99]
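
For reference, the iteration being timed can be run matrix-free; a minimal sketch, using the standard identity G^T x = αH^T x + (α a^T x + 1 − α)v whenever e^T x = 1 (tol is a hypothetical stopping tolerance):

    % Power-method sketch for G'*x = x using only the sparse H and v.
    function x = pagerank_power(H, v, alpha, tol)
      a = double(full(sum(H,2)) == 0);   % dangling-page indicator
      x = v / sum(v);                    % start with e'*x = 1
      while true
        xnew = alpha*(H'*x) + (alpha*(a'*x) + 1 - alpha)*v;
        if norm(xnew - x, 1) <= tol, x = xnew; break; end
        x = xnew;
      end
    end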

Exact x(α)
csstanford, n = 9914, v = e_4 (cs.stanford.edu)
[Plot: x_j(α), j = 1:n, at α = 0.85; xtrue = P\v]

Exact x(α)
Stanford, v = e_i (cs.stanford.edu)
[Plot: x_j(α), j = 1:n, at α = 0.85]

Sparsity of x(α)
Stanford, v = e_i (cs.stanford.edu)
[Two panels: entries with x > 250/n and with x > 500/n; the second has nz = 3076]
α = 0.1 : 0.01 : 0.99 (90 values)

Sparsity of x(α), continued
[Two panels: entries with x > 1000/n and with x > 2000/n; the second has nz = 2305]
α = 0.1 : 0.01 : 0.99 (90 values)
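
The nz counts in these panels are simple threshold counts on the computed vector; in MATLAB terms (x and n here stand for whatever vector and dimension are being plotted):

    nz = nnz(x > 1000/n);    % entries of x deemed visibly nonzero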

Basis Pursuit Denoising

min ℓ1 ⇒ sparse x

Lasso(ν)  (Tibshirani 1996)
    min ½‖Ax − b‖²   st  ‖x‖_1 ≤ ν
    A = [dense matrix]

Basis Pursuit  (Chen, Donoho & S 1998)
    min ‖x‖_1   st  Ax = b
    A = fast operator (Ax, A^T y are efficient)

BPDN(λ)  (same paper)
    min λ‖x‖_1 + ½‖Ax − b‖²
    fast operator, any shape
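
A worked special case (standard, not from the slides) shows why the λ‖x‖_1 term produces exact zeros: for A = I, BPDN separates by component and the minimizer is soft thresholding.

    % For A = I:  min_x  lambda*||x||_1 + 0.5*||x - b||^2  has the closed form
    %             x_j = sign(b_j) * max(|b_j| - lambda, 0).
    soft = @(b, lambda) sign(b) .* max(abs(b) - lambda, 0);
    soft([3; 0.5; -2], 1)    % -> [2; 0; -1]: the small entry is zeroed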

BP and BPDN algorithms for min λ‖x‖_1 + ½‖Ax − b‖²

OMP            Davis, Mallat et al 1997            Greedy
BPDN-interior  Chen, Donoho & S 1998, 2001         Interior, CG
PDSCO, PDCO    Saunders 1997, 2002                 Interior, LSQR
BCR            Sardy, Bruce & Tseng 2000           Orthogonal blocks
Homotopy       Osborne et al 2000                  Active-set, all λ
LARS           Efron, Hastie, Tibshirani 2004      Active-set, all λ
STOMP          Donoho, Tsaig, et al 2006           Double greedy
l1_ls          Kim, Koh, Lustig, Boyd et al 2007   Primal barrier, PCG
GPSR           Figueiredo, Nowak & Wright 2007     Gradient Projection
BCLS           Friedlander 2006                    Projection, LSQR
SPGL1          van den Berg & Friedlander 2007     Spectral GP, all λ
BPdual         Friedlander & S 2007                Active-set on dual
LPdual         Friedlander & S 2007                For x ≥ 0

‖x‖_1 = e^T x when x ≥ 0. This suggests regularized LP problems:

LPprimal(λ)    min over x, y of  e^T x + ½λ‖y‖²   st  Ax + λy = b,  x ≥ 0

LPdual(λ)      min over y of  −b^T y + ½λ‖y‖²   st  A^T y ≤ e

Friedlander and S 2007: new active-set Matlab codes
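
One way to see the pairing (a derivation sketch, not on the slide): form the Lagrangian of LPprimal with multiplier u for the equality constraint,

    \begin{aligned}
      L(x,y,u) &= e^T x + \tfrac{\lambda}{2}\|y\|^2 - u^T(Ax + \lambda y - b),\\
      \inf_{x \ge 0} L > -\infty &\iff A^T u \le e,
      \qquad \nabla_y L = 0 \iff y = u,
    \end{aligned}

so the dual is max over y of b^T y − ½λ‖y‖² st A^T y ≤ e, which is LPdual(λ) after negating the objective.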

LPdual solver for Ax ≈ b, x ≥ 0
    min over y of  −b^T y + ½λ‖y‖²   st  A^T y ≤ e
A few active constraints: B^T y = e
Initially B is empty, y = 0
Select columns of B in an almost greedy manner
Main work per iteration:
    Solve min ‖Bx − g‖
    Update the factorization QB = (R; 0) without storing Q
    Form dy = (g − Bx)/λ
    Form dz = A^T dy
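
The flavor of the loop, in a didactic MATLAB sketch (not the authors' code: it re-solves each subproblem from scratch instead of updating QB = (R; 0), and it omits step-length control and constraint dropping):

    % Greedy dual active-set sketch for  min -b'*y + (lambda/2)*||y||^2
    % st A'*y <= e.  The multipliers x on the active set B'*y = e are the
    % sparse primal estimate.
    function [x, active] = lpdual_sketch(A, b, lambda, maxit)
      active = [];  x = [];
      for k = 1:maxit
        B = A(:, active);
        ea = ones(numel(active), 1);
        x = (B'*B) \ (B'*b - lambda*ea);   % multipliers for B'*y = e
        y = (b - B*x) / lambda;            % subproblem minimizer
        z = full(A'*y);
        [zmax, j] = max(z);                % most violated constraint
        if zmax <= 1 + 1e-8, break; end
        active = [active, j];              % activate it (almost greedy)
      end
    end

Starting from an empty B gives y = b/λ, so the first column chosen maximizes A^T b, much like the first step of OMP.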

Sparse PageRank with Neumann Series

Neumann series
    (I − αH^T)x = v    (*)
    x = (I − αH^T)^{−1} v = (I + αH^T + (αH^T)² + ⋯)v
A sum of nonnegative terms
Equivalent to Jacobi's method on (*)    (Thanks David!)
‖x − x_k‖ decreases monotonically
‖r_k‖ ≤ ‖x − x_k‖ ≤ ((1 + α)/(1 − α))‖r_k‖    Error bounded by residual
Eventually small (x_k)_j ≈ 0: drop them and update the remainder
See Gleich and Polito (2006) for a sparse Power Method
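
A direct transcription of the iteration (a minimal sketch; tol is a hypothetical tolerance, and the sparsification step is left as a comment):

    % Neumann/Jacobi sketch for (I - alpha*H')*x = v.
    % With x the current partial sum, the residual is alpha*H'*term, whose
    % 1-norm is at most alpha*||term||_1, so testing ||term||_1 is safe.
    function x = pagerank_neumann(H, v, alpha, tol)
      x = v;  term = v;
      while norm(term, 1) > tol
        term = alpha * (H' * term);   % next nonnegative Neumann term
        % optional: zero out tiny entries of term and track the remainder
        x = x + term;                 % partial sum x_k
      end
      x = x / sum(x);                 % normalize so e'*x = 1
    end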

Sparse PageRank with BPDN (LPdual)

Sparse PageRank
Regard (I − αH^T)x = v, with v sparse (v = e_i say), as Ax ≈ v.
Apply active-set solver LPdual to the dual of
    min over x, r of  λe^T x + ½‖r‖²   st  Ax + r = v,  x ≥ 0
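
A hypothetical driver connecting this to the lpdual_sketch above (λ = 2e-3 echoes a value used below; at convergence the active-set multipliers are the nonzero entries of the sparse x):

    % Sparse approximate PageRank via the BPDN/LP route (sketch).
    A = speye(n) - alpha * H';          % square system matrix
    v = zeros(n, 1);  v(i) = 1;         % personalization vector e_i
    [xa, active] = lpdual_sketch(A, v, 2e-3, 200);
    x = zeros(n, 1);  x(active) = xa;   % scatter the few nonzeros
    x = x / sum(x);                     % normalize like PageRank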

[Plot: Stanford alpha=0.950 v=e(4), nonzero x_j on the active set]

Stanford web, α = 0.85, v = e_i (cs.stanford.edu)
Direct solve: xtrue = A\v
BPDN with λ = 2e-3: 76 nonzero x_j

Sparsity of xbp(α)
Stanford, v = e_i (cs.stanford.edu)
[Two panels: entries with xpower > 1000/n (nz = 2382) and with xbp > 1/n (nz = 2305)]
α = 0.1 : 0.01 : 0.99 (90 values), λ = 4e-3
At the much looser threshold xpower > 1/n, the row and nonzero counts are far larger.

Power method vs BP
Stanford, v = e_i (cs.stanford.edu)
[Plot: time (secs) for each α: BP, Direct, Power]
α = 0.1 : 0.01 : 0.99 (90 values), λ = 4e-3

Power method vs BP
Stanford-Berkeley, v = e_6753 (unknown page)
[Plot: time (secs) for each α: BP, Direct, Power]
α = 0.1 : 0.01 : 0.99 (90 values), λ = 5e-3

Varying α and λ
    min over x, r of  λe^T x + ½‖r‖²   st  (I − αH^T)x + r = v
H = csstanford web, v = e_4 (cs.stanford.edu)
α = 0.75, 0.85, 0.95
λ = 1e-3, 2e-3, 4e-3, 6e-3
Plot nonzero x_j in the order they are chosen

csstanford, v = e_4, α = 0.75, min λe^T x + ½‖r‖²
[Four panels: nonzero x_j in the order chosen, for λ = 1e-3, 2e-3, 4e-3, 6e-3]

csstanford, v = e_4, α = 0.85, min λe^T x + ½‖r‖²
[Four panels: nonzero x_j in the order chosen, for λ = 1e-3, 2e-3, 4e-3, 6e-3]

csstanford, v = e_4, α = 0.95, min λe^T x + ½‖r‖²
[Four panels: nonzero x_j in the order chosen, for λ = 1e-3, 2e-3, 4e-3, 6e-3]

SPAI: Sparse Approximate Inverse

SPAI Preconditioning
A plausible future application:
Find sparse X such that AX ≈ I,
i.e., find sparse x_j such that Ax_j ≈ e_j for j = 1:n.
An embarrassingly parallel use of sparse BP solutions.
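
A per-column sketch of that idea, reusing lpdual_sketch (the columns are independent, so a parfor would distribute them; assembling X from triplets would be faster than indexed assignment, but this keeps the idea plain):

    % Sparse approximate inverse: one BP-style solve per column (sketch).
    X = sparse(n, n);
    for j = 1:n                         % embarrassingly parallel over j
      ej = zeros(n, 1);  ej(j) = 1;
      [xj, active] = lpdual_sketch(A, ej, lambda, maxit);
      X(active, j) = xj;                % keep only the few nonzeros
    end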

Summary

(I − αH^T)x ≈ v

Comments
- Sparse v sometimes gives sparse x
- LPprimal and LPdual work in a greedy manner
- Greedy solvers guarantee sparse x (unlike the power method)
- Not sensitive to α
- Sensitive to λ, but we can limit iterations

Questions
- Run much bigger examples
- Which properties of I − αH^T lead to the success of greedy methods?

Thanks
- Shaobing Chen and David Donoho
- Michael Friedlander (BP solvers)
- Sou-Cheng Choi and Lek-Heng Lim (ICIAM 07)
- Amy Langville and Carl Meyer (splendid book)
- David Gleich
- Holly Jin
