Sketched Ridge Regression: Optimization and Statistical Perspectives
1 Sketched Ridge Regression: Optimization and Statistical Perspectives. Shusen Wang (UC Berkeley), Alex Gittens (RPI), Michael Mahoney (UC Berkeley).
2 Overview
3 Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Over-determined setting: n ≫ d.
4 Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Can we compute an approximate solution efficiently, using only part of the data?
5 Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Matrix sketching: random selection or random projection.
6 Approximate Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Sketched solution: ŵ.
7 Approximate Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Sketched solution ŵ, computed from the sketch SᵀX with sketch size s.
8 Approximate Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Sketched solution ŵ with sketch size s. Optimization perspective: f(ŵ) ≤ (1 + ε) min_w f(w).
9 Approximate Ridge Regression: min_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Statistical perspective: bias and variance.
10 Related Work. Least squares regression: min_w ‖Xw − y‖₂².
- Drineas, Mahoney, and Muthukrishnan: Sampling algorithms for ℓ2 regression and applications. In SODA, 2006.
- Drineas, Mahoney, Muthukrishnan, and Sarlós: Faster least squares approximation. Numerische Mathematik, 2011.
- Clarkson and Woodruff: Low rank approximation and regression in input sparsity time. In STOC, 2013.
- Ma, Mahoney, and Yu: A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research, 2015.
- Pilanci and Wainwright: Iterative Hessian sketch: fast and accurate solution approximation for constrained least squares. Journal of Machine Learning Research, 2016.
- Raskutti and Mahoney: A statistical perspective on randomized sketching for ordinary least-squares. Journal of Machine Learning Research, 2016.
- Etc.
11 Sketched Ridge Regression
12 Matrix Sketching. X ↦ SᵀX: turn a big matrix into a smaller one. X ∈ ℝ^{n×d}, SᵀX ∈ ℝ^{s×d}; S ∈ ℝ^{n×s} is called the sketching matrix, e.g., uniform sampling, leverage score sampling, Gaussian projection, subsampled randomized Hadamard transform (SRHT), count sketch (sparse embedding), etc.
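As a concrete illustration (not from the slides), here is a minimal NumPy sketch of two of these sketching matrices applied to (X, y). The function names and the scaling convention are my own, chosen so that E[XᵀSSᵀX] = XᵀX:

```python
import numpy as np

def uniform_sampling_sketch(X, y, s, rng):
    """Sample s rows uniformly with replacement; rescale by sqrt(n/s)
    so that E[(S^T X)^T (S^T X)] = X^T X."""
    n = X.shape[0]
    idx = rng.integers(0, n, size=s)
    scale = np.sqrt(n / s)
    return scale * X[idx], scale * y[idx]

def gaussian_sketch(X, y, s, rng):
    """Dense Gaussian projection: S^T has i.i.d. N(0, 1/s) entries,
    so again E[(S^T X)^T (S^T X)] = X^T X."""
    n = X.shape[0]
    St = rng.normal(0.0, 1.0 / np.sqrt(s), size=(s, n))
    return St @ X, St @ y
```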
13 Matrix Sketching. Some matrix sketching methods are efficient: their time cost is o(nds), lower than the O(nds) cost of computing SᵀX by plain matrix multiplication. Examples: leverage score sampling takes O(nd log n) time; SRHT takes O(nd log s) time.
14 Ridge Regression. Objective function: f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Optimal solution: w* = argmin_w f(w) = (XᵀX + nγI_d)⁻¹ Xᵀy. Time cost: O(nd²) directly, or O(ndt) for an iterative solver run for t iterations.
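A direct NumPy transcription of this closed form (my own helper, using a linear solve rather than an explicit inverse) might look like:

```python
def ridge_exact(X, y, gamma):
    """Optimal ridge solution w* = (X^T X + n*gamma*I_d)^{-1} X^T y.
    Forming X^T X costs O(n d^2); the d x d solve costs O(d^3)."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * gamma * np.eye(d), X.T @ y)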
15 Sketched Ridge Regression. Goal: efficiently and approximately solve argmin_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂².
16 Sketched Ridge Regression. Goal: efficiently and approximately solve argmin_w f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Approach: reduce the size of X and y by matrix sketching.
17 Sketched Ridge Regression. Sketched solution: ŵ = argmin_w (1/n)‖SᵀXw − Sᵀy‖₂² + γ‖w‖₂² = (XᵀSSᵀX + nγI_d)⁻¹ XᵀSSᵀy.
18 Sketched Ridge Regression. Sketched solution: ŵ = argmin_w (1/n)‖SᵀXw − Sᵀy‖₂² + γ‖w‖₂² = (XᵀSSᵀX + nγI_d)⁻¹ XᵀSSᵀy. Time: O(sd²) + T_sketch, where T_sketch is the cost of forming SᵀX. E.g., T_sketch = O(nd log s) for SRHT; T_sketch = O(nd log n) for leverage score sampling.
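Continuing the earlier snippets (reusing the hypothetical uniform_sampling_sketch helper), the sketched solution is the same solve applied to the sketched data; note that the regularizer keeps the factor n, matching the formula above:

```python
def ridge_sketched(X, y, gamma, s, rng, sketch=uniform_sampling_sketch):
    """Sketched ridge solution:
    w_hat = (X^T S S^T X + n*gamma*I_d)^{-1} X^T S S^T y."""
    n, d = X.shape
    Xs, ys = sketch(X, y, s, rng)
    # The sketch is scaled so Xs^T Xs approximates X^T X, hence n*gamma (not s*gamma).
    return np.linalg.solve(Xs.T @ Xs + n * gamma * np.eye(d), Xs.T @ ys)
```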
19 Theory: Optimization Perspective
20 Optimization Perspective. Recall the objective function f(w) = (1/n)‖Xw − y‖₂² + γ‖w‖₂². Goal: bound f(ŵ) − f(w*). Note that (1/n)‖Xŵ − Xw*‖₂² ≤ f(ŵ) − f(w*); this follows from the optimality of w*, since f(ŵ) − f(w*) = (1/n)‖Xŵ − Xw*‖₂² + γ‖ŵ − w*‖₂².
21 Optimization Perspective. For SRHT or leverage score sampling with sketch size s = Õ(βd/ε), or uniform sampling with s = Õ(μβd/ε), the bound f(ŵ) − f(w*) ≤ ε·f(w*) holds w.p. 0.9. Notation: X ∈ ℝ^{n×d} is the design matrix; γ is the regularization parameter; β = ‖X‖₂² / (‖X‖₂² + nγ) ∈ (0, 1]; μ ∈ [1, n/d] is the row coherence of X.
22 Optimization Perspective. Under the same conditions, it also follows that (1/n)‖Xŵ − Xw*‖₂² ≤ ε·f(w*).
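One can sanity-check the (1 + ε) guarantee numerically. A quick experiment using the hypothetical helpers sketched above (all names and parameter values are mine):

```python
def ridge_objective(X, y, gamma, w):
    n = X.shape[0]
    return np.sum((X @ w - y) ** 2) / n + gamma * np.sum(w ** 2)

rng = np.random.default_rng(0)
n, d, s, gamma = 100_000, 50, 2_000, 1e-3
X = rng.normal(size=(n, d))
w0 = rng.normal(size=d)                      # true model
y = X @ w0 + rng.normal(scale=0.5, size=n)   # noisy responses

w_star = ridge_exact(X, y, gamma)
w_hat = ridge_sketched(X, y, gamma, s, rng)
f_star = ridge_objective(X, y, gamma, w_star)
f_hat = ridge_objective(X, y, gamma, w_hat)
print(f"relative excess objective: {(f_hat - f_star) / f_star:.3e}")
```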
23 Theory: Statistical Perspective
24 Statistical Model. X ∈ ℝ^{n×d}: fixed design matrix. w₀ ∈ ℝ^d: the true and unknown model. y = Xw₀ + δ: observed response vector. δ₁, …, δ_n are random noise with E[δ] = 0 and E[δδᵀ] = ξ²I_n.
25 Bias-Variance Decomposition. Risk: R(w) = (1/n) E‖Xw − Xw₀‖₂², where the expectation E is taken w.r.t. the random noise δ.
26 Bias-Variance Decomposition. Risk: R(w) = (1/n) E‖Xw − Xw₀‖₂², where the expectation E is taken w.r.t. the random noise δ. The risk measures prediction error.
27 Bias-Variance Decomposition. Risk: R(w) = (1/n) E‖Xw − Xw₀‖₂² = bias²(w) + var(w).
28 Bias-Variance Decomposition. Risk: R(w) = (1/n) E‖Xw − Xw₀‖₂² = bias²(w) + var(w). Let X = UΣVᵀ be the SVD.
Optimal solution:
bias(w*) = γ√n ‖(Σ² + nγI_d)⁻¹ Σ Vᵀ w₀‖₂,
var(w*) = (ξ²/n) ‖(I_d + nγΣ⁻²)⁻¹‖_F².
Sketched solution:
bias(ŵ) = γ√n ‖(ΣUᵀSSᵀUΣ + nγI_d)⁻¹ Σ Vᵀ w₀‖₂,
var(ŵ) = (ξ²/n) ‖(UᵀSSᵀU + nγΣ⁻²)⁻¹ UᵀSSᵀ‖_F².
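These formulas can be evaluated directly for a given sketch. A small helper (my own, for intuition on toy sizes only, since it forms the full SVD):

```python
def sketched_bias_variance(X, w0, gamma, xi, St):
    """Evaluate the slide's bias/variance formulas for a sketch S^T (s x n).
    Passing St = np.eye(n) recovers bias(w*) and var(w*)."""
    n, d = X.shape
    U, sig, Vt = np.linalg.svd(X, full_matrices=False)   # X = U Sigma V^T
    Sigma = np.diag(sig)
    G = (St @ U).T @ (St @ U)                            # U^T S S^T U
    A = np.linalg.inv(Sigma @ G @ Sigma + n * gamma * np.eye(d))
    bias = gamma * np.sqrt(n) * np.linalg.norm(A @ Sigma @ Vt @ w0)
    B = np.linalg.inv(G + n * gamma * np.diag(sig ** -2.0)) @ (U.T @ St.T @ St)
    var = (xi ** 2 / n) * np.linalg.norm(B, "fro") ** 2
    return bias, var
```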
29 Statistical Perspective. For SRHT or leverage score sampling with s = Õ(d/ε²), or uniform sampling with s = Õ(μd/ε²) (X ∈ ℝ^{n×d} is the design matrix, μ ∈ [1, n/d] the row coherence of X), the following hold w.p. 0.9:
1 − ε ≤ bias(ŵ)/bias(w*) ≤ 1 + ε. Good!
(1 − ε)·(n/s) ≤ var(ŵ)/var(w*) ≤ (1 + ε)·(n/s). Bad! Because n ≫ s.
30 Statistical Perspective. Consequence: if y is noisy, the variance dominates the bias, and R(ŵ) ≫ R(w*).
31 Conclusions. Optimization perspective: use the sketched solution to initialize numerical optimization; Xŵ is close to Xw*.
32 Conclusions. Optimization perspective: use the sketched solution to initialize numerical optimization; Xŵ is close to Xw*. Let w_t be the output of the t-th iteration of the conjugate gradient (CG) algorithm; then ‖Xw_t − Xw*‖₂ ≤ 2·((√κ − 1)/(√κ + 1))ᵗ·‖Xw⁽⁰⁾ − Xw*‖₂, where w⁽⁰⁾ is the initialization and κ is the condition number of XᵀX + nγI_d. Initialization is important.
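For instance, with SciPy's conjugate gradient one can warm-start from the sketched solution. A sketch reusing names from the snippets above (ridge_cg is my own wrapper, not the authors' code):

```python
from scipy.sparse.linalg import LinearOperator, cg

def ridge_cg(X, y, gamma, w_init=None, maxiter=50):
    """Solve (X^T X + n*gamma*I_d) w = X^T y by CG without forming X^T X."""
    n, d = X.shape
    A = LinearOperator((d, d), matvec=lambda v: X.T @ (X @ v) + n * gamma * v)
    w, info = cg(A, X.T @ y, x0=w_init, maxiter=maxiter)
    return w

w_refined = ridge_cg(X, y, gamma, w_init=w_hat)  # warm start at the sketched solution
```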
33 Conclusions. Optimization perspective: use the sketched solution to initialize numerical optimization; Xŵ is close to Xw*. Statistical perspective: never use the sketched solution as a replacement for the optimal solution; its much higher variance leads to bad generalization.
34 Model Averaging
35 Model Averaging. Independently draw sketching matrices S₁, …, S_g. Compute the g sketched solutions ŵ₁, …, ŵ_g. Model averaging: w̄ = (1/g) Σᵢ ŵᵢ.
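A sketch of the averaging step, on top of the earlier hypothetical helpers:

```python
def ridge_model_averaging(X, y, gamma, s, g, rng, sketch=uniform_sampling_sketch):
    """Average g independently sketched ridge solutions."""
    w_list = [ridge_sketched(X, y, gamma, s, rng, sketch) for _ in range(g)]
    return np.mean(w_list, axis=0)
```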
36 Optimization Perspective. Without model averaging: for sufficiently large s, ‖ŵ − w*‖₂² / ‖w*‖₂² ≤ ε holds w.h.p.
37–38 Optimization Perspective. With model averaging, using the same matrix sketching and the same s: ‖w̄ − w*‖₂² / ‖w*‖₂² ≤ ε/g + ε² holds w.h.p.
39 Optimization Perspective. If s ≫ d, then ε is very small, and the error bound is dominated by ε/g.
40 Statistical Perspective. Risk: R(w) = (1/n) E‖Xw − Xw₀‖₂² = bias²(w) + var(w). With model averaging (X = UΣVᵀ is the SVD):
bias(w̄) = γ√n ‖(1/g) Σᵢ (ΣUᵀSᵢSᵢᵀUΣ + nγI_d)⁻¹ ΣVᵀw₀‖₂,
var(w̄) = (ξ²/n) ‖(1/g) Σᵢ (UᵀSᵢSᵢᵀU + nγΣ⁻²)⁻¹ UᵀSᵢSᵢᵀ‖_F².
41 Statistical Perspective. Without model averaging: for sufficiently large s, the following hold w.h.p.: bias(ŵ) ≤ (1 + ε)·bias(w*) and var(ŵ) ≤ (n/s)·(1 + ε)·var(w*).
42–43 Statistical Perspective. With model averaging, using the same sketching methods and the same s, the following hold w.h.p.: bias(w̄) ≤ (1 + ε)·bias(w*) and var(w̄) ≤ (n/s)·(1/g + ε²)·var(w*).
44 Statistical Perspective. If ε is small, then var(w̄) ≈ (n/(sg))·var(w*): averaging reduces the variance by a factor of g.
45 Applications to Distributed Optimization. (x₁, y₁), …, (x_n, y_n) are (randomly) split among g machines, so each machine holds n/g samples. This is equivalent to uniform sampling with s = n/g.
46 Optimization Perspective. Application to distributed optimization: if s = n/g ≫ d, then w̄ is provably very close to w*. w̄ is a good initialization for distributed optimization algorithms.
47 Statistical Perspective. Application to distributed machine learning: if s = n/g ≫ d, then R(w̄) is comparable to R(w*). If a low-precision solution suffices, then w̄ is a good substitute for w*: a one-shot solution.
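Putting slides 45–47 together, a minimal simulation of the one-shot distributed scheme (again with my hypothetical helpers; each random shard of n/g rows plays the role of a uniform sample):

```python
def oneshot_distributed_ridge(X, y, gamma, g, rng):
    """One-shot distributed ridge: each of g machines solves ridge on its
    random shard of ~n/g rows; the driver averages the g local solutions.
    A local solve on n/g rows matches uniform-sampling sketched ridge with
    s = n/g, since (X_i^T X_i + (n/g)*gamma*I)^{-1} X_i^T y_i equals the
    sketched solution built from that shard."""
    n = X.shape[0]
    shards = np.array_split(rng.permutation(n), g)
    w_locals = [ridge_exact(X[idx], y[idx], gamma) for idx in shards]
    return np.mean(w_locals, axis=0)
```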
48 Thank You! The paper is at arxiv: