Robust estimation of scale and covariance with $P_n$ and its application to precision matrix estimation


1 Robust estimation of scale and covariance with $P_n$ and its application to precision matrix estimation
Garth Tarr, Samuel Müller and Neville Weber
USYD 2013, School of Mathematics and Statistics, The University of Sydney

2 Outline
- Robust scale estimator, $P_n$
- Covariance
- Covariance matrix
- Autocovariance
- Precision matrix
- Long range dependence
- Short range dependence

3 Outline
- Robust scale estimation
- Robust covariance estimation
- Robust covariance matrices
- Summary and key references

4 History of relative efficiencies at the Gaussian (n = 20)
[Figure: relative efficiency against year of proposal for the SD (1.00), $P_n$ (0.85), IQR, MAD, $Q_n$ and $S_n$.]

5 U-quantile statistics
Given data $X = (X_1, \ldots, X_n)$ and a symmetric kernel $h : \mathbb{R}^2 \to \mathbb{R}$, a U-statistic of order 2 is defined as
$$U_n(X) := \binom{n}{2}^{-1} \sum_{i<j} h(X_i, X_j). \quad (1)$$
Let $H(t) = P(h(X_1, X_2) \le t)$ be the cdf of the kernels, with corresponding empirical distribution function
$$H_n(t) := \binom{n}{2}^{-1} \sum_{i<j} I\{h(X_i, X_j) \le t\}, \quad t \in \mathbb{R}. \quad (2)$$
For $0 < p < 1$, the corresponding sample U-quantile is
$$H_n^{-1}(p) := \inf\{t : H_n(t) \ge p\}. \quad (3)$$
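
As an illustration, $H_n$ and its quantiles can be computed directly by enumerating all $\binom{n}{2}$ kernel evaluations. A minimal R sketch (the function name `u_quantile` and the default kernel are my choices, not from the slides):

```r
# Sample U-quantile: evaluate the kernel on all pairs i < j,
# then take the empirical p-quantile of those kernel values.
u_quantile <- function(x, p, kernel = function(a, b) (a + b) / 2) {
  n <- length(x)
  pairs <- combn(n, 2)                      # all (i, j) with i < j
  hvals <- kernel(x[pairs[1, ]], x[pairs[2, ]])
  sort(hvals)[ceiling(p * length(hvals))]   # inf{t : H_n(t) >= p}
}

set.seed(1)
x <- rnorm(20)
u_quantile(x, 0.25)   # lower quartile of the pairwise means
```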

6 Generalised L-statistics
A generalised linear (GL) statistic can be defined as
$$T_n(H_n) = \int_I J(p)\, H_n^{-1}(p)\,dp + \sum_{j=1}^{d} a_j H_n^{-1}(p_j),$$
where
- $J$ is a function for smooth weighting of $H_n^{-1}(p)$,
- $I \subseteq [0, 1]$ is some interval,
- $a_j$ are discrete coefficients for $H_n^{-1}(p_j)$. (Serfling, 1984)

7 Examples of GL-statistics
- Interquartile range: $h(x) = x$, $\mathrm{IQR} = H_n^{-1}(0.75) - H_n^{-1}(0.25)$
- Variance: $h(x, y) = \frac{1}{2}(x - y)^2$, $\int_0^1 H_n^{-1}(p)\,dp$
- Winsorized variance: $h(x, y) = \frac{1}{2}(x - y)^2$, $\int_0^{0.75} H_n^{-1}(p)\,dp + 0.25\,H_n^{-1}(0.75)$
- Rousseeuw and Croux's $Q_n$: $h(x, y) = |x - y|$, $H_n^{-1}(0.25)$

8 Pairwise mean scale estimator: $P_n$
Consider the set of $\binom{n}{2}$ pairwise means $\{h(X_i, X_j),\ 1 \le i < j \le n\}$, where $h(X_1, X_2) = (X_1 + X_2)/2$. Let $H_n$ be the corresponding empirical distribution function:
$$H_n(t) := \binom{n}{2}^{-1} \sum_{i<j} I\{h(X_i, X_j) \le t\}, \quad t \in \mathbb{R}.$$
Definition. $P_n = c\,[H_n^{-1}(0.75) - H_n^{-1}(0.25)]$, where $c$ is a correction factor to make $P_n$ consistent for the standard deviation when the underlying observations are Gaussian.
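
Putting the pieces together, $P_n$ is the corrected IQR of the pairwise means. A minimal R sketch (the function name `Pn` is mine): since pairwise means of $N(0, \sigma^2)$ data are $N(0, \sigma^2/2)$, the asymptotic correction factor is $c = \sqrt{2}/(2\Phi^{-1}(0.75)) \approx 1.048$; any finite-sample corrections from the paper are omitted here.

```r
# P_n: corrected IQR of the pairwise means (asymptotic correction only).
Pn <- function(x) {
  n <- length(x)
  pm <- outer(x, x, "+")[lower.tri(matrix(0, n, n))] / 2  # pairwise means, i < j
  cc <- sqrt(2) / (2 * qnorm(0.75))   # consistency at the Gaussian, approx. 1.048
  cc * unname(diff(quantile(pm, c(0.25, 0.75))))
}

set.seed(1)
Pn(rnorm(100))   # should be close to 1
```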

9 Why pairwise means?
Consider 10 observations from $N(0, 1)$.
[Figure: density of the original data and of the 45 pairwise means.]

10 Influence curve
The influence curve for a functional $T$ at distribution $F$ is
$$IC(x; T, F) = \lim_{\epsilon \downarrow 0} \frac{T((1 - \epsilon)F + \epsilon\delta_x) - T(F)}{\epsilon},$$
where $\delta_x$ has all its mass at $x$. Serfling (1984) outlines the IC for GL-statistics.
Influence curve for $P_n$. Assuming that $F$ has derivative $f > 0$ on $[F^{-1}(\epsilon), F^{-1}(1 - \epsilon)]$ for all $\epsilon > 0$,
$$IC(x; P_n, F) = c\left[\frac{0.75 - F(2H_F^{-1}(0.75) - x)}{\int f(2H_F^{-1}(0.75) - y)f(y)\,dy} - \frac{0.25 - F(2H_F^{-1}(0.25) - x)}{\int f(2H_F^{-1}(0.25) - y)f(y)\,dy}\right].$$

11 Influence curves when $F = \Phi$
[Figure: influence curves $IC(x; T, F)$ for the SD, $P_n$, $Q_n$ and MAD.]

12 Asymptotic normality
The empirical U-process is $(\sqrt{n}(H_n(t) - H(t)))_{t \in \mathbb{R}}$. Silverman (1976) proved that in this context, $\sqrt{n}(H_n(\cdot) - H(\cdot))$ converges weakly to an almost surely continuous zero-mean Gaussian process $W$ with covariance function
$$E\,W(s)W(t) = 4\,P(h(X_1, X_2) \le s,\ h(X_1, X_3) \le t) - 4H(s)H(t).$$
For $0 < p < q < 1$, if $H'$, the derivative of $H$, is strictly positive on the interval $[H^{-1}(p) - \varepsilon,\ H^{-1}(q) + \varepsilon]$ for some $\varepsilon > 0$, then we can use the inverse map to show
$$\sqrt{n}\left(H_n^{-1}(\cdot) - H^{-1}(\cdot)\right) \xrightarrow{D} -\frac{W(H^{-1}(\cdot))}{H'(H^{-1}(\cdot))}.$$

13 Asymptotic normality
Recall $P_n = c\,[H_n^{-1}(3/4) - H_n^{-1}(1/4)]$. Hence
$$\sqrt{n}(P_n - \theta) \xrightarrow{D} N(0, V),$$
where $\theta = c\,(H^{-1}(3/4) - H^{-1}(1/4))$ and $V$ both depend on the underlying distribution. When the underlying data are Gaussian,
$$V = \int IC(x; P_n, \Phi)^2\,d\Phi(x),$$
which equates to an asymptotic efficiency of 0.86, compared with 0.82 for $Q_n$ and 0.37 for the MAD at the normal.

14 Breakdown value
Definition. The breakdown value is the smallest fraction of contamination that can cause arbitrary corruption of an estimator.
$P_n$ will break down if 25% of the pairwise means are contaminated. Arbitrarily changing $m$ of the original observations leaves $n - m$ uncontaminated, so $\binom{n-m}{2}$ pairwise means remain uncontaminated. $P_n$ will remain bounded so long as more than 75% of the pairwise means are uncontaminated, i.e.
$$\binom{n - m}{2} > \frac{3}{4}\binom{n}{2}.$$
The asymptotic breakdown value of $P_n$ is $1 - \sqrt{3}/2 \approx 13.4\%$.
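
To see where 13.4% comes from, one can find, for each $n$, the smallest number of contaminated observations $m$ that violates the inequality above; the fraction $m/n$ tends to $1 - \sqrt{3}/2$. A quick R check (the helper name `breakdown_m` is mine):

```r
# Smallest m such that fewer than 75% of the pairwise means remain
# uncontaminated, i.e. choose(n - m, 2) <= 0.75 * choose(n, 2).
breakdown_m <- function(n) {
  m <- 0
  while (choose(n - m, 2) > 0.75 * choose(n, 2)) m <- m + 1
  m
}

sapply(c(20, 100, 1000, 10000), function(n) breakdown_m(n) / n)
1 - sqrt(3) / 2   # asymptotic value, approx. 0.134
```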

15 Why the interquartile range?
Consider
$$P_n(\tau) = H_n^{-1}\!\left(\frac{1 + \tau}{2}\right) - H_n^{-1}\!\left(\frac{1 - \tau}{2}\right).$$
The asymptotic breakdown point is
$$\varepsilon(\tau) = 1 - \sqrt{\frac{1 + \tau}{2}}.$$
[Figure: asymptotic breakdown point and asymptotic efficiency of $P_n(\tau)$ as functions of $\tau$.]

16 Asymptotic relative efficiency when $f = t_\nu$ for $\nu \in [1, 10]$
[Figure: asymptotic relative efficiency of $P_n$, $Q_n$ and the MAD against degrees of freedom.]

17 Adaptive trimming: $\tilde{P}_n$
For preliminary high-breakdown location and scale estimates, $m(X)$ and $s(X)$ respectively, an observation $X_i$ is trimmed if
$$\left|\frac{X_i - m(X)}{s(X)}\right| > d, \quad (4)$$
where $d$ is the tuning parameter. This achieves the best possible breakdown value for a sensible choice of tuning parameter.
Definition. Denote by $\tilde{P}_n$ the adaptively trimmed $P_n$ with $d = 5$.
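
A sketch of the trimming rule, reusing the `Pn` function above; the slides do not specify $m(X)$ and $s(X)$, so the median and MAD are assumed here as natural high-breakdown choices:

```r
# Adaptively trimmed P_n: drop observations more than d robust
# standard deviations from a robust centre, then apply P_n.
Pn_trim <- function(x, d = 5) {
  m <- median(x)
  s <- mad(x)   # median absolute deviation, scaled to be consistent for the SD
  Pn(x[abs(x - m) / s <= d])
}

set.seed(1)
x <- c(rnorm(95), rnorm(5, mean = 10))   # 5% contamination
c(untrimmed = Pn(x), trimmed = Pn_trim(x))
```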

18 Efficiency of $P_n$ in finite samples
Following Randal (2008), efficiencies are estimated over $m$ independent samples as
$$\widehat{\mathrm{eff}}(T) = \frac{\mathrm{Var}(\ln \hat\sigma_1, \ldots, \ln \hat\sigma_m)}{\mathrm{Var}(\ln T(X_1), \ldots, \ln T(X_m))}. \quad (5)$$
For each $i = 1, 2, \ldots, m$,
- $X_i$ is an independent sample of size $n$,
- $\hat\sigma_i$ is the ML scale estimate, and
- $T(X_i)$ is the proposed scale estimate.
Distributions considered:
- $t$ distributions with degrees of freedom between 1 and 10,
- configural polysampling using Tukey's 3 corners: Gaussian, One-wild and Slash.
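
A rough Monte Carlo illustration of (5) at the Gaussian, comparing the `Pn` sketch above against the ML scale estimate (for $N(\mu, \sigma^2)$ data the MLE of scale is the standard deviation with divisor $n$); the resulting value is simulation-dependent:

```r
# Estimated finite-sample efficiency (5) of P_n at the Gaussian, n = 20.
set.seed(1)
m <- 5000; n <- 20
sigma_ml <- pn_hat <- numeric(m)
for (i in seq_len(m)) {
  x <- rnorm(n)
  sigma_ml[i] <- sqrt(mean((x - mean(x))^2))  # ML scale estimate
  pn_hat[i]   <- Pn(x)
}
var(log(sigma_ml)) / var(log(pn_hat))   # estimated efficiency of P_n
```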

19 Aside: the slash distribution
If $Z \sim N(0, 1)$ and $U \sim U(0, 1)$ then $X = \mu + \sigma Z/U$ follows the slash distribution. When $\mu = 0$ and $\sigma = 1$ we have the standard slash distribution.
[Figure: densities of the $N(0, 1)$, Cauchy and slash distributions.]
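
Simulating from the slash is immediate from this representation (the function name `rslash` is mine):

```r
# Draw n observations from the slash distribution: Z/U with Z standard
# normal and U uniform on (0, 1), then shift by mu and scale by sigma.
rslash <- function(n, mu = 0, sigma = 1) mu + sigma * rnorm(n) / runif(n)

set.seed(1)
summary(rslash(1000))   # very heavy tails, comparable to the Cauchy
```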

20 Relative efficiencies: $f = t_\nu$ for $\nu \in [1, 10]$ and $n = 20$
[Figure: estimated relative efficiency of $P_n$, $\tilde{P}_n$ and $Q_n$ against degrees of freedom.]

21 Relative efficiencies at the Slash corner (n = 20)
[Figure: estimated relative efficiencies of $P_n$ (48%), $\tilde{P}_n$ (75%), the 10% trimmed $P_n$, $Q_n$ (95%), $S_n$, the MAD (87%) and the IQR.]

22 Relative efficiencies at the One-wild corner (n = 20)
[Figure: estimated relative efficiencies of $P_n$ (84%), $\tilde{P}_n$ (72%), the 10% trimmed $P_n$, $Q_n$ (68%), $S_n$, the MAD (40%) and the IQR.]

23 Relative efficiencies at the Gaussian corner (n = 20)
[Figure: estimated relative efficiencies of $P_n$ (85%), $\tilde{P}_n$ (81%), the 10% trimmed $P_n$, $Q_n$ (67%), $S_n$, the MAD (38%) and the IQR.]

24 Outline
- Robust scale estimation
- Robust covariance estimation
- Robust covariance matrices
- Summary and key references

25 From scale to covariance: the GK device
Gnanadesikan and Kettenring (1972) relate scale and covariance using the following identity:
$$\mathrm{cov}(X, Y) = \frac{1}{4\alpha\beta}\left[\mathrm{var}(\alpha X + \beta Y) - \mathrm{var}(\alpha X - \beta Y)\right],$$
where $X$ and $Y$ are random variables. In general, $X$ and $Y$ can have different units, so we set $\alpha = 1/\sqrt{\mathrm{var}(X)}$ and $\beta = 1/\sqrt{\mathrm{var}(Y)}$. Replacing the variance with $P_n^2$ we can similarly construct
$$\gamma_P(X, Y) = \frac{1}{4\alpha\beta}\left[P_n^2(\alpha X + \beta Y) - P_n^2(\alpha X - \beta Y)\right],$$
where $\alpha = 1/P_n(X)$ and $\beta = 1/P_n(Y)$.
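
A direct R transcription of the identity, reusing the `Pn` sketch from earlier (the function name `gammaP` is mine):

```r
# Pairwise covariance estimate via the GK identity with P_n as the scale.
gammaP <- function(x, y) {
  a <- 1 / Pn(x)
  b <- 1 / Pn(y)
  (Pn(a * x + b * y)^2 - Pn(a * x - b * y)^2) / (4 * a * b)
}

set.seed(1)
x <- rnorm(200); y <- 0.5 * x + rnorm(200)
c(gammaP = gammaP(x, y), sample_cov = cov(x, y))
```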

26 Robustness properties
$\gamma_P$ inherits the 13.4% breakdown value from $P_n$. Following Genton and Ma (1999), we have shown that the influence curve, and therefore the gross error sensitivity, of $\gamma_P$ can be derived from the IC of $P_n$.
[Figure: surface of $IC((x, y); P_n, \Phi)$ over the $(x, y)$ plane.]
The IC is bounded, and hence the gross error sensitivity is finite.

27 Asymptotic variance
Genton and Ma (1999) show that the asymptotic variance is directly proportional to that of the underlying scale estimator:
$$V(\gamma_S, \Phi_\rho) = 2(1 + \rho^2)\,V(S, \Phi).$$
[Figure: $V(\gamma_S, \Phi_\rho)$ against $\rho$ for $\gamma_{\mathrm{MAD}}$, $\gamma_P$ and $\gamma_{\mathrm{SD}}$.]
The asymptotic efficiency of $\gamma_P$ relative to the standard deviation at the Gaussian remains 86%.

28 Bivariate $t_\nu$ for $\nu \in [3, 10]$, $n = 20$ and $\rho = 0.5$
[Figure: estimated efficiency relative to the MLE of $\rho_P$, $\rho_Q$ and $\rho_{\mathrm{MAD}}$ against degrees of freedom.]

29 Bivariate $t_\nu$ for $\nu \in [3, 10]$, $n = 20$ and $\rho = 0.9$
[Figure: estimated relative efficiency of $\rho_P$, $\rho_Q$ and $\rho_{\mathrm{MAD}}$ against degrees of freedom.]

30 Outline
- Robust scale estimation
- Robust covariance estimation
- Robust covariance matrices
- Summary and key references

31 Existing robust covariance matrix estimation methods
- M-estimates have breakdown point at most $1/(1 + p)$.
- The minimum volume ellipsoid (MVE) converges at rate $n^{-1/3}$.
- S-estimates behave like the MLE for large $p$.
- The minimum covariance determinant (MCD) requires subsampling, so it is slow for large $p$.
Instead, try constructing a covariance matrix from pairwise robust covariance estimates.

32 Covariance matrices
Good covariance matrices should be:
1. Positive semidefinite.
2. Affine equivariant. That is, if $\hat{C}$ is a covariance matrix estimator, $A$ and $a$ are constants and $X$ is random, then $\hat{C}(AX + a) = A\hat{C}(X)A^\top$.
How can we ensure that a matrix full of pairwise covariances is a good covariance matrix? Maronna and Zamar (2002) outline the OGK routine, which ensures that the resulting covariance matrix is positive definite and approximately affine equivariant.

33 OGK procedure (Maronna and Zamar, 2002)
Let $s(\cdot)$ be a scale function and $X = [x_1, \ldots, x_p] \in \mathbb{R}^{n \times p}$ be a data matrix.
1. Let $D = \mathrm{diag}(s(x_1), \ldots, s(x_p))$ and define $Y = XD^{-1}$.
2. Compute the correlation matrix by applying $s(\cdot)$ to the columns of $Y$:
$$U = [u_{jk}] = \begin{cases} \frac{1}{4}\left(s(y_j + y_k)^2 - s(y_j - y_k)^2\right) & j \ne k \\ 1 & j = k \end{cases}$$
3. Compute the eigendecomposition $U = E\Lambda E^\top$.
4. Project the data onto the basis eigenvectors: $Z = YE$.
5. Estimate the variances in the coordinate directions: $\Gamma = \mathrm{diag}(s(z_1)^2, \ldots, s(z_p)^2)$.
6. The estimated covariance matrix is then $\hat{\Sigma} = D E\Gamma E^\top D$.
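
A compact R sketch of the six steps, parameterised by any univariate scale function such as the `Pn` sketch above; this is the basic one-step OGK, without the reweighting or iteration refinements also discussed by Maronna and Zamar:

```r
# One-step OGK covariance matrix built from a univariate scale function s.
ogk <- function(X, s = Pn) {
  p <- ncol(X)
  D <- diag(apply(X, 2, s))                 # step 1: column scales
  Y <- X %*% solve(D)                       #         standardise columns
  U <- diag(p)                              # step 2: pairwise "correlations"
  for (j in 1:(p - 1)) for (k in (j + 1):p) {
    U[j, k] <- U[k, j] <- (s(Y[, j] + Y[, k])^2 - s(Y[, j] - Y[, k])^2) / 4
  }
  E <- eigen(U, symmetric = TRUE)$vectors   # step 3: eigendecomposition
  Z <- Y %*% E                              # step 4: project onto eigenvectors
  G <- diag(apply(Z, 2, s)^2)               # step 5: directional variances
  D %*% E %*% G %*% t(E) %*% D              # step 6: back-transform
}

set.seed(1)
X <- matrix(rnorm(200 * 3), 200, 3)
ogk(X)
```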

34 Precision matrices
In many practical applications, the covariance matrix is not what is really required. PCA, Mahalanobis distance, LDA, etc. use the inverse covariance matrix: the precision matrix, $\Theta = \Sigma^{-1}$.
Precision matrices are also of interest in modelling Gaussian Markov random fields, where zeros in the precision matrix correspond to conditional independence between variables.
Sparsity: in many applications, it is often useful to impose a level of sparsity on the estimated precision matrix.

35 Regularisation techniques
The graphical lasso (glasso) (Friedman, Hastie and Tibshirani, 2007) minimises the penalised negative Gaussian log-likelihood:
$$f(\Theta) = \mathrm{tr}(\hat{\Sigma}\Theta) - \log\det\Theta + \lambda\|\Theta\|_1,$$
where $\|\Theta\|_1$ is the $L_1$ norm and $\lambda$ is a tuning parameter for the amount of shrinkage.
Constrained $\ell_1$-minimisation for inverse matrix estimation (CLIME) (Cai, Liu and Luo, 2011) solves the following objective:
$$\min \|\Theta\|_1 \quad \text{subject to} \quad \|\hat{\Sigma}\Theta - I\|_\infty \le \lambda.$$
Quadratic inverse covariance (QUIC) (Hsieh et al., 2011) solves the same minimisation problem as the glasso but uses a second order approach.
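
Any of these estimators can be fed a robust input matrix in place of the sample covariance. A hedged sketch with the CRAN glasso package, assuming its standard interface where `glasso(s, rho)` takes a covariance matrix and a penalty and returns the estimated inverse in `$wi`; the robust input uses the `ogk` and `Pn` sketches above:

```r
# Sparse precision matrix from a robust covariance input.
library(glasso)

set.seed(1)
X <- matrix(rnorm(120 * 30), 120, 30)
S <- ogk(X, s = Pn)           # robust covariance matrix (sketched earlier)
fit <- glasso(S, rho = 0.1)   # graphical lasso with penalty lambda = 0.1
Theta_hat <- fit$wi           # estimated sparse precision matrix
```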

36 QUadratic Inverse Covariance (QUIC)
Given a regularisation parameter $\lambda \ge 0$, the $\ell_1$ regularised Gaussian MLE for $\Theta = \Sigma^{-1}$ can be estimated by solving a regularised log-determinant program:
$$\operatorname*{argmin}_{\Theta \succ 0} f(\Theta) = \operatorname*{argmin}_{\Theta \succ 0} \Big\{\underbrace{\mathrm{tr}(\hat{\Sigma}\Theta) - \log\det\Theta}_{g(\Theta)} + \underbrace{\lambda\|\Theta\|_1}_{h(\Theta)}\Big\},$$
where $\hat{\Sigma}$ is the sample covariance matrix and $\|\Theta\|_1 = \sum_{i,j} |\theta_{ij}|$.
Key features:
- the $\ell_1$ regularisation promotes sparsity in $\Theta$;
- $g(\Theta)$ is twice differentiable and strictly convex (admitting a quadratic approximation);
- $h(\Theta)$ is convex but not differentiable.

37 Algorithm 1: QUadratic Inverse Covariance
Input: symmetric matrix $\hat{\Sigma}$, scalar $\lambda$, stopping tolerance $\epsilon$
1: for t = 0, 1, ... do
2: Compute $\Sigma_t = \Theta_t^{-1}$.
3: Find the second order approximation $\bar{f}(\Delta; \Theta_t)$ to $f(\Theta_t + \Delta)$.
4: Partition the variables into free and fixed sets.
5: Find the Newton direction $D_t$ using coordinate descent over the free variable set (a lasso problem).
6: Determine a step size $\alpha$ such that $\Theta_{t+1} = \Theta_t + \alpha D_t$ is positive definite and the objective function sufficiently decreases.
7: end for
Output: sequence of $\Theta_t$
R package: QUIC
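
A minimal usage sketch with the QUIC R package named on the slide; I am assuming its standard interface, where `QUIC(S, rho)` returns the precision estimate in component `X`:

```r
# l1-penalised precision matrix estimation via QUIC.
library(QUIC)

set.seed(1)
X <- matrix(rnorm(120 * 30), 120, 30)
S <- cov(X)                        # or a robust input such as ogk(X, s = Pn)
fit <- QUIC(S, rho = 0.1, msg = 0) # msg = 0 suppresses progress output
Theta_hat <- fit$X                 # estimated sparse precision matrix
mean(Theta_hat == 0)               # proportion of exact zeros
```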

38 Randomly scattered contamination
[Figure: percentage of rows (observation vectors) contaminated against the percentage of randomly scattered cells contaminated (0% to 10%), for $p = 30$ variables; a small fraction of contaminated cells translates into a large fraction of contaminated rows.]

39 Simulation design
Two scenarios: sparse and dense precision matrices, $\Theta$.
- sample size: n = 120
- variables: p = 30
- data: $N(0, \Sigma)$ where $\Sigma = \Theta^{-1}$
- sparsity parameter: $\lambda = 0.1$
- contamination: 0% to 3%, randomly scattered
- replications: N = 1000
Performance metric: the entropy loss (see the sketch below)
$$L(\Theta, \hat{\Theta}) = \mathrm{tr}(\Theta^{-1}\hat{\Theta}) - \log|\Theta^{-1}\hat{\Theta}| - p.$$
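
The entropy loss is straightforward to compute; a small R helper (the name `entropy_loss` is mine), used here to score the identity matrix as an estimate:

```r
# Entropy loss L(Theta, Theta_hat) = tr(Theta^{-1} Theta_hat)
#   - log det(Theta^{-1} Theta_hat) - p; zero iff Theta_hat = Theta.
entropy_loss <- function(Theta, Theta_hat) {
  M <- solve(Theta, Theta_hat)
  as.numeric(sum(diag(M)) - determinant(M, logarithm = TRUE)$modulus) - nrow(Theta)
}

Theta <- diag(3) + 0.2          # a small dense precision matrix
entropy_loss(Theta, Theta)      # 0: perfect estimate
entropy_loss(Theta, diag(3))    # positive loss
```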

40 Simulation design
[Figure: sparsity patterns of the sparse $\Theta$ and dense $\Theta$ scenarios.]

41 Sparse $\Theta$, outliers at 5
[Figure: entropy loss against random contamination (0% to 3%) for the classical estimator, OGK($P_n$), OGK($Q_n$), OGK($\tau$ scale), OGKw($\tau$) and the MCD.]

42 Dense $\Theta$, outliers at 5
[Figure: entropy loss against random contamination (0% to 3%) for the classical estimator, OGK($P_n$), OGK($Q_n$), OGK($\tau$ scale), OGKw($\tau$) and the MCD.]

43 Outline
- Robust scale estimation
- Robust covariance estimation
- Robust covariance matrices
- Summary and key references

44 Summary
1. Aim: efficient, robust and widely applicable scale and covariance estimators.
2. Method: the $P_n$ scale estimator with the GK identity and an orthogonalisation procedure for pairwise covariance matrices, or the QUIC procedure for (sparse) precision matrix estimation.
3. Results: 86% asymptotic efficiency at the Gaussian distribution; the robustness properties flow through to covariance estimates and beyond.

45 References
Hampel, F. (1974). The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346).
Randal, J. (2008). A reinvestigation of robust scale estimation in finite samples. Computational Statistics & Data Analysis, 52(11).
Rousseeuw, P. and Croux, C. (1993). Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88(424).
Serfling, R. J. (1984). Generalized L-, M-, and R-statistics. The Annals of Statistics, 12(1).
Genton, M. and Ma, Y. (1999). Robustness properties of dispersion estimators. Statistics & Probability Letters, 44(4).

46 References
Hsieh, C.-J., Sustik, M. A., Dhillon, I. S. and Ravikumar, P. K. (2011). Sparse inverse covariance matrix estimation using quadratic approximation. Advances in Neural Information Processing Systems 24.
Gnanadesikan, R. and Kettenring, J. R. (1972). Robust estimates, residuals and outlier detection with multiresponse data. Biometrics, 28(1).
Maronna, R. and Zamar, R. (2002). Robust estimates of location and dispersion for high-dimensional datasets. Technometrics, 44(4).
Tarr, G., Müller, S. and Weber, N. C. (2012). A robust scale estimator based on pairwise means. Journal of Nonparametric Statistics, 24(1).
