Concentration behavior of the penalized least squares estimator

Alan Muro and Sara van de Geer
{muro,geer}@stat.math.ethz.ch
Seminar für Statistik, ETH Zürich, Rämistrasse 101, 8092 Zürich

arXiv:1511.08698v2 [math.ST] 19 Oct 2016

Abstract

Consider the standard nonparametric regression model and take as estimator the penalized least squares function. In this article, we study the trade-off between closeness to the true function and complexity penalization of the estimator, where complexity is described by a seminorm on a class of functions. First, we present an exponential concentration inequality revealing the concentration behavior of the trade-off of the penalized least squares estimator around a nonrandom quantity, where this quantity depends on the problem under consideration. Then, under some conditions and for the proper choice of the tuning parameter, we obtain bounds for this nonrandom quantity. We illustrate our results with some examples that include the smoothing splines estimator.

Keywords: Concentration inequalities, regularized least squares, statistical trade-off.

1 Introduction

Let $Y_1, \ldots, Y_n$ be independent real-valued response variables satisfying
$$Y_i = f^0(x_i) + \epsilon_i, \qquad i = 1, \ldots, n,$$
where $x_1, \ldots, x_n$ are given covariates in some space $\mathcal{X}$, $f^0$ is an unknown function in a given space $\mathcal{F}$, and $\epsilon_1, \ldots, \epsilon_n$ are independent standard Gaussian random variables. We assume that $f^0$ is smooth in some sense, but that this degree of smoothness is unknown. We will make this clear below.
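For concreteness, the data-generating model above is easy to simulate. The following minimal Python sketch is not part of the paper: the smooth truth $f^0$ and the fixed design are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
x = np.linspace(0.0, 1.0, n)          # fixed design points in X = [0, 1]
f0 = lambda t: np.sin(2 * np.pi * t)  # illustrative smooth truth (an assumption, not from the paper)
eps = rng.standard_normal(n)          # independent standard Gaussian errors
y = f0(x) + eps                       # observations Y_i = f0(x_i) + eps_i
```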

To estimate $f^0$, we consider the penalized least squares estimator given by
$$\hat{f} := \arg\min_{f \in \mathcal{F}} \left\{ \|Y - f\|_n^2 + \lambda^2 I^2(f) \right\}, \qquad (1)$$
where $\lambda > 0$ is a tuning parameter and $I$ a given seminorm on $\mathcal{F}$. Here, for a vector $u \in \mathbb{R}^n$, we write $\|u\|_n^2 := u^T u / n$, and we use the same notation $f$ for the vector $f = (f(x_1), \ldots, f(x_n))^T$ and the function $f \in \mathcal{F}$. Moreover, to avoid digressions from our main arguments, we assume that the minimizer in (1) exists and is unique.

For a function $f \in \mathcal{F}$, define
$$\tau^2(f) := \|f - f^0\|_n^2 + \lambda^2 I^2(f). \qquad (2)$$
This expression can be seen as a description of the trade-off of $f$. The term $\|f - f^0\|_n$ measures the closeness of $f$ to the true function, while $I(f)$ quantifies its smoothness. As mentioned above, we only assume that $f^0$ is not too complex, but that this degree of complexity is not known. This amounts to assuming that $I(f^0) < \infty$, but that an upper bound for $I(f^0)$ is unknown. Therefore, we choose as model class $\mathcal{F} = \{f : I(f) < \infty\}$. It is important to see that taking as model class $\mathcal{F}_0 = \{f : I(f) \le M_0\}$ for some fixed $M_0 > 0$, instead of $\mathcal{F}$, could lead to a model misspecification error if the unknown smoothness of $f^0$ is large.

Estimators with roughness penalization have been widely studied. Wahba (1990) and Green and Silverman (1993) consider the smoothing splines estimator, which corresponds to the solution of (1) when $I^2(f) = \int_0^1 (f^{(m)}(x))^2 \, dx$, where $f^{(m)}$ denotes the $m$-th derivative of $f$. Gu (2002) provides results for the more general penalized likelihood estimator with a general quadratic functional as complexity regularization. If we assume that this functional is a seminorm and that the noise follows a Gaussian distribution, then the penalized likelihood estimator reduces to the estimator in (1).

Upper bounds for the estimation error can be found in the literature (see e.g. van der Vaart and Wellner (1996), del Barrio et al. (2007)). When no complexity regularization term is included, the standard method used to derive these is roughly as follows: first, the basic inequality
$$\|\hat{f} - f^0\|_n^2 \le \frac{2}{n} \sup_{f} \sum_{i=1}^n \epsilon_i \big( f(x_i) - f^0(x_i) \big) \qquad (3)$$
is invoked. Then, an inequality for the right-hand side of (3) is obtained for functions $f$ in $\{g \in \mathcal{F} : \|g - f^0\|_n \le R\}$ for some $R > 0$. Finally, upper bounds for the estimation error are obtained with high probability by using entropy computations. When a penalty term is included, a similar approach can be used, but in this case the process in (3) is studied in terms of both $\|f - f^0\|_n$ and the smoothness of the functions considered, i.e., $I(f)$ and $I(f^0)$.
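As a concrete finite-dimensional illustration of (1) and (2), one can expand $f$ in a basis, $f_\theta = B\theta$, and take a quadratic seminorm $I^2(f_\theta) = \theta^T \Omega \theta$ with a positive semidefinite penalty matrix $\Omega$; the penalized least squares problem then has a closed form. The Python sketch below is not from the paper: the spline-type basis and the difference penalty are assumptions made purely for illustration.

```python
import numpy as np

def penalized_ls(y, B, Omega, lam):
    """Minimize ||y - B theta||_n^2 + lam^2 * theta' Omega theta (a finite-dimensional
    instance of the estimator (1)); returns theta_hat and the fitted vector."""
    n = len(y)
    A = B.T @ B / n + lam**2 * Omega
    theta_hat = np.linalg.solve(A, B.T @ y / n)
    return theta_hat, B @ theta_hat

# Illustrative setup (assumptions, not the paper's): truncated-power basis on [0, 1]
# with a second-difference penalty, mimicking a roughness seminorm.
rng = np.random.default_rng(1)
n, p = 200, 20
x = np.linspace(0.0, 1.0, n)
knots = np.linspace(0.0, 1.0, p)
B = np.maximum(x[:, None] - knots[None, :], 0.0) ** 2      # basis evaluated at the design
D = np.diff(np.eye(p), 2, axis=0)                          # second differences
Omega = D.T @ D                                            # I^2(f_theta) = ||D theta||^2
f0_vec = np.sin(2 * np.pi * x)
y = f0_vec + rng.standard_normal(n)

lam = 0.1
theta_hat, f_hat = penalized_ls(y, B, Omega, lam)
I2 = theta_hat @ Omega @ theta_hat
tau2 = np.mean((f_hat - f0_vec) ** 2) + lam**2 * I2        # trade-off tau^2(f_hat) from (2)
print(tau2)
```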

in (3 is studied in terms of both f f 0 n and the smoothness of the functions considered, i.e., I(f and I(f 0. A limitation of the approach mentioned above is that it does not allow us to obtain lower bounds. Consistency results have been proved (e.g. van de Geer and Wegkamp (1996, but it is not clear how to use these to derive explicit bounds. Chatterjee (2014 proposes a new approach to estimate the exact value of the error of least square estimators under convex constraints. It provides a concentration result for the estimation error by relating it with the expected maxima of a Gaussian process. One can then use this result to get both upper and lower bounds. In van de Geer and Wainwright (2016, a more direct argument is employed to show that the error for a more general class of penalized least squares estimators is concentrated around its expectation. Here, the penalty is only assumed to be convex. Moreover, the authors also consider the approach from Chatterjee (2014 to derive a concentration result for the trade-off τ( ˆf for uniformly bounded function classes under general loss functions. The goal of this paper is to contribute to the study of the behavior of τ( ˆf from a theoretical point of view. We consider the approach from Chatterjee (2014 and extend the ideas to the penalized least squares estimator with penalty based on a squared seminorm without making assumptions on the function space. We present a concentration inequality showing that τ( ˆf is concentrated around a nonrandom quantity R 0 (defined below in the nonparametric regime. Here, R 0 depends on the sample size and the problem under consideration. Then, we derive upper and lower bounds for R 0 in Theorem 2. These are obtained for the proper choice of λ and two additional conditions, including an entropy assumption. Combining this result with Theorem 1, one can obtain both upper and lower bounds for τ( ˆf with high probability for a sufficiently large sample size. We illustrate our results with some examples in section 2.2 and observe that we are able to recover optimal rates of convergence for the estimation error from the literature. We now introduce further notation that will be used in the following sections. Denote the minimum of τ over F by R min := min f F τ(f, and let this minimum be attained by f min. Note that f min can be seen as the unknown noiseless counterpart of ˆf and that R min is a nonrandom unknown quantity. For two vectors u, v R n, let u, v denote the usual inner product. For R R min and λ > 0, define and F(R := {f F : τ(f R}, 3

Additionally, we write
$$M(R) := \mathbb{E} M_n(R), \qquad H_n(R) := M_n(R) - \frac{R^2}{2}, \qquad H(R) := \mathbb{E} H_n(R).$$
Moreover, define the random quantity
$$R^* := \arg\max_{R \ge R_{\min}} H_n(R)$$
and the nonrandom quantity
$$R_0 := \arg\max_{R \ge R_{\min}} H(R).$$
From Lemma 1, it will follow that $R^*$ and $R_0$ are unique.

For ease of exposition, we will use the following asymptotic notation throughout this paper: for two positive sequences $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$, we write $x_n = O(y_n)$ if $\limsup_{n \to \infty} x_n / y_n < \infty$, and $x_n = o(y_n)$ if $x_n / y_n \to 0$ as $n \to \infty$. Moreover, we employ the notation $x_n \asymp y_n$ if $x_n = O(y_n)$ and $y_n = O(x_n)$. In addition, we make use of the stochastic order symbols $O_P$ and $o_P$. It is important to note that the quantities $\lambda$, $\tau(\hat{f})$, $\tau(f^0)$, $R_{\min}$, $M(R)$, $H(R)$, $R^*$, and $R_0$ depend on the sample size $n$. However, we omit this dependence in the notation to simplify the exposition.
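To make these quantities concrete, the sketch below approximates $M(R)$, $H(R)$, and $R_0$ by Monte Carlo in a finite-dimensional toy version of the problem (a basis expansion $f_\theta = B\theta$ with quadratic seminorm $I^2(f_\theta) = \theta^T \Omega \theta$, as in the earlier sketch). In that case $\mathcal{F}(R)$ is an ellipsoid in $\theta$, so the supremum defining $M_n(R)$ has a closed form. Everything here (basis, penalty, grid of $R$ values) is an illustrative assumption, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy finite-dimensional setting: f_theta = B @ theta, I^2(f_theta) = theta' Omega theta.
n, p = 200, 20
x = np.linspace(0.0, 1.0, n)
B = np.maximum(x[:, None] - np.linspace(0, 1, p)[None, :], 0.0) ** 2
D = np.diff(np.eye(p), 2, axis=0)
Omega = D.T @ D
theta0 = rng.standard_normal(p) / p                    # "true" coefficients (illustrative)
lam = 0.1

G = B.T @ B / n                                        # Gram matrix
A = G + lam**2 * Omega
b = G @ theta0
m = np.linalg.solve(A, b)                              # minimizer of tau^2 over theta
R_min = np.sqrt(max(theta0 @ G @ theta0 - b @ m, 0.0)) # minimum trade-off R_min
A_inv = np.linalg.inv(A)

def M_n(R, eps):
    """Closed-form sup over the ellipsoid F(R) of <eps, f - f0>/n."""
    c = B.T @ eps / n
    rho = np.sqrt(max(R**2 - R_min**2, 0.0))
    return c @ (m - theta0) + rho * np.sqrt(c @ A_inv @ c)

R_grid = np.linspace(R_min, 2.0, 400)                  # grid of trade-off levels (adjust as needed)
reps = 500
H_hat = np.zeros_like(R_grid)
for _ in range(reps):                                  # Monte Carlo estimate of H(R) = E H_n(R)
    eps = rng.standard_normal(n)
    H_hat += np.array([M_n(R, eps) - R**2 / 2 for R in R_grid])
H_hat /= reps

R0_hat = R_grid[np.argmax(H_hat)]                      # approximate maximizer R_0
print(R_min, R0_hat)
```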

1.1 Organization of the paper

First, in section 2.1, we present the main results: Theorems 1 and 2. Note that the former requires no assumptions beyond those of section 1, while the latter needs two extra conditions, which will be introduced after stating Theorem 1. Then, in section 2.2, we illustrate the theory with some examples and, in section 2.3, we present some concluding remarks. Finally, in section 3, we present all the proofs. Results from the literature that are used in this last section are deferred to the appendix.

2 Behavior of $\tau(\hat{f})$

2.1 Main results

The first theorem provides a concentration probability inequality for $\tau(\hat{f})$ around $R_0$. It can be seen as an extension of Theorem 1.1 from Chatterjee (2014) to the penalized least squares estimator with a squared seminorm on $\mathcal{F}$ in the penalty term.

Theorem 1. For all $\lambda > 0$ and $x > 0$, we have
$$P\left( \left| \frac{\tau(\hat{f})}{R_0} - 1 \right| \ge x \right) \le 3 \exp\left( - \frac{x^4 (\sqrt{n}\, R_0)^2}{32 (1 + x)^2} \right).$$

Asymptotics. If $R_0$ satisfies $1/(\sqrt{n}\, R_0) = o(1)$, then $|\tau(\hat{f})/R_0 - 1| = o_P(1)$. Therefore, the random fluctuations of $\tau(\hat{f})$ around $R_0$ are of negligible size in comparison with $R_0$. Moreover, this asymptotic result implies that the asymptotic distribution of $\tau(\hat{f})/R_0$ is degenerate. In Theorem 2, we will provide bounds for $R_0$ under some additional conditions and observe that $R_0$ satisfies the condition from above.

Remark 1. Note that we only consider a squared seminorm in the penalty term. A generalization of our results to penalties of the form $I^q(\cdot)$, $q \ge 2$, is straightforward but omitted for simplicity. The case $q < 2$ is not considered in our study, since our method of proof requires that the square root of the penalty term be convex, as can be observed in the proof of Lemma 1.

Before stating the first condition required in Theorem 2, we introduce the following definitions: let $\tilde{S}$ be some subset of a metric space $(S, d)$. For $\delta > 0$, the $\delta$-covering number $N(\delta, \tilde{S}, d)$ of $\tilde{S}$ is the smallest value of $N$ such that there exist $s_1, \ldots, s_N$ in $\tilde{S}$ with
$$\min_{j = 1, \ldots, N} d(s, s_j) \le \delta, \qquad s \in \tilde{S}.$$
Moreover, for $\delta > 0$, the $\delta$-packing number $N_D(\delta, \tilde{S}, d)$ of $\tilde{S}$ is the largest value of $N$ such that there exist $s_1, \ldots, s_N$ in $\tilde{S}$ with $d(s_k, s_j) > \delta$, $k \ne j$.

Condition 1. Let $\alpha \in (0, 2)$. For all $\lambda > 0$ and $R \ge R_{\min}$, we have
$$\log N(u, \mathcal{F}(R), \|\cdot\|_n) = O\left( \left( \frac{R/\lambda}{u} \right)^{\alpha} \right), \qquad u > 0,$$
and, for some $c_0 \in (0, 2]$,
$$\frac{1}{\log N(c_0 R, \mathcal{F}(R), \|\cdot\|_n)} = O\left( (c_0 \lambda)^{\alpha} \right).$$
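The covering and packing numbers above can be computed exactly only in special cases, but for a finite set of vectors they are easy to bracket numerically: any maximal $\delta$-packing (one to which no further point can be added) is also a $\delta$-covering, so its size lies between $N(\delta, \tilde{S}, d)$ and $N_D(\delta, \tilde{S}, d)$. The Python sketch below applies this greedy construction under $\|\cdot\|_n$ to a random finite class; the class itself is an arbitrary illustration, not one of the paper's function classes.

```python
import numpy as np

def greedy_maximal_packing(points, delta):
    """Greedily build a maximal delta-packing of `points` (rows) under ||.||_n.
    Its size is >= the covering number N(delta) and <= the packing number N_D(delta)."""
    centers = []
    for s in points:
        if all(np.sqrt(np.mean((s - c) ** 2)) > delta for c in centers):
            centers.append(s)          # s is delta-separated from all kept centers
    return np.array(centers)

rng = np.random.default_rng(3)
n = 100
sample = rng.standard_normal((500, n)) / np.sqrt(n)   # finite stand-in for a class like F(R)
for delta in (0.05, 0.1, 0.2):
    centers = greedy_maximal_packing(sample, delta)
    print(delta, len(centers))        # the log of this count mimics an entropy at scale delta
```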

Condition 1 can be seen as a description of the richness of $\mathcal{F}(R)$. We refer the reader to Kolmogorov and Tihomirov (1959) and Birman and Solomjak (1967) for an extensive study of entropy bounds, and to van der Vaart and Wellner (1996) and van de Geer (2000) for their application in empirical process theory.

Condition 2. $R_{\min} \asymp \tau(f^0)$.

Condition 2 relates the roughness penalty of $f^0$ to the minimum trade-off achieved by functions in $\mathcal{F}$. It implies that our choice of the penalty term is appropriate. In other words, $I(f_{\min})$ and $I(f^0)$ are not too far from each other when the tuning parameter is chosen properly. Therefore, aiming to mimic the trade-off of $f_{\min}$ points us in the right direction, as we would like to estimate $f^0$.

The following result provides upper and lower bounds for $R_0$. Note that $I(f^0)$ is permitted to depend on the sample size. We assume that $I(f^0)$ remains bounded above and bounded away from zero as the sample size increases. However, we allow these bounds to be unknown.

Theorem 2. Assume Conditions 1 and 2, and that $I(f^0) \asymp 1$. For $\lambda$ suitably chosen depending on $\alpha$, namely
$$\lambda \asymp n^{-\frac{1}{2 + \alpha}},$$
one has
$$R_0 \asymp n^{-\frac{1}{2 + \alpha}}.$$
Therefore, we obtain
$$\tau(\hat{f}) \asymp n^{-\frac{1}{2 + \alpha}}$$
with probability at least $1 - 3 \exp\big( - c\, n^{\frac{\alpha}{2 + \alpha}} \big) - 3 \exp\big( - c'\, n^{\frac{\alpha}{2 + \alpha}} \big)$, where $c, c'$ denote some positive constants not depending on the sample size.

Theorem 2 shows that one can obtain bounds for $R_0$ if the tuning parameter is chosen properly. Since the condition $1/(\sqrt{n}\, R_0) = o(1)$ is then satisfied, one obtains convergence in probability of the ratio $\tau(\hat{f})/R_0$ to a constant. Moreover, the theorem also provides bounds for the trade-off of $\hat{f}$. Note that these hold with probability tending to one.
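The rates in Theorem 2 are simple functions of the entropy exponent $\alpha$, so it can be handy to tabulate them. The sketch below simply evaluates the exponents $\lambda \asymp R_0 \asymp n^{-1/(2+\alpha)}$ and the probability exponent $n^{\alpha/(2+\alpha)}$ for a few values of $\alpha$ (including those arising in the examples of section 2.2); it is a numerical convenience added here, not part of the paper.

```python
from fractions import Fraction

def theorem2_rates(alpha):
    """Exponents from Theorem 2: lambda and R_0 scale as n^(-1/(2+alpha)), and the
    exceptional probability decays like exp(-c * n^(alpha/(2+alpha)))."""
    return -1 / (2 + alpha), alpha / (2 + alpha)

# alpha = 1/m (smoothing splines, m = 2, 4), alpha = d/m (e.g. d = 2, m = 3), alpha = 1 (total variation)
for alpha in (Fraction(1, 2), Fraction(1, 4), Fraction(2, 3), Fraction(1)):
    rate, prob = theorem2_rates(alpha)
    print(f"alpha = {alpha}: lambda, R_0 ~ n^({rate}), probability exponent n^({prob})")
```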

Remark 2. From Theorem 2, one obtains
$$\|\hat{f} - f^0\|_n^2 + \lambda^2 I^2(\hat{f}) \asymp \min_{f \in \mathcal{F}} \left( \|f - f^0\|_n^2 + \lambda^2 I^2(f) \right)$$
with high probability for a sufficiently large sample size. One can then say that the trade-off of $\hat{f}$ mimics that of its noiseless counterpart $f_{\min}$, i.e., $\hat{f}$ behaves as if there were no noise in our observations. Therefore, our choice of $\lambda$ allows us to control the random part of the problem using the penalty term and to over-rule the variance of the noise.

Remark 3. From Theorem 2, one can observe that
$$\|\hat{f} - f^0\|_n = O_P\big( n^{-\frac{1}{2 + \alpha}} \big), \qquad I(\hat{f}) = O_P(1).$$
Therefore, we are able to recover the rate of convergence for the estimation error of $\hat{f}$. This will be illustrated in section 2.2. Furthermore, we observe that the degree of smoothness of $\hat{f}$ is bounded in probability. Although we have a lower bound for $\tau(\hat{f})$, this implies (directly) neither a lower bound for $\|\hat{f} - f^0\|_n$ nor one for $I(\hat{f})$.

2.2 Examples

In the following, we require that the assumptions in Theorem 2 hold. In each example, we provide references to results from the literature where one can verify that Condition 1 is satisfied, and we refer the interested reader to these for further details. For the lower bounds, one may first note that
$$N(u, \mathcal{F}(R), \|\cdot\|_n) \ge N\left( u, \left\{ f \in \mathcal{F} : \|f - f^0\|_n \le \frac{R}{\sqrt{2}},\ I(f) \le \frac{R}{\sqrt{2}\,\lambda} \right\}, \|\cdot\|_n \right),$$
and additionally insert the results from Yang and Barron (1999), where the authors show that global and local entropies are often of the same order for some $f^0$.

Example 1. Let $\mathcal{X} = [0, 1]$ and $I^2(f) = \int_0^1 (f^{(m)}(x))^2 \, dx$ for some $m \in \{2, 3, \ldots\}$. Then $\hat{f}$ is the smoothing spline estimator and can be explicitly computed (e.g. Green and Silverman (1993)). Moreover, it can be shown that in this case Condition 1 holds with $\alpha = 1/m$ under some conditions on the design matrix (Kolmogorov and Tihomirov (1959) and Example 2.1 in van de Geer (1990)). Therefore, from Theorem 2, we have that the standard choice $\lambda \asymp n^{-\frac{m}{2m+1}}$ yields $R_0 \asymp n^{-\frac{m}{2m+1}}$. Moreover, we obtain upper and lower bounds for $\tau(\hat{f})$ with high probability for large $n$. Then, by Remark 3, we recover the optimal rate of convergence for $\|\hat{f} - f^0\|_n$ (Stone (1982)).
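A quick way to experiment with Example 1 numerically is to replace the integral penalty by an $m$-th order finite-difference penalty on the fitted vector, which yields a ridge-type smoother that mimics the smoothing spline on a regular design. The sketch below is an approximation chosen for illustration, not the exact estimator of Green and Silverman (1993); it uses the rate-based scaling $\lambda \propto n^{-m/(2m+1)}$ from Theorem 2, with an arbitrary constant in front.

```python
import numpy as np

def difference_smoother(y, m, lam):
    """Penalized least squares on the fitted vector f:
    minimize ||y - f||_n^2 + lam^2 * I2(f), where I2(f) is an m-th order
    finite-difference proxy for the integral of (f^(m))^2 on a regular grid."""
    n = len(y)
    h = 1.0 / (n - 1)                          # grid spacing on [0, 1]
    D = np.diff(np.eye(n), m, axis=0)          # m-th order differences
    Omega = (h ** (1 - 2 * m)) * (D.T @ D)     # discretized roughness penalty
    return np.linalg.solve(np.eye(n) + n * lam**2 * Omega, y)

rng = np.random.default_rng(4)
n, m = 300, 2
x = np.linspace(0.0, 1.0, n)
f0_vec = np.sin(2 * np.pi * x)
y = f0_vec + rng.standard_normal(n)

lam = 0.2 * n ** (-m / (2 * m + 1))            # rate-optimal scaling (constant is arbitrary)
f_hat = difference_smoother(y, m, lam)
print(np.sqrt(np.mean((f_hat - f0_vec) ** 2))) # empirical estimation error ||f_hat - f0||_n
```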

Now we consider the case where the design points have a larger dimension. Let $\mathcal{X} = [0, 1]^d$ with $d \ge 2$ and define the $d$-dimensional multi-index $r = (r_1, \ldots, r_d)$, where the values $r_i$ are non-negative integers. We write $|r| = \sum_{i=1}^d r_i$. Furthermore, denote by $D^r$ the differential operator defined by
$$D^r f(x) = \frac{\partial^{|r|} f(x_1, \ldots, x_d)}{\partial x_1^{r_1} \cdots \partial x_d^{r_d}},$$
and consider as roughness penalization
$$I^2(f) = \sum_{|r| = m} \int |D^r f(x)|^2 \, dx,$$
with $m > d/2$. In this case, Condition 1 holds with $\alpha = d/m$ (Birman and Solomjak (1967)). Then, by Theorem 2, we have that for the choice $\lambda \asymp n^{-\frac{m}{2m+d}}$ we obtain $R_0 \asymp n^{-\frac{m}{2m+d}}$. Similarly as above, we are able to recover the optimal rate for the estimation error.

Example 2. In this example, define the total variation penalty as
$$TV(f) = \sum_{i=2}^n |f(x_i) - f(x_{i-1})|,$$
where $x_1, \ldots, x_n$ denote the design points. Let $\mathcal{X}$ be real-valued and $I^2(f) = (TV(f))^2$. In this case, Condition 1 is fulfilled for $\alpha = 1$ (Birman and Solomjak (1967)). The advantage of the total variation penalty over that from Example 1 is that it can be used for unbounded $\mathcal{X}$. Let $\alpha = 1$ in Theorem 2. For the choice $\lambda \asymp n^{-\frac{1}{3}}$, we have $R_0 \asymp n^{-\frac{1}{3}}$, and we also obtain bounds for $\tau(\hat{f})$ with high probability for large $n$. By Remark 3, we also recover the optimal upper bound for the estimation error.
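Unlike the quadratic penalties above, the squared total variation penalty has no closed-form minimizer, but it is convex, so the estimator of Example 2 can be computed with an off-the-shelf convex solver. The sketch below uses cvxpy as one possible tool and the rate-based choice $\lambda \propto n^{-1/3}$; the design, the bounded-variation truth, and the constant in front of the rate are illustrative assumptions, not the paper's.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))            # ordered design points
f0_vec = np.where(x < 0.5, 0.0, 1.0)             # a bounded-variation truth (illustrative)
y = f0_vec + rng.standard_normal(n)

lam = 0.5 * n ** (-1.0 / 3.0)                    # lambda ~ n^(-1/3), corresponding to alpha = 1

f = cp.Variable(n)
tv = cp.norm1(cp.diff(f))                        # TV(f) = sum |f(x_i) - f(x_{i-1})|
objective = cp.Minimize(cp.sum_squares(y - f) / n + lam**2 * cp.square(tv))
problem = cp.Problem(objective)
problem.solve()

f_hat = f.value
print(np.sqrt(np.mean((f_hat - f0_vec) ** 2)))   # empirical error ||f_hat - f0||_n
```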

2.3 Conclusions

Theorem 1 derives a concentration result for $\tau(\hat{f})$ around a nonrandom quantity rather than just an upper bound. In particular, we observe that the ratio $\tau(\hat{f})/R_0$ converges in probability to 1 if $R_0$ satisfies $1/(\sqrt{n}\, R_0) = o(1)$. This condition holds in the nonparametric setting for $\lambda$ suitably chosen and $I(f^0) \asymp 1$, as shown in Theorem 2 and in our examples from section 2.2.

The strict concavity of $H$ and $H_n$ (Lemma 1) plays an important role in the derivation of both theorems. In our work, the proof of this property requires that the square root of the penalty term be convex. Furthermore, the proofs of both Theorems 1 and 2 rely on the fact that the noise vector is Gaussian. This can be seen in Lemma 6, where we invoke a concentration result for functions of independent Gaussian random variables, and in Lemma 4, where we employ a lower bound for the expected value of the supremum of Gaussian processes to bound the function $H$.

3 Proofs

This section is divided into two parts. In section 3.1, we first state and prove the lemmas necessary to prove Theorem 1. These follow closely the proof of Theorem 1.1 from Chatterjee (2014); however, here we include a roughness penalization term for functions in $\mathcal{F}$. At the end of that section, we combine these lemmas to prove the first theorem. In section 3.2, we first prove an additional result necessary to establish Theorem 2. After this, we present the proof of the second theorem. Some results from the literature used in our proofs are deferred to the appendix.

3.1 Proof of Theorem 1

Lemma 1. For all $\lambda > 0$, $H_n(\cdot)$ and $H(\cdot)$ are strictly concave functions.

Proof. Let $\lambda > 0$. Take any two values $r_s, r_b$ such that $R_{\min} \le r_s \le r_b$ and define
$$\mathcal{F}_{s,b} := \{ f_t = t v_s + (1 - t) v_b : t \in [0, 1],\ v_s \in \mathcal{F}(r_s),\ v_b \in \mathcal{F}(r_b) \}.$$
Take $v_t \in \mathcal{F}_{s,b}$ and let $r = t r_s + (1 - t) r_b$. By the properties of a seminorm, we have that $I(v_t) < \infty$, which implies that $v_t \in \mathcal{F}$. Moreover, we have that
$$\tau(v_t) = \sqrt{\|v_t - f^0\|_n^2 + \lambda^2 I^2(v_t)} \le t \sqrt{\|v_s - f^0\|_n^2 + \lambda^2 I^2(v_s)} + (1 - t) \sqrt{\|v_b - f^0\|_n^2 + \lambda^2 I^2(v_b)} = t \tau(v_s) + (1 - t) \tau(v_b) \le t r_s + (1 - t) r_b,$$
where the first inequality uses the fact that the square root of the penalty term $\lambda^2 I^2(\cdot)$ is convex. Therefore, $v_t \in \mathcal{F}(r)$. Using these equations, we have
$$M_n(r) = \sup_{f \in \mathcal{F}(r)} \langle \epsilon, f - f^0 \rangle / n \ge \sup_{f \in \mathcal{F}_{s,b}} \langle \epsilon, f - f^0 \rangle / n \qquad (4)$$
$$= \sup_{\substack{v_s, v_b \in \mathcal{F} \\ \tau(v_s) \le r_s,\ \tau(v_b) \le r_b}} \langle \epsilon, t v_s + (1 - t) v_b - f^0 \rangle / n = t M_n(r_s) + (1 - t) M_n(r_b).$$

Therefore, $M_n$ is concave for all $\lambda > 0$. Taking expected values in the equations in (4) yields that $M$ is concave for all $\lambda > 0$. Since $g(r) := -r^2/2$ is strictly concave, $H_n$ and $H$ are strictly concave, and we have our result.

Lemma 2. For all $\lambda > 0$, we have that $\tau(\hat{f}) = R^*$.

Proof. Let $f^* \in \mathcal{F}(R^*)$ be such that
$$\langle \epsilon, f^* - f^0 \rangle / n = \sup_{f \in \mathcal{F}(R^*)} \langle \epsilon, f - f^0 \rangle / n.$$
We will show first that $\tau(f^*) = R^*$. Suppose $\tau(f^*) = \tilde{R}$ for some $R_{\min} \le \tilde{R} < R^*$. Note that then $M_n(\tilde{R}) = M_n(R^*)$, and therefore we have
$$H_n(\tilde{R}) = H_n(R^*) + \frac{(R^*)^2 - \tilde{R}^2}{2} > H_n(R^*),$$
which is a contradiction by the definition of $R^*$. We must then have that $\tau(f^*) = R^*$.

Now we will prove that $\tau(\hat{f}) = \tau(f^*)$. For all $\lambda > 0$ and for all $f \in \mathcal{F}$, we have
$$\|Y - f\|_n^2 + \lambda^2 I^2(f) \qquad (5)$$
$$= \|Y - f^0\|_n^2 - 2 \left\{ \langle \epsilon, f - f^0 \rangle / n - \frac{1}{2} \big( \|f - f^0\|_n^2 + \lambda^2 I^2(f) \big) \right\}$$
$$\ge \|Y - f^0\|_n^2 - 2 H_n\Big( \sqrt{\|f - f^0\|_n^2 + \lambda^2 I^2(f)} \Big) \ge \|Y - f^*\|_n^2 + \lambda^2 I^2(f^*).$$
In consequence, by the definition of $\hat{f}$ and the inequalities in (5), both $\hat{f}$ and $f^*$ minimize $\|Y - f\|_n^2 + \lambda^2 I^2(f)$, and by uniqueness it follows that $\tau(\hat{f}) = \tau(f^*)$.

Lemma 3. For $x > 0$, define
$$s_1 := R_0 - x, \qquad s_2 := R_0 + x, \qquad z := H(R_0) - x^2/4.$$
Moreover, define the event
$$\mathcal{E} := \{ H_n(s_1) < z \} \cap \{ H_n(s_2) < z \} \cap \{ H_n(R_0) > z \}.$$
We have
$$P(\mathcal{E}^c) \le 3 \exp\left( - \frac{n x^4}{32 (R_0 + x)^2} \right).$$

Proof. From the proof of Theorem 1.1 in Chatterjee (2014), one can easily observe that for all $\lambda > 0$ and any $R$ we have
$$H(R_0) - H(R) \ge \frac{(R - R_0)^2}{2}.$$
Applying the inequality above to $R = s_1$ and $R = s_2$, we have that
$$H(s_i) + \frac{x^2}{4} \le H(R_0) - \frac{x^2}{4}, \qquad i = 1, 2.$$
By Lemma 6, we have
$$P\big( H_n(R_0) \le z \big) = P\Big( H_n(R_0) \le H(R_0) - \frac{x^2}{4} \Big) \le e^{-\frac{n x^4}{32 R_0^2}}.$$
Therefore, using Lemma 6 again yields
$$P\big( H_n(s_1) \ge z \big) \le P\Big( H_n(s_1) \ge H(s_1) + \frac{x^2}{4} \Big) \le e^{-\frac{n x^4}{32 s_1^2}},$$
and
$$P\big( H_n(s_2) \ge z \big) \le P\Big( H_n(s_2) \ge H(s_2) + \frac{x^2}{4} \Big) \le e^{-\frac{n x^4}{32 s_2^2}}.$$
By the equations above, we obtain
$$P(\mathcal{E}^c) \le P\big( H_n(s_1) \ge z \big) + P\big( H_n(s_2) \ge z \big) + P\big( H_n(R_0) \le z \big) \le 3 e^{-\frac{n x^4}{32 s_2^2}}.$$

Now we are ready to prove Theorem 1.

Proof of Theorem 1. Let $\lambda > 0$. First, we note that $H$ is equal to $-\infty$ when $R$ tends to infinity or $R < R_{\min}$. Then, by Lemma 1, we know that $R_0$ is unique. For $x > 0$, define the event
$$\mathcal{E} := \{ H_n(s_1) < z \} \cap \{ H_n(s_2) < z \} \cap \{ H_n(R_0) > z \},$$
where $s_1 = R_0 - x$, $s_2 = R_0 + x$, and $z = H(R_0) - x^2/4$. Therefore, we have that $s_1 < R_0 < s_2$ by construction. Moreover, we know that $H_n(R^*) \ge H_n(R_0)$ by the definition of $R^*$. Since $H_n$ is strictly concave by Lemma 1, we must have that, on $\mathcal{E}$,
$$s_1 < R^* < s_2. \qquad (6)$$
Combining Lemma 2 with equation (6) yields that, on $\mathcal{E}$,
$$\left| \frac{\tau(\hat{f})}{R_0} - 1 \right| < \frac{x}{R_0}.$$

Therefore, by Lemma 3 and letting $y = x/R_0$, we have
$$P\left( \left| \frac{\tau(\hat{f})}{R_0} - 1 \right| \ge y \right) \le 3 \exp\left( - \frac{n y^4 R_0^2}{32 (1 + y)^2} \right) = 3 \exp\left( - \frac{y^4 (\sqrt{n}\, R_0)^2}{32 (1 + y)^2} \right).$$

3.2 Proof of Theorem 2

For proving Theorem 2, we will need the following result. This lemma gives us bounds for the unknown nonrandom quantity $H(R)$. We note that these bounds can be written as parabolas with maxima and maximizers depending on $\alpha$, $n$, and $\lambda$.

Lemma 4. Assume Condition 1 and let $\alpha$ be as stated there. For some constants $C \ge 1/2$, $0 < c_0 \le 2$, $c_2 > c_1 > 0$ not depending on the sample size and for all $\lambda > 0$, we have
$$g_1(R) < H(R) < g_2(R), \qquad R \ge R_{\min},$$
where
$$g_i(R) = -\frac{1}{2}(R - K_i)^2 + \frac{K_i^2}{2}, \qquad i = 1, 2,$$
with
$$K_1 := K_1(n, \lambda) := \frac{c_0^{1 - \alpha/2} c_1}{2} \left( \frac{1}{n \lambda^{\alpha}} \right)^{1/2}, \qquad K_2 := K_2(n, \lambda) := \frac{4C}{2 - \alpha} \left( \frac{c_2}{n \lambda^{\alpha}} \right)^{1/2}.$$

Proof of Lemma 4. This proof makes use of known results for upper and lower bounds for the expected maxima of random processes, which can be found in the appendix. We will indicate this below. Let $\lambda > 0$ and $R \ge R_{\min}$. For $f \in \mathcal{F}(R)$ and $\epsilon_1, \ldots, \epsilon_n$ standard Gaussian random variables, define $X_f := \sum_{i=1}^n \epsilon_i f(x_i) / \sqrt{n}$. Take any two functions $f, \tilde{f} \in \mathcal{F}$ and note that $X_f - X_{\tilde{f}}$ follows a Gaussian distribution with expected value $\mathbb{E}[X_f - X_{\tilde{f}}] = 0$ and variance
$$\mathrm{Var}(X_f - X_{\tilde{f}}) = \|f\|_n^2 + \|\tilde{f}\|_n^2 - 2 \mathrm{Cov}(X_f, X_{\tilde{f}}) = \|f - \tilde{f}\|_n^2.$$
Therefore, $\{X_f : f \in \mathcal{F}(R)\}$ is a sub-Gaussian process with respect to the metric $d(f, \tilde{f}) = \|f - \tilde{f}\|_n$ on its index set (see the Appendix). Note that, if we define the diameter of $\mathcal{F}(R)$ as $\mathrm{diam}_n(\mathcal{F}(R)) := \sup_{x, y \in \mathcal{F}(R)} \|x - y\|_n$, then it is not difficult to see that $\mathrm{diam}_n(\mathcal{F}(R)) \le 2R$. Now we proceed to obtain bounds for $M(R)$. By Dudley's entropy bound (see Lemma 7 in the Appendix) and Condition 1, for some constants $C \ge 1/2$ and $c_2$, we have

$$M(R) = \frac{1}{\sqrt{n}} \mathbb{E}\left[ \sup_{f \in \mathcal{F}(R)} X_f - X_{f^0} \right] \le \frac{C}{\sqrt{n}} \int_0^{2R} \sqrt{\log N(u, \mathcal{F}(R), \|\cdot\|_n)}\, du \le \frac{C \sqrt{c_2}}{\sqrt{n}} \int_0^{2R} \left( \frac{R}{u \lambda} \right)^{\alpha/2} du \le \frac{4C}{2 - \alpha} \left( \frac{c_2}{n \lambda^{\alpha}} \right)^{1/2} R.$$
Moreover, by Sudakov's lower bound (see Lemma 8 in the Appendix), we have
$$\frac{1}{2 \sqrt{n}} \sup_{0 < \epsilon \le \mathrm{diam}_n(\mathcal{F}(R))} \epsilon \sqrt{\log N_D(\epsilon, \mathcal{F}(R), \|\cdot\|_n)} \le M(R).$$
For some $0 < c_0 \le 2$, let $\mathrm{diam}_n(\mathcal{F}(R)) = c_0 R$ and take $\epsilon = c_0 R$ in the last equation. By Condition 1, for some constant $c_1$ we have
$$\frac{c_0 R}{2 \sqrt{n}} \sqrt{\log N_D(c_0 R, \mathcal{F}(R), \|\cdot\|_n)} \ge \frac{c_0 R}{2 \sqrt{n}} \sqrt{\log N(c_0 R, \mathcal{F}(R), \|\cdot\|_n)} \ge \frac{c_0^{1 - \alpha/2} c_1}{2} \left( \frac{1}{n \lambda^{\alpha}} \right)^{1/2} R.$$
Then, by the equations above and the definition of $H(R)$, we have, for all $R \ge R_{\min}$,
$$\frac{c_0^{1 - \alpha/2} c_1}{2} \left( \frac{1}{n \lambda^{\alpha}} \right)^{1/2} R - \frac{R^2}{2} < H(R) < \frac{4C}{2 - \alpha} \left( \frac{c_2}{n \lambda^{\alpha}} \right)^{1/2} R - \frac{R^2}{2}.$$
Writing $K_i R - R^2/2 = -\frac{1}{2}(R - K_i)^2 + K_i^2/2$ for $i = 1, 2$ completes the proof.

Now we are ready to prove Theorem 2.

Proof of Theorem 2. By Condition 2 and $I(f^0) \asymp 1$, we know that there exist constants $0 < b_1 < b_2$ not depending on $n$ such that
$$b_1 \lambda \le R_{\min} \le b_2 \lambda. \qquad (7)$$
Take
$$c\, n^{-\frac{1}{2 + \alpha}} \le \lambda \le c'\, n^{-\frac{1}{2 + \alpha}} \qquad (8)$$
with $c, c'$ constants satisfying $0 < c < c' \le \left( \frac{c_0^{1 - \alpha/2} c_1}{2 b_2} \right)^{\frac{2}{2 + \alpha}}$, where $\alpha$ is as in Condition 1, $c_0$ and $c_1$ are as in Lemma 4, and $b_2$ is as in equation (7).

First, we will derive bounds for $R_0$. Let $g_1$ and $g_2$ be as in Lemma 4 and recall that $g_1(R) < H(R) < g_2(R)$ for $R \ge R_{\min}$. Note that $g_1(R) \le K_1^2/2$ for all $R \ge R_{\min}$ and that this upper bound is reached at $R = K_1$. Moreover, we know that the function $g_2$ attains negative values when $R < 0$ and when $R > 2 K_2$. Then, by the strict concavity of $H$ (Lemma 1), if $R_{\min} \le K_1$, we must have that $R_0 \le 2 K_2$. Now, combining equation (7) and the choice in (8), we obtain that $R_{\min} \le K_1$. Therefore, following the rationale from above and substituting equation (8) into the definition of $K_2$, we have that there exists some constant $a_2 > 0$ such that
$$R_0 \le a_2 n^{-\frac{1}{2 + \alpha}}. \qquad (9)$$
Furthermore, combining again equations (7) and (8), and recalling that $R_{\min} \le R_0$, yields that, for some constant $a_1 > 0$,
$$R_0 \ge a_1 n^{-\frac{1}{2 + \alpha}}. \qquad (10)$$
Joining equations (9) and (10) gives us the first result in our theorem.

We proceed to obtain bounds for $\tau(\hat{f})$. Let $A_1, A_2$ be constants such that $0 < A_1 < a_1 < a_2 < A_2$. We have
$$P\Big( \tau(\hat{f}) \le A_1 n^{-\frac{1}{2 + \alpha}} \Big) = P\Big( R_0 - \tau(\hat{f}) \ge R_0 - A_1 n^{-\frac{1}{2 + \alpha}} \Big) \le P\Big( R_0 - \tau(\hat{f}) \ge (a_1 - A_1) n^{-\frac{1}{2 + \alpha}} \Big) \le 3 \exp\left( - \frac{(a_1 - A_1)^4\, n^{\frac{\alpha}{2 + \alpha}}}{32 (a_2 + a_1 - A_1)^2} \right),$$
where in the first inequality we used equation (10), and in the second, Theorem 1 and equation (9). Similarly, we have
$$P\Big( \tau(\hat{f}) \ge A_2 n^{-\frac{1}{2 + \alpha}} \Big) \le P\Big( \tau(\hat{f}) - R_0 \ge (A_2 - a_2) n^{-\frac{1}{2 + \alpha}} \Big) \le 3 \exp\left( - \frac{(A_2 - a_2)^4\, n^{\frac{\alpha}{2 + \alpha}}}{32 A_2^2} \right),$$
where in the first inequality we used equation (9), and in the second, Theorem 1 and again equation (9). Therefore, we have

$$P\Big( A_1 n^{-\frac{1}{2 + \alpha}} \le \tau(\hat{f}) \le A_2 n^{-\frac{1}{2 + \alpha}} \Big) \ge 1 - P\Big( \tau(\hat{f}) \ge A_2 n^{-\frac{1}{2 + \alpha}} \Big) - P\Big( \tau(\hat{f}) \le A_1 n^{-\frac{1}{2 + \alpha}} \Big) \ge 1 - 3 \exp\left( - \frac{(a_1 - A_1)^4\, n^{\frac{\alpha}{2 + \alpha}}}{32 (a_2 + a_1 - A_1)^2} \right) - 3 \exp\left( - \frac{(A_2 - a_2)^4\, n^{\frac{\alpha}{2 + \alpha}}}{32 A_2^2} \right),$$
and the second result of the theorem follows.

References

Birman, M. Š. and M. Z. Solomjak (1967). Piecewise polynomial approximations of functions of the classes $W_p^{\alpha}$. Matematicheskii Sbornik (N.S.) 73(115), 331-355.
Boucheron, S., G. Lugosi, and P. Massart (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
Chatterjee, S. (2014). A new perspective on least squares under convex constraint. Annals of Statistics 42(6), 2340-2381.
del Barrio, E., P. Deheuvels, and S. van de Geer (2007). Lectures on Empirical Processes: Theory and Statistical Applications. EMS Series of Lectures in Mathematics. European Mathematical Society.
Green, P. and B. Silverman (1993). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis.
Gu, C. (2002). Smoothing Spline ANOVA Models. IMA Volumes in Mathematics and Its Applications. Springer.
Kolmogorov, A. N. and V. M. Tihomirov (1959). ε-entropy and ε-capacity of sets in function spaces. Uspekhi Matematicheskikh Nauk 14(2(86)), 3-86.
Koltchinskii, V. (2011). Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. Springer.
Stone, C. J. (1982). Optimal global rates of convergence for nonparametric regression. Annals of Statistics 10(4), 1040-1053.
van de Geer, S. (2000). Empirical Processes in M-Estimation. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
van de Geer, S. (1990). Estimating a regression function. Annals of Statistics 18(2), 907-924.
van de Geer, S. and M. J. Wainwright (2016). On concentration for (regularized) empirical risk minimization. Preprint, arXiv:1512.00677.
van de Geer, S. and M. Wegkamp (1996). Consistency for the least squares estimator in nonparametric regression. Annals of Statistics 24(6), 2513-2523.

van der Vaart, A. and J. Wellner (1996). Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer.
Wahba, G. (1990). Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics.
Yang, Y. and A. Barron (1999). Information-theoretic determination of minimax rates of convergence. Annals of Statistics 27(5), 1564-1599.

A Appendix

Lemma 5. [Gaussian concentration inequality. See, e.g., Boucheron et al. (2013).] Let $X = (X_1, \ldots, X_n)$ be a vector of $n$ independent standard Gaussian random variables. Let $f : \mathbb{R}^n \to \mathbb{R}$ denote an $L$-Lipschitz function. Then, for all $t > 0$,
$$P\big( f(X) - \mathbb{E} f(X) \ge t \big) \le e^{-t^2 / (2 L^2)}.$$
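As a quick numerical sanity check of Lemma 5 (purely illustrative, not part of the paper), one can take the 1-Lipschitz function $f(X) = \max_i X_i$ and compare the empirical tail of $f(X) - \mathbb{E} f(X)$ with the bound $e^{-t^2/2}$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 50, 20000

X = rng.standard_normal((reps, n))
f = X.max(axis=1)                      # f(X) = max_i X_i is 1-Lipschitz (L = 1)
centered = f - f.mean()                # empirical mean used as a stand-in for E f(X)

for t in (0.5, 1.0, 1.5, 2.0):
    empirical = np.mean(centered >= t)
    bound = np.exp(-t**2 / 2)          # Lemma 5 with L = 1
    print(f"t = {t}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
```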

The following lemma applies the Gaussian concentration inequality from above to show that the quantities $H_n$ and $H$ are close with exponential probability, as exploited by Chatterjee (2014).

Lemma 6. For all $\lambda > 0$, $R \ge R_{\min}$, and $t > 0$, we have
$$P\big( |H_n(R) - H(R)| \ge t \big) \le 2 e^{-n t^2 / (2 R^2)}.$$

Proof. Let $\lambda > 0$. In this proof, we will write
$$M_n(R) = M_n(R, \epsilon) = \sup_{f \in \mathcal{F}(R)} \langle \epsilon, f - f^0 \rangle / n.$$
Let $u$ and $v$ be two $n$-dimensional standard Gaussian random vectors. By properties of the supremum and by the Cauchy-Schwarz inequality, we have
$$M_n(R, u) - M_n(R, v) = \sup_{f \in \mathcal{F}(R)} \langle u, f - f^0 \rangle / n - \sup_{f \in \mathcal{F}(R)} \langle v, f - f^0 \rangle / n \le \sup_{f \in \mathcal{F}(R)} \big( \langle u, f - f^0 \rangle / n - \langle v, f - f^0 \rangle / n \big) \le \sup_{f \in \mathcal{F}(R)} \|u - v\|_n \|f - f^0\|_n \le \frac{R}{\sqrt{n}} \left( \sum_{i=1}^n (u_i - v_i)^2 \right)^{1/2}.$$
Therefore, $M_n(R, \cdot)$ is $(R/\sqrt{n})$-Lipschitz in its second argument. By the Gaussian concentration inequality (Lemma 5), for every $t > 0$ and every $R \ge R_{\min}$, we have
$$P\big( M_n(R, \epsilon) - M(R) \ge t \big) \le e^{-n t^2 / (2 R^2)}.$$
Now consider $-M_n(R, \epsilon)$, which is also $(R/\sqrt{n})$-Lipschitz. Applying the Gaussian concentration inequality again yields
$$P\big( M_n(R, \epsilon) - M(R) \le -t \big) \le e^{-n t^2 / (2 R^2)}.$$
Combining the last two equations yields the result of this lemma.

For the next lemma, we will need the following definition: a stochastic process $\{X_t : t \in T\}$ is called sub-Gaussian with respect to the semimetric $d$ on its index set if
$$P\big( |X_s - X_t| > x \big) \le 2 e^{-\frac{x^2}{2 d^2(s, t)}}, \qquad \text{for every } s, t \in T,\ x > 0.$$

Lemma 7. [Dudley's entropy bound. See, e.g., Koltchinskii (2011).] If $\{X_t : t \in T\}$ is a sub-Gaussian process with respect to $d$, then the following bounds hold with some numerical constant $C > 0$:
$$\mathbb{E} \sup_{t \in T} X_t \le C \int_0^{D(T)} \sqrt{\log N(\epsilon, T, d)}\, d\epsilon,$$
and, for all $t_0 \in T$,
$$\mathbb{E} \sup_{t \in T} |X_t - X_{t_0}| \le C \int_0^{D(T)} \sqrt{\log N(\epsilon, T, d)}\, d\epsilon,$$
where $D(T) = D(T, d)$ denotes the diameter of the space $T$.

Lemma 8. [Sudakov lower bound. See, e.g., Boucheron et al. (2013).] Let $T$ be a finite set and let $(X_t)_{t \in T}$ be a Gaussian vector with $\mathbb{E} X_t = 0$ for all $t$. Then,
$$\mathbb{E} \sup_{t \in T} X_t \ge \frac{1}{2} \min_{t \ne t'} \sqrt{\mathbb{E}\big[ (X_t - X_{t'})^2 \big]} \sqrt{\log |T|}.$$
Moreover, let $d$ be a pseudo-metric on $T$ defined by $d(t, t')^2 = \mathbb{E}[(X_t - X_{t'})^2]$. For all $\epsilon > 0$ smaller than the diameter of $T$, the lower bound from above can be rewritten as
$$\mathbb{E} \sup_{t \in T} X_t \ge \frac{1}{2} \epsilon \sqrt{\log N_D(\epsilon, T, d)}.$$
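To see Lemmas 7 and 8 in action on the simplest possible example (an illustration added here, not taken from the paper), consider $|T|$ independent standard Gaussian variables: all pairwise distances equal $\sqrt{2}$, so Sudakov's bound gives $\mathbb{E} \sup_t X_t \ge \frac{1}{2}\sqrt{2 \log |T|}$, while the expected maximum itself is of order $\sqrt{2 \log |T|}$, the same order as Dudley's upper bound. A short simulation confirms this.

```python
import numpy as np

rng = np.random.default_rng(7)
reps = 20000

for size in (10, 100, 1000):
    X = rng.standard_normal((reps, size))
    emp = X.max(axis=1).mean()                            # Monte Carlo estimate of E sup_t X_t
    sudakov = 0.5 * np.sqrt(2.0) * np.sqrt(np.log(size))  # lower bound from Lemma 8
    upper_order = np.sqrt(2.0 * np.log(size))             # same sqrt(log|T|) order as Dudley's bound (Lemma 7)
    print(f"|T| = {size}: E sup ~ {emp:.3f}, Sudakov lower bound {sudakov:.3f}, sqrt(2 log|T|) = {upper_order:.3f}")
```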