Spiking problem in monotone regression: penalized residual sum of squares


Jayanta Kumar Pal (1)(2)
SAMSI, NC 27606, U.S.A.

Abstract

We consider the estimation of a monotone regression function at its end-point, where the least squares solution is inconsistent. We penalize the least squares criterion to achieve consistency. We also derive the limit distribution of the residual sum of squares, which can be used to construct confidence intervals.

Keywords and phrases: monotone regression, penalization, residual sum of squares, spiking problem.

1 Introduction

Monotone regression has gained statistical importance over the last decades in several scientific problems: e.g., dose-response experiments in biology, modeling of disease incidence as a function of toxicity level, and industrial experiments such as the effect of temperature on the strength of steel. The least squares estimator (LSE henceforth) for this problem is described by Robertson et al. (1987). We refer to Wright (1981) for its derivation and Groeneboom et al. (2001) for its distribution. In many of these problems, including the ones mentioned above, the value of the regression function at one of the end-points of the domain of the covariate is of special interest. For example,

(1) 19, T.W. Alexander Drive, Research Triangle Park, NC 27709, U.S.A., jpal@samsi.info
(2) Research supported by NSF-SAMSI cooperative agreement No. DMS

the response level in the absence of any dose of medicine, or the disease incidence at zero toxicity level (known as the base level or placebo), is a quantity that needs specific attention. However, the LSE fails to be consistent at the end-points and has a systematic, non-negligible asymptotic bias. This is known as the spiking problem. A similar problem arises in the estimation of decreasing densities, where the maximum likelihood estimator is inconsistent at its left end-point. That problem was addressed by Woodroofe and Sun (1993), using a penalization on the likelihood. Recently, Pal (2006, Chapter 5) investigated the behavior of the likelihood ratio in the presence of penalization in the density estimation setting. Analogously, here we apply the penalization to the least squares criterion and investigate the estimator at the end-point and the corresponding residual sum of squares.

In Section 2 we formulate the penalized least squares problem and characterize the estimators. We also state the main result regarding the convergence of the penalized residual sum of squares to a limit distribution. In Section 3 we present the proofs of the key propositions.

2 Main results

2.1 Preliminaries

We consider the problem of estimating the value of an increasing regression function at its left end-point. The problem for decreasing functions, or estimation at the right end-point, follows similarly by reflecting the regression about the vertical or horizontal axes. For simplicity, we restrict ourselves to the unit interval as the support of the covariate, and take the design points as equally spaced, viz. x_i = i/n. Suppose we have observations Y_1, Y_2, ..., Y_n at those design points such that Y_i = µ(x_i) + ε_i for i = 1, ..., n, where the ε_i are independent, identically distributed errors with zero mean and common variance σ². Assuming that the regression function µ is increasing on [0, 1], the least squares estimate (LSE) of µ (the MLE in case the ε_i are Gaussian), which minimizes

    Σ_{i=1}^n (Y_i − µ(x_i))²,

is the Pool-Adjacent-Violators-Algorithm (PAVA henceforth) estimator

    µ̃(x_k) = min_{s ≥ k} max_{r ≤ k} (Y_r + ··· + Y_s) / (s − r + 1).

Define µ_i = µ(x_i) and µ̃_i = µ̃(x_i) for i ≥ 1. Then µ̃(0+) = µ̃(x_1) = min_{r ≥ 1} (Y_1 + ··· + Y_r)/r is inconsistent for µ(0+). In particular, P(µ̃(0+) ≤ µ(0+)) → 1, always yielding a negative, non-negligible bias.

2.2 Penalization on the least squares

One way to counter the problem is to penalize too low an estimate of µ(0+). In particular, we look at the penalized least squares criterion

    Σ_{i=1}^n (Y_i − µ_i)² − 2nαµ_1    (1)

where α = α_n is a penalty parameter that depends on the sample size n. For simplicity, we keep that dependence implicit. To start with, we define the cusum diagram F of the data as follows. Let F_r = (Y_1 + ··· + Y_r)/n for r = 1, ..., n, and let F be the piecewise linear interpolant of F_1, ..., F_n on the interval [0, 1], with F(i/n) = F_i. The following characterization of the penalized least squares estimator, denoted µ̂, is similar to the estimator of Woodroofe and Sun (1993). The proof is presented in Section 3.

Proposition 1. µ̂, the minimizer of (1), can be characterized as

    µ̂(x_k) = min_{0 < x ≤ 1} (F(x) + α)/x,  if k ≤ m_0,
    µ̂(x_k) = µ̃(x_k),                        if k > m_0,

where m_0 = n · argmin_{0 < x ≤ 1} (F(x) + α)/x. In particular, µ̂(0+) = min_r (F_r + α)/(r/n).

The goal is to construct confidence sets for µ(0+), and for that we need sampling distributions for the estimate µ̂(0+). The estimator achieves consistency, and its limit distribution is derived subsequently. Alternatively, we can use the residual sum of squares. However, the regular RSS is not relevant here because of the penalization. Under the constraint µ(0+) = c, a specific value, the penalized sum of squares will increase, and reflect the effect of µ(0+). Hence, we focus on the difference between the constrained and unconstrained penalized sums of squares, and conclude that a large difference indicates a lack of evidence in favor of the hypothesis µ(0+) = c. To be more precise, we measure

    Q = min_{µ : µ_1 = c} [Σ_{i=1}^n (Y_i − µ_i)² − 2nαc] − min_µ [Σ_{i=1}^n (Y_i − µ_i)² − 2nαµ_1]

as a test statistic for that hypothesis. To find the constrained (penalized) least squares solution, one would try to minimize Σ_{i=1}^n (Y_i − µ_i)² under the constraint c = µ_1 ≤ µ_2 ≤ ... ≤ µ_n. Taking β as a Lagrange multiplier, we minimize

    Σ_{i=1}^n (Y_i − µ_i)² − 2nβµ_1    (2)

subject to µ_1 = c. The following characterization, similar to that of the penalized LSE, can be proved using similar arguments. See Section 3 for a proof.

Lemma 1. µ̂^c, the minimizer of (2), can be characterized as

    µ̂^c(x_k) = c,        if k ≤ s_0,
    µ̂^c(x_k) = µ̃(x_k),  if k > s_0,

where s_0 = n · argmin_{0 < x ≤ 1} (F(x) − cx). For convenience, we define s̄_0 = s_0/n and m̄_0 = m_0/n.

2.3 Limit distribution for the estimators

We state the main result related to the asymptotic behavior of the LSE and the RSS (both penalized) in the following theorem. First, we need a few definitions. Define G(t) = B(t) + t² and G_1(t) = (G(t) + 1)/t, where B is a standard Brownian motion, and let Z(t) be the right-derivative process of the greatest convex minorant (GCM) of the process G(t). Further, define X = argmin G, X_1 = argmin G_1 and U_1 = inf G_1. Let

    J = X_1 U_1² + ∫_{X_1}^{X} Z²(t) dt,  if X_1 ≤ X,
    J = X_1 U_1² − ∫_{X}^{X_1} Z²(t) dt,  if X < X_1.

We state the main result as follows. The proof is presented in Section 3.

Theorem 1. Suppose µ′(0+) > 0 and define a = µ′(0+)/2 and c = µ(0+). Under the penalization α = (σ⁴/(n²a))^{1/3}, we have

    n^{1/3} (µ̂(0+) − c) ⇒ (aσ²)^{1/3} U_1,
    Q ⇒ σ² J,

where ⇒ denotes convergence in distribution.

In the following, we state some preparatory results, with their proofs deferred until Section 3. They will be essential in proving Theorem 1. To investigate the large-sample properties of Q, we need to derive the limits of µ̂ and µ̂^c respectively. Equivalently, we look at the joint limit distribution of (m̄_0, µ̂(0+), s̄_0, µ̃). Our next proposition provides the result.

Proposition 2. Suppose a and α are defined as in Theorem 1. Define a scaled version of the LSE, viz. Z_n(t) = n^{1/3} (µ̃((σ²/(na²))^{1/3} t) − c). Then, as n → ∞, jointly,

    n^{1/3} (µ̂(0+) − c) ⇒ (aσ²)^{1/3} U_1,
    n^{1/3} m̄_0 ⇒ (σ²/a²)^{1/3} X_1,
    n^{1/3} s̄_0 ⇒ (σ²/a²)^{1/3} X,
    {Z_n(t)}_{t ≥ 0} ⇒ {Z(t)}_{t ≥ 0}.

The proof is deferred until the next section. The last step before proving Theorem 1 is to express Q in terms of (m̄_0, µ̂(0+), s̄_0, µ̃), or equivalently (m_0, µ̂(0+), s_0, µ̃). The proof of the following lemma requires algebraic manipulation and several properties of the estimator µ̃.

Lemma 2. Q can be expressed as follows:

    Q = m_0 (µ̂(0+) − c)² + Σ_{i=m_0+1}^{s_0} (µ̃_i − c)²,  if m_0 < s_0,
    Q = m_0 (µ̂(0+) − c)² − Σ_{i=s_0+1}^{m_0} (µ̃_i − c)²,  if s_0 ≤ m_0.

Using Proposition 2 and Lemma 2, we can deduce Theorem 1 via the continuous mapping theorem. The details are sketched in the proof in the next section.
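As an aside for the reader, the characterizations above are straightforward to compute. The sketch below (in Python; an illustration with simulated data and hypothetical helper names, not part of the original paper) fits the plain and penalized LSEs and checks the closed form for µ̂(0+) given in Proposition 1.

```python
import numpy as np

def pava(y):
    """Pool-Adjacent-Violators: increasing least-squares fit of y."""
    sums, counts = [], []
    for v in y:
        s, c = float(v), 1
        # pool with the previous block while its mean is not smaller
        while sums and sums[-1] / counts[-1] >= s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(k, s / k) for s, k in zip(sums, counts)])

def penalized_fit(y, alpha):
    """Proposition 1: PAVA applied to the modified data (Y_1 + n*alpha, Y_2, ..., Y_n)."""
    y_mod = np.asarray(y, dtype=float).copy()
    y_mod[0] += len(y_mod) * alpha
    return pava(y_mod)

# Simulated dose-response data (illustrative): mu(x) = x on [0, 1],
# so mu(0+) = 0 and a = mu'(0+)/2 = 1/2.
rng = np.random.default_rng(0)
n, sigma, a = 200, 1.0, 0.5
x = np.arange(1, n + 1) / n
y = x + sigma * rng.standard_normal(n)

alpha = (sigma**4 / (n**2 * a)) ** (1 / 3)   # penalty of Theorem 1
mu_tilde = pava(y)                           # plain LSE: spikes downward at 0
mu_hat = penalized_fit(y, alpha)             # penalized LSE

# Closed form of Proposition 1: mu_hat(0+) = min_r n (F_r + alpha) / r
F = np.cumsum(y) / n
r = np.arange(1, n + 1)
closed_form = np.min(n * (F + alpha) / r)
```

Since every prefix average of the modified data is raised by nα/r, the penalized estimate at zero can only move upward, i.e. µ̂(0+) ≥ µ̃(0+).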

2.4 Discussion

Theorem 1 shows consistency of the penalized LSE and gives the limit distribution for both the LSE and the RSS. This is analogous to the asymptotic normality of the usual LSE and the χ² convergence of the corresponding RSS in regular parametric models. Moreover, this can be compared to the corresponding results for monotone regression at an interior point. There, the LSE converges to a non-normal distribution at rate n^{1/3}, and the RSS or the likelihood ratio converges to a universal distribution different from χ²; see, e.g., Banerjee and Wellner (2001). At the end-point, however, we get a completely different distribution, and the effect of µ′(0+) cannot be neglected. It should be noted that we need consistent estimators of σ² as well as of µ′(0+) to construct a confidence interval. Both of them are relatively tricky. Meyer and Woodroofe (2000) discuss a method that yields a consistent estimator of σ² using the LSE µ̃. The estimation of µ′(0+) can be done using techniques similar to those used in density estimation, discussed by Pal (2006).

3 Proofs

3.1 Proof of Proposition 1

We observe that

    Σ_{i=1}^n (Y_i − µ_i)² − 2nαµ_1 = Σ_{i=1}^n (Ỹ_i − µ_i)² − 2nαY_1 − n²α²,

where Ỹ_1 = Y_1 + nα and Ỹ_i = Y_i for i ≥ 2. Therefore the minimizer µ̂ is the PAVA estimator with new data Ỹ_1, Y_2, ..., Y_n, i.e. µ̂_k = min_{s ≥ k} max_{r ≤ k} (Ỹ_r + ··· + Ỹ_s)/(s − r + 1). Therefore,

    µ̂(0+) = min_{r ≥ 1} (Ỹ_1 + ··· + Ỹ_r)/r
           = min_{r ≥ 1} (Y_1 + ··· + Y_r + nα)/r
           = min_{r ≥ 1} n(F_r + α)/r
           = min_{0 < x ≤ 1} (F(x) + α)/x.

Moreover, except on the first block (i.e. up to the point where µ̂_i = µ̂_1), the contribution of Ỹ_1 does not influence the estimator µ̂, and hence µ̂_k = µ̃_k. Let m_0 denote the first break-point of the estimator µ̂. It is characterized as

    m_0 = argmin_{r ≥ 1} (Y_1 + ··· + Y_r + nα)/r = argmin_r n(F_r + α)/r = n · argmin_{0 < x ≤ 1} (F(x) + α)/x.

The proposition follows.

3.2 Proof of Lemma 1

As in Proposition 1, µ̂^c is the PAVA estimator of a new data set Y_1 + nβ̂, Y_2, ..., Y_n, where β̂ solves the equation

    µ̂^c_1 = min_{s ≥ 1} (Y_1 + ··· + Y_s + nβ)/s = c.

Therefore β̂ = max_{s ≥ 1} (cs/n − F_s) = max_{0 < x ≤ 1} (cx − F(x)). To see this, observe that

    min_{s ≥ 1} (Y_1 + ··· + Y_s + nβ̂)/s = c
    ⟺ (Y_1 + ··· + Y_s + nβ̂)/s ≥ c for all s, with equality at least once,
    ⟺ nF_s + nβ̂ ≥ cs for all s, with equality at least once,
    ⟺ β̂ ≥ cs/n − F_s for all s, with equality at least once,

and the definition of β̂ follows. Moreover, µ̂^c equals c up to its first break-point s_0, and equals µ̃ beyond that. Clearly, s_0 = argmin_s (F_s − cs/n) = n · argmin_{0 < x ≤ 1} (F(x) − cx).

3.3 Proof of Proposition 2

Define the step function Λ_n(x) = ⌊nx⌋/n. To prove the result, we invoke the Hungarian embedding for the limit of partial sums of i.i.d. random variables; see, e.g., Karatzas and Shreve (1991) for a discussion. Define the partial sums S_r = ε_1 + ··· + ε_r for r = 1, ..., n. Assume that the ε_i have a finite moment generating function and common variance σ². Then we can ascertain the existence of a standard Brownian motion B_n for each n, and a sequence of random variables T_n having the same distribution as S_n, such that

    sup_t |T_{nt} − σ√n B_n(t)| = O(log n)

almost surely. This result is also known as the strong approximation. Since we are looking at convergence in distribution, we treat T_n and S_n alike, and replace one by the other. We observe that

    F(x) = (1/n) Σ_{i ≤ nx} Y_i + O_p(1/n)
         = (1/n) Σ_{i ≤ nx} [µ(x_i) + ε_i] + O_p(1/n)
         = ∫_0^x µ(t) dΛ_n(t) + (1/n) S_{nΛ_n(x)} + O_p(1/n)
         = ∫_0^x µ(t) dt + (σ/√n) B_n(x) + O_p(1/n) + O((log n)/n)
         = ∫_0^x (µ(0) + tµ′(0) + (t²/2)µ″(t*)) dt + (σ/√n) B_n(x) + O_p(1/n) + O((log n)/n)
         = cx + ax² + o(x²) + (σ/√n) B_n(x) + O_p(1/n) + O((log n)/n)
         = cx + ax² + (σ/√n) B_n(x) + R_n(x),

where the remainder R_n(x) is negligible for our calculations. Define b = (σ²/(na²))^{1/3} and B*_n(t) = B_n(bt)/√b, which is again a standard Brownian motion. Then, since subtracting the constant c does not change the argmin,

    m̄_0 = argmin_{0 < x ≤ 1} (F(x) + α)/x
         = argmin_{0 < x ≤ 1} (F(x) − cx + α)/x
         = argmin_{0 < x ≤ 1} [ax² + (σ/√n) B_n(x) + α + R_n(x)]/x
         = argmin_{0 < bt ≤ 1} [ab²t² + (σ/√n) B_n(bt) + α]/(bt) + o_p(b)   (taking x = bt).

Using α = (σ⁴/(n²a))^{1/3}, so that ab² = σ√b/√n = α, we conclude

    m̄_0 = (σ²/(na²))^{1/3} · argmin_{0 < t ≤ (na²/σ²)^{1/3}} [t² + B*_n(t) + 1]/t + o_p(n^{-1/3}),

and therefore

    n^{1/3} m̄_0 ⇒ (σ²/a²)^{1/3} X_1.

Moreover,

    µ̂(0+) − c = min_{0 < x ≤ 1} (F(x) + α)/x − c
              = min_{0 < x ≤ 1} [ax² + (σ/√n) B_n(x) + α + R_n(x)]/x
              = ab · min_{0 < t ≤ (na²/σ²)^{1/3}} [t² + B*_n(t) + 1]/t + o_p(n^{-1/3}),

i.e.

    n^{1/3} (µ̂(0+) − c) ⇒ (aσ²)^{1/3} U_1.

Consider the constrained estimator now. First,

    s̄_0 = argmin_{0 < x ≤ 1} (F(x) − cx)
         = argmin_{0 < x ≤ 1} [ax² + (σ/√n) B_n(x) + R_n(x)]
         = (σ²/(na²))^{1/3} · argmin_t (t² + B*_n(t)) + o_p(n^{-1/3}),

so that n^{1/3} s̄_0 ⇒ (σ²/a²)^{1/3} X.

We now look at the behavior of µ̃, properly scaled, in the region between s̄_0 and m̄_0. We recall that µ̃(s) is the right-derivative of the greatest convex minorant of the cusum diagram F(x) at the point x = s. Approximating F(x) by ax² + cx + (σ/√n) B_n(x) on this interval, we write

    µ̃(s) = (d/dx) GCM(ax² + cx + (σ/√n) B_n(x)) |_{x=s}.

Using the scaling x = bt as before, and the fact that the GCM of a process plus a linear term is the same as the linear term plus the GCM of the process itself, we get

    µ̃(bt) − c = (1/b) (d/dt) GCM(ab²t² + (σ√b/√n) B*_n(t))
              = ab (d/dt) GCM(t² + B*_n(t))          (since ab² = σ√b/√n)
              = (aσ²/n)^{1/3} (d/dt) GCM(t² + B*_n(t)),

which yields

    n^{1/3} (µ̃(t(σ²/(na²))^{1/3}) − c) ⇒ (aσ²)^{1/3} Z(t).

The remainders in all these calculations are negligible, since we only look at points that are at a distance O_p(n^{-1/3}) from zero. Moreover, since the individual limit distributions are derived from the same Brownian motion B_n, the joint convergence follows.

3.4 Proof of Lemma 2

The following identities are useful in proving the lemma:

    (A)  Σ_{i=1}^{m_0} Y_i = Σ_{i=1}^{m_0} µ̃_i = m_0 µ̂_1 − nα,
    (B)  Σ_{i=1}^{s_0} Y_i = Σ_{i=1}^{s_0} µ̃_i = cs_0 − nβ̂,

and, if m_0 < s_0,

    (C1) Σ_{i=m_0+1}^{s_0} Y_i = Σ_{i=m_0+1}^{s_0} µ̃_i,
    (C2) Σ_{i=m_0+1}^{s_0} µ̃_i Y_i = Σ_{i=m_0+1}^{s_0} µ̃_i².

The first part of (A) follows from the properties of the PAVA estimator, which is constant over each block and equals the average of the Y-values in that block. The second part follows similarly, since m_0 is the first break-point for the data Ỹ_1, Y_2, ..., and therefore µ̂_1 = (Ỹ_1 + ··· + Y_{m_0})/m_0 = (Σ_{i=1}^{m_0} Y_i + nα)/m_0. (B) and (C1) follow similarly. (C2) follows by splitting the summation over blocks first, and then taking the sum of the Y's in each block. In case s_0 ≤ m_0, the equalities (C1) and (C2) remain valid, with the roles of m_0 and s_0 interchanged. Now,

    Q = min_{µ : µ_1 = c} [Σ_{i=1}^n (Y_i − µ_i)² − 2nαc] − min_µ [Σ_{i=1}^n (Y_i − µ_i)² − 2nαµ_1]
      = Σ_{i=1}^n (Y_i − µ̂^c_i)² − 2nαc − Σ_{i=1}^n (Y_i − µ̂_i)² + 2nαµ̂_1
      = Σ_{i=1}^{s_0} (Y_i − c)² + Σ_{i=s_0+1}^n (Y_i − µ̃_i)² − Σ_{i=1}^{m_0} (Y_i − µ̂_1)² − Σ_{i=m_0+1}^n (Y_i − µ̃_i)² + 2nα(µ̂(0+) − c).

We use the identities (A)-(C2) in the following two cases.

Case 1: m_0 < s_0.

    Q = 2nα(µ̂_1 − c) + Σ_{i=1}^{m_0} [(Y_i − c)² − (Y_i − µ̂_1)²] + Σ_{i=m_0+1}^{s_0} [(Y_i − c)² − (Y_i − µ̃_i)²]
      = 2nα(µ̂_1 − c) + 2(µ̂_1 − c) Σ_{i=1}^{m_0} Y_i + m_0 [c² − µ̂_1²] + Σ_{i=m_0+1}^{s_0} [c² − 2cµ̃_i − µ̃_i² + 2µ̃_i²]   (by (C1) and (C2))
      = 2m_0 (µ̂_1 − c) µ̂_1 + m_0 [c² − µ̂_1²] + Σ_{i=m_0+1}^{s_0} (µ̃_i − c)²   (by (A))
      = m_0 (µ̂(0+) − c)² + Σ_{i=m_0+1}^{s_0} (µ̃_i − c)².

Case 2: s_0 ≤ m_0.

    Q = 2nα(µ̂_1 − c) + Σ_{i=1}^{s_0} [(Y_i − c)² − (Y_i − µ̂_1)²] + Σ_{i=s_0+1}^{m_0} [(Y_i − µ̃_i)² − (Y_i − µ̂_1)²]
      = 2nα(µ̂_1 − c) + 2(µ̂_1 − c)(cs_0 − nβ̂) + s_0 [c² − µ̂_1²] − (m_0 − s_0) µ̂_1² + 2µ̂_1 Σ_{i=s_0+1}^{m_0} µ̃_i − Σ_{i=s_0+1}^{m_0} µ̃_i²   (by (B), (C1) and (C2))
      = 2(µ̂_1 − c)(nα + cs_0 − nβ̂ + Σ_{i=s_0+1}^{m_0} µ̃_i) + m_0 c² − m_0 µ̂_1² − Σ_{i=s_0+1}^{m_0} (µ̃_i − c)²
      = 2m_0 µ̂_1 (µ̂_1 − c) + m_0 [c² − µ̂_1²] − Σ_{i=s_0+1}^{m_0} (µ̃_i − c)²   (since Σ_{i=s_0+1}^{m_0} µ̃_i = m_0 µ̂_1 − nα − cs_0 + nβ̂, by (A) and (B))
      = m_0 (µ̂(0+) − c)² − Σ_{i=s_0+1}^{m_0} (µ̃_i − c)².

The lemma follows.

3.5 Proof of Theorem 1

To start with, let s_0 ≤ m_0. Then

    Σ_{i=s_0+1}^{m_0} (µ̃_i − c)² = n ∫_{s̄_0}^{m̄_0} (µ̃(s) − c)² dΛ_n(s) = n ∫_{s̄_0}^{m̄_0} (µ̃(s) − c)² ds + o_p(1),

since both m̄_0 and s̄_0 are of order n^{-1/3}. Similarly, for m_0 < s_0, we have Σ_{i=m_0+1}^{s_0} (µ̃_i − c)² = n ∫_{m̄_0}^{s̄_0} (µ̃(s) − c)² ds + o_p(1). From Proposition 2, (na²/σ²)^{1/3} (m̄_0, s̄_0) converges weakly to (X_1, X), where the limit variables are functions of the same standard Brownian motion B(t). Therefore, substituting s = bt with b = (σ²/(na²))^{1/3} as before,

    n ∫_{m̄_0}^{s̄_0} (µ̃(s) − c)² ds = n^{1/3} b ∫_{m̄_0/b}^{s̄_0/b} [n^{1/3} (µ̃(bt) − c)]² dt ⇒ (σ²/a²)^{1/3} (aσ²)^{2/3} ∫_{X_1}^{X} Z²(t) dt = σ² ∫_{X_1}^{X} Z²(t) dt.

The same result holds on the set {s̄_0 ≤ m̄_0}, with the roles interchanged. Finally, we recall that

    m_0 (µ̂(0+) − c)² = n^{1/3} m̄_0 · [n^{1/3} (µ̂(0+) − c)]² ⇒ (σ²/a²)^{1/3} X_1 (aσ²)^{2/3} U_1² = σ² X_1 U_1².

Therefore, using Lemma 2,

    Q ⇒ σ² (X_1 U_1² + ∫_{X_1}^{X} Z²(t) dt) = σ² J,

where the integral changes sign when X_1 > X.
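As a numerical sanity check on the algebra (an illustrative sketch with simulated data, not part of the paper), the closed form of Lemma 2 can be compared with Q computed directly from its definition, using the characterizations in Proposition 1 and Lemma 1:

```python
import numpy as np

def pava(y):
    """Pool-Adjacent-Violators: increasing least-squares fit of y."""
    sums, counts = [], []
    for v in y:
        s, c = float(v), 1
        while sums and sums[-1] / counts[-1] >= s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(k, s / k) for s, k in zip(sums, counts)])

# Simulated data with mu(x) = x, so the hypothesized value is c = mu(0+) = 0
rng = np.random.default_rng(1)
n, sigma, a, c = 200, 1.0, 0.5, 0.0
y = np.arange(1, n + 1) / n + sigma * rng.standard_normal(n)
alpha = (sigma**4 / (n**2 * a)) ** (1 / 3)

# Unconstrained penalized fit (Proposition 1) and the plain LSE
y_mod = y.copy()
y_mod[0] += n * alpha
mu_hat = pava(y_mod)
mu_tilde = pava(y)

# Break-points m_0, s_0 and the constrained fit of Lemma 1
F = np.cumsum(y) / n
r = np.arange(1, n + 1)
m0 = int(np.argmin(n * (F + alpha) / r)) + 1
s0 = int(np.argmin(F - c * r / n)) + 1
mu_c = mu_tilde.copy()
mu_c[:s0] = c

# Q directly from the two penalized sums of squares ...
Q_direct = (np.sum((y - mu_c) ** 2) - 2 * n * alpha * c) \
         - (np.sum((y - mu_hat) ** 2) - 2 * n * alpha * mu_hat[0])

# ... and from the closed form of Lemma 2
if m0 < s0:
    Q_closed = m0 * (mu_hat[0] - c) ** 2 + np.sum((mu_tilde[m0:s0] - c) ** 2)
else:
    Q_closed = m0 * (mu_hat[0] - c) ** 2 - np.sum((mu_tilde[s0:m0] - c) ** 2)
```

With continuous noise, ties in the argmins have probability zero and the two expressions agree up to floating-point error, since the identities (A)-(C2) are exact.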

Acknowledgments

The research was supported by NSF-SAMSI cooperative agreement No. DMS, during postdoctoral research at SAMSI. Discussions with Professor Moulinath Banerjee were instrumental in the inception of the problem. The author also acknowledges the contribution of Professor Michael Woodroofe as the doctoral thesis supervisor.

References

[1] Banerjee, M. and Wellner, J. A. (2001), Likelihood ratio tests for monotone functions, Ann. Statist. 29.

[2] Groeneboom, P. and Wellner, J. A. (2001), Computing Chernoff's distribution, Journal of Computational & Graphical Statistics 10.

[3] Karatzas, I. and Shreve, S. E. (1991), Brownian Motion and Stochastic Calculus (Springer-Verlag, New York).

[4] Meyer, M. and Woodroofe, M. B. (2000), On the degrees of freedom of shape restricted regression, Ann. Statist. 28.

[5] Pal, J. K. (2006), Statistical analysis and inference in shape-restricted problems with applications to astronomy (Doctoral Dissertation, University of Michigan).

[6] Robertson, T., Wright, F. T. and Dykstra, R. L. (1987), Order Restricted Statistical Inference (John Wiley, New York).

[7] Woodroofe, M. B. and Sun, J. (1993), A penalized maximum likelihood estimate of f(0+) when f is non-increasing, Statist. Sinica.

[8] Wright, F. T. (1981), The asymptotic behavior of monotone regression estimates, Ann. Statist. 9.


More information

Algebra I Notes Concept 00b: Review Properties of Integer Exponents

Algebra I Notes Concept 00b: Review Properties of Integer Exponents Algera I Notes Concept 00: Review Properties of Integer Eponents In Algera I, a review of properties of integer eponents may e required. Students egin their eploration of power under the Common Core in

More information

Math 216 Second Midterm 28 March, 2013

Math 216 Second Midterm 28 March, 2013 Math 26 Second Midterm 28 March, 23 This sample exam is provided to serve as one component of your studying for this exam in this course. Please note that it is not guaranteed to cover the material that

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

A Test for Order Restriction of Several Multivariate Normal Mean Vectors against all Alternatives when the Covariance Matrices are Unknown but Common

A Test for Order Restriction of Several Multivariate Normal Mean Vectors against all Alternatives when the Covariance Matrices are Unknown but Common Journal of Statistical Theory and Applications Volume 11, Number 1, 2012, pp. 23-45 ISSN 1538-7887 A Test for Order Restriction of Several Multivariate Normal Mean Vectors against all Alternatives when

More information

Statistical Inference with Monotone Incomplete Multivariate Normal Data

Statistical Inference with Monotone Incomplete Multivariate Normal Data Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful

More information

Mathematical Ideas Modelling data, power variation, straightening data with logarithms, residual plots

Mathematical Ideas Modelling data, power variation, straightening data with logarithms, residual plots Kepler s Law Level Upper secondary Mathematical Ideas Modelling data, power variation, straightening data with logarithms, residual plots Description and Rationale Many traditional mathematics prolems

More information

ORDER RESTRICTED STATISTICAL INFERENCE ON LORENZ CURVES OF PARETO DISTRIBUTIONS. Myongsik Oh. 1. Introduction

ORDER RESTRICTED STATISTICAL INFERENCE ON LORENZ CURVES OF PARETO DISTRIBUTIONS. Myongsik Oh. 1. Introduction J. Appl. Math & Computing Vol. 13(2003), No. 1-2, pp. 457-470 ORDER RESTRICTED STATISTICAL INFERENCE ON LORENZ CURVES OF PARETO DISTRIBUTIONS Myongsik Oh Abstract. The comparison of two or more Lorenz

More information

where x and ȳ are the sample means of x 1,, x n

where x and ȳ are the sample means of x 1,, x n y y Animal Studies of Side Effects Simple Linear Regression Basic Ideas In simple linear regression there is an approximately linear relation between two variables say y = pressure in the pancreas x =

More information

Diagnostics can identify two possible areas of failure of assumptions when fitting linear models.

Diagnostics can identify two possible areas of failure of assumptions when fitting linear models. 1 Transformations 1.1 Introduction Diagnostics can identify two possible areas of failure of assumptions when fitting linear models. (i) lack of Normality (ii) heterogeneity of variances It is important

More information

Mathematics Background

Mathematics Background UNIT OVERVIEW GOALS AND STANDARDS MATHEMATICS BACKGROUND UNIT INTRODUCTION Patterns of Change and Relationships The introduction to this Unit points out to students that throughout their study of Connected

More information

Universidad Carlos III de Madrid

Universidad Carlos III de Madrid Universidad Carlos III de Madrid Eercise 1 2 3 4 5 6 Total Points Department of Economics Mathematics I Final Eam January 22nd 2018 LAST NAME: Eam time: 2 hours. FIRST NAME: ID: DEGREE: GROUP: 1 (1) Consider

More information

Non-Linear Regression Samuel L. Baker

Non-Linear Regression Samuel L. Baker NON-LINEAR REGRESSION 1 Non-Linear Regression 2006-2008 Samuel L. Baker The linear least squares method that you have een using fits a straight line or a flat plane to a unch of data points. Sometimes

More information

Nonparametric estimation of log-concave densities

Nonparametric estimation of log-concave densities Nonparametric estimation of log-concave densities Jon A. Wellner University of Washington, Seattle Seminaire, Institut de Mathématiques de Toulouse 5 March 2012 Seminaire, Toulouse Based on joint work

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

STAT Sample Problem: General Asymptotic Results

STAT Sample Problem: General Asymptotic Results STAT331 1-Sample Problem: General Asymptotic Results In this unit we will consider the 1-sample problem and prove the consistency and asymptotic normality of the Nelson-Aalen estimator of the cumulative

More information

Model comparison and selection

Model comparison and selection BS2 Statistical Inference, Lectures 9 and 10, Hilary Term 2008 March 2, 2008 Hypothesis testing Consider two alternative models M 1 = {f (x; θ), θ Θ 1 } and M 2 = {f (x; θ), θ Θ 2 } for a sample (X = x)

More information

f 0 ab a b: base f

f 0 ab a b: base f Precalculus Notes: Unit Eponential and Logarithmic Functions Sllaus Ojective: 9. The student will sketch the graph of a eponential, logistic, or logarithmic function. 9. The student will evaluate eponential

More information

Higher. Polynomials and Quadratics. Polynomials and Quadratics 1

Higher. Polynomials and Quadratics. Polynomials and Quadratics 1 Higher Mathematics Polnomials and Quadratics Contents Polnomials and Quadratics 1 1 Quadratics EF 1 The Discriminant EF Completing the Square EF Sketching Paraolas EF 7 5 Determining the Equation of a

More information

Model Selection and Geometry

Model Selection and Geometry Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model

More information

Problem Selected Scores

Problem Selected Scores Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected

More information

Simple Examples. Let s look at a few simple examples of OI analysis.

Simple Examples. Let s look at a few simple examples of OI analysis. Simple Examples Let s look at a few simple examples of OI analysis. Example 1: Consider a scalar prolem. We have one oservation y which is located at the analysis point. We also have a ackground estimate

More information

1. The Multivariate Classical Linear Regression Model

1. The Multivariate Classical Linear Regression Model Business School, Brunel University MSc. EC550/5509 Modelling Financial Decisions and Markets/Introduction to Quantitative Methods Prof. Menelaos Karanasos (Room SS69, Tel. 08956584) Lecture Notes 5. The

More information

Nonconcave Penalized Likelihood with A Diverging Number of Parameters

Nonconcave Penalized Likelihood with A Diverging Number of Parameters Nonconcave Penalized Likelihood with A Diverging Number of Parameters Jianqing Fan and Heng Peng Presenter: Jiale Xu March 12, 2010 Jianqing Fan and Heng Peng Presenter: JialeNonconcave Xu () Penalized

More information

Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement

Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement Open Journal of Statistics, 07, 7, 834-848 http://www.scirp.org/journal/ojs ISS Online: 6-798 ISS Print: 6-78X Estimating a Finite Population ean under Random on-response in Two Stage Cluster Sampling

More information

CS 195-5: Machine Learning Problem Set 1

CS 195-5: Machine Learning Problem Set 1 CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of

More information

Nonparametric Inference In Functional Data

Nonparametric Inference In Functional Data Nonparametric Inference In Functional Data Zuofeng Shang Purdue University Joint work with Guang Cheng from Purdue Univ. An Example Consider the functional linear model: Y = α + where 1 0 X(t)β(t)dt +

More information

5.6 Asymptotes; Checking Behavior at Infinity

5.6 Asymptotes; Checking Behavior at Infinity 5.6 Asymptotes; Checking Behavior at Infinity checking behavior at infinity DEFINITION asymptote In this section, the notion of checking behavior at infinity is made precise, by discussing both asymptotes

More information

Introduction to Probability Theory for Graduate Economics Fall 2008

Introduction to Probability Theory for Graduate Economics Fall 2008 Introduction to Probability Theory for Graduate Economics Fall 008 Yiğit Sağlam October 10, 008 CHAPTER - RANDOM VARIABLES AND EXPECTATION 1 1 Random Variables A random variable (RV) is a real-valued function

More information

Lecture 3. Inference about multivariate normal distribution

Lecture 3. Inference about multivariate normal distribution Lecture 3. Inference about multivariate normal distribution 3.1 Point and Interval Estimation Let X 1,..., X n be i.i.d. N p (µ, Σ). We are interested in evaluation of the maximum likelihood estimates

More information

#A50 INTEGERS 14 (2014) ON RATS SEQUENCES IN GENERAL BASES

#A50 INTEGERS 14 (2014) ON RATS SEQUENCES IN GENERAL BASES #A50 INTEGERS 14 (014) ON RATS SEQUENCES IN GENERAL BASES Johann Thiel Dept. of Mathematics, New York City College of Technology, Brooklyn, New York jthiel@citytech.cuny.edu Received: 6/11/13, Revised:

More information

Estimation and Inference of Quantile Regression. for Survival Data under Biased Sampling

Estimation and Inference of Quantile Regression. for Survival Data under Biased Sampling Estimation and Inference of Quantile Regression for Survival Data under Biased Sampling Supplementary Materials: Proofs of the Main Results S1 Verification of the weight function v i (t) for the lengthbiased

More information

Central Limit Theorem ( 5.3)

Central Limit Theorem ( 5.3) Central Limit Theorem ( 5.3) Let X 1, X 2,... be a sequence of independent random variables, each having n mean µ and variance σ 2. Then the distribution of the partial sum S n = X i i=1 becomes approximately

More information

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate

More information

Linear Models A linear model is defined by the expression

Linear Models A linear model is defined by the expression Linear Models A linear model is defined by the expression x = F β + ɛ. where x = (x 1, x 2,..., x n ) is vector of size n usually known as the response vector. β = (β 1, β 2,..., β p ) is the transpose

More information

Estimation of a Two-component Mixture Model

Estimation of a Two-component Mixture Model Estimation of a Two-component Mixture Model Bodhisattva Sen 1,2 University of Cambridge, Cambridge, UK Columbia University, New York, USA Indian Statistical Institute, Kolkata, India 6 August, 2012 1 Joint

More information

DA Freedman Notes on the MLE Fall 2003

DA Freedman Notes on the MLE Fall 2003 DA Freedman Notes on the MLE Fall 2003 The object here is to provide a sketch of the theory of the MLE. Rigorous presentations can be found in the references cited below. Calculus. Let f be a smooth, scalar

More information

Upper Bounds for Stern s Diatomic Sequence and Related Sequences

Upper Bounds for Stern s Diatomic Sequence and Related Sequences Upper Bounds for Stern s Diatomic Sequence and Related Sequences Colin Defant Department of Mathematics University of Florida, U.S.A. cdefant@ufl.edu Sumitted: Jun 18, 01; Accepted: Oct, 016; Pulished:

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

Generalized Linear Model under the Extended Negative Multinomial Model and Cancer Incidence

Generalized Linear Model under the Extended Negative Multinomial Model and Cancer Incidence Generalized Linear Model under the Extended Negative Multinomial Model and Cancer Incidence Sunil Kumar Dhar Center for Applied Mathematics and Statistics, Department of Mathematical Sciences, New Jersey

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process

Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Applied Mathematical Sciences, Vol. 4, 2010, no. 62, 3083-3093 Sequential Procedure for Testing Hypothesis about Mean of Latent Gaussian Process Julia Bondarenko Helmut-Schmidt University Hamburg University

More information

Section 8.5. z(t) = be ix(t). (8.5.1) Figure A pendulum. ż = ibẋe ix (8.5.2) (8.5.3) = ( bẋ 2 cos(x) bẍ sin(x)) + i( bẋ 2 sin(x) + bẍ cos(x)).

Section 8.5. z(t) = be ix(t). (8.5.1) Figure A pendulum. ż = ibẋe ix (8.5.2) (8.5.3) = ( bẋ 2 cos(x) bẍ sin(x)) + i( bẋ 2 sin(x) + bẍ cos(x)). Difference Equations to Differential Equations Section 8.5 Applications: Pendulums Mass-Spring Systems In this section we will investigate two applications of our work in Section 8.4. First, we will consider

More information

Minimum distance tests and estimates based on ranks

Minimum distance tests and estimates based on ranks Minimum distance tests and estimates based on ranks Authors: Radim Navrátil Department of Mathematics and Statistics, Masaryk University Brno, Czech Republic (navratil@math.muni.cz) Abstract: It is well

More information

Poor presentation resulting in mark deduction:

Poor presentation resulting in mark deduction: Eam Strategies 1. Rememer to write down your school code, class and class numer at the ottom of the first page of the eam paper. 2. There are aout 0 questions in an eam paper and the time allowed is 6

More information

Regression, Ridge Regression, Lasso

Regression, Ridge Regression, Lasso Regression, Ridge Regression, Lasso Fabio G. Cozman - fgcozman@usp.br October 2, 2018 A general definition Regression studies the relationship between a response variable Y and covariates X 1,..., X n.

More information

Linear Models for Regression CS534

Linear Models for Regression CS534 Linear Models for Regression CS534 Example Regression Problems Predict housing price based on House size, lot size, Location, # of rooms Predict stock price based on Price history of the past month Predict

More information

Week 5 Quantitative Analysis of Financial Markets Modeling and Forecasting Trend

Week 5 Quantitative Analysis of Financial Markets Modeling and Forecasting Trend Week 5 Quantitative Analysis of Financial Markets Modeling and Forecasting Trend Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 :

More information

The Cameron-Martin-Girsanov (CMG) Theorem

The Cameron-Martin-Girsanov (CMG) Theorem The Cameron-Martin-Girsanov (CMG) Theorem There are many versions of the CMG Theorem. In some sense, there are many CMG Theorems. The first version appeared in ] in 944. Here we present a standard version,

More information

Section 3.3 Limits Involving Infinity - Asymptotes

Section 3.3 Limits Involving Infinity - Asymptotes 76 Section. Limits Involving Infinity - Asymptotes We begin our discussion with analyzing its as increases or decreases without bound. We will then eplore functions that have its at infinity. Let s consider

More information

On Simulations form the Two-Parameter. Poisson-Dirichlet Process and the Normalized. Inverse-Gaussian Process

On Simulations form the Two-Parameter. Poisson-Dirichlet Process and the Normalized. Inverse-Gaussian Process On Simulations form the Two-Parameter arxiv:1209.5359v1 [stat.co] 24 Sep 2012 Poisson-Dirichlet Process and the Normalized Inverse-Gaussian Process Luai Al Labadi and Mahmoud Zarepour May 8, 2018 ABSTRACT

More information

δ -method and M-estimation

δ -method and M-estimation Econ 2110, fall 2016, Part IVb Asymptotic Theory: δ -method and M-estimation Maximilian Kasy Department of Economics, Harvard University 1 / 40 Example Suppose we estimate the average effect of class size

More information