Change-Point Estimation

2.1 Asymptotic Quasistationary Bias

For notational convenience, we denote by $\{S_k'\}$, for $k \ge 0$, an independent copy of $\{S_k\}$, and write
\[ M = \sup_{0 \le k < \infty} S_k' \]
for the maximum of $\{S_k'\}$ when $S_0' = 0$, and
\[ \sigma_M = \arg\sup_{0 \le k < \infty} S_k' \]
for the corresponding maximum point. Conditional on $N > \nu$, depending on whether $\hat\nu > \nu$ or $\hat\nu < \nu$, we can write the bias as
\[ \hat\nu - \nu = (\hat\nu - \nu)\, I_{[\hat\nu > \nu]} - (\nu - \hat\nu)\, I_{[\hat\nu < \nu]}, \]
where $I_A$ denotes the indicator function of the event $A$. The notations are consistent with Chapter 1; in addition, we shall denote by $E_{\theta_1\theta_2}[\cdot]$ the expectation when both $P_{\theta_1}$ and $P_{\theta_2}$ are involved.

Pollak and Siegmund (1986) show that the quasistationary distribution of $T_\nu$ converges to the stationary distribution of $T_\nu$ as $d \to \infty$. That means
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} P_{\theta_1}(T_\nu < y \mid N > \nu) = \lim_{d\to\infty}\lim_{\nu\to\infty} P_{\theta_1}(T_\nu < y) = P_{\theta_1}(M < y), \]
as we note that $T_\nu$ is asymptotically equivalent to $M$ in distribution as $\nu \to \infty$. This implies that when the change occurs, $T_\nu$ is asymptotically distributed as $M$. Thus, the event $\{\hat\nu > \nu\}$ is asymptotically equivalent to the event $\{\tau_{-M} < \infty\}$, i.e., the random walk $S_n$ eventually goes below zero when started from $M$. Given $\hat\nu > \nu$, the bias $\hat\nu - \nu$ is asymptotically equal to $\tau_{-M}$ plus the length, say $\gamma_m$, of a CUSUM process $T_n$ starting from zero until its last zero point under $P_{\theta_2}(\cdot)$. Define $E[X; A] = E[X I_A]$. As $d \to \infty$, $\nu \to \infty$, we have
\[ E_\nu[\hat\nu - \nu;\, \hat\nu > \nu] \to E_{\theta_1\theta_2}[\tau_{-M} + \gamma_m;\, \tau_{-M} < \infty] = E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty] + E_{\theta_2}[\gamma_m]\, P_{\theta_1\theta_2}(\tau_{-M} < \infty). \]
We can see that $\gamma_m$ is a geometric summation of i.i.d. random variables distributed as $\{\tau_-;\, \tau_- < \infty\}$ with terminating probability $P_{\theta_2}(\tau_- = \infty)$. Thus, we have

Lemma 2.1:
\[ E_{\theta_2}\gamma_m = \frac{E_{\theta_2}[\tau_-;\, \tau_- < \infty]}{P_{\theta_2}(\tau_- = \infty)}. \]

On the other hand, given $\hat\nu < \nu$, by looking at $T_k$ backward in time starting from $\nu$, we see that $T_{\nu-k}$ behaves like a random walk $\{S_k'\}$ for $k \ge 0$ with maximum value $M$; thus $\nu - \hat\nu$ is asymptotically distributed as the maximum point $\sigma_M$. Hence, as $d \to \infty$, $\nu \to \infty$, we have
\[ E_\nu[\nu - \hat\nu;\, \hat\nu < \nu] \to E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty] = E_{\theta_1}\big[\sigma_M\, P_{\theta_2}(\tau_{-M} = \infty)\big]. \]
A similar argument is used in Srivastava and Wu (1999) for the continuous-time analog. Summarizing the results, we get the following asymptotic first-order result.

Theorem 2.1: As $\nu \to \infty$, $d \to \infty$,
\[ E_\nu[\hat\nu - \nu \mid N > \nu] \to E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty] + P_{\theta_1\theta_2}(\tau_{-M} < \infty)\, \frac{E_{\theta_2}[\tau_-;\, \tau_- < \infty]}{P_{\theta_2}(\tau_- = \infty)} - E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty], \]
\[ E_\nu[|\hat\nu - \nu| \mid N > \nu] \to E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty] + P_{\theta_1\theta_2}(\tau_{-M} < \infty)\, \frac{E_{\theta_2}[\tau_-;\, \tau_- < \infty]}{P_{\theta_2}(\tau_- = \infty)} + E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty]. \]
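Before turning to the second-order analysis, the limit in Theorem 2.1 can be checked by direct simulation. The sketch below is not from the book; it assumes normal observations and illustrative parameter values. It runs a CUSUM to the alarm time $N$, takes $\hat\nu$ as the last zero of the path before $N$, and averages the conditional bias over runs with $N > \nu$.

import numpy as np

rng = np.random.default_rng(0)

def cusum_bias(theta1=-0.5, theta2=0.5, nu=100, d=10.0, reps=2000):
    """Estimate E[nu_hat - nu | N > nu] and E[|nu_hat - nu| | N > nu]."""
    biases = []
    for _ in range(reps):
        t, last_zero, n = 0.0, 0, 0
        while t <= d and n < 100 * nu:           # run until the alarm time N
            n += 1
            mean = theta1 if n <= nu else theta2
            t = max(0.0, t + rng.normal(mean, 1.0))
            if t == 0.0:
                last_zero = n                    # most recent zero of T_n
        if t > d and n > nu:                     # keep only runs with N > nu
            biases.append(last_zero - nu)        # nu_hat - nu
    b = np.asarray(biases, dtype=float)
    return b.mean(), np.abs(b).mean()

print(cusum_bias())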

2.2 Second-Order Approximation

In this section, we derive the second-order approximation for the asymptotic bias given in Theorem 2.1, in order to investigate the bias numerically and also to see some local properties by further assuming that $\theta_1$ and $\theta_2$ approach zero at the same order. The main theoretical tool is the strong renewal theorem given in Section 1.3 and its applications to ladder variables. There are five terms in Theorem 2.1, which will be evaluated in a sequence of lemmas. Most of the results generalize those given in Wu (1999) for the fixed sample size case with normal observations. However, the technique used here is much more general, applies to any distribution of exponential family type, and raises additional difficulties due to the sequential sampling plan. The first lemma gives the approximation for the expected length $\tau_-$ when the random walk goes below zero.

Lemma 2.2: As $\theta_2 \to 0$,
\[ E_{\theta_2}[\tau_-;\, \tau_- < \infty] = \frac{-E S_{\tau_-}}{\mu_2}\exp\Big(\theta_2\rho_- + \theta_2^2\Big(\rho_-^{(2)} - \rho_-^2 - \frac{5\beta}{2E S_{\tau_-}}\Big)\Big)\big(1 + o(\theta_2^2)\big), \]
where $\beta$ is the constant defined in Section 1.3.

Proof: By using Wald's likelihood ratio identity, changing the measure $P_{\theta_2}(\cdot)$ to its conjugate $P_{\theta_1}(\cdot)$, we have
\[ E_{\theta_2}[\tau_-;\, \tau_- < \infty] = E_{\theta_1}\big[\tau_-\, e^{(\theta_2-\theta_1)S_{\tau_-}}\big], \]
which can be expanded in terms of $E_{\theta_1}S_{\tau_-}$ and $E_{\theta_1}(\tau_- - S_{\tau_-}/\mu_1)$. After some algebraic simplifications, we get the result.

Corollary 2.1: As $\theta_2 \to 0$,
\[ E_{\theta_2}\gamma_m = \frac{1}{2\mu_2^2}\, e^{(\beta/E S_{\tau_-})\theta_2}\big(1 + o(\theta_2)\big). \]

To evaluate $P_{\theta_1\theta_2}(\tau_{-M} < \infty)$, we follow a technique similar to that of Wu (1999), and only the main steps are provided. First, by conditioning on whether $M = 0$ or $M > 0$, we have
\[ P_{\theta_1\theta_2}(\tau_{-M} < \infty) = P_{\theta_2}(\tau_- < \infty)\, P_{\theta_1}(\tau_+ = \infty) + P_{\theta_1\theta_2}(\tau_{-M} < \infty;\, M > 0). \tag{2.1} \]
From the results of Chapter 1, we have
\[ P_{\theta_2}(\tau_- < \infty)\, P_{\theta_1}(\tau_+ = \infty) = -2\theta_1 E S_{\tau_+}\, e^{\theta_1\rho_+}\big(1 + 2\theta_2 E S_{\tau_-}\big) + o(\theta^2) = -2\theta_1 E S_{\tau_+}\, e^{\theta_1\rho_+ + 2\theta_2 E S_{\tau_-}} + o(\theta^2). \]
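Lemma 2.1 and Corollary 2.1 lend themselves to a quick numerical check. The following sketch is not from the book; it assumes normal increments with a positive drift $\theta_2$ and truncates at a finite horizon as a stand-in for infinity. It estimates both sides of Lemma 2.1 by simulation.

import numpy as np

rng = np.random.default_rng(1)
theta2, horizon, reps = 0.5, 2000, 5000

tau_sum = 0.0    # accumulates tau_- over paths where tau_- is finite
n_inf = 0        # counts paths with no descent below 0 (proxy for tau_- = infinity)
gamma_sum = 0.0  # accumulates gamma_m, the last zero time of the CUSUM path

for _ in range(reps):
    s = np.cumsum(rng.normal(theta2, 1.0, horizon))    # random walk S_n
    hit = np.nonzero(s <= 0.0)[0]
    if hit.size:
        tau_sum += hit[0] + 1          # tau_- = first n >= 1 with S_n <= 0
    else:
        n_inf += 1
    m = np.minimum.accumulate(np.minimum(s, 0.0))      # running min of (0, S_1, ..., S_n)
    zeros = np.nonzero(s == m)[0]                      # T_n = S_n - m_n is zero here
    gamma_sum += zeros[-1] + 1 if zeros.size else 0.0

# Lemma 2.1: E gamma_m = E[tau_-; tau_- < infinity] / P(tau_- = infinity)
print(gamma_sum / reps, (tau_sum / reps) / (n_inf / reps))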

For the second term in (2.1), by using Wald's likelihood ratio identity and changing the parameters $\theta_2$ to $\theta_1$ and $\theta_1$ to $\theta_2$ (recall that $\theta_1$ and $\theta_2$ are conjugate), we have
\[ P_{\theta_2}(\tau_{-x} < \infty) = E_{\theta_1} e^{(\theta_2-\theta_1)S_{\tau_{-x}}} = e^{-(\theta_2-\theta_1)x}\, E_{\theta_1} e^{-(\theta_2-\theta_1)R_{-x}} \]
and
\[ P_{\theta_1}(M > x) = P_{\theta_1}(\tau_x < \infty) = e^{-(\theta_2-\theta_1)x}\, E_{\theta_2} e^{-(\theta_2-\theta_1)R_x}, \]
where $R_x$ and $R_{-x}$ denote the overshoots at the first-passage times $\tau_x$ and $\tau_{-x}$. From Chapter 1 we know that
\[ E R_x - \rho_+ = O(e^{-rx}) \quad\text{and}\quad E R_{-x} - \rho_- = O(e^{-rx}) \]
as $x \to \infty$, for some $r > 0$. Now, writing $\Delta = \theta_2 - \theta_1$, we decompose
\[ P_{\theta_1\theta_2}(\tau_{-M} < \infty,\ M > 0) = -\int_0^\infty P_{\theta_2}(\tau_{-x} < \infty)\, dP_{\theta_1}(M > x) \tag{2.2} \]
into four terms, by substituting the two representations above and centering $E R_x$ and $E R_{-x}$ at $\rho_+$ and $\rho_-$. The first term in (2.2), which involves only the limiting overshoot constants, equals $\frac{1}{2}e^{-\Delta(\rho_+ - \rho_-)}$. The third term in (2.2) is approximately equal to
\[ \int_0^\infty (E R_{-x} - \rho_-)\, dx + o(\theta^2), \]
and the fourth term in (2.2) is
\[ \int_0^\infty (E R_{-x} - \rho_-)\, d(E R_x - \rho_+) + o(\theta^2), \]
both up to factors of order $\Delta$.

The second term in (2.2), by integrating by parts, can be approximated as
\[ e^{-\Delta\rho_-}\Big(P_{\theta_1}(\tau_+ < \infty) - e^{-\Delta\rho_+}\Big) + \int_0^\infty (E R_x - \rho_+)\, dx + o(\theta^2). \]
Combining the above approximations, we have

Lemma 2.3: As $\theta_1, \theta_2 \to 0$ at the same order, $P_{\theta_1\theta_2}(\tau_{-M} < \infty)$ admits a second-order expansion whose main terms are $\frac{1}{2}e^{-\Delta(\rho_+-\rho_-)}$ and $e^{-\Delta\rho_-}\big(1 - e^{-\Delta\rho_+}\big)$, together with the correction integrals
\[ \int_0^\infty (E R_{-x}-\rho_-)\, d(E R_x-\rho_+), \qquad \int_0^\infty (E R_{-x}-\rho_-)\, dx, \qquad \int_0^\infty (E R_x-\rho_+)\, dx, \]
with coefficients involving $\rho_-$ and $E S_{\tau_+}$, up to $o(\theta^2)$.

The evaluation of $E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty]$ is similar.

Lemma 2.4: As $\theta_1, \theta_2 \to 0$ at the same order, $E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty]$ admits an expansion of order $O(1/\theta^2)$ whose terms involve $1/\mu_1$, $\rho_-$, $\rho_+$, $E S_{\tau_+}$, and the same correction integrals as in Lemma 2.3, up to $o(1)$.

Proof: Again, depending on whether $\{M = 0\}$ or $\{M > 0\}$, we have
\[ E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty] = E_{\theta_2}[\tau_-;\, \tau_- < \infty]\, P_{\theta_1}(\tau_+ = \infty) - \int_0^\infty E_{\theta_2}[\tau_{-x};\, \tau_{-x} < \infty]\, dP_{\theta_1}(M > x). \tag{2.3} \]
The first term in (2.3) can be approximated, by using Lemma 2.2 and the expansion of $P_{\theta_1}(\tau_+ = \infty)$, as
\[ E_{\theta_2}[\tau_-;\, \tau_- < \infty]\, P_{\theta_1}(\tau_+ = \infty) = \frac{-\mu_1}{\mu_2} + o(1). \]
For the second term in (2.3), by conditioning on the value of $M$ given $M > 0$, we use techniques similar to those in Lemma 2.3 and write it as

\[ \int_0^\infty E_{\theta_2}[\tau_{-x};\, \tau_{-x} < \infty]\, dP_{\theta_1}(M > x) = \int_0^\infty E_{\theta_1}\big[\tau_{-x}\, e^{-\Delta(x+R_{-x})}\big]\, d\big(E_{\theta_2} e^{-\Delta(x+R_x)}\big) \]
\[ = \int_0^\infty E_{\theta_1}(\tau_{-x})\, e^{-\Delta(x-\rho_-)}\, d\big(E_{\theta_2} e^{-\Delta(x+R_x)}\big) + o(1) \]
\[ = \frac{1}{-\mu_1}\Big[\int_0^\infty (x + E R_{-x})\, e^{-\Delta(x-\rho_-)}\, d\, e^{-\Delta(x+\rho_+)} - \int_0^\infty (x + E R_{-x})\, d(E R_x - \rho_+)\Big] + o(1). \tag{2.4} \]
The first term in (2.4) is approximately equal to $e^{-\Delta(\rho_+-\rho_-)}/(-\mu_1)$ times an explicit function of $\rho_-$ and $\Delta$, plus
\[ \frac{1}{-\mu_1}\int_0^\infty (E R_{-x} - \rho_-)\, dx + o(1). \]
The second term in (2.4) equals
\[ \frac{1}{-\mu_1}\Big[-\rho_-\big(E S_{\tau_+} - \rho_+\big) - \int_0^\infty (E R_x - \rho_+)\, dx + \int_0^\infty (E R_{-x} - \rho_-)\, d(E R_x - \rho_+)\Big], \]
by integration by parts, using $E R_0 = E S_{\tau_+}$. Combining the above results, we complete the proof.

Finally, we evaluate $E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty]$. We first write
\[ E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty] = E_{\theta_1}\sigma_M - E_{\theta_1}\big[\sigma_M\, E_{\theta_2} e^{-\Delta(M+R_M)}\big]. \tag{2.5} \]
For the second term in (2.5), we write
\[ E_{\theta_1}\big[\sigma_M\, E_{\theta_2} e^{-\Delta(M+R_M)}\big] = E_{\theta_1}\big[\sigma_M e^{-\Delta M}\big]\, e^{-\Delta\rho_-} + E_{\theta_1}\big[\sigma_M e^{-\Delta M}\big(E_{\theta_2} e^{-\Delta R_M} - e^{-\Delta\rho_-}\big)\big]. \]
To evaluate $E_{\theta_1}[\sigma_M e^{-\Delta M}]$, we note that under $P_{\theta_1}(\cdot)$,
\[ (\sigma_M, M) \stackrel{d}{=} \big(\tau_+^{(K)},\, S_{\tau_+^{(K)}}\big), \]
where $\stackrel{d}{=}$ means equivalence in distribution, $\tau_+^{(k)}$ is the $k$th ascending ladder epoch defined in Section 1.3 (with $\tau_+^{(0)} = 0$), and $K = \sup\{k \ge 0 :\, \tau_+^{(k)} < \infty\}$. Note that $K$ is a geometric random variable with
\[ P(K = k) = (1-p)^k\, p \]

for $k \ge 0$, with terminating probability $p = P_{\theta_1}(\tau_+ = \infty)$. Given $K = k$, $(\tau_+^{(k)}, S_{\tau_+^{(k)}})$ is, in distribution, equal to the sum of $k$ i.i.d. random variables distributed as $(\tau_+, S_{\tau_+})$ given $\tau_+ < \infty$. Thus,
\[ E_{\theta_1}\big[\sigma_M e^{-\Delta M}\big] = E_{\theta_1}\Big[\tau_+^{(K)}\exp\big(-\Delta S_{\tau_+^{(K)}}\big)\Big] = \sum_{k=0}^\infty E_{\theta_1}\Big[\tau_+^{(k)}\exp\big(-\Delta S_{\tau_+^{(k)}}\big);\, K = k\Big] \]
\[ = \sum_{k=0}^\infty k\, E_{\theta_1}\big[\tau_+ e^{-\Delta S_{\tau_+}};\, \tau_+ < \infty\big]\Big(E_{\theta_1}\big[e^{-\Delta S_{\tau_+}};\, \tau_+ < \infty\big]\Big)^{k-1} P_{\theta_1}(\tau_+ = \infty) \]
\[ = \frac{E_{\theta_1}\big[\tau_+ e^{-\Delta S_{\tau_+}};\, \tau_+ < \infty\big]\, P_{\theta_1}(\tau_+ = \infty)}{\Big(1 - E_{\theta_1}\big[e^{-\Delta S_{\tau_+}};\, \tau_+ < \infty\big]\Big)^2}. \]
The next two lemmas give the approximations for the related quantities.

Lemma 2.5: As $\theta_1, \theta_2 \to 0$ at the same order, $E_{\theta_1}\big[\exp(-\Delta S_{\tau_+});\, \tau_+ < \infty\big]$ admits a second-order expansion in terms of $E S_{\tau_+}$, $\rho_+$, $\rho_+^{(2)}$, and the constant $\alpha$ of Section 1.3.

Proof: Using Wald's likelihood ratio identity, we have
\[ E_{\theta_1}\big[\exp(-\Delta S_{\tau_+});\, \tau_+ < \infty\big] = E_{\theta_2}\, e^{2(\theta_1-\theta_2)S_{\tau_+}}. \]
A Taylor series expansion following the lines of Lemma 2.2 gives the result after some algebraic simplification. In particular, by letting $\theta_2 = \theta_1$, we have
\[ P_{\theta_1}(\tau_+ = \infty) = -2\theta_1 E S_{\tau_+}\exp\Big(\theta_1\rho_+ + \theta_1^2\Big(\rho_+^{(2)} + \rho_+^2 - \frac{\alpha}{E S_{\tau_+}}\Big)\Big)\big(1 + o(\theta_1^2)\big). \]
The following lemma can be proved in the same way as Lemma 2.2, and its proof is omitted.
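The geometric-sum representation of $(\sigma_M, M)$ can also be checked directly. The sketch below is an illustration only, with normal increments and hypothetical parameter values; it computes $\sigma_M$ once by reading the argmax off a simulated path, and once by summing i.i.d. copies of $\tau_+$ until the first infinite ladder epoch. The two sample means should agree.

import numpy as np

rng = np.random.default_rng(2)
theta1, horizon, reps = -0.5, 2000, 2000

direct = []                  # sigma_M read directly off the path (argmax)
for _ in range(reps):
    s = np.cumsum(rng.normal(theta1, 1.0, horizon))
    direct.append(0 if s.max() <= 0.0 else int(s.argmax()) + 1)

ladder = []                  # sigma_M rebuilt as a geometric sum of tau_+'s
for _ in range(reps):
    total = 0
    while True:
        s, n = 0.0, 0
        while s <= 0.0 and n < horizon:     # wait for the next ladder epoch
            n += 1
            s += rng.normal(theta1, 1.0)
        if s <= 0.0:                        # tau_+ = infinity: the maximum is final
            break
        total += n                          # add this tau_+ (given it is finite)
    ladder.append(total)

print(np.mean(direct), np.mean(ladder))     # the two means should agree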

Lemma 2.6: As $\theta_1, \theta_2 \to 0$ at the same order, $E_{\theta_1}\big[\tau_+ e^{-\Delta S_{\tau_+}};\, \tau_+ < \infty\big]$ equals $-E S_{\tau_+}/\mu_1$ times an exponential correction of the same form as in Lemma 2.5, with $\alpha/E S_{\tau_+}$ entering at the second order. In particular, by letting $\theta_2 = \theta_1$, we have
\[ E_{\theta_1}[\tau_+;\, \tau_+ < \infty] = \frac{-E S_{\tau_+}}{\mu_1}\exp\Big(\theta_1\rho_+ + \theta_1^2\Big(\rho_+^{(2)} + \rho_+^2 - \frac{5\alpha}{2E S_{\tau_+}}\Big)\Big)\big(1 + o(\theta_1^2)\big). \]
On the other hand, by conditioning on the value of $M$, we have
\[ E_{\theta_1}\big[\sigma_M e^{-\Delta M}\big(E_{\theta_2} e^{-\Delta R_M} - e^{-\Delta\rho_-}\big)\big] = -\Delta\, E_{\theta_1}\big[\sigma_M\,(E R_{-M} - \rho_-)\big]\,(1 + o(1)) \tag{2.6} \]
\[ = -\Delta \int_0^\infty E_{\theta_1}[\sigma_x \mid M = x]\,(E R_{-x} - \rho_-)\, dP_{\theta_1}(M > x)\,(1 + o(1)). \]
Since $E_{\theta_1}[S_{\tau_+};\, \tau_+ < \infty] = E S_{\tau_+}(1 + o(1))$ as $\theta_1 \to 0$, we have, given $M = x$, $K = O_p(x)$, where $O_p(\cdot)$ means of the same order in probability. This implies
\[ E_{\theta_1}[\sigma_x \mid M = x] = O\Big(\frac{x}{-\mu_1}\Big). \]
Thus, (2.6) is of the order $O(\theta)$. By letting $\theta_2 = \theta_1$, the first term of (2.5) can be evaluated by combining Lemmas 2.5 and 2.6.

Lemma 2.7: As $\theta_1 \to 0$,
\[ E_{\theta_1}\sigma_M = \frac{E_{\theta_1}[\tau_+;\, \tau_+ < \infty]}{P_{\theta_1}(\tau_+ = \infty)} = \frac{1}{2\mu_1^2}\, e^{\theta_1(\alpha/E S_{\tau_+})}\big(1 + o(\theta_1)\big). \]

Finally, we have the following result:

Lemma 2.8: As $\theta_1, \theta_2 \to 0$ at the same order, $E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty]$ equals
\[ \frac{1}{2\mu_1^2}\, e^{\theta_1(\alpha/E S_{\tau_+})} \]
minus a correction term of the same order, whose exponent involves $\gamma(\theta_2-\theta_1)/3$, $\rho_- + \rho_+$, and $\alpha/E S_{\tau_+}$, up to $\big(1 + o(\theta^2)\big)$.

Combining Lemmas 2.2 through 2.8 with the results of Chapter 1, we have the following local second-order expansion for the asymptotic bias of $\hat\nu$.

Theorem 2.2: As $\theta_1, \theta_2 \to 0$ at the same order, $\lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu]$ is given by an explicit second-order expansion whose terms involve $1/\mu_1$, $1/\mu_2$, the factors $e^{-\Delta\rho_-}$ and $e^{-\Delta\rho_+}$, the ratio $\theta_1/\theta_2$, and the constants $\beta/E S_{\tau_-}$, $\rho_-$, $\rho_+$, $\rho_-^{(2)}$, and $\alpha/E S_{\tau_+}$; it is obtained by substituting Lemmas 2.2 through 2.8 into Theorem 2.1, and we omit the lengthy display here. A similar result can be obtained for the absolute bias $E_\nu[|\hat\nu - \nu| \mid N > \nu]$. In particular, when $\theta_1$ and $\theta_2$ are conjugate, i.e., $c(\theta_1) = c(\theta_2)$, we have the following result.

Corollary 2.2: As $-\theta_1, \theta_2 \to 0$ with $c(\theta_1) = c(\theta_2)$ and $\theta = \theta_2$,
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu] = \frac{\gamma}{4\theta} + \frac{7\gamma^2}{288} - \frac{\kappa}{6} - \frac{\beta}{E S_{\tau_-}} - \Big(\rho_-^{(2)} + \rho_+ - \frac{\alpha}{E S_{\tau_+}}\Big) + o(1), \]
where $\gamma$ and $\kappa$ denote the skewness and kurtosis of $F_0$.

Proof: The expansion follows by simplifying Theorem 2.2 under the conjugacy constraint. For this, we note that as $\theta \to 0$,
\[ \mu_2 = \theta\, e^{\theta(\gamma/2) + \theta^2(\kappa/6 - \gamma^2/8)}\,(1 + o(\theta^2)), \]
\[ \theta_2 - \theta_1 = 2\theta\, e^{\theta(\gamma/6) + \theta^2(\gamma^2/24)}\,(1 + o(\theta^2)), \]
\[ -\theta_1 = \theta\, e^{\theta(\gamma/3) + \theta^2(\gamma^2/18)}\,(1 + o(\theta^2)), \]
\[ -\mu_1 = \theta\, e^{-\theta(\gamma/6) + \theta^2(\kappa/6 - 17\gamma^2/72)}\,(1 + o(\theta^2)). \]
Some tedious simplifications give the expected results.

Therefore, the local bias of $\hat\nu$ is largely affected by the skewness $\gamma$. If $\gamma > 0$, the local bias becomes positive. If $F_0(x)$ is symmetric, the $\gamma$ terms vanish and Corollary 2.2 reduces to
\[ E_\nu[\hat\nu - \nu \mid N > \nu] \to -\frac{7\kappa}{48} - \frac{\beta}{E S_{\tau_-}} + o(1), \]
which, surprisingly, is a nonzero constant, in contrast to the fixed sample size case treated for the normal distribution in the next section.

2.3 Two Examples

In this section, we present two cases: the normal and exponential distributions.

2.3.1 Normal Distribution

From the normal-distribution example of Chapter 1, the approximations for the related quantities simplify to
\[ E_{\theta_2}(\gamma_m) = \frac{1}{2\theta_2^2} - \frac{1}{4} + o(1), \]
\[ P_{\theta_1\theta_2}(\tau_{-M} < \infty) = \frac{-\theta_1}{\theta_2 - \theta_1}\, e^{-2\theta_2(\theta_2-\theta_1)} + o(\theta^2), \]
\[ E_{\theta_1\theta_2}[\tau_{-M};\, \tau_{-M} < \infty] = \frac{-\theta_1}{2\theta_2(\theta_2 - \theta_1)^2}\, e^{-2\theta_2(\theta_2-\theta_1)} + o(1), \]
\[ E_{\theta_1\theta_2}[\sigma_M;\, \tau_{-M} = \infty] = \frac{1}{2\theta_1^2} - \frac{1}{2(\theta_2 - \theta_1)^2} + o(1). \]
Summarizing the above results, we have the following corollary.

Corollary 2.3: As $\theta_1, \theta_2 \to 0$ at the same order, the limiting bias $\lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu]$ and absolute bias $\lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[|\hat\nu - \nu| \mid N > \nu]$ are obtained by combining the four approximations above as in Theorem 2.1. At $-\theta_1 = \theta_2 = \theta$,
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu] = -\frac{1}{8} + o(1), \]
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[|\hat\nu - \nu| \mid N > \nu] = \frac{3}{4\theta^2} - \frac{1}{8} + o(1). \]
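The limits in Corollary 2.3 can be compared with simulation. A minimal sketch, reusing the hypothetical cusum_bias helper defined in the sketch of Section 2.1, with illustrative values of $\theta$ and $d$:

# requires the cusum_bias helper from the earlier sketch
for theta in (0.25, 0.5):
    bias, abs_bias = cusum_bias(theta1=-theta, theta2=theta, nu=50, d=10.0)
    print(f"theta={theta:.2f}: bias={bias:+.3f} (limit -1/8), "
          f"abs={abs_bias:.3f} (approx {3/(4*theta**2) - 1/8:.3f})")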

Remark: Wu (1999) considered the bias of the estimator in the large fixed sample size case, which corresponds to the maximum point of a two-sided random walk, and obtained
\[ E_\nu[\hat\nu - \nu] = \frac{1}{\theta_2^2} - \frac{1}{\theta_1^2} + \frac{\theta_1 + \theta_2}{4\theta_1\theta_2} + o(1); \]
and at $-\theta_1 = \theta_2 = \theta$,
\[ E_\nu[|\hat\nu - \nu|] = \frac{3}{4\theta^2} - \frac{1}{4} + o(1). \]
Srivastava and Wu (1999) also considered the continuous-time analog in the sequential sampling plan case, which gives
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu] = \frac{1}{\theta_2^2} - \frac{1}{\theta_1^2}, \]
which is zero at $-\theta_1 = \theta_2$; and at $-\theta_1 = \theta_2 = \theta$,
\[ \lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[|\hat\nu - \nu| \mid N > \nu] = \frac{3}{4\theta^2}. \]
We see that the sequential sampling plan has a local effect at the second order in the discrete-time case, and that the effect is negative at $-\theta_1 = \theta_2$.

To show the accuracy of the second-order approximations, we conduct a simple simulation study. For $d = 10$ and $\theta_1 = -0.25, -0.5$, we let $\nu = 50$ and $100$. One thousand replications of the CUSUM chart are simulated for several values of $\theta_2$. Only those runs with $N > \nu$ are used for calculating $\hat\nu$. Table 2.1 gives the comparison between the simulated results and the approximated values; the approximations from Corollary 2.3 are given in parentheses.

[Table 2.1: Biases in the Normal Case (simulated values of $E[\hat\nu - \nu \mid N > \nu]$ and $E[|\hat\nu - \nu| \mid N > \nu]$, with second-order approximations in parentheses); numerical entries omitted.]

We see that the approximations are generally good. The case $\nu = 100$ shows quite satisfactory results. Also, we see that the approximations for the case $\theta_1 = -0.5$ perform better than those for the case $\theta_1 = -0.25$. The reason is that our results are obtained by first letting $d, \nu \to \infty$ and then letting $\theta_1, \theta_2 \to 0$. The effect of $\nu$ is very small. However, as the local absolute bias is of order $O(1/\theta^2)$ at $-\theta_1 = \theta_2 = \theta$, which approaches $\infty$ as $\theta \to 0$, there could be an error term of order, say, $O(1/(d\theta))$ for finitely large $d$. Thus, the approximation may perform better for $-\theta_1 = \theta_2 = 0.5$. The case when $\theta d$ approaches a constant, called the moderate deviation case as considered by Chang, definitely deserves a future study.

2.3.2 Exponential Distribution

Here, we are interested in the quick detection of an increase in the mean of an exponential distribution from an initial mean. From the exponential example of Chapter 1 and Theorem 2.2, we have the following result:

Corollary 2.4: As $\theta_1, \theta_2 \to 0$ at the same order, $\lim_{d\to\infty}\lim_{\nu\to\infty} E_\nu[\hat\nu - \nu \mid N > \nu]$ is an explicit combination of $1/\mu_1$, $1/\mu_2$, $\theta_1$, $\theta_2$, and exponential constants arising from the ladder variables of $f_0$. At $-\theta_1 = \theta_2 = \theta$, the limit grows like $1/(2\theta)$ as $\theta \to 0$, in agreement with the local expansion $\gamma/(4\theta)$ of Corollary 2.2 with skewness $\gamma = 2$.

We see that, due to the asymmetry of $F_0(x)$, the local bias becomes positive when $\theta$ is small.

2.4 Case Studies

In this section, we conduct three classical case studies to illustrate the applications.

2.4.1 Nile River Data (Normal Case)

The following Nile River flow data are reproduced from Cobb (1978); the data are read in columns.

[Table 2.2: Nile River Flow by year; numerical entries omitted.]

From a scatterplot, we see an obvious change around the year 1900. A normal Q-Q plot shows that normality is roughly true. As in Cobb (1978), we assume independent normal observations. Assume the pre-change mean is $m_0 = 1100$ and the post-change mean is $m_1 = 850$, with change magnitude $m_0 - m_1 = 250$ and standard deviation $125$. To apply the CUSUM procedure, we first standardize the data by subtracting from every observation $m_1 + (m_0 - m_1)/2 = 975$, the average of the pre-change and post-change means, then switching the sign in order to make the change positive, and finally dividing by $125$. After these transformations, we obtain standardized observations $x_i$ for $i = 1, 2, \ldots, 100$ such that $\theta_1 = -1$, $\theta_2 = 1$, and $\sigma = 1$.

Now we form the CUSUM process by letting $T_0 = 0$ and $T_i = \max(0, T_{i-1} + x_i)$ for $i = 1, \ldots, 100$. The calculated values are reported as follows:

[CUSUM values $T_1, \ldots, T_{100}$ omitted.]

The reason we report the CUSUM process instead of drawing a graph is to inspect the path directly rather than guessing from the graph. By looking at the CUSUM process, we see that in the fixed sample size case, the maximum likelihood ratio estimator (the last zero point of the CUSUM process) is $\hat\nu_n = 28$, which corresponds to the year 1898. The estimated pre-change mean, the average of the standardized observations up to 1898, is $-0.98$; the estimated post-change mean is $1.0$; the pre-change standard deviation is $0.8$ and the post-change standard deviation is $1.0$. We see that the assumptions are roughly correct except for a slight discrepancy in the pre-change standard deviation.

Let us now look at the estimator from the sequential sampling point of view, which is more natural for monitoring. We see that as long as the threshold $d$ is taken larger than $1.0$ and less than $7$, a change is always signaled, and the maximum likelihood estimator is $\hat\nu = 28$ no matter what the value of $d$ is. Thus, we see that the estimator $\hat\nu$ is stable to the selection of the threshold $d$.
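For reference, the standardization and the CUSUM recursion just described take only a few lines. This is a sketch under stated assumptions: the file nile.txt is hypothetical and is assumed to hold the 100 annual flows for 1871 to 1970.

import numpy as np

flow = np.loadtxt("nile.txt")              # hypothetical file with the 1871-1970 flows
x = -(flow - 975.0) / 125.0                # subtract 975, flip sign, divide by 125

t, last_zero = 0.0, 0
for i, xi in enumerate(x, 1):
    t = max(0.0, t + xi)                   # T_i = max(0, T_{i-1} + x_i)
    if t == 0.0:
        last_zero = i                      # last zero point: candidate nu_hat
print(last_zero, 1870 + last_zero)         # expect 28, i.e., the year 1898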

2.4.2 British Coal Mining Disasters (Exponential Case)

The following table gives the intervals, in days, between successive coal mining disasters in Great Britain, a disaster being defined as one involving the death of 10 or more men. The data are taken from Maguire, Pearson, and Wynn (1951) and have appeared in many places, most notably Cox and Lewis (1966). The data are read in columns.

[Table 2.3: British Coal Mining Disaster Intervals; numerical entries omitted.]

We first explain the transformation of the data in the exponential case in order to fit them into our framework. Suppose the original data $\{Y_i\}$ follow $\exp(\lambda_1)$ for $i \le \nu$ and $\exp(\lambda_2)$ for $i > \nu$, where $\lambda_1 > \lambda_2$ are the corresponding hazard rates. Define
\[ \lambda_0 = \frac{\lambda_1 - \lambda_2}{\ln(\lambda_1/\lambda_2)}, \]
and make the data transformation
\[ X_i = \lambda_0 Y_i - 1, \qquad i = 1, 2, \ldots. \]
Denote, for $x \ge -1$,
\[ f_0(x) = e^{-(x+1)}; \]
then
\[ f_\theta(x) = \exp(\theta x - c(\theta))\, f_0(x) \]
satisfies the standardized model with
\[ c(\theta) = -(\theta + \ln(1 - \theta)) \]
and
\[ \theta_i = 1 - \lambda_i/\lambda_0 \quad \text{for } i = 1, 2, \]
such that $c(\theta_1) = c(\theta_2)$.

A scatterplot, or looking at the data directly, shows that there is a change around observation 50. So we take the mean of observations 1 to 50 as the pre-change mean and the mean of observations 51 to 109 as the post-change mean, which gives $\lambda_1 \approx 1/120$, $\lambda_2 \approx 1/335$, and hence $\lambda_0 \approx 0.005$. After the transformation $x_i = \lambda_0 y_i - 1$ for $i = 1, \ldots, 109$, the data fit the standardized model with $\theta_1 = -0.55$ and $\theta_2 = 0.43$. Next, we form the CUSUM process by calculating $T_n$ from the $x_i$:

[CUSUM values $T_1, \ldots, T_{109}$ omitted.]

By inspecting the CUSUM process, we see that the estimator is $\hat\nu = 46$ no matter what the threshold $d$ is (at least 5 and at most 14). Again, we see that the CUSUM procedure is very reliable in terms of the change-point estimator.
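The exponential-case transformation can be sketched in the same way. The input file name is hypothetical; the split at observation 50 follows the discussion above, and everything else is illustrative.

import numpy as np

y = np.loadtxt("coal_intervals.txt")       # hypothetical file, 109 intervals in days

lam1 = 1.0 / y[:50].mean()                 # pre-change hazard rate (obs. 1-50)
lam2 = 1.0 / y[50:].mean()                 # post-change hazard rate (obs. 51-109)
lam0 = (lam1 - lam2) / np.log(lam1 / lam2)

x = lam0 * y - 1.0                         # standardized observations X_i
theta1, theta2 = 1.0 - lam1 / lam0, 1.0 - lam2 / lam0

t, last_zero = 0.0, 0
for i, xi in enumerate(x, 1):
    t = max(0.0, t + xi)
    if t == 0.0:
        last_zero = i
print(theta1, theta2, last_zero)           # the text reports nu_hat = 46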

2.4.3 IBM Stock Price (Variance Change)

Suppose the original independent observations $\{Y_i\}$ follow $N(0, \sigma_1^2)$ for $i \le \nu$ and $N(0, \sigma_2^2)$ for $i > \nu$. For a reference value $\sigma_0^2$ of $\sigma_1^2$, we define
\[ \lambda_0 = \Big(\frac{1}{\sigma_1^2} - \frac{1}{\sigma_2^2}\Big)\Big/\ln\frac{\sigma_2^2}{\sigma_1^2} \]
and make the data transformation
\[ X_i = \frac{1}{\sqrt{2}}\big(\lambda_0 Y_i^2 - 1\big). \]
Let $f_0(x)$ be the density function of $(\chi_1^2 - 1)/\sqrt{2}$, where $\chi_1^2$ is the standard chi-square random variable with one degree of freedom. Then, from Example 1.3, we know
\[ c(\theta) = -\frac{\theta}{\sqrt{2}} - \frac{1}{2}\ln\big(1 - \sqrt{2}\,\theta\big). \]
Define
\[ \theta_1 = \frac{1}{\sqrt{2}}\Big(1 - \frac{1/\sigma_1^2}{\lambda_0}\Big), \qquad \theta_2 = \frac{1}{\sqrt{2}}\Big(1 - \frac{1/\sigma_2^2}{\lambda_0}\Big), \]
and, generally,
\[ \theta = \frac{1}{\sqrt{2}}\Big(1 - \frac{1/\sigma^2}{\lambda_0}\Big). \]

It can be verified that
\[ c'(\theta) = \frac{1}{\sqrt{2}}\big(\lambda_0\sigma^2 - 1\big) \]
and that $c(\theta_1) = c(\theta_2)$. Thus, the standardized observations $\{X_i\}$ fit our model.

Remark: More generally, suppose the original independent observations $\{Y_i\}$ follow $(1 + \epsilon_1)\chi_p^2$ for $i \le \nu$ and $(1 + \epsilon_2)\chi_p^2$ for $i > \nu$, where $\chi_p^2$ is the standard chi-square random variable with $p$ degrees of freedom. Then, by using Example 1.3, we define
\[ \lambda_0 = \Big(\frac{1}{1+\epsilon_1} - \frac{1}{1+\epsilon_2}\Big)\Big/\ln\frac{1+\epsilon_2}{1+\epsilon_1} \]
and make the transformation
\[ X_i = \frac{1}{\sqrt{2p}}\big(\lambda_0 Y_i - p\big). \]
For $f_0(x)$ defined as in Example 1.3, let
\[ \theta = \sqrt{\frac{p}{2}}\Big(1 - \frac{1}{(1+\epsilon)\lambda_0}\Big), \]
and define $\theta_1$ and $\theta_2$ correspondingly. Then we can verify that $c(\theta_1) = c(\theta_2)$.

The following data set consists of the IBM daily closing stock prices from May 17, 1961 through November 2, 1962 [Box, Jenkins, and Reinsel (1994)], a total of 369 observations. The data are read in rows. We use the geometric normal random walk model and find that it fits the data quite well, with quite small autocorrelations. After taking differences of the logarithms, the plot of the 368 differenced values shows an obvious increase in variance roughly around the 225th observation. The standard deviation of the first 225 observations is found to be 0.00978. So we divide all the differenced data by 0.00978 and denote the modified data by $\{Y_i\}$, which gives $\sigma_1^2 = 1$ and $\sigma_2^2 = 7.39$, where $\sigma_2^2$ is the variance of the last 143 modified observations. Now we calculate $\lambda_0$ as
\[ \lambda_0 = \Big(1 - \frac{1}{7.39}\Big)\Big/\ln(7.39) = 0.432. \]
The transformed data are calculated as $X_i = (\lambda_0 Y_i^2 - 1)/\sqrt{2}$, and the corresponding conjugate parameters are found to be $\theta_1 \approx -0.93$ and $\theta_2 \approx 0.49$.
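A sketch of the variance-change standardization just described, with a hypothetical input file; the split at observation 225 follows the discussion above.

import numpy as np

price = np.loadtxt("ibm_series_b.txt")     # hypothetical file, 369 closing prices
y = np.diff(np.log(price))                 # 368 log-return increments
y = y / y[:225].std()                      # scale so the pre-change variance is 1

sig2 = (y[225:] ** 2).mean()               # post-change variance (about 7.39)
lam0 = (1.0 - 1.0 / sig2) / np.log(sig2)   # lambda_0 with sigma_1^2 = 1

x = (lam0 * y**2 - 1.0) / np.sqrt(2.0)     # X_i = (lam0*Y_i^2 - 1)/sqrt(2)
theta1 = (1.0 - 1.0 / lam0) / np.sqrt(2.0)
theta2 = (1.0 - 1.0 / (sig2 * lam0)) / np.sqrt(2.0)
print(lam0, theta1, theta2)                # about 0.43, -0.93, 0.49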

[Table 2.4: IBM Stock Price; numerical entries omitted.]

The CUSUM process formed from the standardized $\{X_i\}$ is calculated as follows:

[CUSUM values $T_1, \ldots, T_{368}$ omitted.]

We see that no matter what the threshold $d$ is (at least 5 and at most 30), the change-point estimator is consistently $\hat\nu = 234$. The estimation of the post-change parameter is considered in Chapter 4.
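All three case studies make the same point: the estimator $\hat\nu$ barely depends on the threshold $d$. A small sketch that makes this check explicit for any standardized series (the input file is hypothetical):

import numpy as np

def nu_hat(x, d):
    """Last zero of the CUSUM path before the first crossing of d."""
    t, last_zero = 0.0, 0
    for i, xi in enumerate(x, 1):
        t = max(0.0, t + xi)
        if t == 0.0:
            last_zero = i
        if t > d:
            return last_zero       # estimator frozen at the alarm time
    return None                    # no alarm for this threshold

x = np.loadtxt("standardized_series.txt")   # any of the standardized series above
for d in (5.0, 10.0, 20.0, 30.0):
    print(d, nu_hat(x, d))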
