Weighted Exponential Distribution and Process
Jilesh V, Some generalizations of exponential distribution and related time series models. Thesis, Department of Statistics, University of Calicut, 200
Chapter 5

Weighted Exponential Distribution and Process

5.1 Introduction

Different methods may be used to introduce a shape parameter into an exponential model, and they result in different generalizations of weighted exponential distributions. For example, the gamma distribution and the generalized exponential distribution are different weighted versions of the exponential distribution. Gupta and Kundu (2009) introduced a generalized form of the exponential distribution termed the Weighted Exponential Distribution, denoted by WE(α). They used the idea of Azzalini (1985) to introduce a shape parameter into an exponential distribution, which results in a new class of weighted exponential distributions. Suppose X_1 and X_2 are two independent and identically distributed random variables with probability density function f_Y(y) and cumulative distribution function (CDF) F_Y(y). Then, for any α > 0, consider a new random variable X = X_1 given that αX_1 > X_2.
Then the PDF of the new random variable X is

f(x) = [1 / P(αX_1 > X_2)] f_Y(x) F_Y(αx), x > 0. (5.1.1)

The weighted exponential distribution of Gupta and Kundu (2009) is obtained by choosing f_Y(x) as the exponential density function and F_Y(x) as the corresponding distribution function. The density function of the weighted exponential distribution is then

f(x) = ((α + 1)/α) e^{-x} (1 - e^{-αx}), α > 0, x > 0. (5.1.2)

The graph of WE(α) for different values of α is given in Figure 5.1. The characteristic function of a random variable X =d WE(α) is

ψ_X(t) = 1 / [(1 - it)(1 - it/(1 + α))]. (5.1.3)

From the characteristic function (5.1.3), it is clear that the WE distribution is the distribution of the convolution of two independent but non-identically distributed exponential random variables. That is, a random variable following WE(α) can be represented as

X =d E_1 + δ E_2, (5.1.4)

where E_i, i = 1, 2, are independent standard exponential random variables and δ = 1/(1 + α). Another representation of WE(α) can be obtained using a beta transformation. That is, suppose U has a Beta(a, b) distribution; then for any c > 0 consider a new random variable V such that U = e^{-cV}, which has the probability density function

f_V(v) = [Γ(a + b) / (Γ(a) Γ(b))] c e^{-acv} (1 - e^{-cv})^{b-1}, v > 0. (5.1.5)
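The convolution representation (5.1.4) is easy to check numerically. The sketch below (plain Python, standard library only; the helper name `rwe` is ours, not from the thesis) draws WE(α) samples as E_1 + δE_2 and compares the sample mean and variance with the theoretical values 1 + δ and 1 + δ² implied by (5.1.4).

```python
import random

def rwe(alpha, rng):
    """Draw from WE(alpha) via the convolution representation (5.1.4):
    X = E1 + delta*E2 with delta = 1/(1 + alpha)."""
    delta = 1.0 / (1.0 + alpha)
    return rng.expovariate(1.0) + delta * rng.expovariate(1.0)

rng = random.Random(1)
alpha = 2.0
delta = 1.0 / (1.0 + alpha)
n = 200_000
sample = [rwe(alpha, rng) for _ in range(n)]
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / n
# Since X = E1 + delta*E2 with independent terms: E(X) = 1 + delta,
# V(X) = 1 + delta**2.
print(round(mean, 3), round(var, 3))
```

The same two-exponential sampler is also a convenient building block for simulating the autoregressive processes discussed later in the chapter.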
Figure 5.1: Shapes of the density function (5.1.2) for α = 5 (black), 2 (red), 1 (blue), 0.5 (green).

Therefore, the probability density function of WE(1) can be obtained as a special case by taking a = 1, b = 2 and c = α. Note that when α → 0 the distribution tends to a gamma distribution, and when α → ∞ it tends to the exponential distribution. f_X(x; α) is always log-concave. The probability density function is always unimodal, and the mode is located at the point (1/α) ln(α + 1). Jayakumar and Jilesh (2010) introduced an autoregressive process with WE(α) as marginals. In Section 2 a first order autoregressive process with the WE(α) distribution as marginal is introduced; the properties of this process and its higher order extension are also discussed. A generalization of the WE(α) distribution and process is given in Section 3. In Section 4 we introduce a new distribution with support on the real line using the weighted exponential distribution, and related time series models are discussed in Section 5. A generalization is discussed in Section 6. The weighted Weibull distribution is studied in Section 7.
5.2 First order Autoregressive Process with WE(α) as marginals (WEAR(1))

Consider the first order autoregressive AR(1) process

X_n = ρ X_{n-1} + ε_n, 0 < ρ < 1, (5.2.1)

where {ε_n} is a sequence of independent and identically distributed random variables. In terms of characteristic functions we have Ψ_{X_n}(t) = Ψ_{X_{n-1}}(ρt) Ψ_{ε_n}(t), which gives Ψ_{ε_n}(t) = Ψ_{X_n}(t) / Ψ_{X_{n-1}}(ρt). Using the characteristic function (5.1.3), we can represent the innovation random variable as

ε_n = 0 with probability ρ²
    = δ E_{1n} with probability ρ(1 - ρ)
    = E_{2n} with probability ρ(1 - ρ)
    = Z_n with probability (1 - ρ)², (5.2.2)

where E_{in}, i = 1, 2, are standard exponential random variables and Z_n is a WE(α) random variable. Similarly, the innovation variable ε_n can also be written using

Ψ_{ε_n}(t) = [(1 - iρt)(1 - iρδt)] / [(1 - it)(1 - iδt)]. (5.2.3)

But

(1 - iρt)/(1 - it) = ρ + (1 - ρ) · 1/(1 - it). (5.2.4)

Therefore, we obtain the distribution of the innovation random variable ε_n as the
convolution of two independent tailed exponential random variables, as discussed in Littlejohn (1994). That is, ε_n is distributed as the convolution of two independent random variables ET_{1n} and ET_{2n} defined as

ET_{1n} = 0 with probability ρ
        = δ E_{1n} with probability 1 - ρ (5.2.5)

and

ET_{2n} = 0 with probability ρ
        = E_{2n} with probability 1 - ρ (5.2.6)

where δ = 1/(1 + α) and E_{1n} and E_{2n} are two independent standard exponential random variables with characteristic function 1/(1 - it). Similarly, we can write the random variable ε_n as

ε_n =d I_1 E_{1n} + δ I_2 E_{2n}, (5.2.7)

where I_i, i = 1, 2, are Bernoulli random variables with P(I_i = 1) = 1 - ρ, and E_{in}, i = 1, 2, are independent standard exponential random variables. Another representation for ε_n can be given by writing

[(1 - iρt)(1 - iρδt)] / [(1 - it)(1 - iδt)] = p_1 + p_2/(1 - it) + p_3/(1 - iδt), (5.2.8)

where p_1 = ρ², p_2 = (1 - ρ)(1 - δρ)/(1 - δ) and p_3 = (1 - ρ)(ρ - δ)/(1 - δ). Clearly 0 < p_i < 1, i = 1, 2, 3 (for ρ > δ), and p_1 + p_2 + p_3 = 1. Therefore ε_n can also be represented as

ε_n = 0 with probability p_1
    = E_{1n} with probability p_2
    = δ E_{2n} with probability p_3 (5.2.9)

where δ, E_{1n} and E_{2n} are as defined above. Using Gupta and Kundu (2009), it can be shown that the moments of the innovation
sequence ε_n are E(ε_n) = (1 - ρ)(1 + δ) and V(ε_n) = (1 - ρ²)(1 + δ²). The higher order cumulants are k_r = Γ(r)(1 - ρ^r)(1 + δ^r), for integers r > 2.

Theorem. The AR(1) process (5.2.1) is strictly stationary Markovian with WE(α) as marginal distribution if and only if ε_n is distributed as (5.2.2), or it is the convolution of the two independent tailed exponential random variables defined in (5.2.5) and (5.2.6), provided X_0 =d WE(α) and X_{n-1} is independent of ε_n for all n.

Proof: The proof follows by mathematical induction.

Remark. If X_0 is distributed arbitrarily, the process is still asymptotically Markovian with the WE(α) distribution.

Proof: From (5.2.1) we have X_n = ρ^n X_0 + Σ_{k=0}^{n-1} ρ^k ε_{n-k}. Using characteristic functions we can write this as

Ψ_{X_n}(t) = Ψ_{X_0}(ρ^n t) Π_{k=0}^{n-1} Ψ_{ε_{n-k}}(ρ^k t). (5.2.10)

On substituting (5.2.3), we can see that as n → ∞,

Ψ_{X_n}(t) → 1 / [(1 - it)(1 - iδt)].

Hence it follows that even if X_0 is arbitrarily distributed, the process is asymptotically stationary Markovian with WE marginals.

Remark. The model (5.2.1) is defined for all values of ρ ∈ (0, 1). The lag-k autocorrelation is given by ρ_k = Corr(X_n, X_{n-k}) = ρ^k, k = 0, 1, ..., and the correlations are always positive.

The joint distribution of the observations (X_n, X_{n+1}) can be given in terms of characteristic functions as

Ψ_{X_n, X_{n+1}}(s_1, s_2) = [(1 - iρs_2)(1 - iρδs_2)] / [(1 - i(s_1 + ρs_2))(1 - iδ(s_1 + ρs_2))(1 - is_2)(1 - iδs_2)]. (5.2.11)

The above joint characteristic function is not symmetric in s_1 and s_2; therefore the process is not time reversible. When s_1 = s_2 = s, we get the transform of the sum X_n + X_{n+1} as

E(e^{is(X_n + X_{n+1})}) = [(1 - iρs)(1 - iρδs)] / [(1 - is)(1 - iδs)(1 - i(1 + ρ)s)(1 - iδ(1 + ρ)s)]. (5.2.12)

A simple quantification of the sample path behaviour is given by P(X_n > X_{n-1}), which is related to the average length of down-run sequences. The calculation of P(X_n > X_{n-1}) follows from (5.2.2) as

P(X_n > X_{n-1}) = ρ² P(ρX_{n-1} > X_{n-1}) + ρ(1 - ρ) P(ρX_{n-1} + δE_{1n} > X_{n-1}) + ρ(1 - ρ) P(ρX_{n-1} + E_{2n} > X_{n-1}) + (1 - ρ)² P((1 - ρ)X_{n-1} < Z_n) (5.2.13)
= ρ² P((1 - ρ)X_{n-1} < 0) + ρ(1 - ρ) P((1 - ρ)X_{n-1} < δE_{1n}) + ρ(1 - ρ) P((1 - ρ)X_{n-1} < E_{2n}) + (1 - ρ)² P((1 - ρ)X_{n-1} < Z_n)
= ρ² I_1 + ρ(1 - ρ) I_2 + ρ(1 - ρ) I_3 + (1 - ρ)² I_4. (5.2.14)

Note that I_1 = P((1 - ρ)X_{n-1} < 0) = 0 and

I_2 = P((1 - ρ)X_{n-1} < δE_{1n}) (5.2.15)
    = ∫_0^∞ P(E_{1n} > ((1 - ρ)/δ) x) f(x) dx (5.2.16)
    = ((α + 1)/α) ∫_0^∞ e^{-((1 - ρ)/δ + 1)x} (1 - e^{-αx}) dx (5.2.17)
    = ((α + 1)/α) [ 1/((1 - ρ)/δ + 1) - 1/((1 - ρ)/δ + 1 + α) ] (5.2.18)
    = δ / [((1 - ρ) + δ)((1 - ρ) + 1)]. (5.2.19)

Similarly we obtain

I_3 = P((1 - ρ)X_{n-1} < E_{2n}) = 1 / [((1 - ρ) + 1)((1 - ρ)δ + 1)] (5.2.20)

and

I_4 = P((1 - ρ)X_{n-1} < Z_n) = J_1 + J_2, (5.2.21)

where

J_1 = ((α + 1)/α)² · α / [((1 - ρ) + 1)((1 - ρ) + α + 1)], (5.2.22)

J_2 = -1 / [α ((1 - ρ) + 1)(1 + (1 - ρ)(1 + α))]. (5.2.23)

On substituting the values I_i, i = 1, 2, 3, 4, in (5.2.14), we obtain P(X_n > X_{n-1}). Let T_r = X_1 + X_2 + ... + X_r, which can be regarded as the time of the r-th event in a point process which starts with an event at the origin. Now we shall consider the regression behaviour of the WEAR(1) model. The study of the regression of the model is in effect forecasting of the process. As stated in Jose et al. (2008), the practical implication of regression will be in the statistical analysis of direction-dependent data, since the WEAR process is not time reversible. The regression is linear, since E(X_n | X_{n-1} = x) = ρx + (1 - ρ)(1 + δ). Furthermore, the conditional variance is constant. A simulated sample path of the process can be seen in Figure 5.2.
Figure 5.2: Sample path of the process (5.2.1) for ρ = 0.25 and α = 3.

5.2.1 k-th order AR process with WE(α) as marginals (WEAR(k))

The k-th order WEAR(k) process is given by the model

X_n = ρ_1 X_{n-1} + ε_n with probability p_1
    = ρ_2 X_{n-2} + ε_n with probability p_2
    ...
    = ρ_k X_{n-k} + ε_n with probability p_k (5.2.24)

where 0 < p_r < 1, Σ_{r=1}^k p_r = 1, r = 1, 2, ..., k, and {X_n, n ≥ 1} are marginally WE distributed. If all the ρ_i are equal, then using the characteristic function approach as above we obtain the distribution of the innovation sequence as the convolution of the same variables defined in (5.2.5) and (5.2.6).
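The WEAR(1) quantities above can be cross-checked by simulation. The sketch below (plain Python, standard library only; all helper names are ours) drives the recursion (5.2.1) with innovations drawn from the mixture (5.2.2), then compares the empirical frequency of up-moves with the reconstructed closed forms (5.2.19)–(5.2.23), and the empirical lag-1 autocorrelation with ρ. The closed forms here are our reconstruction of the garbled originals, not a verbatim quote.

```python
import random

rho, alpha = 0.5, 2.0
delta = 1.0 / (1.0 + alpha)
rng = random.Random(3)

def we():
    # WE(alpha) via the representation (5.1.4)
    return rng.expovariate(1.0) + delta * rng.expovariate(1.0)

def innovation():
    # Innovation mixture (5.2.2): 0, delta*E, E, or a WE(alpha) variable
    u = rng.random()
    if u < rho * rho:
        return 0.0
    if u < rho:                       # rho^2 + rho*(1 - rho) = rho
        return delta * rng.expovariate(1.0)
    if u < rho * (2.0 - rho):         # + rho*(1 - rho)
        return rng.expovariate(1.0)
    return we()                       # with probability (1 - rho)^2

n = 300_000
xs = [we()]
for _ in range(n):
    xs.append(rho * xs[-1] + innovation())

ups = sum(xs[i] > xs[i - 1] for i in range(1, len(xs))) / n

# Reconstructed closed form for P(X_n > X_{n-1}) from (5.2.14)-(5.2.23)
c = 1.0 - rho
i2 = delta / ((c + delta) * (c + 1.0))
i3 = 1.0 / ((c + 1.0) * (c * delta + 1.0))
i4 = ((alpha + 1.0) ** 2 / (alpha * (c + 1.0) * (c + alpha + 1.0))
      - 1.0 / (alpha * (c + 1.0) * (1.0 + c * (1.0 + alpha))))
p_up = rho * (1 - rho) * (i2 + i3) + (1 - rho) ** 2 * i4

# Marginal mean should be 1 + delta; lag-1 autocorrelation should be near rho
mean = sum(xs) / len(xs)
num = sum((xs[i] - mean) * (xs[i - 1] - mean) for i in range(1, len(xs)))
den = sum((x - mean) ** 2 for x in xs)
acf1 = num / den
print(round(ups, 3), round(p_up, 3), round(acf1, 3))
```

Agreement between `ups` and `p_up` is a useful consistency check on the I_2, I_3, I_4 expressions, since the simulation uses only the mixture (5.2.2).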
Another autoregressive model of interest, which is free from the zero defect, is

X_n = ε_n with probability p
    = ρ X_{n-1} + ε_n with probability 1 - p. (5.2.25)

Dewald and Lewis (1985) studied the autoregressive process given by equation (5.2.25) with the Laplace distribution as marginals. Here we discuss the autoregressive process of structure (5.2.25) with the weighted exponential distribution as marginals.

Theorem. The first order autoregressive process (5.2.25) with X_0 =d WE(α) is stationary with the weighted exponential distribution as marginals if and only if ε_n = Z_n + U_n + V_n, where U_n and V_n are two independent tailed exponential random variables and Z_n is weighted exponential distributed with parameter pδ.

Proof: In terms of characteristic functions, equation (5.2.25) becomes

ψ_ε(t) = ψ_{X_n}(t) / [p + (1 - p) ψ_{X_{n-1}}(ρt)]. (5.2.26)

If we assume that {X_n} is stationary with the weighted exponential marginal distribution, then (5.2.26) implies

ψ_ε(t) = [(1 - iρt)(1 - iρδt)] / [(1 - it)(1 - iδt)(1 - iρt)(1 - ipρδt)]
       = [ρ + (1 - ρ)/(1 - it)] [ρ + (1 - ρ)/(1 - iδt)] · 1/[(1 - iρt)(1 - ipρδt)].

Therefore we can write ε_n = Z_n + U_n + V_n, where U_n and V_n are two independent tailed exponential random variables and Z_n is weighted exponential distributed with parameter pδ. The converse can be proved by mathematical induction, assuming X_n =d WE(α). Thus {X_n} is a stationary process with the weighted exponential marginal distribution.
ε_n can also be represented as

ε_n =d I_1 E_{1n} + δ I_2 E_{2n} + ρ(E_{3n} + pδ E_{4n}), (5.2.27)

where E_{in}, i = 1, 2, 3, 4, are standard exponential random variables. Next we introduce a generalization of the weighted exponential distribution and discuss related time series models.

5.3 A Generalization of Weighted Exponential Distribution

Here we introduce a Generalized Weighted Exponential distribution (GWE) as a generalization of the weighted exponential distribution. A random variable X is said to be GWE distributed with parameters α and τ if its characteristic function is given by

Ψ(t) = [ 1 / ((1 - it)(1 - it/(1 + α))) ]^τ, α > 0, τ > 0. (5.3.1)

We denote the distribution with the characteristic function (5.3.1) by GWE(α, τ). Note that when τ = 1 we obtain the WE(α) distribution, and for α = 0, (5.3.1) is the characteristic function of a Gamma(1, 2τ) distributed random variable, where Gamma(a, b) means a gamma distributed random variable with characteristic function (1 - iat)^{-b}. The distribution with characteristic function (5.3.1) arises as the distribution of the τ-fold convolution of independent WE(α) random variables. From (5.3.1) we have

Ψ(t) = [1/(1 - it)]^τ [1/(1 - iδt)]^τ, α > 0, τ > 0. (5.3.2)
Therefore X can be represented as

X = G_1 + G_2, (5.3.3)

where G_1 =d Gamma(1, τ) and G_2 =d Gamma(δ, τ) are independent gamma distributed random variables. Now we discuss the autoregressive process with the generalized weighted exponential distribution as marginals.

5.3.1 First order Generalized Weighted Exponential GWEAR(1) Process

For the process defined in (5.2.1), where X_n =d GWE(α, τ), using the characteristic function we obtain the characteristic function of the innovation sequence as

Ψ_ε(t) = [ρ + (1 - ρ)/(1 - it)]^τ [ρ + (1 - ρ)/(1 - iδt)]^τ. (5.3.4)

Therefore ε_n is the τ-fold convolution of the tailed exponentials defined in (5.2.5) and (5.2.6). Hence we can represent the innovation sequence as ε_n = E_{1n} + E_{2n}, where E_{1n} and E_{2n} are the τ-fold convolutions of tailed exponential variables. Similarly, we can represent ε_n as the τ-fold convolution of the random variable defined on the right hand side of (5.2.9).

5.3.2 GWEARMA(1,1) Process

Consider the autoregressive moving average model with GWE marginals defined by

X_n = ρ X_{n-1} + ζ ε_{n-1} + ε_n. (5.3.5)
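Stepping back to the representation (5.3.3), a GWE(α, τ) variable is just a sum of two independent gammas, so it can be simulated directly; the sketch below (standard-library Python; `random.gammavariate(shape, scale)` uses shape-scale parametrisation, matching Gamma(a, b) with scale a and shape b here) checks the implied mean τ(1 + δ) and variance τ(1 + δ²).

```python
import random

# GWE(alpha, tau) via (5.3.3): X = G1 + G2 with
# G1 ~ Gamma(shape tau, scale 1) and G2 ~ Gamma(shape tau, scale delta),
# so E(X) = tau*(1 + delta) and V(X) = tau*(1 + delta**2).
rng = random.Random(11)
alpha, tau = 2.0, 3.0
delta = 1.0 / (1.0 + alpha)
n = 100_000
sample = [rng.gammavariate(tau, 1.0) + rng.gammavariate(tau, delta)
          for _ in range(n)]
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / n
print(round(mean, 2), round(var, 2))
```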
Assuming stationarity, and when ζ = 1, we obtain in terms of characteristic functions Ψ_ε(t) = [Ψ_X(t)/Ψ_X(ρt)]^{1/2}. Therefore the distribution of the innovation sequence is obtained as the distribution of the convolution of tailed exponential random variables as above, with τ replaced by τ/2.

5.4 Weighted Exponential distribution on the real line

Any positive random variable X with density f(x) can be extended symmetrically to the real line with density h(x) = f(|x|)/2, x ∈ R. A similar symmetrization of the density (5.1.2) gives rise to a weighted exponential distribution on the real line (Jilesh and Jayakumar (2010a)).

Definition. A random variable X with support on the real line is said to follow the Double Weighted Exponential distribution with parameters α and λ, denoted by X =d DWE(α, λ), if its probability density function (PDF) is given by

f(x) = (1/2)((α + 1)/α) λ e^{-λ|x|} (1 - e^{-αλ|x|}), x ∈ R, α > 0, λ > 0. (5.4.1)

For various derivations it is convenient to consider a location parameter as well; for a location parameter θ, with -∞ < θ < ∞, the above density becomes

f(x) = (1/2)((α + 1)/α) λ e^{-λ|x - θ|} (1 - e^{-αλ|x - θ|}), x ∈ R, α, λ > 0. (5.4.2)

Without loss of generality assume λ = 1 and, for brevity, call the distribution DWE(α). The probability density function of DWE(α) is bimodal, with modes located on both sides of the origin at ±(1/α) log(1 + α). As α → ∞, DWE(α) converges to the standard Laplace distribution. For α = 1, we obtain a symmetric extension of
Figure 5.3: Shapes of the density function of DWE(α) for α = 0.5 (red), 1.5 (black), 5 (blue).

the generalized exponential distribution of Gupta and Kundu (1999) to the real line. The distribution function takes the form

F(x) = ((α + 1)/(2α)) e^{λx} (1 - e^{αλx}/(α + 1))          for x < 0
     = 1 - ((α + 1)/(2α)) e^{-λx} (1 - e^{-αλx}/(α + 1))    for x ≥ 0. (5.4.3)

The moment generating function of the DWE(α) distribution is given by

M(t) = 1 / [(1 - t²)(1 - δ²t²)], (5.4.4)

where δ = 1/(1 + α). Consequently, the cumulant generating function log M(t) is given by

log M(t) = -log(1 - t²) - log(1 - δ²t²). (5.4.5)
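As a quick numerical sanity check on the density (5.4.1) with λ = 1, the sketch below (standard-library Python; helper names are ours) verifies that it integrates to one and that it peaks near ±(1/α) log(1 + α), the mode location quoted above.

```python
import math

alpha = 2.0

def dwe_pdf(x):
    # DWE(alpha) density (5.4.1) with lambda = 1
    a = abs(x)
    return 0.5 * (alpha + 1.0) / alpha * math.exp(-a) * (1.0 - math.exp(-alpha * a))

# Composite trapezoidal rule on [-40, 40]; the tails beyond are negligible
h = 0.001
total = 0.0
x = -40.0
prev = dwe_pdf(x)
while x < 40.0:
    x += h
    cur = dwe_pdf(x)
    total += 0.5 * (prev + cur) * h
    prev = cur

mode = math.log(1.0 + alpha) / alpha
print(round(total, 4), round(mode, 3))
```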
5.4.1 Moments and Related Measures

Cumulants

The n-th cumulant of a DWE(α) random variable X, denoted κ_n, is defined as the coefficient of t^n/n! in the Taylor expansion (about t = 0) of the cumulant generating function of X. Formula (5.4.5) for the cumulant generating function generates the cumulants of DWE(α) in a straightforward manner. Indeed, using the Taylor expansion of log(1 - z) about z = 0, we have

-log(1 - t²) = Σ_{k=1}^∞ t^{2k} / k. (5.4.6)

Thus for a DWE(α) random variable X we have

κ_n(X) = 0 if n is odd
       = 2 (n - 1)! (1 + δ^n) if n is even. (5.4.7)

Moments

By writing the Taylor expansion of the moment generating function (5.4.4),

M(t) = ( Σ_{k=0}^∞ t^{2k} ) ( Σ_{k=0}^∞ δ^{2k} t^{2k} ) = Σ_{k=0}^∞ ( Σ_{l=0}^k δ^{2l} ) (2k)! · t^{2k}/(2k)!, (5.4.8)–(5.4.9)

we obtain the n-th central moment of the DWE(α) random variable X as

μ_n(X) = 0 if n is odd
       = ( Σ_{l=0}^{n/2} δ^{2l} ) n! if n is even. (5.4.10)

In particular, E(X) = 0 and V(X) = 2(1 + δ²). If, instead of being distributed with the probability density function (5.4.1), X is distributed according
to (5.4.2), we obtain E(X) = θ, V(X) = 2(1 + δ²) and the coefficient of variation as

CV = √(2(1 + δ²)) / θ. (5.4.11)

Note that the mean and variance involve different parameters, as in the case of the normal and Laplace distributions.

Coefficient of Skewness and Kurtosis

For the distribution of a random variable X with a finite third moment and standard deviation greater than zero, the coefficient of skewness is a measure of symmetry defined by

γ_1 = μ_3 / (μ_2)^{3/2}. (5.4.12)

The coefficient of skewness for a DWE(α) random variable is zero, as in the case of any symmetric distribution with a finite third moment. For a random variable with a finite fourth moment, the excess kurtosis is defined as

γ_2 = μ_4 / (μ_2)² - 3. (5.4.13)

For a DWE(α) random variable the excess kurtosis is given by

γ_2 = 6 (1 + δ² + δ⁴) / (1 + δ²)² - 3. (5.4.14)

Representations

1. Mixture of Normal Distributions

Any DWE(α) random variable can be represented as a Gaussian random variable with mean zero and a stochastic variance which has a weighted exponential distribution, which results in the following proposition.
Proposition. A standard DWE(α) random variable has the representation

X =d √(2W) Z, (5.4.15)

where Z is a standard normal random variable, that is Z =d N(0, 1), and W is WE distributed with moment generating function M_W(t) = 1/[(1 - t)(1 - δ²t)].

Proof. Let Z be a standard normal random variable with characteristic function φ_Z(t) = e^{-t²/2}, -∞ < t < ∞. To obtain the characteristic function of the product √(2W) Z, conditioning on W we obtain

Ψ(t) = E(e^{it√(2W)Z}) = E( E(e^{it√(2W)Z} | W) ) = E( φ_Z(t√(2W)) ) = E( e^{-t²W} ) = M_W(-t²) = 1 / [(1 + t²)(1 + δ²t²)]. (5.4.16)

2. Relation to the Laplace Distribution

The moment generating function (5.4.4) is the product of the moment generating functions of two independent and non-identically distributed Laplace random variables.

Proposition. A standard DWE(α) random variable has the representation

X =d L_1(1) + L_2(δ), (5.4.17)

where L_i(ξ), i = 1, 2, denotes a Laplace random variable with characteristic function Ψ(t) = 1/(1 + ξ²t²).
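Both representations are straightforward to simulate, and both imply the variance 2(1 + δ²) computed earlier; the sketch below (standard-library Python; helper names are ours) checks the second moment under each construction.

```python
import random, math

# Check V(X) = 2*(1 + delta**2) under the normal variance mixture (5.4.15)
# and under the sum-of-Laplace representation (5.4.17).
rng = random.Random(5)
alpha = 2.0
delta = 1.0 / (1.0 + alpha)
n = 200_000

def laplace(scale):
    # Laplace(scale) as the difference of two independent exponentials
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def x_mixture():
    # W has MGF 1/((1 - t)(1 - delta^2 t)), i.e. W = E1 + delta^2 * E2
    w = rng.expovariate(1.0) + delta ** 2 * rng.expovariate(1.0)
    return math.sqrt(2.0 * w) * rng.gauss(0.0, 1.0)

v1 = sum(x_mixture() ** 2 for _ in range(n)) / n
v2 = sum((laplace(1.0) + laplace(delta)) ** 2 for _ in range(n)) / n
print(round(v1, 2), round(v2, 2))
```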
Proof. Since for independent random variables the product of moment generating functions corresponds to their sum, we can derive the representation directly using (5.4.4).

Remark. Other related representations are possible by considering the fact that a standard Laplace random variable L has the representation L =d E_1 - E_2, where the E_i are independent and identically distributed standard exponential variables. The characteristic function (5.4.16) can also be decomposed as

1 / [(1 + t²)(1 + δ²t²)] = (1/2) · 1/[(1 - it)(1 - iδt)] + (1/2) · 1/[(1 + it)(1 + iδt)]. (5.4.18)

The right hand side of (5.4.18) is the characteristic function of the product IW, where I is a discrete random variable taking the values ±1 with probabilities 1/2, while W is a weighted exponential random variable with moment generating function

M(t) = 1 / [(1 - it)(1 - iδt)]. (5.4.19)

Therefore the DWE(α) random variable is a mixture of weighted exponential random variables.

Proposition. A standard DWE(α) random variable admits the representation

X =d IW, (5.4.20)

where W is weighted exponential with moment generating function (5.4.19) and I takes the values ±1 with probabilities 1/2.

A probability distribution with characteristic function Ψ(t) is infinitely divisible if, for any integer n, we have Ψ(t) = (Φ_n(t))^n, where Φ_n(t) is another characteristic
function. In other words, a random variable Y with characteristic function Ψ(t) has the representation Y =d Σ_{i=1}^n X_i for some random variables X_i. For more details about infinite divisibility and related concepts see Steutel and Van Harn (2004). The characteristic function of DWE(α) can be factorized as

1 / [(1 + t²)(1 + δ²t²)] = [ (1/(1 - it))^{1/n} (1/(1 + it))^{1/n} (1/(1 - iδt))^{1/n} (1/(1 + iδt))^{1/n} ]^n = (Φ_n(t))^n.

For each integer n, the function Φ_n(t) is the characteristic function of E_{1n}(1) - E_{2n}(1) + E_{3n}(δ) - E_{4n}(δ), where the E_{in}(ξ), i = 1, 2, 3, 4, are independent gamma random variables with characteristic function (1 - iξt)^{-1/n}. Consequently, the DWE(α) distribution is infinitely divisible.

5.5 Time series models with Double Weighted Exponential as Marginals (DWEAR Process)

Let {X_n, n ≥ 1} be a sequence of random variables defined by the autoregressive equation (5.2.1) with |ρ| < 1, where {ε_n} is a sequence of independent and identically distributed random variables. Assume that {X_n} is stationary with DWE(α) as marginal distribution, having characteristic function (5.4.16). From (5.2.1) we have the characteristic function of
the innovation sequence {ε_n} as

Ψ_{ε_n}(t) = [(1 + ρ²t²)(1 + δ²ρ²t²)] / [(1 + t²)(1 + δ²t²)]
           = ρ⁴ + q_1 [1/(1 - it) + 1/(1 + it)] + q_2 [1/(1 - iδt) + 1/(1 + iδt)], (5.5.1)–(5.5.3)

where q_1 = ((1 - ρ²)/2)(ρ² + (1 - ρ²)/(1 - δ²)) and q_2 = ((1 - ρ²)/2)(ρ² - δ²(1 - ρ²)/(1 - δ²)). Hence the innovation sequence {ε_n} of the first order autoregressive process (5.2.1) is the convex mixture of random variables given by

ε_n = 0 with probability ρ⁴
    = E_{1n}(1) with probability q_1
    = -E_{2n}(1) with probability q_1
    = E_{3n}(δ) with probability q_2
    = -E_{4n}(δ) with probability q_2, (5.5.4)

where E_{in}(ζ) means an exponential random variable with characteristic function 1/(1 - iζt). It can also be verified that if X_0 =d DWE(α) and {ε_n} is a sequence of independent and identically distributed convex mixtures of exponential random variables given by (5.5.4), then the first order autoregressive process (5.2.1) is stationary with the DWE(α) marginal distribution. Thus we have the following theorem.

Theorem. Let {ε_n} be a sequence of independent and identically distributed random variables defined as in (5.5.4). Then the first order autoregressive process of structure (5.2.1) with X_0 =d DWE(α) defines a stationary autoregressive process with the DWE(α) distribution.
We call the process defined in (5.2.1) with X_0 =d DWE(α) and ε_n as in (5.5.4) the first order Double Weighted Exponential autoregressive (DWEAR(1)) process. Let X_0 =d DWE(α); then for n = 1, 2, ... the sequence {X_n} can be written as

X_n = ρX_{n-1} with probability ρ⁴
    = ρX_{n-1} - E_{1n}(1) with probability ((1 - ρ²)/2)(ρ² + (1 - ρ²)/(1 - δ²))
    = ρX_{n-1} + E_{2n}(1) with probability ((1 - ρ²)/2)(ρ² + (1 - ρ²)/(1 - δ²))
    = ρX_{n-1} + E_{3n}(δ) with probability ((1 - ρ²)/2)(ρ² - δ²(1 - ρ²)/(1 - δ²))
    = ρX_{n-1} - E_{4n}(δ) with probability ((1 - ρ²)/2)(ρ² - δ²(1 - ρ²)/(1 - δ²)). (5.5.5)

Another representation for ε_n is obtained by writing Ψ_{ε_n}(t) as

Ψ_{ε_n}(t) = [ρ² + ((1 - ρ²)/2)·1/(1 - it) + ((1 - ρ²)/2)·1/(1 + it)] · [ρ² + ((1 - ρ²)/2)·1/(1 - iδt) + ((1 - ρ²)/2)·1/(1 + iδt)].

Hence we obtain

ε_n =d U_n + V_n, (5.5.6)

where U_n and V_n are given by

U_n = 0 with probability ρ²
    = E_{1n}(1) with probability (1 - ρ²)/2
    = -E_{2n}(1) with probability (1 - ρ²)/2, (5.5.7)

and

V_n = 0 with probability ρ²
    = E_{3n}(δ) with probability (1 - ρ²)/2
    = -E_{4n}(δ) with probability (1 - ρ²)/2. (5.5.8)

The moments of the innovation sequence ε_n are given by E(ε_n) = 0 and V(ε_n) = 2(1 - ρ²)(1 + δ²); other moments can be obtained by using (5.4.7). Also, we can write

ε_n =d I_1 L_{1n} + δ I_2 L_{2n}, (5.5.9)
where L_{in}, i = 1, 2, are independent and identically distributed standard Laplace random variables and I_i, i = 1, 2, are Bernoulli random variables with P(I_i = 1) = 1 - ρ².

Theorem. The AR(1) process (5.2.1) is strictly stationary Markovian with DWE(α) as marginal distribution if and only if ε_n is distributed as (5.5.4) (or as the convolution of the two independent random variables defined in (5.5.7) and (5.5.8)), provided X_0 =d DWE(α) and X_{n-1} is independent of ε_n for all n.

Proof: The proof follows by mathematical induction.

Remark. If X_0 is distributed arbitrarily, the process is still asymptotically Markovian with the DWE(α) distribution.

Proof: From (5.2.1) we have X_n = ρ^n X_0 + Σ_{k=0}^{n-1} ρ^k ε_{n-k}. Using characteristic functions we can write this as

Ψ_{X_n}(t) = Ψ_{X_0}(ρ^n t) Π_{k=0}^{n-1} Ψ_{ε_{n-k}}(ρ^k t). (5.5.10)

On substituting the characteristic function of DWE(α), we can see that as n → ∞,

Ψ_{X_n}(t) → 1 / [(1 + t²)(1 + δ²t²)].

Hence it follows that even if X_0 is arbitrarily distributed, the process is asymptotically stationary Markovian with DWE marginals.

Remark. The model (5.2.1) is defined for all values of |ρ| < 1. The lag-k autocorrelation is given by Corr(X_n, X_{n-k}) = ρ^k, k = 0, 1, ....

The joint distribution of the observations (X_n, X_{n+1}) can be given in terms of characteristic functions as

Ψ_{X_n, X_{n+1}}(s_1, s_2) = [(1 + ρ²s_2²)(1 + δ²ρ²s_2²)] / [(1 + (s_1 + ρs_2)²)(1 + δ²(s_1 + ρs_2)²)(1 + s_2²)(1 + δ²s_2²)]. (5.5.11)

The above joint characteristic function is not symmetric in s_1 and s_2; therefore the process is not time reversible.

5.5.1 k-th order AR process with DWE as marginals (DWEAR(k))

The k-th order DWEAR(k) process is given by the model

X_n = ρ_1 X_{n-1} + ε_n with probability p_1
    = ρ_2 X_{n-2} + ε_n with probability p_2
    ...
    = ρ_k X_{n-k} + ε_n with probability p_k (5.5.12)

where 0 < p_r < 1, Σ_{r=1}^k p_r = 1, r = 1, 2, ..., k, and {X_n, n ≥ 1} are marginally DWE distributed. If all the ρ_i are equal, then using the characteristic function approach as above we obtain the distribution of the innovation sequence as in (5.5.4), or as the distribution of the convolution of (5.5.7) and (5.5.8).
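A DWEAR(1) path is easy to generate from the convolution form ε_n = U_n + V_n of (5.5.7)–(5.5.8); the sketch below (standard-library Python; helper names are ours, and the negative signs on E_{2n}, E_{4n} follow our reconstruction of the ± mixture) checks the stationary mean 0 and variance 2(1 + δ²).

```python
import random

rho, alpha = 0.5, 2.0
delta = 1.0 / (1.0 + alpha)
rng = random.Random(9)

def tailed(scale):
    # (5.5.7)/(5.5.8): 0 w.p. rho^2, +Exp(scale) and -Exp(scale)
    # each w.p. (1 - rho^2)/2
    u = rng.random()
    if u < rho * rho:
        return 0.0
    e = scale * rng.expovariate(1.0)
    return e if u < rho * rho + (1 - rho * rho) / 2 else -e

def laplace(scale):
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

n = 300_000
x = laplace(1.0) + laplace(delta)   # X_0 ~ DWE(alpha), as L(1) + L(delta)
s = s2 = 0.0
for _ in range(n):
    x = rho * x + tailed(1.0) + tailed(delta)
    s += x
    s2 += x * x
mean = s / n
var = s2 / n - mean ** 2
print(round(mean, 3), round(var, 3))
```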
5.6 A Generalization of the Double Weighted Exponential Distribution

Here we introduce a generalized form of the DWE distribution (GDWE). A random variable X is said to be GDWE distributed with parameters α and τ if its characteristic function is given by

Ψ(t) = [ 1 / ((1 + t²)(1 + δ²t²)) ]^τ, α > 0, τ > 0. (5.6.1)

We denote the distribution with the characteristic function (5.6.1) by GDWE(α, τ). Note that when τ = 1 we obtain the DWE(α) distribution, and for general values of τ, (5.6.1) is the characteristic function of the convolution of two independent but non-identically distributed generalized Laplace random variables of Mathai (2000). The distribution with characteristic function (5.6.1) arises as the distribution of the τ-fold convolution of independent DWE(α) random variables. From (5.6.1) we have

Ψ(t) = [1/(1 - it)]^τ [1/(1 + it)]^τ [1/(1 - iδt)]^τ [1/(1 + iδt)]^τ, α > 0, τ > 0. (5.6.2)

Therefore X can be represented as the convolution of four independently distributed random variables as

X = G_1 - G_2 + G_3 - G_4, (5.6.3)

where G_i =d Gamma(1, τ), i = 1, 2, and G_i =d Gamma(δ, τ), i = 3, 4, are independent gamma distributed random variables. Now we discuss the autoregressive process with the generalized double weighted exponential distribution as marginals.
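The four-gamma representation (5.6.3) can be simulated directly with `random.gammavariate`; the sketch below (standard-library Python) checks the implied mean 0 and variance 2τ(1 + δ²).

```python
import random

# GDWE(alpha, tau) via (5.6.3): X = G1 - G2 + G3 - G4 with
# G1, G2 ~ Gamma(shape tau, scale 1) and G3, G4 ~ Gamma(shape tau,
# scale delta); hence E(X) = 0 and V(X) = 2*tau*(1 + delta**2).
rng = random.Random(13)
alpha, tau = 2.0, 2.5
delta = 1.0 / (1.0 + alpha)
n = 100_000

def gdwe():
    return (rng.gammavariate(tau, 1.0) - rng.gammavariate(tau, 1.0)
            + rng.gammavariate(tau, delta) - rng.gammavariate(tau, delta))

sample = [gdwe() for _ in range(n)]
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / n
print(round(mean, 2), round(var, 2))
```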
5.6.1 First order Generalized Double Weighted Exponential GDWEAR(1) Process

For the process defined in (5.2.1), where X_n =d GDWE(α, τ), using the characteristic function we can see that the innovation sequence is distributed as ε_n =d U*_n + V*_n, where U*_n and V*_n are independent τ-fold convolutions of the variables U_n and V_n defined in (5.5.7) and (5.5.8) respectively. Similarly, we can represent ε_n as the τ-fold convolution of the random variable defined on the right hand side of (5.5.4).

5.6.2 GDWEARMA(1,1) Process

Consider the autoregressive moving average model with GDWE marginals defined by

X_n = θ X_{n-1} + ζ ε_{n-1} + ε_n. (5.6.4)

Assuming stationarity, and when ζ = 1, we obtain in terms of characteristic functions Ψ_ε(t) = [Ψ_X(t)/Ψ_X(θt)]^{1/2}. Therefore we obtain the distribution of the innovation sequence as the distribution of the random variable defined as above, with τ replaced by τ/2.

5.7 Weighted Weibull distribution

A random variable X with positive support is said to follow the Weighted Weibull distribution with parameters α, β > 0, denoted by X =d WW(α, β), if the probability
Figure 5.4: Shapes of the density functions of the Weighted Weibull distribution.

density function of X is given by

f(x; α, β) = ((α + 1)/α) β x^{β-1} e^{-x^β} (1 - e^{-αx^β}), α > 0, β > 0, x > 0. (5.7.1)

This model can be obtained from two independent and identically distributed Weibull random variables in exactly the same way that Azzalini (1985) obtained the skew-normal distribution from two independent and identically distributed normal distributions. On substituting f_Y(x) = β x^{β-1} e^{-x^β} and F_Y(x) = 1 - e^{-x^β} in (5.1.1) we get (5.7.1). This model can be obtained as a hidden truncation model, as was observed by Arnold and Beaver (2000a) in the case of the skew-normal distribution. This distribution can also be considered as a special case of the distributions generated from the beta family, as done by Famoye et al. (2005). The k-th moment is given by
E(X^k) = ((α + 1)/α) β ∫_0^∞ x^{k+β-1} e^{-x^β} (1 - e^{-αx^β}) dx
       = ((α + 1)/α) ∫_0^∞ y^{k/β} e^{-y} (1 - e^{-αy}) dy, where y = x^β,
       = ((α + 1)/α) Γ(k/β + 1) [1 - (α + 1)^{-(k/β + 1)}], (5.7.2)

but it is not possible to write a simpler explicit expression for the k-th moment. The median of (5.7.1) satisfies the equation

(α + 1) e^{-x^β} - e^{-(1+α)x^β} = α/2. (5.7.3)

An approximate value for the mode is given by

x_mode ≈ [ 2(2β - 1) / (β(α + 2)) ]^{1/β}. (5.7.4)

The distribution function corresponding to (5.7.1) is given by

F_{α,β}(x) = ((α + 1)/α) [1 - e^{-x^β}] - (1/α) [1 - e^{-(1+α)x^β}] (5.7.5)

and the hazard function is

γ(t) = (α + 1) β t^{β-1} (1 - e^{-αt^β}) / [(α + 1) - e^{-αt^β}]. (5.7.6)

5.7.1 Maximum Likelihood Estimation of the Parameters

In this section we discuss the maximum likelihood estimation of the parameters of the distribution. Let x_1, x_2, ..., x_n be a random sample from a population following
the WW(α, β) distribution. The log likelihood function is then given by

log L = n log(α + 1) + n log(β) - n log(α) + (β - 1) Σ_{i=1}^n log(x_i) - Σ_{i=1}^n x_i^β + Σ_{i=1}^n log(1 - e^{-αx_i^β}). (5.7.7)

On differentiating (5.7.7) with respect to β and α we obtain the normal equations

n/β + Σ_{i=1}^n log(x_i) - Σ_{i=1}^n x_i^β log(x_i) + α Σ_{i=1}^n [x_i^β log(x_i) e^{-αx_i^β} / (1 - e^{-αx_i^β})] = 0, (5.7.8)

n/(α + 1) - n/α + Σ_{i=1}^n [x_i^β e^{-αx_i^β} / (1 - e^{-αx_i^β})] = 0. (5.7.9)

It is not possible to write an explicit form for the estimates of the parameters, but they can be obtained numerically using software, as done in the next section.

5.7.2 Data Analysis

In this section we fit the model to two data sets. Data set 1 is simulated data, with values as given below.

Data set 1: .58, 0.43, 0.50, 2.44, 3.8, .95, 0.78, , 3.54, .42, 3.2, 2.5, .9, 3.46, 2.54, 0.63, 2.92, 2.35, 0.54, 3.42, 0.3, 0.84, 0.49, .48, .92, 4.25, 4.55, 2.37, 2.3, .4, 0.63, .26, 0.39, ., 0.76, .35, 0.68, 0.89, .89, 3.95, .32, 2.4, .58, 3.82, 2.25, 0.28, .29, 2.29, 0.70, 0.69, 2.42, 2.80, .58, .2, 3.6, 0.86, 0.72, 0.23, 2.7, 0.56, 2.4, 2.43, .42, 2.76, .99, 0.92, 2.76, 2.9, 0.44, .02, .4, 2.32, 0.92, .5, .65, .65, .96, 0.60, 0.3, 3.85, 3.70, 2.32, 0.58, 0.68, 2.72, 3.75, 0.43, .58, .56, 3.6, .20, 2.09, 2.02, 3.6, 3.07, 0.52, .58, .73.

The maximum likelihood estimates are obtained as ˆα = and ˆβ = .792. See Figure 5.5 for the fit of the model to the data.
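The numerical fitting step above can be sketched as follows (standard-library Python; function names and the simulated parameters are ours, not the thesis's). We simulate WW(α, β) data using the fact that if Y =d WE(α) then X = Y^{1/β} has density (5.7.1), and evaluate the log likelihood (5.7.7); a real fit would maximise `loglik` with a numerical optimiser (e.g. `scipy.optimize`), which we only hint at here by comparing the likelihood at the true parameters against a deliberately wrong β.

```python
import math, random

def loglik(a, b, data):
    # Log likelihood (5.7.7) of the weighted Weibull model
    n = len(data)
    s = n * math.log(a + 1.0) + n * math.log(b) - n * math.log(a)
    for x in data:
        xb = x ** b
        s += (b - 1.0) * math.log(x) - xb + math.log(1.0 - math.exp(-a * xb))
    return s

rng = random.Random(17)
alpha, beta = 2.0, 1.5
delta = 1.0 / (1.0 + alpha)
# X = Y**(1/beta) with Y ~ WE(alpha) sampled as E1 + delta*E2
data = [(rng.expovariate(1.0) + delta * rng.expovariate(1.0)) ** (1.0 / beta)
        for _ in range(5000)]
ll_true = loglik(alpha, beta, data)
ll_wrong = loglik(alpha, 3.0, data)
print(ll_true > ll_wrong)
```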
Figure 5.5: WW fit to the simulated data (Data set 1).

Data set 2 is a real data set reported by Murthy et al. (2004); the data values are the failure times of 50 items, given below. The estimates are obtained as ˆα = and β = ; see Figure 5.6.

Data set 2: 0.036, 0.058, 0.06, 0.074, 0.078, 0.086, 0.02, 0.03, 0.4, 0.6, 0.48, 0.83, 0.92, 0.254, 0.262, 0.379, 0.38, 0.538, 0.570, 0.574, 0.590, 0.68, 0.645, 0.96, .228, .600, 2.006, 2.054, 2.804, 3.058, 3.076, 3.47, 3.625, 3.704, 3.93, 4.073, 4.393, 4.534, 4.893, 6.274, 6.86, 7.896, 7.904, 8.022, 9.337, 0.94, .02, 3.88, 4.73,
Figure 5.6: WW fit to Data set 2.
More informationProbability and Statistics
Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be Chapter 3: Parametric families of univariate distributions CHAPTER 3: PARAMETRIC
More informationOn the Comparison of Fisher Information of the Weibull and GE Distributions
On the Comparison of Fisher Information of the Weibull and GE Distributions Rameshwar D. Gupta Debasis Kundu Abstract In this paper we consider the Fisher information matrices of the generalized exponential
More informationTwo Weighted Distributions Generated by Exponential Distribution
Journal of Mathematical Extension Vol. 9, No. 1, (2015), 1-12 ISSN: 1735-8299 URL: http://www.ijmex.com Two Weighted Distributions Generated by Exponential Distribution A. Mahdavi Vali-e-Asr University
More informationA New Method for Generating Distributions with an Application to Exponential Distribution
A New Method for Generating Distributions with an Application to Exponential Distribution Abbas Mahdavi & Debasis Kundu Abstract Anewmethodhasbeenproposedtointroduceanextraparametertoafamilyofdistributions
More information1 Probability and Random Variables
1 Probability and Random Variables The models that you have seen thus far are deterministic models. For any time t, there is a unique solution X(t). On the other hand, stochastic models will result in
More informationStat410 Probability and Statistics II (F16)
Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two
More informationThe Inverse Weibull Inverse Exponential. Distribution with Application
International Journal of Contemporary Mathematical Sciences Vol. 14, 2019, no. 1, 17-30 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ijcms.2019.913 The Inverse Weibull Inverse Exponential Distribution
More informationIntroduction of Shape/Skewness Parameter(s) in a Probability Distribution
Journal of Probability and Statistical Science 7(2), 153-171, Aug. 2009 Introduction of Shape/Skewness Parameter(s) in a Probability Distribution Rameshwar D. Gupta University of New Brunswick Debasis
More informationContinuous Random Variables and Continuous Distributions
Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Expectation & Variance of Continuous Random Variables ( 5.2) The Uniform Random Variable
More informationPCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities
PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets
More informationProblem Selected Scores
Statistics Ph.D. Qualifying Exam: Part II November 20, 2010 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. Problem 1 2 3 4 5 6 7 8 9 10 11 12 Selected
More informationChapter 5 continued. Chapter 5 sections
Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationSome Generalizations of Weibull Distribution and Related Processes
Journal of Statistical Theory and Applications, Vol. 4, No. 4 (December 205), 425-434 Some Generalizations of Weibull Distribution and Related Processes K. Jayakumar Department of Statistics, University
More informationActuarial Science Exam 1/P
Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,
More informationWeighted Marshall-Olkin Bivariate Exponential Distribution
Weighted Marshall-Olkin Bivariate Exponential Distribution Ahad Jamalizadeh & Debasis Kundu Abstract Recently Gupta and Kundu [9] introduced a new class of weighted exponential distributions, and it can
More informationthe convolution of f and g) given by
09:53 /5/2000 TOPIC Characteristic functions, cont d This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that
More informationReview 1: STAT Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics. August 25, 2015
Review : STAT 36 Mark Carpenter, Ph.D. Professor of Statistics Department of Mathematics and Statistics August 25, 25 Support of a Random Variable The support of a random variable, which is usually denoted
More informationSummary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016
8. For any two events E and F, P (E) = P (E F ) + P (E F c ). Summary of basic probability theory Math 218, Mathematical Statistics D Joyce, Spring 2016 Sample space. A sample space consists of a underlying
More informationRandom Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R
In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample
More informationDistributions of Functions of Random Variables. 5.1 Functions of One Random Variable
Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique
More informationLecture 17: The Exponential and Some Related Distributions
Lecture 7: The Exponential and Some Related Distributions. Definition Definition: A continuous random variable X is said to have the exponential distribution with parameter if the density of X is e x if
More informationContinuous Random Variables
Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables
More informationSTAT 3610: Review of Probability Distributions
STAT 3610: Review of Probability Distributions Mark Carpenter Professor of Statistics Department of Mathematics and Statistics August 25, 2015 Support of a Random Variable Definition The support of a random
More informationNorthwestern University Department of Electrical Engineering and Computer Science
Northwestern University Department of Electrical Engineering and Computer Science EECS 454: Modeling and Analysis of Communication Networks Spring 2008 Probability Review As discussed in Lecture 1, probability
More informationParameter Estimation
Parameter Estimation Chapters 13-15 Stat 477 - Loss Models Chapters 13-15 (Stat 477) Parameter Estimation Brian Hartman - BYU 1 / 23 Methods for parameter estimation Methods for parameter estimation Methods
More information3 Continuous Random Variables
Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random
More informationSlides 8: Statistical Models in Simulation
Slides 8: Statistical Models in Simulation Purpose and Overview The world the model-builder sees is probabilistic rather than deterministic: Some statistical model might well describe the variations. An
More informationLECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity.
LECTURES 2-3 : Stochastic Processes, Autocorrelation function. Stationarity. Important points of Lecture 1: A time series {X t } is a series of observations taken sequentially over time: x t is an observation
More informationIf we want to analyze experimental or simulated data we might encounter the following tasks:
Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction
More informationReview for the previous lecture
Lecture 1 and 13 on BST 631: Statistical Theory I Kui Zhang, 09/8/006 Review for the previous lecture Definition: Several discrete distributions, including discrete uniform, hypergeometric, Bernoulli,
More informationCourse: ESO-209 Home Work: 1 Instructor: Debasis Kundu
Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear
More information6.3 Forecasting ARMA processes
6.3. FORECASTING ARMA PROCESSES 123 6.3 Forecasting ARMA processes The purpose of forecasting is to predict future values of a TS based on the data collected to the present. In this section we will discuss
More informationA Class of Weighted Weibull Distributions and Its Properties. Mervat Mahdy Ramadan [a],*
Studies in Mathematical Sciences Vol. 6, No. 1, 2013, pp. [35 45] DOI: 10.3968/j.sms.1923845220130601.1065 ISSN 1923-8444 [Print] ISSN 1923-8452 [Online] www.cscanada.net www.cscanada.org A Class of Weighted
More information1 Introduction to Generalized Least Squares
ECONOMICS 7344, Spring 2017 Bent E. Sørensen April 12, 2017 1 Introduction to Generalized Least Squares Consider the model Y = Xβ + ɛ, where the N K matrix of regressors X is fixed, independent of the
More informationSOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM
SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1
More informationGeneralized Exponential Distribution: Existing Results and Some Recent Developments
Generalized Exponential Distribution: Existing Results and Some Recent Developments Rameshwar D. Gupta 1 Debasis Kundu 2 Abstract Mudholkar and Srivastava [25] introduced three-parameter exponentiated
More informationMarshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications
CHAPTER 6 Marshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications 6.1 Introduction Exponential distributions have been introduced as a simple model for statistical analysis
More informationLIST OF FORMULAS FOR STK1100 AND STK1110
LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationSampling Distributions
In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of random sample. For example,
More informationDefinition 1.1 (Parametric family of distributions) A parametric distribution is a set of distribution functions, each of which is determined by speci
Definition 1.1 (Parametric family of distributions) A parametric distribution is a set of distribution functions, each of which is determined by specifying one or more values called parameters. The number
More information1: PROBABILITY REVIEW
1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following
More information0, otherwise, (a) Find the value of c that makes this a valid pdf. (b) Find P (Y < 5) and P (Y 5). (c) Find the mean death time.
1. In a toxicology experiment, Y denotes the death time (in minutes) for a single rat treated with a toxin. The probability density function (pdf) for Y is given by cye y/4, y > 0 (a) Find the value of
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume II: Probability Emlyn Lloyd University oflancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester - New York - Brisbane
More informationContinuous Distributions
Chapter 3 Continuous Distributions 3.1 Continuous-Type Data In Chapter 2, we discuss random variables whose space S contains a countable number of outcomes (i.e. of discrete type). In Chapter 3, we study
More informationClass 1: Stationary Time Series Analysis
Class 1: Stationary Time Series Analysis Macroeconometrics - Fall 2009 Jacek Suda, BdF and PSE February 28, 2011 Outline Outline: 1 Covariance-Stationary Processes 2 Wold Decomposition Theorem 3 ARMA Models
More informationENSC327 Communications Systems 19: Random Processes. Jie Liang School of Engineering Science Simon Fraser University
ENSC327 Communications Systems 19: Random Processes Jie Liang School of Engineering Science Simon Fraser University 1 Outline Random processes Stationary random processes Autocorrelation of random processes
More informationThe autocorrelation and autocovariance functions - helpful tools in the modelling problem
The autocorrelation and autocovariance functions - helpful tools in the modelling problem J. Nowicka-Zagrajek A. Wy lomańska Institute of Mathematics and Computer Science Wroc law University of Technology,
More informationNonlinear time series
Based on the book by Fan/Yao: Nonlinear Time Series Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 27, 2009 Outline Characteristics of
More informationMultivariate Distribution Models
Multivariate Distribution Models Model Description While the probability distribution for an individual random variable is called marginal, the probability distribution for multiple random variables is
More informationp-birnbaum SAUNDERS DISTRIBUTION: APPLICATIONS TO RELIABILITY AND ELECTRONIC BANKING HABITS
p-birnbaum SAUNDERS DISTRIBUTION: APPLICATIONS TO RELIABILITY AND ELECTRONIC BANKING 1 V.M.Chacko, Mariya Jeeja P V and 3 Deepa Paul 1, Department of Statistics St.Thomas College, Thrissur Kerala-681 e-mail:chackovm@gmail.com
More informationSTA 2201/442 Assignment 2
STA 2201/442 Assignment 2 1. This is about how to simulate from a continuous univariate distribution. Let the random variable X have a continuous distribution with density f X (x) and cumulative distribution
More informationGumbel Distribution: Generalizations and Applications
CHAPTER 3 Gumbel Distribution: Generalizations and Applications 31 Introduction Extreme Value Theory is widely used by many researchers in applied sciences when faced with modeling extreme values of certain
More informationStatistics for scientists and engineers
Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3
More informationThe Laplace driven moving average a non-gaussian stationary process
The Laplace driven moving average a non-gaussian stationary process 1, Krzysztof Podgórski 2, Igor Rychlik 1 1 Mathematical Sciences, Mathematical Statistics, Chalmers 2 Centre for Mathematical Sciences,
More informationt x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3.
Mathematical Statistics: Homewor problems General guideline. While woring outside the classroom, use any help you want, including people, computer algebra systems, Internet, and solution manuals, but mae
More informationwhere r n = dn+1 x(t)
Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution
More informationA Very Brief Summary of Statistical Inference, and Examples
A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)
More informationPROBABILITY DISTRIBUTION
PROBABILITY DISTRIBUTION DEFINITION: If S is a sample space with a probability measure and x is a real valued function defined over the elements of S, then x is called a random variable. Types of Random
More informationHybrid Censoring; An Introduction 2
Hybrid Censoring; An Introduction 2 Debasis Kundu Department of Mathematics & Statistics Indian Institute of Technology Kanpur 23-rd November, 2010 2 This is a joint work with N. Balakrishnan Debasis Kundu
More informationProbability Theory and Statistics. Peter Jochumzen
Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................
More information1 Appendix A: Matrix Algebra
Appendix A: Matrix Algebra. Definitions Matrix A =[ ]=[A] Symmetric matrix: = for all and Diagonal matrix: 6=0if = but =0if 6= Scalar matrix: the diagonal matrix of = Identity matrix: the scalar matrix
More informationPh.D. Qualifying Exam Friday Saturday, January 6 7, 2017
Ph.D. Qualifying Exam Friday Saturday, January 6 7, 2017 Put your solution to each problem on a separate sheet of paper. Problem 1. (5106) Let X 1, X 2,, X n be a sequence of i.i.d. observations from a
More informationGeneralized quantiles as risk measures
Generalized quantiles as risk measures Bellini, Klar, Muller, Rosazza Gianin December 1, 2014 Vorisek Jan Introduction Quantiles q α of a random variable X can be defined as the minimizers of a piecewise
More informationJoint Probability Distributions
Joint Probability Distributions ST 370 In many random experiments, more than one quantity is measured, meaning that there is more than one random variable. Example: Cell phone flash unit A flash unit is
More informationTopic 4: Continuous random variables
Topic 4: Continuous random variables Course 3, 216 Page Continuous random variables Definition (Continuous random variable): An r.v. X has a continuous distribution if there exists a non-negative function
More informationTopic 4: Continuous random variables
Topic 4: Continuous random variables Course 003, 2018 Page 0 Continuous random variables Definition (Continuous random variable): An r.v. X has a continuous distribution if there exists a non-negative
More informationCHAPTER 6 TIME SERIES AND STOCHASTIC PROCESSES
CHAPTER 6 TIME SERIES AND STOCHASTIC PROCESSES [This chapter is based on the lectures of K.K. Jose, Department of Statistics, St. Thomas College, Palai, Kerala, India at the 5th SERC School] 6.0. Introduction
More information4. Distributions of Functions of Random Variables
4. Distributions of Functions of Random Variables Setup: Consider as given the joint distribution of X 1,..., X n (i.e. consider as given f X1,...,X n and F X1,...,X n ) Consider k functions g 1 : R n
More informationLecture 5: Moment generating functions
Lecture 5: Moment generating functions Definition 2.3.6. The moment generating function (mgf) of a random variable X is { x e tx f M X (t) = E(e tx X (x) if X has a pmf ) = etx f X (x)dx if X has a pdf
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationThe Marshall-Olkin Flexible Weibull Extension Distribution
The Marshall-Olkin Flexible Weibull Extension Distribution Abdelfattah Mustafa, B. S. El-Desouky and Shamsan AL-Garash arxiv:169.8997v1 math.st] 25 Sep 216 Department of Mathematics, Faculty of Science,
More informationExercises and Answers to Chapter 1
Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean
More informationDiscrete time processes
Discrete time processes Predictions are difficult. Especially about the future Mark Twain. Florian Herzog 2013 Modeling observed data When we model observed (realized) data, we encounter usually the following
More informationNeighbourhoods of Randomness and Independence
Neighbourhoods of Randomness and Independence C.T.J. Dodson School of Mathematics, Manchester University Augment information geometric measures in spaces of distributions, via explicit geometric representations
More informationSTAT/MATH 395 A - PROBABILITY II UW Winter Quarter Moment functions. x r p X (x) (1) E[X r ] = x r f X (x) dx (2) (x E[X]) r p X (x) (3)
STAT/MATH 395 A - PROBABILITY II UW Winter Quarter 07 Néhémy Lim Moment functions Moments of a random variable Definition.. Let X be a rrv on probability space (Ω, A, P). For a given r N, E[X r ], if it
More informationParameter Estimation of Power Lomax Distribution Based on Type-II Progressively Hybrid Censoring Scheme
Applied Mathematical Sciences, Vol. 12, 2018, no. 18, 879-891 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ams.2018.8691 Parameter Estimation of Power Lomax Distribution Based on Type-II Progressively
More informationE[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) =
Chapter 7 Generating functions Definition 7.. Let X be a random variable. The moment generating function is given by M X (t) =E[e tx ], provided that the expectation exists for t in some neighborhood of
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationPREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES 1. Murray Rosenblatt University of California, San Diego
PREDICTION AND NONGAUSSIAN AUTOREGRESSIVE STATIONARY SEQUENCES 1 Murray Rosenblatt University of California, San Diego Abstract The object of this paper is to show that under certain auxiliary assumptions
More informationStochastic Processes: I. consider bowl of worms model for oscilloscope experiment:
Stochastic Processes: I consider bowl of worms model for oscilloscope experiment: SAPAscope 2.0 / 0 1 RESET SAPA2e 22, 23 II 1 stochastic process is: Stochastic Processes: II informally: bowl + drawing
More informationSystem Simulation Part II: Mathematical and Statistical Models Chapter 5: Statistical Models
System Simulation Part II: Mathematical and Statistical Models Chapter 5: Statistical Models Fatih Cavdur fatihcavdur@uludag.edu.tr March 20, 2012 Introduction Introduction The world of the model-builder
More informationON A GENERALIZATION OF THE GUMBEL DISTRIBUTION
ON A GENERALIZATION OF THE GUMBEL DISTRIBUTION S. Adeyemi Department of Mathematics Obafemi Awolowo University, Ile-Ife. Nigeria.0005 e-mail:shollerss00@yahoo.co.uk Abstract A simple generalization of
More informationLecture 2: Review of Probability
Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................
More informationSpring 2012 Math 541B Exam 1
Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote
More informationApplied Probability Models in Marketing Research: Introduction
Applied Probability Models in Marketing Research: Introduction (Supplementary Materials for the A/R/T Forum Tutorial) Bruce G. S. Hardie London Business School bhardie@london.edu www.brucehardie.com Peter
More informationExam P Review Sheet. for a > 0. ln(a) i=0 ari = a. (1 r) 2. (Note that the A i s form a partition)
Exam P Review Sheet log b (b x ) = x log b (y k ) = k log b (y) log b (y) = ln(y) ln(b) log b (yz) = log b (y) + log b (z) log b (y/z) = log b (y) log b (z) ln(e x ) = x e ln(y) = y for y > 0. d dx ax
More information