IDENTIFIABILITY OF THE MULTIVARIATE NORMAL BY THE MAXIMUM AND THE MINIMUM
Surveys in Mathematics and its Applications, Volume 5 (2010), 311-320

Arunava Mukherjea

Abstract. In this paper, we discuss theoretical problems in statistics on the identification of the parameters of a non-singular multivariate normal when only the distribution of the maximum or the distribution of the minimum is known.

1 Introduction

Let X_1, X_2, ..., X_m, where X_i = (X_{i1}, X_{i2}, ..., X_{in}), be m independent random n-vectors, each with an n-variate non-singular density in some class F. Let Y_1, Y_2, ..., Y_p be another such independent family of random n-vectors, with each Y_i having its n-variate non-singular density in F. Suppose that X_0 = (X_{01}, ..., X_{0n}), where X_{0j} = max{X_{ij} : 1 ≤ i ≤ m}, and Y_0 = (Y_{01}, ..., Y_{0n}), where Y_{0j} = max{Y_{ij} : 1 ≤ i ≤ p}, have the same n-dimensional distribution function. The problem is to determine whether m must equal p, and whether the distributions of {X_1, X_2, ..., X_m} are simply a rearrangement of those of {Y_1, Y_2, ..., Y_p}.

This problem comes up naturally in the context of a supply-demand problem in econometrics and, as far as we know, was first considered in [2], where it was solved (in the affirmative) for the class of univariate normal distributions, and also for the class of bivariate normal distributions with positive correlations. In [18], it was solved in the affirmative for the class of bivariate normal distributions with positive or negative correlations. However, if such distributions with zero correlations are allowed, then unique factorization of products of such distributions no longer holds, and this can be verified by considering simply four univariate normal distribution functions

2010 Mathematics Subject Classification: 62H05; 62H10; 60E05.
Keywords: Multivariate normal distributions; Identification of parameters.
F_1(x), F_2(x), F_3(y), F_4(y) and observing that for all x, y, we have the equality:

    [F_1(x)F_3(y)][F_2(x)F_4(y)] = [F_1(x)F_4(y)][F_2(x)F_3(y)].

In [19], it was shown in the general n-variate normal case that when the n-variate normal distributions of each X_i, 1 ≤ i ≤ m, and each Y_j, 1 ≤ j ≤ p, have positive partial correlations (that is, the off-diagonal entries of the inverse of the covariance matrix are all negative), then when X_0 and Y_0, as defined earlier, have the same distribution function, we must have m = p, and the distributions of the X_i, 1 ≤ i ≤ m, must be a permutation of those of the Y_j, 1 ≤ j ≤ p. In [17], this problem was solved in the affirmative for the general n-variate case when the covariance matrices of all the n-variate normal distributions are of the form Σ_ij = ρσ_iσ_j for i ≠ j. As far as we know, the general problem discussed above is still open.

The maximum problem above occurs in m-component systems where the components are connected in parallel. The system lifetime is then given by max{X_1, X_2, ..., X_m}, and this is observable. There are instances where this maximum is observable, but the individual X_i's are not. It may also be noted that if the distribution function of Y is F²(x), and if X_1 and X_2 are two independent random variables each with distribution function F(x), then max{X_1, X_2} and Y have the same distribution function.

In [8], a corresponding minimum problem was discussed in the context of a probability model describing the death of an individual from one of several competing causes. Let X_1, X_2, ..., X_n be independent random variables with continuous distribution functions, Z = min{X_1, X_2, ..., X_n}, and I = k iff Z = X_k. If the X_i have a common distribution function, then it is uniquely determined by the distribution function of Z.
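The last fact is easy to see explicitly: if the X_i share the distribution function F, then F_Z(z) = 1 − (1 − F(z))^n, so F(z) = 1 − (1 − F_Z(z))^{1/n}. The sketch below checks this inversion by simulation; the exponential components and all parameter values are illustrative choices of mine, not from the paper.

```python
import math
import random

random.seed(0)

n, rate = 3, 1.5            # illustrative: three i.i.d. Exp(1.5) components
N = 200_000
z0 = 0.3                    # point at which to recover the common F

# empirical distribution function of Z = min of the n components at z0
hits = sum(min(random.expovariate(rate) for _ in range(n)) <= z0
           for _ in range(N))
Fz = hits / N

# common F recovered via F(z) = 1 - (1 - F_Z(z))^(1/n)
F_recovered = 1 - (1 - Fz) ** (1 / n)
F_true = 1 - math.exp(-rate * z0)
print(F_recovered, F_true)   # the two values nearly agree
```

The recovery uses nothing but the distribution of the un-identified minimum, which is exactly the point of the statement above: a common distribution function is determined by the law of Z alone.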
In [8], it was shown that when the distributions of the X_i are not all the same, then the joint distribution function of the identified minimum (that is, that of (I, Z)) uniquely determines each F_i(x), the distribution function of X_i, i = 1, 2, ..., n. Since max{X_1, X_2, ..., X_n} = -min{-X_1, -X_2, ..., -X_n}, the distribution of the identified maximum also uniquely determines the distributions of the X_i. Notice that if we consider n = 2 and the case where the independent random variables X_1 and X_2 are both exponential, such that their density functions are given by

    f_1(x) = λe^{-λx}, x > 0;  = 0, otherwise,

and

    f_2(x) = μe^{-μx}, x > 0;  = 0, otherwise,
where λ and μ are both positive, then even though the joint distribution of their identified minimum (that is, that of (I, min{X_1, X_2})) uniquely determines the parameters λ and μ, the distribution of min{X_1, X_2} by itself does not identify the parameters λ and μ uniquely: min{X_1, X_2} is exponential with parameter λ + μ, so only the sum λ + μ is determined. Thus, one natural question comes up: when does the minimum of an n-variate random vector uniquely determine the parameters of the distribution of the random vector? In the rest of this section, we discuss this problem.

As far as we know, the problem of identification of the parameters of a random vector (X_1, X_2, ..., X_n) by the distribution of the minimum (namely, min{X_i : 1 ≤ i ≤ n}) is still unsolved even in the case of an n-variate normal vector. In the bivariate normal case, the problem was solved in the affirmative (in the natural sense) in [5] and [16] independently. The problem was considered also in the trivariate normal case in [5], in the context of an identified minimum, and solved partially. The general minimum problem in the n-variate normal case was solved in [9] in the case of a common correlation, and in [12] in the case of the trivariate normal with negative correlations. In the next section, we present asymptotic orders of certain tail probabilities for a multivariate normal random vector that are useful in the context of the problems mentioned above. The purpose of this note is to present some essential results which are useful in solving the identified minimum problem in the general trivariate normal case.

2 Tail probabilities of a multivariate normal

Let (X_1, X_2, ..., X_n) be an n-variate normal random vector with a symmetric positive definite covariance matrix Σ such that the vector Σ^{-1}1' = (α_1, α_2, ..., α_n)', where 1 = (1, 1, ..., 1), is positive (that is, each α_i is positive).
Then it was proven in [9] that, as t → ∞, the tail probability P(X_1 > t, X_2 > t, ..., X_n > t) is of the same (asymptotic) order as

    C exp( -(1/2) t² [1 Σ^{-1} 1'] ),   where   C = 1 / [ (2π)^{n/2} (det Σ)^{1/2} (α_1 α_2 ⋯ α_n) t^n ].

Here we consider two functions f(t) and g(t) as having the same order as t → ∞ if lim_{t→∞} f(t)/g(t) = 1. Thus, when n = 2, we can state the following lemma.
Lemma 1. Let (X_1, X_2) be a bivariate normal with zero means, variances σ_1², σ_2² (where σ_1² ≥ σ_2²) and correlation ρ, |ρ| < 1, such that ρ < σ_2/σ_1. Then, we have, as t → ∞:

    P(X_1 > t, X_2 > t) ~ C exp( -(1/2) t² (σ_1² + σ_2² - 2ρσ_1σ_2) / (σ_1²σ_2²(1 - ρ²)) ),

where

    C = σ_1²σ_2²(1 - ρ²)^{3/2} / [ 2π t² (σ_2 - ρσ_1)(σ_1 - ρσ_2) ].

Let us now consider the case ρ > σ_2/σ_1 for the general bivariate normal (with zero means, for simplicity) considered in Lemma 1. In this case, we no longer have Σ^{-1}1' > 0, and thus we need to use another idea. We can write

    P(X_1 > t, X_2 > t) = ∫_t^∞ P(X_1 > t | X_2 = x) f_{X_2}(x) dx
                        = ∫_t^∞ f_{X_2}(x) [ (1/(√(2π) σ_1 √(1-ρ²))) ∫_t^∞ exp( -(y - ρ(σ_1/σ_2)x)² / (2σ_1²(1-ρ²)) ) dy ] dx
                        = ∫_t^∞ f_{X_2}(x) [ (1/√(2π)) ∫_{g(x,t)}^∞ exp( -z²/2 ) dz ] dx,

where

    g(x, t) = ( t - ρ(σ_1/σ_2)x ) / ( σ_1 √(1 - ρ²) ).

Since ρ > σ_2/σ_1, we have g(x, t) ≤ g(t, t) → -∞ for all x ≥ t, so that for t sufficiently large (given a pre-assigned positive δ), we can write:

    P(X_1 > t, X_2 > t) > (1 - δ) ∫_t^∞ f_{X_2}(x) dx.

It follows easily that as t → ∞, when ρ > σ_2/σ_1, we have:

    P(X_1 > t, X_2 > t) ~ ( σ_2 / (√(2π) t) ) exp( -t²/(2σ_2²) ).   (2.1)

When ρ = σ_2/σ_1 (< 1) for the bivariate normal considered above, we have, similarly, for any ε > 0 and all sufficiently large t,

    ( (1-ε)σ_2 / (2√(2π) t) ) exp( -t²/(2σ_2²) ) ≤ P(X_1 > t, X_2 > t) ≤ ( (1+ε)σ_2 / (√(2π) t) ) exp( -t²/(2σ_2²) ).   (2.2)
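In the independent case ρ = 0 with unit variances, the asymptotic expression of Lemma 1 reduces to (1/(2πt²)) exp(−t²), while the exact probability is P(X_1 > t)², computable from the complementary error function. The sketch below is my own numerical check, not part of the paper; it shows the ratio of the exact probability to the asymptotic expression approaching 1 as t grows.

```python
import math

def normal_sf(t):
    # P(N(0,1) > t), via the complementary error function
    return 0.5 * math.erfc(t / math.sqrt(2))

def lemma1_asymptotic(t, rho=0.0, s1=1.0, s2=1.0):
    # C * exp(-t^2 (s1^2 + s2^2 - 2 rho s1 s2) / (2 s1^2 s2^2 (1 - rho^2)))
    expo = (s1**2 + s2**2 - 2*rho*s1*s2) / (2 * s1**2 * s2**2 * (1 - rho**2))
    C = (s1**2 * s2**2 * (1 - rho**2)**1.5
         / (2 * math.pi * t**2 * (s2 - rho*s1) * (s1 - rho*s2)))
    return C * math.exp(-expo * t * t)

ratios = []
for t in (5.0, 10.0, 20.0):
    exact = normal_sf(t) ** 2          # P(X1 > t, X2 > t) when rho = 0
    ratios.append(exact / lemma1_asymptotic(t))
print(ratios)   # tends to 1 as t increases
```

The exact probability sits below the asymptotic expression for every finite t here, since the one-dimensional tail is strictly below its Mills-ratio bound φ(t)/t.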
Lemma 2. Let (X_1, X_2) be a bivariate normal with zero means, variances each 1, and correlation ρ, |ρ| < 1. Let α > 0, β > 0 and ρ < min{α/β, β/α}. Then we have, as t → ∞:

    P(X_1 ≥ αt, X_2 ≥ βt) ~ C exp( -(1/2) t² (α² + β² - 2ραβ) / (1 - ρ²) ),

which is o( exp(-(1/2)β²t²) ) and also o( exp(-(1/2)α²t²) ), where

    C = (1 - ρ²)^{3/2} / [ 2π t² (α - ρβ)(β - ρα) ].

Lemma 3. Let (X_1, X_2) be as in Lemma 2. Let α > 0, β > 0, α ≤ β and ρ > α/β. Then we have

    P(X_1 > αt, X_2 > βt) ~ ( 1 / (√(2π) βt) ) exp( -(1/2)β²t² )

as t → ∞.

Both Lemmas 2 and 3 follow from Lemma 1 and equation (2.1) above. In Lemma 3, if we take ρ = α/β, then given ε > 0 and for all sufficiently large t,

    ( (1-ε) / (2√(2π) βt) ) exp( -(1/2)β²t² ) ≤ P(X_1 > αt, X_2 > βt) ≤ ( (1+ε) / (√(2π) βt) ) exp( -(1/2)β²t² ).   (2.3)

Lemma 4. Let (X_1, X_2) be as in Lemma 2. Let α > 0, β > 0, α ≠ β. Let ρ < min{α/β, β/α}. Then as t → ∞,

    P(X_1 > αt or X_2 > βt) ~ ( 1 / (√(2π) μ t) ) exp( -(1/2)μ²t² ),   μ = min{α, β}.

Proof. Notice that

    P(X_1 > αt or X_2 > βt) = P(X_1 > αt, X_2 > βt) + P(X_1 ≤ αt, X_2 > βt) + P(X_1 > αt, X_2 ≤ βt).

The first term on the right-hand side is, by Lemma 2, o( (1/(√(2π)μt)) exp(-(1/2)μ²t²) ) as t → ∞. The second term there can be written as

    P(X_2 > βt) - P(X_1 > αt, X_2 > βt) ~ P(X_2 > βt) as t → ∞,

and similarly, the third term is ~ P(X_1 > αt) as t → ∞. Since α ≠ β, the sum P(X_1 > αt) + P(X_2 > βt) is asymptotic to the tail of the smaller threshold, and the lemma is now clear.
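Reading the event in Lemma 4 as the union {X_1 > αt} ∪ {X_2 > βt}, as in the decomposition used in its proof, the independent case ρ = 0 is exactly computable: the union probability is sf(αt) + sf(βt) − sf(αt)sf(βt). The following is a numerical sketch of mine (not part of the paper) with illustrative α, β:

```python
import math

def sf(t):
    # one-dimensional normal tail P(N(0,1) > t)
    return 0.5 * math.erfc(t / math.sqrt(2))

def mills(a, t):
    # the claimed asymptotic (1 / (sqrt(2 pi) a t)) * exp(-a^2 t^2 / 2)
    return math.exp(-0.5 * (a * t) ** 2) / (math.sqrt(2 * math.pi) * a * t)

alpha, beta = 1.0, 2.0      # illustrative, alpha != beta; rho = 0 here
ratios = []
for t in (4.0, 8.0, 16.0):
    # with rho = 0: P(X1 > at or X2 > bt) = sf(at) + sf(bt) - sf(at) sf(bt)
    p_union = sf(alpha * t) + sf(beta * t) - sf(alpha * t) * sf(beta * t)
    ratios.append(p_union / mills(min(alpha, beta), t))
print(ratios)   # approaches 1 as t grows
```

The smaller threshold dominates, as the proof's decomposition indicates: the joint tail and the larger marginal tail are both negligible relative to the smaller one.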
Lemma 5. Let (X_1, X_2) be as in Lemma 2. Let α < 0, β > 0, |α| > β. Then

    P(X_1 > αt, X_2 > βt) ~ ( 1 / (√(2π) βt) ) exp( -(1/2)β²t² )

as t → ∞.

Proof. Let us write:

    P(X_1 > αt, X_2 > βt) = P(X_2 > βt) - P(-X_1 ≥ (-α)t, X_2 > βt).

Let t → ∞. Since the correlation of (-X_1, X_2) is -ρ, it follows from Lemma 2 that when -ρ < β/(-α),

    P(-X_1 ≥ (-α)t, X_2 > βt) = o( exp(-(1/2)β²t²) ).

Also, it follows from Lemma 3 and the inequalities in (2.3) that when -ρ ≥ β/(-α), as t → ∞,

    P(-X_1 ≥ (-α)t, X_2 > βt) ~ C(t) ( 1 / (√(2π)(-α)t) ) exp( -(1/2)α²t² ) = o( exp(-(1/2)β²t²) ),

where 1/2 ≤ C(t) ≤ 1. The lemma now follows easily.

Lemma 6. Let (X_1, X_2) be as in Lemma 2. Let α < 0. Then as t → ∞,

    P(X_1 > αt, X_2 > (-α)t) ~ ( 1 / (√(2π)|α|t) ) exp( -(1/2)α²t² ).

Proof. It is enough to observe that

    P(X_1 > αt, X_2 > (-α)t) = P(X_2 > (-α)t) - P(-X_1 ≥ (-α)t, X_2 > (-α)t),

and then Lemma 2 applies.
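Lemmas 5 and 6 can likewise be sanity-checked in the independent case ρ = 0, where the joint probabilities factor into one-dimensional normal tails. This is again a numerical check of mine with illustrative parameter values, not part of the paper:

```python
import math

def sf(t):
    # P(N(0,1) > t)
    return 0.5 * math.erfc(t / math.sqrt(2))

def mills(a, t):
    # (1 / (sqrt(2 pi) a t)) * exp(-a^2 t^2 / 2)
    return math.exp(-0.5 * (a * t) ** 2) / (math.sqrt(2 * math.pi) * a * t)

r5, r6 = [], []
for t in (4.0, 8.0, 16.0):
    # Lemma 5 with alpha = -2, beta = 1 (so |alpha| > beta), rho = 0:
    # P(X1 > -2t, X2 > t) = (1 - sf(2t)) * sf(t) ~ mills(1, t)
    r5.append((1 - sf(2 * t)) * sf(t) / mills(1.0, t))
    # Lemma 6 with alpha = -1, rho = 0:
    # P(X1 > -t, X2 > t) = (1 - sf(t)) * sf(t) ~ mills(1, t)
    r6.append((1 - sf(t)) * sf(t) / mills(1.0, t))
print(r5, r6)   # both sequences approach 1 as t grows
```

In both lemmas the negative-threshold coordinate contributes a factor tending to 1, so the whole probability behaves like a single one-dimensional tail, which is what the asymptotics assert.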
3 The pdf of the identified minimum

Let (X_1, X_2, X_3) be a trivariate normal random vector with zero means and variances σ_1², σ_2², σ_3², and a non-singular covariance matrix Σ, where Σ_ij = ρ_ij σ_i σ_j, i ≠ j, Σ_ii = σ_i². We assume, with no loss of generality, that σ_1² ≥ σ_2² ≥ σ_3². Let Y = min{X_1, X_2, X_3}. We define the random variable I by I = i iff Y = X_i, i = 1, 2, 3. Let F(y, i) be the joint distribution of (Y, I), so that F(y, 1) = P(Y ≤ y, I = 1), F(y, 2) = P(Y ≤ y, I = 2), F(y, 3) = P(Y ≤ y, I = 3). Then we have:

    P(Y ≤ y, I = 1) = P(X_1 ≤ y, X_1 ≤ X_2, X_1 ≤ X_3)
                    = ∫_{-∞}^y P(X_2 ≥ x_1, X_3 ≥ x_1 | X_1 = x_1) f_{X_1}(x_1) dx_1.

Now, by differentiating with respect to y, we obtain:

    f_1(y) = (d/dy) F(y, 1) = f_{X_1}(y) P(X_2 ≥ y, X_3 ≥ y | X_1 = y).

Notice that the conditional density of (X_2, X_3), given X_1 = y, is a bivariate normal with means ρ_12(σ_2/σ_1)y, ρ_13(σ_3/σ_1)y, and variances σ_2²(1 - ρ_12²), σ_3²(1 - ρ_13²). Thus, we can write:

    f_1(y) = (1/σ_1) φ(y/σ_1) P( W_2 ≥ [(1 - ρ_12 σ_2/σ_1) / (σ_2 √(1 - ρ_12²))] y, W_3 ≥ [(1 - ρ_13 σ_3/σ_1) / (σ_3 √(1 - ρ_13²))] y ),   (3.1)

where (W_2, W_3) is a bivariate normal with zero means, variances each one, and correlation

    ρ_{23.1} = (ρ_23 - ρ_12 ρ_13) / ( √(1 - ρ_12²) √(1 - ρ_13²) ),

and φ is the standard normal density. Similarly, we also get the expressions for f_2(y) = (d/dy)[F(y, 2)] and f_3(y) = (d/dy)[F(y, 3)], given in (3.2) and (3.3) below. We have

    f_2(y) = (1/σ_2) φ(y/σ_2) P( W_12 ≥ [(1 - ρ_12 σ_1/σ_2) / (σ_1 √(1 - ρ_12²))] y, W_32 ≥ [(1 - ρ_23 σ_3/σ_2) / (σ_3 √(1 - ρ_23²))] y )   (3.2)
and

    f_3(y) = (1/σ_3) φ(y/σ_3) P( W_13 ≥ [(1 - ρ_13 σ_1/σ_3) / (σ_1 √(1 - ρ_13²))] y, W_23 ≥ [(1 - ρ_23 σ_2/σ_3) / (σ_2 √(1 - ρ_23²))] y ),   (3.3)

where (W_12, W_32) and (W_13, W_23) are both bivariate normals with zero means and variances all ones, and correlations given, respectively, by

    ρ_{13.2} = (ρ_13 - ρ_12 ρ_23) / ( √(1 - ρ_12²) √(1 - ρ_23²) )   and   ρ_{12.3} = (ρ_12 - ρ_13 ρ_23) / ( √(1 - ρ_13²) √(1 - ρ_23²) ).

Now the problem of the identified minimum is the following: we will assume that the functions f_1(y), f_2(y) and f_3(y) are given, and we need to prove that there can be only a unique set of parameters σ_1², σ_2², σ_3², ρ_12, ρ_23, ρ_13 that can lead to the same three given functions f_1, f_2 and f_3. The proof, besides other arguments, uses mainly the lemmas given in Section 2. The proof is rather involved and will appear elsewhere in full.

References

[1] T. Amemiya, A note on a Fair and Jaffee model, Econometrica 42 (1974).

[2] T. W. Anderson and S. G. Ghurye, Identification of parameters by the distribution of a maximum random variable, J. Roy. Statist. Soc. Ser. B 39 (1977).

[3] T. W. Anderson and S. G. Ghurye, Unique factorization of products of bivariate normal cumulative distribution functions, Annals of the Institute of Statistical Mathematics 30 (1978).

[4] A. P. Basu, Identifiability problems in the theory of competing and complementary risks: a survey, in Statistical Distributions in Scientific Work, vol. 5 (C. Taillie et al., eds.), D. Reidel Publishing Co., Dordrecht, Holland, 1981.

[5] A. P. Basu and J. K. Ghosh, Identifiability of the multinormal and other distributions under competing risks model, J. Multivariate Analysis 8 (1978).

[6] A. P. Basu and J. K. Ghosh, Identifiability of distributions under competing risks and complementary risks model, Commun. Statistics A9(4) (1980).

[7] T. Bedford and I.
Meilijson, A characterization of marginal distributions of (possibly dependent) lifetime variables which right censor each other, The Annals of Statistics 25(4) (1997).
[8] S. M. Berman, Note on extreme values, competing risks and semi-Markov processes, Ann. Math. Statist. 34 (1963).

[9] M. Dai and A. Mukherjea, Identification of the parameters of a multivariate normal vector by the distribution of the maximum, J. Theoret. Probability 14 (2001).

[10] H. A. David, Estimation of means of normal populations from observed minima, Biometrika 44 (1957).

[11] H. A. David and M. L. Moeschberger, The Theory of Competing Risks, Griffin, London, 1978.

[12] J. Davis and A. Mukherjea, Identification of parameters by the distribution of the minimum, J. Multivariate Analysis 98 (2007).

[13] M. Elnaggar and A. Mukherjea, Identification of parameters of a tri-variate normal vector by the distribution of the minimum, J. Statistical Planning and Inference 78 (1999).

[14] R. C. Fair and H. H. Kelejian, Methods of estimation for markets in disequilibrium: a further study, Econometrica 42 (1974).

[15] F. M. Fisher, The Identification Problem in Econometrics, McGraw-Hill, New York, 1966.

[16] D. C. Gilliand and J. Hannan, Identification of the ordered bivariate normal distribution by minimum variate, J. Amer. Statist. Assoc. 75(371) (1980).

[17] J. Gong and A. Mukherjea, Solution of the problem on the identification of parameters by the distribution of the maximum random variable: A multivariate normal case, J. Theoretical Probability 4(4) (1991).

[18] A. Mukherjea, A. Nakassis and J. Miyashita, Identification of parameters by the distribution of the maximum random variable: The Anderson-Ghurye theorem, J. Multivariate Analysis 18 (1986).

[19] A. Mukherjea and R. Stephens, Identification of parameters by the distribution of the maximum random variable: the general multivariate normal case, Prob. Theory and Rel. Fields 84 (1990).
[20] A. Mukherjea and R. Stephens, The problem of identification of parameters by the distribution of the maximum random variable: solution for the trivariate normal case, J. Multivariate Anal. 34 (1990).

[21] A. Nadas, On estimating the distribution of a random vector when only the smallest coordinate is observable, Technometrics 12(4) (1970).

[22] A. A. Tsiatis, A non-identifiability aspect of the problem of competing risks, Proc. Natl. Acad. Sci. (USA) 72 (1975).

[23] A. A. Tsiatis, An example of non-identifiability in competing risks, Scand. Actuarial Journal 1978 (1978).

Arunava Mukherjea
Department of Mathematics, The University of Texas-Pan American,
1201 West University Drive, Edinburg, TX 78541, USA.
arunava.mukherjea@gmail.com
More informationNotes on Random Vectors and Multivariate Normal
MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution
More informationCourse: ESO-209 Home Work: 1 Instructor: Debasis Kundu
Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear
More informationThe main results about probability measures are the following two facts:
Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a
More informationx. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).
.8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics
More informationStat 5101 Notes: Brand Name Distributions
Stat 5101 Notes: Brand Name Distributions Charles J. Geyer February 14, 2003 1 Discrete Uniform Distribution DiscreteUniform(n). Discrete. Rationale Equally likely outcomes. The interval 1, 2,..., n of
More informationIntroduction to Normal Distribution
Introduction to Normal Distribution Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 17-Jan-2017 Nathaniel E. Helwig (U of Minnesota) Introduction
More informationStatistical Inference and Methods
Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and
More informationProbability Background
Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second
More informationProbability Lecture III (August, 2006)
robability Lecture III (August, 2006) 1 Some roperties of Random Vectors and Matrices We generalize univariate notions in this section. Definition 1 Let U = U ij k l, a matrix of random variables. Suppose
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.
Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR
More informationOutline. Random Variables. Examples. Random Variable
Outline Random Variables M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Random variables. CDF and pdf. Joint random variables. Correlated, independent, orthogonal. Correlation,
More informationMA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems
MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Review of Basic Probability The fundamentals, random variables, probability distributions Probability mass/density functions
More information. Find E(V ) and var(v ).
Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number
More informationProblem 1. Problem 2. Problem 3. Problem 4
Problem Let A be the event that the fungus is present, and B the event that the staph-bacteria is present. We have P A = 4, P B = 9, P B A =. We wish to find P AB, to do this we use the multiplication
More informationOn the conservative multivariate Tukey-Kramer type procedures for multiple comparisons among mean vectors
On the conservative multivariate Tukey-Kramer type procedures for multiple comparisons among mean vectors Takashi Seo a, Takahiro Nishiyama b a Department of Mathematical Information Science, Tokyo University
More informationConditional distributions. Conditional expectation and conditional variance with respect to a variable.
Conditional distributions Conditional expectation and conditional variance with respect to a variable Probability Theory and Stochastic Processes, summer semester 07/08 80408 Conditional distributions
More informationThe Delta Method and Applications
Chapter 5 The Delta Method and Applications 5.1 Local linear approximations Suppose that a particular random sequence converges in distribution to a particular constant. The idea of using a first-order
More informationChapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued
Chapter 3 sections 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions 3.6 Conditional
More informationFrailty Models and Copulas: Similarities and Differences
Frailty Models and Copulas: Similarities and Differences KLARA GOETHALS, PAUL JANSSEN & LUC DUCHATEAU Department of Physiology and Biometrics, Ghent University, Belgium; Center for Statistics, Hasselt
More informationCOMPETING RISKS WEIBULL MODEL: PARAMETER ESTIMATES AND THEIR ACCURACY
Annales Univ Sci Budapest, Sect Comp 45 2016) 45 55 COMPETING RISKS WEIBULL MODEL: PARAMETER ESTIMATES AND THEIR ACCURACY Ágnes M Kovács Budapest, Hungary) Howard M Taylor Newark, DE, USA) Communicated
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationLecture 2: Review of Basic Probability Theory
ECE 830 Fall 2010 Statistical Signal Processing instructor: R. Nowak, scribe: R. Nowak Lecture 2: Review of Basic Probability Theory Probabilistic models will be used throughout the course to represent
More informationA PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS
Statistica Sinica 20 2010, 365-378 A PRACTICAL WAY FOR ESTIMATING TAIL DEPENDENCE FUNCTIONS Liang Peng Georgia Institute of Technology Abstract: Estimating tail dependence functions is important for applications
More informationA Note on Tail Behaviour of Distributions. the max domain of attraction of the Frechét / Weibull law under power normalization
ProbStat Forum, Volume 03, January 2010, Pages 01-10 ISSN 0974-3235 A Note on Tail Behaviour of Distributions in the Max Domain of Attraction of the Frechét/ Weibull Law under Power Normalization S.Ravi
More informationChapter 5. Chapter 5 sections
1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationSection 8.1. Vector Notation
Section 8.1 Vector Notation Definition 8.1 Random Vector A random vector is a column vector X = [ X 1 ]. X n Each Xi is a random variable. Definition 8.2 Vector Sample Value A sample value of a random
More informationTwo-dimensional Random Vectors
1 Two-dimensional Random Vectors Joint Cumulative Distribution Function (joint cd) [ ] F, ( x, ) P xand Properties: 1) F, (, ) = 1 ),, F (, ) = F ( x, ) = 0 3) F, ( x, ) is a non-decreasing unction 4)
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More informationLecture 1: August 28
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random
More informationDependence. Practitioner Course: Portfolio Optimization. John Dodson. September 10, Dependence. John Dodson. Outline.
Practitioner Course: Portfolio Optimization September 10, 2008 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y ) (x,
More informationSome Approximations of the Logistic Distribution with Application to the Covariance Matrix of Logistic Regression
Working Paper 2013:9 Department of Statistics Some Approximations of the Logistic Distribution with Application to the Covariance Matrix of Logistic Regression Ronnie Pingel Working Paper 2013:9 June
More informationTotal positivity order and the normal distribution
Journal of Multivariate Analysis 97 (2006) 1251 1261 www.elsevier.com/locate/jmva Total positivity order and the normal distribution Yosef Rinott a,,1, Marco Scarsini b,2 a Department of Statistics, Hebrew
More informationLecture 2 One too many inequalities
University of Illinois Department of Economics Spring 2017 Econ 574 Roger Koenker Lecture 2 One too many inequalities In lecture 1 we introduced some of the basic conceptual building materials of the course.
More informationMultivariate Versus Multinomial Probit: When are Binary Decisions Made Separately also Jointly Optimal?
Multivariate Versus Multinomial Probit: When are Binary Decisions Made Separately also Jointly Optimal? Dale J. Poirier and Deven Kapadia University of California, Irvine March 10, 2012 Abstract We provide
More informationENGG2430A-Homework 2
ENGG3A-Homework Due on Feb 9th,. Independence vs correlation a For each of the following cases, compute the marginal pmfs from the joint pmfs. Explain whether the random variables X and Y are independent,
More informationStat410 Probability and Statistics II (F16)
Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two
More information