Stein's method and zero bias transformation: Application to CDO pricing


1 Stein's method and zero bias transformation: Application to CDO pricing. ESILV and Ecole Polytechnique. Joint work with N. El Karoui.

2 Introduction. CDO: a portfolio credit derivative containing 100 underlying names subject to default risk. Key term to calculate: the cumulative loss $L_T = \sum_{i=1}^n N_i (1-R_i)\,\mathbb{1}_{\{\tau_i \le T\}}$, where $N_i$ is the nominal of each name and $R_i$ its recovery rate. Pricing one CDO tranche reduces to a call function $E[(L_T - K)^+]$ with attachment or detachment point $K$. Motivation: a fast and precise numerical method for CDOs in the market-adopted framework.
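As a baseline for the methods that follow, the call function $E[(L_T - K)^+]$ can be estimated by brute-force Monte Carlo in the one-factor Gaussian copula model described on the next slide. A minimal sketch; all parameter values ($p$, $\rho$, recovery, strike) are illustrative, not from the talk:

```python
import math
import random
from statistics import NormalDist

def tranche_call_mc(n=100, p=0.05, rho=0.3, nominal=1.0, recovery=0.4,
                    strike=3.0, n_paths=5000, seed=0):
    """Monte Carlo estimate of E[(L_T - strike)+] under a one-factor
    Gaussian copula: name i defaults before T iff
    rho*Y + sqrt(1 - rho^2)*Y_i <= N^{-1}(p)."""
    rng = random.Random(seed)
    c = NormalDist().inv_cdf(p)          # default threshold N^{-1}(alpha_i(T))
    beta = math.sqrt(1.0 - rho * rho)
    acc = 0.0
    for _ in range(n_paths):
        y = rng.gauss(0.0, 1.0)          # common factor Y
        loss = 0.0
        for _ in range(n):               # defaults are independent given Y
            if rho * y + beta * rng.gauss(0.0, 1.0) <= c:
                loss += nominal * (1.0 - recovery)
        acc += max(loss - strike, 0.0)
    return acc / n_paths
```

A sanity check: at strike 0 the estimate should approach the expected loss $n\,p\,N_i(1-R_i)$. The point of the talk is precisely to replace this slow simulation by corrected analytic approximations.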

3 Factor models. Standard model in the market: factor model with conditionally independent defaults. Normal factor model (Gaussian copula approach): $\{\tau_i \le t\} = \{\rho_i Y + \sqrt{1-\rho_i^2}\,Y_i \le N^{-1}(\alpha_i(t))\}$, where $Y_i, Y \sim N(0,1)$ are independent and $\alpha_i(t)$ is the average probability of default before $t$. Given $Y$ (not necessarily normal), $L_T$ is written as a sum of independent random variables; $L_T$ follows a binomial law for homogeneous portfolios. Central limit theorems: Gaussian and Poisson approximations.
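Conditioning on $Y = y$ turns the default event into a tail event for the idiosyncratic factor, giving the closed-form conditional default probability $p_i(t \mid y) = N\big((N^{-1}(\alpha_i(t)) - \rho_i y)/\sqrt{1-\rho_i^2}\big)$. A small sketch (parameter values illustrative):

```python
import math
from statistics import NormalDist

N = NormalDist()

def cond_default_prob(alpha, rho, y):
    """P(tau_i <= t | Y = y) in the one-factor normal model: given Y = y,
    the event {rho*Y + sqrt(1-rho^2)*Y_i <= N^{-1}(alpha)} is a tail
    event for the standard normal idiosyncratic factor Y_i."""
    return N.cdf((N.inv_cdf(alpha) - rho * y) / math.sqrt(1.0 - rho * rho))

def avg_over_factor(alpha, rho, m=2000):
    """Averaging p(t|Y) over Y (midpoint quadrature on the quantile
    scale) must recover the unconditional probability alpha."""
    return sum(cond_default_prob(alpha, rho, N.inv_cdf((j + 0.5) / m))
               for j in range(m)) / m
```

The quadrature check `avg_over_factor(alpha, rho) ≈ alpha` is the tower property $E[P(\tau_i \le t \mid Y)] = \alpha_i(t)$.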

4 Some approximation methods. Gaussian approximation in finance: total loss of a homogeneous portfolio (Vasicek (1991)), normal approximation of the binomial distribution; option pricing in the symmetric case (Diener-Diener (2004)). Difficulty: $n \approx 100$ and small default probabilities. Saddle-point method applied to CDOs (Martin-Thompson-Browne, Antonov-Mechkov-Misirpashaev): calculation time for non-homogeneous portfolios. Our solution: combine Gaussian and Poisson approximations and propose a corrector term in each case. Mathematical tools: Stein's method and the zero bias transformation.

5 Stein's method and the zero bias transformation

6 Zero bias transformation (Goldstein-Reinert). Let $X$ be a r.v. of mean zero and variance $\sigma^2$. $X^*$ has the zero-bias distribution of $X$ if, for all regular enough $f$, $E[Xf(X)] = \sigma^2 E[f'(X^*)]$ (1). The distribution of $X^*$ is unique, with density $p_{X^*}(x) = E[X\,\mathbb{1}_{\{X > x\}}]/\sigma^2$. Observation of Stein (1972): if $Z \sim N(0, \sigma^2)$, then $E[Zf(Z)] = \sigma^2 E[f'(Z)]$, i.e. the normal distribution is the fixed point of the transformation.

7 Stein's method and normal approximation. Stein's equation (1972): for the normal approximation of $E[h(X)]$, Stein's idea is to use $f_h$ defined as the solution of $h(x) - \Phi_\sigma(h) = xf(x) - \sigma^2 f'(x)$ (2), where $\Phi_\sigma(h) = E[h(Z)]$ with $Z \sim N(0, \sigma^2)$. Proposition (Stein): if $h$ is absolutely continuous, then $E[h(X)] - \Phi_\sigma(h) = \sigma^2 E[f_h'(X^*) - f_h'(X)] \le \sigma^2 \|f_h''\|\, E[|X^* - X|]$. To estimate the approximation error, it is important to measure the distance between $X^*$ and $X$ and the sup-norm of $f_h$ and its derivatives.

8 Example and estimations. Example (asymmetric Bernoulli distribution): $P(X = q) = p$ and $P(X = -p) = q$ with $q = 1-p$, so that $E[X] = 0$ and $\mathrm{Var}(X) = pq$. Furthermore, $X^* \sim U[-p, q]$. If $X$ and $X^*$ are independent, then for any even function $g$, $E[g(X^* - X)] = \frac{1}{2\sigma^2} E[|X^s|\,G(X^s)]$, where $G(x) = \int_0^x g(t)\,dt$, $X^s = X - \tilde X$ and $\tilde X$ is an independent copy of $X$. In particular, $E[|X^* - X|] = \frac{1}{4\sigma^2} E[|X^s|^3]$ ($O(1/\sqrt{n})$ after scaling) and, for $k \ge 1$, $E[|X^* - X|^k] = \frac{1}{2(k+1)\sigma^2} E[|X^s|^{k+2}]$ ($O(n^{-k/2})$).
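Because $X^* \sim U[-p, q]$ and the interval $[-p, q]$ has length $p + q = 1$, the defining identity $E[Xf(X)] = \sigma^2 E[f'(X^*)]$ can be checked in closed form for this example, with no simulation. A small sanity-check sketch:

```python
def zero_bias_both_sides(p, f):
    """For the asymmetric Bernoulli P(X = q) = p, P(X = -p) = q, q = 1 - p:
    returns (E[X f(X)], sigma^2 E[f'(X*)]) with X* ~ U[-p, q].
    The interval [-p, q] has length 1, so E[f'(X*)] = f(q) - f(-p)."""
    q = 1.0 - p
    sigma2 = p * q                          # Var(X) = p q
    lhs = p * q * f(q) + q * (-p) * f(-p)   # E[X f(X)]
    rhs = sigma2 * (f(q) - f(-p))           # sigma^2 E[f'(X*)]
    return lhs, rhs
```

Both sides agree exactly for any $f$, since each reduces to $pq\,(f(q) - f(-p))$; this is precisely why $U[-p, q]$ is the zero-bias distribution here.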

9 Sum of independent random variables (Goldstein-Reinert, 1997). Let $W = X_1 + \dots + X_n$ where the $X_i$ are independent r.v. of mean zero and $\mathrm{Var}(W) = \sigma_W^2 < \infty$. Then $W^* = W^{(I)} + X_I^*$, where $P(I = i) = \sigma_i^2/\sigma_W^2$ and $W^{(i)} = W - X_i$. Furthermore, $W^{(i)}$, $X_i$, $X_i^*$ and $I$ are mutually independent. Here $W$ and $W^*$ are not independent. $E[|W^* - W|^k] = \frac{1}{2(k+1)\sigma_W^2} \sum_{i=1}^n E[|X_i^s|^{k+2}]$ ($O(n^{-k/2})$).
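The construction $W^* = W^{(I)} + X_I^*$ is directly simulable. The sketch below does this for i.i.d. centered Bernoulli summands (equal variances give $P(I = i) = 1/n$, so by symmetry the random index can be dropped), and checks $E[Wf(W)] = \sigma_W^2 E[f'(W^*)]$ for $f(x) = x^2$, where the left side is $E[W^3] = npq(q-p)$ exactly; $n$, $p$ and the sample size are illustrative:

```python
import random

def sample_w_star(n, p, rng):
    """One draw of W* = W^(I) + X_I* for W = X_1 + ... + X_n, the X_i
    i.i.d. centered Bernoulli (P(X_i = q) = p, P(X_i = -p) = q).
    Equal variances give P(I = i) = 1/n, so by symmetry: drop one
    summand and replace it by X* ~ U[-p, q]."""
    q = 1.0 - p
    w_minus_i = sum(q if rng.random() < p else -p for _ in range(n - 1))
    return w_minus_i + rng.uniform(-p, q)

def check_sum_identity(n=10, p=0.2, n_samples=40000, seed=1):
    """Compare E[W^3] (exact) with sigma_W^2 * E[2 W*] (Monte Carlo),
    i.e. the zero-bias identity with f(x) = x^2, f'(x) = 2x."""
    rng = random.Random(seed)
    q = 1.0 - p
    exact = n * p * q * (q - p)            # E[W f(W)] = E[W^3]
    mc = sum(2.0 * sample_w_star(n, p, rng) for _ in range(n_samples)) / n_samples
    return exact, n * p * q * mc           # sigma_W^2 = n p q
```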

10 Conditional expectation technique. To gain an additional order in the estimations: for example, $E[X_I \mid X_1, \dots, X_n] = \sum_{i=1}^n \frac{\sigma_i^2}{\sigma_W^2} X_i$, so $E\big[E[X_I \mid X_1, \dots, X_n]^2\big] = \sum_{i=1}^n \frac{\sigma_i^6}{\sigma_W^4}$ is of order $O(1/n^2)$, whereas $E[X_I^2]$ is of order $O(1/n)$. This technique is crucial for estimating $E[g(X_I, X_I^*)]$ and $E[f(W)g(X_I, X_I^*)]$.
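Both orders can be computed exactly from the variances alone; a small sketch, using the equal variances $\sigma_i^2 = 1/n$ that appear later in the CDO normalization:

```python
def technic_orders(variances):
    """Given sigma_i^2 = Var(X_i) for centered independent X_i, return
    the exact values of
      E[E[X_I | X_1..X_n]^2] = sum_i sigma_i^6 / sigma_W^4   (order 1/n^2)
      E[X_I^2]               = sum_i sigma_i^4 / sigma_W^2   (order 1/n)."""
    s_w2 = sum(variances)
    smoothed = sum(v ** 3 for v in variances) / s_w2 ** 2
    plain = sum(v ** 2 for v in variances) / s_w2
    return smoothed, plain
```

With $n$ equal variances $1/n$ the two quantities are exactly $1/n^2$ and $1/n$, which is the extra order of smallness the conditioning buys.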

11 First order Gaussian correction. Theorem: the normal approximation $\Phi_{\sigma_W}(h)$ of $E[h(W)]$ has corrector $C_h = \frac{1}{2\sigma_W^4} \sum_{i=1}^n E[X_i^3]\; \Phi_{\sigma_W}\Big(\big(\tfrac{x^2}{3\sigma_W^2} - 1\big)\, x\, h(x)\Big)$ (3), where $h$ has a bounded second order derivative. The corrected approximation error is bounded by $\big|E[h(W)] - \Phi_{\sigma_W}(h) - C_h\big| \le \alpha(h, X_1, \dots, X_n)$, where $\alpha(h, X_1, \dots, X_n)$ depends on $h$ and on moments of the $X_i$ up to fourth order.

12 Remarks. For i.i.d. asymmetric Bernoulli variables: the corrector is of order $O(1/\sqrt{n})$ and the corrected error of order $O(1/n)$. In the symmetric case, $E[X_I^*] = 0$, so $C_h = 0$ for any $h$: $C_h$ can be viewed as an asymmetry corrector. Skewness plays an important role.

13 Ingredients of proof. Development around the total loss $W$: $E[h(W)] - \Phi_{\sigma_W}(h) = \sigma_W^2\, E\big[f_h''(W)(X_I^* - X_I)\big] + \sigma_W^2\, E\big[f_h^{(3)}\big(\xi W^* + (1-\xi)W\big)(1-\xi)(W^* - W)^2\big]$, with $\xi \sim U[0,1]$ independent of the rest. By independence, $E[f_h''(W)X_I^*] = E[X_I^*]E[f_h''(W)] = E[X_I^*]\Phi_{\sigma_W}(f_h'') + E[X_I^*]\big(E[f_h''(W)] - \Phi_{\sigma_W}(f_h'')\big)$. Use the conditional expectation technique for $E[f_h''(W)X_I] = \mathrm{cov}\big(f_h''(W),\, E[X_I \mid X_1, \dots, X_n]\big)$. Write $\Phi_{\sigma_W}(f_h'')$ in terms of $h$ and obtain $C_h = \sigma_W^2 E[X_I^*]\Phi_{\sigma_W}(f_h'') = \frac{1}{\sigma_W^2} E[X_I^*]\, \Phi_{\sigma_W}\big(\big(\tfrac{x^2}{3\sigma_W^2} - 1\big)\, x\, h(x)\big)$.

14 Call function. The call function $C_k(x) = (x-k)^+$ is absolutely continuous with $C_k'(x) = \mathbb{1}_{\{x > k\}}$, but has no second order derivative: the hypothesis of the theorem is not satisfied. The same corrector still works: $\big|E[(W-k)^+] - \Phi_{\sigma_W}\big((x-k)^+\big) - \tfrac{1}{3} E[X_I^*]\, k\, \phi_{\sigma_W}(k)\big| \sim O(1/n)$. Error bounds depend on the sup-norms of $f_{C_k}'$ and $x f_{C_k}'$. Ingredients of proof: write $f_{C_k}$ as a more regular function via Stein's equation; concentration inequality (Chen-Shao, 2001). The corrector $C_h$ vanishes when $k = 0$ and attains its extremal values at $k = \pm\sigma_W$.
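For a homogeneous portfolio the three quantities on this slide can be compared directly: the exact binomial value, the plain Gaussian call, and the corrected call with $E[X_I^*] = (q-p)/(2\sqrt{npq})$. A sketch; $n$, $p$, $k$ below are illustrative:

```python
import math
from statistics import NormalDist

N = NormalDist()

def call_exact(n, p, k):
    """Exact E[(W - k)+] for W = sum_i (xi_i - p)/sqrt(n p q), xi_i ~ Bern(p)."""
    q = 1.0 - p
    s = math.sqrt(n * p * q)
    return sum(max((j - n * p) / s - k, 0.0)
               * math.comb(n, j) * p ** j * q ** (n - j)
               for j in range(n + 1))

def call_gauss(k):
    """Plain Gaussian approximation: E[(Z - k)+] = phi(k) - k (1 - Phi(k))."""
    return N.pdf(k) - k * (1.0 - N.cdf(k))

def call_gauss_corrected(n, p, k):
    """Gaussian value plus the corrector (k/3) E[X_I*] phi(k), where
    E[X_I*] = (q - p) / (2 sqrt(n p q)) for this homogeneous portfolio."""
    q = 1.0 - p
    e_x_star = (q - p) / (2.0 * math.sqrt(n * p * q))
    return call_gauss(k) + (k / 3.0) * e_x_star * N.pdf(k)
```

At $n = 100$, $p = 0.1$, $k = 1$ (the maximal-correction strike of the numerical tests), the corrected value is markedly closer to the binomial one than the plain Gaussian value.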

15 Normal or Poisson? The methodology works for other distributions. Stein's method in the Poisson case (Chen, 1975): a r.v. $\Lambda$ taking non-negative integer values follows the Poisson distribution $P(\lambda)$ iff $E[\Lambda g(\Lambda)] = \lambda E[g(\Lambda + 1)] =: E[\mathcal{A}_P g(\Lambda)]$. Poisson zero bias transformation $X^*$ for $X$ with $E[X] = \lambda$: $E[Xg(X)] = E[\mathcal{A}_P g(X^*)] = \lambda E[g(X^* + 1)]$. Stein's equation: $h(x) - P_\lambda(h) = xg(x) - \mathcal{A}_P g(x)$ (4), where $P_\lambda(h) = E[h(\Lambda)]$ with $\Lambda \sim P(\lambda)$.

16 Poisson correction. The Poisson corrector of $P_{\lambda_W}(h)$ for $E[h(W)]$ is $C_h^P = \frac{\lambda_W}{2}\, P_{\lambda_W}(\Delta^2 h)\, E[X_I^* - X_I]$, where $\Delta h(x) = h(x+1) - h(x)$ and $P(I = i) = \lambda_i/\lambda_W$. Proof: combinatorial techniques for integer valued r.v. For $h = (x-k)^+$, $\Delta^2 h(x) = \mathbb{1}_{\{x = k-1\}}$, so $C_h^P = \frac{\sigma_W^2 - \lambda_W}{2(k-1)!}\, e^{-\lambda_W} \lambda_W^{k-1}$.
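The same comparison can be run in the Poisson regime (small $p$), on the unnormalized default count $S \sim \mathrm{Binomial}(n, p)$: exact value, plain Poisson($\lambda_W = np$) value, and corrected value using $C_h^P$ above. A sketch with illustrative $n$, $p$, $k$:

```python
import math

def count_call_exact(n, p, k):
    """Exact E[(S - k)+] for S ~ Binomial(n, p)."""
    q = 1.0 - p
    return sum((j - k) * math.comb(n, j) * p ** j * q ** (n - j)
               for j in range(k + 1, n + 1))

def count_call_poisson(lam, k, tail=200):
    """Poisson approximation E[(Lambda - k)+], Lambda ~ P(lam); the pmf
    is updated recursively to avoid huge factorials."""
    pmf = math.exp(-lam) * lam ** k / math.factorial(k)   # P(Lambda = k)
    total = 0.0
    for j in range(k + 1, k + tail):
        pmf *= lam / j                                    # P(Lambda = j)
        total += (j - k) * pmf
    return total

def count_call_poisson_corrected(n, p, k):
    """Poisson value plus the corrector
    C = (sigma_W^2 - lambda_W)/2 * e^{-lambda_W} lambda_W^{k-1}/(k-1)!."""
    lam, sigma2 = n * p, n * p * (1.0 - p)
    pmf_km1 = math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1)
    return count_call_poisson(lam, k) + 0.5 * (sigma2 - lam) * pmf_km1
```

For instance at $n = 100$, $p = 0.05$, $k = 8$ the corrector (which is negative, since $\sigma_W^2 < \lambda_W$) removes most of the plain Poisson error.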

17 Numerical tests: $E[(W_n - k)^+]$ as a function of $n$, with $\sigma_W = 1$ and $k = 1$ (maximal Gaussian correction), for a homogeneous portfolio. Two graphs: $p = 0.1$ and $p = 0.01$ respectively. Oscillation of the binomial curve and comparison of the different approximations.

18 Numerical tests: $E[(W_n - k)^+]$, $p = 0.1$. Legend: binomial (black), normal (magenta), corrected normal (green), Poisson (blue) and corrected Poisson (red). [Figure: the five curves as functions of $n$.]

19 Numerical tests: $E[(W_n - k)^+]$, $p = 0.01$. Legend: binomial (black), normal (magenta), corrected normal (green), Poisson (blue) and corrected Poisson (red). [Figure: the five curves as functions of $n$.]

20 Higher order corrections. The regularity of $h$ plays an important role. The second order approximation is given by $C(2, h) = \Phi_{\sigma_W}(h) + C_h + \Phi_{\sigma_W}\Big(\big(\tfrac{x^6}{18\sigma_W^8} - \tfrac{5x^4}{6\sigma_W^6} + \tfrac{5x^2}{2\sigma_W^4}\big)\big(h(x) - \Phi_{\sigma_W}(h)\big)\Big)\, E[X_I^*]^2 + \Phi_{\sigma_W}\Big(\big(\tfrac{x^4}{4\sigma_W^6} - \tfrac{3x^2}{2\sigma_W^4}\big)\, h(x)\Big)\,\big(E[(X_I^*)^2] - E[X_I^2]\big)$ (5). Since the call function is not regular enough, it is difficult to control the errors.

21 Applications to CDOs. In collaboration with David Kurtz (Calyon, London).

22 Normalization for the loss. Percentage loss: $l_T = \frac{1}{n}\sum_{i=1}^n (1-R_i)\,\mathbb{1}_{\{\tau_i \le T\}}$, with $K = 3\%, 6\%, 9\%, 12\%$. Normalization for $\xi_i = \mathbb{1}_{\{\tau_i \le T\}}$: let $X_i = \frac{\xi_i - p_i}{\sqrt{n p_i(1-p_i)}}$, where $p_i = P(\xi_i = 1)$. Then $E[X_i] = 0$ and $\mathrm{Var}[X_i] = 1/n$. Suppose $R_i = 0$ and $p_i = p$ for simplicity, and let $p(Y) = p_i(Y) = P(\tau_i \le T \mid Y)$. Then $E\big[(l_T - K)^+ \mid Y\big] = \sqrt{\tfrac{p(Y)(1-p(Y))}{n}}\; E\big[(W - k(n, p(Y)))^+ \mid Y\big]$, where $W = \sum_{i=1}^n X_i$ and $k(n, p) = (K - p)\sqrt{\tfrac{n}{p(1-p)}}$. Constraint in the Poisson case: $R_i$ deterministic and proportional.
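Putting the pieces together gives a semi-analytic pricer: evaluate the corrected-Gaussian conditional call at $k(n, p(Y))$ and integrate over the common factor $Y$. A sketch for the homogeneous, $R_i = 0$ case; the average default probability, correlation, strikes and quadrature size are illustrative:

```python
import math
from statistics import NormalDist

N = NormalDist()

def corrected_call(n, p, k):
    """Corrected Gaussian approximation of E[(W - k)+] for the normalized
    loss W, with E[X_I*] = (q - p)/(2 sqrt(n p q))."""
    q = 1.0 - p
    plain = N.pdf(k) - k * (1.0 - N.cdf(k))
    corr = (k / 3.0) * (q - p) / (2.0 * math.sqrt(n * p * q)) * N.pdf(k)
    return plain + corr

def tranche_price(K, n, p_bar, rho, n_quad=200):
    """E[(l_T - K)+]: average the conditional formula
    sqrt(p(Y)(1-p(Y))/n) * E[(W - k(n, p(Y)))+ | Y]
    over Y by midpoint quadrature on the quantile scale."""
    c = N.inv_cdf(p_bar)                 # default threshold N^{-1}(p_bar)
    beta = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for j in range(n_quad):
        y = N.inv_cdf((j + 0.5) / n_quad)
        p_y = N.cdf((c - rho * y) / beta)            # p(Y) given Y = y
        k = (K - p_y) * math.sqrt(n / (p_y * (1.0 - p_y)))
        total += math.sqrt(p_y * (1.0 - p_y) / n) * corrected_call(n, p_y, k)
    return total / n_quad
```

Two consistency checks: at $K = 0$ the price must reduce to the expected loss $E[l_T] = \bar p$, and prices must decrease in the attachment point $K$.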

23 Numerical tests: domain of validity. Conditional error of $E[(l_T - K)^+]$ as a function of $K/p$, with $n = 100$. Inhomogeneous portfolio with log-normal $p_i$ of mean $p$.

24 Numerical tests: domain of validity. Legend: corrected Gaussian error (blue), corrected Poisson error (red); $np = 5$ and $np = 20$. [Figure: error curves as functions of $K/p$ for the two cases.] Domain of validity: $np \approx 15$. Maximal error when $k$ is near the average loss; the two approximations overlap in an intermediate area.

25 Comparison with the saddle-point method. Legend: Gaussian error (red), saddle-point first order error (blue dotted), saddle-point second order error (blue); $np = 5$ and $np = 20$. [Figure: error curves as functions of $K/p$ for the two cases.]

26 Numerical tests: stochastic $R_i$. Conditional error of $E[(l_T - K)^+]$ as a function of $K/p$ for the Gaussian approximation with stochastic $R_i$. $R_i$: independent Beta distributions with expectation 50% and standard deviation 26% (Moody's). The corrector depends on the third order moment of $R_i$. Confidence interval at 95% from $10^6$ Monte Carlo simulations.

27 Numerical tests: stochastic $R_i$. The Gaussian approximation is better when $p$ is large, as in the standard case; $np = 5$ and $np = 20$. [Figure: Gaussian error with Monte Carlo confidence bounds (lower MC bound, Gauss error, upper MC bound) as functions of $K/p$.]

28 Real-life CDO pricing. Parametric correlation model $\rho(Y)$ to match the smile. The method is adapted to all conditionally independent models, not only the normal factor model. Numerical results compared to the recursive method (Hull-White). Calculation time largely reduced: about 1:200 for one price.

29 Real-life CDO pricing. Error of prices in bp ($10^{-4}$) for the corrected Gauss-Poisson approximation with realistic local correlation. [Table: break-even spreads and errors for expected losses of 4% and 5%; numerical values not recovered from the slide.]

30 Concluding remarks and perspectives. Tests for VaR and sensitivities remain satisfactory. Perspective: remove the deterministic recovery rate condition in the Poisson case.

Companion paper: Gauss and Poisson Approximation: Applications to CDOs Tranche Pricing, September 22, 2008. Nicole El Karoui (CMAP, Ecole Polytechnique, Palaiseau 91128 Cedex, France, elkaroui@cmapx.polytechnique.fr) and Ying Jiao.


Probability Models. 4. What is the definition of the expectation of a discrete random variable? 1 Probability Models The list of questions below is provided in order to help you to prepare for the test and exam. It reflects only the theoretical part of the course. You should expect the questions

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Normal Approximation for Hierarchical Structures

Normal Approximation for Hierarchical Structures Normal Approximation for Hierarchical Structures Larry Goldstein University of Southern California July 1, 2004 Abstract Given F : [a, b] k [a, b] and a non-constant X 0 with P (X 0 [a, b]) = 1, define

More information

Modelling Dependent Credit Risks

Modelling Dependent Credit Risks Modelling Dependent Credit Risks Filip Lindskog, RiskLab, ETH Zürich 30 November 2000 Home page:http://www.math.ethz.ch/ lindskog E-mail:lindskog@math.ethz.ch RiskLab:http://www.risklab.ch Modelling Dependent

More information

Chp 4. Expectation and Variance

Chp 4. Expectation and Variance Chp 4. Expectation and Variance 1 Expectation In this chapter, we will introduce two objectives to directly reflect the properties of a random variable or vector, which are the Expectation and Variance.

More information

Calculating credit risk capital charges with the one-factor model

Calculating credit risk capital charges with the one-factor model Calculating credit risk capital charges with the one-factor model Susanne Emmer Dirk Tasche September 15, 2003 Abstract Even in the simple Vasicek one-factor credit portfolio model, the exact contributions

More information

1: PROBABILITY REVIEW

1: PROBABILITY REVIEW 1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following

More information

Statistical Modeling and Cliodynamics

Statistical Modeling and Cliodynamics Statistical Modeling and Cliodynamics Dave Darmon and Lucia Simonelli M3C Tutorial 3 October 17, 2013 Overview 1 Why probability and statistics? 2 Intro to Probability and Statistics 3 Cliodynamics; or,

More information

Probability and Statistics Notes

Probability and Statistics Notes Probability and Statistics Notes Chapter Five Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Five Notes Spring 2011 1 / 37 Outline 1

More information

Chapter 2. Discrete Distributions

Chapter 2. Discrete Distributions Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation

More information

Inference for High Dimensional Robust Regression

Inference for High Dimensional Robust Regression Department of Statistics UC Berkeley Stanford-Berkeley Joint Colloquium, 2015 Table of Contents 1 Background 2 Main Results 3 OLS: A Motivating Example Table of Contents 1 Background 2 Main Results 3 OLS:

More information

Asymptotic distribution of the sample average value-at-risk

Asymptotic distribution of the sample average value-at-risk Asymptotic distribution of the sample average value-at-risk Stoyan V. Stoyanov Svetlozar T. Rachev September 3, 7 Abstract In this paper, we prove a result for the asymptotic distribution of the sample

More information

A Probability Primer. A random walk down a probabilistic path leading to some stochastic thoughts on chance events and uncertain outcomes.

A Probability Primer. A random walk down a probabilistic path leading to some stochastic thoughts on chance events and uncertain outcomes. A Probability Primer A random walk down a probabilistic path leading to some stochastic thoughts on chance events and uncertain outcomes. Are you holding all the cards?? Random Events A random event, E,

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

The Choice of Representative Volumes for Random Materials

The Choice of Representative Volumes for Random Materials The Choice of Representative Volumes for Random Materials Julian Fischer IST Austria 13.4.2018 Effective properties of random materials Material with random microstructure Effective large-scale description?

More information

On Approximating a Generalized Binomial by Binomial and Poisson Distributions

On Approximating a Generalized Binomial by Binomial and Poisson Distributions International Journal of Statistics and Systems. ISSN 0973-675 Volume 3, Number (008), pp. 3 4 Research India Publications http://www.ripublication.com/ijss.htm On Approximating a Generalized inomial by

More information

EE4601 Communication Systems

EE4601 Communication Systems EE4601 Communication Systems Week 2 Review of Probability, Important Distributions 0 c 2011, Georgia Institute of Technology (lect2 1) Conditional Probability Consider a sample space that consists of two

More information

Topic 3: The Expectation of a Random Variable

Topic 3: The Expectation of a Random Variable Topic 3: The Expectation of a Random Variable Course 003, 2017 Page 0 Expectation of a discrete random variable Definition (Expectation of a discrete r.v.): The expected value (also called the expectation

More information

Exponential Families

Exponential Families Exponential Families David M. Blei 1 Introduction We discuss the exponential family, a very flexible family of distributions. Most distributions that you have heard of are in the exponential family. Bernoulli,

More information

Discrete Probability Refresher

Discrete Probability Refresher ECE 1502 Information Theory Discrete Probability Refresher F. R. Kschischang Dept. of Electrical and Computer Engineering University of Toronto January 13, 1999 revised January 11, 2006 Probability theory

More information

Ruin probabilities in multivariate risk models with periodic common shock

Ruin probabilities in multivariate risk models with periodic common shock Ruin probabilities in multivariate risk models with periodic common shock January 31, 2014 3 rd Workshop on Insurance Mathematics Universite Laval, Quebec, January 31, 2014 Ruin probabilities in multivariate

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

Notes for Math 324, Part 19

Notes for Math 324, Part 19 48 Notes for Math 324, Part 9 Chapter 9 Multivariate distributions, covariance Often, we need to consider several random variables at the same time. We have a sample space S and r.v. s X, Y,..., which

More information

Bounds of the normal approximation to random-sum Wilcoxon statistics

Bounds of the normal approximation to random-sum Wilcoxon statistics R ESEARCH ARTICLE doi: 0.306/scienceasia53-874.04.40.8 ScienceAsia 40 04: 8 9 Bounds of the normal approximation to random-sum Wilcoxon statistics Mongkhon Tuntapthai Nattakarn Chaidee Department of Mathematics

More information

Modern Portfolio Theory with Homogeneous Risk Measures

Modern Portfolio Theory with Homogeneous Risk Measures Modern Portfolio Theory with Homogeneous Risk Measures Dirk Tasche Zentrum Mathematik Technische Universität München http://www.ma.tum.de/stat/ Rotterdam February 8, 2001 Abstract The Modern Portfolio

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

Stochastic Models of Manufacturing Systems

Stochastic Models of Manufacturing Systems Stochastic Models of Manufacturing Systems Ivo Adan Organization 2/47 7 lectures (lecture of May 12 is canceled) Studyguide available (with notes, slides, assignments, references), see http://www.win.tue.nl/

More information

Statistics 1B. Statistics 1B 1 (1 1)

Statistics 1B. Statistics 1B 1 (1 1) 0. Statistics 1B Statistics 1B 1 (1 1) 0. Lecture 1. Introduction and probability review Lecture 1. Introduction and probability review 2 (1 1) 1. Introduction and probability review 1.1. What is Statistics?

More information