Elements of Financial Engineering Course
NSD-Baruch MFE Summer Camp 2016
Lecture 6
Tai-Ho Wang

Agenda
- Methods of simulation: inverse transformation method, acceptance-rejection method
- Variance reduction techniques
- Copulas and methods for copula simulation
- Financial applications: multiasset option pricing by simulation; portfolio VaR and CVaR calculation

Inverse transformation method

Theorem
Let $X$ be a random variable and $Q_X$ be its quantile function. Then $X \sim Q_X(U)$, where $U \sim U[0,1]$.

Proof
Homework.

Note
A pseudo code is like
- Generate sample points $u_1, \dots, u_n$ from the uniform distribution $U[0,1]$.
- For each $i = 1, \dots, n$, let $x_i = Q_X(u_i)$.
- Return $x_1, \dots, x_n$.

Example
Let $X \sim \mathrm{Exp}(\lambda)$. Note that the cdf is given by, for $x \ge 0$,
$$F_X(x) = P[X \le x] = \int_0^x \lambda e^{-\lambda \xi}\,d\xi = 1 - e^{-\lambda x}.$$
Hence, for $p \in (0,1)$,
$$Q_X(p) = F_X^{-1}(p) = -\frac{\ln(1-p)}{\lambda}.$$
Therefore, $X \sim -\frac{\ln(1-U)}{\lambda}$, where $U \sim U[0,1]$.
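The derivation can be checked quickly in code. The following Python sketch (Python is used only for these illustrative snippets; the lecture's own code is in R, and all names below are ours) pushes uniform samples through the exponential quantile function and compares the sample mean with $1/\lambda$:

```python
import math
import random

random.seed(42)

lam = 3.0  # rate parameter, as in the example above

def exp_quantile(p, lam):
    # quantile function of Exp(lam): Q(p) = -ln(1 - p) / lam
    return -math.log(1.0 - p) / lam

# inverse transformation: push U[0,1] samples through the quantile function
n = 100_000
xs = [exp_quantile(random.random(), lam) for _ in range(n)]

# sanity check: Exp(lam) has mean 1/lam
mean = sum(xs) / n
print(round(mean, 3))  # close to 1/3
```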
In [8]:
NumSim <- 1e4
# generate sample points from U[0,1]
U <- runif(NumSim)
# inverse transform
lambda <- 3
Qf <- function(p) -1/lambda*log(1-p)
X <- Qf(U)
# plot
hist(X, breaks=100, prob=T)
curve(dexp(x, rate=lambda), from=0, col='blue', add=T)

Empirical cumulative distribution function
In practice, where does the quantile function come from? We use the empirical quantile function, defined as the inverse of the empirical cdf.

Definition
Let $X_j$, $j = 1, \dots, J$, be a random sample from a common distribution with cdf $F$. The nondecreasing, càdlàg (why?) function $\tilde{F}_J$ defined by
$$\tilde{F}_J(x) = \frac{1}{J} \sum_{j=1}^{J} \mathbf{1}_{[X_j, \infty)}(x)$$
is called the empirical cdf of $F$ from the sample $X_j$, $j = 1, \dots, J$.

Remark
- We can regard the empirical cdf from the sample $X_1, \dots, X_J$ as the cdf of a random variable $Y$ uniformly distributed among the sample points, i.e., $P[Y = X_j] = \frac{1}{J}$.
- The empirical cdf is random; the randomness comes from the samples $X_j(\omega)$. Therefore, an empirical cdf actually has two arguments: $x$ and $\omega$.

Glivenko-Cantelli lemma
The Glivenko-Cantelli lemma asserts that, as the total number of samples $J$ approaches infinity, the empirical cdf $\tilde{F}_J$ converges almost surely (in $\omega$) and uniformly (in $x$) to the behind-the-scene cdf $F$ which generates the samples.

Theorem
Assume that $X_j$, $j = 1, \dots, J$, are independent random samples drawn from the common distribution with cdf $F$, and let $\tilde{F}_J$ be the empirical cdf from the random samples $X_j$. Then
$$\lim_{J \to \infty} \sup_{x \in \mathbb{R}} \left|\tilde{F}_J(x) - F(x)\right| = 0$$
almost surely.

Remark
- The pointwise convergence version of the Glivenko-Cantelli lemma can be obtained by applying the strong law of large numbers: for any fixed $x \in \mathbb{R}$, $\lim_{J \to \infty} |\tilde{F}_J(x) - F(x)| = 0$ almost surely.
- The rate of convergence in the Glivenko-Cantelli lemma is $\frac{1}{\sqrt{J}}$ if the random variables have finite third moment, according to the Berry-Esseen theorem.
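The uniform convergence asserted by the lemma is easy to check numerically. A minimal Python sketch (illustrative only; names are ours) with $U[0,1]$ samples, for which $F(x) = x$:

```python
import random

random.seed(7)

def ecdf_sup_dist(samples):
    # sup_x |F_J(x) - x| for the empirical cdf of U[0,1] samples;
    # the sup is attained at a jump of the empirical cdf, where the ecdf
    # equals j/J just left of the (j+1)-th order statistic and (j+1)/J at it
    xs = sorted(samples)
    J = len(xs)
    d = 0.0
    for j, x in enumerate(xs):
        d = max(d, abs((j + 1) / J - x), abs(j / J - x))
    return d

# the sup distance shrinks as the sample size J grows
dists = {}
for J in (100, 10_000):
    u = [random.random() for _ in range(J)]
    dists[J] = ecdf_sup_dist(u)
    print(J, round(dists[J], 4))
```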
In [9]:
# the code illustrates the convergence of the empirical cdf in the Glivenko-Cantelli lemma
NumSim <- 100
df <- 3
X <- rt(NumSim, df=df)
plot.ecdf(X, xlim=c(-3,3))
curve(pt(x, df=df), col='blue', add=T)
hist(X, breaks=20, prob=T)
curve(dt(x, df=df), col='blue', add=T)
Acceptance-rejection method
Let $X$ and $Y$ be random variables with pdfs $f_X$ and $f_Y$ respectively. We have a scheme to simulate $Y$ and wish to simulate $X$ through the samples from $Y$.

Assumption: there exists a constant $c$ such that $f_X(x) \le c\, f_Y(x)$ for every $x$, i.e., $f_X$ is dominated by $c f_Y$.

The procedure goes as follows.
- Generate samples $y_1, \dots, y_n$ from $Y$ and samples $u_1, \dots, u_n$ from $U[0,1]$ independently.
- For $i = 1, \dots, n$, accept $y_i$ if it satisfies $u_i \le \frac{f_X(y_i)}{c\, f_Y(y_i)}$; otherwise, reject $y_i$.
- Return the accepted $y_i$'s.
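The steps above can be sketched generically in Python (illustrative only; the target Beta(2,2) and uniform proposal are our choices, not the lecture's example):

```python
import random

random.seed(1)

def accept_reject(f_x, sample_y, f_y, c, n):
    # generate n proposals from Y; keep y with probability f_x(y) / (c f_y(y))
    accepted = []
    for _ in range(n):
        y = sample_y()
        u = random.random()
        if u <= f_x(y) / (c * f_y(y)):
            accepted.append(y)
    return accepted

# target X ~ Beta(2,2) with pdf 6x(1-x) on [0,1]; proposal Y ~ U[0,1] with pdf 1
f_x = lambda x: 6.0 * x * (1.0 - x)
f_y = lambda x: 1.0
c = 1.5  # sup of f_x / f_y is 6 * (1/2) * (1/2) = 3/2, attained at x = 1/2
n = 100_000
xs = accept_reject(f_x, random.random, f_y, c, n)

# the acceptance rate is 1/c on average, and Beta(2,2) has mean 1/2
print(round(len(xs) / n, 3), round(sum(xs) / len(xs), 3))
```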
Example
As an example, we use the method of acceptance-rejection to simulate the normal distribution by the double exponential distribution. Let $X \sim N(0,1)$ and $Y \sim DE(1)$. The pdf of $X$ is
$$f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}},$$
whereas the pdf of $Y$ is
$$f_Y(y) = \frac{1}{2}\, e^{-|y|}.$$
We determine the acceptance threshold $c$ as
$$\frac{f_X(x)}{f_Y(x)} = \frac{\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}}{\frac{1}{2} e^{-|x|}} = \sqrt{\frac{2}{\pi}}\, e^{|x| - \frac{x^2}{2}} \le \sqrt{\frac{2}{\pi}}\, e^{\frac{1}{2}} = \sqrt{\frac{2e}{\pi}} =: c,$$
since $|x| - \frac{x^2}{2} \le \frac{1}{2}$, with equality at $|x| = 1$.
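The bound can be double-checked numerically: on a grid, the ratio $f_X/f_Y$ never exceeds $c = \sqrt{2e/\pi}$ and attains it at $|x| = 1$ (a Python sketch, illustrative only):

```python
import math

c = math.sqrt(2.0 * math.e / math.pi)

def ratio(x):
    # f_X(x) / f_Y(x) for standard normal f_X and standard double exponential f_Y
    f_x = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    f_y = 0.5 * math.exp(-abs(x))
    return f_x / f_y

# the ratio sqrt(2/pi) * exp(|x| - x^2/2) peaks at |x| = 1, where it equals c
grid = [i / 1000.0 for i in range(-5000, 5001)]
max_ratio = max(ratio(x) for x in grid)
print(round(max_ratio, 6), round(c, 6))
```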
In [23]:
# the code demonstrates the acceptance-rejection method using Laplace and Gaussian
# X ~ normal
# Y ~ double exponential
NumSim <- 1e4
lambda <- 1
# Simulate samples from Y and U independently
U <- runif(NumSim)
Y <- runif(NumSim)
Qf <- function(p) ifelse(p <= 0.5, 1/lambda*log(2*p), -1/lambda*log(2-2*p))
Y <- Qf(Y)
# plot histogram of Y
fY <- function(x) lambda/2*exp(-lambda*abs(x))
hist(Y, breaks=50, prob=T)
curve(fY, add=T, col='blue')
# set the acceptance threshold c
c <- sqrt(2*exp(1)/pi)
# check the acceptance-rejection criterion
fX <- function(x) dnorm(x)
AccRej <- (U <= fX(Y)/fY(Y)/c)
# fetch the accepted Y's as sample points for X and plot
X <- Y[AccRej == T]
hist(X, breaks=50, prob=T)
curve(dnorm, add=T, col='blue')
In [24]:
length(X)
Out[24]: 7570

Why does it work? Note that, since the sample points from $Y$ are accepted only when the inequality is satisfied, we can rewrite the acceptance-rejection criterion in terms of conditional probability as
$$P\left[Y \le x \,\middle|\, U \le \frac{f_X(Y)}{c f_Y(Y)}\right] = F_X(x).$$
We calculate the numerator and denominator as follows.
$$\begin{aligned}
\text{denominator} &= P\left[U \le \frac{f_X(Y)}{c f_Y(Y)}\right] \\
&= \int P\left[U \le \frac{f_X(y)}{c f_Y(y)} \,\middle|\, Y = y\right] f_Y(y)\,dy \\
&= \int P\left[U \le \frac{f_X(y)}{c f_Y(y)}\right] f_Y(y)\,dy \quad (\text{$Y$ and $U$ are independent}) \\
&= \int \frac{f_X(y)}{c f_Y(y)}\, f_Y(y)\,dy \quad (F_U(u) = u) \\
&= \frac{1}{c} \int f_X(y)\,dy = \frac{1}{c}.
\end{aligned}$$
$$\begin{aligned}
\text{numerator} &= P\left[Y \le x,\; U \le \frac{f_X(Y)}{c f_Y(Y)}\right] \\
&= \int_{-\infty}^{x} P\left[U \le \frac{f_X(y)}{c f_Y(y)} \,\middle|\, Y = y\right] f_Y(y)\,dy \\
&= \int_{-\infty}^{x} \frac{f_X(y)}{c f_Y(y)}\, f_Y(y)\,dy = \frac{F_X(x)}{c}.
\end{aligned}$$
It follows that
$$P\left[Y \le x \,\middle|\, U \le \frac{f_X(Y)}{c f_Y(Y)}\right] = \frac{F_X(x)/c}{1/c} = F_X(x).$$

Variance reduction
Antithetic variates
For each simulated sample point $\omega$, we immediately add its reflected point $1 - \omega$.

Quote from Glasserman:
"The method of antithetic variates can take various forms; the most broadly applicable is based on the observation that if $U$ is uniformly distributed over $[0,1]$, then $1-U$ is too. Hence, if we generate a path using as inputs $U_1, \dots, U_n$, we can generate a second path using $1-U_1, \dots, 1-U_n$ without changing the law of the simulated process. The variables $U_i$ and $1-U_i$ form an antithetic pair in the sense that a large value of one is accompanied by a small value of the other. This suggests that an unusually large or small output computed from the first path may be balanced by the value computed from the antithetic path, resulting in a reduction in variance."

Variance reduction
Importance sampling
The goal is to estimate $E[g(X)]$ for some function $g$ so that $g(X)$ is integrable. We rewrite the expectation as
$$E[g(X)] = \int g(x)\,p(x)\,dx = \int g(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx = E\left[g(Y)\,\frac{p(Y)}{q(Y)}\right],$$
where $p$ and $q$ are pdfs for the random variables $X$ and $Y$ respectively. Therefore, we can use $g(Y)\frac{p(Y)}{q(Y)}$ as an unbiased (why?) estimator for $E[g(X)]$, i.e., sample an iid sequence $Y_1, \dots, Y_N$ of $Y$ then calculate the sample mean
$$\frac{1}{N} \sum_{n=1}^{N} g(Y_n)\,\frac{p(Y_n)}{q(Y_n)}.$$
In other words, instead of sampling an iid sequence $X_1, \dots, X_N$ directly from the distribution of $X$ and calculating the sample mean $\frac{1}{N}\sum_{n=1}^{N} g(X_n)$, we sample an iid sequence $Y_1, \dots, Y_N$ from the distribution of $Y$, which we have the flexibility to choose, then calculate the sample mean $\frac{1}{N}\sum_{n=1}^{N} g(Y_n)\frac{p(Y_n)}{q(Y_n)}$.

So the question is: why would this help reduce the variance? Let's start with calculating their variances. From the Cauchy-Schwarz inequality we have
$$\left(E|g(X)|\right)^2 = \left(\int |g(x)|\,\frac{p(x)}{q(x)}\,q(x)\,dx\right)^2 \le \int g^2(x)\,\frac{p^2(x)}{q^2(x)}\,q(x)\,dx = E\left[g^2(Y)\,\frac{p^2(Y)}{q^2(Y)}\right].$$
Therefore, we obtain a lower bound for the variance of the random variable $g(Y)\frac{p(Y)}{q(Y)}$:
$$\mathrm{Var}\left[g(Y)\,\frac{p(Y)}{q(Y)}\right] = E\left[g^2(Y)\,\frac{p^2(Y)}{q^2(Y)}\right] - \left(E[g(X)]\right)^2 \ge \left(E|g(X)|\right)^2 - \left(E[g(X)]\right)^2.$$
On the other hand, again by the Cauchy-Schwarz inequality, we obtain the same lower bound for the variance of $g(X)$:
$$\mathrm{Var}[g(X)] = E\left[g^2(X)\right] - \left(E[g(X)]\right)^2 \ge \left(E|g(X)|\right)^2 - \left(E[g(X)]\right)^2.$$
At first glance, there seems to be no advantage in either case because both variances are greater than or equal to the same right hand side. We can't do anything to the inequality
$$\mathrm{Var}[g(X)] \ge \left(E|g(X)|\right)^2 - \left(E[g(X)]\right)^2;$$
there is nothing to play with. However, we can try to make the inequality
$$\mathrm{Var}\left[g(Y)\,\frac{p(Y)}{q(Y)}\right] \ge \left(E|g(X)|\right)^2 - \left(E[g(X)]\right)^2$$
an equality by smartly choosing $q$! In fact, simply pick
$$q(y) = \frac{|g(y)|\,p(y)}{E|g(X)|}.$$
Moreover, if $g$ doesn't change sign, i.e., $g(x) \ge 0$ or $g(x) \le 0$ for all $x$, then the right hand side of the above inequality is zero, which means with such a choice of $q$, the random variable $g(Y)\frac{p(Y)}{q(Y)}$ has zero variance! Of course, this is too good to be true because in order to be in such an ideal situation, we need to know $p$ and $E[g(X)]$, which are exactly the answers we are searching for.

From marginal to joint
For notational simplicity, we shall denote by $I = [0,1] = \{x : 0 \le x \le 1\}$ the unit interval and by $I^2 = \{(x,y) : 0 \le x, y \le 1\}$ the unit square hereafter.

Let $(X, Y)$ be a pair of random variables which are (marginally) uniformly distributed on $I$, i.e., $X, Y \sim U[0,1]$. How much do we know about the joint distribution $F_{XY}(x,y)$ of $(X,Y)$? Basically nothing, since we have no information on the "linkage" between $X$ and $Y$. There is no hope to reconstruct $F_{XY}(x,y)$ because there are infinitely many possibilities. It is in fact an ill-posed problem.

However, the very least we have is the following.
- Since the random vector $(X,Y)$ is supported in the unit square, the relevant values of $F_{XY}$ are within the unit square $I^2$.
- The formula
$$F_{XY}(x,y) = P[X \le x, Y \le y] = \begin{cases} 1, & \text{if } x \ge 1,\ y \ge 1; \\ y, & \text{if } x \ge 1,\ 0 \le y \le 1; \\ x, & \text{if } 0 \le x \le 1,\ y \ge 1; \\ 0, & \text{if } x < 0 \text{ or } y < 0 \end{cases}$$
simply tells what the values of $F_{XY}$ on the boundaries of $I^2$ should be. In other words, there is no freedom in picking the values of $F_{XY}$ on the boundary of the unit square.
- For values of $F_{XY}$ inside $I^2$, we note that $F_{XY}$ is 2-increasing, i.e., for all $x_1 \le x_2$ and $0 \le y_1 \le y_2$, the inequality
$$F_{XY}(x_2, y_2) - F_{XY}(x_2, y_1) - F_{XY}(x_1, y_2) + F_{XY}(x_1, y_1) \ge 0$$
holds, since
$$F_{XY}(x_2, y_2) - F_{XY}(x_2, y_1) - F_{XY}(x_1, y_2) + F_{XY}(x_1, y_1) = P[x_1 \le X \le x_2,\ y_1 \le Y \le y_2] \ge 0.$$

Copula in two dimensions
In fact, any bivariate function defined on $I^2$ satisfying the aforementioned properties defines a copula.
Definition
A bivariate function $C : I^2 \to \mathbb{R}$ is called 2-increasing if for every non-empty rectangle $[u_1, u_2] \times [v_1, v_2] \subset I^2$,
$$C(u_2, v_2) - C(u_2, v_1) - C(u_1, v_2) + C(u_1, v_1) \ge 0.$$
We now give a formal definition of copula.

Definition
A bivariate function $C : I^2 \to I$ is called a copula if it satisfies
- $C(0, u) = C(u, 0) = 0$ for every $u \in I$;
- $C(u, 1) = u$ and $C(1, v) = v$ for every $u, v \in I$;
- $C$ is 2-increasing in $I^2$.

Remark
In other words, a copula is simply the joint distribution function of a random vector $(U, V)$ with $U$ and $V$ being marginally uniformly distributed in the unit interval $I$.

Recall that, if $X$ and $Y$ are continuous random variables with quantile functions $Q_X$ and $Q_Y$ respectively, then $X \sim Q_X(U)$ and $Y \sim Q_Y(V)$, where $U$ and $V$ are uniformly distributed in $I$. Therefore, letting $F_{XY}$ denote the joint distribution function of $X$ and $Y$, the function $C$ given by
$$C(u, v) = F_{XY}(Q_X(u), Q_Y(v))$$
defines a copula. (Check the definition.) In other words, we can "extract" the copula from a joint distribution.

Natural questions to ask are: a) is such a copula unique? b) are all joint distributions given by copulas? Answers to the questions are given by Sklar's theorem.

Sklar's theorem
Sklar's theorem in a sense means the decomposition of the joint distribution of a random vector into its marginal distributions combined with a (margin-independent) copula function. In other words, one component comes purely from the marginal distributions, which statistically is more accessible to infer à la Glivenko-Cantelli; and the other component (the copula) is unitized so that it accounts solely for "linking" the margins.

Theorem
- Let $F_1(x)$ and $F_2(y)$ be two one-dimensional cdfs. If $C(v, z)$ is any copula, then $C(F_1(x), F_2(y))$ is a 2-dimensional distribution function whose marginal distribution functions are $F_1(x)$ and $F_2(y)$.
- If $F(x, y)$ is a 2-dimensional distribution function with marginals $F_1$ and $F_2$, then there exists a copula $C(v, z)$ such that $F(x, y) = C(F_1(x), F_2(y))$. If $F_1(x)$ and $F_2(y)$ are continuous, then $C$ is unique.
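The first half of the theorem can be illustrated numerically: plug arbitrary margins into a copula and recover each margin in the limit. A Python sketch (illustrative only; the Clayton copula $C(u,v) = (u^{-\theta} + v^{-\theta} - 1)^{-1/\theta}$ and the exponential margins are our choices):

```python
import math

theta = 2.0  # Clayton copula parameter (our choice)

def C(u, v):
    # Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def F1(x):  # Exp(1) cdf, an arbitrary first margin
    return 1.0 - math.exp(-x)

def F2(y):  # Exp(2) cdf, an arbitrary second margin
    return 1.0 - math.exp(-2.0 * y)

def F(x, y):
    # by Sklar's theorem this is a joint cdf with margins F1 and F2
    return C(F1(x), F2(y))

# sending one argument to infinity recovers the other margin
big = 50.0  # stands in for +infinity
for x in (0.5, 1.0, 2.0):
    print(round(F(x, big), 6), round(F1(x), 6))
```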
Thus, we can either
- link two marginal distributions by a given copula: $F(x, y) = C(F_1(x), F_2(y))$, or
- extract the copula from a given joint distribution.

Example
We demonstrate how to extract the copula, referred to as the normal or Gaussian copula, from a joint normal distribution. A pseudo code will be like
- Simulate joint normal sample points $(x_1, y_1), \dots, (x_n, y_n)$.
- For each $i = 1, \dots, n$, let $u_i = F_X(x_i)$ and $v_i = F_Y(y_i)$.
- Return the joint sample points $(u_1, v_1), \dots, (u_n, v_n)$.

Note
It doesn't matter whether the marginals $X$ and $Y$ are standard or not, because $F_X(X)$ and $F_Y(Y)$ are always uniformly distributed in $[0,1]$.

In [25]:
# the code demonstrates how to simulate the Gaussian copula
NumSim <- 1e4
rho <- 0.7
# First simulate independent normals
X <- rnorm(NumSim)
Y <- rnorm(NumSim)
# correlate the independent normals
X <- X - mean(X)
X <- X/sd(X)
Y <- rho*X + sqrt(1-rho^2)*Y
Y <- Y - mean(Y)
Y <- Y/sd(Y)

In [26]:
data.frame(mean(X), sd(X), mean(Y), sd(Y), cor(X,Y))
Out[26]: (table with columns mean.X., sd.X., mean.Y., sd.Y., cor.X..Y.; the exact values were lost in transcription: the means are numerically zero and the standard deviations are 1 by construction, and the sample correlation is close to rho)
In [27]:
# copula sample points
U <- pnorm(X)
V <- pnorm(Y)
# plot
par(mfrow=c(2,2))
plot(U, V, col='blue')
Vh <- hist(V, breaks=50, plot=F)
barplot(Vh$density, horiz=T, space=0, main='Histogram of V')
abline(v=1, col='blue')
hist(U, breaks=50, prob=T)
abline(h=1, col='blue')

Now we can use the simulated Gaussian copula sample points to link two marginal distributions, say, a $t$ distribution with 3 degrees of freedom and an exponential distribution with parameter 2.
In [28]:
X <- qt(U, df=3)
Y <- qexp(V, rate=2)
# plot
par(mfrow=c(2,2))
plot(X, Y, col='blue')
Yh <- hist(Y, breaks=50, plot=F)
barplot(Yh$density, space=0, horiz=T, main='Histogram of Y')
#curve(dexp(x, rate=2), col='blue', add=T)
hist(X, breaks=50, prob=T)
curve(dt(x, df=3), col='blue', add=T)

Archimedean copulas

Definition
An Archimedean copula is a copula of the form
$$C_\varphi(u, v) = \varphi^{-1}\left(\varphi(u) + \varphi(v)\right)$$
for some convex, continuous, decreasing function $\varphi$ defined on $(0, 1]$ with $\varphi(1) = 0$. The function $\varphi$ is called the generator of $C_\varphi$. (Exercise: check that such a defined function $C_\varphi$ is a copula.)

Commonly used Archimedean copulas
Commonly used Archimedean copulas and their generators include

- Ali-Mikhail-Haq:
$$\varphi(t) = \log\frac{1 - \theta(1-t)}{t}, \quad \theta \in [-1, 1), \qquad C_\varphi(u, v) = \frac{uv}{1 - \theta(1-u)(1-v)}$$
- Clayton:
$$\varphi(t) = \frac{t^{-\theta} - 1}{\theta}, \quad \theta \in [-1, \infty) \setminus \{0\}, \qquad C_\varphi(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta}$$
- Frank:
$$\varphi(t) = \log\frac{e^{-\theta} - 1}{e^{-\theta t} - 1}, \quad \theta \in \mathbb{R} \setminus \{0\}, \qquad C_\varphi(u, v) = -\frac{1}{\theta} \log\left[1 + \frac{(e^{-\theta u} - 1)(e^{-\theta v} - 1)}{e^{-\theta} - 1}\right]$$
- Gumbel:
$$\varphi(t) = (-\log t)^\theta, \quad \theta \ge 1, \qquad C_\varphi(u, v) = e^{-\left[(-\log u)^\theta + (-\log v)^\theta\right]^{1/\theta}}$$
Note that $\theta = 1$ corresponds to the product copula.
- Joe:
$$\varphi(t) = -\log\left(1 - (1-t)^\theta\right), \quad \theta \in [1, \infty), \qquad C_\varphi(u, v) = 1 - \left[(1-u)^\theta + (1-v)^\theta - (1-u)^\theta (1-v)^\theta\right]^{1/\theta}$$

Simulation of Archimedean copulas
Write $\tilde{\varphi} = \varphi^{-1}$ for the inverse of the generator. A pseudo code will be like
- Simulate the random variable $V$ whose Laplace transform (i.e., mgf evaluated at $-t$) is $\tilde{\varphi}$: $E[e^{-tV}] = \tilde{\varphi}(t)$.
- Simulate iid random variables $U_i$, for $i = 1, \dots, n$, whose common conditional cdf (conditioned on $V = v$) is $e^{-v\varphi(u)}$, by the method of inverse transformation.
- Return the samples $U_1, \dots, U_n$.

In practice, the second step amounts to drawing independent uniforms $W_i$ and setting $U_i = \tilde{\varphi}(-\log(W_i)/v)$, since solving $e^{-v\varphi(u)} = w$ for $u$ gives $u = \tilde{\varphi}(-\log(w)/v)$.

Proof
$$\begin{aligned}
C(u_1, \dots, u_n) &= P[U_1 \le u_1, \dots, U_n \le u_n] \\
&= \int P[U_1 \le u_1, \dots, U_n \le u_n \mid V = v]\, dF_V(v) \\
&= \int \prod_{i=1}^{n} P[U_i \le u_i \mid V = v]\, dF_V(v) \quad (\text{the $U_i$'s are conditionally independent}) \\
&= \int \prod_{i=1}^{n} e^{-v\varphi(u_i)}\, dF_V(v) \quad (\text{$U_i$ given $V = v$ has cdf } e^{-v\varphi(u_i)}) \\
&= \int e^{-v \sum_{i=1}^{n} \varphi(u_i)}\, dF_V(v) \\
&= \tilde{\varphi}\left(\varphi(u_1) + \cdots + \varphi(u_n)\right) \quad (\text{$V$ has Laplace transform } \tilde{\varphi}).
\end{aligned}$$

Simulation of the Clayton copula
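The algorithm above, specialized to the Clayton generator $\varphi(t) = t^{-\theta} - 1$ with $\tilde{\varphi}(t) = (1+t)^{-1/\theta}$, so that $V$ is Gamma-distributed with shape $1/\theta$ and scale 1 (whose Laplace transform is exactly $\tilde{\varphi}$), can be sketched in Python (illustrative only; names and the value of $\theta$ are ours):

```python
import math
import random

random.seed(3)

theta = 2.0     # Clayton parameter (our choice)
n, d = 20_000, 2

def phi(t):
    # Clayton generator phi(t) = t^-theta - 1
    return t ** -theta - 1.0

def phi_inv(s):
    # inverse generator: phi_inv(s) = (1 + s)^(-1/theta)
    return (1.0 + s) ** (-1.0 / theta)

# sanity check: phi_inv(phi(u) + phi(v)) is the Clayton closed form
u, v = 0.3, 0.8
closed = (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)
assert abs(phi_inv(phi(u) + phi(v)) - closed) < 1e-12

# step 1: V ~ Gamma(shape = 1/theta, scale = 1), Laplace transform (1+t)^(-1/theta)
# step 2: invert the conditional cdf e^{-v phi(u)} at an independent uniform w,
#         which gives u = phi_inv(-log(w) / v)
samples = []
for _ in range(n):
    V = random.gammavariate(1.0 / theta, 1.0)
    samples.append(tuple(phi_inv(-math.log(random.random()) / V) for _ in range(d)))

# each margin is (approximately) uniform on [0, 1], and the coordinates
# are positively dependent for theta > 0
m1 = sum(s[0] for s in samples) / n
m2 = sum(s[1] for s in samples) / n
cov = sum(s[0] * s[1] for s in samples) / n - m1 * m2
print(round(m1, 3), round(m2, 3), round(cov, 3))
```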
In [ ]:
# Clayton copula simulation
# number of simulations
NumSim <- 1e4
# dimension of random vector
d <- 3
# copula parameter
theta <- 0.5
# F tilde = inverse of generator
Ft <- function(t) (1 + t)^(-1/theta)
V <- rgamma(NumSim, shape = 1/theta, scale = 1)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
plot(U[,2], U[,1], col = 'blue')
Simulation of the Gumbel copula
In [10]:
# Gumbel copula simulation
# generator = (-log(u))^theta
# simulating the stable distribution requires the package stabledist
library(stabledist)
# number of simulations
NumSim <- 1e4
# dimension of random vector
d <- 3
# copula parameter: theta > 1
theta <- 5
# Ftilde = inverse generator
Ft <- function(t) exp(-t^(1/theta))
V <- rstable(NumSim, alpha = 1/theta, beta = 1, gamma = cos(pi/2/theta)^(theta), delta = 0)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
plot(U[,2], U[,3], col = 'blue')
Simulation of the Marshall-Olkin copula
In [30]:
# Marshall-Olkin copula simulation
NumSimul <- 1e4
lambda1 <- 1.1
lambda2 <- 0.2
lambda12 <- 0.6
Z1 <- rexp(NumSimul, rate=lambda1)
Z2 <- rexp(NumSimul, rate=lambda2)
Z12 <- rexp(NumSimul, rate=lambda12)
# X_i = min(Z_i, Z_12) ~ Exp(lambda_i + lambda_12), so each U_i is uniform
U1 <- exp(-(lambda1 + lambda12)*pmin(Z1, Z12))
U2 <- exp(-(lambda2 + lambda12)*pmin(Z2, Z12))
# plot
plot(U1, U2)

Now we can calculate by simulation the price of multiasset options such as basket, spread, or best-of options.

VaR and CVaR of a portfolio
For example, assume each individual asset is lognormally distributed with parameters $\mu_i$ and $\sigma_i^2$, for $i = 1, 2, 3$. We link them by the Clayton copula with, say, $\theta = 1.5$.
In [ ]:
# Generate Clayton copula
# number of simulations
NumSim <- 1e4
# dimension of random vector
d <- 2
# copula parameter
theta <- 0.5
# F tilde = inverse of generator
Ft <- function(t) (1 + t)^(-1/theta)
V <- rgamma(NumSim, shape = 1/theta, scale = 1)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)

# Link lognormal margins by Clayton copula
# time to expiry
t <- 1/52
# volatilities
sig1 <- 0.3
sig2 <- 0.25
# log prices at current time
x1 <- log(100)
x2 <- log(50)
# lognormal samples
S1 <- qlnorm(U[,1], meanlog = x1 - sig1^2*t/2, sdlog = sig1*sqrt(t))
S2 <- qlnorm(U[,2], meanlog = x2 - sig2^2*t/2, sdlog = sig2*sqrt(t))
K <- 250
payoffBasket <- (S1 + S2 - K)*(S1 + S2 >= K)
priceBasket <- mean(payoffBasket)

In [2]:
priceBasket
Out[2]: (value lost in transcription)

Now we wrap the code up as a function.
In [3]:
# copula parameter
theta <- 0.5
s1 <- 100
s2 <- 50
sig1 <- 0.3
sig2 <- 0.25
t <- 1
K <- 250

priceBasket_Clayton <- function(K, t, NumSim=1e4, theta=0.5, M=1){
    # F tilde = inverse of generator
    Ft <- function(t) (1 + t)^(-1/theta)
    d <- 2
    priceMean <- numeric(M)
    priceSd <- numeric(M)
    for (i in 1:M) {
        V <- rgamma(NumSim, shape = 1/theta, scale = 1)
        U <- matrix(runif(NumSim*d), ncol = d)
        U <- Ft(-log(U)/V)
        # Link lognormal margins by Clayton copula
        S1 <- qlnorm(U[,1], meanlog = log(s1) - sig1^2*t/2, sdlog = sig1*sqrt(t))
        S2 <- qlnorm(U[,2], meanlog = log(s2) - sig2^2*t/2, sdlog = sig2*sqrt(t))
        payoffBasket <- (S1 + S2 - K)*(S1 + S2 >= K)
        priceMean[i] <- mean(payoffBasket)
        priceSd[i] <- sd(payoffBasket)
    }
    data.frame(priceMean, priceSd)
}

In [4]:
priceBasket_Clayton(K=K, t=t, M=4)
Out[4]: (table with columns priceMean and priceSd; values lost in transcription)
In [5]:
pv <- priceBasket_Clayton(K=K, t=t, M=1e3)
hist(pv$priceMean, breaks=50, prob=T)

In [6]:
mean(pv$priceMean)
Out[6]: (value lost in transcription)
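With a simulator like the one above in hand, the portfolio VaR and CVaR from the agenda are computed the same way: simulate losses, sort them, read off the $\alpha$-quantile (VaR), and average the tail beyond it (CVaR). A Python sketch with a placeholder loss distribution (our choice, not the lecture's portfolio):

```python
import random

random.seed(11)

def var_cvar(losses, alpha=0.99):
    # empirical VaR: the alpha-quantile of the sorted losses;
    # empirical CVaR: the average loss beyond the VaR
    xs = sorted(losses)
    k = int(alpha * len(xs))
    tail = xs[k:]
    return xs[k], sum(tail) / len(tail)

# placeholder loss sample: a lognormal value minus its initial value of 1
losses = [random.lognormvariate(0.0, 0.25) - 1.0 for _ in range(100_000)]
var99, cvar99 = var_cvar(losses, alpha=0.99)
print(round(var99, 3), round(cvar99, 3))
```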
More informationMTH739U/P: Topics in Scientific Computing Autumn 2016 Week 6
MTH739U/P: Topics in Scientific Computing Autumn 16 Week 6 4.5 Generic algorithms for non-uniform variates We have seen that sampling from a uniform distribution in [, 1] is a relatively straightforward
More informationOn the Conditional Value at Risk (CoVaR) from the copula perspective
On the Conditional Value at Risk (CoVaR) from the copula perspective Piotr Jaworski Institute of Mathematics, Warsaw University, Poland email: P.Jaworski@mimuw.edu.pl 1 Overview 1. Basics about VaR, CoVaR
More informationX n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)
14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence
More informationRobustness of a semiparametric estimator of a copula
Robustness of a semiparametric estimator of a copula Gunky Kim a, Mervyn J. Silvapulle b and Paramsothy Silvapulle c a Department of Econometrics and Business Statistics, Monash University, c Caulfield
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationChapter 4 Multiple Random Variables
Review for the previous lecture Theorems and Examples: How to obtain the pmf (pdf) of U = g ( X Y 1 ) and V = g ( X Y) Chapter 4 Multiple Random Variables Chapter 43 Bivariate Transformations Continuous
More informationStatistics 3657 : Moment Approximations
Statistics 3657 : Moment Approximations Preliminaries Suppose that we have a r.v. and that we wish to calculate the expectation of g) for some function g. Of course we could calculate it as Eg)) by the
More informationMultivariate Statistics
Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical
More informationConvexity of chance constraints with dependent random variables: the use of copulae
Convexity of chance constraints with dependent random variables: the use of copulae René Henrion 1 and Cyrille Strugarek 2 1 Weierstrass Institute for Applied Analysis and Stochastics, 10117 Berlin, Germany.
More information1 Review of Probability
1 Review of Probability Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F (x) = P (X x), < x
More informationReview of Probability Theory
Review of Probability Theory Arian Maleki and Tom Do Stanford University Probability theory is the study of uncertainty Through this class, we will be relying on concepts from probability theory for deriving
More information1 Review of Probability and Distributions
Random variables. A numerically valued function X of an outcome ω from a sample space Ω X : Ω R : ω X(ω) is called a random variable (r.v.), and usually determined by an experiment. We conventionally denote
More informationJoint Probability Distributions and Random Samples (Devore Chapter Five)
Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete
More informationFirst steps of multivariate data analysis
First steps of multivariate data analysis November 28, 2016 Let s Have Some Coffee We reproduce the coffee example from Carmona, page 60 ff. This vignette is the first excursion away from univariate data.
More informationp(z)
Chapter Statistics. Introduction This lecture is a quick review of basic statistical concepts; probabilities, mean, variance, covariance, correlation, linear regression, probability density functions and
More information4. CONTINUOUS RANDOM VARIABLES
IA Probability Lent Term 4 CONTINUOUS RANDOM VARIABLES 4 Introduction Up to now we have restricted consideration to sample spaces Ω which are finite, or countable; we will now relax that assumption We
More informationLecture 11. Probability Theory: an Overveiw
Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the
More information18.440: Lecture 28 Lectures Review
18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationChapter 4. Continuous Random Variables 4.1 PDF
Chapter 4 Continuous Random Variables In this chapter we study continuous random variables. The linkage between continuous and discrete random variables is the cumulative distribution (CDF) which we will
More informationStatistics 100A Homework 5 Solutions
Chapter 5 Statistics 1A Homework 5 Solutions Ryan Rosario 1. Let X be a random variable with probability density function a What is the value of c? fx { c1 x 1 < x < 1 otherwise We know that for fx to
More informationJoint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:
Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,
More informationA Review of Basic Monte Carlo Methods
A Review of Basic Monte Carlo Methods Julian Haft May 9, 2014 Introduction One of the most powerful techniques in statistical analysis developed in this past century is undoubtedly that of Monte Carlo
More informationProduct measure and Fubini s theorem
Chapter 7 Product measure and Fubini s theorem This is based on [Billingsley, Section 18]. 1. Product spaces Suppose (Ω 1, F 1 ) and (Ω 2, F 2 ) are two probability spaces. In a product space Ω = Ω 1 Ω
More informationAPPM/MATH 4/5520 Solutions to Exam I Review Problems. f X 1,X 2. 2e x 1 x 2. = x 2
APPM/MATH 4/5520 Solutions to Exam I Review Problems. (a) f X (x ) f X,X 2 (x,x 2 )dx 2 x 2e x x 2 dx 2 2e 2x x was below x 2, but when marginalizing out x 2, we ran it over all values from 0 to and so
More informationExpectation and Variance
Expectation and Variance August 22, 2017 STAT 151 Class 3 Slide 1 Outline of Topics 1 Motivation 2 Expectation - discrete 3 Transformations 4 Variance - discrete 5 Continuous variables 6 Covariance STAT
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.
Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR
More informationEE/Stats 376A: Homework 7 Solutions Due on Friday March 17, 5 pm
EE/Stats 376A: Homework 7 Solutions Due on Friday March 17, 5 pm 1. Feedback does not increase the capacity. Consider a channel with feedback. We assume that all the recieved outputs are sent back immediately
More informationVaR vs. Expected Shortfall
VaR vs. Expected Shortfall Risk Measures under Solvency II Dietmar Pfeifer (2004) Risk measures and premium principles a comparison VaR vs. Expected Shortfall Dependence and its implications for risk measures
More informationMath 3215 Intro. Probability & Statistics Summer 14. Homework 5: Due 7/3/14
Math 325 Intro. Probability & Statistics Summer Homework 5: Due 7/3/. Let X and Y be continuous random variables with joint/marginal p.d.f. s f(x, y) 2, x y, f (x) 2( x), x, f 2 (y) 2y, y. Find the conditional
More informationLecture 2 One too many inequalities
University of Illinois Department of Economics Spring 2017 Econ 574 Roger Koenker Lecture 2 One too many inequalities In lecture 1 we introduced some of the basic conceptual building materials of the course.
More informationLecture 22: Variance and Covariance
EE5110 : Probability Foundations for Electrical Engineers July-November 2015 Lecture 22: Variance and Covariance Lecturer: Dr. Krishna Jagannathan Scribes: R.Ravi Kiran In this lecture we will introduce
More informationSolutions of the Financial Risk Management Examination
Solutions of the Financial Risk Management Examination Thierry Roncalli January 9 th 03 Remark The first five questions are corrected in TR-GDR and in the document of exercise solutions, which is available
More informationMultiple Random Variables
Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x
More informationConditional densities, mass functions, and expectations
Conditional densities, mass functions, and expectations Jason Swanson April 22, 27 1 Discrete random variables Suppose that X is a discrete random variable with range {x 1, x 2, x 3,...}, and that Y is
More information7 Random samples and sampling distributions
7 Random samples and sampling distributions 7.1 Introduction - random samples We will use the term experiment in a very general way to refer to some process, procedure or natural phenomena that produces
More informationLecture 6 Basic Probability
Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic
More informationSTAT2201. Analysis of Engineering & Scientific Data. Unit 3
STAT2201 Analysis of Engineering & Scientific Data Unit 3 Slava Vaisman The University of Queensland School of Mathematics and Physics What we learned in Unit 2 (1) We defined a sample space of a random
More informationThe Central Limit Theorem Under Random Truncation
The Central Limit Theorem Under Random Truncation WINFRIED STUTE and JANE-LING WANG Mathematical Institute, University of Giessen, Arndtstr., D-3539 Giessen, Germany. winfried.stute@math.uni-giessen.de
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,
More informationDistributions of Functions of Random Variables. 5.1 Functions of One Random Variable
Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique
More informationMultivariate Non-Normally Distributed Random Variables
Multivariate Non-Normally Distributed Random Variables An Introduction to the Copula Approach Workgroup seminar on climate dynamics Meteorological Institute at the University of Bonn 18 January 2008, Bonn
More informationReview (Probability & Linear Algebra)
Review (Probability & Linear Algebra) CE-725 : Statistical Pattern Recognition Sharif University of Technology Spring 2013 M. Soleymani Outline Axioms of probability theory Conditional probability, Joint
More information