Elements of Financial Engineering Course


Baruch-NSD Summer Camp 2016, Lecture 6
Tai-Ho Wang

Agenda

- Methods of simulation: inverse transformation method, acceptance-rejection method
- Variance reduction techniques
- Copulas and methods for copula simulation
- Financial applications: multiasset option pricing by simulation; portfolio VaR and CVaR calculation

Inverse transformation method

Theorem
Let X be a random variable and Q_X be its quantile function (qf). Then X ~ Q_X(U), where U ~ U[0,1].

Proof
Homework.

Note
A pseudo code is like
- Generate sample points u_1, u_2, ..., u_n from the uniform distribution U[0,1]
- For each i = 1, ..., n, let x_i = Q_X(u_i)
- Return x_1, ..., x_n

Example
Let X ~ Exp(λ). Note that the cdf F_X is given by, for x ≥ 0,

F_X(x) = P[X ≤ x] = ∫_0^x λ e^{-λξ} dξ = 1 - e^{-λx}.

Hence, for p ∈ (0, 1),

Q_X(p) = F_X^{-1}(p) = -ln(1 - p)/λ.

Therefore, X ~ -ln(1 - U)/λ, where U ~ U[0,1].

In [ ]:
# rescale the output figure
library(repr)
options(repr.plot.width = 6, repr.plot.height = 4)  # sizes lost in transcription; values assumed

In [ ]:
NumSim <- 1e5   # number of simulations (digits lost in transcription; 1e5 assumed)
# generate sample points from U[0,1]
U <- runif(NumSim)
# inverse transform
lambda <- 1     # rate parameter (value lost in transcription; 1 assumed)
Qf <- function(p) -1/lambda*log(1 - p)
X <- Qf(U)
# plot
hist(X, breaks = 100, prob = TRUE)
curve(dexp(x, rate = lambda), from = 0, col = 'blue', add = TRUE)

Empirical cumulative distribution function

In practice, where does the qf come from? The empirical qf, as the inverse of the empirical cdf.

Definition
Let X_j, j = 1, ..., J, be a random sample from a common distribution with cdf F. The nondecreasing, càdlàg (why?) function F̃_J defined by

F̃_J(x) = (1/J) Σ_{j=1}^J 1_{[X_j, ∞)}(x)

is called the empirical cdf of F from the sample X_j, j = 1, ..., J.

Remark
- We can regard the empirical cdf from the sample X_j, j = 1, ..., J, as the cdf of a random variable Y uniformly distributed among the points X_j, i.e., P[Y = X_j] = 1/J.
- The empirical cdf is random; the randomness comes from the samples X_j(ω). Therefore, an empirical cdf actually has two arguments: x and ω.

Glivenko-Cantelli lemma

The Glivenko-Cantelli lemma asserts that, as the total number of samples J approaches infinity, the empirical cdf F̃_J converges almost surely (in ω) and uniformly (in x) to the original behind-the-scenes cdf F which generates the samples.

Theorem
Assume that X_j, j = 1, ..., J, are independent random samples drawn from the common distribution with cdf F, and let F̃_J be the empirical cdf from the random samples X_j. Then

lim_{J→∞} sup_{x ∈ R} |F̃_J(x) - F(x)| = 0 almost surely.

Remark
- The pointwise convergence version of the Glivenko-Cantelli lemma can be obtained by applying the strong law of large numbers. Namely, for any fixed x ∈ R, lim_{J→∞} |F̃_J(x) - F(x)| = 0 almost surely.
- The rate of convergence in the Glivenko-Cantelli lemma is 1/√n if the random variables have a finite third moment, according to the Berry-Esseen theorem.
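Returning to the practical question above, the empirical qf plugs straight into the inverse transformation method. A minimal sketch (not from the original notes; the data, the sample sizes, and the choice type = 1, which gives the left-continuous inverse of the empirical cdf, are illustrative assumptions):

In [ ]:
# simulate new samples from the empirical quantile function of observed data
obs <- rnorm(1000)            # pretend these are the observed data
Qemp <- function(p) quantile(obs, probs = p, type = 1)  # empirical qf
Xnew <- Qemp(runif(1e4))      # inverse transformation with the empirical qf
hist(Xnew, breaks = 50, prob = TRUE)
curve(dnorm(x), col = 'blue', add = TRUE)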

In [ ]:
# the code illustrates the convergence of the empirical cdf in the Glivenko-Cantelli lemma
NumSim <- 100   # digits lost in transcription; 100 assumed
df <- 5         # degrees of freedom (value lost in transcription; 5 assumed)
X <- rt(NumSim, df = df)
plot.ecdf(X, xlim = c(-5, 5))
curve(pt(x, df = df), col = 'blue', add = TRUE)
hist(X, breaks = 50, prob = TRUE)
curve(dt(x, df = df), col = 'blue', add = TRUE)

Acceptance-rejection method

Let X and Y be random variables with pdfs f_X and f_Y respectively. We have a scheme to simulate Y and wish to simulate X through the samples from Y.

Assumption: there exists a constant c such that f_X(x) ≤ c f_Y(x) for every x, i.e., f_X is dominated by c f_Y.

The procedure goes as follows.
- Generate samples y_1, ..., y_n from Y and samples u_1, ..., u_n from U[0,1] independently.
- For i = 1, ..., n, accept y_i if it satisfies u_i ≤ f_X(y_i)/(c f_Y(y_i)); otherwise, reject y_i.
- Return the accepted y_i's.

Example
As an example, we use the method of acceptance-rejection to simulate the normal distribution from the double exponential distribution. Let X ~ N(0,1) and Y ~ DE(1). The pdf of X is f_X(x) = e^{-x²/2}/√(2π), whereas the pdf of Y is f_Y(y) = e^{-|y|}/2. We determine the acceptance threshold c as

f_X(x)/f_Y(x) = (2/√(2π)) e^{-x²/2 + |x|} = √(2/π) e^{1/2 - (|x| - 1)²/2} ≤ √(2e/π) =: c,

since -x²/2 + |x| = 1/2 - (|x| - 1)²/2 ≤ 1/2.

In [ ]:
# the code demonstrates the acceptance-rejection method using Laplace and Gaussian
# X ~ normal
# Y ~ double exponential
NumSim <- 1e5   # digits lost in transcription; 1e5 assumed
lambda <- 1
# Simulate samples from Y and U independently
U <- runif(NumSim)
Y <- runif(NumSim)
# quantile function of the double exponential distribution
Qf <- function(p) ifelse(p <= 0.5, 1/lambda*log(2*p), -1/lambda*log(2 - 2*p))
Y <- Qf(Y)
# plot histogram of Y
fY <- function(x) lambda/2*exp(-lambda*abs(x))
hist(Y, breaks = 50, prob = TRUE)
curve(fY, add = TRUE, col = 'blue')
# set the acceptance threshold c
c <- sqrt(2*exp(1)/pi)
# check the acceptance-rejection criterion
fX <- function(x) dnorm(x)
AccRej <- (U <= fX(Y)/fY(Y)/c)
# fetch out accepted Y as sample points for X and plot
X <- Y[AccRej == TRUE]
hist(X, breaks = 50, prob = TRUE)
curve(dnorm, add = TRUE, col = 'blue')

In [ ]:
length(X)   # number of accepted samples; the expected acceptance rate is 1/c
c           # c = sqrt(2e/pi) ≈ 1.3155

Why does it work? Note that, since the sample points from Y are accepted only when the inequality is satisfied, we can rewrite the acceptance-rejection criterion in terms of conditional probability as

P[Y ≤ x | U ≤ f_X(Y)/(c f_Y(Y))] = P[Y ≤ x, U ≤ f_X(Y)/(c f_Y(Y))] / P[U ≤ f_X(Y)/(c f_Y(Y))].

We calculate the numerator and denominator as follows.

denominator = P[U ≤ f_X(Y)/(c f_Y(Y))]
            = ∫ P[U ≤ f_X(y)/(c f_Y(y)) | Y = y] f_Y(y) dy    (Y and U are independent)
            = ∫ f_X(y)/(c f_Y(y)) f_Y(y) dy                   (F_U(u) = u)
            = (1/c) ∫ f_X(y) dy = 1/c,

numerator = P[Y ≤ x, U ≤ f_X(Y)/(c f_Y(Y))]
          = ∫_{-∞}^x P[U ≤ f_X(y)/(c f_Y(y)) | Y = y] f_Y(y) dy    (Y and U are independent)
          = ∫_{-∞}^x f_X(y)/(c f_Y(y)) f_Y(y) dy                   (F_U(u) = u)
          = (1/c) F_X(x).

It follows that

P[Y ≤ x | U ≤ f_X(Y)/(c f_Y(Y))] = (F_X(x)/c) / (1/c) = F_X(x).

Variance reduction - Antithetic variate

For each simulated sample point ω, we immediately add its reflected point 1 - ω.

Quote from Glasserman: "The method of antithetic variates can take various forms; the most broadly applicable is based on the observation that if U is uniformly distributed over [0,1], then 1 - U is too. Hence, if we generate a path using as inputs U_1, ..., U_n, we can generate a second path using 1 - U_1, ..., 1 - U_n without changing the law of the simulated process. The variables U_i and 1 - U_i form an antithetic pair in the sense that a large value of one is accompanied by a small value of the other. This suggests that an unusually large or small output computed from the first path may be balanced by the value computed from the antithetic path, resulting in a reduction in variance."

Variance reduction - Importance sampling

The goal is to estimate E[g(X)] for some function g so that g(X) is integrable. We rewrite the expectation as

E[g(X)] = ∫ g(x) p(x) dx = ∫ g(x) (p(x)/q(x)) q(x) dx = E[g(Y) p(Y)/q(Y)],

where p and q are the pdfs of the random variables X and Y respectively. Therefore, we can use g(Y) p(Y)/q(Y) as an unbiased (why?) estimator for E[g(X)], i.e., sample an iid sequence Y_1, ..., Y_N of Y, then calculate the sample mean

(1/N) Σ_{n=1}^N g(Y_n) p(Y_n)/q(Y_n).

In other words, instead of sampling an iid sequence X_1, ..., X_N directly from the distribution of X and calculating the sample mean (1/N) Σ_{n=1}^N g(X_n), we sample an iid sequence Y_1, ..., Y_N from the distribution of Y, which we have the flexibility to choose, then calculate the sample mean (1/N) Σ_{n=1}^N g(Y_n) p(Y_n)/q(Y_n).

So the question is: why would this help reduce the variance? Let's start with calculating their variances. From the Cauchy-Schwarz inequality we have

E|g(X)| = ∫ |g(x)| p(x) dx = ∫ (|g(x)| p(x)/√q(x)) √q(x) dx ≤ √( ∫ g²(x) p²(x)/q(x) dx ) = √( E[g²(Y) p²(Y)/q²(Y)] ).

Therefore, we obtain a lower bound for the variance of the random variable g(Y) p(Y)/q(Y):

Var[g(Y) p(Y)/q(Y)] ≥ (E|g(X)|)² - (E[g(X)])².

On the other hand, again by the Cauchy-Schwarz inequality, we obtain the same lower bound for the variance of g(X):

Var[g(X)] = E[g²(X)] - (E[g(X)])² ≥ (E|g(X)|)² - (E[g(X)])².

At first glance, there seems to be no advantage in either case because both variances are bounded below by the same right hand side. We can't do anything about the inequality Var[g(X)] ≥ (E|g(X)|)² - (E[g(X)])²: there is nothing to play with. However, we can turn the inequality Var[g(Y) p(Y)/q(Y)] ≥ (E|g(X)|)² - (E[g(X)])² into an equality by smartly choosing q! In fact, simply pick q(y) = |g(y)| p(y)/E|g(X)|. Moreover, if g doesn't change sign, i.e., g(x) ≥ 0 or g(x) ≤ 0 for all x, then the right hand side of the above inequality is zero, which means that with such a choice of q the random variable g(Y) p(Y)/q(Y) has zero variance! Of course, this is too good to be true, because in order to be in such an ideal situation we need to know p and E|g(X)|, which are exactly the answers we are searching for. Both techniques are illustrated in the sketch below.
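A minimal sketch of the two techniques on toy problems (the targets E[e^U] and P[X > 3], the proposal N(3,1), and the sample size are illustrative assumptions, not from the original notes):

In [ ]:
# Antithetic variates: estimate E[exp(U)] = e - 1 for U ~ U[0,1]
NumSim <- 1e5
U <- runif(NumSim)
plain <- exp(U)                   # plain Monte Carlo estimator terms
anti <- (exp(U) + exp(1 - U))/2   # average over each antithetic pair (U, 1 - U)
c(mean(plain), mean(anti))        # both estimates are close to e - 1
c(var(plain), var(anti))          # the antithetic version has much smaller variance

# Importance sampling: estimate P[X > 3] for X ~ N(0,1), sampling Y ~ N(3,1)
Y <- rnorm(NumSim, mean = 3)
w <- dnorm(Y)/dnorm(Y, mean = 3)  # likelihood ratio p(Y)/q(Y)
mean((Y > 3)*w)                   # compare with pnorm(3, lower.tail = FALSE)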

From marginal to joint

For notational simplicity, we shall denote by I the unit interval [0,1] = {x : 0 ≤ x ≤ 1} and by I² = {(x, y) : 0 ≤ x, y ≤ 1} the unit square hereafter.

Let (X, Y) be a pair of random variables which are (marginally) uniformly distributed on I, i.e., X, Y ~ U[0,1]. How much do we know about the joint distribution F_XY(x, y) of (X, Y)? Basically nothing, since we have no information on the "linkage" between X and Y. There is no hope of reconstructing F_XY(x, y) because there are infinitely many possibilities; it is in fact an ill-posed problem. However, the very least we have is the following:

F_XY(x, y) = P[X ≤ x, Y ≤ y] =
  1, if x ≥ 1, y ≥ 1;
  y, if x ≥ 1, 0 ≤ y ≤ 1;
  x, if 0 ≤ x ≤ 1, y ≥ 1;
  0, if x < 0 or y < 0.

Apparently, since the random vector (X, Y) is supported in the unit square I², the relevant values of F_XY are within the unit square I². The above formula simply tells what the values of F_XY on the boundary of I² should be. In other words, there is no freedom in picking the values of F_XY on the boundary of the unit square. For values of F_XY inside I², we note that F_XY is 2-increasing, i.e., for all 0 ≤ x₁ ≤ x₂ ≤ 1 and 0 ≤ y₁ ≤ y₂ ≤ 1, the inequality

F_XY(x₂, y₂) - F_XY(x₂, y₁) - F_XY(x₁, y₂) + F_XY(x₁, y₁) ≥ 0

holds, since

F_XY(x₂, y₂) - F_XY(x₂, y₁) - F_XY(x₁, y₂) + F_XY(x₁, y₁) = P[x₁ ≤ X ≤ x₂, y₁ ≤ Y ≤ y₂] ≥ 0.

Copula in two dimensions

In fact, any bivariate function defined on I² that satisfies the aforementioned properties defines a 2-copula.

Definition
A bivariate function C : I² → R is called 2-increasing if for every non-empty rectangle [u₁, u₂] × [v₁, v₂] ⊆ I²,

C(u₂, v₂) - C(u₂, v₁) - C(u₁, v₂) + C(u₁, v₁) ≥ 0.

We now give a formal definition of copula.

Definition
A bivariate function C : I² → I is called a copula if it satisfies
- C(0, u) = C(u, 0) = 0 for every u ∈ I;
- C(u, 1) = u and C(1, v) = v for every u, v ∈ I;
- C is 2-increasing in I².

Remark
In other words, a copula is simply the joint distribution function of a random vector (U, V) with U and V being marginally uniformly distributed in the unit interval I.

Recall that, if X and Y are continuous random variables with quantile functions Q_X and Q_Y respectively, then X ~ Q_X(U) and Y ~ Q_Y(V), where U and V are uniformly distributed in I. Therefore, letting F_XY denote the joint distribution function of X and Y, the function C given by

C(u, v) := F_XY(Q_X(u), Q_Y(v))

defines a copula. (Check the definition.) In other words, we can "extract" the copula from a joint distribution. Natural questions to ask are: a) is such a copula unique? b) are all joint distributions given by copulas? Answers to the questions are given by Sklar's theorem.
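Before stating the theorem, here is a quick numerical illustration of the extraction formula C(u, v) = F_XY(Q_X(u), Q_Y(v)) for a bivariate normal joint distribution; a sketch assuming the mvtnorm package is available, with ρ = 0.5 chosen arbitrarily:

In [ ]:
# evaluate the copula extracted from a bivariate normal with correlation rho
library(mvtnorm)
rho <- 0.5
C_gauss <- function(u, v)
    pmvnorm(lower = c(-Inf, -Inf), upper = c(qnorm(u), qnorm(v)),
            corr = matrix(c(1, rho, rho, 1), 2))
C_gauss(0.3, 0.7)                     # C(0.3, 0.7)
c(C_gauss(0.3, 1), C_gauss(1, 0.7))   # boundary conditions: returns 0.3 and 0.7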

Sklar's theorem

Sklar's theorem in a sense amounts to the decomposition of the joint distribution of a random vector into its marginal distributions combined with a (margin-independent) copula function. In other words, one component comes purely from the marginal distributions, which statistically are more accessible to infer à la Glivenko-Cantelli; and the other component (the copula) is unitized so that it accounts solely for "linking" the margins.

Theorem
- Let F₁(x) and F₂(y) be two one-dimensional cdfs. If C is any copula, then C(F₁(x), F₂(y)) is a 2-dimensional distribution function whose marginal distribution functions are F₁(x) and F₂(y).
- If F(x, y) is a 2-dimensional distribution function with marginals F₁(x) and F₂(y), then there exists a copula C(v, z) such that F(x, y) = C(F₁(x), F₂(y)). If F₁(x) and F₂(y) are continuous, then C is unique.

Thus, we can either link two marginal distributions by a given copula or extract the copula from a given joint distribution.

Example
We demonstrate how to extract the copula, referred to as the normal or Gaussian copula, from a joint normal distribution. A pseudo code will be like
- Simulate joint normal sample points (x_1, y_1), ..., (x_n, y_n)
- For each i = 1, ..., n, let u_i = F_X(x_i) and v_i = F_Y(y_i)
- Return the joint sample points (u_1, v_1), ..., (u_n, v_n)

Note
It doesn't matter whether the marginals X and Y are standard or not, because F_X(X) and F_Y(Y) are always uniformly distributed in [0,1].

In [ ]:
# the code demonstrates how to simulate the Gaussian copula
NumSim <- 1e4   # digits lost in transcription; 1e4 assumed
rho <- 0.5      # correlation (digit lost in transcription; 0.5 assumed)
# First simulate independent normals
X <- rnorm(NumSim)
Y <- rnorm(NumSim)
# correlate the independent normals
X <- X - mean(X)
X <- X/sd(X)
Y <- rho*X + sqrt(1 - rho^2)*Y
Y <- Y - mean(Y)
Y <- Y/sd(Y)

In [ ]:
data.frame(mean(X), sd(X), mean(Y), sd(Y), cor(X, Y))

In [ ]:
# copula sample points
U <- pnorm(X)
V <- pnorm(Y)
# plot
par(mfrow = c(2, 2))   # panel layout lost in transcription; 2x2 assumed
plot(U, V, col = 'blue')
Vh <- hist(V, breaks = 50, plot = FALSE)
barplot(Vh$density, horiz = TRUE, space = 0, main = 'Histogram of V')
abline(v = 1, col = 'blue')   # the U[0,1] density equals 1
hist(U, breaks = 50, prob = TRUE)
abline(h = 1, col = 'blue')

Now we can use the simulated Gaussian copula sample points to link two marginal distributions, say, a t distribution with 5 degrees of freedom and an exponential distribution with rate 1 (the parameter values did not survive transcription and are assumed in the code below).

In [ ]:
X <- qt(U, df = 5)      # degrees of freedom lost in transcription; 5 assumed
Y <- qexp(V, rate = 1)  # rate lost in transcription; 1 assumed
# plot
par(mfrow = c(2, 2))
plot(X, Y, col = 'blue')
Yh <- hist(Y, breaks = 50, plot = FALSE)
barplot(Yh$density, space = 0, horiz = TRUE, main = 'Histogram of Y')
#curve(dexp(x, rate = 1), col = 'blue', add = TRUE)
hist(X, breaks = 50, prob = TRUE)
curve(dt(x, df = 5), col = 'blue', add = TRUE)

Archimedean copulas

Definition
An Archimedean copula is a copula of the form

C_φ(u, v) = φ^{-1}(φ(u) + φ(v))

for some convex, continuous, decreasing function φ defined on (0, 1] with φ(1) = 0. The function φ is called the generator of C_φ. (Exercise: check that such a defined function C is a copula.)
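A numerical sanity check related to the exercise (a sketch, not a proof; the Clayton generator and θ = 2 are chosen purely for illustration):

In [ ]:
# build C_phi from the Clayton generator and test the copula boundary conditions
theta <- 2
phi <- function(t) t^(-theta) - 1           # Clayton generator
phiinv <- function(s) (1 + s)^(-1/theta)    # inverse of the generator
Cphi <- function(u, v) phiinv(phi(u) + phi(v))
c(Cphi(0.4, 1), Cphi(1, 0.7))   # C(u,1) = u and C(1,v) = v: returns 0.4 and 0.7
Cphi(1e-12, 0.5)                # approximately 0 on the lower boundary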

Commonly used Archimedean copulas

Commonly used Archimedean copulas and their generators include

Ali-Mikhail-Haq:
φ(t) = log(1 - θ(1 - t)) - log t, θ ∈ [-1, 1)
C_φ(u, v) = uv / (1 - θ(1 - u)(1 - v))

Clayton:
φ(t) = t^{-θ} - 1, θ ∈ [-1, ∞) \ {0}
C_φ(u, v) = (u^{-θ} + v^{-θ} - 1)_+^{-1/θ}

Frank:
φ(t) = log(e^{-θ} - 1) - log(e^{-θt} - 1), θ ∈ R \ {0}
C_φ(u, v) = -(1/θ) log[1 + (e^{-θu} - 1)(e^{-θv} - 1)/(e^{-θ} - 1)]

Gumbel:
φ(t) = (-log t)^θ, θ ≥ 1
C_φ(u, v) = e^{-[(-log u)^θ + (-log v)^θ]^{1/θ}}
Note that θ = 1 corresponds to the product copula.

Joe:
φ(t) = -log(1 - (1 - t)^θ), θ ∈ [1, ∞)
C_φ(u, v) = 1 - [(1 - u)^θ + (1 - v)^θ - (1 - u)^θ(1 - v)^θ]^{1/θ}

Simulation of Archimedean copulas

A pseudo code will be like
- Simulate the random variable V whose moment generating function (in the sense m_V(t) = E[e^{-tV}]) is φ^{-1}.
- Simulate iid random variables U_i, for i = 1, ..., n, whose common conditional cdf (conditioned on V = v) is e^{-vφ(u)}, by the method of inverse transformation; inverting this cdf gives u = φ^{-1}(-(log x)/v) for x ~ U[0,1], which is exactly the formula used in the code below.
- Fetch out the samples U_1, ..., U_n.

Proof

C(u_1, ..., u_n) = P[U_1 ≤ u_1, ..., U_n ≤ u_n]
= ∫_0^∞ P[U_1 ≤ u_1, ..., U_n ≤ u_n | V = v] dF_V(v)
= ∫_0^∞ Π_{i=1}^n P[U_i ≤ u_i | V = v] dF_V(v)    (the U_i's are conditionally independent)
= ∫_0^∞ Π_{i=1}^n e^{-vφ(u_i)} dF_V(v)            (U_i | V = v has cdf e^{-vφ(u_i)})
= ∫_0^∞ e^{-v Σ_{i=1}^n φ(u_i)} dF_V(v)
= φ^{-1}(φ(u_1) + ... + φ(u_n))                   (V has mgf m_V(t) = E[e^{-tV}] = φ^{-1}(t))

Simulation of the Clayton copula

In [ ]:
# Clayton copula simulation
# number of simulations
NumSim <- 1e4   # digits lost in transcription; 1e4 assumed
# dimension of random vector
d <- 2
# copula parameter
theta <- 0.5    # digits lost in transcription; 0.5 assumed
# F tilde = inverse of the generator
Ft <- function(t) (1 + t)^(-1/theta)
# V ~ Gamma(1/theta), whose transform E[exp(-tV)] is (1 + t)^(-1/theta)
V <- rgamma(NumSim, shape = 1/theta, scale = 1)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
plot(U[,1], U[,2], col = 'blue')

Simulation of the Gumbel copula

In [ ]:
# Gumbel copula simulation
# generator = (-log(u))^theta
# simulating the stable distribution requires the package stabledist
library(stabledist)
# number of simulations
NumSim <- 1e4   # digits lost in transcription; 1e4 assumed
# dimension of random vector
d <- 2
# copula parameter: theta >= 1
theta <- 2      # value lost in transcription; 2 assumed
# F tilde = inverse of the generator
Ft <- function(t) exp(-t^(1/theta))
# V is positive stable with index 1/theta
V <- rstable(NumSim, alpha = 1/theta, beta = 1, gamma = cos(pi/2/theta)^theta, delta = 0)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
plot(U[,1], U[,2], col = 'blue')
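The same mixing algorithm also covers the Frank family: for θ > 0 the mixing variable V has the logarithmic (log-series) distribution with parameter p = 1 - e^{-θ}, whose transform E[e^{-tV}] = -(1/θ) log(1 - p e^{-t}) is the inverse of the Frank generator. A sketch, not from the original notes; θ = 5 and the simple truncated inverse-transform sampler for V (base R has no log-series sampler) are illustrative assumptions:

In [ ]:
# Frank copula simulation (theta > 0), mixing over a log-series distributed V
NumSim <- 1e4
d <- 2
theta <- 5
p <- 1 - exp(-theta)
# simple sampler for the logarithmic distribution, P[V = k] proportional to p^k/k
rlogarithmic <- function(n, p) {
    k <- 1:10000                      # truncation; the pmf decays geometrically
    pmf <- p^k/(-k*log(1 - p))
    sample(k, n, replace = TRUE, prob = pmf)
}
V <- rlogarithmic(NumSim, p)
# F tilde = inverse of the Frank generator
Ft <- function(s) -1/theta*log(1 - p*exp(-s))
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
plot(U[,1], U[,2], col = 'blue')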

Simulation of the Marshall-Olkin copula

In [ ]:
# Marshall-Olkin copula simulation
NumSimul <- 1e4   # digits lost in transcription; 1e4 assumed
lambda1 <- 1.5    # shock intensities (digits lost in transcription; values assumed)
lambda2 <- 0.5
lambda12 <- 0.5
Z1 <- rexp(NumSimul, rate = lambda1)
Z2 <- rexp(NumSimul, rate = lambda2)
Z12 <- rexp(NumSimul, rate = lambda12)
# X1 = min(Z1, Z12) ~ Exp(lambda1 + lambda12); transform by its cdf (pmin, not pmax,
# so that U1 and U2 are uniform), and similarly for X2 = min(Z2, Z12)
U1 <- 1 - exp(-(lambda1 + lambda12)*pmin(Z1, Z12))
U2 <- 1 - exp(-(lambda2 + lambda12)*pmin(Z2, Z12))
# plot
plot(U1, U2)

Now we can calculate by simulation the price of multiasset options such as basket, spread, or best-of options.

VaR and CVaR of a portfolio

For example, assume each individual asset is lognormally distributed with parameters μ_i and σ_i, for i = 1, 2. We link them by the Clayton copula, say θ = 1.5 (the original parameter values did not survive transcription; the values assumed below are illustrative).

In [ ]:
# Generate the Clayton copula
# number of simulations
NumSim <- 1e5   # digits lost in transcription; 1e5 assumed
# dimension of random vector
d <- 2
# copula parameter
theta <- 1.5    # digits lost in transcription; 1.5 assumed
# F tilde = inverse of the generator
Ft <- function(t) (1 + t)^(-1/theta)
V <- rgamma(NumSim, shape = 1/theta, scale = 1)
U <- matrix(runif(NumSim*d), ncol = d)
U <- Ft(-log(U)/V)
# Link lognormal margins by the Clayton copula
# time to expiry
t <- 1/2
# volatilities (digits lost in transcription; values assumed)
sig1 <- 0.2
sig2 <- 0.3
# log prices at current time (second spot partially lost; 50 assumed)
x1 <- log(100)
x2 <- log(50)
# lognormal samples
S1 <- qlnorm(U[,1], meanlog = x1 - sig1^2*t/2, sdlog = sig1*sqrt(t))
S2 <- qlnorm(U[,2], meanlog = x2 - sig2^2*t/2, sdlog = sig2*sqrt(t))
plot(S1, S2, col = 'blue')
K <- 150   # strike (digits lost in transcription; 150 assumed)
payoffbasket <- (S1 + S2 - K)*(S1 + S2 >= K)
pricebasket <- mean(payoffbasket)
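The same simulated joint samples also give the portfolio VaR and CVaR from the agenda. A minimal sketch (the confidence level, the horizon t, and the definition of loss as the drop from today's portfolio value are assumptions, not from the original notes):

In [ ]:
# VaR and CVaR (expected shortfall) of the portfolio S1 + S2 by simulation
alpha <- 0.99                         # confidence level (assumed)
V0 <- exp(x1) + exp(x2)               # current portfolio value
loss <- V0 - (S1 + S2)                # simulated loss over the horizon t
VaR <- quantile(loss, probs = alpha)  # value at risk at level alpha
CVaR <- mean(loss[loss >= VaR])       # average loss beyond VaR
data.frame(VaR, CVaR)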

20 In []: pricebasket. Now we wrap the code up as a function. In []: # copula parameter theta <- 0. s <- 00 s <- 0 sig <- 0. sig <- 0. t <- K <- 0 pricebasket_clayton <- function(k,t,numsim=e,theta=0.,m=){ # F tilde = inverse of generator Ft <- function(t) ( + t)^(-/theta) d <- pricemean <- numeric(m) pricesd <- numeric(m) for (i in :m) { V <- rgamma(numsim,shape = /theta,scale=) U <- matrix(runif(numsim*d),ncol=d) U <- Ft(-log(U)/V) # Link log normal margins by Clayton copula S <- qlnorm(u[,], meanlog = log(s) - sig^*t/, sdlog = sig*sqrt(t)) S <- qlnorm(u[,], meanlog = log(s) - sig^*t/, sdlog = sig*sqrt(t)) payoffbasket <- (S + S - K)*(S + S >= K) pricemean[i] <- mean(payoffbasket) pricesd[i] <- sd(payoffbasket) } data.frame(pricemean,pricesd) } In []: pricebasket_clayton(k=k,t=t,m=) pricemean pricesd of //, :0 AM

In [ ]:
pv <- pricebasket_clayton(K = K, t = t, M = 100)   # number of batches lost in transcription; 100 assumed
hist(pv$pricemean, breaks = 50, prob = TRUE)

In [ ]:
mean(pv$pricemean)
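For the other multiasset payoffs mentioned above, only the payoff line changes. A sketch reusing the Clayton-linked samples S1 and S2 from the earlier cell (the spread strike is an illustrative assumption):

In [ ]:
# spread and best-of payoffs on the same simulated samples
Kspread <- 50
payoffspread <- pmax(S1 - S2 - Kspread, 0)   # spread option payoff (S1 - S2 - K)^+
mean(payoffspread)                           # spread option price by simulation
payoffbestof <- pmax(S1, S2)                 # best-of payoff max(S1, S2)
mean(payoffbestof)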
