1 Description of variables

We have three possible instruments/state variables: the dividend yield d_{t+1}, the default spread y_{t+1}, and realized market volatility v_{t+1}.

d_t is the continuously compounded 12-month dividend yield on the value-weighted NYSE. Data are from CRSP's index file:

d_t = ln( (total dividends paid over months t-11 to t) / (price at the end of month t) )

y_t is the log of the difference between the monthly yields on corporate bonds rated Aaa and Baa by Moody's. Data are from the St Louis Fed's FRED website:

y_t = ln( Baa yield_t - Aaa yield_t )

v_t is calculated as

v_t = ln( Σ_{j=1}^{N_t} [ln(1 + R_{j,t})]^2 + 2 Σ_{j=1}^{N_t - 1} ln(1 + R_{j,t}) ln(1 + R_{j+1,t}) )

where N_t is the number of trading days in month t and R_{j,t} is the daily market return on day j of month t. The daily market return is taken from Ken French's website.

We inspect the histograms of the raw and the logged variables, and settle on the log forms because each appears more normal. After construction, the three instruments are standardized so that each has a distribution with mean 0 and variance 1.

The assets used are three portfolios formed on the basis of the book-to-market ratio. They are value-weighted portfolios of the bottom three, the middle four, and the top three BE-ME decile portfolios available from Ken French's website. The decile portfolios themselves are value-weighted. We take the log of one plus the returns.

The data period used is from 31 Dec 1926 to 31 Dec 2015, yielding 1069 monthly observations. Since our regressions are on instruments lagged one period, the data series in our estimations are generally 1068 observations long.

2 Framework

Start with an instrument series, Z_{t+1}. This consists of one or more of the dividend yield d_{t+1}, the default spread y_{t+1}, and realized market volatility v_{t+1}. To specify how the instruments evolve, we run a VAR:

Z_{t+1} = a_Z + b_Z Z_t + e^Z_{t+1}    (1)
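As a minimal sketch of how the VAR in Equation 1 can be estimated by OLS, assuming a hypothetical T x k NumPy array Z of standardized instruments (row t holding Z_t):

    import numpy as np

    def estimate_var(Z):
        """OLS estimate of Z_{t+1} = a_Z + b_Z Z_t + e^Z_{t+1} (Equation 1).
        Z is a hypothetical T x k array of instruments; row t holds Z_t."""
        X = np.column_stack([np.ones(len(Z) - 1), Z[:-1]])  # constant and lagged instruments
        B, *_ = np.linalg.lstsq(X, Z[1:], rcond=None)        # (k+1) x k coefficient matrix
        a_Z = B[0]                                           # intercept vector
        b_Z = B[1:].T                                        # slope matrix, so Z[t+1] ~ a_Z + b_Z @ Z[t]
        e_Z = Z[1:] - X @ B                                  # VAR residuals e^Z_{t+1}
        return a_Z, b_Z, e_Z

The same regression, with the lagged dividend yield as the single regressor, gives the return residuals under the predictable-returns specification below.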

The log return series is r_{t+1}. We consider two different specifications for its conditional expectation. Returns are either predictable by the dividend yield,

r_{t+1} = a_r + b_r d_t + e^r_{t+1}

or they are not predictable,

r_{t+1} = a_r + e^r_{t+1}

We call e^r_{t+1} the return residual and e^Z_{t+1} the residual from the instrument VAR. Finally, we regress

e^r_{t+1} = b_e e^Z_{t+1} + u^r_{t+1}    (2)

3 Outline of solution method

We will solve the portfolio problem by building a numerical model in the framework laid out in the previous subsection. We discretize the AR(1) process which generates the Z. In this discretization, the e^Z can take on a finite number of values, and the evolution Equation 1 then implies that the state variables Z too take on only a finite number of possible values, and constitute a Markov chain with discrete states and a transition probability matrix.

The technique we use for discretization is Gaussian quadrature, as described by Tauchen and Hussey. Previous work (e.g., Lynch (2000)) has used this technique with great success under the assumption that all variables in the model are normal. Working under this assumption, Lynch (2000) discretizes both the e^Z (and therefore the Z) as well as the u^r. The e^r are automatically rendered discrete, since they are constructed from the e^Z and the u^r using Equation 2. In this model, all variables are assumed to be homoscedastic.

In our work, we maintain the assumption that the e^Z are normal and homoscedastic, but we allow the u^r to be nonnormal and heteroscedastic. While we could adapt the Tauchen and Hussey procedure to work with nonnormal variables, such an adaptation would be fraught with problems and is likely to work poorly. Instead, we simulate a large number of points for the u^r, and, in combination with the discretized, normal, homoscedastic e^Z, this gives us the possible values of the e^r.
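To make the discretization step concrete, here is a minimal univariate sketch of the Tauchen and Hussey quadrature method for a mean-zero AR(1), z_{t+1} = ρ z_t + e_{t+1}, with normal homoscedastic e. The model itself applies the idea to the VAR in Equation 1, so the univariate setup, the choice of weighting density, and the number of nodes below are illustrative assumptions.

    import numpy as np

    def tauchen_hussey(rho, sigma, n):
        """Discretize z_{t+1} = rho*z_t + e_{t+1}, e ~ N(0, sigma^2), into n nodes
        and a transition matrix, using Gauss-Hermite quadrature."""
        x, w = np.polynomial.hermite.hermgauss(n)   # Gauss-Hermite nodes and weights
        nodes = np.sqrt(2.0) * sigma * x            # grid of discrete states for z
        weights = w / np.sqrt(np.pi)
        P = np.empty((n, n))
        for i in range(n):
            cond_mean = rho * nodes[i]
            # conditional density of node j given node i, relative to the weighting density
            num = np.exp(-(nodes - cond_mean) ** 2 / (2.0 * sigma ** 2))
            den = np.exp(-nodes ** 2 / (2.0 * sigma ** 2))
            P[i] = weights * num / den
            P[i] /= P[i].sum()                      # each row is a probability distribution
        return nodes, P

A call such as tauchen_hussey(0.95, 0.1, 9) returns the discrete states and the Markov transition matrix of the kind used in the dynamic problem.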

4 Normalizing the e^Z

Our goal, then, is to be able to simulate the u^r so that when these simulated values are used together with the discretized e^Z, we obtain e^r which have the same properties as the e^r in the data. An initial step would be to specify distributions for the u^r and to estimate the parameters of these distributions. However, before we can run estimations involving the u^r, we need to ensure that the e^Z have the properties in the data that we will assume they have in the quadrature: specifically, that they are homoscedastic and normal.

In the data, it turns out that the e^Z are heteroscedastic and nonnormal. If we were to use these data e^Z and estimate Equation 2, some, perhaps most, of the nonnormality in the e^r would be accounted for by the e^Z. Estimating the parameters of the resulting u^r, and attempting to simulate from those u^r parameters with normal e^Z, would produce simulated e^r whose nonnormal behavior differs from that of the e^r in the data. Such differences may show up in the level of nonnormality (we would expect the simulated e^r to be less nonnormal than the data), but they may also show up in how that nonnormality varies over time.

Because a principal aim of this paper is to estimate hedging demands driven by time-varying nonnormality, it is essential that the nonnormality of the model e^r closely match that in the data. As a first step towards resolving this issue, we normalize the e^Z.

We consider two combinations of the instruments Z: first, dividend yield and default spread, and second, default spread and realized market variance. For concreteness, the description below focuses on the first of these combinations.

We start with the data Z, and we estimate Equation 1. This gives us the data e^Z, a T x 2 matrix. We then fit a conditional variance model to each of these e^Z. The model we use is

(e^Z_{t+1})^2 = exp(k) + exp(a + b'Z_t) + ν_{t+1}

where the model is estimated under the constraint that the unconditional variance equals the average conditional variance:

E(e^Z_{t+1})^2 = exp(k) + E exp(a + b'Z_t)

Please see Section 5.1 for a description of this model and the reasoning behind this specification. With this model, we can estimate the conditional variance of each of the e^Z as

V_t(e^Z) = exp(k) + exp(a + b'Z_t)

We divide each e^Z by its conditional standard deviation at each point in time. Our expectation is that this removes any time-varying variance from the e^Z. If our conditional variance model were perfect, the standardized series would have mean 0 and variance 1. In our estimations these conditions are almost met; we force them to be met exactly.

We then orthogonalize the two e^Z with respect to each other, by postmultiplying by the inverse of the Cholesky decomposition of their covariance matrix. The Cholesky decomposition is lower triangular, so the first column of the e^Z is not affected by the orthogonalization. The orthogonalization results in series which are uncorrelated, but possibly dependent.

We then attempt to make these orthogonalized series bivariate normal. Bivariate normality of two variables implies, first, that each variable is univariate normal, and second, that the conditional distribution of the second variable given the first is normal with a known mean and variance. For mean-zero variables which are uncorrelated, the mean of the conditional distribution is zero and the variance is the unconditional variance.

We render the first e^Z unconditionally normal in the following way. We obtain the percentile each observation occupies in the empirical CDF. We then apply the inverse normal CDF (with mean zero and standard deviation 1) to this series and get the values that the series would take if it were normal. We standardize the result to have mean 0 and standard deviation 1. When we run the usual tests for normality on this transformed series, we are unable to reject.
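A minimal sketch of this unconditional normalization step, assuming a hypothetical 1-D array x holding one residual series; the (rank - 0.5)/T convention for the empirical-CDF percentiles is an assumption made to keep the inverse normal CDF finite at the endpoints:

    import numpy as np
    from scipy import stats

    def make_unconditionally_normal(x):
        """Replace each observation with the value it would take if the series
        were normal, preserving ranks, then force mean 0 and std 1."""
        pct = (stats.rankdata(x) - 0.5) / len(x)   # empirical-CDF percentiles in (0, 1)
        z = stats.norm.ppf(pct)                     # inverse standard normal CDF
        return (z - z.mean()) / z.std()

The usual normality tests mentioned above can then be run on the output, for example with scipy.stats.jarque_bera.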

We then sort the first series into 20 bins. Because there are 1068 observations, this results in an uneven number of observations per bin. We assign the leftover observations, one to a bin, starting from the most extreme bins and moving inward. This is done because, if forced to choose, we would like the extreme values to be better behaved, and assigning more values to those bins may achieve this.

For each bin, we take the subset of values that the second series takes when the first series takes values in that bin, and we make that subset of values unconditionally normal. Again we standardize the result to have mean 0 and standard deviation 1. Once we have processed all bins, we unconditionally normalize the entire second e^Z series.

We check for time-varying variance by estimating a conditional variance model on these series. We find little evidence of time-varying variance.

We then postmultiply by the Cholesky decomposition of the covariance matrix of the original e^Z. This gives us an e^Z series which we hope has a distribution closer to normal than before, but which has a covariance structure similar to that of the previous series. Using this e^Z series, we reconstruct the Z.

Given that the e^Z are distributed normally with mean zero and known variance, say Σ_{e^Z}, Equation 1 implies the population distribution of Z: it is normal with a mean of (I - b_Z)^{-1} a_Z and a variance given by the solution to the matrix equation

Var(Z) = b_Z Var(Z) b_Z' + Σ_{e^Z}

To solve this equation, we pick an initial guess for Var(Z), and then iterate on the equation above until convergence. We use Σ_{e^Z} as the initial guess.

We then reconstruct the Z series. We start from the initial values in the data Z series, but we modify them to be more consistent with normality. We know what percentile each initial value occupies in the empirical CDF of the Z. We replace it with the value the inverse normal CDF assigns to that probability value; in this calculation, the inverse normal CDF has parameters equal to the population parameters of the Z. Starting from these values, and given the e^Z series, we can construct the Z series using Equation 1.

Once we have these Z, we repeat the entire normalization procedure using them as the data Z. We do this to make the Z as normal as we can. In theory we could iterate on the Z, repeatedly normalizing until the values no longer change much. In practice this leads to poor properties: in some instances the autoregressive coefficient on the Z becomes larger than 1.

With the Z output from the second normalization, we reestimate Equation 1. The coefficients we estimate are close to, but not exactly the same as, the coefficients obtained with the original Z. Using the coefficients we obtain the e^Z. We test these e^Z for univariate normality, and find that we are unable to reject the null of normality, with p-values in excess of 0.8. A necessary condition for two variables to be bivariate normal is that all linear combinations are normal. We test the linear combinations [1, 1], [1, 2], and [2, 1], and once again we are unable to reject the null of normality. The smallest p-value we obtain is
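Returning to the unconditional variance of the Z used above, a minimal sketch of the fixed-point iteration that solves Var(Z) = b_Z Var(Z) b_Z' + Σ_{e^Z}, starting from Σ_{e^Z}; the tolerance and the iteration cap are arbitrary choices:

    import numpy as np

    def unconditional_var_of_Z(b_Z, Sigma_eZ, tol=1e-12, max_iter=100_000):
        """Iterate V <- b_Z V b_Z' + Sigma_eZ until convergence, starting from Sigma_eZ."""
        V = Sigma_eZ.copy()
        for _ in range(max_iter):
            V_next = b_Z @ V @ b_Z.T + Sigma_eZ
            if np.max(np.abs(V_next - V)) < tol:
                return V_next
            V = V_next
        return V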

5 Model specification

5.1 Conditional variance specification

The first task is to model the time-varying conditional variance of the u^r_{t+1} and the e^r_{t+1}. (Recall that the conditional mean has already been modelled, either by assuming that it is a constant or that it is linear in the lagged dividend yield.) We would like to specify the conditional variance as some function of the lagged state variables, because this is how it will be modelled in the quadrature. Once we have obtained estimates of the conditional variance of the u^r_{t+1} and the e^r_{t+1} at each point in time, we can remove the effects of the time-varying variance by dividing by the appropriate conditional standard deviation. We can then further model the behavior of these standardized processes.

The conditional variance specifications must satisfy a variety of conditions:

1. Once estimated, it must be easy to extract the conditional variance at each point in time.

2. The conditional variance must not be negative. Further, it is desirable that the conditional variance not get too close to zero, otherwise the standardized series (created by dividing by the conditional standard deviation) may blow up.

3. The average conditional variance must equal the unconditional variance. This is necessary because of the relation (stated here for the u) that the unconditional variance equals the expected conditional variance plus the variance of the conditional expectation:

V(u^r_{t+1}) = E(V_t(u^r_{t+1})) + V(E_t(u^r_{t+1}))

In our application, we assume that the u^r and the e^r have a constant conditional mean of zero, which implies that V(u^r_{t+1}) = E(V_t(u^r_{t+1})) and V(e^r_{t+1}) = E(V_t(e^r_{t+1})).

4. The conditional variance processes specified for the u^r and the e^r must be consistent with each other, in that they obey the relations implied by (a) our assumptions about e^Z, namely that it is homoscedastic and normal, and (b) Equation 2. Together, these restrictions imply that

V_t(e^r_{t+1}) = b_e V(e^Z_{t+1}) b_e' + V_t(u^r_{t+1})    (3)

A simple way to ensure that this condition is met is to model the conditional variance of the u^r, and to use Equation 3 to back out the conditional variance of the e^r.

As an example, we could model the conditional variance of the u^r_{t+1} by estimating the following equation:

ln(u^r_{t+1})^2 = a + b'Z_t + ν_{t+1}

This imposes positivity, but it is hard to force the conditional variance to stay above a bound. Further, the conditional variance at each point in time, E_t((u^r_{t+1})^2), is difficult to obtain: the equation above gives us an estimate only of E_t(ln(u^r_{t+1})^2).
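A minimal sketch of this candidate log-variance regression, fitted by OLS on hypothetical arrays u (holding u^r_{t+1}) and Z (holding the matching Z_t); note that the fitted values estimate E_t(ln(u^r_{t+1})^2), which is exactly the limitation discussed above:

    import numpy as np

    def fit_log_variance(u, Z):
        """OLS fit of ln(u_{t+1}^2) = a + b'Z_t + nu_{t+1}."""
        y = np.log(u ** 2)
        X = np.column_stack([np.ones(len(Z)), Z])     # constant plus lagged instruments
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        fitted_log_var = X @ coef                     # estimate of E_t[ln(u_{t+1}^2)], not of E_t[u_{t+1}^2]
        return coef, fitted_log_var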

While it is possible to adjust this log-variance estimate by multiplying by a Jensen's inequality term, in practice we find that such adjustments produce conditional variance estimates with poor properties.

Instead, we consider the following specification for the conditional variance of the u^r:

(u^r_{t+1})^2 = exp(k) + exp(a + b'Z_t) + ν_{t+1}

It is easy to estimate the conditional variance at any point, as

E_t(u^r_{t+1})^2 = exp(k) + exp(a + b'Z_t)

The parameters k, a, b can be estimated through nonlinear least squares (NLS). NLS minimizes

E(ν_{t+1}^2) = E( (u^r_{t+1})^2 - exp(k) - exp(a + b'Z_t) )^2

with first order conditions:

w.r.t. k:  -2 exp(k) E(ν_{t+1}) = 0
w.r.t. a:  -2 E( exp(a + b'Z_t) ν_{t+1} ) = 0
w.r.t. b:  -2 E( exp(a + b'Z_t) Z_t ν_{t+1} ) = 0

In the absence of a corner solution (i.e., k = -∞), the first of these FOCs guarantees that E(ν_{t+1}) = 0, or that

E(u^r_{t+1})^2 = exp(k) + E exp(a + b'Z_t)

meaning that we may get the desired constraint, that the unconditional variance of the u equals the expected estimated conditional variance, for free. To ensure that this constraint is always met, we estimate the constrained NLS:

min_{k,a,b} E(ν_{t+1})^2   subject to   E(ν_{t+1}) = 0

We can run this estimation using the sample average as the expectation in the constraint. This would mean that the sample average conditional variance equals the unconditional variance. What we care about, however, is that the quadrature expected conditional variance equal the unconditional variance.

One way to proceed is to observe that we know the population distribution of Z: it is normal with a known mean and variance. This implies that the conditional variance, given by exp(k) + exp(a + b'Z_t), is a constant plus a lognormal variable with known parameters, and its expected value is just exp(k) + exp(a + b'E(Z_t) + 0.5 Var(b'Z_t)). To the extent that the quadrature faithfully reproduces this population expectation, this is the appropriate constraint to impose. In the quadrature, however, the use of discrete points for the Z implies that the distribution of Z is truncated at both ends.
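A minimal sketch of this constrained NLS with the sample-average version of the constraint, using scipy's SLSQP solver on hypothetical arrays u (holding u^r_{t+1}) and Z (holding the matching Z_t); the starting values are arbitrary, and the truncated-population constraint described next would replace only the constraint function:

    import numpy as np
    from scipy.optimize import minimize

    def fit_cond_variance(u, Z):
        """Constrained NLS for (u_{t+1})^2 = exp(k) + exp(a + b'Z_t) + nu_{t+1},
        imposing that the sample mean of nu is zero."""
        u2 = u ** 2

        def fitted(p):                     # p = (k, a, b_1, ..., b_K)
            return np.exp(p[0]) + np.exp(p[1] + Z @ p[2:])

        def objective(p):
            return np.mean((u2 - fitted(p)) ** 2)

        def mean_residual(p):
            return np.mean(u2 - fitted(p))

        p0 = np.concatenate([np.log([u2.mean() / 2, u2.mean() / 2]), np.zeros(Z.shape[1])])
        res = minimize(objective, p0, method="SLSQP",
                       constraints=[{"type": "eq", "fun": mean_residual}])
        return res.x                       # estimates of k, a and b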

In running the estimation of the conditional variance, we would like the conditions we impose to resemble the quadrature as closely as possible. Accordingly, we estimate the NLS with the constraint that the truncated population expected conditional variance equals the unconditional variance of the u^r. It is fairly simple to obtain the symmetrically truncated expectation of the lognormal variable.[1] Therefore, our final specification for the conditional variance of the u^r involves solving the following NLS problem:

min_{k,a,b} E(ν_{t+1})^2   subject to   E(u^r_{t+1})^2 = exp(k) + E[ exp(a + b'Z_t) | U > Z_t > L ]

To get the truncation points, we look at the extreme values that our normalized Zs take, and find the corresponding percentiles in their known population distributions. These may be different for the minimum and the maximum of each Z. We take the laxer of the percentiles as the cutoffs in the truncation.

Using the truncated population expectation in the constraint produces coefficient estimates that are close to those we obtain when we use the sample expectation. Using the untruncated population expectation produces values which are different. This is likely because the expectation of the exponential function of a variable is severely affected by the very large values in the extreme right tail.

[1] We make use of the following relations. We know how to decompose an integral,

∫ x dF(x) = ∫_{-∞}^{L} x dF(x) + ∫_{L}^{U} x dF(x) + ∫_{U}^{∞} x dF(x)

and, observing that, for example,

E(x | x ≤ L) = (1 / P(x ≤ L)) ∫_{-∞}^{L} x dF(x)

we can decompose an expectation:

E(x) = P(x ≤ L) E(x | x ≤ L) + P(L < x ≤ U) E(x | L < x ≤ U) + P(U < x) E(x | U < x)

Further, we know: if y ~ N(μ, σ) and x = exp(y), then

E(x | U < x) = E(x) Φ(σ - U_0) / Φ(-U_0),   where U_0 = (ln(U) - μ)/σ

and

E(x | x ≤ L) = E(x) Φ(-σ + L_0) / Φ(L_0),   where L_0 = (ln(L) - μ)/σ

Now we want the expectation of a symmetrically truncated lognormal. From the above,

E(x | L < x ≤ U) = [ E(x) - P(x ≤ L) E(x | x ≤ L) - P(U < x) E(x | U < x) ] / P(L < x ≤ U)

We can work out each part of this separately.
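The relations in footnote 1 translate directly into a small function; a minimal sketch, assuming the truncation points L and U are given on the level (not log) scale:

    import numpy as np
    from scipy.stats import norm

    def truncated_lognormal_mean(mu, sigma, L, U):
        """E[x | L < x <= U] for x = exp(y), y ~ N(mu, sigma^2), via footnote 1."""
        Ex = np.exp(mu + 0.5 * sigma ** 2)        # unconditional mean of the lognormal
        L0 = (np.log(L) - mu) / sigma
        U0 = (np.log(U) - mu) / sigma
        lower = Ex * norm.cdf(L0 - sigma)         # P(x <= L) * E(x | x <= L)
        upper = Ex * norm.cdf(sigma - U0)         # P(x > U)  * E(x | x > U)
        return (Ex - lower - upper) / (norm.cdf(U0) - norm.cdf(L0))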

Given the specification for the conditional variance of the u^r, we obtain the conditional variance of the e^r from Equation 2:

V_t(e^r_{t+1}) = b_e V(e^Z) b_e' + V_t(u^r_{t+1})

We then standardize the u^r and the e^r by their conditional standard deviations, and standardize the resulting series to have unconditional mean 0 and unconditional standard deviation of 1.

5.2 Distribution specification

Given the homoscedastic, standardized u^r and e^r, we can proceed to model their distributions. In each case, we might use a single multivariate distribution, but copula theory allows us the more flexible alternative of modelling the marginal and joint distributions separately.

We consider conditional and unconditional models. A conditional model is one whose parameters vary over time. Time-varying parameters are modelled by allowing them to be functions of the lagged state variables. In particular, we model them as [possibly nonlinear] functions of linear combinations of the lagged state variables.

For the marginals we consider the following specifications:

1. Normal: with parameters mean μ and variance σ^2. Since the series have already been standardized to have constant conditional mean and variance of 0 and 1, the parameters of the normal marginals are determined.

2. Skew-t: with parameters degrees of freedom ν and skewness λ.

For the copulas we consider the following specifications:

1. Normal: with three correlation parameters ρ_{i,j}.

2. t: with three correlation parameters ρ_{i,j}, and with a degrees of freedom parameter ν.

3. Mixture: where the copula can be either t with a certain probability, or Clayton with one minus that probability. The parameters are the four parameters of the t copula, one parameter κ of the Clayton copula, and one mixture probability. In principle, any or all of these six parameters can be time-varying.

Each of the parameters of the marginals and the copula has certain bounds on it. For example, the correlation parameters have to lie between -1 and 1. In Table 1 we list the parameters, the bounds on them, and the functional forms that we use to ensure that they stay within the bounds.

When estimating the copulas, we would like to ensure that the unconditional correlations implied by the copula parameters are the same as in the data. Unfortunately, there is no analytical formula linking the copula parameters to the unconditional correlations. We impose the constraint via simulation.
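A minimal sketch of the building block for such a simulation: drawing observations whose dependence comes from a normal copula with correlation matrix R, and whose marginals are then imposed through their inverse CDFs. Here a symmetric Student-t marginal stands in for the skew-t, and R, df and n are hypothetical inputs:

    import numpy as np
    from scipy.stats import norm, t

    def normal_copula_draws(R, df, n, seed=0):
        """Draw n observations with a normal copula (correlation R) and
        Student-t(df) marginals standing in for the skew-t."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n, R.shape[0])) @ np.linalg.cholesky(R).T  # correlated normals
        u = norm.cdf(z)                   # uniforms whose dependence is the normal copula
        return t.ppf(u, df)               # plug the uniforms into the marginal inverse CDF

The sample correlations of such simulated draws are what the level parameters of the correlations are adjusted against in the procedure described next.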

Consider a particular marginal-copula pair. For example, take the skew-t marginals and the normal copula. Marginal parameters are unaffected by correlations. The normal copula has three correlation parameters.[2] We model them as

ρ_{i,j,t} = g(β_0 + β'Z_t)

where g() is a nonlinear function with range (-1, 1). We first estimate the copula with no constraints, and obtain estimates of β_0 and β. We then simulate 100,000 observations from the e^Z and the Z (with a burn-in period of 1000 observations), and for the three assets from the copula-marginal pair using the estimated parameters. We measure the sample unconditional correlations of the simulated variables. If they do not equal their values in the data, we adjust each of the three β_0, the level parameters of the correlations, until they do. When looking at the other copulas, which have more parameters than just correlations, we maintain this approach: we only ever adjust the level parameters of the three time-varying correlations.

6 Specifications considered

We consider three principal specifications:

[2] Even in this simple case the correlation parameters don't have an analytical link to the sample correlations.

A Description of marginal distributions used to model return residuals

A.1 Normal marginals

The univariate normal distribution with parameters μ and σ has a PDF given by

f(u) = (1 / (σ √(2π))) exp( -(u - μ)^2 / (2σ^2) )

A.2 Skew-t marginals

The univariate skew-t distribution, for a variable u with mean 0 and standard deviation 1, has parameters ν and λ and has a PDF given by

f(u) = bc ( 1 + (1/(ν-2)) ((bu + a)/(1 - λ))^2 )^{-(ν+1)/2}    for u < -a/b,
f(u) = bc ( 1 + (1/(ν-2)) ((bu + a)/(1 + λ))^2 )^{-(ν+1)/2}    for u ≥ -a/b.

The constants a, b and c are defined as

a = 4λc (ν - 2)/(ν - 1),    b^2 = 1 + 3λ^2 - a^2,    c = Γ((ν+1)/2) / ( √(π(ν-2)) Γ(ν/2) )

B Description of copula distributions used to model return residuals

B.1 Normal copula

Let u be a vector-valued random variable with dimension N x 1. Let Φ(u; R) be the N-dimensional multivariate normal CDF with correlation matrix R. Let Φ(u_i) be the univariate standard normal CDF. Then the normal copula is given by

C(u; R) = Φ( Φ^{-1}(u_1), ..., Φ^{-1}(u_N); R )    (4)

and its PDF is given by

c(u; R) = |R|^{-1/2} exp( -(1/2) q'(R^{-1} - I_N) q ),    where q = ( Φ^{-1}(u_1), ..., Φ^{-1}(u_N) )'

B.2 t copula

B.3 Clayton copula

B.4 Mixture copula
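Going back to Appendix A.2, a minimal implementation sketch of the skew-t density as reconstructed above; the vectorization details are our own and the function is illustrative only:

    import numpy as np
    from scipy.special import gammaln

    def skew_t_pdf(u, nu, lam):
        """Density of the zero-mean, unit-variance skew-t of Appendix A.2,
        with degrees of freedom nu > 2 and skewness lam in (-1, 1)."""
        c = np.exp(gammaln((nu + 1) / 2) - gammaln(nu / 2)) / np.sqrt(np.pi * (nu - 2))
        a = 4 * lam * c * (nu - 2) / (nu - 1)
        b = np.sqrt(1 + 3 * lam ** 2 - a ** 2)
        u = np.asarray(u, dtype=float)
        scale = np.where(u < -a / b, 1 - lam, 1 + lam)   # piecewise scale on each side of -a/b
        z = (b * u + a) / scale
        return b * c * (1 + z ** 2 / (nu - 2)) ** (-(nu + 1) / 2)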

Table 1: Distribution parameters, bounds on them, and functional forms. The Z_t are assumed to include a constant.

Panel A: Marginals

Normal
    mean μ: fixed at 0 in the standardized series
    standard deviation σ: fixed at 1 in the standardized series

Skew-t
    degrees of freedom ν, always above 2:  ν_{i,t+1} = 2 + e^{Z_t'β_i}
    skewness λ, in (-1, 1):  λ_{i,t+1} = -1 + 2/(1 + e^{Z_t'β_i})

Panel B: Copulas

Normal
    correlations ρ, in (-1, 1):  ρ_{i,j,t+1} = -1 + 2/(1 + e^{Z_t'β_{i,j}})

t
    correlations ρ, in (-1, 1):  ρ_{i,j,t+1} = -1 + 2/(1 + e^{Z_t'β_{i,j}})
    degrees of freedom ν, always above 2:  ν_{t+1} = 2 + e^{Z_t'β}

Clayton
    kappa κ, always above 0:  κ_{t+1} = 1 - 1/(1 + e^{Z_t'β}); we bound it below by 0.01, and also above, to help convergence

Mixture
    probability p:  p_{t+1} = 1 - 1/(1 + e^{Z_t'β})
    all parameters of the Clayton and t copulas
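For concreteness, a small sketch of link functions of the kind listed in Table 1, mapping a linear index x = Z_t'β into each parameter's bounds; the exact (-1, 1) map used for the skewness and correlation parameters is an assumption, as only its logistic form is recoverable here:

    import numpy as np

    def dof_link(x):
        """Degrees of freedom, bounded below by 2."""
        return 2.0 + np.exp(x)

    def probability_link(x):
        """Mixture probability, in (0, 1)."""
        return 1.0 - 1.0 / (1.0 + np.exp(x))

    def correlation_link(x):
        """Assumed logistic-type map into (-1, 1) for correlations and skewness."""
        return -1.0 + 2.0 / (1.0 + np.exp(x))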
