MFM Practitioner Module: Risk & Asset Allocation. John Dodson. September 7, 2011


Course

Fall sequence modules:
- portfolio optimization
- fixed income spread trading (Blaise Morton)
- portfolio credit risk (Arkady Shemyakin)

Introductions:
- Nick Kirchner is our TA. Thanks, Nick!
- Drop a note in my dropbox [1]: what is your background? what are your professional goals?

Module syllabus:
- office hours
- evaluations & grading
- module text: required, Meucci; recommended, DeGroot, Glasserman, McNeil et al.

[1] see website

Module Goal

My goal is to introduce modern concepts to describe uncertainty in the financial markets, and to show how to convert investment-manager expertise and investor preferences into optimal quantitative investment strategies.

Out of Scope

I will not be talking about the economics of financial markets, fundamental or technical investment analysis, or the origin of investment-manager expertise.

Meucci's Program for Asset Allocation

- detect market invariance: select the invariants
- estimate the market: specify the distribution of the invariants; decide on an approach to dealing with heteroskedasticity
- model the market: project the invariants to the horizon; map the invariants back to market prices
- define investor optimality: objective, satisfaction, constraints
- collect manager experience: prior on the market vector parameters
- determine the optimal allocation: two-step approach

Definitions

For our purposes, a random variable is a quantity whose value is not known to us right now but will be at some point in the future. These can be represented mathematically as measurable functions of a sample space, and we will usually denote them by upper-case Roman letters. We will usually denote a particular value obtained by a random variable by the corresponding lower-case letter.

There must be a probability associated with every possible set of outcomes. Such sets are called events and might consist of intervals or points or any combination thereof. The corresponding probability is called the probability measure of the event.

This structure lends itself to a measure-theoretic interpretation, where the probability associated with a set is simply the integral of the probability density over that set:

P\{X \in (a, b)\} = \int_a^b f_X(x) \, dx

If X ranges over the real numbers R, then f_X(\cdot) must have certain properties. In particular, it must be a non-negative (generalized) function with

\lim_{x \to \pm\infty} f_X(x) = 0 \quad \text{and} \quad \int_{-\infty}^{\infty} f_X(x) \, dx = 1

In addition to the density function, there are at least three other characterizations of a random variable:

distribution function: F_X(x) = \int_{-\infty}^{x} f_X(x') \, dx'
quantile function: Q_X(p) = F_X^{-1}(p)
characteristic function: \varphi_X(t) = \int_{-\infty}^{\infty} e^{itx} f_X(x) \, dx

This last is based on the Fourier transform, where i^2 = -1. While the density function is the most common, these four representations are all equivalent, and we will be working with each. Hint: they can be distinguished by the nature of their arguments; resp. the value of an outcome, the upper range on a set of outcomes, a probability, and a frequency.

If we have a characterization of a random variable X, it is natural to ask whether we can derive the characterization of a function of that variable, Y = h(X), or of a sample \{X_1, X_2, \ldots, X_n\}, Y = h(X_1, X_2, \ldots, X_n). This is in general difficult, but there are some notable easy cases.

f, F, and Q for an increasing, invertible function h:

f_Y(y) = \frac{f_X(h^{-1}(y))}{h'(h^{-1}(y))}
F_Y(y) = F_X(h^{-1}(y))
Q_Y(p) = h(Q_X(p))

\varphi for the mean of a sample, Y_n = \frac{1}{n} \sum_{j=1}^{n} X_j:

\varphi_{Y_n}(t) = \left[ \varphi_X\!\left(\frac{t}{n}\right) \right]^n
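These transformation rules can be checked against a case with closed forms. A minimal Python sketch; the Exponential(1) variable and h(x) = x² are illustrative choices of mine, not from the lecture:

```python
import math

# X ~ Exponential(1): f_X(x) = exp(-x), F_X(x) = 1 - exp(-x), Q_X(p) = -log(1-p)
f_X = lambda x: math.exp(-x)
F_X = lambda x: 1.0 - math.exp(-x)
Q_X = lambda p: -math.log(1.0 - p)

# h(x) = x**2 is increasing and invertible on the support [0, inf)
h = lambda x: x * x
h_inv = lambda y: math.sqrt(y)
h_prime = lambda x: 2.0 * x

# closed forms for Y = h(X) = X**2: F_Y(y) = 1 - exp(-sqrt(y))
F_Y = lambda y: 1.0 - math.exp(-math.sqrt(y))
f_Y = lambda y: math.exp(-math.sqrt(y)) / (2.0 * math.sqrt(y))

y, p = 2.5, 0.9
assert abs(F_Y(y) - F_X(h_inv(y))) < 1e-12                      # F_Y = F_X o h^{-1}
assert abs(f_Y(y) - f_X(h_inv(y)) / h_prime(h_inv(y))) < 1e-12  # density rule
assert abs(F_Y(h(Q_X(p))) - p) < 1e-12                          # Q_Y(p) = h(Q_X(p))
```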

We will talk in detail about the topic of estimation later, but if you were asked to give a best guess about the value of a random variable, the expected value would be a natural answer. The expected value is a density-weighted average of all possible values, and it can be evaluated from any of the four characterizations:

E X = \int_{-\infty}^{\infty} x \, f_X(x) \, dx = \int_0^{\infty} (1 - F_X(x)) \, dx - \int_{-\infty}^0 F_X(x) \, dx = \int_0^1 Q_X(p) \, dp = -i \, \varphi_X'(0)

Ex.: evaluate each for a Dirac spike, f_X(x) = \delta(x - x_0).
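All four forms can be evaluated for a concrete example. A minimal Python sketch, using an Exponential(2) variable (my choice, not from the lecture) so that every characterization is in closed form; for a non-negative variable the distribution form reduces to the integral of 1 - F:

```python
import math

lam = 2.0   # X ~ Exponential(lam), so E X = 1/lam = 0.5
f = lambda x: lam * math.exp(-lam * x)
F = lambda x: 1.0 - math.exp(-lam * x)
Q = lambda p: -math.log(1.0 - p) / lam
phi = lambda t: lam / (lam - 1j * t)

n = 100_000
dx, dp = 40.0 / n, 1.0 / n
# density form: integral of x f(x) over (0, inf), midpoint rule on [0, 40]
e1 = sum((k + 0.5) * dx * f((k + 0.5) * dx) for k in range(n)) * dx
# distribution form: integral of 1 - F(x) over (0, inf), since X >= 0
e2 = sum(1.0 - F((k + 0.5) * dx) for k in range(n)) * dx
# quantile form: integral of Q(p) over (0, 1), midpoint rule
e3 = sum(Q((k + 0.5) * dp) for k in range(n)) * dp
# characteristic form: -i phi'(0), via a central difference
e4 = (-1j * (phi(1e-5) - phi(-1e-5)) / 2e-5).real

for e in (e1, e2, e3, e4):
    assert abs(e - 0.5) < 1e-3
```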

While it may not be possible to evaluate the characterization of a function of a random variable, it is generally possible to evaluate the expected value of a function of a random variable:

E \, h(X) = \int_{-\infty}^{\infty} h(x) \, f_X(x) \, dx

The probability measure of an event A is the expectation of the indicator function for the event:

P\{X \in A\} = E \, 1_A(X)

In general, the expected value of a function of a random variable is not just the value of the function at the expected value of the random variable. Do not make this mistake!

E \, h(X) \neq h(E X)

A foundational result, the law of large numbers, connects the expectation of a random variable with the sample mean:

\lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} X_j = E X \quad \text{almost surely}

We will demonstrate the weak version of this result using the characteristic function, and then look at some numerical results. The key to the derivation is to recognize that

\varphi_{Y_n}(t) = \left( 1 + \frac{i t \, E X}{n} + o\!\left(\frac{t}{n}\right) \right)^n \to e^{i t \, E X}
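A quick numerical illustration of the strong law, using Exponential(1) draws (an illustrative choice) so that E X = 1:

```python
import random

random.seed(0)
# X ~ Exponential(1), so E X = 1; the sample mean should approach 1
errors = {}
for n in (100, 10_000, 1_000_000):
    mean_n = sum(random.expovariate(1.0) for _ in range(n)) / n
    errors[n] = abs(mean_n - 1.0)

# at n = 1,000,000 the error should be on the order of 1/sqrt(n)
assert errors[1_000_000] < 0.01
```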

The LLN, combined with information technology, brought about a revolution in applied mathematics last century, introducing a completely novel way to evaluate integrals.

Integrals are expectations of functions of random variables:

\int h(x) \, dx = \int \frac{h(x)}{f_X(x)} \, f_X(x) \, dx = E \, \frac{h(X)}{f_X(X)}

Expectations are means of random samples:

E \, \frac{h(X)}{f_X(X)} \stackrel{a.s.}{=} \lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} \frac{h(x_j)}{f_X(x_j)}

You can evaluate an arbitrarily complicated integral if you can
1. identify an appropriate random variable
2. generate a very large sample of random variates
3. cheaply evaluate the integrand and density function
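The three-step recipe can be sketched in Python for the integral of exp(-x²) over (0, ∞), whose true value is √π/2; the Exponential(1) sampling density is an illustrative choice of mine:

```python
import math, random

random.seed(42)
# step 1: choose a random variable whose density covers the domain:
#         X ~ Exponential(1), f_X(x) = exp(-x) on (0, inf)
# step 2: generate a large sample of variates
# step 3: average h(x_j) / f_X(x_j), here with h(x) = exp(-x**2)
n = 1_000_000
total = 0.0
for _ in range(n):
    x = random.expovariate(1.0)
    total += math.exp(-x * x) / math.exp(-x)
estimate = total / n

assert abs(estimate - math.sqrt(math.pi) / 2) < 0.01   # true value ~ 0.8862
```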

Affine equivariance guides us in defining measures for the location and dispersion of a random variable. In particular, we should expect

Loc\{aX + b\} = a \, Loc\{X\} + b
Disp\{aX + b\} = |a| \, Disp\{X\}

Location: The expected value, Loc\{X\} \equiv E X, is a natural candidate for a measure of location.

Dispersion: The standard deviation,

Disp\{X\} \equiv \sqrt{E(X^2) - (E X)^2}

is a natural candidate for a measure of dispersion.
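Both equivariance properties hold exactly for the sample versions of these measures. A quick Python check on an arbitrary sample (the sample and the coefficients a, b are illustrative):

```python
import random, statistics

random.seed(3)
xs = [random.gauss(1.0, 0.5) for _ in range(1_000)]
a, b = -2.0, 5.0
ys = [a * x + b for x in xs]

# Loc{aX + b} = a Loc{X} + b, with the sample mean as Loc
assert abs(statistics.fmean(ys) - (a * statistics.fmean(xs) + b)) < 1e-9
# Disp{aX + b} = |a| Disp{X}, with the sample standard deviation as Disp
assert abs(statistics.pstdev(ys) - abs(a) * statistics.pstdev(xs)) < 1e-9
```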

Other candidates for a measure of location include:
- mode: \arg\max f_X
- median: Q_X(\tfrac{1}{2})

Alternate measures of dispersion include:
- modal dispersion: \left( -\frac{\partial^2}{\partial x^2} \log f_X \Big|_{x = \arg\max f_X} \right)^{-1/2}
- inter-quartile range: Q_X(\tfrac{3}{4}) - Q_X(\tfrac{1}{4})
- absolute deviation: E\,|X - E X|

Standard deviation as a measure of dispersion is justified by an important general result:

Chebyshev (Чебышёв) inequality: For any r.v. X with a finite variance and any k > 1,

P\left\{ (X - E X)^2 > k \left( E(X^2) - (E X)^2 \right) \right\} < \frac{1}{k}
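The bound can be verified in closed form for a skewed example. For X ~ Exponential(1) (my choice, not from the lecture), E X = Var X = 1 and the exact tail probability is available:

```python
import math

# X ~ Exponential(1): E X = 1 and Var X = 1
# For k > 1, 1 - sqrt(k) < 0, so the event {(X-1)^2 > k} is just {X > 1 + sqrt(k)}
for k in (2.0, 4.0, 9.0):
    exact_tail = math.exp(-(1.0 + math.sqrt(k)))
    assert exact_tail < 1.0 / k   # the Chebyshev bound holds (and is loose)
```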

Denote the expected value and standard deviation of a random variable X by \mu and \sigma. The standardized transformation of X is

Z = \frac{X - \mu}{\sigma}

Its characteristic function is

\varphi_Z(t) = e^{-i \frac{\mu}{\sigma} t} \, \varphi_X\!\left(\frac{t}{\sigma}\right)

The moments of Z measure the skewness, kurtosis, etc. of X. The easiest way to evaluate these is to note that

E(Z^n) = (-i)^n \, \varphi_Z^{(n)}(0)

N.B.: Moments do not always exist, and they are generally not adequate to characterize a random variable.
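The moment formula can be tested with finite differences of a known characteristic function. For a standard normal Z, \varphi_Z(t) = e^{-t^2/2}, and the formula should recover E Z² = 1 and E Z⁴ = 3 (the finite-difference approach here is just a numerical convenience):

```python
import math

phi = lambda t: math.exp(-t * t / 2.0)   # cf of a standard normal Z
h = 0.01

# E(Z^2) = (-i)^2 phi''(0) = -phi''(0), via a central second difference
m2 = -(phi(h) - 2.0 * phi(0.0) + phi(-h)) / h**2
# E(Z^4) = (-i)^4 phi''''(0) = phi''''(0), via a central fourth difference
m4 = (phi(2*h) - 4*phi(h) + 6*phi(0.0) - 4*phi(-h) + phi(-2*h)) / h**4

assert abs(m2 - 1.0) < 1e-3
assert abs(m4 - 3.0) < 1e-2
```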

The most important distribution for X \in R is the normal or Gaussian distribution. It takes two parameters and is denoted X \sim N(\mu, \sigma^2). \mu is often called the mean.

f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
F_X(x) = \frac{1}{2} \operatorname{erfc}\!\left(\frac{\mu - x}{\sqrt{2}\,\sigma}\right)
Q_X(p) = \mu - \sqrt{2}\,\sigma \operatorname{erfc}^{-1}(2p)
\varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}

[figure: the normal density]
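A small consistency check on these formulas using math.erfc. The standard library has no inverse erfc, so the quantile below is found by bisection; that is an implementation convenience of mine, not the lecture's method:

```python
import math

mu, sigma = 1.0, 2.0
F = lambda x: 0.5 * math.erfc((mu - x) / (math.sqrt(2.0) * sigma))

def Q(p, lo=-100.0, hi=100.0):
    # invert F by bisection, since math has no erfc^{-1}
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

assert abs(F(mu) - 0.5) < 1e-12                 # the mean is also the median
assert abs(Q(0.5) - mu) < 1e-9
# about 68.27% of the mass lies within one standard deviation of mu
assert abs((F(mu + sigma) - F(mu - sigma)) - 0.6826894921370859) < 1e-9
```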

A foundational result, the central limit theorem, connects the normal distribution with the sample mean. For any random variable X with finite expected value \mu and standard deviation \sigma,

\lim_{n \to \infty} \sqrt{n} \left( \frac{1}{n} \sum_{j=1}^{n} X_j - \mu \right) \sim N(0, \sigma^2) \quad \text{in distribution}

That is, the deviation between the sample mean and the expected value is approximately normal, with standard deviation equal to the standard deviation of the random variable divided by the square root of the sample size. This result can be demonstrated by considering the characteristic function of the standardized sum.
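A numerical illustration with a deliberately skewed distribution, Exponential(1) (my choice), for which \mu = \sigma = 1: the scaled sample-mean deviations should look approximately standard normal:

```python
import math, random, statistics

random.seed(1)
n, trials = 400, 2_000
z = []
for _ in range(trials):
    sample_mean = sum(random.expovariate(1.0) for _ in range(n)) / n
    z.append(math.sqrt(n) * (sample_mean - 1.0))   # mu = sigma = 1 here

# the scaled deviations should be close to N(0, 1)
assert abs(statistics.mean(z)) < 0.1
assert abs(statistics.stdev(z) - 1.0) < 0.1
```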

Recall, a random variable is a measurable function of the sample space with respect to a probability measure. It can be useful to work with the same random variable under an alternate probability measure.

Radon-Nikodým theorem: If P and \tilde{P} are equivalent measures, then there is a random variable Z \geq 0 such that, for any random variable X,

\tilde{E} X = E(Z X)

For example, say that X \sim N(\mu, \sigma^2) under P, but X \sim N(\tilde\mu, \sigma^2) under \tilde{P}. Then the Radon-Nikodým derivative is

Z = \exp\!\left( \frac{(\tilde\mu - \mu) X}{\sigma^2} + \frac{\mu^2 - \tilde\mu^2}{2\sigma^2} \right)

and this same Z would apply to any function of X.
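The change of measure can be checked by simulation: draw X under P and verify that reweighting by Z reproduces expectations under the tilde measure (the parameter values below are illustrative):

```python
import math, random

random.seed(7)
mu, sigma = 0.0, 1.0    # X ~ N(mu, sigma^2) under P
mu_t = 0.5              # X ~ N(mu_t, sigma^2) under the alternate measure

def Z(x):
    # Radon-Nikodym derivative: the ratio of the two normal densities
    return math.exp((mu_t - mu) * x / sigma**2 + (mu**2 - mu_t**2) / (2.0 * sigma**2))

n = 500_000
xs = [random.gauss(mu, sigma) for _ in range(n)]
assert abs(sum(Z(x) for x in xs) / n - 1.0) < 0.02        # E Z = 1 (density ratio)
assert abs(sum(Z(x) * x for x in xs) / n - mu_t) < 0.02   # E(Z X) = E-tilde X
```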

The entropy of a distribution is a measure of how much information we do not know about the random variable:

H_X = -E \log f_X(X)

For example, for X \sim N(\mu, \sigma^2) we have

-\log f_X(X) = \log\sqrt{2\pi\sigma^2} + \frac{(X - \mu)^2}{2\sigma^2}

so

H_X = \log\sqrt{2\pi e \sigma^2}

which is an increasing function of the variance \sigma^2.

Relative Entropy: Similarly, relative entropy is defined in terms of the Radon-Nikodým derivative:

D = \tilde{E} \log Z

A technique for increasing the entropy of a random variable is to introduce randomness into the characterization. Consider for example the hierarchical model

X \mid \mu \sim N(\mu, \sigma^2)
\mu \sim N(\nu, \tau^2)

which leads to

X \sim N(\nu, \sigma^2 + \tau^2)

This has higher entropy than X \mid \mu:

H_{X \mid \mu} = \log\sqrt{2\pi e \sigma^2}
H_X = \log\sqrt{2\pi e (\sigma^2 + \tau^2)} > H_{X \mid \mu}
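The entropy gain from the hierarchical construction is explicit in the normal case; a one-line check (σ² = 1 and τ² = 0.5 are illustrative values):

```python
import math

sigma2, tau2 = 1.0, 0.5
H_cond = math.log(math.sqrt(2.0 * math.pi * math.e * sigma2))            # H_{X|mu}
H_marg = math.log(math.sqrt(2.0 * math.pi * math.e * (sigma2 + tau2)))   # H_X

assert H_marg > H_cond
# the entropy gain equals (1/2) log(1 + tau^2 / sigma^2)
assert abs((H_marg - H_cond) - 0.5 * math.log(1.0 + tau2 / sigma2)) < 1e-12
```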