Solutions and Proofs: Optimizing Portfolios


An Undergraduate Introduction to Financial Mathematics, J. Robert Buchanan

Covariance

Proof:
\[
\begin{aligned}
\operatorname{Cov}(X, Y) &= E\left[XY - Y\,E[X] - X\,E[Y] + E[X]\,E[Y]\right] \\
&= E[XY] - E[Y]\,E[X] - E[X]\,E[Y] + E[X]\,E[Y] \\
&= E[XY] - E[X]\,E[Y].
\end{aligned}
\]
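
A quick numerical sanity check of the identity Cov(X, Y) = E[XY] − E[X]E[Y] may be useful. The sketch below is illustrative only (the distributions, sample size, and the dependence Y = 0.5X + noise are assumptions, not part of the slides); it estimates both sides from simulated data and confirms they agree up to sampling error.

```python
import numpy as np

# Illustrative check (assumed setup): estimate Cov(X, Y) two ways.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(1.0, 2.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)            # Y depends on X by construction

lhs = np.mean((x - x.mean()) * (y - y.mean()))   # Cov(X, Y) from the definition
rhs = np.mean(x * y) - x.mean() * y.mean()       # E[XY] - E[X] E[Y]
print(lhs, rhs)                                  # the two estimates should be nearly identical
```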

Properties of Covariance (1 of 2)

Proof: Let X, Y, and Z be random variables. Then
\[
\begin{aligned}
\operatorname{Cov}(X + Y, Z) &= E[(X+Y)Z] - E[X+Y]\,E[Z] \\
&= E[XZ] + E[YZ] - E[X]\,E[Z] - E[Y]\,E[Z] \\
&= E[XZ] - E[X]\,E[Z] + E[YZ] - E[Y]\,E[Z] \\
&= \operatorname{Cov}(X, Z) + \operatorname{Cov}(Y, Z).
\end{aligned}
\]

Properties of Covariance (2 of 2) I

Proof: Let {X_1, X_2, ..., X_k, X_{k+1}} be random variables. Using the previous property and the induction hypothesis,
\[
\begin{aligned}
\operatorname{Cov}\!\left(\sum_{i=1}^{k+1} X_i,\, Y_1\right)
&= \operatorname{Cov}\!\left(\sum_{i=1}^{k} X_i,\, Y_1\right) + \operatorname{Cov}(X_{k+1}, Y_1) \\
&= \sum_{i=1}^{k} \operatorname{Cov}(X_i, Y_1) + \operatorname{Cov}(X_{k+1}, Y_1) \\
&= \sum_{i=1}^{k+1} \operatorname{Cov}(X_i, Y_1).
\end{aligned}
\]
Therefore, by induction the result is true for any finite integer value of n and m = 1.

Properties of Covariance (2 of 2) II

When m is an integer larger than 1 we can argue that
\[
\begin{aligned}
\operatorname{Cov}\!\left(\sum_{i=1}^{n} X_i,\, \sum_{j=1}^{m} Y_j\right)
&= \sum_{j=1}^{m} \operatorname{Cov}\!\left(\sum_{i=1}^{n} X_i,\, Y_j\right) \\
&= \sum_{j=1}^{m} \operatorname{Cov}\!\left(Y_j,\, \sum_{i=1}^{n} X_i\right) \\
&= \sum_{j=1}^{m} \sum_{i=1}^{n} \operatorname{Cov}(Y_j, X_i) \\
&= \sum_{i=1}^{n} \sum_{j=1}^{m} \operatorname{Cov}(X_i, Y_j).
\end{aligned}
\]

Properties of Covariance (2 of 2) III

Proof: Let Y = \sum_{i=1}^{n} X_i; then Var(Y) = Cov(Y, Y) and
\[
\begin{aligned}
\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right)
&= \operatorname{Cov}\!\left(\sum_{i=1}^{n} X_i,\, \sum_{j=1}^{n} X_j\right) \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \operatorname{Cov}(X_i, X_j) \\
&= \sum_{i=1}^{n} \operatorname{Cov}(X_i, X_i) + \sum_{i=1}^{n} \sum_{j \ne i} \operatorname{Cov}(X_i, X_j) \\
&= \sum_{i=1}^{n} \operatorname{Var}(X_i) + \sum_{i=1}^{n} \sum_{j \ne i} \operatorname{Cov}(X_i, X_j).
\end{aligned}
\]
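
The variance-of-a-sum identity above is easy to confirm numerically. The following sketch is illustrative only (the three correlated variables are generated from an arbitrary, assumed mixing matrix); it compares the sample variance of X_1 + X_2 + X_3 with the sum of the variances plus the cross-covariances taken from the sample covariance matrix.

```python
import numpy as np

# Illustrative check (assumed data): Var(sum X_i) = sum Var(X_i) + cross-covariances.
rng = np.random.default_rng(1)
n = 500_000
z = rng.normal(size=(n, 3))
mix = np.array([[1.0, 0.0, 0.0],
                [0.5, 1.0, 0.0],
                [0.2, 0.3, 1.0]])
x = z @ mix.T                                 # three correlated columns X_1, X_2, X_3

c = np.cov(x, rowvar=False)                   # sample covariance matrix
lhs = np.var(x.sum(axis=1), ddof=1)           # Var(X_1 + X_2 + X_3)
rhs = np.trace(c) + (c.sum() - np.trace(c))   # variances + cross-covariances
print(lhs, rhs)                               # the two values should agree closely
```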

Properties of the Correlation

Proof: Suppose Y = aX + b with a ≠ 0. Then
\[
\begin{aligned}
\operatorname{Cov}(X, Y) &= \operatorname{Cov}(X, aX + b) \\
&= E[X(aX + b)] - E[X]\,E[aX + b] \\
&= E[aX^2 + bX] - E[X]\left(a E[X] + b\right) \\
&= a E[X^2] + b E[X] - a E[X]\,E[X] - b E[X] \\
&= a\left(E[X^2] - E[X]^2\right) = a \operatorname{Var}(X).
\end{aligned}
\]
Therefore
\[
\rho(X, Y) = \frac{a \operatorname{Var}(X)}{\sqrt{\operatorname{Var}(X)}\sqrt{a^2 \operatorname{Var}(X)}} = \frac{a}{|a|}.
\]
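
To see this result in action, the sketch below (the distribution of X and the particular values of a and b are assumed choices for illustration) computes the sample correlation of X and Y = aX + b for one positive and one negative slope; in each case it should match a/|a|, i.e. the sign of a.

```python
import numpy as np

# Illustrative check (assumed X, a, b): corr(X, aX + b) equals sign(a).
rng = np.random.default_rng(2)
x = rng.normal(5.0, 3.0, 100_000)
for a, b in [(2.5, 1.0), (-0.7, 4.0)]:
    y = a * x + b
    rho = np.corrcoef(x, y)[0, 1]
    print(a, round(rho, 6), np.sign(a))   # rho matches sign(a) up to rounding
```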

Schwarz Inequality I

Proof: If a and b are real numbers then the following two inequalities hold:
\[
\begin{aligned}
0 \le E\left[(aX + bY)^2\right] &= a^2 E[X^2] + 2ab\,E[XY] + b^2 E[Y^2] \\
0 \le E\left[(aX - bY)^2\right] &= a^2 E[X^2] - 2ab\,E[XY] + b^2 E[Y^2].
\end{aligned}
\]
If we let a^2 = E[Y^2] and b^2 = E[X^2], then the first inequality above yields
\[
\begin{aligned}
0 &\le 2 E[X^2]\,E[Y^2] + 2 \sqrt{E[X^2]}\sqrt{E[Y^2]}\; E[XY] \\
-\sqrt{E[X^2]}\sqrt{E[Y^2]} &\le E[XY].
\end{aligned}
\]

Schwarz Inequality II

Similarly, the second inequality produces
\[
E[XY] \le \sqrt{E[X^2]}\sqrt{E[Y^2]}.
\]
Therefore, since
\[
-\sqrt{E[X^2]}\sqrt{E[Y^2]} \le E[XY] \le \sqrt{E[X^2]}\sqrt{E[Y^2]},
\]
we conclude that
\[
\left(E[XY]\right)^2 \le E[X^2]\,E[Y^2].
\]

Range of the Correlation

Proof:
\[
\begin{aligned}
\left(\operatorname{Cov}(X, Y)\right)^2 &= \left(E\left[(X - E[X])(Y - E[Y])\right]\right)^2 \\
&\le E\left[(X - E[X])^2\right] E\left[(Y - E[Y])^2\right] \\
&= \operatorname{Var}(X)\,\operatorname{Var}(Y).
\end{aligned}
\]
Thus $\left|\operatorname{Cov}(X, Y)\right| \le \sqrt{\operatorname{Var}(X)}\sqrt{\operatorname{Var}(Y)}$, which is equivalent to the inequality
\[
-1 \le \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X)}\sqrt{\operatorname{Var}(Y)}} \le 1,
\qquad\text{that is,}\qquad -1 \le \rho(X, Y) \le 1.
\]

Technical Result I

Proof:
\[
\begin{aligned}
E\left[X(X - K)^+\right]
&= \int_0^\infty x (x - K)^+ \, \frac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-(\ln x - \mu)^2 / 2\sigma^2}\, dx \\
&= \frac{1}{\sqrt{2\pi}\,\sigma} \int_K^\infty (x - K)\, e^{-(\ln x - \mu)^2 / 2\sigma^2}\, dx \\
&= \frac{1}{\sqrt{2\pi}} \int_{(\ln K - \mu)/\sigma}^{\infty} \left(e^{\sigma z + \mu} - K\right) e^{\sigma z + \mu}\, e^{-z^2/2}\, dz.
\end{aligned}
\]

Technical Result II

The substitution σz = ln x − µ produced the last integral above. Completing the square in each exponent gives
\[
\begin{aligned}
E\left[X(X - K)^+\right]
&= \frac{e^{2(\mu + \sigma^2)}}{\sqrt{2\pi}} \int_{(\ln K - \mu)/\sigma}^{\infty} e^{-(z - 2\sigma)^2/2}\, dz
 \;-\; \frac{K e^{\mu + \sigma^2/2}}{\sqrt{2\pi}} \int_{(\ln K - \mu)/\sigma}^{\infty} e^{-(z - \sigma)^2/2}\, dz \\
&= e^{2(\mu + \sigma^2)}\, \phi\!\left(\frac{\mu - \ln K}{\sigma} + 2\sigma\right)
 \;-\; K e^{\mu + \sigma^2/2}\, \phi\!\left(\frac{\mu - \ln K}{\sigma} + \sigma\right),
\end{aligned}
\]
where φ denotes the standard normal cumulative distribution function.
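
A Monte Carlo check of this closed form can be reassuring. The sketch below is illustrative only: the parameter values µ, σ, and K are arbitrary assumptions, X is simulated as exp(µ + σZ) with Z standard normal, and the standard normal CDF is taken from scipy.stats.norm.

```python
import numpy as np
from scipy.stats import norm

# Illustrative check (assumed mu, sigma, K): compare a Monte Carlo estimate
# of E[X(X-K)^+] for lognormal X with the closed form derived above.
mu, sigma, K = 0.1, 0.4, 1.2
rng = np.random.default_rng(3)
z = rng.standard_normal(2_000_000)
x = np.exp(mu + sigma * z)                      # X is lognormal with parameters mu, sigma

mc = np.mean(x * np.maximum(x - K, 0.0))        # Monte Carlo estimate
d = (mu - np.log(K)) / sigma
exact = (np.exp(2 * (mu + sigma**2)) * norm.cdf(d + 2 * sigma)
         - K * np.exp(mu + sigma**2 / 2) * norm.cdf(d + sigma))
print(mc, exact)                                # the two values should be close
```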

Concavity and Derivatives I

Proof: If f is concave on (a, b) then by definition f satisfies
\[
\lambda f(x) + (1 - \lambda) f(y) \le f(\lambda x + (1 - \lambda) y)
\]
for every x, y ∈ (a, b) and every λ ∈ [0, 1]. Assume x < y. If w = λx + (1 − λ)y and 0 < λ < 1, then a < x < w < y < b. Concavity implies
\[
(1 - \lambda)\left[f(y) - f(w)\right] \le \lambda\left[f(w) - f(x)\right],
\]
and by the definition of w,
\[
1 - \lambda = \frac{w - x}{y - x} \qquad\text{and}\qquad \lambda = \frac{y - w}{y - x}.
\]
Substituting these expressions yields
\[
\frac{f(y) - f(w)}{y - w} - \frac{f(w) - f(x)}{w - x} \le 0.
\]

Concavity and Derivatives II

Applying the Mean Value Theorem to each of the difference quotients above implies that for some α and β satisfying x < α < w < β < y,
\[
f'(\beta) - f'(\alpha) \le 0.
\]
Using the Mean Value Theorem once more proves that for some t with α < t < β,
\[
f''(t)(\beta - \alpha) = f'(\beta) - f'(\alpha) \le 0,
\]
which implies f''(t) ≤ 0.

Jensen's Inequality (Discrete Version)

Proof: Let µ = \sum_{i=1}^{n} λ_i x_i and note that since λ_i ∈ [0, 1] for i = 1, 2, ..., n and \sum_{i=1}^{n} λ_i = 1, then a < µ < b. The equation of the line tangent to the graph of f at the point (µ, f(µ)) is
\[
y = f'(\mu)(x - \mu) + f(\mu).
\]
Since f is concave on (a, b),
\[
f(x_i) \le f'(\mu)(x_i - \mu) + f(\mu) \qquad\text{for } i = 1, 2, \ldots, n.
\]
Therefore
\[
\begin{aligned}
\sum_{i=1}^{n} \lambda_i f(x_i)
&\le \sum_{i=1}^{n} \lambda_i \left[f'(\mu)(x_i - \mu) + f(\mu)\right] \\
&= f'(\mu) \sum_{i=1}^{n} (\lambda_i x_i - \lambda_i \mu) + f(\mu) \sum_{i=1}^{n} \lambda_i \\
&= f(\mu) = f\!\left(\sum_{i=1}^{n} \lambda_i x_i\right).
\end{aligned}
\]

Jensen's Inequality (Continuous Version)

Proof: For the sake of compactness of notation let
\[
\alpha = \int_0^1 \phi(t)\, dt,
\]
and let y = f'(α)(x − α) + f(α) be the equation of the tangent line passing through the point with coordinates (α, f(α)). Since f is concave,
\[
f(\phi(t)) \le f'(\alpha)\left(\phi(t) - \alpha\right) + f(\alpha),
\]
which implies that
\[
\begin{aligned}
\int_0^1 f(\phi(t))\, dt
&\le \int_0^1 \left[f'(\alpha)\left(\phi(t) - \alpha\right) + f(\alpha)\right] dt \\
&= f(\alpha) + f'(\alpha) \int_0^1 \left(\phi(t) - \alpha\right) dt \\
&= f(\alpha) = f\!\left(\int_0^1 \phi(t)\, dt\right).
\end{aligned}
\]
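
Both versions of Jensen's inequality are straightforward to check numerically. The sketch below is illustrative only: the concave function f = ln and the integrand φ(t) = 1 + t² are assumed choices, and the integrals over [0, 1] are approximated by midpoint sums.

```python
import numpy as np

# Illustrative check (assumed f and phi): Jensen's inequality for concave f,
#   integral over [0,1] of f(phi(t))  <=  f( integral over [0,1] of phi(t) ).
f = np.log                                  # concave on (0, infinity)
phi = lambda t: 1.0 + t**2                  # a positive integrand on [0, 1]

t = (np.arange(100_000) + 0.5) / 100_000    # midpoint grid on [0, 1]
lhs = np.mean(f(phi(t)))                    # approximates the left-hand side
rhs = f(np.mean(phi(t)))                    # approximates the right-hand side
print(lhs, rhs, lhs <= rhs)                 # expect True
```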

Expected Utility

Solution: A rational investor will select the investment with the greater expected utility. The expected utility of investment A is
\[
\frac{1}{2} u(10) + \frac{1}{2} u(0) = \frac{1}{2}\left(10 - \frac{10^2}{25}\right) = 3.
\]
The expected utility of investment B is u(M) = M − M²/25. Thus the investor will choose the coin flip whenever
\[
3 > M - \frac{M^2}{25}.
\]
The boundary value solves M − M²/25 = 3, i.e. M² − 25M + 75 = 0, whose relevant root is M = (25 − √325)/2 ≈ 3.49. Thus investment A is preferable to B whenever M < $3.49.
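
The sketch below (using the quadratic utility u(x) = x − x²/25 implied by the solution above) confirms the threshold numerically by evaluating the smaller root of the boundary equation M² − 25M + 75 = 0 and checking that the utility at that root equals the expected utility of the coin flip.

```python
import math

# Illustrative check: threshold M at which u(M) equals the coin flip's
# expected utility of 3, with u(x) = x - x^2 / 25.
u = lambda x: x - x**2 / 25.0

expected_a = 0.5 * u(10.0) + 0.5 * u(0.0)               # expected utility of the coin flip (= 3)
m_star = (25.0 - math.sqrt(25.0**2 - 4 * 75.0)) / 2.0   # smaller root of M^2 - 25M + 75 = 0
print(expected_a, m_star, u(m_star))                    # about 3, 3.486..., 3.0
```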

Certainty Equivalent

Solution: The certainty equivalent and payoffs of investment A must satisfy the following equation:
\[
C - \frac{C^2}{25} = \frac{1}{2}\left(X - \frac{X^2}{25} + Y - \frac{Y^2}{25}\right).
\]
Completing the square on each side (using x − x²/25 = 25/4 − (x − 25/2)²/25) gives
\[
\left(C - \frac{25}{2}\right)^2 = \frac{1}{2}\left[\left(X - \frac{25}{2}\right)^2 + \left(Y - \frac{25}{2}\right)^2\right],
\]
so
\[
C = \frac{25}{2} - \sqrt{\frac{1}{2}\left[\left(X - \frac{25}{2}\right)^2 + \left(Y - \frac{25}{2}\right)^2\right]}.
\]
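
As a concrete illustration, the payoffs X = 10 and Y = 0 of the coin-flip investment from the previous problem can be substituted into this formula (an assumed pairing for the example); the resulting certainty equivalent should then match the $3.49 threshold found above, and its utility should equal the expected utility of the gamble.

```python
import math

# Illustrative example (payoffs X = 10, Y = 0 borrowed from the coin-flip
# investment above): evaluate the certainty-equivalent formula and check
# that u(C) equals the expected utility of the gamble.
u = lambda x: x - x**2 / 25.0
X, Y = 10.0, 0.0

C = 12.5 - math.sqrt(0.5 * ((X - 12.5)**2 + (Y - 12.5)**2))
print(C)                              # about 3.486
print(u(C), 0.5 * (u(X) + u(Y)))      # both equal 3.0 (up to rounding)
```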

Minimum Variance Analysis I

Proof: Since the rates of return are uncorrelated, the variance of the returned wealth W is
\[
\operatorname{Var}(W) = \sum_{i=1}^{n} \alpha_i^2 \sigma_i^2,
\]
subject to the constraint $1 = \sum_{i=1}^{n} \alpha_i$. Applying the technique of finding the minimum using Lagrange multipliers yields the following system of equations:
\[
\frac{\partial}{\partial \alpha_i}\left(\sum_{j=1}^{n} \alpha_j^2 \sigma_j^2\right) = \lambda\, \frac{\partial}{\partial \alpha_i}\left(\sum_{j=1}^{n} \alpha_j\right) \quad\text{for } i = 1, 2, \ldots, n,
\qquad
\sum_{i=1}^{n} \alpha_i = 1.
\]

Minimum Variance Analysis II

These equations are equivalent, respectively, to
\[
2 \alpha_i \sigma_i^2 = \lambda \quad\text{for } i = 1, 2, \ldots, n, \qquad\text{and}\qquad \sum_{i=1}^{n} \alpha_i = 1.
\]
Solving for α_i in the first equation and substituting into the second equation determines that
\[
\lambda = \frac{2}{\displaystyle\sum_{j=1}^{n} \frac{1}{\sigma_j^2}}.
\]
Substituting this expression for λ into the first equation yields
\[
\alpha_i = \frac{\dfrac{1}{\sigma_i^2}}{\displaystyle\sum_{j=1}^{n} \dfrac{1}{\sigma_j^2}} \quad\text{for } i = 1, 2, \ldots, n.
\]
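
The closed-form weights can be compared against a direct numerical search. The sketch below is illustrative only (the four variances are assumed values); it evaluates the closed-form weights and checks that no randomly drawn weight vector summing to one attains a smaller portfolio variance.

```python
import numpy as np

# Illustrative check (assumed variances): closed-form minimum-variance weights
#   alpha_i = (1 / sigma_i^2) / sum_j (1 / sigma_j^2)
# versus a brute-force search over random weight vectors summing to one.
sigma2 = np.array([0.04, 0.09, 0.16, 0.25])
alpha_closed = (1.0 / sigma2) / np.sum(1.0 / sigma2)

rng = np.random.default_rng(4)
w = rng.dirichlet(np.ones(len(sigma2)), size=200_000)   # random weights, each row sums to 1
variances = (w**2 * sigma2).sum(axis=1)                 # Var(W) for uncorrelated returns

print(alpha_closed)                                       # closed-form weights
print(w[variances.argmin()])                              # best random candidate, close to closed form
print((alpha_closed**2 * sigma2).sum(), variances.min())  # closed-form variance is at least as small
```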

Portfolio Separation Theorem

Proof: Suppose x is a portfolio for which r(x) = b. Then
\[
r\!\left(\frac{1}{b} x\right) = \frac{1}{b}\, r(x) = 1,
\]
so (1/b)x is a portfolio with unit expected rate of return. Since the portfolio w* minimizes variance among portfolios with unit expected rate of return,
\[
\sigma^2(b w^*) = b^2 \sigma^2(w^*) \le b^2 \sigma^2\!\left(\frac{1}{b} x\right) = \sigma^2(x).
\]