Portfolios
Christopher Ting
http://www.mysmu.edu/faculty/christophert/
Email: christopherting@smu.edu.sg | Tel: 6828 0364 | Office: LKCSB 5036
QF 101, Week 12, November 4, 2016

Table of Contents
1 Introduction
2 Eigenvalues and Eigenvectors
3 An Application
4 Matrix Calculus
5 Optimal Portfolio

Pre-U: Scalar and Vector
A scalar $a$ is a single number, of one dimension.
$\mathbb{N}$: natural numbers; $\mathbb{Z}$: integers; $\mathbb{Q}$: rational numbers; $\mathbb{R}$: real numbers.
A vector $\mathbf{a}$ is a $k \times 1$ list of numbers arranged in a column:
$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix} \in \mathbb{R}^k.$$

Pre-U: Matrix
A matrix $\mathbf{A}$ is a $k \times r$ rectangular array of numbers arranged in $k$ rows and $r$ columns:
$$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1r} \\ A_{21} & A_{22} & \cdots & A_{2r} \\ \vdots & \vdots & & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kr} \end{pmatrix}$$
Regular letters are used for scalars, lower-case bold letters for vectors, and upper-case bold letters for matrices. By convention, $a_i$ is the element of a vector in the $i$-th row, and $A_{ij}$ refers to the element of a matrix in the $i$-th row and $j$-th column.

Matrix Transpose
The transpose of a matrix is the matrix obtained by rotating each row into a column (clockwise) with the first element as pivot, i.e., for each row $i$,
$$\begin{bmatrix} A_{i1} & A_{i2} & \cdots & A_{ir} \end{bmatrix} \longrightarrow \begin{bmatrix} A_{i1} \\ A_{i2} \\ \vdots \\ A_{ir} \end{bmatrix}.$$
Notation-wise, we write the transpose of $\mathbf{A}$ as $\mathbf{A}'$ or $\mathbf{A}^\top$, and $\big[A_{ij}\big]' = \big[A_{ji}\big]$.
Question: If $\mathbf{A}$ is a $k \times r$ matrix, what are the length and width of $\mathbf{A}'$?

Matrix Addition
Two matrices $\mathbf{A}$ and $\mathbf{B}$ are addable only when they are of the same order (rows $\times$ columns):
$$\mathbf{A} + \mathbf{B} = [A_{ij}] + [B_{ij}] = [A_{ij} + B_{ij}].$$
Matrix addition follows the commutative and associative laws:
$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}, \qquad (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}).$$

Inner or Dot Product
If $\mathbf{a}$ and $\mathbf{b}$ are both $k \times 1$, then the sum of element-wise products is called the inner product or dot product:
$$\mathbf{a}'\mathbf{b} = \sum_{h=1}^{k} a_h b_h.$$
The result of an inner product is a scalar. The dot product is commutative: $\mathbf{a}'\mathbf{b} = \mathbf{b}'\mathbf{a}$. In Python, the inner product is coded as numpy.inner(a, b).
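As a quick check in NumPy (the two vectors below are arbitrary examples, not from the slides):

```python
import numpy as np

a = np.array([3, 1, 2, 5])
b = np.array([2, 3, 1, 2])

print(np.inner(a, b))   # 21 = 3*2 + 1*3 + 2*1 + 5*2
print(np.inner(b, a))   # 21 again: the dot product is commutative
```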

Linear Combination
A vector can be written as a linear combination:
$$\mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix} = a_1 \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + a_2 \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \cdots + a_k \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.$$
The vectors with 1 in the $i$-th row and the rest 0 are called the basis vectors, and denoted by $\mathbf{e}_i$. Compactly, the column vector is expressed as $\mathbf{a} = \sum_{i=1}^{k} a_i \mathbf{e}_i$, and the row vector $\mathbf{a}'$ as $\mathbf{a}' = \sum_{i=1}^{k} a_i \mathbf{e}_i'$.

Linear Dependence and Independence
1. A set of vectors $\mathbf{a}_i$, $i = 1, 2, \ldots, n$, is said to be linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that
$$c_1 \mathbf{a}_1 + c_2 \mathbf{a}_2 + \cdots + c_n \mathbf{a}_n = \sum_i c_i \mathbf{a}_i = \mathbf{o}. \qquad (1)$$
2. A set of vectors $\mathbf{a}_i$, $i = 1, 2, \ldots, n$, is said to be linearly independent if they are not linearly dependent.
3. Hence, $\mathbf{a}_i$, $i = 1, 2, \ldots, n$, are linearly independent iff equation (1) has only the trivial solution: $c_i = 0$ for all $i = 1, 2, \ldots, n$.

Einstein's Notation
In Einstein's notation, the inner product is written as
$$\mathbf{a}'\mathbf{b} = a_i \mathbf{e}_i' \, b_j \mathbf{e}_j = a_i b_j \, \mathbf{e}_i' \mathbf{e}_j = a_i b_j \, \delta_{ij} = a_i b_i,$$
where $\delta_{ij}$ is called the Kronecker delta, which is defined as
$$\mathbf{e}_i' \mathbf{e}_j = \delta_{ij} := \begin{cases} 1 & \text{when } i = j; \\ 0 & \text{otherwise.} \end{cases}$$
When $\mathbf{a}'\mathbf{b} = 0$, we say that $\mathbf{a}$ and $\mathbf{b}$ are orthogonal.

Inspirational Digression: General Relativity
Space-time curvature = energy-momentum tensor:
$$R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$

Matrix Multiplication
Multiplication by a scalar: if $c$ is a non-zero and finite scalar, $c\mathbf{A} = \mathbf{A}c = [A_{ij} c]$.
If $\mathbf{A}$ is $k \times r$ and $\mathbf{B}$ is $r \times s$, so that the number of columns of $\mathbf{A}$ equals the number of rows of $\mathbf{B}$, we say that $\mathbf{A}$ and $\mathbf{B}$ are conformable, and the matrix product $\mathbf{AB}$ can be defined. Writing $\mathbf{A}$ in terms of its rows $\mathbf{a}_i'$ and $\mathbf{B}$ in terms of its columns $\mathbf{b}_j$,
$$\mathbf{AB} = \begin{pmatrix} \mathbf{a}_1' \\ \mathbf{a}_2' \\ \vdots \\ \mathbf{a}_k' \end{pmatrix} \begin{pmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \cdots & \mathbf{b}_s \end{pmatrix} = \begin{pmatrix} \mathbf{a}_1'\mathbf{b}_1 & \mathbf{a}_1'\mathbf{b}_2 & \cdots & \mathbf{a}_1'\mathbf{b}_s \\ \mathbf{a}_2'\mathbf{b}_1 & \mathbf{a}_2'\mathbf{b}_2 & \cdots & \mathbf{a}_2'\mathbf{b}_s \\ \vdots & \vdots & & \vdots \\ \mathbf{a}_k'\mathbf{b}_1 & \mathbf{a}_k'\mathbf{b}_2 & \cdots & \mathbf{a}_k'\mathbf{b}_s \end{pmatrix}.$$

Matrix Multiplication
[Figure: schematic illustration of matrix multiplication.]
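The conformability requirement shows up directly in NumPy; the matrices below are arbitrary illustrations:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # 2 x 3
B = np.array([[1., 0.],
              [0., 1.],
              [2., 2.]])            # 3 x 2: conformable with A

# Entry (i, j) of AB is the dot product of row i of A with column j of B
print(A @ B)                        # 2 x 2 result
# B @ B would raise a ValueError: 3 x 2 times 3 x 2 is not conformable
```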

Square Matrix and Trace
A matrix is said to be square when $r = k$. A square matrix is said to be symmetric when $\mathbf{A}' = \mathbf{A}$.
The trace of a $k \times k$ square matrix $\mathbf{A}$ is defined as
$$\operatorname{tr}\mathbf{A} := \sum_{i=1}^{k} A_{ii} = A_{ij}\,\delta_{ij},$$
i.e., the sum of its diagonal elements.

Properties of Trace
For square matrices $\mathbf{A}$ and $\mathbf{B}$ and a real and finite constant $c$, we have
$$\operatorname{tr}(c\mathbf{A}) = c \operatorname{tr}\mathbf{A}; \qquad \operatorname{tr}\mathbf{A}' = \operatorname{tr}\mathbf{A}; \qquad \operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}\mathbf{A} + \operatorname{tr}\mathbf{B}; \qquad \operatorname{tr}\mathbf{I}_k = k.$$
For example, the trace of the matrix
$$\mathbf{A} = \begin{bmatrix} 3 & 4 \\ 7 & 9 \end{bmatrix} \quad \text{is} \quad \operatorname{tr}(\mathbf{A}) = 3 + 9 = 12.$$

Theorem: Commutativity of Trace
For $k \times r$ $\mathbf{A}$ and $r \times k$ $\mathbf{B}$, we have $\operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA})$.
Proof:
$$\operatorname{tr}(\mathbf{AB}) = \operatorname{tr}\begin{pmatrix} \mathbf{a}_1'\mathbf{b}_1 & \mathbf{a}_1'\mathbf{b}_2 & \cdots & \mathbf{a}_1'\mathbf{b}_k \\ \mathbf{a}_2'\mathbf{b}_1 & \mathbf{a}_2'\mathbf{b}_2 & \cdots & \mathbf{a}_2'\mathbf{b}_k \\ \vdots & \vdots & & \vdots \\ \mathbf{a}_k'\mathbf{b}_1 & \mathbf{a}_k'\mathbf{b}_2 & \cdots & \mathbf{a}_k'\mathbf{b}_k \end{pmatrix} = \sum_i \mathbf{a}_i'\mathbf{b}_i = \sum_i \mathbf{b}_i'\mathbf{a}_i = \operatorname{tr}(\mathbf{BA}).$$
We have applied the fact that the dot product is commutative.
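The theorem is easy to verify numerically; a minimal sketch with randomly generated conformable matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))     # k x r
B = rng.standard_normal((5, 3))     # r x k

# AB is 3 x 3 while BA is 5 x 5, yet the traces coincide
print(np.trace(A @ B), np.trace(B @ A))
```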

Rank of a Matrix
1. The rank of a matrix is defined as the number of its linearly independent rows, which is equal to the number of its linearly independent columns, i.e., row rank = column rank.
2. The rank of a matrix $\mathbf{A}$ is given by the maximum number of linearly independent rows (or columns). For example,
$$\operatorname{rank}\begin{bmatrix} 3 & 4 \\ 7 & 9 \end{bmatrix} = 2, \qquad \operatorname{rank}\begin{bmatrix} 3 & 6 \\ 2 & 4 \end{bmatrix} = 1.$$
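The same two example matrices can be checked with numpy.linalg.matrix_rank:

```python
import numpy as np

print(np.linalg.matrix_rank(np.array([[3, 4], [7, 9]])))   # 2: full rank
print(np.linalg.matrix_rank(np.array([[3, 6], [2, 4]])))   # 1: column 2 is twice column 1
```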

Properties of Rank
A matrix with a rank equal to its dimension is a matrix of full rank. A matrix that is not of full rank is known as a short-rank matrix, and is singular (non-invertible).
Four important properties:
$$\operatorname{rank}(\mathbf{A}) = \operatorname{rank}(\mathbf{A}');$$
$$\operatorname{rank}(c\mathbf{A}) = \operatorname{rank}(\mathbf{A}), \text{ where } c \text{ is a non-zero constant};$$
$$\operatorname{rank}(\mathbf{AB}) \le \min\big(\operatorname{rank}(\mathbf{A}), \operatorname{rank}(\mathbf{B})\big);$$
$$\operatorname{rank}(\mathbf{A}'\mathbf{A}) = \operatorname{rank}(\mathbf{A}\mathbf{A}') = \operatorname{rank}(\mathbf{A}).$$

Rank and Inverse Matrix
The rank of the $k \times r$ matrix ($r \le k$)
$$\mathbf{A} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_r \end{bmatrix}$$
is the number of linearly independent columns $\mathbf{a}_j$, and is written as $\operatorname{rank}\mathbf{A}$. We say that $\mathbf{A}$ has full rank if $\operatorname{rank}\mathbf{A} = r$.
A square $k \times k$ matrix $\mathbf{A}$ is said to be non-singular if it has full rank, i.e., $\operatorname{rank}\mathbf{A} = k$. If $\mathbf{A}$ is non-singular, then there exists a unique $k \times k$ matrix $\mathbf{A}^{-1}$, called the inverse of $\mathbf{A}$, that satisfies $\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}_k$.

Properties of Matrix Inverse
For non-singular $\mathbf{A}$ and $\mathbf{C}$:
$$(\mathbf{A}^{-1})' = (\mathbf{A}')^{-1};$$
$$(\mathbf{AC})^{-1} = \mathbf{C}^{-1}\mathbf{A}^{-1};$$
$$(\mathbf{A} + \mathbf{C})^{-1} = \mathbf{A}^{-1}\big(\mathbf{A}^{-1} + \mathbf{C}^{-1}\big)^{-1}\mathbf{C}^{-1};$$
$$\mathbf{A}^{-1} - (\mathbf{A} + \mathbf{C})^{-1} = \mathbf{A}^{-1}\big(\mathbf{A}^{-1} + \mathbf{C}^{-1}\big)^{-1}\mathbf{A}^{-1}.$$
If $\mathbf{A}$ is an orthogonal matrix, then $\mathbf{A}^{-1} = \mathbf{A}'$.

Determinant of Square Matrix
Let $\mathbf{A}$ be a general $k \times k$ matrix. Let $(j_1, j_2, \ldots, j_k)$ denote a permutation of $(1, 2, \ldots, k)$. There are $k!$ permutations. There is a unique count of the number of inversions of the indices of such a permutation relative to the natural order $(1, 2, \ldots, k)$; let $\varepsilon_{(j_1, j_2, \ldots, j_k)} = +1$ if this count is even and $\varepsilon_{(j_1, j_2, \ldots, j_k)} = -1$ if the count is odd. Then the determinant of $\mathbf{A}$ is defined as
$$\det\mathbf{A} := \sum_{\pi} \varepsilon_\pi \, A_{1 j_1} A_{2 j_2} \cdots A_{k j_k} = \xi^{\,j_1 j_2 \cdots j_k} A_{1 j_1} A_{2 j_2} \cdots A_{k j_k},$$
where
$$\xi^{\,j_1 j_2 \cdots j_k} := \begin{cases} +1 & \text{if } (j_1, j_2, \ldots, j_k) \text{ is even;} \\ -1 & \text{if } (j_1, j_2, \ldots, j_k) \text{ is odd.} \end{cases}$$

Properties of Determinant
For example, if $k = 2$, then the two permutations of $(1, 2)$ are $(1, 2)$ and $(2, 1)$, for which $\varepsilon_{(1,2)} = 1$ and $\varepsilon_{(2,1)} = -1$. Thus,
$$\det\mathbf{A} = \varepsilon_{(1,2)} A_{11} A_{22} + \varepsilon_{(2,1)} A_{21} A_{12} = A_{11} A_{22} - A_{21} A_{12}.$$
If $\mathbf{A}$ is non-singular, $\det\mathbf{A} \ne 0$. If $\mathbf{A}$ is orthogonal, then $\det\mathbf{A} = \pm 1$. If $\mathbf{A}$ is triangular (upper or lower), then $\det\mathbf{A} = \prod_{i=1}^{k} A_{ii}$.
Some other properties of a $k \times k$ square matrix $\mathbf{A}$ include:
$$\det\mathbf{A} = \det\mathbf{A}'; \qquad \det(c\mathbf{A}) = c^k \det\mathbf{A}; \qquad \det(\mathbf{AB}) = \det\mathbf{A}\,\det\mathbf{B}; \qquad \det(\mathbf{A}^{-1}) = (\det\mathbf{A})^{-1};$$
$$\det\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix} = \det\mathbf{D}\,\det\big(\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C}\big), \quad \text{if } \det\mathbf{D} \ne 0.$$

Determinant of a 3 x 3 Matrix
$$\det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \big( a_{11} a_{22} a_{33} + a_{21} a_{32} a_{13} + a_{31} a_{12} a_{23} \big) - \big( a_{13} a_{22} a_{31} + a_{23} a_{32} a_{11} + a_{33} a_{12} a_{21} \big)$$
(Mnemonic: copy the first two rows below the matrix; the three downward diagonals carry a plus sign and the three upward diagonals a minus sign.)

Calculating the Inverse of a 2 x 2 Matrix
The inverse of a $2 \times 2$ non-singular matrix
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \quad \text{is} \quad \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
The expression in the denominator, $ad - bc$, is the determinant of the matrix. If the matrix is
$$\begin{bmatrix} 2 & 1 \\ 4 & 6 \end{bmatrix},$$
then the inverse will be
$$\frac{1}{8}\begin{bmatrix} 6 & -1 \\ -4 & 2 \end{bmatrix} = \begin{bmatrix} \tfrac{3}{4} & -\tfrac{1}{8} \\ -\tfrac{1}{2} & \tfrac{1}{4} \end{bmatrix}.$$
As a check, multiply the two matrices together; the result should be the identity matrix $\mathbf{I}$.
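The same check can be scripted in NumPy, where numpy.linalg.inv computes the inverse directly:

```python
import numpy as np

M = np.array([[2., 1.],
              [4., 6.]])

M_inv = np.linalg.inv(M)
print(M_inv)            # [[ 0.75  -0.125], [-0.5    0.25 ]]
print(M @ M_inv)        # the 2 x 2 identity, up to floating-point rounding
```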

Eigenvalues
The characteristic equation of a $k \times k$ square matrix $\mathbf{A}$ is
$$\det(\mathbf{A} - \lambda \mathbf{I}_k) = 0.$$
It is a polynomial of degree $k$ in $\lambda$, so it has exactly $k$ roots, which are not necessarily distinct and may be real or complex. They are called the latent roots, characteristic roots, or eigenvalues of $\mathbf{A}$. When the eigenvalues are real, they are written in descending order: $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k$. We also write $\lambda_{\min}(\mathbf{A}) = \lambda_k$ and $\lambda_{\max}(\mathbf{A}) = \lambda_1$.

Eigenvectors
If $\lambda_i$ is an eigenvalue of $\mathbf{A}$, then $\mathbf{A} - \lambda_i \mathbf{I}_k$ is singular, i.e., there exists a non-zero vector $\mathbf{h}_i$ such that
$$(\mathbf{A} - \lambda_i \mathbf{I}_k)\,\mathbf{h}_i = \mathbf{o}.$$
The vector $\mathbf{h}_i$ is called a latent vector, characteristic vector, or eigenvector of $\mathbf{A}$ corresponding to $\lambda_i$. It is a fundamental result of linear algebra that an equation $\mathbf{M}\mathbf{v} = \mathbf{o}$ has a non-zero solution $\mathbf{v}$ if and only if the determinant $\det(\mathbf{M})$ is zero. Hence,
$$f(\lambda) := \det(\mathbf{A} - \lambda \mathbf{I}) = 0.$$

Calculating Eigenvalues: An Example
Let $\mathbf{A}$ be the $2 \times 2$ matrix
$$\mathbf{A} = \begin{bmatrix} 5 & 1 \\ 2 & 4 \end{bmatrix}.$$
Then the characteristic equation is $\big|\mathbf{A} - \lambda \mathbf{I}_2\big| = 0$. That is,
$$\left|\begin{bmatrix} 5 & 1 \\ 2 & 4 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right| = \begin{vmatrix} 5 - \lambda & 1 \\ 2 & 4 - \lambda \end{vmatrix} = (5 - \lambda)(4 - \lambda) - 2 = \lambda^2 - 9\lambda + 18 = 0.$$
The solutions are $\lambda = 6$ and $\lambda = 3$. The characteristic roots are also known as eigenvalues.
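In practice one lets NumPy do the work; a sketch for the same matrix (note that numpy.linalg.eig does not guarantee any particular ordering of the eigenvalues):

```python
import numpy as np

A = np.array([[5., 1.],
              [2., 4.]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # [6. 3.] (no particular order is guaranteed)
h = eigenvectors[:, 0]              # columns are unit-norm eigenvectors
print(A @ h, eigenvalues[0] * h)    # the two sides of A h = lambda h agree
```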

A Simple PageRank Algorithm
Transition matrix of the directed graph of 4 web sites:
$$\mathbf{A} = \begin{pmatrix} 0 & 0 & 1 & \tfrac{1}{2} \\ \tfrac{1}{3} & 0 & 0 & 0 \\ \tfrac{1}{3} & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ \tfrac{1}{3} & \tfrac{1}{2} & 0 & 0 \end{pmatrix}$$

A Simple PageRank Algorithm (Cont'd)
Denote by $x_1, x_2, x_3$, and $x_4$ the importance of the four sites. Analyzing the situation at each node, we get the system:
$$x_1 = x_3 + \tfrac{1}{2} x_4, \qquad x_2 = \tfrac{1}{3} x_1, \qquad x_3 = \tfrac{1}{3} x_1 + \tfrac{1}{2} x_2 + \tfrac{1}{2} x_4, \qquad x_4 = \tfrac{1}{3} x_1 + \tfrac{1}{2} x_2.$$
It is equivalent to $\mathbf{A}\mathbf{x} = \mathbf{x}$, where $\mathbf{x} := [x_1\ x_2\ x_3\ x_4]'$.

Finding the Eigenvector
The PageRank algorithm involves finding the eigenvector corresponding to the eigenvalue of 1!
$$\begin{pmatrix} 0 & 0 & 1 & \tfrac{1}{2} \\ \tfrac{1}{3} & 0 & 0 & 0 \\ \tfrac{1}{3} & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ \tfrac{1}{3} & \tfrac{1}{2} & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = 1 \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.$$
Since $x_2 = x_1/3$, we substitute it in $x_4$ to find that $x_4 = x_1/2$. In turn, we find that
$$x_3 = \frac{x_1}{3} + \frac{1}{2}\cdot\frac{x_1}{3} + \frac{1}{2}\cdot\frac{x_1}{2} = \frac{3x_1}{4}.$$

Finding the Eigenvector (Cont'd)
So the eigenvector is
$$\mathbf{v} := \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 1/3 \\ 3/4 \\ 1/2 \end{pmatrix}.$$
To find $x_1$, we impose the condition that the sum of the eigenvector's entries is equal to 1. Therefore,
$$x_1 \left( 1 + \frac{1}{3} + \frac{3}{4} + \frac{1}{2} \right) = 1.$$

Finding the Eigenvector (Cont'd)
So $x_1 = 12/31 = 0.3871$. Finally, the unique eigenvector is
$$\begin{pmatrix} 0.3871 \\ 0.1290 \\ 0.2903 \\ 0.1935 \end{pmatrix}.$$
Therefore, the PageRank is, in declining order of importance, Site 1, Site 3, Site 4, and Site 2.
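A NumPy sketch (a check of the slides' arithmetic, not part of the original deck) that recovers the same ranking by extracting the eigenvector for the eigenvalue 1:

```python
import numpy as np

A = np.array([[0,   0,   1, 1/2],
              [1/3, 0,   0, 0  ],
              [1/3, 1/2, 0, 1/2],
              [1/3, 1/2, 0, 0  ]])

eigenvalues, eigenvectors = np.linalg.eig(A)
i = np.argmin(np.abs(eigenvalues - 1))      # locate the eigenvalue 1
v = np.real(eigenvectors[:, i])
v = v / v.sum()                             # rescale so the entries sum to 1
print(v)                                    # approximately [0.3871 0.1290 0.2903 0.1935]
```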

Final Result
Reference: PageRank Algorithm - The Mathematics of Google Search
"...some of you may have heard of quants but at Google, they're just called employees, because they're all quants..." (James Simons)

Definition of Vector Differentiation
Let $\mathbf{x}$ be a column $k$-vector. Consider the function $g(\mathbf{x}) = g(x_1, x_2, \ldots, x_k) : \mathbb{R}^k \to \mathbb{R}$. The vector derivative is
$$\frac{\partial}{\partial \mathbf{x}} g(\mathbf{x}) = \begin{pmatrix} \dfrac{\partial g(\mathbf{x})}{\partial x_1} \\ \dfrac{\partial g(\mathbf{x})}{\partial x_2} \\ \vdots \\ \dfrac{\partial g(\mathbf{x})}{\partial x_k} \end{pmatrix} \quad \text{and} \quad \frac{\partial}{\partial \mathbf{x}'} g(\mathbf{x}) = \begin{pmatrix} \dfrac{\partial g(\mathbf{x})}{\partial x_1} & \dfrac{\partial g(\mathbf{x})}{\partial x_2} & \cdots & \dfrac{\partial g(\mathbf{x})}{\partial x_k} \end{pmatrix}.$$

Basic Properties
For constant vector $\mathbf{a}$ and matrix $\mathbf{A}$:
$$\frac{\partial}{\partial \mathbf{x}}\big(\mathbf{a}'\mathbf{x}\big) = \frac{\partial}{\partial \mathbf{x}}\big(\mathbf{x}'\mathbf{a}\big) = \mathbf{a}, \qquad \frac{\partial}{\partial \mathbf{x}'}\big(\mathbf{a}'\mathbf{x}\big) = \mathbf{a}'$$
$$\frac{\partial}{\partial \mathbf{x}'}\big(\mathbf{A}\mathbf{x}\big) = \mathbf{A}, \qquad \frac{\partial}{\partial \mathbf{x}}\big(\mathbf{x}'\mathbf{A}\mathbf{x}\big) = \big(\mathbf{A} + \mathbf{A}'\big)\mathbf{x}, \qquad \frac{\partial^2}{\partial \mathbf{x}\,\partial \mathbf{x}'}\big(\mathbf{x}'\mathbf{A}\mathbf{x}\big) = \mathbf{A} + \mathbf{A}'$$
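These identities can be sanity-checked numerically; a minimal sketch comparing the gradient formula for the quadratic form $\mathbf{x}'\mathbf{A}\mathbf{x}$ against central finite differences, with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # arbitrary (not necessarily symmetric) matrix
x = rng.standard_normal(4)

def g(v):
    return v @ A @ v              # quadratic form g(x) = x'Ax

eps = 1e-6
grad_fd = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
                    for e in np.eye(4)])

# The analytic gradient from the slide: (A + A')x
print(np.allclose(grad_fd, (A + A.T) @ x, atol=1e-6))   # True
```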

Assets
There are $n$ assets whose expected returns are denoted by $\mu_i := \mathrm{E}(r_i)$, $i = 1, 2, \ldots, n$. The covariance between asset $i$ and asset $j$ is denoted by $\sigma_{ij}$. Arrange the covariances into an $n \times n$ matrix:
$$\mathbf{\Sigma} := \big[\sigma_{ij}\big] = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2n} \\ \vdots & \vdots & & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_{nn} \end{pmatrix}$$
The diagonal element $\sigma_{ii}$ is the variance of asset $i$. Note that the covariance matrix $\mathbf{\Sigma}$ is symmetric.

Investment
For each dollar invested, a fraction $w_i$ is invested in asset $i$. It must be that $\sum_{i=1}^{n} w_i = 1$. The weights are arranged as an $n$-vector $\mathbf{w}$. The portfolio's expected return and variance are, respectively,
$$\mu_p := \mathrm{E}(r_p) = \sum_{i=1}^{n} w_i \mathrm{E}(r_i) = \sum_{i=1}^{n} w_i \mu_i = \mathbf{w}'\boldsymbol{\mu};$$
$$\sigma_p^2 := \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \sigma_{ij} = w_i \Sigma_{ij} w_j = \mathbf{w}'\mathbf{\Sigma}\mathbf{w}.$$

Numerical Illustration
Suppose there are two assets, with $\mu_1 = 5\%$ and $\mu_2 = 8\%$ per annum. The covariance matrix is the 2 by 2 matrix
$$\mathbf{\Sigma} := \begin{bmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{bmatrix} = \begin{bmatrix} 0.0625 & -0.01 \\ -0.01 & 0.16 \end{bmatrix}.$$
Given that the variance of asset 1 is 0.0625, its volatility is $\sqrt{0.0625} = 25\%$ per annum. The volatility of asset 2 is $\sqrt{0.16} = 40\%$ per annum. The portfolio's expected return and variance are, respectively,
$$\mu_p = 0.05\,w_1 + 0.08\,w_2,$$
$$\sigma_p^2 = 0.0625\,w_1^2 - 0.01\,w_1 w_2 - 0.01\,w_2 w_1 + 0.16\,w_2^2.$$
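A small NumPy sketch of these two formulas, using an arbitrary equal-weight portfolio for illustration:

```python
import numpy as np

mu = np.array([0.05, 0.08])
Sigma = np.array([[0.0625, -0.01],
                  [-0.01,   0.16]])

w = np.array([0.5, 0.5])               # hypothetical equal-weight portfolio
mu_p = w @ mu                          # expected return: 0.065
var_p = w @ Sigma @ w                  # variance: 0.050625
print(mu_p, var_p, np.sqrt(var_p))     # volatility is about 22.5%
```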

Optimization
Minimize half the portfolio variance under two constraints:
$$\sum_{i=1}^{n} w_i \mathrm{E}(r_i) = \mathrm{E}(r_p), \qquad \sum_{i=1}^{n} w_i = 1.$$
Constrained optimization with Lagrange multipliers $\lambda$ and $\psi$:
$$\min_{w_1, w_2, \ldots, w_n} L = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \sigma_{ij} - \lambda\left(\sum_{i=1}^{n} w_i \mu_i - \mu_p\right) - \psi\left(\sum_{i=1}^{n} w_i - 1\right)$$

In Matrix Form
The Lagrangian $L$ is
$$L = \frac{1}{2}\mathbf{w}'\mathbf{\Sigma}\mathbf{w} - \lambda\big(\mathbf{w}'\boldsymbol{\mu} - \mu_p\big) - \psi\big(\mathbf{w}'\mathbf{1} - 1\big).$$
The first-order conditions with respect to $\mathbf{w}$, $\lambda$, and $\psi$ are, respectively,
$$\mathbf{\Sigma}\mathbf{w} - \lambda\boldsymbol{\mu} - \psi\mathbf{1} = \mathbf{0}, \qquad \mathbf{w}'\boldsymbol{\mu} = \mu_p, \qquad \mathbf{w}'\mathbf{1} = 1.$$

Solution of First FOC
The first FOC gives the solution for the weight vector:
$$\mathbf{w} = \mathbf{\Sigma}^{-1}\big(\lambda\boldsymbol{\mu} + \psi\mathbf{1}\big).$$
But what are the values of the Lagrange multipliers $\lambda$ and $\psi$? To solve for $\lambda$ and $\psi$, substitute the optimal weight vector above into the last two FOCs:
$$\boldsymbol{\mu}'\mathbf{w} = \boldsymbol{\mu}'\mathbf{\Sigma}^{-1}\big(\lambda\boldsymbol{\mu} + \psi\mathbf{1}\big) = \mu_p,$$
$$\mathbf{1}'\mathbf{w} = \mathbf{1}'\mathbf{\Sigma}^{-1}\big(\lambda\boldsymbol{\mu} + \psi\mathbf{1}\big) = 1.$$

Solution of Second and Third FOCs
Let
$$A := \boldsymbol{\mu}'\mathbf{\Sigma}^{-1}\boldsymbol{\mu}, \qquad B := \boldsymbol{\mu}'\mathbf{\Sigma}^{-1}\mathbf{1}, \qquad C := \mathbf{1}'\mathbf{\Sigma}^{-1}\mathbf{1}.$$
The last two FOCs can be written as
$$\begin{bmatrix} A & B \\ B & C \end{bmatrix}\begin{bmatrix} \lambda \\ \psi \end{bmatrix} = \begin{bmatrix} \mu_p \\ 1 \end{bmatrix}.$$
Solving these two linear equations, we obtain
$$\lambda = \frac{C\mu_p - B}{AC - B^2}, \qquad \psi = \frac{A - B\mu_p}{AC - B^2}.$$

Optimal Weight Vector and Portfolio Variance
The optimal weight is then solved as
$$\mathbf{w} = \frac{\mathbf{\Sigma}^{-1}\Big(\big(C\mu_p - B\big)\boldsymbol{\mu} + \big(A - B\mu_p\big)\mathbf{1}\Big)}{AC - B^2}.$$
The portfolio variance is a quadratic function of the mean portfolio return $\mu_p$:
$$V(r_p) = \mathbf{w}'\mathbf{\Sigma}\mathbf{w} = \frac{C\mu_p^2 - 2B\mu_p + A}{AC - B^2} = \frac{C}{AC - B^2}\mu_p^2 - \frac{2B}{AC - B^2}\mu_p + \frac{A}{AC - B^2}.$$

Global Minimum Variance Portfolio
The global minimum variance portfolio is obtained by minimizing $V(r_p)$ with respect to $\mu_p$:
$$\frac{dV(r_p)}{d\mu_p} = \frac{2C}{AC - B^2}\mu_p - \frac{2B}{AC - B^2} = 0.$$
The results of the first-order condition are
$$\mu = \frac{B}{C}, \qquad \sigma^2 = \frac{1}{C}, \qquad \mathbf{w} = \frac{\mathbf{\Sigma}^{-1}\mathbf{1}}{C}.$$

Numerical Example
For the two assets, compute the inverse matrix:
$$\mathbf{\Sigma}^{-1} = \begin{bmatrix} 0.0625 & -0.01 \\ -0.01 & 0.16 \end{bmatrix}^{-1} = \frac{1}{0.0625 \times 0.16 - (-0.01)(-0.01)}\begin{bmatrix} 0.16 & 0.01 \\ 0.01 & 0.0625 \end{bmatrix} = \begin{bmatrix} 16.16 & 1.01 \\ 1.01 & 6.31 \end{bmatrix}.$$

Values of A, B, and C
So the three scalars $A$, $B$, and $C$ are
$$A = \boldsymbol{\mu}'\mathbf{\Sigma}^{-1}\boldsymbol{\mu} = \begin{bmatrix} 0.05 & 0.08 \end{bmatrix}\begin{bmatrix} 16.16 & 1.01 \\ 1.01 & 6.31 \end{bmatrix}\begin{bmatrix} 0.05 \\ 0.08 \end{bmatrix} = 0.088864;$$
$$B = \boldsymbol{\mu}'\mathbf{\Sigma}^{-1}\mathbf{1} = \begin{bmatrix} 0.05 & 0.08 \end{bmatrix}\begin{bmatrix} 16.16 & 1.01 \\ 1.01 & 6.31 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = 1.4441;$$
$$C = \mathbf{1}'\mathbf{\Sigma}^{-1}\mathbf{1} = 24.49.$$

Global Minimum Variance Portfolio
Let's consider the global minimum variance portfolio:
$$\mu = \frac{B}{C} = 0.0590, \qquad \sigma^2 = \frac{1}{C} = 0.0408.$$
So the minimum volatility is $\sigma = 20.21\%$. The weight vector $\mathbf{w}$ for the global minimum variance portfolio is
$$\mathbf{w} = \frac{\mathbf{\Sigma}^{-1}\mathbf{1}}{C} = \frac{1}{24.49}\begin{bmatrix} 16.16 & 1.01 \\ 1.01 & 6.31 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 70.11\% \\ 29.89\% \end{bmatrix}.$$
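The whole numerical example can be reproduced in a few lines of NumPy; this sketch follows the slides' formulas for $A$, $B$, $C$, and the global minimum variance weights:

```python
import numpy as np

mu = np.array([0.05, 0.08])
Sigma = np.array([[0.0625, -0.01],
                  [-0.01,   0.16]])
one = np.ones(2)

Sigma_inv = np.linalg.inv(Sigma)
A = mu @ Sigma_inv @ mu        # about 0.0889
B = mu @ Sigma_inv @ one       # about 1.4441
C = one @ Sigma_inv @ one      # about 24.49

w_gmv = Sigma_inv @ one / C    # global minimum variance weights
print(w_gmv)                   # about [0.7011 0.2989]
print(B / C, np.sqrt(1 / C))   # mean ~ 5.90%, volatility ~ 20.21%
```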

Efficient Frontier
[Figure: plot of the efficient frontier.]

Takeaways
Quantitative (mathematics and programming) skills provide a competitive advantage.
Ideas of eigenvalues and eigenvectors appear in Google's search engine!
Investment optimization: obtain the highest possible return with minimal risk.
The quants' way of deriving the efficient frontier.

Week 12 Assignment
1. What are the eigenvalues and eigenvectors of a diagonal matrix?
2. Let
$$\mathbf{A} = \begin{bmatrix} 5 & 8 & 16 \\ 4 & 1 & 8 \\ -4 & -4 & -11 \end{bmatrix}.$$
(A) Show that the characteristic equation is $(\lambda - 1)(\lambda + 3)^2 = 0$.
(B) Find the eigenvector corresponding to $\lambda_1 = 1$.
(C) Find the eigenvector corresponding to $\lambda = -3$.

Week 12 Additional Exercise
1. Consider the transformation of Cartesian coordinates $(x, y)$ to the polar coordinates $(r, \phi)$:
$$x = r\cos\phi, \qquad y = r\sin\phi.$$
(A) Show that
$$J(r, \phi) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \phi} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \phi} \end{bmatrix} = \begin{bmatrix} \cos\phi & -r\sin\phi \\ \sin\phi & r\cos\phi \end{bmatrix}.$$
(B) Show that $\det J = r$.
2. Apply the Jacobian $\det J$ in Problem 1 to show that
$$I := \int_{-\infty}^{\infty} \exp\left(-\frac{x^2}{2}\right) dx = \sqrt{2\pi}.$$