Optimal portfolio strategies under partial information with expert opinions


Optimal portfolio strategies under partial information with expert opinions (1/35)
Ralf Wunderlich, Brandenburg University of Technology Cottbus, Germany
Joint work with Rüdiger Frey
Research Seminar, WU Wien, December 7, 2012

Agenda (2/35)
1. Dynamic Portfolio Optimization
2. Partial Information and Expert Opinions
3. Optimization for Power Utility
4. Approximation of the Optimal Strategy

Dynamic Portfolio Optimization (3/35)
Initial capital $x_0 > 0$, horizon $[0, T]$.
Aim: maximize expected utility of terminal wealth.
Problem: find an optimal investment strategy, i.e. how many shares of which asset have to be held at which time by the portfolio manager.
Market model: continuously tradable assets; the drift depends on an unobservable finite-state Markov chain; the investor only observes stock prices and expert opinions.

Classical Black-Scholes Model of Financial Market (4/35)
$(\Omega, \mathcal{G} = (\mathcal{G}_t)_{t \in [0,T]}, P)$ filtered probability space.
Bond: $S^0_t = e^{rt}$, $r$ risk-free interest rate.
Stocks: prices $S_t = (S^1_t, \ldots, S^n_t)$, returns $dR^i_t = dS^i_t / S^i_t$,
  $dR_t = \mu\, dt + \sigma\, dW_t$
$\mu \in \mathbb{R}^n$ average stock return (drift), $\sigma \in \mathbb{R}^{n \times n}$ volatility, $W_t$ $n$-dimensional Brownian motion.
Parameters $\mu$ and $\sigma$ are constant and known.
Generalization: time-dependent (non-random) parameters $\mu, \sigma, r$.
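As a quick numerical illustration of the return dynamics $dR_t = \mu\,dt + \sigma\,dW_t$ above, the following is a minimal Euler-type simulation sketch (not part of the talk; all parameter values are illustrative assumptions):

```python
import numpy as np

def simulate_returns(mu, sigma, T=1.0, n_steps=252, seed=0):
    """Simulate cumulative returns R_t with dR_t = mu dt + sigma dW_t (constant parameters)."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    dt = T / n_steps
    dW = rng.normal(scale=np.sqrt(dt), size=(n_steps, n))   # Brownian increments
    dR = mu * dt + dW @ sigma.T                              # return increments
    return np.vstack([np.zeros(n), np.cumsum(dR, axis=0)])  # R_0 = 0

# Illustrative parameters: two stocks, 5% and 8% drift, diagonal volatility
R = simulate_returns(mu=np.array([0.05, 0.08]), sigma=np.diag([0.2, 0.3]))
```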

Portfolio (5/35)
Initial capital $X_0 = x_0 > 0$.
Wealth at time $t$ is split as $X_t = X_t\,(h^0_t + h^1_t + \cdots + h^n_t)$ (bond, stock 1, ..., stock $n$), where $h^k_t$ is the fraction of wealth invested in asset $k$.
Strategy: $h_t = (h^1_t, \ldots, h^n_t)^\top$.
Self-financing condition (assume $r = 0$ for simplicity).
Wealth equation: $X_t$ satisfies the linear SDE
  $dX^{(h)}_t = X^{(h)}_t\, h_t^\top (\mu\, dt + \sigma\, dW_t), \qquad X^{(h)}_0 = x_0.$

Utility Function (6/35)
$U : [0, \infty) \to \mathbb{R} \cup \{-\infty\}$ strictly increasing and concave.
Inada conditions: $\lim_{x \to 0} U'(x) = \infty$ and $\lim_{x \to \infty} U'(x) = 0$.
$U(x) = \dfrac{x^\theta}{\theta}$ for $\theta \in (-\infty, 1) \setminus \{0\}$ (power utility), $U(x) = \log x$ for $\theta = 0$ (log-utility); note that the normalized form $\frac{x^\theta - 1}{\theta}$ tends to $\log x$ as $\theta \to 0$.
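For concreteness, a minimal sketch of this utility family (not from the slides; purely illustrative):

```python
import numpy as np

def crra_utility(x, theta):
    """Power utility x**theta / theta for theta in (-inf, 1) \\ {0}; log-utility for theta = 0."""
    x = np.asarray(x, dtype=float)
    if theta == 0.0:
        return np.log(x)
    return x**theta / theta
```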

Optimization Problem (7/35)
Wealth: $dX^{(h)}_t = X^{(h)}_t\, h_t^\top (\mu\, dt + \sigma\, dW_t)$, $X^{(h)}_0 = x_0$.
Admissible strategies: $\mathcal{H} = \{(h_t)_{t \in [0,T]} : h_t \in \mathbb{R}^n,\ E[\exp\{\int_0^T \|h_t\|^2\, dt\}] < \infty\}$.
Reward function: $v(t, x, h) = E_{t,x}[U(X^{(h)}_T)]$ for $h \in \mathcal{H}$.
Value function: $V(t, x) = \sup_{h \in \mathcal{H}} v(t, x, h)$.
Find an optimal strategy $h^* \in \mathcal{H}$ such that $V(0, x_0) = v(0, x_0, h^*)$.
Solution: optimal fractions of wealth $h^*_t = \frac{1}{1-\theta} (\sigma \sigma^\top)^{-1} \mu = \text{const}$, Merton (1969, 1973), using methods from dynamic programming.
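A one-line numerical check of the constant Merton fraction $h^* = \frac{1}{1-\theta}(\sigma\sigma^\top)^{-1}\mu$ might look as follows (a sketch with illustrative parameters, not code from the talk):

```python
import numpy as np

def merton_fraction(mu, sigma, theta):
    """Constant optimal fractions h* = (1/(1-theta)) (sigma sigma^T)^{-1} mu for power utility."""
    return np.linalg.solve(sigma @ sigma.T, mu) / (1.0 - theta)

# Example: two stocks, risk-aversion parameter theta = 0.5
h_star = merton_fraction(np.array([0.05, 0.08]), np.diag([0.2, 0.3]), theta=0.5)
```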

Drawbacks of the Merton Strategy (8/35)
Sensitive dependence of investment strategies on the drift $\mu$ of the assets.
The drift is hard to estimate empirically: one needs data over long time horizons (in contrast to volatility estimation); it is not constant and depends on the state of the economy.
Non-intuitive strategies: for a constant fraction of wealth in $(0, 1)$ one has to sell stocks when prices increase and buy stocks when prices decrease.
$\Rightarrow$ Model the drift as a stochastic process that is not directly observable.

Models With Partial Information on the Drift (9/35)
The drift depends on an additional source of randomness: $\mu = \mu_t = \mu(Y_t)$ with a factor process $Y_t$.
The investor is not informed about the factor process $Y_t$; he only observes
  stock prices $S_t$ (or equivalently stock returns $R_t$), and
  expert opinions: news, company reports, recommendations of analysts or rating agencies, his own view about future performance.
$\Rightarrow$ Model with partial information.
Problem: the investor needs to learn the drift from observable quantities, i.e. find an estimate or filter for $\mu(Y_t)$.

Models With Partial Information on the Drift (cont.) (10/35)
Linear Gaussian model: Lakner (1998), Nagai & Peng (2002), Brendle (2006).
The drift $\mu(Y_t) = Y_t$ is a mean-reverting process
  $dY_t = \alpha(\overline{\mu} - Y_t)\, dt + \beta\, dW^1_t,$
where $W^1_t$ is a Brownian motion (in)dependent of $W_t$.

Models With Partial Information on the Drift (cont.) (11/35)
Hidden Markov model (HMM): Sass & Haussmann (2004), Rieder & Bäuerle (2005), Nagai & Runggaldier (2008).
Factor process $Y_t$: finite-state Markov chain, independent of $W_t$, with state space $\{e_1, \ldots, e_d\}$, the unit vectors in $\mathbb{R}^d$.
States of the drift: $\mu(Y_t) = M Y_t$ where $M = (\mu_1, \ldots, \mu_d)$.
Generator or rate matrix $Q \in \mathbb{R}^{d \times d}$; diagonal entries $Q_{kk} = -\lambda_k$, where $\lambda_k$ is the exponential rate of leaving state $k$; conditional transition probabilities $P(Y_t = e_l \mid Y_{t-} = e_k,\, Y_t \neq Y_{t-}) = Q_{kl}/\lambda_k$.
Initial distribution $(\pi_1, \ldots, \pi_d)$.
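The chain just described can be simulated directly from $Q$ (exponential holding times with rate $\lambda_k = -Q_{kk}$, then a jump chosen with probabilities $Q_{kl}/\lambda_k$). A minimal sketch, with illustrative parameters only:

```python
import numpy as np

def simulate_markov_chain(Q, pi0, T, seed=0):
    """Simulate a finite-state Markov chain with generator Q on [0, T].
    Returns jump times and the state index k (i.e. Y_t = e_k) on each interval."""
    rng = np.random.default_rng(seed)
    k = rng.choice(len(pi0), p=pi0)               # initial state from (pi_1, ..., pi_d)
    times, states, t = [0.0], [k], 0.0
    while True:
        lam = -Q[k, k]                            # rate of leaving state k
        t += rng.exponential(1.0 / lam)           # exponential holding time
        if t >= T:
            break
        probs = Q[k].copy()
        probs[k] = 0.0
        k = rng.choice(len(probs), p=probs / lam) # next state with probability Q_kl / lambda_k
        times.append(t)
        states.append(k)
    return np.array(times), np.array(states)

# Illustrative 2-state example
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
times, states = simulate_markov_chain(Q, pi0=[0.5, 0.5], T=5.0)
```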

HMM Filtering (12/35)
Returns (observations): $dR_t = \frac{dS_t}{S_t} = \mu(Y_t)\, dt + \sigma\, dW_t$.
Drift (non-observable, hidden state): $\mu(Y_t) = M Y_t$.
Investor filtration: $F = (F_t)_{t \in [0,T]}$ with $F_t = \sigma(S_u : u \leq t) \subset \mathcal{G}_t$.
Filter: $p^k_t := P(Y_t = e_k \mid F_t)$, filtered drift $\widehat{\mu}(Y_t) := E[\mu(Y_t) \mid F_t] = \mu(p_t) = \sum_{j=1}^d p^j_t\, \mu_j$.
Innovations process: $\widetilde{B}_t := \sigma^{-1}\big( R_t - \int_0^t \widehat{\mu}(Y_s)\, ds \big)$ is an $F$-Brownian motion.
HMM filter (Wonham 1965, Liptser & Shiryaev 1974, Elliott 1993):
  $p^k_0 = \pi_k, \qquad dp^k_t = \sum_{j=1}^d Q_{jk}\, p^j_t\, dt + \beta_k^\top(p_t)\, d\widetilde{B}_t,$
where $\beta_k(p) = p_k\, \sigma^{-1}\big( \mu_k - \sum_{j=1}^d p^j \mu_j \big)$.
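A minimal Euler-type discretization of this filter equation (a sketch under the stated model, not code from the talk; the simplex projection at the end is a numerical safeguard added here, not part of the slide):

```python
import numpy as np

def wonham_filter_step(p, dR, Q, M, sigma, dt):
    """One Euler step of the HMM filter dp_k = sum_j Q_jk p_j dt + beta_k(p)^T dB_tilde.

    p     : current filter probabilities, shape (d,)
    dR    : observed return increment over dt, shape (n,)
    M     : drift states as columns, shape (n, d)
    sigma : volatility matrix, shape (n, n)
    """
    sigma_inv = np.linalg.inv(sigma)
    mu_hat = M @ p                                          # filtered drift, shape (n,)
    dB = sigma_inv @ (dR - mu_hat * dt)                     # innovations increment
    beta_rows = (p[:, None] * (M - mu_hat[:, None]).T) @ sigma_inv.T  # rows beta_k(p)^T, shape (d, n)
    p_new = p + (Q.T @ p) * dt + beta_rows @ dB
    p_new = np.clip(p_new, 1e-12, None)                     # numerical safeguard: stay in the simplex
    return p_new / p_new.sum()
```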

HMM Filtering: Example (13/35)

HMM Filtering: Example (14/35)

Expert Opinions (15/35)
Academic literature: the drift is driven by unobservable factors; models with partial information, apply filtering techniques (linear Gaussian models, hidden Markov models).
Practitioners use the static Black-Litterman model: apply Bayesian updating to combine subjective views (such as "asset 1 will grow by 5%") with empirical or implied drift estimates.
The present paper combines the two approaches: dynamic models with partial observation including expert opinions.

Expert Opinions (16/35)
Modelled by a marked point process $I = (T_n, Z_n)$, written as a random measure $I(dt, dz)$.
At random points in time $T_n \sim \mathrm{Poi}(\lambda)$ the investor observes a random variable $Z_n \in Z$.
$Z_n$ depends on the current state $Y_{T_n}$, with density $f(Y_{T_n}, z)$; the $(Z_n)$ are conditionally independent given $F^Y_T = \sigma(Y_s : s \in [0, T])$.
Examples:
  Absolute view: $Z_n = \mu(Y_{T_n}) + \sigma_\varepsilon\, \varepsilon_n$, with $(\varepsilon_n)$ i.i.d. $N(0, 1)$. The view "$S$ will grow by 5%" is modelled by $Z_n = 0.05$; $\sigma_\varepsilon$ models the confidence of the investor.
  Relative view (2 assets): $Z_n = \mu_1(Y_{T_n}) - \mu_2(Y_{T_n}) + \sigma_\varepsilon\, \varepsilon_n$.
Investor filtration: $F = (F_t)$ with $F_t = \sigma(S_u : u \leq t;\ (T_n, Z_n) : T_n \leq t)$.
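A small sketch of how such absolute views could be generated alongside the hidden chain from the earlier snippet (illustrative only; function names and parameters are assumptions, not from the paper):

```python
import numpy as np

def simulate_expert_opinions(times, states, mu_states, lam, sigma_eps, T, seed=1):
    """Draw opinion times T_n from a Poisson process with intensity lam and
    absolute views Z_n = mu(Y_{T_n}) + sigma_eps * eps_n (single-asset case)."""
    rng = np.random.default_rng(seed)
    n_opinions = rng.poisson(lam * T)
    T_n = np.sort(rng.uniform(0.0, T, size=n_opinions))
    idx = np.searchsorted(times, T_n, side="right") - 1      # chain state prevailing at T_n
    Z_n = mu_states[states[idx]] + sigma_eps * rng.standard_normal(n_opinions)
    return T_n, Z_n
```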

HMM Filtering - Including Expert Opinions (17/35)
The extra information has no impact on the filter $p_t$ between the information dates $T_n$.
Bayesian updating at $t = T_n$: $p^k_{T_n} \propto p^k_{T_n-}\, f(e_k, Z_n)$ (recall that $f(Y_{T_n}, z)$ is the density of $Z_n$ given $Y_{T_n}$), with normalizer $\sum_{j=1}^d p^j_{T_n-}\, f(e_j, Z_n) =: f(p_{T_n-}, Z_n)$.
HMM filter:
  $p^k_0 = \pi_k, \qquad dp^k_t = \sum_{j=1}^d Q_{jk}\, p^j_t\, dt + \beta_k^\top(p_t)\, d\widetilde{B}_t + p^k_{t-} \int_Z \Big( \frac{f(e_k, z)}{f(p_{t-}, z)} - 1 \Big)\, \widetilde{I}(dt, dz).$
Compensated measure: $\widetilde{I}(dt, dz) := I(dt, dz) - \underbrace{\lambda\, dt \sum_{k=1}^d p^k_{t-}\, f(e_k, z)\, dz}_{\text{compensator}}$.
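The update at an information date is a pointwise Bayes step. A minimal sketch for Gaussian absolute views (the Gaussian likelihood is the modelling assumption from the previous slide; the normalizing constant of the density cancels and is omitted):

```python
import numpy as np

def expert_update(p, z, mu_states, sigma_eps):
    """Bayesian filter update at t = T_n:  p_k  <-  p_k * f(e_k, z) / normalizer,
    where f(e_k, z) is the N(mu_k, sigma_eps^2) density of the view Z_n given Y = e_k."""
    lik = np.exp(-0.5 * ((z - mu_states) / sigma_eps) ** 2)   # unnormalized Gaussian likelihoods
    post = p * lik
    return post / post.sum()

# Example: prior filter (0.5, 0.5), drift states -0.1 and 0.2, observed view z = 0.15
p_updated = expert_update(np.array([0.5, 0.5]), z=0.15,
                          mu_states=np.array([-0.1, 0.2]), sigma_eps=0.1)
```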

Filter: Example (18/35)

Filter: Example (19/35)

Optimization Problem Under Partial Information (20/35)
Wealth: $dX^{(h)}_t = X^{(h)}_t\, h_t^\top (\mu(Y_t)\, dt + \sigma\, dW_t)$, $X^{(h)}_0 = x_0$.
Admissible strategies: $\mathcal{H} = \{(h_t)_{t \in [0,T]} : h_t \in K \subset \mathbb{R}^n \text{ with } K \text{ compact},\ h \text{ is } F\text{-adapted}\}$.
Reward function: $v(t, x, h) = E_{t,x}[U(X^{(h)}_T)]$ for $h \in \mathcal{H}$.
Value function: $V(t, x) = \sup_{h \in \mathcal{H}} v(t, x, h)$.
Find an optimal strategy $h^* \in \mathcal{H}$ such that $V(0, x_0) = v(0, x_0, h^*)$.

Reduction to an Optimization Problem Under Full Information (21/35)
Consider the augmented state process $(X_t, p_t)$.
Wealth: $dX^{(h)}_t = X^{(h)}_t\, h_t^\top \big( \underbrace{\widehat{\mu}(Y_t)}_{= M p_t}\, dt + \sigma\, d\widetilde{B}_t \big)$, $X^{(h)}_0 = x_0$.
Filter: $dp^k_t = \sum_{j=1}^d Q_{jk}\, p^j_t\, dt + \beta_k^\top(p_t)\, d\widetilde{B}_t + p^k_{t-} \int_Z \big( \frac{f(e_k, z)}{f(p_{t-}, z)} - 1 \big)\, \widetilde{I}(dt, dz)$, $p^k_0 = \pi_k$.
Reward function: $v(t, x, p, h) = E_{t,x,p}[U(X^{(h)}_T)]$ for $h \in \mathcal{H}$.
Value function: $V(t, x, p) = \sup_{h \in \mathcal{H}} v(t, x, p, h)$.
Find $h^* \in \mathcal{H}$ such that $V(0, x_0, \pi) = v(0, x_0, \pi, h^*)$.

Logarithmic Utility (22/35)
$U(X^{(h)}_T) = \log(X^{(h)}_T) = \log x_0 + \int_0^T \big( h_s^\top \widehat{\mu}(Y_s) - \tfrac{1}{2} h_s^\top \sigma \sigma^\top h_s \big)\, ds + \int_0^T h_s^\top \sigma\, d\widetilde{B}_s$
$E[U(X^{(h)}_T)] = \log x_0 + E\Big[ \int_0^T \big( h_s^\top \widehat{\mu}(Y_s) - \tfrac{1}{2} h_s^\top \sigma \sigma^\top h_s \big)\, ds \Big]$
Optimal strategy: $h^*_t = (\sigma \sigma^\top)^{-1} \widehat{\mu}(Y_t)$.
Certainty equivalence principle: $h^*$ is obtained by replacing, in the optimal strategy under full information $h^{\mathrm{full}}_t = (\sigma \sigma^\top)^{-1} \mu(Y_t)$, the unknown drift $\mu(Y_t)$ by its filter estimate $\widehat{\mu}(Y_t)$.
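In code, the certainty-equivalence strategy for log-utility is just the Merton formula evaluated at the filtered drift (a one-line sketch, illustrative only):

```python
import numpy as np

def log_optimal_fraction(p, M, sigma):
    """Certainty-equivalence strategy h_t = (sigma sigma^T)^{-1} mu_hat with mu_hat = M p_t."""
    return np.linalg.solve(sigma @ sigma.T, M @ p)
```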

Solution for Power Utility (23/35)
Risk-sensitive control problem: Nagai & Runggaldier (2008), Davis & Lleo (2011).
Let $Z^{(h)}_T := \exp\Big\{ \theta \int_0^T h_s^\top \sigma\, d\widetilde{B}_s - \frac{\theta^2}{2} \int_0^T h_s^\top \sigma \sigma^\top h_s\, ds \Big\}$.
Change of measure: $P^{(h)}(A) = E[Z^{(h)}_T \mathbf{1}_A]$ for $A \in F_T$.
Reward function:
  $E_{t,x,p}[U(X^{(h)}_T)] = \frac{x^\theta}{\theta}\, \underbrace{E^{(h)}_{t,p}\Big[ \exp\Big\{ \int_t^T b(p_s, h_s)\, ds \Big\} \Big]}_{=:\ v(t, p, h)\ \text{(independent of } x)}$
where $b(p, h) := \theta \big( h^\top M p - \tfrac{1-\theta}{2}\, h^\top \sigma \sigma^\top h \big)$.
Value function: $V(t, p) = \sup_{h \in \mathcal{H}} v(t, p, h)$ for $0 < \theta < 1$.
Find $h^* \in \mathcal{H}$ such that $V(0, \pi) = v(0, \pi, h^*)$.
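For later numerical use, the integrand $b(p,h)$ is straightforward to code (a sketch of the formula reconstructed above):

```python
import numpy as np

def b_integrand(p, h, M, sigma, theta):
    """b(p, h) = theta * ( h^T M p - (1-theta)/2 * h^T sigma sigma^T h )."""
    return theta * (h @ (M @ p) - 0.5 * (1.0 - theta) * h @ (sigma @ sigma.T) @ h)
```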

HJB-Equation (24/35)
State: $dp_t = \alpha(p_t, h_t)\, dt + \beta^\top(p_t)\, d\widetilde{B}_t + \int_Z \gamma(p_{t-}, z)\, \widetilde{I}(dt, dz)$.
Generator:
  $\mathcal{L}^h g(p) = \tfrac{1}{2}\, \mathrm{tr}\big[ \beta^\top(p) \beta(p) D^2 g \big] + \alpha^\top(p, h)\, \nabla g + \lambda \int_Z \{ g(p + \gamma(p, z)) - g(p) \}\, f(p, z)\, dz$
HJB equation:
  $V_t(t, p) + \sup_{h \in \mathbb{R}^n} \big\{ \mathcal{L}^h V(t, p) + b(p, h)\, V(t, p) \big\} = 0$, terminal condition $V(T, p) = 1$.
Candidate for the optimal strategy:
  $h^* = h^*(t, p) = \frac{1}{1-\theta} (\sigma \sigma^\top)^{-1} \Big\{ M p + \frac{1}{V(t, p)}\, \sigma\, \beta(p)\, \nabla_p V(t, p) \Big\}$  (myopic strategy + correction)
The certainty equivalence principle does not hold.
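Given any numerical approximation of $V$ and its gradient, the candidate strategy above is easy to evaluate. A sketch (assuming $\beta(p)$ is the $n \times d$ matrix with columns $\beta_k(p)$, and that V and gradV come from some external approximation of the value function, which is an assumption here):

```python
import numpy as np

def candidate_strategy(p, M, sigma, theta, V, gradV):
    """h*(t,p) = 1/(1-theta) (sigma sigma^T)^{-1} [ M p + (1/V) sigma beta(p) gradV ]."""
    sigma_inv = np.linalg.inv(sigma)
    mu_hat = M @ p
    beta = (sigma_inv @ (M - mu_hat[:, None])) * p[None, :]   # columns beta_k(p) = p_k sigma^{-1}(mu_k - M p)
    correction = sigma @ beta @ gradV / V
    return np.linalg.solve(sigma @ sigma.T, mu_hat + correction) / (1.0 - theta)
```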

Justification of HJB-Equation (25/35)
Standard verification arguments fail, since we cannot guarantee uniform ellipticity of the diffusion part $\tfrac{1}{2}\, \mathrm{tr}[\beta^\top(p) \beta(p) D^2 V]$, i.e.
  $\xi^\top \beta^\top(p) \beta(p)\, \xi \geq c\, \|\xi\|^2$ for some $c > 0$ and all $\xi \in \mathbb{R}^d$,
which is satisfiable only if the number of assets $n$ is at least the number of drift states $d$.
Applying results and techniques from Pham (1998) $\Rightarrow$ $V$ is the unique continuous viscosity solution of the HJB-equation.

Regularization of HJB-Equation (26/35)
Add a small Gaussian perturbation $\frac{1}{\sqrt{m}}\, d\bar{B}_t$ to the SDE for the first $d-1$ components of the filter.
Consider the control problem for the modified dynamics of the filter.
The modified HJB-equation has an additional diffusion term $\frac{1}{2m}\, \Delta V^m(t, p)$ $\Rightarrow$ uniform ellipticity.
Applying results from Davis & Lleo (2011) $\Rightarrow$ classical solution $V^m(t, p)$ of the modified HJB-equation; standard verification results can be applied.
Convergence results for $m \to \infty$: the optimal strategy for the modified control problem is an $\varepsilon$-optimal strategy for the original control problem.

Approximation of the optimal strategy (27/35)
- Policy improvement
- Numerical solution of the HJB equation
- Feynman-Kac formula for the linearized HJB equation

Policy Improvement (28/35)
The starting approximation is the myopic strategy $h^{(0)}_t = \frac{1}{1-\theta} (\sigma \sigma^\top)^{-1} M p_t$.
The corresponding reward function is
  $V^{(0)}(t, p) := v(t, p, h^{(0)}) = E_{t,p}\Big[ \exp\Big( \int_t^T b\big(p^{(h^{(0)})}_s, h^{(0)}_s\big)\, ds \Big) \Big].$
Consider the following optimization problem:
  $\max_h \big\{ \mathcal{L}^h V^{(0)}(t, p) + b(p, h)\, V^{(0)}(t, p) \big\}$
with maximizer
  $h^{(1)}(t, p) = h^{(0)}(t, p) + \frac{1}{(1-\theta)\, V^{(0)}(t, p)}\, (\sigma^\top)^{-1} \beta(p)\, \nabla_p V^{(0)}(t, p).$
For the corresponding reward function $V^{(1)}(t, p) := v(t, p, h^{(1)})$ it holds:
Lemma ($h^{(1)}$ is an improvement of $h^{(0)}$): $V^{(1)}(t, p) \geq V^{(0)}(t, p)$.

Policy Improvement (cont.) (29/35)
Policy improvement requires a Monte Carlo approximation of the reward function
  $V^{(0)}(t, p) = E_{t,p}\Big[ \exp\Big( \int_t^T b\big(p^{(h^{(0)})}_s, h^{(0)}_s\big)\, ds \Big) \Big].$
Generate $N$ paths of $p^{(h^{(0)})}_s$ starting at time $t$ with $p_t = p$.
Estimate the expectation $E_{t,p}[\cdot]$.
Approximate the partial derivatives $\frac{\partial V^{(0)}}{\partial p_k}(t, p)$ by finite differences.
Compute the first iterate $h^{(1)}$.
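A highly simplified Monte Carlo sketch of this procedure is given below. It is not the implementation used in the talk: the expert-opinion jumps are omitted, the filter is simulated under the $h^{(0)}$-measure via the drift adjustment $\theta\, \beta_k^\top(p)\, \sigma^\top h^{(0)}$ coming from the Girsanov shift of $\widetilde{B}$, and all parameter values are illustrative assumptions.

```python
import numpy as np

def estimate_V0(t, p0, T, Q, M, sigma, theta, n_paths=2000, n_steps=200, seed=0):
    """Monte Carlo estimate of V^(0)(t,p) = E^{(h0)}_{t,p}[ exp( int_t^T b(p_s, h0_s) ds ) ].
    Diffusion-only sketch: expert-opinion jumps are omitted for brevity."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    dt = (T - t) / n_steps
    cov = sigma @ sigma.T
    sigma_inv = np.linalg.inv(sigma)
    vals = np.empty(n_paths)
    for i in range(n_paths):
        p, integral = p0.copy(), 0.0
        for _ in range(n_steps):
            mu_hat = M @ p
            h0 = np.linalg.solve(cov, mu_hat) / (1.0 - theta)                 # myopic strategy
            integral += theta * (h0 @ mu_hat - 0.5 * (1 - theta) * h0 @ cov @ h0) * dt
            beta = (sigma_inv @ (M - mu_hat[:, None])) * p[None, :]            # columns beta_k(p)
            drift = Q.T @ p + theta * beta.T @ sigma.T @ h0                     # h0-measure drift
            dB = rng.normal(scale=np.sqrt(dt), size=n)
            p = np.clip(p + drift * dt + beta.T @ dB, 1e-12, None)
            p /= p.sum()                                                        # numerical safeguard
        vals[i] = np.exp(integral)
    return vals.mean()

def grad_V0_fd(t, p0, T, eps=1e-3, **kw):
    """Central finite differences for the partial derivatives dV^(0)/dp_k."""
    g = np.zeros(len(p0))
    for k in range(len(p0)):
        e = np.zeros(len(p0))
        e[k] = eps
        g[k] = (estimate_V0(t, p0 + e, T, **kw) - estimate_V0(t, p0 - e, T, **kw)) / (2 * eps)
    return g
```

Using the same seed in both evaluations of the central difference (common random numbers) keeps the gradient estimate from being swamped by Monte Carlo noise; the first iterate $h^{(1)}$ is then obtained from the formula on the previous slide.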

Numerical Results (30/35)

Numerical Results (31/35)
For $t = T_n$: nearly full information $\Rightarrow$ correction $\approx 0$.

Numerical solution of HJB equation (32/35)
HJB equation:
  $V_t(t, p) + \sup_{h \in \mathbb{R}^n} \big\{ \mathcal{L}^h V(t, p) + b(p, h)\, V(t, p) \big\} = 0$, terminal condition $V(T, p) = 1$,
with generator
  $\mathcal{L}^h g(p) = \tfrac{1}{2}\, \mathrm{tr}\big[ \beta^\top(p) \beta(p) D^2 g \big] + \alpha^\top(p, h)\, \nabla g + \lambda \int_Z \{ g(p + \gamma(p, z)) - g(p) \}\, f(p, z)\, dz.$
Plugging in the optimal strategy
  $h^* = h^*(t, p) = \frac{1}{1-\theta} (\sigma \sigma^\top)^{-1} \Big\{ M p + \frac{1}{V(t, p)}\, \sigma\, \beta(p)\, \nabla_p V(t, p) \Big\}$  (myopic strategy + correction)
yields a nonlinear partial integro-differential equation.
Normalization of $p$ $\Rightarrow$ reduction to $d - 1$ spatial variables.
For $d = 2$: only one spatial variable, and the ellipticity condition is satisfied.

Numerical solution of HJB equation (cont.) (33/35)

Conclusion (34/35)
Portfolio optimization under partial information on the drift.
The investor observes stock prices and expert opinions.
Non-linear HJB-equation with a jump part.
Computation of the optimal strategy.

References (35/35)
Davis, M. and Lleo, S. (2011): Jump-diffusion risk-sensitive asset management II: jump-diffusion factor model. arXiv:1102.5126v1.
Frey, R., Gabih, A. and Wunderlich, R. (2012): Portfolio optimization under partial information with expert opinions. International Journal of Theoretical and Applied Finance, Vol. 15, No. 1.
Nagai, H. and Runggaldier, W.J. (2008): PDE approach to utility maximization for market models with hidden Markov factors. In: Seminar on Stochastic Analysis, Random Fields and Applications V (R. C. Dalang, M. Dozzi, F. Russo, eds.), Progress in Probability, Vol. 59, Birkhäuser Verlag, 493-506.
Pham, H. (1998): Optimal stopping of controlled jump diffusion processes: a viscosity solution approach. Journal of Mathematical Systems, Estimation, and Control, Vol. 8, No. 1, 1-27.
Rieder, U. and Bäuerle, N. (2005): Portfolio optimization with unobservable Markov-modulated drift process. Journal of Applied Probability 43, 362-378.
Sass, J. and Haussmann, U.G. (2004): Optimizing the terminal wealth under partial information: the drift process as a continuous time Markov chain. Finance and Stochastics 8, 553-577.