FINANCIAL OPTIMIZATION


FINANCIAL OPTIMIZATION
Lecture 1: General Principles and Analytic Optimization
Philip H. Dybvig
Washington University, Saint Louis, Missouri
Copyright © Philip H. Dybvig 2008

A Canonical Optimization Problem

Choose $x \in \mathbb{R}^N$ to minimize $f(x)$ subject to $(\forall i \in E)\ g_i(x) = 0$ and $(\forall i \in I)\ g_i(x) \le 0$.

$x = (x_1, \ldots, x_N)$ is a vector of choice variables. $f(x)$ is the scalar-valued objective function. $g_i(x) = 0$, $i \in E$, are equality constraints. $g_i(x) \le 0$, $i \in I$, are inequality constraints. $E \cap I = \emptyset$.

Maximizing $f(x)$ is equivalent to minimizing $-f(x)$. Problems in finance are usually written as maximizations; pick the most natural version to communicate your results. Sometimes constraints on $x$ are shown separately when specifying the domain of $x$, e.g., $x \in \mathbb{R}^N_+$, $x \in [0,1]^N$, or $x \in \{0,1\}^N$.
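For instance, the budget-constrained maximization studied later in this lecture,

$$\text{maximize } \sum_{n=1}^N \pi_n x_n \quad \text{subject to } \sum_{n=1}^N p_n x_n = W_0 \text{ and } (\forall n)\ x_n \ge 0,$$

fits the canonical form with $f(x) = -\sum_{n=1}^N \pi_n x_n$, one equality constraint $g_0(x) = \sum_{n=1}^N p_n x_n - W_0$ with $E = \{0\}$, and inequality constraints $g_n(x) = -x_n \le 0$ for $n \in I = \{1, \ldots, N\}$.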

Solutions

A feasible solution (or feasible choice) $x$ satisfies the constraints but need not optimize the objective function. A problem is said to be feasible if it has a feasible solution. An optimal solution (or optimal choice) $x^*$ is feasible and has the smallest value of the objective function (largest if maximizing) of all feasible solutions. (Also commonly called a solution, but the book uses that term for a candidate that need not be feasible or optimal.) An interior solution is an optimal solution at which no constraints are binding. A corner solution is an optimal solution at which constraints are binding. A local optimum is a feasible choice $x^*$ that, for some $\varepsilon > 0$, is optimal in the problem with the additional constraint $\|x - x^*\| \le \varepsilon$. The value of the problem is $f(x^*)$ for optimal $x^*$.

Drawing Pictures: Solutions

[Figures illustrating: a local optimum, a global optimum, an unbounded problem, a boundary solution, and an interior solution]

Finding a Sensible Optimization Problem: Example

Find a portfolio strategy for an aggressively managed fund doing market timing. Your colleagues have a model for how expected return evolves as a function of past returns in a binomial model. The reduced form of the model says that there are $N \gg 0$ final states of nature, and each state of nature $n$ has a price $p_n > 0$ and probability $\pi_n > 0$, where the states have been ordered so that $p_n/\pi_n$ is strictly increasing in $n$ (the cheapest states come first). Given initial wealth $W_0$, you should find an optimal strategy that pays $x_n$ in state $n$ and satisfies the budget constraint $\sum_{n=1}^N p_n x_n = W_0$.

First Model: In-Class Exercise

Given $p_n > 0$ and $\pi_n > 0$ for $n = 1, \ldots, N$, and $W_0 > 0$, choose $x = (x_1, \ldots, x_N) \in \mathbb{R}^N$ to maximize $\sum_{n=1}^N \pi_n x_n$ subject to $\sum_{n=1}^N p_n x_n = W_0$.

Is this optimization problem feasible? Does this optimization problem have an optimal solution? If so, does the optimal solution make sense? Hint: consider changing just two $x_i$'s; see the sketch below.
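A minimal numerical sketch of the hint, with hypothetical two-state numbers (not from the lecture): shift a budget-neutral amount between a cheap state and an expensive state (per unit of probability) and watch the expected payoff grow without bound.

    import numpy as np

    # Hypothetical inputs: state 1 is cheap per unit of probability
    # (p[0]/pi[0] < p[1]/pi[1]), matching the lecture's ordering.
    p = np.array([0.4, 0.6])    # state prices
    pi = np.array([0.5, 0.5])   # probabilities
    W0 = 1.0

    for t in [0.0, 1.0, 10.0, 100.0]:
        # Budget-neutral tilt: buy t more of state 1, sell p[0]/p[1]*t of state 2.
        x = np.array([W0 / (2 * p[0]) + t,
                      W0 / (2 * p[1]) - p[0] / p[1] * t])
        assert np.isclose(p @ x, W0)  # budget constraint still holds exactly
        print(t, pi @ x)              # expected payoff rises without bound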

Second Model: In-Class Exercise

Given $p_n > 0$ and $\pi_n > 0$ for $n = 1, \ldots, N$, and $W_0 > 0$, choose $x = (x_1, \ldots, x_N) \in \mathbb{R}^N$ to maximize $\sum_{n=1}^N \pi_n x_n$ subject to $\sum_{n=1}^N p_n x_n = W_0$ and $(\forall n)\ x_n \ge 0$.

Is this optimization problem feasible? Does this optimization problem have an optimal solution? If so, does the optimal solution make sense? Hint: look for a corner solution; see the sketch below.
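One way to check the corner-solution hint numerically (a sketch with made-up prices and probabilities; scipy's linprog minimizes, so the objective is negated):

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical inputs with p_n / pi_n strictly increasing in n.
    p = np.array([0.2, 0.3, 0.5])      # state prices
    pi = np.array([0.4, 0.35, 0.25])   # probabilities
    W0 = 1.0

    # maximize pi @ x  <=>  minimize -pi @ x, s.t. p @ x = W0, x >= 0
    res = linprog(c=-pi, A_eq=p.reshape(1, -1), b_eq=[W0],
                  bounds=[(0, None)] * 3, method="highs")
    print(res.x)  # all wealth lands in state 1, cheapest per unit of probability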

Kuhn-Tucker Conditions

$$\nabla f(x^*) = -\sum_{i \in E \cup I} \lambda_i \nabla g_i(x^*), \qquad (\forall i \in I)\ \lambda_i \ge 0, \qquad \lambda_i g_i(x^*) = 0.$$

The feasible solution $x^*$ is called regular if $\{\nabla g_i(x^*) : g_i(x^*) = 0\}$ is a linearly independent set. In particular, an interior solution is always regular. If $x^*$ is regular and $f$ and the $g_i$'s are differentiable, the Kuhn-Tucker conditions are necessary for feasible $x^*$ to be optimal. If the optimization problem is convex (meaning $f(x)$ and the $g_i(x)$'s for inequality constraints are convex functions, and the $g_i(x)$'s for equality constraints are affine¹ functions), then the Kuhn-Tucker conditions are sufficient for an optimum. (Recall that a function $q(x)$ is called convex if, for all $x_1$ and $x_2$ in the domain of $q$ and all $\alpha \in (0,1)$, $q(\alpha x_1 + (1-\alpha)x_2) \le \alpha q(x_1) + (1-\alpha)q(x_2)$.) The solution to a convex optimization problem is unique if the objective function is strictly convex.

¹ An affine function is a linear function plus a constant. Some authors define linear to mean the same thing; others take the constant to be zero in a linear function.
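A tiny worked instance (my own illustration, not from the slides): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$. Here $\nabla f = 2x$ and $\nabla g = -1$, so the conditions read

$$2x^* = \lambda, \qquad \lambda \ge 0, \qquad \lambda (1 - x^*) = 0.$$

Taking $\lambda = 0$ would force $x^* = 0$, which is infeasible, so the constraint binds: $x^* = 1$ with $\lambda = 2$. Since $f$ is convex and $g$ is affine, the conditions are also sufficient, and this is the global optimum.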

Drawing Pictures: Kuhn-Tucker Conditions

[Figures illustrating: an interior solution, a single binding constraint, multiple binding constraints, an inflection point, and an irregular point]

Third Model: In-Class Exercise

Given $p_n > 0$ and $\pi_n > 0$ for $n = 1, \ldots, N$, and $W_0 > 0$, choose $x = (x_1, \ldots, x_N) \in \mathbb{R}^N_{++}$ to maximize $\sum_{n=1}^N \pi_n \log(x_n)$ subject to $\sum_{n=1}^N p_n x_n = W_0$.

Is this optimization problem feasible? Does this optimization problem have an optimal solution? If so, does the optimal solution make sense? Hint: solve the Kuhn-Tucker conditions (a sketch of the derivation follows).
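A sketch of where the hint leads (my derivation; verify it yourself): the first-order condition for each $x_n$ is $\pi_n / x_n = \lambda p_n$, so $x_n = \pi_n / (\lambda p_n)$. Substituting into the budget constraint (and using $\sum_n \pi_n = 1$),

$$\sum_{n=1}^N p_n x_n = \frac{1}{\lambda} \sum_{n=1}^N \pi_n = \frac{1}{\lambda} = W_0 \;\Rightarrow\; \lambda = \frac{1}{W_0}, \qquad x_n = \frac{\pi_n W_0}{p_n} > 0.$$

The solution is interior (so regularity is automatic), and it makes sense: spend more in states that are likelier or cheaper.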

Shadow Prices

The $\lambda_i$'s in the Kuhn-Tucker conditions are sometimes called Lagrange multipliers, because of their role in the Lagrangian function related to the Principle of Least Action in physics, or dual variables, because they are related to linear functionals at the solution. More interestingly for us, the $\lambda_i$'s are shadow prices that measure how much we would pay at the margin to relax the constraint. This interpretation is useful for sensitivity analysis with respect to the level of the constraint. For example, if $\lambda_i = 0$, the constraint is not (strictly) binding and relaxing it slightly does not affect the solution. If $\lambda_i = 10$, relaxing the constraint by a small amount $\varepsilon > 0$ should reduce the objective function (in a minimization) by approximately $10\varepsilon$. This sort of sensitivity analysis is genuinely useful because some of the inputs are usually guesses or policy decisions made at another level, and we need to know how much they matter. Disclaimer: this discussion assumes a convex optimization (or nonlocal things might matter) at a regular point (or else the constraint might be redundant).
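In the third model, the multiplier on the budget constraint works out to $\lambda = 1/W_0$ (see the derivation above), and this marginal-value interpretation can be checked by finite differences. A minimal sketch with hypothetical numbers:

    import numpy as np

    p = np.array([0.2, 0.3, 0.5])      # hypothetical state prices
    pi = np.array([0.4, 0.35, 0.25])   # hypothetical probabilities

    def value(W0):
        # Optimal value of the third model at wealth W0, using x_n = pi_n*W0/p_n.
        return pi @ np.log(pi * W0 / p)

    W0, eps = 1.0, 1e-6
    fd = (value(W0 + eps) - value(W0)) / eps
    print(fd, 1.0 / W0)  # finite difference matches the multiplier lambda = 1/W0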

von Neumann-Morgenstern Preferences

Choosing to maximize the expectation of $\log(x)$ is a special case of von Neumann-Morgenstern preferences. The general form of these preferences maximizes the expectation of $u(x)$, where $u(\cdot)$ is the von Neumann-Morgenstern utility function. These preferences were given an axiomatic foundation by John von Neumann and Oskar Morgenstern, and they are the most commonly used preferences in financial research. The most popular preference assumption in practice is mean-variance utility, which has a lot of conceptual problems but is easy to work with. Academic researchers are looking at many other preference assumptions that go beyond these traditional models by including such features as preference for the timing of resolution of uncertainty and ambiguity aversion. These topics in Choice Theory are interesting but beyond the scope of this course.

Convex Optimization: Intuition

The Kuhn-Tucker conditions are local conditions: they only look at the value and derivative of the objective and constraint functions at a point. For a convex optimization problem, this is good enough, because any local solution is a global solution. For a convex optimization problem, the feasible set is convex, that is, for all feasible $x_1$ and $x_2$ and for all $\alpha \in (0,1)$, $\alpha x_1 + (1-\alpha)x_2$ is also feasible. (This follows easily from the definitions.) With convexity of the objective function, this implies that if another feasible choice is better, so are many nearby feasible choices. (This also follows easily from the definitions.) Therefore, any point that is not a global optimum is not a local optimum, which is equivalent to saying that any local optimum is also a global optimum.

Drawing Pictures: Convex and Nonconvex Optimization

[Figures illustrating: when a local optimum is (or is not) the global optimum; typical algorithms: gradient methods and Newton-Raphson]

Why Convexity Matters in Practice

With some interesting exceptions (such as integer linear programs and certain continuous-time dynamic problems), the choice problems we can solve reliably are convex optimization problems. These problems tend to be easier to solve reliably because a choice is optimal if and only if it is optimal locally (within some neighborhood), so an algorithm will not be fooled by a local optimum that is not global. Also, optimization algorithms have a clear path through the feasible set to the optimum and globally good local indications about how to make their way towards an optimum. A sufficiently smooth function of one variable is convex if its second derivative is everywhere nonnegative. A sufficiently smooth real-valued function of many variables is convex if its matrix of second derivatives (the Hessian) is everywhere positive semidefinite (all eigenvalues are nonnegative). If your choice problem is convex, you have some reason to expect your nonlinear optimizer to converge on an optimal solution; if it is not convex, you have every reason to think numerical optimization might have problems, and maybe you should double-check the solution.
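A quick sketch of the many-variable test for a quadratic objective $f(x) = \tfrac{1}{2} x' H x + c' x$, whose Hessian is the constant matrix $H$ (a hypothetical $H$, not from the lecture):

    import numpy as np

    # Hypothetical symmetric Hessian of a quadratic objective.
    H = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

    eigenvalues = np.linalg.eigvalsh(H)           # eigvalsh: symmetric matrices
    print(eigenvalues, (eigenvalues >= 0).all())  # all nonnegative => f is convex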

Mean-Variance Analysis: First Pass

Choose portfolio weights $x_0, x_1, \ldots, x_N$ to maximize

$$x_0 r_F + \sum_{i=1}^N x_i \mu_i - \frac{\gamma}{2} \sum_{i=1}^N \sum_{j=1}^N x_i \sigma_{ij} x_j,$$

subject to $\sum_{i=0}^N x_i = 1$.

$\mu = (\mu_i)$: vector of means. $\Sigma = (\sigma_{ij})$: covariance matrix, positive definite. $\gamma > 0$: risk aversion parameter.

Mean-Variance Analysis: Solution and Observation

First-order (Kuhn-Tucker) conditions:

$$r_F = \lambda \quad \text{(for } x_0\text{)}, \qquad \mu_i = \gamma \sum_{j=1}^N \sigma_{ij} x_j + \lambda \quad \text{for } i > 0,$$

or, in vector notation (using $\mathbf{1}$ to denote a vector of 1's of length $N$):

$$\mu = r_F \mathbf{1} + \gamma \Sigma x, \quad \text{which implies} \quad x = \frac{1}{\gamma} \Sigma^{-1} (\mu - r_F \mathbf{1}).$$

This is the most common financial application of optimization, but it is rarely done competently, because people do not use reasonable choices of $\mu$ and $\Sigma$ (sample estimates are dominated by estimation error) and compensate by adding ad hoc constraints that help but do not fix the problem.
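A minimal numpy sketch of the closed form (with made-up inputs; not an endorsement of plugging in raw sample estimates, per the warning above):

    import numpy as np

    mu = np.array([0.08, 0.10, 0.12])       # hypothetical mean returns
    Sigma = np.array([[0.04, 0.01, 0.00],   # hypothetical covariance matrix
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])
    rF, gamma = 0.03, 4.0                   # risk-free rate, risk aversion

    x = np.linalg.solve(Sigma, mu - rF) / gamma  # x* = (1/gamma) Sigma^{-1}(mu - rF 1)
    x0 = 1.0 - x.sum()                           # remainder goes to the risk-free asset
    print(x, x0)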

Summary

- Optimization problems have choice variables, objective functions, and constraints
- Optimal solutions and feasible solutions; interior solutions and corner solutions; local solutions
- Kuhn-Tucker conditions
- Shadow prices are useful for sensitivity analysis
- Convex optimization is safest