Higher Moment Coherent Risk Measures


Pavlo A. Krokhmal
Department of Mechanical and Industrial Engineering
The University of Iowa, 2403 Seamans Center, Iowa City, IA 52242
E-mail: krokhmal@engineering.uiowa.edu

April 2007

Abstract

The paper considers modeling of risk-averse preferences in stochastic programming problems using risk measures. We utilize the axiomatic foundation of coherent risk measures and deviation measures in order to develop simple representations that express risk measures via specially constructed stochastic programming problems. Using the developed representations, we introduce a new family of higher-moment coherent risk measures (HMCR), which includes, as a special case, the Conditional Value-at-Risk measure. It is demonstrated that the HMCR measures are compatible with the second order stochastic dominance and utility theory, can be efficiently implemented in stochastic optimization models, and perform well in portfolio optimization case studies.

Keywords: risk measures, stochastic programming, stochastic dominance, portfolio optimization

1 Introduction

Research and practice of portfolio management and optimization is driven to a large extent by tailoring the measures of reward (satisfaction) and risk (unsatisfaction/regret) of the investment venture to the specific preferences of an investor. While there exists a common consensus that an investment's reward may be adequately associated with its expected return, the methods for proper modeling and measurement of an investment's risk are subject to much more pondering and debate. In fact, the risk-reward or mean-risk models constitute an important part of investment science and, more generally, the field of decision making under uncertainty. The cornerstone of modern portfolio analysis was set up by Markowitz (1952, 1959), who advocated identification of the portfolio's risk with the volatility (variance) of its returns.
On the other hand, Markowitz's work led to formalization of the fundamental view that any decision under uncertainty may be evaluated in terms of its risk and reward. The seminal ideas of Markowitz are still widely used today in many areas of decision making, and the entire paradigm of bi-criteria risk-reward optimization has received extensive development in both directions of increasing the computational efficiency and enhancing the models for risk measurement and estimation. At the same time, it has been recognized that the symmetric attitude of the classical Mean-Variance (MV) approach, where both the positive and negative deviations from the expected level are penalized equally, does not always yield an adequate estimation of risks induced by the uncertainties. Hence, significant effort has been devoted to the development of downside risk measures and models. Replacing the variance by the lower standard semi-deviation as a measure of investment risk, so as to take into account only negative deviations from the expected level, was proposed as early as Markowitz (1959); see also the more recent works by Ogryczak and Ruszczyński (1999, 2001, 2002).

Supported in part by NSF grant DMI 0457473.

Among the popular downside risk models we mention the Lower Partial Moment and its special case, the Expected Regret, which is also known as Integrated Chance Constraint in stochastic programming (Bawa, 1975; Fishburn, 1977; Dembo and Rosen, 1999; Testuri and Uryasev, 2003; van der Vlerk, 2003). Widely known in the finance and banking industry is the Value-at-Risk measure (JP Morgan, 1994; Jorion, 1997; Duffie and Pan, 1997). Being simply a quantile of the loss distribution, the Value-at-Risk (VaR) concept has its counterparts in stochastic optimization (probabilistic, or chance, programming; see Prékopa, 1995), reliability theory, etc. Yet, minimization or control of risk using the VaR measure proved to be technically and methodologically difficult, mainly due to VaR's notorious non-convexity as a function of the decision variables. A downside risk measure that circumvents the shortcomings of VaR while offering a similar quantile approach to estimation of risk is the Conditional Value-at-Risk (CVaR) measure (Rockafellar and Uryasev, 2000, 2002; Krokhmal et al., 2002a). Risk measures that are similar to CVaR and/or may coincide with it are Expected Shortfall and Tail VaR (Acerbi and Tasche, 2002); see also Conditional Drawdown-at-Risk (Chekhlov et al., 2005; Krokhmal et al., 2002b). A simple yet effective risk measure closely related to CVaR is the so-called Maximum Loss, or Worst-Case Risk (Young, 1998; Krokhmal et al., 2002b), whose use in problems with uncertainties is also known as the robust optimization approach (see, e.g., Kouvelis and Yu, 1997). In the last few years, the formal theory of risk measures received a major impetus from the works of Artzner et al. (1999) and Delbaen (2002), who introduced an axiomatic approach to the definition and construction of risk measures by developing the concept of coherent risk measures.
Among the risk measures that satisfy the coherency properties are Conditional Value-at-Risk and Maximum Loss (Pflug, 2000; Acerbi and Tasche, 2002), coherent risk measures based on one-sided moments (Fischer, 2003), etc. Recently, Rockafellar et al. (2006) have extended the theory of risk measures to the case of deviation measures, and demonstrated a close relationship between coherent risk measures and deviation measures; spectral measures of risk have been proposed by Acerbi (2002). An approach to decision making under uncertainty, different from the risk-reward paradigm, is embodied by the von Neumann and Morgenstern (vNM) utility theory, which exercises a mathematically sound axiomatic description of preferences and construction of the corresponding decision strategies. Along with its numerous modifications and extensions, the vNM utility theory is widely adopted as a basic model of rational choice, especially in economics and social sciences (see, among others, Fishburn, 1970, 1988; Karni and Schmeidler, 1991). Thus, substantial attention has been paid in the literature to the development of risk-reward optimization models and risk measures that are consistent with expected utility maximization. In particular, it has been shown that under certain conditions the Markowitz MV framework is consistent with the vNM theory (Kroll et al., 1984). Ogryczak and Ruszczyński (1999, 2001, 2002) developed mean-semideviation models that are consistent with stochastic dominance concepts (Fishburn, 1964; Rothschild and Stiglitz, 1970; Levy, 1998); a class of risk-reward models with SSD-consistent coherent risk measures was discussed in De Giorgi (2005). Optimization with stochastic dominance constraints was recently considered by Dentcheva and Ruszczyński (2003); stochastic dominance-based portfolio construction was discussed in Roman et al. (2006).
In this paper we aim to offer an additional insight into the properties of axiomatically defined measures of risk by developing a number of representations that express risk measures via solutions of stochastic programming problems (Section 2.1); using the developed representations, we construct a new family of higher-moment coherent risk (HMCR) measures. In Section 2.2 it is demonstrated that the suggested representations are amenable to seamless incorporation into stochastic programming problems. In particular, implementation of the HMCR measures reduces to p-order conic programming, and can be approximated via linear programming. Section 2.3 shows that the developed results are applicable to deviation measures, while Section 2.4 illustrates that the HMCR measures are compatible with second-order stochastic dominance and utility theory. The conducted case study (Section 3) indicates that the family of HMCR measures has a strong potential for practical application in portfolio selection problems. Finally, the Appendix contains the proofs of the theorems introduced in the paper.

2 Modeling of risk measures as stochastic programs

The discussion in the Introduction has illustrated the variety of approaches to the definition and estimation of risk. Arguably, the recent advances in risk theory are associated with the axiomatic approach to construction of risk measures pioneered by Artzner et al. (1999). The present endeavor essentially exploits this axiomatic approach in order to devise simple computational recipes for dealing with several types of risk measures by representing them in the form of stochastic programming problems. These representations can be used to create new risk measures tailored to specific risk preferences, as well as to incorporate these preferences into stochastic programming problems. In particular, we present a new family of Higher Moment Coherent Risk (HMCR) measures. It will be shown that the HMCR measures are well-behaved in terms of theoretical properties, and demonstrate very promising performance in test applications.

Within the axiomatic framework of risk analysis, a risk measure R(X) of a random outcome X from some probability space (Ω, F, µ) may be defined as a mapping R : X → R, where X is a linear space of F-measurable functions X : Ω → R. In a more general setting one may assume X to be a separated locally convex space; for our purposes it suffices to consider X = L_p(Ω, F, P), 1 ≤ p ≤ ∞, where the particular value of p shall be clear from the context. As is traditional in convex analysis, we call a function f : X → R proper if f(X) > −∞ for all X ∈ X and dom f ≠ ∅, i.e., there exists X ∈ X such that f(X) < +∞ (see, e.g., Rockafellar, 1970; Zălinescu, 2002). In the remainder of the paper, we confine ourselves to risk measures that are proper and not identically equal to +∞. Also, throughout the paper it is assumed that X represents a loss function, i.e., small values of X are good, and large values are bad.

2.1 Convolution-type representations for coherent measures of risk

A coherent risk measure, according to Artzner et al. (1999) and Delbaen (2002), is defined as a mapping R : X → R that further satisfies the following four properties (axioms):

(A1) monotonicity: X ≤ 0 ⇒ R(X) ≤ 0 for all X ∈ X,
(A2) sub-additivity: R(X + Y) ≤ R(X) + R(Y) for all X, Y ∈ X,
(A3) positive homogeneity: R(λX) = λR(X) for all X ∈ X, λ > 0,
(A4) translation invariance: R(X + a) = R(X) + a for all X ∈ X, a ∈ R.
Observe that given the positive homogeneity (A3), the requirement of sub-additivity (A2) in the above definition can be equivalently replaced with the requirement of convexity (see also Schied and Föllmer, 2002):

(A2′) convexity: R(λX + (1 − λ)Y) ≤ λR(X) + (1 − λ)R(Y) for all X, Y ∈ X, 0 ≤ λ ≤ 1.

From axioms (A1)–(A4) one can easily derive the following useful properties of coherent risk measures (see, for example, Delbaen, 2002; Ruszczyński and Shapiro, 2006):

(C1) R(0) = 0 and, in general, R(a) = a for all a ∈ R,
(C2) X ≤ Y ⇒ R(X) ≤ R(Y), and, in particular, X ≤ a ⇒ R(X) ≤ a for all a ∈ R,
(C3) R(X − R(X)) = 0,
(C4) if X is a Banach lattice, R(X) is continuous in the interior of its effective domain

(where the inequalities X ≤ a, X ≤ Y, etc., are assumed to hold almost surely). From the definition of coherent risk measures it is easy to see that, for example, EX and ess.sup X, where

    ess.sup X = min{ η | X ≤ η }  if  { η | X ≤ η } ≠ ∅,  and  ess.sup X = +∞  otherwise,

are coherent risk measures; more examples can be found in Rockafellar et al. (2006). Below we present simple computational formulas that aid in the construction of coherent risk measures and their incorporation into stochastic programs. Namely, we execute the idea that one of the axioms (A3) or (A4) can be relaxed and then reinstated by solving an appropriately defined mathematical programming problem. In other words, one can construct a coherent risk measure via the solution of a stochastic programming problem that involves a function φ : X → R satisfying only three of the four axioms (A1)–(A4).
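The axioms (A1)–(A4) can also be verified numerically on a finite sample space. The following sketch (Python; equally likely scenarios and the confidence level are illustrative assumptions) checks all four axioms for the sample version of the Conditional Value-at-Risk measure of Example 1.1 below:

```python
import numpy as np

def cvar(x, alpha=0.9):
    # Formula (2) on an equally likely sample: the objective is convex and
    # piecewise linear in eta, so its minimum is attained at a sample point.
    obj = lambda eta: eta + np.mean(np.maximum(x - eta, 0.0)) / (1.0 - alpha)
    return min(obj(eta) for eta in x)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=1000), rng.normal(size=1000)

assert cvar(-np.abs(X)) <= 0                        # (A1) monotonicity: X <= 0 => R(X) <= 0
assert cvar(X + Y) <= cvar(X) + cvar(Y) + 1e-9      # (A2) sub-additivity
assert abs(cvar(2 * X) - 2 * cvar(X)) < 1e-9        # (A3) positive homogeneity
assert abs(cvar(X + 5.0) - (cvar(X) + 5.0)) < 1e-9  # (A4) translation invariance
```

The enumeration over η exploits the fact that on an equally likely sample the minimum of the convex piecewise-linear objective is attained at one of the sample points.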

First we present a representation for coherent risk measures that is based on the relaxation of the translation invariance axiom (A4). The next theorem shows that if one selects a function φ : X → R satisfying axioms (A1)–(A3) along with additional technical conditions, then there exists a simple stochastic optimization problem involving φ whose optimal value satisfies (A1)–(A4).

Theorem 1 Let a function φ : X → R satisfy axioms (A1)–(A3), and be a lower semicontinuous (lsc) function such that φ(η) > η for all real η ≠ 0. Then the optimal value of the stochastic programming problem

    ρ(X) = inf_η { η + φ(X − η) }    (1)

is a proper coherent risk measure, and the infimum is attained for all X, so that inf_η in (1) may be replaced by min over η ∈ R.

For the proof of Theorem 1, as well as of the other theorems introduced in the paper, see the Appendix.

Remark 1.1 It is all-important that the stochastic programming problem (1) is convex, due to the convexity of the function φ. Also, it is worth mentioning that one cannot substitute a coherent risk measure itself for the function φ in (1), as this would violate the condition φ(η) > η of the Theorem.

Corollary 1.1 The set arg min_{η∈R} { η + φ(X − η) } ⊂ R of optimal solutions of (1) is closed.

Example 1.1 (Conditional Value-at-Risk) A famous special case of (1) is the optimization formula for Conditional Value-at-Risk (Rockafellar and Uryasev, 2000, 2002):

    CVaR_α(X) = min_η { η + (1 − α)^{-1} E(X − η)_+ },   0 < α < 1,    (2)

where (X)_± = max{±X, 0}, and the function φ(X) = (1 − α)^{-1} E(X)_+ evidently satisfies the conditions of Theorem 1. The space X in this case can be selected as L_2(Ω, F, P). One of the many appealing features of (2) is that it has a simple intuitive interpretation: if X represents loss/unsatisfaction, then CVaR_α(X) is, roughly speaking, the conditional expectation of losses that may occur in (1 − α)·100% of the worst cases.
In the case of a continuously distributed X, this rendition is exact: CVaR_α(X) = E[X | X ≥ VaR_α(X)], where VaR_α(X) is defined as the α-quantile of X:

    VaR_α(X) = inf{ ζ | P[X ≤ ζ] ≥ α }.

In the general case, the formal definition of CVaR_α(X) becomes more intricate (Rockafellar and Uryasev, 2002), but representation (2) still applies.

Example 1.2 A generalization of (2) can be constructed as (see also Ben-Tal and Teboulle, 1986)

    R_{α,β}(X) = min_η { η + α E(X − η)_+ − β E(X − η)_− },    (3)

where, in accordance with the conditions of Theorem 1, one has to put α > 1 and 0 ≤ β < 1.

Example 1.3 (Maximum Loss) If the requirement of finiteness of φ in (1) is relaxed, i.e., the image of φ is (−∞, +∞], then the optimal value of (1) still defines a coherent risk measure, but the infimum may not be achievable. An example is served by the so-called MaxLoss measure,

    MaxLoss(X) = ess.sup X = inf_η { η + φ_∞(X − η) },  where  φ_∞(X) = 0 for X ≤ 0, and φ_∞(X) = +∞ for X > 0.

It is easy to see that φ_∞ is positively homogeneous, convex, non-decreasing, lsc, and satisfies φ_∞(η) > η for all η ≠ 0, but is not finite.

Example 1.4 (Higher Moment Coherent Risk Measures) Let X = L_p(Ω, F, P), and for some 0 < α < 1 consider φ(X) = (1 − α)^{-1} ‖(X)_+‖_p, where ‖X‖_p = (E|X|^p)^{1/p}. Clearly, φ satisfies the conditions of Theorem 1, and thus we can define a family of higher-moment coherent risk measures (HMCR) as

    HMCR_{p,α}(X) = min_η { η + (1 − α)^{-1} ‖(X − η)_+‖_p },   p ≥ 1,  α ∈ (0, 1).    (4)
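For a discrete distribution, (2) and (4) are one-dimensional convex minimization problems in η and can be evaluated with any scalar solver. A minimal sketch (Python with NumPy/SciPy; the loss sample and parameter values are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hmcr(losses, p=1.0, alpha=0.95):
    # HMCR_{p,alpha}(X) = min_eta { eta + (1-alpha)^{-1} ||(X-eta)_+||_p },  eq. (4);
    # p = 1 recovers CVaR_alpha of eq. (2).  Equally likely scenarios assumed.
    x = np.asarray(losses, dtype=float)
    def obj(eta):
        return eta + np.mean(np.maximum(x - eta, 0.0) ** p) ** (1.0 / p) / (1.0 - alpha)
    return minimize_scalar(obj, bounds=(x.min() - 1.0, x.max() + 1.0), method="bounded").fun

losses = np.arange(1.0, 101.0)            # uniform losses 1, 2, ..., 100
cvar = hmcr(losses, p=1, alpha=0.95)      # mean of the worst 5%: (96+...+100)/5 = 98
assert abs(cvar - 98.0) < 1e-3
# monotonicity in the order p, property (5): HMCR_{p,alpha} <= HMCR_{q,alpha} for p < q
assert cvar <= hmcr(losses, p=2, alpha=0.95) + 1e-4
assert hmcr(losses, p=2, alpha=0.95) <= hmcr(losses, p=3, alpha=0.95) + 1e-4
```

The small tolerances only absorb the scalar solver's termination error; the inequalities themselves hold exactly for the empirical distribution.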

From the fact that ‖X‖_p ≤ ‖X‖_q for 1 ≤ p < q, it immediately follows that the HMCR measures are monotonic with respect to the order p:

    HMCR_{p,α}(X) ≤ HMCR_{q,α}(X)  for  p < q  and  X ∈ L_q.    (5)

Of special interest is the case p = 2, which defines the second-moment coherent risk measure (SMCR):

    SMCR_α(X) = min_η { η + (1 − α)^{-1} ‖(X − η)_+‖_2 },   0 < α < 1.    (6)

We will see below that SMCR_α(X) is quite similar in properties to CVaR_α(X) while measuring the risk in terms of the second moments of loss distributions. Implementation-wise, the SMCR measure can be incorporated into a mathematical programming problem via second-order cone constraints (see Section 3). Second-order cone programming (SOCP) is a well-developed topic in the field of convex optimization (see, e.g., the review by Alizadeh and Goldfarb, 2003), and a number of commercial off-the-shelf software packages are available for solving convex problems with second-order cone constraints.

Now we comment briefly on the relation between the HMCR family introduced above and other risk measures known in the literature that involve higher moments of distributions. The Lower Partial Moment (see, e.g., Bawa (1975); Fishburn (1977), and others)¹

    LPM_p(X; t) = E((X − t)_+)^p,   p ≥ 1,  t ∈ R,    (7)

is convex in X, but not positive homogeneous or translation invariant. In the context of axiomatically defined risk measures, an interesting example of a spectral risk measure that corresponds to pessimistic manipulation of X and is sensitive to higher moments was considered by Tasche (2002). Closely related to the HMCR measures proposed here are the so-called coherent measures based on one-sided moments, or coherent measures of semi-L_p type (Fischer, 2003; Rockafellar et al., 2006):²

    R(X) = EX + β ‖(X − EX)_+‖_p,   p ≥ 1,  β ≥ 0.    (8)

A key difference between (8) and the HMCR measures (4) is that the HMCR family are tail risk measures, while the measures of type (8) are based on central semi-moments (see Example 2.3 below).
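The contrast between the fixed cutoff EX in (8) and the adjustable cutoff of the HMCR measures can be observed numerically: the minimizer η of (4) moves further into the tail as α grows. A sketch (Python; the sample data and the values of α are arbitrary illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(size=20000)   # equally likely loss scenarios (illustrative)

def eta_opt(alpha, p=2.0):
    # minimizer of eq. (4): the tail cutoff point of HMCR_{p,alpha}
    obj = lambda eta: eta + np.mean(np.maximum(x - eta, 0.0) ** p) ** (1 / p) / (1 - alpha)
    return minimize_scalar(obj, bounds=(x.min(), x.max()), method="bounded").x

# the HMCR cutoff moves into the tail as alpha grows, while the cutoff
# of the semi-L_p measures (8) stays fixed at EX
cutoffs = [eta_opt(a) for a in (0.5, 0.8, 0.95)]
assert cutoffs[0] < cutoffs[1] < cutoffs[2]
assert all(c > x.mean() for c in cutoffs)
```

This is the tail-measure property formalized in Example 2.3 below: η_{p,α}(X) is nondecreasing in α and approaches ess.sup X as α → 1.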
Example 1.5 (Composition of risk measures) Formula (1) readily extends to the case of multiple functions φ_i, i = 1, ..., n, that are cumulatively used in measuring the risk of an element X ∈ X and conform to the conditions of Theorem 1. Namely, one has that

    ρ_n(X) = min_{η_i ∈ R, i=1,...,n} Σ_{i=1}^n ( η_i + φ_i(X − η_i) )    (9)

is a proper coherent risk measure.

The value of η that delivers the minimum in (1) does also possess some noteworthy properties as a function of X. In establishing these properties the following notation is convenient. Assuming that the set arg min_{x∈R} f(x) is closed for some function f : R → R, we denote its left endpoint as

    Arg min_{x∈R} f(x) = min{ y | y ∈ arg min_{x∈R} f(x) }.

¹ Here, the traditional terminology is preserved: according to the convention adopted in this paper, X denotes losses, and therefore the proper term for (7) would be the upper partial moment.
² Interestingly, Fischer (2003) restricted the range of values for the constant β in (8) to β ∈ [0, 1], whereas Rockafellar et al. (2006) allowed β to take values in (0, ∞).

Theorem 2 Let a function φ : X → R satisfy the conditions of Theorem 1. Then the function

    η(X) = Arg min_η { η + φ(X − η) }    (10)

exists and satisfies properties (A3) and (A4). If, additionally, φ(X) = 0 for every X ≤ 0, then η(X) satisfies (A1), along with the inequality η(X) ≤ ρ(X), where ρ(X) is the optimal value of (1).

Remark 2.1 If φ satisfies all the conditions of Theorem 2, the optimal solution η(X) of the stochastic optimization problem (1) complies with all axioms for coherent risk measures except (A2), thereby failing to be convex.

Example 2.1 (Value-at-Risk) A well-known example of two risk measures obtained by solving a stochastic programming problem of type (1) is again provided by formula (2) due to Rockafellar and Uryasev (2000, 2002), and its counterpart

    VaR_α(X) = Arg min_η { η + (1 − α)^{-1} E(X − η)_+ }.    (11)

The Value-at-Risk measure VaR_α(X), despite being adopted as a de facto standard for measurement of risk in the finance and banking industries, is notorious for the difficulties it presents in risk estimation and control.

Example 2.2 (Higher-Moment Coherent Risk Measures) For higher-moment coherent risk measures, the function φ in (10) is taken as φ(X) = (1 − α)^{-1} ‖(X)_+‖_p, and the corresponding optimal η_{p,α}(X) satisfies the equation

    (1 − α)^{-1/(p−1)} = ‖(X − η_{p,α}(X))_+‖_p / ‖(X − η_{p,α}(X))_+‖_{p−1},   p > 1.    (12)

A formal derivation of equality (12) can be carried out using the techniques employed in Rockafellar and Uryasev (2002) to establish formula (11). Although the optimal η_{p,α}(X) in (12) is determined implicitly, Theorem 2 ensures that it has properties similar to those of VaR_α (monotonicity, positive homogeneity, etc.). Moreover, by plugging relation (12) with p = 2 into (6), the second-moment coherent risk measure (SMCR) can be presented in a form that involves only the first moment of losses in the tail of the distribution:

    SMCR_α(X) = η_{2,α}(X) + (1 − α)^{-2} ‖(X − η_{2,α}(X))_+‖_1 = η_{2,α}(X) + (1 − α)^{-2} E(X − η_{2,α}(X))_+.    (13)

Note that in (13) the second-moment information is concealed in η_{2,α}(X). Further, by taking a CVaR measure with the confidence level α′ = 2α − α², one can write

    SMCR_α(X) = η_smcr + (1 − α′)^{-1} E(X − η_smcr)_+ ≥ η_cvar + (1 − α′)^{-1} E(X − η_cvar)_+ = CVaR_{α′}(X),    (14)

where η_smcr = η_{p,α}(X) as in (12) with p = 2, and η_cvar = VaR_{α′}(X); note that the inequality in (14) holds due to the fact that η_cvar = VaR_{α′}(X) minimizes the expression η + (1 − α′)^{-1} E(X − η)_+. In other words, with the above selection of α and α′, the expressions for SMCR_α(X) and CVaR_{α′}(X) differ only in the choice of η that delivers the minimum in the corresponding expressions (6) and (2). For Conditional Value-at-Risk, it is the α′-quantile of the distribution of X, whereas the optimal η_{2,α}(X) for the SMCR measure incorporates the information on the second moment of the losses X.

Example 2.3 (HMCR as tail risk measures) It is easy to see that the HMCR family are tail risk measures, namely that 0 < α_1 < α_2 < 1 implies η_{p,α_1}(X) ≤ η_{p,α_2}(X), where η_{p,α_i} = Arg min_η { η + (1 − α_i)^{-1} ‖(X − η)_+‖_p }; in addition, one has lim_{α→1} η_{p,α}(X) = ess.sup X, at least when ess.sup X is finite (see the Appendix).
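Relations (12)–(14) can be spot-checked on a discrete sample. In the sketch below (Python; the sample and α = 0.9 are illustrative), the SMCR value obtained by direct minimization of (6) is compared against the first-moment form (13), and against CVaR at the adjusted level α′ = 2α − α² as in (14); for an equally likely sample with (1 − α′)J integer, CVaR_{α′} equals the average of the (1 − α′)J largest losses:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=20000))   # equally likely loss scenarios
alpha = 0.9

# SMCR_alpha(X) = min_eta { eta + (1-alpha)^{-1} ||(X-eta)_+||_2 },  eq. (6)
smcr_obj = lambda eta: eta + np.sqrt(np.mean(np.maximum(x - eta, 0.0) ** 2)) / (1 - alpha)
res = minimize_scalar(smcr_obj, bounds=(x[0], x[-1]), method="bounded")
smcr, eta_star = res.fun, res.x

# eq. (13): the same value using only the first tail moment at the optimal eta
smcr_13 = eta_star + np.mean(np.maximum(x - eta_star, 0.0)) / (1 - alpha) ** 2
assert abs(smcr - smcr_13) < 1e-3

# eq. (14): SMCR_alpha(X) >= CVaR_{alpha'}(X) with alpha' = 2*alpha - alpha**2
tail = int(round((1 - alpha) ** 2 * x.size))      # (1 - alpha') * J scenarios in the tail
cvar_prime = x[-tail:].mean()                     # CVaR as the mean of the worst losses
assert smcr >= cvar_prime - 1e-9
```

The gap between smcr and cvar_prime illustrates the discussion after (14): the two expressions differ only in the choice of η at which the common objective is evaluated.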

These properties put the HMCR family in a favorable position compared to the coherent measures of type (8) (Rockafellar et al., 2006; Fischer, 2003), where the tail cutoff point, about which the partial moments are computed, is always fixed at EX. In contrast to (8), the location of the tail cutoff in the HMCR measures is determined by the optimal η_{p,α}(X) and is adjustable by means of the parameter α. In a special case, for example, the HMCR measures (4) can be reduced to form (8) with β = (1 − α_p)^{-1} > 1 by selecting α_p according to (12) as

    α_p = 1 − ( ‖(X − EX)_+‖_{p−1} / ‖(X − EX)_+‖_p )^{p−1},   p > 1.

Representations based on relaxation of (A3)

Observe that formula (1) in Theorem 1 is analogous to the operation of infimal convolution, well known in convex analysis:

    (f □ g)(x) = inf_y { f(x − y) + g(y) }.

Continuing this analogy, consider the operation of right scalar multiplication,

    (φη)(X) = η φ(η^{-1} X),   η ≥ 0,

where for η = 0 we set (φ0)(X) = (φ0^+)(X). If φ is proper and convex, then it is known that (φη)(X) is a convex proper function in η ≥ 0 for any X ∈ dom φ (see, for example, Rockafellar, 1970). Interestingly enough, this fact can be pressed into service to formally define a coherent risk measure as

    ρ(X) = inf_{η ≥ 0} η φ(η^{-1} X),    (15)

if the function φ, along with some technical conditions similar to those of Theorem 1, satisfies axioms (A1), (A2′), and (A4). Note that excluding the positive homogeneity (A3) from the list of properties of φ denies also its convexity, and thus one must replace (A2) with (A2′) to ensure convexity in (15). In the terminology of convex analysis, the function ρ(X) defined by (15) is known as the positively homogeneous convex function generated by φ. Likewise, by direct verification of conditions (A1)–(A4) it can be demonstrated that

    ρ(X) = sup_{η > 0} η φ(η^{-1} X)    (16)

is a proper coherent risk measure, provided that φ(X) satisfies (A1), (A2), and (A4).
By (C1), axioms (A1) and (A2) imply that φ(0) = 0, which allows one to rewrite (16) as

    ρ(X) = sup_{η > 0} [ φ(ηX + 0) − φ(0) ] / η = (φ0^+)(X),    (17)

where the last equality in (17) comes from the definition of the recession function (Rockafellar, 1970; Zălinescu, 2002).

2.2 Implementation in stochastic programming problems

The developed results can be efficiently applied in the context of stochastic optimization, where the random outcome X = X(x, ω) can be considered as a function of the decision vector x ∈ R^m, convex in x over some closed convex set C ⊆ R^m. Firstly, representation (1) allows for efficient minimization of risk in stochastic programs. For a function φ that complies with the requirements of Theorem 1, denote

    Φ(x, η) = η + φ( X(x, ω) − η )  and  R(x) = ρ( X(x, ω) ) = min_η Φ(x, η).    (18)

Then, clearly,

    min_{x ∈ C} ρ( X(x, ω) ) = min_{(x,η) ∈ C×R} Φ(x, η),    (19)

in the sense that both problems have the same optimal objective values and the same optimal vector x. The last observation enables seamless incorporation of risk measures into stochastic programming problems, thereby facilitating the modeling of risk-averse preferences. For example, a generalization of the classical two-stage stochastic linear programming (SLP) problem (see, e.g., Birge and Louveaux, 1997; Prékopa, 1995), where the outcome of the second-stage (recourse) action is evaluated by its risk rather than its expected value, can be formulated by replacing the expectation operator in the second-stage problem with a coherent risk measure R:

    min_{x ≥ 0} c⊤x + R[ min_{y ≥ 0} q(ω)⊤y(ω) ]
    s.t. Ax = b,  T(ω)x + W(ω)y(ω) = h(ω).    (20)

Note that the expectation operator is a member of the class of coherent risk measures defined by (A1)–(A4), whereby the classical two-stage SLP problem is a special case of (20). Assuming that the risk measure R above is amenable to representation (1) via some function φ, problem (20) can be implemented by virtue of Theorem 1 as

    min c⊤x + η + φ( q(ω)⊤y(ω) − η )
    s.t. Ax = b,  T(ω)x + W(ω)y(ω) = h(ω),  x ≥ 0,  y(ω) ≥ 0,

with all the distinctive properties of standard SLP problems (e.g., the convexity of the recourse function, etc.) being preserved.

Secondly, representation (1) also admits implementation of risk constraints in stochastic programs. Namely, let g(x) be a function that is convex on C; then the following two problems are equivalent, as demonstrated by Theorem 3 below:

    min_{x ∈ C} { g(x) | R(x) ≤ c },    (21a)
    min_{(x,η) ∈ C×R} { g(x) | Φ(x, η) ≤ c }.    (21b)

Theorem 3 Optimization problems (21a) and (21b) are equivalent in the sense that they achieve minima at the same values of the decision variable x and their optimal objective values coincide. Further, if the risk constraint in (21a) is binding at optimality, (x*, η*) achieves the minimum of (21b) if and only if x* is an optimal solution of (21a) and η* ∈ arg min_η Φ(x*, η).
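For R = CVaR_α the function φ in (1) is polyhedral, so the joint minimization over (x, η) in (19) becomes a linear program (the reformulation of Rockafellar and Uryasev, 2000). A minimal single-stage sketch (Python/SciPy; the scenario data, asset count, and α are synthetic and purely illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, J, alpha = 4, 500, 0.95
# synthetic equally likely return scenarios; the loss is X(x, omega_j) = -r_j' x
r = rng.normal(0.001, 0.02, size=(J, n))

# variables v = [x_1..x_n, eta, z_1..z_J], where z_j >= (X(x, omega_j) - eta)_+
c = np.concatenate([np.zeros(n), [1.0], np.full(J, 1.0 / ((1 - alpha) * J))])
A_ub = np.hstack([-r, -np.ones((J, 1)), -np.eye(J)])   # -r_j'x - eta - z_j <= 0
b_ub = np.zeros(J)
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(J)]).reshape(1, -1)  # full investment
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * J

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
x_opt, min_cvar = res.x[:n], res.fun

# sanity check: the optimum cannot exceed the CVaR of the equal-weight portfolio
eq_losses = -(r @ np.full(n, 1.0 / n))
cvar_eq = min(eta + np.mean(np.maximum(eq_losses - eta, 0.0)) / (1 - alpha)
              for eta in eq_losses)
assert res.status == 0 and abs(x_opt.sum() - 1.0) < 1e-6
assert min_cvar <= cvar_eq + 1e-7
```

The auxiliary variables z_j linearize the positive part E(X − η)_+ scenario-wise; an analogous device yields the conic inequalities (22) for general p.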
In other words, one can implement the risk constraint ρ( X(x, ω) ) ≤ c by using representation (1) for the risk measure ρ with the infimum operator omitted.

HMCR measures in stochastic programming problems

The introduced higher moment coherent risk measures can be incorporated in stochastic programming problems via conic constraints of order p > 1. Namely, let {ω_1, ..., ω_J}, where P{ω_j} = π_j ∈ (0, 1), be the scenario set of a stochastic programming model. Observe that a HMCR-based objective or constraint can be implemented via the constraint HMCR_{p,α}( X(x, ω) ) ≤ u, with u being either a variable or a constant, respectively. By virtue of Theorem 3, the latter constraint admits a representation by the set of inequalities

u ≥ η + (1 − α)^{-1} t,   (22a)
t ≥ ( w_1^p + ... + w_J^p )^{1/p},   (22b)
w_j ≥ π_j^{1/p} ( X(x, ω_j) − η ),  j = 1, ..., J,   (22c)
w_j ≥ 0,  j = 1, ..., J.   (22d)

Note that the convexity of X as a function of the decision variables x implies convexity of (22c) and, consequently, convexity of the set (22). Constraint (22b) defines a (J + 1)-dimensional cone of order p, and is central to the practical

implementation of constraints (22) in mathematical programming models. In the special case of p = 2, it represents a second-order (quadratic) cone in R^{J+1}, and the well-developed methods of second-order cone programming (SOCP) can be invoked to handle constructions of type (22). In the general case of p ∈ (1, ∞), the p-order cone within the positive orthant,

( w_1^p + ... + w_J^p )^{1/p} ≤ t,   t, w_j ≥ 0,  j = 1, ..., J,   (23)

can be approximated by linear inequalities when J = 2^d. Following Ben-Tal and Nemirovski (1999), the (2^d + 1)-dimensional p-order conic constraint (23) can be represented by a set of 3-dimensional p-order conic inequalities

[ ( w^{(k−1)}_{2j−1} )^p + ( w^{(k−1)}_{2j} )^p ]^{1/p} ≤ w^{(k)}_j,   j = 1, ..., 2^{d−k},  k = 1, ..., d,   (24)

where w^{(d)}_1 ≡ t and w^{(0)}_j ≡ w_j ( j = 1, ..., 2^d ). Each of the 3-dimensional p-order cones in (24),

( ξ_1^p + ξ_2^p )^{1/p} ≤ ξ_3,   ξ_1, ξ_2, ξ_3 ≥ 0,   (25)

can then be approximated by a set of linear inequalities. For any partition 0 = α_0 < α_1 < ... < α_m = π/2 of the segment [0, π/2], an internal approximation of the p-order cone in the positive orthant of R³ can be written in the form

ξ_1 ( sin^{2/p} α_{i+1} − sin^{2/p} α_i ) + ξ_2 ( cos^{2/p} α_i − cos^{2/p} α_{i+1} )
  ≤ ξ_3 ( sin^{2/p} α_{i+1} cos^{2/p} α_i − cos^{2/p} α_{i+1} sin^{2/p} α_i ),   i = 0, ..., m − 1,   (26a)

and an external approximation can be constructed as

ξ_1 cos^{p−1} α_i + ξ_2 sin^{p−1} α_i ≤ ξ_3 ( cos^p α_i + sin^p α_i )^{(p−1)/p},   i = 0, ..., m.   (26b)

For example, the uniform partition α_i = πi/(2m) (i = 0, ..., m) generates the following approximations of a 3-dimensional second-order cone:

ξ_1 cos( π(2i+1)/(4m) ) + ξ_2 sin( π(2i+1)/(4m) ) ≤ ξ_3 cos( π/(4m) ),   i = 0, ..., m − 1,
ξ_1 cos( πi/(2m) ) + ξ_2 sin( πi/(2m) ) ≤ ξ_3,   i = 0, ..., m.

2.3 Application to deviation measures

Since being introduced in Artzner et al.
(1999), the axiomatic approach to the construction of risk measures has been repeatedly employed by many authors for the development of other types of risk measures tailored to specific preferences and applications (see Rockafellar et al., 2006; Acerbi, 2002; Ruszczyński and Shapiro, 2006). In this subsection we consider deviation measures as introduced by Rockafellar et al. (2006). Namely, a deviation measure is a mapping D : X → [0, +∞] that satisfies

(D1) D(X) > 0 for any non-constant X ∈ X, whereas D(X) = 0 for constant X,
(D2) D(X + Y) ≤ D(X) + D(Y) for all X, Y ∈ X,
(D3) D(λX) = λD(X) for all X ∈ X, λ > 0,
(D4) D(X + a) = D(X) for all X ∈ X, a ∈ R.

Again, convexity of D(X) follows from axioms (D2) and (D3). In Rockafellar et al. (2006) it was shown that deviation measures that further satisfy

(D5) D(X) ≤ ess.sup X − EX for all X ∈ X,

are characterized by the one-to-one correspondence

D(X) = R(X − EX)   (27)

with expectation-bounded coherent risk measures, i.e., risk measures that satisfy (A1)–(A4) and the additional requirement

(A5) R(X) > EX for all non-constant X ∈ X, whereas R(X) = EX for all constant X.

Using this result, it is easy to provide an analog of formula (1) for deviation measures.

Theorem 4  Let the function φ : X → R satisfy axioms (A1)–(A3), and be a lsc function such that φ(X) > EX for all X ≠ 0. Then the optimal value of the stochastic programming problem

D(X) = −EX + inf_η { η + φ(X − η) }   (28)

is a deviation measure, and the infimum is attained for all X, so that inf_η in (28) may be replaced by min_{η ∈ R}.

Given the close relationship between deviation measures and coherent risk measures, it is straightforward to apply the above results to deviation measures.

2.4 Connection with utility theory and second-order stochastic dominance

As mentioned in the Introduction, considerable attention has been devoted in the literature to the development of risk models and measures compatible with the utility theory of von Neumann and Morgenstern (1944), which represents one of the cornerstones of decision science. The vNM theory argues that when the preference relation of the decision-maker satisfies certain axioms (completeness, transitivity, continuity, and independence), there exists a function u : R → R such that an outcome X is preferred to outcome Y (X ≽ Y) if and only if E[u(X)] ≥ E[u(Y)]. If the function u is non-decreasing and concave, the corresponding preference is said to be risk-averse. Rothschild and Stiglitz (1970) bridged the vNM utility theory with the concept of second-order stochastic dominance by showing that X dominating Y by the second-order stochastic dominance, X ≽_SSD Y, is equivalent to the relation E[u(X)] ≥ E[u(Y)] holding true for all concave non-decreasing functions u, where the inequality is strict for at least one such u.
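The Rothschild–Stiglitz equivalence can be checked empirically on a toy example. The sketch below (function names are ours) compares a sure outcome with a mean-preserving spread of it via integrated empirical CDFs, the criterion recalled next:

```python
import numpy as np

def ssd_dominates(x, y, grid):
    # X >=_SSD Y iff the integrated CDF of X never exceeds that of Y;
    # both integrals are approximated on a uniform grid of z-values.
    dz = grid[1] - grid[0]
    ix = np.cumsum([np.mean(x <= t) for t in grid]) * dz
    iy = np.cumsum([np.mean(y <= t) for t in grid]) * dz
    return bool(np.all(ix <= iy + 1e-9))

x = np.array([2.0, 2.0, 2.0])   # sure outcome
y = np.array([1.0, 2.0, 3.0])   # mean-preserving spread of x
grid = np.linspace(0.0, 4.0, 401)
print(ssd_dominates(x, y, grid))   # True: x dominates its spread
print(ssd_dominates(y, x, grid))   # False
```

Consistently with the equivalence, every concave non-decreasing utility then ranks the sure outcome x at least as high as its spread y.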
Recall that a random outcome X dominates an outcome Y by the second-order stochastic dominance if

∫_{−∞}^{z} P[X ≤ t] dt ≤ ∫_{−∞}^{z} P[Y ≤ t] dt  for all z ∈ R.

Since coherent risk measures are generally inconsistent with the second-order stochastic dominance (see an example in De Giorgi, 2005), it is of interest to introduce risk measures that comply with this property. To this end, we replace the monotonicity axiom (A1) in the definition of coherent risk measures by the requirement of SSD isotonicity (Pflug, 2000; De Giorgi, 2005): (−X) ≽_SSD (−Y) ⟹ R(X) ≤ R(Y). Namely, we consider risk measures R : X → R that satisfy the following set of axioms:³

(A1′) SSD isotonicity: (−X) ≽_SSD (−Y) ⟹ R(X) ≤ R(Y) for X, Y ∈ X,
(A2′) convexity: R( λX + (1 − λ)Y ) ≤ λR(X) + (1 − λ)R(Y), X, Y ∈ X, 0 ≤ λ ≤ 1,
(A3) positive homogeneity: R(λX) = λR(X), X ∈ X, λ > 0,
(A4) translation invariance: R(X + a) = R(X) + a, X ∈ X, a ∈ R.

³ See Mansini et al. (2003); Ogryczak and Opolska-Rutkowska (2006) for conditions under which SSD-isotonic measures also satisfy the coherence properties.

Note that unlike the system of axioms (A1)–(A4), the above axioms, and in particular (A1′), require X and Y to be integrable, i.e., one can take the space X in (A1′)–(A4) to be L¹ (for a discussion of topological properties of sets defined by stochastic dominance relations, see, e.g., Dentcheva and Ruszczyński, 2004). Again, it is possible to develop an analog of formula (1) that allows for the construction of risk measures with the above properties from functions complying with (A1′), (A2′), and (A3).

Theorem 5  Let the function φ : X → R satisfy axioms (A1′), (A2′), and (A3), and be a lsc function such that φ(η) > η for all real η ≠ 0. Then the optimal value of the stochastic programming problem

ρ(X) = min_η { η + φ(X − η) }   (29)

exists and is a proper function that satisfies (A1′), (A2′), (A3), and (A4).

Obviously, by solving the risk-minimization problem

min_{x ∈ C} ρ( X(x, ω) ),

where ρ is a risk measure that is both coherent and SSD-compatible in the sense of (A1′), one obtains a solution that is SSD-efficient, i.e., acceptable to any risk-averse rational utility maximizer, and that also bears the lowest risk in terms of coherence preference metrics. Below we illustrate that functions φ satisfying the conditions of Theorem 5 can easily be constructed within the scope of the presented approach.

Example 5.1  Let φ(X) = E[u(X)], where u : R → R is a convex, positively homogeneous, non-decreasing function such that u(η) > η for all η ≠ 0. Obviously, the function φ(X) defined in this way satisfies the conditions of Theorem 5. Since −u(−η) is concave and non-decreasing, one has E[u(X)] ≤ E[u(Y)], and consequently φ(X) ≤ φ(Y), whenever (−X) ≽_SSD (−Y). It is easy to see that, for example, a function φ of the form

φ(X) = α E(X)⁺ − β E(X)⁻,   α ∈ (1, +∞),  β ∈ [0, 1),

satisfies the conditions of Theorem 5. Thus, in accordance with Theorems 1 and 5, the coherent risk measure R_{α,β} (3) is also consistent with the second-order stochastic dominance.
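As a sanity check on Example 5.1 (a sketch under the stated assumptions; the helper name and the test data are ours), the measure generated by φ(X) = αE(X)⁺ − βE(X)⁻ can be evaluated by a one-dimensional convex minimization over η, and axioms (A3)–(A4) verified numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def r_alpha_beta(losses, a, b):
    # rho(X) = min_eta { eta + a E(X - eta)^+ - b E(X - eta)^- },
    # a > 1, 0 <= b < 1; the objective is convex in eta since a > b
    def obj(eta):
        d = losses - eta
        return eta + a * np.mean(np.maximum(d, 0)) - b * np.mean(np.maximum(-d, 0))
    lo, hi = losses.min() - 1.0, losses.max() + 1.0
    return minimize_scalar(obj, bounds=(lo, hi), method="bounded").fun

rng = np.random.default_rng(5)
X = rng.normal(size=1000)
a, b = 2.0, 0.5
r = r_alpha_beta(X, a, b)
print(r)   # exceeds the expected loss, as required by (A5)
```

Translation invariance and positive homogeneity hold up to the scalar-solver tolerance, which is an easy numerical corroboration of the coherence claims.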
A special case of (3) is the Conditional Value-at-Risk measure, which is known to be compatible with the second-order stochastic dominance (Pflug, 2000).

Example 5.2 (Higher-Moment Coherent Risk Measures)  SMCR and, in general, the family of higher-moment coherent risk measures constitute another example of risk measures that are both coherent and compatible with the second-order stochastic dominance. Indeed, the function u(η) = ( (η)⁺ )^p is convex and non-decreasing, whence ( E[u(X)] )^{1/p} ≤ ( E[u(Y)] )^{1/p} for any (−X) ≽_SSD (−Y). Thus, the HMCR family, defined by (29) with φ(X) = (1 − α)^{-1} ‖(X)⁺‖_p,

HMCR_{p,α}(X) = min_η { η + (1 − α)^{-1} ‖(X − η)⁺‖_p },   p ≥ 1,

is both coherent and SSD-compatible, by virtue of Theorems 1 and 5. Implementation of such a risk measure in stochastic programming problems enables one to introduce risk preferences that are consistent with both the concept of coherence and the second-order stochastic dominance.

The developed family of higher-moment risk measures (4) possesses all the principal properties that are sought after in the realm of risk management and decision making under uncertainty: compliance with the coherence principles, amenability to efficient implementation in stochastic programming problems (e.g., via second-order cone programming), and compatibility with the second-order stochastic dominance and utility theory. The question that remains to be answered is whether these superior properties translate into an equally superior performance in practical risk management applications. The next section reports a pilot study intended to investigate the performance of the HMCR measures in real-life risk management applications. It shows that the family of HMCR measures is a promising tool for tailoring risk preferences to the specific needs of decision-makers, and compares favorably with some of the most widely used risk management frameworks.
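A quick numerical illustration of the HMCR family on a heavy-tailed sample (our sketch, not the paper's code): since ‖·‖_p ≤ ‖·‖_q for p ≤ q on a probability space, the measures are non-decreasing in p, with p = 1 recovering CVaR.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hmcr(losses, p, alpha):
    # HMCR_{p,alpha}(X) = min_eta { eta + (1-alpha)^{-1} ||(X-eta)_+||_p }
    # for equiprobable loss scenarios
    def obj(eta):
        tail = np.maximum(losses - eta, 0.0)
        return eta + np.mean(tail ** p) ** (1.0 / p) / (1.0 - alpha)
    lo, hi = losses.min() - 1.0, losses.max() + 1.0
    return minimize_scalar(obj, bounds=(lo, hi), method="bounded").fun

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=2000)           # heavy-tailed losses
vals = [hmcr(losses, p, 0.90) for p in (1, 2, 3)]
print(vals)   # non-decreasing in p: CVaR <= SMCR <= HMCR_3
```

The p = 1 value can also be cross-checked against the exact discrete CVaR, the average of the worst (1 − α)J scenarios.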

3 Portfolio choice using HMCR measures: An illustration

In this section we illustrate the practical utility of the developed Higher-Moment Coherent Risk measures on the example of portfolio optimization, a typical testing ground for many risk management and stochastic programming techniques. To this end, we compare portfolio optimization models that use the HMCR measures with p = 2 (SMCR) and p = 3 against portfolio allocation models based on two well-established, theoretically as well as practically proven methodologies: the Conditional Value-at-Risk measure and the Markowitz Mean-Variance framework. This choice of benchmark models is further supported by the fact that the HMCR family is similar in construction and properties to CVaR (more precisely, CVaR is an HMCR measure with p = 1); but, while CVaR measures the risk in terms of the first moment of the losses residing in the tail of the distribution, the SMCR measure quantifies risk using second moments, thereby relating to the MV paradigm. The HMCR measure with p = 3 demonstrates the potential benefits of using higher-order tail moments of loss distributions for risk estimation.

Portfolio optimization models and implementation

The portfolio selection models employed in this case study have the general form

min_x  R( −r⊤x )   (30a)
s. t.  e⊤x = 1,   (30b)
       E r⊤x ≥ r_0,   (30c)
       x ≥ 0,   (30d)

where x = (x_1, ..., x_n)⊤ is the vector of portfolio weights, r = (r_1, ..., r_n)⊤ is the random vector of asset returns, and e = (1, ..., 1)⊤. The risk measure R in (30a) is taken to be either SMCR (6), HMCR with p = 3 (4), CVaR (2), or the variance σ² of the negative portfolio return X = −r⊤x. In the above portfolio optimization problem, (30b) represents the budget constraint, which, together with the no-short-selling constraint (30d), ensures that all the available funds are invested, and (30c) imposes the minimal required level r_0 on the expected return of the portfolio.
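For R = CVaR, problem (30) reduces to a linear program; the compact scipy sketch below, on synthetic scenario data (the dimensions, seed, and return distribution are illustrative), builds the scenario-based deterministic equivalent:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, J, alpha, r0 = 5, 200, 0.90, 0.0
R = rng.normal(0.002, 0.02, size=(J, n))           # scenario returns r_{ij}

# variables: [x (n), eta, z (J)];  minimize eta + sum_j z_j / ((1-alpha) J)
c = np.concatenate([np.zeros(n), [1.0], np.full(J, 1.0 / ((1 - alpha) * J))])

# z_j >= -r_j' x - eta   <=>   -R x - eta e - z <= 0
A_ub = np.hstack([-R, -np.ones((J, 1)), -np.eye(J)])
b_ub = np.zeros(J)
# expected-return constraint:  mean_j r_j' x >= r0
A_ub = np.vstack([A_ub, np.concatenate([-R.mean(axis=0), [0.0], np.zeros(J)])])
b_ub = np.append(b_ub, -r0)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(J)]).reshape(1, -1)  # budget
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * J             # x, eta, z

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x[:n]
print(res.status, res.fun)   # 0 = optimal; minimal CVaR_0.90 of portfolio loss
```

The optimal value can be cross-checked against the scenario-based CVaR of the resulting portfolio loss −Rx, since both minimizations attain the same infimum.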
We have deliberately chosen not to include any additional trading or institutional constraints (transaction costs, liquidity constraints, etc.) in the portfolio allocation problem (30), so as to make the effect of the risk measure selection in (30a) on the resulting portfolio rebalancing strategy more marked and visible.

As is traditional in stochastic programming, the distribution of the random return r_i of asset i is modeled by a set of J discrete equiprobable scenarios {r_{i1}, ..., r_{iJ}}. Then, optimization problem (30) reduces to a linear programming problem if CVaR is selected as the risk measure R in (30a) (see, for instance, Rockafellar and Uryasev, 2000; Krokhmal et al., 2002a). Within the Mean-Variance framework, (30) becomes a convex quadratic optimization problem with the objective

R( −r⊤x ) = Σ_{i,k=1}^{n} σ_{ik} x_i x_k,   where   σ_{ik} = (J − 1)^{-1} Σ_{j=1}^{J} ( r_{ij} − r̄_i )( r_{kj} − r̄_k ),   r̄_i = J^{-1} Σ_{j=1}^{J} r_{ij}.   (31)

In the case of R(X) = HMCR_{p,α}(X), problem (30) transforms into a linear programming problem with a single

p-order cone constraint (32e):

min  η + (1 − α)^{-1} J^{-1/p} t   (32a)
s. t.  Σ_{i=1}^{n} x_i = 1,   (32b)
       J^{-1} Σ_{j=1}^{J} Σ_{i=1}^{n} r_{ij} x_i ≥ r_0,   (32c)
       w_j ≥ − Σ_{i=1}^{n} r_{ij} x_i − η,  j = 1, ..., J,   (32d)
       t ≥ ( w_1^p + ... + w_J^p )^{1/p},   (32e)
       x_i ≥ 0,  i = 1, ..., n,   (32f)
       w_j ≥ 0,  j = 1, ..., J.   (32g)

When p = 2, i.e., when R equals SMCR, (32) reduces to a SOCP problem. In the case when R is selected as HMCR with p = 3, the 3rd-order cone constraint (32e) has been approximated via the linear inequalities (26a) with m = 500, thereby transforming problem (32) into an LP. The resulting mathematical programming problems have been implemented in C++; we used CPLEX 10.0 for solving the LP and QP problems, and MOSEK 4.0 for solving the SOCP problems.

Set of instruments and scenario data

Since the introduced family of HMCR risk measures quantifies risk in terms of higher tail moments of loss distributions, the portfolio optimization case studies were conducted using a data set that contained return distributions of fifty S&P 500 stocks with so-called heavy tails. Namely, for scenario generation we used 10-day historical returns over J = 1024 overlapping periods, calculated using daily closing prices from Oct 30, 1998 to Jan 18, 2006. From the set of S&P 500 stocks (as of January 2006) we selected n = 50 instruments by picking the ones with the highest values of kurtosis of biweekly returns over the specified period. In this way, the investment pool had an average kurtosis of 51.93, with 429.80 and 17.07 being the maximum and minimum kurtosis, respectively. The particular size of the scenario model, J = 1024 = 2^{10}, has been chosen so that the linear approximation techniques (26) could be employed for the HMCR measure with p = 3.

Out-of-sample simulations

The primary goal of our case study is to shed light on the potential real-life performance of the HMCR measures in risk management applications, and to this end we conducted so-called out-of-sample experiments.
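The scenario-construction step just described (overlapping 10-day returns and kurtosis-based screening) can be sketched as follows; the price series here is synthetic and the function names are ours:

```python
import numpy as np

def overlapping_returns(prices, k=10):
    # k-day returns over all overlapping k-day windows of daily closes
    p = np.asarray(prices, dtype=float)
    return p[k:] / p[:-k] - 1.0

def kurtosis(x):
    # ordinary (non-excess) kurtosis, used to rank heavy-tailed instruments
    z = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(z ** 4) / np.mean(z ** 2) ** 2

rng = np.random.default_rng(7)
daily = 100.0 * np.cumprod(1.0 + 0.01 * rng.standard_t(df=3, size=2000))
biweekly = overlapping_returns(daily, k=10)
print(len(biweekly), round(kurtosis(biweekly), 2))
```

Ranking a universe of instruments by `kurtosis(overlapping_returns(...))` and keeping the top n mirrors the selection of the fifty heavy-tailed stocks.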
As the name suggests, the out-of-sample tests determine the merits of a constructed solution using out-of-sample data that have not been included in the scenario model used to generate the solution. In other words, the out-of-sample setup simulates a common situation when the true realization of uncertainties happens to be outside the set of expected, or predicted, scenarios (as is the case for most portfolio optimization models). Here, we employ the out-of-sample method to compare the simulated historical performances of four self-financing portfolio rebalancing strategies that are based on (30) with R chosen either as a member of the HMCR family with α = 0.90, namely CVaR_{0.90}(·), SMCR_{0.90}(·), HMCR_{3, 0.90}(·), or as the variance σ²(·).

It may be argued that in the practice of portfolio management, instead of solving (30), it is of more interest to construct investment portfolios that maximize the expected return subject to risk constraint(s), e.g.,

max_{x ≥ 0}  { E r⊤x | R( −r⊤x ) ≤ c_0,  e⊤x = 1 }.   (33)

Indeed, many investment institutions are required to keep their investment portfolios in compliance with numerous constraints, including constraints on risk. However, our main point is to gauge the effectiveness of the HMCR risk

measures in portfolio optimization by comparing them against other well-established risk management methodologies, such as the CVaR and MV frameworks. Since these risk measures yield risk estimates on different scales, it is not obvious which risk tolerance levels c_0 should be selected in (33) to make the resulting portfolios comparable. Thus, to facilitate a fair "apples-to-apples" comparison, we construct self-financing portfolio rebalancing strategies by solving the risk-minimization problem (30), so that the resulting portfolios all have the same level r_0 of expected return, and the success of a particular portfolio rebalancing strategy depends on the actual amount of risk borne by the portfolio due to the utilization of the corresponding risk measure.

The out-of-sample experiments have been set up as follows. The initial portfolios were constructed on Dec 11, 2002 by solving the corresponding variant of problem (30), where the scenario set consisted of 1024 overlapping bi-weekly returns covering the period from Oct 30, 1998 to Dec 11, 2002. The duration of the rebalancing period for all strategies was set at two weeks (ten business days). On the next rebalancing date of Dec 26, 2002,⁴ the 10-day out-of-sample portfolio returns, r̂⊤x*, were observed for each of the four portfolios, where r̂ is the vector of out-of-sample (Dec 11, 2002 – Dec 26, 2002) returns and x* is the corresponding optimal portfolio configuration obtained on Dec 11, 2002. Then, all portfolios were rebalanced by solving (30) with an updated scenario set. Namely, we included in the scenario set the ten vectors of overlapping biweekly returns that realized during the ten business days from Dec 11, 2002 to Dec 26, 2002, and discarded the oldest ten vectors from October–November of 1998. The process was repeated on Dec 26, 2002, and so on. In this way, the out-of-sample experiment consisted of 78 biweekly rebalancing periods covering more than three years.
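The rolling scheme just described amounts to the following loop; this is an illustrative skeleton with a placeholder equal-weight policy standing in for the risk-minimizing solver of (30), and the data are synthetic:

```python
import numpy as np

def equal_weight(scenarios):
    # placeholder policy; a CVaR/SMCR/HMCR solver for (30) would go here
    n = scenarios.shape[1]
    return np.full(n, 1.0 / n)

def backtest(returns, window, step, rebalance):
    # at each rebalancing date, fit on the trailing `window` scenarios,
    # then realize the next out-of-sample period return
    J, _ = returns.shape
    value, curve = 1.0, []
    for t in range(window, J, step):
        x = rebalance(returns[t - window:t])
        value *= 1.0 + returns[t] @ x
        curve.append(value)
    return np.array(curve)

rng = np.random.default_rng(3)
rets = rng.normal(0.001, 0.02, size=(1500, 8))
curve = backtest(rets, window=1024, step=10, rebalance=equal_weight)
print(len(curve))   # 48 rebalancing periods
```

Swapping `equal_weight` for the different risk-minimizing solvers, on a common scenario stream, reproduces the structure of the comparison reported below.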
We ran the out-of-sample tests for different values of the minimal required expected return r_0, and typical results are presented in Figures 1 to 3.

[Figure 1: Out-of-sample performance of conservative (r_0 = 0.5%) self-financing portfolio rebalancing strategies based on the MV model, and the CVaR, SMCR (p = 2), and HMCR (p = 3) measures of risk with α = 0.90. Portfolio value (%) plotted over Dec 11, 2002 – Jan 18, 2006.]

Figure 1 reports the portfolio values (in percent of the initial investment) for the four portfolio rebalancing strategies based on (30) with SMCR_{0.90}(·), CVaR_{0.90}(·), variance σ²(·), and HMCR_{3, 0.90}(·) as R(·), and the required level r_0 of the expected return set at 0.5%. One can observe that in this case the clear winner is the portfolio based on the HMCR measure with p = 3; the SMCR-based portfolio is the runner-up, with the CVaR- and MV-based portfolios falling behind these two. This situation is typical for smaller values of r_0; as r_0 increases and the rebalancing strategies become more aggressive, the CVaR and MV portfolios become more competitive, while the HMCR (p = 3) portfolio remains dominant most of the time (Fig. 2).

⁴ Holidays were omitted from the data.

An illustration of the typical behavior of more aggressive rebalancing strategies is presented in Figure 3, where r_0 is set

[Figure 2: Out-of-sample performance of self-financing portfolio rebalancing strategies that have expected return r_0 = 1.0% and are based on the MV model, and the CVaR, SMCR (p = 2), and HMCR (p = 3) risk measures with α = 0.90. Portfolio value (%) plotted over Dec 11, 2002 – Jan 18, 2006.]

[Figure 3: Out-of-sample performance of aggressive (r_0 = 1.3%) self-financing portfolio rebalancing strategies based on the SMCR (p = 2), CVaR, MV, and HMCR (p = 3) measures of risk. Portfolio value (%) plotted over Dec 11, 2002 – Jan 18, 2006.]

at 1.3% (in the dataset used in this case study, infeasibilities in (30) started to occur for values of r_0 > 0.013). As a general trend, the HMCR (p = 2, 3) and CVaR portfolios exhibit similar performance, which can be explained by the fact that at high values of r_0 the set of instruments capable of providing such a level of expected return is rather limited. Still, it may be argued that the HMCR (p = 3) strategy can be preferred to the other three, based on its overall stable performance throughout the duration of the run.
Finally, to illuminate the effects of taking higher-moment information into account when estimating risk with the HMCR family, we compared the simulated historical performances of the SMCR measure with parameter α = 0.90 and the CVaR measure with confidence level α′ = 0.99, so that the relation α′ = 2α − α² holds (see the discussion in Example 2.2). Recall that in such a case the expressions for SMCR and CVaR differ only in

the location of the optimal η (see (14)). For CVaR, the optimal η equals VaR, while for SMCR the corresponding value of η is determined by (12) and depends on the second tail moment of the distribution.

Figure 4 presents a typical outcome for mid-range values of the expected return level r_0: most of the time, the SMCR portfolio dominates the corresponding CVaR portfolio. However, for smaller values of the expected return (e.g., r_0 ≤ 0.005), as well as for values approaching r_0 = 0.013, the SMCR- and CVaR-based rebalancing strategies demonstrated very close performance. This can be explained by the fact that lower values of r_0 lead to rather conservative portfolios, while values of r_0 close to the infeasibility barrier of 0.013 lead to very aggressive and poorly diversified portfolios comprised of the limited set of assets capable of achieving this high level of expected return.

[Figure 4: Out-of-sample performance of self-financing portfolio rebalancing strategies based on the SMCR_{0.90} and CVaR_{0.99} measures of risk (r_0 = 1.0%). Portfolio value (%) plotted over Dec 11, 2002 – Jan 18, 2006.]

Although the obtained results are data-specific, the presented preliminary case studies indicate that the developed HMCR measures demonstrate very promising performance, and can be successfully employed in the practice of risk management and portfolio optimization.

4 Conclusions

In this paper we have considered modeling of risk-averse preferences in stochastic programming problems using risk measures.
We utilized the axiomatic approach to the construction of coherent risk measures and deviation measures in order to develop simple representations for these risk measures via solutions of specially designed stochastic programming problems. Using the developed general representations, we introduced a new family of higher-moment coherent risk measures (HMCR). In particular, we considered the second-moment coherent risk measure (SMCR), which is implementable in stochastic optimization problems via second-order cone programming, and the third-order HMCR (p = 3). The conducted numerical studies indicate that the HMCR measures can be used effectively in the practice of portfolio optimization, and compare well with well-established benchmark models such as the Mean-Variance framework and CVaR.