CVaR (Superquantile) Norm: Stochastic Case

Alexander Mafusalov, Stan Uryasev
Risk Management and Financial Engineering Lab, Department of Industrial and Systems Engineering, 303 Weil Hall, University of Florida, Gainesville, FL 32611. Research Report 2013-5.

Abstract. The concept of Conditional Value-at-Risk (CVaR) is used in various applications in uncertain environments. This paper introduces the CVaR (superquantile) norm for a random variable, which is by definition the CVaR of the absolute value of this random variable. It is proved that the CVaR norm is indeed a norm in the space of random variables. The CVaR norm is defined in two variations: scaled and non-scaled. The L1 and L-infinity norms are limiting cases of the CVaR norm. In the continuous case, the scaled CVaR norm is a conditional expectation of the random variable. A similar representation of the CVaR norm is valid for discrete random variables. Several properties of the scaled and non-scaled CVaR norm, as a function of the confidence level, are proved. The dual norm of the CVaR norm is proved to be the maximum of the L1 norm and a scaled L-infinity norm. The CVaR norm, as a measure of error, is related to a regular risk quadrangle. The trimmed L1-norm, a non-convex extension of the CVaR norm, is introduced analogously to the function L_p for p < 1. Linear regression problems are solved by minimizing the CVaR norm of regression residuals.

Keywords: CVaR norm, L_p norm, superquantile, risk quadrangle, linear regression

1. Introduction

The concept of Conditional Value-at-Risk (CVaR) is widely used in risk management and in various applications in uncertain environments. This paper introduces a concept of the CVaR norm in the space of random variables. The CVaR norm in R^n was introduced and developed in (Pavlikov and Uryasev, 2014; Gotoh and Uryasev, 2013), and is a particular case of the general error measures introduced and developed by (Rockafellar et al., 2008). The term "superquantile", free from dependence on financial terminology, can be used as a neutral alternative name for "CVaR", as was done in (Rockafellar and Royset, 2010; Rockafellar and Uryasev, 2013). For the same reason the alternative name "superquantile norm" is proposed to replace "CVaR norm" when desired. For consistency within the paper and with the earlier study (Pavlikov and Uryasev, 2014), this paper will mostly use the term "CVaR norm". This section provides a short introduction to the CVaR norm in R^n and shows the relation with the CVaR norm in the space of random variables.

This paper is motivated by applications of norms in optimization. We consider norms in R^n and in the space of random variables. We use the symbols x and x_i for a vector and its i-th component in R^n, i.e., x = (x_1, ..., x_n), and the symbol X for a random variable. l_p norms are broadly used in R^n, and L_p norms are considered in the space of random variables. For p ∈ [1, ∞], the norms l_p and L_p are defined as follows¹:

l_p(x) = ( (1/n) Σ_{i=1}^n |x_i|^p )^{1/p},    L_p(X) = ( E|X|^p )^{1/p},

where E is the expectation sign. The most popular cases are p = 1, 2, ∞, i.e.,

l_1(x) = (1/n) Σ_{i=1}^n |x_i|,    L_1(X) = E|X|;
l_∞(x) = max_{i=1,...,n} |x_i|,    L_∞(X) = sup|X|;
l_2(x) = ( (1/n) Σ_{i=1}^n x_i^2 )^{1/2},    L_2(X) = ( EX^2 )^{1/2}.

It is known that l_1(x) ≤ l_2(x) ≤ l_∞(x) and L_1(X) ≤ L_2(X) ≤ L_∞(X), which follow from l_p(x) ≤ l_q(x) and L_p(X) ≤ L_q(X) for p < q, see, e.g., (Brezis, 2010).

The other family of norms, the CVaR norm for R^n, was defined in (Pavlikov and Uryasev, 2014) and studied in (Gotoh and Uryasev, 2013). According to (Pavlikov and Uryasev, 2014, Definition 3), the non-scaled CVaR norm with parameter α, denoted ‖x‖_α, is defined on x ∈ R^n as the sum of the absolute values of the biggest n(1−α) components. If n(1−α) is not an integer, then ‖x‖_α is defined as a weighted average of the two norms ‖x‖_{α_1} and ‖x‖_{α_2} for the closest values α_1, α_2 such that n(1−α_1) and n(1−α_2) are integers. A similar norm, called the D-norm, was introduced in (Bertsimas et al., 2004, Section 3) in a different way. The D-norm is defined as the maximum of a sum of weighted absolute values of vector components, where the maximization is performed over all sets of indexes for components in the sum, with a constraint on cardinality. For α ∈ [0, (n−1)/n], the CVaR norm ‖x‖_α coincides with the D-norm with parameter p defined by p = n(1−α), see (Pavlikov and Uryasev, 2014, Proposition 3.4). (Bertsimas et al., 2004) find a dual norm to the D-norm; this result is generalized to the stochastic case by Item 2 of Proposition 2.1 of this paper. Both the CVaR norm and the D-norm can be viewed as important special cases of Ordered Weighted Averaging (OWA) operators, see (Yager, 2010; Merigó and Yager, 2013; Torra and Narukawa, 2007). A subfamily of OWA operators with monotonically non-increasing weights, when applied to the absolute values of the vector components, was formalized as norms in (Yager, 2010). The worst-case averages, corresponding to CVaR, were also studied, e.g., in (Ogryczak and Zawadzki, 2002; Romeijn et al., 2005). The paper (Pavlikov and Uryasev, 2014) has also defined the scaled CVaR norm ‖x‖^S_α. The scaled version calculates the average value of components instead of the sum: ‖x‖^S_α = ‖x‖_α / (n(1−α)). This paper defines the (scaled) CVaR norm of a random variable X as the expectation of |X| in its right (1−α)-tail.

¹ Note that the classic definition of the l_p norm, l_p(x) = ( Σ_{i=1}^n |x_i|^p )^{1/p}, does not satisfy the inequality l_p(x) ≤ l_q(x) for p < q. This paper uses an equivalent scaled version of this norm, l_p(x) = ( (1/n) Σ_{i=1}^n |x_i|^p )^{1/p}, which satisfies that inequality. The L_p norm is commonly defined as L_p(f) = ‖f‖_p = ( ∫_S |f|^p dµ )^{1/p}, where S is the considered space. It is known (e.g., Brezis, 2010) that the inequality ‖f‖_p ≤ µ(S)^{1/p − 1/q} ‖f‖_q holds for p ≤ q, where µ(S) is the measure of the space S. When S is a probability space, µ(S) = 1 and the inequality L_p(X) ≤ L_q(X) holds for p ≤ q, where L_p(X) = ( E|X|^p )^{1/p}.

It can be shown that the proposed norm is a generalization of ‖x‖^S_α in the following way. Consider the mapping X(x) : R^n → L^1(Ω) from the Euclidean space of dimension n to the space of L^1-finite random variables. Denote x = (x_1, ..., x_n) and let X(x) be discretely distributed over the atoms x_1, ..., x_n with equal probabilities 1/n. Then it is easy to see that ‖x‖^S_α = CVaR_α(|X(x)|) = ‖X(x)‖^S_α, see (Pavlikov and Uryasev, 2014). This paper also defines the non-scaled CVaR norm ‖X‖_α = (1−α)‖X‖^S_α, which corresponds to ‖x‖_α from (Pavlikov and Uryasev, 2014). The non-scaled version has attractive properties with respect to the parameter α, see Items 6 and 7 in Section 2.

The Risk Quadrangle considers risk R(X), deviation D(X), regret V(X), error E(X) and statistic S(X), related by a set of equations called the General Relationships, see (Rockafellar and Uryasev, 2013, Diagram 3). If a functional satisfies a corresponding set of axioms, it is called regular, see (Rockafellar and Uryasev, 2013, Section 3). It can be proved that if R(X) is a coherent and regular measure of risk, then R(|X|) is both a norm and a regular measure of error; however, this proof is beyond the scope of this paper. This paper proves that ‖X‖_α is a regular measure of error and finds the corresponding functions R(X), D(X), V(X) and S(X) in the risk quadrangle related to the measure of error E(X) = ‖X‖_α, see Section 2. Item 3 of Proposition 2.1 can be viewed as a stochastic generalization of a lemma in (Hall and Tymoczko, 2012). The paper (Hall and Tymoczko, 2012) considers functions Σ_j(x) on the nonnegative orthant R^n_+, corresponding to the R^n_+ reduction of special cases of the CVaR norm. That paper relies on majorization theory, see (Marshall et al., 2011), which is generalized to the stochastic case by the concept of stochastic dominance, see, e.g., (Ogryczak and Ruszczynski, 2002; Dentcheva and Ruszczynski, 2003).

This paper also considers non-convex functions closely related to the CVaR norm. In the deterministic case, by definition, the CVaR norm is the average of the biggest (1−α)n absolute values of components of a vector. The trimmed L1-norm is also known as the trimmed sum of absolute deviations estimator for LTA regression, see, e.g., (Hawkins and Olive, 1999; Bassett Jr, 1991; Hössjer, 1994). L1-based LTA regression is introduced similarly to the more common L2-based least-trimmed-squares (LTS) regression, see, e.g., (Zabarankin and Uryasev, 2014b). The trimmed L1-norm is denoted here by t_α; it is the average of the smallest αn absolute values of components of a vector. Calculation formulas and mathematical properties for t_α(x) in Euclidean space and T_α(X) in the space of random variables are considered in Section 3. The trimmed L1-norm is also related to sparse optimization, similarly to the functions l_p for 0 < p < 1, see (Ge et al., 2011). Note that a constraint on the trimmed L1-norm directly specifies the sparsity of the solution vector (see Item 6 of Section 3), compared to the "indirect" sparsity specification with the l_p function. This paper also provides an illustration of t_α in Euclidean space. The paper (Krzemienowski, 2009) defines the conditional average (CAVG) function. Both the average quantile and CVaR are subfamilies of the CAVG family; therefore, both ‖X‖^S_α and T_α(X) are subfamilies of the CAVG_{β,γ}(|X|) function family. Unfortunately, these functions are not convex or concave in general and are out of the scope of this study, although robust regression applications based on these functions are a promising research direction.

The paper is organized as follows. Section 2 gives a formal definition of the CVaR norm in the stochastic case and lists various mathematical properties of the CVaR norm, including that it is indeed a norm and a regular measure of error. The CVaR norm is a parametric family of norms with respect to the confidence parameter α; properties of the CVaR norm as a function of α are proved. A dual representation of the CVaR norm is derived, and a dual norm to the CVaR norm is defined. A short introduction to the concept of the Risk Quadrangle is given. We derive the quadrangle related to the CVaR norm as a measure of error and we prove that this quadrangle is regular. Section 3 defines the trimmed L1-norm, both in R^n and in the space of random variables, and lists several basic properties. The trimmed L1-norm is an extension of the CVaR norm, but it is not actually a norm. Section 4 illustrates properties of the CVaR norm with a case study. Section 5 provides concluding remarks and acknowledgements.

2. CVaR (Superquantile) Norm: Properties and Connection to the Risk Quadrangle

This section gives a formal definition of the CVaR norm in the stochastic case and proves various properties of the norm. Let us denote [x]_+ = max{0, x}, [x]_- = max{0, −x}. Consider the cumulative distribution function F_X(x) = P(X ≤ x). If, for a probability level α ∈ (0, 1), there is a unique x such that F_X(x) = α, then this x is called the α-quantile q_α(X). In general, however, the value x is not unique, or may not even exist.

There are two boundary values:

q⁺_α(X) = inf{ x : F_X(x) > α },    q⁻_α(X) = sup{ x : F_X(x) < α }.

We will call the quantile the entire interval between the two boundary values,

q_α(X) = [ q⁻_α(X), q⁺_α(X) ].    (1)

We will use the notation ∫ q_p(X) dp ≡ ∫ q⁻_p(X) dp, which is reasonable since ∫ q⁺_p(X) dp = ∫ q⁻_p(X) dp. The CVaR norm is defined as follows.

Definition 1. Let X be a random variable with E|X| < ∞. Then the CVaR (superquantile) norm of X with parameter α ∈ [0, 1] is defined by ‖X‖^S_α = CVaR_α(|X|), i.e., the superquantile of |X| at level α.

Following the logic of (Pavlikov and Uryasev, 2014), ‖X‖^S_α is called the scaled CVaR norm, while Definition 2 introduces ‖X‖_α, corresponding to the non-scaled CVaR norm for R^n in (Pavlikov and Uryasev, 2014, Definition 3). By default we call the function ‖X‖^S_α the CVaR norm. The second version of the norm, the non-scaled CVaR norm, is defined as follows.

Definition 2. Let X be a random variable with E|X| < ∞. Then the non-scaled CVaR (superquantile) norm of X with parameter α ∈ [0, 1) is defined as follows: ‖X‖_α = (1−α)‖X‖^S_α.

Note that by continuity ‖X‖_1 ≡ lim_{α→1} ‖X‖_α = 0. That is, for α = 1, the function ‖X‖_α is not a norm. Recall some properties of CVaR (e.g., Rockafellar and Uryasev, 2013), see also (Rockafellar and Uryasev, 2000, 2002):

Positive homogeneity:  CVaR_α(λX) = λ CVaR_α(X), for λ > 0.    (2)
Subadditivity:  CVaR_α(X + Y) ≤ CVaR_α(X) + CVaR_α(Y).    (3)
Monotonicity:  CVaR_α(X) ≤ CVaR_α(Y), for X ≤ Y.    (4)

Properties and Representations of the CVaR (Superquantile) Norm.

1. ‖X‖^S_0 = L_1(X), ‖X‖^S_1 = L_∞(X). (Follows from Definition 1.)

2. ‖X‖^S_α is a norm on the L^1(Ω) space of random variables. (Norm positive homogeneity and convexity follow from the positive homogeneity and convexity of CVaR and of the absolute value, whereas CVaR_α(|X|) = 0 ⟺ X ≡ 0 is obvious.)

3. The representations

‖X‖^S_α = min_c { c + (1/(1−α)) E[|X| − c]_+ },    (5)

‖X‖^S_α = (1/(1−α)) ∫_α^1 q_p(|X|) dp,    (6)

‖X‖^S_α = E( |X| | |X| > q_α(|X|) ), if X is a continuous random variable,    (7)

follow directly from the CVaR calculation formulas, see, e.g., (Rockafellar and Uryasev, 2000) and (Rockafellar and Uryasev, 2002). Also,

‖X‖^S_α = CVaR_{(1+α)/2}(Y),  where  Y = { X, with probability 1/2;  −X, with probability 1/2; }    (8)

see the proof in Appendix A. A numerical sketch illustrating formulas (5), (6) and Item 3 of Proposition 2.1 is given after the proof of Proposition 2.1 below.

4. Let X be a discrete random variable, i.e., it takes values {x_i}_{i=1}^N with positive probabilities {p_i}_{i=1}^N, where Σ_{i=1}^N p_i = 1 and N ≤ ∞ (N can also be ∞). Let us denote by {x_(i)}_{i=1}^N the sequence {x_i}_{i=1}^N ordered by absolute value², i.e., |x_(i)| ≤ |x_(i+1)|. We also denote by {p_(i)}_{i=1}^N the sequence of probabilities from {p_i}_{i=1}^N corresponding to {x_(i)}_{i=1}^N. Then

(a) for α = 1,
‖X‖^S_1 = |x_(N)|, for N < ∞;    ‖X‖^S_1 = lim_{i→∞} |x_(i)|, for N = ∞;

(b) for α_j = Σ_{i=1}^j p_(i) and j < N, j ∈ Z_+ ∪ {0},
‖X‖^S_{α_j} = (1/(1−α_j)) Σ_{i=j+1}^N |x_(i)| p_(i), for N < ∞;    ‖X‖^S_{α_j} = (1/(1−α_j)) Σ_{i=j+1}^∞ |x_(i)| p_(i), for N = ∞;

(c) for α_j < α < α_{j+1},
‖X‖^S_α = (1−λ) ((1−α_j)/(1−α)) ‖X‖^S_{α_j} + λ ((1−α_{j+1})/(1−α)) ‖X‖^S_{α_{j+1}},  where λ = (α − α_j)/(α_{j+1} − α_j).

See the proof in Appendix A.

5. ‖X‖^S_α is a continuous increasing function of α. (Follows from the fact that CVaR is a continuous increasing function of α, see (Rockafellar and Uryasev, 2002).)

6. ‖X‖_α is a concave and decreasing function of α. (The integral ∫_0^α q_p(|X|) dp is a convex function of α, as shown in (Ogryczak and Ruszczynski, 2002), and ‖X‖_α = E|X| − ∫_0^α q_p(|X|) dp.)

7. ‖X‖_α and (1−α)CVaR_α(X) are piecewise-linear functions of α for a discretely distributed random variable X. (Since the quantile function of a discretely distributed random variable is a step function, its integral is piecewise-linear, and ‖X‖_α = E|X| − ∫_0^α q_p(|X|) dp.)

8. The norm ‖X‖^S_α generates a Banach space for α ∈ [0, 1]. (From the definition of a Banach space³ it follows that, since L_1(X) = E|X| generates⁴ a Banach space and E|X| ≤ ‖X‖^S_α ≤ (1/(1−α)) E|X|, the CVaR norm also generates a Banach space for α ∈ [0, 1), and ‖X‖^S_1 = L_∞(X) for α = 1.)

9. E(X) = ‖X‖^S_α is a regular measure of error. See the proof in Appendix A.

The asterisk * denotes the dual norm⁵ to a norm. Therefore, ‖Y‖^{S*}_α denotes the norm dual to the CVaR norm ‖X‖^S_α.

Proposition 2.1. For X ∈ L^1(Ω) and the norm ‖·‖^S_α, the following statements hold:

1. ‖X‖^S_α = sup_{Y∈Y} E[XY], where Y = { Y : ‖Y‖_∞ ≤ 1/(1−α), E|Y| ≤ 1 } = { Y : E[XY] ≤ ‖X‖^S_α for all X } is a closed convex set.

² Note that the ordered sequence {x_(i)} may not exist for some sets {x_i}; a counterexample is a set whose absolute values have no smallest element, e.g., x_i = 1/i, i = 1, 2, ....
³ A Banach space is a vector space X over R which is equipped with a norm and which is complete with respect to that norm. By definition, completeness means that for every Cauchy sequence {x_n}_{n=1}^∞ in X (i.e., for every ε > 0 there exists N such that ‖x_m − x_n‖ < ε for all m, n > N), there exists an element x in X such that lim_{n→∞} x_n = x, i.e., lim_{n→∞} ‖x_n − x‖ = 0.
⁴ We say that a norm L generates the space (X_L, L), where X_L = { X : L(X) < ∞ }.
⁵ Let X be a normed space over R with norm ‖·‖ (i.e., ‖x‖ ∈ R for x ∈ X). Then the dual (or conjugate) normed space X* is defined as the set of all continuous linear functionals from X into R. For f ∈ X*, the dual norm of f is defined by ‖f‖* = sup{ f(x) : x ∈ X, ‖x‖ ≤ 1 } = sup{ f(x)/‖x‖ : x ∈ X, x ≠ 0 }.

2. The norm ‖Y‖^{S*}_α = max{ E|Y|, (1−α) sup|Y| } is dual to the norm ‖X‖^S_α for α ∈ (0, 1).

3. The problem min_{d∈R} ‖X − d‖^S_α has the following solution:

argmin_d ‖X − d‖^S_α = ( q_{(1−α)/2}(X) + q_{(1+α)/2}(X) ) / 2,

min_d ‖X − d‖^S_α = (1/(1−α)) ( ((1+α)/2) CVaR_{(1−α)/2}(X) + ((1−α)/2) CVaR_{(1+α)/2}(X) − EX ).

Proof.

1. The papers (Rockafellar et al., 2006) and (Rockafellar and Uryasev, 2013) proved that CVaR_α(X) = sup_{Q∈Q} E[XQ], where Q = { Q : 0 ≤ Q ≤ 1/(1−α), EQ = 1 }. Denote Y = { Y : ‖Y‖_∞ ≤ 1/(1−α), E|Y| ≤ 1 } and Ȳ = { Y : E[XY] ≤ ‖X‖^S_α for all X }. Then

‖X‖^S_α = sup_{Q∈Q} E[|X|Q] ≤ sup_{Y∈Y} E[XY] ≤ sup_{Y∈Y} E[|X||Y|] ≤ sup_{Q∈Q} E[|X|Q].

The first inequality holds since for any Q ∈ Q there exists Y = (I(X > 0) − I(X < 0)) Q ∈ Y such that |X|Q = XY. Hence, ‖X‖^S_α = sup_{Y∈Y} E[XY]. Note that also ‖X‖^S_α = sup_{Y∈Ȳ} E[XY]; then Ȳ is the closed convex hull of Y. The set Y is closed and convex as an intersection of the two closed convex sets { Y : ‖Y‖_∞ ≤ 1/(1−α) } and { Y : E|Y| ≤ 1 }. That is, Y = Ȳ.

2. Item 1 implies that Y = { Y : ‖Y‖_∞ ≤ 1/(1−α), E|Y| ≤ 1 } is the unit ball of the dual norm ‖Y‖^{S*}_α = sup_{X≠0} E[XY]/‖X‖^S_α. Then the unit sphere { ‖Y‖^{S*}_α = 1 } of the dual norm is the set { Y : sup|Y| = 1/(1−α), E|Y| ≤ 1 } ∪ { Y : sup|Y| ≤ 1/(1−α), E|Y| = 1 }. Therefore, the dual norm equals ‖Y‖^{S*}_α = max{ E|Y|, (1−α) sup|Y| }.

3. Note that for c ≥ 0 and arbitrary x, d ∈ R we have

[|x − d| − c]_+ = { (d − c) − x, for x ≤ d − c;  0, for x ∈ [d − c, d + c];  x − (d + c), for x ≥ d + c }
              = [(d − c) − x]_+ + [x − (d + c)]_+.

Let c_1 = d − c and c_2 = d + c; then this relationship yields

(1−α) c + [|x − d| − c]_+ = ((1−α)/2)(c_2 − c_1) + [c_1 − x]_+ + [x − c_2]_+
                          = ((1+α)/2) c_1 + [x − c_1]_+ + ((1−α)/2) c_2 + [x − c_2]_+ − x.    (9)

An optimal value c in ‖X‖^S_α = min_{c∈R} { c + (1/(1−α)) E[|X| − c]_+ } is the quantile q_α(|X|) and hence nonnegative. Thus, c can be restricted to be nonnegative in this optimization problem, and, applying (9),

min_{d∈R} ‖X − d‖^S_α = min_{c≥0, d∈R} { c + (1/(1−α)) E[|X − d| − c]_+ }
  = (1/(1−α)) min_{c_1, c_2 ∈ R} { ((1+α)/2) c_1 + E[X − c_1]_+ + ((1−α)/2) c_2 + E[X − c_2]_+ − EX }
  = (1/(1−α)) ( ((1+α)/2) CVaR_{(1−α)/2}(X) + ((1−α)/2) CVaR_{(1+α)/2}(X) − EX ),

where the optimal c_1 and c_2 are determined by q_{(1−α)/2}(X) and q_{(1+α)/2}(X), respectively, and yield the optimal d = ( q_{(1−α)/2}(X) + q_{(1+α)/2}(X) ) / 2.
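The following minimal numerical sketch (Python with NumPy; it is not part of the paper, and all function names are illustrative assumptions) estimates the scaled CVaR norm of an equally weighted sample both by the tail-average representation (6) and by the minimization formula (5), and then checks Item 3 of Proposition 2.1: the optimal constant shift and the closed-form expression for the minimal value.

```python
import numpy as np

def quantile_tail_average(sample, beta):
    """CVaR_beta of an equally weighted sample: average of its step quantile function over (beta, 1]."""
    a = np.sort(np.asarray(sample, dtype=float))
    n = len(a)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n        # atom i occupies the interval (i/n, (i+1)/n]
    w = np.maximum(hi, beta) - np.maximum(lo, beta)           # overlap of each atom with (beta, 1]
    return w @ a / (1.0 - beta)

def scaled_cvar_norm(sample, alpha):
    """||X||^S_alpha by representation (6): tail average of |X|."""
    return quantile_tail_average(np.abs(sample), alpha)

def scaled_cvar_norm_by_min(sample, alpha):
    """||X||^S_alpha by formula (5); the piecewise-linear objective attains its minimum at an atom of |X|."""
    a = np.abs(np.asarray(sample, dtype=float))
    return min(c + np.mean(np.maximum(a - c, 0.0)) / (1.0 - alpha) for c in np.unique(a))

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=2000)                           # heavy-tailed test sample
alpha = 0.8
print(scaled_cvar_norm(x, alpha), scaled_cvar_norm_by_min(x, alpha))   # representations (5) and (6) agree

# Item 3 of Proposition 2.1: minimizer and minimum of d -> ||X - d||^S_alpha.
d_grid = np.linspace(x.min(), x.max(), 4001)
vals = np.array([scaled_cvar_norm(x - d, alpha) for d in d_grid])
d_star = 0.5 * (np.quantile(x, (1 - alpha) / 2) + np.quantile(x, (1 + alpha) / 2))
min_closed = ((1 + alpha) / 2 * quantile_tail_average(x, (1 - alpha) / 2)
              + (1 - alpha) / 2 * quantile_tail_average(x, (1 + alpha) / 2)
              - x.mean()) / (1 - alpha)
print(d_grid[vals.argmin()], d_star)      # grid minimizer vs. (q_{(1-a)/2} + q_{(1+a)/2})/2
print(vals.min(), min_closed)             # grid minimum vs. the closed form of Item 3
```

The CVaR of the empirical distribution is computed here exactly by integrating the step quantile function, which is consistent with representation (6) and with the discrete formulas of Item 4.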

The Risk Quadrangle (Rockafellar and Uryasev, 2013) relates risk R(X), deviation D(X), regret V(X), error E(X) and statistic S(X) with the following equations (Rockafellar and Uryasev, 2013, Diagram 3):

V(X) = EX + E(X),    R(X) = EX + D(X),    (10)
D(X) = min_C { E(X − C) },    R(X) = min_C { C + V(X − C) },    (11)
S(X) = argmin_C { E(X − C) } = argmin_C { C + V(X − C) }.    (12)

Following the paper (Rockafellar and Uryasev, 2013, Section 3), we consider the L^2(Ω) space of random variables with finite second moment, EX^2 < ∞, which implies a finite first moment, E|X| < ∞. The natural ("strong") convergence in L^2(Ω) of a sequence of random variables X_k to a random variable X is characterized as follows: L^2-lim_{k→∞} X_k = X ⟺ lim_{k→∞} L_2(X_k − X) = 0. A functional F is closed if for any C ∈ R the set { X : F(X) ≤ C } is closed with respect to L^2-convergence. A functional F is convex if F(λX + (1−λ)Y) ≤ λF(X) + (1−λ)F(Y) for all X, Y and λ ∈ (0, 1). A measure of error E(X) is regular if: 1) E(X) ∈ [0, ∞]; 2) E(X) is closed convex; 3) E(0) = 0; 4) E(X) > 0 for any X ≠ 0; 5) E(X) ≥ ψ(EX) with a convex function ψ on (−∞, ∞) having ψ(0) = 0 but ψ(t) > 0 for t ≠ 0. Definitions of regular measures of risk, deviation and regret are available in (Rockafellar and Uryasev, 2013, Section 3). The quadrangle (R, D, E, V, S) is regular if equations (10)–(12) hold and if also R(X) is a regular measure of risk, D(X) is a regular measure of deviation, V(X) is a regular measure of regret, and E(X) is a regular measure of error. The Quadrangle Theorem of (Rockafellar and Uryasev, 2013) implies that if equations (10)–(12) hold for functions R, D, E, V, S, and if also E(X) is a regular measure of error, then (R, D, E, V, S) is a regular quadrangle. Since ‖X‖^S_α is a regular measure of error, ‖X‖_α is a regular measure of error as well, and the quadrangle related to the CVaR norm as a measure of error is regular. If E(X) = ‖X‖_α and equations (10)–(12) hold, then the corresponding measure of risk and statistic are calculated from Item 3 of Proposition 2.1, and the whole corresponding quadrangle is presented below.

Proposition 2.2 (CVaR (Superquantile) Norm Quadrangle). For α ∈ [0, 1) the error measure E(X) = ‖X‖_α is related to the following regular quadrangle:

S(X) = ( q_{(1−α)/2}(X) + q_{(1+α)/2}(X) ) / 2,
R(X) = ((1−α)/2) CVaR_{(1+α)/2}(X) + ((1+α)/2) CVaR_{(1−α)/2}(X),
D(X) = ((1−α)/2) CVaR_{(1+α)/2}(X − EX) + ((1+α)/2) CVaR_{(1−α)/2}(X − EX),
V(X) = ‖X‖_α + EX,
E(X) = ‖X‖_α.

The CVaR norm quadrangle is similar to the Mixed-Quantile-Based quadrangle, see (Rockafellar and Uryasev, 2013), for

α_1 = (1+α)/2,  α_2 = (1−α)/2,  λ_1 = (1−α)/2,  λ_2 = (1+α)/2.    (13)

For k = 1, 2, define

E_{α_k}(X) = E[ (α_k/(1−α_k)) [X]_+ + [X]_- ],    V_{α_k}(X) = (1/(1−α_k)) E[X]_+.

With the parameters from (13) we obtain the Mixed-Quantile-Based quadrangle with the same risk and deviation as in the CVaR norm quadrangle, but with different statistic, error and regret:

S(X) = ((1−α)/2) q_{(1+α)/2}(X) + ((1+α)/2) q_{(1−α)/2}(X),
V(X) = min_{B_1, B_2} { λ_1 V_{α_1}(X − B_1) + λ_2 V_{α_2}(X − B_2) : λ_1 B_1 + λ_2 B_2 = 0 },
E(X) = min_{B_1, B_2} { λ_1 E_{α_1}(X − B_1) + λ_2 E_{α_2}(X − B_2) : λ_1 B_1 + λ_2 B_2 = 0 }.
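As an illustration (not from the paper; the helper names and the synthetic sample are assumptions of this sketch), the code below evaluates the five elements of the CVaR norm quadrangle of Proposition 2.2 for an equally weighted sample and numerically checks the General Relationships (10)–(11).

```python
import numpy as np

def cvar(sample, beta):
    """CVaR_beta of an equally weighted sample: average of its step quantile function over (beta, 1]."""
    a = np.sort(np.asarray(sample, dtype=float))
    n = len(a)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    w = np.maximum(hi, beta) - np.maximum(lo, beta)
    return w @ a / (1.0 - beta)

def cvar_norm(sample, alpha):
    """Non-scaled CVaR norm ||X||_alpha = (1 - alpha) CVaR_alpha(|X|)."""
    return (1.0 - alpha) * cvar(np.abs(sample), alpha)

def cvar_norm_quadrangle(sample, alpha):
    x = np.asarray(sample, dtype=float)
    b1, b2 = (1 + alpha) / 2, (1 - alpha) / 2          # alpha_1, alpha_2 from (13)
    S = 0.5 * (np.quantile(x, b2) + np.quantile(x, b1))
    R = (1 - alpha) / 2 * cvar(x, b1) + (1 + alpha) / 2 * cvar(x, b2)
    D = (1 - alpha) / 2 * cvar(x - x.mean(), b1) + (1 + alpha) / 2 * cvar(x - x.mean(), b2)
    E = cvar_norm(x, alpha)
    V = E + x.mean()
    return dict(S=S, R=R, D=D, V=V, E=E)

rng = np.random.default_rng(2)
x = rng.lognormal(sigma=1.0, size=4000)                # skewed test sample
alpha = 0.7
q = cvar_norm_quadrangle(x, alpha)
print(q)
print("R - (EX + D) =", q["R"] - (x.mean() + q["D"]))              # relation (10), should be ~0
print("D - E(X - S) =", q["D"] - cvar_norm(x - q["S"], alpha))     # relation (11), should be ~0
```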

Suppose one is optimizing a measure of error over some parametric family X(θ):

min_θ E_i(X(θ)),    (14)

where i = 1 for the error from the CVaR norm quadrangle and i = 2 for the error from the Mixed-Quantile-Based quadrangle. Assume that X(θ) = θ_0 + Y(θ̄), where θ = (θ_0, θ̄) and θ_0 is a free parameter. Define θ^i = argmin_θ E_i(X(θ)). Since both quadrangles have the same deviation D, and minimization over the free intercept θ_0 reduces (14) to minimization of the deviation by (11), we have θ̄^1 = θ̄^2 = argmin_θ̄ D(Y(θ̄)) = θ̄*. Therefore, Y(θ̄^1) = Y(θ̄^2) and the two optimal points X(θ^1) and X(θ^2) of problems (14) can be obtained from each other by adding a constant shift:

X(θ^1) = (θ_0)^1 + Y(θ̄*),  X(θ^2) = (θ_0)^2 + Y(θ̄*),  X(θ^1) − X(θ^2) = (θ_0)^1 − (θ_0)^2.

3. Trimmed L1-Norm

The paper (Ge et al., 2011) considers a class of functions defined similarly to L_p norms, but for p ∈ [0, 1). These functions are not norms, and they are concave on some regions of the space where they are defined⁶. Such functions are used in optimization problems to achieve a sparse solution vector. We define similar functions in terms of the CVaR concept. Below we define the trimmed L1-norm. Contrary to the CVaR norm, this function takes the average over the smallest α-fraction of the absolute values |X| of a random variable X. The word "norm" here is potentially deceptive, since it refers to L1 and not to the resulting function itself: the trimmed L1-norm is not actually a norm. The term "trimmed" is widely used in robust regression, where the average of a few smallest regression residuals is minimized. Before averaging, residuals are usually transformed with some function φ : R → [0, ∞). The case φ(x) = x^2 is the most commonly used and corresponds to least trimmed squares (LTS) regression, see (Rousseeuw, 1984; Rousseeuw and Van Driessen, 1999; Zabarankin and Uryasev, 2014b). The case φ(x) = |x| corresponds to least trimmed sum of absolute deviations (LTA) regression, see (Hawkins and Olive, 1999; Bassett Jr, 1991; Hössjer, 1994); it also corresponds to the trimmed L1-norm function considered here, see below. The general case of an arbitrary function φ was described in (Neykov et al., 2012) and formalized for random variables via the average quantile function in (Zabarankin and Uryasev, 2014b). One possible way to define the trimmed L1-norm T_α is as follows.

Definition 3. Let X be a random variable with E|X| < ∞. The trimmed L1-norm T_α(X) for α ∈ [0, 1] is defined by T_α(X) = −CVaR_{1−α}(−|X|), i.e., the average of |X| over its left α-tail.

Below we provide some mathematical properties of the trimmed L1-norm. These properties mostly follow from the ones presented in Section 2.

Properties and Representations of the Trimmed L1-Norm.

1. For α = 0, the trimmed L1-norm is T_0(X) = inf|X|. For α ∈ (0, 1] the trimmed L1-norm can be calculated using one of the formulas below:

T_α(X) = (1/α) ( E|X| − (1−α) CVaR_α(|X|) ),    (15)
T_α(X) = (1/α) ∫_0^α q_p(|X|) dp,    (16)
T_α(X) = max_{c≥0} { c − (1/α) E[|X| − c]_- }.    (17)

2. 0 ≤ T_α(X) ≤ L_1(X) = E|X|.

3. T_α(λX) = |λ| T_α(X).

4. If XY ≥ 0, then T_α(λX + (1−λ)Y) ≥ λ T_α(X) + (1−λ) T_α(Y).

5. T_α(0) = 0; however, there exists X ≠ 0 such that T_α(X) = 0.

⁶ For p ∈ [0, 1) there is l_p(x) in R^n and L_p(X) in the space of random variables. Concavity holds, for example, on the region x ≥ 0 in R^n and on the region X ≥ 0 in the space of random variables.

6. T_α(X) ≥ 0, and T_α(X) = 0 implies that P(X = 0) ≥ α.

7. T_α(X) is a continuous non-decreasing function w.r.t. α.

8. αT_α(X) is a convex non-decreasing function w.r.t. α.

Formula (15) follows from (Pflug, 2000) applied to |X|. T_α(X) can be interpreted as the expectation of |X| in its left α-tail. Note that T_α(X) is then the average quantile of the random variable |X|; hence formula (16) holds, see (Zabarankin and Uryasev, 2014a). Formula (17) follows from (Rockafellar et al., 2006). For p ∈ (0, 1) the inequality L_p(X) ≤ L_1(X) holds, where L_p(X) = (E|X|^p)^{1/p}: since x^p is concave for 0 < p < 1, Jensen's inequality gives E|X|^p ≤ (E|X|)^p, therefore (E|X|^p)^{1/p} ≤ E|X|; Item 2 shows the analogous bound for the trimmed L1-norm. Items 2–4 follow from the fact that −(1/α) ∫_0^α q_p(X) dp is a coherent, expectation-bounded risk measure, see (Rockafellar et al., 2006). Item 5 follows from the example of X = 0 with probability 0.5 and X = 1 with probability 0.5: for α ∈ [0, 0.5] the function T_α(X) = 0. Item 6 follows from formula (16). Note that Item 6 can be used in optimization settings with a constraint T_α(X) ≤ 0 to achieve a given sparsity of the solution variable vector, or as "max α subject to T_α(X) ≤ 0" to maximize the sparsity of the solution variable vector.

Item 4 implies that T_α(X) is a concave function for X ≥ 0. Notice that this property cannot be strengthened to concavity on the whole space of random variables. Consider a function g(X) such that g(X) ≥ 0, g(0) = 0 and g(X) ≢ 0, and assume that g(X) is concave in the space of random variables. Since g(X) ≢ 0, there exists X such that g(X) > 0. Then (1/2)g(X) + (1/2)g(−X) > 0 = g(0) = g((1/2)X + (1/2)(−X)), which contradicts the concavity of g(X).

Consider x = (x_1, ..., x_n) ∈ R^n. Let X(x) be a discretely distributed random variable taking values x_1, ..., x_n with equal probabilities 1/n. Then the trimmed L1-norm on R^n is defined as t_α(x) = T_α(X(x)). Properties of t_α(x) are similar to the properties of T_α(X) and follow directly from Definition 3 and the enlisted properties. For x ∈ R^n, the trimmed L1-norm is calculated as follows (a numerical sketch follows Figure 1 below):

t_α(x) = (1/α) ( l_1(x) − (1−α) ‖x‖^S_α ),
t_{α_j}(x) = ( |x|_(1) + ... + |x|_(j) ) / j,  for α = α_j = j/n,
t_0(x) = min_i |x_i|,  for α = 0,
t_1(x) = l_1(x),  for α = 1,

where |x|_(1) ≤ ... ≤ |x|_(n) denote the absolute values of the components sorted in ascending order. Figure 1 shows level sets of ‖x‖^S_α and t_α(x) in R^2 for different values of α. The function t_α(·) is a natural extension of ‖·‖^S_α: when α varies from 0 to 1, the function t_α(x) changes from min_i |x_i| to l_1(x) = (1/n) Σ_{i=1}^n |x_i|, and the function ‖x‖^S_α changes from l_1(x) to max_i |x_i|.

4. Case Study

4.1. Linear Regression: Financial Optimization Dataset

We illustrate the CVaR norm quadrangle, see Proposition 2.2, with the following case study. The case study results, data, and codes are posted at this link⁷. We considered a linear regression problem with the CVaR norm error. Let X be an n × d design matrix, where n is the number of observations and d is the number of explanatory variables. Let y ∈ R^n be the vector of observations of the dependent variable and let e ∈ R^n be the vector of ones. We denote by X̃ = [e, X] the extended design matrix including an additional unit column. We considered the linear regression ŷ = X̃a, where a ∈ R^{d+1} is a vector of parameters. To solve this regression problem we minimized the CVaR norm of the vector of residuals y − ŷ:

min_{a ∈ R^{d+1}} ‖y − X̃a‖_α.    (18)

⁷ cvar-norm-regression/

Figure 1: Level sets of the CVaR norm ‖x‖^S_α for α = 0, 0.25, 0.5 and level sets of the trimmed L1-norm t_α(x) for α = 0.5, 0.75, 1 in R^2. For α ∈ [0.5, 1] the norm ‖x‖^S_α = max_i |x_i|. For α ∈ [0, 0.5] the function t_α(x) = min_i |x_i|. The equality ‖x‖^S_0 = l_1(x) = t_1(x) holds.
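The sketch below (illustrative only, not from the paper; the helper names and the example vector are assumptions) computes the trimmed L1-norm t_α(x) of a vector by averaging the smallest absolute components, as in formula (16), and cross-checks it against representation (15), which expresses t_α through l_1 and the scaled CVaR norm.

```python
import numpy as np

def scaled_cvar_norm(x, alpha):
    """||x||^S_alpha: average of the step quantile function of |x_i| (equal weights 1/n) over (alpha, 1]."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = len(a)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    w = np.maximum(hi, alpha) - np.maximum(lo, alpha)
    return w @ a / (1.0 - alpha)

def trimmed_l1(x, alpha):
    """t_alpha(x): average of |x_i| over the smallest alpha-fraction of components, as in formula (16)."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = len(a)
    if alpha == 0.0:
        return a[0]                                    # t_0(x) = min_i |x_i|
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    w = np.minimum(hi, alpha) - np.minimum(lo, alpha)  # overlap of each atom with (0, alpha]
    return w @ a / alpha

x = np.array([0.0, -0.1, 3.0, -2.0, 0.5])
for alpha in (0.0, 0.2, 0.4, 0.6, 1.0):
    t16 = trimmed_l1(x, alpha)
    t15 = (np.mean(np.abs(x)) - (1 - alpha) * scaled_cvar_norm(x, alpha)) / alpha if alpha > 0 else t16
    print(alpha, round(t16, 6), round(t15, 6))         # representations (15) and (16) agree
# For alpha = j/n, t_alpha is the average of the j smallest |x_i|; t_1 equals l_1(x).
# A constraint t_alpha(x) <= 0 forces at least alpha*n components of x to be zero (Item 6).
```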

Table 1: Optimal vector of parameters and objective for the linear regression with the CVaR norm (columns: rlv, rlg, ruj, ruo, intercept, objective).

It is desirable to use the CVaR norm in regression when we want to directly control large absolute values of residuals. We are indifferent to the sign of a residual: we just do not want large absolute values, but are tolerant to small absolute values. A similar purpose can be achieved by minimizing an L_p norm, but in that case we do not directly control some specific percentage of the largest outcomes. With the CVaR norm we can directly specify the percentage of the largest absolute residuals, e.g., the 10% of largest outcomes. We also want to mention that percentile regression (Koenker and Bassett Jr, 1978) with the Koenker and Bassett function is quite close to CVaR norm regression. However, percentile regression concentrates on large outcomes in one tail, while CVaR norm regression pays attention to large outcomes without identifying the sign of the residual. A similar type of error has been considered earlier in the OR literature. For instance, (Zabarankin and Uryasev, 2014b) considered a so-called "average alpha-quantile" error minimization applied to the transformed residuals. In that problem, the average is taken over the left tail of the distribution. Such an approach corresponds to trimmed error measures and to robust regression; it produces a regression which is stable to outliers. By selecting the absolute value as the transformation function in the "average alpha-quantile" error we arrive at trimmed L1-norm minimization, or LTA regression. Here we consider averaging over the right tail of the distribution, which leads to CVaR, or superquantile, functions. Similarly, by selecting the absolute value as the transformation function in CVaR we arrive at CVaR norm minimization. CVaR norm regression is not stable to outliers; moreover, it is a "pessimistic" estimator focused on a fraction of the most "problematic" observations. Aside from the potential benefits of the pessimistic approach, problem (18) is a convex problem and can be solved precisely and efficiently.

We consider the dataset from the case study "Estimation of CVaR through Explanatory Factors with Mixed Quantile Regression"⁸. The data contain returns of the Fidelity Magellan Fund as the dependent variable. The Russell 2000 Value Index (RUJ), Russell 1000 Value Index (RLV), Russell 2000 Growth Index (RUO) and Russell 1000 Growth Index (RLG) are taken as independent variables. The data include 1,264 observations. The solving time on a PC with a 2.83 GHz processor is a fraction of a second. The CVaR norm is minimized with the Portfolio Safeguard (PSG) software package (AORDA, 2009). The confidence level α in the CVaR norm equals α = 0.9. We minimized CVaR instead of the CVaR norm, according to calculation formula (8). Denote ȳ = [y; −y] ∈ R^{2n} and X̄ = [X̃; −X̃] ∈ R^{2n×(d+1)}. Formula (8) implies

‖y − X̃a‖^S_α = CVaR_{(1+α)/2}(ȳ − X̄a).

Then, problem (18) is equivalently stated as follows:

min_{a ∈ R^{d+1}} CVaR_{(1+α)/2}(ȳ − X̄a).

Optimization results for this problem are in Table 1. The CVaR norm quadrangle is a regular quadrangle, see Proposition 2.2. According to the Regression Theorem of (Rockafellar and Uryasev, 2013), first stated as the Error-Shaping Decomposition of Regression theorem in (Rockafellar et al., 2008), the intercept obtained in regression equals the statistic of the modified residuals. In the CVaR norm quadrangle the statistic equals S(X) = ( q_{(1+α)/2}(X) + q_{(1−α)/2}(X) ) / 2. Denote the optimal vector of parameters obtained in regression by a* = [c*, b*], where c* ∈ R is the optimal intercept. According to the Regression Theorem, c* ∈ S(y − Xb*) (we write "∈" because, in general, the quantile q_p(X) is an interval, see (1), and therefore S(X) is also an interval). At the optimal point, the computed quantiles satisfy ( q_{0.05}(y − Xb*) + q_{0.95}(y − Xb*) ) / 2 = c*. That is, the numerical experiment confirms the theoretical result for the CVaR norm quadrangle: the statistic S(y − Xb*) = ( q_{(1−α)/2}(y − Xb*) + q_{(1+α)/2}(y − Xb*) ) / 2 contains the optimal intercept c*.

⁸ estimation-of-cvar-through-explanatory-factors-with-mixed-quantile-regression/
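The paper solves (18) with PSG. As an illustration only, the sketch below solves the same kind of problem with a generic LP solver, using the Rockafellar–Uryasev representation (5) of the CVaR norm of the residual vector; the function name and the synthetic data are assumptions of this sketch, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def cvar_norm_regression(X, y, alpha):
    """Minimize ||y - [e, X] a||^S_alpha over a via the LP reformulation of (5); returns a."""
    n, d = X.shape
    Xt = np.column_stack([np.ones(n), X])              # extended design matrix [e, X]
    m = d + 1
    # decision vector z = (a, c, u): u_i >= |y_i - (Xt a)_i| - c, u_i >= 0, c >= 0
    obj = np.concatenate([np.zeros(m), [1.0], np.full(n, 1.0 / ((1.0 - alpha) * n))])
    I = np.eye(n)
    A_ub = np.block([[-Xt, -np.ones((n, 1)), -I],      # -Xt a - c - u <= -y
                     [ Xt, -np.ones((n, 1)), -I]])     #  Xt a - c - u <=  y
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * m + [(0, None)] + [(0, None)] * n
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m]

# illustrative data: 4 explanatory variables, heavy-tailed noise
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
y = 0.01 + X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.laplace(scale=0.05, size=500)
a_hat = cvar_norm_regression(X, y, alpha=0.9)
print("intercept and coefficients:", a_hat)
```

Minimizing the scaled norm in the LP gives the same coefficient vector as minimizing the non-scaled norm in (18), since they differ by the positive factor 1 − α.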

Figure 2: Samples generated for regression. The true law is the red line; observations are the blue dots.

4.2. Linear Regression on Simulated Data

In the following case study we consider the L1-norm as the out-of-sample criterion. For the in-sample criterion we consider elements of the CVaR norm parametric family; the L1-norm is the element of this family with parameter value 0. We vary the value of the parameter between 0 and 1 for in-sample learning in order to optimize the L1-norm of residuals out-of-sample. We show that using CVaR norms with parameter value bigger than 0 can lead, for small training samples, to better out-of-sample performance than direct in-sample minimization of the L1-norm.

We illustrate CVaR norm regression with a controlled numerical experiment. In this case study we set the true law as y(x) = x for x ∈ [0, 1]. We pick the sample points (x_1, ..., x_11) = (0, 0.1, ..., 0.9, 1). Then we simulate the dependent variable as y_i = x_i + ε_i, where the error terms ε_i ~ Laplace(0, 0.5) are distributed according to the Laplace distribution (double exponential distribution, with probability density function f(x; µ, b) = (1/(2b)) exp(−|x − µ|/b)). The L1-norm of regression residuals is chosen as the out-of-sample error criterion. Similarly to the exponential distribution, the Laplace distribution has heavy tails, which makes L1-based regression a reasonable choice. Moreover, minimization of the L1-norm of residuals is equivalent to likelihood maximization in the case of Laplace-distributed errors. Since the true distribution is known, there is no need to divide the sample into training and testing subsamples to measure the error of the estimated model. Let ŷ_i denote the model estimate at the data point x_i. First, the "expected error" is calculated as

EE = (1/n) Σ_{i=1}^n E_ε | y(x_i) + ε − ŷ_i |,

where n = 11 and each expectation is taken for ε ~ Laplace(0, 0.5). Also, since the true law is known, the "true error" is calculated as

TE = (1/n) Σ_{i=1}^n | y(x_i) − ŷ_i |.

As in Section 4.1, problem (18) is solved; L1-norm minimization is the special case of (18) with α = 0. The goal of the case study is to test whether the "pessimistic" estimation provided by CVaR norm minimization with α > 0 can achieve better out-of-sample quality of the estimators, measured by EE and TE, than direct L1-norm minimization with α = 0. To smooth the results obtained with randomly generated samples, N = 1000 different samples are generated, each of size n = 11. For each sample, problem (18) is solved for the values α_k = (k−1)/(K−1), k = 1, ..., K, with K = 21 (that is, the values of α are 0, 0.05, 0.10, ..., 1). For sample j, 1 ≤ j ≤ N, and parameter α_k, the regression estimator is found, and the errors EE_j^k and TE_j^k are calculated. Figure 3 shows, as functions of α, the average error and the standard deviation of the error, scaled to the average error for α = 0 and to the standard deviation of the error for α = 0, correspondingly. These values are calculated for the two types of error we consider: the "expected error" EE and the "true error" TE. The minimal average EE is achieved for α = 0.5 and is slightly lower than the average EE for α = 0. Despite the modest drop in average error, the CVaR norm model is more stable than the L1-norm model: the standard deviation of EE is noticeably lower for α = 0.5 than for α = 0. The average TE is minimized by the same value α = 0.5 and is lower than for α = 0; the standard deviation of TE is also lower for α = 0.5 than for α = 0. This case study shows that even if the CVaR norm is not the out-of-sample criterion itself, as was assumed in Section 4.1, minimization of the CVaR norm can be more advantageous than direct optimization of the out-of-sample criterion.
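A compact Monte Carlo sketch of the Section 4.2 experiment is given below (illustrative, not the paper's code): the true law y = x on 11 grid points, Laplace(0, 0.5) noise, CVaR-norm regression fitted for several values of α, and the "true error" TE averaged over simulated samples. The LP fit and all names are assumptions of this sketch; the replication count is kept small here.

```python
import numpy as np
from scipy.optimize import linprog

def cvar_norm_fit(x, y, alpha):
    """Fit y ~ a0 + a1*x by minimizing the CVaR norm of residuals (LP reformulation of (5))."""
    n = len(x)
    Xt = np.column_stack([np.ones(n), x])
    obj = np.concatenate([np.zeros(2), [1.0], np.full(n, 1.0 / ((1.0 - alpha) * n))])
    I = np.eye(n)
    A_ub = np.block([[-Xt, -np.ones((n, 1)), -I],
                     [ Xt, -np.ones((n, 1)), -I]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * 2 + [(0, None)] * (n + 1)
    return linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x[:2]

rng = np.random.default_rng(5)
xg = np.linspace(0.0, 1.0, 11)                       # sample points 0, 0.1, ..., 1
alphas = np.linspace(0.0, 0.9, 10)                   # alpha = 1 excluded (degenerate LP weight)
n_rep = 200                                          # number of simulated samples (kept small here)
te = np.zeros((n_rep, len(alphas)))
for j in range(n_rep):
    y = xg + rng.laplace(scale=0.5, size=len(xg))    # y_i = x_i + eps_i
    for k, a in enumerate(alphas):
        a0, a1 = cvar_norm_fit(xg, y, a)
        te[j, k] = np.mean(np.abs(xg - (a0 + a1 * xg)))   # "true error" TE against y(x) = x
print(dict(zip(np.round(alphas, 2), np.round(te.mean(axis=0), 4))))
# In the paper's experiment an intermediate alpha gives a lower and less variable error than alpha = 0.
```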

Figure 3: Average error and standard deviation of the error, scaled to the average error for α = 0 and to the standard deviation of the error for α = 0 correspondingly, as functions of α. Left: "expected error" EE. Right: "true error" TE.

5. Conclusion

This paper extended the definition of the CVaR norm from Euclidean space to the L^1(Ω) space of random variables and proved that the CVaR norm is indeed a norm. The CVaR norm is defined in two variations: scaled and non-scaled. Several properties of the scaled and non-scaled CVaR norm, as a function of the confidence level, were proved. The dual norm of the CVaR norm was proved to be the maximum of the L1 norm and a scaled L∞ norm. The CVaR norm was proved to be a regular measure of error, and the components of the corresponding CVaR (Superquantile) Norm Risk Quadrangle were found. The trimmed L1-norm, a non-convex extension of the CVaR norm, was introduced analogously to the function L_p for p < 1. Linear regression problems were solved by minimizing the CVaR norm of regression residuals. The CVaR norm has an intuitive interpretation when chosen as an in-sample criterion, and minimization of the CVaR norm can be more advantageous than direct optimization of the out-of-sample criterion.

Acknowledgements: The authors would like to thank Prof. R. Tyrrell Rockafellar, Prof. Michael Zabarankin, Prof. Jun-ya Gotoh and Dr. Konstantin Pavlikov for their helpful comments and suggestions. The authors would also like to thank Prof. Georg Pflug and two anonymous reviewers for their tremendous help in shaping the material of the paper in a clearer way and in connecting this study with existing work in related fields. This work was partially supported by the USA AFOSR grants "Design and Redesign of Engineering Systems" and "New Developments in Uncertainty: Linking Risk Management, Reliability, Statistics and Stochastic Optimization".

References

AORDA (2009). Portfolio Safeguard (PSG). American Optimal Decisions, Inc., Gainesville, FL.

Bassett Jr, G. W. (1991). Equivariant, monotonic, 50% breakdown estimators. The American Statistician, 45.

Bertsimas, D., Pachamanova, D., and Sim, M. (2004). Robust linear optimization under general norms. Operations Research Letters, 32(6):510–516.

Brezis, H. (2010). Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer Science & Business Media.

Dentcheva, D. and Ruszczynski, A. (2003). Optimization with stochastic dominance constraints. SIAM Journal on Optimization, 14(2).

Ge, D., Jiang, X., and Ye, Y. (2011). A note on the complexity of L_p minimization. Mathematical Programming, 129(2):285–299.

Gotoh, J.-y. and Uryasev, S. (2013). Two pairs of families of polyhedral norms versus l_p-norms: proximity and applications in optimization. University of Florida, Research Report (TwoPairsOfPolyhedralNormsVersusLpNorms_0305_UFISEPD.pdf).

Hall, R. W. and Tymoczko, D. (2012). Submajorization and the geometry of unordered collections. The American Mathematical Monthly, 119(4).

Hawkins, D. M. and Olive, D. (1999). Applications and algorithms for least trimmed sum of absolute deviations regression. Computational Statistics & Data Analysis, 32.

Hössjer, O. (1994). Rank-based estimates in the linear model with high breakdown point. Journal of the American Statistical Association, 89(425):149–158.

Koenker, R. and Bassett Jr, G. (1978). Regression quantiles. Econometrica, 46(1):33–50.

Krzemienowski, A. (2009). Risk preference modeling with conditional average: an application to portfolio optimization. Annals of Operations Research, 165:67–95.

Marshall, A. W., Olkin, I., and Arnold, B. C. (2011). Inequalities: Theory of Majorization and Its Applications. Springer, New York.

Merigó, J. M. and Yager, R. R. (2013). Norm aggregations and OWA operators. In Aggregation Functions in Theory and in Practise. Springer.

Neykov, N., Čížek, P., Filzmoser, P., and Neytchev, P. (2012). The least trimmed quantile regression. Computational Statistics & Data Analysis, 56(6).

Ogryczak, W. and Ruszczynski, A. (2002). Dual stochastic dominance and related mean-risk models. SIAM Journal on Optimization, 13:60–78.

Ogryczak, W. and Zawadzki, M. (2002). Conditional median: a parametric solution concept for location problems. Annals of Operations Research, 110(1–4):167–181.

Pavlikov, K. and Uryasev, S. (2014). CVaR norm and applications in optimization. Optimization Letters, 8(7):1999–2020.

Pflug, G. Ch. (2000). Some remarks on the value-at-risk and the conditional value-at-risk. In Probabilistic Constrained Optimization. Springer.

Rockafellar, R. T. and Royset, J. O. (2010). On buffered failure probability in design and optimization of structures. Reliability Engineering and System Safety, 95(5).

Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of Risk, 2:21–41.

Rockafellar, R. T. and Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of Banking & Finance, 26(7):1443–1471.

Rockafellar, R. T. and Uryasev, S. (2013). The fundamental risk quadrangle in risk management, optimization and statistical estimation. Surveys in Operations Research and Management Science, 18(1–2):33–53.

Rockafellar, R. T., Uryasev, S., and Zabarankin, M. (2006). Generalized deviations in risk analysis. Finance and Stochastics, 10(1):51–74.

Rockafellar, R. T., Uryasev, S., and Zabarankin, M. (2008). Risk tuning with generalized linear regression. Mathematics of Operations Research, 33(3):712–729.

Romeijn, H. E., Ahuja, R. K., Dempsey, J. F., and Kumar, A. (2005). A column generation approach to radiation therapy treatment planning using aperture modulation. SIAM Journal on Optimization, 15(3).

Rousseeuw, P. J. (1984). Least median of squares regression. Journal of the American Statistical Association, 79(388):871–880.

Rousseeuw, P. J. and Van Driessen, K. (1999). Computing LTS regression for large data sets. Institute of Mathematical Statistics Bulletin.

Torra, V. and Narukawa, Y. (2007). Modeling Decisions: Information Fusion and Aggregation Operators. Springer, Berlin.

Yager, R. R. (2010). Norms induced from OWA operators. IEEE Transactions on Fuzzy Systems, 18(1):57–66.

Zabarankin, M. and Uryasev, S. (2014a). Random variables. In Statistical Decision Problems. Springer.

Zabarankin, M. and Uryasev, S. (2014b). Regression models. In Statistical Decision Problems. Springer.

Appendix A. Proofs for Properties of the CVaR (Superquantile) Norm

Proof for Item 3 from Section 2.

Proof. Formulas (5)–(7) follow directly from the CVaR calculation formulas, see, e.g., (Rockafellar and Uryasev, 2000) and (Rockafellar and Uryasev, 2002). Let us prove formula (8). For any x ≥ 0,

F_Y(x) = P(Y ≤ x) = (1/2) P(X ≤ x) + (1/2) P(−X ≤ x) = (1/2) ( P(X ≤ x) + 1 − P(X < −x) ) = (1/2) ( 1 + P(−x ≤ X ≤ x) ) = ( 1 + F_{|X|}(x) ) / 2;

hence, since F_Y(x) ≤ 1/2 for x < 0, for any α ∈ [0, 1)

q⁺_{(1+α)/2}(Y) = inf{ x : F_Y(x) > (1+α)/2 } = inf{ x : F_{|X|}(x) > α } = q⁺_α(|X|),

and finally,

‖X‖^S_α = (1/(1−α)) ∫_α^1 q⁺_p(|X|) dp = (1/(1−α)) ∫_α^1 q⁺_{(1+p)/2}(Y) dp = (1/(1−(1+α)/2)) ∫_α^1 q⁺_{(1+p)/2}(Y) d((1+p)/2) = CVaR_{(1+α)/2}(Y).

Proof for Item 4 from Section 2.

Proof. Let us proceed with the proof part by part.

(a) The claim follows directly from CVaR_1(|X|) = sup{ |x_i| }_{i=1}^N.

(b) If X is a discrete random variable, then q_p(|X|) is a step function of p. Step number i has length α_i − α_{i−1} = p_(i) and height q_{α_i}(|X|) = |x_(i)|. Then, using the additivity of the integral in (6),

(1 − α_j) ‖X‖^S_{α_j} = ∫_{α_j}^1 q_p(|X|) dp = Σ_{i=j+1}^N ∫_{α_{i−1}}^{α_i} q_p(|X|) dp = Σ_{i=j+1}^N p_(i) |x_(i)|.

(c) Since q_p(|X|) is a step function of p and α_j < α < α_{j+1}, then, using the additivity of the integral in (6),

∫_α^1 q_p(|X|) dp = (1−λ) ∫_{α_j}^1 q_p(|X|) dp + λ ∫_{α_{j+1}}^1 q_p(|X|) dp,  λ = (α − α_j)/(α_{j+1} − α_j),

‖X‖^S_α = (1−λ) ((1−α_j)/(1−α)) ‖X‖^S_{α_j} + λ ((1−α_{j+1})/(1−α)) ‖X‖^S_{α_{j+1}}.

Proof for Item 9 from Section 2.

Proof. We prove that the axioms of a regular measure of error hold for ‖X‖^S_α. E(X) ∈ [0, ∞], E(0) = 0 and E(X) > 0 for any X ≠ 0 follow from the fact that ‖X‖^S_α is a norm. E(X) is closed and convex, which follows from Item 1 of Proposition 2.1. Finally, E(X) = ‖X‖^S_α ≥ E|X| ≥ |EX| = ψ(EX) for ψ(x) = |x| on (−∞, ∞), which has ψ(0) = 0 and ψ(t) > 0 for t ≠ 0.
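Formula (8) and the proof above can also be checked numerically. The short sketch below (illustrative, not the paper's code) compares CVaR_α(|X|) with CVaR_{(1+α)/2} of the symmetrized sample [x; −x], which represents the random variable Y of formula (8).

```python
import numpy as np

def cvar(sample, beta):
    """CVaR_beta of an equally weighted sample (exact tail average of the step quantile function)."""
    a = np.sort(np.asarray(sample, dtype=float))
    n = len(a)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    w = np.maximum(hi, beta) - np.maximum(lo, beta)
    return w @ a / (1.0 - beta)

rng = np.random.default_rng(7)
x = rng.normal(loc=0.3, scale=1.5, size=1001)
for alpha in (0.0, 0.5, 0.9):
    lhs = cvar(np.abs(x), alpha)                           # ||X||^S_alpha = CVaR_alpha(|X|)
    rhs = cvar(np.concatenate([x, -x]), (1 + alpha) / 2)   # CVaR_{(1+alpha)/2}(Y), Y = +/- X with prob. 1/2
    print(alpha, lhs, rhs)                                 # the two sides of formula (8) coincide
```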


1/37. Convexity theory. Victor Kitov 1/37 Convexity theory Victor Kitov 2/37 Table of Contents 1 2 Strictly convex functions 3 Concave & strictly concave functions 4 Kullback-Leibler divergence 3/37 Convex sets Denition 1 Set X is convex

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

On Coarse Geometry and Coarse Embeddability

On Coarse Geometry and Coarse Embeddability On Coarse Geometry and Coarse Embeddability Ilmari Kangasniemi August 10, 2016 Master's Thesis University of Helsinki Faculty of Science Department of Mathematics and Statistics Supervised by Erik Elfving

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City

More information

3.1 Basic properties of real numbers - continuation Inmum and supremum of a set of real numbers

3.1 Basic properties of real numbers - continuation Inmum and supremum of a set of real numbers Chapter 3 Real numbers The notion of real number was introduced in section 1.3 where the axiomatic denition of the set of all real numbers was done and some basic properties of the set of all real numbers

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Birgit Rudloff Operations Research and Financial Engineering, Princeton University

Birgit Rudloff Operations Research and Financial Engineering, Princeton University TIME CONSISTENT RISK AVERSE DYNAMIC DECISION MODELS: AN ECONOMIC INTERPRETATION Birgit Rudloff Operations Research and Financial Engineering, Princeton University brudloff@princeton.edu Alexandre Street

More information

Maximization of AUC and Buffered AUC in Binary Classification

Maximization of AUC and Buffered AUC in Binary Classification Maximization of AUC and Buffered AUC in Binary Classification Matthew Norton,Stan Uryasev March 2016 RESEARCH REPORT 2015-2 Risk Management and Financial Engineering Lab Department of Industrial and Systems

More information

Portfolio optimization with stochastic dominance constraints

Portfolio optimization with stochastic dominance constraints Charles University in Prague Faculty of Mathematics and Physics Portfolio optimization with stochastic dominance constraints December 16, 2014 Contents Motivation 1 Motivation 2 3 4 5 Contents Motivation

More information

Convex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs)

Convex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs) ORF 523 Lecture 8 Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Any typos should be emailed to a a a@princeton.edu. 1 Outline Convexity-preserving operations Convex envelopes, cardinality

More information

4. Conditional risk measures and their robust representation

4. Conditional risk measures and their robust representation 4. Conditional risk measures and their robust representation We consider a discrete-time information structure given by a filtration (F t ) t=0,...,t on our probability space (Ω, F, P ). The time horizon

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Expected Shortfall is not elicitable so what?

Expected Shortfall is not elicitable so what? Expected Shortfall is not elicitable so what? Dirk Tasche Bank of England Prudential Regulation Authority 1 dirk.tasche@gmx.net Finance & Stochastics seminar Imperial College, November 20, 2013 1 The opinions

More information

FUNCTIONAL ANALYSIS HAHN-BANACH THEOREM. F (m 2 ) + α m 2 + x 0

FUNCTIONAL ANALYSIS HAHN-BANACH THEOREM. F (m 2 ) + α m 2 + x 0 FUNCTIONAL ANALYSIS HAHN-BANACH THEOREM If M is a linear subspace of a normal linear space X and if F is a bounded linear functional on M then F can be extended to M + [x 0 ] without changing its norm.

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in

More information

Introduction to Real Analysis

Introduction to Real Analysis Introduction to Real Analysis Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 1 Sets Sets are the basic objects of mathematics. In fact, they are so basic that

More information

ON DISCRETE HESSIAN MATRIX AND CONVEX EXTENSIBILITY

ON DISCRETE HESSIAN MATRIX AND CONVEX EXTENSIBILITY Journal of the Operations Research Society of Japan Vol. 55, No. 1, March 2012, pp. 48 62 c The Operations Research Society of Japan ON DISCRETE HESSIAN MATRIX AND CONVEX EXTENSIBILITY Satoko Moriguchi

More information

The newsvendor problem with convex risk

The newsvendor problem with convex risk UNIVERSIDAD CARLOS III DE MADRID WORKING PAPERS Working Paper Business Economic Series WP. 16-06. December, 12 nd, 2016. ISSN 1989-8843 Instituto para el Desarrollo Empresarial Universidad Carlos III de

More information

A proposal of a bivariate Conditional Tail Expectation

A proposal of a bivariate Conditional Tail Expectation A proposal of a bivariate Conditional Tail Expectation Elena Di Bernardino a joint works with Areski Cousin b, Thomas Laloë c, Véronique Maume-Deschamps d and Clémentine Prieur e a, b, d Université Lyon

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

Operations Research Letters. On a time consistency concept in risk averse multistage stochastic programming

Operations Research Letters. On a time consistency concept in risk averse multistage stochastic programming Operations Research Letters 37 2009 143 147 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl On a time consistency concept in risk averse

More information

Homework 6. Due: 10am Thursday 11/30/17

Homework 6. Due: 10am Thursday 11/30/17 Homework 6 Due: 10am Thursday 11/30/17 1. Hinge loss vs. logistic loss. In class we defined hinge loss l hinge (x, y; w) = (1 yw T x) + and logistic loss l logistic (x, y; w) = log(1 + exp ( yw T x ) ).

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

POLARS AND DUAL CONES

POLARS AND DUAL CONES POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Quantile prediction of a random eld extending the gaussian setting

Quantile prediction of a random eld extending the gaussian setting Quantile prediction of a random eld extending the gaussian setting 1 Joint work with : Véronique Maume-Deschamps 1 and Didier Rullière 2 1 Institut Camille Jordan Université Lyon 1 2 Laboratoire des Sciences

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Lecture 14 October 13

Lecture 14 October 13 STAT 383C: Statistical Modeling I Fall 2015 Lecture 14 October 13 Lecturer: Purnamrita Sarkar Scribe: Some one Disclaimer: These scribe notes have been slightly proofread and may have typos etc. Note:

More information

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring,

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring, The Finite Dimensional Normed Linear Space Theorem Richard DiSalvo Dr. Elmer Mathematical Foundations of Economics Fall/Spring, 20-202 The claim that follows, which I have called the nite-dimensional normed

More information

Expected Shortfall is not elicitable so what?

Expected Shortfall is not elicitable so what? Expected Shortfall is not elicitable so what? Dirk Tasche Bank of England Prudential Regulation Authority 1 dirk.tasche@gmx.net Modern Risk Management of Insurance Firms Hannover, January 23, 2014 1 The

More information

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Daniel Zhuoyu Long Department of SEEM, The Chinese University of Hong Kong, zylong@se.cuhk.edu.hk Jin Qi NUS Business School, National

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Spazi vettoriali e misure di similaritá

Spazi vettoriali e misure di similaritá Spazi vettoriali e misure di similaritá R. Basili Corso di Web Mining e Retrieval a.a. 2009-10 March 25, 2010 Outline Outline Spazi vettoriali a valori reali Operazioni tra vettori Indipendenza Lineare

More information

Ambiguity in portfolio optimization

Ambiguity in portfolio optimization May/June 2006 Introduction: Risk and Ambiguity Frank Knight Risk, Uncertainty and Profit (1920) Risk: the decision-maker can assign mathematical probabilities to random phenomena Uncertainty: randomness

More information

Sharp bounds on the VaR for sums of dependent risks

Sharp bounds on the VaR for sums of dependent risks Paul Embrechts Sharp bounds on the VaR for sums of dependent risks joint work with Giovanni Puccetti (university of Firenze, Italy) and Ludger Rüschendorf (university of Freiburg, Germany) Mathematical

More information

Generalized quantiles as risk measures

Generalized quantiles as risk measures Generalized quantiles as risk measures Fabio Bellini a, Bernhard Klar b, Alfred Müller c,, Emanuela Rosazza Gianin a,1 a Dipartimento di Statistica e Metodi Quantitativi, Università di Milano Bicocca,

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Stochastic Optimization with Risk Measures

Stochastic Optimization with Risk Measures Stochastic Optimization with Risk Measures IMA New Directions Short Course on Mathematical Optimization Jim Luedtke Department of Industrial and Systems Engineering University of Wisconsin-Madison August

More information

Stochastic Design Criteria in Linear Models

Stochastic Design Criteria in Linear Models AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Noname manuscript No. (will be inserted by the editor) Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Jie Sun Li-Zhi Liao Brian Rodrigues Received: date / Accepted: date Abstract

More information

Robustness in Stochastic Programs with Risk Constraints

Robustness in Stochastic Programs with Risk Constraints Robustness in Stochastic Programs with Risk Constraints Dept. of Probability and Mathematical Statistics, Faculty of Mathematics and Physics Charles University, Prague, Czech Republic www.karlin.mff.cuni.cz/~kopa

More information

arxiv: v3 [math.oc] 25 Apr 2018

arxiv: v3 [math.oc] 25 Apr 2018 Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure Jamie Fairbrother *, Amanda Turner *, and Stein W. Wallace ** * STOR-i Centre for Doctoral Training,

More information

The Uniformity Principle: A New Tool for. Probabilistic Robustness Analysis. B. R. Barmish and C. M. Lagoa. further discussion.

The Uniformity Principle: A New Tool for. Probabilistic Robustness Analysis. B. R. Barmish and C. M. Lagoa. further discussion. The Uniformity Principle A New Tool for Probabilistic Robustness Analysis B. R. Barmish and C. M. Lagoa Department of Electrical and Computer Engineering University of Wisconsin-Madison, Madison, WI 53706

More information

Mathematical Preliminaries

Mathematical Preliminaries Chapter 33 Mathematical Preliminaries In this appendix, we provide essential definitions and key results which are used at various points in the book. We also provide a list of sources where more details

More information

Convex Optimization & Parsimony of L p-balls representation

Convex Optimization & Parsimony of L p-balls representation Convex Optimization & Parsimony of L p -balls representation LAAS-CNRS and Institute of Mathematics, Toulouse, France IMA, January 2016 Motivation Unit balls associated with nonnegative homogeneous polynomials

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Statistical Learning Theory

Statistical Learning Theory Statistical Learning Theory Fundamentals Miguel A. Veganzones Grupo Inteligencia Computacional Universidad del País Vasco (Grupo Inteligencia Vapnik Computacional Universidad del País Vasco) UPV/EHU 1

More information

Thomas Knispel Leibniz Universität Hannover

Thomas Knispel Leibniz Universität Hannover Optimal long term investment under model ambiguity Optimal long term investment under model ambiguity homas Knispel Leibniz Universität Hannover knispel@stochastik.uni-hannover.de AnStAp0 Vienna, July

More information