Some multivariate risk indicators; minimization by using stochastic algorithms


1 Some multivariate risk indicators; minimization by using stochastic algorithms Véronique Maume-Deschamps, université Lyon 1 - ISFA, Joint work with P. Cénac and C. Prieur. AST&Risk (ANR Project) 1 / 51

3 Multivariate risk process Consider a vectorial risk process X_n = (X_n^1, ..., X_n^d). X_n^k is the gain of the kth branch of a company during the nth period, i.e. X_n^k = G_n^k − L_n^k, where G_n^k is the gain and L_n^k the loss. There may be dependence: vectorial (with respect to k) and/or temporal (with respect to n). u > 0: total capital of the company. How to allocate u = u_1 + ... + u_d between the d branches in an optimal way? (u_k is allocated to the kth branch.) 3 / 51

4 Solvency 2 rules Introduction New European rules (Solvency 2): insurance companies have to take dependencies better into account in order to compute their solvency margin. Standard formula vs internal models. Once the main risk drivers for the overall company have been identified and the global solvency capital requirement has been computed, it is necessary to split it into marginal capitals for each line of business, and to avoid as far as possible that some lines of business become insolvent too often. Minimize a risk indicator. 4 / 51

6 Ruin probability Introduction Consider Y_j^k = Σ_{p=1}^j X_p^k, the total gain of the kth branch over j periods, and R_j^k = u_k + Y_j^k. The kth branch is insolvent at time j if R_j^k < 0. The ruin probability has been widely studied in dimension 1, both for continuous and discrete models (Cramér, Lundberg, Gerber, De Finetti, ...): Ψ(u) = P(∃ j = 1, ..., n : Σ_{p=1}^j X_p + u < 0). In higher dimension (d), we may consider the probability that one branch fails: Ψ(u_1, ..., u_d) = P(∃ k = 1, ..., d, ∃ j = 1, ..., n : R_j^k < 0), which is not easy to manipulate (not convex). 6 / 51
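As a quick numerical illustration, the multivariate ruin probability Ψ(u_1, ..., u_d) above can be estimated by plain Monte Carlo. This is a sketch under an assumption of i.i.d. Gaussian gains per period; the function name and parameter values are ours, not from the talk:

```python
import numpy as np

def ruin_probability(u, n_periods, n_sim=100_000, mean=0.3, sd=1.0, seed=0):
    """Monte Carlo estimate of Psi(u_1,...,u_d): probability that some
    branch k has R_j^k = u_k + sum_{p<=j} X_p^k < 0 for some j <= n."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u, dtype=float)
    d = u.size
    # X[s, j, k]: gain of branch k in period j+1, simulation s
    X = rng.normal(mean, sd, size=(n_sim, n_periods, d))
    R = u + np.cumsum(X, axis=1)          # R[s, j, k] = u_k + Y_j^k
    ruined = (R < 0).any(axis=(1, 2))     # some branch, some time j <= n
    return ruined.mean()

print(ruin_probability([1.0, 1.0], n_periods=5))
```

As expected, the estimate decreases as the allocated capitals grow.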

8 Multivariate risk indicators (1) S. Loisel (2004) introduced the two following risk indicators (in continuous time). Ruin cost: A(u_1, ..., u_d) = Σ_{k=1}^d Σ_{p=1}^n E[ |R_p^k| 1_{R_p^k < 0} ], which does not take into account the dependence structure. The following indicator takes the dependence structure into account: B(u_1, ..., u_d) = Σ_{k=1}^d Σ_{p=1}^n E[ 1_{R_p^k < 0} 1_{Σ_{l=1}^d R_p^l > 0} ], but it is not convex and does not take into account the ruin cost. 8 / 51

10 Multivariate risk indicators (2) We consider: I(u_1, ..., u_d) = Σ_{k=1}^d Σ_{p=1}^n E[ g_k(R_p^k) 1_{R_p^k < 0} 1_{Σ_{l=1}^d R_p^l > 0} ], where g_k : R → R is a C^1 convex function with g_k(0) = 0, g_k(x) ≥ 0 for x ≤ 0 and g_k(x) ≤ 0 for x ≥ 0, k = 1, ..., d: it is a penalty function applied to the kth branch when it becomes insolvent. Remark: g_k(x) = −x is a possible choice (then we consider the ruin amount). I is the expected sum of penalties that each line of business would have to pay due to its temporary potential insolvency (orange area). 10 / 51
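The indicator I can also be estimated by Monte Carlo. A minimal sketch with the penalty choice g_k(x) = −x (the ruin amount); the function name and the simulated Gaussian model are ours:

```python
import numpy as np

def indicator_I(u, Y):
    """Monte Carlo estimate of I(u_1,...,u_d) with g_k(x) = -x, i.e. the
    expected ruin amount of each branch while the group stays solvent.

    Y: array of shape (n_sim, n, d); Y[s, p, k] is the cumulated gain
    Y_p^k of branch k up to period p in simulation s."""
    u = np.asarray(u, dtype=float)
    R = u + Y                                         # R_p^k = u_k + Y_p^k
    local_ruin = R < 0                                # 1_{R_p^k < 0}
    group_solvent = R.sum(axis=2, keepdims=True) > 0  # 1_{sum_l R_p^l > 0}
    penalties = -R * local_ruin * group_solvent       # g_k(R_p^k) >= 0 there
    return penalties.sum(axis=(1, 2)).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.3, 1.0, size=(50_000, 5, 2))  # i.i.d. gains per period
Y = np.cumsum(X, axis=1)
print(indicator_I([1.0, 1.0], Y))
```

With this sign convention I is nonnegative, so minimizing it over the allocations is meaningful.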

11 [Figure: trajectories of R_p^1, R_p^2 and Σ_{k=1}^d R_p^k over time, starting from u_1, u_2 and u = u_1 + u_2, with the regions of local insolvency (one branch below 0) and global insolvency (the sum below 0) highlighted.] The red area has been studied by R. Biard, S. Loisel, C. Macci and N. Veraverbeke (asymptotic properties as u → ∞) for a continuous model. 11 / 51

12 Minimization of I Problem: find the minimum u* ∈ R_+^d with the constraint v_1 + ... + v_d = u: I(u*) = inf_{v ∈ R_+^d, v_1 + ... + v_d = u} I(v). Tool: a Kiefer-Wolfowitz version of the stochastic mirror algorithm. Advantages of the stochastic-algorithms approach: no parametric hypothesis on the law of the X_i's, dependence allowed over one period, high dimension d allowed. 12 / 51

13 Convexity of I In what follows, it is assumed that for all i, k, (Y_i^k, S_i) admits a density, where S_i = Σ_{l=1}^d Y_i^l is the cumulated gain for the ith period. Property: if the g_k are differentiable and convex, g_k : R → R with g_k(0) = 0, g_k(x) ≥ 0 for x < 0, g_k(x) ≤ 0 for x > 0, then the risk indicator I is convex on the convex set U_u = {(v_1, ..., v_d) ∈ (R_+)^d / v_1 + ... + v_d = u}. 13 / 51

14 Classical algorithms Mirror descent algorithm Gradient descent, a deterministic version of the Robbins-Monro algorithm: x_{n+1} = x_n − γ_n ∇f(x_n). [Figure: successive iterates x_0, x_1, x_2 on the graph of f, with steps of slopes 1/γ_0, 1/γ_1.] 14 / 51
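The deterministic recursion x_{n+1} = x_n − γ_n ∇f(x_n) can be sketched in a few lines (our toy objective and step choice, for illustration only):

```python
def gradient_descent(grad, x0, n_steps=200):
    """Deterministic scheme x_{n+1} = x_n - gamma_n * grad f(x_n),
    with the classical decreasing step gamma_n = c / (n + 1)."""
    x = x0
    for n in range(n_steps):
        x = x - (0.5 / (n + 1)) * grad(x)
    return x

# minimize f(x) = (x - 3)^2, so grad f(x) = 2 (x - 3); minimum at x* = 3
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))
```

The iterates converge to the minimizer x* = 3.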

15 Robbins-Monro algorithm Classical algorithms Mirror descent algorithm Often, we have access only to a perturbation of ∇f. Robbins-Monro type algorithms: χ_{n+1} = χ_n − γ_{n+1} ξ_{n+1}, where ξ_{n+1} = ∇f(χ_n) + ε_{n+1}, (ε_n)_n is a centred i.i.d. sequence, with ε_{n+1} independent of σ(χ_0, ..., χ_n) = F_n. There is a wide literature on this algorithm and its variants, from Robbins-Monro (1951) to Duflo, Kushner and Yin, ...: a.e. convergence of the algorithm, with a CLT, ..., under various hypotheses. 15 / 51

16 Kiefer-Wolfowitz algorithm Classical algorithms Mirror descent algorithm Aim: minimizing a (strictly) convex C^1 function f = finding a zero of ∇f. ∇f is usually unknown. Example: f(x) = E(F(x, ξ)). ∇f is approximated by D_{c_n} = [F(χ_n + c_n, ξ_n^1) − F(χ_n − c_n, ξ_n^2)] / (2 c_n), with (ξ_n^1) and (ξ_n^2) two independent i.i.d. sequences of random variables with the law of ξ. D_{c_n} is seen as a perturbation of ∇f. Kiefer-Wolfowitz algorithm: consider χ_{n+1} = χ_n − γ_n [F(χ_n + c_n, ξ_n^1) − F(χ_n − c_n, ξ_n^2)] / (2 c_n). Under standard conditions, the algorithm converges a.e. + CLT. 16 / 51
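In one dimension, the Kiefer-Wolfowitz recursion looks as follows. This is a sketch with step sequences γ_n = 1/n and c_n = n^{-1/4} chosen by us, on a toy objective f(x) = E[(x − 2 − ξ)^2] with ξ ~ N(0, 1), whose minimum is x* = 2:

```python
import numpy as np

def kiefer_wolfowitz(F, x0, n_steps=2000, seed=0):
    """Kiefer-Wolfowitz scheme for minimizing f(x) = E[F(x, xi)]: the
    gradient is replaced by the symmetric finite difference
    (F(x + c_n, xi1) - F(x - c_n, xi2)) / (2 c_n), computed on two
    independent noise draws."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_steps + 1):
        gamma, c = 1.0 / n, 1.0 / n ** 0.25
        xi1, xi2 = rng.normal(size=2)
        d = (F(x + c, xi1) - F(x - c, xi2)) / (2 * c)
        x -= gamma * d
    return x

# F(x, xi) = (x - (2 + xi))^2, so f(x) = (x - 2)^2 + 1, minimized at 2
print(kiefer_wolfowitz(lambda x, xi: (x - (2 + xi)) ** 2, x0=0.0))
```

The iterates settle near x* = 2 despite never observing the gradient directly.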

17 Why can't we use the K-W algorithm? Classical algorithms Mirror descent algorithm Linear constraint u_1 + ... + u_d = u ⇒ Lagrange multipliers (= affine part) ⇒ bad convergence properties. 17 / 51

18 Mirror algorithm Introduction Classical algorithms Mirror descent algorithm The deterministic mirror descent algorithm was introduced by Nemirovski and Yudin (1983); a stochastic version of this algorithm was proposed by C. Tauvel (2008) in her thesis. Optimization problem: f a C^1 convex function, min_{x ∈ C} f(x), C a compact and convex subset of E. Observations: we have a noisy observation of ∇f, ψ(x) = ∇f(x) + η(x), with a martingale difference hypothesis for η. 18 / 51

19 Classical algorithms Mirror descent algorithm Auxiliary functions for the mirror descent algorithm Let us fix an initial point x_0 and δ, a strongly convex function on C which is differentiable at x_0. Construct an auxiliary function V: V(x) = δ(x) − δ(x_0) − ⟨x − x_0, ∇δ(x_0)⟩, which will be used to push the trajectory into C. W_β denotes the Fenchel-Legendre transform of βV: for z ∈ E*, W_β(z) = sup_{x ∈ C} {⟨z, x⟩ − βV(x)}. (γ_n)_n and (β_n)_n are two positive sequences, (β_n)_n is non decreasing. Important property of W_β: ∇W_β(ξ) ∈ C. 19 / 51

20 Classical algorithms Mirror descent algorithm Description of the stochastic mirror algorithm The algorithm constructs two random sequences: (χ_n) in C and (ξ_n) in the dual space E*. Algorithm. Initialisation: ξ_0 = 0 ∈ E*, χ_0 ∈ C. Update: for n = 1 to N do ξ_n = ξ_{n−1} − γ_n ψ(χ_{n−1}), χ_n = ∇W_{β_n}(ξ_n). Output: S_N = Σ_{n=1}^N γ_n χ_{n−1} / Σ_{n=1}^N γ_n. 20 / 51
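The two-sequence update above can be sketched on the probability simplex (i.e. u = 1) with the entropy auxiliary function, for which ∇W_β is a softmax. This is our own minimal sketch, with constant β_n = 1 and γ_n = n^{-3/4}, not the authors' exact tuning:

```python
import numpy as np

def mirror_descent(grad_obs, d, n_steps=2000, seed=0):
    """Stochastic mirror descent on {x >= 0, sum x = 1}: the dual update
    xi_n = xi_{n-1} - gamma_n * psi(chi_{n-1}) is mapped back to the
    simplex by grad W_beta (a softmax here). Output is the weighted
    average S_N = sum gamma_n chi_{n-1} / sum gamma_n."""
    rng = np.random.default_rng(seed)
    xi = np.zeros(d)
    chi = np.full(d, 1.0 / d)          # chi_0: uniform point of the simplex
    num, den = np.zeros(d), 0.0
    for n in range(1, n_steps + 1):
        gamma, beta = n ** -0.75, 1.0
        num += gamma * chi             # accumulate chi_{n-1} for S_N
        den += gamma
        xi = xi - gamma * grad_obs(chi, rng)   # dual step
        z = xi / beta
        w = np.exp(z - z.max())        # grad W_beta = softmax: back into C
        chi = w / w.sum()
    return num / den

# noisy gradient of f(x) = ||x - x*||^2 with x* = (0.5, 0.3, 0.2)
target = np.array([0.5, 0.3, 0.2])
g = lambda x, rng: 2 * (x - target) + 0.1 * rng.normal(size=3)
print(mirror_descent(g, d=3))
```

Every iterate, and hence the averaged output, stays in the constraint set by construction.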

21 Mirror descent algorithm Classical algorithms Mirror descent algorithm [Figure: C compact and convex; the dual iterates ξ_{n−1}, ξ_n are mapped back into C as χ_{n−1}, χ_n by ∇W_β (the mirror step).] 21 / 51

22 Classical algorithms Mirror descent algorithm Convergence of the stochastic mirror algorithm Under the hypotheses: 1. martingale difference hypothesis: E[η(χ_{n+1}) | F_n] = 0, with F_n = σ(χ_0, ..., χ_n), where ψ(x) = ∇f(x) + η(x) is observable; 2. moment of order 2: ∃ σ > 0 such that for all n, E(‖η(χ_{n+1})‖² | F_n) < σ²; 3. f admits a unique minimum x*. C. Tauvel proved the convergence to 0 of E(f(S_N)) − f(x*). With a hypothesis of exponential moment on η, one gets the a.e. convergence of f(S_N) − f(x*) to 0. 22 / 51

23 Another stochastic algorithm The risk indicator I Recall that we consider I(u_1, ..., u_d) = Σ_{k=1}^d Σ_{p=1}^n E[ g_k(R_p^k) 1_{R_p^k < 0} 1_{Σ_{l=1}^d R_p^l > 0} ]. We are looking for the minimum u* ∈ R_+^d under the constraint v_1 + ... + v_d = u: I(u*) = inf_{v ∈ R_+^d, v_1 + ... + v_d = u} I(v). We shall use a Kiefer-Wolfowitz version of the mirror descent algorithm. 23 / 51

24 The mirror descent algorithm Another stochastic algorithm The constraint set is U_u = {v ∈ R^d / v_i ≥ 0, v_1 + ... + v_d = u}. A possible choice for V is the entropy function V(x) = ln d + Σ_{i=1}^d (x_i / u) ln(x_i / u) = ln d + δ(x), which is a strongly convex function. The Legendre-Fenchel transform may be computed easily: W_β(ξ) = β ln( (1/d) Σ_{i=1}^d exp(u ξ_i / β) ). 24 / 51
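For this entropy choice, both W_β and its gradient have closed forms; ∇W_β(ξ) = u · softmax(u ξ / β) always lies in U_u, which is what projects the trajectory back onto the constraint set. A small numerical sketch (function names are ours):

```python
import numpy as np

def W_beta(xi, u, beta):
    """Closed-form Legendre-Fenchel transform of beta*V for the entropy V:
    W_beta(xi) = beta * log( (1/d) * sum_i exp(u * xi_i / beta) )."""
    d = xi.size
    z = u * xi / beta
    m = z.max()                       # stabilised log-sum-exp
    return beta * (m + np.log(np.exp(z - m).sum() / d))

def grad_W_beta(xi, u, beta):
    """grad W_beta(xi) = u * softmax(u * xi / beta): coordinates are
    nonnegative and sum to u, i.e. an element of U_u."""
    z = u * xi / beta
    w = np.exp(z - z.max())
    return u * w / w.sum()

xi = np.array([0.2, -0.1, 0.4])
v = grad_W_beta(xi, u=2.0, beta=1.0)
print(v, v.sum())                     # coordinates >= 0 summing to u = 2
```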

25 Approximate gradient Another stochastic algorithm I(u_1, ..., u_d) = E( I(u_1, ..., u_d, Y) ), where Y is the array (Y_p^k), 1 ≤ p ≤ n, 1 ≤ k ≤ d. Denote I_k(c_i^+, Y) = I(χ_{i−1}^1, ..., χ_{i−1}^{k−1}, χ_{i−1}^k + c_i, χ_{i−1}^{k+1}, ..., χ_{i−1}^d, Y) and I_k(c_i^−, Y) = I(χ_{i−1}^1, ..., χ_{i−1}^{k−1}, χ_{i−1}^k − c_i, χ_{i−1}^{k+1}, ..., χ_{i−1}^d, Y). Consider the random vector D_{c_i}I whose kth coordinate D_{c_i}^k I(u_1, ..., u_d, Y) is [ I_k(c_i^+, Y) − I_k(c_i^−, Y) ] / (2 c_i). 25 / 51
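The coordinate-wise finite difference D_{c_i}I can be sketched directly; here with the penalty g_k(x) = −x and a hand-checkable one-period example (names and values are ours):

```python
import numpy as np

def I_sample(v, Y):
    """One draw of the indicator, I(v, Y), with g_k(x) = -x; Y has shape (n, d)."""
    R = v + Y
    return float((-R * (R < 0) * (R.sum(axis=1, keepdims=True) > 0)).sum())

def D_c_I(chi, c, Y):
    """Approximate gradient: the k-th coordinate is
    (I_k(c^+, Y) - I_k(c^-, Y)) / (2c), shifting only the k-th capital by +-c."""
    d = chi.size
    grad = np.empty(d)
    for k in range(d):
        e = np.zeros(d)
        e[k] = c
        grad[k] = (I_sample(chi + e, Y) - I_sample(chi - e, Y)) / (2 * c)
    return grad

# hand-checkable case: n = 1, d = 2, Y = (-2, 1), chi = (1, 1):
# only branch 1 is ruined, so shifting u_1 changes the penalty, u_2 does not
print(D_c_I(np.array([1.0, 1.0]), c=0.1, Y=np.array([[-2.0, 1.0]])))
```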

26 Our algorithm Introduction Another stochastic algorithm Algorithm. Initialisation: ξ_0 = 0 ∈ R^d, χ_0 ∈ C. Update: for i = 1, ..., N do ξ_i = ξ_{i−1} − γ_i Ψ_{c_i}(χ_{i−1}, Y_i), χ_i = ∇W_{β_i}(ξ_i). Output: S_N = Σ_{i=1}^N γ_i χ_{i−1} / Σ_{i=1}^N γ_i, with Ψ_{c_i}(χ_{i−1}, Y_i) = D_{c_i}I(χ_{i−1}, Y_i) and the Y_i independent copies of Y. 26 / 51
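Putting the pieces together, the Kiefer-Wolfowitz version of the stochastic mirror algorithm can be sketched end to end. This is our own illustrative implementation (penalty g_k(x) = −x, entropy mirror map, γ_i = (i+1)^{-3/4}, c_i = (i+1)^{-1/4}, β_i = 1, deterministic uniform start), run on a two-branch one-period Gaussian model:

```python
import numpy as np

def minimize_indicator(sample_Y, d, u, n_steps=2000, seed=0):
    """Kiefer-Wolfowitz stochastic mirror algorithm on U_u: at each step the
    gradient of I is replaced by the coordinate-wise finite difference
    D_{c_i} I(chi_{i-1}, Y_i) computed on a fresh copy Y_i of Y."""
    rng = np.random.default_rng(seed)

    def I_sample(v, Y):                 # I(v, Y) with g_k(x) = -x
        R = v + Y
        return (-R * (R < 0) * (R.sum(axis=1, keepdims=True) > 0)).sum()

    xi = np.zeros(d)
    chi = np.full(d, u / d)
    num, den = np.zeros(d), 0.0
    for i in range(1, n_steps + 1):
        gamma, c, beta = (i + 1) ** -0.75, (i + 1) ** -0.25, 1.0
        num += gamma * chi
        den += gamma
        Y = sample_Y(rng)
        grad = np.empty(d)
        for k in range(d):              # D_{c_i} I, coordinate by coordinate
            e = np.zeros(d); e[k] = c
            grad[k] = (I_sample(chi + e, Y) - I_sample(chi - e, Y)) / (2 * c)
        xi -= gamma * grad              # dual update
        z = u * xi / beta
        w = np.exp(z - z.max())
        chi = u * w / w.sum()           # grad W_beta: back onto U_u
    return num / den

# two independent branches over one period: X^1 ~ N(0.3, 1), X^2 ~ N(0.8, 1)
sample = lambda rng: rng.normal([0.3, 0.8], 1.0, size=(1, 2))
print(minimize_indicator(sample, d=2, u=2.0))
```

The riskier branch (lower mean gain) ends up with the larger share of the capital u, as one would expect.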

27 Conditions for the convergence (1) Another stochastic algorithm Condition (on I): 1. I is a convex function from R^d to R; 2. I is C^2 on U_u; 3. I has a unique minimum u* in U_u; 4. ∃ σ > 0 such that for all v ∈ U_u, E( ‖∇I(v_1, ..., v_d, Y)‖² | F_{i−1} ) ≤ σ², where F_{i−1} is generated by (Y_0, ..., Y_{i−1}). 27 / 51

28 Conditions for the convergence (2) Another stochastic algorithm Condition (on the sequences): let (β_n)_{n≥0}, (γ_n)_{n≥0} and (c_n)_{n≥0} be positive sequences, with (β_n)_{n≥0} non decreasing, such that: 1. β_N / Σ_{i=1}^N γ_i → 0 as N → ∞; 2. Σ_{i=1}^N γ_i c_i / Σ_{i=1}^N γ_i → 0 as N → ∞; 3. ( Σ_{i=1}^N γ_i² / (c_i² β_{i−1}) ) / Σ_{i=1}^N γ_i → 0 as N → ∞; 4. Σ_{i=1}^∞ (γ_i / c_i)² < ∞. 28 / 51

29 Result Introduction Another stochastic algorithm Theorem. Under the above conditions, S_N → u* in L^1. With a hypothesis of moment of order > 2 on I, S_N → u* a.s. 29 / 51

30 Idea of the proof (1) Another stochastic algorithm Based on the decomposition Ψ_{c_n}(χ_{n−1}, Y_n) = ∇I(χ_{n−1}) + η_{c_n}(χ_{n−1}, Y_n) + r_{c_n}(χ_{n−1}), with η_{c_n}(χ_{n−1}, Y_n) = D_{c_n}I(χ_{n−1}, Y_n) − D_{c_n}I(χ_{n−1}) and r_{c_n}(χ_{n−1}) = D_{c_n}I(χ_{n−1}) − ∇I(χ_{n−1}). The condition of C. Tauvel fails because of the r_{c_n}(χ_{n−1}) term; η_{c_n}(χ_{n−1}, Y_n) is a martingale difference. Tools: law of large numbers for martingales, Chow lemma for martingales. 30 / 51

31 Idea of the proof (2) Another stochastic algorithm Let ε_N = I(S_N) − I(u*) ≥ 0. Lemma. ( Σ_{i=1}^N γ_i ) ε_N ≤ β_N V(u*) + Σ_{i=1}^N γ_i ⟨η_{c_i}(χ_{i−1}, Y_i), χ_{i−1} − u*⟩ + Σ_{i=1}^N γ_i ⟨r_{c_i}(χ_{i−1}), χ_{i−1} − u*⟩ + Σ_{i=1}^N [ γ_i² / (2αβ_{i−1}) ] ‖Ψ_{c_i}(χ_{i−1}, Y_i)‖². 31 / 51

32 Idea of the proof (3) Another stochastic algorithm Σ_{i=1}^N γ_i ⟨η_{c_i}(χ_{i−1}, Y_i), χ_{i−1} − u*⟩ is a martingale. Σ_{i=1}^N γ_i ⟨r_{c_i}(χ_{i−1}), χ_{i−1} − u*⟩ is controlled by the fact that r_{c_i}(χ_{i−1}) goes to 0 a.e. Σ_{i=1}^N [ γ_i² / (2αβ_{i−1}) ] ‖Ψ_{c_i}(χ_{i−1}, Y_i)‖² has bounded expectation and is controlled a.s. by using the Chow lemma. 32 / 51

33 Another stochastic algorithm Hypotheses on I 1. Uniqueness of u*: it is sufficient that for some k and all −u < w_k < v_k, E[ g_k'(Y_p^k + v_k) 1_{Σ_{l=1}^d Y_p^l > −u} 1_{−v_k < Y_p^k < −w_k} ] < 0. This is satisfied if, for some k, the joint density f_{S_p, Y_p^k} is positive on ]−u, ∞[ × ]−∞, u[. 2. In the case g_k(x) = −x, the moment condition on I is equivalent to a moment condition (of the same order) on X_n. 33 / 51

34 Introduction Some simulations for some simple models. We considered normal laws and n = 1: observation of several periods of length 1, so that X_p^k = Y_p^k and the X_p ∈ R^d are independent and identically distributed random vectors. Some dependencies between the coordinates of X_p are allowed. 34 / 51

35 Parameters Introduction The algorithm has been performed with the following sequences (γ_n)_{n∈N}, (c_n)_{n∈N} and (β_n)_{n∈N}: γ_n = (n + 1)^{−α}, c_n = (n + 1)^{−δ} with δ = 1/4, and β_n = 1. Also, we have chosen u = 2, and the initialization χ_0 of the algorithm is done at random, uniformly in the simplex U_u. 35 / 51

36 No dependence between the coordinates d = 2, n = 1; X_i^1 and X_i^2 are independent, and the vectors X_i are also independent. First, X_i^1 and X_i^2 have the same normal law; then we consider different normal laws. 36 / 51

37 No dependence between the coordinates Same normal laws: X_i^k ~ N(0.3, 1), k = 1, 2. For N = independent simulations we obtain S_N = (0.996; 1.004). [Figure: estimation of the first coordinate of the minimum over time, full view and zoom.] 37 / 51

38 No dependence between the coordinates [Figure: estimation of the second coordinate of the minimum over time, full view and zoom.] 38 / 51

39 No dependence between the coordinates Same normal laws: 50 simulations of length N = 1000, with the same parameters as above. The table below gives, for each of the two coordinates, the mean of the estimation (û_1, û_2) of the minimum (u_1, u_2) and the standard error. first coord. second coord. mean standard error 39 / 51

40 No dependence between the coordinates Different normal laws: 50 simulations of length N = 1000, with two independent normal laws, X_p^1 ~ N(0.3, 1) and X_p^2 ~ N(0.8, 1). The table below gives, for each of the two coordinates, the mean of the estimation (û_1, û_2) of the minimum (u_1, u_2) and the standard error. first coord. second coord. mean standard error 40 / 51

41 No dependence between the coordinates Different normal laws: 50 simulations of length N = 1000, with two independent normal laws, X_p^1 ~ N(0.3, 1) and X_p^2 ~ N(0.3, 4). The table below gives, for each of the two coordinates, the mean of the estimation (û_1, û_2) of the minimum (u_1, u_2) and the standard error. first coord. second coord. mean standard error 41 / 51

42 Some dependence between the coordinates Two simple models of vectorial dependence: a comonotonic example, random vectors in R^3 with X_p^2 ~ N(0.3, 1), X_p^3 = 2 X_p^2 and X_p^1 independent; Gaussian vectors. 42 / 51

43 Some dependence between the coordinates Comonotonic example: 50 simulations of length N = 1000 of this dimension-3 model. As before, we consider n = 1 (the periods are of length 1). The table below gives, for each of the three coordinates, the mean of the estimation (û_1, û_2, û_3) of the minimum (u_1, u_2, u_3) and the standard error. first coord. second coord. third coord. mean standard error In this model, X^2 and X^3 fail simultaneously, and when they fail, X^3 is two times worse than X^2. 43 / 51

44 Some dependence between the coordinates Gaussian vectors: we have also performed simulations for a Gaussian vector in R^3 with covariance matrix Σ and expectation m = (0.3, 0.3, 0.3). As above, we have performed 50 simulations of length N = 1000 of the Gaussian vector X. 44 / 51

45 Some dependence between the coordinates The table below gives, for each of the three coordinates, the mean of the estimation (û_1, û_2, û_3) of the minimum (u_1, u_2, u_3) and the standard error. first coord. second coord. third coord. mean standard error 45 / 51

46 An example with some temporal dependence The first two coordinates are independent AR(1) processes with the same law: X_i = 0.4 X_{i−1} + ε_i, with (ε_i)_{i∈N} a Gaussian white noise (N(0, 1)). The third coordinate is 2 times the second one. We have simulated 5 times 500 independent periods of length n = 5. first coord. second coord. third coord. mean standard error 46 / 51
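Simulating this temporally dependent model is straightforward; a sketch (function name, start at 0 and array layout are our choices):

```python
import numpy as np

def simulate_ar1_branches(n_periods, n_sim, phi=0.4, seed=0):
    """Simulate the temporal-dependence example: the first two branches are
    independent AR(1) processes X_i = 0.4 X_{i-1} + eps_i with eps_i ~ N(0, 1),
    started at 0, and the third branch is twice the second."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_sim, n_periods, 3))
    prev = np.zeros((n_sim, 2))
    for i in range(n_periods):
        prev = phi * prev + rng.normal(size=(n_sim, 2))
        X[:, i, :2] = prev
    X[:, :, 2] = 2 * X[:, :, 1]
    return X

X = simulate_ar1_branches(n_periods=5, n_sim=1000)
print(X.shape)
```

The resulting array can be fed directly to a sample-based estimate of I, the temporal dependence living inside each period of length n = 5.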

47 To do... Introduction A deeper study involving other laws (with heavy tails, e.g.), more realistic models and temporal dependencies. Asymptotic theory as u → ∞. Some dependence on the Y_i (using Benveniste, Métivier, Priouret results or methods, for example). 47 / 51

48 Thanks for your attention 48 / 51

49 Recall the Chow Lemma Theorem. Suppose (a_N)_{N∈N} is a bounded sequence of positive numbers and that 1 < p ≤ 2. For N ∈ N, let A_N = 1 + Σ_{k=0}^N a_k and A_∞ = lim_{N→∞} A_N. Suppose that (Z_N)_{N∈N} is a positive sequence of random variables adapted to (F_N)_N and that K is a constant such that E(Z_{N+1} | F_N) ≤ K and sup_N E(Z_{N+1}^p | F_N) < ∞. Then, almost surely: on {A_∞ < ∞}, Σ_{k=1}^∞ a_k Z_{k+1} converges. 49 / 51

50 Recall the Chow Lemma Theorem (continued). Under the same assumptions, almost surely: on {A_∞ = ∞}, lim sup_n A_n^{−1} Σ_{k=1}^{n−1} a_k Z_{k+1} ≤ K. 50 / 51


The Pacic Institute for the Mathematical Sciences http://www.pims.math.ca pims@pims.math.ca Surprise Maximization D. Borwein Department of Mathematics University of Western Ontario London, Ontario, Canada

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

Moment Measures. Bo az Klartag. Tel Aviv University. Talk at the asymptotic geometric analysis seminar. Tel Aviv, May 2013

Moment Measures. Bo az Klartag. Tel Aviv University. Talk at the asymptotic geometric analysis seminar. Tel Aviv, May 2013 Tel Aviv University Talk at the asymptotic geometric analysis seminar Tel Aviv, May 2013 Joint work with Dario Cordero-Erausquin. A bijection We present a correspondence between convex functions and Borel

More information

Large Deviations for Weakly Dependent Sequences: The Gärtner-Ellis Theorem

Large Deviations for Weakly Dependent Sequences: The Gärtner-Ellis Theorem Chapter 34 Large Deviations for Weakly Dependent Sequences: The Gärtner-Ellis Theorem This chapter proves the Gärtner-Ellis theorem, establishing an LDP for not-too-dependent processes taking values in

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 09: Stochastic Convergence, Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and

More information

Learning Processes, Mixed Equilibria and. Dynamical Systems Arising from. Repeated Games. October 31, 1997

Learning Processes, Mixed Equilibria and. Dynamical Systems Arising from. Repeated Games. October 31, 1997 Learning Processes, Mixed Equilibria and Dynamical Systems Arising from Repeated Games Michel Benam Department of Mathematics Universite Paul Sabatier, Toulouse Morris W. Hirsch Department of Mathematics

More information

A LOGARITHMIC EFFICIENT ESTIMATOR OF THE PROBABILITY OF RUIN WITH RECUPERATION FOR SPECTRALLY NEGATIVE LEVY RISK PROCESSES.

A LOGARITHMIC EFFICIENT ESTIMATOR OF THE PROBABILITY OF RUIN WITH RECUPERATION FOR SPECTRALLY NEGATIVE LEVY RISK PROCESSES. A LOGARITHMIC EFFICIENT ESTIMATOR OF THE PROBABILITY OF RUIN WITH RECUPERATION FOR SPECTRALLY NEGATIVE LEVY RISK PROCESSES Riccardo Gatto Submitted: April 204 Revised: July 204 Abstract This article provides

More information

Consistency of the maximum likelihood estimator for general hidden Markov models

Consistency of the maximum likelihood estimator for general hidden Markov models Consistency of the maximum likelihood estimator for general hidden Markov models Jimmy Olsson Centre for Mathematical Sciences Lund University Nordstat 2012 Umeå, Sweden Collaborators Hidden Markov models

More information

Machine Learning Brett Bernstein. Recitation 1: Gradients and Directional Derivatives

Machine Learning Brett Bernstein. Recitation 1: Gradients and Directional Derivatives Machine Learning Brett Bernstein Recitation 1: Gradients and Directional Derivatives Intro Question 1 We are given the data set (x 1, y 1 ),, (x n, y n ) where x i R d and y i R We want to fit a linear

More information

A Central Limit Theorem for Robbins Monro Algorithms with Projections

A Central Limit Theorem for Robbins Monro Algorithms with Projections A Central Limit Theorem for Robbins Monro Algorithms with Projections LLONG Jérôme, CRMICS, cole Nationale des Ponts et Chaussées, 6 et 8 av Blaise Pascal, 77455 Marne La Vallée, FRANC. 2th September 25

More information

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control

Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Proving the Regularity of the Minimal Probability of Ruin via a Game of Stopping and Control Erhan Bayraktar University of Michigan joint work with Virginia R. Young, University of Michigan K αρλoβασi,

More information

A Perturbed Gradient Algorithm in Hilbert spaces

A Perturbed Gradient Algorithm in Hilbert spaces A Perturbed Gradient Algorithm in Hilbert spaces Kengy Barty, Jean-Sébastien Roy, Cyrille Strugarek May 19, 2005 Abstract We propose a perturbed gradient algorithm with stochastic noises to solve a general

More information

On some nonlinear parabolic equation involving variable exponents

On some nonlinear parabolic equation involving variable exponents On some nonlinear parabolic equation involving variable exponents Goro Akagi (Kobe University, Japan) Based on a joint work with Giulio Schimperna (Pavia Univ., Italy) Workshop DIMO-2013 Diffuse Interface

More information

Optimal Transport: A Crash Course

Optimal Transport: A Crash Course Optimal Transport: A Crash Course Soheil Kolouri and Gustavo K. Rohde HRL Laboratories, University of Virginia Introduction What is Optimal Transport? The optimal transport problem seeks the most efficient

More information

Introduction to Rare Event Simulation

Introduction to Rare Event Simulation Introduction to Rare Event Simulation Brown University: Summer School on Rare Event Simulation Jose Blanchet Columbia University. Department of Statistics, Department of IEOR. Blanchet (Columbia) 1 / 31

More information

How hard is this function to optimize?

How hard is this function to optimize? How hard is this function to optimize? John Duchi Based on joint work with Sabyasachi Chatterjee, John Lafferty, Yuancheng Zhu Stanford University West Coast Optimization Rumble October 2016 Problem minimize

More information

A polynomial expansion to approximate ruin probabilities

A polynomial expansion to approximate ruin probabilities A polynomial expansion to approximate ruin probabilities P.O. Goffard 1 X. Guerrault 2 S. Loisel 3 D. Pommerêt 4 1 Axa France - Institut de mathématiques de Luminy Université de Aix-Marseille 2 Axa France

More information

Beyond stochastic gradient descent for large-scale machine learning

Beyond stochastic gradient descent for large-scale machine learning Beyond stochastic gradient descent for large-scale machine learning Francis Bach INRIA - Ecole Normale Supérieure, Paris, France Joint work with Eric Moulines, Nicolas Le Roux and Mark Schmidt - CAP, July

More information

Competitive Equilibria in a Comonotone Market

Competitive Equilibria in a Comonotone Market Competitive Equilibria in a Comonotone Market 1/51 Competitive Equilibria in a Comonotone Market Ruodu Wang http://sas.uwaterloo.ca/ wang Department of Statistics and Actuarial Science University of Waterloo

More information

Analysis of the ruin probability using Laplace transforms and Karamata Tauberian theorems

Analysis of the ruin probability using Laplace transforms and Karamata Tauberian theorems Analysis of the ruin probability using Laplace transforms and Karamata Tauberian theorems Corina Constantinescu and Enrique Thomann Abstract The classical result of Cramer-Lundberg states that if the rate

More information

p(z)

p(z) Chapter Statistics. Introduction This lecture is a quick review of basic statistical concepts; probabilities, mean, variance, covariance, correlation, linear regression, probability density functions and

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Pavel Dvurechensky Alexander Gasnikov Alexander Tiurin. July 26, 2017

Pavel Dvurechensky Alexander Gasnikov Alexander Tiurin. July 26, 2017 Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods Coordinate Descent, Directional Search, Derivative-Free Method) Pavel Dvurechensky Alexander Gasnikov

More information

LEARNING IN CONCAVE GAMES

LEARNING IN CONCAVE GAMES LEARNING IN CONCAVE GAMES P. Mertikopoulos French National Center for Scientific Research (CNRS) Laboratoire d Informatique de Grenoble GSBE ETBC seminar Maastricht, October 22, 2015 Motivation and Preliminaries

More information

Mod-φ convergence I: examples and probabilistic estimates

Mod-φ convergence I: examples and probabilistic estimates Mod-φ convergence I: examples and probabilistic estimates Valentin Féray (joint work with Pierre-Loïc Méliot and Ashkan Nikeghbali) Institut für Mathematik, Universität Zürich Summer school in Villa Volpi,

More information

Stochastic Compositional Gradient Descent: Algorithms for Minimizing Nonlinear Functions of Expected Values

Stochastic Compositional Gradient Descent: Algorithms for Minimizing Nonlinear Functions of Expected Values Stochastic Compositional Gradient Descent: Algorithms for Minimizing Nonlinear Functions of Expected Values Mengdi Wang Ethan X. Fang Han Liu Abstract Classical stochastic gradient methods are well suited

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and Estimation I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 22, 2015

More information

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk

Theory and Applications of Stochastic Systems Lecture Exponential Martingale for Random Walk Instructor: Victor F. Araman December 4, 2003 Theory and Applications of Stochastic Systems Lecture 0 B60.432.0 Exponential Martingale for Random Walk Let (S n : n 0) be a random walk with i.i.d. increments

More information

Stochastic gradient descent and robustness to ill-conditioning

Stochastic gradient descent and robustness to ill-conditioning Stochastic gradient descent and robustness to ill-conditioning Francis Bach INRIA - Ecole Normale Supérieure, Paris, France ÉCOLE NORMALE SUPÉRIEURE Joint work with Aymeric Dieuleveut, Nicolas Flammarion,

More information

Normal Probability Plot Probability Probability

Normal Probability Plot Probability Probability Modelling multivariate returns Stefano Herzel Department ofeconomics, University of Perugia 1 Catalin Starica Department of Mathematical Statistics, Chalmers University of Technology Reha Tutuncu Department

More information

Information geometry of mirror descent

Information geometry of mirror descent Information geometry of mirror descent Geometric Science of Information Anthea Monod Department of Statistical Science Duke University Information Initiative at Duke G. Raskutti (UW Madison) and S. Mukherjee

More information

IE 5531 Midterm #2 Solutions

IE 5531 Midterm #2 Solutions IE 5531 Midterm #2 s Prof. John Gunnar Carlsson November 9, 2011 Before you begin: This exam has 9 pages and a total of 5 problems. Make sure that all pages are present. To obtain credit for a problem,

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

Lecture 4 Lebesgue spaces and inequalities

Lecture 4 Lebesgue spaces and inequalities Lecture 4: Lebesgue spaces and inequalities 1 of 10 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 4 Lebesgue spaces and inequalities Lebesgue spaces We have seen how

More information

Mean-field SDE driven by a fractional BM. A related stochastic control problem

Mean-field SDE driven by a fractional BM. A related stochastic control problem Mean-field SDE driven by a fractional BM. A related stochastic control problem Rainer Buckdahn, Université de Bretagne Occidentale, Brest Durham Symposium on Stochastic Analysis, July 1th to July 2th,

More information

Ruin Probability for Non-standard Poisson Risk Model with Stochastic Returns

Ruin Probability for Non-standard Poisson Risk Model with Stochastic Returns Ruin Probability for Non-standard Poisson Risk Model with Stochastic Returns Tao Jiang Abstract This paper investigates the finite time ruin probability in non-homogeneous Poisson risk model, conditional

More information

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES

MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES Applied Probability Trust (27 March 2008) MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES NICOLE BÄUERLE, Universität Karlsruhe (TH) RUDOLF GRÜBEL, Leibniz Universität Hannover Abstract The classical

More information

Paper Review: Risk Processes with Hawkes Claims Arrivals

Paper Review: Risk Processes with Hawkes Claims Arrivals Paper Review: Risk Processes with Hawkes Claims Arrivals Stabile, Torrisi (2010) Gabriela Zeller University Calgary, Alberta, Canada 'Hawks Seminar' Talk Dept. of Math. & Stat. Calgary, Canada May 30,

More information

Distributed Stochastic Optimization in Networks with Low Informational Exchange

Distributed Stochastic Optimization in Networks with Low Informational Exchange Distributed Stochastic Optimization in Networs with Low Informational Exchange Wenjie Li and Mohamad Assaad, Senior Member, IEEE arxiv:80790v [csit] 30 Jul 08 Abstract We consider a distributed stochastic

More information

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms

Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Almost Sure Convergence of Two Time-Scale Stochastic Approximation Algorithms Vladislav B. Tadić Abstract The almost sure convergence of two time-scale stochastic approximation algorithms is analyzed under

More information

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland

Random sets. Distributions, capacities and their applications. Ilya Molchanov. University of Bern, Switzerland Random sets Distributions, capacities and their applications Ilya Molchanov University of Bern, Switzerland Molchanov Random sets - Lecture 1. Winter School Sandbjerg, Jan 2007 1 E = R d ) Definitions

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

Distributed MAP probability estimation of dynamic systems with wireless sensor networks

Distributed MAP probability estimation of dynamic systems with wireless sensor networks Distributed MAP probability estimation of dynamic systems with wireless sensor networks Felicia Jakubiec, Alejandro Ribeiro Dept. of Electrical and Systems Engineering University of Pennsylvania https://fling.seas.upenn.edu/~life/wiki/

More information

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures

Numerical scheme for quadratic BSDEs and Dynamic Risk Measures Numerical scheme for quadratic BSDEs, Numerics and Finance, Oxford Man Institute Julien Azzaz I.S.F.A., Lyon, FRANCE 4 July 2012 Joint Work In Progress with Anis Matoussi, Le Mans, FRANCE Outline Framework

More information

ECON 616: Lecture 1: Time Series Basics

ECON 616: Lecture 1: Time Series Basics ECON 616: Lecture 1: Time Series Basics ED HERBST August 30, 2017 References Overview: Chapters 1-3 from Hamilton (1994). Technical Details: Chapters 2-3 from Brockwell and Davis (1987). Intuition: Chapters

More information

Exercises with solutions (Set D)

Exercises with solutions (Set D) Exercises with solutions Set D. A fair die is rolled at the same time as a fair coin is tossed. Let A be the number on the upper surface of the die and let B describe the outcome of the coin toss, where

More information

Ruin probabilities in multivariate risk models with periodic common shock

Ruin probabilities in multivariate risk models with periodic common shock Ruin probabilities in multivariate risk models with periodic common shock January 31, 2014 3 rd Workshop on Insurance Mathematics Universite Laval, Quebec, January 31, 2014 Ruin probabilities in multivariate

More information

Stochastic Processes

Stochastic Processes Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space

More information

Reduced-load equivalence for queues with Gaussian input

Reduced-load equivalence for queues with Gaussian input Reduced-load equivalence for queues with Gaussian input A. B. Dieker CWI P.O. Box 94079 1090 GB Amsterdam, the Netherlands and University of Twente Faculty of Mathematical Sciences P.O. Box 17 7500 AE

More information

Decision principles derived from risk measures

Decision principles derived from risk measures Decision principles derived from risk measures Marc Goovaerts Marc.Goovaerts@econ.kuleuven.ac.be Katholieke Universiteit Leuven Decision principles derived from risk measures - Marc Goovaerts p. 1/17 Back

More information

Scaling Limits of Waves in Convex Scalar Conservation Laws under Random Initial Perturbations

Scaling Limits of Waves in Convex Scalar Conservation Laws under Random Initial Perturbations Scaling Limits of Waves in Convex Scalar Conservation Laws under Random Initial Perturbations Jan Wehr and Jack Xin Abstract We study waves in convex scalar conservation laws under noisy initial perturbations.

More information

MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES

MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES Adv Appl Prob 40, 578 601 (2008) Printed in Northern Ireland Applied Probability Trust 2008 MULTIVARIATE RISK PROCESSES WITH INTERACTING INTENSITIES NICOLE BÄUERLE, Universität Karlsruhe (TH) RUDOLF GRÜBEL,

More information

Worst-Case-Optimal Dynamic Reinsurance for Large Claims

Worst-Case-Optimal Dynamic Reinsurance for Large Claims Worst-Case-Optimal Dynamic Reinsurance for Large Claims by Olaf Menkens School of Mathematical Sciences Dublin City University (joint work with Ralf Korn and Mogens Steffensen) LUH-Kolloquium Versicherungs-

More information