LARGE DEVIATIONS AND RARE EVENT SIMULATION FOR PORTFOLIO CREDIT RISK

Size: px
Start display at page:

Download "LARGE DEVIATIONS AND RARE EVENT SIMULATION FOR PORTFOLIO CREDIT RISK"

Transcription

1 LARGE DEVIATIONS AND RARE EVENT SIMULATION FOR PORTFOLIO CREDIT RISK by Hasitha de Silva A Dissertation Subitted to the Graduate Faculty of George Mason University In Partial fulfillent of The Requireents for the Degree of Doctor of Philosophy Matheatics Coittee: Dr. A. Vidyashanar, Dissertation Director Dr. Habir Laba, Coittee Meber Dr. Thoas Wanner, Coittee Meber Dr. ie Xu, Coittee Meber Dr. David Walnut, Departent Chair Dr. Donna Fox, Associate Dean for Student Affairs, College of Science Dr. Peggy Agouris, Dean, College of Science Date: Suer Seester 206 George Mason University Fairfax, VA

2 Acnowledgents I would lie to express y deepest gratitude to y advisor, Dr. Anand N. Vidyashanar. Without his guidance, this thesis would not have coe to fruition. He was always available to answer y questions, and put e on the right trac, every tie I was confronted by a roadbloc in y research. I would also lie to than y coittee ebers Dr. Habir Laba, Dr. Thoas Wanner, and Dr. ie Xu for their valuable coents. Over the past few years, I have wored closely with any faculty ebers, at the Departent of Matheatics, at George Mason University, and I have received treendous support fro every one of the. Aongst the, I would lie to than especially the departent chair, Dr. David Walnut, and Dr. Maria Eelianeno. On a personal note, I a always grateful to y other, father, and y brother for always being there for e. ii

3 Table of Contents Page Abstract vi Introduction Factor Models of Credit Ris Vasice Single Factor Model Large Deviation Theory Sharp Large Deviations Iportance Sapling Monte Carlo Siulation Iportance Sapling Iportance sapling for rare events Relationship to Large Deviation Theory Organization of the Thesis Sharp Large Deviations for Single Factor Models Introduction Model Assuptions and Main Results Model Assuptions Main Results for Large Loss Threshold Regie Main Results for the Sall Default Regie Soe Preliinary Results Tools for the Upper Bound Coputation Tools for the Lower Bound Coputation Analysis of the Large Loss Regie Upper Bound Coputation Lower Bound Coputation Proofs of Main Theores: Large Loss Threshold Regie Proof of Theore Proof of Theore Proof of Theore iii

4 2.6 Analysis of the Sall Default Probability Regie Upper Liit Coputation Lower Liit Coputation Proofs of Main Theores: Sall Default Probability Regie Proof of Theore Suppleentary Proofs for the Chapter ε i N0, ε i Expλ c s ε i Str Exp, c, c 2, b with li = c 2 s Gaa Tail Asyptotic Rare Event Siulation for Single Factor Models Introduction Proble Stateent and Notation Iportance Sapling Procedure Main Results: Large Loss Threshold Regie Main Results: Sall Default Probability Regie Soe Probability Estiates Tools for the Upper Bound Coputation Tools for the Lower Bound Coputation Logarithic Efficiency: Large Loss Threshold Regie Upper Bound Coputation Lower Bound Coputation Proof of Main Theore: Large Loss Threshold Regie Logarithic Efficiency: Sall Default Probability Regie Upper Bound Coputation Lower Bound Coputation Proof of Main Theore: Sall Default Probability Regie Large Deviations for Higher Diensional Multifactor Gaussian Copula Introduction Model Assuptions and Main Results Model Assuptions Main Results for Large Loss Threshold Regie Main Results for the Sall Default Probability Regie Soe Preliinary Results iv

5 4.4 Analysis of the Large Loss Regie Analysis of Sall Default Regie Conclusions Future Wor Index Bibliography v

6 Abstract LARGE DEVIATIONS AND RARE EVENT SIMULATION FOR PORTFOLIO CREDIT RISK Hasitha de Silva, PhD George Mason University, 206 Dissertation Director: Dr. A. Vidyashanar Estiating the loss distribution of a credit portfolio is an iportant proble in credit ris anageent. When dealing with credit portfolios, correlations between defaults play an iportant role. Threshold based factor odels is a widely used ethod of odeling this phenoenon. Both the well now industry odels, KMV and CreditMetrics fall into this category. We begin this thesis by deriving sharp large deviation asyptotics for a single factor odel as the nuber of obligors go to infinity. Both the systeatic and idiosyncratic ris factors are allowed to tae general distributions. We derive our asyptotics under a unified fraewor, and they can be applied to any different probability distributions. Soe of the distributions that we consider for the ris factors are Gaussian, Exponential, Pareto, Gaa, and Stretched Exponential. Next we consider the rare event siulation proble for the sae odel. We develop an iportance sapling estiator, and prove its logarithic efficiency. Our iportance sapling estiator can be used for coputing Value-at-Ris and Expected Shortfall. Finally, we consider a ultifactor odel where the dependence between the obligors is given by the Gaussian copula. For this odel, we derive logarithic large deviations, letting both

7 the nuber of obligors and the nuber of ris factors go to infinity.

8 Chapter : Introduction. Factor Models of Credit Ris The ris that a given obligor ay default on its credit obligation is referred to as credit ris. Estiating the loss distribution of a credit portfolio is an iportant proble not only in ris anageent, but also in pricing structured financial products. When dealing with credit portfolios, correlations between defaults play an iportant role. Threshold based factor odels see [30], [2], is a widely used ethod of odeling this phenoenon. Both the well now industry odels, KMV and CreditMetrics fall into this category. These odels postulate that each obligor defaults if its latent variable crosses a pre-specified threshold. Correlations between the latent variables are introduced by using a coon factor. To ae the discussion ore precise, let the latent variable of the th obligor be given by X = az + bε.. In the above equation, Z and ε are independent rando variables, ε are i.i.d., and b = a 2 with 0 < a <. In financial literature, Z and ε are often referred to as the systeatic and idiosyncratic ris factors respectively. Let us denote the cuulative distribution functions of X, Z and ε by G, F and H respectively. We set the threshold for the th obligor at G p so that the th obligor defaults if X < G p. Therefore, we ay denote the default indicators by Y = {X <G p }..2

9 Then, by.2, the probability that the th obligor defaults is given by PY = = p. Let denote the nuber of obligors in the portfolio. Define the portfolio loss by L = = U l Y where U s are rando variables taing values in 0, ], and l s are deterinistic. U is the loss given default -recovery rate, and l is the credit exposure, so that U l is the actual loss suffered. By assuing U, l =, p = p and X, Z, ɛ N0,, Vasice established L li P x = Φ bφ x Φ p a. This is referred to as the large hoogeneous portfolio approxiation see [33] and [32]. We suarize the relevant results of what is generally nown as the Vasice Single Factor Model VSFM..2 Vasice Single Factor Model As discussed in the previous section, VSFM see [33] and [32] deals with approxiating the asyptotic loss distribution, assuing the latent variable odel given in.. Both the default probabilities and the credit exposures are assued to be unifor p = p and l = l. X, Z and ε are assued to be standard noral. For an expository note on VSFM, see [30]. For sae of clarity, we forally state the assuptions of the VSFM below. Assuptions VSFM. The default indicator Y is given by Y = {X < Φ p},.3 2

10 where X = az + bε, where Z and ɛ independent standard noral rando variables, 0 < a < and b = a Fro.3 above it follows that the unconditional default probability of the th obligor PY = is given by PY = = p. This eans that the lielihood of default of each obligor is the sae. 3. The default probability of the -th obligor conditioned on Z, denoted by pz, is given by pz = PY = Z = z = P X < Φ p Z = z = P az + bε < Φ p Z = z = P az + bε < Φ p = P ε < Φ p az b Φ p az = H. b 4. The total loss fro defaults L is given by 3

11 L = = Y. This eans that the credit exposure is the sae for every obligor. We next analyze the distribution of the portfolio loss fraction L. Theore.2.. Suppose Assuptions VSFM are satisfied. Then, L pz in probability as. Proof. Let ɛ > 0. L P pz > ɛ Z Var L Z ɛ 2 = 2 ɛ 2 VarL Z = pz pz ɛ 2 ɛ 2. By taing the expectation on both sides L P pz > ɛ 2 0 as. ɛ2 Therefore, for every ɛ > 0 li P L pz > ɛ = 0. 4

12 The above theore enables us to find the liiting distribution of the loss fraction L as. This is given in the next theore. Theore.2.2. Suppose Assuptions VSFM hold. Then, L li P x = Φ bφ x Φ p a. Proof. L li P x = P pz x since L = P Z > p x since pz = Φ = P Z < p x pz in distribution. Φ p az b is decreasing in z = Φ p x. But, Φ p az pz = Φ b = p z = Φ p bφ z. a Therefore, L li P x = Φ bφ x Φ p a..4 In the VSFM, the distributions of X, Z and ε are assued to be standard noral rando variables. An iportant fact to notice is that see [30] if we assue that X, ε and Z have 5

13 cuulative distribution functions H, G and F, and ε and Z both have syetric distributions, then the direct generalization of.4 is L li P x = F bh x G p a..5 Many extensions to the VSFM have been considered by previous authors. See for exaple [2], [8], [28] and [27]. In Chapter 2, we derive sharp large deviation asyptotics for a generalized version of the VSFM. We consider stochastic loss given defaults, non-hoogeneous default probabilities, and non-hoogeneous credit exposures..3 Large Deviation Theory Large deviation theory deals with approxiating the exponential rates at which certain probabilities of rare events decay. In this section, we provide a brief introduction to large deviation theory. Standard references for large deviation theory include [9], [3] and [0]. Consider a sequence {X i } i of i.i.d. rando variables with probability law P and EX = µ <. Assue E X <. By the wea law of large nubers, it is nown that for any ɛ > 0 li P X i µ < ɛ =, i= and hence for any l > µ, li P X i > l = 0. i= In certain situations, we ight be interested in estiating the probability p i= X i > l, 6

14 when is is large. One of the ost iportant results in large deviation theory, Craer s Theore states that p X i > l Ce Il for large, i= where C is a constant. Il is nown as the rate function. The above equation says that the probability P i= X i > l decays exponentially quicly when l > µ. Before we state Craer s Theore, we give below a foral definition of a sequence of probability easures satisfying a large deviation principle. Definition.3.. A sequence of probability easures {P } N is said to satisfy a large deviation principle with rate function I if,. I is lower sei continuous. 2. {x Ix < } is copact for every R. 3. For every closed set C li sup log P C inf x C Ix. 4. For every open set G li sup log P G inf x G Ix. Next we introduce soe basic and recurring concepts in large deviation theory. See [9] for the proofs. Definition.3.2. Let X be a rando variable and suppose Mθ = Ee θx < for all 7

15 θ R. Let Λθ = logmθ. Then the Fenchel-Legendre transfor of Λ is defined to be Λ x = supθx Λθ for x R..6 θ R The following theore suarizes soe useful properties of Λ defined above. Theore.3.3. Let X be a rando variable and suppose Mθ = Ee θx < for all θ R. Let Λθ = logmθ. Let EX = µ. Then,. Λ is convex and lower sei-continuous. 2. If Λ is differentiable at θ x and x = Λ θ x then, Λ x = θ x x Λθ x. 3. Λ x 0 for x R. 4. Λ = Λ is decreasing for x < and increasing for x > µ. We are now ready to provide a foral stateent of Craer s theore. Theore.3.4 Craer s Theore. Let {X } be a sequence of i.i.d. rando variables defined on a probability space Ω, F, P and let S = i= X. Let Mθ = Ee θx < for all θ R, and let Λθ = ln Mθ. Let l > EX. Then, li log P S > l = Λ l,.7 where, Λ l = sup θ R θl Λθ. Λ l in.7 is called the large deviation rate function. For certain rando variables, Λ can be calculated explicitly. 8

16 For instance, if X N0, σ 2 then, Λ l = l2 2σ 2, and if X Bernoullip then, Λ l = l log l l + l log p p for l [0, ]..3. Sharp Large Deviations Large deviation theory is concerned with estiating the rate of decay of rare events on an exponential scale. In soe instances ore accurate estiates are possible. These are called sharp or precise large deviations. Craer s Theore.7 iplies that there is a sequence f, l such that PS > l f, le Λ l,.8 and li log f, l = 0. Since Craer s Theore is concerned only with finding the rate function on a exponential scale, it does not tell us what f, l is. If we can find f, l, then we can estiate the probabilities ore accurately. f, l in.8 was deterined explicitly by Bahadur and Rao [4]. Theore.3.5 Theore of Bahadur and Rao. Let {X i } i N be a sequence of i.i.d. rando variables and let S = i= X i. Let Λθ = logmθ where Mθ = Ee θx. Assue Λθ is analytic at θ l and Λ θ l = l. Then, 9

17 li lp S > l =, where l = θ l Λ θ l 2π expλ l, and Λ l = sup{θl Λθ}. θ R Rear:. By Theore.3.5, f, l in.8 is deterined to be f, l = Cl, where Cl = θ l Λ θ l 2π. 2. Note that the assuption Λ θ l = l iplies that Λ l given in.7 can be expressed as Λ l = sup{θl Λθ} = θ l l Λθ l. θ R.4 Iportance Sapling Iportance Sapling is a variance reduction technique used in Monte Carlo siulations. We discuss an iportance sapling procedure for single factor credit ris odels in Chapter 3. 0

18 Therefore, in this section, we discuss soe basics of Monte Carlo siulation and Iportance Sapling..4. Monte Carlo Siulation Monte Carlo ethods are widely used to approxiate the expectations of rando variables. Suppose we are confronted with the proble of approxiating the expectation of the rando variable AX where X is a rando variable with density f and A : R R. Let us denote the quantity we need to approxiate by µ. µ = E f AX = Axfx dx. Suppose we now how to siulate fro the density f and let X denote the realization of the th siulation so that X are i.i.d. rando variables with coon density f. Then, α,f = AX.9 = is an unbiased estiator for the quantity E f AX. That is E f α,f = µ. The variance of a the estiator α,f is given by Var f α,f = 2 Var f AX = = Var f AX. If we can find another unbiased estiator β such that

19 Eβ = µ and Varβ < Var f α,f = VarAX then β ay be preferred over α,f. Iportance sapling is a ethod that is used to find such estiators β..4.2 Iportance Sapling We continue our discussion of the previous section: the proble of estiating E f AX. Let g be another density such that gx = 0 = fx = 0, x R. Then E f AX = Ax fx gx dx gx = E g AY fy gy. Therefore the estiator β,g given by β,g = AY fy gy = where Y s are i.i.d. with density g is an unbiased estiator for µ = E f AX. The variance of β,g is given by 2

20 Var g β,g = 2 Var g AY fy gy = = Var g AY fy gy. If Var g β,g < Var f α,f where α,f is defined in.9, then the estiator β,g will be preferred over α,f. Notice that the both estiators have the sae expected value. Therefore it suffices to copare the second oents. In order to find a better β,g copared to α,f we would require that E g A 2 Y f 2 Y g 2 Y < E f A 2 X. Notice also that the variance of β,g is Var g β,g = E g = AY fy 2 gy µ Ax fx 2 gx µ dx. Therefore the density gx = Axfx µ gives a zero variance estiator. This is of course not possible to find since it would require 3

21 the nowledge of the quantity µ, what we are trying to estiate in the first place. In the next section we consider the case of iportance sapling for rare events..4.3 Iportance sapling for rare events Consider the proble of estiating the probability PX B where the rando variable X has density f. PX B = E f {X B}X.0 Thus in view of the above discussion Ax = {x B}. Therefore the theoretically optial iportance sapling density gx is given by gx = {x B}xfx PX B which is exactly the conditional probability distribution of X given X B. Thus when we are looing for an iportance sapling density, we should loo for densities gx that assign a high value to x B gx dx. In the next section we loo into how rare event siulation is related to large deviation theory..4.4 Relationship to Large Deviation Theory Relationship between iportance sapling and large deviation theory arises through the coputation of rare event probabilities. Let {X i } i N be a i.i.d. rando variables with density f. Let S = = X. Suppose we are required to calculate the probability P f S > l = E f S > l where l >> E f X. For every θ R, 4

22 g θ x = e θx Λ f θ fx where Λ f θ = log E f e θx, defines a probability easure. In using Monte Carlo siulation to copute P f S > l, instead of sapling fro density f, we could saple fro g θ. But we would want to deterine the optial θ for doing so. P f S > l = E f {S > l} = E gθ {S > l}e θs+λ f θ We wish to pic θ so that E gθ {S > l}e θs+λ f θ 2 is iniized. Define the second oent of the iportance sapling estiator M 2 θ, l = E gθ {S > l}e θs+λ f θ 2.. Note that E gθ {S > l}e θs+λ f θ 2 e 2lθ+2Λ f θ. M 2 θ, l e 2lθ+2Λ f θ. log M 2 θ, l 2lθ + 2Λ f θ. Suppose there exists θ l such that Λ f l = sup θl Λ f θ θ R 5

23 = θ l l Λ f θ l..2 By.2, log M 2 θ l, l 2Λ f l. Therefore li sup log M θ 2 l, l 2Λ f l..3 We also have, for any θ, P f S > l 2 E gθ {S > l}e θs+λ f θ 2 by ensen s inequalty. Therefore, for any θ P f S > l 2 M 2 θ, l. 2 li inf 2 log P f S > l log M 2 θ, l. log P f S > l li inf log M 2 θ, l. Therefore By.3 and.4 we have 2Λ l li inf log M 2 θ l, l..4 6

24 li log M 2 θ l, l = 2Λ l. But we also have that li log P f S > l = Λ l. li log M 2 θ l, l log P f S > l = 2. Thus, as P{S > l} 0, the ratio log M 2 θ l,l log P f S >l converges to a liit. Therefore, by choosing θ = θ l we have found an asyptotically optial iportance sapling distribution g θl..5 Organization of the Thesis In this thesis we study large deviations and rare event siulation algoriths for factor odels of portfolio credit ris. In Chapter 2 and Chapter 3, we study a single factor odel where the latent variable of the th obligor is given by X = az + bε.5 where Z and ε are rando variables, ɛ are i.i.d. and b = a 2 with a 0. All three rando variables X, Z and ε are allowed to tae general distributions. 7

25 In Chapter 2 we derive with sharp large deviation asyptotics for the portfolio loss. Logarithic large deviations have been studied for the odel in.5 by Glasseran et al. in [7] and [5] assuing X, Z and ε are distributed as N0,. Our wor is inspired by Glasseran and his group but differs in the following sense:. We consider general probability distributions for X, Z and ε. 2. We derive sharp large deviation asyptotic where as Glasseran et al. see [7], [5] derive logarithic large deviations. In Chapter 3 we develop a rare even siulation algorith and prove its logarithic efficiency. This algorith is the sae algorith developed by Glasseran et al. in [7] assuing all rando variables are Standard Gaussian. Our contribution is that we prove its logarithic efficiency for general distribution functions. In Chapter 4 we derive logarithic large deviations for the ultifactor Gaussian copula odel with finite nuber of obligor types, previously considered by Glasseran et al. see [5]. Under their odel, if the th obligor is of type, its latent variable is given by X = a T Z + b ε,.6 where a R d with 0 < a < a < ā <, Z is a d diensional standard noral rando vector, b = a T a and ε are independent standard noral rando variables. Glasseran et al. see [5] derive logarithic large deviations for the portfolio loss letting the nuber of obligors go to infinity but holding the nuber of factors fixed at d. They also divide the set of obligors into t types where t N. We expand on the results by Glasseran et al. see [5] and let the nuber of factors go to infinity together with the nuber of obligors. To this end we change the odel in.6 to accoodate Z to be an 8

26 diensional vector. More specifically,.6 is replaced by X = a T Z + b ε, where a R with 0 < a < a < ā <, Z is a diensional standard noral rando vector, b = variables. a T a and ε are independent standard noral rando Finally, the concluding rears are given in Chapter 5. 9

27 Chapter 2: Sharp Large Deviations for Single Factor Models 2. Introduction Threshold based factor odels are a widely used tool in credit ris odeling see [30], [2]. Consider the factor odel where the latent variable of the th fir is given by X = az + bε. 2. In the above equation, Z and ε are independent rando variables, ɛ are i.i.d., and b = a 2 with 0 < a <. All three rando variables X, Z and ε are allowed to tae general distributions. Let us denote the cuulative distribution functions of X, Z and ε by G, F and H respectively. In financial literature, Z and ε are referred to as systeatic and idiosyncratic ris factors respectively see [30]. X typically represents the value of the th fir. The th fir defaults if its value falls below a pre-specified threshold. Let Y = {X <G p }, 2.2 so that the default probability of the -th fir is p. Let be the nuber of obligors, and let L = = U l Y be the loss of the portfolio, where U s are rando variables taing values in 0, ] and l s are deterinistic. U is the loss given default LGD and l is the exposure at default EAD, so that U l is the actual loss suffered. This type of odeling is otivated by Merton s fir value odel see [26]. 20

28 The systeatic ris factor Z, gives rise to correlations between X s. An iportant feature of 2. is that conditioned on Z, the latent variables X s becoe independent. The odel we study, the one given in 2. reduces to the widely used single factor Gaussian Copula when both Z and ε are assued to be Gaussian. The liiting loss distribution for the Gaussian Copula was first studied by Vasice see [33], [32]. By assuing U, l =, p = p, and X, Z, ɛ N0,, Vasice established that L li P > x = Φ bφ x Φ p a. 2.3 This is nown as the large hoogeneous portfolio approxiation see [33], [32] and [30]. In this chapter, we exaine the large deviation behavior for the Vasice s single factor odel when both Z and ε are assued to tae general distribution functions. But unlie Vasice, we consider heterogeneous portfolios and stochastic loss given defaults LGD. Our odel can be considered to fall under the category of threshold based factor odels see [2]. It is well nown that Vasice s result 2.3, holds for general distributions. See for exaple [30] and [29]. If we assue that X, ε and Z have cuulative distribution functions H, G, F, and ε and Z both have syetric distributions, then the direct generalization of 2.3 is L li P > x = F bh x G p a. 2.4 It is evident fro 2.4 that the event {L > x} is not a large deviation event. Large deviations for the Gaussian Copula was derived by Glasseran et al. see [7] for the single factor case, and Glasseran et al. see [5] for the ultifactor case. Following their 2

29 wor, we identify two large deviation regies: large loss threshold regie, and sall default probability regie. The otivation is as follows: If we replace x in 2.4 by Hs for soe s, then we would have a rare event with the loss level getting larger. This is the large loss regie. Siilarly, if we replace p in 2.4 by G s, then we have a rare event with the probability of default going to 0. This is the sall default regie. Our wor is inspired by Glasseran et al. [7] and [5] but is different in the following ways:. We consider general distributions for Z and ε. 2. We derive sharp large deviations where as they derived logarithic scale large deviation. Our analysis reveals that the large deviation regie is always governed by the distribution of Z. This is to be expected by 2.4. Glasseran et al. see [7] and [5] noted that large loss regie produces a heavier tail copared to the sall default regie. We ae the sae observation, even for general distributions. The rest of the chapter is organized as follows. In section 2.2, we postulate the odel assuptions and state the ain theores. The ain results for the large loss threshold regie is given by Theore 2.2.,Theore 2.2.2, and Theore 2.2.4, where as the ain results for the sall default regie is given by Theore Next we set out to prove these theores. Since we are dealing with two probability regies, it turns out that soe concepts recur in the proofs. Therefore, we establish soe preliinary results in section 2.3, and these will be eployed in proving our ain theores. Section 2.4 and section 2.5 deal with large loss threshold regie. In section 2.4 we prove two general theores, Theore 2.4. and Theore These two theores do not assue any particular distributions for Z and ε, but they rather hypothesize certain conditions that need to be satisfied by the distributions of Z and ε. In section 2.5, we apply these two general theores Theore 2.4. and Theore to prove Theore 2.2.,Theore 2.2.2, and Theore Section 2.6 and 2.7 is devoted to the sall default probability regie. 22

30 Siilar to the large loss regie, we prove two general theores, Theore 2.6. and Theore in section 2.6. These two theores are used to prove Theore in section Model Assuptions and Main Results In this section, we state the odel assuptions first and then we give the ain results for this chapter. In section 2.2., we provide a set of assuptions that are universally applicable to both probability regies. Then we consider each probability regie separately. In section is concerned with the large loss threshold regie: we first provide the regie specific assuptions, and then we state the ain results. Siilarly section is devoted to the sall default probability regie; we provide the regie specific assuptions, and then we state the ain results Model Assuptions In this section, we provide a set of assuptions, Assuptions GEN, that are universally applicable to both probability regies. We use the following notation for the large loss regie = Nuber of obligors Y = Default indicator of the -th obligor. for default and 0 otherwise. p = PY = p z = PY = Z = z L = l U Y where l is deterinistic and U is rando = 23

31 where as the following is for the sall default regie = Nuber of obligors Y = Default indicator of the -th obligor. for default and 0 otherwise. p = P Y = p z = P Y = Z = z L = = l U Y where l is deterinistic and U is rando. The reason for the notational difference is that in the sall default regie, we let the default probabilities approach 0, where as we hold the constant in the large loss regie. U is a rando variable taing values in 0, ]. In financial literature, it s nown as the loss given default LGD. It is the loss for a dollar worth of investent, which is also equal to inus the recovery rate. See Gordy [8] and Kupiec [2]. The following set of assuptions, Assuptions GEN are universally applicable for both probability regies. We then postulate regie specific assuptions Assuptions LL for the large loss threshold regie, and Assuptions SD for the sall default probability regie. Assuptions GEN. Latent variable X is defined as follows: Let Z be a rando variable with density f and distribution function F and let ε be a rando variable with density h and distribution function H. Z and ε 24

32 are independent for every. ε i and ε are independent for i. Assue f and h are syetric. Let X = az + bε, 2.5 where 0 < a < and b = a Fro 2.5 it follows that PX < x Z = z = PaZ + bε < x Z = z = P ε < x az b x az = H, b and PX < x = x az H fz dz. b We will denote Gx = PX < x and gx = d dx Gx. 3. The total loss fro defaults is denoted by L. For the large loss regie L = l U Y, = where as for the sall default regie 25

33 L = = l U Y. l is deterinistic and satisfies 0 < l l l <. U are i.i.d. rando variables that tae values in [u, ] for soe 0 < u. U are independent of Z and ε. The ean of U will be denoted by u = u. 4. li = u l = li u = l = C 2.6 for soe 0 < C <. Having stated Assuptions GEN which are universally applicable for both probability regies, we need to consider the two probability regies separately. In section we provide the assuptions and the ain results for the large loss threshold regie. Then, in section we provide the assuptions and the ain results for the sall default probability regie Main Results for Large Loss Threshold Regie This section devoted to the large loss threshold regie. The ain results for this section is given by Theore 2.2.,Theore 2.2.2, and Theore These theores are proved in section 2.5. Before we state the ain results, we provide the regie specific assuptions, Assuptions LL. We note that the full set of assuptions for large loss threshold regie coprise of Assuptions LL and Assuptions GEN. Assuptions LL 26

34 . The default indicators Y are given by Y = {X < G p }. 2.7 By 2.7 it follows that P Y = = p. 2. The default probabilities p satisfy 0 < p p p <. 3. By 2.7 above the default probability of the -th obligor conditioned on Z = z is given by p z = P X < G p Z = z = P az + bε < G p Z = z = P az + bε < G p = P ε < G p az b G p az = H. b 4. The threshold loss is given by x = Hss l u = Hss u l where s > = = We are interested in the finding exact asyptotic for PL > x in the following sense: we will loo for a quantity y such that li y PL > x =

35 y will typically be given in ters of ters a, b, s, s. We will also characterize the large deviation regie in ters of s. This eans that once we have assued certain distribution for Z and ε, and deterined y explicitly, 2.9 ay hold, for exaple when s = log but not when s =. Therefore it is iportant to characterize the large deviation regie for which 2.9 holds in ters of s. This tas is accoplished. Before we present our ain results for the large loss threshold regie, we introduce a quantity that will be recurring throughout the ain results. γ = sb a 2.0 A striing feature in the large loss regie copared to sall default regie is that even though the speed of convergence y given in 2.9 is governed by Z, the large deviation regie is governed by ε. We will notice in the sall default regie, ε has no influence on the large deviation regie. This indicates that for high quality credit, the idiosyncratic ris has ore of an influence on the loss distribution. We will give below soe theores that we derived by assuing specific distributions for Z and ɛ, in addition to the ain assuptions, Assuptions GEN and Assuptions LL. Even though the large deviation result is always given by the distribution of Z, there is no one theore that can account for all distributions. Therefore we give soe results below restricting ourselves to soe specific distributions. We consider Gaussian, Exponential and Stretched Exponential distributions. The reason for picing these distributions is that they have been considered as alternative odels for stoc price distributions See [24] and [25]. For our sharp large deviation theores to hold, Z needs to have a heavier tail than ε. If it is not the case, it is still possible to obtain logarithic large deviations. We first consider the case when ε N0,. Theore Suppose Assuptions GEN and Assuptions LL hold. Let γ be 28

36 given by 2.0. Suppose p = p = p. Suppose ε N0,. Suppose either s. li = 0 and s > 0 or log 2. s = log and 0 < s <. Then,. If Z N0, then, li γs 2 G p 2 a 2πγs e PL > x =. 2. If Z Gaaα, β then, li βγα β α γ α s α e β γs G p a PL > x =. The adissible large deviation regie characterized by s in Theore 2.2., coes fro the fact that ε N0,. It is clear that the large deviation result coe fro the distribution of Z. McCauleya and Gunaratne see [25] considered odeling stoc returns with exponential distributions. In our next theore we let both Z and ε be Exponential distributed. That is, Z Expλ and ε Expλ 0. Once again we need to ae the restriction that λ 0 λ in order to ensure that Z is heavier tailed than ε. Theore Suppose Assuptions GEN and Assuptions LL hold. Let γ be given by 2.0. Suppose p = p = p. Suppose ε Expλ and Z Expλ 0 with λ 0 λ. Suppose either s. li log = 0 and s > 0 or 29

37 2. s = log and 0 < s <. Then, li eλ 0 γs G p a PL > x =. The adissible large deviation regie of s in Theore owes to the fact that ε Expλ. We conclude the ain results for this section by considering the case when ε Stretched Exponential. Stretched Exponential distribution has recently been proposed for odeling stoc prices. For exaple, Sornette et al. [24] considered Stretched Exponential and Paretto distributions for odeling stoc prices. Definition Stretched Exponential Distribution. A rando variable Z is said to be Stretched Exponentially distributed with paraeters, c, c 2, b, denoted by Z Str Exp, c, c 2, b, if there exists 0, and slowly varying functions b, c, c 2 : 0, 0, and t 0 > 0 such that for t > t 0 : c t exp btt PZ > t c 2 t exp btt. Theore Suppose Assuptions GEN and Assuptions LL hold. Suppose p = p = p c. Suppose ε i Str Exp, c, c 2, b. Suppose li s c 2 s =. Suppose either. li bs s log = 0 and s > 0 or 2. bs s = log and s < 2. Then, 30

38 . If Z Str Exp 0, c 0, c0 2, b0 with either a 0 < or b = 0 and li b 0 s = B < then, li sup s γ 0 s 0 c 0 2 s eb0 PL > x and li inf s γ 0 s 0 c 0 s eb0 PL > x. 2. If Z Parettoα, β then, li α γs PL > x =. β Having stated both the assuptions and the ain results for the large loss threshold regie, we next consider the sall default probability regie Main Results for the Sall Default Regie This section devoted to the sall default probability regie. The ain results for this section is given by Theore This theore is proved in section 2.7. Before we state the ain results, we provide the regie specific assuptions, Assuptions SD and Assuptions SD2. We note that the full set of assuptions for sall default probability regie coprise of Assuptions SD, Assuptions SD2 and Assuptions GEN. Assuptions SD 3

39 . The default indicators Y are given by Y = {X < s s }, 2. for soe sequence s and 0 < s. This iplies that default probability of the -th obligor is p = P Y = = G s s. 2. By 2. above the default probability of the -th obligor conditioned on Z is given by p z = P Y = Z = z = P X < s s Z = z = P az + bε < ss Z = z = P ε < ss az b az ss = H. b 3. The threshold loss is given by x = q l u = qu l 2.2 = = where 2 < q <. 32

40 We are interested in coputing the rare event asyptotic for PL > x as. Siilar to the large loss threshold regie, in the sall default probability regie, the large deviation asyptotic is governed by the distribution of Z. But there is an iportant distinction. In the large loss threshold regie, the large deviation regie characterized by s was governed by ε. In the Sall Default regie, the distribution of ε has no effect on the large deviation regie. All that we require is that density of ε, h is continuous at q and hq 0. Recall that in this regie the threshold loss is given by x = q = u l. We state this as Assuptions SD2. As entioned before, the full set of assuptions for sall default regie coprise of Assuptions GEN, Assuptions SD and Assuptions SD2. Assuptions SD2. The density of ε, h does not vanish at H q : hh q h is continuous on a neighborhood of H q. Before we give the ain results for this section, we define an iportant quantity for this regie. We define γ = s a. 2.3 The following theore suarizes the ain results for the sall default probability regie. Theore Suppose Assuptions GEN, Assuptions SD and Assuptions SD2 hold. Let γ be given by 2.3. Then,. If Z N0, and li s 4 = 0 then, 2 li 2πγ s e γ 2 s + b a H q PL > x =. 33

41 2. If Z Expλ and li s 3 = 0 then, li eλ γ s + b a H q PL > x =. 3. If Z Gaaα, β with α > and li s 3 = 0 then, li βγα β α γ α s α e β γ s + b a H q PL > x =. s 3 4. If Z Str Exp, c, c 2, b and li bs = 0 then, li sup c s ebsγ s PL > x and li inf c 2 s ebsγ s PL > x. 5. If Z Parettoα, β and there exists 0 > 0 li s 0 = 0 then, li α γs PL > x =. β 2.3 Soe Preliinary Results Having stated the ain results, our next tas is to prove Theore 2.2.,Theore 2.2.2, Theore 2.2.4, and Theore We are dealing with two probability regies and it turn out there are soe siilarities between our ain proofs. In this section we introduce soe tools that we will use in coon for both the probability regies. In section 2.3. we introduce the tools for the 34

42 upper bound coputation and in section we introduce the tools for the lower bound coputation Tools for the Upper Bound Coputation Define the cuulant generating function of U by Λθ = log Ee θu 2.4 which obviously holds for both regies. Note that U are i.i.d rando variables taing values in [u, ]. Define the cuulant generating function of L conditioned on Z by ψ θ, z = log EeθL Z = z. 2.5 Let θ z = argin θ 0 { θx + ψ θ, z}. 2.6 Define a new easure P by dp dp = e θzl+ψθz,z, 2.7 and let E denote the expectation under P. The next lea shows that P is indeed a probability easure. Lea P defined by 2.7 is a probability easure. Proof. It suffices to show that E dp dp = E e θzl ψθz,z =. 35

43 E e θzl ψθz,z = E e θzl logeeθzl Z = E = E e θzl Ee θzl Z E e θzl Ee Z θzl Z Z = E Ee Z θzl E e θzl =. It should be noted that ψ θ, z defined above taes different expressions depending on what probability regie we are dealing with. In the large loss regie: ψ θ, z = log E e θl Z = z = log E e θ = l U Y Z = z = = log E e θl U Y Z = z by conditional independence = = = = = log E E e θl U Y U Z, Z = z = z log E + p ze θl U Z = z = log + p z E[e θl U Z = z] 36

44 = = log + p z Ee θl U by the independence of U and Z = = log + p z e Λθl. Clearly, a siilar calculation holds for the sall default regie as well. We opt to use the sae notation ψ θ, z for econoy since it would be obvious fro the context. We suarize this as follows. = ψ θ, z = log + p z e Λθl for the large loss regie = log + p z e Λθl. 2.8 for the sall default regie The following is a restateent of Lea 4.3 in [5]. Lea Let Λ is the cuulant generating function of U given by 2.4. Then, there exists a positive constant D such that log + αe Λθ αu θ + Dθ 2, 2.9 for all θ [0, ] and α [0, ]. u = u is the ean of U Tools for the Lower Bound Coputation The following is a restateent of Lea 3.0 in [5]. Lea Suppose a sequence of events {A } N and a sequence of positive integers {n } N with li n = are given. Suppose that, given A, T, =, 2,..., n are conditionally independent rando variables rando variables with conditional ean 0 for which, 37

45 li sup n 2 n = Var T A = 0. Let S = = T. Then, li P S > ɛ A = 0. Note that T and T l ay have different distributions for l. Note that = EL Z = l u p Z = l u p Z for the large loss regie. for the sall default regie But the following theore holds for both the probability regies nonetheless. Lea Let ηs : [0, ] [0, ] be a function such that ηs li = 0. Let z be an arbitrary sequence in R and let S = ηs L EL Z. 38

46 Then, li P S > ɛ Z = z = 0. Proof. We give the proof for the large loss regie. It is easily seen that a siilar proof holds for the sall default probability regie as well. Apply Lea with A = {Z = z }, n = T = ηs l U Y u p Z. T s are conditionally independent given Z. ET Z = z = 0 since U is independent of Z and Y. We will show that li sup V art 2 Z = z = 0. Note that Var T Z = z = ηs 2 l U 2 E Y u p Z 2 Z = z, 4ηs 2 l 2 39

47 and therefore 2 = Var T Z = z 4 l 2 2 4ηs 2 4 l 2 4ηs 2 0. Therefore, li sup V art 2 Z = z = 0. Therefore, for any ɛ > 0, li P S > ɛ Z = z = 0. With these tools at our disposal, we consider the two probability regies separately. 2.4 Analysis of the Large Loss Regie In this section, we prove two general theores, Theore 2.4. and Theore Theore 2.4. deals with the upper bound coputation, where as Theore deals with the lower bound coputation. These two theores will be used to prove Theore 2.2., Theore 2.2.2, and Theore We first introduce soe notation for this section. Define, for ξ R, 40

48 q ɛ ξ = H s + ξɛ s, 2.20 which proves to be a useful quantity for this section. We also reserve the notation θ and ɛ for two positive sequences such that θ, ɛ 0 and ξ 0 > 0. Define γ = sb a. 2.2 Large losses are ore liely to occur for those values of z for which p z is large. We exploit this fact in the following anner. A sequence of real nubers µ and a sequence of sets G defined by µ = γ ξ 0 ɛ s G p, 2.22 a and G = {z R : a.z sb ξ 0 ɛ s G p} = = { z R : H { z R : H G p az b G p az b } H s ξ 0 ɛ s } q ξ ɛ

49 will be used for the upper bound coputation. µ and G are related as follows: { } z G R : z sb ξ0 ɛ s G p a = µ = { } z R : z sb ξ 0ɛ s G p a = µ if a > if a < 0 Siilarly for the lower bound coputation, we define ν and H by ν = γ + ξ 0 ɛ s G p, 2.25 a and H = {z R : a.z sb + ξ 0 ɛ s G p} = = { z R : H { z R : H G p az b G p az b } H s + ξ 0 ɛ s } qξ ɛ ν and H are related by { } z H R : z sb+ξ0 ɛ s G p a = ν = { } z R : z sb+ξ 0ɛ s G p a = ν if a > if a < 0 As we entioned in the introduction, the large deviation asyptotic is governed by Z. But it turns out that the cdf of X has to satisfy certain lower bounds, naely for q ɛ 0 q ɛ ξ 0 and q ɛ ξ 0 q ɛ 0, in order for our large deviation results to hold. 42

50 2.4. Upper Bound Coputation We state the ain theore for the upper bound coputation below. Theore Suppose Assuptions GEN and Assuptions LL hold. Let ξ 0 > 0 and ɛ > 0 such that ɛ 0. Let γ be given by 2.2, let q ɛ ξ be given 2.20, and let µ be given by Then, there exists D > 0 such that for any positive sequence θ 0 PL > x e θ l.uqɛ 0 qɛ ξ 0+D l 2 θ 2 + PZ > µ, 2.28 where u = EU. Suppose further that we can pic functions Ψ U and ψ, and positive sequences ɛ, θ 0 such that the three equations given by A, A2, A3 are satisfied. li sup Ψ U s PZ > µ ψξ 0 A and li ψξ 0 =. ξ A2 li Ψ Us e θ l.uqɛ 0 qɛ ξ 0+D l 2 θ 2 = 0. A3 Then, 43

51 li sup Ψ U s PL > x Proof. First Note that 2.29 would follow by A and A2 given that 2.28 is true. Hence we need only prove We recall the following. By 2.8 ψ θ, z = = log + p z e Λθl, by 2.6 θ z = argin θ 0 { θx + ψ θ, z}, and by 2.7 P was defined to be dp dp = e θzl+ψθz,z, where E denotes the expectation under P. PL > x = E {L > x } G cz + E {L > x } G Z = E e θzl+ψθz,z {L > x } G cz + E {L > x } G Z E e θzx+ψθz,z {L > x } G cz + E {L > x } G Z E e θzx+ψθz,z G cz + E G Z 44

52 E e θ x+ψθ,z G cz + PZ G for any θ follows since θ zx + ψ θ, z θ x + ψ θ, z for any θ 0. By 2.24 { z R : z µ } if a > 0 G = { z R : z µ }. 2.3 if a < 0 Therefore, by syetry of Z, PZ G = PZ > µ. Hence, we have by 2.30 PL > x E e θ x+ψθ,z G cz + PZ < µ for any θ Now we need to handle the first ter of the RHS of the above equation. Notice that by 2.23, for z G c, G p az p z = H b G p az H b 45

53 = q ɛ ξ Recall that by 2.8 x = H ss l u. = Therefore with our new notation x = q0 ɛ l u. = At this point we will assue θ 0. We will specify θ depending on specific distributions we are looing at. For z G c, θ x + ψ θ, Z = θ q ɛ 0 = = l u + = = log + p ze Λθ l θq 0l ɛ u + log + p ze Λθ l θ q0l ɛ u + p zu l θ + D l 2 θ 2 = for large by θ q0l ɛ u + q ξ ɛ 0 u l θ + D l 2 θ 2 by 2.33 = θ l u q0 ɛ q ξ ɛ 0 + D l 2 θ 2 = 46

54 θ lu q0 ɛ q ξ ɛ 0 + D l 2 θ 2 = since q ɛ 0 q ɛ ξ 0 > 0 = θ l.uq ɛ 0 q ɛ ξ 0 + D l 2 θ 2. E e θ x+ψθ,z G cz = E e θ x+ψθ,z G cz E e θ qɛ 0 = l u + = +p log ze Λθ l e θ l.uq ɛ 0 qɛ ξ 0+D l 2 θ 2. Therefore, by 2.32 PL > x e θ l.uq ɛ 0 qɛ ξ 0+D l 2 θ 2 + PZ > µ Lower Bound Coputation We give below the ain theore for the coputation of the lower bound below. Theore Suppose Assuptions GEN and Assuptions LL hold. Let ξ 0 > 0 and ɛ > 0 such that ɛ 0. Let γ be given by 2.2, let q ɛ ξ be given by 2.20, and let ν be given by Let η : [0, ] [0, ] be such that There exists and d 0 ξ 0 and M = M ξ 0 such that for > M 47

55 qξ ɛ 0 q0 ɛ d 0 ξ 0 ηs B ηs 2 li = 0 B2 Then, PL > x P Z > ν, 2.34 for soe sequence. Suppose further that there exists functions Ψ L and ψ such that li inf Ψ Ls PZ > ν ψξ 0, B3 and li ψξ 0 =. ξ 0 0 B4 Then, li inf Ψ Ls PL > x

56 Proof. First note that 2.35 would follow fro B3 and B4, given that 2.34 is true. Therefore we need only prove Let S = ηs L EL Z and z be an arbitrary sequence in R. Then, by Lea and B2, for any ɛ > 0 li P S > ɛ Z = z = Lea Let z H. Then, li PL > x Z = z = Proof. Let z H. EL Z = z q ɛ ξ 0 l u = Therefore for > M ξ 0 ηs x EL Z = z ηs x q ɛ ξ 0 l u = = ηs qɛ 0 q ɛ ξ 0 = ηs qɛ ξ 0 q ɛ 0 l u = l u = ηs d 0ξ 0 ηs l u = by B 49

57 = d 0 ξ 0 l u 2.40 d 0 ξ 0 C, = where C = li l u. = Let 0 < ɛ < d 0 ξ 0 C. There exists M 2 = M 2 ξ 0 N such that for all > M 2 d 0 ξ 0 l u > ɛ, = or equivalently d 0 ξ 0 l u < ɛ. 2.4 = For > ax{m, M 2 } PL > x Z = z = P L EL Z > x EL Z Z = z = P = P ηs L EL Z > ηs S > ηs x EL Z Z = z 50 x EL Z Z = z

58 P S > d 0 ξ 0 = l u Z = z for > M by 2.40 P S > ɛ for > M 2 by 2.4 P S ɛ Z = z as. p z = H G p az b. Therefore p z is decreasing in z if a > 0. Analogously p z is increasing in z if a < 0. Notice also that by 2.27 { z R : z ν } if a > 0 H = { z R : z ν }. if a < 0 Assue a < 0. Then, PL > x P t [0, L > x Z = ν + t P Z = ν + t dt P L > x Z = ν P Z = ν + t dt t [0, = P L > x Z = ν P Z > ν Note that ν H. 5

59 Siilarly if a > 0 PL > x P t [0, L > x Z = ν t P Z = ν t dt P L > x Z = ν P Z = ν t dt t [0, = P L > x Z = ν P Z < ν. Note that ν H. Notice also that by the syetry of Z PZ > ν = PZ < ν. Therefore in either case whether a > 0 or a < 0, we have PL > x P Z > ν, for soe sequence. 2.5 Proofs of Main Theores: Large Loss Threshold Regie Now that we have proved the two general theores for this regie, we will use the to prove Theore 2.2., Theore 2.2.2, and Theore In these theores, ε is allowed to tae Gaussian, Exponential and Stretched Exponential distributions. Recall that q ɛ ξ = H + ξɛ ss, as given by 2.20 where H is the cuulative distribution of X. It turns out that our proofs 52

60 depend on obtaining lower bounds for the two quantities q ɛ 0 q ɛ ξ and q ɛ ξ q ɛ 0. These lower bounds need to be obtained for each distribution that X is assued to tae. That is for N0, in Theore 2.2., Expλ in Theore 2.2.2, and Stretched Exp in Theore We recall the definitions µ = γ ξ 0 ɛ s G p a given by 2.22, and ν = γ + ξ 0 ɛ s G p a given by 2.25, which we will be using throughout the rest of this section. We ay choose ɛ as we please, depending on the specific distributions we consider Proof of Theore 2.2. ε N0,. Z is allowed to tae N0, and Gaaα, β. Define ɛ =. We give s 2 below two leas whose proofs are given in the suppleentary proofs for the chapter, in section 2.8. Lea Let 0 < ξ 0 and ɛ =. Then, there exists M s 2 = M ξ 0 N such that for > M q0 ɛ q ξ ɛ 0 e 2 ξ 0ɛ 2 s 2 s 2 e ξ 0s 2 2πss Lea Let 0 < ξ 0 and ɛ =. Then, there exists M s 2 = M ξ 0 such that for > M qξ ɛ 0 q0 ɛ e 2 s2 s 2 e ξ 0s 2 2πss Lower Bound We wish to eploy Theore We need to prove B, B2, B3 and B4. Let 53

61 ɛ =. s 2 By 2.44, there exists M = M ξ 0 such that for > M,: q ɛ ξ 0 q ɛ 0 e 2 s2 s 2 Define ηs = ss e 2 s2 s 2 and d 0 ξ 0 = e ξ 0 s 2 2π 2. e ξ 0 s 2 2πss 2. B There exists and d 0 ξ 0 and M = Mξ 0 such that for > M: q ɛ ξ 0 q ɛ 0 d 0 ξ 0 ηs. B2 li ηs 2 = 0 since ηs 2 ss 2 e s2 s 2 log = log = log ss 2 e s2 s 2 log = s 2 s 2 log + os = s 2 s 2 log s 2 + o. Therefore by Theore 2.4.2, PL > x P Z > ν where li =. It reains to chec B3 and B4. In order to do that we need to consider separate cases for Z. Case : Z N0, Define Ψ L s = 2πγs e and ɛ =. s γs G p a. Recall that ν = γ + ξ 0 ɛ s G p a 54

62 2 ν 2 = γs G p + 2 γs G p γξ 0 ɛ s + γ 2 ξ 2 a a 0ɛ 2 s 2 = = 2 γs G p + 2γ 2 ξ 0 ɛ s 2 + G p γξ 0 ɛ s + γ 2 ξ 2 a a 0ɛ 2 s 2 2 γs G p + 2γ 2 ξ 0 + o a By noticing that PZ > ν e 2 2 γs G p +2γ a 2 ξ 0 +o 2π γs + o, B3 and B4 hold with Ψ L s = 2πγs e by Theore γs G p a and ψξ 0 = e γ2ξ0. Therefore li inf 2 γs 2 G p a 2πγs e PL > x. Case 2: Z Gaaα, β with α > Define Ψ L s = β Γα β α γ α s α e β γs G p a. Recall that ν = γ+ξ 0 ɛ s G p a, and ɛ =. Z Gaaα, β satisfies the following tail asyptotic: PZ > x s 2 β β α Γα xα e βx See the suppleentary proofs in section 2.8. Therefore B3 and B4 are satisfied with Ψ L s and ψξ 0 =. Therefore by Theore li inf β Γα β α γ α s α e β γs G p a PL > x. 55

63 Upper Bound Case : Z N0, 2 Define Ψ U s = 2πγs e γs 2 G p a. Recall that ɛ = s 2 and µ = γ ξ 0 ɛ s G p a. A siilar calculation as we did for the lower bound in 2.45 shows that A and A2 hold with Ψ U s = 2πγs e reains to show that: 2 2 γs G p a and ψξ 0 = e γ2ξ0. It li Ψ Us e θ l.uqɛ 0 qɛ ξ 0+D l 2 θ 2 = li e θ l.uq0 q ɛ ξ 0 +D l 2 θ = 0. 2 γs G p a The logarith of the quantity in question is θ l.uq ɛ 0 q ɛ ξ 0 + D l 2 θ = s 2 θ l.uq ɛ 0 q ɛ ξ 0 s 2 + D l 2 θ 2 s 2 2 γ G p s 2 + os 2 as γ G p + o as Define θ = s so that θ s 2 > M : q ɛ 0 q ɛ ξ 0 e 2 ξ 0 ɛ2 s 2 s 2 =. By 2.43, there exists M = M ξ 0 N such that for 2πss e ξ 0 s 2 2. Therefore for > M, θ s 2 q0 ɛ q ξ ɛ 0 θ e 2 ξ 0ɛ 2 s 2 s 2 s 2 2πss 56 e ξ 0s 2. 2

Machine Learning Basics: Estimators, Bias and Variance

Machine Learning Basics: Estimators, Bias and Variance Machine Learning Basics: Estiators, Bias and Variance Sargur N. srihari@cedar.buffalo.edu This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/cse676 1 Topics in Basics

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

Monte Carlo simulation is widely used to measure the credit risk in portfolios of loans, corporate bonds, and

Monte Carlo simulation is widely used to measure the credit risk in portfolios of loans, corporate bonds, and MANAGMNT SCINC Vol. 5, No., Noveber 005, pp. 643 656 issn 005-909 eissn 56-550 05 5 643 infors doi 0.87/nsc.050.045 005 INFORMS Iportance Sapling for Portfolio Credit Risk Paul Glasseran, Jingyi Li Colubia

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

STOPPING SIMULATED PATHS EARLY

STOPPING SIMULATED PATHS EARLY Proceedings of the 2 Winter Siulation Conference B.A.Peters,J.S.Sith,D.J.Medeiros,andM.W.Rohrer,eds. STOPPING SIMULATED PATHS EARLY Paul Glasseran Graduate School of Business Colubia University New Yor,

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Biostatistics Department Technical Report

Biostatistics Department Technical Report Biostatistics Departent Technical Report BST006-00 Estiation of Prevalence by Pool Screening With Equal Sized Pools and a egative Binoial Sapling Model Charles R. Katholi, Ph.D. Eeritus Professor Departent

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

Keywords: Estimator, Bias, Mean-squared error, normality, generalized Pareto distribution

Keywords: Estimator, Bias, Mean-squared error, normality, generalized Pareto distribution Testing approxiate norality of an estiator using the estiated MSE and bias with an application to the shape paraeter of the generalized Pareto distribution J. Martin van Zyl Abstract In this work the norality

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

arxiv: v2 [math.co] 3 Dec 2008
