CONCENTRATION FUNCTION AND SENSITIVITY TO THE PRIOR. Consiglio Nazionale delle Ricerche. Institute of Statistics and Decision Sciences
CONCENTRATION FUNCTION AND SENSITIVITY TO THE PRIOR

Sandra Fortini (1) and Fabrizio Ruggeri (1,2)

(1) Consiglio Nazionale delle Ricerche, Istituto per le Applicazioni della Matematica e dell'Informatica
(2) Institute of Statistics and Decision Sciences, Duke University

AMS (1991) subject classification: Primary 62F15; Secondary 62F35.
Key words: Concentration function; Bayesian robustness; ε-contaminations; Gini's concentration ratio; Pietra index.
Running head: Concentration and sensitivity
Abstract. In robust Bayesian analysis, posterior ranges of some functions of the unknown parameters are usually considered when the prior probability measure varies in a class Γ. Such functions (probability of given sets, mean, variance, etc.) describe the variation of just one aspect of the posterior measure. The concentration function may describe more globally the changes in the posterior probability measure, because it gives simultaneously both the minimum and the maximum posterior probabilities, for any measure in Γ, achievable by all the sets with the same probability under a base posterior. In this paper, the concentration function is used to study the sensitivity of the posterior measures when some classes Γ of ε-contamination priors are considered. Besides, two well-known concentration indices are employed as a measure of the robustness and, finally, the results are applied to some examples.

1. Introduction. As described in Berger (1984, 1985, 1990, 1994), the robust Bayesian approach usually deals with the uncertainty in specifying just one prior probability measure. Sometimes, a class Γ of probability measures seems to be the most plausible result of an elicitation process (e.g. it might consist of all the priors elicited by some experts). The class Γ should be easily specified, large enough to contain any plausible prior and small enough to exclude unnatural priors, and should involve relatively simple computations. Posterior ranges of functions of the parameters (e.g. set probability, mean, etc.) are usually considered, as the prior measure varies in Γ. Small ranges mean that the inference (or decision) is not actually affected by a particular choice in Γ.

As pointed out in Fortini and Ruggeri (1994a), a different robustness problem might be faced by means of the concentration function (c.f.), comparing the functional forms of any posterior probability measure, from a class Γ of priors, with a base posterior. The c.f., as defined by Cifarelli and Regazzini (1987), extends the notion of Lorenz curve and allows finding the range of the probabilities, under any such posterior, of all the sets with equal probability x under the base posterior. Robust analyses can be performed considering the largest range, for any x in [0,1], as the prior varies in Γ. The width of such intervals, expressed by means of the c.f., gives the distance between the posteriors and the base one. Thus, it is worth computing the posterior c.f. for relevant classes of priors, and the paper provides a simple, useful expression for the ε-contamination one; besides, the paper gives conditions under which the maximum range is easily found. Finally, another comparison is made possible by considering the concentration indices given in Cifarelli and Regazzini (1987).

The paper complements the work in Fortini and Ruggeri (1994a), where the c.f. was mainly used to define neighbourhoods of probability measures and to compute bounds on posterior quantities over such classes. Here we develop their suggestion of considering a fixed continuous, convex, monotone nondecreasing function which should lie below any posterior c.f. in order to ensure robustness. The results are then applied to some examples, including a model for the magnitudes of the major earthquakes in Lazio, an Italian region.

2. Concentration function. Cifarelli and Regazzini (1987) defined the c.f. as a generalisation of the Lorenz curve, described in Marshall and Olkin (1979, pag. 5). The classical definition of concentration refers to the discrepancy between a discrete probability measure and a uniform one, say Π and Π₀, and allows for the comparison
of the two probability measures, looking for subsets where Π is much more concentrated than Π₀ (and vice versa). Cifarelli and Regazzini (1987) defined and studied the c.f. of Π with respect to (w.r.t.) Π₀, where Π and Π₀ are two probability measures on the same measurable space (Θ, F). According to the Radon-Nikodym theorem, there is a partition {N, N^C} ⊂ F of Θ and a nonnegative function h on N^C such that, for all E ∈ F,

    Π(E) = ∫_{E∩N^C} h(θ) Π₀(dθ) + Π_s(E ∩ N),    Π₀(N) = 0,

where Π_a(E) = ∫_{E∩N^C} h(θ) Π₀(dθ) and Π_s denote, respectively, the absolutely continuous and the singular part of Π w.r.t. Π₀. Set h(θ) = +∞ all over N and define

    H(y) = Π₀({θ : h(θ) ≤ y}),    c_x = inf{y ∈ ℝ : H(y) ≥ x}.

Finally, let L_x = {θ : h(θ) ≤ c_x} and L_x⁻ = {θ : h(θ) < c_x}.

Definition 1. The function φ : [0,1] → [0,1] is said to be the concentration function of Π w.r.t. Π₀ if φ(x) = Π(L_x⁻) + c_x {x − H(c_x⁻)} for x ∈ (0,1), φ(0) = 0 and φ(1) = Π_a(Θ).

Observe that φ(x) is a nondecreasing, continuous and convex function, such that φ(x) ≤ x for all x ∈ [0,1], with φ(x) = x for all x if and only if Π = Π₀, and

    φ(x) = ∫₀^{c_x} [x − H(t)] dt = ∫₀^x c_t dt.    (1)

To better understand the behaviour of the c.f., two situations are described: φ(1) = 1 denotes that Π is absolutely continuous w.r.t. Π₀, while φ(x) = 0 for all x < 1 means that Π gives all its mass to a subset A ∈ F such that Π₀(A) = 0. To show how to practically draw the c.f. when Π and Π₀ are absolutely continuous w.r.t. the Lebesgue measure, consider, as an example, Θ = ℝ⁺, the Borel σ-algebra F on it, Π = G(2,2) and Π₀ = E(1); then it follows that the Radon-Nikodym derivative is h(θ) = 4θe^{−θ}.
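The curve itself can be traced numerically from h alone. The sketch below (illustrative code, not from the paper) uses the closed-form E(1) and G(2,2) probabilities of the sets L_q = {θ : h(θ) ≤ q}, which here are unions [0,a] ∪ [b,∞) of two intervals because h increases up to θ = 1 and then decreases:

```python
import math

def h(t):
    # Radon-Nikodym derivative of G(2,2) w.r.t. E(1): (4 t e^{-2t}) / e^{-t}
    return 4.0 * t * math.exp(-t)

def bisect(f, lo, hi, it=100):
    # plain bisection; assumes f changes sign on [lo, hi]
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def cf_point(q):
    """Return (x, phi) with x = P0(L_q) and phi = P(L_q), L_q = {h <= q}."""
    a = bisect(lambda t: h(t) - q, 0.0, 1.0)     # left solution of h = q
    b = bisect(lambda t: h(t) - q, 1.0, 80.0)    # right solution of h = q
    x = (1.0 - math.exp(-a)) + math.exp(-b)                       # E(1) mass
    phi = (1.0 - math.exp(-2 * a) * (1 + 2 * a)) \
        + math.exp(-2 * b) * (1 + 2 * b)                          # G(2,2) mass
    return x, phi

qmax = 4.0 / math.e                              # maximum of h, attained at t = 1
points = [cf_point(s * qmax) for s in (0.05, 0.2, 0.4, 0.6, 0.8, 0.95, 0.999)]
```

Plotting the resulting pairs gives a curve of the kind shown in Fig. 1; since Π is absolutely continuous w.r.t. Π₀ here, the points approach (1, 1).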
The c.f. φ(x) is then obtained by evaluating x = Π₀(L_q) and φ(x) = Π(L_q), where L_q = {θ : h(θ) ≤ q} and q takes as many values as needed to draw the c.f. within the required accuracy.

Probability measures can be compared considering a partial ordering, induced by the c.f., in the space P of all probability measures.

Definition 2. Let φ₁, φ₂ denote the c.f.'s of Π₁ and Π₂ w.r.t. Π₀; if φ₂(x) ≤ φ₁(x) for all x ∈ [0,1], we say that Π₂ is not less concentrated than Π₁ w.r.t. Π₀. Afterwards we will denote it by Π₁ ≼ Π₂.

A total ordering, consistent with the previous partial one, is achieved (Cifarelli and Regazzini, 1987) when considering

    C(Π) = sup_{x∈[0,1]} {x − φ(x)}    and    G(Π) = 2 ∫₀¹ {x − φ(x)} dx,

which are, respectively, an index proposed by Pietra (1915), which equals half the total variation norm, as proved by Cifarelli and Regazzini (1987), and the well-known Gini's area of concentration (Gini, 1914). Observe that

    C(Π) = 0 ⟺ G(Π) = 0 ⟺ Π = Π₀;    C(Π) = 1 ⟺ G(Π) = 1 ⟺ Π ⊥ Π₀.

The following Theorem, proved in Cifarelli and Regazzini (1987), states that φ(x) substantially coincides with the minimum value of Π on the measurable subsets of Θ with Π₀-measure not smaller than x.

Theorem 1. If A ∈ F, Π₀(A) ≥ x, then φ(x) ≤ Π_a(A). Moreover, if x ∈ [0,1] is adherent to the range of H, then B_x exists such that Π₀(B_x) ≥ x and

    φ(x) = Π_a(B_x) = min{Π(A) : A ∈ F and Π₀(A) ≥ x}.    (2)
If Π₀ is nonatomic, then (2) holds for any x ∈ [0,1].

Such a Theorem is relevant in applying the c.f. to robust Bayesian analysis; in fact, given any x ∈ [0,1], the probability, under Π, of all the subsets A with Π₀-measure x is such that

    φ(x) ≤ Π(A) ≤ 1 − φ(1 − x).    (3)

Two examples are presented to show how the c.f., far from substituting the other usual functionals (e.g. the mean), furnishes different information about the probability measures. Comparisons among probability measures are sometimes made through their moments; it is well known that different measures could be found such that they share the first n moments, while their difference could be detected by the c.f. As another example, consider two measures concentrated on disjoint but very close sets, say E₁ and E₂. In this case, the c.f. shows the difference between them, which is large on the subsets which contain just one E_i, i = 1, 2, although their means are very close.

Finally, for completeness, it is worth mentioning all the other papers on the c.f., mainly applied to Bayesian robustness, which appeared after the works by Cifarelli and Regazzini: Fortini and Ruggeri (1992, 1993, 1994b), Ruggeri and Wasserman (1991), Rotondi, Ruggeri and Vercesi (1992), Carota and Ruggeri (1993) and Ruggeri (1993a).

3. Statement of the problem. In this paper, a class of posterior probability measures, corresponding to a class Γ of priors, is compared to a base posterior measure by means of their functional forms. Instead of looking at some
summarising quantity, like the mean, the concentration of the probability all over the parameter space is considered. The c.f. is the natural tool to perform such a task, because it detects when a measure is more concentrated than another one. As mentioned in Fortini and Ruggeri (1994a), it is possible to specify a continuous, convex, nondecreasing function such that all the c.f.'s of the posterior measures w.r.t. the base one have to lie above it in order to have robustness.

In this paper, following a suggestion in Regazzini (1992), we consider the sensitivity to changes in the prior, looking for the lowest among all the c.f.'s of the posterior probability measures corresponding to some fixed class of priors. If the lowest c.f. does not exist, then the pointwise infimum is considered. Besides, the robustness could be measured considering the distance of such a function from the line connecting (0,0) and (1,1) by means of some indices, due to Gini and Pietra, and related to the c.f., as described in Cifarelli and Regazzini (1987).

More formally, let X be a random variable on a dominated statistical space denoted by (𝒳, F_X, {P_θ : θ ∈ Θ}); given a sample x̃ from X, the experimental evidence about θ will be expressed by the likelihood function l_x̃(θ), which we assume F_X ⊗ F-measurable. Let P denote the space of the probability measures on the parameter space (Θ, F). Given a prior Π ∈ P, the posterior measure is defined by Π*(A) = ∫_A l_x̃(θ) Π(dθ)/m, where m = ∫_Θ l_x̃(θ) Π(dθ) and A ∈ F.

According to the robust Bayesian viewpoint, a class Γ of probability measures has to be considered, instead of eliciting just one prior. Suppose that there exists a base prior Π₀, like in the ε-contamination class, and consider the class of the c.f.'s φ_Π of Π*, Π ∈ Γ, w.r.t. Π₀*. Because of Theorem 1 and (3), it follows, for any Π ∈ Γ and any A with Π₀*(A) = x,
that

    φ̂(x) ≤ Π*(A) ≤ 1 − φ̂(1 − x),

where φ̂(x) = inf_{Π∈Γ} φ_Π(x), for any x ∈ [0,1]. The interpretation of φ̂, in terms of Bayesian robustness, is straightforward: the closer φ̂(x) and 1 − φ̂(1 − x) are for all x ∈ [0,1], the smaller the difference among the functional forms of the posterior measures is. In particular, following a suggestion in Fortini and Ruggeri (1994a), it should be checked that φ̂(x) ≥ g(x) for all x ∈ [0,1], where g(x) represents the maximum allowed discrepancy in order to ensure robustness. As an example, it could be g(x) = x², corresponding to the request

    sup_{A∈F : Π₀*(A)=x} |Π*(A) − Π₀*(A)| ≤ Π₀*(A)(1 − Π₀*(A)).

The computation of φ̂ is quite simple when there exists Π̂ ∈ Γ such that φ̂ ≡ φ_Π̂, and this paper gives conditions ensuring its existence for some classes of priors; otherwise its computation becomes a harder task, which can be solved numerically. Besides, in this paper the comparison among probability measures is also made possible through the concentration indices, considering

    C = sup_{Π∈Γ} C(Π*)    and    G = sup_{Π∈Γ} G(Π*).

The c.f. is well suited to satisfy the need, pointed out by Wasserman (1992), of graphically summarising the results of a robust Bayesian analysis. In fact, the plot of φ̂(x) and 1 − φ̂(1 − x), like in Fig. 1, gives a very intuitive description of the discrepancy between the base prior and the other measures. In Fig. 1, φ(x) denotes the c.f. of G(2,2) with respect to E(1) and it is shown, as an example, that [0.216, 0.559] is the range spanned by the probability, under Π, of the sets A with Π₀(A) = 0.4.
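The two indices are easy to compute once the c.f. is known; in particular, the identity between the Pietra index and half the total variation norm can be checked on a small discrete example (illustrative; the atoms and masses below are arbitrary choices, not from the paper):

```python
# For discrete measures the c.f. is piecewise linear: cumulate (P0, P) masses
# in increasing order of the likelihood ratio h_i = p_i / p0_i.
p0 = [0.1, 0.2, 0.3, 0.4]        # base measure P0 on four atoms
p  = [0.3, 0.1, 0.4, 0.2]        # measure P to be compared with P0

order = sorted(range(4), key=lambda i: p[i] / p0[i])
xs, fs, cx, cf = [0.0], [0.0], 0.0, 0.0
for i in order:
    cx += p0[i]
    cf += p[i]
    xs.append(cx)
    fs.append(cf)

# x - phi(x) is linear between vertices, so its sup is attained at a vertex.
pietra = max(x - f for x, f in zip(xs, fs))
half_tv = 0.5 * sum(abs(a - b) for a, b in zip(p, p0))
```

For these masses both quantities equal 0.3.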
4. Analytic results. The c.f. and its derivatives are computed when the base prior is contaminated, by considering the class Γ_ε of the ε-contaminated priors.

Definition 3. Let Π₀ be a fixed prior probability measure and let ε ∈ [0,1]. The class Γ_ε = {Π_Q = (1−ε)Π₀ + εQ, Q ∈ 𝒬}, where 𝒬 ⊆ P, is said to be an ε-contamination class of priors.

Some classes 𝒬 have been proposed and their properties are discussed in Berger (1990). In this paper, we consider the class 𝒬_A of all the probability measures and, if Π₀ is unimodal, the class 𝒬_U of all unimodal probability measures with the same mode as Π₀.

Proposition 1. Let Π₀ and Q be probability measures on (Θ, F); for any a ∈ [0,1], define Π = (1−a)Π₀ + aQ. If φ and φ_Q are the c.f.'s of Π and Q with respect to Π₀, respectively, then

    φ(x) = (1−a)x + a φ_Q(x).

Let φ'₊(0) and φ'₋(1) be, respectively, the right-hand derivative of φ at 0 and the left-hand derivative at 1; then it follows that

    φ'₊(0) = (1−a) + a inf_θ ĥ(θ)    and    φ'₋(1) = (1−a) + a sup_θ ĥ(θ),

where ĥ is the Radon-Nikodym derivative of Q with respect to Π₀.

PROOF. The relation between φ and φ_Q follows quite easily from the definition of c.f. As pointed out by Cifarelli and Regazzini (1987), the left-hand derivative can be computed because (1) implies that φ'₋(1) = c₁. In the same way, φ'₊(0) is computed, observing that c_t is right continuous at the origin.
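The mixture relation in Proposition 1 can be sanity-checked on a finite example, where each c.f. is the piecewise-linear curve obtained by cumulating masses in increasing order of the likelihood ratio (the masses below are arbitrary illustrative choices):

```python
a = 0.3                                   # contamination weight
p0  = [0.25, 0.25, 0.25, 0.25]            # base measure P0
q   = [0.10, 0.40, 0.30, 0.20]            # contaminant Q
mix = [(1 - a) * u + a * v for u, v in zip(p0, q)]   # P = (1-a)P0 + aQ

def cf_vertices(p, base):
    # vertices of the c.f. of p w.r.t. base
    order = sorted(range(len(p)), key=lambda i: p[i] / base[i])
    xs, fs, cx, cf = [0.0], [0.0], 0.0, 0.0
    for i in order:
        cx += base[i]
        cf += p[i]
        xs.append(cx)
        fs.append(cf)
    return xs, fs

xs_q, fs_q = cf_vertices(q, p0)
xs_m, fs_m = cf_vertices(mix, p0)
# mix_i/p0_i = (1-a) + a q_i/p0_i is increasing in q_i/p0_i, so the two
# orderings coincide and phi(x) = (1-a)x + a phi_Q(x) at every vertex.
```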
Proposition 2. Consider Π_Q = (1−ε)Π₀ + εQ, so that

    Π*_Q = [(1−ε)D₀ / ((1−ε)D₀ + εD_Q)] Π₀* + [εD_Q / ((1−ε)D₀ + εD_Q)] Q*,

with

    D₀ = ∫_Θ l_x̃(θ) Π₀(dθ)    and    D_Q = ∫_Θ l_x̃(θ) Q(dθ).    (4)

Let φ and φ_{Q*} denote the c.f.'s of Π*_Q and Q* with respect to Π₀*, respectively, so that

    φ(x) = [(1−ε)D₀ / ((1−ε)D₀ + εD_Q)] x + [εD_Q / ((1−ε)D₀ + εD_Q)] φ_{Q*}(x).

Moreover, it follows that

    φ'₊(0) = [(1−ε)D₀ + εD_Q inf_θ ĥ(θ)] / [(1−ε)D₀ + εD_Q] = D₀ [(1−ε) + ε inf_θ h(θ)] / [(1−ε)D₀ + εD_Q],

    φ'₋(1) = [(1−ε)D₀ + εD_Q sup_θ ĥ(θ)] / [(1−ε)D₀ + εD_Q] = D₀ [(1−ε) + ε sup_θ h(θ)] / [(1−ε)D₀ + εD_Q],

where ĥ and h are, respectively, the Radon-Nikodym derivatives of Q* with respect to Π₀* and of Q with respect to Π₀.

PROOF. The expression for Π*_Q is well known (see Berger, 1985, p. 6). The other results follow from Proposition 1 and the fact that ĥ(θ) D_Q = h(θ) D₀, because Q*(dθ) = l_x̃(θ)Q(dθ)/D_Q, Π₀*(dθ) = l_x̃(θ)Π₀(dθ)/D₀ and h(θ)Π₀(dθ) = Q(dθ).

The previous Propositions are now applied to the robustness analysis. The family of all posterior c.f.'s could be considered to get the pointwise infimum of the c.f.'s, as shown in Example 3. Besides, Example 4 will show how to assess robustness by means of a function lying under all the c.f.'s. We prefer now to end this Section by presenting some conditions ensuring the existence of the lowest c.f. After fixing a nonatomic prior Π₀, let 𝒬 be a class of contaminating probability measures.
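The decomposition (4) of the posterior is easy to verify numerically. In the sketch below (illustrative; the binomial-type likelihood and the two prior densities are arbitrary choices, not from the paper), the posterior of the contaminated prior coincides with the stated mixture of Π₀* and Q*:

```python
eps = 0.2
grid = [(i + 0.5) / 100 for i in range(100)]     # theta-grid on (0,1)
lik  = [t ** 2 * (1 - t) for t in grid]          # e.g. 2 successes in 3 trials
pi0  = [2 * t for t in grid]                     # base prior density 2*theta
qc   = [3 * t * t for t in grid]                 # contaminant density 3*theta^2

D0 = sum(l * p for l, p in zip(lik, pi0)) / 100  # grid approximation of D0
DQ = sum(l * p for l, p in zip(lik, qc)) / 100   # grid approximation of D_Q

def normalize(v):
    s = sum(v)
    return [u / s for u in v]

# posterior of the contaminated prior (1-eps)*Pi0 + eps*Q ...
post_mix = normalize([l * ((1 - eps) * p + eps * r)
                      for l, p, r in zip(lik, pi0, qc)])
# ... against the mixture of the two separate posteriors with weights (4)
w = (1 - eps) * D0 / ((1 - eps) * D0 + eps * DQ)
post_formula = [w * u + (1 - w) * v for u, v in
                zip(normalize([l * p for l, p in zip(lik, pi0)]),
                    normalize([l * r for l, r in zip(lik, qc)]))]
```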
Proposition 3. Suppose there exists θ̂ such that Q_θ̂ ∈ 𝒬, where Q_θ̂ assigns probability one to θ̂. Given Q ∈ 𝒬 such that

    ∫_Θ l_x̃(θ) Q(dθ) ≤ l_x̃(θ̂),    (5)

then Π*_Q ≼ Π*_{Q_θ̂}.

PROOF. Proposition 2 and (1) imply that φ_{Q_θ̂}(x) = [(1−ε)D₀ / ((1−ε)D₀ + ε l_x̃(θ̂))] x. Moreover, since l_x̃(θ̂) ≥ D_Q,

    φ_{Q_θ̂}(x) ≤ [(1−ε)D₀ / ((1−ε)D₀ + εD_Q)] x ≤ φ_Q(x).

Condition (5) may be actually achieved, as shown in the examples of Section 5. The next Propositions follow immediately from Proposition 3.

Proposition 4. Suppose that θ̂ is the maximum likelihood estimator of θ, given l_x̃(θ). If Q_θ̂ ∈ 𝒬, then Π*_Q ≼ Π*_{Q_θ̂} for any Q ∈ 𝒬. In particular, this result holds if 𝒬 = 𝒬_A.

Proposition 5. Suppose that Π₀ is unimodal with mode θ₀. If Q_{θ₀} ∈ 𝒬_U and

    sup_{Q∈𝒬_U} ∫_Θ l_x̃(θ) Q(dθ) ≤ l_x̃(θ₀),

then Π*_Q ≼ Π*_{Q_{θ₀}} for any Q ∈ 𝒬_U.

Another result is consequent upon Propositions 4 and 5.

Proposition 6. Under the same hypotheses as Proposition 3, if (5) holds for any Q ∈ 𝒬, then

    sup_{Q∈𝒬} {Π₀*(A) − Π*_Q(A)} ≤ Π₀*(A) [1 + ((1−ε)/ε) D₀/l_x̃(θ̂)]^{−1},    ∀A ∈ F.

Moreover,

    C = G = [1 + ((1−ε)/ε) D₀/l_x̃(θ̂)]^{−1}.

It follows from this Proposition that, if (5) is fulfilled by any Q in 𝒬, then there exists a contaminating measure Q_θ̂ which is the worst with regard to the
robustness, leading to the lowest c.f. The same is not true if (5) does not hold, making the analysis more complex.

Suppose that Π₀ is dominated by a measure λ and let π₀ denote the Radon-Nikodym derivative of Π₀ with respect to λ; assume that Θ is the support of Π₀.

Proposition 7. Let Π₀ be unimodal with mode θ₀ and let 𝒬 = 𝒬_U. Let h_Q denote the Radon-Nikodym derivative of Q w.r.t. Π₀, and let inf_θ h_Q(θ) = 0 hold for any Q ∈ 𝒬. If there exists Q̄ ∈ 𝒬 such that

    ∫_Θ l_x̃(θ) Q̄(dθ) > l_x̃(θ₀),    (6)

then there is no Q̂ ∈ 𝒬 such that φ_{Q̂}(x) ≤ φ_Q(x) for any x ∈ [0,1] and any Q ∈ 𝒬.

PROOF. Let Q_{θ₀} be the probability measure which gives probability one to θ₀. Since Π_{Q̄} is a convex combination of measures absolutely continuous with respect to Π₀, while Π_{Q_{θ₀}} is not, it follows that φ_{Q_{θ₀}}(1) = (Π*_{Q_{θ₀}})_a(Θ) < (Π*_{Q̄})_a(Θ) = φ_{Q̄}(1). From (6) and Proposition 2, it follows that φ'_{Q̄,+}(0) < φ'_{Q_{θ₀},+}(0). Therefore, since the c.f. is continuous, there is a neighbourhood I(0) of the origin such that φ_{Q̄}(x) < φ_{Q_{θ₀}}(x) for any x ∈ I(0). The thesis follows immediately.

If Π₀ is unimodal and 𝒬 = 𝒬_U, then the behaviour of any contaminating measure, not covered by Propositions 4 and 7, depends on the value of ε.

Proposition 8. Let Π₀ be unimodal with mode θ₀ and let 𝒬 = 𝒬_U. Let h_Q denote the Radon-Nikodym derivative of Q w.r.t. Π₀ and let Q_{θ₀} be the probability measure which gives probability one to θ₀. If there exists Q ∈ 𝒬, with density q, such that inf_θ h_Q(θ) > 0 and ∫_Θ l_x̃(θ) Q(dθ) > l_x̃(θ₀), then Π*_Q ≼ Π*_{Q_{θ₀}} if and only if

    inf_θ q(θ)/π₀(θ) > [D_Q − l_x̃(θ₀)] / [D₀ + (ε/(1−ε)) l_x̃(θ₀)].    (7)
PROOF. The result follows from Proposition 2, by observing that (7) implies that φ'_{Q,+}(0) > φ'_{Q_{θ₀},+}(0); since φ_{Q_{θ₀}} is linear while, by convexity, φ_Q(x) ≥ φ'_{Q,+}(0) x, the comparison at the origin extends to the whole of [0,1].

As shown in the proofs of Propositions 7 and 8, the right-hand derivative of φ(x) at x = 0 is important in proving the existence of c.f.'s lying under the others just in a neighbourhood of the origin. Furthermore, the left-hand derivative at x = 1 plays a similar role. It is worth observing that such derivatives allow the approximation of the lower and upper bounds on the probability of the subsets A such that Π₀*(A) is sufficiently close to 0. In fact, the minimisation of φ'₊(0) (maximisation of φ'₋(1)) determines the contaminated probability measure whose c.f. lies under all the others in a neighbourhood of x = 0 (x = 1).

5. Numerical examples. The c.f.'s are used in analysing four examples, in which the results shown in Section 4 are applied.

Example 1. Assume that X ~ N(θ, σ²), σ² known, and Π₀ = N(μ₀, τ²), μ₀ and τ² known, with density function π₀(θ). Let 𝒬 be the class of the probability measures which are either Q_k = U(μ₀ − k, μ₀ + k), k > 0, or Q_{μ₀}, which assigns probability one to the point μ₀. Define Π_k = (1−ε)Π₀ + εQ_k. Given a sample x̃ from X, the likelihood function is l_x̃(θ) = e^{−(θ−x̃)²/(2σ²)}/√(2πσ²). The density of Q_k, k > 0, is given by q_k(θ) = (1/(2k)) I_{[μ₀−k, μ₀+k]}(θ), where I_A is the indicator function of the set A. (Note that Q_k converges in distribution to Q_{μ₀} as k → 0.) According to the definitions given in (4), it follows that

    D₀ = e^{−(μ₀−x̃)²/(2(σ²+τ²))}/√(2π(σ²+τ²))    and    D_{Q_k} = (1/(2k)) ∫_{μ₀−k}^{μ₀+k} l_x̃(θ) dθ.
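The comparison between D_{Q_k} and l_x̃(μ₀), which drives the next two Results, is easy to explore numerically (an illustrative sketch assuming μ₀ = 0 and σ = τ = 1; Φ denotes the standard normal c.d.f.):

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lik(theta, x_obs):
    # N(theta, 1) likelihood evaluated at the observed sample
    return math.exp(-0.5 * (theta - x_obs) ** 2) / SQRT2PI

def D_Q(k, x_obs):
    # integral of the likelihood against Q_k = U(-k, k)
    return (Phi(x_obs + k) - Phi(x_obs - k)) / (2.0 * k)

# |x - mu0| <= sigma: D_Qk stays below l(mu0) for every k
inside = all(D_Q(k, 0.5) <= lik(0.0, 0.5) + 1e-12
             for k in (0.1, 0.5, 1.0, 2.0, 5.0))
# |x - mu0| > sigma: a small k already pushes D_Qk above l(mu0)
outside = D_Q(0.1, 1.5) > lik(0.0, 1.5)
```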
Result 1. Given a sample x̃, D_{Q_k} ≤ l_x̃(μ₀) for any k > 0 if and only if |x̃ − μ₀| ≤ σ.

PROOF. It holds that lim_{k→0⁺} D_{Q_k} = l_x̃(μ₀) and lim_{k→+∞} D_{Q_k} = 0. Set h = k/σ and v = |x̃ − μ₀|/σ. Then the derivative of D_{Q_k} w.r.t. h exists and equals (1/(2σ√(2π))) f(h)/h², where

    f(h) = −∫_{v−h}^{v+h} e^{−t²/2} dt + h (e^{−(v−h)²/2} + e^{−(v+h)²/2}),

with lim_{h→0⁺} f(h) = 0 and f'(h) = h[(v−h)e^{−(v−h)²/2} − (v+h)e^{−(v+h)²/2}]. For any h > 0, f'(h) < 0 if and only if g(h) = (v−h)e^{vh} − (v+h)e^{−vh} < 0. Such a condition is obviously satisfied if h ≥ v, so that D_{Q_k} ≤ l_x̃(μ₀) for any k ≥ |x̃ − μ₀|. Otherwise, suppose that v ≤ 1; then

    g(h) = 2 Σ_{j≥0} [v² − (2j+1)] v^{2j} h^{2j+1}/(2j+1)! < 0.

It follows that D_{Q_k} is decreasing in h, so that D_{Q_k} ≤ l_x̃(μ₀) for any k > 0 if v ≤ 1. Suppose now that v > 1; then f(h) ≈ (2/3) h³ (v² − 1) e^{−v²/2} > 0 as h → 0⁺, so that D_{Q_k} > l_x̃(μ₀) for any k close enough to 0.

Result 2. Given a sample x̃, the c.f. φ_{μ₀} of Π*_{μ₀} w.r.t. Π₀* is such that φ_{μ₀}(x) ≤ φ_k(x), for any x ∈ [0,1] and any c.f. φ_k of Π*_k w.r.t. Π₀*, if and only if |x̃ − μ₀| ≤ σ. Given such a sample x̃, it follows that

    C = G = { 1 + ((1−ε)/ε) √(σ²/(σ²+τ²)) exp[(x̃−μ₀)²τ²/(2σ²(σ²+τ²))] }^{−1}.

Otherwise, there is no measure in the class whose c.f. lies under all the c.f.'s φ_k, for any x ∈ [0,1].

PROOF. Combining the previous Result and Proposition 5, it is immediate to prove that φ_{μ₀} lies under any other c.f. φ_k if |x̃ − μ₀| ≤ σ. The value of the
indices follows from Proposition 6. Otherwise, define h_k(θ) = q_k(θ|x̃)/π₀(θ|x̃) = [q_k(θ)/π₀(θ)] D₀/D_{Q_k}; then inf_θ h_k(θ) = 0 for any k. Given a sample x̃ with |x̃ − μ₀| > σ, the previous Result implies that there exist infinitely many values k > 0 such that D_{Q_k} > l_x̃(μ₀). From Proposition 7, the corresponding c.f.'s φ_k lie under φ_{μ₀} for some values x ∈ (0, x_k), with 0 < x_k < 1. Since Q_{μ₀} is the only measure in 𝒬 which is singular w.r.t. Π₀, it results that φ_k(1) > φ_{μ₀}(1), so that no measure in the class has c.f. lying under all the others.

Take μ₀ = 0 and σ² = τ² = 1; then it follows that

    C = G = { 1 + ((1−ε)/(√2 ε)) exp(x̃²/4) }^{−1}.

For any |x̃| ≤ 1, it can be easily seen that the indices are small (large) when ε is small (large), so that major contaminations lead to nonrobust situations, as intuitively expected. As an example, suppose x̃ = 0.5, so that C = 0.15 if ε = 0.1 and C = 0.69 if ε = 0.5.

Example 2. Assume that X ~ E(θ) and Π₀ = G(1+α, 1), α > 0, with density function π₀(θ). Let 𝒬 be the class of the probability measures which are either Q_λ = G(1+αλ, λ), λ > 0, or Q_∞, which assigns probability one to the point α. Define, for any λ > 0, Π_λ = (1−ε)Π₀ + εQ_λ. Given a sample x̃ from X, the likelihood function is l_x̃(θ) = θe^{−θx̃}. The measures Q_λ are unimodal, with the same mode α as Π₀, and their densities, for λ > 0, are given by

    q_λ(θ) = λ^{1+αλ} θ^{αλ} e^{−λθ}/Γ(1+αλ).

(Note that q₁ ≡ π₀; furthermore, Q_λ converges in distribution to Q_∞ as λ → +∞.) According to the definitions given in (4), it follows that
    D₀ = (1+α)/(1+x̃)^{2+α}    and    D_{Q_λ} = λ^{1+αλ}(1+αλ)/(λ+x̃)^{2+αλ}.

Result 3. Given a sample x̃, D_{Q_λ} ≤ l_x̃(α) for any λ > 0 if and only if 2−√2 ≤ αx̃ ≤ 2+√2.

PROOF. Set y = αx̃ and ω = αλ. Then D_{Q_λ} ≤ l_x̃(α) for all λ > 0 if and only if, for all ω > 0,

    f(ω) = y + (1+ω) log ω + log(1+ω) − (2+ω) log(ω+y) ≤ 0.

Note that lim_{ω→0⁺} f(ω) = −∞ and lim_{ω→+∞} f(ω) = 0, with f(ω) ~ (y² − 4y + 2)/(2ω) as ω → +∞. The values y ∉ [2−√2, 2+√2] must therefore be excluded, because for them f(ω) > 0 definitely as ω → +∞, so that D_{Q_λ} > l_x̃(α) for λ large enough. For any value y ∈ [2−√2, 2+√2], a study of the sign of f'(ω) shows that f(ω) < 0 for any ω > 0.

Result 4. Given a sample x̃, the c.f. φ_∞ of Π*_∞ w.r.t. Π₀* is such that φ_∞(x) ≤ φ_λ(x), for any x ∈ [0,1] and any c.f. φ_λ of Π*_λ w.r.t. Π₀*, if and only if 2−√2 ≤ αx̃ ≤ 2+√2. Given such a sample x̃, it follows that

    C = G = { 1 + ((1−ε)/ε) (1+α) e^{αx̃}/[α(1+x̃)^{2+α}] }^{−1}.

Otherwise, there is no measure in the class whose c.f. lies under all the c.f.'s φ_λ, for any x ∈ [0,1].

PROOF. Combining the previous Result and Proposition 5, it is immediate to prove that φ_∞ lies under any other c.f. φ_λ if 2−√2 ≤ αx̃ ≤ 2+√2. The value of the indices follows from Proposition 6.
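The index formula can be cross-checked numerically (an illustrative sketch, taking α = 1 as in the closing remark of this Example):

```python
import math

def index_CG(eps, x_obs, alpha=1.0):
    # C = G = {1 + ((1-eps)/eps) * D0 / l(alpha)}^{-1}, with
    # D0 / l(alpha) = (1+alpha) e^{alpha*x} / (alpha (1+x)^{2+alpha})
    ratio = (1 + alpha) * math.exp(alpha * x_obs) \
        / (alpha * (1 + x_obs) ** (2 + alpha))
    return 1.0 / (1.0 + (1.0 - eps) / eps * ratio)

c_half = index_CG(0.5, 3.0)      # sample x = 3, eps = 0.5
```

With x̃ = 3 and ε = 0.5 this evaluates to about 0.6144, in agreement with the value quoted at the end of the Example.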
Otherwise, define h_λ(θ) = q_λ(θ|x̃)/π₀(θ|x̃) = [q_λ(θ)/π₀(θ)] D₀/D_{Q_λ}. Excluding the trivial case λ = 1, for which φ₁(x) = x for any x ∈ [0,1], it may be proved that inf_θ h_λ(θ) = 0 for λ > 1, while inf_θ h_λ(θ) > 0 for 0 < λ < 1 (the minimum of q_λ(θ)/π₀(θ) being attained at θ = α). Given a sample x̃ with αx̃ ∉ [2−√2, 2+√2], the previous Result implies that there exist infinitely many values λ > 1 such that D_{Q_λ} > l_x̃(α). From Proposition 7, the corresponding c.f.'s φ_λ lie under φ_∞ for some values x ∈ (0, x_λ), with 0 < x_λ < 1. Finally, φ_λ(1) > φ_∞(1) implies that no measure in the class has c.f. lying under all the others.

Take α = 1; then it follows that

    C = G = { 1 + ((1−ε)/ε) 2e^{x̃}/(1+x̃)³ }^{−1}.

For any x̃ such that 2−√2 ≤ x̃ ≤ 2+√2, it can be easily seen that the indices are small (large) when ε is small (large), so that major contaminations lead to nonrobust situations, as intuitively expected. As an example, suppose x̃ = 3, so that C = 0.16 if ε = 0.1 and C = 0.6144 if ε = 0.5.

Example 3. Assume that X ~ b(1,θ), i.e. a Bernoulli random variable, and that Π₀ has density function

    π₀(θ) = 4θ for 0 ≤ θ ≤ 1/2,    π₀(θ) = 4(1−θ) for 1/2 < θ ≤ 1.

Consider the class of probability measures 𝒬 = {Q₁, Q₂}, where Q₁ = U(0,1), with density q₁(θ), and Q₂ = U(0,1/2), with density q₂(θ).
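Before deriving closed forms for the posterior c.f.'s of this two-point class, they can be traced by brute force on a grid (an illustrative sketch with ε = 0.1; the Pietra index of each curve is then read off its vertices):

```python
eps, n = 0.1, 20000
grid = [(i + 0.5) / n for i in range(n)]            # theta-grid on (0,1)

def pi0(t):
    return 4 * t if t <= 0.5 else 4 * (1 - t)       # triangular base prior

def q1(t):
    return 1.0                                      # U(0,1)

def q2(t):
    return 2.0 if t <= 0.5 else 0.0                 # U(0,1/2)

def posterior(prior):
    w = [t * prior(t) for t in grid]                # likelihood l(theta) = theta
    s = sum(w)
    return [v / s for v in w]

base = posterior(pi0)

def cf_vertices(contam):
    post = posterior(lambda t: (1 - eps) * pi0(t) + eps * contam(t))
    order = sorted(range(n), key=lambda i: post[i] / base[i])
    xs, fs, cx, cf = [0.0], [0.0], 0.0, 0.0
    for i in order:
        cx += base[i]
        cf += post[i]
        xs.append(cx)
        fs.append(cf)
    return xs, fs

xs1, fs1 = cf_vertices(q1)
xs2, fs2 = cf_vertices(q2)
pietra1 = max(x - f for x, f in zip(xs1, fs1))
pietra2 = max(x - f for x, f in zip(xs2, fs2))
```

The larger of the two Pietra indices agrees with the value 2ε/(3(2−ε)) obtained analytically in Result 6 below.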
Define Π_k = (1−ε)Π₀ + εQ_k, k = 1, 2. Given the sample x̃ = 1, the likelihood function is l_x̃(θ) = θ. According to the definitions given in (4), it follows that D₀ = 1/2, D_{Q₁} = 1/2 and D_{Q₂} = 1/4. Define h_k(θ) = q_k(θ|x̃)/π₀(θ|x̃) = [q_k(θ)/π₀(θ)] D₀/D_{Q_k}, k = 1, 2. It follows that

    h₁(θ) = 1/(4θ) for 0 < θ ≤ 1/2,    h₁(θ) = 1/(4(1−θ)) for 1/2 < θ < 1;

    h₂(θ) = 1/θ for 0 < θ ≤ 1/2,    h₂(θ) = 0 for 1/2 < θ ≤ 1.

Result 5. Let φ_k be the c.f. of Π*_k w.r.t. Π₀*, k = 1, 2. Then it results that

    φ₁(x) = (1−ε)x + ε(1 − √(1−x)),    0 ≤ x ≤ 1,

and

    φ₂(x) = [2(1−ε)/(2−ε)] x for 0 ≤ x ≤ 2/3,
    φ₂(x) = [2(1−ε)/(2−ε)] x + [ε/(2−ε)] (1 − ∛(9(1−x)²)) for 2/3 < x ≤ 1.

PROOF. Let φ_{Q_k} be the c.f. of Q*_k w.r.t. Π₀*, k = 1, 2. Because of the definition of c.f. and h₁(θ) = h₁(1−θ), 0 ≤ θ ≤ 1, it follows that φ_{Q₁}(x) = ∫_b^{1−b} q₁(θ|x̃) dθ = 1 − 2b, where b = √(1−x)/2, b ≤ 1/2, is the solution of x = ∫_b^{1−b} π₀(θ|x̃) dθ, 0 ≤ x ≤ 1. Since h₂(θ) = 0 when 1/2 < θ ≤ 1, it follows that φ_{Q₂}(x) = 0 if x ≤ 2/3 = Π₀*([1/2, 1]). When 2/3 < x ≤ 1, then φ_{Q₂}(x) = ∫_b^{1/2} q₂(θ|x̃) dθ = 1 − 4b², where b = ∛(3(1−x))/2, b ≤ 1/2, is the solution of x = ∫_b^{1/2} π₀(θ|x̃) dθ + 2/3. Applying Proposition 2, φ₁ and φ₂ are obtained.

Considering the convexity of the c.f.'s and evaluating the derivatives of φ₁ and φ₂ at x = 0, 1, it is proved that none of them lies under the other, so that
the robustness analysis has to be performed studying φ̂(x) = inf_{Q∈𝒬} φ_Q(x), for any x ∈ [0,1], and the corresponding concentration indices.

Result 6. Under the above conditions, it follows that

    φ̂(x) = [2(1−ε)/(2−ε)] x for 0 ≤ x ≤ 2/3,
    φ̂(x) = [2(1−ε)/(2−ε)] x + [ε/(2−ε)] (1 − ∛(9(1−x)²)) for 2/3 < x ≤ x*,
    φ̂(x) = (1−ε)x + ε(1 − √(1−x)) for x* < x ≤ 1,

where x* is the unique solution of φ₁(x) = φ₂(x), 2/3 < x* < 1. Furthermore,

    C = 2ε/(3(2−ε))

and

    G = [2ε/(2−ε)] { 11/30 + (1−x*)²/2 − (3∛9/5)(1−x*)^{5/3} } + (4ε/3)(1−x*)^{3/2} − ε(1−x*)².

PROOF. The comparison of φ₁ and φ₂ leads quite easily to finding φ̂. The index C is the maximum between the Pietra indices concerning φ₁ and φ₂. The first index is given by x − φ₂(x) evaluated at x = 2/3, because x − φ₂(x) is increasing for x ≤ 2/3 and φ₂'(x) > 1 for x > 2/3. The second index is given by x − φ₁(x) evaluated at x = 3/4, but it is smaller than the first one, for any ε. The computation of the Gini's area of concentration G is indeed straightforward.

As an example, suppose ε = 0.1, so that C = 0.035088 and G = 0.03973, while C = 0.2222 and G = 0.24616 if ε = 0.5. As shown by the indices, the base probability measure is just slightly affected by the contaminating priors, also under rather large values of ε.

Example 4. Consider the magnitude X of the major earthquakes (i.e. with magnitude not less than 5) occurred in the Italian region Lazio since 1860, extracted from a catalogue prepared by the Gruppo Catalogo dei Terremoti dell'Istituto per la Geofisica della Litosfera del Consiglio Nazionale delle Ricerche (CNR). Observe
that the real variable X is recorded at the values (5, 5.1, 5.2, ...), so that it seems reasonable to consider, like in Ruggeri (1993b), the random variable Y = 10(X − 5), defined on ℕ and distributed geometrically with parameter θ. Such a simplification does not affect our analysis heavily, because, provided that θ is not too close to 0, the probability of Y being large is quite small: Σ_{k≥K} θ(1−θ)^k = (1−θ)^K ≅ 0, even for a small K.

A conjugate probability measure Be(8, 2) and a noninformative one, U(0,1), are chosen as priors on the parameter θ. Their posteriors turn out to be, respectively, Be(11, 2.9) and Be(4, 1.9), and their posterior means are 0.7914 and 0.678, with respective standard deviations 0.1053 and 0.1779. The two posterior means could be deemed close enough to ensure robustness, but a different picture is obtained when considering the c.f. φ of one posterior w.r.t. the other and the criterion for robustness given by

    sup_{A∈F : Π₁*(A)=x} |Π₁*(A) − Π₂*(A)| ≤ Π₁*(A)(1 − Π₁*(A)),

i.e. φ(x) ≥ g(x) = x² for all x ∈ [0,1]. As shown in Fig. 2, the c.f. φ(x) lies below the curve g(x), showing that the functional forms of the posterior probability measures are quite different, at least according to our criterion.

6. Future research. In this paper, the c.f. has been used to compare the functional forms of the posterior probability measures, corresponding to a class of priors, with a base one. In principle, it is possible to define a class of sampling models, so that a class of posterior probability measures is obtained as the likelihood changes. The same approach followed in this paper could then be applied, but we expect that computing the lowest c.f. will be a harder task.
Further developments are possible when considering either other classes of priors (possibly nonparametric), like in Carota and Ruggeri (1993), or infinitesimal variations of the base prior, like in Fortini and Ruggeri (1994b).

Bibliography

Berger, J. (1984). The robust Bayesian viewpoint (with discussion). In Robustness of Bayesian Analysis (ed. J. Kadane). Amsterdam: North-Holland.

Berger, J. (1985). Statistical Decision Theory and Bayesian Analysis. New York: Springer-Verlag.

Berger, J. (1990). Robust Bayesian analysis: sensitivity to the prior. J. Statist. Plann. Inference, 25.

Berger, J. (1994). An overview of robust Bayesian analysis. To appear in TEST.

Carota, C. and Ruggeri, F. (1993). Robust Bayesian analysis given priors on partition sets. Quaderno IAMI 93.1, Milano: CNR-IAMI.

Cifarelli, D.M. and Regazzini, E. (1987). On a general definition of concentration function. Sankhyā B, 49.

Fortini, S. and Ruggeri, F. (1992). On defining neighbourhoods of measures through the concentration function. To appear in Sankhyā A.

Fortini, S. and Ruggeri, F. (1993). Concentration function and coefficients of divergence for signed measures. To appear in J. Ital. Statist. Society.
Fortini, S. and Ruggeri, F. (1994a). Concentration functions and Bayesian robustness. To appear in J. Statist. Plann. Inference.

Fortini, S. and Ruggeri, F. (1994b). Infinitesimal properties of the concentration function. Unpublished manuscript.

Gini, C. (1914). Sulla misura della concentrazione e della variabilità dei caratteri. Atti del Reale Istituto Veneto di S.L.A., 73, parte II.

Marshall, A.W. and Olkin, I. (1979). Inequalities: Theory of Majorization and Its Applications. New York: Academic Press.

Pietra, G. (1915). Delle relazioni tra gli indici di variabilità. Atti del Reale Istituto Veneto di S.L.A., 74, parte II.

Regazzini, E. (1992). Concentration comparisons between probability measures. Sankhyā B, 54.

Rotondi, R., Ruggeri, F. and Vercesi, P. (1992). A Bayesian approach to the comparison of information sources. Quaderno IAMI 92.1, Milano: CNR-IAMI.

Ruggeri, F. (1993a). Local and global sensitivity under some classes of priors. To appear in IMSIBAC IV. Amsterdam: North-Holland.

Ruggeri, F. (1993b). Bayesian comparison of Italian earthquakes. Quaderno IAMI 93.8, Milano: CNR-IAMI.

Ruggeri, F. and Wasserman, L.A. (1991). Density based classes of priors: infinitesimal properties and approximations. Technical Report #58, Pittsburgh: Department of Statistics, Carnegie Mellon University.

Wasserman, L.A. (1992). Recent methodological advances in robust Bayesian inference. In Bayesian Statistics 4. Oxford: Oxford University Press.

Sandra Fortini, Fabrizio Ruggeri
Consiglio Nazionale delle Ricerche
Istituto per le Applicazioni della Matematica e dell'Informatica
Via A.M. Ampère 56, I-20131 Milano, Italy.
Figure 1. Concentration function φ(x) and 1 − φ(1 − x) of G(2,2) w.r.t. E(1).

Figure 2. Concentration function between the posteriors on θ and the bound g(x) = x².
More informationThe Lebesgue Integral
The Lebesgue Integral Having completed our study of Lebesgue measure, we are now ready to consider the Lebesgue integral. Before diving into the details of its construction, though, we would like to give
More informationvar D (B) = var(b? E D (B)) = var(b)? cov(b; D)(var(D))?1 cov(d; B) (2) Stone [14], and Hartigan [9] are among the rst to discuss the role of such ass
BAYES LINEAR ANALYSIS [This article appears in the Encyclopaedia of Statistical Sciences, Update volume 3, 1998, Wiley.] The Bayes linear approach is concerned with problems in which we want to combine
More informationA PRIMER OF LEBESGUE INTEGRATION WITH A VIEW TO THE LEBESGUE-RADON-NIKODYM THEOREM
A PRIMR OF LBSGU INTGRATION WITH A VIW TO TH LBSGU-RADON-NIKODYM THORM MISHL SKNDRI Abstract. In this paper, we begin by introducing some fundamental concepts and results in measure theory and in the Lebesgue
More information2. Intersection Multiplicities
2. Intersection Multiplicities 11 2. Intersection Multiplicities Let us start our study of curves by introducing the concept of intersection multiplicity, which will be central throughout these notes.
More informationMarkov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.
Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce
More informationTheorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension. n=1
Chapter 2 Probability measures 1. Existence Theorem 2.1 (Caratheodory). A (countably additive) probability measure on a field has an extension to the generated σ-field Proof of Theorem 2.1. Let F 0 be
More informationNew concepts: Span of a vector set, matrix column space (range) Linearly dependent set of vectors Matrix null space
Lesson 6: Linear independence, matrix column space and null space New concepts: Span of a vector set, matrix column space (range) Linearly dependent set of vectors Matrix null space Two linear systems:
More informationMATHS 730 FC Lecture Notes March 5, Introduction
1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists
More informationIf Y and Y 0 satisfy (1-2), then Y = Y 0 a.s.
20 6. CONDITIONAL EXPECTATION Having discussed at length the limit theory for sums of independent random variables we will now move on to deal with dependent random variables. An important tool in this
More information2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?
MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is
More informationReminder Notes for the Course on Measures on Topological Spaces
Reminder Notes for the Course on Measures on Topological Spaces T. C. Dorlas Dublin Institute for Advanced Studies School of Theoretical Physics 10 Burlington Road, Dublin 4, Ireland. Email: dorlas@stp.dias.ie
More informationL p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by
L p Functions Given a measure space (, µ) and a real number p [, ), recall that the L p -norm of a measurable function f : R is defined by f p = ( ) /p f p dµ Note that the L p -norm of a function f may
More informationStat 643 Review of Probability Results (Cressie)
Stat 643 Review of Probability Results (Cressie) Probability Space: ( HTT,, ) H is the set of outcomes T is a 5-algebra; subsets of H T is a probability measure mapping from T onto [0,] Measurable Space:
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More information2 Measure Theory. 2.1 Measures
2 Measure Theory 2.1 Measures A lot of this exposition is motivated by Folland s wonderful text, Real Analysis: Modern Techniques and Their Applications. Perhaps the most ubiquitous measure in our lives
More informationPractical implementation of possibilistic probability mass functions
Practical implementation of possibilistic probability mass functions Leen Gilbert Gert de Cooman Etienne E. Kerre October 12, 2000 Abstract Probability assessments of events are often linguistic in nature.
More informationNotes on Measure, Probability and Stochastic Processes. João Lopes Dias
Notes on Measure, Probability and Stochastic Processes João Lopes Dias Departamento de Matemática, ISEG, Universidade de Lisboa, Rua do Quelhas 6, 1200-781 Lisboa, Portugal E-mail address: jldias@iseg.ulisboa.pt
More information290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f
Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica
More informationIntroduction to Real Analysis
Introduction to Real Analysis Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 1 Sets Sets are the basic objects of mathematics. In fact, they are so basic that
More information460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses
Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:
More informationStatistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation
Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider
More informationContents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces
Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and
More informationHomework 1 Solutions ECEn 670, Fall 2013
Homework Solutions ECEn 670, Fall 03 A.. Use the rst seven relations to prove relations (A.0, (A.3, and (A.6. Prove (F G c F c G c (A.0. (F G c ((F c G c c c by A.6. (F G c F c G c by A.4 Prove F (F G
More informationA characterization of consistency of model weights given partial information in normal linear models
Statistics & Probability Letters ( ) A characterization of consistency of model weights given partial information in normal linear models Hubert Wong a;, Bertrand Clare b;1 a Department of Health Care
More information2 C. A. Gunter ackground asic Domain Theory. A poset is a set D together with a binary relation v which is reexive, transitive and anti-symmetric. A s
1 THE LARGEST FIRST-ORDER-AXIOMATIZALE CARTESIAN CLOSED CATEGORY OF DOMAINS 1 June 1986 Carl A. Gunter Cambridge University Computer Laboratory, Cambridge C2 3QG, England Introduction The inspiration for
More informationTechinical Proofs for Nonlinear Learning using Local Coordinate Coding
Techinical Proofs for Nonlinear Learning using Local Coordinate Coding 1 Notations and Main Results Denition 1.1 (Lipschitz Smoothness) A function f(x) on R d is (α, β, p)-lipschitz smooth with respect
More informationOn Kusuoka Representation of Law Invariant Risk Measures
MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of
More informationProduct measures, Tonelli s and Fubini s theorems For use in MAT4410, autumn 2017 Nadia S. Larsen. 17 November 2017.
Product measures, Tonelli s and Fubini s theorems For use in MAT4410, autumn 017 Nadia S. Larsen 17 November 017. 1. Construction of the product measure The purpose of these notes is to prove the main
More informationPROPERTY OF HALF{SPACES. July 25, Abstract
A CHARACTERIZATION OF GAUSSIAN MEASURES VIA THE ISOPERIMETRIC PROPERTY OF HALF{SPACES S. G. Bobkov y and C. Houdre z July 25, 1995 Abstract If the half{spaces of the form fx 2 R n : x 1 cg are extremal
More informationCoarse Geometry. Phanuel Mariano. Fall S.i.g.m.a. Seminar. Why Coarse Geometry? Coarse Invariants A Coarse Equivalence to R 1
Coarse Geometry 1 1 University of Connecticut Fall 2014 - S.i.g.m.a. Seminar Outline 1 Motivation 2 3 The partition space P ([a, b]). Preliminaries Main Result 4 Outline Basic Problem 1 Motivation 2 3
More informationMarkov Sonin Gaussian rule for singular functions
Journal of Computational and Applied Mathematics 169 (2004) 197 212 www.elsevier.com/locate/cam Markov Sonin Gaussian rule for singular functions G.Mastroianni, D.Occorsio Dipartimento di Matematica, Universita
More informationChapter 4. The dominated convergence theorem and applications
Chapter 4. The dominated convergence theorem and applications The Monotone Covergence theorem is one of a number of key theorems alllowing one to exchange limits and [Lebesgue] integrals (or derivatives
More informationLEBESGUE INTEGRATION. Introduction
LEBESGUE INTEGATION EYE SJAMAA Supplementary notes Math 414, Spring 25 Introduction The following heuristic argument is at the basis of the denition of the Lebesgue integral. This argument will be imprecise,
More informationTHEOREMS, ETC., FOR MATH 515
THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every
More informationquantitative information on the error caused by using the solution of the linear problem to describe the response of the elastic material on a corner
Quantitative Justication of Linearization in Nonlinear Hencky Material Problems 1 Weimin Han and Hong-ci Huang 3 Abstract. The classical linear elasticity theory is based on the assumption that the size
More informationCONVOLUTIONS OF LOGARITHMICALLY. Milan Merkle. Abstract. We give a simple proof of the fact that the logarithmic concavity of real
Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. 9 (998), 3{7. CONVOLUTIONS OF LOGARITHMICALLY CONCAVE FUNCTIONS Milan Merkle Abstract. We give a simple proof of the fact that the logarithmic concavity
More informationEstimates for probabilities of independent events and infinite series
Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences
More informationConsistent semiparametric Bayesian inference about a location parameter
Journal of Statistical Planning and Inference 77 (1999) 181 193 Consistent semiparametric Bayesian inference about a location parameter Subhashis Ghosal a, Jayanta K. Ghosh b; c; 1 d; ; 2, R.V. Ramamoorthi
More informationThe Uniformity Principle: A New Tool for. Probabilistic Robustness Analysis. B. R. Barmish and C. M. Lagoa. further discussion.
The Uniformity Principle A New Tool for Probabilistic Robustness Analysis B. R. Barmish and C. M. Lagoa Department of Electrical and Computer Engineering University of Wisconsin-Madison, Madison, WI 53706
More information4.1 Eigenvalues, Eigenvectors, and The Characteristic Polynomial
Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 27, v ) Contents 4 Eigenvalues, Diagonalization, and the Jordan Canonical Form 4 Eigenvalues, Eigenvectors, and
More informationThe Lebesgue Integral
The Lebesgue Integral Brent Nelson In these notes we give an introduction to the Lebesgue integral, assuming only a knowledge of metric spaces and the iemann integral. For more details see [1, Chapters
More informationand Dagpunar [1988]) general methods for discrete distributions include two table methods (i.e. inversion by sequential or table-aided search and the
Rejection-Inversion to Generate Variates from Monotone Discrete Distributions W. H ORMANN and G. DERFLINGER University of Economics and Business Administration Vienna Department of Statistics, Augasse
More informationWeak convergence. Amsterdam, 13 November Leiden University. Limit theorems. Shota Gugushvili. Generalities. Criteria
Weak Leiden University Amsterdam, 13 November 2013 Outline 1 2 3 4 5 6 7 Definition Definition Let µ, µ 1, µ 2,... be probability measures on (R, B). It is said that µ n converges weakly to µ, and we then
More information18.175: Lecture 3 Integration
18.175: Lecture 3 Scott Sheffield MIT Outline Outline Recall definitions Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F [0, 1] is the probability
More informationMeasure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond
Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................
More informationIntegration on Measure Spaces
Chapter 3 Integration on Measure Spaces In this chapter we introduce the general notion of a measure on a space X, define the class of measurable functions, and define the integral, first on a class of
More informationChapter 5. Measurable Functions
Chapter 5. Measurable Functions 1. Measurable Functions Let X be a nonempty set, and let S be a σ-algebra of subsets of X. Then (X, S) is a measurable space. A subset E of X is said to be measurable if
More informationON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES
Submitted to the Annals of Probability ON THE EQUIVALENCE OF CONGLOMERABILITY AND DISINTEGRABILITY FOR UNBOUNDED RANDOM VARIABLES By Mark J. Schervish, Teddy Seidenfeld, and Joseph B. Kadane, Carnegie
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationCONDITIONAL ACCEPTABILITY MAPPINGS AS BANACH-LATTICE VALUED MAPPINGS
CONDITIONAL ACCEPTABILITY MAPPINGS AS BANACH-LATTICE VALUED MAPPINGS RAIMUND KOVACEVIC Department of Statistics and Decision Support Systems, University of Vienna, Vienna, Austria Abstract. Conditional
More information4.1 Notation and probability review
Directed and undirected graphical models Fall 2015 Lecture 4 October 21st Lecturer: Simon Lacoste-Julien Scribe: Jaime Roquero, JieYing Wu 4.1 Notation and probability review 4.1.1 Notations Let us recall
More information+ 2x sin x. f(b i ) f(a i ) < ɛ. i=1. i=1
Appendix To understand weak derivatives and distributional derivatives in the simplest context of functions of a single variable, we describe without proof some results from real analysis (see [7] and
More information3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?
MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable
More informationMeasuring relative equality of concentration between different. income/wealth distributions
Measuring relative equality of concentration between different income/wealth distributions Quentin L Burrell Isle of Man International Business School The Nunnery Old Castletown Road Douglas Isle of Man
More informationLecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011
Random Walks and Brownian Motion Tel Aviv University Spring 20 Instructor: Ron Peled Lecture 5 Lecture date: Feb 28, 20 Scribe: Yishai Kohn In today's lecture we return to the Chung-Fuchs theorem regarding
More informationInvariant HPD credible sets and MAP estimators
Bayesian Analysis (007), Number 4, pp. 681 69 Invariant HPD credible sets and MAP estimators Pierre Druilhet and Jean-Michel Marin Abstract. MAP estimators and HPD credible sets are often criticized in
More informationRough Sets, Rough Relations and Rough Functions. Zdzislaw Pawlak. Warsaw University of Technology. ul. Nowowiejska 15/19, Warsaw, Poland.
Rough Sets, Rough Relations and Rough Functions Zdzislaw Pawlak Institute of Computer Science Warsaw University of Technology ul. Nowowiejska 15/19, 00 665 Warsaw, Poland and Institute of Theoretical and
More informationCongurations of periodic orbits for equations with delayed positive feedback
Congurations of periodic orbits for equations with delayed positive feedback Dedicated to Professor Tibor Krisztin on the occasion of his 60th birthday Gabriella Vas 1 MTA-SZTE Analysis and Stochastics
More informationMax-Planck-Institut fur Mathematik in den Naturwissenschaften Leipzig Uniformly distributed measures in Euclidean spaces by Bernd Kirchheim and David Preiss Preprint-Nr.: 37 1998 Uniformly Distributed
More information3.1 Basic properties of real numbers - continuation Inmum and supremum of a set of real numbers
Chapter 3 Real numbers The notion of real number was introduced in section 1.3 where the axiomatic denition of the set of all real numbers was done and some basic properties of the set of all real numbers
More informationL p Spaces and Convexity
L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function
More information3 Integration and Expectation
3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ
More informationIntegral Jensen inequality
Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a
More informationMidterm 1. Every element of the set of functions is continuous
Econ 200 Mathematics for Economists Midterm Question.- Consider the set of functions F C(0, ) dened by { } F = f C(0, ) f(x) = ax b, a A R and b B R That is, F is a subset of the set of continuous functions
More information1.1. MEASURES AND INTEGRALS
CHAPTER 1: MEASURE THEORY In this chapter we define the notion of measure µ on a space, construct integrals on this space, and establish their basic properties under limits. The measure µ(e) will be defined
More informationTHEOREMS, ETC., FOR MATH 516
THEOREMS, ETC., FOR MATH 516 Results labeled Theorem Ea.b.c (or Proposition Ea.b.c, etc.) refer to Theorem c from section a.b of Evans book (Partial Differential Equations). Proposition 1 (=Proposition
More informationOptimization and Optimal Control in Banach Spaces
Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,
More informationMARIA GIRARDI Fact 1.1. For a bounded linear operator T from L 1 into X, the following statements are equivalent. (1) T is Dunford-Pettis. () T maps w
DENTABILITY, TREES, AND DUNFORD-PETTIS OPERATORS ON L 1 Maria Girardi University of Illinois at Urbana-Champaign Pacic J. Math. 148 (1991) 59{79 Abstract. If all bounded linear operators from L1 into a
More informationReal Analysis Problems
Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.
More information(2) E M = E C = X\E M
10 RICHARD B. MELROSE 2. Measures and σ-algebras An outer measure such as µ is a rather crude object since, even if the A i are disjoint, there is generally strict inequality in (1.14). It turns out to
More informationSupplement A: Construction and properties of a reected diusion
Supplement A: Construction and properties of a reected diusion Jakub Chorowski 1 Construction Assumption 1. For given constants < d < D let the pair (σ, b) Θ, where Θ : Θ(d, D) {(σ, b) C 1 (, 1]) C 1 (,
More informationWerner Romisch. Humboldt University Berlin. Abstract. Perturbations of convex chance constrained stochastic programs are considered the underlying
Stability of solutions to chance constrained stochastic programs Rene Henrion Weierstrass Institute for Applied Analysis and Stochastics D-7 Berlin, Germany and Werner Romisch Humboldt University Berlin
More informationChapter 8. General Countably Additive Set Functions. 8.1 Hahn Decomposition Theorem
Chapter 8 General Countably dditive Set Functions In Theorem 5.2.2 the reader saw that if f : X R is integrable on the measure space (X,, µ) then we can define a countably additive set function ν on by
More informationRough Approach to Fuzzification and Defuzzification in Probability Theory
Rough Approach to Fuzzification and Defuzzification in Probability Theory G. Cattaneo and D. Ciucci Dipartimento di Informatica, Sistemistica e Comunicazione Università di Milano Bicocca, Via Bicocca degli
More informationMeasure-theoretic probability
Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is
More informationLectures 22-23: Conditional Expectations
Lectures 22-23: Conditional Expectations 1.) Definitions Let X be an integrable random variable defined on a probability space (Ω, F 0, P ) and let F be a sub-σ-algebra of F 0. Then the conditional expectation
More informationExpectations and moments
slidenumbers,noslidenumbers debugmode,normalmode slidenumbers debugmode Expectations and moments Helle Bunzel Expectations and moments Helle Bunzel Expectations of Random Variables Denition The expected
More informationy x -0.5 y
SOLVING LINEAR ALGEBRAIC SYSTEMS and CRAMER'S RULE These notes should provide you with a brief review of the facts about linear algebraic systems and a method, Cramer's Rule, that is useful in solving
More informationReal Analysis Notes. Thomas Goller
Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................
More informationRiesz Representation Theorems
Chapter 6 Riesz Representation Theorems 6.1 Dual Spaces Definition 6.1.1. Let V and W be vector spaces over R. We let L(V, W ) = {T : V W T is linear}. The space L(V, R) is denoted by V and elements of
More information2 Sequences, Continuity, and Limits
2 Sequences, Continuity, and Limits In this chapter, we introduce the fundamental notions of continuity and limit of a real-valued function of two variables. As in ACICARA, the definitions as well as proofs
More informationChapter 1: Probability Theory Lecture 1: Measure space, measurable function, and integration
Chapter 1: Probability Theory Lecture 1: Measure space, measurable function, and integration Random experiment: uncertainty in outcomes Ω: sample space: a set containing all possible outcomes Definition
More informationExternal Stability and Continuous Liapunov. investigated by means of a suitable extension of the Liapunov functions method. We
External Stability and Continuous Liapunov Functions Andrea Bacciotti Dipartimento di Matematica del Politecnico Torino, 10129 Italy Abstract. Itis well known that external stability of nonlinear input
More informationDynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor)
Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Matija Vidmar February 7, 2018 1 Dynkin and π-systems Some basic
More informationVector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)
Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational
More informationDIVISIVE CONDITIONING: FURTHER RESULTS ON DILATION. Timothy Herron, Teddy Seidenfeld and Larry Wasserman 1. Carnegie Mellon University
DIVISIVE CONDITIONING: FURTHER RESULTS ON DILATION Timothy Herron, Teddy Seidenfeld and Larry Wasserman 1 Carnegie Mellon University Conditioning can make imprecise probabilities uniformly more imprecise.
More informationNonparametric one-sided testing for the mean and related extremum problems
Nonparametric one-sided testing for the mean and related extremum problems Norbert Gaffke University of Magdeburg, Faculty of Mathematics D-39016 Magdeburg, PF 4120, Germany E-mail: norbert.gaffke@mathematik.uni-magdeburg.de
More informationProbability. Lecture Notes. Adolfo J. Rumbos
Probability Lecture Notes Adolfo J. Rumbos October 20, 204 2 Contents Introduction 5. An example from statistical inference................ 5 2 Probability Spaces 9 2. Sample Spaces and σ fields.....................
More informationCourse 212: Academic Year Section 1: Metric Spaces
Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........
More information