SUPPLEMENTARY MATERIAL
COBRA: A Combined Regression Strategy
by G. Biau, A. Fischer, B. Guedj and J. D. Malley


A. Proofs

A.1. Proof of Proposition 2.1

We have
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - r^\star(X)\big]^2 = \mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 + \mathbb{E}\big[T(\mathbf{r}_k(X)) - r^\star(X)\big]^2$$
$$\qquad + 2\,\mathbb{E}\big[\big(T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)\big(T(\mathbf{r}_k(X)) - r^\star(X)\big)\big].$$
As for the double product, notice that
$$\mathbb{E}\big[\big(T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)\big(T(\mathbf{r}_k(X)) - r^\star(X)\big)\big]
= \mathbb{E}\,\mathbb{E}\big[\big(T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)\big(T(\mathbf{r}_k(X)) - r^\star(X)\big)\,\big|\,\mathbf{r}_k(X),\mathcal{D}_n\big]$$
$$= \mathbb{E}\big[\big(T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)\,\mathbb{E}\big[T(\mathbf{r}_k(X)) - r^\star(X)\,\big|\,\mathbf{r}_k(X),\mathcal{D}_n\big]\big].$$
But
$$\mathbb{E}\big[r^\star(X)\,\big|\,\mathbf{r}_k(X),\mathcal{D}_n\big] = \mathbb{E}\big[r^\star(X)\,\big|\,\mathbf{r}_k(X)\big] \quad \text{(by independence of $X$ and $\mathcal{D}_n$)}$$
$$= \mathbb{E}\big[\mathbb{E}[Y\,|\,X]\,\big|\,\mathbf{r}_k(X)\big] = \mathbb{E}\big[Y\,\big|\,\mathbf{r}_k(X)\big] \quad \text{(since $\sigma(\mathbf{r}_k(X)) \subset \sigma(X)$)}$$
$$= T(\mathbf{r}_k(X)).$$
Consequently,
$$\mathbb{E}\big[\big(T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)\big(T(\mathbf{r}_k(X)) - r^\star(X)\big)\big] = 0$$
and
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - r^\star(X)\big]^2 = \mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 + \mathbb{E}\big[T(\mathbf{r}_k(X)) - r^\star(X)\big]^2.$$
Thus, by definition of the conditional expectation, and using the fact that $T(\mathbf{r}_k(X)) = \mathbb{E}[r^\star(X)\,|\,\mathbf{r}_k(X)]$,
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - r^\star(X)\big]^2 \le \mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 + \inf_f \mathbb{E}\big[f(\mathbf{r}_k(X)) - r^\star(X)\big]^2,$$

where the infimum is taken over all square integrable functions of $\mathbf{r}_k(X)$. In particular,
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - r^\star(X)\big]^2 \le \min_{m=1,\dots,M} \mathbb{E}\big[r_{k,m}(X) - r^\star(X)\big]^2 + \mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2,$$
as desired.

A.2. Proof of Proposition 2.2

Note that the second statement is an immediate consequence of the first statement and Proposition 2.1; therefore we only have to prove that
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 \to 0 \quad \text{as } \ell \to \infty.$$
We start with a technical lemma, whose proof can be found in the monograph by Györfi et al. (2002).

Lemma A.1. Let $B(n,p)$ be a binomial random variable with parameters $n \ge 1$ and $p > 0$. Then
$$\mathbb{E}\left[\frac{1}{1 + B(n,p)}\right] \le \frac{1}{p(n+1)} \quad \text{and} \quad \mathbb{E}\left[\frac{\mathbf{1}_{\{B(n,p)>0\}}}{B(n,p)}\right] \le \frac{2}{p(n+1)}.$$
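The two bounds of Lemma A.1 are easy to verify numerically. The following short check is a sketch added for illustration (not part of the original material); it uses SciPy's exact binomial probabilities to compare the expectations with the stated upper bounds:

```python
import numpy as np
from scipy.stats import binom

def check_lemma_a1(n, p):
    """Compare E[1/(1+B)] and E[1_{B>0}/B] for B ~ Binomial(n, p) with the Lemma A.1 bounds."""
    k = np.arange(n + 1)
    pmf = binom.pmf(k, n, p)
    e_inv_one_plus_b = np.sum(pmf / (1.0 + k))   # exact E[1/(1+B)]
    e_inv_b_positive = np.sum(pmf[1:] / k[1:])   # exact E[1_{B>0}/B]
    return e_inv_one_plus_b, 1.0 / (p * (n + 1)), e_inv_b_positive, 2.0 / (p * (n + 1))

for n, p in [(10, 0.3), (100, 0.05), (1000, 0.01)]:
    lhs1, rhs1, lhs2, rhs2 = check_lemma_a1(n, p)
    print(f"n={n:5d}, p={p:.2f}:  E[1/(1+B)]={lhs1:.4f} <= {rhs1:.4f},  "
          f"E[1_(B>0)/B]={lhs2:.4f} <= {rhs2:.4f}")
```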

For all distributions of $(X,Y)$, using the elementary inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$, note that
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2
= \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(Y_i - T(\mathbf{r}_k(X_i)) + T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X)) + T(\mathbf{r}_k(X))\big) - T(\mathbf{r}_k(X))\Big]^2$$
$$\le 3\,\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)\Big]^2 \tag{A.1}$$
$$\quad + 3\,\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(Y_i - T(\mathbf{r}_k(X_i))\big)\Big]^2 \tag{A.2}$$
$$\quad + 3\,\mathbb{E}\Big[\Big(\sum_{i=1}^{\ell} W_{n,i}(X) - 1\Big)^2\, T^2(\mathbf{r}_k(X))\Big]. \tag{A.3}$$
Here and below, $W_{n,i}(X)$ denotes, as in the main document, the random weight
$$W_{n,i}(X) = \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}},$$
with the convention $0/0 = 0$. Consequently, to prove the proposition, it suffices to establish that (A.1), (A.2) and (A.3) tend to 0 as $\ell$ tends to infinity. This is done, respectively, in Proposition A.1, Proposition A.2 and Proposition A.3 below.

Proposition A.1. Under the assumptions of Proposition 2.2,
$$\lim_{\ell \to \infty} \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)\Big]^2 = 0.$$

Proof of Proposition A.1. By the Cauchy-Schwarz inequality,
$$\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)\Big]^2
= \mathbb{E}\Big[\sum_{i=1}^{\ell} \sqrt{W_{n,i}(X)}\,\sqrt{W_{n,i}(X)}\,\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)\Big]^2$$
$$\le \mathbb{E}\Big[\sum_{j=1}^{\ell} W_{n,j}(X)\, \sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)^2\Big]
= \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)^2\Big] := A_n$$
(because $\sum_{j=1}^{\ell} W_{n,j}(X) \in \{0,1\}$). The function $T$ is such that $\mathbb{E}[T^2(\mathbf{r}_k(X))] < \infty$. Therefore, it can be approximated in an $L^2$ sense by a continuous function with compact support, say $\tilde T$ (see, e.g., Theorem A.1 in Györfi et al., 2002). More precisely, for any $\varepsilon > 0$, there exists a function $\tilde T$ such that
$$\mathbb{E}\big[T(\mathbf{r}_k(X)) - \tilde T(\mathbf{r}_k(X))\big]^2 < \varepsilon^2.$$
Consequently, we obtain
$$A_n = \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(X))\big)^2\Big]$$
$$\le 3\,\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X_i))\big)^2\Big]
+ 3\,\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X))\big)^2\Big]$$
$$\quad + 3\,\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)^2\Big]
:= 3A_{n1} + 3A_{n2} + 3A_{n3}.$$

Computation of $A_{n3}$. Thanks to the approximation of $T$ by $\tilde T$,
$$A_{n3} = \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big)^2\Big] \le \mathbb{E}\big[\tilde T(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 < \varepsilon^2.$$

Computation of $A_{n1}$. Denote by $\mu$ the distribution of $X$. Then, by symmetry,
$$A_{n1} = \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X_i))\big)^2\Big]
= \ell\,\mathbb{E}\Bigg[\frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_1)| \le \varepsilon_\ell\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\,\big(T(\mathbf{r}_k(X_1)) - \tilde T(\mathbf{r}_k(X_1))\big)^2\Bigg]$$
$$= \ell\,\mathbb{E}\Bigg[\int \big(T(\mathbf{r}_k(u)) - \tilde T(\mathbf{r}_k(u))\big)^2\, \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(u)| \le \varepsilon_\ell\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg]\, \mu(\mathrm{d}u)\Bigg].$$
Letting, for fixed $u$,
$$A'_{n1} = \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(u)| \le \varepsilon_\ell\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg],$$
let us prove that $A'_{n1} \le 2^M/\ell$. To this aim, observe that
$$A'_{n1} = \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\{x \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(u) - \varepsilon_\ell,\, r_{k,m}(u) + \varepsilon_\ell])\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\{X_j \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg]$$
$$= \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\{x \in \bigcup_{(a_1,\dots,a_M) \in \{1,2\}^M} r_{k,1}^{-1}(I^{a_1}_{n,1}(u)) \cap \dots \cap r_{k,M}^{-1}(I^{a_M}_{n,M}(u))\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\{X_j \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg]$$
$$\le \sum_{p=1}^{2^M} \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\{x \in R^p_n(u)\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\{X_j \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg].$$

Here, $I^1_{n,m}(u) = [r_{k,m}(u) - \varepsilon_\ell,\, r_{k,m}(u)]$, $I^2_{n,m}(u) = [r_{k,m}(u),\, r_{k,m}(u) + \varepsilon_\ell]$, and $R^p_n(u)$ is the $p$-th set of the form $r_{k,1}^{-1}(I^{a_1}_{n,1}(u)) \cap \dots \cap r_{k,M}^{-1}(I^{a_M}_{n,M}(u))$, assuming that they have been ordered using the lexicographic order of $(a_1,\dots,a_M)$. Next, note that
$$x \in R^p_n(u) \;\Longrightarrow\; R^p_n(u) \subset \bigcap_{m=1}^{M} r_{k,m}^{-1}\big([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell]\big).$$
To see this, just observe that, for all $m = 1,\dots,M$, if $r_{k,m}(z) \in [r_{k,m}(u) - \varepsilon_\ell,\, r_{k,m}(u)]$, i.e., $r_{k,m}(u) - \varepsilon_\ell \le r_{k,m}(z) \le r_{k,m}(u)$, then, as $r_{k,m}(u) - \varepsilon_\ell \le r_{k,m}(x) \le r_{k,m}(u)$, one has $r_{k,m}(x) - \varepsilon_\ell \le r_{k,m}(z) \le r_{k,m}(x) + \varepsilon_\ell$. Similarly, if $r_{k,m}(u) \le r_{k,m}(z) \le r_{k,m}(u) + \varepsilon_\ell$, then $r_{k,m}(u) \le r_{k,m}(x) \le r_{k,m}(u) + \varepsilon_\ell$ implies $r_{k,m}(x) - \varepsilon_\ell \le r_{k,m}(z) \le r_{k,m}(x) + \varepsilon_\ell$. Consequently,
$$A'_{n1} \le \sum_{p=1}^{2^M} \mathbb{E}\Bigg[\int \frac{\mathbf{1}_{\{x \in R^p_n(u)\}}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\{X_j \in R^p_n(u)\}}}\, \mu(\mathrm{d}x)\;\Bigg|\; \mathcal{D}_k\Bigg]
= \sum_{p=1}^{2^M} \mathbb{E}\Bigg[\frac{\mu\{R^p_n(u)\}}{1 + \sum_{j=2}^{\ell} \mathbf{1}_{\{X_j \in R^p_n(u)\}}}\;\Bigg|\; \mathcal{D}_k\Bigg]$$
$$\le \sum_{p=1}^{2^M} \mathbb{E}\Bigg[\frac{\mu\{R^p_n(u)\}}{\ell\, \mu\{R^p_n(u)\}}\;\Bigg|\; \mathcal{D}_k\Bigg] = \frac{2^M}{\ell} \quad \text{(by the first statement of Lemma A.1)}.$$
Thus, returning to $A_{n1}$, we obtain
$$A_{n1} \le 2^M\, \mathbb{E}\big[T(\mathbf{r}_k(X)) - \tilde T(\mathbf{r}_k(X))\big]^2 < 2^M \varepsilon^2.$$

Computation of $A_{n2}$. For any $\delta > 0$, write
$$A_{n2} = \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X))\big)^2\Big]$$
$$= \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X))\big)^2\, \mathbf{1}_{\bigcup_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| > \delta\}}\Big]$$
$$\quad + \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(\tilde T(\mathbf{r}_k(X_i)) - \tilde T(\mathbf{r}_k(X))\big)^2\, \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \delta\}}\Big],$$
from which we get that
$$A_{n2} \le 4 \sup_u \tilde T^2(\mathbf{r}_k(u))\; \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\, \mathbf{1}_{\bigcup_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| > \delta\}}\Big] \tag{A.4}$$
$$\quad + \sup_{u,v:\ \bigcap_{m=1}^{M}\{|r_{k,m}(u) - r_{k,m}(v)| \le \delta\}} \big|\tilde T(\mathbf{r}_k(v)) - \tilde T(\mathbf{r}_k(u))\big|^2. \tag{A.5}$$

With respect to the term (A.4), if $\delta > \varepsilon_\ell$, then
$$\sum_{i=1}^{\ell} W_{n,i}(X)\, \mathbf{1}_{\bigcup_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| > \delta\}}
= \sum_{i=1}^{\ell} \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}\, \mathbf{1}_{\bigcup_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| > \delta\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}} = 0.$$
It follows that, for all $\delta > 0$, this term converges to 0 as $\ell$ tends to infinity (since $\varepsilon_\ell \to 0$, eventually $\delta > \varepsilon_\ell$). On the other hand, letting $\delta \to 0$, we see that the term (A.5) tends to 0 as well, by uniform continuity of $\tilde T$. Hence, $A_{n2}$ tends to 0 as $\ell$ tends to infinity. Letting finally $\varepsilon$ go to 0, we conclude that $A_n$ vanishes as $\ell$ tends to infinity.

Proposition A.2. Under the assumptions of Proposition 2.2,
$$\lim_{\ell \to \infty} \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(Y_i - T(\mathbf{r}_k(X_i))\big)\Big]^2 = 0.$$

Proof of Proposition A.2. We have
$$\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\big(Y_i - T(\mathbf{r}_k(X_i))\big)\Big]^2
= \sum_{i=1}^{\ell}\sum_{j=1}^{\ell} \mathbb{E}\big[W_{n,i}(X)\, W_{n,j}(X)\big(Y_i - T(\mathbf{r}_k(X_i))\big)\big(Y_j - T(\mathbf{r}_k(X_j))\big)\big]$$
$$= \mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\big(Y_i - T(\mathbf{r}_k(X_i))\big)^2\Big]
= \mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\, \sigma^2(\mathbf{r}_k(X_i))\Big]$$
(the cross terms vanish upon conditioning on $\mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell)$ and $\mathcal{D}_k$), where
$$\sigma^2(\mathbf{r}_k(x)) = \mathbb{E}\big[\,|Y - T(\mathbf{r}_k(X))|^2\,\big|\,\mathbf{r}_k(X) = \mathbf{r}_k(x)\big].$$
For any $\varepsilon > 0$, $\sigma^2$ can be approximated in an $L^1$ sense by a continuous function with compact support $\tilde\sigma^2$, i.e.,
$$\mathbb{E}\big|\sigma^2(\mathbf{r}_k(X)) - \tilde\sigma^2(\mathbf{r}_k(X))\big| < \varepsilon.$$

Thus
$$\mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\, \sigma^2(\mathbf{r}_k(X_i))\Big]
\le \mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\, \tilde\sigma^2(\mathbf{r}_k(X_i))\Big]
+ \mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\, \big|\sigma^2(\mathbf{r}_k(X_i)) - \tilde\sigma^2(\mathbf{r}_k(X_i))\big|\Big]$$
$$\le \sup_u \tilde\sigma^2(\mathbf{r}_k(u))\; \mathbb{E}\Big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\Big]
+ \mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\, \big|\sigma^2(\mathbf{r}_k(X_i)) - \tilde\sigma^2(\mathbf{r}_k(X_i))\big|\Big],$$
since $W^2_{n,i}(X) \le W_{n,i}(X)$. With the same argument as for $A_{n1}$, we obtain
$$\mathbb{E}\Big[\sum_{i=1}^{\ell} W_{n,i}(X)\, \big|\sigma^2(\mathbf{r}_k(X_i)) - \tilde\sigma^2(\mathbf{r}_k(X_i))\big|\Big] \le 2^M \varepsilon.$$
Therefore, it remains to prove that $\mathbb{E}\big[\sum_{i=1}^{\ell} W^2_{n,i}(X)\big] \to 0$ as $\ell \to \infty$. To this aim, note that
$$\sum_{i=1}^{\ell} W^2_{n,i}(X)
= \frac{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\Big(\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}\Big)^2}
= \frac{\mathbf{1}_{\big\{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}} > 0\big\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}.$$
To complete the proof, we have to establish that the expectation of the right-hand term tends to 0. Denoting by $I$ a bounded interval of the real line, we have

$$\mathbb{E}\Bigg[\frac{\mathbf{1}_{\big\{\sum_{i=1}^{\ell} \mathbf{1}_{\{X_i \in \bigcap_{m} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\}} > 0\big\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\{X_i \in \bigcap_{m} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\}}}\Bigg]$$
$$\le \mathbb{E}\Bigg[\frac{\mathbf{1}_{\big\{\sum_{i=1}^{\ell} \mathbf{1}_{\{X_i \in \bigcap_{m} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\}} > 0\big\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\{X_i \in \bigcap_{m} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\}}}\; \mathbf{1}_{\{X \in \bigcap_{m} r_{k,m}^{-1}(I)\}}\Bigg] + \mu\Big(\bigcup_{m=1}^{M} r_{k,m}^{-1}(I^c)\Big)$$
$$\le \frac{2}{\ell + 1}\, \mathbb{E}\Bigg[\frac{\mathbf{1}_{\{X \in \bigcap_{m} r_{k,m}^{-1}(I)\}}}{\mu\big(\bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\big)}\Bigg] + \mu\Big(\bigcup_{m=1}^{M} r_{k,m}^{-1}(I^c)\Big).$$
The last inequality arises from the second statement of Lemma A.1, applied conditionally on $X$ and $\mathcal{D}_k$. By an appropriate choice of $I$, according to the technical condition (2.2), the second term on the right-hand side can be made as small as desired. Regarding the first term, there exists a finite number $N_\ell$ of points $z_1,\dots,z_{N_\ell}$ such that
$$\bigcap_{m=1}^{M} r_{k,m}^{-1}(I) \subset \bigcup_{(j_1,\dots,j_M) \in \{1,\dots,N_\ell\}^M} r_{k,1}^{-1}\big(I_{n,1}(z_{j_1})\big) \cap \dots \cap r_{k,M}^{-1}\big(I_{n,M}(z_{j_M})\big),$$
where $I_{n,m}(z_j) = [z_j - \varepsilon_\ell/2,\, z_j + \varepsilon_\ell/2]$. Suppose, without loss of generality, that the sets $r_{k,1}^{-1}(I_{n,1}(z_{j_1})) \cap \dots \cap r_{k,M}^{-1}(I_{n,M}(z_{j_M}))$ are ordered, and denote by $R^p_n$ the $p$-th among the $N_\ell^M = \lceil |I|/\varepsilon_\ell \rceil^M$ sets. Here $|I|$ denotes the length of the interval $I$ and $\lceil x \rceil$ denotes the smallest integer greater than $x$.

For all $p$,
$$X \in R^p_n \;\Longrightarrow\; R^p_n \subset \bigcap_{m=1}^{M} r_{k,m}^{-1}\big([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell]\big).$$
Indeed, if $v \in R^p_n$, then, for all $m = 1,\dots,M$, there exists $j \in \{1,\dots,N_\ell\}$ such that $r_{k,m}(v) \in [z_j - \varepsilon_\ell/2,\, z_j + \varepsilon_\ell/2]$, that is $z_j - \varepsilon_\ell/2 \le r_{k,m}(v) \le z_j + \varepsilon_\ell/2$. Since we also have $z_j - \varepsilon_\ell/2 \le r_{k,m}(X) \le z_j + \varepsilon_\ell/2$, we obtain $r_{k,m}(X) - \varepsilon_\ell \le r_{k,m}(v) \le r_{k,m}(X) + \varepsilon_\ell$. In conclusion,
$$\mathbb{E}\Bigg[\frac{\mathbf{1}_{\{X \in \bigcap_{m} r_{k,m}^{-1}(I)\}}}{\mu\big(\bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\big)}\Bigg]
\le \sum_{p=1}^{N_\ell^M} \mathbb{E}\Bigg[\frac{\mathbf{1}_{\{X \in R^p_n\}}}{\mu\big(\bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\big)}\Bigg]$$
$$\le \sum_{p=1}^{N_\ell^M} \mathbb{E}\Bigg[\frac{\mathbf{1}_{\{X \in R^p_n\}}}{\mu(R^p_n)}\Bigg]
\le N_\ell^M = \Big\lceil \frac{|I|}{\varepsilon_\ell} \Big\rceil^M.$$
The result follows from the assumption $\lim_{\ell \to \infty} \ell\, \varepsilon_\ell^M = \infty$.

Proposition A.3. Under the assumptions of Proposition 2.2,
$$\lim_{\ell \to \infty} \mathbb{E}\Big[\Big(\sum_{i=1}^{\ell} W_{n,i}(X) - 1\Big)^2\, T^2(\mathbf{r}_k(X))\Big] = 0.$$

Proof of Proposition A.3. Since $\big|\sum_{i=1}^{\ell} W_{n,i}(X) - 1\big| \le 1$, one has
$$\Big(\sum_{i=1}^{\ell} W_{n,i}(X) - 1\Big)^2\, T^2(\mathbf{r}_k(X)) \le T^2(\mathbf{r}_k(X)).$$
Consequently, by Lebesgue's dominated convergence theorem, to prove the proposition, it suffices to show that $\sum_{i=1}^{\ell} W_{n,i}(X)$ tends to 1 almost surely.

Now,
$$\mathbb{P}\Big(\sum_{i=1}^{\ell} W_{n,i}(X) \ne 1\Big)
= \mathbb{P}\Big(\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(X) - r_{k,m}(X_i)| \le \varepsilon_\ell\}} = 0\Big)
= \mathbb{P}\Big(\sum_{i=1}^{\ell} \mathbf{1}_{\{X_i \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(X) - \varepsilon_\ell,\, r_{k,m}(X) + \varepsilon_\ell])\}} = 0\Big)$$
$$= \mathbb{E}\int \mathbb{P}\Big(\forall\, i = 1,\dots,\ell:\ \mathbf{1}_{\{X_i \in \bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\}} = 0 \;\Big|\; \mathcal{D}_k\Big)\, \mu(\mathrm{d}x)$$
$$= \mathbb{E}\int \Big[1 - \mu\Big(\bigcap_{m=1}^{M} r_{k,m}^{-1}\big([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell]\big)\Big)\Big]^{\ell}\, \mu(\mathrm{d}x).$$
Denote by $I$ a bounded interval. Then, using $1 - t \le e^{-t}$,
$$\mathbb{P}\Big(\sum_{i=1}^{\ell} W_{n,i}(X) \ne 1\Big)
\le \mathbb{E}\int \exp\Big(-\ell\, \mu\big(\textstyle\bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\big)\Big)\, \mathbf{1}_{\{x \in \bigcap_{m} r_{k,m}^{-1}(I)\}}\, \mu(\mathrm{d}x) + \mu\Big(\bigcup_{m=1}^{M} r_{k,m}^{-1}(I^c)\Big)$$
$$\le \big(\max_{u \ge 0} u e^{-u}\big)\, \mathbb{E}\int \frac{\mathbf{1}_{\{x \in \bigcap_{m} r_{k,m}^{-1}(I)\}}}{\ell\, \mu\big(\bigcap_{m=1}^{M} r_{k,m}^{-1}([r_{k,m}(x) - \varepsilon_\ell,\, r_{k,m}(x) + \varepsilon_\ell])\big)}\, \mu(\mathrm{d}x) + \mu\Big(\bigcup_{m=1}^{M} r_{k,m}^{-1}(I^c)\Big).$$
Using the same arguments as in the proof of Proposition A.2, the probability $\mathbb{P}\big(\sum_{i=1}^{\ell} W_{n,i}(X) \ne 1\big)$ is thus bounded by $e^{-1} \lceil |I|/\varepsilon_\ell \rceil^M / \ell$, up to a term that can be made arbitrarily small by an appropriate choice of $I$. This bound vanishes as $\ell$ tends to infinity since, by assumption, $\lim_{\ell \to \infty} \ell\, \varepsilon_\ell^M = \infty$.

A.3. Proof of Theorem 2.1

Choose $x$. An easy calculation yields that
$$\mathbb{E}\big[\,|T_n(\mathbf{r}_k(x)) - T(\mathbf{r}_k(x))|^2 \,\big|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\big]$$
$$= \mathbb{E}\big[\,\big|T_n(\mathbf{r}_k(x)) - \mathbb{E}[T_n(\mathbf{r}_k(x)) \,|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k]\big|^2 \,\big|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\big] \tag{A.6}$$
$$\quad + \big|\mathbb{E}[T_n(\mathbf{r}_k(x)) \,|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k] - T(\mathbf{r}_k(x))\big|^2$$
$$:= E_1 + E_2. \tag{A.7}$$

On the one hand, we have
$$E_1 = \mathbb{E}\big[\,\big|T_n(\mathbf{r}_k(x)) - \mathbb{E}[T_n(\mathbf{r}_k(x)) \,|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k]\big|^2 \,\big|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\big]$$
$$= \mathbb{E}\Big[\,\Big|\sum_{i=1}^{\ell} W_{n,i}(x)\big(Y_i - \mathbb{E}[Y_i \,|\, \mathbf{r}_k(X_i)]\big)\Big|^2 \,\Big|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\Big].$$
Developing the square and noticing that
$$\mathbb{E}\big[Y_j \,\big|\, Y_i, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\big] = \mathbb{E}\big[Y_j \,\big|\, \mathbf{r}_k(X_j)\big],$$
since $Y_j$ is independent of $Y_i$ and of the $X_i$'s with $i \ne j$, we have
$$E_1 = \sum_{i=1}^{\ell} \mathbb{E}\Bigg[\Bigg(\frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\Bigg)^2 \big(Y_i - \mathbb{E}[Y_i \,|\, \mathbf{r}_k(X_i)]\big)^2 \,\Bigg|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k\Bigg] \tag{A.8}$$
$$= \sum_{i=1}^{\ell} \mathbb{V}\big(Y_i \,\big|\, \mathbf{r}_k(X_i)\big)\, \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\Big(\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}\Big)^2}.$$
Thus,
$$E_1 \le 4R^2\, \frac{\mathbf{1}_{\big\{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}} > 0\big\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}, \tag{A.9}$$
where $\mathbb{V}(\cdot)$ denotes the variance of a random variable. On the other hand, recalling the notation introduced in Section 3 and writing, for short, $S = \sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}$, we obtain for the second term

$$E_2 = \big|\mathbb{E}[T_n(\mathbf{r}_k(x)) \,|\, \mathbf{r}_k(X_1),\dots,\mathbf{r}_k(X_\ell), \mathcal{D}_k] - T(\mathbf{r}_k(x))\big|^2$$
$$= \Big|\sum_{i=1}^{\ell} W_{n,i}(x)\, \mathbb{E}[Y_i \,|\, \mathbf{r}_k(X_i)] - T(\mathbf{r}_k(x))\Big|^2\, \mathbf{1}_{\{S > 0\}} + T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}}$$
$$\le \sum_{i=1}^{\ell} \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\, \big|\mathbb{E}[Y_i \,|\, \mathbf{r}_k(X_i)] - T(\mathbf{r}_k(x))\big|^2\, \mathbf{1}_{\{S > 0\}} + T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}} \tag{A.10}$$
(by Jensen's inequality)
$$= \sum_{i=1}^{\ell} \frac{\mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}{\sum_{j=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_j)| \le \varepsilon_\ell\}}}\, \big|T(\mathbf{r}_k(X_i)) - T(\mathbf{r}_k(x))\big|^2\, \mathbf{1}_{\{S > 0\}} + T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}} \tag{A.11}$$
$$\le L^2 \varepsilon_\ell^2 + T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}}, \tag{A.12}$$
using the Lipschitz condition on $T$ assumed in Theorem 2.1, since $|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell$ for all $m$ whenever the weight of $X_i$ is nonzero. Now,
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 \le \mathbb{E}\int \big(T_n(\mathbf{r}_k(x)) - T(\mathbf{r}_k(x))\big)^2\, \mu(\mathrm{d}x).$$
Then, using the decomposition (A.7) and the upper bounds (A.9) and (A.12),
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2
\le 4R^2 \int \mathbb{E}\Bigg[\frac{\mathbf{1}_{\{S > 0\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}\Bigg]\, \mu(\mathrm{d}x) + L^2 \varepsilon_\ell^2 + \int \mathbb{E}\big[T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}}\big]\, \mu(\mathrm{d}x)$$
$$= 4R^2 \int \mathbb{E}\,\mathbb{E}\Bigg[\frac{\mathbf{1}_{\{S > 0\}}}{\sum_{i=1}^{\ell} \mathbf{1}_{\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X_i)| \le \varepsilon_\ell\}}}\,\Bigg|\, \mathcal{D}_k\Bigg]\, \mu(\mathrm{d}x) + L^2 \varepsilon_\ell^2 + \int \mathbb{E}\,\mathbb{E}\big[T^2(\mathbf{r}_k(x))\, \mathbf{1}_{\{S = 0\}} \,\big|\, \mathcal{D}_k\big]\, \mu(\mathrm{d}x).$$
Thus, thanks to Lemma A.1,
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2
\le 8R^2 \int \mathbb{E}\Bigg[\frac{1}{(\ell + 1)\, \mu\big(\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)}\Bigg]\, \mu(\mathrm{d}x) + L^2 \varepsilon_\ell^2$$
$$\quad + \int \mathbb{E}\Big[T^2(\mathbf{r}_k(x))\, \Big(1 - \mu\big(\textstyle\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)\Big)^{\ell}\Big]\, \mu(\mathrm{d}x).$$

Consequently,
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2
\le 8R^2 \int \mathbb{E}\Bigg[\frac{1}{(\ell + 1)\, \mu\big(\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)}\Bigg]\, \mu(\mathrm{d}x) + L^2 \varepsilon_\ell^2$$
$$\quad + \int \mathbb{E}\Big[T^2(\mathbf{r}_k(x))\, \exp\Big(-\ell\, \mu\big(\textstyle\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)\Big)\Big]\, \mu(\mathrm{d}x)$$
$$\le 8R^2 \int \mathbb{E}\Bigg[\frac{1}{(\ell + 1)\, \mu\big(\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)}\Bigg]\, \mu(\mathrm{d}x) + L^2 \varepsilon_\ell^2$$
$$\quad + \sup_x T^2(\mathbf{r}_k(x))\, \big(\max_{u \ge 0} u e^{-u}\big) \int \mathbb{E}\Bigg[\frac{1}{\ell\, \mu\big(\bigcap_{m=1}^{M}\{|r_{k,m}(x) - r_{k,m}(X)| \le \varepsilon_\ell\}\big)}\Bigg]\, \mu(\mathrm{d}x).$$
Introducing a bounded interval $I$ as in the proof of Proposition 2.2, we observe that the boundedness of the $r_{k,m}$ yields that
$$\mu\Big(\bigcup_{m=1}^{M} r_{k,m}^{-1}(I^c)\Big) = 0$$
as soon as $I$ is sufficiently large, independently of $k$. Then, proceeding as in the proof of Proposition 2.2, we obtain
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2
\le \frac{8R^2 \lceil |I|/\varepsilon_\ell \rceil^M}{\ell + 1} + L^2 \varepsilon_\ell^2 + e^{-1} R^2\, \frac{\lceil |I|/\varepsilon_\ell \rceil^M}{\ell}
\le \frac{R^2 C_1}{\ell\, \varepsilon_\ell^M} + L^2 \varepsilon_\ell^2,$$
for some positive constant $C_1$, independent of $k$. Hence, for the choice $\varepsilon_\ell \propto \ell^{-1/(M+2)}$, which balances the two terms ($1/(\ell\varepsilon_\ell^M) \asymp \varepsilon_\ell^2$ for this choice), we obtain
$$\mathbb{E}\big[T_n(\mathbf{r}_k(X)) - T(\mathbf{r}_k(X))\big]^2 \le C\, \ell^{-2/(M+2)},$$
for some positive constant $C$ depending on $L$ and $R$ and independent of $k$, as desired.

B. Numerical results
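All results below are obtained with the combined estimator $T_n$ analyzed above: the machines $r_{k,1},\dots,r_{k,M}$ are fitted on the first subsample, and $T_n$ averages the responses $Y_i$ of the second subsample whose machine outputs all fall within $\varepsilon_\ell$ of those of the query point. As a minimal illustration of this aggregation rule, here is a small Python sketch (illustrative only, not the reference implementation; the function and argument names are ours, and `alpha` stands for the fraction of machines required to agree, with $\alpha = 1$ the unanimity rule used in the theory):

```python
import numpy as np

def cobra_predict(preds_train, y_train, preds_query, eps, alpha=1.0):
    """Sketch of the combined estimator T_n at one query point.

    preds_train : (l, M) array of machine outputs r_{k,m}(X_i) on the second subsample.
    y_train     : (l,) array of responses Y_i of the second subsample.
    preds_query : (M,) array of machine outputs r_{k,m}(x) at the query point x.
    eps         : proximity parameter epsilon_l.
    alpha       : fraction of machines required to agree (alpha = 1: unanimity).
    """
    # Which machines place X_i within eps of the query point, for each i.
    close = np.abs(preds_train - preds_query) <= eps      # (l, M) booleans
    n_close = close.sum(axis=1)
    # Retain the points accepted by at least alpha * M machines.
    keep = n_close >= np.ceil(alpha * preds_train.shape[1])
    if not keep.any():
        return 0.0  # convention: T_n = 0 when no point is retained
    # Uniform weights over the retained points.
    return y_train[keep].mean()
```

The prediction over a testing set is then the vector of such averages, from which the quadratic errors reported in the tables and figures below are computed.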

Table 4 (SM): Quadratic errors of the implemented machines (lars, ridge, fnn, tree, rf) and COBRA in high-dimensional situations (Models 9, 10 and 11). Means (m) and standard deviations (sd) over 200 independent replications.

Table 5 (SM): Quadratic errors of the exponentially weighted aggregate (EWA) and COBRA (Models 9, 10, 11 and 12). Means (m) and standard deviations (sd) over 200 independent replications.
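Table 5 compares COBRA with an exponentially weighted aggregate of the same machines. As a generic illustration only (the precise EWA variant, its temperature parameter and the risk estimates used for the table are those of the paper and are not reproduced here), exponential weights can be formed from empirical risks as follows:

```python
import numpy as np

def ewa_weights(preds_val, y_val, temperature):
    """Generic exponentially weighted aggregation of M machines (illustrative sketch only).

    preds_val   : (n, M) array of machine predictions on a validation sample.
    y_val       : (n,) array of observed responses.
    temperature : positive smoothing parameter of the exponential weights.
    """
    risks = np.mean((preds_val - y_val[:, None]) ** 2, axis=0)   # empirical risk of each machine
    w = np.exp(-risks / temperature)
    return w / w.sum()

def ewa_predict(weights, preds_query):
    # Convex combination of the machines' predictions at a query point.
    return float(np.dot(weights, preds_query))
```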

Figure 2 (SM): Examples of calibration of the parameters $\varepsilon_\ell$ and $\alpha$. The bold point is the minimum. (a) Model 5, uncorrelated design. (b) Model 5, correlated design. (c) Model 9. (d) Model 12. Each panel plots the (quadratic) risk against $\varepsilon$, with one curve per number of machines required to agree (1 machine, 2 machines, and so on, up to 6 machines in panels (a)-(b) and 5 machines in panels (c)-(d)).
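Figure 2 illustrates how $\varepsilon_\ell$ (and the number of machines required to agree, governed by $\alpha$) is calibrated: the empirical quadratic risk is computed over a grid and the minimizer (the bold point) is retained. A minimal grid-search sketch, reusing the illustrative `cobra_predict` function given above (again an assumption of ours, not the reference code):

```python
import numpy as np

def calibrate(preds_train, y_train, preds_val, y_val, eps_grid, alpha_grid=(1.0,)):
    """Return the (eps, alpha) pair minimizing the empirical quadratic risk on a validation set."""
    best_eps, best_alpha, best_risk = None, None, np.inf
    for alpha in alpha_grid:
        for eps in eps_grid:
            preds = np.array([cobra_predict(preds_train, y_train, q, eps, alpha)
                              for q in preds_val])
            risk = np.mean((preds - y_val) ** 2)
            if risk < best_risk:
                best_eps, best_alpha, best_risk = eps, alpha, risk
    return best_eps, best_alpha, best_risk
```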

Figure 3 (SM): Boxplots of quadratic errors, uncorrelated design. From left to right: lars, ridge, fnn, tree, randomforest, COBRA. (a) Model 1. (b) Model 2. (c) Model 3. (d) Model 4. (e) Model 5. (f) Model 6. (g) Model 7. (h) Model 8.

Figure 4 (SM): Boxplots of quadratic errors, correlated design. From left to right: lars, ridge, fnn, tree, randomforest, COBRA. (a) Model 1. (b) Model 2. (c) Model 3. (d) Model 4. (e) Model 5. (f) Model 6. (g) Model 7. (h) Model 8.

Figure 5 (SM): Prediction over the testing set, uncorrelated design. The more points on the first bisectrix, the better the prediction. Each panel plots the predicted values against the true responses on the testing set. (a) Model 1. (b) Model 2. (c) Model 3. (d) Model 4. (e) Model 5. (f) Model 6. (g) Model 7. (h) Model 8.

Figure 6 (SM): Prediction over the testing set, correlated design. The more points on the first bisectrix, the better the prediction. (a) Model 1. (b) Model 2. (c) Model 3. (d) Model 4. (e) Model 5. (f) Model 6. (g) Model 7. (h) Model 8.

Figure 7 (SM): Examples of reconstruction of the functional dependencies, for covariates 1 to 4 (plots of $f_1(x)$, $f_2(x)$, $f_3(x)$ and $f_4(x)$). (a) Model 1, uncorrelated design. (b) Model 1, correlated design. (c) Model 3, uncorrelated design. (d) Model 3, correlated design.

Figure 8 (SM): Boxplots of errors, high-dimensional models. From left to right: Lasso, Ridge, FNN, CART, RF, COBRA. (a) Model 9. (b) Model 10. (c) Model 11.

Figure 9 (SM): How stable is COBRA? (a) Boxplots of errors when the initial sample is randomly cut (1000 replications of Model 1); from left to right: Lasso, Ridge, FNN, CART, RF, COBRA. (b) Empirical risk with respect to the size of the subsample $\mathcal{D}_k$ (split point), in Model 2.

Figure 10 (SM): Boxplots of errors: EWA vs. COBRA. (a) Model 9. (b) Model 10. (c) Model 11. (d) Model 12.

Figure 11 (SM): Prediction over the testing set, real-life data sets. (a) Concrete Slump Test. (b) Concrete Compressive Strength. (c) Wine Quality, red wine. (d) Wine Quality, white wine.

Figure 12 (SM): Boxplots of quadratic errors, real-life data sets. From left to right: Lasso, Ridge, KNN, CART, RF, COBRA. (a) Concrete Slump Test. (b) Concrete Compressive Strength. (c) Wine Quality, red wine. (d) Wine Quality, white wine.
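In all the boxplots above, the quantity displayed for each method is its quadratic error on an independent testing set, i.e. the empirical counterpart of $\mathbb{E}[\hat f(X) - Y]^2$, recomputed over the replications. A one-function sketch (names ours):

```python
import numpy as np

def quadratic_error(y_pred, y_test):
    # Empirical quadratic risk on the testing set, as reported in the boxplots.
    y_pred, y_test = np.asarray(y_pred, dtype=float), np.asarray(y_test, dtype=float)
    return float(np.mean((y_pred - y_test) ** 2))
```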
