APPENDIX

The Appendix is structured as follows: Section A contains the missing proofs, Section B contains the result on the applicability of our techniques to Stackelberg games, Section C contains results about the sample complexity of standard SUQR, Section D contains the weaker sample complexity bound for the generalized SUQR model derived using the approach of Haussler, and Section E contains additional experiments.

A. PROOFS

Proof of Theorem 1

PROOF. First, Haussler uses the following pseudo-metric ρ on A, defined using the loss function l: ρ(a, b) = max_{y ∈ Y} |l(y, a) − l(y, b)|. To start with, relying on Haussler's result, we show

Pr(∀ h ∈ H : |r̂_h(z) − r_h(p)| < α') ≥ 1 − 4 C(α'/48, H, ρ) e^{−α'^2 m / (576 M^2)}.

Choose α = α'/4M and ν = 2M in Theorem 9 of [4]. Using property (3) (Section 2.1, [4]) of d_ν we obtain |r − s| ≤ α' whenever d_ν(r, s) ≤ α. Using this directly in Theorem 9 of Haussler [4] we obtain the desired result above. Note the dependence of the above probability on m (the number of samples), and compare it to the first pre-condition in the PAC learning result. By equating δ/2 to 4 C(α'/48, H, ρ) e^{−α'^2 m / (576 M^2)}, we derive the sample complexity as

m ≥ (576 M^2 / α'^2) ln(8 C(α'/48, H, ρ) / δ).

We wish to compute a bound on C(ε, H, ρ) in order to use the above result to obtain the sample complexity. First, we prove that ρ ≤ 2 d_{l^1} for the loss function we use. This result is used to bound C(ε, H, ρ), since it is readily verified from the definition that C(ε, H, ρ) ≤ C(ε/2, H, d_{l^1}). Such a bound directly gives

m ≥ (576 M^2 / α'^2) ln(8 C(α'/96, H, d_{l^1}) / δ).

Below we prove that ρ ≤ 2 d_{l^1}.

LEMMA 8. Given the loss function defined above, we have ρ(a, b) ≤ 2 max_i |a_i − b_i| ≤ 2 Σ_i |a_i − b_i| = 2 d_{l^1}(a, b).

PROOF. By definition,

ρ(a, b) = max_i |l(y_i, a) − l(y_i, b)| = max_i |(b_i − a_i) + ln(Σ_j e^{a_j} / Σ_j e^{b_j})|.

Fix i and let r = Σ_j e^{a_j} / Σ_j e^{b_j}. There are j and k such that e^{a_j − b_j} ≥ e^{a_l − b_l} ≥ e^{a_k − b_k} for all l; since a ratio of sums of positive terms lies between the smallest and largest ratio of corresponding terms,

min_l e^{a_l − b_l} = e^{a_k − b_k} ≤ r ≤ e^{a_j − b_j} = max_l e^{a_l − b_l}.

The greatest positive value of ln r is therefore max_l (a_l − b_l) and the least negative value possible is min_l (a_l − b_l); hence |ln r| ≤ max_l |a_l − b_l|. Thus,

|l(y_i, a) − l(y_i, b)| ≤ |a_i − b_i| + max_l |a_l − b_l| ≤ 2 max_i |a_i − b_i|.

Hence, we obtain ρ(a, b) = max_i |l(y_i, a) − l(y_i, b)| ≤ 2 max_i |a_i − b_i|, and the last inequality in the lemma statement is trivial.

Thus, using the above result we get

m ≥ (576 M^2 / α'^2) ln(8 C(α'/96, H, d_{l^1}) / δ).

Proof of Lemma 1

PROOF. First, note that x'_i = x_i − x_{T+1} lies in [−1, 1] due to the constraints on x_i and x_{T+1}. Then, for any two functions g, g' ∈ G_i we have the following result:

d_{L^1(P, d_{l^1})}(g, g') = ∫_X d_{l^1}(w (x_i − x_{T+1}), w' (x_i − x_{T+1})) dP(x) = ∫_X |(w − w')(x_i − x_{T+1})| dP(x) ≤ ∫_X |w − w'| dP(x) = |w − w'|.

Also, note that since the range of any g = w (x_i − x_{T+1}) is [−M/4, M/4] and x_i − x_{T+1} lies in [−1, 1], we can claim that w lies in [−M/4, M/4]. Thus, given that the distance between functions is bounded by the difference in weights, it is enough to divide the M/2 range of the weights into intervals of size 2ε and consider the functions at the interval boundaries. Hence the ε-cover has at most M/(4ε) functions. The proof for the constant-valued functions F_i is similar, since it is straightforward to see that the distance between two functions in this space is the difference in the constant outputs. Also, the constants lie in [−M/4, M/4]; the argument is then the same as in the G_i case.
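As a quick numerical sanity check of Lemma 8 above (a sketch we add here, not part of the original argument; it assumes the log loss l(y_i, a) = −ln(e^{a_i} / Σ_j e^{a_j}) used in the proof), the following draws random score vectors and verifies ρ(a, b) ≤ 2 max_i |a_i − b_i|:

```python
import numpy as np

def log_loss(scores, i):
    """l(y_i, a) = -ln(e^{a_i} / sum_j e^{a_j}), computed stably."""
    z = scores - scores.max()
    return np.log(np.exp(z).sum()) - z[i]

def rho(a, b):
    """rho(a, b) = max_i |l(y_i, a) - l(y_i, b)|."""
    return max(abs(log_loss(a, i) - log_loss(b, i)) for i in range(len(a)))

rng = np.random.default_rng(0)
for _ in range(10_000):
    a = rng.normal(size=8)
    b = a + rng.normal(scale=0.5, size=8)
    # Lemma 8: rho(a, b) <= 2 max_i |a_i - b_i| (<= 2 sum_i |a_i - b_i|)
    assert rho(a, b) <= 2 * np.max(np.abs(a - b)) + 1e-9
```

The factor of 2 reflects the two places a score vector enters the loss: the a_i term and the log-normalizer.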
Proof of Lemma 3

PROOF. First, the space of functions Ĥ = {h / K̂ : h ∈ H_i} is Lipschitz with Lipschitz constant 1 and |h_i(x)| ≤ M/(2K̂). Clearly N(ε, H_i, d_{l^1}) ≤ N(ε/K̂, Ĥ, d_{l^1}). Using the following result from [3]: for any real-valued function space that is Lipschitz with constant 1 and uniformly bounded by M/(2K̂), any positive integer s and any distance d,

N(ε, Ĥ, d_{l^1}) ≤ (M(s+1)/(K̂ ε) + 1)^{(s+1) N(s ε / (s+1), X, d)}.

Then, we get the bound on N(ε/K̂, Ĥ, d_{l^1}) by choosing s = 1 and d = d_{l^1}, and hence obtain the desired bound on N(ε, H_i, d_{l^1}).

Proof of Lemma 4

PROOF. For ease of notation, we do the proof with k standing for K + 1. Let Y_i = U_i − 0.5; then |Y_i| ≤ 1/2 and S_T − 0.5T = Σ_i Y_i. Using Bernstein's inequality with the fact that E[Y_i] = 0 and E[Y_i^2] = 1/12,

Pr(Σ_i Y_i = S_T − 0.5T ≤ −t) ≤ e^{−0.5 t^2 / (T/12 + t/6)}.

Thus, Pr(S_T ≤ 0.5T − t) ≤ e^{−0.5 t^2 / (T/12 + t/6)}. Take k = 0.5T − t, and hence t = 0.5T − k = (0.5 − k/T) T. Hence,

Pr(S_T ≤ k) ≤ e^{−3 T (0.5 − k/T)^2 / (1 − k/T)}.
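To illustrate the tail bound just derived (our own sketch; it assumes, as the 1/12 variance term suggests, that the U_i are i.i.d. uniform on [0, 1], and the values of T and k below are arbitrary), a Monte-Carlo estimate of Pr(S_T ≤ k) can be compared against the Bernstein bound:

```python
import numpy as np

def bernstein_bound(T, k):
    """Bound from Lemma 4: P(S_T <= k) <= exp(-3*T*(0.5 - k/T)**2 / (1 - k/T))."""
    c = 0.5 - k / T
    return np.exp(-3 * T * c**2 / (1 - k / T))

def empirical_tail(T, k, trials=20_000, seed=0):
    """Monte-Carlo estimate of P(S_T <= k) for S_T a sum of T uniform(0,1) draws."""
    rng = np.random.default_rng(seed)
    s = rng.random((trials, T)).sum(axis=1)
    return (s <= k).mean()

T, k = 200, 80            # hypothetical values with k < 0.5*T
print(empirical_tail(T, k), "<=", bernstein_bound(T, k))
```

For T = 200 and k = 80 the empirical tail sits well below the bound of e^{-10} ≈ 4.5e-5, as expected from a concentration inequality.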

Proof of Theorem 3

PROOF. Given the results of Lemma 3, we get that the sample complexity is of order

(1/α'^2) (ln(1/δ) + (T+1) N(α', X, d_{l^1})).

Now, using the result of Lemma 4, we get the required order in the Theorem. We wish to note that if K/T is a constant then the O(e^{−T}) term from Lemma 4 gets swamped by the other terms. However, in practice, for fixed T this term does provide a lower actual complexity bound than what is indicated by the order.

Proof of Lemma 5

PROOF. Observe that, due to the definition of K, any solution to MinLip will have Lipschitz constant at least K. Thus, it suffices to show that the Lipschitz constant of h_i is K in order to prove that h_i is a solution of MinLip. Take any two x, x'. If the min in the expression for h_i occurs at the same j for both x and x', then |h_i(x) − h_i(x')| is given by K | ||x − x_j|| − ||x' − x_j|| |. By an application of the triangle inequality,

−||x − x'|| ≤ ||x − x_j|| − ||x' − x_j|| ≤ ||x − x'||.

Thus, |h_i(x) − h_i(x')| ≤ K ||x − x'||. For the other case, when the min for x occurs at some j and the min for x' at some j', we have the following: h_i(x) = h_{ij} + K ||x − x_j|| and h_i(x') = h_{ij'} + K ||x' − x_{j'}||. Also, due to the min, h_i(x) ≤ h_{ij'} + K ||x − x_{j'}|| = h_i(x') + K ||x − x_{j'}|| − K ||x' − x_{j'}||. Thus, we get

h_i(x) − h_i(x') ≤ K (||x − x_{j'}|| − ||x' − x_{j'}||) ≤ K ||x − x'||.

Using the symmetric inequality for x' we get

h_i(x') − h_i(x) ≤ K (||x' − x_j|| − ||x − x_j||) ≤ K ||x − x'||.

Combining both, we can claim that |h_i(x) − h_i(x')| ≤ K ||x − x'||. Thus, we have proved that h_i is K-Lipschitz, and hence a solution of MinLip.

Proof of Lemma 6

PROOF. Let p_X be the marginal of p(x, y) on the space X. Define the expected entropy

E[H(x)] = −∫ p_X(x) Σ_i q_i^p(x) ln q_i^p(x) dx.

Given the loss function, we know that

r_h(p) = −∫∫ p(x, y) Σ_i I_{y=t_i} ln q_i^h(x) dx dy.

This is the same as −∫∫ p_X(x) p(y|x) Σ_i I_{y=t_i} ln q_i^h(x) dx dy, which reduces to −∫ p_X(x) Σ_i q_i^p(x) ln q_i^h(x) dx. Thus, we have

E[H(x)] + r_h(p) = ∫ p_X(x) Σ_i q_i^p(x) ln (q_i^p(x) / q_i^h(x)) dx.

Hence, we obtain E[H(x)] + r_h(p) = E[KL(q^p(x) || q^h(x))]. Hence, r_h(p) − r_{h*}(p) is equal to

E[KL(q^p(x) || q^h(x))] − E[KL(q^p(x) || q^{h*}(x))].

Thus, from the assumptions, we get E[KL(q^p(x) || q^h(x))] ≤ α' + ε with probability 1 − δ. Next, using Markov's inequality, with probability 1 − δ,

Pr(KL(q^p(x) || q^h(x)) ≥ (α' + ε)^{2/3}) ≤ (α' + ε)^{1/3};

that is, using the notation β = (α' + ε)^{1/3}, with probability 1 − δ,

Pr(KL(q^p(x) || q^h(x)) ≥ β^2) ≤ β.

Using Pinsker's inequality we get (1/2) ||q^p(x) − q^h(x)||_1^2 ≤ KL(q^p(x) || q^h(x)). That is, the event KL(q^p(x) || q^h(x)) ≤ β^2 implies the event ||q^p(x) − q^h(x)||_1 ≤ √2 β. Thus,

Pr(||q^p(x) − q^h(x)||_1 ≥ √2 β) ≤ Pr(KL(q^p(x) || q^h(x)) ≥ β^2).

Thus, we obtain: with probability 1 − δ, Pr(||q^p(x) − q^h(x)||_1 ≥ √2 β) ≤ β.

Proof of Lemma 7

PROOF. We know that q_i^h(x) = e^{h_i(x)} / Σ_j e^{h_j(x)} (assume h_{T+1}(x) = 0). Thus,

q_i^h(x) − q_i^h(x') = q_i^h(x') ( e^{h_i(x) − h_i(x')} · Σ_j e^{h_j(x')} / Σ_j e^{h_j(x)} − 1 ).

Let r denote Σ_j e^{h_j(x')} / Σ_j e^{h_j(x)}. There are l and k such that e^{h_l(x') − h_l(x)} ≥ e^{h_j(x') − h_j(x)} ≥ e^{h_k(x') − h_k(x)} for all j; since a ratio of sums of positive terms lies between the smallest and largest ratio of corresponding terms, min_j e^{h_j(x') − h_j(x)} ≤ r ≤ max_j e^{h_j(x') − h_j(x)}. First, note that due to our assumption that for each i, |h_i(x) − h_i(x')| ≤ K̂ ||x − x'||, we have

e^{−K̂ ||x − x'||} ≤ min_j e^{h_j(x') − h_j(x)} ≤ r ≤ max_j e^{h_j(x') − h_j(x)} ≤ e^{K̂ ||x − x'||}.

Using the Lipschitzness we can also claim that e^{−K̂ ||x − x'||} ≤ e^{h_i(x) − h_i(x')} ≤ e^{K̂ ||x − x'||}. Thus,

e^{−2K̂ ||x − x'||} ≤ e^{h_i(x) − h_i(x')} r ≤ e^{2K̂ ||x − x'||}.

Since e^{−2K̂ ||x − x'||} < 1 and e^{2K̂ ||x − x'||} > 1, we have

|e^{h_i(x) − h_i(x')} r − 1| ≤ max(1 − e^{−2K̂ ||x − x'||}, e^{2K̂ ||x − x'||} − 1).

Also, it is a fact that |e^y − 1| ≤ 1.5 |y| for |y| ≤ 3/4. Thus, we obtain

|e^{h_i(x) − h_i(x')} r − 1| ≤ 3 K̂ ||x − x'|| for 2 K̂ ||x − x'|| ≤ 3/4.

Thus,

||q^h(x) − q^h(x')||_1 = Σ_i |q_i^h(x) − q_i^h(x')| = Σ_i q_i^h(x') |e^{h_i(x) − h_i(x')} r − 1| ≤ Σ_i q_i^h(x') · 3 K̂ ||x − x'||

for K̂ ||x − x'|| ≤ 3/8. Since Σ_i q_i^h(x') = 1, we have ||q^h(x) − q^h(x')||_1 ≤ 3 K̂ ||x − x'|| for ||x − x'|| ≤ 3/(8K̂). In other words, q^h is locally 3K̂-Lipschitz on every norm ball of size 3/(8K̂).
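The local 3K̂-Lipschitz claim can be probed numerically. The sketch below is ours: it uses an arbitrary linear score function as a stand-in for a learned h (so that |h_i(x) − h_i(x')| ≤ K̂ ||x − x'||_1 with K̂ the largest absolute weight), samples pairs of points at l_1 distance at most 3/(8K̂), and checks ||q^h(x) − q^h(x')||_1 ≤ 3K̂ ||x − x'||_1.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
T = 5
W = rng.normal(size=(T + 1, T))      # hypothetical linear scores h_i(x) = W[i] . x
K_hat = np.abs(W).max()              # each h_i is K_hat-Lipschitz w.r.t. ||.||_1 on x

for _ in range(10_000):
    x = rng.random(T)
    step = rng.normal(size=T)
    step *= (3 / (8 * K_hat)) * rng.random() / np.abs(step).sum()  # keep ||x - x'||_1 <= 3/(8*K_hat)
    xp = x + step
    lhs = np.abs(softmax(W @ x) - softmax(W @ xp)).sum()
    assert lhs <= 3 * K_hat * np.abs(step).sum() + 1e-9
```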
The following allows us to prove global Lipschitzness.

LEMMA 9. Any locally L-Lipschitz function f (for every l_p ball of size δ_0) on a compact convex set X ⊆ R^n is Lipschitz on the set X. The Lipschitz constant is also L.

PROOF. Take any two points x, y ∈ X; the straight line joining x and y lies in X (as X is convex). Also, a finite number of balls of size δ_0 cover X (due to compactness). Thus, there are finitely many points x = z_1, ..., z_µ = y on the line from x to y such that d_{l_p}(z_i, z_{i+1}) ≤ δ_0. Further, since these points lie on a straight line we have

d_{l_p}(x, y) = Σ_{i=1}^{µ−1} d_{l_p}(z_i, z_{i+1}).

Then, let any metric d be used to measure distance in the range space of f; thus, we get

d(f(x), f(y)) ≤ Σ_{i=1}^{µ−1} d(f(z_i), f(z_{i+1})) ≤ Σ_{i=1}^{µ−1} L d_{l_p}(z_i, z_{i+1}) = L d_{l_p}(x, y).

Since in our case the defender mixed strategy space is compact and convex and q^h(x) satisfies the above lemma with L = 3K̂ and δ_0 = 3/(8K̂), q^h(x) is 3K̂-Lipschitz.
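The chaining step in Lemma 9 is mechanical enough to transcribe directly (our sketch; f, the endpoints and δ_0 below are placeholders): split the segment into hops of length at most δ_0, apply the local bound hop by hop, and note that the hop lengths telescope to the full distance.

```python
import numpy as np

def lipschitz_bound_via_chaining(f, x, y, L, delta0, p=1):
    """Check |f(x) - f(y)| <= L * d_p(x, y) using only local L-Lipschitzness on l_p balls of size delta0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dist = np.linalg.norm(x - y, ord=p)
    mu = max(int(np.ceil(dist / delta0)), 1)               # enough hops so each has length <= delta0
    z = [x + (y - x) * t for t in np.linspace(0.0, 1.0, mu + 1)]
    total = 0.0
    for a, b in zip(z[:-1], z[1:]):
        hop = np.linalg.norm(a - b, ord=p)
        assert hop <= delta0 + 1e-12                        # each hop stays within one local ball
        assert abs(f(a) - f(b)) <= L * hop + 1e-9           # the local Lipschitz property, hop by hop
        total += L * hop
    assert abs(total - L * dist) < 1e-9                     # hop lengths telescope along the segment
    return abs(f(x) - f(y)) <= L * dist + 1e-9

# example: max(|v_i|) is (globally, hence locally) 1-Lipschitz w.r.t. the l_1 norm
print(lipschitz_bound_via_chaining(lambda v: np.abs(v).max(), [0, 0, 0], [1, 2, 3], L=1.0, delta0=0.25))
```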

Proof of Theorem 4

PROOF. For notational ease, let γ denote √2 (α' + ε)^{1/3}. Coupled with the guarantee that with prob. 1 − δ, Pr(||q^p(x) − q^h(x)||_1 ≥ γ) ≤ γ, the assumptions guarantee that with prob. 1 − δ, for the learned hypothesis h there must exist an x' ∈ B(x*, ε) such that ||q^p(x') − q^h(x')||_1 ≤ γ, and there must exist an x̃' ∈ B(x̃, ε) such that ||q^p(x̃') − q^h(x̃')||_1 ≤ γ (here x* is the optimal defender strategy against the true q^p and x̃ is the optimal strategy against the learned q^h). The following are immediate using the triangle inequality, the two results above and the Lipschitzness assumptions:

||q^p(x*) − q^h(x')||_1 ≤ Kε + γ    (opt-x*)
||q^p(x̃) − q^h(x̃')||_1 ≤ 3K̂ε + γ    (opt-x̃)

We call x̃^T U q^h(x̃) ≥ x'^T U q^h(x') equation (opt-h). Thus, we bound the utility loss as follows:

x*^T U q^p(x*) − x̃^T U q^p(x̃)
= x*^T U q^p(x*) − x̃^T U q^h(x̃) + x̃^T U q^h(x̃) − x̃^T U q^p(x̃)
≤ x*^T U q^p(x*) − x'^T U q^h(x') + x̃^T U q^h(x̃) − x̃^T U q^p(x̃)    [using (opt-h)]
= (x* − x')^T U q^p(x*) + x'^T U (q^p(x*) − q^h(x')) + x̃^T U q^h(x̃) − x̃^T U q^p(x̃)
≤ ε + (Kε + γ) + x̃^T U q^h(x̃) − x̃^T U q^p(x̃)    [using x' ∈ B(x*, ε), (opt-x*)]
= (K + 1)ε + γ + x̃^T U (q^h(x̃) − q^h(x̃')) + x̃^T U (q^h(x̃') − q^p(x̃))
≤ (K + 1)ε + γ + 6K̂ε + γ    [using x̃' ∈ B(x̃, ε) with Lipschitz q^h, (opt-x̃)]
= (K + 1)ε + 6K̂ε + 2γ.

B. EXTENSION TO STACKELBERG GAMES

Our technique extends to Stackelberg games by noting that the single-resource case K = 1 with T targets gives Σ_i x_i ≤ 1. This directly maps to a probability distribution over T + 1 actions: the x_i's, together with x_{T+1} = 1 − Σ_i x_i, give the probability of playing each action. With this set-up the security game is a standard Stackelberg game, but where the leader has T + 1 actions and the follower has T + 1 actions. Thus, in order to capture the general Stackelberg game, we assume N actions for the adversary (instead of T + 1 above). Then, similar to security games, q_1, ..., q_N denotes the adversary's probability of playing each action. Thus, the function h now outputs vectors of size N (instead of T + 1), i.e., A is a subset of N-dimensional Euclidean space. The model of the security game in the PAC framework extends as is to this Stackelberg setup, just with h(x) and A being N-dimensional. The rest of the analysis proceeds exactly as for security games, for both the parametric and the non-parametric case, by replacing the T + 1 corresponding to the adversary's action space by N. Since the proof technique is exactly the same, we just state the final results. Thus, for a Stackelberg game with T + 1 leader actions and N follower actions, the bound of Theorem 1 becomes

m ≥ (576 M^2 / α'^2) ln(8 C(α'/96N, H, d_{l^1}) / δ).

It can be seen from the proof for the parametric part that the sample complexity does not depend on the dimensionality of X, but only on the dimensionality of A. Hence, the sample complexity result for the generalized SUQR parametric case is

O((1/α'^2) (ln(1/δ) + N ln(N/α')))

and for the non-parametric case, which depends on both the dimensionality of X and N, the sample complexity is

O((1/α'^2) (ln(1/δ) + (N + 1) N(α', X, d_{l^1}))).
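As a small illustration of the mapping described at the start of this section (our sketch; the coverage vector is arbitrary): a single-resource coverage vector with Σ_i x_i ≤ 1 is read as a mixed strategy over T + 1 leader actions, with the extra action carrying the leftover probability.

```python
import numpy as np

def coverage_to_mixed_strategy(x):
    """Map a single-resource coverage vector (sum <= 1) to a distribution over T+1 actions."""
    x = np.asarray(x, float)
    assert (x >= 0).all() and x.sum() <= 1 + 1e-12
    return np.append(x, 1.0 - x.sum())      # last entry: probability of protecting nothing

print(coverage_to_mixed_strategy([0.3, 0.2, 0.1]))   # -> [0.3 0.2 0.1 0.4]
```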
C. ANALYSIS OF STANDARD SUQR FORM

For SUQR the rewards and penalties are given and fixed. Let the rewards be r = r_1, ..., r_T (each r_i ∈ [0, r_max], r_max > 0), and the penalty values p = p_1, ..., p_T (each p_i ∈ [p_min, 0], p_min < 0). Thus, the output of h is

h(x) = ⟨ w_1 x'_1 + w_2 r'_1 + w_3 p'_1, ..., w_1 x'_T + w_2 r'_T + w_3 p'_T ⟩,

where r'_i = r_i − r_{T+1} and the same for p'_i. Note that in the above formulation all the component functions h_i(x) have the same weights. We can consider the function space H as the following direct-sum semi-free product

G ⊕ F ⊕ E = { ⟨g_1 + f_1 + e_1, ..., g_T + f_T + e_T⟩ : ⟨g_1, ..., g_T⟩ ∈ G, ⟨f_1, ..., f_T⟩ ∈ F, ⟨e_1, ..., e_T⟩ ∈ E },

where each of G, F, E is defined below.

G = { ⟨g_1, ..., g_T⟩ : g_i ∈ G_i, all g_i have the same weight }, where G_i has functions of the form w x'_i.
F = { ⟨f_1, ..., f_T⟩ : f_i ∈ F_i, all f_i have the same weight }, where F_i has constant-valued functions of the form w r'_i.
E = { ⟨e_1, ..., e_T⟩ : e_i ∈ E_i, all e_i have the same weight }, where E_i has constant-valued functions of the form w p'_i.

Consider an ε/3-cover U_e for E, an ε/3-cover U_f for F and an ε/3-cover U_g for G. We claim that U_e ⊕ U_f ⊕ U_g is an ε-cover for E ⊕ F ⊕ G. Thus, the size of the ε-cover for E ⊕ F ⊕ G is bounded by |U_e| |U_f| |U_g|. Thus,

N(ε, H, d_{l^1}) ≤ N(ε/3, G, d_{l^1}) N(ε/3, F, d_{l^1}) N(ε/3, E, d_{l^1}).

Taking the sup over P we get

C(ε, H, d_{l^1}) ≤ C(ε/3, G, d_{l^1}) C(ε/3, F, d_{l^1}) C(ε/3, E, d_{l^1}).

Now, we show that U_e ⊕ U_f ⊕ U_g is an ε-cover for H = E ⊕ F ⊕ G. Fix any h ∈ H = E ⊕ F ⊕ G. Then, h = e + f + g for some e ∈ E, f ∈ F, g ∈ G. Let e' ∈ U_e be ε/3-close to e, f' ∈ U_f be ε/3-close to f and g' ∈ U_g be ε/3-close to g. Then, with h' = e' + f' + g',

d_{L^1(P, d_{l^1})}(h, h') = ∫_X (1/k) Σ_{i=1}^{k} d_{l^1}(h_i(x), h'_i(x)) dP(x)
≤ ∫_X (1/k) Σ_{i=1}^{k} [ d_{l^1}(g_i(x), g'_i(x)) + d_{l^1}(f_i(x), f'_i(x)) + d_{l^1}(e_i(x), e'_i(x)) ] dP(x)
= d_{L^1(P, d_{l^1})}(g, g') + d_{L^1(P, d_{l^1})}(f, f') + d_{L^1(P, d_{l^1})}(e, e')
≤ ε.
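The cover-combination argument can be made concrete with a small sketch (ours; the numerical values of M, r_max, p_min and ε are placeholders): grid each weight range finely enough to get an ε/3-cover of the corresponding component space, and take all combinations. The size of the combined cover is the product of the three grid sizes, mirroring C(ε, H, d_{l^1}) ≤ C(ε/3, G, d_{l^1}) C(ε/3, F, d_{l^1}) C(ε/3, E, d_{l^1}).

```python
import itertools
import numpy as np

def weight_grid(lo, hi, max_gap):
    """1-D grid whose points are within max_gap/2 of every point of [lo, hi]."""
    n = int(np.ceil((hi - lo) / max_gap)) + 1
    return np.linspace(lo, hi, n)

# placeholder values (not from the paper)
M, r_max, p_min, eps = 4.0, 5.0, -3.0, 0.1

# Distances between same-space functions are |w - w'|, |w - w'| * r_max and |w - w'| * |p_min|
# respectively, so these gaps make each grid an eps/3-cover of its component space.
U_g = weight_grid(-M / 6, M / 6, 2 * eps / 3)
U_f = weight_grid(-M / (6 * r_max), M / (6 * r_max), 2 * eps / (3 * r_max))
U_e = weight_grid(-M / (6 * abs(p_min)), M / (6 * abs(p_min)), 2 * eps / (3 * abs(p_min)))

cover = list(itertools.product(U_g, U_f, U_e))     # eps-cover of G (+) F (+) E
print(len(U_g), len(U_f), len(U_e), len(cover))    # |cover| = |U_g| * |U_f| * |U_e|
```

The grid sizes here differ by constant factors from the counts used in the capacity bound below (which grids each weight range with spacing ε/3); only the product structure matters for the argument.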

Similar to Lemma 1, it is possible to show that, for any probability distribution P, for any two functions g, g' ∈ G, d_{L^1(P, d_{l^1})}(g, g') ≤ |w − w'|; for any f, f' ∈ F, d_{L^1(P, d_{l^1})}(f, f') ≤ |w − w'| r_max; and for any e, e' ∈ E, d_{L^1(P, d_{l^1})}(e, e') ≤ |w − w'| |p_min|. Assume each of the functions has range [−M/6, M/6] (this does not affect the order in terms of M). Given these ranges, w for g can take values in [−M/6, M/6], w for f can take values in [−M/(6 r_max), M/(6 r_max)] and w for e can take values in [−M/(6 |p_min|), M/(6 |p_min|)]. To get a capacity of ε/3 it is enough to divide the respective w range into intervals of ε/3 and consider the boundaries. This yields an ε/3-capacity of M/ε, M/(ε r_max) and M/(ε |p_min|) for G, F and E respectively. Thus,

C(ε, H, d_{l^1}) ≤ (M/ε)^3 / (r_max |p_min|).

Plugging this into the sample complexity from Theorem 1, we get that the sample complexity is

O((1/α'^2) (ln(1/δ) + ln(1/α'))).

D. ALTERNATE PROOF FOR GENERALIZED SUQR SAMPLE COMPLEXITY

As discussed in the main paper, we use the function space H with each component function space H_i given by w_i x_i + c_i. Then, we can directly use Equation 1. We still need to bound C(ε, H_i, d_{l^1}). For this, we note that the set of functions w_i x_i + c_i has two free parameters, w_i and c_i; thus, this function space is a subset of a vector space of functions of dimension two (two values are needed to represent each function). Using the pseudo-dimension technique [4], we know that for pseudo-dimension d of the function space H_i we get

C(ε, H_i, d_{l^1}) ≤ ((2em/ε) ln(2em/ε))^d.

Also, we know [4] that the pseudo-dimension is equal to the vector space dimension if the function class is a subset of a vector space. Therefore, for our case d = 2. Therefore, using Equation 1 we get

C(ε, H, d_{l^1}) ≤ ((2em/ε) ln(2em/ε))^{2(T+1)}.

Plugging this result into Theorem 1, we get a sample complexity of

O((1/α'^2) (ln(1/δ) + (T+1) ln^2(1/α'))).
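To get a feel for what the pseudo-dimension route gives numerically, here is a small sketch (ours; ε, m and T below are arbitrary illustrative values) that evaluates the log of the component capacity bound with d = 2 and the product-over-components form used above:

```python
import numpy as np

def log_capacity_component(eps, m, d=2):
    """ln of the bound ((2*e*m/eps) * ln(2*e*m/eps))**d on C(eps, H_i, d_l)."""
    u = 2 * np.e * m / eps
    return d * (np.log(u) + np.log(np.log(u)))

def log_capacity_full(eps, m, T, d=2):
    """Product form over the T+1 component spaces, i.e. exponent 2*(T+1) overall."""
    return (T + 1) * log_capacity_component(eps, m, d)

eps, m, T = 0.05, 10_000, 8          # arbitrary illustrative values
print(log_capacity_component(eps, m), log_capacity_full(eps, m, T))
```

Since the capacity bound itself depends on m, plugging it into Theorem 1 gives an inequality that is implicit in m, which is one way to see where the extra logarithmic factor in this weaker bound can come from.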
E. EXPERIMENTAL RESULTS

Here we provide additional experimental results on the Uganda, AMT and simulated datasets. The AMT dataset consisted of 32 unique mixed strategies, 16 of which were deployed for one payoff structure and the remaining 16 for another. In the main paper we provided results on AMT data for payoff structure 1. Here, in Figs. 1(a) and 1(b), we show results on the AMT data for both the parametric (SUQR) and NPL learning settings on payoff structure 2. For running experiments on simulated data, we used the same mixed strategies and features as for the AMT data, but simulated attacks, first using the actual SUQR model and then using a modified form of the SUQR model. Figs. 1(c) and 1(d) show results on simulated data on payoff structures 1 and 2 for the parametric cases, when the data is generated by an adversary with an SUQR model with the true weight vector reported in Nguyen et al. [6] ((w_1, w_2, w_3) = (−9.85, 0.37, 0.15), with c_i = w_2 R_i + w_3 P_i). Similar results for the NPL model are shown in Figs. 1(e) and 1(f) respectively.

We can see that the NPL approach performs poorly with only one or five samples, as expected, but improves significantly as more samples are added. To further show its potential, we modified the true adversary model generating attacks from SUQR to the following: q_i ∝ e^{w_1 x_i^2 + c_i}, i.e., instead of x_i, the adversary reasons based on x_i^2. We considered the same true weight vector to simulate attacks. Then, we observe in Figs. 1(g) (for payoff structure 1) and 1(h) (for payoff structure 2) that α' approaches a value closer to zero for 500 or more samples. Also, the NPL model performs better than the parametric model with 500 or more samples. This shows that the NPL approach is more accurate when the true adversary does not satisfy the simple parametric logistic form, indicating that when we do not know the true functional form of the adversary's decision-making process, adopting a non-parametric method to learn the adversary's behavior is more effective.
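For concreteness, here is a minimal sketch (ours) of how attacks can be simulated from the modified adversary model just described, q_i ∝ e^{w_1 x_i^2 + c_i} with c_i = w_2 R_i + w_3 P_i. The weight vector is the one from Nguyen et al. [6] quoted above; the coverage and payoff values are made-up placeholders.

```python
import numpy as np

def attack_probs(x, R, P, w=(-9.85, 0.37, 0.15)):
    """Modified SUQR: q_i proportional to exp(w1 * x_i**2 + w2 * R_i + w3 * P_i)."""
    w1, w2, w3 = w
    z = w1 * np.square(x) + w2 * np.asarray(R) + w3 * np.asarray(P)
    z = z - z.max()
    q = np.exp(z)
    return q / q.sum()

rng = np.random.default_rng(0)
x = np.array([0.4, 0.3, 0.2, 0.1])      # placeholder coverage over 4 targets
R = np.array([6.0, 4.0, 8.0, 2.0])      # placeholder rewards
P = np.array([-1.0, -3.0, -2.0, -4.0])  # placeholder penalties
q = attack_probs(x, R, P)
attacks = rng.choice(len(x), size=500, p=q)   # 500 simulated attacks, in the spirit of the experiments
```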

[Figure 1: Results on Uganda, AMT and simulated datasets for the parametric and non-parametric cases. Panels: (a) AMT parametric results, payoff structure 2; (b) AMT non-parametric results, payoff structure 2; (c) simulated data, payoff structure 1, parametric results; (d) simulated data, payoff structure 2, parametric results; (e) simulated data, payoff structure 1, non-parametric results; (f) simulated data, payoff structure 2, non-parametric results; (g) parametric vs. non-parametric results on simulated data (for various sample sizes) from payoff structure 1 when the true adversary model differs from the parametric learned function; (h) the same comparison for payoff structure 2.]
