Convergence of Langevin MCMC in KL-divergence

Xiang Cheng and Peter Bartlett

Abstract

Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density p* is such that log p* is L-smooth and m-strongly convex, the discrete Langevin diffusion produces a distribution p with KL(p‖p*) ≤ ε in Õ(d/ε) steps, where d is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.

1. Introduction

Suppose that we would like to sample from a density p*(x) = e^{−U(x)+C}, where C is the normalizing constant. We know U(x), but we do not know the normalizing constant. This comes up, for example, in variational inference, when the normalization constant is computationally intractable. One way to sample from p* is to consider the Langevin diffusion:

x̃_0 ∼ p_0
dx̃_t = −∇U(x̃_t) dt + √2 dB_t    (1)

where p_0 is some initial distribution and B_t is Brownian motion (see Section 4). The stationary distribution of the above SDE is p*. The Langevin MCMC algorithm, given in two equivalent forms in (3) and (4), is an algorithm based on discretizing (1). Previous works have shown the convergence of (4) in both total variation distance [3, 4] and 2-Wasserstein distance [5]. The approach in these papers relies on first showing the convergence of (1), and then bounding the discretization error between (4) and (2).

In this paper, our main goal is to establish the convergence of p_t in (4) in KL(p_t‖p*). KL-divergence is perhaps the most natural notion of distance between probability distributions in this context, because of its close relationship to maximum likelihood estimation, its interpretation as information gain in Bayesian statistics, and its central role in information theory.

Convergence in KL-divergence implies convergence in total variation and 2-Wasserstein distance, so we are able to obtain convergence rates in total variation and 2-Wasserstein that are comparable to the results shown in [3, 4, 5].

2. Related Work

The first non-asymptotic analysis of the discrete Langevin diffusion (4) was due to Dalalyan [3]. This was soon followed by the work of Durmus and Moulines [4], which improved upon the results in [3]. Subsequently, Durmus and Moulines also established convergence of (4) in the 2-Wasserstein distance [5]. We remark that the proofs of Lemmas 7, 11 and 13 are essentially taken from [5].

In a slightly different direction from the goals of this paper, Bubeck et al. [6] and Durmus et al. [9] studied variants of (4) which work when log p* is not smooth. This is important, for example, when we want to sample from the uniform distribution over some convex set, so that log p* is an indicator function. Very recently, Dalalyan et al. [13] proved the convergence of Langevin Monte Carlo when only stochastic gradients are available.

Our work also borrows heavily from the theory established in the book of Ambrosio, Gigli and Savaré [1], which studies the underlying probability distribution p_t induced by (1) as a gradient flow in probability space. This allows us to view (4) as a deterministic convex optimization procedure over the probability space, with KL-divergence as the objective. This beautiful line of work relating SDEs with gradient flows in probability space was begun by Jordan, Kinderlehrer and Otto [2]. We refer the interested reader to the excellent survey by Santambrogio [10]. Finally, we remark that the theory in [1] has some very interesting connections with the study of normalizing flows in [7] and [8]. For example, the tangent velocity of (2), given by v_t = ∇log p* − ∇log p_t, can be thought of as a deterministic transformation that induces a normalizing flow.

3. Our Contribution

In this section, we compare the results we obtain with those in [3], [4] and [5]. Our main contribution is establishing the first nonasymptotic convergence guarantee in Kullback-Leibler divergence for (4) when U(x) is m-strongly convex and L-smooth (see Theorem 1). As a consequence, we also unify the proofs of convergence in total variation and W₂, which become simple corollaries of the convergence in KL. The following table compares the number of iterations of (3) required to achieve ε error in each of the three quantities, according to the analyses of the various papers.

Table 1: Comparison of iteration complexity

               TV          W₂          KL
[3, 4]         Õ(d/ε²)     -           -
[5]            Õ(d/ε²)     Õ(d/ε²)     -
this paper     Õ(d/ε²)     Õ(d/ε²)     Õ(d/ε)

In Section 7, we also state a convergence result for the case when U is not strongly convex. The corollary for convergence in total variation has a better dependence on the dimension d than the corresponding result in [3], but a worse dependence on ε.

4. Definitions

We denote by P(R^d) the space of all probability distributions over R^d. In the rest of this paper, only distributions with densities wrt the Lebesgue measure will appear (see Lemma 16), both in the algorithm and in the analysis. With abuse of notation, we use the same symbol (e.g. p) to denote both a probability distribution and its density wrt the Lebesgue measure. We let B_t be the d-dimensional Brownian motion.

Let p* be the target distribution, such that U(x) = −log p*(x) + C has L-Lipschitz continuous gradients and is m-strongly convex, i.e. for all x:

mI ⪯ ∇²U(x) ⪯ LI

For a given initial distribution p_0, the Exact Langevin Diffusion is given by the following stochastic differential equation (recall ∇U(x) = −∇log p*(x)):

x̃_0 ∼ p_0
dx̃_t = −∇U(x̃_t) dt + √2 dB_t    (2)

(This is identical to (1), restated here for ease of reference.)

For a given initial distribution p_0, and for a given stepsize h, the Langevin MCMC Algorithm is given by the following:

u_0 ∼ p_0
u_{i+1} = u_i − h∇U(u_i) + √(2h)·ξ_i    (3)

where the ξ_i are iid N(0, I).

For a given initial distribution p_0 and stepsize h, the Discretized Langevin Diffusion is given by the following SDE:

x_0 ∼ p_0
dx_t = −∇U(x_{τ(t)}) dt + √2 dB_t    (4)

where τ(t) ≜ ⌊t/h⌋·h (note that τ(t) is parametrized by h). Let p_t denote the distribution of x_t.
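The iteration (3) is simple to implement. The following sketch is our own illustration (not part of the paper): it runs the update u_{i+1} = u_i − h∇U(u_i) + √(2h)·ξ_i for a user-supplied gradient oracle; the function and variable names are ours.

```python
import numpy as np

def langevin_mcmc(grad_U, x0, h, n_steps, rng=None):
    """Unadjusted Langevin MCMC, i.e. the discretization (3):
    u_{i+1} = u_i - h * grad_U(u_i) + sqrt(2h) * xi_i,  xi_i ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.array(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.standard_normal(u.shape)
        u = u - h * grad_U(u) + np.sqrt(2.0 * h) * xi
    return u

# Example target: p*(x) ∝ exp(-||x||^2 / 2), i.e. U(x) = ||x||^2 / 2 (m = L = 1).
d = 5
samples = np.array([
    langevin_mcmc(grad_U=lambda x: x, x0=np.zeros(d), h=0.05, n_steps=500)
    for _ in range(2000)
])
print(samples.var(axis=0))  # each coordinate's variance should be close to 1
```

Smaller stepsizes reduce the discretization bias at the cost of more iterations, which is exactly the trade-off quantified in Theorem 1 below.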

It is easily verified that for any i, x_{ih} from (4) has the same distribution as u_i in (3). Note that the difference between (2) and (4) is in the drift term: one is ∇U(x̃_t), the other is ∇U(x_{τ(t)}).

For the rest of this paper, we will use p_t to exclusively denote the distribution of x_t in (4). We assume without loss of generality that arg min_x U(x) = 0 and that U(0) = 0. (We can always shift the space to achieve this, and the minimizer of U is easy to find using, say, gradient descent.)

For the rest of this paper, we will let

F(µ) = ∫ µ(x) log(µ(x)/p*(x)) dx   if µ has a density wrt the Lebesgue measure, and ∞ otherwise,

be the KL-divergence between µ and p*. It is well known that F is minimized by p*, and F(p*) = 0.

Finally, given a vector field v : R^d → R^d and a distribution µ ∈ P(R^d), we define the L²(µ)-norm of v as ‖v‖²_{L²(µ)} ≜ E_µ[‖v(x)‖²].

4.1 Background on Wasserstein distance and curves in P(R^d)

Given two distributions µ, ν ∈ P(R^d), let Γ(µ, ν) be the set of all joint distributions over the product space R^d × R^d whose marginals equal µ and ν respectively (Γ is the set of all couplings). The Wasserstein distance is defined as

W₂²(µ, ν) = inf_{γ∈Γ(µ,ν)} ∫ ‖x − y‖² dγ(x, y)

Let (X_1, B(X_1)) and (X_2, B(X_2)) be two measurable spaces, µ be a measure, and r : X_1 → X_2 be a measurable map. The push-forward measure of µ through r is defined as

r_#µ(B) = µ(r^{−1}(B))   for all B ∈ B(X_2)

Intuitively, for any f, E_{r_#µ}[f(x)] = E_µ[f(r(x))].
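As a concrete (and entirely optional) illustration of the coupling definition above, the following sketch of ours estimates W₂ between two one-dimensional sample sets; in one dimension the optimal coupling is the monotone one, so matching sorted samples attains the infimum over Γ(µ, ν).

```python
import numpy as np

def w2_empirical_1d(xs, ys):
    """Empirical 2-Wasserstein distance between equal-size 1-D samples:
    the optimal coupling pairs the i-th smallest of xs with the i-th smallest of ys."""
    return np.sqrt(np.mean((np.sort(xs) - np.sort(ys)) ** 2))

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=100_000)  # samples from N(0, 1)
nu = rng.normal(2.0, 1.0, size=100_000)  # samples from N(2, 1)
print(w2_empirical_1d(mu, nu))           # ≈ 2: for equal-variance Gaussians, W2 equals the mean shift
```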

It is a well known result that for any two distributions µ and ν which have densities wrt the Lebesgue measure, the optimal coupling is induced by a map T_opt : R^d → R^d, i.e.

W₂²(µ, ν) = ∫ ‖x − y‖² dγ*(x, y)   for γ* = (I, T_opt)_#µ

where I is the identity map and T_opt satisfies T_opt#µ = ν, so by definition γ* ∈ Γ(µ, ν). We call T_opt the optimal transport map, and T_opt − I the optimal displacement map.

Given two points ν and π in P(R^d), a curve µ_t : [0, 1] → P(R^d) is a constant-speed-geodesic between ν and π if µ_0 = ν, µ_1 = π and W₂(µ_s, µ_t) = (t − s)·W₂(ν, π) for all 0 ≤ s ≤ t ≤ 1. If v_ν^π is the optimal displacement map between ν and π, then the constant-speed-geodesic µ_t is nicely characterized by

µ_t = (I + t·v_ν^π)_#ν    (5)

Given a curve µ_t : R⁺ → P(R^d), we define its metric derivative as

|µ'|(t) ≜ limsup_{s→t} W₂(µ_s, µ_t)/|s − t|    (6)

Intuitively, this is the speed of the curve in 2-Wasserstein distance. We say that a curve µ_t is absolutely continuous if ∫_a^b |µ'|(t) dt < ∞ for all a, b ∈ R.

Given a curve µ_t : R⁺ → P(R^d) and a family of velocity fields v_t : R⁺ → (R^d → R^d), we say that µ_t and v_t satisfy the continuity equation at t if

∂/∂t µ_t(x) + ∇·(µ_t(x)·v_t(x)) = 0    (7)

(We assume that µ_t has a density wrt the Lebesgue measure for all t.)

Remark 1 If µ_t is a constant-speed-geodesic between ν and π, then v_ν^π satisfies (7) at t = 0, by the characterization in (5).

We say that v_t is tangent to µ_t at t if the continuity equation holds and ‖v_t + w‖_{L²(µ_t)} ≥ ‖v_t‖_{L²(µ_t)} for all w such that ∇·(µ_t·w) = 0. Intuitively, v_t is tangent to µ_t if it minimizes ‖v_t‖_{L²(µ_t)} among all velocity fields v that satisfy the continuity equation.
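The characterization (5) can also be checked numerically. In the sketch below (our own example, not from the paper), ν = N(0, 1) and π = N(2, 1) in one dimension, so the optimal displacement map is the constant shift v_ν^π(x) = 2, µ_t = (I + t·v_ν^π)_#ν = N(2t, 1), and the curve has constant speed: W₂(µ_s, µ_t) = (t − s)·W₂(ν, π).

```python
import numpy as np

def w2_empirical_1d(xs, ys):
    # 1-D empirical W2: the monotone (sorted) coupling is optimal in one dimension.
    return np.sqrt(np.mean((np.sort(xs) - np.sort(ys)) ** 2))

rng = np.random.default_rng(0)
nu_samples = rng.normal(0.0, 1.0, size=200_000)   # ν = N(0, 1)
shift = 2.0                                       # v_ν^π(x) = 2, so π = N(2, 1)

def mu(t):
    # push-forward (I + t * v)_# ν: shift every sample by t * 2
    return nu_samples + t * shift

for s, t in [(0.0, 1.0), (0.25, 0.75), (0.1, 0.9)]:
    print(s, t, w2_empirical_1d(mu(s), mu(t)), "expected", (t - s) * shift)
```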

5. Preliminary Lemmas

This section presents some basic results needed for our main theorem.

5.1 Calculus over P(R^d)

In this section, we present some crucial Lemmas which allow us to study the evolution of F(µ_t) along a curve µ_t : R⁺ → P(R^d). These results are all immediate consequences of results proven in [1].

Lemma 1 For any µ ∈ P(R^d), let (δF/δµ)(µ) : R^d → R be the first variation of F at µ, defined as ((δF/δµ)(µ))(x) ≜ log(µ(x)/p*(x)) + 1. Let the subdifferential of F at µ be given by w_µ ≜ ∇((δF/δµ)(µ)) : R^d → R^d. For any curve µ_t : R⁺ → P(R^d), and for any v_t that satisfies the continuity equation for µ_t (see equation (7)), the following holds:

d/dt F(µ_t) = E_{µ_t}[⟨w_{µ_t}(x), v_t(x)⟩]

Based on Lemma 1, we define, for any µ ∈ P(R^d), the operator

D_µ(v) ≜ E_µ[⟨w_µ(x), v(x)⟩] : (R^d → R^d) → R    (8)

D_µ(v) is linear in v.

Lemma 2 Let µ_t be an absolutely continuous curve in P(R^d) with tangent velocity field v_t. Let |µ'|(t) be the metric derivative of µ_t. Then ‖v_t‖_{L²(µ_t)} = |µ'|(t).

Lemma 3 For any µ ∈ P(R^d), let |D_µ| ≜ sup_{‖v‖_{L²(µ)} ≤ 1} D_µ(v); then

|D_µ|² = ∫ ‖∇((δF/δµ)(µ))(x)‖² µ(x) dx

Furthermore, for any absolutely continuous curve µ_t : R⁺ → P(R^d) with tangent velocity v_t, we have

|d/dt F(µ_t)| ≤ |D_{µ_t}| · ‖v_t‖_{L²(µ_t)}

As a Corollary of Lemma 2 and Lemma 3, we have the following result:

Corollary 4 Let µ_t be an absolutely continuous curve with tangent velocity field v_t. Then

|d/dt F(µ_t)| ≤ |D_{µ_t}| · |µ'|(t)

5.2 Exact and Discrete Gradient Flow for F(p)

In this section, we will study the curve p_t : R⁺ → P(R^d) defined in (4). Unless otherwise specified, we will assume that p_0 is an arbitrary distribution. Let x_t be as defined in (4).

For any given t, and for all s, we define a stochastic process y_s^t as

y_s^t = x_s   for s ≤ t
dy_s^t = −∇U(y_s^t) ds + √2 dB_s   for s ≥ t    (9)

Let q_s^t denote the distribution of y_s^t. From s = t onwards, this is the exact Langevin diffusion with p_t as the initial distribution (compare with expression (2)).

Finally, for each t, we define a process z_s^t by

z_s^t = x_s   for s ≤ t
dz_s^t = (−∇U(z_{τ(t)}^t) + ∇U(z_s^t)) ds   for s ≥ t    (10)

Let g_s^t denote the distribution of z_s^t. The process z_s^t represents the discretization error of p_s through the divergence between q_s^t and p_s (formally stated in Lemma 5). Note that z_{τ(t)}^t = x_{τ(t)} because τ(t) ≤ t.

Remark 2 The B_s in (4) and (9) are the same. Thus, x_s (from (4)), y_s^t (from (9)) and z_s^t (from (10)) define a coupling between the curves p_s, q_s^t and g_s^t.

Our proof strategy is as follows:
1. In Lemma 5, we demonstrate that the divergence between p_s (discretized Langevin) and q_s^t (exact Langevin) can be represented as a curve g_s^t.
2. In Lemma 6, we demonstrate that the "decrease in F(p_t) due to exact Langevin", given by d/ds F(q_s^t), is sufficiently negative.
3. In Lemma 7, we show that the "discretization error", given by d/ds (F(p_s) − F(q_s^t)), is small.
4. Added together, they imply that d/ds F(p_s) is sufficiently negative.

Lemma 5 For all x ∈ R^d and t ∈ R⁺,

∂/∂s g_s^t(x) = ∂/∂s p_s(x) − ∂/∂s q_s^t(x)

Lemma 6 For all s, t ∈ R⁺,

d/ds F(q_s^t) = −|D_{q_s^t}|²

Lemma 7 For all t ∈ R⁺,

d/ds (F(p_s) − F(q_s^t)) ≤ (L²h·√(E_{p_{τ(t)}}‖x‖²) + L·√(2hd)) · |D_{p_t}|

6. Strong Convexity Result

In this section, we study the consequences of assuming m-strong convexity and L-smoothness of U(x).

6.1 Theorem statement and discussion

Theorem 1 Let x_t and p_t be as defined in (4) with p_0 = N(0, (1/m)·I). If

h = mε/(16L²d)   and   k = (16L²d/(m²ε))·log(Ld/(mε))

then KL(p_{kh}‖p*) ≤ ε.

The above theorem immediately allows us to obtain the convergence rate of p_{kh} in both total variation and 2-Wasserstein distance.

Corollary 8 Using the choice of k and h in Theorem 1, we get
1. TV(p_{kh}, p*) ≤ √(ε/2)
2. W₂(p_{kh}, p*) ≤ √(2ε/m)

The first item follows from Pinsker's inequality. The second item follows from (12), where we take µ_0 to be p* and µ_1 to be p_{kh}, and note that |D_{p*}| = 0. To achieve δ accuracy in total variation or W₂, we apply Theorem 1 with ε = 2δ² and ε = mδ²/2 respectively.

Remark 3 The log term in Theorem 1 is not crucial. One can run (3) a few times, each time aiming to only halve the objective F(p_t) − F(p*) (thus the stepsize starts out large and is also halved each subsequent run). The proof is quite simple and will be omitted.

6.2 Proof of Theorem 1

We now state the Lemmas needed to prove Theorem 1. We first establish a notion of strong convexity of F(µ) with respect to the W₂ metric.

Lemma 9 If −log p*(x) is m-strongly convex, then

F(µ_t) ≤ (1 − t)·F(µ_0) + t·F(µ_1) − (m/2)·t(1 − t)·W₂²(µ_0, µ_1)    (11)

for all µ_0, µ_1 ∈ P(R^d) and t ∈ [0, 1], where µ_t : [0, 1] → P(R^d) is the constant-speed geodesic between µ_0 and µ_1. (Recall from (5) that if v_{µ_0}^{µ_1} is the optimal displacement map from µ_0 to µ_1, then µ_t = (I + t·v_{µ_0}^{µ_1})_#µ_0.) Equivalently,

F(µ_1) ≥ F(µ_0) + D_{µ_0}(v_{µ_0}^{µ_1}) + (m/2)·W₂²(µ_0, µ_1)    (12)

We call this the m-strong-geodesic-convexity of F wrt the W₂ distance.

The above Lemma is essentially implied by well known results on strong geodesic convexity; see for example [1].

Next, we use the m-strong geodesic convexity of F to upper bound F(µ) − F(p*) by (1/2m)·|D_µ|² (for any µ ∈ P(R^d)). This is analogous to how f(x) − f(x*) ≤ (1/2m)·‖∇f(x)‖² for standard m-strongly-convex functions over R^d.

Lemma 10 Under our assumption that −log p*(x) is m-strongly convex, we have that for all µ ∈ P(R^d),

F(µ) − F(p*) ≤ (1/2m)·|D_µ|²

Now, recall p_t from (4). We use strong convexity to obtain a bound on E_{p_t}‖x‖² for all t. This will be important for bounding the discretization error in conjunction with Lemma 7.

Lemma 11 Let p_t be as defined in (4). If p_0 is such that E_{p_0}‖x‖² ≤ 4d/m, and h ≤ 1/(2L) in the definition of (4), then for all t ∈ R⁺, E_{p_t}‖x‖² ≤ 4d/m.

Finally, we put everything together to prove Theorem 1.

Proof of Theorem 1 We first note that h = mε/(16L²d) ≤ 1/(2L). By Lemma 11, for all t, E_{p_t}‖x‖² ≤ 4d/m. Combined with Lemma 7, we get that for all t ∈ R⁺,

d/ds (F(p_s) − F(q_s^t)) ≤ (2L²h·√(d/m) + L·√(2hd)) · |D_{p_t}|

Suppose that F(p_t) − F(p*) ≥ ε. With h = mε/(16L²d), we then have

d/ds (F(p_s) − F(q_s^t)) ≤ (2L²h·√(d/m) + L·√(2hd)) · |D_{p_t}| ≤ (1/2)·√(2mε) · |D_{p_t}| ≤ (1/2)·|D_{p_t}|²

where the last inequality holds because Lemma 10 and the assumption F(p_t) − F(p*) ≥ ε together imply that |D_{p_t}| ≥ √(2mε).

So combining Lemma 6 and Lemma 5, we have

d/dt F(p_t) = d/ds F(q_s^t)|_{s=t} + d/ds (F(p_s) − F(q_s^t))|_{s=t}
            ≤ −|D_{p_t}|² + (1/2)·|D_{p_t}|²
            = −(1/2)·|D_{p_t}|²
            ≤ −m·(F(p_t) − F(p*))    (13)

where the last line once again follows from Lemma 10.

To handle the case when F(p_t) − F(p*) ≤ ε, we use the following argument:
1. We can conclude that F(p_t) − F(p*) ≥ ε implies d/dt F(p_t) ≤ 0.
2. By the results of Lemma 16 and Lemma 17, for all t, |p'|(t) is finite and |D_{p_t}| is finite, so d/dt F(p_t) is finite and F(p_t) is continuous in t.
3. Thus, if F(p_t) ≤ ε for some t ≤ kh, then F(p_s) ≤ ε for all s ≥ t, since F(p_t) ≥ ε implies d/dt F(p_t) ≤ 0 and F(p_t) is continuous in t. In this case F(p_{kh}) − F(p*) ≤ ε.

Thus, we need only consider the case that F(p_t) ≥ ε for all t ≤ kh. This means that (13) holds for all t ≤ kh. By Gronwall's inequality, we get

F(p_{kh}) − F(p*) ≤ (F(p_0) − F(p*))·exp(−mkh)

We thus need to pick

k = (1/(mh))·log((F(p_0) − F(p*))/ε) = (16L²d/(m²ε))·log((F(p_0) − F(p*))/ε)

Using the fact that p_0 = N(0, (1/m)·I), together with L-smoothness and m-strong convexity, we can show that log p*(x) ≥ −(L/2)‖x‖² − (d/2)·log(2π/m) and log p_0(x) = −(m/2)‖x‖² − (d/2)·log(2π/m). We thus get that F(p_0) − F(p*) = KL(p_0‖p*) ≤ dL/(2m), so

k = (16L²d/(m²ε))·log(Ld/(2mε))
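The parameter choices of Theorem 1, with the constants as reconstructed above, can be computed mechanically. The small script below is our own, purely illustrative; it also checks the step F(p_0) − F(p*) ≤ dL/(2m) in the extreme case p* = N(0, (1/L)·I), using the closed form for the KL-divergence between centered isotropic Gaussians.

```python
import math

def theorem1_parameters(m, L, d, eps):
    """Stepsize h and iteration count k suggested by Theorem 1 (treat the
    constants as order-of-magnitude guidance rather than tuned values)."""
    h = m * eps / (16 * L**2 * d)
    k = math.ceil((16 * L**2 * d / (m**2 * eps)) * math.log(L * d / (m * eps)))
    return h, k

def kl_centered_isotropic(var_p, var_q, d):
    """KL( N(0, var_p * I) || N(0, var_q * I) ) in d dimensions."""
    r = var_p / var_q
    return 0.5 * d * (r - 1.0 - math.log(r))

m, L, d, eps = 1.0, 10.0, 50, 0.1
print(theorem1_parameters(m, L, d, eps))
# Initialization bound used to set k: KL(p_0 || p*) <= d*L/(2m), even when p* = N(0, I/L).
print(kl_centered_isotropic(1.0 / m, 1.0 / L, d), "<=", d * L / (2 * m))
```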

7. Weak convexity result

In this section, we study the case when −log p* is not m-strongly convex (but is still convex and L-smooth). Let π_h be the stationary distribution of (4) with stepsize h. We will assume that we can choose an initial distribution p_0 which satisfies

W₂(p_0, p*) ≤ C_1    (14)

and that

E_{p*}‖x‖² ≤ C_2    (15)

Let h* be the largest stepsize such that

W₂(π_h, p*) ≤ C_1   for all h ≤ h*    (16)

7.1 Theorem statement and discussion

Theorem 2 Let C_1, C_2 and h* be defined as at the beginning of this section. Let x_t and p_t be as defined in (4) with p_0 satisfying (14). If

h = (1/48)·min{ ε²/(C_1(C_1 + C_2)L²d), ε²/(C_1²L²d), h* }

and

k = 32C_1²/(εh) + (32C_1²/h)·log(F(p_0) − F(p*))

then KL(p_{kh}‖p*) ≤ ε.

Once again, applying Pinsker's inequality, we get that the above choice of k and h yields TV(p_{kh}, p*) ≤ √(ε/2). Without strong convexity, we cannot get a bound on W₂ from bounding F(p_{kh}) − F(p*) as we did in Corollary 8.

In [3], a proof in the non-strongly-convex case was obtained by running Langevin MCMC on p̃* ∝ p*·exp(−(δ/(2d))·‖x‖²); −log p̃* is then strongly convex with m = δ/d, and TV(p̃*, p*) ≲ δ. By the results of [3], or [4], or Theorem 1, we need

k = Õ(d³/δ⁴)    (17)

iterations to get TV(p_{kh}, p*) ≤ δ.

On the other hand, if we assume log(F(p_0) − F(p*)) ≤ 1/ε and that h* is at least (1/10)·min{ε²/(C_1C_2L²d), ε²/(C_1²L²d)}, the results of Theorem 2 imply that we can take

h = (ε²/(48L²d))·min{ 1/(C_1C_2), 1/C_1² }

To get TV(p_{kh}, p*) ≤ δ, we then need

k = Õ( (L²d·C_1³/δ⁴)·max{ C_2, C_1²/δ² } )

Even if we ignore C_1 and C_2, our result is not strictly better than (17), as we have a worse dependence on δ. However, we do have a better dependence on d. The proof of Theorem 2 is quite similar to that of Theorem 1, so we defer it to the appendix.

Acknowledgments

We gratefully acknowledge the support of the NSF through grant IIS and of the Australian Research Council through an Australian Laureate Fellowship (FL) and through the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS).

References

[1] Ambrosio, Luigi, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: in Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media, 2008.

[2] Jordan, Richard, David Kinderlehrer, and Felix Otto. "The variational formulation of the Fokker-Planck equation." SIAM Journal on Mathematical Analysis 29.1 (1998).

[3] Dalalyan, Arnak S. "Theoretical guarantees for approximate sampling from smooth and log-concave densities." Journal of the Royal Statistical Society: Series B (Statistical Methodology) (2016).

[4] Durmus, Alain, and Eric Moulines. "Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm." arXiv preprint (2015).

[5] Durmus, Alain, and Eric Moulines. "High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm." (2016).

[6] Bubeck, Sébastien, Ronen Eldan, and Joseph Lehec. "Sampling from a log-concave distribution with Projected Langevin Monte Carlo." Advances in Neural Information Processing Systems 28 (2015).

[7] Rezende, Danilo Jimenez, and Shakir Mohamed. "Variational inference with normalizing flows." Proceedings of the 32nd International Conference on Machine Learning, Volume 37 (2015).

[8] Liu, Qiang, and Dilin Wang. "Stein variational gradient descent: A general purpose Bayesian inference algorithm." Advances in Neural Information Processing Systems (2016).

[9] Durmus, Alain, Eric Moulines, and Marcelo Pereyra. "Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau." arXiv preprint (2016).

[10] Santambrogio, Filippo. "Euclidean, metric, and Wasserstein gradient flows: an overview." Bulletin of Mathematical Sciences 7.1 (2017).

[11] Santambrogio, Filippo. Optimal Transport for Applied Mathematicians. Birkhäuser, NY (2015).

[12] Eberle, Andreas. "Reflection couplings and contraction rates for diffusions." Probability Theory and Related Fields (2016).

[13] Dalalyan, Arnak S., and Avetik G. Karagulyan. "User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient." arXiv preprint (2017).

8. Supplementary Materials

Proof of Lemma 1 The proof follows directly from results in [1], applied with F(µ|γ) = KL(µ‖γ), µ = µ, γ = p*, σ = µ/p*, F(ρ) = ρ log ρ, L_F(σ) = σ, and w_µ = ∇ log(µ/p*). The expression for d/dt F(p_t) comes from the expressions in Section E of Chapter 10.1.2 of [1]; one can also refer to the results of Chapter 10.4 of [1] for derivations of w_µ for the KL-divergence functional in more general settings. By Lemma 16, w_{p_t} is well defined for all t.

Proof of Lemma 2 This is the identification of the metric derivative with the L²(µ_t)-norm of the tangent velocity field; see Chapter 8 of [1].

Proof of Lemma 3 By the definition of D_{µ_t}(v) in (8), Lemma 1, and Cauchy-Schwarz.

Proof of Lemma 5 In this proof, we treat t as a fixed but arbitrary number, and prove the Lemma for all t ∈ R⁺. We will use x_s, y_s^t, z_s^t, p_s, q_s^t and g_s^t as defined in (4), (9) and (10).

First, consider the case when t = τ(t). By definition, x_t = y_t^t = z_t^t, and p_t = q_t^t = g_t^t. By Fokker-Planck,

∂/∂s p_s(x)|_{s=t} = ∇·(p_t(x)∇U(x)) + tr(∇²p_t(x)) = ∇·(q_t^t(x)∇U(x)) + tr(∇²q_t^t(x)) = ∂/∂s q_s^t(x)|_{s=t}

On the other hand,

dz_s^t/ds|_{s=t} = −∇U(z_{τ(t)}^t) + ∇U(z_t^t) = −∇U(x_t) + ∇U(x_t) = 0

so ∂/∂s g_s^t = 0 and Lemma 5 holds. In the remainder of this proof, we assume that t ≠ τ(t).

For a given Θ ∈ R^{2d}, we let Π_1(Θ) denote the projection of Θ onto its first d coordinates, and Π_2(Θ) the projection of Θ onto its last d coordinates. With abuse of notation, for P ∈ P(R^{2d}), we let Π_1(P) and Π_2(P) denote the corresponding marginal densities.

We will consider three stochastic processes, Θ_s, Λ_s^t and Ψ_s^t, over R^{2d}, for s ∈ [τ(t), τ(t) + h).

First, we introduce the stochastic process Θ_s for s ∈ [τ(t), τ(t) + h):

Θ_{τ(t)} = (x_{τ(t)}, ∇U(x_{τ(t)}))
dΘ_s = (−Π_2(Θ_s), 0) ds + (√2 dB_s, 0)   for s ∈ [τ(t), τ(t) + h)

We let P_s denote the density of Θ_s. Intuitively, P_s is the joint density of x_s and ∇U(x_{τ(t)}). One can verify that Π_1(Θ_s) = x_s and Π_1(P_s) = p_s. By Fokker-Planck, we have, for all Θ ∈ R^{2d},

∂/∂s P_s(Θ) = ∇·(P_s(Θ)·(Π_2(Θ), 0)) + Σ_{i=1}^{d} ∂²/∂Θ_i² P_s(Θ)    (18)

Next, for any given t, we introduce the stochastic process Λ_s^t for s ∈ [τ(t), τ(t) + h):

Λ_s^t = Θ_s   for s ≤ t
dΛ_s^t = (−∇U(Π_1(Λ_s^t)), 0) ds + (√2 dB_s, 0)   for s ≥ t

Let Q_s^t denote the density of Λ_s^t. One can verify that Π_1(Λ_s^t) = y_s^t and Π_1(Q_s^t) = q_s^t. By Fokker-Planck, we have, for all Θ ∈ R^{2d},

∂/∂s Q_s^t(Θ) = ∇·(Q_s^t(Θ)·(∇U(Π_1(Θ)), 0)) + Σ_{i=1}^{d} ∂²/∂Θ_i² Q_s^t(Θ)    (19)

Finally, define

Ψ_s^t = Θ_s   for s ≤ t
dΨ_s^t = (−Π_2(Ψ_s^t) + ∇U(Π_1(Ψ_s^t)), 0) ds   for s ≥ t

Let G_s^t denote the density of Ψ_s^t. One can verify that Π_1(Ψ_s^t) = z_s^t and Π_1(G_s^t) = g_s^t. By Fokker-Planck, we have, for all Θ ∈ R^{2d},

∂/∂s G_s^t(Θ) = ∇·(G_s^t(Θ)·(Π_2(Θ) − ∇U(Π_1(Θ)), 0))    (20)

By definition, Θ_t = Λ_t^t = Ψ_t^t almost surely, and P_t = Q_t^t = G_t^t. Taking the difference between (18) and (19) thus gives

∂/∂s (P_s(Θ) − Q_s^t(Θ))|_{s=t} = ∇·(P_t(Θ)·(Π_2(Θ) − ∇U(Π_1(Θ)), 0)) = ∂/∂s G_s^t(Θ)|_{s=t}

Finally, marginalizing out the last d coordinates on both sides, and recalling that Π_1(P_s) = p_s, Π_1(Q_s^t) = q_s^t and Π_1(G_s^t) = g_s^t, we prove the Lemma.

Proof of Lemma 6 The fact that q_s^t is the steepest descent follows from the Fokker-Planck equation for the Langevin diffusion, which yields, for all x ∈ R^d,

∂/∂s q_s^t(x) = −∇·(q_s^t(x)∇log p*(x)) + tr(∇²q_s^t(x)) = −∇·(q_s^t(x)·(∇log p*(x) − ∇log q_s^t(x)))

By the definition in (7), we get that

v_s = ∇log p*(x) − ∇log q_s^t(x)    (21)

satisfies the continuity equation for q_s^t. By Lemma 1,

w_{q_s^t} = ∇log(q_s^t/p*) = −v_s

Thus

d/ds F(q_s^t) = D_{q_s^t}(v_s) = −E_{q_s^t}‖w_{q_s^t}‖² = −|D_{q_s^t}|²

The last equality follows from the first statement of Lemma 3.

Proof of Lemma 7 Consider z_s^t and g_s^t as defined in (10). By Lemma 5, ∂/∂s g_s^t = ∂/∂s p_s − ∂/∂s q_s^t. The first variation of F, defined by

lim_{ε→0} (F(µ + ε∆) − F(µ))/ε = ∫ ((δF/δµ)(µ))(x)·∆(x) dx

is linear (see Chapter 7.2 of [11]). (In the above, ∆ : R^d → R is an arbitrary 0-mean perturbation.) In addition, because p_t = q_t^t = g_t^t, we have (δF/δµ)(p_t) = (δF/δµ)(q_t^t) = (δF/δµ)(g_t^t), and we get that

d/ds F(g_s^t)|_{s=t} = d/ds F(p_s)|_{s=t} − d/ds F(q_s^t)|_{s=t}

We will upper bound the metric derivative |(g^t)'|(s), then apply Corollary 4.

|(g^t)'|(t) = lim_{ε→0} W₂(g_{t+ε}^t, g_t^t)/ε
≤ lim_{ε→0} (1/ε)·√(E‖ε(∇U(x_t) − ∇U(x_{τ(t)}))‖²)
= √(E‖∇U(x_t) − ∇U(x_{τ(t)})‖²)
≤ √(E[L²‖x_t − x_{τ(t)}‖²])
= L·√(E‖−(t − τ(t))∇U(x_{τ(t)}) + √2(B_t − B_{τ(t)})‖²)
≤ L(t − τ(t))·√(E‖∇U(x_{τ(t)})‖²) + L·√(2(t − τ(t))d)
≤ L²(t − τ(t))·√(E‖x_{τ(t)}‖²) + L·√(2(t − τ(t))d)

where the first line is by the definition of the metric derivative, the second line is by the coupling between g_t^t and g_{t+ε}^t induced by the joint distribution of (z_t^t, z_{t+ε}^t) and the fact that z_{τ(t)}^t = x_{τ(t)}, the fourth line is by the Lipschitz gradient of U(x), the fifth line is by the definition of x_t, the sixth line is by the variance of B_t − B_{τ(t)}, and the seventh line is once again by the Lipschitz gradient of U(x).

Thus, we upper bound |(g^t)'|(t) by L²(t − τ(t))·√(E‖x_{τ(t)}‖²) + L·√(2(t − τ(t))d). Applying Corollary 4, and using the fact that for all t, t − τ(t) ≤ h, we get

|d/ds F(g_s^t)| ≤ (L²h·√(E‖x_{τ(t)}‖²) + L√(2hd))·|D_{g_t^t}| = (L²h·√(E‖x_{τ(t)}‖²) + L√(2hd))·|D_{p_t}|

The last equality is because g_t^t = p_t by definition.

Proof of Lemma 9 By the geodesic convexity results of Chapter 9 of [1] (see Proposition 9.3.2 there), m-strong-convexity of −log p*(x) implies that F is m-geodesically convex. Expression (11) then follows from the definition of geodesic convexity in [1]. Rearranging terms, dividing by t and taking the limit as t → 0, we get

F(µ_1) ≥ F(µ_0) + lim_{t→0} (F(µ_t) − F(µ_0))/t + (m/2)·W₂²(µ_0, µ_1)
       = F(µ_0) + D_{µ_0}(v_{µ_0}^{µ_1}) + (m/2)·W₂²(µ_0, µ_1)

The last equality follows by Lemma 1 and by the remark immediately following (7). We remark that the proof of (12) is completely analogous to the proof of the first-order characterization of strongly convex functions over R^d.

Proof of Lemma 10 We consider (12) and use two facts:
1. For any µ ∈ P(R^d), D_µ(v) is linear in v (see (8)).
2. For any µ, ν ∈ P(R^d), W₂²(µ, ν) = E_µ‖v_µ^ν(x)‖², by the definition of W₂ and of v_µ^ν as the optimal displacement map.

We apply (12) with µ_0 = µ and µ_1 = p*. Let v_µ^{p*} be the optimal displacement map from µ to p*, so (12) gives

F(µ) − F(p*) ≤ −D_µ(v_µ^{p*}) − (m/2)·W₂²(µ, p*) = −D_µ(v_µ^{p*}) − (m/2)·E_µ‖v_µ^{p*}(x)‖²

Let v* ∈ arg max_{‖v‖_{L²(µ)} ≤ 1} [−D_µ(v)], so that −D_µ(v*) = |D_µ| by linearity. The maximizer of

−D_µ(v) − (m/2)·E_µ‖v(x)‖²

over all v is of the form v = c·v* for some real number c. Taking derivatives wrt c gives c = (1/m)·|D_µ|. Thus we get

F(µ) − F(p*) ≤ (1/2m)·|D_µ|²

Proof of Lemma 11 We prove this by induction on k. First, by the definition of p_0 = N(0, (1/m)·I), we get that

E_{p_0}‖x‖² = d/m ≤ 4d/m

Next, we assume that for some k, and for all t ≤ kh, E_{p_t}‖x‖² ≤ 4d/m. For the inductive step, we consider t ∈ (kh, (k+1)h]. From (4),

x_t = x_{kh} − (t − kh)∇U(x_{kh}) + √2(B_t − B_{kh})

By smoothness and strong convexity and the assumption that arg min_x U(x) = 0, we get that for all x and for all such t:

‖x − (t − kh)∇U(x)‖² ≤ (1 − m(t − kh))·‖x‖², with 1 − m(t − kh) ≥ 0

(Note that t − kh ≤ h ≤ 1/(2L).) So for all such t,

E_{x∼p_t}‖x‖² = E_{x∼p_{kh}}‖x − (t − kh)∇U(x)‖² + E‖√2(B_t − B_{kh})‖²
             ≤ (1 − m(t − kh))·E_{x∼p_{kh}}‖x‖² + 2(t − kh)d
             = E_{x∼p_{kh}}‖x‖² + (2(t − kh)d − m(t − kh)·E_{x∼p_{kh}}‖x‖²)

By the inductive hypothesis, we have E_{x∼p_{kh}}‖x‖² ≤ 4d/m. If E_{x∼p_{kh}}‖x‖² ≥ 2d/m, then E_{x∼p_t}‖x‖² ≤ E_{x∼p_{kh}}‖x‖² ≤ 4d/m. If E_{x∼p_{kh}}‖x‖² ≤ 2d/m, then E_{x∼p_t}‖x‖² ≤ 2d/m + d/L ≤ 4d/m (by t − kh ≤ 1/(2L) and L ≥ m). Thus if p_{kh} is such that E_{p_{kh}}‖x‖² ≤ 4d/m, then E_{p_t}‖x‖² ≤ 4d/m for all t ∈ (kh, (k+1)h], proving the inductive step.

8.1 Proof of Theorem 2

First, we present a Lemma for upper bounding F(µ) − F(p*) for µ ∈ P(R^d) in the absence of strong convexity. The following Lemma plays a role analogous to Lemma 10.

Lemma 12 Let F be convex in W₂. Then for all µ ∈ P(R^d),

F(µ) − F(p*) ≤ |D_µ| · W₂(µ, p*)

Proof of Lemma 12 Similar to the proof of Lemma 10, we consider (12), but with m = 0 (and once again v_µ^{p*} denotes the optimal displacement map from µ to p*):

F(µ) − F(p*) ≤ −D_µ(v_µ^{p*}) ≤ |D_µ| · ‖v_µ^{p*}‖_{L²(µ)} = |D_µ| · W₂(µ, p*)

where the first inequality is from (12), the second is by the definition of |D_µ|, and the equality is by the definition of the Wasserstein distance and the fact that v_µ^{p*} is the optimal displacement map.

Next, we establish that for a fixed stepsize h, W₂(p_{kh}, π_h) is nonincreasing in k, using a synchronous coupling technique taken from [5].

Lemma 13 Let p_t be defined as in the statement of Theorem 2. Let h be a fixed stepsize satisfying h ≤ min{1/(2L), h*}. Then for all k,

W₂(p_{kh}, π_h) ≤ W₂(p_0, π_h)

Proof of Lemma 13 First, we demonstrate that (4) is contractive in W₂. We will prove this by induction.

Base case: trivially true.

Inductive Hypothesis: W₂(p_{kh}, π_h) ≤ W₂(p_0, π_h) for some k.

Inductive Step: Let T be the optimal transport map from p_{kh} to π_h. We will demonstrate a coupling between p_{(k+1)h} and π_h with cost no larger than W₂²(p_{kh}, π_h). The Lemma then follows by induction.

Since x_{kh} ∼ p_{kh} (see (4)), the optimal coupling between p_{kh} and π_h is given by the pair of random variables (x_{kh}, T(x_{kh})). For t ∈ (kh, (k+1)h], x_{(k+1)h} = x_{kh} − h∇U(x_{kh}) + √2(B_{(k+1)h} − B_{kh}). Consider the coupling γ between p_{(k+1)h} and π_h defined by the following pair of random variables:

( x_{kh} − h∇U(x_{kh}) + √2(B_{(k+1)h} − B_{kh}),   T(x_{kh}) − h∇U(T(x_{kh})) + √2(B_{(k+1)h} − B_{kh}) )

(Note that π_h is stationary under the discrete Langevin diffusion with stepsize h, so γ does have the right marginals.) To demonstrate contraction in W₂:

W₂²(p_{(k+1)h}, π_h)
≤ E‖(x_{kh} − h∇U(x_{kh}) + √2(B_{(k+1)h} − B_{kh})) − (T(x_{kh}) − h∇U(T(x_{kh})) + √2(B_{(k+1)h} − B_{kh}))‖²
= E‖(x_{kh} − h∇U(x_{kh})) − (T(x_{kh}) − h∇U(T(x_{kh})))‖²
= E[‖x_{kh} − T(x_{kh})‖² − 2h⟨∇U(x_{kh}) − ∇U(T(x_{kh})), x_{kh} − T(x_{kh})⟩ + h²‖∇U(x_{kh}) − ∇U(T(x_{kh}))‖²]
≤ E‖x_{kh} − T(x_{kh})‖²
= W₂²(p_{kh}, π_h)

where the last equality follows by the optimality of T, and the last inequality follows because L-smoothness and convexity of U(x) imply

−2h⟨∇U(x_{kh}) − ∇U(T(x_{kh})), x_{kh} − T(x_{kh})⟩ + h²‖∇U(x_{kh}) − ∇U(T(x_{kh}))‖² ≤ −(2h/L)‖∇U(x_{kh}) − ∇U(T(x_{kh}))‖² + h²‖∇U(x_{kh}) − ∇U(T(x_{kh}))‖² ≤ 0
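The synchronous-coupling step above is easy to check numerically: two copies of the discrete chain driven by the same Gaussian increments never move apart when U is convex and L-smooth and h ≤ 1/(2L). The sketch below is our own check for a quadratic U (so ∇U(x) = Ax with 0 ⪯ A ⪯ LI), not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, L = 10, 4.0
A = np.diag(np.linspace(0.5, L, d))   # Hessian eigenvalues in [0.5, L]
h = 1.0 / (2 * L)

x = rng.normal(size=d) * 5.0          # two arbitrary starting points
y = rng.normal(size=d) * 5.0
for _ in range(200):
    step_x, step_y = x - h * A @ x, y - h * A @ y
    # per-step non-expansion of the drift map (the shared noise cancels)
    assert np.linalg.norm(step_x - step_y) <= np.linalg.norm(x - y) + 1e-12
    noise = np.sqrt(2 * h) * rng.standard_normal(d)   # same Brownian increment for both chains
    x, y = step_x + noise, step_y + noise
print("coupled distance after 200 steps:", np.linalg.norm(x - y))
```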

This completes the inductive step.

Corollary 14 Let p_t be as defined in (4). Then for all t, W₂(p_t, p*) ≤ 4C_1.

Proof of Corollary 14 First, if t = τ(t), then the conclusion follows from Lemma 13, (16) and the triangle inequality. So assume that t ≠ τ(t). Using identical arguments as in Lemma 13, and noting the assumption on h* in (16) and the fact that h ≤ h*, we can show that

W₂(p_t, π_{t−τ(t)}) ≤ W₂(p_{τ(t)}, π_{t−τ(t)})    (22)

By the triangle inequality and the assumption in (16), we have

W₂(p_t, p*) ≤ W₂(p_t, π_{t−τ(t)}) + W₂(π_{t−τ(t)}, p*)
           ≤ W₂(p_{τ(t)}, π_{t−τ(t)}) + W₂(π_{t−τ(t)}, p*)
           ≤ W₂(p_{τ(t)}, π_h) + W₂(π_h, p*) + W₂(π_h, p*) + W₂(π_{t−τ(t)}, p*)
           ≤ 4C_1

where the first inequality is the triangle inequality, the second is by (22), the third is again by the triangle inequality, and the fourth is by assumption (16) and the fact that t − τ(t) ≤ h ≤ h*.

Next, we use Lemma 13 to bound E‖x_{kh}‖² for all k:

Lemma 15 Let h, x_t and p_t be as defined in the statement of Theorem 2. Then for all k,

E‖x_{kh}‖² ≤ 4(3C_1² + C_2)

Proof of Lemma 15 Let γ(x, y) be the optimal coupling between p_{kh} and π_h, and let γ'(x, y) be the optimal coupling between π_h and p*. Then

E_{p_{kh}}‖x‖² = E_γ‖x‖²
= E_γ‖x − y + y‖²
≤ 2E_γ‖x − y‖² + 2E_γ‖y‖²
= 2W₂²(p_{kh}, π_h) + 2E_{π_h}‖y‖²
= 2W₂²(p_{kh}, π_h) + 2E_{γ'}‖x‖²
= 2W₂²(p_{kh}, π_h) + 2E_{γ'}‖x − y + y‖²
≤ 2W₂²(p_{kh}, π_h) + 4E_{γ'}‖x − y‖² + 4E_{γ'}‖y‖²
= 2W₂²(p_{kh}, π_h) + 4W₂²(π_h, p*) + 4E_{p*}‖x‖²

By the definition of C_2 at the start of Section 7, we have E_{p*}‖x‖² ≤ C_2. By Lemma 13, (14) and (16), we have W₂(p_{kh}, π_h) ≤ W₂(p_0, π_h) ≤ W₂(p_0, p*) + W₂(p*, π_h) ≤ 2C_1. By the definition of h* at the start of Section 7 and of h in Theorem 2 (which ensures h ≤ h*), we have W₂(π_h, p*) ≤ C_1. Combining these bounds gives E_{p_{kh}}‖x‖² ≤ 8C_1² + 4C_1² + 4C_2 = 4(3C_1² + C_2).

Proof of Theorem 2 First, we bound the discretization error (for an arbitrary t). By Lemma 7:

d/ds (F(p_s) − F(q_s^t)) ≤ (L²h·√(E_{p_{τ(t)}}‖x‖²) + L√(2hd)) · |D_{p_t}|
                        ≤ (L²h·√(4(3C_1² + C_2)) + L√(2hd)) · |D_{p_t}|

where the second inequality comes from Lemma 15. Given the choice of

h = (1/48)·min{ ε²/(C_1(C_1 + C_2)L²d), ε²/(C_1²L²d), h* }

we can ensure that

L²h·√(4(3C_1² + C_2)) + L√(2hd) ≤ ε/(8C_1)

Assume that F(p_s) − F(p*) ≥ ε. By Lemma 12 and Corollary 14, we have

|D_{p_s}| ≥ (F(p_s) − F(p*))/W₂(p_s, p*) ≥ ε/W₂(p_s, p*) ≥ ε/(4C_1)    (23)

This implies that

d/ds (F(p_s) − F(q_s^t)) ≤ (1/2)·|D_{p_t}|²

The rate of decrease of F(p_t) thus satisfies

d/dt (F(p_t) − F(p*)) = d/ds F(q_s^t)|_{s=t} + d/ds (F(p_s) − F(q_s^t))|_{s=t}
≤ −|D_{p_t}|² + (1/2)·|D_{p_t}|²
= −(1/2)·|D_{p_t}|²
≤ −(1/(32C_1²))·(F(p_t) − F(p*))²

We now study two regimes. The first regime is when F(p_t) − F(p*) ≥ 1; then

d/dt (F(p_t) − F(p*)) ≤ −(1/(32C_1²))·(F(p_t) − F(p*))

which implies

F(p_t) − F(p*) ≤ (F(p_0) − F(p*))·exp(−t/(32C_1²))

We thus achieve F(p_t) − F(p*) ≤ 1 by time t = 32C_1²·log(F(p_0) − F(p*)).

In the second regime, F(p_t) − F(p*) ≤ 1. By noting that f_t = 1/t is a solution to d/dt f_t = −f_t², and letting f_t = (1/(32C_1²))·(F(p_t) − F(p*)), we get F(p_t) − F(p*) ≤ 32C_1²/t. To achieve F(p_t) − F(p*) ≤ ε, we set t = 32C_1²/ε. Overall, we just need to set

t ≥ 32C_1²/ε + 32C_1²·log(F(p_0) − F(p*))

This, combined with the choice of h earlier, proves the theorem.

8.2 Some regularity results

In this subsection, we provide some regularity results needed in various parts of the paper.

Lemma 16 Let w_µ be as defined in Lemma 1. Let p_t be as defined in (4). For all t, w_{p_t} is well defined, and E_{p_t}‖w_{p_t}‖² is finite.

Proof of Lemma 16 First, we establish the following statement: for any t, there exist δ > 0, with µ_{δ,y}(x) denoting the density of N(y, 2δ·I), and p̃ ∈ P(R^d), such that
1. For all x ∈ R^d, p_t(x) = E_{y∼p̃}[µ_{δ,y}(x)].
2. E_{p̃}‖x‖² is finite.

If t = τ(t), then let p̃ = (I(·) − h∇U(·))_#p_{τ(t)−h} and let δ = h. Otherwise, if t ≠ τ(t), then let p̃ = (I(·) − (t − τ(t))∇U(·))_#p_{τ(t)} and δ = t − τ(t). (Here we use the definition of the push-forward distribution from Section 4.1.) Point 1 can now be easily verified. To see point 2, let t' = τ(t) − h in case 1 and t' = τ(t) in case 2:

E_{p̃}‖x‖² = E_{p_{t'}}‖x − h∇U(x)‖²
          ≤ 2E_{p_{t'}}‖x‖² + 2h²·E_{p_{t'}}‖∇U(x)‖²
          ≤ 2E_{p_{t'}}‖x‖² + 2h²L²·E_{p_{t'}}‖x‖²
          ≤ (2 + 2h²L²)·(4d/m)

where the last inequality follows by Lemma 11.

Since µ_{δ,y}(x) > 0 for all x, y, the function E_{y∼p̃}[µ_{δ,y}(x)] is differentiable for all x. This proves the first part of the Lemma.

Next, a nice property of Gaussians is that

∇_x µ_{δ,y}(x) = −µ_{δ,y}(x)·(x − y)/(2δ)

Thus,

∇ log p_t(x) = ∇ log E_{y∼p̃}[µ_{δ,y}(x)] = (1/E_{y∼p̃}[µ_{δ,y}(x)])·E_{y∼p̃}[−((x − y)/(2δ))·µ_{δ,y}(x)] = −(1/(2δ))·E_{y∼µ_δ^x}[x − y]

where µ_δ^x denotes the conditional distribution of y given x, when y ∼ p̃ and x ∼ µ_{δ,y}.

Thus

E_{x∼p_t}‖∇log p_t(x)‖² ≤ (1/(4δ²))·E_{x∼p_t}E_{y∼µ_δ^x}‖x − y‖²
                        ≤ (1/(2δ²))·(E_{x∼p_t}E_{y∼µ_δ^x}‖y‖² + E_{x∼p_t}‖x‖²)
                        = (1/(2δ²))·(E_{y∼p̃}‖y‖² + E_{x∼p_t}‖x‖²)
                        < ∞

where the first inequality is by Jensen's inequality, Young's inequality and the preceding result, the equality is by the definition of the conditional distribution, and the finiteness follows from the facts that δ > 0 (by definition at the start of the proof), that E_{x∼p_t}‖x‖² ≤ 4d/m (by Lemma 11), and that E_{y∼p̃}‖y‖² < ∞ (see item 2 at the start of the proof).

Finally, we have

‖w_{p_t}‖²_{L²(p_t)} = E_{p_t}‖w_{p_t}(x)‖² = E_{p_t}‖∇log p_t(x) − ∇log p*(x)‖² ≤ 2E_{p_t}‖∇log p_t(x)‖² + 2E_{p_t}‖∇log p*(x)‖² < ∞

where the last inequality uses the fact that ‖∇log p*(x)‖ = ‖∇U(x)‖ ≤ L‖x‖ and E_{p_t}‖x‖² ≤ 4d/m.

Lemma 17 Let p_t be as defined in (4). Then |p'|(t) is finite for all t, where |p'|(t) is the metric derivative of p_t, as defined in (6).

Proof of Lemma 17 We define the random variable ξ to be distributed as N(0, I). For all t, let x_t be as defined in (4). One can verify that the random variable

y_t ≜ x_{τ(t)} − (t − τ(t))∇U(x_{τ(t)}) + √(2(t − τ(t)))·ξ

has the same distribution as x_t. Thus y_t and y_{t+ε} define a coupling between p_t and p_{t+ε}. We thus have

Let h' ≜ t − τ(t). Then

|p'|(t) = lim_{ε→0} W₂(p_t, p_{t+ε})/ε
≤ lim_{ε→0} (1/ε)·√(E‖y_t − y_{t+ε}‖²)
= lim_{ε→0} (1/ε)·√(E_{x∼p_{τ(t)}}‖−ε∇U(x) + (√(2(h'+ε)) − √(2h'))·ξ‖²)
= lim_{ε→0} (1/ε)·√(E_{x∼p_{τ(t)}}‖ε∇U(x)‖² + E‖(√(2(h'+ε)) − √(2h'))·ξ‖²)
≤ lim_{ε→0} [(1/ε)·√(E_{x∼p_{τ(t)}}‖ε∇U(x)‖²) + (1/ε)·√(E‖(√(2(h'+ε)) − √(2h'))·ξ‖²)]
= √(E_{x∼p_{τ(t)}}‖∇U(x)‖²) + √(E‖ξ‖²/(2h'))

where the last equality follows by a Taylor expansion of √(h' + ε). We can bound the first term by a finite number using ‖∇U(x)‖ ≤ L‖x‖ and then applying Lemma 11. The second term is finite for h' ≠ 0. For the case h' = 0, we know that w_{p_t} satisfies the continuity equation for p_t at t, and so |p'|(t) = ‖w_{p_t}‖_{L²(p_t)} < ∞, by Lemma 2 and Lemma 16.


FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS. 1. Introduction FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS ALINA BUCUR, CHANTAL DAVID, BROOKE FEIGON, MATILDE LALÍN 1 Introuction In this note, we stuy the fluctuations in the number

More information

SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES

SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES Communications on Stochastic Analysis Vol. 2, No. 2 (28) 289-36 Serials Publications www.serialspublications.com SINGULAR PERTURBATION AND STATIONARY SOLUTIONS OF PARABOLIC EQUATIONS IN GAUSS-SOBOLEV SPACES

More information

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x) Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)

More information

On a limit theorem for non-stationary branching processes.

On a limit theorem for non-stationary branching processes. On a limit theorem for non-stationary branching processes. TETSUYA HATTORI an HIROSHI WATANABE 0. Introuction. The purpose of this paper is to give a limit theorem for a certain class of iscrete-time multi-type

More information

From slow diffusion to a hard height constraint: characterizing congested aggregation

From slow diffusion to a hard height constraint: characterizing congested aggregation 3 2 1 Time Position -0.5 0.0 0.5 0 From slow iffusion to a har height constraint: characterizing congeste aggregation Katy Craig University of California, Santa Barbara BIRS April 12, 2017 2 collective

More information

Mark J. Machina CARDINAL PROPERTIES OF "LOCAL UTILITY FUNCTIONS"

Mark J. Machina CARDINAL PROPERTIES OF LOCAL UTILITY FUNCTIONS Mark J. Machina CARDINAL PROPERTIES OF "LOCAL UTILITY FUNCTIONS" This paper outlines the carinal properties of "local utility functions" of the type use by Allen [1985], Chew [1983], Chew an MacCrimmon

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

Analyzing Tensor Power Method Dynamics in Overcomplete Regime

Analyzing Tensor Power Method Dynamics in Overcomplete Regime Journal of Machine Learning Research 18 (2017) 1-40 Submitte 9/15; Revise 11/16; Publishe 4/17 Analyzing Tensor Power Metho Dynamics in Overcomplete Regime Animashree Ananumar Department of Electrical

More information

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments

TMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary

More information

1. Introduction. This manuscript considers the problem of dynamic nonlocal aggregation equations of the form ρ div(ρ K ρ) = 0 (1.

1. Introduction. This manuscript considers the problem of dynamic nonlocal aggregation equations of the form ρ div(ρ K ρ) = 0 (1. CHARACTERIZATION OF RADIALLY SYMMETRIC FINITE TIME BLOWUP IN MULTIDIMENSIONAL AGGREGATION EQUATIONS ANDREA L. BERTOZZI, JOHN B. GARNETT, AND THOMAS LAURENT Abstract. This paper stuies the transport of

More information

Math 11 Fall 2016 Section 1 Monday, September 19, Definition: A vector parametric equation for the line parallel to vector v = x v, y v, z v

Math 11 Fall 2016 Section 1 Monday, September 19, Definition: A vector parametric equation for the line parallel to vector v = x v, y v, z v Math Fall 06 Section Monay, September 9, 06 First, some important points from the last class: Definition: A vector parametric equation for the line parallel to vector v = x v, y v, z v passing through

More information

Multi-View Clustering via Canonical Correlation Analysis

Multi-View Clustering via Canonical Correlation Analysis Technical Report TTI-TR-2008-5 Multi-View Clustering via Canonical Correlation Analysis Kamalika Chauhuri UC San Diego Sham M. Kakae Toyota Technological Institute at Chicago ABSTRACT Clustering ata in

More information

SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. where L is some constant, usually called the Lipschitz constant. An example is

SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. where L is some constant, usually called the Lipschitz constant. An example is SYSTEMS OF DIFFERENTIAL EQUATIONS, EULER S FORMULA. Uniqueness for solutions of ifferential equations. We consier the system of ifferential equations given by x = v( x), () t with a given initial conition

More information

Lecture 2 Lagrangian formulation of classical mechanics Mechanics

Lecture 2 Lagrangian formulation of classical mechanics Mechanics Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,

More information

International Journal of Pure and Applied Mathematics Volume 35 No , ON PYTHAGOREAN QUADRUPLES Edray Goins 1, Alain Togbé 2

International Journal of Pure and Applied Mathematics Volume 35 No , ON PYTHAGOREAN QUADRUPLES Edray Goins 1, Alain Togbé 2 International Journal of Pure an Applie Mathematics Volume 35 No. 3 007, 365-374 ON PYTHAGOREAN QUADRUPLES Eray Goins 1, Alain Togbé 1 Department of Mathematics Purue University 150 North University Street,

More information

A Review of Multiple Try MCMC algorithms for Signal Processing

A Review of Multiple Try MCMC algorithms for Signal Processing A Review of Multiple Try MCMC algorithms for Signal Processing Luca Martino Image Processing Lab., Universitat e València (Spain) Universia Carlos III e Mari, Leganes (Spain) Abstract Many applications

More information

Conservation Laws. Chapter Conservation of Energy

Conservation Laws. Chapter Conservation of Energy 20 Chapter 3 Conservation Laws In orer to check the physical consistency of the above set of equations governing Maxwell-Lorentz electroynamics [(2.10) an (2.12) or (1.65) an (1.68)], we examine the action

More information

05 The Continuum Limit and the Wave Equation

05 The Continuum Limit and the Wave Equation Utah State University DigitalCommons@USU Founations of Wave Phenomena Physics, Department of 1-1-2004 05 The Continuum Limit an the Wave Equation Charles G. Torre Department of Physics, Utah State University,

More information

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE

THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE Journal of Soun an Vibration (1996) 191(3), 397 414 THE VAN KAMPEN EXPANSION FOR LINKED DUFFING LINEAR OSCILLATORS EXCITED BY COLORED NOISE E. M. WEINSTEIN Galaxy Scientific Corporation, 2500 English Creek

More information

arxiv: v1 [math.dg] 30 May 2012

arxiv: v1 [math.dg] 30 May 2012 VARIATION OF THE ODUUS OF A FOIATION. CISKA arxiv:1205.6786v1 [math.dg] 30 ay 2012 Abstract. The p moulus mo p (F) of a foliation F on a Riemannian manifol is a generalization of extremal length of plane

More information

Problem set 2: Solutions Math 207B, Winter 2016

Problem set 2: Solutions Math 207B, Winter 2016 Problem set : Solutions Math 07B, Winter 016 1. A particle of mass m with position x(t) at time t has potential energy V ( x) an kinetic energy T = 1 m x t. The action of the particle over times t t 1

More information

ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS. Gianluca Crippa

ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS. Gianluca Crippa Manuscript submitte to AIMS Journals Volume X, Number 0X, XX 200X Website: http://aimsciences.org pp. X XX ORDINARY DIFFERENTIAL EQUATIONS AND SINGULAR INTEGRALS Gianluca Crippa Departement Mathematik

More information

SOME LYAPUNOV TYPE POSITIVE OPERATORS ON ORDERED BANACH SPACES

SOME LYAPUNOV TYPE POSITIVE OPERATORS ON ORDERED BANACH SPACES Ann. Aca. Rom. Sci. Ser. Math. Appl. ISSN 2066-6594 Vol. 5, No. 1-2 / 2013 SOME LYAPUNOV TYPE POSITIVE OPERATORS ON ORDERED BANACH SPACES Vasile Dragan Toaer Morozan Viorica Ungureanu Abstract In this

More information

Fall 2016: Calculus I Final

Fall 2016: Calculus I Final Answer the questions in the spaces provie on the question sheets. If you run out of room for an answer, continue on the back of the page. NO calculators or other electronic evices, books or notes are allowe

More information